corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-673901 | cs/0602087 | Bounds on the Threshold of Linear Programming Decoding | <|reference_start|>Bounds on the Threshold of Linear Programming Decoding: Whereas many results are known about thresholds for ensembles of low-density parity-check codes under message-passing iterative decoding, this is not the case for linear programming decoding. Towards closing this knowledge gap, this paper presents some bounds on the thresholds of low-density parity-check code ensembles under linear programming decoding.<|reference_end|> | arxiv | @article{vontobel2006bounds,
title={Bounds on the Threshold of Linear Programming Decoding},
author={Pascal O. Vontobel and Ralf Koetter},
journal={Proc. IEEE Information Theory Workshop (ITW 2006), Punta del Este,
Uruguay, March 13-17, 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602087},
primaryClass={cs.IT math.IT}
} | vontobel2006bounds |
arxiv-673902 | cs/0602088 | Towards Low-Complexity Linear-Programming Decoding | <|reference_start|>Towards Low-Complexity Linear-Programming Decoding: We consider linear-programming (LP) decoding of low-density parity-check (LDPC) codes. While it is clear that one can use any general-purpose LP solver to solve the LP that appears in the decoding problem, we argue in this paper that the LP at hand is equipped with a lot of structure that one should take advantage of. Towards this goal, we study the dual LP and show how coordinate-ascent methods lead to very simple update rules that are tightly connected to the min-sum algorithm. Moreover, replacing minima in the formula of the dual LP with soft-minima one obtains update rules that are tightly connected to the sum-product algorithm. This shows that LP solvers with complexity similar to the min-sum algorithm and the sum-product algorithm are feasible. Finally, we also discuss some sub-gradient-based methods.<|reference_end|> | arxiv | @article{vontobel2006towards,
title={Towards Low-Complexity Linear-Programming Decoding},
author={Pascal O. Vontobel and Ralf Koetter},
journal={Proc. 4th Int. Symposium on Turbo Codes and Related Topics,
Munich, Germany, April 3-7, 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602088},
primaryClass={cs.IT math.IT}
} | vontobel2006towards |
arxiv-673903 | cs/0602089 | Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean Planes | <|reference_start|>Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean Planes: In order to understand the performance of a code under maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one's attention needs to be shifted to the pseudo-codewords, in particular to the minimal pseudo-codewords, and their pseudo-weights. In this paper we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weight.<|reference_end|> | arxiv | @article{smarandache2006pseudo-codeword,
title={Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean
Planes},
author={Roxana Smarandache and Pascal O. Vontobel},
journal={arXiv preprint arXiv:cs/0602089},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602089},
primaryClass={cs.IT cs.DM math.IT}
} | smarandache2006pseudo-codeword |
arxiv-673904 | cs/0602090 | On the Approximation and Smoothed Complexity of Leontief Market Equilibria | <|reference_start|>On the Approximation and Smoothed Complexity of Leontief Market Equilibria: We show that the problem of finding an \epsilon-approximate Nash equilibrium of an n by n two-person game can be reduced to the computation of an (\epsilon/n)^2-approximate market equilibrium of a Leontief economy. Together with a recent result of Chen, Deng and Teng, this polynomial reduction implies that the Leontief market exchange problem does not have a fully polynomial-time approximation scheme, that is, there is no algorithm that can compute an \epsilon-approximate market equilibrium in time polynomial in m, n, and 1/\epsilon, unless PPAD is in P. We also extend the analysis of our reduction to show, unless PPAD is in RP, that the smoothed complexity of Scarf's general fixed-point approximation algorithm (when applied to solve the approximate Leontief market exchange problem) or of any algorithm for computing an approximate market equilibrium of Leontief economies is not polynomial in n and 1/\sigma, under Gaussian or uniform perturbations with magnitude \sigma.<|reference_end|> | arxiv | @article{huang2006on,
title={On the Approximation and Smoothed Complexity of Leontief Market
Equilibria},
author={Li-Sha Huang and Shang-Hua Teng},
journal={arXiv preprint arXiv:cs/0602090},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602090},
primaryClass={cs.GT cs.CC}
} | huang2006on |
arxiv-673905 | cs/0602091 | Feedback Capacity of Stationary Gaussian Channels | <|reference_start|>Feedback Capacity of Stationary Gaussian Channels: The feedback capacity of additive stationary Gaussian noise channels is characterized as the solution to a variational problem. Toward this end, it is proved that the optimal feedback coding scheme is stationary. When specialized to the first-order autoregressive moving average noise spectrum, this variational characterization yields a closed-form expression for the feedback capacity. In particular, this result shows that the celebrated Schalkwijk-Kailath coding scheme achieves the feedback capacity for the first-order autoregressive moving average Gaussian channel, positively answering a long-standing open problem studied by Butman, Schalkwijk-Tiernan, Wolfowitz, Ozarow, Ordentlich, Yang-Kavcic-Tatikonda, and others. More generally, it is shown that a k-dimensional generalization of the Schalkwijk-Kailath coding scheme achieves the feedback capacity for any autoregressive moving average noise spectrum of order k. Simply put, the optimal transmitter iteratively refines the receiver's knowledge of the intended message.<|reference_end|> | arxiv | @article{kim2006feedback,
title={Feedback Capacity of Stationary Gaussian Channels},
author={Young-Han Kim},
journal={arXiv preprint arXiv:cs/0602091},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602091},
primaryClass={cs.IT math.IT}
} | kim2006feedback |
arxiv-673906 | cs/0602092 | Inconsistent parameter estimation in Markov random fields: Benefits in the computation-limited setting | <|reference_start|>Inconsistent parameter estimation in Markov random fields: Benefits in the computation-limited setting: Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the \emph{same convex variational relaxation} is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the ``wrong'' model even in the infinite data limit) can be provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.<|reference_end|> | arxiv | @article{wainwright2006inconsistent,
title={Inconsistent parameter estimation in Markov random fields: Benefits in
the computation-limited setting},
author={Martin J. Wainwright},
journal={arXiv preprint arXiv:cs/0602092},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602092},
primaryClass={cs.LG cs.IT math.IT math.ST stat.TH}
} | wainwright2006inconsistent |
arxiv-673907 | cs/0602093 | Rational stochastic languages | <|reference_start|>Rational stochastic languages: The goal of the present paper is to provide a systematic and comprehensive study of rational stochastic languages over a semiring K \in {Q, Q +, R, R+}. A rational stochastic language is a probability distribution over a free monoid \Sigma^* which is rational over K, that is which can be generated by a multiplicity automata with parameters in K. We study the relations between the classes of rational stochastic languages S rat K (\Sigma). We define the notion of residual of a stochastic language and we use it to investigate properties of several subclasses of rational stochastic languages. Lastly, we study the representation of rational stochastic languages by means of multiplicity automata.<|reference_end|> | arxiv | @article{denis2006rational,
title={Rational stochastic languages},
author={Fran\c{c}ois Denis (LIF) and Yann Esposito (LIF)},
journal={arXiv preprint arXiv:cs/0602093},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602093},
primaryClass={cs.LG cs.CL}
} | denis2006rational |
arxiv-673908 | cs/0602094 | Effect of CSMA/CD on Self-Similarity of Network Traffic | <|reference_start|>Effect of CSMA/CD on Self-Similarity of Network Traffic: It is now well known that Internet traffic exhibits self-similarity, which cannot be described by traditional Markovian models such as the Poisson process. The causes of self-similarity of network traffic must be identified because understanding the nature of network traffic is critical in order to properly design and implement computer networks and network services like the World Wide Web. While some researchers have argued that self-similarity is generated by typical applications or caused by transport-layer protocols, it is also possible that the CSMA/CD protocol may cause or at least contribute to this phenomenon. In this paper, we use the NS simulator to study the effect of the CSMA/CD exponential backoff retransmission algorithm on traffic self-similarity.<|reference_end|> | arxiv | @article{altaher2006effect,
title={Effect of CSMA/CD on Self-Similarity of Network Traffic},
author={Altyeb Altaher},
journal={arXiv preprint arXiv:cs/0602094},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602094},
primaryClass={cs.NI}
} | altaher2006effect |
arxiv-673909 | cs/0602095 | Epsilon-Unfolding Orthogonal Polyhedra | <|reference_start|>Epsilon-Unfolding Orthogonal Polyhedra: An unfolding of a polyhedron is produced by cutting the surface and flattening to a single, connected, planar piece without overlap (except possibly at boundary points). It is a long unsolved problem to determine whether every polyhedron may be unfolded. Here we prove, via an algorithm, that every orthogonal polyhedron (one whose faces meet at right angles) of genus zero may be unfolded. Our cuts are not necessarily along edges of the polyhedron, but they are always parallel to polyhedron edges. For a polyhedron of n vertices, portions of the unfolding will be rectangular strips which, in the worst case, may need to be as thin as epsilon = 1/2^{Omega(n)}.<|reference_end|> | arxiv | @article{damian2006epsilon-unfolding,
title={Epsilon-Unfolding Orthogonal Polyhedra},
author={Mirela Damian and Robin Flatland and Joseph O'Rourke},
journal={arXiv preprint arXiv:cs/0602095},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602095},
primaryClass={cs.CG}
} | damian2006epsilon-unfolding |
arxiv-673910 | cs/0602096 | Difficulties in the Implementation of Quantum Computers | <|reference_start|>Difficulties in the Implementation of Quantum Computers: This paper reviews various engineering hurdles facing the field of quantum computing. Specifically, problems related to decoherence, state preparation, error correction, and implementability of gates are considered.<|reference_end|> | arxiv | @article{ponnath2006difficulties,
title={Difficulties in the Implementation of Quantum Computers},
author={Abhilash Ponnath},
journal={arXiv preprint arXiv:cs/0602096},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602096},
primaryClass={cs.AR}
} | ponnath2006difficulties |
arxiv-673911 | cs/0602097 | The Cubic Public-Key Transformation | <|reference_start|>The Cubic Public-Key Transformation: We propose the use of the cubic transformation for public-key applications and digital signatures. Transformations modulo a prime p or a composite n=pq, where p and q are primes, are used in such a fashion that each transformed value has only 3 roots that makes it a more efficient transformation than the squaring transformation of Rabin, which has 4 roots. Such a transformation, together with additional tag information, makes it possible to uniquely invert each transformed value. The method may be used for other exponents as well.<|reference_end|> | arxiv | @article{kak2006the,
title={The Cubic Public-Key Transformation},
author={Subhash Kak},
journal={Circuits Systems and Signal Processing, vol 26, pp. 353-359, 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602097},
primaryClass={cs.CR}
} | kak2006the |
arxiv-673912 | cs/0602098 | Compositional Semantics for the Procedural Interpretation of Logic | <|reference_start|>Compositional Semantics for the Procedural Interpretation of Logic: Semantics of logic programs has been given by proof theory, model theory and by fixpoint of the immediate-consequence operator. If clausal logic is a programming language, then it should also have a compositional semantics. Compositional semantics for programming languages follows the abstract syntax of programs, composing the meaning of a unit by a mathematical operation on the meanings of its constituent units. The procedural interpretation of logic has only yielded an incomplete abstract syntax for logic programs. We complete it and use the result as basis of a compositional semantics. We present for comparison Tarski's algebraization of first-order predicate logic, which is in substance the compositional semantics for his choice of syntax. We characterize our semantics by equivalence with the immediate-consequence operator.<|reference_end|> | arxiv | @article{van emden2006compositional,
title={Compositional Semantics for the Procedural Interpretation of Logic},
author={M.H. van Emden},
journal={arXiv preprint arXiv:cs/0602098},
year={2006},
number={DCS-307-IR},
archivePrefix={arXiv},
eprint={cs/0602098},
primaryClass={cs.PL}
} | van emden2006compositional |
arxiv-673913 | cs/0602099 | Towards Applicative Relational Programming | <|reference_start|>Towards Applicative Relational Programming: Functional programming comes in two flavours: one where ``functions are first-class citizens'' (we call this applicative) and one which is based on equations (we call this declarative). In relational programming clauses play the role of equations. Hence Prolog is declarative. The purpose of this paper is to provide in relational programming a mathematical basis for the relational analog of applicative functional programming. We use the cylindric semantics of first-order logic due to Tarski and provide a new notation for the required cylinders that we call tables. We define the Table/Relation Algebra with operators sufficient to translate Horn clauses into algebraic form. We establish basic mathematical properties of these operators. We show how relations can be first-class citizens, and devise mechanisms for modularity, for local scoping of predicates, and for exporting/importing relations between programs.<|reference_end|> | arxiv | @article{ibrahim2006towards,
title={Towards Applicative Relational Programming},
author={H. Ibrahim and M.H. van Emden},
journal={arXiv preprint arXiv:cs/0602099},
year={2006},
archivePrefix={arXiv},
eprint={cs/0602099},
primaryClass={cs.PL}
} | ibrahim2006towards |
arxiv-673914 | cs/0603001 | BioSig - An application of Octave | <|reference_start|>BioSig - An application of Octave: BioSig is an open source software library for biomedical signal processing. Most users in the field are using Matlab; however, significant effort was undertaken to provide compatibility to Octave, too. This effort has been widely successful, only some non-critical components relying on a graphical user interface are missing. Now, installing BioSig on Octave is as easy as on Matlab. Moreover, a benchmark test based on BioSig has been developed and the benchmark results of several platforms are presented.<|reference_end|> | arxiv | @article{schlögl2006biosig,
title={BioSig - An application of Octave},
author={Alois Schl{\"o}gl},
journal={arXiv preprint arXiv:cs/0603001},
year={2006},
number={Octave2006/16},
archivePrefix={arXiv},
eprint={cs/0603001},
primaryClass={cs.MS}
} | schlögl2006biosig |
arxiv-673915 | cs/0603002 | On comparing sums of square roots of small integers | <|reference_start|>On comparing sums of square roots of small integers: Let $k$ and $n$ be positive integers, $n>k$. Define $r(n,k)$ to be the minimum positive value of $$ |\sqrt{a_1} + ... + \sqrt{a_k} - \sqrt{b_1} - ... - \sqrt{b_k}| $$ where $ a_1, a_2, ..., a_k, b_1, b_2, ..., b_k $ are positive integers no larger than $n$. It is an important problem in computational geometry to determine a good upper bound of $-\log r(n,k)$. In this paper we prove an upper bound of $ 2^{O(n/\log n)} \log n$, which is better than the best known result $O(2^{2k} \log n)$ whenever $ n \leq ck\log k$ for some constant $c$. In particular, our result implies a {\em subexponential} algorithm to compare two sums of square roots of integers of size $o(k\log k)$.<|reference_end|> | arxiv | @article{cheng2006on,
title={On comparing sums of square roots of small integers},
author={Qi Cheng},
journal={arXiv preprint arXiv:cs/0603002},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603002},
primaryClass={cs.CG}
} | cheng2006on |
arxiv-673916 | cs/0603003 | Analyse non standard du bruit | <|reference_start|>Analyse non standard du bruit: Thanks to the nonstandard formalization of fast oscillating functions, due to P. Cartier and Y. Perrin, an appropriate mathematical framework is derived for new non-asymptotic estimation techniques, which do not necessitate any statistical analysis of the noises corrupting any sensor. Various applications are deduced for multiplicative noises, for the length of the parametric estimation windows, and for burst errors.<|reference_end|> | arxiv | @article{fliess2006analyse,
title={Analyse non standard du bruit},
author={Michel Fliess (LIX, INRIA Futurs)},
journal={arXiv preprint arXiv:cs/0603003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603003},
primaryClass={cs.CE math.LO math.OC math.PR quant-ph}
} | fliess2006analyse |
arxiv-673917 | cs/0603004 | Lamarckian Evolution and the Baldwin Effect in Evolutionary Neural Networks | <|reference_start|>Lamarckian Evolution and the Baldwin Effect in Evolutionary Neural Networks: Hybrid neuro-evolutionary algorithms may be inspired by Darwinian or Lamarckian evolution. In the case of Darwinian evolution, the Baldwin effect, that is, the progressive incorporation of learned characteristics into the genotypes, can be observed and leveraged to improve the search. The purpose of this paper is to carry out an experimental study into how learning can improve G-Prop genetic search. Two ways of combining learning and genetic search are explored: one exploits the Baldwin effect, while the other uses a Lamarckian strategy. Our experiments show that using a Lamarckian operator makes the algorithm find networks with a low error rate and the smallest size, while using the Baldwin effect obtains MLPs with the smallest error rate and a larger size, taking longer to reach a solution. Both approaches obtain a lower average error than other BP-based algorithms like RPROP, other evolutionary methods and fuzzy-logic-based methods.<|reference_end|> | arxiv | @article{castillo2006lamarckian,
title={Lamarckian Evolution and the Baldwin Effect in Evolutionary Neural
Networks},
author={P.A. Castillo and M.G. Arenas and J.G. Castellano and J.J. Merelo and
A. Prieto and V. Rivas and G. Romero},
journal={arXiv preprint arXiv:cs/0603004},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603004},
primaryClass={cs.NE}
} | castillo2006lamarckian |
arxiv-673918 | cs/0603005 | A Basic Introduction on Math-Link in Mathematica | <|reference_start|>A Basic Introduction on Math-Link in Mathematica: Starting from the basic ideas of Mathematica, we give a detailed description of how to link external programs with Mathematica through the proper MathLink commands. This article may be quite helpful for beginners who want to start writing programs in Mathematica. In the first part, we illustrate how to use a Mathematica notebook and write a complete program in it. Following this, we also discuss in detail the local and global variables that are essential for writing a program in Mathematica. All the commands needed for different mathematical operations can be found, with proper examples, in the Mathematica book written by Stephen Wolfram \cite{wolfram}. In the rest of this article, we concentrate our study on the most significant issue, which is the process of linking {\em external programs} with Mathematica, the so-called MathLink operation. By using the proper MathLink commands one can run very tedious jobs efficiently, and the operations become extremely fast.<|reference_end|> | arxiv | @article{maiti2006a,
title={A Basic Introduction on Math-Link in Mathematica},
author={Santanu K. Maiti},
journal={arXiv preprint arXiv:cs/0603005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603005},
primaryClass={cs.MS cs.PL}
} | maiti2006a |
arxiv-673919 | cs/0603006 | Pivotal and Pivotal-discriminative Consequence Relations | <|reference_start|>Pivotal and Pivotal-discriminative Consequence Relations: In the present paper, we investigate consequence relations that are both paraconsistent and plausible (but still monotonic). More precisely, we put the focus on pivotal consequence relations, i.e. those relations that can be defined by a pivot (in the style of e.g. D. Makinson). A pivot is a fixed subset of valuations which are considered to be the important ones in the absolute sense. We work with a general notion of valuation that covers e.g. the classical valuations as well as certain kinds of many-valued valuations. In the many-valued cases, pivotal consequence relations are paraconsistent (in addition to being plausible), i.e. they are capable of drawing reasonable conclusions which contain contradictions. We will provide in our general framework syntactic characterizations of several families of pivotal relations. In addition, we will provide, again in our general framework, characterizations of several families of pivotal discriminative consequence relations. The latter are defined exactly as the plain version, but contradictory conclusions are rejected. We will also answer negatively a representation problem that was left open by Makinson. Finally, we will highlight a connection with X-logics from Forget, Risch, and Siegel. The motivations and the framework of the present paper are very close to those of a previous paper of the author which is about preferential consequence relations.<|reference_end|> | arxiv | @article{ben-naim2006pivotal,
title={Pivotal and Pivotal-discriminative Consequence Relations},
author={Jonathan Ben-Naim (LIF)},
journal={Journal of Logic and Computation Volume 15, number 5 (2005)
679-700},
year={2006},
doi={10.1093/logcom/exi030},
archivePrefix={arXiv},
eprint={cs/0603006},
primaryClass={cs.LO}
} | ben-naim2006pivotal |
arxiv-673920 | cs/0603007 | Complete Enumeration of Stopping Sets of Full-Rank Parity-Check Matrices of Hamming Codes | <|reference_start|>Complete Enumeration of Stopping Sets of Full-Rank Parity-Check Matrices of Hamming Codes: Stopping sets, and in particular their numbers and sizes, play an important role in determining the performance of iterative decoders of linear codes over binary erasure channels. In the 2004 Shannon Lecture, McEliece presented an expression for the number of stopping sets of size three for a full-rank parity-check matrix of the Hamming code. In this correspondence, we derive an expression for the number of stopping sets of any given size for the same parity-check matrix.<|reference_end|> | arxiv | @article{abdel-ghaffar2006complete,
title={Complete Enumeration of Stopping Sets of Full-Rank Parity-Check Matrices
of Hamming Codes},
author={Khaled A. S. Abdel-Ghaffar and Jos H. Weber},
journal={arXiv preprint arXiv:cs/0603007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603007},
primaryClass={cs.IT math.IT}
} | abdel-ghaffar2006complete |
arxiv-673921 | cs/0603008 | Linear Secret Sharing from Algebraic-Geometric Codes | <|reference_start|>Linear Secret Sharing from Algebraic-Geometric Codes: It is well-known that linear secret-sharing schemes (LSSS) can be constructed from linear error-correcting codes (Brickell [1], R.J. McEliece and D.V. Sarwate [2], Cramer et al. [3]). The theory of linear codes from algebraic-geometric curves (algebraic-geometric (AG) codes or geometric Goppa codes) has been well-developed since the work of V. Goppa and Tsfasman, Vladut, and Zink (see [17], [18] and [19]). In this paper the linear secret-sharing schemes from algebraic-geometric codes, which are non-threshold schemes for curves of genus greater than 0, are presented. We analyze the minimal access structure, $d_{min}$ and $d_{cheat}$ ([8]), (strong) multiplicativity, and the applications of this construction in verifiable secret-sharing (VSS) schemes and secure multi-party computation (MPC) ([3] and [10-11]). Our construction also offers many examples of self-dually $GF(q)$-representable matroids and many examples of new ideal linear secret-sharing schemes, addressing the problem of the characterization of the access structures for ideal secret-sharing schemes ([3] and [9]). The access structures of the linear secret-sharing schemes from the codes on elliptic curves are given explicitly. From the work in this paper we can see that the algebraic-geometric structure of the underlying algebraic curves is an important resource for secret-sharing, matroid theory, verifiable secret-sharing and secure multi-party computation.<|reference_end|> | arxiv | @article{chen2006linear,
title={Linear Secret Sharing from Algebraic-Geometric Codes},
author={Hao Chen},
journal={arXiv preprint arXiv:cs/0603008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603008},
primaryClass={cs.CR cs.IT math.IT}
} | chen2006linear |
arxiv-673922 | cs/0603009 | An Achievability Result for the General Relay Channel | <|reference_start|>An Achievability Result for the General Relay Channel: See cs.IT/0605135: R. Dabora, S. D. Servetto; On the Role of Estimate-and-Forward with Time-Sharing in Cooperative Communications.<|reference_end|> | arxiv | @article{dabora2006an,
title={An Achievability Result for the General Relay Channel},
author={Ron Dabora and Sergio D. Servetto (Cornell University)},
journal={arXiv preprint arXiv:cs/0603009},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603009},
primaryClass={cs.IT math.IT}
} | dabora2006an |
arxiv-673923 | cs/0603010 | Asymptotic constant-factor approximation algorithm for the Traveling Salesperson Problem for Dubins' vehicle | <|reference_start|>Asymptotic constant-factor approximation algorithm for the Traveling Salesperson Problem for Dubins' vehicle: This article proposes the first known algorithm that achieves a constant-factor approximation of the minimum length tour for a Dubins' vehicle through $n$ points on the plane. By Dubins' vehicle, we mean a vehicle constrained to move at constant speed along paths with bounded curvature without reversing direction. For this version of the classic Traveling Salesperson Problem, our algorithm closes the gap between previously established lower and upper bounds; the achievable performance is of order $n^{2/3}$.<|reference_end|> | arxiv | @article{savla2006asymptotic,
title={Asymptotic constant-factor approximation algorithm for the Traveling
Salesperson Problem for Dubins' vehicle},
author={Ketan Savla and Emilio Frazzoli and Francesco Bullo},
journal={arXiv preprint arXiv:cs/0603010},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603010},
primaryClass={cs.RO}
} | savla2006asymptotic |
arxiv-673924 | cs/0603011 | Intrinsically Legal-For-Trade Objects by Digital Signatures | <|reference_start|>Intrinsically Legal-For-Trade Objects by Digital Signatures: The established techniques for legal-for-trade registration of weight values meet the legal requirements, but in praxis they show serious disadvantages. We report on the first implementation of intrinsically legal-for-trade objects, namely weight values signed by the scale, that is accepted by the approval authority. The strict requirements from both the approval- and the verification-authority as well as the limitations due to the hardware of the scale were a special challenge. The presented solution fulfills all legal requirements and eliminates the existing practical disadvantages.<|reference_end|> | arxiv | @article{wiesmaier2006intrinsically,
title={Intrinsically Legal-For-Trade Objects by Digital Signatures},
author={A. Wiesmaier and U. Rauchschwalbe and C. Ludwig and B. Henhapl and
M. Ruppert and J. Buchmann},
journal={Sicherheit 2006: Sicherheit -- Schutz und Zuverlaessigkeit},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603011},
primaryClass={cs.CR}
} | wiesmaier2006intrinsically |
arxiv-673925 | cs/0603012 | Improved Bounds and Schemes for the Declustering Problem | <|reference_start|>Improved Bounds and Schemes for the Declustering Problem: The declustering problem is to allocate given data on parallel working storage devices in such a manner that typical requests find their data evenly distributed on the devices. Using deep results from discrepancy theory, we improve previous work of several authors concerning range queries to higher-dimensional data. We give a declustering scheme with an additive error of $O_d(\log^{d-1} M)$ independent of the data size, where $d$ is the dimension, $M$ the number of storage devices and $d-1$ does not exceed the smallest prime power in the canonical decomposition of $M$ into prime powers. In particular, our schemes work for arbitrary $M$ in dimensions two and three. For general $d$, they work for all $M\geq d-1$ that are powers of two. Concerning lower bounds, we show that a recent proof of a $\Omega_d(\log^{\frac{d-1}{2}} M)$ bound contains an error. We close the gap in the proof and thus establish the bound.<|reference_end|> | arxiv | @article{doerr2006improved,
title={Improved Bounds and Schemes for the Declustering Problem},
author={Benjamin Doerr and Nils Hebbinghaus and S{\"o}ren Werth},
journal={arXiv preprint arXiv:cs/0603012},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603012},
primaryClass={cs.DM cs.DS}
} | doerr2006improved |
arxiv-673926 | cs/0603013 | On the MacWilliams Identity for Convolutional Codes | <|reference_start|>On the MacWilliams Identity for Convolutional Codes: The adjacency matrix associated with a convolutional code collects in a detailed manner information about the weight distribution of the code. A MacWilliams Identity Conjecture, stating that the adjacency matrix of a code fully determines the adjacency matrix of the dual code, will be formulated, and an explicit formula for the transformation will be stated. The formula involves the MacWilliams matrix known from complete weight enumerators of block codes. The conjecture will be proven for the class of convolutional codes where either the code itself or its dual does not have Forney indices bigger than one. For the general case the conjecture is backed up by many examples, and a weaker version will be established.<|reference_end|> | arxiv | @article{gluesing-luerssen2006on,
title={On the MacWilliams Identity for Convolutional Codes},
author={Heide Gluesing-Luerssen, Gert Schneider},
journal={arXiv preprint arXiv:cs/0603013},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603013},
primaryClass={cs.IT math.IT math.OC}
} | gluesing-luerssen2006on |
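For context, the classical MacWilliams identity for block codes, which the conjectured convolutional-code identity generalizes, relates the weight enumerator of a linear code $C \subseteq \mathbb{F}_q^n$ to that of its dual:

```latex
W_{C^{\perp}}(x,y) \;=\; \frac{1}{|C|}\, W_C\bigl(x + (q-1)y,\; x - y\bigr),
\qquad
W_C(x,y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
```

The adjacency matrix of a convolutional code plays the role that $W_C$ plays here, and the conjectured transformation involves the corresponding MacWilliams matrix.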
arxiv-673927 | cs/0603014 | Near orders and codes | <|reference_start|>Near orders and codes: Hoholdt, van Lint and Pellikaan used order functions to construct codes by means of Linear Algebra and Semigroup Theory only. However, Geometric Goppa codes that can be represented by this method are mainly those based on just one point. In this paper we introduce the concept of a near order function with the aim of generalizing this approach in such a way that a wider family of Geometric Goppa codes can be studied in a more elementary setting.<|reference_end|> | arxiv | @article{carvalho2006near,
title={Near orders and codes},
author={C. Carvalho, C. Munuera, E. Silva, F. Torres},
journal={arXiv preprint arXiv:cs/0603014},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603014},
primaryClass={cs.IT math.IT}
} | carvalho2006near |
arxiv-673928 | cs/0603015 | The Basic Kak Neural Network with Complex Inputs | <|reference_start|>The Basic Kak Neural Network with Complex Inputs: The Kak family of neural networks is able to learn patterns quickly, and this speed of learning can be a decisive advantage over other competing models in many applications. Amongst the implementations of these networks are those using reconfigurable networks, FPGAs and optical networks. In some applications, it is useful to use complex data, and it is with that in mind that this introduction to the basic Kak network with complex inputs is being presented. The training algorithm is prescriptive and the network weights are assigned simply upon examining the inputs. The input is mapped using quaternary encoding for purposes of efficiency. This network family is part of a larger hierarchy of learning schemes that include quantum models.<|reference_end|> | arxiv | @article{rajagopal2006the,
title={The Basic Kak Neural Network with Complex Inputs},
author={Pritam Rajagopal},
journal={arXiv preprint arXiv:cs/0603015},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603015},
primaryClass={cs.NE}
} | rajagopal2006the |
arxiv-673929 | cs/0603016 | Object-Oriented Modeling of Programming Paradigms | <|reference_start|>Object-Oriented Modeling of Programming Paradigms: For the right application, the use of programming paradigms such as functional or logic programming can enormously increase productivity in software development. But these powerful paradigms are tied to exotic programming languages, while the management of software development dictates standardization on a single language. This dilemma can be resolved by using object-oriented programming in a new way. It is conventional to analyze an application by object-oriented modeling. In the new approach, the analysis identifies the paradigm that is ideal for the application; development starts with object-oriented modeling of the paradigm. In this paper we illustrate the new approach by giving examples of object-oriented modeling of dataflow and constraint programming. These examples suggest that it is no longer necessary to embody a programming paradigm in a language dedicated to it.<|reference_end|> | arxiv | @article{van emden2006object-oriented,
title={Object-Oriented Modeling of Programming Paradigms},
author={M.H. van Emden and S.C. Somosan},
journal={arXiv preprint arXiv:cs/0603016},
year={2006},
number={DCS-310-IR},
archivePrefix={arXiv},
eprint={cs/0603016},
primaryClass={cs.SE cs.PL}
} | van emden2006object-oriented |
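A minimal sketch of what "object-oriented modeling of a paradigm" can look like for dataflow: nodes hold input queues and fire their operation once every input port has a token. The class names and the firing rule are illustrative assumptions, not the paper's design.

```python
class Node:
    """A dataflow node fires when all input ports hold a pending
    token, then forwards its result to downstream ports."""
    def __init__(self, op, n_inputs):
        self.op = op
        self.queues = [[] for _ in range(n_inputs)]
        self.targets = []  # (node, port) pairs fed by this node

    def connect(self, node, port):
        self.targets.append((node, port))

    def put(self, port, token):
        self.queues[port].append(token)
        if all(self.queues):  # every port has at least one token
            args = [q.pop(0) for q in self.queues]
            result = self.op(*args)
            for node, p in self.targets:
                node.put(p, result)
            return result

# wire up (a + b) * c as a two-node dataflow graph
sink = []
mul = Node(lambda x, y: sink.append(x * y) or x * y, 2)
add = Node(lambda x, y: x + y, 2)
add.connect(mul, 0)
mul.put(1, 10)   # c = 10 arrives first; mul waits
add.put(0, 2)    # a = 2
add.put(1, 3)    # b = 3 -> add fires with 5, then mul fires with 50
```

The point of the exercise, as in the abstract, is that the paradigm's execution rule lives in ordinary objects rather than in a dedicated language.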
arxiv-673930 | cs/0603017 | A Measure of Space for Computing over the Reals | <|reference_start|>A Measure of Space for Computing over the Reals: We propose a new complexity measure of space for the BSS model of computation. We define LOGSPACE_W and PSPACE_W complexity classes over the reals. We prove that LOGSPACE_W is included in NC^2_R and in P_W, i.e., is small enough to be relevant. We prove that the Real Circuit Decision Problem is P_R-complete under LOGSPACE_W reductions, i.e., that LOGSPACE_W is large enough to contain natural algorithms. We also prove that PSPACE_W is included in PAR_R.<|reference_end|> | arxiv | @article{de naurois2006a,
title={A Measure of Space for Computing over the Reals},
author={Paulin Jacob\'e De Naurois (LIPN)},
journal={Logical Approaches to Computational Barriers Second Conference on
Computability in Europe, CiE 2006, Royaume-Uni (2006) 231-240},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603017},
primaryClass={cs.CC}
} | de naurois2006a |
arxiv-673931 | cs/0603018 | On Non-coherent MIMO Channels in the Wideband Regime: Capacity and Reliability | <|reference_start|>On Non-coherent MIMO Channels in the Wideband Regime: Capacity and Reliability: We consider a multiple-input, multiple-output (MIMO) wideband Rayleigh block fading channel where the channel state is unknown to both the transmitter and the receiver and there is only an average power constraint on the input. We compute the capacity and analyze its dependence on coherence length, number of antennas and receive signal-to-noise ratio (SNR) per degree of freedom. We establish conditions on the coherence length and number of antennas for the non-coherent channel to have a "near coherent" performance in the wideband regime. We also propose a signaling scheme that is near-capacity achieving in this regime. We compute the error probability for this wideband non-coherent MIMO channel and study its dependence on SNR, number of transmit and receive antennas and coherence length. We show that error probability decays inversely with coherence length and exponentially with the product of the number of transmit and receive antennas. Moreover, channel outage dominates error probability in the wideband regime. We also show that the critical as well as cut-off rates are much smaller than channel capacity in this regime.<|reference_end|> | arxiv | @article{ray2006on,
title={On Non-coherent MIMO Channels in the Wideband Regime: Capacity and
Reliability},
author={Siddharth Ray, Muriel Medard and Lizhong Zheng},
journal={arXiv preprint arXiv:cs/0603018},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603018},
primaryClass={cs.IT math.IT}
} | ray2006on |
arxiv-673932 | cs/0603019 | Characterizing the NP-PSPACE Gap in the Satisfiability Problem for Modal Logic | <|reference_start|>Characterizing the NP-PSPACE Gap in the Satisfiability Problem for Modal Logic: There has been a great deal of work on characterizing the complexity of the satisfiability and validity problem for modal logics. In particular, Ladner showed that the validity problem for all logics between K, T, and S4 is {\sl PSPACE}-complete, while for S5 it is {\sl NP}-complete. We show that, in a precise sense, it is \emph{negative introspection}, the axiom $\neg K p \rightarrow K \neg K p$, that causes the gap: if we require this axiom, then the satisfiability problem is {\sl NP}-complete; without it, it is {\sl PSPACE}-complete.<|reference_end|> | arxiv | @article{halpern2006characterizing,
title={Characterizing the NP-PSPACE Gap in the Satisfiability Problem for Modal
Logic},
author={Joseph Y. Halpern and Leandro Chaves Rego},
journal={arXiv preprint arXiv:cs/0603019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603019},
primaryClass={cs.LO cs.CC}
} | halpern2006characterizing |
arxiv-673933 | cs/0603020 | Reasoning About Knowledge of Unawareness | <|reference_start|>Reasoning About Knowledge of Unawareness: Awareness has been shown to be a useful addition to standard epistemic logic for many applications. However, standard propositional logics for knowledge and awareness cannot express the fact that an agent knows that there are facts of which he is unaware without there being an explicit fact that the agent knows he is unaware of. We propose a logic for reasoning about knowledge of unawareness, by extending Fagin and Halpern's \emph{Logic of General Awareness}. The logic allows quantification over variables, so that there is a formula in the language that can express the fact that ``an agent explicitly knows that there exists a fact of which he is unaware''. Moreover, that formula can be true without the agent explicitly knowing that he is unaware of any particular formula. We provide a sound and complete axiomatization of the logic, using standard axioms from the literature to capture the quantification operator. Finally, we show that the validity problem for the logic is recursively enumerable, but not decidable.<|reference_end|> | arxiv | @article{halpern2006reasoning,
title={Reasoning About Knowledge of Unawareness},
author={Joseph Y. Halpern and Leandro Chaves Rego},
journal={arXiv preprint arXiv:cs/0603020},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603020},
primaryClass={cs.LO cs.MA}
} | halpern2006reasoning |
arxiv-673934 | cs/0603021 | Language Support for Optional Functionality | <|reference_start|>Language Support for Optional Functionality: We recommend a programming construct - availability check - for programs that need to adjust automatically to the presence or absence of segments of code. The idea is to check for the existence of a valid definition before a function call is invoked. The syntax is that of a simple 'if' statement. The vision is to enable customization of application functionality through the addition or removal of optional components, but without requiring a complete re-build. Focus is on C-like compiled procedural languages and UNIX-based systems. Essentially, our approach attempts to combine the flexibility of dynamic libraries with the usability of utility (dependency) libraries. We outline the benefits over prevalent strategies mainly in terms of development complexity, crudely measured as fewer lines of code. We also allude to performance and flexibility facets. A preliminary implementation and figures from an early experimental evaluation are presented.<|reference_end|> | arxiv | @article{mukherjee2006language,
title={Language Support for Optional Functionality},
author={Joy Mukherjee, Srinidhi Varadarajan},
journal={arXiv preprint arXiv:cs/0603021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603021},
primaryClass={cs.PL cs.OS cs.SE}
} | mukherjee2006language |
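The shape of the "availability check" idea, testing whether a valid definition exists before invoking it, can be mimicked in an interpreted setting. The sketch below is a Python analogy using a name lookup; the paper's actual construct targets compiled C-like code and dynamic linking, which this does not reproduce.

```python
def call_if_available(namespace, name, *args, default=None):
    """Invoke namespace[name](*args) only if a callable definition
    exists; otherwise fall back. This is the 'if available' pattern."""
    fn = namespace.get(name)
    if callable(fn):          # the availability check itself
        return fn(*args)
    return default

# an optional component may or may not have been loaded:
components = {"render_preview": lambda text: "<pre>" + text + "</pre>"}

out1 = call_if_available(components, "render_preview", "hi")
out2 = call_if_available(components, "spell_check", "hi", default="hi")
```

In a compiled UNIX setting, the analogous check would be resolving a symbol at run time (e.g., via the dynamic linker) before calling through it.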
arxiv-673935 | cs/0603022 | On Separation, Randomness and Linearity for Network Codes over Finite Fields | <|reference_start|>On Separation, Randomness and Linearity for Network Codes over Finite Fields: We examine the issue of separation and code design for networks that operate over finite fields. We demonstrate that source-channel (or source-network) separation holds for several canonical network examples like the noisy multiple access channel and the erasure degraded broadcast channel, when the whole network operates over a common finite field. This robustness of separation is predicated on the fact that noise and inputs are independent, and we examine the failure of separation when noise is dependent on inputs in multiple access channels. Our approach is based on the sufficiency of linear codes. Using a simple and unifying framework, we not only re-establish with economy the optimality of linear codes for single-transmitter, single-receiver channels and for Slepian-Wolf source coding, but also establish the optimality of linear codes for multiple access and for erasure degraded broadcast channels. The linearity allows us to obtain simple optimal code constructions and to study capacity regions of the noisy multiple access and the degraded broadcast channel. The linearity of both source and network coding blurs the delineation between source and network codes. While our results point to the fact that separation of source coding and channel coding is optimal in some canonical networks, we show that decomposing networks into canonical subnetworks may not be effective. Thus, we argue that it may be the lack of decomposability of a network into canonical network modules, rather than the lack of separation between source and channel coding, that presents major challenges for coding over networks.<|reference_end|> | arxiv | @article{ray2006on,
title={On Separation, Randomness and Linearity for Network Codes over Finite
Fields},
author={Siddharth Ray, Michelle Effros, Muriel Medard, Ralf Koetter, Tracey
Ho, David Karger and Jinane Abounadi},
journal={arXiv preprint arXiv:cs/0603022},
year={2006},
number={MIT LIDS Technical Report 2687},
archivePrefix={arXiv},
eprint={cs/0603022},
primaryClass={cs.IT math.IT}
} | ray2006on |
arxiv-673936 | cs/0603023 | Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot | <|reference_start|>Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot: We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods.<|reference_end|> | arxiv | @article{zhumatiy2006metric,
title={Metric State Space Reinforcement Learning for a Vision-Capable Mobile
Robot},
author={Viktor Zhumatiy and Faustino Gomez and Marcus Hutter and Juergen
Schmidhuber},
journal={Proc. 9th International Conf. on Intelligent Autonomous Systems
(IAS 2006) pages 272-281},
year={2006},
number={IDSIA-05-06},
archivePrefix={arXiv},
eprint={cs/0603023},
primaryClass={cs.RO cs.LG}
} | zhumatiy2006metric |
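A toy illustration of acting by nearest stored experience under a trajectory metric, in the spirit of (but far simpler than) the Nearest-Sequence Memory extension described above; the per-step metric and the data are invented for the example.

```python
def traj_distance(a, b):
    """Distance between two equal-length observation sequences:
    the sum of per-step absolute differences (an assumed metric)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest_action(memory, recent):
    """Pick the action that followed the stored observation history
    closest to the recent one."""
    best = min(memory, key=lambda m: traj_distance(m[0], recent))
    return best[1]

# (observation history, action that followed) pairs
memory = [((0.0, 0.1, 0.2), "left"), ((1.0, 0.9, 0.8), "right")]
action = nearest_action(memory, (0.9, 1.0, 0.7))
```

The appeal for robotics, as the abstract notes, is that no manual discretization of the continuous sensor space is needed: the metric does the generalization.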
arxiv-673937 | cs/0603024 | Representing Contextualized Information in the NSDL | <|reference_start|>Representing Contextualized Information in the NSDL: The NSDL (National Science Digital Library) is funded by the National Science Foundation to advance science and math education. The initial product was a metadata-based digital library providing search and access to distributed resources. Our recent work recognizes the importance of context - relations, metadata, annotations - for the pedagogical value of a digital library. This new architecture uses Fedora, a tool for representing complex content, data, metadata, web-based services, and semantic relationships, as the basis of an information network overlay (INO). The INO provides an extensible knowledge base for an expanding suite of digital library services.<|reference_end|> | arxiv | @article{lagoze2006representing,
title={Representing Contextualized Information in the NSDL},
author={Carl Lagoze, Dean Krafft, Tim Cornwell, Dean Eckstrom, Susan Jesuroga,
Chris Wilper},
journal={arXiv preprint arXiv:cs/0603024},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603024},
primaryClass={cs.DL}
} | lagoze2006representing |
arxiv-673938 | cs/0603025 | Open Answer Set Programming with Guarded Programs | <|reference_start|>Open Answer Set Programming with Guarded Programs: Open answer set programming (OASP) is an extension of answer set programming where one may ground a program with an arbitrary superset of the program's constants. We define a fixed point logic (FPL) extension of Clark's completion such that open answer sets correspond to models of FPL formulas and identify a syntactic subclass of programs, called (loosely) guarded programs. Whereas reasoning with general programs in OASP is undecidable, the FPL translation of (loosely) guarded programs falls in the decidable (loosely) guarded fixed point logic (mu(L)GF). Moreover, we reduce normal closed ASP to loosely guarded OASP, enabling for the first time, a characterization of an answer set semantics by muLGF formulas. We further extend the open answer set semantics for programs with generalized literals. Such generalized programs (gPs) have interesting properties, e.g., the ability to express infinity axioms. We restrict the syntax of gPs such that both rules and generalized literals are guarded. Via a translation to guarded fixed point logic, we deduce 2-exptime-completeness of satisfiability checking in such guarded gPs (GgPs). Bound GgPs are restricted GgPs with exptime-complete satisfiability checking, but still sufficiently expressive to optimally simulate computation tree logic (CTL). We translate Datalog lite programs to GgPs, establishing equivalence of GgPs under an open answer set semantics, alternation-free muGF, and Datalog lite.<|reference_end|> | arxiv | @article{heymans2006open,
title={Open Answer Set Programming with Guarded Programs},
author={Stijn Heymans, Davy Van Nieuwenborgh and Dirk Vermeir},
journal={arXiv preprint arXiv:cs/0603025},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603025},
primaryClass={cs.AI}
} | heymans2006open |
arxiv-673939 | cs/0603026 | The Snowblower Problem | <|reference_start|>The Snowblower Problem: We introduce the snowblower problem (SBP), a new optimization problem that is closely related to milling problems and to some material-handling problems. The objective in the SBP is to compute a short tour for the snowblower to follow to remove all the snow from a domain (driveway, sidewalk, etc.). When a snowblower passes over each region along the tour, it displaces snow into a nearby region. The constraint is that if the snow is piled too high, then the snowblower cannot clear the pile. We give an algorithmic study of the SBP. We show that in general, the problem is NP-complete, and we present polynomial-time approximation algorithms for removing snow under various assumptions about the operation of the snowblower. Most commercially-available snowblowers allow the user to control the direction in which the snow is thrown. We differentiate between the cases in which the snow can be thrown in any direction, in any direction except backwards, and only to the right. For all cases, we give constant-factor approximation algorithms; the constants increase as the throw direction becomes more restricted. Our results are also applicable to robotic vacuuming (or lawnmowing) with bounded capacity dust bin and to some versions of material-handling problems, in which the goal is to rearrange cartons on the floor of a warehouse.<|reference_end|> | arxiv | @article{arkin2006the,
title={The Snowblower Problem},
author={Esther M. Arkin (1), Michael A. Bender (2), Joseph S. B. Mitchell (1),
Valentin Polishchuk (1) ((1) Department of Applied Mathematics and
Statistics, Stony Brook University, (2) Department of Computer Science, Stony
Brook University)},
journal={arXiv preprint arXiv:cs/0603026},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603026},
primaryClass={cs.DS cs.CC cs.RO}
} | arkin2006the |
arxiv-673940 | cs/0603027 | On the Second-Order Statistics of the Instantaneous Mutual Information in Rayleigh Fading Channels | <|reference_start|>On the Second-Order Statistics of the Instantaneous Mutual Information in Rayleigh Fading Channels: In this paper, the second-order statistics of the instantaneous mutual information are studied, in time-varying Rayleigh fading channels, assuming general non-isotropic scattering environments. Specifically, first the autocorrelation function, correlation coefficient, level crossing rate, and the average outage duration of the instantaneous mutual information are investigated in single-input single-output (SISO) systems. Closed-form exact expressions are derived, as well as accurate approximations in low- and high-SNR regimes. Then, the results are extended to multiple-input single-output and single-input multiple-output systems, as well as multiple-input multiple-output systems with orthogonal space-time block code transmission. Monte Carlo simulations are provided to verify the accuracy of the analytical results. The results shed more light on the dynamic behavior of the instantaneous mutual information in mobile fading channels.<|reference_end|> | arxiv | @article{wang2006on,
title={On the Second-Order Statistics of the Instantaneous Mutual Information
in Rayleigh Fading Channels},
author={Shuangquan Wang, Ali Abdi},
journal={arXiv preprint arXiv:cs/0603027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603027},
primaryClass={cs.IT math.IT}
} | wang2006on |
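For reference, the level crossing rate and average outage duration studied above have standard generic definitions for a stationary process $I(t)$ at threshold $u$ (Rice's formula and its companion); the paper's contribution is closed forms for these when $I(t)$ is the instantaneous mutual information of the fading channel:

```latex
N_I(u) \;=\; \int_{0}^{\infty} \dot{\iota}\; p_{I\dot{I}}\bigl(u,\dot{\iota}\bigr)\, d\dot{\iota},
\qquad
\mathrm{AOD}(u) \;=\; \frac{\Pr\{I(t) \le u\}}{N_I(u)} ,
```

where $p_{I\dot{I}}$ is the joint density of the process and its time derivative.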
arxiv-673941 | cs/0603028 | On the tree-transformation power of XSLT | <|reference_start|>On the tree-transformation power of XSLT: XSLT is a standard rule-based programming language for expressing transformations of XML data. The language is currently in transition from version 1.0 to 2.0. In order to understand the computational consequences of this transition, we restrict XSLT to its pure tree-transformation capabilities. Under this focus, we observe that XSLT 1.0 was not yet a computationally complete tree-transformation language: every 1.0 program can be implemented in exponential time. A crucial new feature of version 2.0, however, which allows nodesets over temporary trees, yields completeness. We provide a formal operational semantics for XSLT programs, and establish confluence for this semantics.<|reference_end|> | arxiv | @article{janssen2006on,
title={On the tree-transformation power of XSLT},
author={Wim Janssen, Alexandr Korlyukov, Jan Van den Bussche},
journal={Acta Informatica, Volume 43, Number 6 / January, 2007},
year={2006},
doi={10.1007/s00236-006-0026-8},
archivePrefix={arXiv},
eprint={cs/0603028},
primaryClass={cs.PL cs.DB}
} | janssen2006on |
arxiv-673942 | cs/0603029 | A d-Sequence based Recursive Random Number Generator | <|reference_start|>A d-Sequence based Recursive Random Number Generator: This paper proposes a new recursive technique using d-sequences to generate random numbers.<|reference_end|> | arxiv | @article{parakh2006a,
title={A d-Sequence based Recursive Random Number Generator},
author={Abhishek Parakh},
journal={Proceedings of International Symposium on System and Information
Security -- Sao Jose dos Campos: CTA/ITA/IEC, 2006. 542p},
year={2006},
number={2006/310 Cryptology ePrint Archive},
archivePrefix={arXiv},
eprint={cs/0603029},
primaryClass={cs.CR}
} | parakh2006a |
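A standard binary d-sequence, the binary expansion of $1/q$ whose $i$-th bit is $(2^i \bmod q) \bmod 2$, is the raw material such generators build on. The sketch below shows the plain sequence only; the recursive technique is the paper's contribution and is not reproduced here.

```python
def d_sequence(q, n):
    """First n bits of the binary d-sequence of 1/q (q odd):
    a_i = (2**i mod q) mod 2, computed with modular exponentiation."""
    return [pow(2, i, q) % 2 for i in range(1, n + 1)]

bits = d_sequence(7, 6)  # 1/7 = 0.001001... in binary, period 3
```

The period of the sequence is the multiplicative order of 2 modulo q, so large prime q with 2 as a primitive root gives long periods.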
arxiv-673943 | cs/0603030 | A Service-Centric Approach to a Parameterized RBAC Service | <|reference_start|>A Service-Centric Approach to a Parameterized RBAC Service: Significant research has been done in the area of Role Based Access Control [RBAC]. Within this research there has been a thread of work focusing on adding parameters to the role and permissions within RBAC. The primary benefit of parameter support in RBAC comes in the form of a significant increase in specificity in how permissions may be granted. This paper focuses on implementing a parameterized implementation based heavily upon existing standards.<|reference_end|> | arxiv | @article{adams2006a,
title={A Service-Centric Approach to a Parameterized RBAC Service},
author={Jonathan K. Adams},
journal={In Proceedings of the 5th WSEAS International Conference on
Applied Computer Science (ACOS 2006)},
year={2006},
doi={10.5555/1973598.1973803},
archivePrefix={arXiv},
eprint={cs/0603030},
primaryClass={cs.CR}
} | adams2006a |
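A minimal sketch of what parameterized permissions add over plain RBAC: each permission carries a predicate over request parameters rather than a bare grant, which is the "increase in specificity" the abstract refers to. The class names and the example policy are illustrative assumptions, not the paper's service interface.

```python
class Role:
    """A role holds (action, predicate) pairs; the predicate decides
    whether a grant applies to the parameters of a concrete request."""
    def __init__(self, name):
        self.name = name
        self.permissions = []

    def grant(self, action, predicate=lambda **p: True):
        self.permissions.append((action, predicate))

    def allowed(self, action, **params):
        return any(a == action and pred(**params)
                   for a, pred in self.permissions)

# plain RBAC could only say "nurse may read records";
# a parameter restricts the grant to the nurse's own ward:
nurse = Role("nurse")
nurse.grant("read_record", lambda ward, **_: ward == "W3")

ok = nurse.allowed("read_record", ward="W3")
no = nurse.allowed("read_record", ward="W7")
```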
arxiv-673944 | cs/0603031 | Performance Analysis of CDMA Signature Optimization with Finite Rate Feedback | <|reference_start|>Performance Analysis of CDMA Signature Optimization with Finite Rate Feedback: We analyze the performance of CDMA signature optimization with finite rate feedback. For a particular user, the receiver selects a signature vector from a signature codebook to avoid the interference from other users, and feeds the corresponding index back to this user through a finite rate and error-free feedback link. We assume the codebook is randomly constructed where the entries are independent and isotropically distributed. It has been shown that the randomly constructed codebook is asymptotically optimal. In this paper, we consider two types of signature selection criteria. One is to select the signature vector that minimizes the interference from other users. The other one is to select the signature vector to match the weakest interference directions. By letting the processing gain, number of users and feedback bits approach infinity with fixed ratios, we derive the exact asymptotic formulas to calculate the average interference for both criteria. Our simulations demonstrate the theoretical formulas. The analysis can be extended to evaluate the signal-to-interference plus noise ratio performance for both match filter and linear minimum mean-square error receivers.<|reference_end|> | arxiv | @article{dai2006performance,
title={Performance Analysis of CDMA Signature Optimization with Finite Rate
Feedback},
author={Wei Dai, Youjian Liu, Brian Rider},
journal={arXiv preprint arXiv:cs/0603031},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603031},
primaryClass={cs.IT cs.DM math.IT}
} | dai2006performance |
arxiv-673945 | cs/0603032 | Market Equilibrium for Bundle Auctions and the Matching Core of Nonnegative TU Games | <|reference_start|>Market Equilibrium for Bundle Auctions and the Matching Core of Nonnegative TU Games: We discuss bundle auctions within the framework of an integer allocation problem. We show that for multi-unit auctions, of which bundle auctions are a special case, market equilibrium and constrained market equilibrium are equivalent concepts. This equivalence, allows us to obtain a computable necessary and sufficient condition for the existence of constrained market equilibrium for bundle auctions. We use this result to obtain a necessary and sufficient condition for the existence of market equilibrium for multi-unit auctions. After obtaining the induced bundle auction of a nonnegative TU game, we show that the existence of market equilibrium implies the existence of a possibly different market equilibrium as well, which corresponds very naturally to an outcome in the matching core of the TU game. Consequently we show that the matching core of the nonnegative TU game is non-empty if and only if the induced market game has a market equilibrium.<|reference_end|> | arxiv | @article{lahiri2006market,
title={Market Equilibrium for Bundle Auctions and the Matching Core of
Nonnegative TU Games},
author={Somdeb Lahiri},
journal={arXiv preprint arXiv:cs/0603032},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603032},
primaryClass={cs.GT}
} | lahiri2006market |
arxiv-673946 | cs/0603033 | Inter-component communication methods in object-oriented frameworks | <|reference_start|>Inter-component communication methods in object-oriented frameworks: Modern frameworks for developing graphical interfaces use the native controls of the operating system, and therefore rely on the operating system's event model for inter-component communication. We consider a method to increase inter-component communication speed by sending messages from one component to another directly, bypassing the operating system. In addition, message subscription helps components avoid receiving unnecessary messages.<|reference_end|> | arxiv | @article{petrosyan2006inter-component,
title={Inter-component communication methods in object-oriented frameworks},
author={Vaghinak Petrosyan},
journal={arXiv preprint arXiv:cs/0603033},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603033},
primaryClass={cs.SE}
} | petrosyan2006inter-component |
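The two ideas in the abstract, direct delivery that bypasses the operating system's event queue and subscription so components only receive the messages they asked for, can be sketched as a tiny in-process message bus (illustrative, not the paper's framework):

```python
class MessageBus:
    """Direct in-process delivery: send() invokes subscriber handlers
    immediately instead of routing through the OS event queue."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def send(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)   # only subscribed components are called

bus = MessageBus()
log = []
bus.subscribe("resize", lambda size: log.append(("resize", size)))
bus.send("resize", (800, 600))   # delivered to the subscriber
bus.send("repaint", None)        # no subscriber -> no one is bothered
```

The subscription table is what spares components from filtering out irrelevant messages, which is the second benefit the abstract claims.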
arxiv-673947 | cs/0603034 | Metatheory of actions: beyond consistency | <|reference_start|>Metatheory of actions: beyond consistency: Consistency check has been the only criterion for theory evaluation in logic-based approaches to reasoning about actions. This work goes beyond that and contributes to the metatheory of actions by investigating what other properties a good domain description in reasoning about actions should have. We state some metatheoretical postulates concerning this sore spot. When all postulates are satisfied together we have a modular action theory. Besides being easier to understand and more elaboration tolerant in McCarthy's sense, modular theories have interesting properties. We point out the problems that arise when the postulates about modularity are violated and propose algorithmic checks that can help the designer of an action theory to overcome them.<|reference_end|> | arxiv | @article{herzig2006metatheory,
title={Metatheory of actions: beyond consistency},
author={Andreas Herzig and Ivan Varzinczak},
journal={arXiv preprint arXiv:cs/0603034},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603034},
primaryClass={cs.AI}
} | herzig2006metatheory |
arxiv-673948 | cs/0603035 | Final Results from and Exploitation Plans for MammoGrid | <|reference_start|>Final Results from and Exploitation Plans for MammoGrid: The MammoGrid project has delivered the first deployed instance of a healthgrid for clinical mammography that spans national boundaries. During the last year, the final MammoGrid prototype has undergone a series of rigorous tests undertaken by radiologists in the UK and Italy and this paper draws conclusions from those tests for the benefit of the Healthgrid community. In addition, lessons learned during the lifetime of the project are detailed and recommendations drawn for future health applications using grids. Following the completion of the project, plans have been put in place for the commercialisation of the MammoGrid system and this is also reported in this article. Particular emphasis is placed on the issues surrounding the transition from collaborative research project to a marketable product. This paper concludes by highlighting some of the potential areas of future development and research.<|reference_end|> | arxiv | @article{del frate2006final,
title={Final Results from and Exploitation Plans for MammoGrid},
author={Chiara Del Frate, Jose Galvez, Tamas Hauer, David Manset, Richard
McClatchey, Mohammed Odeh, Dmitry Rogulin, Tony Solomonides, Ruth Warren},
journal={arXiv preprint arXiv:cs/0603035},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603035},
primaryClass={cs.DC}
} | del frate2006final |
arxiv-673949 | cs/0603036 | Health-e-Child : An Integrated Biomedical Platform for Grid-Based Paediatric Applications | <|reference_start|>Health-e-Child : An Integrated Biomedical Platform for Grid-Based Paediatric Applications: There is a compelling demand for the integration and exploitation of heterogeneous biomedical information for improved clinical practice, medical research, and personalised healthcare across the EU. The Health-e-Child project aims at developing an integrated healthcare platform for European Paediatrics, providing seamless integration of traditional and emerging sources of biomedical information. The long-term goal of the project is to provide uninhibited access to universal biomedical knowledge repositories for personalised and preventive healthcare, large-scale information-based biomedical research and training, and informed policy making. The project focus will be on individualised disease prevention, screening, early diagnosis, therapy and follow-up of paediatric heart diseases, inflammatory diseases, and brain tumours. The project will build a Grid-enabled European network of leading clinical centres that will share and annotate biomedical data, validate systems clinically, and diffuse clinical excellence across Europe by setting up new technologies, clinical workflows, and standards. This paper outlines the design approach being adopted in Health-e-Child to enable the delivery of an integrated biomedical information platform.<|reference_end|> | arxiv | @article{freund2006health-e-child,
title={Health-e-Child : An Integrated Biomedical Platform for Grid-Based
Paediatric Applications},
author={Joerg Freund, Dorin Comaniciu, Yannis Ioannis, Peiya Liu, Richard
McClatchey, Edwin Morley-Fletcher, Xavier Pennec, Giacomo Pongiglione, Xiang
(Sean) Zhou},
journal={arXiv preprint arXiv:cs/0603036},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603036},
primaryClass={cs.DC}
} | freund2006health-e-child |
arxiv-673950 | cs/0603037 | Deriving Conceptual Data Models from Domain Ontologies for Bioinformatics | <|reference_start|>Deriving Conceptual Data Models from Domain Ontologies for Bioinformatics: This paper studies the role that ontologies can play in establishing conceptual data models during the process of information systems development. A mapping algorithm has been proposed and embedded in a special purpose Transformation-Engine to generate a conceptual data model from a given domain ontology. In addition, this paper focuses on applying the proposed approach to a bioinformatics context as the nature of biological data is considered a barrier in representing biological conceptual data models. Both quantitative and qualitative methods have been adopted to critically evaluate this new approach. The results of this evaluation indicate that the quality of the generated conceptual data models can reflect the problem domain entities and the associations between them. The results are encouraging and support the potential role that this approach can play in providing a suitable starting point for conceptual data model development.<|reference_end|> | arxiv | @article{el-ghalayini2006deriving,
title={Deriving Conceptual Data Models from Domain Ontologies for
Bioinformatics},
author={Haya El-Ghalayini, Mohammed Odeh, Richard McClatchey},
journal={arXiv preprint arXiv:cs/0603037},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603037},
primaryClass={cs.SE}
} | el-ghalayini2006deriving |
arxiv-673951 | cs/0603038 | Estimation of linear, non-gaussian causal models in the presence of confounding latent variables | <|reference_start|>Estimation of linear, non-gaussian causal models in the presence of confounding latent variables: The estimation of linear causal models (also known as structural equation models) from data is a well-known problem which has received much attention in the past. Most previous work has, however, made an explicit or implicit assumption of gaussianity, limiting the identifiability of the models. We have recently shown (Shimizu et al, 2005; Hoyer et al, 2006) that for non-gaussian distributions the full causal model can be estimated in the no hidden variables case. In this contribution, we discuss the estimation of the model when confounding latent variables are present. Although in this case uniqueness is no longer guaranteed, there is at most a finite set of models which can fit the data. We develop an algorithm for estimating this set, and describe numerical simulations which confirm the theoretical arguments and demonstrate the practical viability of the approach. Full Matlab code is provided for all simulations.<|reference_end|> | arxiv | @article{hoyer2006estimation,
title={Estimation of linear, non-gaussian causal models in the presence of
confounding latent variables},
author={Patrik O. Hoyer, Shohei Shimizu, Antti J. Kerminen},
journal={arXiv preprint arXiv:cs/0603038},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603038},
primaryClass={cs.AI}
} | hoyer2006estimation |
arxiv-673952 | cs/0603039 | Quantization Bounds on Grassmann Manifolds and Applications to MIMO Communications | <|reference_start|>Quantization Bounds on Grassmann Manifolds and Applications to MIMO Communications: This paper considers the quantization problem on the Grassmann manifold \mathcal{G}_{n,p}, the set of all p-dimensional planes (through the origin) in the n-dimensional Euclidean space. The chief result is a closed-form formula for the volume of a metric ball in the Grassmann manifold when the radius is sufficiently small. This volume formula holds for Grassmann manifolds with arbitrary dimension n and p, while previous results pertained only to p=1, or a fixed p with asymptotically large n. Based on this result, several quantization bounds are derived for sphere packing and rate distortion tradeoff. We establish asymptotically equivalent lower and upper bounds for the rate distortion tradeoff. Since the upper bound is derived by constructing random codes, this result implies that the random codes are asymptotically optimal. The above results are also extended to the more general case, in which \mathcal{G}_{n,q} is quantized through a code in \mathcal{G}_{n,p}, where p and q are not necessarily the same. Finally, we discuss some applications of the derived results to multi-antenna communication systems.<|reference_end|> | arxiv | @article{dai2006quantization,
title={Quantization Bounds on Grassmann Manifolds and Applications to MIMO
Communications},
author={Wei Dai, Youjian Liu, Brian Rider},
journal={arXiv preprint arXiv:cs/0603039},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603039},
primaryClass={cs.IT math.IT}
} | dai2006quantization |
arxiv-673953 | cs/0603040 | On the Information Rate of MIMO Systems with Finite Rate Channel State Feedback Using Beamforming and Power On/Off Strategy | <|reference_start|>On the Information Rate of MIMO Systems with Finite Rate Channel State Feedback Using Beamforming and Power On/Off Strategy: It is well known that Multiple-Input Multiple-Output (MIMO) systems have high spectral efficiency, especially when channel state information at the transmitter (CSIT) is available. When CSIT is obtained by feedback, it is practical to assume that the channel state feedback rate is finite and the CSIT is not perfect. For such a system, we consider beamforming and power on/off strategy for its simplicity and near optimality, where power on/off means that a beamforming vector (beam) is either turned on with a constant power or turned off. The main contribution of this paper is to accurately evaluate the information rate as a function of the channel state feedback rate. Name a beam turned on as an on-beam and the minimum number of the transmit and receive antennas as the dimension of a MIMO system. We prove that the ratio of the optimal number of on-beams and the system dimension converges to a constant for a given signal-to-noise ratio (SNR) when the numbers of transmit and receive antennas approach infinity simultaneously and when beamforming is perfect. Asymptotic formulas are derived to evaluate this ratio and the corresponding information rate per dimension. The asymptotic results can be accurately applied to finite dimensional systems and suggest a power on/off strategy with a constant number of on-beams. For this suboptimal strategy, we take a novel approach to introduce power efficiency factor, which is a function of the feedback rate, to quantify the effect of imperfect beamforming. 
By combining the power efficiency factor and the asymptotic formulas for the perfect beamforming case, the information rate of the power on/off strategy with a constant number of on-beams is accurately characterized.<|reference_end|>
title={On the Information Rate of MIMO Systems with Finite Rate Channel State
Feedback Using Beamforming and Power On/Off Strategy},
author={Wei Dai, Youjian Liu, Vincent K.N. Lau, Brian Rider},
journal={arXiv preprint arXiv:cs/0603040},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603040},
primaryClass={cs.IT math.IT}
} | dai2006on |
arxiv-673954 | cs/0603041 | Locally Adaptive Block Thresholding Method with Continuity Constraint | <|reference_start|>Locally Adaptive Block Thresholding Method with Continuity Constraint: We present an algorithm that enables one to perform locally adaptive block thresholding, while maintaining image continuity. Images are divided into sub-images based on some standard image attributes and a thresholding technique is employed over the sub-images. The present algorithm makes use of the thresholds of neighboring sub-images to calculate a range of values. The image continuity is taken care of by choosing the threshold of the sub-image under consideration to lie within the above range. After examining the average range values for various sub-image sizes of a variety of images, it was found that the range of acceptable threshold values is substantially high, justifying our assumption of exploiting the freedom of range for bringing out local details.<|reference_end|>
title={Locally Adaptive Block Thresholding Method with Continuity Constraint},
author={S. Hemachander, Amit Verma, Siddharth Arora, Prasanta K. Panigrahi},
journal={arXiv preprint arXiv:cs/0603041},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603041},
primaryClass={cs.CV}
} | hemachander2006locally |
arxiv-673955 | cs/0603042 | The NoN Approach to Autonomic Face Recognition | <|reference_start|>The NoN Approach to Autonomic Face Recognition: A method of autonomic face recognition based on the biologically plausible network of networks (NoN) model of information processing is presented. The NoN model is based on locally parallel and globally coordinated transformations in which the neurons or computational units form distributed networks, which themselves link to form larger networks. This models the structures in the cerebral cortex described by Mountcastle and the architecture based on that proposed for information processing by Sutton. In the proposed implementation, face images are processed by a nested family of locally operating networks along with a hierarchically superior network that classifies the information from each of the local networks. The results of the experiments yielded a maximum of 98.5% recognition accuracy and an average of 97.4% recognition accuracy on a benchmark database.<|reference_end|> | arxiv | @article{scott2006the,
title={The NoN Approach to Autonomic Face Recognition},
author={Willie L. Scott II},
journal={arXiv preprint arXiv:cs/0603042},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603042},
primaryClass={cs.NE}
} | scott2006the |
arxiv-673956 | cs/0603043 | Time-Space Trade-Offs for Predecessor Search | <|reference_start|>Time-Space Trade-Offs for Predecessor Search: We develop a new technique for proving cell-probe lower bounds for static data structures. Previous lower bounds used a reduction to communication games, which was known not to be tight by counting arguments. We give the first lower bound for an explicit problem which breaks this communication complexity barrier. In addition, our bounds give the first separation between polynomial and near linear space. Such a separation is inherently impossible by communication complexity. Using our lower bound technique and new upper bound constructions, we obtain tight bounds for searching predecessors among a static set of integers. Given a set Y of n integers of l bits each, the goal is to efficiently find predecessor(x) = max{y in Y | y <= x}, by representing Y on a RAM using space S. In external memory, it follows that the optimal strategy is to use either standard B-trees, or a RAM algorithm ignoring the larger block size. In the important case of l = c*lg n, for c>1 (i.e. polynomial universes), and near linear space (such as S = n*poly(lg n)), the optimal search time is Theta(lg l). Thus, our lower bound implies the surprising conclusion that van Emde Boas' classic data structure from [FOCS'75] is optimal in this case. Note that for space n^{1+eps}, a running time of O(lg l / lglg l) was given by Beame and Fich [STOC'99].<|reference_end|> | arxiv | @article{patrascu2006time-space,
title={Time-Space Trade-Offs for Predecessor Search},
author={Mihai Patrascu and Mikkel Thorup},
journal={arXiv preprint arXiv:cs/0603043},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603043},
primaryClass={cs.CC cs.DS}
} | patrascu2006time-space |
arxiv-673957 | cs/0603044 | First Steps in Relational Lattice | <|reference_start|>First Steps in Relational Lattice: Relational lattice reduces the set of six classic relational algebra operators to two binary lattice operations: natural join and inner union. We give an introduction to this theory with emphasis on formal algebraic laws. New results include Spight distributivity criteria and its applications to query transformations.<|reference_end|> | arxiv | @article{spight2006first,
title={First Steps in Relational Lattice},
author={Marshall Spight, Vadim Tropashko},
journal={arXiv preprint arXiv:cs/0603044},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603044},
primaryClass={cs.DB}
} | spight2006first |
arxiv-673958 | cs/0603045 | Information and Errors in Quantum Teleportation | <|reference_start|>Information and Errors in Quantum Teleportation: This article considers the question of the teleportation protocol from an engineering perspective. The protocol ideally requires an authority that ensures that the two communicating parties have a perfectly entangled pair of particles available to them. But this cannot be unconditionally established to the satisfaction of the parties due to the fact that an unknown quantum state cannot be copied. This supports the view that quantum information cannot be treated on the same basis as classical information.<|reference_end|> | arxiv | @article{nedurumalli2006information,
title={Information and Errors in Quantum Teleportation},
author={Balaji Nedurumalli},
journal={arXiv preprint arXiv:cs/0603045},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603045},
primaryClass={cs.IT math.IT}
} | nedurumalli2006information |
arxiv-673959 | cs/0603046 | Trusted Certificates in Quantum Cryptography | <|reference_start|>Trusted Certificates in Quantum Cryptography: This paper analyzes the performance of Kak's three stage quantum cryptographic protocol based on public key cryptography against a man-in-the-middle attack. A method for protecting against such an attack is presented using certificates distributed by a trusted third party.<|reference_end|> | arxiv | @article{perkins2006trusted,
title={Trusted Certificates in Quantum Cryptography},
author={William Perkins},
journal={arXiv preprint arXiv:cs/0603046},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603046},
primaryClass={cs.CR}
} | perkins2006trusted |
arxiv-673960 | cs/0603047 | The Quantum Separability Problem for Gaussian States | <|reference_start|>The Quantum Separability Problem for Gaussian States: Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. This is a brief review in which we consider the problem for states in infinite dimensional Hilbert spaces. We show how the problem becomes tractable for a class of Gaussian states.<|reference_end|> | arxiv | @article{mancini2006the,
title={The Quantum Separability Problem for Gaussian States},
author={Stefano Mancini and Simone Severini},
journal={Electronic Notes in Theoretical Computer Science, Volume 169 ,
March 2007, pp. 121-131},
year={2006},
doi={10.1016/j.entcs.2006.07.034},
archivePrefix={arXiv},
eprint={cs/0603047},
primaryClass={cs.CC quant-ph}
} | mancini2006the |
arxiv-673961 | cs/0603048 | Homogeneity vs Adjacency: generalising some graph decomposition algorithms | <|reference_start|>Homogeneity vs Adjacency: generalising some graph decomposition algorithms: In this paper, a new general decomposition theory inspired from modular graph decomposition is presented. Our main result shows that, within this general theory, most of the nice algorithmic tools developed for modular decomposition are still efficient. This theory not only unifies the usual modular decomposition generalisations such as modular decomposition of directed graphs or decomposition of 2-structures, but also star cutsets and bimodular decomposition. Our general framework provides a decomposition algorithm which improves the best known algorithms for bimodular decomposition.<|reference_end|> | arxiv | @article{xuan2006homogeneity,
title={Homogeneity vs. Adjacency: generalising some graph decomposition
algorithms},
author={Binh Minh Bui Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA)},
journal={Graph-Theoretic Concepts in Computer Science, Springer (2006)
278-288},
year={2006},
doi={10.1007/11917496\_25},
archivePrefix={arXiv},
eprint={cs/0603048},
primaryClass={cs.DS}
} | xuan2006homogeneity |
arxiv-673962 | cs/0603049 | State Space Realizations and Monomial Equivalence for Convolutional Codes | <|reference_start|>State Space Realizations and Monomial Equivalence for Convolutional Codes: We will study convolutional codes with the help of state space realizations. It will be shown that two such minimal realizations belong to the same code if and only if they are equivalent under the full state feedback group. This result will be used in order to prove that two codes with positive Forney indices are monomially equivalent if and only if they share the same adjacency matrix. The adjacency matrix counts in a detailed way the weights of all possible outputs and thus contains full information about the weights of the codewords in the given code.<|reference_end|> | arxiv | @article{gluesing-luerssen2006state,
title={State Space Realizations and Monomial Equivalence for Convolutional
Codes},
author={Heide Gluesing-Luerssen and Gert Schneider},
journal={arXiv preprint arXiv:cs/0603049},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603049},
primaryClass={cs.IT math.IT math.OC}
} | gluesing-luerssen2006state |
arxiv-673963 | cs/0603050 | Multiple serial episode matching | <|reference_start|>Multiple serial episode matching: In a previous paper we generalized the Knuth-Morris-Pratt (KMP) pattern matching algorithm and defined a non-conventional kind of RAM, the MP--RAMs (RAMs equipped with extra operations), and designed an O(n) on-line algorithm for solving the serial episode matching problem on MP--RAMs when there is only one single episode. We here give two extensions of this algorithm to the case when we search for several patterns simultaneously and compare them. More precisely, given $q+1$ strings (a text $t$ of length $n$ and $q$ patterns $m_1,...,m_q$) and a natural number $w$, the {\em multiple serial episode matching problem} consists in finding the number of size $w$ windows of text $t$ which contain patterns $m_1,...,m_q$ as subsequences, i.e. for each $m_i$, if $m_i=p_1,..., p_k$, the letters $p_1,..., p_k$ occur in the window, in the same order as in $m_i$, but not necessarily consecutively (they may be interleaved with other letters). The main contribution is an algorithm solving this problem on-line in time $O(nq)$.<|reference_end|>
title={Multiple serial episode matching},
author={Patrick Cegielski (LACL), Irene Guessarian (LIAFA), Yuri Matiyasevich
(PDMI)},
journal={CSIT05 (2005) 26-38},
year={2006},
doi={10.1016/j.ipl.2006.02.008},
archivePrefix={arXiv},
eprint={cs/0603050},
primaryClass={cs.DS}
} | cegielski2006multiple |
arxiv-673964 | cs/0603051 | Transitive trust in mobile scenarios | <|reference_start|>Transitive trust in mobile scenarios: Horizontal integration of access technologies to networks and services should be accompanied by some kind of convergence of authentication technologies. The missing link for the federation of user identities across the technological boundaries separating authentication methods can be provided by trusted computing platforms. The concept of establishing transitive trust by trusted computing enables the desired cross-domain authentication functionality. The focus of target application scenarios lies in the realm of mobile networks and devices.<|reference_end|>
title={Transitive trust in mobile scenarios},
author={Nicolai Kuntze and Andreas U. Schmidt},
journal={arXiv preprint arXiv:cs/0603051},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603051},
primaryClass={cs.CR}
} | kuntze2006transitive |
arxiv-673965 | cs/0603052 | Evaluation of interval extension of the power function by graph decomposition | <|reference_start|>Evaluation of interval extension of the power function by graph decomposition: The subject of our talk is the correct evaluation of interval extension of the function specified by the expression x^y without any constraints on the values of x and y. The core of our approach is a decomposition of the graph of x^y into a small number of parts which can be transformed into subsets of the graph of x^y for non-negative bases x. Because of this fact, evaluation of interval extension of x^y, without any constraints on x and y, is not much harder than evaluation of interval extension of x^y for non-negative bases x.<|reference_end|> | arxiv | @article{petrov2006evaluation,
title={Evaluation of interval extension of the power function by graph
decomposition},
author={Evgueni Petrov},
journal={arXiv preprint arXiv:cs/0603052},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603052},
primaryClass={cs.MS}
} | petrov2006evaluation |
arxiv-673966 | cs/0603053 | Automatic generation of simplified weakest preconditions for integrity constraint verification | <|reference_start|>Automatic generation of simplified weakest preconditions for integrity constraint verification: Given a constraint $c$ assumed to hold on a database $B$ and an update $u$ to be performed on $B$, we address the following question: will $c$ still hold after $u$ is performed? When $B$ is a relational database, we define a confluent terminating rewriting system which, starting from $c$ and $u$, automatically derives a simplified weakest precondition $wp(c,u)$ such that, whenever $B$ satisfies $wp(c,u)$, then the updated database $u(B)$ will satisfy $c$, and moreover $wp(c,u)$ is simplified in the sense that its computation depends only upon the instances of $c$ that may be modified by the update. We then extend the definition of a simplified $wp(c,u)$ to the case of deductive databases; we prove it using fixpoint induction.<|reference_end|> | arxiv | @article{-bouziad2006automatic,
title={Automatic generation of simplified weakest preconditions for integrity
constraint verification},
author={A. Aït-Bouziad (LIAFA), Irene Guessarian (LIAFA), L. Vieille (NCM)},
journal={arXiv preprint arXiv:cs/0603053},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603053},
primaryClass={cs.DS cs.DB}
} | -bouziad2006automatic |
arxiv-673967 | cs/0603054 | Testing Graph Isomorphism in Parallel by Playing a Game | <|reference_start|>Testing Graph Isomorphism in Parallel by Playing a Game: Our starting point is the observation that if graphs in a class C have low descriptive complexity in first order logic, then the isomorphism problem for C is solvable by a fast parallel algorithm (essentially, by a simple combinatorial algorithm known as the multidimensional Weisfeiler-Lehman algorithm). Using this approach, we prove that isomorphism of graphs of bounded treewidth is testable in TC1, answering an open question posed by Chandrasekharan. Furthermore, we obtain an AC1 algorithm for testing isomorphism of rotation systems (combinatorial specifications of graph embeddings). The AC1 upper bound was known before, but the fact that this bound can be achieved by the simple Weisfeiler-Lehman algorithm is new. Combined with other known results, it also yields a new AC1 isomorphism algorithm for planar graphs.<|reference_end|> | arxiv | @article{grohe2006testing,
title={Testing Graph Isomorphism in Parallel by Playing a Game},
author={Martin Grohe and Oleg Verbitsky},
journal={arXiv preprint arXiv:cs/0603054},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603054},
primaryClass={cs.CC cs.LO}
} | grohe2006testing |
arxiv-673968 | cs/0603055 | Improved Watermarking Scheme Using Decimal Sequences | <|reference_start|>Improved Watermarking Scheme Using Decimal Sequences: This paper presents watermarking algorithms using d-sequences so that the peak signal to noise ratio (PSNR) is maximized and the distortion introduced in the image due to the embedding is minimized. By exploiting the cross correlation property of decimal sequences, the concept of embedding more than one watermark in the same cover image is investigated.<|reference_end|> | arxiv | @article{shaik2006improved,
title={Improved Watermarking Scheme Using Decimal Sequences},
author={Ashfaq N Shaik},
journal={arXiv preprint arXiv:cs/0603055},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603055},
primaryClass={cs.CR}
} | shaik2006improved |
arxiv-673969 | cs/0603056 | Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles? | <|reference_start|>Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles?: An analysis of 2,765 articles published in four math journals from 1997 to 2005 indicate that articles deposited in the arXiv received 35% more citations on average than non-deposited articles (an advantage of about 1.1 citations per article), and that this difference was most pronounced for highly-cited articles. Open Access, Early View, and Quality Differential were examined as three non-exclusive postulates for explaining the citation advantage. There was little support for a universal Open Access explanation, and no empirical support for Early View. There was some inferential support for a Quality Differential brought about by more highly-citable articles being deposited in the arXiv. In spite of their citation advantage, arXiv-deposited articles received 23% fewer downloads from the publisher's website (about 10 fewer downloads per article) in all but the most recent two years after publication. The data suggest that arXiv and the publisher's website may be fulfilling distinct functional needs of the reader.<|reference_end|> | arxiv | @article{davis2006does,
title={Does the arXiv lead to higher citations and reduced publisher downloads
for mathematics articles?},
author={Philip M. Davis and Michael J. Fromerth},
journal={Scientometrics Vol. 71, No. 2. (May, 2007)},
year={2006},
doi={10.1007/s11192-007-1661-8},
archivePrefix={arXiv},
eprint={cs/0603056},
primaryClass={cs.DL cs.IR math.HO}
} | davis2006does |
arxiv-673970 | cs/0603057 | Guard Placement For Wireless Localization | <|reference_start|>Guard Placement For Wireless Localization: Motivated by secure wireless networking, we consider the problem of placing fixed localizers that enable mobile communication devices to prove they belong to a secure region that is defined by the interior of a polygon. Each localizer views an infinite wedge of the plane, and a device can prove membership in the secure region if it is inside the wedges for a set of localizers whose common intersection contains no points outside the polygon. This model leads to a broad class of new art gallery type problems, for which we provide upper and lower bounds.<|reference_end|> | arxiv | @article{eppstein2006guard,
title={Guard Placement For Wireless Localization},
author={David Eppstein, Michael T. Goodrich, Nodari Sitchinava},
journal={arXiv preprint arXiv:cs/0603057},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603057},
primaryClass={cs.CG}
} | eppstein2006guard |
arxiv-673971 | cs/0603058 | Convergence of Min-Sum Message Passing for Quadratic Optimization | <|reference_start|>Convergence of Min-Sum Message Passing for Quadratic Optimization: We establish the convergence of the min-sum message passing algorithm for minimization of a broad class of quadratic objective functions: those that admit a convex decomposition. Our results also apply to the equivalent problem of the convergence of Gaussian belief propagation.<|reference_end|> | arxiv | @article{moallemi2006convergence,
title={Convergence of Min-Sum Message Passing for Quadratic Optimization},
author={Ciamac C. Moallemi and Benjamin Van Roy},
journal={arXiv preprint arXiv:cs/0603058},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603058},
primaryClass={cs.IT cs.AI math.IT}
} | moallemi2006convergence |
arxiv-673972 | cs/0603059 | Derivatives of Entropy Rate in Special Families of Hidden Markov Chains | <|reference_start|>Derivatives of Entropy Rate in Special Families of Hidden Markov Chains: Consider a hidden Markov chain obtained as the observation process of an ordinary Markov chain corrupted by noise. Zuk et al. [13], [14] showed how, in principle, one can explicitly compute the derivatives of the entropy rate at extreme values of the noise. Namely, they showed that the derivatives of standard upper approximations to the entropy rate actually stabilize at an explicit finite time. We generalize this result to a natural class of hidden Markov chains called ``Black Holes.'' We also discuss in depth special cases of binary Markov chains observed in binary symmetric noise, and give an abstract formula for the first derivative in terms of a measure on the simplex due to Blackwell.<|reference_end|>
title={Derivatives of Entropy Rate in Special Families of Hidden Markov Chains},
author={Guangyue Han, Brian Marcus},
journal={arXiv preprint arXiv:cs/0603059},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603059},
primaryClass={cs.IT math.IT math.PR}
} | han2006derivatives |
arxiv-673973 | cs/0603060 | An Improved Exact Algorithm for the Domatic Number Problem | <|reference_start|>An Improved Exact Algorithm for the Domatic Number Problem: The 3-domatic number problem asks whether a given graph can be partitioned into three dominating sets. We prove that this problem can be solved by a deterministic algorithm in time 2.695^n (up to polynomial factors). This result improves the previous bound of 2.8805^n, which is due to Fomin, Grandoni, Pyatkin, and Stepanov. To prove our result, we combine an algorithm by Fomin et al. with Yamamoto's algorithm for the satisfiability problem. In addition, we show that the 3-domatic number problem can be solved for graphs G with bounded maximum degree Delta(G) by a randomized algorithm, whose running time is better than the previous bound due to Riege and Rothe whenever Delta(G) >= 5. Our new randomized algorithm employs Schoening's approach to constraint satisfaction problems.<|reference_end|>
title={An Improved Exact Algorithm for the Domatic Number Problem},
author={Tobias Riege, Jörg Rothe, Holger Spakowski and Masaki Yamamoto},
journal={arXiv preprint arXiv:cs/0603060},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603060},
primaryClass={cs.CC}
} | riege2006an |
arxiv-673974 | cs/0603061 | Quasi-Orthogonal STBC With Minimum Decoding Complexity | <|reference_start|>Quasi-Orthogonal STBC With Minimum Decoding Complexity: In this paper, we consider a quasi-orthogonal (QO) space-time block code (STBC) with minimum decoding complexity (MDC-QO-STBC). We formulate its algebraic structure and propose a systematic method for its construction. We show that a maximum-likelihood (ML) decoder for this MDC-QOSTBC, for any number of transmit antennas, only requires the joint detection of two real symbols. Assuming the use of a square or rectangular quadratic-amplitude modulation (QAM) or multiple phase-shift keying (MPSK) modulation for this MDC-QOSTBC, we also obtain the optimum constellation rotation angle, in order to achieve full diversity and optimum coding gain. We show that the maximum achievable code rate of these MDC-QOSTBC is 1 for three and four antennas and 3/4 for five to eight antennas. We also show that the proposed MDC-QOSTBC has several desirable properties, such as a more even power distribution among antennas and better scalability in adjusting the number of transmit antennas, compared with the coordinate interleaved orthogonal design (CIOD) and asymmetric CIOD (ACIOD) codes. For the case of an odd number of transmit antennas, MDC-QO-STBC also has better decoding performance than CIOD.<|reference_end|> | arxiv | @article{yuen2006quasi-orthogonal,
title={Quasi-Orthogonal STBC With Minimum Decoding Complexity},
author={Chau Yuen, Yong Liang Guan, Tjeng Thiang Tjhung},
journal={arXiv preprint arXiv:cs/0603061},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603061},
primaryClass={cs.IT math.IT}
} | yuen2006quasi-orthogonal |
arxiv-673975 | cs/0603062 | Implementation and Deployment of a Distributed Network Topology Discovery Algorithm | <|reference_start|>Implementation and Deployment of a Distributed Network Topology Discovery Algorithm: In the past few years, the network measurement community has been interested in the problem of internet topology discovery using a large number (hundreds or thousands) of measurement monitors. The standard way to obtain information about the internet topology is to use the traceroute tool from a small number of monitors. Recent papers have made the case that increasing the number of monitors will give a more accurate view of the topology. However, scaling up the number of monitors is not a trivial process. Duplication of effort close to the monitors wastes time by reexploring well-known parts of the network, and close to destinations might appear to be a distributed denial-of-service (DDoS) attack as the probes converge from a set of sources towards a given destination. In prior work, authors of this report proposed Doubletree, an algorithm for cooperative topology discovery, that reduces the load on the network, i.e., router IP interfaces and end-hosts, while discovering almost as many nodes and links as standard approaches based on traceroute. This report presents our open-source and freely downloadable implementation of Doubletree in a tool we call traceroute@home. We describe the deployment and validation of traceroute@home on the PlanetLab testbed and we report on the lessons learned from this experience. We discuss how traceroute@home can be developed further and discuss ideas for future improvements.<|reference_end|> | arxiv | @article{donnet2006implementation,
title={Implementation and Deployment of a Distributed Network Topology
Discovery Algorithm},
author={Benoit Donnet, Bradley Huffaker, Timur Friedman, kc claffy},
journal={arXiv preprint arXiv:cs/0603062},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603062},
primaryClass={cs.NI}
} | donnet2006implementation |
arxiv-673976 | cs/0603063 | Unary Primitive Recursive Functions | <|reference_start|>Unary Primitive Recursive Functions: In this article, we study some new characterizations of primitive recursive functions based on restricted forms of primitive recursion, improving the pioneering work of R. M. Robinson and M. D. Gladstone in this area. We reduce certain recursion schemes (mixed/pure iteration without parameters) and we characterize one-argument primitive recursive functions as the closure under substitution and iteration of certain optimal sets.<|reference_end|> | arxiv | @article{severin2006unary,
title={Unary Primitive Recursive Functions},
author={Daniel E. Severin},
journal={Journal of Symbolic Logic. Volume 73, Issue 4 (2008), p.
1122--1138},
year={2006},
doi={10.2178/jsl/1230396909},
archivePrefix={arXiv},
eprint={cs/0603063},
primaryClass={cs.SC cs.LO}
} | severin2006unary |
arxiv-673977 | cs/0603064 | Guessing under source uncertainty | <|reference_start|>Guessing under source uncertainty: This paper considers the problem of guessing the realization of a finite alphabet source when some side information is provided. The only knowledge the guesser has about the source and the correlated side information is that the joint source is one among a family. A notion of redundancy is first defined and a new divergence quantity that measures this redundancy is identified. This divergence quantity shares the Pythagorean property with the Kullback-Leibler divergence. Good guessing strategies that minimize the supremum redundancy (over the family) are then identified. The min-sup value measures the richness of the uncertainty set. The min-sup redundancies for two examples - the families of discrete memoryless sources and finite-state arbitrarily varying sources - are then determined.<|reference_end|> | arxiv | @article{sundaresan2006guessing,
title={Guessing under source uncertainty},
author={Rajesh Sundaresan},
journal={arXiv preprint arXiv:cs/0603064},
year={2006},
doi={10.1109/TIT.2006.887466},
archivePrefix={arXiv},
eprint={cs/0603064},
primaryClass={cs.IT math.IT}
} | sundaresan2006guessing |
arxiv-673978 | cs/0603065 | MIMO Broadcast Channels with Finite Rate Feedback | <|reference_start|>MIMO Broadcast Channels with Finite Rate Feedback: Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e. multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this paper, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well known zero forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the SNR (in dB) in order to achieve the full multiplexing gain, which is in sharp contrast to point-to-point MIMO systems in which it is not necessary to increase the feedback rate as a function of the SNR.<|reference_end|> | arxiv | @article{jindal2006mimo,
title={MIMO Broadcast Channels with Finite Rate Feedback},
author={Nihar Jindal},
journal={IEEE Trans. Information Theory, Vol. 52, No. 11, pp. 5045-5059,
Nov. 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603065},
primaryClass={cs.IT math.IT}
} | jindal2006mimo |
arxiv-673979 | cs/0603066 | A Feedback Reduction Technique for MIMO Broadcast Channels | <|reference_start|>A Feedback Reduction Technique for MIMO Broadcast Channels: A multiple antenna broadcast channel with perfect channel state information at the receivers is considered. If each receiver quantizes its channel knowledge to a finite number of bits which are fed back to the transmitter, the large capacity benefits of the downlink channel can be realized. However, the required number of feedback bits per mobile must be scaled with both the number of transmit antennas and the system SNR, and thus can be quite large in even moderately sized systems. It is shown that a small number of antennas can be used at each receiver to improve the quality of the channel estimate provided to the transmitter. As a result, the required feedback rate per mobile can be significantly decreased.<|reference_end|> | arxiv | @article{jindal2006a,
title={A Feedback Reduction Technique for MIMO Broadcast Channels},
author={Nihar Jindal},
journal={arXiv preprint arXiv:cs/0603066},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603066},
primaryClass={cs.IT math.IT}
} | jindal2006a |
arxiv-673980 | cs/0603067 | Implementing the Three-Stage Quantum Cryptography Protocol | <|reference_start|>Implementing the Three-Stage Quantum Cryptography Protocol: We present simple implementations of Kak's three-stage quantum cryptography protocol. The case where the transformation is applied to more than one qubit at the same time is also considered.<|reference_end|> | arxiv | @article{sivakumar2006implementing,
title={Implementing the Three-Stage Quantum Cryptography Protocol},
author={Priya Sivakumar},
journal={arXiv preprint arXiv:cs/0603067},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603067},
primaryClass={cs.CR}
} | sivakumar2006implementing |
arxiv-673981 | cs/0603068 | Universal Lossless Compression with Unknown Alphabets - The Average Case | <|reference_start|>Universal Lossless Compression with Unknown Alphabets - The Average Case: Universal compression of patterns of sequences generated by independently identically distributed (i.i.d.) sources with unknown, possibly large, alphabets is investigated. A pattern is a sequence of indices that contains all consecutive indices in increasing order of first occurrence. If the alphabet of a source that generated a sequence is unknown, the inevitable cost of coding the unknown alphabet symbols can be exploited to create the pattern of the sequence. This pattern can in turn be compressed by itself. It is shown that if the alphabet size $k$ is essentially small, then the average minimax and maximin redundancies as well as the redundancy of every code for almost every source, when compressing a pattern, consist of at least 0.5 log(n/k^3) bits per each unknown probability parameter, and if all alphabet letters are likely to occur, there exist codes whose redundancy is at most 0.5 log(n/k^2) bits per each unknown probability parameter, where n is the length of the data sequences. Otherwise, if the alphabet is large, these redundancies are essentially at least O(n^{-2/3}) bits per symbol, and there exist codes that achieve redundancy of essentially O(n^{-1/2}) bits per symbol. Two sub-optimal low-complexity sequential algorithms for compression of patterns are presented and their description lengths analyzed, also pointing out that the pattern average universal description length can decrease below the underlying i.i.d. entropy for large enough alphabets.<|reference_end|> | arxiv | @article{shamir2006universal,
title={Universal Lossless Compression with Unknown Alphabets - The Average Case},
author={Gil I. Shamir},
journal={arXiv preprint arXiv:cs/0603068},
year={2006},
doi={10.1109/ISIT.2004.1365062},
archivePrefix={arXiv},
eprint={cs/0603068},
primaryClass={cs.IT math.IT}
} | shamir2006universal |
arxiv-673982 | cs/0603069 | The neighbor-scattering number can be computed in polynomial time for interval graphs | <|reference_start|>The neighbor-scattering number can be computed in polynomial time for interval graphs: Neighbor-scattering number is a useful measure for graph vulnerability. For some special kinds of graphs, explicit formulas are given for this number. However, for general graphs it is shown that to compute this number is NP-complete. In this paper, we prove that for interval graphs this number can be computed in polynomial time.<|reference_end|> | arxiv | @article{li2006the,
title={The neighbor-scattering number can be computed in polynomial time for
interval graphs},
author={Fengwei Li, Xueliang Li},
journal={arXiv preprint arXiv:cs/0603069},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603069},
primaryClass={cs.DM math.CO}
} | li2006the |
arxiv-673983 | cs/0603070 | Predicting the Path of an Open System | <|reference_start|>Predicting the Path of an Open System: The expected path of an open system, which is a big Poincare system, has been found in this paper. This path has been obtained from the actual and from the expected droop of the open system. The actual droop has been reconstructed from the variations in the power and in the frequency of the open system. The expected droop has been found as a function of rotation from the expected potential energy of the open system under synchronization of that system.<|reference_end|> | arxiv | @article{stefanov2006predicting,
title={Predicting the Path of an Open System},
author={S.Z.Stefanov},
journal={arXiv preprint arXiv:cs/0603070},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603070},
primaryClass={cs.RO}
} | stefanov2006predicting |
arxiv-673984 | cs/0603071 | An Explicit Solution to Post's Problem over the Reals | <|reference_start|>An Explicit Solution to Post's Problem over the Reals: In the BCSS model of real number computations we prove a concrete and explicit semi-decidable language to be undecidable yet not reducible from (and thus strictly easier than) the real Halting Language. This solution to Post's Problem over the reals significantly differs from its classical, discrete variant where advanced diagonalization techniques are only known to yield the existence of such intermediate Turing degrees. Strengthening the above result, we construct (that is, obtain again explicitly) as well an uncountable number of incomparable semi-decidable Turing degrees below the real Halting problem in the BCSS model. Finally we show the same to hold for the linear BCSS model, that is over (R,+,-,<) rather than (R,+,-,*,/,<).<|reference_end|> | arxiv | @article{meer2006an,
title={An Explicit Solution to Post's Problem over the Reals},
author={Klaus Meer, Martin Ziegler},
journal={arXiv preprint arXiv:cs/0603071},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603071},
primaryClass={cs.LO cs.SC}
} | meer2006an |
arxiv-673985 | cs/0603072 | Distributed Transmit Beamforming using Feedback Control | <|reference_start|>Distributed Transmit Beamforming using Feedback Control: A simple feedback control algorithm is presented for distributed beamforming in a wireless network. A network of wireless sensors that seek to cooperatively transmit a common message signal to a Base Station (BS) is considered. In this case, it is well-known that substantial energy efficiencies are possible by using distributed beamforming. The feedback algorithm is shown to achieve the carrier phase coherence required for beamforming in a scalable and distributed manner. In the proposed algorithm, each sensor independently makes a random adjustment to its carrier phase. Assuming that the BS is able to broadcast one bit of feedback each timeslot about the change in received signal to noise ratio (SNR), the sensors are able to keep the favorable phase adjustments and discard the unfavorable ones, asymptotically achieving perfect phase coherence. A novel analytical model is derived that accurately predicts the convergence rate. The analytical model is used to optimize the algorithm for fast convergence and to establish the scalability of the algorithm.<|reference_end|> | arxiv | @article{mudumbai2006distributed,
title={Distributed Transmit Beamforming using Feedback Control},
author={R. Mudumbai, J. Hespanha, U. Madhow, G. Barriac},
journal={arXiv preprint arXiv:cs/0603072},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603072},
primaryClass={cs.IT math.IT}
} | mudumbai2006distributed |
arxiv-673986 | cs/0603073 | VXA: A Virtual Architecture for Durable Compressed Archives | <|reference_start|>VXA: A Virtual Architecture for Durable Compressed Archives: Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method.<|reference_end|> | arxiv | @article{ford2006vxa:,
title={VXA: A Virtual Architecture for Durable Compressed Archives},
author={Bryan Ford},
journal={4th USENIX Conference on File and Storage Technologies, December
2005 (FAST '05), San Francisco, CA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603073},
primaryClass={cs.DL cs.IR}
} | ford2006vxa: |
arxiv-673987 | cs/0603074 | Peer-to-Peer Communication Across Network Address Translators | <|reference_start|>Peer-to-Peer Communication Across Network Address Translators: Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. Several NAT traversal techniques are known, but their documentation is slim, and data about their robustness or relative merits is slimmer. This paper documents and analyzes one of the simplest but most robust and practical NAT traversal techniques, commonly known as "hole punching." Hole punching is moderately well-understood for UDP communication, but we show how it can be reliably used to set up peer-to-peer TCP streams as well. After gathering data on the reliability of this technique on a wide variety of deployed NATs, we find that about 82% of the NATs tested support hole punching for UDP, and about 64% support hole punching for TCP streams. As NAT vendors become increasingly conscious of the needs of important P2P applications such as Voice over IP and online gaming protocols, support for hole punching is likely to increase in the future.<|reference_end|> | arxiv | @article{ford2006peer-to-peer,
title={Peer-to-Peer Communication Across Network Address Translators},
author={Bryan Ford, Pyda Srisuresh, Dan Kegel},
journal={USENIX Annual Technical Conference, April 2005 (USENIX '05),
Anaheim, CA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603074},
primaryClass={cs.NI cs.CR}
} | ford2006peer-to-peer |
arxiv-673988 | cs/0603075 | Unmanaged Internet Protocol: Taming the Edge Network Management Crisis | <|reference_start|>Unmanaged Internet Protocol: Taming the Edge Network Management Crisis: Though appropriate for core Internet infrastructure, the Internet Protocol is unsuited to routing within and between emerging ad-hoc edge networks due to its dependence on hierarchical, administratively assigned addresses. Existing ad-hoc routing protocols address the management problem but do not scale to Internet-wide networks. The promise of ubiquitous network computing cannot be fulfilled until we develop an Unmanaged Internet Protocol (UIP), a scalable routing protocol that manages itself automatically. UIP must route within and between constantly changing edge networks potentially containing millions or billions of nodes, and must still function within edge networks disconnected from the main Internet, all without imposing the administrative burden of hierarchical address assignment. Such a protocol appears challenging but feasible. We propose an architecture based on self-certifying, cryptographic node identities and a routing algorithm adapted from distributed hash tables.<|reference_end|> | arxiv | @article{ford2006unmanaged,
title={Unmanaged Internet Protocol: Taming the Edge Network Management Crisis},
author={Bryan Ford},
journal={Second Workshop on Hot Topics in Networks (HotNets-II), November
2003, Cambridge, MA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603075},
primaryClass={cs.NI cs.OS}
} | ford2006unmanaged |
arxiv-673989 | cs/0603076 | User-Relative Names for Globally Connected Personal Devices | <|reference_start|>User-Relative Names for Globally Connected Personal Devices: Nontechnical users who own increasingly ubiquitous network-enabled personal devices such as laptops, digital cameras, and smart phones need a simple, intuitive, and secure way to share information and services between their devices. User Information Architecture, or UIA, is a novel naming and peer-to-peer connectivity architecture addressing this need. Users assign UIA names by "introducing" devices to each other on a common local-area network, but these names remain securely bound to their target as devices migrate. Multiple devices owned by the same user, once introduced, automatically merge their namespaces to form a distributed "personal cluster" that the owner can access or modify from any of his devices. Instead of requiring users to allocate globally unique names from a central authority, UIA enables users to assign their own "user-relative" names both to their own devices and to other users. With UIA, for example, Alice can always access her iPod from any of her own personal devices at any location via the name "ipod", and her friend Bob can access her iPod via a relative name like "ipod.Alice".<|reference_end|> | arxiv | @article{ford2006user-relative,
title={User-Relative Names for Globally Connected Personal Devices},
author={Bryan Ford, Jacob Strauss, Chris Lesniewski-Laas, Sean Rhea, Frans
Kaashoek, Robert Morris},
journal={5th International Workshop on Peer-to-Peer Systems, February 2006
(IPTPS 2006), Santa Barbara, CA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603076},
primaryClass={cs.NI cs.DC cs.OS}
} | ford2006user-relative |
arxiv-673990 | cs/0603077 | Packrat Parsing: Simple, Powerful, Lazy, Linear Time | <|reference_start|>Packrat Parsing: Simple, Powerful, Lazy, Linear Time: Packrat parsing is a novel technique for implementing parsers in a lazy functional programming language. A packrat parser provides the power and flexibility of top-down parsing with backtracking and unlimited lookahead, but nevertheless guarantees linear parse time. Any language defined by an LL(k) or LR(k) grammar can be recognized by a packrat parser, in addition to many languages that conventional linear-time algorithms do not support. This additional power simplifies the handling of common syntactic idioms such as the widespread but troublesome longest-match rule, enables the use of sophisticated disambiguation strategies such as syntactic and semantic predicates, provides better grammar composition properties, and allows lexical analysis to be integrated seamlessly into parsing. Yet despite its power, packrat parsing shares the same simplicity and elegance as recursive descent parsing; in fact converting a backtracking recursive descent parser into a linear-time packrat parser often involves only a fairly straightforward structural change. This paper describes packrat parsing informally with emphasis on its use in practical applications, and explores its advantages and disadvantages with respect to the more conventional alternatives.<|reference_end|> | arxiv | @article{ford2006packrat,
title={Packrat Parsing: Simple, Powerful, Lazy, Linear Time},
author={Bryan Ford},
journal={International Conference on Functional Programming (ICFP '02),
October 2002, Pittsburgh, PA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603077},
primaryClass={cs.DS cs.CC cs.PL}
} | ford2006packrat |
arxiv-673991 | cs/0603078 | Consensus Propagation | <|reference_start|>Consensus Propagation: We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence, characterize the convergence rate for regular graphs, and demonstrate that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus propagation can be viewed as a special case of belief propagation, and our results contribute to the belief propagation literature. In particular, beyond singly-connected graphs, there are very few classes of relevant problems for which belief propagation is known to converge.<|reference_end|> | arxiv | @article{moallemi2006consensus,
title={Consensus Propagation},
author={Ciamac C. Moallemi and Benjamin Van Roy},
journal={IEEE Transactions on Information Theory, 2006, 52(11): 4753-4766},
year={2006},
doi={10.1109/TIT.2006.883539},
archivePrefix={arXiv},
eprint={cs/0603078},
primaryClass={cs.IT cs.AI cs.NI math.IT}
} | moallemi2006consensus |
arxiv-673992 | cs/0603079 | A compositional Semantics for CHR | <|reference_start|>A compositional Semantics for CHR: Constraint Handling Rules (CHR) are a committed-choice declarative language which has been designed for writing constraint solvers. A CHR program consists of multi-headed guarded rules which allow one to rewrite constraints into simpler ones until a solved form is reached. CHR has received a considerable attention, both from the practical and from the theoretical side. Nevertheless, due the use of multi-headed clauses, there are several aspects of the CHR semantics which have not been clarified yet. In particular, no compositional semantics for CHR has been defined so far. In this paper we introduce a fix-point semantics which characterizes the input/output behavior of a CHR program and which is and-compositional, that is, which allows to retrieve the semantics of a conjunctive query from the semantics of its components. Such a semantics can be used as a basis to define incremental and modular analysis and verification tools.<|reference_end|> | arxiv | @article{gabbrielli2006a,
title={A compositional Semantics for CHR},
author={Maurizio Gabbrielli and Maria Chiara Meo},
journal={arXiv preprint arXiv:cs/0603079},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603079},
primaryClass={cs.PL}
} | gabbrielli2006a |
arxiv-673993 | cs/0603080 | Yet Another Efficient Unification Algorithm | <|reference_start|>Yet Another Efficient Unification Algorithm: The unification algorithm is at the core of the logic programming paradigm, the first unification algorithm being developed by Robinson [5]. More efficient algorithms were developed later [3] and I introduce here yet another efficient unification algorithm centered on a specific data structure, called the Unification Table.<|reference_end|> | arxiv | @article{suciu2006yet,
title={Yet Another Efficient Unification Algorithm},
author={Alin Suciu},
journal={arXiv preprint arXiv:cs/0603080},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603080},
primaryClass={cs.LO cs.AI}
} | suciu2006yet |
arxiv-673994 | cs/0603081 | Application of Support Vector Regression to Interpolation of Sparse Shock Physics Data Sets | <|reference_start|>Application of Support Vector Regression to Interpolation of Sparse Shock Physics Data Sets: Shock physics experiments are often complicated and expensive. As a result, researchers are unable to conduct as many experiments as they would like - leading to sparse data sets. In this paper, Support Vector Machines for regression are applied to velocimetry data sets for shock damaged and melted tin metal. Some success at interpolating between data sets is achieved. Implications for future work are discussed.<|reference_end|> | arxiv | @article{sakhanenko2006application,
title={Application of Support Vector Regression to Interpolation of Sparse
Shock Physics Data Sets},
author={Nikita A. Sakhanenko (1 and 2), George F. Luger (1), Hanna E. Makaruk
(2), David B. Holtkamp (2) ((1) CS Dept. University of New Mexico, (2)
Physics Div. Los Alamos National Laboratory)},
journal={arXiv preprint arXiv:cs/0603081},
year={2006},
number={LA-UR-06-1739},
archivePrefix={arXiv},
eprint={cs/0603081},
primaryClass={cs.AI}
} | sakhanenko2006application |
arxiv-673995 | cs/0603082 | Solving Sparse Integer Linear Systems | <|reference_start|>Solving Sparse Integer Linear Systems: We propose a new algorithm to solve sparse linear systems of equations over the integers. This algorithm is based on a $p$-adic lifting technique combined with the use of block matrices with structured blocks. It achieves a sub-cubic complexity in terms of machine operations subject to a conjecture on the effectiveness of certain sparse projections. A LinBox-based implementation of this algorithm is demonstrated, and emphasizes the practical benefits of this new method over the previous state of the art.<|reference_end|> | arxiv | @article{eberly2006solving,
title={Solving Sparse Integer Linear Systems},
author={Wayne Eberly, Mark Giesbrecht (SCG), Pascal Giorgi (LP2A, SCG), Arne
Storjohann (SCG), Gilles Villard (LIP)},
journal={arXiv preprint arXiv:cs/0603082},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603082},
primaryClass={cs.SC}
} | eberly2006solving |
arxiv-673996 | cs/0603083 | Entropy-optimal Generalized Token Bucket Regulator | <|reference_start|>Entropy-optimal Generalized Token Bucket Regulator: We derive the maximum entropy of a flow (information utility) which conforms to traffic constraints imposed by a generalized token bucket regulator, by taking into account the covert information present in the randomness of packet lengths. Under equality constraints of aggregate tokens and aggregate bucket depth, a generalized token bucket regulator can achieve higher information utility than a standard token bucket regulator. The optimal generalized token bucket regulator has a near-uniform bucket depth sequence and a decreasing token increment sequence.<|reference_end|> | arxiv | @article{gore2006entropy-optimal,
title={Entropy-optimal Generalized Token Bucket Regulator},
author={Ashutosh Deepak Gore and Abhay Karandikar},
journal={arXiv preprint arXiv:cs/0603083},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603083},
primaryClass={cs.IT math.IT}
} | gore2006entropy-optimal |
arxiv-673997 | cs/0603084 | Random 3CNF formulas elude the Lovasz theta function | <|reference_start|>Random 3CNF formulas elude the Lovasz theta function: Let $\phi$ be a 3CNF formula with n variables and m clauses. A simple nonconstructive argument shows that when m is sufficiently large compared to n, most 3CNF formulas are not satisfiable. It is an open question whether there is an efficient refutation algorithm that for most such formulas proves that they are not satisfiable. A possible approach to refute a formula $\phi$ is: first, translate it into a graph $G_{\phi}$ using a generic reduction from 3-SAT to max-IS, then bound the maximum independent set of $G_{\phi}$ using the Lovasz $\vartheta$ function. If the $\vartheta$ function returns a value $< m$, this is a certificate for the unsatisfiability of $\phi$. We show that for random formulas with $m < n^{3/2 -o(1)}$ clauses, the above approach fails, i.e. the $\vartheta$ function is likely to return a value of m.<|reference_end|> | arxiv | @article{feige2006random,
title={Random 3CNF formulas elude the Lovasz theta function},
author={Uriel Feige and Eran Ofek},
journal={arXiv preprint arXiv:cs/0603084},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603084},
primaryClass={cs.CC cs.DS cs.LO}
} | feige2006random |
arxiv-673998 | cs/0603085 | Access Control for Hierarchical Joint-Tenancy | <|reference_start|>Access Control for Hierarchical Joint-Tenancy: Basic role based access control [RBAC] provides a mechanism for segregating access privileges based upon a user's hierarchical roles within an organization. This model doesn't scale well when there is tight integration of multiple hierarchies. In a case where there is joint-tenancy and a requirement for different levels of disclosure based upon a user's hierarchy, or in our case, organization or company, basic RBAC requires these hierarchies to be effectively merged. Specific roles that effectively represent both the user's organizations and roles must be translated to fit within the merged hierarchy to be used to control access. Essentially, users from multiple organizations are served from a single role base with roles designed to constrain their access as needed. Our work proposes, through parameterized roles and privileges, a means for accurately representing both users' roles within their respective hierarchies for providing access to controlled objects. Using this method will reduce the amount of complexity required in terms of the number of roles and privileges. The resulting set of roles, privileges, and objects will make modeling and visualizing the access role hierarchy significantly simplified. This paper will give some background on role based access control, parameterized roles and privileges, and then focus on how RBAC with parameterized roles and privileges can be leveraged as an access control solution for the problems presented by joint tenancy.<|reference_end|> | arxiv | @article{adams2006access,
title={Access Control for Hierarchical Joint-Tenancy},
    author={Jonathan K. Adams and Basheer N. Bristow},
    journal={WSEAS Transactions on Computers, June 2006, Issue 6, Volume 5, p. 1313-1318},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603085},
primaryClass={cs.CR}
} | adams2006access |
arxiv-673999 | cs/0603086 | Matching Edges in Images ; Application to Face Recognition | <|reference_start|>Matching Edges in Images ; Application to Face Recognition: This communication describes a representation of images as a set of edges characterized by their position and orientation. This representation allows the comparison of two images and the computation of their similarity. The first step in this computation of similarity is the search of a geometrical basis of the two-dimensional space where the two images are represented simultaneously after transformation of one of them. Presently, this simultaneous representation takes into account a shift and a scaling; it may be extended to rotations or other global geometrical transformations. An elementary probabilistic computation shows that a sufficient but not excessive number of trials (a few tens) ensures that the exhibition of this common basis is guaranteed in spite of possible errors in the detection of edges. When this first step is performed, the search of similarity between the two images reduces to counting the coincidence of edges in the two images. The approach may be applied to many problems of pattern matching; it was checked on face recognition.<|reference_end|> | arxiv | @article{roux2006matching,
title={Matching Edges in Images ; Application to Face Recognition},
    author={Joel Le Roux (1), Philippe Chaurand (1) and Mickael Urrutia (1) ((1) University of Nice (France))},
journal={arXiv preprint arXiv:cs/0603086},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603086},
primaryClass={cs.CV}
} | roux2006matching |
arxiv-674000 | cs/0603087 | IP over P2P: Enabling Self-configuring Virtual IP Networks for Grid Computing | <|reference_start|>IP over P2P: Enabling Self-configuring Virtual IP Networks for Grid Computing: Peer-to-peer (P2P) networks have mostly focused on task oriented networking, where networks are constructed for single applications, i.e. file-sharing, DNS caching, etc. In this work, we introduce IPOP, a system for creating virtual IP networks on top of a P2P overlay. IPOP enables seamless access to Grid resources spanning multiple domains by aggregating them into a virtual IP network that is completely isolated from the physical network. The virtual IP network provided by IPOP supports deployment of existing IP-based protocols over a robust, self-configuring P2P overlay. We present implementation details as well as experimental measurement results taken from LAN, WAN, and Planet-Lab tests.<|reference_end|> | arxiv | @article{ganguly2006ip,
    title={IP over P2P: Enabling Self-configuring Virtual IP Networks for Grid Computing},
    author={Arijit Ganguly and Abhishek Agrawal and P. Oscar Boykin and Renato Figueiredo},
journal={arXiv preprint arXiv:cs/0603087},
year={2006},
archivePrefix={arXiv},
eprint={cs/0603087},
primaryClass={cs.DC cs.NI}
} | ganguly2006ip |