corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-672501 | cs/0501022 | Algebraic Properties for Selector Functions | <|reference_start|>Algebraic Properties for Selector Functions: The nondeterministic advice complexity of the P-selective sets is known to be exactly linear. Regarding the deterministic advice complexity of the P-selective sets--i.e., the amount of Karp--Lipton advice needed for polynomial-time machines to recognize them in general--the best current upper bound is quadratic [Ko, 1983] and the best current lower bound is linear [Hemaspaandra and Torenvliet, 1996]. We prove that every associatively P-selective set is commutatively, associatively P-selective. Using this, we establish an algebraic sufficient condition for the P-selective sets to have a linear upper bound (which thus would match the existing lower bound) on their deterministic advice complexity: If all P-selective sets are associatively P-selective then the deterministic advice complexity of the P-selective sets is linear. The weakest previously known sufficient condition was P=NP. We also establish related results for algebraic properties of, and advice complexity of, the nondeterministically selective sets.<|reference_end|> | arxiv | @article{hemaspaandra2005algebraic,
title={Algebraic Properties for Selector Functions},
author={Lane A. Hemaspaandra, Harald Hempel, and Arfst Nickelsen},
journal={SICOMP, V. 33, Number 6, pp. 1309--1337, 2004},
year={2005},
number={URCS-TR-778 (January 7, 2004 revision)},
archivePrefix={arXiv},
eprint={cs/0501022},
primaryClass={cs.CC}
} | hemaspaandra2005algebraic |
arxiv-672502 | cs/0501023 | No-cloning principle can alone provide security | <|reference_start|>No-cloning principle can alone provide security: Existing quantum key distribution schemes need the support of a classical authentication scheme to ensure security. This is a conceptual drawback of quantum cryptography. It is pointed out that a quantum cryptosystem does not need any support from a classical cryptosystem to ensure security. The no-cloning principle can alone provide security in communication. Even the no-cloning principle itself can help to authenticate each bit of information. It implies that a quantum password need not be a secret password.<|reference_end|> | arxiv | @article{mitra2005no-cloning,
title={No-cloning principle can alone provide security},
author={Arindam Mitra},
journal={arXiv preprint arXiv:cs/0501023},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501023},
primaryClass={cs.IT math.IT}
} | mitra2005no-cloning |
arxiv-672503 | cs/0501024 | Effectively Open Real Functions | <|reference_start|>Effectively Open Real Functions: A function f is continuous iff the PRE-image f^{-1}[V] of any open set V is open again. Dual to this topological property, f is called OPEN iff the IMAGE f[U] of any open set U is open again. Several classical Open Mapping Theorems in Analysis provide a variety of sufficient conditions for openness. By the Main Theorem of Recursive Analysis, computable real functions are necessarily continuous. In fact they admit a well-known characterization in terms of the mapping V+->f^{-1}[V] being EFFECTIVE: Given a list of open rational balls exhausting V, a Turing Machine can generate a corresponding list for f^{-1}[V]. Analogously, EFFECTIVE OPENNESS requires the mapping U+->f[U] on open real subsets to be effective. By effectivizing classical Open Mapping Theorems as well as from application of Tarski's Quantifier Elimination, the present work reveals several rich classes of functions to be effectively open.<|reference_end|> | arxiv | @article{ziegler2005effectively,
title={Effectively Open Real Functions},
author={Martin Ziegler},
journal={pp.827-849 in Journal of Complexity vol.22 (2006)},
year={2005},
doi={10.1016/j.jco.2006.05.002},
archivePrefix={arXiv},
eprint={cs/0501024},
primaryClass={cs.LO}
} | ziegler2005effectively |
arxiv-672504 | cs/0501025 | A Logic for Non-Monotone Inductive Definitions | <|reference_start|>A Logic for Non-Monotone Inductive Definitions: Well-known principles of induction include monotone induction and different sorts of non-monotone induction such as inflationary induction, induction over well-founded sets and iterated induction. In this work, we define a logic formalizing induction over well-founded sets and monotone and iterated induction. Just as the principle of positive induction has been formalized in FO(LFP), and the principle of inflationary induction has been formalized in FO(IFP), this paper formalizes the principle of iterated induction in a new logic for Non-Monotone Inductive Definitions (ID-logic). The semantics of the logic is strongly influenced by the well-founded semantics of logic programming. Our main result concerns the modularity properties of inductive definitions in ID-logic. Specifically, we formulate conditions under which a simultaneous definition $\Delta$ of several relations is logically equivalent to a conjunction of smaller definitions $\Delta_1 \land \cdots \land \Delta_n$ with disjoint sets of defined predicates. The difficulty of the result comes from the fact that predicates $P_i$ and $P_j$ defined in $\Delta_i$ and $\Delta_j$, respectively, may be mutually connected by simultaneous induction. Since logic programming and abductive logic programming under well-founded semantics are proper fragments of our logic, our modularity results are applicable there as well.<|reference_end|> | arxiv | @article{denecker2005a,
title={A Logic for Non-Monotone Inductive Definitions},
author={Marc Denecker and Eugenia Ternovska},
journal={arXiv preprint arXiv:cs/0501025},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501025},
primaryClass={cs.AI cs.LO}
} | denecker2005a |
arxiv-672505 | cs/0501026 | On the Sensitivity of Cyclically-Invariant Boolean Functions | <|reference_start|>On the Sensitivity of Cyclically-Invariant Boolean Functions: In this paper we construct a cyclically invariant Boolean function whose sensitivity is $\Theta(n^{1/3})$. This result answers two previously published questions. Tur\'an (1984) asked if any Boolean function, invariant under some transitive group of permutations, has sensitivity $\Omega(\sqrt{n})$. Kenyon and Kutin (2004) asked whether for a ``nice'' function the product of 0-sensitivity and 1-sensitivity is $\Omega(n)$. Our function answers both questions in the negative. We also prove that for minterm-transitive functions (a natural class of Boolean functions including our example) the sensitivity is $\Omega(n^{1/3})$. Hence for this class of functions sensitivity and block sensitivity are polynomially related.<|reference_end|> | arxiv | @article{chakraborty2005on,
title={On the Sensitivity of Cyclically-Invariant Boolean Functions},
author={Sourav Chakraborty},
journal={arXiv preprint arXiv:cs/0501026},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501026},
primaryClass={cs.CC}
} | chakraborty2005on |
arxiv-672506 | cs/0501027 | Enforcing Bulk Mail Classification | <|reference_start|>Enforcing Bulk Mail Classification: Spam costs US corporations upwards of $8.9 billion a year, and comprises as much as 40% of all email received. Solutions exist to reduce the amount of spam seen by end users, but cannot withstand sophisticated attacks. Worse yet, many will occasionally misclassify and silently drop legitimate email. Spammers take advantage of the near-zero cost of sending email to flood the network, knowing that success even a tiny fraction of the time means a profit. End users, however, have proven unwilling to pay money to send email to friends and family. We show that it is feasible to extend the existing mail system to reduce the amount of unwanted email, without misclassifying email, and without charging well-behaved users. We require that bulk email senders accurately classify each email message they send as an advertisement with an area of interest or else be charged a small negative incentive per message delivered. Recipients are able to filter out email outside their scope of interest, while senders are able to focus their sendings to the appropriate audience.<|reference_end|> | arxiv | @article{greenberg2005enforcing,
title={Enforcing Bulk Mail Classification},
author={Evan P. Greenberg, David R. Cheriton},
journal={arXiv preprint arXiv:cs/0501027},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501027},
primaryClass={cs.NI}
} | greenberg2005enforcing |
arxiv-672507 | cs/0501028 | An Empirical Study of MDL Model Selection with Infinite Parametric Complexity | <|reference_start|>An Empirical Study of MDL Model Selection with Infinite Parametric Complexity: Parametric complexity is a central concept in MDL model selection. In practice it often turns out to be infinite, even for quite simple models such as the Poisson and Geometric families. In such cases, MDL model selection based on NML and Bayesian inference based on Jeffreys' prior cannot be used. Several ways to resolve this problem have been proposed. We conduct experiments to compare and evaluate their behaviour on small sample sizes. We find interestingly poor behaviour for the plug-in predictive code; a restricted NML model performs quite well but it is questionable if the results validate its theoretical motivation. The Bayesian model with the improper Jeffreys' prior is the most dependable.<|reference_end|> | arxiv | @article{de rooij2005an,
title={An Empirical Study of MDL Model Selection with Infinite Parametric
Complexity},
author={Steven de Rooij and Peter Grunwald},
journal={arXiv preprint arXiv:cs/0501028},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501028},
primaryClass={cs.LG cs.IT math.IT}
} | de rooij2005an |
arxiv-672508 | cs/0501029 | Estimating Range Queries using Aggregate Data with Integrity Constraints: a Probabilistic Approach | <|reference_start|>Estimating Range Queries using Aggregate Data with Integrity Constraints: a Probabilistic Approach: The problem of recovering (count and sum) range queries over multidimensional data only on the basis of aggregate information on such data is addressed. This problem can be formalized as follows. Suppose that a transformation T producing a summary from a multidimensional data set is used. Now, given a data set D, a summary S=T(D) and a range query r on D, the problem consists of studying r by modelling it as a random variable defined over the sample space of all the data sets D' such that T(D) = S. The study of such a random variable, done by the definition of its probability distribution and the computation of its mean value and variance, represents a well-founded, theoretical probabilistic approach for estimating the query only on the basis of the available information (that is the summary S) without assumptions on original data.<|reference_end|> | arxiv | @article{buccafurri2005estimating,
title={Estimating Range Queries using Aggregate Data with Integrity
Constraints: a Probabilistic Approach},
author={Francesco Buccafurri, Filippo Furfaro, Domenico Sacca'},
journal={arXiv preprint arXiv:cs/0501029},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501029},
primaryClass={cs.DB}
} | buccafurri2005estimating |
arxiv-672509 | cs/0501030 | Generalized Laplace transformations and integration of hyperbolic systems of linear partial differential equations | <|reference_start|>Generalized Laplace transformations and integration of hyperbolic systems of linear partial differential equations: We give a new procedure for generalized factorization and construction of the complete solution of strictly hyperbolic linear partial differential equations or strictly hyperbolic systems of such equations in the plane. This procedure generalizes the classical theory of Laplace transformations of second-order equations in the plane.<|reference_end|> | arxiv | @article{tsarev2005generalized,
title={Generalized Laplace transformations and integration of hyperbolic
systems of linear partial differential equations},
author={Sergey P. Tsarev},
journal={arXiv preprint arXiv:cs/0501030},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501030},
primaryClass={cs.SC math.AP nlin.SI}
} | tsarev2005generalized |
arxiv-672510 | cs/0501031 | From truth to computability II | <|reference_start|>From truth to computability II: Computability logic is a formal theory of computational tasks and resources. Formulas in it represent interactive computational problems, and "truth" is understood as algorithmic solvability. Interactive computational problems, in turn, are defined as a certain sort of games between a machine and its environment, with logical operators standing for operations on such games. Within the ambitious program of finding axiomatizations for incrementally rich fragments of this semantically introduced logic, the earlier article "From truth to computability I" proved soundness and completeness for system CL3, whose language has the so-called parallel connectives (including negation), choice connectives, choice quantifiers, and blind quantifiers. The present paper extends that result to the significantly more expressive system CL4 with the same collection of logical operators. What makes CL4 expressive is the presence of two sorts of atoms in its language: elementary atoms, representing elementary computational problems (i.e. predicates, i.e. problems of zero degree of interactivity), and general atoms, representing arbitrary computational problems. CL4 conservatively extends CL3, with the latter being nothing but the general-atom-free fragment of the former. Removing the blind (classical) group of quantifiers from the language of CL4 is shown to yield a decidable logic despite the fact that the latter is still first-order. A comprehensive online source on computability logic can be found at http://www.cis.upenn.edu/~giorgi/cl.html<|reference_end|> | arxiv | @article{japaridze2005from,
title={From truth to computability II},
author={Giorgi Japaridze},
journal={Theoretical Computer Science 379 (2007), pp. 20-52},
year={2005},
doi={10.1016/j.tcs.2007.01.004},
archivePrefix={arXiv},
eprint={cs/0501031},
primaryClass={cs.LO cs.AI math.LO}
} | japaridze2005from |
arxiv-672511 | cs/0501032 | On Partially Additive Kleene Algebras | <|reference_start|>On Partially Additive Kleene Algebras: We define the notion of a partially additive Kleene algebra, which is a Kleene algebra where the + operation need only be partially defined. These structures formalize a number of examples that cannot be handled directly by Kleene algebras. We relate partially additive Kleene algebras to existing algebraic structures, by exhibiting categorical connections with Kleene algebras, partially additive categories, and closed semirings.<|reference_end|> | arxiv | @article{pucella2005on,
title={On Partially Additive Kleene Algebras},
author={Riccardo Pucella},
journal={arXiv preprint arXiv:cs/0501032},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501032},
primaryClass={cs.LO}
} | pucella2005on |
arxiv-672512 | cs/0501033 | Playful, streamlike computation | <|reference_start|>Playful, streamlike computation: We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation -- that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like "call-cc" and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand.<|reference_end|> | arxiv | @article{curien2005playful,
title={Playful, streamlike computation},
author={Pierre-Louis Curien (PPS)},
journal={Domain theory, logic and computation, Kluwer Academic Publishers
(Ed.) (2003) 1-24},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501033},
primaryClass={cs.LO}
} | curien2005playful |
arxiv-672513 | cs/0501034 | Symmetry and interactivity in Programming | <|reference_start|>Symmetry and interactivity in Programming: We recall some of the early occurrences of the notions of interactivity and symmetry in the operational and denotational semantics of programming languages. We suggest some connections with ludics.<|reference_end|> | arxiv | @article{curien2005symmetry,
title={Symmetry and interactivity in Programming},
author={Pierre-Louis Curien (PPS)},
journal={Bulletin of Symbolic Logic 9, 2 (2003) 169-180},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501034},
primaryClass={cs.LO}
} | curien2005symmetry |
arxiv-672514 | cs/0501035 | Introduction to linear logic and ludics, part I | <|reference_start|>Introduction to linear logic and ludics, part I: This two-part paper offers a survey of linear logic and ludics, which were introduced by Girard in 1986 and 2001, respectively. Both theories revisit mathematical logic from first principles, with inspiration from and applications to computer science. The present part I covers an introduction to the connectives and proof rules of linear logic, to its decidability properties, and to its models. Part II will deal with proof nets, a graph-like representation of proofs which is one of the major innovations of linear logic, and will present an introduction to ludics.<|reference_end|> | arxiv | @article{curien2005introduction,
title={Introduction to linear logic and ludics, part I},
author={Pierre-Louis Curien (PPS)},
journal={Advances in Mathematics (China) 34, 5 (2005) 513-544},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501035},
primaryClass={cs.LO}
} | curien2005introduction |
arxiv-672515 | cs/0501036 | Enabling Agents to Dynamically Select Protocols for Interactions | <|reference_start|>Enabling Agents to Dynamically Select Protocols for Interactions: In this paper we describe a method that allows agents to dynamically select protocols and roles when they need to execute collaborative tasks.<|reference_end|> | arxiv | @article{aknine2005enabling,
title={Enabling Agents to Dynamically Select Protocols for Interactions},
author={Jose Ghislain Quenum and Samir Aknine},
journal={arXiv preprint arXiv:cs/0501036},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501036},
primaryClass={cs.MA cs.SE}
} | aknine2005enabling |
arxiv-672516 | cs/0501037 | Oligopolistic Competition in an Evolutionary Environment: a Computer Simulation | <|reference_start|>Oligopolistic Competition in an Evolutionary Environment: a Computer Simulation: The following notes contain a computer simulation concerning effective competition in an evolutionary environment. The aim is to underline the existence of a side effect pertaining to the competitive processes: the tendency toward an excess of supply by producers that operate in a strongly competitive situation. A set of four oligopolistic firms will be employed in the formal reconstruction. The simulation will operate following the Edmond Malinvaud "short side" approach, as far as the price adjustment is concerned, and the sequential Hicksian "weeks" structure with regard to the temporal characterization. The content of the present paper ought to be considered a development of the paper: Michele Tucci, Evolution and Gravitation: a Computer Simulation of a Non-Walrasian Equilibrium Model, published with the E-print Archives at arXiv.com (section: Computer Science, registration number: cs.CY/0209017). In that paper some preliminary considerations can be found regarding the comparison between the evolutionary and the gravitational paradigms and the evaluation of approaches belonging to rival schools of economic thought.<|reference_end|> | arxiv | @article{tucci2005oligopolistic,
title={Oligopolistic Competition in an Evolutionary Environment: a Computer
Simulation},
author={Michele Tucci},
journal={arXiv preprint arXiv:cs/0501037},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501037},
primaryClass={cs.CY}
} | tucci2005oligopolistic |
arxiv-672517 | cs/0501038 | Data Tastes Better Seasoned: Introducing the ASH Family of Hashing Algorithms | <|reference_start|>Data Tastes Better Seasoned: Introducing the ASH Family of Hashing Algorithms: Over the recent months it has become clear that the current generation of cryptographic hashing algorithms are insufficient to meet future needs. The ASH family of algorithms provides modifications to the existing SHA-2 family. These modifications are designed with two main goals: 1) Providing increased collision resistance. 2) Increasing mitigation of security risks post-collision. The unique public/private sections and salt/pepper design elements provide increased flexibility for a broad range of applications. The ASH family is a new generation of cryptographic hashing algorithms.<|reference_end|> | arxiv | @article{capelis2005data,
title={Data Tastes Better Seasoned: Introducing the ASH Family of Hashing
Algorithms},
author={D.J. Capelis},
journal={arXiv preprint arXiv:cs/0501038},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501038},
primaryClass={cs.CR}
} | capelis2005data |
arxiv-672518 | cs/0501039 | Introduction to linear logic and ludics, part II | <|reference_start|>Introduction to linear logic and ludics, part II: This paper is the second part of an introduction to linear logic and ludics, both due to Girard. It is devoted to proof nets, in the limited, yet central, framework of multiplicative linear logic and to ludics, which has been recently developed with the aim of further unveiling the fundamental interactive nature of computation and logic. We hope to offer a few computer science insights into this new theory.<|reference_end|> | arxiv | @article{curien2005introduction,
title={Introduction to linear logic and ludics, part II},
author={Pierre-Louis Curien (PPS)},
journal={Advances in Mathematics (China) 35, 1 (2006) 1-44},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501039},
primaryClass={cs.LO}
} | curien2005introduction |
arxiv-672519 | cs/0501040 | Split-2 Bisimilarity has a Finite Axiomatization over CCS with Hennessy's Merge | <|reference_start|>Split-2 Bisimilarity has a Finite Axiomatization over CCS with Hennessy's Merge: This note shows that split-2 bisimulation equivalence (also known as timed equivalence) affords a finite equational axiomatization over the process algebra obtained by adding an auxiliary operation proposed by Hennessy in 1981 to the recursion, relabelling and restriction free fragment of Milner's Calculus of Communicating Systems. Thus the addition of a single binary operation, viz. Hennessy's merge, is sufficient for the finite equational axiomatization of parallel composition modulo this non-interleaving equivalence. This result is in sharp contrast to a theorem previously obtained by the same authors to the effect that the same language is not finitely based modulo bisimulation equivalence.<|reference_end|> | arxiv | @article{aceto2005split-2,
title={Split-2 Bisimilarity has a Finite Axiomatization over CCS with
Hennessy's Merge},
author={Luca Aceto, Wan Fokkink, Anna Ingolfsdottir and Bas Luttik},
journal={Logical Methods in Computer Science, Volume 1, Issue 1 (March 9,
2005) lmcs:2273},
year={2005},
doi={10.2168/LMCS-1(1:3)2005},
archivePrefix={arXiv},
eprint={cs/0501040},
primaryClass={cs.LO}
} | aceto2005split-2 |
arxiv-672520 | cs/0501041 | Two Iterative Algorithms for Solving Systems of Simultaneous Linear Algebraic Equations with Real Matrices of Coefficients | <|reference_start|>Two Iterative Algorithms for Solving Systems of Simultaneous Linear Algebraic Equations with Real Matrices of Coefficients: The paper describes two iterative algorithms for solving general systems of M simultaneous linear algebraic equations (SLAE) with real matrices of coefficients. The system can be determined, underdetermined, or overdetermined. Linearly dependent equations are also allowed. Both algorithms use the method of Lagrange multipliers to transform the original SLAE into a positive definite function F of the original real variables X(i) (i=1,...,N) and Lagrange multipliers Lambda(i) (i=1,...,M). Function F is differentiated with respect to the variables X(i) and the obtained relationships are used to express F in terms of the Lagrange multipliers Lambda(i). The resulting function is minimized with respect to the variables Lambda(i) with the help of one of the following two minimization techniques: (1) the relaxation method or (2) the method of conjugate gradients of Fletcher and Reeves. Numerical examples are given.<|reference_end|> | arxiv | @article{kondratiev2005two,
title={Two Iterative Algorithms for Solving Systems of Simultaneous Linear
Algebraic Equations with Real Matrices of Coefficients},
author={A. S. Kondratiev (1 and 2) and N. P. Polishchuk (2) ((1) Moscow Power
Engineering Institute, (2) Altair Naval Research Institute of Radio
Electronics)},
journal={arXiv preprint arXiv:cs/0501041},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501041},
primaryClass={cs.NA}
} | kondratiev2005two |
arxiv-672521 | cs/0501042 | Maintaining Consistency of Data on the Web | <|reference_start|>Maintaining Consistency of Data on the Web: Increasingly more data is becoming available on the Web, with estimates speaking of 1 billion documents in 2002. Most of the documents are Web pages whose data is considered to be in XML format, which is expected to eventually replace HTML. A common problem in designing and maintaining a Web site is that data on a Web page often replicates or derives from other data, the so-called base data, which is usually not contained in the deriving or replicating page. Consequently, replicas and derivations become inconsistent upon modifying base data in a Web page or a relational database. For example, after assigning a thesis to a student and modifying the Web page that describes it in detail, the thesis is still incorrectly contained in the list of offered theses, missing from the list of ongoing theses, and missing from the advisor's teaching record. The thesis presents a solution by proposing a combined approach that provides for maintaining consistency of data in Web pages that (i) replicate data in relational databases, or (ii) replicate or derive from data in Web pages. Upon modifying base data, the modification is immediately pushed to affected Web pages. There, maintenance is performed incrementally by only modifying the affected part of the page instead of re-generating the whole page from scratch.<|reference_end|> | arxiv | @article{bernauer2005maintaining,
title={Maintaining Consistency of Data on the Web},
author={Martin Bernauer},
journal={arXiv preprint arXiv:cs/0501042},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501042},
primaryClass={cs.DB cs.DS}
} | bernauer2005maintaining |
arxiv-672522 | cs/0501043 | Proving Correctness and Completeness of Normal Programs - a Declarative Approach | <|reference_start|>Proving Correctness and Completeness of Normal Programs - a Declarative Approach: We advocate a declarative approach to proving properties of logic programs. Total correctness can be separated into correctness, completeness and clean termination; the latter includes non-floundering. Only clean termination depends on the operational semantics, in particular on the selection rule. We show how to deal with correctness and completeness in a declarative way, treating programs only from the logical point of view. Specifications used in this approach are interpretations (or theories). We point out that specifications for correctness may differ from those for completeness, as usually there are answers which are neither considered erroneous nor required to be computed. We present proof methods for correctness and completeness for definite programs and generalize them to normal programs. For normal programs we use the 3-valued completion semantics; this is a standard semantics corresponding to negation as finite failure. The proof methods employ solely the classical 2-valued logic. We use a 2-valued characterization of the 3-valued completion semantics which may be of separate interest. The presented methods are compared with an approach based on operational semantics. We also employ the ideas of this work to generalize a known method of proving termination of normal programs.<|reference_end|> | arxiv | @article{drabent2005proving,
title={Proving Correctness and Completeness of Normal Programs - a Declarative
Approach},
author={W. Drabent, M. Milkowska},
journal={Theory and Practice of Logic Programming, 5(6):669-711, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501043},
primaryClass={cs.LO cs.PL}
} | drabent2005proving |
arxiv-672523 | cs/0501044 | Augmented Segmentation and Visualization for Presentation Videos | <|reference_start|>Augmented Segmentation and Visualization for Presentation Videos: We investigate methods of segmenting, visualizing, and indexing presentation videos by separately considering audio and visual data. The audio track is segmented by speaker, and augmented with key phrases which are extracted using an Automatic Speech Recognizer (ASR). The video track is segmented by visual dissimilarities and augmented by representative key frames. An interactive user interface combines a visual representation of audio, video, text, and key frames, and allows the user to navigate a presentation video. We also explore clustering and labeling of speaker data and present preliminary results.<|reference_end|> | arxiv | @article{haubold2005augmented,
title={Augmented Segmentation and Visualization for Presentation Videos},
author={Alexander Haubold, John R. Kender},
journal={arXiv preprint arXiv:cs/0501044},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501044},
primaryClass={cs.MM cs.IR}
} | haubold2005augmented |
arxiv-672524 | cs/0501045 | Improved Approximation Algorithms for Geometric Set Cover | <|reference_start|>Improved Approximation Algorithms for Geometric Set Cover: Given a collection S of subsets of some set U, and M a subset of U, the set cover problem is to find the smallest subcollection C of S such that M is a subset of the union of the sets in C. While the general problem is NP-hard to solve, even approximately, here we consider some geometric special cases, where usually U = R^d. Extending prior results, we show that approximation algorithms with provable performance exist, under a certain general condition: that for a random subset R of S and function f(), there is a decomposition of the portion of U not covered by R into an expected f(|R|) regions, each region of a particular simple form. We show that under this condition, a cover of size O(f(|C|)) can be found. Our proof involves the generalization of shallow cuttings to more general geometric situations. We obtain constant-factor approximation algorithms for covering by unit cubes in R^3, for guarding a one-dimensional terrain, and for covering by similar-sized fat triangles in R^2. We also obtain improved approximation guarantees for fat triangles, of arbitrary size, and for a class of fat objects.<|reference_end|> | arxiv | @article{clarkson2005improved,
title={Improved Approximation Algorithms for Geometric Set Cover},
author={Kenneth L. Clarkson and Kasturi Varadarajan},
journal={arXiv preprint arXiv:cs/0501045},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501045},
primaryClass={cs.CG cs.DS}
} | clarkson2005improved |
arxiv-672525 | cs/0501046 | Thermodynamics of used punched tape: A weak and a strong equivalence principle | <|reference_start|>Thermodynamics of used punched tape: A weak and a strong equivalence principle: We study the repeated use of a monotonic recording medium--such as punched tape or photographic plate--where marks can be added at any time but never erased. (For practical purposes, also the electromagnetic "ether" falls into this class.) Our emphasis is on the case where the successive users act independently and selfishly, but not maliciously; typically, the "first user" would be a blind natural process tending to degrade the recording medium, and the "second user" a human trying to make the most of whatever capacity is left. To what extent is a length of used tape "equivalent"--for information transmission purposes--to a shorter length of virgin tape? Can we characterize a piece of used tape by an appropriate "effective length" and forget all other details? We identify two equivalence principles. The weak principle is exact, but only holds for a sequence of infinitesimal usage increments. The strong principle holds for any amount of incremental usage, but is only approximate; nonetheless, it is quite accurate even in the worst case and is virtually exact over most of the range--becoming exact in the limit of heavily used tape. The fact that strong equivalence does not hold exactly, but then it does almost exactly, comes as a bit of a surprise.<|reference_end|> | arxiv | @article{toffoli2005thermodynamics,
title={Thermodynamics of used punched tape: A weak and a strong equivalence
principle},
author={Tommaso Toffoli},
journal={arXiv preprint arXiv:cs/0501046},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501046},
primaryClass={cs.IT math.IT}
} | toffoli2005thermodynamics |
arxiv-672526 | cs/0501047 | Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method | <|reference_start|>Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method: For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD) is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD and turbo MUD, and is validated by numerical results for finite systems.<|reference_end|> | arxiv | @article{li2005impact,
title={Impact of Channel Estimation Errors on Multiuser Detection via the
Replica Method},
author={Husheng Li and H. V. Poor},
journal={arXiv preprint arXiv:cs/0501047},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501047},
primaryClass={cs.IT math.IT}
} | li2005impact |
arxiv-672527 | cs/0501048 | Low Complexity Joint Iterative Equalization and Multiuser Detection in Dispersive DS-CDMA Channels | <|reference_start|>Low Complexity Joint Iterative Equalization and Multiuser Detection in Dispersive DS-CDMA Channels: Communications in dispersive direct-sequence code-division multiple-access (DS-CDMA) channels suffer from intersymbol and multiple-access interference, which can significantly impair performance. Joint maximum \textit{a posteriori} probability (MAP) equalization and multiuser detection with error control decoding can be used to mitigate this interference and to achieve the optimal bit error rate. Unfortunately, such optimal detection typically requires prohibitive computational complexity. This problem is addressed in this paper through the development of a reduced state trellis search detection algorithm, based on decision feedback from channel decoders. The performance of this algorithm is analyzed in the large-system limit. This analysis and simulations show that this low-complexity algorithm can obtain near-optimal performance under moderate signal-to-noise ratio and attains larger system load capacity than parallel interference cancellation.<|reference_end|> | arxiv | @article{li2005low,
title={Low Complexity Joint Iterative Equalization and Multiuser Detection in
Dispersive DS-CDMA Channels},
author={Husheng Li and H. V. Poor},
journal={arXiv preprint arXiv:cs/0501048},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501048},
primaryClass={cs.IT math.IT}
} | li2005low |
arxiv-672528 | cs/0501049 | Performance Evaluation of Impulse Radio UWB Systems with Pulse-Based Polarity Randomization | <|reference_start|>Performance Evaluation of Impulse Radio UWB Systems with Pulse-Based Polarity Randomization: In this paper, the performance of a binary phase shift keyed random time-hopping impulse radio system with pulse-based polarity randomization is analyzed. Transmission over frequency-selective channels is considered and the effects of inter-frame interference and multiple access interference on the performance of a generic Rake receiver are investigated for both synchronous and asynchronous systems. Closed-form (approximate) expressions for the probability of error that are valid for various Rake combining schemes are derived. The asynchronous system is modelled as a chip-synchronous system with uniformly distributed timing jitter for the transmitted pulses of interfering users. This model allows the analytical technique developed for the synchronous case to be extended to the asynchronous case. An approximate closed-form expression for the probability of bit error, expressed in terms of the autocorrelation function of the transmitted pulse, is derived for the asynchronous case. Then, transmission over an additive white Gaussian noise channel is studied as a special case, and the effects of multiple-access interference are investigated for both synchronous and asynchronous systems. The analysis shows that the chip-synchronous assumption can result in over-estimating the error probability, and the degree of over-estimation mainly depends on the autocorrelation function of the ultra-wideband pulse and the signal-to-interference-plus-noise-ratio of the system. Simulation studies support the approximate analysis.<|reference_end|> | arxiv | @article{gezici2005performance,
title={Performance Evaluation of Impulse Radio UWB Systems with Pulse-Based
Polarity Randomization},
author={Sinan Gezici, Hisashi Kobayashi, H. Vincent Poor, and Andreas F.
Molisch},
journal={arXiv preprint arXiv:cs/0501049},
year={2005},
doi={10.1109/TSP.2005.849197},
archivePrefix={arXiv},
eprint={cs/0501049},
primaryClass={cs.IT math.IT}
} | gezici2005performance |
arxiv-672529 | cs/0501050 | Energy-Efficient Joint Estimation in Sensor Networks: Analog vs Digital | <|reference_start|>Energy-Efficient Joint Estimation in Sensor Networks: Analog vs Digital: Sensor networks in which energy is a limited resource so that energy consumption must be minimized for the intended application are considered. In this context, an energy-efficient method for the joint estimation of an unknown analog source under a given distortion constraint is proposed. The approach is purely analog, in which each sensor simply amplifies and forwards the noise-corrupted analog observation to the fusion center for joint estimation. The total transmission power across all the sensor nodes is minimized while satisfying a distortion requirement on the joint estimate. The energy efficiency of this analog approach is compared with previously proposed digital approaches with and without coding. It is shown in our simulation that the analog approach is more energy-efficient than the digital system without coding, and in some cases outperforms the digital system with optimal coding.<|reference_end|> | arxiv | @article{cui2005energy-efficient,
title={Energy-Efficient Joint Estimation in Sensor Networks: Analog vs. Digital},
author={Shuguang Cui, Jinjun Xiao, Zhi-Quan Luo, Andrea Goldsmith, and H.
Vincent Poor},
journal={arXiv preprint arXiv:cs/0501050},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501050},
primaryClass={cs.IT math.IT}
} | cui2005energy-efficient |
arxiv-672530 | cs/0501051 | On the Capacity of Multiple Antenna Systems in Rician Fading | <|reference_start|>On the Capacity of Multiple Antenna Systems in Rician Fading: The effect of Rician-ness on the capacity of multiple antenna systems is investigated under the assumption that channel state information (CSI) is available only at the receiver. The average-power-constrained capacity of such systems is considered under two different assumptions on the knowledge about the fading available at the transmitter: the case in which the transmitter has no knowledge of fading at all, and the case in which the transmitter has knowledge of the distribution of the fading process but not the instantaneous CSI. The exact capacity is given for the former case while capacity bounds are derived for the latter case. A new signalling scheme is also proposed for the latter case and it is shown that by exploiting the knowledge of Rician-ness at the transmitter via this signalling scheme, significant capacity gain can be achieved. The derived capacity bounds are evaluated explicitly to provide numerical results in some representative situations.<|reference_end|> | arxiv | @article{jayaweera2005on,
title={On the Capacity of Multiple Antenna Systems in Rician Fading},
author={Sudharman K. Jayaweera (1) and H. Vincent Poor (2) ((1) Wichita State
University (2) Princeton University)},
journal={arXiv preprint arXiv:cs/0501051},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501051},
primaryClass={cs.IT math.IT}
} | jayaweera2005on |
arxiv-672531 | cs/0501052 | Stochastic Differential Games in a Non-Markovian Setting | <|reference_start|>Stochastic Differential Games in a Non-Markovian Setting: Stochastic differential games are considered in a non-Markovian setting. Typically, in stochastic differential games the modulating process of the diffusion equation describing the state flow is taken to be Markovian. Then Nash equilibria or other types of solution such as Pareto equilibria are constructed using Hamilton-Jacobi-Bellman (HJB) equations. But in a non-Markovian setting the HJB method is not applicable. To examine the non-Markovian case, this paper considers the situation in which the modulating process is a fractional Brownian motion. Fractional noise calculus is used for such models to find the Nash equilibria explicitly. Although fractional Brownian motion is taken as the modulating process because of its versatility in modeling in the fields of finance and networks, the approach in this paper has the merit of being applicable to more general Gaussian stochastic differential games with only slight conceptual modifications. This work has applications in finance to stock price modeling which incorporates the effect of institutional investors, and to stochastic differential portfolio games in markets in which the stock prices follow diffusions modulated with fractional Brownian motion.<|reference_end|> | arxiv | @article{bayraktar2005stochastic,
title={Stochastic Differential Games in a Non-Markovian Setting},
author={Erhan Bayraktar, H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501052},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501052},
primaryClass={cs.IT cs.CE math.IT}
} | bayraktar2005stochastic |
arxiv-672532 | cs/0501053 | Relational Algebra as non-Distributive Lattice | <|reference_start|>Relational Algebra as non-Distributive Lattice: We reduce the set of classic relational algebra operators to two binary operations: natural join and generalized union. We further demonstrate that this set of operators is relationally complete and honors lattice axioms.<|reference_end|> | arxiv | @article{tropashko2005relational,
title={Relational Algebra as non-Distributive Lattice},
author={Vadim Tropashko},
journal={arXiv preprint arXiv:cs/0501053},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501053},
primaryClass={cs.DB}
} | tropashko2005relational |
arxiv-672533 | cs/0501054 | Arbitrage in Fractal Modulated Markets When the Volatility is Stochastic | <|reference_start|>Arbitrage in Fractal Modulated Markets When the Volatility is Stochastic: In this paper an arbitrage strategy is constructed for the modified Black-Scholes model driven by fractional Brownian motion or by a time changed fractional Brownian motion, when the volatility is stochastic. This latter property allows the heavy tailedness of the log returns of the stock prices to be also accounted for in addition to the long range dependence introduced by the fractional Brownian motion. Work has been done previously on this problem for the case with constant `volatility' and without a time change; here these results are extended to the case of stochastic volatility models when the modulator is fractional Brownian motion or a time change of it. (Volatility in fractional Black-Scholes models does not carry the same meaning as in the classic Black-Scholes framework, which is made clear in the text.) Since fractional Brownian motion is not a semi-martingale, the Black-Scholes differential equation is not well-defined for arbitrary predictable volatility processes. However, it is shown here that any almost surely continuous and adapted process having zero quadratic variation can act as an integrator over functions of the integrator and over the family of continuous adapted semi-martingales. Moreover it is shown that the integral also has zero quadratic variation, and therefore that the integral itself can be an integrator. This property of the integral is crucial in developing the arbitrage strategy. Since fractional Brownian motion and a time change of fractional Brownian motion have zero quadratic variation, these results are applicable to these cases in particular. The appropriateness of fractional Brownian motion as a means of modeling stock price returns is discussed as well.<|reference_end|> | arxiv | @article{bayraktar2005arbitrage,
title={Arbitrage in Fractal Modulated Markets When the Volatility is Stochastic},
author={Erhan Bayraktar, H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501054},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501054},
primaryClass={cs.IT cs.CE math.IT}
} | bayraktar2005arbitrage |
arxiv-672534 | cs/0501055 | Consistency Problems for Jump-Diffusion Models | <|reference_start|>Consistency Problems for Jump-Diffusion Models: In this paper consistency problems for multi-factor jump-diffusion models, where the jump parts follow multivariate point processes, are examined. First the gap between jump-diffusion models and generalized Heath-Jarrow-Morton (HJM) models is bridged. By applying the drift condition for a generalized arbitrage-free HJM model, the consistency condition for jump-diffusion models is derived. Then we consider a case in which the forward rate curve has a separable structure, and obtain a specific version of the general consistency condition. In particular, a necessary and sufficient condition for a jump-diffusion model to be affine is provided. Finally the Nelson-Siegel type of forward curve structures is discussed. It is demonstrated that under a regularity condition, there exists no jump-diffusion model consistent with the Nelson-Siegel curves.<|reference_end|> | arxiv | @article{bayraktar2005consistency,
title={Consistency Problems for Jump-Diffusion Models},
author={Erhan Bayraktar, Li Chen, H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501055},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501055},
primaryClass={cs.IT cs.CE math.IT}
} | bayraktar2005consistency |
arxiv-672535 | cs/0501056 | A Large Deviations Approach to Sensor Scheduling for Detection of Correlated Random Fields | <|reference_start|>A Large Deviations Approach to Sensor Scheduling for Detection of Correlated Random Fields: The problem of scheduling sensor transmissions for the detection of correlated random fields using spatially deployed sensors is considered. Using the large deviations principle, a closed-form expression for the error exponent of the miss probability is given as a function of the sensor spacing and signal-to-noise ratio (SNR). It is shown that the error exponent has a distinct characteristic: at high SNR, the error exponent is monotonically increasing with respect to sensor spacing, while at low SNR there is an optimal spacing for scheduled sensors.<|reference_end|> | arxiv | @article{sung2005a,
title={A Large Deviations Approach to Sensor Scheduling for Detection of
Correlated Random Fields},
author={Youngchul Sung, Lang Tong and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501056},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501056},
primaryClass={cs.IT math.IT}
} | sung2005a |
arxiv-672536 | cs/0501057 | Concavity of the auxiliary function appearing in quantum reliability function for classical-quantum channels | <|reference_start|>Concavity of the auxiliary function appearing in quantum reliability function for classical-quantum channels: Concavity of the auxiliary function which appears in the random coding exponent as the lower bound of the quantum reliability function for general quantum states is proven for s between 0 and 1.<|reference_end|> | arxiv | @article{fujii2005concavity,
title={Concavity of the auxiliary function appearing in quantum reliability
function for classical-quantum channels},
author={Jun Ichi Fujii, Ritsuo Nakamoto, Kenjiro Yanagi},
journal={arXiv preprint arXiv:cs/0501057},
year={2005},
doi={10.1109/ISIT.2005.1523466},
archivePrefix={arXiv},
eprint={cs/0501057},
primaryClass={cs.IT math.IT}
} | fujii2005concavity |
arxiv-672537 | cs/0501058 | Estimation of the Number of Sources in Unbalanced Arrays via Information Theoretic Criteria | <|reference_start|>Estimation of the Number of Sources in Unbalanced Arrays via Information Theoretic Criteria: Estimating the number of sources impinging on an array of sensors is a well known and well investigated problem. A common approach for solving this problem is to use an information theoretic criterion, such as Minimum Description Length (MDL) or the Akaike Information Criterion (AIC). The MDL estimator is known to be a consistent estimator, robust against deviations from the Gaussian assumption, and non-robust against deviations from the point source and/or temporally or spatially white additive noise assumptions. Over the years several alternative estimation algorithms have been proposed and tested. Usually, these algorithms are shown, using computer simulations, to have improved performance over the MDL estimator, and to be robust against deviations from the assumed spatial model. Nevertheless, these robust algorithms have high computational complexity, requiring several multi-dimensional searches. In this paper, motivated by real life problems, a systematic approach toward the problem of robust estimation of the number of sources using information theoretic criteria is taken. An MDL type estimator that is robust against deviation from assumption of equal noise level across the array is studied. The consistency of this estimator, even when deviations from the equal noise level assumption occur, is proven. A novel low-complexity implementation method avoiding the need for multi-dimensional searches is presented as well, making this estimator a favorable choice for practical applications.<|reference_end|> | arxiv | @article{fishler2005estimation,
title={Estimation of the Number of Sources in Unbalanced Arrays via Information
Theoretic Criteria},
author={Eran Fishler and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501058},
year={2005},
doi={10.1109/TSP.2005.853099},
archivePrefix={arXiv},
eprint={cs/0501058},
primaryClass={cs.IT math.IT}
} | fishler2005estimation |
arxiv-672538 | cs/0501059 | Under-approximation of the Greatest Fixpoint in Real-Time System Verification | <|reference_start|>Under-approximation of the Greatest Fixpoint in Real-Time System Verification: Techniques for the efficient successive under-approximation of the greatest fixpoint in TCTL formulas can be useful in fast refutation of inevitability properties and vacuity checking. We first give an integrated algorithmic framework for both under- and over-approximate model-checking. We design the {\em NZF (Non-Zeno Fairness) predicate}, with a greatest fixpoint formulation, as a unified framework for the evaluation of formulas like $\exists\Box\eta_1$, $\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$. We then prove the correctness of a new formulation for the characterization of the NZF predicate based on zone search and the least fixpoint evaluation. The new formulation then leads to the design of an evaluation algorithm, with the capability of successive under-approximation, for $\exists\Box\eta_1$, $\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$. We then present techniques to efficiently search for the zones and to speed up the under-approximate evaluation of those three formulas. Our experiments show that the techniques have significantly enhanced the verification performance against several benchmarks over exact model-checking.<|reference_end|> | arxiv | @article{wang2005under-approximation,
title={Under-approximation of the Greatest Fixpoint in Real-Time System
Verification},
author={Farn Wang},
journal={arXiv preprint arXiv:cs/0501059},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501059},
primaryClass={cs.SE cs.LO}
} | wang2005under-approximation |
arxiv-672539 | cs/0501060 | Under-approximation of the Greatest Fixpoints in Real-Time System Verification | <|reference_start|>Under-approximation of the Greatest Fixpoints in Real-Time System Verification: Techniques for the efficient successive under-approximation of the greatest fixpoint in TCTL formulas can be useful in fast refutation of inevitability properties and vacuity checking. We first give an integrated algorithmic framework for both under- and over-approximate model-checking. We design the {\em NZF (Non-Zeno Fairness) predicate}, with a greatest fixpoint formulation, as a unified framework for the evaluation of formulas like $\exists\Box\eta_1$, $\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$. We then prove the correctness of a new formulation for the characterization of the NZF predicate based on zone search and the least fixpoint evaluation. The new formulation then leads to the design of an evaluation algorithm, with the capability of successive under-approximation, for $\exists\Box\eta_1$, $\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$. We then present techniques to efficiently search for the zones and to speed up the under-approximate evaluation of those three formulas. Our experiments show that the techniques have significantly enhanced the verification performance against several benchmarks over exact model-checking.<|reference_end|> | arxiv | @article{wang2005under-approximation,
title={Under-approximation of the Greatest Fixpoints in Real-Time System
Verification},
author={Farn Wang},
journal={arXiv preprint arXiv:cs/0501060},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501060},
primaryClass={cs.SE cs.LO}
} | wang2005under-approximation |
arxiv-672540 | cs/0501061 | Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio Ultra-Wideband Systems | <|reference_start|>Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio Ultra-Wideband Systems: Convex relaxations of the optimal finger selection algorithm are proposed for a minimum mean square error (MMSE) Rake receiver in an impulse radio ultra-wideband system. First, the optimal finger selection problem is formulated as an integer programming problem with a non-convex objective function. Then, the objective function is approximated by a convex function and the integer programming problem is solved by means of constraint relaxation techniques. The proposed algorithms are suboptimal due to the approximate objective function and the constraint relaxation steps. However, they can be used in conjunction with the conventional finger selection algorithm, which is suboptimal on its own since it ignores the correlation between multipath components, to obtain performance reasonably close to that of the optimal scheme that cannot be implemented in practice due to its complexity. The proposed algorithms leverage convexity of the optimization problem formulations, which is the watershed between `easy' and `difficult' optimization problems.<|reference_end|> | arxiv | @article{gezici2005optimal,
title={Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake
Receivers in Impulse Radio Ultra-Wideband Systems},
author={Sinan Gezici, Mung Chiang, H. Vincent Poor and Hisashi Kobayashi},
journal={arXiv preprint arXiv:cs/0501061},
year={2005},
doi={10.1109/WCNC.2005.1424620},
archivePrefix={arXiv},
eprint={cs/0501061},
primaryClass={cs.IT math.IT}
} | gezici2005optimal |
arxiv-672541 | cs/0501062 | On The Tradeoff Between Two Types of Processing Gain | <|reference_start|>On The Tradeoff Between Two Types of Processing Gain: One of the features characterizing almost every multiple access (MA) communication system is the processing gain. Through the use of spreading sequences, the processing gain of Random CDMA (RCDMA) systems is devoted to both bandwidth expansion and orthogonalization of the signals transmitted by different users. Another type of multiple access system is Impulse Radio (IR). In many aspects, IR systems are similar to time division multiple access (TDMA) systems, and the processing gain of IR systems represents the ratio between the actual transmission time and the total time between two consecutive transmissions (on-plus-off to on ratio). While CDMA systems, which constantly excite the channel, rely on spreading sequences to orthogonalize the signals transmitted by different users, IR systems transmit a series of short pulses and the orthogonalization between the signals transmitted by different users is achieved by the fact that most of the pulses do not collide with each other at the receiver. In this paper, a general class of MA communication systems that use both types of processing gain is presented, and both IR and RCDMA systems are demonstrated to be two special cases of this more general class of systems. The bit error rate (BER) of several receivers as a function of the ratio between the two types of processing gain is analyzed and compared under the constraint that the total processing gain of the system is large and fixed. It is demonstrated that in non inter-symbol interference (ISI) channels there is no tradeoff between the two types of processing gain. However, in ISI channels a tradeoff between the two types of processing gain exists. In addition, the sub-optimality of RCDMA systems in frequency selective channels is established.<|reference_end|> | arxiv | @article{fishler2005on,
title={On The Tradeoff Between Two Types of Processing Gain},
author={Eran Fishler and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0501062},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501062},
primaryClass={cs.IT math.IT}
} | fishler2005on |
arxiv-672542 | cs/0501063 | Bandit Problems with Side Observations | <|reference_start|>Bandit Problems with Side Observations: An extension of the traditional two-armed bandit problem is considered, in which the decision maker has access to some side information before deciding which arm to pull. At each time t, before making a selection, the decision maker is able to observe a random variable X_t that provides some information on the rewards to be obtained. The focus is on finding uniformly good rules (that minimize the growth rate of the inferior sampling time) and on quantifying how much the additional information helps. Various settings are considered and for each setting, lower bounds on the achievable inferior sampling time are developed and asymptotically optimal adaptive schemes achieving these lower bounds are constructed.<|reference_end|> | arxiv | @article{wang2005bandit,
title={Bandit Problems with Side Observations},
author={Chih-Chun Wang (1) and Sanjeev R. Kulkarni (1) and H. Vincent Poor (1)
((1) Princeton University)},
journal={arXiv preprint arXiv:cs/0501063},
year={2005},
doi={10.1109/TAC.2005.844079},
archivePrefix={arXiv},
eprint={cs/0501063},
primaryClass={cs.IT cs.LG math.IT}
} | wang2005bandit |
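The setting in the entry above -- observe a side variable X_t, then pick an arm -- is easy to prototype. The sketch below conditions an epsilon-greedy rule on the side observation; the Bernoulli reward means, the binary side variable, and the decaying exploration schedule are illustrative assumptions, not the paper's asymptotically optimal rule.

```python
# Two-armed bandit with a binary side observation: keep separate empirical
# means per observed context and play (mostly) greedily within that context.
import random

random.seed(0)
means = {0: [0.3, 0.7], 1: [0.8, 0.2]}          # hypothetical E[reward | X, arm]
counts = {x: [0, 0] for x in (0, 1)}
sums = {x: [0.0, 0.0] for x in (0, 1)}

def choose(x, t):
    if random.random() < 1.0 / (t + 1):          # decaying exploration
        return random.randrange(2)
    est = [sums[x][a] / counts[x][a] if counts[x][a] else 0.0 for a in (0, 1)]
    return max((0, 1), key=lambda a: est[a])

for t in range(10000):
    x = random.randrange(2)                      # side observation arrives first
    a = choose(x, t)
    r = 1.0 if random.random() < means[x][a] else 0.0
    counts[x][a] += 1
    sums[x][a] += r

print({x: [round(s / max(c, 1), 2) for s, c in zip(sums[x], counts[x])]
       for x in (0, 1)})                          # per-context reward estimates
```

The point of conditioning is visible in the toy numbers: the better arm flips with the context, so any rule that ignores X_t must settle for the averaged means.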
arxiv-672543 | cs/0501064 | A Non-Cooperative Power Control Game for Multi-Carrier CDMA Systems | <|reference_start|>A Non-Cooperative Power Control Game for Multi-Carrier CDMA Systems: In this work, a non-cooperative power control game for multi-carrier CDMA systems is proposed. In the proposed game, each user needs to decide how much power to transmit over each carrier to maximize its overall utility. The utility function considered here measures the number of reliable bits transmitted per joule of energy consumed. It is shown that the user's utility is maximized when the user transmits only on the carrier with the best "effective channel". The existence and uniqueness of Nash equilibrium for the proposed game are investigated and the properties of equilibrium are studied. Also, an iterative and distributed algorithm for reaching the equilibrium (if it exists) is presented. It is shown that the proposed approach results in a significant improvement in the total utility achieved at equilibrium compared to the case in which each user maximizes its utility over each carrier independently.<|reference_end|> | arxiv | @article{meshkati2005a,
title={A Non-Cooperative Power Control Game for Multi-Carrier CDMA Systems},
author={Farhad Meshkati, Mung Chiang, Stuart C. Schwartz, H. Vincent Poor, and
Narayan B. Mandayam},
journal={arXiv preprint arXiv:cs/0501064},
year={2005},
doi={10.1109/WCNC.2005.1424570},
archivePrefix={arXiv},
eprint={cs/0501064},
primaryClass={cs.IT math.IT}
} | meshkati2005a |
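The structural result in the entry above -- each user should load all its power onto the carrier with the best "effective channel" -- can be mimicked with a small best-response loop. The sketch below uses a target-SIR power update rather than the paper's bits-per-joule utility, and the gains, noise level, and target are invented constants.

```python
# Iterated best response for multi-carrier power allocation (single-cell
# uplink toy): each user picks its best effective carrier and meets a target
# SIR there. Illustrative only; not the paper's exact game or utility.
import numpy as np

rng = np.random.default_rng(0)
K, N, sigma2, gamma = 3, 4, 1e-2, 5.0          # users, carriers, noise, target SIR
h = rng.rayleigh(1.0, size=(K, N)) ** 2        # channel power gains
p = np.zeros((K, N))

for _ in range(50):
    for k in range(K):
        interf = sigma2 + (h * p).sum(axis=0) - h[k] * p[k]  # per-carrier interference
        eff = h[k] / interf                                  # "effective channel"
        n = int(np.argmax(eff))                              # best carrier only
        p[k] = 0.0
        p[k, n] = gamma * interf[n] / h[k, n]                # hit the target SIR
print(np.round(p, 3))  # at a fixed point, each row has a single nonzero entry
```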
arxiv-672544 | cs/0501065 | Harmonic Analysis | <|reference_start|>Harmonic Analysis: This paper describes a method of calculating the transforms currently obtained via Fourier and inverse Fourier transforms. The method allows efficient calculation of the transforms of a signal whose digital representation has arbitrary dimension, by reducing the transform to a vector-by-circulant-matrix multiplication. There is a connection between harmonic equations in rectangular and polar coordinate systems. This connection is established here and used to create a very robust iterative algorithm for conformal mapping calculation. A new ratio of two oscillative signals (and an efficient way of computing it) is also suggested.<|reference_end|> | arxiv | @article{clue2005harmonic,
title={Harmonic Analysis},
author={Vladimir I Clue},
journal={arXiv preprint arXiv:cs/0501065},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501065},
primaryClass={cs.NA cs.DM}
} | clue2005harmonic |
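The vector-by-circulant-matrix product mentioned in the entry above has a standard fast realization: a circulant matrix is diagonalized by the DFT, so the product is a circular convolution computable with FFTs in O(n log n). The check below illustrates that identity (my reading of the abstract, not the author's specific construction).

```python
# y = C x for a circulant C, via FFT: C x = ifft(fft(c) * fft(x)),
# where c is the first column of C.
import numpy as np

def circulant_matvec(c, x):
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([1.0, 2.0, 0.0, 3.0])
x = np.array([4.0, 1.0, 0.0, 2.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])  # explicit C
print(np.allclose(C @ x, circulant_matvec(c, x)))  # True
```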
arxiv-672545 | cs/0501066 | The Noncoherent Rician Fading Channel -- Part I : Structure of the Capacity-Achieving Input | <|reference_start|>The Noncoherent Rician Fading Channel -- Part I : Structure of the Capacity-Achieving Input: Transmission of information over a discrete-time memoryless Rician fading channel is considered where neither the receiver nor the transmitter knows the fading coefficients. First the structure of the capacity-achieving input signals is investigated when the input is constrained to have limited peakedness by imposing either a fourth moment or a peak constraint. When the input is subject to second and fourth moment limitations, it is shown that the capacity-achieving input amplitude distribution is discrete with a finite number of mass points in the low-power regime. A similar discrete structure for the optimal amplitude is proven over the entire SNR range when there is only a peak power constraint. The Rician fading with phase-noise channel model, where there is phase uncertainty in the specular component, is analyzed. For this model it is shown that, with only an average power constraint, the capacity-achieving input amplitude is discrete with a finite number of levels. For the classical average power limited Rician fading channel, it is proven that the optimal input amplitude distribution has bounded support.<|reference_end|> | arxiv | @article{gursoy2005the,
title={The Noncoherent Rician Fading Channel -- Part I : Structure of the
Capacity-Achieving Input},
author={Mustafa Cenk Gursoy, H. Vincent Poor and Sergio Verdu},
journal={arXiv preprint arXiv:cs/0501066},
year={2005},
doi={10.1109/TWC.2005.853970},
archivePrefix={arXiv},
eprint={cs/0501066},
primaryClass={cs.IT math.IT}
} | gursoy2005the |
arxiv-672546 | cs/0501067 | The Noncoherent Rician Fading Channel -- Part II : Spectral Efficiency in the Low-Power Regime | <|reference_start|>The Noncoherent Rician Fading Channel -- Part II : Spectral Efficiency in the Low-Power Regime: Transmission of information over a discrete-time memoryless Rician fading channel is considered where neither the receiver nor the transmitter knows the fading coefficients. The spectral-efficiency/bit-energy tradeoff in the low-power regime is examined when the input has limited peakedness. It is shown that if a fourth moment input constraint is imposed or the input peak-to-average power ratio is limited, then in contrast to the behavior observed in average power limited channels, the minimum bit energy is not always achieved at zero spectral efficiency. The low-power performance is also characterized when there is a fixed peak limit that does not vary with the average power. A new signaling scheme that overlays phase-shift keying on on-off keying is proposed and shown to be optimally efficient in the low-power regime.<|reference_end|> | arxiv | @article{gursoy2005the,
title={The Noncoherent Rician Fading Channel -- Part II : Spectral Efficiency
in the Low-Power Regime},
author={Mustafa Cenk Gursoy, H. Vincent Poor and Sergio Verdu},
journal={arXiv preprint arXiv:cs/0501067},
year={2005},
doi={10.1109/TWC.2005.853971},
archivePrefix={arXiv},
eprint={cs/0501067},
primaryClass={cs.IT math.IT}
} | gursoy2005the |
arxiv-672547 | cs/0501068 | Learning to automatically detect features for mobile robots using second-order Hidden Markov Models | <|reference_start|>Learning to automatically detect features for mobile robots using second-order Hidden Markov Models: In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.<|reference_end|> | arxiv | @article{aycard2005learning,
title={Learning to automatically detect features for mobile robots using
second-order Hidden Markov Models},
author={Olivier Aycard (GRAVIR - Imag, Orpailleur Loria), Jean-Francois Mari
(ORPAILLEUR Loria), Richard Washington},
journal={arXiv preprint arXiv:cs/0501068},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501068},
primaryClass={cs.AI}
} | aycard2005learning |
arxiv-672548 | cs/0501069 | A Statistical Theory of Chord under Churn | <|reference_start|>A Statistical Theory of Chord under Churn: Most earlier studies of Distributed Hash Tables (DHTs) under churn have either depended on simulations as the primary investigation tool, or on establishing bounds for DHTs to function. In this paper, we present a complete analytical study of churn using a master-equation-based approach, used traditionally in non-equilibrium statistical mechanics to describe steady-state or transient phenomena. Simulations are used to verify all theoretical predictions. We demonstrate the application of our methodology to the Chord system. For any rate of churn and stabilization rates, and any system size, we accurately predict the fraction of failed or incorrect successor and finger pointers and show how we can use these quantities to predict the performance and consistency of lookups under churn. We also discuss briefly how churn may actually be of different 'types' and the implications this will have for the functioning of DHTs in general.<|reference_end|> | arxiv | @article{krishnamurthy2005a,
title={A Statistical Theory of Chord under Churn},
author={Supriya Krishnamurthy, Sameh El-Ansary, Erik Aurell and Seif Haridi},
journal={arXiv preprint arXiv:cs/0501069},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501069},
primaryClass={cs.NI cond-mat.stat-mech cs.DC}
} | krishnamurthy2005a |
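The central steady-state quantity in the entry above -- the fraction of failed successor pointers under competing failure and stabilization rates -- can be eyeballed with a toy birth-death simulation. The event model and rates below are illustrative assumptions, not the paper's master-equation analysis of Chord.

```python
# Each event either makes a random node's successor pointer stale (failure)
# or repairs it (stabilization); in steady state a pointer is stale with
# probability lambda_fail / (lambda_fail + lambda_stabilize).
import random

random.seed(0)
N, steps = 500, 200000
p_fail = 1.0 / 6.0                      # failures as a fraction of all events
dead = [False] * N
dead_count, acc = 0, 0

for _ in range(steps):
    i = random.randrange(N)
    if random.random() < p_fail:
        dead_count += not dead[i]       # successor failed, pointer now stale
        dead[i] = True
    else:
        dead_count -= dead[i]           # stabilization repairs the pointer
        dead[i] = False
    acc += dead_count

print("avg fraction of stale successor pointers:", acc / (steps * N))
# ~ 1/6, the failure share of the event rate
```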
arxiv-672549 | cs/0501070 | Extending Design by Contract for Aspect-Oriented Programming | <|reference_start|>Extending Design by Contract for Aspect-Oriented Programming: Design by Contract (DbC) and runtime enforcement of program assertions enable the construction of more robust software. They also enable the assignment of blame in error reporting. Unfortunately, there is no support for runtime contract enforcement and blame assignment for Aspect-Oriented Programming (AOP). Extending DbC to also cover aspects brings forward a plethora of issues related to the correct order of assertion validation. We show that there is no generally correct execution sequence of object assertions and aspect assertions. A further classification of aspects as agnostic, obedient, or rebellious defines the order of assertion validation that needs to be followed. We describe the application of this classification in a prototyped DbC tool for AOP named Cona, where aspects are used for implementing contracts, and contracts are used for enforcing assertions on aspects.<|reference_end|> | arxiv | @article{lorenz2005extending,
title={Extending Design by Contract for Aspect-Oriented Programming},
author={David H. Lorenz, Therapon Skotiniotis},
journal={arXiv preprint arXiv:cs/0501070},
year={2005},
number={NU-CCIS-04-14},
archivePrefix={arXiv},
eprint={cs/0501070},
primaryClass={cs.SE cs.PL}
} | lorenz2005extending |
arxiv-672550 | cs/0501071 | Capacity Regions and Optimal Power Allocation for Groupwise Multiuser Detection | <|reference_start|>Capacity Regions and Optimal Power Allocation for Groupwise Multiuser Detection: In this paper, optimal power allocation and capacity regions are derived for GSIC (groupwise successive interference cancellation) systems operating in multipath fading channels, under imperfect channel estimation conditions. It is shown that the impact of channel estimation errors on the system capacity is two-fold: it affects the receivers' performance within a group of users, as well as the cancellation performance (through cancellation errors). An iterative power allocation algorithm is derived, based on which it can be shown that the total required received power is minimized when the groups are ordered according to their cancellation errors, and the first detected group has the smallest cancellation error. Performance/complexity tradeoff issues are also discussed by directly comparing the system capacity for different implementations: GSIC with linear minimum-mean-square error (LMMSE) receivers within the detection groups, GSIC with matched filter receivers, multicode LMMSE systems, and simple all-matched-filter receiver systems.<|reference_end|> | arxiv | @article{comaniciu2005capacity,
title={Capacity Regions and Optimal Power Allocation for Groupwise Multiuser
Detection},
author={C. Comaniciu and H. V. Poor},
journal={arXiv preprint arXiv:cs/0501071},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501071},
primaryClass={cs.IT math.IT}
} | comaniciu2005capacity |
arxiv-672551 | cs/0501072 | Inferring knowledge from a large semantic network | <|reference_start|>Inferring knowledge from a large semantic network: In this paper, we present a rich semantic network based on a differential analysis. We then detail implemented measures that take into account common and differential features between words. In the last section, we describe some industrial applications.<|reference_end|> | arxiv | @article{dutoit2005inferring,
title={Inferring knowledge from a large semantic network},
author={Dominique Dutoit, Thierry Poibeau (LIPN)},
journal={Inferring knowledge from a large semantic network (2002) 232-238},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501072},
primaryClass={cs.AI}
} | dutoit2005inferring |
arxiv-672552 | cs/0501073 | Optimal Union-Find in Constraint Handling Rules | <|reference_start|>Optimal Union-Find in Constraint Handling Rules: Constraint Handling Rules (CHR) is a committed-choice rule-based language that was originally intended for writing constraint solvers. In this paper we show that it is also possible to write the classic union-find algorithm and variants in CHR. The programs neither compromise in declarativeness nor efficiency. We study the time complexity of our programs: they match the almost-linear complexity of the best known imperative implementations. This fact is illustrated with experimental results.<|reference_end|> | arxiv | @article{schrijvers2005optimal,
title={Optimal Union-Find in Constraint Handling Rules},
author={Tom Schrijvers and Thom Fruehwirth},
journal={arXiv preprint arXiv:cs/0501073},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501073},
primaryClass={cs.PL cs.CC cs.DS cs.PF}
} | schrijvers2005optimal |
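For readers unfamiliar with the algorithm the entry above encodes in CHR, here is the classic imperative union-find with union by rank and path halving -- a generic reference implementation whose almost-linear complexity is the benchmark the CHR programs are shown to match, not a translation of the CHR code itself.

```python
# Disjoint-set forest: find with path halving, union by rank.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                 # attach the lower-rank root
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1); uf.union(3, 4)
print(uf.find(1) == uf.find(0), uf.find(0) == uf.find(3))  # True False
```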
arxiv-672553 | cs/0501074 | Efficient Computation of the Characteristic Polynomial | <|reference_start|>Efficient Computation of the Characteristic Polynomial: This article deals with the computation of the characteristic polynomial of dense matrices over small finite fields and over the integers. We first present two algorithms for the finite fields: one is based on Krylov iterates and Gaussian elimination. We compare it to an improvement of the second algorithm of Keller-Gehrig. Then we show that a generalization of Keller-Gehrig's third algorithm could improve both complexity and computational time. We use these results as a basis for the computation of the characteristic polynomial of integer matrices. We first use early termination and Chinese remaindering for dense matrices. Then a probabilistic approach, based on integer minimal polynomial and Hensel factorization, is particularly well suited to sparse and/or structured matrices.<|reference_end|> | arxiv | @article{dumas2005efficient,
title={Efficient Computation of the Characteristic Polynomial},
author={Jean-Guillaume Dumas (LMC - IMAG), Clément Pernet (LMC - IMAG),
Zhendong Wan (CIS)},
journal={arXiv preprint arXiv:cs/0501074},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501074},
primaryClass={cs.SC}
} | dumas2005efficient |
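As a simple exact baseline for the problem in the entry above, the characteristic polynomial of a small integer matrix can be computed with the classical Faddeev-LeVerrier recurrence. This baseline is mine for illustration; it is neither the Krylov/elimination algorithm nor the Keller-Gehrig variants studied in the paper.

```python
# Faddeev-LeVerrier: M <- A M + c_k I with c_k = -tr(A M)/k yields the
# coefficients of det(xI - A) = x^n + c_1 x^(n-1) + ... + c_n exactly.
from fractions import Fraction

def charpoly(A):
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    coeffs = []
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) / k
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs

print(charpoly([[2, 1], [1, 2]]))  # [Fraction(-4, 1), Fraction(3, 1)]: x^2 - 4x + 3
```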
arxiv-672554 | cs/0501075 | Simple extractors via constructions of cryptographic pseudo-random generators | <|reference_start|>Simple extractors via constructions of cryptographic pseudo-random generators: Trevisan has shown that constructions of pseudo-random generators from hard functions (the Nisan-Wigderson approach) also produce extractors. We show that constructions of pseudo-random generators from one-way permutations (the Blum-Micali-Yao approach) can be used for building extractors as well. Using this new technique we build extractors that do not use designs and polynomial-based error-correcting codes and that are very simple and efficient. For example, one extractor produces each output bit separately in $O(\log^2 n)$ time. These extractors work for weak sources with min entropy $\lambda n$, for arbitrary constant $\lambda > 0$, have seed length $O(\log^2 n)$, and their output length is $\approx n^{\lambda/3}$.<|reference_end|> | arxiv | @article{zimand2005simple,
title={Simple extractors via constructions of cryptographic pseudo-random
generators},
author={Marius Zimand},
journal={arXiv preprint arXiv:cs/0501075},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501075},
primaryClass={cs.CC cs.CR}
} | zimand2005simple |
arxiv-672555 | cs/0501076 | Geometric Complexity III: on deciding positivity of Littlewood-Richardson coefficients | <|reference_start|>Geometric Complexity III: on deciding positivity of Littlewood-Richardson coefficients: We point out that the remarkable Knutson and Tao Saturation Theorem and polynomial time algorithms for LP have together an important and immediate consequence in Geometric Complexity Theory. The problem of deciding positivity of Littlewood-Richardson coefficients for $GL_n(\mathbb{C})$ belongs to P. Furthermore, the algorithm is strongly polynomial. The main goal of this article is to explain the significance of this result in the context of Geometric Complexity Theory. Furthermore, it is also conjectured that an analogous result holds for arbitrary symmetrizable Kac-Moody algebras.<|reference_end|> | arxiv | @article{mulmuley2005geometric,
title={Geometric Complexity III: on deciding positivity of
Littlewood-Richardson coefficients},
author={Ketan D. Mulmuley and Milind Sohoni},
journal={arXiv preprint arXiv:cs/0501076},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501076},
primaryClass={cs.CC math.RT}
} | mulmuley2005geometric |
arxiv-672556 | cs/0501077 | Ontology-Based Users & Requests Clustering in Customer Service Management System | <|reference_start|>Ontology-Based Users & Requests Clustering in Customer Service Management System: Customer Service Management is one of the major business activities undertaken to better serve company customers through the introduction of reliable processes and procedures. Today this kind of activity is implemented through e-services to directly involve customers in business processes. Traditionally, Customer Service Management involves the application of data mining techniques to discover usage patterns from the company knowledge memory. Grouping customers/requests into clusters is hence one of the major techniques for improving the level of company customization. The goal of this paper is to present an efficiently implementable approach for clustering users and their requests. The approach uses an ontology as the knowledge representation model to improve the semantic interoperability between units of the company and customers. Some fragments of the approach, tested in an industrial company, are also presented in the paper.<|reference_end|> | arxiv | @article{smirnov2005ontology-based,
title={Ontology-Based Users & Requests Clustering in Customer Service
Management System},
author={Alexander Smirnov, Mikhail Pashkin, Nikolai Chilov, Tatiana Levashova,
Andrew Krizhanovsky and Alexey Kashevnik},
journal={Smirnov A., Pashkin M., Chilov N., Levashova T., Krizhanovsky A.,
Kashevnik A. 2005. Ontology-Based Users and Requests Clustering in Customer
Service Management System. Springer-Verlag GmbH, Lecture Notes in Computer
Science, 3505: 231-246},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501077},
primaryClass={cs.IR cs.CL}
} | smirnov2005ontology-based |
arxiv-672557 | cs/0501078 | Multi-document Biography Summarization | <|reference_start|>Multi-document Biography Summarization: In this paper we describe a biography summarization system using sentence classification and ideas from information retrieval. Although the individual techniques are not new, assembling and applying them to generate multi-document biographies is new. Our system was evaluated in DUC2004. It is among the top performers in task 5 (short summaries focused by person questions).<|reference_end|> | arxiv | @article{zhou2005multi-document,
title={Multi-document Biography Summarization},
author={Liang Zhou, Miruna Ticrea and Eduard Hovy},
journal={Proceedings of EMNLP, pp. 434-441, 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501078},
primaryClass={cs.CL}
} | zhou2005multi-document |
arxiv-672558 | cs/0501079 | Data Mining for Actionable Knowledge: A Survey | <|reference_start|>Data Mining for Actionable Knowledge: A Survey: The data mining process consists of a series of steps ranging from data cleaning, data selection and transformation, to pattern evaluation and visualization. One of the central problems in data mining is to make the mined patterns or knowledge actionable. Here, the term actionable means that the mined patterns suggest concrete and profitable actions to the decision-maker. That is, the user can do something to bring direct benefits (increase in profits, reduction in cost, improvement in efficiency, etc.) to the organization's advantage. However, no comprehensive survey on this topic has been written yet. The goal of this paper is to fill the void. In this paper, we first present two frameworks for mining actionable knowledge that are implicitly adopted by existing research methods. Then we try to situate some of the research on this topic from two different viewpoints: 1) data mining tasks and 2) adopted framework. Finally, we specify issues that are either not addressed or insufficiently studied yet and conclude the paper.<|reference_end|> | arxiv | @article{he2005data,
title={Data Mining for Actionable Knowledge: A Survey},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0501079},
year={2005},
number={Tr-05-01},
archivePrefix={arXiv},
eprint={cs/0501079},
primaryClass={cs.DB cs.AI}
} | he2005data |
arxiv-672559 | cs/0501080 | An Information Network Overlay Architecture for the NSDL | <|reference_start|>An Information Network Overlay Architecture for the NSDL: We describe the underlying data model and implementation of a new architecture for the National Science Digital Library (NSDL) by the Core Integration Team (CI). The architecture is based on the notion of an information network overlay. This network, implemented as a graph of digital objects in a Fedora repository, allows the representation of multiple information entities and their relationships. The architecture provides the framework for contextualization and reuse of resources, which we argue is essential for the utility of the NSDL as a tool for teaching and learning.<|reference_end|> | arxiv | @article{lagoze2005an,
title={An Information Network Overlay Architecture for the NSDL},
author={Carl Lagoze, Dean B. Krafft, Susan Jesuroga, Tim Cornwell, Ellen J.
Cramer, Eddie Shin},
journal={arXiv preprint arXiv:cs/0501080},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501080},
primaryClass={cs.DL}
} | lagoze2005an |
arxiv-672560 | cs/0501081 | A Tree Search Method for Iterative Decoding of Underdetermined Multiuser Systems | <|reference_start|>A Tree Search Method for Iterative Decoding of Underdetermined Multiuser Systems: Application of the turbo principle to multiuser decoding results in an exchange of probability distributions between two sets of constraints. Firstly, constraints imposed by the multiple-access channel, and secondly, individual constraints imposed by each user's error control code. A-posteriori probability computation for the first set of constraints is prohibitively complex for all but a small number of users. Several lower complexity approaches have been proposed in the literature. One class of methods is based on linear filtering (e.g. LMMSE). A more recent approach is to compute approximations to the posterior probabilities by marginalising over a subset of sequences (list detection). Most of the list detection methods are restricted to non-singular systems. In this paper, we introduce a transformation that permits application of standard tree-search methods to underdetermined systems. We find that the resulting tree-search based receiver outperforms existing methods.<|reference_end|> | arxiv | @article{kind2005a,
title={A Tree Search Method for Iterative Decoding of Underdetermined Multiuser
Systems},
author={Adriel Kind and Alex Grant},
journal={arXiv preprint arXiv:cs/0501081},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501081},
primaryClass={cs.IT math.IT}
} | kind2005a |
arxiv-672561 | cs/0501082 | A Group-Theoretic Approach to the WSSUS Pulse Design Problem | <|reference_start|>A Group-Theoretic Approach to the WSSUS Pulse Design Problem: We consider the pulse design problem in multicarrier transmission where the pulse shapes are adapted to the second order statistics of the WSSUS channel. Even though the problem has been addressed by many authors, analytical insights are rather limited. First we show that the problem is equivalent to the pure state channel fidelity in quantum information theory. Next we present a new approach where the original optimization functional is related to an eigenvalue problem for a pseudo differential operator by utilizing unitary representations of the Weyl-Heisenberg group. A local approximation of the operator for underspread channels is derived which implicitly covers the concepts of pulse scaling and optimal phase space displacement. The problem is reformulated as a differential equation and the optimal pulses occur as eigenstates of the harmonic oscillator Hamiltonian. Furthermore this operator-algebraic approach is extended to provide exact solutions for different classes of scattering environments.<|reference_end|> | arxiv | @article{jung2005a,
title={A Group-Theoretic Approach to the WSSUS Pulse Design Problem},
author={Peter Jung and Gerhard Wunder},
journal={arXiv preprint arXiv:cs/0501082},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501082},
primaryClass={cs.IT math.IT}
} | jung2005a |
arxiv-672562 | cs/0501083 | Orchestrating Metadata Enhancement Services: Introducing Lenny | <|reference_start|>Orchestrating Metadata Enhancement Services: Introducing Lenny: Harvested metadata often suffers from uneven quality to the point that utility is compromised. Although some aggregators have developed methods for evaluating and repairing specific metadata problems, it has been unclear how these methods might be scaled into services that can be used within an automated production environment. The National Science Digital Library (NSDL), as part of its work with INFOMINE, has developed a model of service interaction that enables loosely-coupled third party services to provide metadata enhancements to a central repository, with interactions orchestrated by a centralized software application.<|reference_end|> | arxiv | @article{phipps2005orchestrating,
title={Orchestrating Metadata Enhancement Services: Introducing Lenny},
author={Jon Phipps, Diane I. Hillmann, Gordon Paynter},
journal={arXiv preprint arXiv:cs/0501083},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501083},
primaryClass={cs.DL}
} | phipps2005orchestrating |
arxiv-672563 | cs/0501084 | Towards Automated Integration of Guess and Check Programs in Answer Set Programming: A Meta-Interpreter and Applications | <|reference_start|>Towards Automated Integration of Guess and Check Programs in Answer Set Programming: A Meta-Interpreter and Applications: Answer set programming (ASP) with disjunction offers a powerful tool for declaratively representing and solving hard problems. Many NP-complete problems can be encoded in the answer set semantics of logic programs in a very concise and intuitive way, where the encoding reflects the typical "guess and check" nature of NP problems: The property is encoded in a way such that polynomial size certificates for it correspond to stable models of a program. However, the problem-solving capacity of full disjunctive logic programs (DLPs) is beyond NP, and captures a class of problems at the second level of the polynomial hierarchy. While these problems also have a clear "guess and check" structure, finding an encoding in a DLP reflecting this structure may sometimes be a non-obvious task, in particular if the "check" itself is a coNP-complete problem; usually, such problems are solved by interleaving separate guess and check programs, where the check is expressed by inconsistency of the check program. In this paper, we present general transformations of head-cycle free (extended) disjunctive logic programs into stratified and positive (extended) disjunctive logic programs based on meta-interpretation techniques. The answer sets of the original and the transformed program are in simple correspondence, and, moreover, inconsistency of the original program is indicated by a designated answer set of the transformed program. Our transformations facilitate the integration of separate "guess" and "check" programs, which are often easy to obtain, automatically into a single disjunctive logic program. Our results complement recent results on meta-interpretation in ASP, and extend methods and techniques for a declarative "guess and check" problem solving paradigm through ASP.<|reference_end|> | arxiv | @article{eiter2005towards,
title={Towards Automated Integration of Guess and Check Programs in Answer Set
Programming: A Meta-Interpreter and Applications},
author={Thomas Eiter and Axel Polleres},
journal={arXiv preprint arXiv:cs/0501084},
year={2005},
number={1843-04-01},
archivePrefix={arXiv},
eprint={cs/0501084},
primaryClass={cs.AI}
} | eiter2005towards |
arxiv-672564 | cs/0501085 | Space Frequency Codes from Spherical Codes | <|reference_start|>Space Frequency Codes from Spherical Codes: A new design method for high rate, fully diverse ('spherical') space frequency codes for MIMO-OFDM systems is proposed, which works for arbitrary numbers of antennas and subcarriers. The construction exploits a differential geometric connection between spherical codes and space time codes. The former are well studied e.g. in the context of optimal sequence design in CDMA systems, while the latter serve as basic building blocks for space frequency codes. In addition a decoding algorithm with moderate complexity is presented. This is achieved by a lattice based construction of spherical codes, which permits lattice decoding algorithms and thus offers a substantial reduction of complexity.<|reference_end|> | arxiv | @article{henkel2005space,
title={Space Frequency Codes from Spherical Codes},
author={Oliver Henkel},
journal={arXiv preprint arXiv:cs/0501085},
year={2005},
doi={10.1109/ISIT.2005.1523553},
archivePrefix={arXiv},
eprint={cs/0501085},
primaryClass={cs.IT math.IT}
} | henkel2005space |
arxiv-672565 | cs/0501086 | Clever Search: A WordNet Based Wrapper for Internet Search Engines | <|reference_start|>Clever Search: A WordNet Based Wrapper for Internet Search Engines: This paper presents an approach to enhance search engines with information about word senses available in WordNet. The approach exploits information about the conceptual relations within the lexical-semantic net. In the wrapper for search engines presented, WordNet information is used to specify the user's request or to classify the results of a publicly available web search engine, like Google, Yahoo, etc.<|reference_end|> | arxiv | @article{kruse2005clever,
title={Clever Search: A WordNet Based Wrapper for Internet Search Engines},
author={Peter M. Kruse and Andre Naujoks and Dietmar Roesner and Manuela Kunze},
journal={Proceedings of 2nd GermaNet Workshop 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501086},
primaryClass={cs.AI}
} | kruse2005clever |
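The core move in the entry above -- consulting a lexical-semantic net before handing a query to a search engine -- looks roughly like the NLTK/WordNet sketch below. The English WordNet stands in for GermaNet here, and the expansion policy (synonyms plus direct hypernyms) is my assumption, not the authors' wrapper; running it requires nltk with the wordnet corpus downloaded.

```python
# Expand a query term with WordNet synonyms and hypernym lemmas.
from nltk.corpus import wordnet as wn

def expand(term):
    expansions = set()
    for synset in wn.synsets(term):
        expansions.update(l.name().replace("_", " ") for l in synset.lemmas())
        for hyper in synset.hypernyms():
            expansions.update(l.name().replace("_", " ") for l in hyper.lemmas())
    return expansions - {term}

print(sorted(expand("car"))[:10])  # e.g. 'auto', 'automobile', 'motor vehicle', ...
```

The expanded terms can then be OR-ed into the request sent to the backend engine, or used to bucket its results by sense.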
arxiv-672566 | cs/0501087 | Faults and Improvements of an Enhanced Remote User Authentication Scheme Using Smart Cards | <|reference_start|>Faults and Improvements of an Enhanced Remote User Authentication Scheme Using Smart Cards: In 2000, Hwang and Li proposed a remote user authentication scheme using smart cards to solve the problems of the Lamport scheme. Later, Chan-Chang, Shen-Lin-Hwang, and then Chang-Hwang pointed out some attacks on the Hwang-Li scheme. In 2003, Shen, Lin and Hwang also proposed a modified scheme to remove these attacks. In the same year, Leung-Cheng-Fong-Chan showed that the modified scheme proposed by Shen-Lin-Hwang is still insecure. In 2004, Awasthi and Lal enhanced Shen-Lin-Hwang's scheme to overcome its security pitfalls. This paper shows that the user U/smart card does not provide complete information for the execution and proper running of the login phase of the Awasthi-Lal scheme. Furthermore, this paper also modifies the Awasthi-Lal scheme so that it functions properly.<|reference_end|> | arxiv | @article{kumar2005faults,
title={Faults and Improvements of an Enhanced Remote User Authentication Scheme
Using Smart Cards},
author={Manoj Kumar},
journal={arXiv preprint arXiv:cs/0501087},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501087},
primaryClass={cs.CR}
} | kumar2005faults |
arxiv-672567 | cs/0501088 | Information estimations and analysis of structures | <|reference_start|>Information estimations and analysis of structures: This paper presents the results of an information analysis of structures. The obtained information estimations (IE) are based on Shannon's entropy measure. The obtained IE is univalent both for non-isomorphic and for isomorphic graphs; algorithmically, it is asymptotically stable and has vector character. IE can be used for ranking structures by preference, for evaluating the structurization of a subject area, and for solving problems of structural optimization. Information estimations and the method of information analysis of structures can be used in many fields of knowledge (electrical systems and circuits, image recognition, computer technology, databases and knowledge bases, organic chemistry, biology, and others) and can be a basis for a structure calculus.<|reference_end|> | arxiv | @article{shaydurov2005information,
title={Information estimations and analysis of structures},
author={Alexander Shaydurov},
journal={arXiv preprint arXiv:cs/0501088},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501088},
primaryClass={cs.IT math.IT}
} | shaydurov2005information |
arxiv-672568 | cs/0501089 | Issues in Exploiting GermaNet as a Resource in Real Applications | <|reference_start|>Issues in Exploiting GermaNet as a Resource in Real Applications: This paper reports on experiments with GermaNet as a resource within domain-specific document analysis. The main question to be answered is: how good is the coverage of GermaNet in a specific domain? We report results of a field test of GermaNet for analyses of autopsy protocols and present a sketch of the integration of GermaNet inside XDOC. Our remarks will contribute to a GermaNet user's wish list.<|reference_end|> | arxiv | @article{kunze2005issues,
title={Issues in Exploiting GermaNet as a Resource in Real Applications},
author={Manuela Kunze and Dietmar Roesner},
journal={arXiv preprint arXiv:cs/0501089},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501089},
primaryClass={cs.AI}
} | kunze2005issues |
arxiv-672569 | cs/0501090 | Stochastic Iterative Decoders | <|reference_start|>Stochastic Iterative Decoders: This paper presents a stochastic algorithm for iterative error control decoding. We show that the stochastic decoding algorithm is an approximation of the sum-product algorithm. When the code's factor graph is a tree, as with trellises, the algorithm approaches maximum a-posteriori decoding. We also demonstrate a stochastic approximation to the alternative update rule known as successive relaxation. Stochastic decoders have very simple digital implementations that have almost no RAM requirements. We present example stochastic decoders for a trellis-based Hamming code, and for a Block Turbo code constructed from Hamming codes.<|reference_end|> | arxiv | @article{winstead2005stochastic,
title={Stochastic Iterative Decoders},
author={Chris Winstead, Anthony Rapley, Vincent C. Gaudet and Christian
Schlegel},
journal={arXiv preprint arXiv:cs/0501090},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501090},
primaryClass={cs.IT math.IT}
} | winstead2005stochastic |
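The primitive behind decoders like the one in the entry above is stochastic computing: a probability is represented as a Bernoulli bit stream, and the product of two probabilities -- the core operation in sum-product message updates -- reduces to a bitwise AND of independent streams. A toy demonstration (generic background, not the paper's decoder):

```python
# Stochastic multiplication: AND of independent Bernoulli(p1), Bernoulli(p2)
# streams is Bernoulli(p1*p2), so the bit average estimates the product.
import random

random.seed(0)

def stream(p, n):
    return [random.random() < p for _ in range(n)]

n = 100000
a, b = stream(0.8, n), stream(0.6, n)
prod = [x and y for x, y in zip(a, b)]   # one AND gate per bit, no RAM needed
print(sum(prod) / n)                     # ~ 0.48 = 0.8 * 0.6
```

This is why the hardware is so simple: the arithmetic lives in single gates operating on bit streams rather than in multipliers and memories.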
arxiv-672570 | cs/0501091 | A complexity-regularized quantization approach to nonlinear dimensionality reduction | <|reference_start|>A complexity-regularized quantization approach to nonlinear dimensionality reduction: We consider the problem of nonlinear dimensionality reduction: given a training set of high-dimensional data whose ``intrinsic'' low dimension is assumed known, find a feature extraction map to low-dimensional space, a reconstruction map back to high-dimensional space, and a geometric description of the dimension-reduced data as a smooth manifold. We introduce a complexity-regularized quantization approach for fitting a Gaussian mixture model to the training set via a Lloyd algorithm. Complexity regularization controls the trade-off between adaptation to the local shape of the underlying manifold and global geometric consistency. The resulting mixture model is used to design the feature extraction and reconstruction maps and to define a Riemannian metric on the low-dimensional data. We also sketch a proof of consistency of our scheme for the purposes of estimating the unknown underlying pdf of high-dimensional data.<|reference_end|> | arxiv | @article{raginsky2005a,
title={A complexity-regularized quantization approach to nonlinear
dimensionality reduction},
author={Maxim Raginsky},
journal={arXiv preprint arXiv:cs/0501091},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501091},
primaryClass={cs.IT math.IT}
} | raginsky2005a |
arxiv-672571 | cs/0501092 | Multi-Vehicle Cooperative Control Using Mixed Integer Linear Programming | <|reference_start|>Multi-Vehicle Cooperative Control Using Mixed Integer Linear Programming: We present methods to synthesize cooperative strategies for multi-vehicle control problems using mixed integer linear programming. Complex multi-vehicle control problems are expressed as mixed logical dynamical systems. Optimal strategies for these systems are then solved for using mixed integer linear programming. We motivate the methods on problems derived from an adversarial game between two teams of robots called RoboFlag. We assume the strategy for one team is fixed and governed by state machines. The strategy for the other team is generated using our methods. Finally, we perform an average case computational complexity study on our approach.<|reference_end|> | arxiv | @article{earl2005multi-vehicle,
title={Multi-Vehicle Cooperative Control Using Mixed Integer Linear Programming},
author={Matthew G. Earl and Raffaello D'Andrea},
journal={M. G. Earl and R. D'Andrea, "Multi-Vehicle Cooperative Control
using Mixed Integer Linear Programming," In Cooperative Control of
Distributed Multi-Agent Systems, J. S. Shamma ed., John Wiley & Sons, 2007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501092},
primaryClass={cs.RO cs.AI cs.MA}
} | earl2005multi-vehicle |
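The big-M pattern that lets a MILP encode the logical rules of a mixed logical dynamical system, as in the entry above, fits in a few lines. The 1-D "keep-out zone" toy below is invented for illustration and uses the PuLP modeling library as one possible front end (an assumption; any MILP solver works).

```python
# Minimize distance to a goal at x = 5 while staying out of the interval
# (4, 6): the disjunction x <= 4 OR x >= 6 is encoded with a binary z and
# a big-M constant, the standard MLD/MILP trick for logical constraints.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

M = 100
prob = LpProblem("avoid", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=10)   # 1-D vehicle position (toy)
t = LpVariable("t", lowBound=0)               # t >= |x - 5|
z = LpVariable("z", cat=LpBinary)             # which side of the obstacle
prob += t                                     # objective: get close to 5
prob += t >= x - 5
prob += t >= 5 - x
prob += x <= 4 + M * z                        # active when z = 0
prob += x >= 6 - M * (1 - z)                  # active when z = 1
prob.solve()
print(value(x), value(t))                     # 4.0 1.0 (or 6.0 1.0)
```

Stacking such disjunctions over vehicles and time steps is what turns the cooperative control problem into one large MILP.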
arxiv-672572 | cs/0501093 | Transforming Business Rules Into Natural Language Text | <|reference_start|>Transforming Business Rules Into Natural Language Text: The aim of the project presented in this paper is to design a system for an NLG architecture, which supports the documentation process of eBusiness models. A major task is to enrich the formal description of an eBusiness model with additional information needed in an NLG task.<|reference_end|> | arxiv | @article{kunze2005transforming,
title={Transforming Business Rules Into Natural Language Text},
author={Manuela Kunze and Dietmar Roesner},
journal={in Proceedings of IWCS-6, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501093},
primaryClass={cs.AI}
} | kunze2005transforming |
arxiv-672573 | cs/0501094 | Corpus based Enrichment of GermaNet Verb Frames | <|reference_start|>Corpus based Enrichment of GermaNet Verb Frames: Lexical semantic resources, like WordNet, are often used in real applications of natural language document processing. For example, we integrated GermaNet in our document suite XDOC of processing of German forensic autopsy protocols. In addition to the hypernymy and synonymy relation, we want to adapt GermaNet's verb frames for our analysis. In this paper we outline an approach for the domain related enrichment of GermaNet verb frames by corpus based syntactic and co-occurred data analyses of real documents.<|reference_end|> | arxiv | @article{kunze2005corpus,
title={Corpus based Enrichment of GermaNet Verb Frames},
author={Manuela Kunze and Dietmar Roesner},
journal={in Proceedings of LREC 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501094},
primaryClass={cs.AI}
} | kunze2005corpus |
arxiv-672574 | cs/0501095 | Context Related Derivation of Word Senses | <|reference_start|>Context Related Derivation of Word Senses: Real applications of natural language document processing are very often confronted with domain specific lexical gaps during the analysis of documents of a new domain. This paper describes an approach for the derivation of domain specific concepts for the extension of an existing ontology. As resources we need an initial ontology and a partially processed corpus of a domain. We exploit the specific characteristic of the sublanguage in the corpus. Our approach is based on syntactical structures (noun phrases) and compound analyses to extract information required for the extension of GermaNet's lexical resources.<|reference_end|> | arxiv | @article{kunze2005context,
title={Context Related Derivation of Word Senses},
author={Manuela Kunze and Dietmar Roesner},
journal={in Proceedings of Ontolex- Workshop 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501095},
primaryClass={cs.AI}
} | kunze2005context |
arxiv-672575 | cs/0501096 | Transforming and Enriching Documents for the Semantic Web | <|reference_start|>Transforming and Enriching Documents for the Semantic Web: We suggest to employ techniques from Natural Language Processing (NLP) and Knowledge Representation (KR) to transform existing documents into documents amenable for the Semantic Web. Semantic Web documents have at least part of their semantics and pragmatics marked up explicitly in both a machine processable as well as human readable manner. XML and its related standards (XSLT, RDF, Topic Maps etc.) are the unifying platform for the tools and methodologies developed for different application scenarios.<|reference_end|> | arxiv | @article{roesner2005transforming,
title={Transforming and Enriching Documents for the Semantic Web},
author={Dietmar Roesner and Manuela Kunze and Sylke Kroetzsch},
journal={KI (1), 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501096},
primaryClass={cs.AI}
} | roesner2005transforming |
arxiv-672576 | cs/0502001 | Some Extensions of Gallager's Method to General Sources and Channels | <|reference_start|>Some Extensions of Gallager's Method to General Sources and Channels: The Gallager bound is well known in the area of channel coding. However, most discussions about it mainly focus on its applications to memoryless channels. We show in this paper that the bounds obtained by Gallager's method are very tight even for the general sources and channels defined in information-spectrum theory. Our method is mainly based on estimations of the error exponents in those bounds, and by these estimations we prove the direct part of the Slepian-Wolf theorem and the channel coding theorem for general sources and channels.<|reference_end|> | arxiv | @article{yang2005some,
title={Some Extensions of Gallager's Method to General Sources and Channels},
author={Shengtian Yang, Peiliang Qiu},
journal={arXiv preprint arXiv:cs/0502001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502001},
primaryClass={cs.IT math.IT}
} | yang2005some |
arxiv-672577 | cs/0502002 | Directed Threshold Multi-Signature Scheme without SDC | <|reference_start|>Directed Threshold Multi-Signature Scheme without SDC: In this paper, we propose a directed threshold multisignature scheme without SDC. This signature scheme is applicable when the message is sensitive to the signature receiver, and the signatures are generated by the cooperation of a number of people from a given group of senders. In this scheme, any malicious set of signers cannot impersonate any other set of signers to forge the signatures. In case of forgery, it is possible to trace the signing set.<|reference_end|> | arxiv | @article{kumar2005directed,
title={Directed Threshold Multi-Signature Scheme without SDC},
author={Manoj Kumar},
journal={arXiv preprint arXiv:cs/0502002},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502002},
primaryClass={cs.CR}
} | kumar2005directed |
arxiv-672578 | cs/0502003 | Shawn: A new approach to simulating wireless sensor networks | <|reference_start|>Shawn: A new approach to simulating wireless sensor networks: We consider the simulation of wireless sensor networks (WSN) using a new approach. We present Shawn, an open-source discrete-event simulator that has considerable differences from all other existing simulators. Shawn is very powerful in simulating large scale networks with an abstract point of view. It is, to the best of our knowledge, the first simulator to support generic high-level algorithms as well as distributed protocols on exactly the same underlying networks.<|reference_end|> | arxiv | @article{kroeller2005shawn:,
title={Shawn: A new approach to simulating wireless sensor networks},
author={Alexander Kroeller, Dennis Pfisterer, Carsten Buschmann, Sandor P.
Fekete, and Stefan Fischer},
journal={arXiv preprint arXiv:cs/0502003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502003},
primaryClass={cs.DC cs.PF}
} | kroeller2005shawn: |
arxiv-672579 | cs/0502004 | Asymptotic Log-loss of Prequential Maximum Likelihood Codes | <|reference_start|>Asymptotic Log-loss of Prequential Maximum Likelihood Codes: We analyze the Dawid-Rissanen prequential maximum likelihood codes relative to one-parameter exponential family models $M$. If data are i.i.d. according to an (essentially) arbitrary $P$, then the redundancy grows at rate $\frac{c}{2} \ln n$. We show that $c = v_1/v_2$, where $v_1$ is the variance of $P$, and $v_2$ is the variance of the distribution $m^*$ in $M$ that is closest to $P$ in KL divergence. This shows that prequential codes behave quite differently from other important universal codes such as the 2-part MDL, Shtarkov and Bayes codes, for which $c = 1$. This behavior is undesirable in an MDL model selection setting.<|reference_end|> | arxiv | @article{grunwald2005asymptotic,
title={Asymptotic Log-loss of Prequential Maximum Likelihood Codes},
author={Peter Grunwald and Steven de Rooij},
journal={arXiv preprint arXiv:cs/0502004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502004},
primaryClass={cs.LG cs.IT math.IT}
} | grunwald2005asymptotic |
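The claim in the entry above is easy to probe numerically in the in-model case, where the constant is $c = 1$: encode Bernoulli data with a smoothed plug-in (prequential) code and compare the accumulated excess code length to $\frac{1}{2}\ln n$. The setup below is my own illustration; the paper's interesting regime, where $P$ lies outside the model and the constant becomes $v_1/v_2$, needs a model mismatch this toy does not include.

```python
# Prequential plug-in coding of Bernoulli(p) data with the Krichevsky-Trofimov
# (add-1/2) estimate; redundancy is measured against the true distribution.
import math, random

random.seed(1)

def prequential_redundancy(p, n):
    ones, red = 0, 0.0
    for t in range(n):
        x = 1 if random.random() < p else 0
        q = (ones + 0.5) / (t + 1.0)      # smoothed ML plug-in prediction
        red += math.log((p if x else 1 - p) / (q if x else 1 - q))
        ones += x
    return red

p, n, runs = 0.2, 20000, 50
avg = sum(prequential_redundancy(p, n) for _ in range(runs)) / runs
print(avg, 0.5 * math.log(n))             # both ~ 5 nats: c = 1 in-model
```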
arxiv-672580 | cs/0502005 | Model-Checking Problems as a Basis for Parameterized Intractability | <|reference_start|>Model-Checking Problems as a Basis for Parameterized Intractability: Most parameterized complexity classes are defined in terms of a parameterized version of the Boolean satisfiability problem (the so-called weighted satisfiability problem). For example, Downey and Fellow's W-hierarchy is of this form. But there are also classes, for example, the A-hierarchy, that are more naturally characterised in terms of model-checking problems for certain fragments of first-order logic. Downey, Fellows, and Regan were the first to establish a connection between the two formalisms by giving a characterisation of the W-hierarchy in terms of first-order model-checking problems. We improve their result and then prove a similar correspondence between weighted satisfiability and model-checking problems for the A-hierarchy and the W^*-hierarchy. Thus we obtain very uniform characterisations of many of the most important parameterized complexity classes in both formalisms. Our results can be used to give new, simple proofs of some of the core results of structural parameterized complexity theory.<|reference_end|> | arxiv | @article{flum2005model-checking,
title={Model-Checking Problems as a Basis for Parameterized Intractability},
author={Joerg Flum, Martin Grohe},
journal={Logical Methods in Computer Science, Volume 1, Issue 1 (March 7,
2005) lmcs:2272},
year={2005},
doi={10.2168/LMCS-1(1:2)2005},
archivePrefix={arXiv},
eprint={cs/0502005},
primaryClass={cs.CC cs.LO}
} | flum2005model-checking |
arxiv-672581 | cs/0502006 | Neural network ensembles: Evaluation of aggregation algorithms | <|reference_start|>Neural network ensembles: Evaluation of aggregation algorithms: Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks. However, for aggregation to be effective, the individual networks must be as accurate and diverse as possible. An important problem is, then, how to tune the aggregate members in order to have an optimal compromise between these two conflicting conditions. We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals and comparing them with standard methods in the literature. We also discuss a potential problem with sequential aggregation algorithms: the non-frequent but damaging selection through their heuristics of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members. Our algorithms and their weighted modifications are favorably tested against other methods in the literature, producing a sensible improvement in performance on most of the standard statistical databases used as benchmarks.<|reference_end|> | arxiv | @article{granitto2005neural,
title={Neural network ensembles: Evaluation of aggregation algorithms},
author={P.M. Granitto, P.F. Verdes and H.A. Ceccatto},
journal={arXiv preprint arXiv:cs/0502006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502006},
primaryClass={cs.AI cs.NE}
} | granitto2005neural |
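One standard family of aggregation heuristics evaluated in work like the entry above is forward greedy selection of ensemble members by validation error; allowing repeated picks implicitly weights members, echoing the paper's weighted modifications. A minimal sketch with stand-in data and predictors -- not the paper's networks or benchmark databases:

```python
# Greedy forward ensemble selection: repeatedly add the member (repeats
# allowed) that most reduces validation MSE of the averaged prediction.
import numpy as np

rng = np.random.default_rng(0)
y_val = rng.normal(size=200)                        # validation targets
preds = y_val + rng.normal(scale=0.8, size=(10, 200))  # 10 noisy "members"

chosen = []
for _ in range(5):
    errs = [np.mean((np.mean([preds[j] for j in chosen + [k]], axis=0)
                     - y_val) ** 2)
            for k in range(len(preds))]
    chosen.append(int(np.argmin(errs)))
print(chosen, round(min(errs), 3))                  # picked members, final MSE
```

The tension the paper analyzes is visible even here: adding an accurate but redundant member can help less than adding a weaker, more diverse one.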
arxiv-672582 | cs/0502007 | Identification of complex systems in the basis of wavelets | <|reference_start|>Identification of complex systems in the basis of wavelets: This paper proposes a method for the identification of complex dynamic systems. The method can be used to identify linear and nonlinear complex dynamic systems with deterministic or stochastic signals at the inputs and outputs. It is proposed to use a wavelet basis for obtaining the impulse transient function (ITF) of a system. The ITF is considered as a surface in 3D space. Results of experiments on the identification of systems in the wavelet basis are given.<|reference_end|> | arxiv | @article{shaydurov2005identification,
title={Identification of complex systems in the basis of wavelets},
author={Alexander Shaydurov},
journal={arXiv preprint arXiv:cs/0502007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502007},
primaryClass={cs.CE cs.NE}
} | shaydurov2005identification |
arxiv-672583 | cs/0502008 | Scientific Data Management in the Coming Decade | <|reference_start|>Scientific Data Management in the Coming Decade: This is a thought piece on data-intensive science requirements for databases and science centers. It argues that peta-scale datasets will be housed by science centers that provide substantial storage and processing for scientists who access the data via smart notebooks. Next-generation science instruments and simulations will generate these peta-scale datasets. The need to publish and share data and the need for generic analysis and visualization tools will finally create a convergence on common metadata standards. Database systems will be judged by their support of these metadata standards and by their ability to manage and access peta-scale datasets. The procedural stream-of-bytes-file-centric approach to data analysis is both too cumbersome and too serial for such large datasets. Non-procedural query and analysis of schematized self-describing data is both easier to use and allows much more parallelism.<|reference_end|> | arxiv | @article{gray2005scientific,
title={Scientific Data Management in the Coming Decade},
author={Jim Gray, David T. Liu, Maria Nieto-Santisteban, Alexander S. Szalay,
David DeWitt, Gerd Heber},
journal={arXiv preprint arXiv:cs/0502008},
year={2005},
number={Microsoft Technical Report MSR-TR-2005-10},
archivePrefix={arXiv},
eprint={cs/0502008},
primaryClass={cs.DB cs.CE}
} | gray2005scientific |
arxiv-672584 | cs/0502009 | Performance Considerations for Gigabyte per Second Transcontinental Disk-to-Disk File Transfers | <|reference_start|>Performance Considerations for Gigabyte per Second Transcontinental Disk-to-Disk File Transfers: Moving data from CERN to Pasadena at a gigabyte per second using the next generation Internet requires good networking and good disk IO. Ten Gbps Ethernet and OC192 links are in place, so now it is simply a matter of programming. This report describes our preliminary work and measurements in configuring the disk subsystem for this effort. Using 24 SATA disks at each endpoint we are able to locally read and write an NTFS volume striped across 24 disks at 1.2 GBps. A 32-disk stripe delivers 1.7 GBps. Experiments on higher performance and higher-capacity systems deliver up to 3.5 GBps.<|reference_end|> | arxiv | @article{kukol2005performance,
title={Performance Considerations for Gigabyte per Second Transcontinental
Disk-to-Disk File Transfers},
author={Peter Kukol, Jim Gray},
journal={arXiv preprint arXiv:cs/0502009},
year={2005},
number={Microsoft Technical Report MSR-TR-2004-62},
archivePrefix={arXiv},
eprint={cs/0502009},
primaryClass={cs.DB cs.PF}
} | kukol2005performance |
arxiv-672585 | cs/0502010 | TerraServer SAN-Cluster Architecture and Operations Experience | <|reference_start|>TerraServer SAN-Cluster Architecture and Operations Experience: Microsoft TerraServer displays aerial, satellite, and topographic images of the earth in a SQL database available via the Internet. It is one of the most popular online atlases, presenting seventeen terabytes of image data from the United States Geological Survey (USGS). Initially deployed in 1998, the system demonstrated the scalability of PC hardware and software - Windows and SQL Server - on a single, mainframe-class processor. In September 2000, the back-end database application was migrated to a 4-node active/passive cluster connected to an 18 terabyte Storage Area Network (SAN). The new configuration was designed to achieve 99.99% availability for the back-end application. This paper describes the hardware and software components of the TerraServer Cluster and SAN, and describes our experience in configuring and operating this system for three years. Not surprisingly, the hardware and architecture delivered better than four-9's of availability, but operations mistakes delivered three-9's.<|reference_end|> | arxiv | @article{barclay2005terraserver,
title={TerraServer SAN-Cluster Architecture and Operations Experience},
author={Tom Barclay, Jim Gray},
journal={arXiv preprint arXiv:cs/0502010},
year={2005},
number={Microsoft Technical Report MSR-TR-2004-67},
archivePrefix={arXiv},
eprint={cs/0502010},
primaryClass={cs.DC cs.DB}
} | barclay2005terraserver |
arxiv-672586 | cs/0502011 | Where the Rubber Meets the Sky: Bridging the Gap between Databases and Science | <|reference_start|>Where the Rubber Meets the Sky: Bridging the Gap between Databases and Science: Scientists in all domains face a data avalanche - both from better instruments and from improved simulations. We believe that computer science tools and computer scientists are in a position to help all the sciences by building tools and developing techniques to manage, analyze, and visualize peta-scale scientific information. This article summarizes our experiences over the last seven years trying to bridge the gap between database technology and the needs of the astronomy community in building the World-Wide Telescope.<|reference_end|> | arxiv | @article{gray2005where,
title={Where the Rubber Meets the Sky: Bridging the Gap between Databases and
Science},
author={Jim Gray, Alexander S. Szalay},
journal={IEEE Data Engineering Bulletin, Vol 27.4, Dec. 2004, pp. 3-11},
year={2005},
number={Microsoft Technical Report MSR-TR-2004-110},
archivePrefix={arXiv},
eprint={cs/0502011},
primaryClass={cs.DB cs.CE}
} | gray2005where |
arxiv-672587 | cs/0502012 | Sequential File Programming Patterns and Performance with NET | <|reference_start|>Sequential File Programming Patterns and Performance with NET: Programming patterns for sequential file access in the .NET Framework are described and the performance is measured. The default behavior provides excellent performance on a single disk - 50 MBps both reading and writing. Using large request sizes and doing file pre-allocation when possible have quantifiable benefits. When one considers disk arrays, .NET unbuffered IO delivers 800 MBps on a 16-disk array, but buffered IO delivers about 12% of that performance. Consequently, high-performance file and database utilities are still forced to use unbuffered IO for maximum sequential performance. The report is accompanied by downloadable source code that demonstrates the concepts and code that was used to obtain these measurements.<|reference_end|> | arxiv | @article{kukol2005sequential,
title={Sequential File Programming Patterns and Performance with .NET},
author={Peter Kukol, Jim Gray},
journal={arXiv preprint arXiv:cs/0502012},
year={2005},
number={Microsoft Technical Report MSR-TR-2004-136},
archivePrefix={arXiv},
eprint={cs/0502012},
primaryClass={cs.PF cs.OS}
} | kukol2005sequential |
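
A rough Python analogue of the large-request pattern measured above (the paper's code is .NET; the path below is a placeholder, and buffering=0 bypasses only Python's buffer layer, not the OS cache):

    import time

    # Sketch of large-request sequential reads (illustrative analogue of
    # the .NET patterns; true unbuffered IO needs OS-specific flags).
    def sequential_read_mbps(path, request_size=1 << 20):  # 1 MiB requests
        total = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while True:
                chunk = f.read(request_size)
                if not chunk:
                    break
                total += len(chunk)
        return total / (time.perf_counter() - start) / 1e6  # MB per second

    # Example: print(sequential_read_mbps("big.dat"))
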
arxiv-672588 | cs/0502013 | Amazons is PSPACE-complete | <|reference_start|>Amazons is PSPACE-complete: Amazons is a board game which combines elements of Chess and Go. It has become popular in recent years, and has served as a useful platform for both game-theoretic study and AI games research. Buro showed that simple Amazons endgames are NP-equivalent, leaving the complexity of the general case as an open problem. We settle this problem, by showing that deciding the outcome of an n x n Amazons position is PSPACE-hard. We give a reduction from one of the PSPACE-complete two-player formula games described by Schaefer. Since the number of moves in an Amazons game is polynomially bounded (unlike Chess and Go), Amazons is in PSPACE. It is thus on a par with other two-player, bounded-move, perfect-information games such as Hex, Othello, and Kayles. Our construction also provides an alternate proof that simple Amazons endgames are NP-equivalent. Our reduction uses a number of amazons polynomial in the input formula length; a remaining open problem is the complexity of Amazons when only a constant number of amazons is used.<|reference_end|> | arxiv | @article{hearn2005amazons,
title={Amazons is PSPACE-complete},
author={Robert A. Hearn},
journal={arXiv preprint arXiv:cs/0502013},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502013},
primaryClass={cs.CC cs.GT}
} | hearn2005amazons |
arxiv-672589 | cs/0502014 | On the asymptotic behavior of some Algorithms | <|reference_start|>On the asymptotic behavior of some Algorithms: A simple approach is presented to study the asymptotic behavior of some algorithms with an underlying tree structure. It is shown that some asymptotic oscillating behaviors can be precisely analyzed without resorting to complex analysis techniques, as is usually done in this context. A new explicit representation of the periodic functions involved is obtained at the same time.<|reference_end|> | arxiv | @article{robert2005on,
title={On the asymptotic behavior of some Algorithms},
author={Philippe Robert (RAP UR-R)},
journal={Random Structures and Algorithms 27 (2005) 235--250},
year={2005},
doi={10.1002/rsa.20075},
archivePrefix={arXiv},
eprint={cs/0502014},
primaryClass={cs.DS math.CA math.PR}
} | robert2005on |
arxiv-672590 | cs/0502015 | Can Computer Algebra be Liberated from its Algebraic Yoke ? | <|reference_start|>Can Computer Algebra be Liberated from its Algebraic Yoke ?: So far, the scope of computer algebra has been needlessly restricted to exact algebraic methods. Its possible extension to approximate analytical methods is discussed. The entangled roles of functional analysis and symbolic programming, especially the functional and transformational paradigms, are put forward. In the future, algebraic algorithms could constitute the core of extended symbolic manipulation systems including primitives for symbolic approximations.<|reference_end|> | arxiv | @article{barrere2005can,
title={Can Computer Algebra be Liberated from its Algebraic Yoke ?},
author={R. Barrere},
journal={arXiv preprint arXiv:cs/0502015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502015},
primaryClass={cs.SC cs.CE}
} | barrere2005can |
arxiv-672591 | cs/0502016 | Stability Analysis for Regularized Least Squares Regression | <|reference_start|>Stability Analysis for Regularized Least Squares Regression: We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as L(f) := 1/N sum_i (f(x_i)-y_i)^2 + lambda ||f||_H^2. We shall call the algorithm `stable' if, when y_i is a noisy version of f*(x_i) for some function f* in H, the output of the algorithm converges to f* as the regularization term and noise simultaneously vanish. We consider two flavors of this problem, one where a data set of N points remains fixed, and the other where N -> infinity. For the case where N -> infinity, we give conditions for convergence to f_E (the function which is the expectation of y(x) for each x), as lambda -> 0. For the fixed N case, we describe the limiting 'non-noisy', 'non-regularized' function f*, and give conditions for convergence. In the process, we develop a set of tools for dealing with functionals such as L(f), which are applicable to many other problems in learning theory.<|reference_end|> | arxiv | @article{rudin2005stability,
title={Stability Analysis for Regularized Least Squares Regression},
author={Cynthia Rudin},
journal={arXiv preprint arXiv:cs/0502016},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502016},
primaryClass={cs.LG}
} | rudin2005stability |
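
A minimal illustration (a sketch, not the paper's code): for the functional L(f) above, the representer theorem gives the minimizer f(x) = sum_j c_j k(x, x_j) with c = (K + N*lambda*I)^{-1} y, which a few lines of numpy can compute for an assumed Gaussian kernel and synthetic noisy labels:

    import numpy as np

    # Illustrative sketch: kernel ridge regression, minimizing
    # L(f) = (1/N) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2.
    # By the representer theorem, f(x) = sum_j c_j k(x, x_j) with
    # c = (K + N*lam*I)^{-1} y.

    def gaussian_kernel(X, Z, sigma=0.5):
        return np.exp(-((X[:, None] - Z[None, :]) ** 2) / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    N, lam = 50, 1e-3
    x = np.sort(rng.uniform(0, 1, N))
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=N)  # noisy f*(x_i)

    K = gaussian_kernel(x, x)
    c = np.linalg.solve(K + N * lam * np.eye(N), y)  # ridge coefficients
    print("training MSE:", np.mean((K @ c - y) ** 2))
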
arxiv-672592 | cs/0502017 | Estimating mutual information and multi--information in large networks | <|reference_start|>Estimating mutual information and multi--information in large networks: We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher order, multi--information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.<|reference_end|> | arxiv | @article{slonim2005estimating,
title={Estimating mutual information and multi--information in large networks},
author={Noam Slonim, Gurinder S. Atwal, Gasper Tkacik, and William Bialek},
journal={arXiv preprint arXiv:cs/0502017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502017},
primaryClass={cs.IT cs.AI cs.CV cs.LG math.IT}
} | slonim2005estimating |
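
For orientation, the plug-in estimate underlying such analyses is I(X;Y) = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ] over binned data; a naive numpy sketch (illustrative only; the paper is concerned with the bias corrections such raw counts require):

    import numpy as np

    # Naive plug-in estimator of I(X;Y) in bits from binned samples
    # (illustrative; reliable estimates on real data need careful
    # bias correction).
    def mutual_information(x, y, bins=10):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(1)
    x = rng.normal(size=5000)
    y = x + 0.5 * rng.normal(size=5000)  # a correlated pair
    print(mutual_information(x, y))
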
arxiv-672593 | cs/0502018 | When Database Systems Meet the Grid | <|reference_start|>When Database Systems Meet the Grid: We illustrate the benefits of combining database systems and Grid technologies for data-intensive applications. Using a cluster of SQL servers, we reimplemented an existing Grid application that finds galaxy clusters in a large astronomical database. The SQL implementation runs an order of magnitude faster than the earlier Tcl-C-file-based implementation. We discuss why and how Grid applications can take advantage of database systems.<|reference_end|> | arxiv | @article{nieto-santisteban2005when,
title={When Database Systems Meet the Grid},
author={Maria A. Nieto-Santisteban, Alexander S. Szalay, Aniruddha R. Thakar,
William J. O'Mullane, Jim Gray, James Annis},
journal={Proceedings of CIDR 2005 Conference, Asilomar, CA. Jan. 2005, pp
154-161},
year={2005},
number={Microsoft Technical Report MSR-TR-2004-81},
archivePrefix={arXiv},
eprint={cs/0502018},
primaryClass={cs.DC}
} | nieto-santisteban2005when |
arxiv-672594 | cs/0502019 | A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters | <|reference_start|>A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters: In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.<|reference_end|> | arxiv | @article{feldman2005a,
title={A Price-Anticipating Resource Allocation Mechanism for Distributed
Shared Clusters},
author={Michal Feldman, Kevin Lai, Li Zhang},
journal={Proceedings of the ACM Conference on Electronic Commerce 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502019},
primaryClass={cs.DC cs.GT}
} | feldman2005a |
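
To fix ideas, the allocation rule in this game is proportional share: user i bids x_ij on machine j and receives the fraction x_ij / sum_k x_kj of that machine. A small numpy sketch of the setup (an illustration, not the authors' code):

    import numpy as np

    # Proportional-share allocation sketch (illustrative): each machine
    # divides its capacity in proportion to the bids placed on it.
    def allocate(bids, capacity):
        totals = bids.sum(axis=0)          # total bid on each machine
        return (bids / totals) * capacity  # assumes every machine gets a positive bid

    bids = np.array([[3.0, 1.0],           # user 0 splits a budget of 4
                     [1.0, 3.0]])          # user 1 splits a budget of 4
    capacity = np.ones(2)
    alloc = allocate(bids, capacity)
    print(alloc)                  # each user: 3/4 of one machine, 1/4 of the other
    print(alloc.sum(axis=1))      # utility if utility is linear in total resource
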
arxiv-672595 | cs/0502020 | Population Sizing for Genetic Programming Based Upon Decision Making | <|reference_start|>Population Sizing for Genetic Programming Based Upon Decision Making: This paper derives a population sizing relationship for genetic programming (GP). Following the population-sizing derivation for genetic algorithms in Goldberg, Deb, and Clark (1992), it considers building block decision making as a key facet. The analysis yields a GP-unique relationship because it has to account for bloat and for the fact that GP solutions often use a subsolution multiple times. The population-sizing relationship depends upon tree size, solution complexity, problem difficulty and building block expression probability. The relationship is used to analyze and empirically investigate population sizing for three model GP problems named ORDER, ON-OFF and LOUD. These problems exhibit bloat to differing extents and differ in whether their solutions require the use of a building block multiple times.<|reference_end|> | arxiv | @article{sastry2005population,
title={Population Sizing for Genetic Programming Based Upon Decision Making},
author={K. Sastry, U.-M. O'Reilly, D. E. Goldberg},
journal={arXiv preprint arXiv:cs/0502020},
year={2005},
number={IlliGAL Report No. 2004028},
archivePrefix={arXiv},
eprint={cs/0502020},
primaryClass={cs.AI cs.NE}
} | sastry2005population |
arxiv-672596 | cs/0502021 | Oiling the Wheels of Change: The Role of Adaptive Automatic Problem Decomposition in Non--Stationary Environments | <|reference_start|>Oiling the Wheels of Change: The Role of Adaptive Automatic Problem Decomposition in Non--Stationary Environments: Genetic algorithms (GAs) that solve hard problems quickly, reliably and accurately are called competent GAs. When the fitness landscape of a problem changes over time, the problem is called a non--stationary, dynamic or time--variant problem. This paper investigates the use of competent GAs for solving non--stationary optimization problems. More specifically, we use an information theoretic approach based on the minimum description length principle to adaptively identify regularities and substructures that can be exploited to respond quickly to changes in the environment. We also develop a special class of problems with bounded difficulty to test non--stationary optimization. The results provide new insights into non-stationary optimization problems and show that a search algorithm which automatically identifies and exploits possible decompositions is more robust and responds to changes more quickly than a simple genetic algorithm.<|reference_end|> | arxiv | @article{abbass2005oiling,
title={Oiling the Wheels of Change: The Role of Adaptive Automatic Problem
Decomposition in Non--Stationary Environments},
author={H. A. Abbass, K. Sastry, D. E. Goldberg},
journal={arXiv preprint arXiv:cs/0502021},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502021},
primaryClass={cs.NE cs.AI}
} | abbass2005oiling |
arxiv-672597 | cs/0502022 | Sub-Structural Niching in Non-Stationary Environments | <|reference_start|>Sub-Structural Niching in Non-Stationary Environments: Niching enables a genetic algorithm (GA) to maintain diversity in a population. It is particularly useful when the problem has multiple optima and the aim is to find all or as many as possible of these optima. When the fitness landscape of a problem changes over time, the problem is called a non--stationary, dynamic or time--variant problem. In these problems, niching can maintain useful solutions to respond quickly, reliably and accurately to a change in the environment. In this paper, we present a niching method that works on the problem substructures rather than the whole solution; it therefore has lower space complexity than previously known niching mechanisms. We show that the method responds accurately when environmental changes occur.<|reference_end|> | arxiv | @article{sastry2005sub-structural,
title={Sub-Structural Niching in Non-Stationary Environments},
author={K. Sastry, H. A. Abbass, D. E. Goldberg},
journal={arXiv preprint arXiv:cs/0502022},
year={2005},
number={IlliGAL Report No. 2004035},
archivePrefix={arXiv},
eprint={cs/0502022},
primaryClass={cs.NE cs.AI}
} | sastry2005sub-structural |
arxiv-672598 | cs/0502023 | Sub-structural Niching in Estimation of Distribution Algorithms | <|reference_start|>Sub-structural Niching in Estimation of Distribution Algorithms: We propose a sub-structural niching method that fully exploits the problem decomposition capability of linkage-learning methods such as the estimation of distribution algorithms and concentrates on maintaining diversity at the sub-structural level. The proposed method consists of three key components: (1) problem decomposition and sub-structure identification, (2) sub-structure fitness estimation, and (3) sub-structural niche preservation. The sub-structural niching method is compared to restricted tournament selection (RTS)--a niching method used in the hierarchical Bayesian optimization algorithm--with special emphasis on sustained preservation of multiple global solutions of a class of boundedly-difficult, additively-separable multimodal problems. The results show that sub-structural niching successfully maintains multiple global optima over a large number of generations and does so with a significantly smaller population than RTS. Additionally, the market share of each of the niches is much closer to the expected level in sub-structural niching when compared to RTS.<|reference_end|> | arxiv | @article{sastry2005sub-structural,
title={Sub-structural Niching in Estimation of Distribution Algorithms},
author={K. Sastry, H. A. Abbass, D. E. Goldberg, D. D. Johnson},
journal={arXiv preprint arXiv:cs/0502023},
year={2005},
number={IlliGAL Report No. 2005002},
archivePrefix={arXiv},
eprint={cs/0502023},
primaryClass={cs.NE cs.AI}
} | sastry2005sub-structural |
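
For reference, restricted tournament selection (RTS), the niching baseline compared against above, inserts a candidate by replacing the most similar member of a randomly drawn window only when the candidate is fitter; a sketch assuming binary strings and Hamming distance:

    import random

    # Sketch of restricted tournament selection (RTS); binary strings and
    # Hamming distance are assumed here for concreteness.
    def rts_insert(population, fitness, child, child_fit, window=20):
        idxs = random.sample(range(len(population)), min(window, len(population)))
        dist = lambda a, b: sum(u != v for u, v in zip(a, b))  # Hamming distance
        nearest = min(idxs, key=lambda i: dist(population[i], child))
        if child_fit > fitness[nearest]:  # replace only if the child is fitter
            population[nearest] = child
            fitness[nearest] = child_fit
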
arxiv-672599 | cs/0502024 | Idempotents, Mattson-Solomon Polynomials and Binary LDPC codes | <|reference_start|>Idempotents, Mattson-Solomon Polynomials and Binary LDPC codes: We show how to construct an algorithm that searches for binary idempotents which may be used to construct binary LDPC codes. The algorithm, which allows control of the key properties of sparseness, code rate and minimum distance, is constructed in the Mattson-Solomon domain. Some of the new codes found by using this technique are displayed.<|reference_end|> | arxiv | @article{horan2005idempotents,
title={Idempotents, Mattson-Solomon Polynomials and Binary LDPC codes},
author={R. Horan, C. Tjhai, M. Tomlinson, M. Ambroze, M. Ahmed},
journal={arXiv preprint arXiv:cs/0502024},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502024},
primaryClass={cs.IT math.IT}
} | horan2005idempotents
arxiv-672600 | cs/0502025 | EPspectra: A Formal Toolkit for Developing DSP Software Applications | <|reference_start|>EPspectra: A Formal Toolkit for Developing DSP Software Applications: The software approach to developing Digital Signal Processing (DSP) applications brings benefits such as flexibility, re-usability of resources and easy upgrading of applications. However, it requires long and tedious test and verification phases because of the increasing complexity of the software. This implies the need for a software programming environment capable of putting together DSP modules and providing facilities to debug, verify and validate the code. The objective of this work is to provide such facilities as simulation and verification for developing DSP software applications. This led us to develop an extension toolkit, Epspectra, built upon Pspectra, one of the first toolkits available for designing basic software radio applications on standard PC workstations. In this paper, we first present Epspectra, an Esterel-based extension of Pspectra that makes the design and implementation of portable DSP applications easier. It allows a drastic reduction in testing and verification time while requiring relatively little expertise in formal verification methods. Second, we demonstrate the use of Epspectra, taking as an example the radio interface part of a GSM base station. We also present verification procedures for three safety properties of the implementation programs, which have complex control paths that must obey strict scheduling rules. In addition, Epspectra achieves the verification of the targeted application since the same model is used for executable code generation and for formal verification.<|reference_end|> | arxiv | @article{kim2005epspectra:,
title={EPspectra: A Formal Toolkit for Developing DSP Software Applications},
author={Hahnsang Kim, Thierry Turletti, Amar Bouali},
journal={arXiv preprint arXiv:cs/0502025},
year={2005},
archivePrefix={arXiv},
eprint={cs/0502025},
primaryClass={cs.LO cs.SE}
} | kim2005epspectra: |