Dataset schema (each record below lists these fields in this order):
corpus_id: string, length 7 to 12
paper_id: string, length 9 to 16
title: string, length 1 to 261
abstract: string, length 70 to 4.02k
source: string, 1 class
bibtex: string, length 208 to 20.9k
citation_key: string, length 6 to 100
arxiv-670701
cs/0207088
A Paraconsistent Higher Order Logic
<|reference_start|>A Paraconsistent Higher Order Logic: Classical logic predicts that everything (thus nothing useful at all) follows from inconsistency. A paraconsistent logic is a logic where an inconsistency does not lead to such an explosion, and since in practice consistency is difficult to achieve there are many potential applications of paraconsistent logics in knowledge-based systems, logical semantics of natural language, etc. Higher order logics have the advantages of being expressive, with several automated theorem provers available. Also the type system can be helpful. We present a concise description of a paraconsistent higher order logic with countably infinite indeterminacy, where each basic formula can get its own indeterminate truth value (or as we prefer: truth code). The meaning of the logical operators is new and rather different from traditional many-valued logics as well as from logics based on bilattices. The adequacy of the logic is examined by a case study in the domain of medicine. Thus we try to build a bridge between the HOL and MVL communities. A sequent calculus is proposed based on recent work by Muskens.<|reference_end|>
arxiv
@article{villadsen2002a, title={A Paraconsistent Higher Order Logic}, author={J{\o}rgen Villadsen}, journal={arXiv preprint arXiv:cs/0207088}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207088}, primaryClass={cs.LO cs.AI} }
villadsen2002a
arxiv-670702
cs/0207089
Defining Rough Sets by Extended Logic Programs
<|reference_start|>Defining Rough Sets by Extended Logic Programs: We show how definite extended logic programs can be used for defining and reasoning with rough sets. Moreover, a rough-set-specific query language is presented and an answering algorithm is outlined. Thus, we not only show a possible application of a paraconsistent logic to the field of rough sets but also establish a link between rough set theory and logic programming, making possible the transfer of expertise between both fields.<|reference_end|>
arxiv
@article{małuszyński2002defining, title={Defining Rough Sets by Extended Logic Programs}, author={Jan Ma{\l}uszy\'nski and Aida Vit\'oria (Link\"oping University, Sweden)}, journal={arXiv preprint arXiv:cs/0207089}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207089}, primaryClass={cs.LO cs.PL} }
małuszyński2002defining
arxiv-670703
cs/0207090
On a Partial Decision Method for Dynamic Proofs
<|reference_start|>On a Partial Decision Method for Dynamic Proofs: This paper concerns a goal directed proof procedure for the propositional fragment of the adaptive logic ACLuN1. At the propositional level, it forms an algorithm for final derivability. If extended to the predicative level, it provides a criterion for final derivability. This is essential in view of the absence of a positive test. The procedure may be generalized to all flat adaptive logics.<|reference_end|>
arxiv
@article{batens2002on, title={On a Partial Decision Method for Dynamic Proofs}, author={Diderik Batens (Universiteit Gent, Belgium)}, journal={arXiv preprint arXiv:cs/0207090}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207090}, primaryClass={cs.LO} }
batens2002on
arxiv-670704
cs/0207091
An Almost Classical Logic for Logic Programming and Nonmonotonic Reasoning
<|reference_start|>An Almost Classical Logic for Logic Programming and Nonmonotonic Reasoning: The model theory of a first-order logic called N^4 is introduced. N^4 does not eliminate double negations, as classical logic does, but instead reduces fourfold negations. N^4 is very close to classical logic: N^4 has two truth values; implications in N^4 are material, like in classical logic; and negation distributes over compound formulas in N^4 as it does in classical logic. Results suggest that the semantics of normal logic programs is conveniently formalized in N^4: Classical logic Herbrand interpretations generalize straightforwardly to N^4; the classical minimal Herbrand model of a positive logic program coincides with its unique minimal N^4 Herbrand model; the stable models of a normal logic program and its so-called complete minimal N^4 Herbrand models coincide.<|reference_end|>
arxiv
@article{bry2002an, title={An Almost Classical Logic for Logic Programming and Nonmonotonic Reasoning}, author={Fran\c{c}ois Bry (University of Munich, Germany)}, journal={arXiv preprint arXiv:cs/0207091}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207091}, primaryClass={cs.LO} }
bry2002an
arxiv-670705
cs/0207092
Packet delay in models of data networks
<|reference_start|>Packet delay in models of data networks: We investigate individual packet delay in a model of data networks with table-free, partial table and full table routing. We present an analytical estimation of the average packet delay in a network with a small partial routing table. Dependence of the delay on the size of the network and on the size of the partial routing table is examined numerically. Consequences for network scalability are discussed.<|reference_end|>
arxiv
@article{fuks2002packet, title={Packet delay in models of data networks}, author={Henryk Fuks, Anna T. Lawniczak and Stanislav Volkov}, journal={ACM Transactions on Modelling and Simulations, vol. 11, pp. 233--250 (2001)}, year={2002}, doi={10.1145/502109.502110}, archivePrefix={arXiv}, eprint={cs/0207092}, primaryClass={cs.NI nlin.CG} }
fuks2002packet
arxiv-670706
cs/0207093
Preference Queries
<|reference_start|>Preference Queries: The handling of user preferences is becoming an increasingly important issue in present-day information systems. Among others, preferences are used for information filtering and extraction to reduce the volume of data presented to the user. They are also used to keep track of user profiles and formulate policies to improve and automate decision making. We propose here a simple, logical framework for formulating preferences as preference formulas. The framework does not impose any restrictions on the preference relations and allows arbitrary operation and predicate signatures in preference formulas. It also makes the composition of preference relations straightforward. We propose a simple, natural embedding of preference formulas into relational algebra (and SQL) through a single winnow operator parameterized by a preference formula. The embedding makes possible the formulation of complex preference queries, e.g., involving aggregation, by piggybacking on existing SQL constructs. It also leads in a natural way to the definition of further, preference-related concepts like ranking. Finally, we present general algebraic laws governing the winnow operator and its interaction with other relational algebra operators. The preconditions on the applicability of the laws are captured by logical formulas. The laws provide a formal foundation for the algebraic optimization of preference queries. We demonstrate the usefulness of our approach through numerous examples.<|reference_end|>
arxiv
@article{chomicki2002preference, title={Preference Queries}, author={Jan Chomicki}, journal={arXiv preprint arXiv:cs/0207093}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207093}, primaryClass={cs.DB} }
chomicki2002preference
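As an aside on the record above: the winnow operator it describes can be sketched as a filter that keeps the tuples not dominated under a user-supplied preference relation. The sketch below is in Python rather than relational algebra or SQL, and the price-based preference relation is a made-up example, not the paper's notation.

```python
# Minimal sketch of the winnow operator: keep tuples that no other tuple
# is preferred to. `prefers(a, b)` can be any irreflexive preference relation;
# the price-based example below is hypothetical, not from the paper.

def winnow(relation, prefers):
    """Return the tuples of `relation` not dominated under `prefers`."""
    return [t for i, t in enumerate(relation)
            if not any(prefers(s, t) for j, s in enumerate(relation) if j != i)]

# Example preference: prefer a cheaper offer for the same item.
offers = [("book", 25), ("book", 18), ("cd", 12), ("cd", 12)]

def cheaper_same_item(s, t):
    return s[0] == t[0] and s[1] < t[1]

print(winnow(offers, cheaper_same_item))  # [('book', 18), ('cd', 12), ('cd', 12)]
```

A preference formula over arbitrary predicates would simply replace `cheaper_same_item`; composing preferences corresponds to composing such relations.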
arxiv-670707
cs/0207094
Answer Sets for Consistent Query Answering in Inconsistent Databases
<|reference_start|>Answer Sets for Consistent Query Answering in Inconsistent Databases: A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database. Since consistent information persists from the original database to each of its minimal repairs, the approach is based on a specification of database repairs using disjunctive logic programs with exceptions, whose answer set semantics can be represented and computed by systems that implement stable model semantics. These programs allow us to declare persistence by defaults and repairing changes by exceptions. We concentrate mainly on logic programs for binary integrity constraints, among which we find most of the integrity constraints found in practice.<|reference_end|>
arxiv
@article{arenas2002answer, title={Answer Sets for Consistent Query Answering in Inconsistent Databases}, author={Marcelo Arenas, Leopoldo Bertossi, Jan Chomicki}, journal={arXiv preprint arXiv:cs/0207094}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207094}, primaryClass={cs.DB} }
arenas2002answer
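A brute-force illustration of the idea in the record above (consistent answers are those persisting in every minimal repair); this is a toy Python sketch for a single key constraint, not the paper's disjunctive logic-programming encoding.

```python
# Toy illustration: minimal repairs of a relation violating a key constraint
# are its maximal consistent subsets; a consistent answer is a tuple present
# in every repair. For a single key constraint, all inclusion-maximal
# consistent subsets have the same size, so a cardinality search suffices here.
from itertools import combinations

def consistent(rel):
    keys = [t[0] for t in rel]              # assume the first attribute is the key
    return len(keys) == len(set(keys))

def minimal_repairs(rel):
    rel = list(rel)
    for size in range(len(rel), -1, -1):    # largest consistent subsets first
        found = [set(c) for c in combinations(rel, size) if consistent(c)]
        if found:
            return found
    return [set()]

db = {("p1", "red"), ("p1", "blue"), ("p2", "green")}   # key violated on p1
repairs = minimal_repairs(db)
answers = set.intersection(*repairs)
print(answers)   # {('p2', 'green')} is the only tuple in every repair
```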
arxiv-670708
cs/0207095
Eternity variables to prove simulation of specifications
<|reference_start|>Eternity variables to prove simulation of specifications: Simulations of specifications are introduced as a unification and generalization of refinement mappings, history variables, forward simulations, prophecy variables, and backward simulations. A specification implements another specification if and only if there is a simulation from the first one to the second one that satisfies a certain condition. By adding stutterings, the formalism allows that the concrete behaviours take more (or possibly less) steps than the abstract ones. Eternity variables are introduced as a more powerful alternative for prophecy variables and backward simulations. This formalism is semantically complete: every simulation that preserves quiescence is a composition of a forward simulation, an extension with eternity variables, and a refinement mapping. This result does not need finite invisible nondeterminism and machine closure as in the Abadi-Lamport Theorem. Internal continuity is weakened to preservation of quiescence.<|reference_end|>
arxiv
@article{hesselink2002eternity, title={Eternity variables to prove simulation of specifications}, author={Wim H. Hesselink}, journal={ACM Trans. on Computational Logic 6 (2005) 175-201.}, year={2002}, archivePrefix={arXiv}, eprint={cs/0207095}, primaryClass={cs.DC cs.LO} }
hesselink2002eternity
arxiv-670709
cs/0207096
Noncontiguous I/O through PVFS
<|reference_start|>Noncontiguous I/O through PVFS: With the tremendous advances in processor and memory technology, I/O has risen to become the bottleneck in high-performance computing for many applications. The development of parallel file systems has helped to ease the performance gap, but I/O still remains an area needing significant performance improvement. Research has found that noncontiguous I/O access patterns in scientific applications combined with current file system methods to perform these accesses lead to unacceptable performance for large data sets. To enhance performance of noncontiguous I/O we have created list I/O, a native version of noncontiguous I/O. We have used the Parallel Virtual File System (PVFS) to implement our ideas. Our research and experimentation shows that list I/O outperforms current noncontiguous I/O access methods in most I/O situations and can substantially enhance the performance of real-world scientific applications.<|reference_end|>
arxiv
@article{ching2002noncontiguous, title={Noncontiguous I/O through PVFS}, author={Avery Ching, Alok Choudhary, Wei-keng Liao, Rob Ross, William Gropp}, journal={arXiv preprint arXiv:cs/0207096}, year={2002}, number={ANL/MCS-P970-0702}, archivePrefix={arXiv}, eprint={cs/0207096}, primaryClass={cs.DC} }
ching2002noncontiguous
arxiv-670710
cs/0207097
Optimal Ordered Problem Solver
<|reference_start|>Optimal Ordered Problem Solver: We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience. In illustrative experiments, our self-improver becomes the first general system that learns to solve all n disk Towers of Hanoi tasks (solution size 2^n-1) for n up to 30, profiting from previously solved, simpler tasks involving samples of a simple context free language.<|reference_end|>
arxiv
@article{schmidhuber2002optimal, title={Optimal Ordered Problem Solver}, author={Juergen Schmidhuber}, journal={Machine Learning, 54, 211-254, 2004.}, year={2002}, number={IDSIA-12-02}, archivePrefix={arXiv}, eprint={cs/0207097}, primaryClass={cs.AI cs.CC cs.LG} }
schmidhuber2002optimal
arxiv-670711
cs/0208001
Classification of Random Boolean Networks
<|reference_start|>Classification of Random Boolean Networks: We provide the first classification of different types of Random Boolean Networks (RBNs). We study the differences of RBNs depending on the degree of synchronicity and determinism of their updating scheme. For doing so, we first define three new types of RBNs. We note some similarities and differences between different types of RBNs with the aid of a public software laboratory we developed. Particularly, we find that the point attractors are independent of the updating scheme, and that RBNs are more different depending on their determinism or non-determinism rather than depending on their synchronicity or asynchronicity. We also show a way of mapping non-synchronous deterministic RBNs into synchronous RBNs. Our results are important for justifying the use of specific types of RBNs for modelling natural phenomena.<|reference_end|>
arxiv
@article{gershenson2002classification, title={Classification of Random Boolean Networks}, author={Carlos Gershenson}, journal={arXiv preprint arXiv:cs/0208001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208001}, primaryClass={cs.CC cs.DM math.DS nlin.CG} }
gershenson2002classification
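To make the notion of updating schemes in the record above concrete, here is a minimal synchronous RBN simulator in Python; the network size, connectivity and random seed are illustrative choices, not parameters from the paper.

```python
# Minimal synchronous RBN sketch: N nodes with K inputs each and random Boolean
# functions; iterate until a state repeats, then report the attractor length.
import random

random.seed(0)
N, K = 5, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
# Each node's update rule is a truth table over its 2**K input combinations.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    new = []
    for i in range(N):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        new.append(tables[i][idx])
    return tuple(new)

state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:               # synchronous update: all nodes at once
    seen[state] = t
    state, t = step(state), t + 1
print("attractor length:", t - seen[state])   # 1 indicates a point attractor
```

Asynchronous or non-deterministic variants would change only the `step` call, which is what makes the comparison of updating schemes convenient to automate.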
arxiv-670712
cs/0208002
Theoretical limit of the compression for the information
<|reference_start|>Theoretical limit of the compression for the information: The pit recording of a file and the coefficient of compression are introduced. The theoretical limit of information compression, as the minimal coefficient of compression for a given alphabet length, is found.<|reference_end|>
arxiv
@article{lavrenov2002theoretical, title={Theoretical limit of the compression for the information}, author={A. Lavrenov}, journal={arXiv preprint arXiv:cs/0208002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208002}, primaryClass={cs.CR} }
lavrenov2002theoretical
arxiv-670713
cs/0208003
MV2-algorithm's clones
<|reference_start|>MV2-algorithm's clones: The clones of the MV2 algorithm for any radix are discussed. Three examples of such clones are presented.<|reference_end|>
arxiv
@article{lavrenov2002mv2-algorithm's, title={MV2-algorithm's clones}, author={A. Lavrenov}, journal={arXiv preprint arXiv:cs/0208003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208003}, primaryClass={cs.CR} }
lavrenov2002mv2-algorithm's
arxiv-670714
cs/0208004
Detecting Race Conditions in Parallel Programs that Use Semaphores
<|reference_start|>Detecting Race Conditions in Parallel Programs that Use Semaphores: We address the problem of detecting race conditions in programs that use semaphores for synchronization. Netzer and Miller showed that it is NP-complete to detect race conditions in programs that use many semaphores. We show in this paper that it remains NP-complete even if only two semaphores are used in the parallel programs. For the tractable case, i.e., using only one semaphore, we give two algorithms for detecting race conditions from the trace of executing a parallel program on p processors, where n semaphore operations are executed. The first algorithm determines in O(n) time whether a race condition exists between any two given operations. The second algorithm runs in O(np log n) time and outputs a compact representation from which one can determine in O(1) time whether a race condition exists between any two given operations. The second algorithm is near-optimal in that the running time is only O(log n) times the time required simply to write down the output.<|reference_end|>
arxiv
@article{klein2002detecting, title={Detecting Race Conditions in Parallel Programs that Use Semaphores}, author={Philip N. Klein, Hsueh-I Lu, and Rob H.B. Netzer}, journal={Algorithmica, 35(4):321-345, 2003}, year={2002}, doi={10.1007/s00453-002-1004-3}, archivePrefix={arXiv}, eprint={cs/0208004}, primaryClass={cs.DS cs.DC} }
klein2002detecting
arxiv-670715
cs/0208005
Probabilistic Search for Object Segmentation and Recognition
<|reference_start|>Probabilistic Search for Object Segmentation and Recognition: The problem of searching for a model-based scene interpretation is analyzed within a probabilistic framework. Object models are formulated as generative models for range data of the scene. A new statistical criterion, the truncated object probability, is introduced to infer an optimal sequence of object hypotheses to be evaluated for their match to the data. The truncated probability is partly determined by prior knowledge of the objects and partly learned from data. Some experiments on sequence quality and object segmentation and recognition from stereo data are presented. The article recovers classic concepts from object recognition (grouping, geometric hashing, alignment) from the probabilistic perspective and adds insight into the optimal ordering of object hypotheses for evaluation. Moreover, it introduces point-relation densities, a key component of the truncated probability, as statistical models of local surface shape.<|reference_end|>
arxiv
@article{hillenbrand2002probabilistic, title={Probabilistic Search for Object Segmentation and Recognition}, author={Ulrich Hillenbrand and Gerd Hirzinger}, journal={Proceedings ECCV 2002, Lecture Notes in Computer Science Vol. 2352, pp. 791-806}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208005}, primaryClass={cs.CV} }
hillenbrand2002probabilistic
arxiv-670716
cs/0208006
Rectangle Size Bounds and Threshold Covers in Communication Complexity
<|reference_start|>Rectangle Size Bounds and Threshold Covers in Communication Complexity: We investigate the power of the most important lower bound technique in randomized communication complexity, which is based on an evaluation of the maximal size of approximately monochromatic rectangles, minimized over all distributions on the inputs. While it is known that the 0-error version of this bound is polynomially tight for deterministic communication, nothing in this direction is known for constant error and randomized communication complexity. We first study a one-sided version of this bound and obtain that its value lies between the MA- and AM-complexities of the considered function. Hence the lower bound actually works for a (communication complexity) class between MA cap co-MA and AM cap co-AM. We also show that the MA-complexity of the disjointness problem is Omega(sqrt(n)). Following this we consider the conjecture that the lower bound method is polynomially tight for randomized communication complexity. First we disprove a distributional version of this conjecture. Then we give a combinatorial characterization of the value of the lower bound method, in which the optimization over all distributions is absent. This characterization is done by what we call a uniform threshold cover. We also study relaxations of this notion, namely approximate majority covers and majority covers, and compare these three notions in power, exhibiting exponential separations. Each of these covers captures a lower bound method previously used for randomized communication complexity.<|reference_end|>
arxiv
@article{klauck2002rectangle, title={Rectangle Size Bounds and Threshold Covers in Communication Complexity}, author={Hartmut Klauck}, journal={arXiv preprint arXiv:cs/0208006}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208006}, primaryClass={cs.CC} }
klauck2002rectangle
arxiv-670717
cs/0208007
On the graph coloring check-digit scheme with applications to verifiable secret sharing
<|reference_start|>On the graph coloring check-digit scheme with applications to verifiable secret sharing: In the paper we apply graph vertex coloring for verification of secret shares. We start by showing how to convert any graph into a number and vice versa. Next, a theoretical result concerning properties of n-colorable graphs is stated and proven. From this result we derive a graph coloring check-digit scheme. Feasibility of the proposed scheme increases with the size of the number whose digits are checked and with the overall probability of errors. The check-digit scheme is used to build a share verification method that does not require cooperation of a third party. It allows implementing a verification structure different from the access structure. It does not depend on a particular secret sharing method. It can be used as long as the secret shares can be represented by numbers or graphs.<|reference_end|>
arxiv
@article{kulesza2002on, title={On the graph coloring check-digit scheme with applications to verifiable secret sharing}, author={Kamil Kulesza, Zbigniew Kotulski}, journal={arXiv preprint arXiv:cs/0208007}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208007}, primaryClass={cs.CR cs.DM math.CO} }
kulesza2002on
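The record above mentions converting any graph into a number and back. One natural encoding, shown below in Python, reads the upper triangle of the adjacency matrix as a bit string; this is an illustrative choice and not necessarily the encoding used in the paper.

```python
# One possible graph<->number encoding: interpret the upper triangle of the
# adjacency matrix of an n-vertex graph as a binary number, bit by bit.

def graph_to_number(n, edges):
    bits, pos = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in edges or (j, i) in edges:
                bits |= 1 << pos
            pos += 1
    return bits

def number_to_graph(n, bits):
    edges, pos = set(), 0
    for i in range(n):
        for j in range(i + 1, n):
            if (bits >> pos) & 1:
                edges.add((i, j))
            pos += 1
    return edges

e = {(0, 1), (1, 2), (0, 3)}
num = graph_to_number(4, e)
print(num, number_to_graph(4, num))   # round-trips to the same edge set
```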
arxiv-670718
cs/0208008
Soft Concurrent Constraint Programming
<|reference_start|>Soft Concurrent Constraint Programming: Soft constraints extend classical constraints to represent multiple consistency levels, and thus provide a way to express preferences, fuzziness, and uncertainty. While there are many soft constraint solving formalisms, even distributed ones, by now there seems to be no concurrent programming framework where soft constraints can be handled. In this paper we show how the classical concurrent constraint (cc) programming framework can work with soft constraints, and we also propose an extension of cc languages which can use soft constraints to prune and direct the search for a solution. We believe that this new programming paradigm, called soft cc (scc), can be also very useful in many web-related scenarios. In fact, the language level allows web agents to express their interaction and negotiation protocols, and also to post their requests in terms of preferences, and the underlying soft constraint solver can find an agreement among the agents even if their requests are incompatible.<|reference_end|>
arxiv
@article{bistarelli2002soft, title={Soft Concurrent Constraint Programming}, author={S. Bistarelli (1), U. Montanari (2) and F. Rossi (3) ((1) Istituto di Informatica e Telematica, C.N.R., Pisa, Italy, (2) Dipartimento di Informatica, Universita di Pisa, Italy, (3) Dipartimento di Matematica Pura ed Applicata, Universita di Padova, Italy)}, journal={ACM Trans. Comput. Log. 7(3): 563-589 (2006)}, year={2002}, doi={10.1145/1149114.1149118}, archivePrefix={arXiv}, eprint={cs/0208008}, primaryClass={cs.PL cs.AI} }
bistarelli2002soft
arxiv-670719
cs/0208009
Offline Specialisation in Prolog Using a Hand-Written Compiler Generator
<|reference_start|>Offline Specialisation in Prolog Using a Hand-Written Compiler Generator: The so called ``cogen approach'' to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that the cogen approach is also applicable to the specialisation of logic programs (also called partial deduction) and leads to effective specialisers. Moreover, using good binding-time annotations, the speed-ups of the specialised programs are comparable to the speed-ups obtained with online specialisers. The paper first develops a generic approach to offline partial deduction and then a specific offline partial deduction method, leading to the offline system LIX for pure logic programs. While this is a usable specialiser by itself, it is used to develop the cogen system LOGEN. Given a program, a specification of what inputs will be static, and an annotation specifying which calls should be unfolded, LOGEN generates a specialised specialiser for the program at hand. Running this specialiser with particular values for the static inputs results in the specialised program. While this requires two steps instead of one, the efficiency of the specialisation process is improved in situations where the same program is specialised multiple times. The paper also presents and evaluates an automatic binding-time analysis that is able to derive the annotations. While the derived annotations are still suboptimal compared to hand-crafted ones, they enable non-expert users to use the LOGEN system in a fully automated way. Finally, LOGEN is extended so as to directly support a large part of Prolog's declarative and non-declarative features and so as to be able to perform so called mixline specialisations.<|reference_end|>
arxiv
@article{leuschel2002offline, title={Offline Specialisation in Prolog Using a Hand-Written Compiler Generator}, author={Michael Leuschel, Jesper Joergensen, Wim Vanhoof, Maurice Bruynooghe}, journal={arXiv preprint arXiv:cs/0208009}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208009}, primaryClass={cs.PL cs.AI} }
leuschel2002offline
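As a loose illustration of the offline, two-stage idea in the record above: given a static input, a specialiser emits a residual program in which that input has been compiled away. The sketch below is a toy in Python, not LOGEN, and `specialise_power` is a hypothetical example function rather than anything from the paper.

```python
# Toy offline specialisation: the exponent n is annotated as static, so the
# specialiser unfolds the multiplication loop and emits a residual program
# that only depends on the dynamic argument x.

def specialise_power(n):
    body = " * ".join(["x"] * n) if n else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)                  # compile the residual program
    return src, namespace[f"power_{n}"]

src, cube = specialise_power(3)
print(src)        # prints the residual function's source, with the loop unfolded
print(cube(5))    # 125
```

Specialising once and reusing `cube` for many dynamic inputs mirrors the efficiency argument made for the two-step cogen approach.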
arxiv-670720
cs/0208010
TerraService.NET: An Introduction to Web Services
<|reference_start|>TerraService.NET: An Introduction to Web Services: This article explores the design and construction of a geo-spatial Internet web service application from the host web site perspective and from the perspective of an application using the web service. The TerraService.NET web service was added to the popular TerraServer database and web site with no major structural changes to the database. The article discusses web service design, implementation, and deployment concepts and design guidelines. Web services enable applications that aggregate and interact with information and resources from Internet-scale distributed servers. The article presents the design of two USDA applications that interoperate with database and web service resources in Fort Collins, Colorado and the TerraService web service located in Tukwila, Washington.<|reference_end|>
arxiv
@article{barclay2002terraservice.net:, title={TerraService.NET: An Introduction to Web Services}, author={Tom Barclay, Jim Gray, Eric Strand, Steve Ekblad, Jeffrey Richter}, journal={arXiv preprint arXiv:cs/0208010}, year={2002}, number={MSR-TR-2002-53}, archivePrefix={arXiv}, eprint={cs/0208010}, primaryClass={cs.DL cs.DB} }
barclay2002terraservice.net:
arxiv-670721
cs/0208011
TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and Data Exchange
<|reference_start|>TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and Data Exchange: Large datasets are most economically transmitted via parcel post given the current economics of wide-area networking. This article describes how the Sloan Digital Sky Survey ships terabyte scale datasets both within the US and to Europe and Asia. We use 3GT storage bricks (GHz processor, GB RAM, Gbps Ethernet, TB disk) for about 2k$ each. These bricks act as database servers on the LAN. They are loaded at one site and read at the second site. The paper describes the bricks, their economics, and some software issues that they raise.<|reference_end|>
arxiv
@article{gray2002terascale, title={TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and Data Exchange}, author={Jim Gray, Wyman Chong, Tom Barclay, Alex Szalay, Jan vandenBerg}, journal={arXiv preprint arXiv:cs/0208011}, year={2002}, number={MSR-TR-2002-54}, archivePrefix={arXiv}, eprint={cs/0208011}, primaryClass={cs.NI cs.DC} }
gray2002terascale
arxiv-670722
cs/0208012
Online Scientific Data Curation, Publication, and Archiving
<|reference_start|>Online Scientific Data Curation, Publication, and Archiving: Science projects are data publishers. The scale and complexity of current and future science data changes the nature of the publication process. Publication is becoming a major project component. At a minimum, a project must preserve the ephemeral data it gathers. Derived data can be reconstructed from metadata, but metadata is ephemeral. Longer term, a project should expect some archive to preserve the data. We observe that published scientific data needs to be available forever; this gives rise to the data pyramid of versions and to data inflation where the derived data volumes explode. As an example, this article describes the Sloan Digital Sky Survey (SDSS) strategies for data publication, data access, curation, and preservation.<|reference_end|>
arxiv
@article{gray2002online, title={Online Scientific Data Curation, Publication, and Archiving}, author={Jim Gray, Alexander S. Szalay, Ani R. Thakar, Christopher Stoughton, Jan vandenBerg}, journal={arXiv preprint arXiv:cs/0208012}, year={2002}, doi={10.1117/12.461524}, number={MSR-TR-2002-74}, archivePrefix={arXiv}, eprint={cs/0208012}, primaryClass={cs.DL} }
gray2002online
arxiv-670723
cs/0208013
Petabyte Scale Data Mining: Dream or Reality?
<|reference_start|>Petabyte Scale Data Mining: Dream or Reality?: Science is becoming very data intensive. Today's astronomy datasets with tens of millions of galaxies already present substantial challenges for data mining. In less than 10 years the catalogs are expected to grow to billions of objects, and image archives will reach Petabytes. Imagine having a 100GB database in 1996, when disk scanning speeds were 30MB/s, and database tools were immature. Such a task today is trivial, almost manageable with a laptop. We think that the issue of a PB database will be very similar in six years. In this paper we scale our current experiments in data archiving and analysis on the Sloan Digital Sky Survey data six years into the future. We analyze these projections and look at the requirements of performing data mining on such data sets. We conclude that the task scales rather well: we could do the job today, although it would be expensive. There do not seem to be any show-stoppers that would prevent us from storing and using a Petabyte dataset six years from today.<|reference_end|>
arxiv
@article{szalay2002petabyte, title={Petabyte Scale Data Mining: Dream or Reality?}, author={Alexander S. Szalay, Jim Gray, Jan vandenBerg}, journal={SPIE Astronomy Telescopes and Instruments, 22-28 August 2002, Waikoloa, Hawaii}, year={2002}, doi={10.1117/12.461427}, number={MSR-TR-2002-84}, archivePrefix={arXiv}, eprint={cs/0208013}, primaryClass={cs.DB cs.CE} }
szalay2002petabyte
arxiv-670724
cs/0208014
Web Services for the Virtual Observatory
<|reference_start|>Web Services for the Virtual Observatory: Web Services form a new, emerging paradigm to handle distributed access to resources over the Internet. There are platform independent standards (SOAP, WSDL), which make the developers' task considerably easier. This article discusses how web services could be used in the context of the Virtual Observatory. We envisage a multi-layer architecture, with interoperating services. A well-designed lower layer consisting of simple, standard services implemented by most data providers will go a long way towards establishing a modular architecture. More complex applications can be built upon this core layer. We present two prototype applications, the SdssCutout and the SkyQuery as examples of this layered architecture.<|reference_end|>
arxiv
@article{szalay2002web, title={Web Services for the Virtual Observatory}, author={Alexander S. Szalay, Tamas Budavari, Tanu Malika, Jim Gray, Ani Thakara}, journal={SPIE Astronomy Telescopes and Instruments, 22-28 August 2002, Waikoloa, Hawaii}, year={2002}, doi={10.1117/12.463947}, number={MSR-TR-2002-85}, archivePrefix={arXiv}, eprint={cs/0208014}, primaryClass={cs.DC cs.DL} }
szalay2002web
arxiv-670725
cs/0208015
Spatial Clustering of Galaxies in Large Datasets
<|reference_start|>Spatial Clustering of Galaxies in Large Datasets: Datasets with tens of millions of galaxies present new challenges for the analysis of spatial clustering. We have built a framework that integrates a database of object catalogs, tools for creating masks of bad regions, and a fast (NlogN) correlation code. This system has enabled unprecedented efficiency in carrying out the analysis of galaxy clustering in the SDSS catalog. A similar approach is used to compute the three-dimensional spatial clustering of galaxies on very large scales. We describe our strategy to estimate the effect of photometric errors using a database. We discuss our efforts as an early example of data-intensive science. While it would have been possible to get these results without the framework we describe, it will be infeasible to perform these computations on the future huge datasets without using this framework.<|reference_end|>
arxiv
@article{szalay2002spatial, title={Spatial Clustering of Galaxies in Large Datasets}, author={Alexander S. Szalay, Tamas Budavari, Andrew Connolly, Jim Gray, Takahiko Matsubara, Adrian Pope, Istvan Szapudi}, journal={SPIE Astronomy Telescopes and Instruments, 22-28 August 2002, Waikoloa, Hawaii}, year={2002}, doi={10.1117/12.476761}, number={MSR-TR-2002-86}, archivePrefix={arXiv}, eprint={cs/0208015}, primaryClass={cs.DB cs.DS} }
szalay2002spatial
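The fast correlation analysis in the record above rests on tree-based pair counting. The sketch below illustrates that primitive with scipy's cKDTree on mock data; it is not the authors' O(N log N) code and ignores survey masks, edge corrections, and error estimation.

```python
# Tree-based pair counting, the core primitive of fast correlation estimators.
# Uses scipy's cKDTree purely to illustrate the idea on mock uniform data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
data = rng.random((10_000, 3))            # mock galaxy positions in a unit box
rand = rng.random((10_000, 3))            # random catalogue for normalisation

r = 0.05
dd = cKDTree(data).count_neighbors(cKDTree(data), r)   # data-data pairs within r
rr = cKDTree(rand).count_neighbors(cKDTree(rand), r)   # random-random pairs within r
xi = dd / rr - 1.0                        # simple natural estimator of clustering
print(f"xi(<{r}) ~ {xi:.4f}")             # close to 0 for an unclustered mock
```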
arxiv-670726
cs/0208016
A note on fractional derivative modeling of broadband frequency-dependent absorption: Model III
<|reference_start|>A note on fractional derivative modeling of broadband frequency-dependent absorption: Model III: So far, the fractional derivative model has mainly been related to the modelling of complicated solid viscoelastic materials. In this study, we try to build a fractional derivative PDE model for broadband ultrasound propagation through human tissues.<|reference_end|>
arxiv
@article{chen2002a, title={A note on fractional derivative modeling of broadband frequency-dependent absorption: Model III}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0208016}, year={2002}, number={Simula Research Laboratory Report, April 2002}, archivePrefix={arXiv}, eprint={cs/0208016}, primaryClass={cs.CE cs.CC} }
chen2002a
arxiv-670727
cs/0208017
Linking Makinson and Kraus-Lehmann-Magidor preferential entailments
<|reference_start|>Linking Makinson and Kraus-Lehmann-Magidor preferential entailments: About ten years ago, various notions of preferential entailment were introduced. The main reference is a paper by Kraus, Lehmann and Magidor (KLM), one of the main competitors being a more general version defined by Makinson (MAK). These two versions have already been compared, but it is time to revisit these comparisons. Here are our three main results: (1) These two notions are equivalent, provided that we restrict our attention, as done in KLM, to the cases where the entailment respects logical equivalence (on the left and on the right). (2) A serious simplification of the description of the fundamental cases in which MAK is equivalent to KLM, including a natural passage in both ways. (3) The two previous results are given for preferential entailments more general than considered in some of the original texts, but they apply also to the original definitions and, for this particular case also, the models can be simplified.<|reference_end|>
arxiv
@article{moinard2002linking, title={Linking Makinson and Kraus-Lehmann-Magidor preferential entailments}, author={Yves Moinard}, journal={arXiv preprint arXiv:cs/0208017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208017}, primaryClass={cs.AI} }
moinard2002linking
arxiv-670728
cs/0208018
Does P = NP?
<|reference_start|>Does P = NP?: This paper considers the question of P = NP in the context of the polynomial time SAT algorithm. It posits a proposition, dependent on the existence of a conjectured problem, that even where the algorithm is shown to solve SAT in polynomial time it remains theoretically possible for there to exist a non-deterministically polynomial (NP) problem for which the algorithm does not provide a polynomial (P) time solution. The paper leaves open, as a subject of continuing research, the question of the existence of an instance of the conjectured problem.<|reference_end|>
arxiv
@article{sauerbier2002does, title={Does P = NP?}, author={C. Sauerbier}, journal={arXiv preprint arXiv:cs/0208018}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208018}, primaryClass={cs.CC} }
sauerbier2002does
arxiv-670729
cs/0208019
Knowledge Representation
<|reference_start|>Knowledge Representation: This work analyses the main features that should be present in knowledge representation. It suggests a model for representation and a way to implement this model in software. Representation takes care of both low-level sensor information and high-level concepts.<|reference_end|>
arxiv
@article{birukou2002knowledge, title={Knowledge Representation}, author={Mikalai Birukou}, journal={arXiv preprint arXiv:cs/0208019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208019}, primaryClass={cs.AI} }
birukou2002knowledge
arxiv-670730
cs/0208020
Using the DIFF Command for Natural Language Processing
<|reference_start|>Using the DIFF Command for Natural Language Processing: Diff is a software program that detects differences between two data sets and is useful in natural language processing. This paper shows several examples of the application of diff. They include the detection of differences between two different datasets, extraction of rewriting rules, merging of two different datasets, and the optimal matching of two different data sets. Since diff comes with any standard UNIX system, it is readily available and very easy to use. Our studies showed that diff is a practical tool for research into natural language processing.<|reference_end|>
arxiv
@article{murata2002using, title={Using the DIFF Command for Natural Language Processing}, author={Masaki Murata and Hitoshi Isahara}, journal={arXiv preprint arXiv:cs/0208020}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208020}, primaryClass={cs.CL} }
murata2002using
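The same idea as the record above, using Python's difflib instead of the UNIX diff command: aligned differences between two sentences suggest candidate rewriting rules. The sample sentences are invented for illustration.

```python
# Aligned differences between two token sequences, analogous to running diff
# on two datasets, yield candidate rewriting rules for the differing spans.
import difflib

before = "the results was obtained by the old method".split()
after  = "the results were obtained by a new method".split()

matcher = difflib.SequenceMatcher(a=before, b=after)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(f"{tag}: {' '.join(before[i1:i2])!r} -> {' '.join(after[j1:j2])!r}")
# replace: 'was' -> 'were'
# replace: 'the old' -> 'a new'
```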
arxiv-670731
cs/0208021
Implicit Simulations using Messaging Protocols
<|reference_start|>Implicit Simulations using Messaging Protocols: A novel algorithm for performing parallel, distributed computer simulations on the Internet using IP control messages is introduced. The algorithm employs carefully constructed ICMP packets which enable the required computations to be completed as part of the standard IP communication protocol. After providing a detailed description of the algorithm, experimental applications in the areas of stochastic neural networks and deterministic cellular automata are discussed. As an example of the algorithm's potential power, a simulation of a deterministic cellular automaton involving 10^5 Internet connected devices was performed.<|reference_end|>
arxiv
@article{kohring2002implicit, title={Implicit Simulations using Messaging Protocols}, author={G.A. Kohring}, journal={Int. J. Mod. Phys. C: Computers and Physics, vol. 14 , pp. 203-214 (2003).}, year={2002}, doi={10.1142/S012918310300436X}, archivePrefix={arXiv}, eprint={cs/0208021}, primaryClass={cs.DC} }
kohring2002implicit
arxiv-670732
cs/0208022
Symbolic Methodology in Numeric Data Mining: Relational Techniques for Financial Applications
<|reference_start|>Symbolic Methodology in Numeric Data Mining: Relational Techniques for Financial Applications: Currently statistical and artificial neural network methods dominate in financial data mining. Alternative relational (symbolic) data mining methods have shown their effectiveness in robotics, drug design and other applications. Traditionally symbolic methods prevail in the areas with significant non-numeric (symbolic) knowledge, such as relative location in robot navigation. At first glance, stock market forecast looks like a pure numeric area irrelevant to symbolic methods. One of our major goals is to show that financial time series can benefit significantly from relational data mining based on symbolic methods. The paper overviews relational data mining methodology and develops these techniques for financial data mining.<|reference_end|>
arxiv
@article{kovalerchuk2002symbolic, title={Symbolic Methodology in Numeric Data Mining: Relational Techniques for Financial Applications}, author={B. Kovalerchuk, E. Vityaev, H. Yusupov}, journal={arXiv preprint arXiv:cs/0208022}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208022}, primaryClass={cs.CE} }
kovalerchuk2002symbolic
arxiv-670733
cs/0208023
The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms
<|reference_start|>The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms: Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst or best-case) scenarios. To synthesize boundary-point scenarios a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. The algorithms used in our method utilize implicit backward search using branch and bound techniques and start from given target events. This aims to reduce the search complexity drastically. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average-case analyses. We hope for our method to serve as a model for applying systematic scenario generation to other multicast protocols.<|reference_end|>
arxiv
@article{helmy2002the, title={The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms}, author={Ahmed Helmy, Sandeep Gupta, Deborah Estrin}, journal={arXiv preprint arXiv:cs/0208023}, year={2002}, doi={10.1109/TNET.2003.822643}, archivePrefix={arXiv}, eprint={cs/0208023}, primaryClass={cs.NI cs.GT} }
helmy2002the
arxiv-670734
cs/0208024
Contact-Based Architecture for Resource Discovery (CARD) in Large Scale MANets
<|reference_start|>Contact-Based Architecture for Resource Discovery (CARD) in Large Scale MANets: In this paper we propose a novel architecture, CARD, for resource discovery in large scale Mobile Ad hoc Networks (MANets), which may scale up to thousands of nodes and may span wide geographical regions. Unlike previously proposed schemes, our architecture avoids expensive mechanisms such as global flooding as well as complex coordination between nodes to form a hierarchy. CARD is also independent of any external source of information such as GPS. In our architecture nodes within a limited number of hops from each node form the neighborhood of that node. Resources within the neighborhood can be readily accessed with the help of a proactive scheme within the neighborhood. For accessing resources beyond the neighborhood, each node also maintains a few distant nodes called contacts. Contacts help in creating a small world in the network and provide an efficient way to query for resources beyond the neighborhood. As the number of contacts of a node increases, the network view (reachability) of the node increases. Paths to contacts are validated periodically to adapt to mobility. We present mechanisms for contact selection and maintenance that attempt to increase reachability while minimizing overhead. Our simulation results show a clear trade-off between increase in reachability on one hand, and contact selection and maintenance overhead on the other. Our results suggest that CARD can be configured to provide a desirable reachability distribution for different network sizes. Comparisons with other schemes for resource discovery, such as flooding and bordercasting, show our architecture to be much more efficient and scalable.<|reference_end|>
arxiv
@article{garg2002contact-based, title={Contact-Based Architecture for Resource Discovery (CARD) in Large Scale MANets}, author={Saurabh Garg, Priyatham Pamu, Nitin Nahata, Ahmed Helmy}, journal={arXiv preprint arXiv:cs/0208024}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208024}, primaryClass={cs.NI} }
garg2002contact-based
arxiv-670735
cs/0208025
Efficient Micro-Mobility using Intra-domain Multicast-based Mechanisms (M&M)
<|reference_start|>Efficient Micro-Mobility using Intra-domain Multicast-based Mechanisms (M&M): One of the most important metrics in the design of IP mobility protocols is the handover performance. The current Mobile IP (MIP) standard has been shown to exhibit poor handover performance. Most other work attempts to modify MIP to slightly improve its efficiency, while others propose complex techniques to replace MIP. Rather than taking these approaches, we instead propose a new architecture for providing efficient and smooth handover, while being able to co-exist and inter-operate with other technologies. Specifically, we propose an intra-domain multicast-based mobility architecture, where a visiting mobile is assigned a multicast address to use while moving within a domain. Efficient handover is achieved using standard multicast join/prune mechanisms. Two approaches are proposed and contrasted. The first introduces the concept of proxy-based mobility, while the other uses algorithmic mapping to obtain the multicast address of visiting mobiles. We show that the algorithmic mapping approach has several advantages over the proxy approach, and provide mechanisms to support it. Network simulation (using NS-2) is used to evaluate our scheme and compare it to other routing-based micro-mobility schemes - CIP and HAWAII. The proactive handover results show that both M&M and CIP show low handoff delay and packet reordering depth as compared to HAWAII. The reason for M&M's comparable performance with CIP is that both use bi-cast in proactive handover. M&M, however, handles multiple border routers in a domain, where CIP fails. We also provide a handover algorithm leveraging the proactive path setup capability of M&M, which is expected to outperform CIP in case of reactive handover.<|reference_end|>
arxiv
@article{helmy2002efficient, title={Efficient Micro-Mobility using Intra-domain Multicast-based Mechanisms (M&M)}, author={Ahmed Helmy, Muhammad Jaseemuddin, Ganesha Bhaskara}, journal={arXiv preprint arXiv:cs/0208025}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208025}, primaryClass={cs.NI} }
helmy2002efficient
arxiv-670736
cs/0208026
Mathematical basis for polySAT implication operator
<|reference_start|>Mathematical basis for polySAT implication operator: The mathematical basis motivating the "implication operator" of the polySAT algorithm and its function is examined. Such is not undertaken with the onerous rigor of symbolic mathematics; a more intuitive visual appeal is employed to present some of the mathematical premises underlying the function of the implication operator.<|reference_end|>
arxiv
@article{sauerbier2002mathematical, title={Mathematical basis for polySAT implication operator}, author={Charles Sauerbier}, journal={arXiv preprint arXiv:cs/0208026}, year={2002}, number={S3E-2002-03}, archivePrefix={arXiv}, eprint={cs/0208026}, primaryClass={cs.CC cs.LO} }
sauerbier2002mathematical
arxiv-670737
cs/0208027
A Unified Theory of Shared Memory Consistency
<|reference_start|>A Unified Theory of Shared Memory Consistency: Memory consistency models have been developed to specify what values may be returned by a read given that, in a distributed system, memory operations may only be partially ordered. Before this work, consistency models were defined independently. Each model followed a set of rules which was separate from the rules of every other model. In our work we have defined a set of four consistency properties. Any subset of the four properties yields a set of rules which constitute a consistency model. Every consistency model previously described in the literature can be defined based on our four properties. Therefore, we present these properties as a unified theory of shared memory consistency.<|reference_end|>
arxiv
@article{steinke2002a, title={A Unified Theory of Shared Memory Consistency}, author={Robert C. Steinke and Gary J. Nutt}, journal={arXiv preprint arXiv:cs/0208027}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208027}, primaryClass={cs.DC} }
steinke2002a
arxiv-670738
cs/0208028
A logical reconstruction of SPKI
<|reference_start|>A logical reconstruction of SPKI: SPKI/SDSI is a proposed public key infrastructure standard that incorporates the SDSI public key infrastructure. SDSI's key innovation was the use of local names. We previously introduced a Logic of Local Name Containment that has a clear semantics and was shown to completely characterize SDSI name resolution. Here we show how our earlier approach can be extended to deal with a number of key features of SPKI, including revocation, expiry dates, and tuple reduction. We show that these extensions add relatively little complexity to the logic. In particular, we do not need a nonmonotonic logic to capture revocation. We then use our semantics to examine SPKI's tuple reduction rules. Our analysis highlights places where SPKI's informal description of tuple reduction is somewhat vague, and shows that extra reduction rules are necessary in order to capture general information about binding and authorization.<|reference_end|>
arxiv
@article{halpern2002a, title={A logical reconstruction of SPKI}, author={Joseph Y. Halpern and Ron van der Meyden}, journal={arXiv preprint arXiv:cs/0208028}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208028}, primaryClass={cs.CR cs.LO} }
halpern2002a
arxiv-670739
cs/0208029
Logic programming in the context of multiparadigm programming: the Oz experience
<|reference_start|>Logic programming in the context of multiparadigm programming: the Oz experience: Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.<|reference_end|>
arxiv
@article{van roy2002logic, title={Logic programming in the context of multiparadigm programming: the Oz experience}, author={Peter Van Roy, Per Brand, Denys Duchier, Seif Haridi, Martin Henz, Christian Schulte}, journal={arXiv preprint arXiv:cs/0208029}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208029}, primaryClass={cs.PL} }
van roy2002logic
arxiv-670740
cs/0208030
A direct time-domain FEM modeling of broadband frequency-dependent absorption with the presence of matrix fractional power: Model I
<|reference_start|>A direct time-domain FEM modeling of broadband frequency-dependent absorption with the presence of matrix fractional power: Model I: The frequency-dependent attenuation of broadband acoustics is often encountered in many different areas. However, the related time domain simulation is rarely found in the literature due to enormous technical difficulty. The currently popular relaxation models with the presence of convolution operation require some material parameters which are not readily available. In this study, three reports are contributed to address broadband ultrasound frequency-dependent absorptions using the readily available empirical parameters. This report is the first in the series concerned with developing a direct time domain FEM formulation. The next two reports are about the frequency decomposition model and the fractional derivative model.<|reference_end|>
arxiv
@article{chen2002a, title={A direct time-domain FEM modeling of broadband frequency-dependent absorption with the presence of matrix fractional power: Model I}, author={W Chen}, journal={arXiv preprint arXiv:cs/0208030}, year={2002}, number={Simula Research Laboratory Report, April 2002}, archivePrefix={arXiv}, eprint={cs/0208030}, primaryClass={cs.CE cs.CG} }
chen2002a
arxiv-670741
cs/0208031
Parameterized Type Definitions in Mathematica: Methods and Advantages
<|reference_start|>Parameterized Type Definitions in Mathematica: Methods and Advantages: The theme of symbolic computation in algebraic categories has become of utmost importance in the last decade since it enables the automatic modeling of modern algebra theories. On this theoretical background, the present paper reveals the utility of the parameterized categorical approach by deriving a multivariate polynomial category (over various coefficient domains), which is used by our Mathematica implementation of Buchberger's algorithms for determining the Groebner basis. These implementations are designed according to domain and category parameterization principles underlining their advantages: operation protection, inheritance, generality, easy extendibility. In particular, such an extension of Mathematica, a widely used symbolic computation system, with a new type system has a certain practical importance. The approach we propose for Mathematica is inspired from D. Gruntz and M. Monagan's work in Gauss, for Maple.<|reference_end|>
arxiv
@article{andreica2002parameterized, title={Parameterized Type Definitions in Mathematica: Methods and Advantages}, author={Alina Andreica}, journal={arXiv preprint arXiv:cs/0208031}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208031}, primaryClass={cs.SC} }
andreica2002parameterized
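The record above concerns a Mathematica implementation of Buchberger's algorithm over parameterized coefficient domains. Purely to illustrate the object being computed, the snippet below obtains a Groebner basis with sympy in Python for a small hypothetical system; it is unrelated to the paper's type-system machinery.

```python
# Illustration only: compute a Groebner basis of a small ideal over the
# rationals with lexicographic term order, the same kind of result the
# paper's parameterized Mathematica implementation produces.
from sympy import symbols, groebner

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order="lex")
print(G)   # the reduced Groebner basis of the ideal
```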
arxiv-670742
cs/0208032
First-order Logic as a Constraint Programming Language
<|reference_start|>First-order Logic as a Constraint Programming Language: We provide a denotational semantics for first-order logic that captures the two-level view of the computation process typical for constraint programming. At one level we have the usual program execution. At the other level an automatic maintenance of the constraint store takes place. We prove that the resulting semantics is sound with respect to the truth definition. By instantiating it by specific forms of constraint management policies we obtain several sound evaluation policies of first-order formulas. This semantics can also be used as a basis for sound implementation of constraint maintenance in presence of block declarations and conditionals.<|reference_end|>
arxiv
@article{apt2002first-order, title={First-order Logic as a Constraint Programming Language}, author={K.R. Apt and C.F.M. Vermeulen}, journal={"Logic for Programming, Artificial Intelligence and Reasoning", Proceedings of the 9th International Conference LPAR2002, Tbilisi, Georgia; Editors: A. Voronkov and M. Baaz; Springer Verlag LNAI2514; pages 19-35; October 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208032}, primaryClass={cs.LO} }
apt2002first-order
arxiv-670743
cs/0208033
Complete Axiomatizations for Reasoning About Knowledge and Time
<|reference_start|>Complete Axiomatizations for Reasoning About Knowledge and Time: Sound and complete axiomatizations are provided for a number of different logics involving modalities for knowledge and time. These logics arise from different choices for various parameters. All the logics considered involve the discrete time linear temporal logic operators `next' and `until' and an operator for the knowledge of each of a number of agents. Both the single agent and multiple agent cases are studied: in some instances of the latter there is also an operator for the common knowledge of the group of all agents. Four different semantic properties of agents are considered: whether they have a unique initial state, whether they operate synchronously, whether they have perfect recall, and whether they learn. The property of no learning is essentially dual to perfect recall. Not all settings of these parameters lead to recursively axiomatizable logics, but sound and complete axiomatizations are presented for all the ones that do.<|reference_end|>
arxiv
@article{halpern2002complete, title={Complete Axiomatizations for Reasoning About Knowledge and Time}, author={Joseph Y. Halpern, Ron van der Meyden, and Moshe Y. Vardi}, journal={arXiv preprint arXiv:cs/0208033}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208033}, primaryClass={cs.LO cs.AI} }
halpern2002complete
arxiv-670744
cs/0208034
Causes and Explanations: A Structural-Model Approach Part II: Explanations
<|reference_start|>Causes and Explanations: A Structural-Model Approach Part II: Explanations: We propose new definitions of (causal) explanation, using structural equations to model counterfactuals. The definition is based on the notion of actual cause, as defined and motivated in a companion paper. Essentially, an explanation is a fact that is not known for certain but, if found to be true, would constitute an actual cause of the fact to be explained, regardless of the agent's initial uncertainty. We show that the definition handles well a number of problematic examples from the literature.<|reference_end|>
arxiv
@article{halpern2002causes, title={Causes and Explanations: A Structural-Model Approach. Part II: Explanations}, author={Joseph Y. Halpern and Judea Pearl}, journal={arXiv preprint arXiv:cs/0208034}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208034}, primaryClass={cs.AI} }
halpern2002causes
arxiv-670745
cs/0208035
Evaluation of Coreference Rules on Complex Narrative Texts
<|reference_start|>Evaluation of Coreference Rules on Complex Narrative Texts: This article studies the problem of assessing the relevance of each of the rules of a reference resolution system. The reference solver described here stems from a formal model of reference and is integrated in a reference processing workbench. Evaluation of the reference resolution is essential, as it enables differential evaluation of individual rules. Numerical values of these measures are given, and discussed, for simple selection rules and other processing rules; such measures are then studied for numerical parameters.<|reference_end|>
arxiv
@article{popescu-belis2002evaluation, title={Evaluation of Coreference Rules on Complex Narrative Texts}, author={Andrei Popescu-Belis, Isabelle Robba}, journal={Proceedings of DAARC2 (Discourse Anaphora and Anaphor Resolution Colloquium), Lancaster, UK, 1998, p.178-185}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208035}, primaryClass={cs.CL} }
popescu-belis2002evaluation
arxiv-670746
cs/0208036
Three New Methods for Evaluating Reference Resolution
<|reference_start|>Three New Methods for Evaluating Reference Resolution: Reference resolution on extended texts (several thousand references) cannot be evaluated manually. An evaluation algorithm has been proposed for the MUC tests, using equivalence classes for the coreference relation. However, we show here that this algorithm is too indulgent, yielding good scores even for poor resolution strategies. We elaborate on the same formalism to propose two new evaluation algorithms, comparing them first with the MUC algorithm and giving then results on a variety of examples. A third algorithm using only distributional comparison of equivalence classes is finally described; it assesses the relative importance of the recall vs. precision errors.<|reference_end|>
arxiv
@article{popescu-belis2002three, title={Three New Methods for Evaluating Reference Resolution}, author={Andrei Popescu-Belis, Isabelle Robba}, journal={Proceedings of the LREC'98 Workshop on Linguistic Coreference, Madrid, Spain, 1998}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208036}, primaryClass={cs.CL} }
popescu-belis2002three
arxiv-670747
cs/0208037
Cooperation between Pronoun and Reference Resolution for Unrestricted Texts
<|reference_start|>Cooperation between Pronoun and Reference Resolution for Unrestricted Texts: Anaphora resolution is envisaged in this paper as part of the reference resolution process. A general open architecture is proposed, which can be particularized and configured in order to simulate some classic anaphora resolution methods. With the aim of improving pronoun resolution, the system takes advantage of elementary cues about characters of the text, which are represented through a particular data structure. In its most robust configuration, the system uses only a general lexicon, a local morpho-syntactic parser and a dictionary of synonyms. A short comparative corpus analysis shows that narrative texts are the most suitable for testing such a system.<|reference_end|>
arxiv
@article{popescu-belis2002cooperation, title={Cooperation between Pronoun and Reference Resolution for Unrestricted Texts}, author={Andrei Popescu-Belis, Isabelle Robba}, journal={Proceedings of the ACL'97 Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, Madrid, Spain, 1998, p.94-99}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208037}, primaryClass={cs.CL} }
popescu-belis2002cooperation
arxiv-670748
cs/0208038
Reference Resolution Beyond Coreference: a Conceptual Frame and its Application
<|reference_start|>Reference Resolution Beyond Coreference: a Conceptual Frame and its Application: A model for reference use in communication is proposed, from a representationist point of view. Both the sender and the receiver of a message handle representations of their common environment, including mental representations of objects. Reference resolution by a computer is viewed as the construction of object representations using referring expressions from the discourse, whereas often only coreference links between such expressions are looked for. Differences between these two approaches are discussed. The model has been implemented with elementary rules, and tested on complex narrative texts (hundreds to thousands of referring expressions). The results support the mental representations paradigm.<|reference_end|>
arxiv
@article{popescu-belis2002reference, title={Reference Resolution Beyond Coreference: a Conceptual Frame and its Application}, author={Andrei Popescu-Belis, Isabelle Robba, Gerard Sabah}, journal={Proceedings of COLING-ACL'98, Montreal, Canada, 1998, p.1046-1052}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208038}, primaryClass={cs.CL} }
popescu-belis2002reference
arxiv-670749
cs/0208039
A Virtual Library of Technical Publications
<|reference_start|>A Virtual Library of Technical Publications: Through a collaborative effort, the Fermilab Information Resources Department and Computing Division have created a "virtual library" of technical publications that provides public access to electronic full-text documents. This paper will discuss the vision, planning and milestones of the project, as well as the hardware, software and interdepartmental cooperation components.<|reference_end|>
arxiv
@article{anderson2002a, title={A Virtual Library of Technical Publications}, author={Elizabeth Anderson, Robert Atkinson, Elizabeth Buckley-Geer, Cynthia Crego, Lisa Giacchetti, Stephen Hanson, David Ritchie, Jean Slisz, Sara Tompson, Stephen Wolbers}, journal={arXiv preprint arXiv:cs/0208039}, year={2002}, number={FERMILAB-TM-2004}, archivePrefix={arXiv}, eprint={cs/0208039}, primaryClass={cs.DL} }
anderson2002a
arxiv-670750
cs/0208040
Using Hierarchical Data Mining to Characterize Performance of Wireless System Configurations
<|reference_start|>Using Hierarchical Data Mining to Characterize Performance of Wireless System Configurations: This paper presents a statistical framework for assessing wireless systems performance using hierarchical data mining techniques. We consider WCDMA (wideband code division multiple access) systems with two-branch STTD (space time transmit diversity) and 1/2 rate convolutional coding (forward error correction codes). Monte Carlo simulation estimates the bit error probability (BEP) of the system across a wide range of signal-to-noise ratios (SNRs). A performance database of simulation runs is collected over a targeted space of system configurations. This database is then mined to obtain regions of the configuration space that exhibit acceptable average performance. The shape of the mined regions illustrates the joint influence of configuration parameters on system performance. The role of data mining in this application is to provide explainable and statistically valid design conclusions. The research issue is to define statistically meaningful aggregation of data in a manner that permits efficient and effective data mining algorithms. We achieve a good compromise between these goals and help establish the applicability of data mining for characterizing wireless systems performance.<|reference_end|>
arxiv
@article{verstak2002using, title={Using Hierarchical Data Mining to Characterize Performance of Wireless System Configurations}, author={Alex Verstak, Naren Ramakrishnan, Kyung Kyoon Bae, William H. Tranter, Layne T. Watson, Jian He, Clifford A. Shaffer, and Theodore S. Rappaport}, journal={arXiv preprint arXiv:cs/0208040}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208040}, primaryClass={cs.CE} }
verstak2002using
arxiv-670751
cs/0208041
Perfectly Secure Message Transmission Revisited
<|reference_start|>Perfectly Secure Message Transmission Revisited: Achieving secure communications in networks has been one of the most important problems in information technology. Dolev, Dwork, Waarts, and Yung have studied secure message transmission in one-way or two-way channels. They only consider the case when all channels are two-way or all channels are one-way. Goldreich, Goldwasser, and Linial, Franklin and Yung, Franklin and Wright, and Wang and Desmedt have studied secure communication and secure computation in multi-recipient (multicast) models. In a ``multicast channel'' (such as Ethernet), one processor can send the same message--simultaneously and privately--to a fixed subset of processors. In this paper, we shall study necessary and sufficient conditions for achieving secure communications against active adversaries in mixed one-way and two-way channels. We also discuss multicast channels and neighbor network channels.<|reference_end|>
arxiv
@article{desmedt2002perfectly, title={Perfectly Secure Message Transmission Revisited}, author={Yvo Desmedt and Yongge Wang}, journal={arXiv preprint arXiv:cs/0208041}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208041}, primaryClass={cs.CR cs.CC} }
desmedt2002perfectly
arxiv-670752
cs/0208042
Proving correctness of Timed Concurrent Constraint Programs
<|reference_start|>Proving correctness of Timed Concurrent Constraint Programs: A temporal logic is presented for reasoning about the correctness of timed concurrent constraint programs. The logic is based on modalities which allow one to specify what a process produces as a reaction to what its environment inputs. These modalities provide an assumption/commitment style of specification which allows a sound and complete compositional axiomatization of the reactive behavior of timed concurrent constraint programs.<|reference_end|>
arxiv
@article{deboer2002proving, title={Proving correctness of Timed Concurrent Constraint Programs}, author={F.S. de Boer, M. Gabbrielli and M.C. Meo}, journal={arXiv preprint arXiv:cs/0208042}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208042}, primaryClass={cs.LO cs.PL} }
deboer2002proving
arxiv-670753
cs/0208043
Gales Suffice for Constructive Dimension
<|reference_start|>Gales Suffice for Constructive Dimension: Supergales, generalizations of supermartingales, have been used by Lutz (2002) to define the constructive dimensions of individual binary sequences. Here it is shown that gales, the corresponding generalizations of martingales, can be equivalently used to define constructive dimension.<|reference_end|>
arxiv
@article{hitchcock2002gales, title={Gales Suffice for Constructive Dimension}, author={John M. Hitchcock}, journal={arXiv preprint arXiv:cs/0208043}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208043}, primaryClass={cs.CC} }
hitchcock2002gales
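For readers coming to this abstract and the next one from outside the constructive-dimension literature, the standard definitions being compared can be stated in a few lines. The following is background as generally formulated in Lutz's framework, recalled here for orientation rather than quoted from either paper.

```latex
% Background definitions (standard formulation, not quoted from the papers).
% An s-gale / s-supergale is a function d from binary strings to [0, infinity) with
\[
  \text{$s$-gale:}\quad d(w) \;=\; 2^{-s}\bigl[d(w0) + d(w1)\bigr],
  \qquad
  \text{$s$-supergale:}\quad d(w) \;\ge\; 2^{-s}\bigl[d(w0) + d(w1)\bigr].
\]
% d succeeds on an infinite binary sequence S if the values d(w) are unbounded over
% the prefixes w of S; the constructive dimension of S is the infimum of all s for
% which some constructive s-supergale (equivalently, by these results, s-gale)
% succeeds on S.
```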
arxiv-670754
cs/0208044
Gales and supergales are equivalent for defining constructive Hausdorff dimension
<|reference_start|>Gales and supergales are equivalent for defining constructive Hausdorff dimension: We show that for a wide range of probability measures, constructive gales are interchangeable with constructive supergales for defining constructive Hausdorff dimension, thus generalizing a previous independent result of Hitchcock (cs.CC/0208043) and partially answering an open question of Lutz (cs.CC/0203017).<|reference_end|>
arxiv
@article{fenner2002gales, title={Gales and supergales are equivalent for defining constructive Hausdorff dimension}, author={Stephen A. Fenner}, journal={arXiv preprint arXiv:cs/0208044}, year={2002}, archivePrefix={arXiv}, eprint={cs/0208044}, primaryClass={cs.CC} }
fenner2002gales
arxiv-670755
cs/0209001
A Novel Statistical Diagnosis of Clinical Data
<|reference_start|>A Novel Statistical Diagnosis of Clinical Data: In this paper, we present a method for diagnosing diseases from clinical data. The data are routine tests such as urine tests, hematology, blood chemistries, etc. Although these tests are performed routinely for people examined at medical institutions, how the individual items of the data interact with one another and which combinations of them cause a disease are neither well understood nor well studied. Here we attack this practically important problem by casting the data into a mathematical setup and applying support vector machines. Finally, we present simulation results for fatty liver, gastritis, etc., and discuss their implications.<|reference_end|>
arxiv
@article{kim2002a, title={A Novel Statistical Diagnosis of Clinical Data}, author={Gene Kim and MyungHo Kim}, journal={arXiv preprint arXiv:cs/0209001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209001}, primaryClass={cs.CE cs.CC} }
kim2002a
arxiv-670756
cs/0209002
A Chart-Parsing Algorithm for Efficient Semantic Analysis
<|reference_start|>A Chart-Parsing Algorithm for Efficient Semantic Analysis: In some contexts, well-formed natural language cannot be expected as input to information or communication systems. In these contexts, the use of grammar-independent input (sequences of uninflected semantic units like e.g. language-independent icons) can be an answer to the users' needs. A semantic analysis can be performed, based on lexical semantic knowledge: it is equivalent to a dependency analysis with no syntactic or morphological clues. However, this requires that an intelligent system should be able to interpret this input with reasonable accuracy and in reasonable time. Here we propose a method allowing a purely semantic-based analysis of sequences of semantic units. It uses an algorithm inspired by the idea of ``chart parsing'' known in Natural Language Processing, which stores intermediate parsing results in order to bring the calculation time down. Compared with declarative logic programming - where the calculation time, left to a Prolog engine, is hyperexponential - this method brings the calculation time down to polynomial time, where the order depends on the valency of the predicates.<|reference_end|>
arxiv
@article{vaillant2002a, title={A Chart-Parsing Algorithm for Efficient Semantic Analysis}, author={Pascal Vaillant (ENST, Paris)}, journal={COLING 2002, Proceedings of the main Conference; The Association for Computational Linguistics and Chinese Language Processing; vol. 2, p. 1044-1050}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209002}, primaryClass={cs.CL} }
vaillant2002a
arxiv-670757
cs/0209003
Rerendering Semantic Ontologies: Automatic Extensions to UMLS through Corpus Analytics
<|reference_start|>Rerendering Semantic Ontologies: Automatic Extensions to UMLS through Corpus Analytics: In this paper, we discuss the utility and deficiencies of existing ontology resources for a number of language processing applications. We describe a technique for increasing the semantic type coverage of a specific ontology, the National Library of Medicine's UMLS, with the use of robust finite state methods used in conjunction with large-scale corpus analytics of the domain corpus. We call this technique "semantic rerendering" of the ontology. This research has been done in the context of Medstract, a joint Brandeis-Tufts effort aimed at developing tools for analyzing biomedical language (i.e., Medline), as well as creating targeted databases of bio-entities, biological relations, and pathway data for biological researchers. Motivating the current research is the need to have robust and reliable semantic typing of syntactic elements in the Medline corpus, in order to improve the overall performance of the information extraction applications mentioned above.<|reference_end|>
arxiv
@article{pustejovsky2002rerendering, title={Rerendering Semantic Ontologies: Automatic Extensions to UMLS through Corpus Analytics}, author={J. Pustejovsky, A. Rumshisky, J. Castano}, journal={LREC 2002 Workshop on Ontologies and Lexical Knowledge Bases}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209003}, primaryClass={cs.CL} }
pustejovsky2002rerendering
arxiv-670758
cs/0209004
Analysis of Non-Gaussian Nature of Network Traffic and its Implication on Network Performance
<|reference_start|>Analysis of Non-Gaussian Nature of Network Traffic and its Implication on Network Performance: We analyzed the non-Gaussian nature of network traffic using some Internet traffic data. We found that (1) the non-Gaussian nature degrades network performance, (2) it is caused by `greedy flows' that exist with non-negligible probability, and (3) a large majority of `greedy flows' are TCP flows having relatively small hop counts, which correspond to small round-trip times. We conclude that in a network that has greedy flows with non-negligible probability, a traffic control scheme or bandwidth design that considers the non-Gaussian nature is essential.<|reference_end|>
arxiv
@article{mori2002analysis, title={Analysis of Non-Gaussian Nature of Network Traffic and its Implication on Network Performance}, author={Tatsuya Mori, Ryoichi Kawahara, Shozo Naito}, journal={arXiv preprint arXiv:cs/0209004}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209004}, primaryClass={cs.NI} }
mori2002analysis
arxiv-670759
cs/0209005
Sampling from a couple of positively correlated binomial variables
<|reference_start|>Sampling from a couple of positively correlated binomial variables: We know that the marginals in a multinomial distribution are binomial variates exhibiting a negative correlation. But we can construct two linear combinations of such marginals in such a way as to obtain a positive correlation. We discuss the restrictions that must be imposed on the parameters of the given marginals to accomplish such a result. Next we discuss the regression function, showing that it is a linear function but not homoscedastic.<|reference_end|>
arxiv
@article{catalani2002sampling, title={Sampling from a couple of positively correlated binomial variables}, author={Mario Catalani}, journal={arXiv preprint arXiv:cs/0209005}, year={2002}, number={DECON0902}, archivePrefix={arXiv}, eprint={cs/0209005}, primaryClass={cs.DM} }
catalani2002sampling
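The abstract does not spell out the construction, so the sketch below should be read as one standard way of obtaining a positively correlated pair of binomial variables from multinomial counts (letting the two sums share a common component), not necessarily the construction used in the paper; the probabilities and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

# One possible construction (an assumption, not necessarily the paper's):
# draw a multinomial with four categories and let the two sums share X3.
rng = np.random.default_rng(0)
n, p = 50, np.array([0.1, 0.1, 0.4, 0.4])    # p1, p2, p3, p4
X = rng.multinomial(n, p, size=100_000)       # each row is (X1, X2, X3, X4)

Y1 = X[:, 0] + X[:, 2]    # Binomial(n, p1 + p3)
Y2 = X[:, 1] + X[:, 2]    # Binomial(n, p2 + p3)

# For this construction Cov(Y1, Y2) = n * (p3*p4 - p1*p2),
# which is positive whenever p3*p4 > p1*p2.
print("theoretical cov:", n * (p[2] * p[3] - p[0] * p[1]))
print("empirical cov  :", np.cov(Y1, Y2)[0, 1])
print("empirical corr :", np.corrcoef(Y1, Y2)[0, 1])
```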
arxiv-670760
cs/0209006
Fast optical layer mesh protection using pre-cross-connected trails
<|reference_start|>Fast optical layer mesh protection using pre-cross-connected trails: Conventional optical networks are based on SONET rings, but since rings are known to use bandwidth inefficiently, there has been much research into shared mesh protection, which promises significant bandwidth savings. Unfortunately, most shared mesh protection schemes cannot guarantee that failed traffic will be restored within the 50 ms timeframe that SONET standards specify. A notable exception is the p-cycle scheme of Grover and Stamatelakis. We argue, however, that p-cycles have certain limitations, e.g., there is no easy way to adapt p-cycles to a path-based protection scheme, and p-cycles seem more suited to static traffic than to dynamic traffic. In this paper we show that the key to fast restoration times is not a ring-like topology per se, but rather the ability to pre-cross-connect protection paths. This leads to the concept of a pre-cross-connected trail or PXT, which is a structure that is more flexible than rings and that adapts readily to both path-based and link-based schemes and to both static and dynamic traffic. The PXT protection scheme achieves fast restoration speeds, and our simulations, which have been carefully chosen using ideas from experimental design theory, show that the bandwidth efficiency of the PXT protection scheme is comparable to that of conventional shared mesh protection schemes.<|reference_end|>
arxiv
@article{chow2002fast, title={Fast optical layer mesh protection using pre-cross-connected trails}, author={Timothy Y. Chow, Fabian Chudak, Anthony M. Ffrench}, journal={IEEE/ACM Trans. Networking 12 (3) 2004: 539-548}, year={2002}, doi={10.1109/TNET.2004.828951}, archivePrefix={arXiv}, eprint={cs/0209006}, primaryClass={cs.NI} }
chow2002fast
arxiv-670761
cs/0209007
A Survey and a New Competitive Method for the Planar min-# Problem
<|reference_start|>A Survey and a New Competitive Method for the Planar min-# Problem: We survey most of the different types of approximation algorithms which minimize the number of output vertices. We present their main qualities and their inherent drawbacks.<|reference_end|>
arxiv
@article{buzer2002a, title={A Survey and a New Competitive Method for the Planar min-# Problem}, author={Lilian Buzer}, journal={arXiv preprint arXiv:cs/0209007}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209007}, primaryClass={cs.CG} }
buzer2002a
arxiv-670762
cs/0209008
The partition semantics of questions, syntactically
<|reference_start|>The partition semantics of questions, syntactically: Groenendijk and Stokhof (1984, 1996; Groenendijk 1999) provide a logically attractive theory of the semantics of natural language questions, commonly referred to as the partition theory. Two central notions in this theory are entailment between questions and answerhood. For example, the question "Who is going to the party?" entails the question "Is John going to the party?", and "John is going to the party" counts as an answer to both. Groenendijk and Stokhof define these two notions in terms of partitions of a set of possible worlds. We provide a syntactic characterization of entailment between questions and answerhood. We show that answers are, in some sense, exactly those formulas that are built up from instances of the question. This result lets us compare the partition theory with other approaches to interrogation -- both linguistic analyses, such as Hamblin's and Karttunen's semantics, and computational systems, such as Prolog. Our comparison separates a notion of answerhood into three aspects: equivalence (when two questions or answers are interchangeable), atomic answers (what instances of a question count as answers), and compound answers (how answers compose).<|reference_end|>
arxiv
@article{shan2002the, title={The partition semantics of questions, syntactically}, author={Chung-chieh Shan (Harvard University) and Balder D. ten Cate (Universiteit van Amsterdam)}, journal={Proceedings of the 2002 European Summer School in Logic, Language and Information student session, ed. Malvina Nissim}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209008}, primaryClass={cs.CL cs.AI cs.LO} }
shan2002the
arxiv-670763
cs/0209009
Question answering: from partitions to Prolog
<|reference_start|>Question answering: from partitions to Prolog: We implement Groenendijk and Stokhof's partition semantics of questions in a simple question answering algorithm. The algorithm is sound, complete, and based on tableau theorem proving. The algorithm relies on a syntactic characterization of answerhood: Any answer to a question is equivalent to some formula built up only from instances of the question. We prove this characterization by translating the logic of interrogation to classical predicate logic and applying Craig's interpolation theorem.<|reference_end|>
arxiv
@article{cate2002question, title={Question answering: from partitions to Prolog}, author={Balder D. ten Cate (Universiteit van Amsterdam) and Chung-chieh Shan (Harvard University)}, journal={Proceedings of TABLEAUX 2002: Automated Reasoning with Analytic Tableaux and Related Methods, ed. Uwe Egly and Christian G. Fermueller, Lecture Notes in Artificial Intelligence 2381, 251-265; also in Proceedings of NLULP 2002, ed. Shuly Wintner}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209009}, primaryClass={cs.CL cs.AI cs.LO} }
cate2002question
arxiv-670764
cs/0209010
Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition
<|reference_start|>Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition: We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.<|reference_end|>
arxiv
@article{sang2002introduction, title={Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition}, author={Erik F. Tjong Kim Sang}, journal={Dan Roth and Antal van den Bosch (eds.), Proceedings of CoNLL-2002, Taipei, Taiwan, 2002, pp. 155-158}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209010}, primaryClass={cs.CL} }
sang2002introduction
arxiv-670765
cs/0209011
Gossip Based Ad-Hoc Routing
<|reference_start|>Gossip Based Ad-Hoc Routing: Many ad hoc routing protocols are based on some variant of flooding. Despite various optimizations, many routing messages are propagated unnecessarily. We propose a gossiping-based approach, where each node forwards a message with some probability, to reduce the overhead of the routing protocols. Gossiping exhibits bimodal behavior in sufficiently large networks: in some executions, the gossip dies out quickly and hardly any node gets the message; in the remaining executions, a substantial fraction of the nodes gets the message. The fraction of executions in which most nodes get the message depends on the gossiping probability and the topology of the network. In the networks we have considered, using gossiping probability between 0.6 and 0.8 suffices to ensure that almost every node gets the message in almost every execution. For large networks, this simple gossiping protocol uses up to 35% fewer messages than flooding, with improved performance. Gossiping can also be combined with various optimizations of flooding to yield further benefits. Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes. We expect that the improvement should be even more significant in larger networks.<|reference_end|>
arxiv
@article{haas2002gossip, title={Gossip Based Ad-Hoc Routing}, author={Zygmunt Haas, Joseph Y. Halpern and Erran L. Li}, journal={IEEE INFOCOM, June 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209011}, primaryClass={cs.NI} }
haas2002gossip
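As a purely illustrative companion to the abstract above, the following self-contained Python sketch simulates plain probabilistic forwarding (every node that receives the message forwards it once, with probability p; the source always forwards) on a random geometric graph. The node count, radius, gossip probabilities, and the simplification of the protocol are assumptions for illustration, not the paper's experimental setup.

```python
import math
import random
from collections import deque

def random_geometric_graph(n, radius, rng):
    """Nodes at random points in the unit square; edge if distance <= radius."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def gossip(adj, source, p, rng):
    """Fraction of nodes reached when each node forwards once with probability p."""
    received = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u != source and rng.random() > p:
            continue                      # this node chooses not to forward
        for v in adj[u]:
            if v not in received:
                received.add(v)
                queue.append(v)
    return len(received) / len(adj)

rng = random.Random(1)
adj = random_geometric_graph(300, 0.12, rng)
for p in (0.4, 0.6, 0.8):
    runs = [gossip(adj, 0, p, rng) for _ in range(200)]
    print(f"p={p}: mean fraction of nodes reached = {sum(runs) / len(runs):.2f}")
```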
arxiv-670766
cs/0209012
Analysis of a Cone-Based Distributed Topology Control Algorithm for Wireless Multi-hop Networks
<|reference_start|>Analysis of a Cone-Based Distributed Topology Control Algorithm for Wireless Multi-hop Networks: The topology of a wireless multi-hop network can be controlled by varying the transmission power at each node. In this paper, we give a detailed analysis of a cone-based distributed topology control algorithm. This algorithm, introduced in [16], does not assume that nodes have GPS information available; rather it depends only on directional information. Roughly speaking, the basic idea of the algorithm is that a node $u$ transmits with the minimum power $p_{u,\alpha}$ required to ensure that in every cone of degree $\alpha$ around $u$, there is some node that $u$ can reach with power $p_{u,\alpha}$. We show that taking $\alpha = 5\pi/6$ is a necessary and sufficient condition to guarantee that network connectivity is preserved. More precisely, if there is a path from $s$ to $t$ when every node communicates at maximum power, then, if $\alpha \leq 5\pi/6$, there is still a path in the smallest symmetric graph $G_\alpha$ containing all edges $(u,v)$ such that $u$ can communicate with $v$ using power $p_{u,\alpha}$. On the other hand, if $\alpha > 5\pi/6$, connectivity is not necessarily preserved. We also propose a set of optimizations that further reduce power consumption and prove that they retain network connectivity. Dynamic reconfiguration in the presence of failures and mobility is also discussed. Simulation results are presented to demonstrate the effectiveness of the algorithm and the optimizations.<|reference_end|>
arxiv
@article{li2002analysis, title={Analysis of a Cone-Based Distributed Topology Control Algorithm for Wireless Multi-hop Networks}, author={Erran L. Li, Joseph Y. Halpern, Paramvir Bahl, Yi-Min Wang and Roger Wattenhofer}, journal={ACM PODC, 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209012}, primaryClass={cs.NI} }
li2002analysis
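To make the cone condition concrete, here is a small Python sketch, with 2D coordinates and all names chosen for illustration, that computes for a single node the smallest neighbourhood radius such that every cone of angle alpha around it contains a reachable neighbour. It uses the observation that this holds exactly when the largest angular gap between reachable neighbours is smaller than alpha; this is a toy reading of the coverage test, not the paper's full distributed algorithm.

```python
import math

def min_radius_for_cone_coverage(u, others, alpha=5 * math.pi / 6):
    """Smallest radius r such that every cone of angle alpha at u contains a
    neighbour within distance r; returns math.inf if no radius suffices."""
    neigh = sorted(others, key=lambda q: math.dist(u, q))   # grow power gradually
    angles = []
    for q in neigh:
        angles.append(math.atan2(q[1] - u[1], q[0] - u[0]))
        angles.sort()
        # Largest angular gap between consecutive reachable neighbours,
        # including the wrap-around gap.
        gaps = [b - a for a, b in zip(angles, angles[1:])]
        gaps.append(2 * math.pi - (angles[-1] - angles[0]))
        if max(gaps) < alpha:
            return math.dist(u, q)        # this radius already covers every cone
    return math.inf

u = (0.0, 0.0)
others = [(1.0, 0.0), (-0.5, 0.9), (-0.4, -1.2), (2.0, 2.0)]
print(min_radius_for_cone_coverage(u, others))
```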
arxiv-670767
cs/0209013
Minimum-Energy Mobile Wireless Networks Revisited
<|reference_start|>Minimum-Energy Mobile Wireless Networks Revisited: We propose a protocol that, given a communication network, computes a subnetwork such that, for every pair $(u,v)$ of nodes connected in the original network, there is a minimum-energy path between $u$ and $v$ in the subnetwork (where a minimum-energy path is one that allows messages to be transmitted with a minimum use of energy). The network computed by our protocol is in general a subnetwork of the one computed by the protocol given in [13]. Moreover, our protocol is computationally simpler. We demonstrate the performance improvements obtained by using the subnetwork computed by our protocol through simulation.<|reference_end|>
arxiv
@article{li2002minimum-energy, title={Minimum-Energy Mobile Wireless Networks Revisited}, author={Erran L. Li and Joseph Y. Halpern}, journal={IEEE ICC, 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209013}, primaryClass={cs.NI} }
li2002minimum-energy
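For context, a minimum-energy path in models of this kind is typically just a shortest path under an energy edge cost. The sketch below (Python, standard library only) uses the common distance-power-law cost d**kappa with an assumed path-loss exponent kappa; the cost model, graph, and names are illustrative assumptions rather than the cost function or protocol of the paper.

```python
import heapq
import math

def min_energy_path(points, edges, src, dst, kappa=2.0):
    """Dijkstra with edge cost = (Euclidean distance) ** kappa.
    points: list of (x, y); edges: list of (i, j) index pairs."""
    adj = {i: [] for i in range(len(points))}
    for i, j in edges:
        cost = math.dist(points[i], points[j]) ** kappa
        adj[i].append((j, cost))
        adj[j].append((i, cost))

    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, c in adj[u]:
            if d + c < dist.get(v, math.inf):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))

    if dst not in dist:
        return math.inf, []
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

points = [(0, 0), (1, 0), (2, 0), (2, 1)]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(min_energy_path(points, edges, 0, 3))   # relaying 0 -> 1 -> 2 costs less than 0 -> 2
```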
arxiv-670768
cs/0209014
Randomized protocols for asynchronous consensus
<|reference_start|>Randomized protocols for asynchronous consensus: The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols.<|reference_end|>
arxiv
@article{aspnes2002randomized, title={Randomized protocols for asynchronous consensus}, author={James Aspnes}, journal={arXiv preprint arXiv:cs/0209014}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209014}, primaryClass={cs.DS cs.DC} }
aspnes2002randomized
arxiv-670769
cs/0209015
Does NP not equal P?
<|reference_start|>Does NP not equal P?: Stephen Cook posited SAT is NP-Complete in 1971. If SAT is NP-Complete then, as is generally accepted, any polynomial solution of it must also present a polynomial solution of all NP decision problems. It is argued here, however, that NP is not of necessity equivalent to P, where it is shown that SAT is contained in P. This is due to a paradox, of a nature addressed by both Godel and Russell, in regard to the P-NP system as a whole.<|reference_end|>
arxiv
@article{sauerbier2002does, title={Does NP not equal P?}, author={C. Sauerbier}, journal={arXiv preprint arXiv:cs/0209015}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209015}, primaryClass={cs.CC} }
sauerbier2002does
arxiv-670770
cs/0209016
Sorting with a forklift
<|reference_start|>Sorting with a forklift: A fork stack is a generalised stack which allows pushes and pops of several items at a time. We consider the problem of determining which input streams can be sorted using a single fork stack, or dually, which permutations of a fixed input stream can be produced using a single fork stack. An algorithm is given to solve the sorting problem and the minimal unsortable sequences are found. The results are extended to fork stacks where there are bounds on how many items can be pushed and popped at one time. In this context we also establish how to enumerate the collection of sortable sequences.<|reference_end|>
arxiv
@article{albert2002sorting, title={Sorting with a forklift}, author={M.H.Albert and M.D.Atkinson}, journal={arXiv preprint arXiv:cs/0209016}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209016}, primaryClass={cs.DM cs.DS math.CO} }
albert2002sorting
arxiv-670771
cs/0209017
Evolution and Gravitation: a Computer Simulation of a Non-Walrasian Equilibrium Model
<|reference_start|>Evolution and Gravitation: a Computer Simulation of a Non-Walrasian Equilibrium Model: The paper contains a computer simulation concerning a basic non-Walrasian equilibrium system, following the Edmond Malinvaud "short side" approach, as far as the price adjustment is concerned, and the sequential Hicksian "weeks" structure with regard to the temporal characterization.<|reference_end|>
arxiv
@article{tucci2002evolution, title={Evolution and Gravitation: a Computer Simulation of a Non-Walrasian Equilibrium Model}, author={Michele Tucci (U. of Rome "La Sapienza")}, journal={arXiv preprint arXiv:cs/0209017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209017}, primaryClass={cs.CY} }
tucci2002evolution
arxiv-670772
cs/0209018
Probabilistic Reversible Automata and Quantum Automata
<|reference_start|>Probabilistic Reversible Automata and Quantum Automata: To study the relationship between quantum finite automata and probabilistic finite automata, we introduce a notion of probabilistic reversible automata (PRA, or doubly stochastic automata). We find that there is a strong relationship between different possible models of PRA and corresponding models of quantum finite automata. We also propose a classification of reversible finite 1-way automata.<|reference_end|>
arxiv
@article{golovkins2002probabilistic, title={Probabilistic Reversible Automata and Quantum Automata}, author={Marats Golovkins and Maksim Kravtsev}, journal={Lecture Notes in Computer Science, 2002, Vol. 2387, pp. 574-583}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209018}, primaryClass={cs.CC cs.FL quant-ph} }
golovkins2002probabilistic
arxiv-670773
cs/0209019
Reasoning about Evolving Nonmonotonic Knowledge Bases
<|reference_start|>Reasoning about Evolving Nonmonotonic Knowledge Bases: Recently, several approaches to updating knowledge bases modeled as extended logic programs have been introduced, ranging from basic methods to incorporate (sequences of) sets of rules into a logic program, to more elaborate methods which use an update policy for specifying how updates must be incorporated. In this paper, we introduce a framework for reasoning about evolving knowledge bases, which are represented as extended logic programs and maintained by an update policy. We first describe a formal model which captures various update approaches, and we define a logical language for expressing properties of evolving knowledge bases. We then investigate semantical and computational properties of our framework, where we focus on properties of knowledge states with respect to the canonical reasoning task of whether a given formula holds on a given evolving knowledge base. In particular, we present finitary characterizations of the evolution for certain classes of framework instances, which can be exploited for obtaining decidability results. In more detail, we characterize the complexity of reasoning for some meaningful classes of evolving knowledge bases, ranging from polynomial to double exponential space complexity.<|reference_end|>
arxiv
@article{eiter2002reasoning, title={Reasoning about Evolving Nonmonotonic Knowledge Bases}, author={T. Eiter, M. Fink, G. Sabbatini, H. Tompits}, journal={arXiv preprint arXiv:cs/0209019}, year={2002}, number={INFSYS RR-1843-02-11}, archivePrefix={arXiv}, eprint={cs/0209019}, primaryClass={cs.AI} }
eiter2002reasoning
arxiv-670774
cs/0209020
A new definition of the fractional Laplacian
<|reference_start|>A new definition of the fractional Laplacian: It is noted that the standard definition of the fractional Laplacian leads to a hyper-singular convolution integral and is also unclear about how to implement the boundary conditions. The purpose of this note is to introduce a new definition of the fractional Laplacian to overcome these major drawbacks.<|reference_end|>
arxiv
@article{chen2002a, title={A new definition of the fractional Laplacian}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0209020}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209020}, primaryClass={cs.NA cs.CE} }
chen2002a
arxiv-670775
cs/0209021
Activities, Context and Ubiquitous Computing
<|reference_start|>Activities, Context and Ubiquitous Computing: Context and context-awareness provide computing environments with the ability to usefully adapt the services or information they provide. It is the ability to implicitly sense and automatically derive the user needs that separates context-aware applications from traditionally designed applications, and this makes them more attentive, responsive, and aware of their user's identity and environment. This paper argues that context-aware applications capable of supporting complex, cognitive activities can be built from a model of context called Activity-Centric Context. A conceptual model of Activity-Centric context is presented. The model is illustrated via a detailed example.<|reference_end|>
arxiv
@article{prekop2002activities, title={Activities, Context and Ubiquitous Computing}, author={Paul Prekop and Mark Burnett}, journal={Computer Communications 26 (2003) 1168-1176}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209021}, primaryClass={cs.IR} }
prekop2002activities
arxiv-670776
cs/0209022
A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition
<|reference_start|>A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition: In this thesis I present a virtual laboratory which implements five different models for controlling animats: a rule-based system, a behaviour-based system, a concept-based system, a neural network, and a Braitenberg architecture. Through different experiments, I compare the performance of the models and conclude that there is no "best" model, since different models are better for different things in different contexts. The models I chose, although quite simple, represent different approaches for studying cognition. Using the results as an empirical philosophical aid, I note that there is no "best" approach for studying cognition, since different approaches have all advantages and disadvantages, because they study different aspects of cognition from different contexts. This has implications for current debates on "proper" approaches for cognition: all approaches are a bit proper, but none will be "proper enough". I draw remarks on the notion of cognition abstracting from all the approaches used to study it, and propose a simple classification for different types of cognition.<|reference_end|>
arxiv
@article{gershenson2002a, title={A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition}, author={Carlos Gershenson}, journal={arXiv preprint arXiv:cs/0209022}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209022}, primaryClass={cs.AI} }
gershenson2002a
arxiv-670777
cs/0209023
Practical Load Balancing for Content Requests in Peer-to-Peer Networks
<|reference_start|>Practical Load Balancing for Content Requests in Peer-to-Peer Networks: This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.<|reference_end|>
arxiv
@article{roussopoulos2002practical, title={Practical Load Balancing for Content Requests in Peer-to-Peer Networks}, author={Mema Roussopoulos and Mary Baker}, journal={arXiv preprint arXiv:cs/0209023}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209023}, primaryClass={cs.NI cs.DC} }
roussopoulos2002practical
arxiv-670778
cs/0209024
Errors in Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence"
<|reference_start|>Errors in Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence": In the note two errors in Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence", "IEEE/ACM Transactions on Networking", 7(6), pp. 861-874, 1999, are shown. Because of these errors the proofs of both theorems presented in the article are incomplete and some assessments are wrong.<|reference_end|>
arxiv
@article{karbowski2002errors, title={Errors in Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence"}, author={Andrzej Karbowski}, journal={Karbowski, A., Comments on "Optimization Flow Control, I: Basic Algorithm and Convergence", IEEE/ACM Trans. on Networking, vol. 11(2), pp. 338-339, 2003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209024}, primaryClass={cs.NI cs.DC} }
karbowski2002errors
arxiv-670779
cs/0209025
Correction to Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence"
<|reference_start|>Correction to Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence": In the note an error in Low and Lapsley's article ("Optimization Flow Control, I: Basic Algorithm and Convergence", IEEE/ACM Transactions on Networking, 7(6), pp. 861-874, 1999) is pointed out. Because of this error the proof of the Theorem 2 presented in the article is incomplete and some assessments are wrong. In the second part of the note the author proposes a correction to this proof.<|reference_end|>
arxiv
@article{karbowski2002correction, title={Correction to Low and Lapsley's article "Optimization Flow Control, I: Basic Algorithm and Convergence"}, author={Andrzej Karbowski}, journal={IEEE/ACM Trans. on Networking, vol. 11(2), pp. 338-339, 2003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209025}, primaryClass={cs.NI cs.DC} }
karbowski2002correction
arxiv-670780
cs/0209026
A universal alphabet and rewrite system
<|reference_start|>A universal alphabet and rewrite system: We present two ways in which an infinite universal alphabet may be generated using a novel rewrite system that conserves zero (a special character of the alphabet and the symbol for that character) at every step. The recursive method delivers the entire alphabet in one step when invoked with the zero character as the initial subset alphabet. The iterative method, starting from the same point, delivers characters that act as ciphers for properties that the developing subset alphabet contains. These properties emerge in an arbitrary sequence, and there are an infinite number of ways in which they may be selected. The subset alphabets, in addition to having a mathematical interpretation as algebras, can also be constrained to emerge in a minimal way, which then has application as a foundational physical system. Each subset alphabet may itself be the basis of a rewrite system in which rules that operate on symbols (representing characters) or collections of symbols manipulate the specific properties in a dynamic way.<|reference_end|>
arxiv
@article{rowlands2002a, title={A universal alphabet and rewrite system}, author={Peter Rowlands and Bernard Diaz}, journal={arXiv preprint arXiv:cs/0209026}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209026}, primaryClass={cs.OH} }
rowlands2002a
arxiv-670781
cs/0209027
Remarks on d-Dimensional TSP Optimal Tour Length Behaviour
<|reference_start|>Remarks on d-Dimensional TSP Optimal Tour Length Behaviour: The well-known $O(n^{1-1/d})$ behaviour of the optimal tour length for TSP in d-dimensional Cartesian space causes breaches of the triangle inequality. Other practical inadequacies of this model are discussed, including its use as a basis for approximating the TSP optimal tour length or for deriving bounds, which I attempt to remedy.<|reference_end|>
arxiv
@article{yaneff2002remarks, title={Remarks on d-Dimensional TSP Optimal Tour Length Behaviour}, author={A. G. Yaneff}, journal={arXiv preprint arXiv:cs/0209027}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209027}, primaryClass={cs.CG} }
yaneff2002remarks
arxiv-670782
cs/0209028
Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design
<|reference_start|>Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design: Despite recent excitement generated by the peer-to-peer (P2P) paradigm and the surprisingly rapid deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. The open architecture, achieved scale, and self-organizing structure of the Gnutella network make it an interesting P2P architecture to study. Like most other P2P applications, Gnutella builds, at the application level, a virtual network with its own routing mechanisms. The topology of this virtual network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We have built a "crawler" to extract the topology of Gnutella's application level network. In this paper we analyze the topology graph and evaluate generated network traffic. Our two major findings are that: (1) although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure, and (2) the Gnutella virtual network topology does not match well the underlying Internet topology, hence leading to ineffective use of the physical networking infrastructure. These findings guide us to propose changes to the Gnutella protocol and implementations that may bring significant performance and scalability improvements. We believe that our findings as well as our measurement and analysis techniques have broad applicability to P2P systems and provide unique insights into P2P system design tradeoffs.<|reference_end|>
arxiv
@article{ripeanu2002mapping, title={Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design}, author={Matei Ripeanu, Ian Foster and Adriana Iamnitchi}, journal={IEEE Internet Computing Journal (special issue on peer-to-peer networking), vol. 6(1) 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209028}, primaryClass={cs.DC cond-mat.stat-mech cs.NI} }
ripeanu2002mapping
arxiv-670783
cs/0209029
A generalization of Amdahl's law and relative conditions of parallelism
<|reference_start|>A generalization of Amdahl's law and relative conditions of parallelism: In this work I present a generalization of Amdahl's law on the limits of a parallel implementation with many processors. In particular I establish some mathematical relations involving the number of processors and the dimension of the treated problem, and with these conditions I define, on the basis of the achievable speedup, some classes of parallelism for the implementations. I also derive a condition for obtaining superlinear speedup. The mathematical techniques used are those of differential calculus. I describe some examples from classical problems offered by the specialized literature on the subject.<|reference_end|>
arxiv
@article{argentini2002a, title={A generalization of Amdahl's law and relative conditions of parallelism}, author={Gianluca Argentini}, journal={arXiv preprint arXiv:cs/0209029}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209029}, primaryClass={cs.DC cs.PF} }
argentini2002a
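As background, the classical Amdahl bound that this abstract generalizes can be written as a single formula; the generalization itself, which ties the number of processors to the dimension of the problem, is developed in the paper and is not reproduced here.

```latex
% Classical Amdahl's law: f is the parallelizable fraction of the work and
% N the number of processors; the speedup is bounded by
\[
  S(N) \;=\; \frac{1}{(1 - f) + \dfrac{f}{N}},
  \qquad
  \lim_{N \to \infty} S(N) \;=\; \frac{1}{1 - f}.
\]
```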
arxiv-670784
cs/0209030
Extremal Optimization: an Evolutionary Local-Search Algorithm
<|reference_start|>Extremal Optimization: an Evolutionary Local-Search Algorithm: A recently introduced general-purpose heuristic for finding high-quality solutions for many hard optimization problems is reviewed. The method is inspired by recent progress in understanding far-from-equilibrium phenomena in terms of {\em self-organized criticality,} a concept introduced to describe emergent complexity in physical systems. This method, called {\em extremal optimization,} successively replaces the value of extremely undesirable variables in a sub-optimal solution with new, random ones. Large, avalanche-like fluctuations in the cost function self-organize from this dynamics, effectively scaling barriers to explore local optima in distant neighborhoods of the configuration space while eliminating the need to tune parameters. Drawing upon models used to simulate the dynamics of granular media, evolution, or geology, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as {\em simulated annealing}. It may be but one example of applying new insights into {\em non-equilibrium phenomena} systematically to hard optimization problems. This method is widely applicable and so far has proved competitive with -- and even superior to -- more elaborate general-purpose heuristics on testbeds of constrained optimization problems with up to $10^5$ variables, such as bipartitioning, coloring, and satisfiability. Analysis of a suitable model predicts the only free parameter of the method in accordance with all experimental results.<|reference_end|>
arxiv
@article{boettcher2002extremal, title={Extremal Optimization: an Evolutionary Local-Search Algorithm}, author={Stefan Boettcher (Emory U) and Allon G. Percus (Los Alamos)}, journal={arXiv preprint arXiv:cs/0209030}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209030}, primaryClass={cs.NE cs.AI} }
boettcher2002extremal
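A minimal, self-contained illustration of the extremal-optimization idea sketched above, applied here to unconstrained MAX-CUT; the choice of problem, the per-vertex fitness (fraction of incident edges that are cut), the rank-based tau selection, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def tau_eo_maxcut(adj, steps=2000, tau=1.4, seed=0):
    """tau-EO for MAX-CUT: repeatedly pick a low-fitness vertex via a power-law
    over fitness ranks, flip its side, and keep the best cut seen so far."""
    rng = random.Random(seed)
    n = len(adj)
    side = [rng.randint(0, 1) for _ in range(n)]

    def cut_value(s):
        return sum(1 for u in range(n) for v in adj[u] if u < v and s[u] != s[v])

    def fitness(u, s):
        deg = len(adj[u]) or 1
        return sum(1 for v in adj[u] if s[u] != s[v]) / deg   # fraction of cut edges

    best, best_side = cut_value(side), side[:]
    weights = [(k + 1) ** (-tau) for k in range(n)]           # rank k picked ~ k**(-tau)
    for _ in range(steps):
        ranked = sorted(range(n), key=lambda u: fitness(u, side))   # worst fitness first
        u = rng.choices(ranked, weights=weights, k=1)[0]
        side[u] ^= 1                                          # give that vertex a new state
        value = cut_value(side)
        if value > best:
            best, best_side = value, side[:]
    return best, best_side

# Tiny example graph as symmetric adjacency lists.
adj = [[1, 2], [0, 2, 3], [0, 1, 4], [1, 4], [2, 3]]
print(tau_eo_maxcut(adj))
```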
arxiv-670785
cs/0209031
Locating Data in (Small-World?) Peer-to-Peer Scientific Collaborations
<|reference_start|>Locating Data in (Small-World?) Peer-to-Peer Scientific Collaborations: Data-sharing scientific collaborations have particular characteristics, potentially different from the current peer-to-peer environments. In this paper we advocate the benefits of exploiting emergent patterns in self-configuring networks specialized for scientific data-sharing collaborations. We speculate that a peer-to-peer scientific collaboration network will exhibit small-world topology, as do a large number of social networks for which the same pattern has been documented. We propose a solution for locating data in decentralized, scientific, data-sharing environments that exploits the small-worlds topology. The research challenge we raise is: what protocols should be used to allow a self-configuring peer-to-peer network to form small worlds similar to the way in which the humans that use the network do in their social interactions?<|reference_end|>
arxiv
@article{iamnitchi2002locating, title={Locating Data in (Small-World?) Peer-to-Peer Scientific Collaborations}, author={Adriana Iamnitchi, Matei Ripeanu, Ian Foster}, journal={1st International Workshop on Peer-to-Peer Systems IPTPS 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209031}, primaryClass={cs.DC cond-mat} }
iamnitchi2002locating
arxiv-670786
cs/0209032
Complexity Results on DPLL and Resolution
<|reference_start|>Complexity Results on DPLL and Resolution: DPLL and resolution are two popular methods for solving the problem of propositional satisfiability. Rather than algorithms, they are families of algorithms, as their behavior depends on some choices they face during execution: DPLL depends on the choice of the literal to branch on; resolution depends on the choice of the pair of clauses to resolve at each step. The complexity of making the optimal choice is analyzed in this paper. Extending previous results, we prove that choosing the optimal literal to branch on in DPLL is Delta[log]^2-hard, and becomes NP^PP-hard if branching is only allowed on a subset of variables. Optimal choice in regular resolution is both NP-hard and CoNP-hard. The problem of determining the size of the optimal proofs is also analyzed: it is CoNP-hard for DPLL, and Delta[log]^2-hard if a conjecture we make is true. This problem is CoNP-hard for regular resolution.<|reference_end|>
arxiv
@article{liberatore2002complexity, title={Complexity Results on DPLL and Resolution}, author={Paolo Liberatore}, journal={arXiv preprint arXiv:cs/0209032}, year={2002}, doi={10.1145/1119439.1119442}, archivePrefix={arXiv}, eprint={cs/0209032}, primaryClass={cs.LO cs.CC} }
liberatore2002complexity
arxiv-670787
cs/0209033
Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted Throughput
<|reference_start|>Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted Throughput: We study the problem of computing a preemptive schedule of equal-length jobs with given release times, deadlines and weights. Our goal is to maximize the weighted throughput, which is the total weight of completed jobs. In Graham's notation this problem is described as (1 | r_j;p_j=p;pmtn | sum w_j U_j). We provide an O(n^4)-time algorithm for this problem, improving the previous bound of O(n^{10}) by Baptiste.<|reference_end|>
arxiv
@article{baptiste2002preemptive, title={Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted Throughput}, author={Philippe Baptiste, Marek Chrobak, Christoph Durr, Wojciech Jawor and Nodari Vakhania}, journal={arXiv preprint arXiv:cs/0209033}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209033}, primaryClass={cs.DS} }
baptiste2002preemptive
arxiv-670788
cs/0209034
An Algorithmic Study of Manufacturing Paperclips and Other Folded Structures
<|reference_start|>An Algorithmic Study of Manufacturing Paperclips and Other Folded Structures: We study algorithmic aspects of bending wires and sheet metal into a specified structure. Problems of this type are closely related to the question of deciding whether a simple non-self-intersecting wire structure (a carpenter's ruler) can be straightened, a problem that was open for several years and has only recently been solved in the affirmative. If we impose some of the constraints that are imposed by the manufacturing process, we obtain quite different results. In particular, we study the variant of the carpenter's ruler problem in which there is a restriction that only one joint can be modified at a time. For a linkage that does not self-intersect or self-touch, the recent results of Connelly et al. and Streinu imply that it can always be straightened, modifying one joint at a time. However, we show that for a linkage with even a single vertex degeneracy, it becomes NP-hard to decide if it can be straightened while altering only one joint at a time. If we add the restriction that each joint can be altered at most once, we show that the problem is NP-complete even without vertex degeneracies. In the special case, arising in wire forming manufacturing, that each joint can be altered at most once, and must be done sequentially from one or both ends of the linkage, we give an efficient algorithm to determine if a linkage can be straightened.<|reference_end|>
arxiv
@article{arkin2002an, title={An Algorithmic Study of Manufacturing Paperclips and Other Folded Structures}, author={Esther M. Arkin, Sandor P. Fekete, and Joseph S. B. Mitchell}, journal={Computational Geometry: Theory and Applications, 25 (2003), 117-138.}, year={2002}, archivePrefix={arXiv}, eprint={cs/0209034}, primaryClass={cs.CG} }
arkin2002an
arxiv-670789
cs/0210001
Edsger Wybe Dijkstra (1930 -- 2002): A Portrait of a Genius
<|reference_start|>Edsger Wybe Dijkstra (1930 -- 2002): A Portrait of a Genius: We discuss the scientific contributions of Edsger Wybe Dijkstra, his opinions and his legacy.<|reference_end|>
arxiv
@article{apt2002edsger, title={Edsger Wybe Dijkstra (1930 -- 2002): A Portrait of a Genius}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/0210001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210001}, primaryClass={cs.GL} }
apt2002edsger
arxiv-670790
cs/0210002
GridBank: A Grid Accounting Services Architecture (GASA) for Distributed Systems Sharing and Integration
<|reference_start|>GridBank: A Grid Accounting Services Architecture (GASA) for Distributed Systems Sharing and Integration: Computational Grids are emerging as new infrastructure for Internet-based parallel and distributed computing. They enable the sharing, exchange, discovery, and aggregation of resources distributed across multiple administrative domains, organizations and enterprises. To accomplish this, Grids need infrastructure that supports various services: security, uniform access, resource management, scheduling, application composition, computational economy, and accountability. Many Grid projects have developed technologies that provide many of these services, with the exception of accountability. To overcome this limitation, we propose a new infrastructure called GridBank that provides services for accounting. This paper presents the requirements of Grid accountability and the different models within which it can operate, and proposes a Grid Bank Services Architecture that meets them. The paper highlights implementation issues, with a detailed discussion of the formats of the various records/databases that the GridBank needs to maintain. It also presents protocols for interaction between the GridBank and various components within Grid computing environments.<|reference_end|>
arxiv
@article{buyya2002gridbank:, title={GridBank: A Grid Accounting Services Architecture (GASA) for Distributed Systems Sharing and Integration}, author={Alexander Barmouta and Rajkumar Buyya}, journal={arXiv preprint arXiv:cs/0210002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210002}, primaryClass={cs.DC} }
buyya2002gridbank:
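The abstract refers to the record formats a Grid accounting service has to maintain; the following is a purely hypothetical sketch of what one resource-usage record might look like. Every field name here is an assumption for illustration, not the GridBank schema from the paper.

```python
# Hypothetical sketch of a resource-usage record of the kind a Grid accounting
# service would persist; field names are illustrative assumptions only.
from dataclasses import dataclass, asdict
import json

@dataclass
class UsageRecord:
    consumer_id: str        # account of the user/VO consuming the resource
    provider_id: str        # account of the resource owner being paid
    resource: str           # e.g. "cpu-hours", "disk-GB"
    quantity: float
    unit_price: float       # price agreed before execution
    start: str              # ISO-8601 timestamps of the usage interval
    end: str

    def charge(self) -> float:
        return self.quantity * self.unit_price

rec = UsageRecord("alice@vo-physics", "cluster-42", "cpu-hours",
                  12.5, 0.04, "2002-10-01T08:00Z", "2002-10-01T20:30Z")
print(json.dumps(asdict(rec), indent=2), "charge:", rec.charge())
```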
arxiv-670791
cs/0210003
On the Reflexivity of Point Sets
<|reference_start|>On the Reflexivity of Point Sets: We introduce a new measure for planar point sets S that captures a combinatorial distance that S is from being a convex set: The reflexivity rho(S) of S is given by the smallest number of reflex vertices in a simple polygonalization of S. We prove various combinatorial bounds and provide efficient algorithms to compute reflexivity, both exactly (in special cases) and approximately (in general). Our study considers also some closely related quantities, such as the convex cover number kappa_c(S) of a planar point set, which is the smallest number of convex chains that cover S, and the convex partition number kappa_p(S), which is given by the smallest number of convex chains with pairwise-disjoint convex hulls that cover S. We have proved that it is NP-complete to determine the convex cover or the convex partition number and have given logarithmic-approximation algorithms for determining each.<|reference_end|>
arxiv
@article{arkin2002on, title={On the Reflexivity of Point Sets}, author={Esther M. Arkin, Sandor P. Fekete, Ferran Hurtado, Joseph S. B. Mitchell, Marc Noy, Vera Sacristan, and Saurabh Sethia}, journal={arXiv preprint arXiv:cs/0210003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210003}, primaryClass={cs.CG cs.DS} }
arkin2002on
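A small sketch of the inner quantity in the definition of reflexivity: counting the reflex vertices of one given simple polygonalization via cross products. The minimization over all simple polygonalizations of S, which is what makes computing rho(S) hard, is not attempted here.

```python
# Counting the reflex vertices of one given simple polygonalization.
# rho(S) minimizes this count over all simple polygonalizations of S.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def reflex_count(polygon):
    """polygon: list of (x, y) vertices of a simple polygon in CCW order."""
    n = len(polygon)
    count = 0
    for i in range(n):
        prev, cur, nxt = polygon[i - 1], polygon[i], polygon[(i + 1) % n]
        if cross(prev, cur, nxt) < 0:   # right turn in a CCW polygon -> reflex
            count += 1
    return count

# A CCW "arrowhead" with exactly one reflex vertex, at (1, 1).
print(reflex_count([(0, 0), (4, 0), (1, 1), (0, 4)]))   # -> 1
```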
arxiv-670792
cs/0210004
Revising Partially Ordered Beliefs
<|reference_start|>Revising Partially Ordered Beliefs: This paper deals with the revision of partially ordered beliefs. It proposes a semantic representation of epistemic states by partial pre-orders on interpretations and a syntactic representation by partially ordered belief bases. Two revision operations, the revision stemming from the history of observations and the possibilistic revision, defined when the epistemic state is represented by a total pre-order, are generalized, at a semantic level, to the case of a partial pre-order on interpretations, and at a syntactic level, to the case of a partially ordered belief base. The equivalence between the two representations is shown for the two revision operations.<|reference_end|>
arxiv
@article{benferhat2002revising, title={Revising Partially Ordered Beliefs}, author={Salem Benferhat, Sylvain Lagrue, Odile Papini}, journal={Proc. of the 9th Workshop on Non-monotonic Reasoning (NMR'2002), pp. 142--149}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210004}, primaryClass={cs.AI} }
benferhat2002revising
arxiv-670793
cs/0210005
Positive time fractional derivative
<|reference_start|>Positive time fractional derivative: In mathematical modeling of the non-squared frequency-dependent diffusions, also known as the anomalous diffusions, it is desirable to have a positive real Fourier transform for the time derivative of arbitrary fractional or odd integer order. The Fourier transform of the fractional time derivative in the Riemann-Liouville and Caputo senses, however, involves a complex power function of the fractional order. In this study, a positive time derivative of fractional or odd integer order is introduced to respect the positivity in modeling the anomalous diffusions.<|reference_end|>
arxiv
@article{chen2002positive, title={Positive time fractional derivative}, author={W Chen}, journal={arXiv preprint arXiv:cs/0210005}, year={2002}, number={Simula Research Laboratory Report, Sept. 2002}, archivePrefix={arXiv}, eprint={cs/0210005}, primaryClass={cs.CE} }
chen2002positive
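A short worked statement, in standard notation rather than necessarily the paper's, of the Fourier-transform property the abstract alludes to; the transform convention \hat{f}(\omega)=\int f(t)e^{-\mathrm{i}\omega t}\,dt and suitable decay assumptions on f are assumed.

```latex
% For the Riemann--Liouville / Caputo fractional derivative of order p,
\mathcal{F}\!\left[\frac{d^{p} f}{dt^{p}}\right](\omega)
   = (\mathrm{i}\omega)^{p}\,\hat{f}(\omega)
   = |\omega|^{p}\Bigl(\cos\tfrac{\pi p}{2}
     + \mathrm{i}\,\operatorname{sgn}(\omega)\sin\tfrac{\pi p}{2}\Bigr)\hat{f}(\omega),
% which is not a positive real multiple of \hat{f}(\omega) unless p is an even
% integer; a "positive" time derivative instead requires a nonnegative real
% symbol, e.g. one proportional to |\omega|^{p}.
```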
arxiv-670794
cs/0210006
Dynamic Ordered Sets with Exponential Search Trees
<|reference_start|>Dynamic Ordered Sets with Exponential Search Trees: We introduce exponential search trees as a novel technique for converting static polynomial space search structures for ordered sets into fully-dynamic linear space data structures. This leads to an optimal bound of O(sqrt(log n/loglog n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching an integer y means finding the maximum key in the set which is smaller than or equal to y. This problem is equivalent to the standard textbook problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear space bound was O(log n/loglog n) due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear space trade-offs between the number n, the word length w, and the maximal key U < 2^w: O(min{loglog n+log n/log w, (loglog n)(loglog U)/(logloglog U)}). These trade-offs are, however, not likely to be optimal. Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n.<|reference_end|>
arxiv
@article{andersson2002dynamic, title={Dynamic Ordered Sets with Exponential Search Trees}, author={Arne Andersson and Mikkel Thorup}, journal={arXiv preprint arXiv:cs/0210006}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210006}, primaryClass={cs.DS} }
andersson2002dynamic
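Not the exponential search tree itself, which is far beyond a short sketch; only a static sorted-array baseline that pins down the query the abstract defines, namely that search(y) returns the maximum key less than or equal to y. Class and method names are illustrative.

```python
# Sorted-array baseline for the predecessor query the abstract defines:
# search(y) = maximum key <= y. Updates here cost O(n); the paper's structure
# achieves O(sqrt(log n / loglog n)) per operation in linear space.
from bisect import bisect_right, insort

class SortedSet:
    def __init__(self, keys=()):
        self.keys = sorted(set(keys))

    def insert(self, key):
        if self.search(key) != key:          # keep set semantics
            insort(self.keys, key)

    def search(self, y):
        """Maximum key <= y, or None if every key exceeds y."""
        i = bisect_right(self.keys, y)
        return self.keys[i - 1] if i else None

s = SortedSet([3, 10, 25])
print(s.search(9), s.search(25), s.search(2))   # -> 3 25 None
```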
arxiv-670795
cs/0210007
Compilability of Abduction
<|reference_start|>Compilability of Abduction: Abduction is one of the most important forms of reasoning; it has been successfully applied to several practical problems such as diagnosis. In this paper we investigate whether the computational complexity of abduction can be reduced by an appropriate use of preprocessing. This is motivated by the fact that part of the data of the problem (namely, the set of all possible assumptions and the theory relating assumptions and manifestations) are often known before the rest of the problem. In this paper, we show some complexity results about abduction when compilation is allowed.<|reference_end|>
arxiv
@article{liberatore2002compilability, title={Compilability of Abduction}, author={Paolo Liberatore and Marco Schaerf}, journal={arXiv preprint arXiv:cs/0210007}, year={2002}, doi={10.1145/1182613.1182615}, archivePrefix={arXiv}, eprint={cs/0210007}, primaryClass={cs.AI cs.CC} }
liberatore2002compilability
arxiv-670796
cs/0210008
Cellular automata and communication complexity
<|reference_start|>Cellular automata and communication complexity: The model of cellular automata is fascinating because very simple local rules can generate complex global behaviors. The relationship between local and global function is the subject of many studies. We tackle this question by using results from communication complexity theory and, as a by-product, we provide (yet another) classification of cellular automata.<|reference_end|>
arxiv
@article{durr2002cellular, title={Cellular automata and communication complexity}, author={Christoph Durr, Ivan Rapaport, Guillaume Theyssier}, journal={arXiv preprint arXiv:cs/0210008}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210008}, primaryClass={cs.CC} }
durr2002cellular
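A minimal one-dimensional cellular automaton simulator illustrating the local-rule versus global-map relationship the abstract refers to; the Wolfram rule-number encoding and rule 110 are standard choices made here for illustration, and none of the paper's communication-complexity machinery is reproduced.

```python
# Minimal elementary cellular automaton: a 3-cell local rule applied
# synchronously to every cell, with cyclic boundary conditions.

def step(config, rule):
    """One global step. 'rule' is the Wolfram number of an elementary CA."""
    n = len(config)
    return [(rule >> (config[(i - 1) % n] * 4 + config[i] * 2
                      + config[(i + 1) % n])) & 1
            for i in range(n)]

config = [0] * 30 + [1] + [0] * 30           # a single live cell
for _ in range(15):
    print("".join(".#"[c] for c in config))
    config = step(config, rule=110)           # a simple rule, complex behavior
```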
arxiv-670797
cs/0210009
On the Cell-based Complexity of Recognition of Bounded Configurations by Finite Dynamic Cellular Automata
<|reference_start|>On the Cell-based Complexity of Recognition of Bounded Configurations by Finite Dynamic Cellular Automata: This paper studies complexity of recognition of classes of bounded configurations by a generalization of conventional cellular automata (CA) -- finite dynamic cellular automata (FDCA). Inspired by the CA-based models of biological and computer vision, this study attempts to derive the properties of a complexity measure and of the classes of input configurations that make it beneficial to realize the recognition via a two-layered automaton as compared to a one-layered automaton. A formalized model of an image pattern recognition task is utilized to demonstrate that the derived conditions can be satisfied for a non-empty set of practical problems.<|reference_end|>
arxiv
@article{makatchev2002on, title={On the Cell-based Complexity of Recognition of Bounded Configurations by Finite Dynamic Cellular Automata}, author={Maxim Makatchev}, journal={arXiv preprint arXiv:cs/0210009}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210009}, primaryClass={cs.CC cs.CV} }
makatchev2002on
arxiv-670798
cs/0210010
A Random Structure for Optimum Cache Size Distributed hash table (DHT) Peer-to-Peer design
<|reference_start|>A Random Structure for Optimum Cache Size Distributed hash table (DHT) Peer-to-Peer design: We propose a new and easily realizable distributed hash table (DHT) peer-to-peer structure, incorporating a random caching strategy that allows for {\em polylogarithmic search time} while having only a {\em constant cache} size. We also show that a very large class of deterministic caching strategies, which covers almost all previously proposed DHT systems, cannot achieve polylog search time with constant cache size. In general, the new scheme is the first known DHT structure with the following highly desired properties: (a) Random caching strategy with constant cache size; (b) Average search time of $O(log^{2}(N))$; (c) Guaranteed search time of $O(log^{3}(N))$; (d) Truly local cache dynamics with constant overhead for node deletions and additions; (e) Self-organization from any initial network state towards the desired structure; and (f) A seamless means for various trade-offs, e.g., search speed or anonymity at the expense of a larger cache size.<|reference_end|>
arxiv
@article{sarshar2002a, title={A Random Structure for Optimum Cache Size Distributed hash table (DHT) Peer-to-Peer design}, author={Nima Sarshar and Vwani Roychowdhury}, journal={arXiv preprint arXiv:cs/0210010}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210010}, primaryClass={cs.NI cs.DC} }
sarshar2002a
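A toy ring DHT in which each node keeps its ring successor plus a constant number of uniformly random cache pointers, and lookups forward greedily clockwise. This mirrors the constant-cache random-pointer idea in spirit only; it is not the paper's construction, carries none of its polylogarithmic guarantees, and all identifiers and parameters are assumptions.

```python
# Toy ring DHT with constant-size random caches and greedy clockwise routing.
import random

M = 2 ** 16                                   # identifier space [0, M)
CACHE_SIZE = 3                                # constant, independent of N

def dist(a, b):
    """Clockwise distance from identifier a to identifier b on the ring."""
    return (b - a) % M

class Node:
    def __init__(self, ident):
        self.id, self.successor, self.cache = ident, None, []

def build(n):
    """n >= 2 nodes with distinct random ids, ring successors, random caches."""
    ring = [Node(i) for i in sorted(random.sample(range(M), n))]
    for i, node in enumerate(ring):
        node.successor = ring[(i + 1) % n]
        node.cache = random.sample(ring, CACHE_SIZE)
    return ring

def lookup(start, key):
    """Greedy clockwise routing; returns (node responsible for key, hop count)."""
    node, hops = start, 0
    while True:
        if dist(node.id, key) == 0:
            return node, hops                          # exact hit
        if dist(node.id, key) <= dist(node.id, node.successor.id):
            return node.successor, hops                # key lies in (node, successor]
        # Forward to the known node clockwise-closest to the key; the successor
        # is always a candidate, so every hop makes strict progress.
        node = min([node.successor] + node.cache, key=lambda c: dist(c.id, key))
        hops += 1

random.seed(1)
ring = build(200)
target, hops = lookup(random.choice(ring), key=12345)
print(f"key 12345 is handled by node {target.id} after {hops} hops")
```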
arxiv-670799
cs/0210011
A Note on Induction Schemas in Bounded Arithmetic
<|reference_start|>A Note on Induction Schemas in Bounded Arithmetic: As is well known, Buss' theory of bounded arithmetic $S^{1}_{2}$ proves $\Sigma_{0}^{b}(\Sigma_{1}^{b})-LIND$; however, we show that Allen's $D_{2}^{1}$ does not prove $\Sigma_{0}^{b}(\Sigma_{1}^{b})-LLIND$ unless $P = NC$. We also give some interesting alternative axiomatisations of $S^{1}_{2}$.<|reference_end|>
arxiv
@article{ignjatovic2002a, title={A Note on Induction Schemas in Bounded Arithmetic}, author={Aleksandar Ignjatovic}, journal={arXiv preprint arXiv:cs/0210011}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210011}, primaryClass={cs.LO cs.CC} }
ignjatovic2002a
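For readers unfamiliar with the schemas named in the abstract, a hedged restatement of the standard Buss-style length-induction schemas; the presentation and notation are ours, and the exact formula classes $\Sigma_{0}^{b}(\Sigma_{1}^{b})$ are as in the paper.

```latex
% |x| denotes the length of the binary representation of x, and ||x|| = | |x| |.
\begin{align*}
\textrm{LIND}:  &\quad A(0) \land \forall x\,\bigl(A(x) \to A(x+1)\bigr)
                  \;\to\; \forall x\, A(|x|) \\
\textrm{LLIND}: &\quad A(0) \land \forall x\,\bigl(A(x) \to A(x+1)\bigr)
                  \;\to\; \forall x\, A(\|x\|)
\end{align*}
```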
arxiv-670800
cs/0210012
Selection of future events from a time series in relation to estimations of forecasting uncertainty
<|reference_start|>Selection of future events from a time series in relation to estimations of forecasting uncertainty: A new general procedure for the a priori selection of more predictable events from a time series of an observed variable is proposed. The procedure is applicable to time series that contain different types of events with significantly different predictability, or, in other words, to heteroskedastic time series. A priori selection of future events according to the expected uncertainty of their forecasts may be helpful for making practical decisions. The procedure first involves building two neural-network-based forecasting models, one predicting the conditional mean and the other the conditional dispersion, and then deriving a rule that sorts future events into groups of more and less predictable events. The method is demonstrated and tested on a computer-generated time series and then applied to a real-world time series, the Dow Jones Industrial Average index.<|reference_end|>
arxiv
@article{konovalov2002selection, title={Selection of future events from a time series in relation to estimations of forecasting uncertainty}, author={Igor B. Konovalov}, journal={arXiv preprint arXiv:cs/0210012}, year={2002}, archivePrefix={arXiv}, eprint={cs/0210012}, primaryClass={cs.NE} }
konovalov2002selection
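A hedged sketch of the two-model procedure the abstract describes: one regressor is fitted for the conditional mean, a second is fitted on squared residuals for the conditional dispersion, and test events with low predicted dispersion are selected as the more predictable group. The use of scikit-learn's MLPRegressor, the lag length, and the median threshold are illustrative assumptions, not the paper's setup.

```python
# Two-model selection sketch on a heteroskedastic toy series.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def lagged(series, k=5):
    """Rows of k consecutive lags -> next value (one-step-ahead targets)."""
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    return X, series[k:]

# Toy series whose noise level depends on the current phase of the signal.
t = np.arange(3000)
signal = np.sin(0.07 * t)
series = signal + rng.normal(scale=0.05 + 0.4 * (signal > 0), size=t.size)

X, y = lagged(series)
split = 2000
mean_model = MLPRegressor((32,), max_iter=2000, random_state=0)
mean_model.fit(X[:split], y[:split])

resid2 = (y[:split] - mean_model.predict(X[:split])) ** 2
disp_model = MLPRegressor((32,), max_iter=2000, random_state=0)
disp_model.fit(X[:split], resid2)                 # conditional dispersion model

disp_hat = disp_model.predict(X[split:])
selected = disp_hat < np.median(disp_hat)         # "more predictable" events
err = np.abs(y[split:] - mean_model.predict(X[split:]))
print("MAE selected:", err[selected].mean(), "MAE rejected:", err[~selected].mean())
```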