corpus_id: stringlengths 7-12
paper_id: stringlengths 9-16
title: stringlengths 1-261
abstract: stringlengths 70-4.02k
source: stringclasses (1 value)
bibtex: stringlengths 208-20.9k
citation_key: stringlengths 6-100
arxiv-669501
cs/0005001
Robustness of Regional Matching Scheme over Global Matching Scheme
<|reference_start|>Robustness of Regional Matching Scheme over Global Matching Scheme: The paper establishes and verifies the theory, prevailing widely among image and pattern recognition specialists, that the bottom-up indirect regional matching process is more stable and more robust than the global matching process against concentrated types of noise represented by clutter, outliers or occlusion in the imagery. We demonstrate this by analyzing the effect of concentrated noise on a typical decision making process of a simplified two-candidate voting model, where our theorem establishes that the lower bounds on the critical breakdown point of the election (or decision) result for the bottom-up matching process are greater than the exact bound of the global matching process, implying that the former regional process can accommodate a higher level of noise than the latter global process before the decision result overturns. We present a convincing experimental verification supporting not only the theory, by a white-black flag recognition problem in the presence of localized noise, but also the validity of the conjecture that the theorem remains valid for other decision making processes involving an important dimension-reducing transform such as principal component analysis or a Gabor transform, by a facial recognition problem.<|reference_end|>
arxiv
@article{chen2000robustness, title={Robustness of Regional Matching Scheme over Global Matching Scheme}, author={Liang Chen, Naoyuki Tokuda (Utsunomiya University, Japan)}, journal={arXiv preprint arXiv:cs/0005001}, year={2000}, number={UU-TOKUDALAB-00-03}, archivePrefix={arXiv}, eprint={cs/0005001}, primaryClass={cs.CV} }
chen2000robustness
arxiv-669502
cs/0005002
Application Software, Domain-Specific Languages, and Language Design Assistants
<|reference_start|>Application Software, Domain-Specific Languages, and Language Design Assistants: While application software does the real work, domain-specific languages (DSLs) are tools to help produce it efficiently, and language design assistants in turn are meta-tools to help produce DSLs quickly. DSLs are already in wide use (HTML for web pages, Excel macros for spreadsheet applications, VHDL for hardware design, ...), but many more will be needed for both new as well as existing application domains. Language design assistants to help develop them currently exist only in the basic form of language development systems. After a quick look at domain-specific languages, and especially their relationship to application libraries, we survey existing language development systems and give an outline of future language design assistants.<|reference_end|>
arxiv
@article{heering2000application, title={Application Software, Domain-Specific Languages, and Language Design Assistants}, author={Jan Heering}, journal={in Proceedings SSGRR 2000 International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet}, year={2000}, number={SEN-R0010 (CWI, Amsterdam)}, archivePrefix={arXiv}, eprint={cs/0005002}, primaryClass={cs.PL} }
heering2000application
arxiv-669503
cs/0005003
CoRR: A Computing Research Repository
<|reference_start|>CoRR: A Computing Research Repository: Discusses how CoRR was set up and some policy issues involved with setting up such a repository.<|reference_end|>
arxiv
@article{halpern2000corr:, title={CoRR: A Computing Research Repository}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0005003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005003}, primaryClass={cs.DL cs.CY} }
halpern2000corr:
arxiv-669504
cs/0005004
A response to the commentaries on CoRR
<|reference_start|>A response to the commentaries on CoRR: This is a response to the commentaries on "CoRR: A Computing Research Repository".<|reference_end|>
arxiv
@article{halpern2000a, title={A response to the commentaries on CoRR}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0005004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005004}, primaryClass={cs.DL cs.CY} }
halpern2000a
arxiv-669505
cs/0005005
Connectivity Compression for Irregular Quadrilateral Meshes
<|reference_start|>Connectivity Compression for Irregular Quadrilateral Meshes: Applications that require Internet access to remote 3D datasets are often limited by the storage costs of 3D models. Several compression methods are available to address these limits for objects represented by triangle meshes. Many CAD and VRML models, however, are represented as quadrilateral meshes or mixed triangle/quadrilateral meshes, and these models may also require compression. We present an algorithm for encoding the connectivity of such quadrilateral meshes, and we demonstrate that by preserving and exploiting the original quad structure, our approach achieves encodings 30 - 80% smaller than an approach based on randomly splitting quads into triangles. We present both a code with a proven worst-case cost of 3 bits per vertex (or 2.75 bits per vertex for meshes without valence-two vertices) and entropy-coding results for typical meshes ranging from 0.3 to 0.9 bits per vertex, depending on the regularity of the mesh. Our method may be implemented by a rule for a particular splitting of quads into triangles and by using the compression and decompression algorithms introduced in [Rossignac99] and [Rossignac&Szymczak99]. We also present extensions to the algorithm to compress meshes with holes and handles and meshes containing triangles and other polygons as well as quads.<|reference_end|>
arxiv
@article{king2000connectivity, title={Connectivity Compression for Irregular Quadrilateral Meshes}, author={Davis King, Jarek Rossignac, and Andrzej Szymczak}, journal={arXiv preprint arXiv:cs/0005005}, year={2000}, number={GVU Tech Report GIT-GVU-99-36}, archivePrefix={arXiv}, eprint={cs/0005005}, primaryClass={cs.GR cs.CG cs.DS} }
king2000connectivity
arxiv-669506
cs/0005006
A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation
<|reference_start|>A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation: This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of Naive Bayesian classifiers, each of which is based on lexical features that represent co--occurring words in varying sized windows of context. Despite the simplicity of this approach, empirical results disambiguating the widely studied nouns line and interest show that such an ensemble achieves accuracy rivaling the best previously published results.<|reference_end|>
arxiv
@article{pedersen2000a, title={A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation}, author={Ted Pedersen (University of Minnesota Duluth)}, journal={arXiv preprint arXiv:cs/0005006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005006}, primaryClass={cs.CL} }
pedersen2000a
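A minimal sketch of the ensemble idea described in the abstract above (cs/0005006): several Naive Bayesian classifiers, each trained on bag-of-words features from a different-sized window of context around the ambiguous word, are combined by summing their class probabilities. The window sizes, toy sentences, and sense labels below are invented for illustration; this is not the authors' implementation or data.

```python
# Sketch: ensemble of Naive Bayesian classifiers over varying context windows.
# Toy illustration only -- window sizes and example sentences are made up.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def window(tokens, target_index, size):
    """Return the tokens within `size` positions of the target word."""
    lo, hi = max(0, target_index - size), target_index + size + 1
    return " ".join(tokens[lo:target_index] + tokens[target_index + 1:hi])

# Toy training data: sentences containing the ambiguous noun "line",
# each labelled with a sense tag and the position of "line".
train = [
    ("the phone line went dead during the call".split(), 2, "phone"),
    ("please hold the line while I transfer you".split(), 3, "phone"),
    ("a long line of people waited outside the shop".split(), 2, "queue"),
    ("we joined the line for tickets at noon".split(), 3, "queue"),
]
test = ("the line was busy so she hung up".split(), 1)

window_sizes = [2, 5, 10]          # small, medium, large context windows
ensemble = []
for size in window_sizes:
    texts = [window(toks, idx, size) for toks, idx, _ in train]
    labels = [sense for _, _, sense in train]
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    ensemble.append((vec, clf))

# Combine members by summing their class probability estimates.
scores = np.zeros(len(ensemble[0][1].classes_))
for size, (vec, clf) in zip(window_sizes, ensemble):
    x = vec.transform([window(test[0], test[1], size)])
    scores += clf.predict_proba(x)[0]

print(dict(zip(ensemble[0][1].classes_, scores)))
print("predicted sense:", ensemble[0][1].classes_[scores.argmax()])
```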
arxiv-669507
cs/0005007
Scientific Collaboratories as Socio-Technical Interaction Networks: A Theoretical Approach
<|reference_start|>Scientific Collaboratories as Socio-Technical Interaction Networks: A Theoretical Approach: Collaboratories refer to laboratories where scientists can work together while they are in distant locations from each other and from key equipment. They have captured the interest both of CSCW researchers and of science funders who wish to optimize the use of rare scientific equipment and expertise. We examine the kind of CSCW conceptions that help us best understand the character of working relationships in these scientific collaboratories. Our model, inspired by actor-network theory, considers technologies as Socio-technical Interaction Networks (STINs). This model provides a rich understanding of the scientific collaboratories, and also a more complete understanding of the conditions and activities that support collaborative work in them. We illustrate the significance of STIN models with several cases drawn from the fields of high energy physics and materials science.<|reference_end|>
arxiv
@article{kling2000scientific, title={Scientific Collaboratories as Socio-Technical Interaction Networks: A Theoretical Approach}, author={Rob Kling, Geoffrey McKim, Joanna Fortuna, Adam King}, journal={arXiv preprint arXiv:cs/0005007}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005007}, primaryClass={cs.CY} }
kling2000scientific
arxiv-669508
cs/0005008
A Denotational Semantics for First-Order Logic
<|reference_start|>A Denotational Semantics for First-Order Logic: In Apt and Bezem [AB99] (see cs.LO/9811017) we provided a computational interpretation of first-order formulas over arbitrary interpretations. Here we complement this work by introducing a denotational semantics for first-order logic. Additionally, by allowing an assignment of a non-ground term to a variable we introduce in this framework logical variables. The semantics combines a number of well-known ideas from the areas of semantics of imperative programming languages and logic programming. In the resulting computational view conjunction corresponds to sequential composition, disjunction to ``don't know'' nondeterminism, existential quantification to declaration of a local variable, and negation to the ``negation as finite failure'' rule. The soundness result shows correctness of the semantics with respect to the notion of truth. The proof resembles in some aspects the proof of the soundness of the SLDNF-resolution.<|reference_end|>
arxiv
@article{apt2000a, title={A Denotational Semantics for First-Order Logic}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/0005008}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005008}, primaryClass={cs.PL cs.AI} }
apt2000a
arxiv-669509
cs/0005009
PSPACE Reasoning for Graded Modal Logics
<|reference_start|>PSPACE Reasoning for Graded Modal Logics: We present a PSPACE algorithm that decides satisfiability of the graded modal logic Gr(K_R)---a natural extension of propositional modal logic K_R by counting expressions---which plays an important role in the area of knowledge representation. The algorithm employs a tableaux approach and is the first known algorithm which meets the lower bound for the complexity of the problem. Thus, we exactly fix the complexity of the problem and refute an ExpTime-hardness conjecture. We extend the results to the logic Gr(K_(R \cap I)), which augments Gr(K_R) with inverse relations and intersection of accessibility relations. This establishes a kind of ``theoretical benchmark'' that all algorithmic approaches can be measured against.<|reference_end|>
arxiv
@article{tobies2000pspace, title={PSPACE Reasoning for Graded Modal Logics}, author={Stephan Tobies}, journal={Journal of Logic and Computation, Vol. 11 No. 1, pp 85-106 2001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005009}, primaryClass={cs.LO cs.AI cs.CC cs.DS} }
tobies2000pspace
arxiv-669510
cs/0005010
Extending and Implementing the Stable Model Semantics
<|reference_start|>Extending and Implementing the Stable Model Semantics: An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.<|reference_end|>
arxiv
@article{simons2000extending, title={Extending and Implementing the Stable Model Semantics}, author={Patrik Simons}, journal={arXiv preprint arXiv:cs/0005010}, year={2000}, number={HUT-TCS-A58}, archivePrefix={arXiv}, eprint={cs/0005010}, primaryClass={cs.LO cs.AI} }
simons2000extending
arxiv-669511
cs/0005011
An Average Analysis of Backtracking on Random Constraint Satisfaction Problems
<|reference_start|>An Average Analysis of Backtracking on Random Constraint Satisfaction Problems: In this paper we propose a random CSP model, called Model GB, which is a natural generalization of standard Model B. It is proved that Model GB in which each constraint is easy to satisfy exhibits non-trivial behaviour (not trivially satisfiable or unsatisfiable) as the number of variables approaches infinity. A detailed analysis to obtain an asymptotic estimate (good to 1+o(1)) of the average number of nodes in a search tree used by the backtracking algorithm on Model GB is also presented. It is shown that the average number of nodes required for finding all solutions or proving that no solution exists grows exponentially with the number of variables. So this model might be an interesting distribution for studying the nature of hard instances and evaluating the performance of CSP algorithms. In addition, we further investigate the behaviour of the average number of nodes as r (the ratio of constraints to variables) varies. The results indicate that as r increases, random CSP instances get easier and easier to solve, and the base for the average number of nodes that is exponential in r tends to 1 as r approaches infinity. Therefore, although the average number of nodes used by the backtracking algorithm on random CSP is exponential, many CSP instances will be very easy to solve when r is sufficiently large.<|reference_end|>
arxiv
@article{xu2000an, title={An Average Analysis of Backtracking on Random Constraint Satisfaction Problems}, author={Ke Xu and Wei Li}, journal={Annals of Mathematics and Artificial Intelligence, 33:21-37, 2001.}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005011}, primaryClass={cs.CC cs.AI} }
xu2000an
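The kind of experiment that motivates the analysis above, counting search-tree nodes used by backtracking on random binary CSPs as the constraint-to-variable ratio r varies, can be sketched generically as follows. This is not the paper's Model GB or its exact parameterisation; the instance sizes, tightness value, and number of runs are arbitrary illustrative choices.

```python
# Sketch: count backtracking search-tree nodes on random binary CSPs.
# Generic random model, not Model GB; parameters are illustrative only.
import itertools, random

def random_csp(n, d, m, tightness, rng):
    """n variables, domain {0..d-1}, m random binary constraints;
    each constraint forbids a fraction `tightness` of the d*d value pairs."""
    constraints = {}
    pairs = list(itertools.combinations(range(n), 2))
    for i, j in rng.sample(pairs, m):
        forbidden = set(rng.sample(list(itertools.product(range(d), repeat=2)),
                                   int(tightness * d * d)))
        constraints[(i, j)] = forbidden
    return constraints

def count_nodes(n, d, constraints):
    """Backtrack over variables 0..n-1; return nodes visited finding all solutions."""
    nodes = 0
    assignment = {}

    def consistent(var, val):
        for (i, j), forbidden in constraints.items():
            if j == var and i in assignment and (assignment[i], val) in forbidden:
                return False
            if i == var and j in assignment and (val, assignment[j]) in forbidden:
                return False
        return True

    def search(var):
        nonlocal nodes
        if var == n:
            return
        for val in range(d):
            nodes += 1
            if consistent(var, val):
                assignment[var] = val
                search(var + 1)
                del assignment[var]

    search(0)
    return nodes

rng = random.Random(0)
n, d, tightness = 8, 3, 0.3
for r in (1.0, 2.0, 3.0):                    # ratio of constraints to variables
    m = int(r * n)
    runs = [count_nodes(n, d, random_csp(n, d, m, tightness, rng)) for _ in range(20)]
    print(f"r = {r}: average nodes = {sum(runs) / len(runs):.1f}")
```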
arxiv-669512
cs/0005012
Reasoning with Axioms: Theory and Practice
<|reference_start|>Reasoning with Axioms: Theory and Practice: When reasoning in description, modal or temporal logics it is often useful to consider axioms representing universal truths in the domain of discourse. Reasoning with respect to an arbitrary set of axioms is hard, even for relatively inexpressive logics, and it is essential to deal with such axioms in an efficient manner if implemented systems are to be effective in real applications. This is particularly relevant to Description Logics, where subsumption reasoning with respect to a terminology is a fundamental problem. Two optimisation techniques that have proved to be particularly effective in dealing with terminologies are lazy unfolding and absorption. In this paper we seek to improve our theoretical understanding of these important techniques. We define a formal framework that allows the techniques to be precisely described, establish conditions under which they can be safely applied, and prove that, provided these conditions are respected, subsumption testing algorithms will still function correctly. These results are used to show that the procedures used in the FaCT system are correct and, moreover, to show how efficiency can be significantly improved, while still retaining the guarantee of correctness, by relaxing the safety conditions for absorption.<|reference_end|>
arxiv
@article{horrocks2000reasoning, title={Reasoning with Axioms: Theory and Practice}, author={Ian Horrocks and Stephan Tobies}, journal={arXiv preprint arXiv:cs/0005012}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005012}, primaryClass={cs.LO cs.AI} }
horrocks2000reasoning
arxiv-669513
cs/0005013
Practical Reasoning for Very Expressive Description Logics
<|reference_start|>Practical Reasoning for Very Expressive Description Logics: Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematical. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles and functional restrictions with respect to general concept inclusion axioms and role hierarchies; early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSPACE. We investigate the limits of decidability for this family of DLs, showing that relaxing the constraints placed on the kinds of roles used in number restrictions leads to the undecidability of all inference problems. Finally, we describe a number of optimisation techniques that are crucial in obtaining implementations of the decision procedures, which, despite the worst-case complexity of the problem, exhibit good performance with real-life problems.<|reference_end|>
arxiv
@article{horrocks2000practical, title={Practical Reasoning for Very Expressive Description Logics}, author={Ian Horrocks, Ulrike Sattler and Stephan Tobies}, journal={Logic Journal of the IGPL 8(3):239-264, May 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005013}, primaryClass={cs.LO cs.AI} }
horrocks2000practical
arxiv-669514
cs/0005014
Practical Reasoning for Expressive Description Logics
<|reference_start|>Practical Reasoning for Expressive Description Logics: Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematical. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles, role hierarchies, and qualifying number restrictions. Early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSPACE. Finally, we investigate the limits of decidability for this family of DLs.<|reference_end|>
arxiv
@article{horrocks2000practical, title={Practical Reasoning for Expressive Description Logics}, author={Ian Horrocks, Ulrike Sattler and Stephan Tobies}, journal={arXiv preprint arXiv:cs/0005014}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005014}, primaryClass={cs.LO cs.AI} }
horrocks2000practical
arxiv-669515
cs/0005015
Noun Phrase Recognition by System Combination
<|reference_start|>Noun Phrase Recognition by System Combination: The performance of machine learning algorithms can be improved by combining the output of different systems. In this paper we apply this idea to the recognition of noun phrases. We generate different classifiers by using different representations of the data. By combining the results with voting techniques described in (Van Halteren et al., 1998) we manage to improve the best reported performances on standard data sets for base noun phrases and arbitrary noun phrases.<|reference_end|>
arxiv
@article{sang2000noun, title={Noun Phrase Recognition by System Combination}, author={Erik F. Tjong Kim Sang}, journal={Proceedings of NAACL 2000, Seattle, WA, USA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005015}, primaryClass={cs.CL} }
sang2000noun
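The per-token voting scheme underlying the combination described above can be sketched very simply: several chunkers tag the same sentence and the combined output takes the majority label at each position. The IOB tag values and the three toy "system outputs" below are invented; the paper itself combines classifiers built from different data representations and uses the voting techniques of Van Halteren et al.

```python
# Sketch: combining per-token outputs of several NP chunkers by majority vote.
# Toy data; tags follow the common IOB convention (B-NP / I-NP / O).
from collections import Counter

def majority_vote(outputs):
    """outputs: list of tag sequences (one per system) for the same sentence."""
    combined = []
    for position_tags in zip(*outputs):
        tag, _ = Counter(position_tags).most_common(1)[0]
        combined.append(tag)
    return combined

system_a = ["B-NP", "I-NP", "O", "B-NP", "O"]
system_b = ["B-NP", "I-NP", "O", "O",    "O"]
system_c = ["B-NP", "O",    "O", "B-NP", "O"]

print(majority_vote([system_a, system_b, system_c]))
# -> ['B-NP', 'I-NP', 'O', 'B-NP', 'O']
```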
arxiv-669516
cs/0005016
Improving Testsuites via Instrumentation
<|reference_start|>Improving Testsuites via Instrumentation: This paper explores the usefulness of a technique from software engineering, namely code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test sentences is used to detect untested rules, redundant test sentences, and likely causes of overgeneration. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that 10-30% of testing time is redundant. The methodology applied can be seen as a re-use of grammar writing knowledge for testsuite compilation.<|reference_end|>
arxiv
@article{broeker2000improving, title={Improving Testsuites via Instrumentation}, author={Norbert Broeker}, journal={Proc. ANLP--NAACL, Seattle/WA, Apr29--May4 2000, pp.325-330}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005016}, primaryClass={cs.CL} }
broeker2000improving
arxiv-669517
cs/0005017
Reasoning with Individuals for the Description Logic SHIQ
<|reference_start|>Reasoning with Individuals for the Description Logic SHIQ: While there has been a great deal of work on the development of reasoning algorithms for expressive description logics, in most cases only Tbox reasoning is considered. In this paper we present an algorithm for combined Tbox and Abox reasoning in the SHIQ description logic. This algorithm is of particular interest as it can be used to decide the problem of (database) conjunctive query containment w.r.t. a schema. Moreover, the realisation of an efficient implementation should be relatively straightforward as it can be based on an existing highly optimised implementation of the Tbox algorithm in the FaCT system.<|reference_end|>
arxiv
@article{horrock2000reasoning, title={Reasoning with Individuals for the Description Logic SHIQ}, author={Ian Horrocks and Ulrike Sattler and Stephan Tobies}, journal={arXiv preprint arXiv:cs/0005017}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005017}, primaryClass={cs.LO cs.AI} }
horrock2000reasoning
arxiv-669518
cs/0005018
On Modular Termination Proofs of General Logic Programs
<|reference_start|>On Modular Termination Proofs of General Logic Programs: We propose a modular method for proving termination of general logic programs (i.e., logic programs with negation). It is based on the notion of acceptable programs, but it allows us to prove termination in a truly modular way. We consider programs consisting of a hierarchy of modules and supply a general result for proving termination by dealing with each module separately. For programs which are in a certain sense well-behaved, namely well-moded or well-typed programs, we derive both a simple verification technique and an iterative proof method. Some examples show how our system allows for greatly simplified proofs.<|reference_end|>
arxiv
@article{bossi2000on, title={On Modular Termination Proofs of General Logic Programs}, author={Annalisa Bossi, Nicoletta Cocco, Sandro Etalle and Sabina Rossi}, journal={arXiv preprint arXiv:cs/0005018}, year={2000}, number={University of Venice Technical Report CS2000-8}, archivePrefix={arXiv}, eprint={cs/0005018}, primaryClass={cs.LO cs.PL} }
bossi2000on
arxiv-669519
cs/0005019
On the Scalability of the Answer Extraction System "ExtrAns"
<|reference_start|>On the Scalability of the Answer Extraction System "ExtrAns": This paper reports on the scalability of the answer extraction system ExtrAns. An answer extraction system locates the exact phrases in the documents that contain the explicit answers to the user queries. Answer extraction systems are therefore more convenient than document retrieval systems in situations where the user wants to find specific information in limited time. ExtrAns performs answer extraction over UNIX manpages. It has been constructed by combining available linguistic resources and implementing only a few modules from scratch. A resolution procedure between the minimal logical form of the user query and the minimal logical forms of the manpage sentences finds the answers to the queries. These answers are displayed to the user, together with pointers to the respective manpages, and the exact phrases that contribute to the answer are highlighted. This paper shows that the increase in response times is not a big issue when scaling the system up from 30 to 500 documents, and that the response times for 500 documents are still acceptable for a real-time answer extraction system.<|reference_end|>
arxiv
@article{aliod2000on, title={On the Scalability of the Answer Extraction System "ExtrAns"}, author={Diego Moll'a Aliod and Michael Hess}, journal={Applications of Natural Language to Information Systems (NLDB'99). Klagenfurt, Austria, 1999, 219-224}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005019}, primaryClass={cs.CL} }
aliod2000on
arxiv-669520
cs/0005020
Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies
<|reference_start|>Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies: We present a multi-document summarizer, called MEAD, which generates summaries using cluster centroids produced by a topic detection and tracking system. We also describe two new techniques, based on sentence utility and subsumption, which we have applied to the evaluation of both single and multiple document summaries. Finally, we describe two user studies that test our models of multi-document summarization.<|reference_end|>
arxiv
@article{radev2000centroid-based, title={Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies}, author={Dragomir R. Radev (University of Michigan), Hongyan Jing (Columbia University), Malgorzata Budzikowska (IBM TJ Watson Research Center)}, journal={NAACL/ANLP Workshop on Automatic Summarization, Seattle, WA, April 30, 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005020}, primaryClass={cs.CL cs.AI cs.DL cs.HC cs.IR} }
radev2000centroid-based
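A minimal sketch of centroid-style sentence scoring of the kind MEAD uses: build a term-frequency centroid over a document cluster and rank sentences by the total centroid weight of their terms. The toy documents, plain TF weighting, and fixed summary length are simplifications; the actual system works on clusters from a topic detection and tracking system and combines the centroid score with other features.

```python
# Sketch: centroid-based sentence scoring for multi-document summarization.
# Simplified: plain term-frequency centroid, no positional or length features.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

cluster = [
    "The storm hit the coast on Monday. Thousands lost electrical power.",
    "Power outages followed the storm. Crews worked to restore electricity.",
    "Officials said the storm was the strongest in a decade.",
]

# Centroid: aggregate term frequencies over all documents in the cluster.
centroid = Counter()
for doc in cluster:
    centroid.update(tokenize(doc))

# Score each sentence by the summed centroid weight of its terms,
# then pick the top-scoring sentences as the extractive summary.
sentences = [s.strip() for doc in cluster for s in doc.split(".") if s.strip()]
scored = sorted(sentences,
                key=lambda s: sum(centroid[t] for t in tokenize(s)),
                reverse=True)

summary_len = 2
print(". ".join(scored[:summary_len]) + ".")
```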
arxiv-669521
cs/0005021
Modeling the Uncertainty in Complex Engineering Systems
<|reference_start|>Modeling the Uncertainty in Complex Engineering Systems: Existing procedures for model validation have been deemed inadequate for many engineering systems. The reason for this inadequacy is the high degree of complexity of the mechanisms that govern these systems. It is proposed in this paper to shift the attention from modeling the engineering system itself to modeling the uncertainty that underlies its behavior. A mathematical framework for modeling the uncertainty in complex engineering systems is developed. This framework uses the results of computational learning theory. It is based on the premise that a system model is a learning machine.<|reference_end|>
arxiv
@article{guergachi2000modeling, title={Modeling the Uncertainty in Complex Engineering Systems}, author={A. Guergachi}, journal={arXiv preprint arXiv:cs/0005021}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005021}, primaryClass={cs.AI cs.LG} }
guergachi2000modeling
arxiv-669522
cs/0005022
Fractionally-addressed delay lines
<|reference_start|>Fractionally-addressed delay lines: While traditional implementations of variable-length digital delay lines are based on a circular buffer accessed by two pointers, we propose an implementation where a single fractional pointer is used both for read and write operations. On modern general-purpose architectures, the proposed method is nearly as efficient as the popular interpolated circular buffer, and it behaves well for delay-length modulations commonly found in digital audio effects. The physical interpretation of the new implementation shows that it is suitable for simulating tension or density modulations in wave-propagating media.<|reference_end|>
arxiv
@article{rocchesso2000fractionally-addressed, title={Fractionally-addressed delay lines}, author={Davide Rocchesso}, journal={IEEE Transactions on Speech and Audio Processing, vol. 8, no. 6, november 2000, pp. 717-727}, year={2000}, doi={10.1109/89.876310}, archivePrefix={arXiv}, eprint={cs/0005022}, primaryClass={cs.SD} }
rocchesso2000fractionally-addressed
arxiv-669523
cs/0005023
C++ programming language for an abstract massively parallel SIMD architecture
<|reference_start|>C++ programming language for an abstract massively parallel SIMD architecture: The aim of this work is to define and implement an extended C++ language to support the SIMD programming paradigm. The C++ programming language has been extended to express the full potential of an abstract SIMD machine consisting of a central Control Processor and an N-dimensional toroidal array of Numeric Processors. Very few extensions have been added to the standard C++, with the goal of minimising the effort for the programmer in learning a new language and of keeping the performance of the compiled code very high. The proposed language has been implemented as a porting of the GNU C++ Compiler on a SIMD supercomputer.<|reference_end|>
arxiv
@article{lonardo2000c++, title={C++ programming language for an abstract massively parallel SIMD architecture}, author={Alessandro Lonardo, Emanuele Panizzi and Benedetto Proietti}, journal={arXiv preprint arXiv:cs/0005023}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005023}, primaryClass={cs.PL} }
lonardo2000c++
arxiv-669524
cs/0005024
The SAT Phase Transition
<|reference_start|>The SAT Phase Transition: Phase transition is an important feature of the SAT problem. For the random k-SAT model, it is proved that as r (the ratio of clauses to variables) increases, the structure of solutions undergoes a sudden change, the satisfiability phase transition, when r reaches a threshold point. This phenomenon shows that the satisfying truth assignments suddenly shift from being relatively different from each other to being very similar to each other.<|reference_end|>
arxiv
@article{xu2000the, title={The SAT Phase Transition}, author={Ke Xu and Wei Li}, journal={The SAT Phase Transition. Science in China, Series E, 42(5):494-501, 1999}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005024}, primaryClass={cs.AI cs.CC} }
xu2000the
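The threshold behaviour described above can be observed with a tiny empirical experiment: generate random 3-SAT instances at several clause-to-variable ratios r and measure the fraction that are satisfiable. The brute-force satisfiability check below only works for very small n and is purely illustrative; the ratios and trial counts are arbitrary.

```python
# Sketch: empirical satisfiability fraction of random 3-SAT as r varies.
# Brute force over all assignments, so keep n small; illustrative only.
import itertools, random

def random_3sat(n, m, rng):
    """m clauses over n variables; each clause picks 3 distinct variables,
    each negated with probability 1/2. Literal v means var v, -v its negation."""
    clauses = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def satisfiable(n, clauses):
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

rng = random.Random(1)
n, trials = 12, 50
for r in (3.0, 4.0, 4.3, 5.0, 6.0):
    sat = sum(satisfiable(n, random_3sat(n, int(r * n), rng)) for _ in range(trials))
    print(f"r = {r}: {sat / trials:.0%} satisfiable")
```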
arxiv-669525
cs/0005025
Finite-State Reduplication in One-Level Prosodic Morphology
<|reference_start|>Finite-State Reduplication in One-Level Prosodic Morphology: Reduplication, a central instance of prosodic morphology, is particularly challenging for state-of-the-art computational morphology, since it involves copying of some part of a phonological string. In this paper I advocate a finite-state method that combines enriched lexical representations via intersection to implement the copying. The proposal includes a resource-conscious variant of automata and can benefit from the existence of lazy algorithms. Finally, the implementation of a complex case from Koasati is presented.<|reference_end|>
arxiv
@article{walther2000finite-state, title={Finite-State Reduplication in One-Level Prosodic Morphology}, author={Markus Walther (University of Marburg)}, journal={Proc. NAACL-2000, Seattle/WA, pp.296-302}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005025}, primaryClass={cs.CL} }
walther2000finite-state
arxiv-669526
cs/0005026
A One-Time Pad based Cipher for Data Protection in Distributed Environments
<|reference_start|>A One-Time Pad based Cipher for Data Protection in Distributed Environments: A one-time pad (OTP) based cipher to insure both data protection and integrity when mobile code arrives to a remote host is presented. Data protection is required when a mobile agent could retrieve confidential information that would be encrypted in untrusted nodes of the network; in this case, information management could not rely on carrying an encryption key. Data integrity is a prerequisite because mobile code must be protected against malicious hosts that, by counterfeiting or removing collected data, could cover information to the server that has sent the agent. The algorithm described in this article seems to be simple enough, so as to be easily implemented. This scheme is based on a non-interactive protocol and allows a remote host to change its own data on-the-fly and, at the same time, protecting information against handling by other hosts.<|reference_end|>
arxiv
@article{sobrado2000a, title={A One-Time Pad based Cipher for Data Protection in Distributed Environments}, author={Igor Sobrado (University of Oviedo)}, journal={arXiv preprint arXiv:cs/0005026}, year={2000}, number={FFUOV-00/03}, archivePrefix={arXiv}, eprint={cs/0005026}, primaryClass={cs.CR cs.DC cs.IR cs.NI} }
sobrado2000a
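As background for the scheme above, here is a minimal sketch of the one-time-pad primitive itself: XOR the message with a random pad of equal length, which is its own inverse. The paper's mobile-agent protocol adds non-interactive key handling and integrity protection on top of this primitive, none of which the sketch attempts to reproduce; the sample message is invented.

```python
# Sketch: the one-time-pad primitive (XOR with a random pad of equal length).
# Only the basic cipher; the agent protocol in the paper builds on top of it.
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(plaintext), "pad must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

otp_decrypt = otp_encrypt          # XOR is its own inverse

message = b"collected by the mobile agent"
pad = os.urandom(len(message))     # the pad must be random and never reused

ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message
print(ciphertext.hex())
```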
arxiv-669527
cs/0005027
A Bayesian Reflection on Surfaces
<|reference_start|>A Bayesian Reflection on Surfaces: The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: The maximally informative inference of continuous-basis fields, that is where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; an information theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An application example instance, the inference of continuous surfaces from measurements (for example, camera image data), is presented.<|reference_end|>
arxiv
@article{wolf2000a, title={A Bayesian Reflection on Surfaces}, author={David R. Wolf}, journal={Entropy, Vol.1, Issue 4, 69-98, 1999. http://www.mdpi.org/entropy/}, year={2000}, doi={10.3390/e1040069}, archivePrefix={arXiv}, eprint={cs/0005027}, primaryClass={cs.CV cs.DS cs.LG math.PR nlin.AO physics.data-an} }
wolf2000a
arxiv-669528
cs/0005028
A method for command identification, using modified collision free hashing with addition & rotation iterative hash functions (part 1)
<|reference_start|>A method for command identification, using modified collision free hashing with addition & rotation iterative hash functions (part 1): This paper proposes a method for identification of a user's fixed string set (which can be a command/instruction set for a terminal or microprocessor). This method is fast and has very small memory requirements, compared to a traditional full string storage and compare method. The user feeds characters into a microcontroller via a keyboard or another microprocessor sends commands and the microcontroller hashes the input in order to identify valid commands, ensuring no collisions between hashed valid strings, while applying further criteria to narrow collisions between random and valid strings. The method proposed narrows the possibility of the latter kind of collision, achieving small code and memory-size utilization and very fast execution. Hashing is achieved using additive & rotating hash functions in an iterative form, which can be very easily implemented in simple microcontrollers and microprocessors. Such hash functions are presented and compared according to their efficiency for a given string/command set, using the program found in the appendix.<|reference_end|>
arxiv
@article{skraparlis2000a, title={A method for command identification, using modified collision free hashing with addition & rotation iterative hash functions (part 1)}, author={Dimitrios Skraparlis}, journal={arXiv preprint arXiv:cs/0005028}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005028}, primaryClass={cs.HC cs.IR} }
skraparlis2000a
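A hedged sketch of the kind of iterative add-and-rotate hash the abstract describes: each input character is added into a small accumulator which is then rotated, and the fixed command set is checked offline to confirm that the valid strings hash without collisions. The accumulator width, rotation amount, and command set below are illustrative choices, not the paper's.

```python
# Sketch: iterative additive-and-rotate hashing over a fixed command set,
# with an offline check that the valid commands hash collision-free.
WIDTH = 16                      # accumulator width in bits (illustrative)
MASK = (1 << WIDTH) - 1
ROT = 3                         # rotate-left amount per character (illustrative)

def rotl(x):
    return ((x << ROT) | (x >> (WIDTH - ROT))) & MASK

def add_rot_hash(s):
    h = 0
    for ch in s:
        h = rotl((h + ord(ch)) & MASK)    # add the character, then rotate
    return h

commands = ["READ", "WRITE", "ERASE", "STATUS", "RESET", "HELP"]
table = {add_rot_hash(c): c for c in commands}
assert len(table) == len(commands), "valid commands must hash collision-free"

def identify(user_input):
    """Return the matching command, or None if the hash is not in the table."""
    return table.get(add_rot_hash(user_input))

print(identify("STATUS"))   # -> STATUS
print(identify("STOP"))     # -> None (unless it happens to collide)
```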
arxiv-669529
cs/0005029
Ranking suspected answers to natural language questions using predictive annotation
<|reference_start|>Ranking suspected answers to natural language questions using predictive annotation: In this paper, we describe a system to rank suspected answers to natural language questions. We process both corpus and query using a new technique, predictive annotation, which augments phrases in texts with labels anticipating their being targets of certain kinds of questions. Given a natural language question, an IR system returns a set of matching passages, which are then analyzed and ranked according to various criteria described in this paper. We provide an evaluation of the techniques based on results from the TREC Q&A evaluation in which our system participated.<|reference_end|>
arxiv
@article{radev2000ranking, title={Ranking suspected answers to natural language questions using predictive annotation}, author={Dragomir R. Radev (University of Michigan), John Prager (IBM TJ Watson Research Center), Valerie Samn (Teachers College, Columbia University)}, journal={ANLP'00, Seattle, WA, May 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005029}, primaryClass={cs.CL} }
radev2000ranking
arxiv-669530
cs/0005030
Axiomatizing Causal Reasoning
<|reference_start|>Axiomatizing Causal Reasoning: Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl. In addition, the complexity of the decision procedures is characterized for all the languages and classes of models considered.<|reference_end|>
arxiv
@article{halpern2000axiomatizing, title={Axiomatizing Causal Reasoning}, author={Joseph Y. Halpern}, journal={Journal of AI Research, Vol. 12, 2000, pp. 317--337}, year={2000}, archivePrefix={arXiv}, eprint={cs/0005030}, primaryClass={cs.AI cs.LO} }
halpern2000axiomatizing
arxiv-669531
cs/0005031
Conditional Plausibility Measures and Bayesian Networks
<|reference_start|>Conditional Plausibility Measures and Bayesian Networks: A general notion of algebraic conditional plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional plausibility measures. It is shown that algebraic conditional plausibility measures can be represented using Bayesian networks.<|reference_end|>
arxiv
@article{halpern2000conditional, title={Conditional Plausibility Measures and Bayesian Networks}, author={Joseph Y. Halpern}, journal={Journal Of Artificial Intelligence Research, Volume 14, pages 359-389, 2001}, year={2000}, doi={10.1613/jair.817}, archivePrefix={arXiv}, eprint={cs/0005031}, primaryClass={cs.AI} }
halpern2000conditional
arxiv-669532
cs/0005032
Computational Complexity and Phase Transitions
<|reference_start|>Computational Complexity and Phase Transitions: Phase transitions in combinatorial problems have recently been shown to be useful in locating "hard" instances of combinatorial problems. The connection between computational complexity and the existence of phase transitions has been addressed in Statistical Mechanics and Artificial Intelligence, but not studied rigorously. We take a step in this direction by investigating the existence of sharp thresholds for the class of generalized satisfiability problems defined by Schaefer. In the case when all constraints are clauses we give a complete characterization of such problems that have a sharp threshold. While NP-completeness does not imply (even in this restricted case) the existence of a sharp threshold, it "almost implies" this, since clausal generalized satisfiability problems that lack a sharp threshold are either 1. polynomial time solvable, or 2. predicted, with success probability lower bounded by some positive constant across the whole probability range, by a single, trivial procedure.<|reference_end|>
arxiv
@article{istrate2000computational, title={Computational Complexity and Phase Transitions}, author={Gabriel Istrate}, journal={arXiv preprint arXiv:cs/0005032}, year={2000}, doi={10.1109/CCC.2000.856740}, archivePrefix={arXiv}, eprint={cs/0005032}, primaryClass={cs.CC cs.DS} }
istrate2000computational
arxiv-669533
cs/0005033
Multimethods and separate static typechecking in a language with C++-like object model
<|reference_start|>Multimethods and separate static typechecking in a language with C++-like object model: The goal of this paper is the description and analysis of multimethod implementation in a new object-oriented, class-based programming language called OOLANG. The implementation of the multimethod typecheck and selection, deeply analyzed in the paper, is performed in two phases in order to allow static typechecking and separate compilation of modules. The first phase is performed at compile time, while the second is executed at link time and does not require the modules' source code. OOLANG has syntax similar to C++; the main differences are the absence of pointers and the realization of polymorphism through subsumption. It adopts the C++ object model and supports multiple inheritance as well as virtual base classes. For this reason, it has been necessary to define techniques for realigning argument and return value addresses when performing multimethod invocations.<|reference_end|>
arxiv
@article{panizzi2000multimethods, title={Multimethods and separate static typechecking in a language with C++-like object model}, author={Emanuele Panizzi, Bernardo Pastorelli}, journal={arXiv preprint arXiv:cs/0005033}, year={2000}, number={UAQ DIE R.99-33}, archivePrefix={arXiv}, eprint={cs/0005033}, primaryClass={cs.PL} }
panizzi2000multimethods
arxiv-669534
cs/0006001
Boosting the Differences: A fast Bayesian classifier neural network
<|reference_start|>Boosting the Differences: A fast Bayesian classifier neural network: A Bayesian classifier that up-weights the differences in the attribute values is discussed. Using four popular datasets from the UCI repository, some interesting features of the network are illustrated. The network is suitable for classification problems.<|reference_end|>
arxiv
@article{philip2000boosting, title={Boosting the Differences: A fast Bayesian classifier neural network}, author={Ninan Sajeeth Philip, K. Babu Joseph}, journal={arXiv preprint arXiv:cs/0006001}, year={2000}, number={IDA2000}, archivePrefix={arXiv}, eprint={cs/0006001}, primaryClass={cs.CV} }
philip2000boosting
arxiv-669535
cs/0006002
Distorted English Alphabet Identification : An application of Difference Boosting Algorithm
<|reference_start|>Distorted English Alphabet Identification : An application of Difference Boosting Algorithm: The difference-boosting algorithm is used on letters dataset from the UCI repository to classify distorted raster images of English alphabets. In contrast to rather complex networks, the difference-boosting is found to produce comparable or better classification efficiency on this complex problem.<|reference_end|>
arxiv
@article{philip2000distorted, title={Distorted English Alphabet Identification : An application of Difference Boosting Algorithm}, author={Ninan Sajeeth Philip, K. Babu Joseph}, journal={arXiv preprint arXiv:cs/0006002}, year={2000}, number={ADCOM2000}, archivePrefix={arXiv}, eprint={cs/0006002}, primaryClass={cs.CV} }
philip2000distorted
arxiv-669536
cs/0006003
Exploiting Diversity in Natural Language Processing: Combining Parsers
<|reference_start|>Exploiting Diversity in Natural Language Processing: Combining Parsers: Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy. Two general approaches are presented and two combination techniques are described for each approach. Both parametric and non-parametric models are explored. The resulting parsers surpass the best previously published performance results for the Penn Treebank.<|reference_end|>
arxiv
@article{henderson2000exploiting, title={Exploiting Diversity in Natural Language Processing: Combining Parsers}, author={John C. Henderson and Eric Brill}, journal={Proceedings of the Fourth Conference on Empirical Methods in Natural Language Processing (EMNLP-99), pages 187-194. College Park, Maryland, USA. June, 1999}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006003}, primaryClass={cs.CL} }
henderson2000exploiting
arxiv-669537
cs/0006004
A Note on "Optimal Static Load Balancing in Distributed Computer Systems"
<|reference_start|>A Note on "Optimal Static Load Balancing in Distributed Computer Systems": The problem of minimizing mean response time of generic jobs submitted to a heterogeneous distributed computer system is considered in this paper. A static load balancing strategy, in which the decision to redistribute loads does not depend on the state of the system, is used for this purpose. The article is closely related to a previous article on the same topic. The present article points out a number of inconsistencies in the previous article, provides a new formulation, and discusses the impact of new findings, based on the improved formulation, on the results of the previous article.<|reference_end|>
arxiv
@article{mondal2000a, title={A Note on "Optimal Static Load Balancing in Distributed Computer Systems"}, author={S. A. Mondal}, journal={arXiv preprint arXiv:cs/0006004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006004}, primaryClass={cs.DC} }
mondal2000a
arxiv-669538
cs/0006005
Novelty Detection for Robot Neotaxis
<|reference_start|>Novelty Detection for Robot Neotaxis: The ability of a robot to detect and respond to changes in its environment is potentially very useful, as it draws attention to new and potentially important features. We describe an algorithm for learning to filter out previously experienced stimuli to allow further concentration on novel features. The algorithm uses a model of habituation, a biological process which causes a decrement in response with repeated presentation. Experiments with a mobile robot are presented in which the robot detects the most novel stimulus and turns towards it (`neotaxis').<|reference_end|>
arxiv
@article{marsland2000novelty, title={Novelty Detection for Robot Neotaxis}, author={Stephen Marsland, Ulrich Nehmzow and Jonathan Shapiro}, journal={arXiv preprint arXiv:cs/0006005}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006005}, primaryClass={cs.RO cs.NE nlin.AO} }
marsland2000novelty
arxiv-669539
cs/0006006
A Real-Time Novelty Detector for a Mobile Robot
<|reference_start|>A Real-Time Novelty Detector for a Mobile Robot: Recognising new or unusual features of an environment is an ability which is potentially very useful to a robot. This paper demonstrates an algorithm which achieves this task by learning an internal representation of `normality' from sonar scans taken as a robot explores the environment. This model of the environment is used to evaluate the novelty of each sonar scan presented to it with relation to the model. Stimuli which have not been seen before, and therefore have more novelty, are highlighted by the filter. The filter has the ability to forget about features which have been learned, so that stimuli which are seen only rarely recover their response over time. A number of robot experiments are presented which demonstrate the operation of the filter.<|reference_end|>
arxiv
@article{marsland2000a, title={A Real-Time Novelty Detector for a Mobile Robot}, author={Stephen Marsland, Ulrich Nehmzow and Jonathan Shapiro}, journal={arXiv preprint arXiv:cs/0006006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006006}, primaryClass={cs.RO cs.NE} }
marsland2000a
arxiv-669540
cs/0006007
Novelty Detection on a Mobile Robot Using Habituation
<|reference_start|>Novelty Detection on a Mobile Robot Using Habituation: In this paper a novelty filter is introduced which allows a robot operating in an unstructured environment to produce a self-organised model of its surroundings and to detect deviations from the learned model. The environment is perceived using the robot's 16 sonar sensors. The algorithm produces a novelty measure for each sensor scan relative to the model it has learned. This means that it highlights stimuli which have not been previously experienced. The novelty filter proposed uses a model of habituation. Habituation is a decrement in behavioural response when a stimulus is presented repeatedly. Robot experiments are presented which demonstrate the reliable operation of the filter in a number of environments.<|reference_end|>
arxiv
@article{marsland2000novelty, title={Novelty Detection on a Mobile Robot Using Habituation}, author={Stephen Marsland, Ulrich Nehmzow and Jonathan Shapiro}, journal={arXiv preprint arXiv:cs/0006007}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006007}, primaryClass={cs.RO cs.NE nlin.AO} }
marsland2000novelty
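The habituation mechanism that the novelty-detection records above rely on can be sketched generically: each stored stimulus keeps a response value that is decremented when the stimulus recurs and recovers while it is absent, so only stimuli with high remaining response are flagged as novel. The update rule, thresholds, and toy "sensor readings" below are illustrative stand-ins, not the exact habituation model or sonar clustering used in the papers.

```python
# Sketch: habituation-based novelty filter.
# Responses decrement with repeated presentation and recover in a stimulus's absence.
DECAY = 0.6        # multiplicative decrement per repeated presentation (illustrative)
RECOVERY = 0.05    # additive recovery per step toward the initial response
INITIAL = 1.0
NOVELTY_THRESHOLD = 0.5

responses = {}     # stimulus -> current response strength

def present(stimulus):
    """Let all responses recover, habituate the presented stimulus, report novelty."""
    for s in responses:
        responses[s] = min(INITIAL, responses[s] + RECOVERY)   # forgetting/recovery
    r = responses.get(stimulus, INITIAL)
    responses[stimulus] = r * DECAY                            # habituate this one
    return r > NOVELTY_THRESHOLD                               # True = novel

# A toy stream of discretised sensor readings: a wall, repeatedly, then a doorway.
stream = ["wall"] * 6 + ["doorway"] + ["wall"] * 3 + ["doorway"]
for step, reading in enumerate(stream):
    print(step, reading, "NOVEL" if present(reading) else "familiar")
```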
arxiv-669541
cs/0006008
Performing work efficiently in the presence of faults
<|reference_start|>Performing work efficiently in the presence of faults: We consider a system of t synchronous processes that communicate only by sending messages to one another, and that together must perform $n$ independent units of work. Processes may fail by crashing; we want to guarantee that in every execution of the protocol in which at least one process survives, all n units of work will be performed. We consider three parameters: the number of messages sent, the total number of units of work performed (including multiplicities), and time. We present three protocols for solving the problem. All three are work-optimal, doing O(n+t) work. The first has moderate costs in the remaining two parameters, sending O(t\sqrt{t}) messages, and taking O(n+t) time. This protocol can be easily modified to run in any completely asynchronous system equipped with a failure detection mechanism. The second sends only O(t log{t}) messages, but its running time is large (exponential in n and t). The third is essentially time-optimal in the (usual) case in which there are no failures, and its time complexity degrades gracefully as the number of failures increases.<|reference_end|>
arxiv
@article{dwork2000performing, title={Performing work efficiently in the presence of faults}, author={Cynthia Dwork, Joseph Y. Halpern, and O. Waarts}, journal={SIAM Journal on Computing 27:5, 1998, pp. 1457--1491}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006008}, primaryClass={cs.DC} }
dwork2000performing
arxiv-669542
cs/0006009
Knowledge and common knowledge in a distributed environment
<|reference_start|>Knowledge and common knowledge in a distributed environment: Reasoning about knowledge seems to play a fundamental role in distributed systems. Indeed, such reasoning is a central part of the informal intuitive arguments used in the design of distributed protocols. Communication in a distributed system can be viewed as the act of transforming the system's state of knowledge. This paper presents a general framework for formalizing and reasoning about knowledge in distributed systems. We argue that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols. In particular, distributed knowledge corresponds to knowledge that is ``distributed'' among the members of the group, while common knowledge corresponds to a fact being ``publicly known''. The relationship between common knowledge and a variety of desirable actions in a distributed system is illustrated. Furthermore, it is shown that, formally speaking, in practical systems common knowledge cannot be attained. A number of weaker variants of common knowledge that are attainable in many cases of interest are introduced and investigated.<|reference_end|>
arxiv
@article{halpern2000knowledge, title={Knowledge and common knowledge in a distributed environment}, author={Joseph Y. Halpern and Yoram Moses}, journal={Journal of the ACM 37:3, 1990, pp. 549--587}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006009}, primaryClass={cs.DC cs.AI} }
halpern2000knowledge
arxiv-669543
cs/0006010
Light Affine Logic (Proof Nets, Programming Notation, P-Time Correctness and Completeness)
<|reference_start|>Light Affine Logic (Proof Nets, Programming Notation, P-Time Correctness and Completeness): This paper is a structured introduction to Light Affine Logic, and to its intuitionistic fragment. Light Affine Logic has a polynomially costing cut elimination (P-Time correctness), and encodes all P-Time Turing machines (P-Time completeness). P-Time correctness is proved by introducing the Proof nets for Intuitionistic Light Affine Logic. P-Time completeness is demonstrated in full details thanks to a very compact program notation. On one side, the proof of P-Time correctness describes how the complexity of cut elimination is controlled, thanks to a suitable cut elimination strategy that exploits structural properties of the Proof nets. This allows to have a good catch on the meaning of the ``paragraph'' modality, which is a peculiarity of light logics. On the other side, the proof of P-Time completeness, together with a lot of programming examples, gives a flavor of the non trivial task of programming with resource limitations, using Intuitionistic Light Affine Logic derivations as programs.<|reference_end|>
arxiv
@article{asperti2000light, title={Light Affine Logic (Proof Nets, Programming Notation, P-Time Correctness and Completeness)}, author={Andrea Asperti and Luca Roversi}, journal={arXiv preprint arXiv:cs/0006010}, year={2000}, number={RT-54-2000}, archivePrefix={arXiv}, eprint={cs/0006010}, primaryClass={cs.LO} }
asperti2000light
arxiv-669544
cs/0006011
Bagging and Boosting a Treebank Parser
<|reference_start|>Bagging and Boosting a Treebank Parser: Bagging and boosting, two effective machine learning techniques, are applied to natural language parsing. Experiments using these techniques with a trainable statistical parser are described. The best resulting system provides roughly as large of a gain in F-measure as doubling the corpus size. Error analysis of the result of the boosting technique reveals some inconsistent annotations in the Penn Treebank, suggesting a semi-automatic method for finding inconsistent treebank annotations.<|reference_end|>
arxiv
@article{henderson2000bagging, title={Bagging and Boosting a Treebank Parser}, author={John C. Henderson and Eric Brill}, journal={Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-2000), pages 34-41}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006011}, primaryClass={cs.CL} }
henderson2000bagging
arxiv-669545
cs/0006012
Exploiting Diversity for Natural Language Parsing
<|reference_start|>Exploiting Diversity for Natural Language Parsing: The popularity of applying machine learning methods to computational linguistics problems has produced a large supply of trainable natural language processing systems. Most problems of interest have an array of off-the-shelf products or downloadable code implementing solutions using various techniques. Where these solutions are developed independently, it is observed that their errors tend to be independently distributed. This thesis is concerned with approaches for capitalizing on this situation in a sample problem domain, Penn Treebank-style parsing. The machine learning community provides techniques for combining outputs of classifiers, but parser output is more structured and interdependent than classifications. To address this discrepancy, two novel strategies for combining parsers are used: learning to control a switch between parsers and constructing a hybrid parse from multiple parsers' outputs. Off-the-shelf parsers are not developed with an intention to perform well in a collaborative ensemble. Two techniques are presented for producing an ensemble of parsers that collaborate. All of the ensemble members are created using the same underlying parser induction algorithm, and the method for producing complementary parsers is only loosely constrained by that chosen algorithm.<|reference_end|>
arxiv
@article{henderson2000exploiting, title={Exploiting Diversity for Natural Language Parsing}, author={John C. Henderson}, journal={arXiv preprint arXiv:cs/0006012}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006012}, primaryClass={cs.CL} }
henderson2000exploiting
arxiv-669546
cs/0006013
An evaluation of Naive Bayesian anti-spam filtering
<|reference_start|>An evaluation of Naive Bayesian anti-spam filtering: It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail ("spam"). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter's performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.<|reference_end|>
arxiv
@article{androutsopoulos2000an, title={An evaluation of Naive Bayesian anti-spam filtering}, author={Ion Androutsopoulos, John Koutsias, Konstantinos V. Chandrinos, George Paliouras and Constantine D. Spyropoulos}, journal={Proceedings of the workshop on Machine Learning in the New Information Age, G. Potamias, V. Moustakis and M. van Someren (eds.), 11th European Conference on Machine Learning, Barcelona, Spain, pp. 9-17, 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006013}, primaryClass={cs.CL cs.AI} }
androutsopoulos2000an
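The core of the filter evaluated above is a Naive Bayes classifier combined with a cost-sensitive decision threshold: misfiling legitimate mail is treated as far costlier than letting spam through. The following is a minimal sketch under simplifying assumptions (Bernoulli word-presence features, Laplace smoothing, only present words scored, and a made-up lambda value); it is not the paper's exact model or corpus.

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal Bernoulli-style Naive Bayes filter with a cost-sensitive threshold.

    A message is flagged as spam only if P(spam | x) / P(legit | x) exceeds
    lambda_cost. Only words present in the message are scored, a simplification
    of the full Bernoulli model, which also scores absent attributes.
    """

    def __init__(self, lambda_cost=9.0):
        self.lambda_cost = lambda_cost
        self.word_counts = {"spam": Counter(), "legit": Counter()}
        self.doc_counts = {"spam": 0, "legit": 0}

    def train(self, messages):
        for words, label in messages:
            self.doc_counts[label] += 1
            for w in set(words):                      # presence/absence per message
                self.word_counts[label][w] += 1

    def _log_score(self, words, label):
        n = self.doc_counts[label]
        score = math.log(n / sum(self.doc_counts.values()))              # class prior
        for w in set(words):
            score += math.log((self.word_counts[label][w] + 1) / (n + 2))  # Laplace smoothing
        return score

    def is_spam(self, words):
        log_ratio = self._log_score(words, "spam") - self._log_score(words, "legit")
        return log_ratio > math.log(self.lambda_cost)

nb = NaiveBayesSpamFilter(lambda_cost=9.0)
nb.train([
    ("buy cheap pills now".split(), "spam"),
    ("limited offer buy now".split(), "spam"),
    ("meeting agenda attached".split(), "legit"),
    ("lunch tomorrow at noon".split(), "legit"),
])
print(nb.is_spam("buy pills now".split()))              # True
print(nb.is_spam("agenda for lunch meeting".split()))   # False
```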
arxiv-669547
cs/0006014
Solaris System Resource Manager: All I Ever Wanted Was My Unfair Advantage (And Why You Can't Have It!)
<|reference_start|>Solaris System Resource Manager: All I Ever Wanted Was My Unfair Advantage (And Why You Can't Have It!): Traditional UNIX time-share schedulers attempt to be fair to all users by employing a round-robin style algorithm for allocating CPU time. Unfortunately, a loophole exists whereby the scheduler can be biased in favor of a greedy user running many short CPU-time processes. This loophole is not a defect but an intrinsic property of the round-robin scheduler that ensures responsiveness to the short CPU demands associated with multiple interactive users. A new generation of UNIX system resource management software constrains the scheduler to be equitable to all users regardless of the number of processes each may be running. This "fair-share" scheduling draws on the concept of pro rating resource "shares" across users and groups and then dynamically adjusting CPU usage to meet those share proportions. The simple notion of statically allocating these shares, however, belies the potential consequences for performance as measured by user response time and service level targets. We demonstrate this point by modeling several simple share allocation scenarios and analyzing the corresponding performance effects. A brief comparison of commercial system resource management implementations from HP, IBM, and SUN is also given.<|reference_end|>
arxiv
@article{gunther2000solaris, title={Solaris System Resource Manager: All I Ever Wanted Was My Unfair Advantage (And Why You Can't Have It!)}, author={Neil J. Gunther}, journal={Proc. CMG'99 Conf. p.194-205}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006014}, primaryClass={cs.PF cs.OS} }
gunther2000solaris
arxiv-669548
cs/0006015
UNIX Resource Managers: Capacity Planning and Resource Issues
<|reference_start|>UNIX Resource Managers: Capacity Planning and Resource Issues: The latest implementations of commercial UNIX to offer mainframe style capacity management on enterprise servers include: AIX Workload Manager (WLM), HP-UX Process Resource Manager (PRM), Solaris Resource Manager (SRM), as well as SGI and Compaq. The ability to manage server capacity is achieved by making significant modifications to the standard UNIX operating system so that processes are inherently tied to specific users. Those users, in turn, are granted only a certain fraction of system resources. Resource usage is monitored and compared with each user's grant to ensure that the assigned entitlement constraints are met. In this paper, we begin by clearing up some of the confusion that has surrounded the motivation and the terminology behind the new technology. The common theme across each of the commercial implementations is the introduction of the fair-share scheduler. After reviewing some potential performance pitfalls, we present capacity planning guidelines for migrating to automated UNIX resource management.<|reference_end|>
arxiv
@article{gunther2000unix, title={UNIX Resource Managers: Capacity Planning and Resource Issues}, author={Neil J. Gunther}, journal={arXiv preprint arXiv:cs/0006015}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006015}, primaryClass={cs.PF cs.OS} }
gunther2000unix
arxiv-669549
cs/0006016
The X-Files: Investigating Alien Performance in a Thin-client World
<|reference_start|>The X-Files: Investigating Alien Performance in a Thin-client World: Many scientific applications use the X11 window environment; an open source windows GUI standard employing a client/server architecture. X11 promotes: distributed computing, thin-client functionality, cheap desktop displays, compatibility with heterogeneous servers, remote services and administration, and greater maturity than newer web technologies. This paper details the author's investigations into close encounters with alien performance in X11-based seismic applications running on a 200-node cluster, backed by 2 TB of mass storage. End-users cited two significant UFOs (Unidentified Faulty Operations) i) long application launch times and ii) poor interactive response times. The paper is divided into three major sections describing Close Encounters of the 1st Kind: citings of UFO experiences, the 2nd Kind: recording evidence of a UFO, and the 3rd Kind: contact and analysis. UFOs do exist and this investigation presents a real case study for evaluating workload analysis and other diagnostic tools.<|reference_end|>
arxiv
@article{gunther2000the, title={The X-Files: Investigating Alien Performance in a Thin-client World}, author={Neil J. Gunther}, journal={Proc. Hiper'99 Vol.1, p.156}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006016}, primaryClass={cs.PF cs.DC} }
gunther2000the
arxiv-669550
cs/0006017
Turning Speech Into Scripts
<|reference_start|>Turning Speech Into Scripts: We describe an architecture for implementing spoken natural language dialogue interfaces to semi-autonomous systems, in which the central idea is to transform the input speech signal through successive levels of representation corresponding roughly to linguistic knowledge, dialogue knowledge, and domain knowledge. The final representation is an executable program in a simple scripting language equivalent to a subset of Cshell. At each stage of the translation process, an input is transformed into an output, producing as a byproduct a "meta-output" which describes the nature of the transformation performed. We show how consistent use of the output/meta-output distinction permits a simple and perspicuous treatment of apparently diverse topics including resolution of pronouns, correction of user misconceptions, and optimization of scripts. The methods described have been concretely realized in a prototype speech interface to a simulation of the Personal Satellite Assistant.<|reference_end|>
arxiv
@article{rayner2000turning, title={Turning Speech Into Scripts}, author={Manny Rayner, Beth Ann Hockey, Frankie James}, journal={AAAI Spring Symposium on Natural Dialogues with Practical Robotic Devices, March 20-22, 2000. Stanford, CA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006017}, primaryClass={cs.CL} }
rayner2000turning
arxiv-669551
cs/0006018
Accuracy, Coverage, and Speed: What Do They Mean to Users?
<|reference_start|>Accuracy, Coverage, and Speed: What Do They Mean to Users?: Speech is becoming increasingly popular as an interface modality, especially in hands- and eyes-busy situations where the use of a keyboard or mouse is difficult. However, despite the fact that many have hailed speech as being inherently usable (since everyone already knows how to talk), most users of speech input are left feeling disappointed by the quality of the interaction. Clearly, there is much work to be done on the design of usable spoken interfaces. We believe that there are two major problems in the design of speech interfaces, namely, (a) the people who are currently working on the design of speech interfaces are, for the most part, not interface designers and therefore do not have as much experience with usability issues as we in the CHI community do, and (b) speech, as an interface modality, has vastly different properties than other modalities, and therefore requires different usability measures.<|reference_end|>
arxiv
@article{james2000accuracy, title={Accuracy, Coverage, and Speed: What Do They Mean to Users?}, author={Frankie James, Manny Rayner, Beth Ann Hockey}, journal={arXiv preprint arXiv:cs/0006018}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006018}, primaryClass={cs.CL cs.HC} }
james2000accuracy
arxiv-669552
cs/0006019
A Compact Architecture for Dialogue Management Based on Scripts and Meta-Outputs
<|reference_start|>A Compact Architecture for Dialogue Management Based on Scripts and Meta-Outputs: We describe an architecture for spoken dialogue interfaces to semi-autonomous systems that transforms speech signals through successive representations of linguistic, dialogue, and domain knowledge. Each step produces an output, and a meta-output describing the transformation, with an executable program in a simple scripting language as the final result. The output/meta-output distinction permits perspicuous treatment of diverse tasks such as resolving pronouns, correcting user misconceptions, and optimizing scripts.<|reference_end|>
arxiv
@article{rayner2000a, title={A Compact Architecture for Dialogue Management Based on Scripts and Meta-Outputs}, author={Manny Rayner, Beth Ann Hockey, Frankie James}, journal={Language Technology Joint Conference ANLP-NAACL 2000. 29 April - 4 May 2000, Seattle, WA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006019}, primaryClass={cs.CL} }
rayner2000a
arxiv-669553
cs/0006020
A Comparison of the XTAG and CLE Grammars for English
<|reference_start|>A Comparison of the XTAG and CLE Grammars for English: When people develop something intended as a large broad-coverage grammar, they usually have a more specific goal in mind. Sometimes this goal is covering a corpus; sometimes the developers have theoretical ideas they wish to investigate; most often, work is driven by a combination of these two main types of goal. What tends to happen after a while is that the community of people working with the grammar starts thinking of some phenomena as ``central'', and makes serious efforts to deal with them; other phenomena are labelled ``marginal'', and ignored. Before long, the distinction between ``central'' and ``marginal'' becomes so ingrained that it is automatic, and people virtually stop thinking about the ``marginal'' phenomena. In practice, the only way to bring the marginal things back into focus is to look at what other people are doing and compare it with one's own work. In this paper, we will take two large grammars, XTAG and the CLE, and examine each of them from the other's point of view. We will find in both cases not only that important things are missing, but that the perspective offered by the other grammar suggests simple and practical ways of filling in the holes. It turns out that there is a pleasing symmetry to the picture. XTAG has a very good treatment of complement structure, which the CLE to some extent lacks; conversely, the CLE offers a powerful and general account of adjuncts, which the XTAG grammar does not fully duplicate. If we examine the way in which each grammar does the thing it is good at, we find that the relevant methods are quite easy to port to the other framework, and in fact only involve generalization and systematization of existing mechanisms.<|reference_end|>
arxiv
@article{hockey2000a, title={A Comparison of the XTAG and CLE Grammars for English}, author={Beth Ann Hockey, Manny Rayner, Frankie James}, journal={arXiv preprint arXiv:cs/0006020}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006020}, primaryClass={cs.CL} }
hockey2000a
arxiv-669554
cs/0006021
Compiling Language Models from a Linguistically Motivated Unification Grammar
<|reference_start|>Compiling Language Models from a Linguistically Motivated Unification Grammar: Systems now exist which are able to compile unification grammars into language models that can be included in a speech recognizer, but it is so far unclear whether non-trivial linguistically principled grammars can be used for this purpose. We describe a series of experiments which investigate the question empirically, by incrementally constructing a grammar and discovering what problems emerge when successively larger versions are compiled into finite state graph representations and used as language models for a medium-vocabulary recognition task.<|reference_end|>
arxiv
@article{rayner2000compiling, title={Compiling Language Models from a Linguistically Motivated Unification Grammar}, author={Manny Rayner, Beth Ann Hockey, Frankie James, Elizabeth O. Bratt, Sharon Goldwater, Mark Gawron}, journal={arXiv preprint arXiv:cs/0006021}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006021}, primaryClass={cs.CL} }
rayner2000compiling
arxiv-669555
cs/0006022
Multicast-based Architecture for IP Mobility: Simulation Analysis and Comparison with Basic Mobile IP
<|reference_start|>Multicast-based Architecture for IP Mobility: Simulation Analysis and Comparison with Basic Mobile IP: With the introduction of a newer generation of wireless devices and technologies, the need for an efficient architecture for IP mobility is becoming more apparent. Several architectures have been proposed to support IP mobility. Most studies, however, show that current architectures, in general, fall short of satisfying the performance requirements for wireless applications, mainly audio. Other studies have shown performance improvement by using multicast to reduce latency and packet loss during handoff. In this study, we propose a multicast-based architecture to support IP mobility. We evaluate our approach through simulation, and we compare it to mainstream approaches for IP mobility, mainly, the Mobile IP protocol. Comparison is performed according to the required performance criteria, such as smooth handoff and efficient routing. Our simulation results show significant improvement for the proposed architecture. On average, basic Mobile IP consumes almost twice as much network bandwidth, and experiences more than twice as much end-to-end and handoff delays, as does our proposed architecture. Furthermore, we propose an extension to Mobile IP to support our architecture with minimal modification.<|reference_end|>
arxiv
@article{helmy2000multicast-based, title={Multicast-based Architecture for IP Mobility: Simulation Analysis and Comparison with Basic Mobile IP}, author={Ahmed Helmy (University of Southern California)}, journal={arXiv preprint arXiv:cs/0006022}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006022}, primaryClass={cs.NI cs.PF} }
helmy2000multicast-based
arxiv-669556
cs/0006023
Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech
<|reference_start|>Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech: We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.<|reference_end|>
arxiv
@article{stolcke2000dialogue, title={Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech}, author={A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, M. Meteer}, journal={Computational Linguistics 26(3), 339-373, September 2000}, year={2000}, doi={10.1162/089120100561737}, archivePrefix={arXiv}, eprint={cs/0006023}, primaryClass={cs.CL} }
stolcke2000dialogue
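The discourse model in the record above is a hidden Markov model: hidden states are dialogue acts, observations are the lexical and prosodic evidence for each utterance, and a dialogue-act n-gram constrains the state sequence. A small Viterbi decoder over such a model might look like the sketch below; the three acts and all probabilities are invented for illustration, and the per-utterance likelihoods stand in for the paper's word n-gram and prosodic decision-tree scores.

```python
import math

def viterbi_dialogue_acts(likelihoods, bigram, prior):
    """Most probable dialogue-act sequence under an HMM.

    likelihoods[t][d] : log P(evidence for utterance t | dialogue act d)
    bigram[p][d]      : log P(d | previous act p), the dialogue-act "grammar"
    prior[d]          : log P(d) for the first utterance
    """
    acts = list(prior)
    best = [{d: prior[d] + likelihoods[0][d] for d in acts}]
    back = []
    for t in range(1, len(likelihoods)):
        scores, pointers = {}, {}
        for d in acts:
            prev, score = max(((p, best[-1][p] + bigram[p][d]) for p in acts),
                              key=lambda item: item[1])
            scores[d] = score + likelihoods[t][d]
            pointers[d] = prev
        best.append(scores)
        back.append(pointers)
    path = [max(best[-1], key=best[-1].get)]      # best final act
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))

acts = ["Statement", "Question", "Backchannel"]
prior = {a: math.log(p) for a, p in zip(acts, [0.6, 0.3, 0.1])}
bigram = {
    "Statement":   dict(zip(acts, map(math.log, [0.5, 0.2, 0.3]))),
    "Question":    dict(zip(acts, map(math.log, [0.7, 0.1, 0.2]))),
    "Backchannel": dict(zip(acts, map(math.log, [0.6, 0.3, 0.1]))),
}
likelihoods = [                                   # e.g. from word n-grams + prosody trees
    dict(zip(acts, map(math.log, [0.7, 0.2, 0.1]))),
    dict(zip(acts, map(math.log, [0.2, 0.7, 0.1]))),
    dict(zip(acts, map(math.log, [0.3, 0.1, 0.6]))),
]
print(viterbi_dialogue_acts(likelihoods, bigram, prior))
```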
arxiv-669557
cs/0006024
Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?
<|reference_start|>Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?: Identifying whether an utterance is a statement, question, greeting, and so forth is integral to effective automatic understanding of natural dialog. Little is known, however, about how such dialog acts (DAs) can be automatically classified in truly natural conversation. This study asks whether current approaches, which use mainly word information, could be improved by adding prosodic information. The study is based on more than 1000 conversations from the Switchboard corpus. DAs were hand-annotated, and prosodic features (duration, pause, F0, energy, and speaking rate) were automatically extracted for each DA. In training, decision trees based on these features were inferred; trees were then applied to unseen test data to evaluate performance. Performance was evaluated for prosody models alone, and after combining the prosody models with word information -- either from true words or from the output of an automatic speech recognizer. For an overall classification task, as well as three subtasks, prosody made significant contributions to classification. Feature-specific analyses further revealed that although canonical features (such as F0 for questions) were important, less obvious features could compensate if canonical features were removed. Finally, in each task, integrating the prosodic model with a DA-specific statistical language model improved performance over that of the language model alone, especially for the case of recognized words. Results suggest that DAs are redundantly marked in natural conversation, and that a variety of automatically extractable prosodic features could aid dialog processing in speech applications.<|reference_end|>
arxiv
@article{shriberg2000can, title={Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?}, author={E. Shriberg, R. Bates, A. Stolcke, P. Taylor, D. Jurafsky, K. Ries, N. Coccaro, R. Martin, M. Meteer, C. Van Ess-Dykema}, journal={Language and Speech 41(3-4), 439-487, 1998}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006024}, primaryClass={cs.CL} }
shriberg2000can
arxiv-669558
cs/0006025
Entropy-based Pruning of Backoff Language Models
<|reference_start|>Entropy-based Pruning of Backoff Language Models: A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This leads to a simple pruning criterion whereby all N-grams that change perplexity by less than a threshold are removed from the model. Experiments show that a production-quality Hub4 LM can be reduced to 26% of its original size without increasing recognition error. We also compare the approach to a heuristic pruning criterion by Seymore and Rosenfeld (1996), and show that their approach can be interpreted as an approximation to the relative entropy criterion. Experimentally, both approaches select similar sets of N-grams (about 85% overlap), with the exact relative entropy criterion giving marginally better performance.<|reference_end|>
arxiv
@article{stolcke2000entropy-based, title={Entropy-based Pruning of Backoff Language Models}, author={A. Stolcke}, journal={Proceedings DARPA Broadcast News Transcription and Understanding Workshop, pp. 270-274, Lansdowne, VA, 1998}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006025}, primaryClass={cs.CL} }
stolcke2000entropy-based
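The criterion above scores each explicit N-gram by the relative entropy (equivalently, the training-set perplexity change) that removing it would cause, and drops everything below a threshold. The sketch below works this out for a toy bigram model with a fixed unigram backoff; all probabilities are invented, and the paper's method additionally handles higher orders and the perplexity-change formulation.

```python
import math

def backoff_weight(history, bigrams, unigrams):
    """alpha(h): renormalizes the unigram mass left for continuations with no explicit bigram."""
    p_seen = sum(p for (h, _), p in bigrams.items() if h == history)
    uni_seen = sum(unigrams[w] for (h, w) in bigrams if h == history)
    return (1.0 - p_seen) / (1.0 - uni_seen)

def pruning_score(h, w, bigrams, unigrams, p_history):
    """Relative entropy D(p || p') incurred by pruning the single explicit bigram (h, w)."""
    alpha_old = backoff_weight(h, bigrams, unigrams)
    pruned = {k: v for k, v in bigrams.items() if k != (h, w)}
    alpha_new = backoff_weight(h, pruned, unigrams)
    p_old = bigrams[(h, w)]
    p_new = alpha_new * unigrams[w]                       # (h, w) is now served by backoff
    p_seen = sum(p for (hh, _), p in bigrams.items() if hh == h)
    backoff_mass = 1.0 - p_seen                           # probability already routed via backoff
    return p_history[h] * (p_old * (math.log(p_old) - math.log(p_new))
                           + (math.log(alpha_old) - math.log(alpha_new)) * backoff_mass)

def prune(bigrams, unigrams, p_history, threshold):
    """Score every explicit bigram on the original model and drop those below threshold."""
    return {k: v for k, v in bigrams.items()
            if pruning_score(k[0], k[1], bigrams, unigrams, p_history) >= threshold}

unigrams = {"a": 0.5, "b": 0.3, "c": 0.2}                        # invented toy distribution
bigrams = {("a", "a"): 0.4, ("a", "b"): 0.4, ("b", "a"): 0.6}    # explicit p(w | h)
p_history = {"a": 0.5, "b": 0.3}                                 # marginal probability of each history
print(sorted(prune(bigrams, unigrams, p_history, threshold=5e-3)))
```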
arxiv-669559
cs/0006026
Online Correction of Dispersion Error in 2D Waveguide Meshes
<|reference_start|>Online Correction of Dispersion Error in 2D Waveguide Meshes: An elastic ideal 2D propagation medium, i.e., a membrane, can be simulated by models discretizing the wave equation on the time-space grid (finite difference methods), or locally discretizing the solution of the wave equation (waveguide meshes). The two approaches provide equivalent computational structures, and introduce numerical dispersion that induces a misalignment of the modes from their theoretical positions. Prior literature shows that dispersion can be arbitrarily reduced by oversizing and oversampling the mesh, or by adopting offline warping techniques. In this paper we propose to reduce numerical dispersion by embedding warping elements, i.e., properly tuned allpass filters, in the structure. The resulting model exhibits a significant reduction in dispersion, and requires fewer computational resources than a regular mesh structure having comparable accuracy.<|reference_end|>
arxiv
@article{fontana2000online, title={Online Correction of Dispersion Error in 2D Waveguide Meshes}, author={Federico Fontana and Davide Rocchesso}, journal={arXiv preprint arXiv:cs/0006026}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006026}, primaryClass={cs.SD cs.NA math.NA} }
fontana2000online
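The warping elements mentioned above are tuned first-order allpass filters: they pass every frequency at unit gain but with a frequency-dependent delay, so substituting them for the mesh's unit delays shifts the resonant modes back toward their theoretical positions. A minimal difference-equation sketch follows; the coefficient value is invented, and only a single filter is shown rather than a full mesh.

```python
class FirstOrderAllpass:
    """First-order allpass section: H(z) = (a + z^-1) / (1 + a z^-1)."""

    def __init__(self, a):
        self.a = a        # warping coefficient; its value sets the frequency warping
        self.x1 = 0.0     # previous input sample
        self.y1 = 0.0     # previous output sample

    def tick(self, x):
        y = self.a * x + self.x1 - self.a * self.y1
        self.x1, self.y1 = x, y
        return y

# Impulse response of one warping element (it would replace a unit delay on a mesh branch).
ap = FirstOrderAllpass(a=0.3)
print([round(ap.tick(x), 4) for x in [1.0] + [0.0] * 7])
```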
arxiv-669560
cs/0006027
Verbal Interactions in Virtual Worlds
<|reference_start|>Verbal Interactions in Virtual Worlds: We first discuss the respective advantages of language interaction in virtual worlds and of using 3D images in dialogue systems. Then, we describe an example of a verbal interaction system in virtual reality: Ulysse. Ulysse is a conversational agent that helps a user navigate in virtual worlds. It has been designed to be embedded in the representation of a participant of a virtual conference and it responds positively to motion orders. Ulysse navigates the user's viewpoint on his/her behalf in the virtual world. On tests we carried out, we discovered that users, novices as well as experienced ones, have difficulties moving in a 3D environment. Agents such as Ulysse enable a user to carry out navigation motions that would have been impossible with classical interaction devices. From the whole Ulysse system, we have stripped off a skeleton architecture that we have ported to VRML, Java, and Prolog. We hope this skeleton helps the design of language applications in virtual worlds.<|reference_end|>
arxiv
@article{nugues2000verbal, title={Verbal Interactions in Virtual Worlds}, author={Pierre Nugues}, journal={arXiv preprint arXiv:cs/0006027}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006027}, primaryClass={cs.CL cs.HC} }
nugues2000verbal
arxiv-669561
cs/0006028
Trainable Methods for Surface Natural Language Generation
<|reference_start|>Trainable Methods for Surface Natural Language Generation: We present three systems for surface natural language generation that are trainable from annotated corpora. The first two systems, called NLG1 and NLG2, require a corpus marked only with domain-specific semantic attributes, while the last system, called NLG3, requires a corpus marked with both semantic attributes and syntactic dependency information. All systems attempt to produce a grammatical natural language phrase from a domain-specific semantic representation. NLG1 serves as a baseline system and uses phrase frequencies to generate a whole phrase in one step, while NLG2 and NLG3 use maximum entropy probability models to individually generate each word in the phrase. The systems NLG2 and NLG3 learn to determine both the word choice and the word order of the phrase. We present experiments in which we generate phrases to describe flights in the air travel domain.<|reference_end|>
arxiv
@article{ratnaparkhi2000trainable, title={Trainable Methods for Surface Natural Language Generation}, author={Adwait Ratnaparkhi}, journal={Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000). Pages 194--201}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006028}, primaryClass={cs.CL} }
ratnaparkhi2000trainable
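Of the three systems described above, the baseline NLG1 is simple enough to sketch directly: it memorizes, for each set of semantic attributes seen in training, the phrases that realized it, and emits the most frequent one in a single step. The attribute names and template phrases below are invented stand-ins for the air-travel data; the maximum entropy word-by-word generation of NLG2/NLG3 is not shown.

```python
from collections import Counter, defaultdict

class PhraseFrequencyNLG:
    """NLG1-style baseline: pick the most frequent training phrase for an attribute set."""

    def __init__(self):
        self.table = defaultdict(Counter)

    def train(self, examples):
        for attributes, phrase in examples:
            self.table[frozenset(attributes)][phrase] += 1

    def generate(self, attributes):
        phrases = self.table.get(frozenset(attributes))
        return phrases.most_common(1)[0][0] if phrases else None   # None for unseen attribute sets

nlg = PhraseFrequencyNLG()
nlg.train([
    ({"$city-fr", "$city-to"}, "flights from $city-fr to $city-to"),
    ({"$city-fr", "$city-to"}, "flights to $city-to from $city-fr"),
    ({"$city-fr", "$city-to"}, "flights from $city-fr to $city-to"),
    ({"$city-to"}, "flights to $city-to"),
])
print(nlg.generate({"$city-fr", "$city-to"}))
print(nlg.generate({"$airline"}))      # unseen attribute combination -> None
```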
arxiv-669562
cs/0006029
Systematic Performance Evaluation of Multipoint Protocols
<|reference_start|>Systematic Performance Evaluation of Multipoint Protocols: The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.<|reference_end|>
arxiv
@article{helmy2000systematic, title={Systematic Performance Evaluation of Multipoint Protocols}, author={Ahmed Helmy, Sandeep Gupta, Deborah Estrin, Alberto Cerpa, Yan Yu (University of Southern California)}, journal={arXiv preprint arXiv:cs/0006029}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006029}, primaryClass={cs.NI cs.DS} }
helmy2000systematic
arxiv-669563
cs/0006030
Multiagent Control of Self-reconfigurable Robots
<|reference_start|>Multiagent Control of Self-reconfigurable Robots: We demonstrate how multiagent systems provide useful control techniques for modular self-reconfigurable (metamorphic) robots. Such robots consist of many modules that can move relative to each other, thereby changing the overall shape of the robot to suit different tasks. Multiagent control is particularly well-suited for tasks involving uncertain and changing environments. We illustrate this approach through simulation experiments of Proteo, a metamorphic robot system currently under development.<|reference_end|>
arxiv
@article{bojinov2000multiagent, title={Multiagent Control of Self-reconfigurable Robots}, author={Hristo Bojinov, Arancha Casal and Tad Hogg}, journal={Artificial Intelligence 142:99-120 (2002)}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006030}, primaryClass={cs.RO cs.DC cs.MA} }
bojinov2000multiagent
arxiv-669564
cs/0006031
Verifying Termination of General Logic Programs with Concrete Queries
<|reference_start|>Verifying Termination of General Logic Programs with Concrete Queries: We introduce a method of verifying termination of logic programs with respect to concrete queries (instead of abstract query patterns). A necessary and sufficient condition is established and an algorithm for automatic verification is developed. In contrast to existing query pattern-based approaches, our method has the following features: (1) It applies to all general logic programs with non-floundering queries. (2) It is very easy to automate because it does not need to search for a level mapping or a model, nor does it need to compute an interargument relation based on additional mode or type information. (3) It bridges termination analysis with loop checking, the two problems that have been studied separately in the past despite their close technical relation with each other.<|reference_end|>
arxiv
@article{shen2000verifying, title={Verifying Termination of General Logic Programs with Concrete Queries}, author={Yi-Dong Shen, Li-Yan Yuan, Jia-Huai You}, journal={arXiv preprint arXiv:cs/0006031}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006031}, primaryClass={cs.AI cs.LO} }
shen2000verifying
arxiv-669565
cs/0006032
Estimation of English and non-English Language Use on the WWW
<|reference_start|>Estimation of English and non-English Language Use on the WWW: The World Wide Web has grown so big, in such an anarchic fashion, that it is difficult to describe. One of the evident intrinsic characteristics of the World Wide Web is its multilinguality. Here, we present a technique for estimating the size of a language-specific corpus given the frequency of commonly occurring words in the corpus. We apply this technique to estimating the number of words available through Web browsers for given languages. Comparing data from 1996 to data from 1999 and 2000, we calculate the growth of a number of European languages on the Web. As expected, non-English languages are growing at a faster pace than English, though the position of English is still dominant.<|reference_end|>
arxiv
@article{grefenstette2000estimation, title={Estimation of English and non-English Language Use on the WWW}, author={Gregory Grefenstette, Julien Nioche}, journal={Proceedings of RIAO'2000, "Content-Based Multimedia Information Access", Paris, April 12-14,2000, pp. 237-246}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006032}, primaryClass={cs.CL cs.HC} }
grefenstette2000estimation
arxiv-669566
cs/0006033
Verifying Termination and Error-Freedom of Logic Programs with block Declarations
<|reference_start|>Verifying Termination and Error-Freedom of Logic Programs with block Declarations: We present verification methods for logic programs with delay declarations. The verified properties are termination and freedom from errors related to built-ins. Concerning termination, we present two approaches. The first approach tries to eliminate the well-known problem of speculative output bindings. The second approach is based on identifying the predicates for which the textual position of an atom using this predicate is irrelevant with respect to termination. Three features are distinctive of this work: it allows for predicates to be used in several modes; it shows that block declarations, which are a very simple delay construct, are sufficient to ensure the desired properties; it takes the selection rule into account, assuming it to be as in most Prolog implementations. The methods can be used to verify existing programs and assist in writing new programs.<|reference_end|>
arxiv
@article{smaus2000verifying, title={Verifying Termination and Error-Freedom of Logic Programs with block Declarations}, author={Jan-Georg Smaus, Patricia M. Hill and Andy King}, journal={arXiv preprint arXiv:cs/0006033}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006033}, primaryClass={cs.LO cs.PL} }
smaus2000verifying
arxiv-669567
cs/0006034
Type Classes and Constraint Handling Rules
<|reference_start|>Type Classes and Constraint Handling Rules: Type classes are an elegant extension to traditional, Hindley-Milner based typing systems. They are used in modern, typed languages such as Haskell to support controlled overloading of symbols. Haskell 98 supports only single-parameter and constructor type classes. Other extensions such as multi-parameter type classes are highly desired but are still not officially supported by Haskell. Subtle issues arise with extensions, which may lead to a loss of feasible type inference or ambiguous programs. A proper logical basis for type class systems seems to be missing. Such a basis would allow extensions to be characterised and studied rigorously. We propose to employ Constraint Handling Rules as a tool to study and develop type class systems in a uniform way.<|reference_end|>
arxiv
@article{glynn2000type, title={Type Classes and Constraint Handling Rules}, author={Kevin Glynn, Martin Sulzmann, and Peter J. Stuckey}, journal={arXiv preprint arXiv:cs/0006034}, year={2000}, number={TR2000/7}, archivePrefix={arXiv}, eprint={cs/0006034}, primaryClass={cs.PL} }
glynn2000type
arxiv-669568
cs/0006035
On the Development of the Intersection of a Plane with a Polytope
<|reference_start|>On the Development of the Intersection of a Plane with a Polytope: Define a ``slice'' curve as the intersection of a plane with the surface of a polytope, i.e., a convex polyhedron in three dimensions. We prove that a slice curve develops on a plane without self-intersection. The key tool used is a generalization of Cauchy's arm lemma to permit nonconvex ``openings'' of a planar convex chain.<|reference_end|>
arxiv
@article{o'rourke2000on, title={On the Development of the Intersection of a Plane with a Polytope}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0006035}, year={2000}, number={Smith Technical Report 068}, archivePrefix={arXiv}, eprint={cs/0006035}, primaryClass={cs.CG cs.DM} }
o'rourke2000on
arxiv-669569
cs/0006036
Prosody-Based Automatic Segmentation of Speech into Sentences and Topics
<|reference_start|>Prosody-Based Automatic Segmentation of Speech into Sentences and Topics: A crucial step in processing speech audio data for information extraction, topic detection, or browsing/playback is to segment the input into sentence and topic units. Speech segmentation is challenging, since the cues typically present for segmenting text (headers, paragraphs, punctuation) are absent in spoken language. We investigate the use of prosody (information gleaned from the timing and melody of speech) for these tasks. Using decision tree and hidden Markov modeling techniques, we combine prosodic cues with word-based approaches, and evaluate performance on two speech corpora, Broadcast News and Switchboard. Results show that the prosodic model alone performs on par with, or better than, word-based statistical language models -- for both true and automatically recognized words in news speech. The prosodic model achieves comparable performance with significantly less training data, and requires no hand-labeling of prosodic events. Across tasks and corpora, we obtain a significant improvement over word-only models using a probabilistic combination of prosodic and lexical information. Inspection reveals that the prosodic models capture language-independent boundary indicators described in the literature. Finally, cue usage is task and corpus dependent. For example, pause and pitch features are highly informative for segmenting news speech, whereas pause, duration and word-based cues dominate for natural conversation.<|reference_end|>
arxiv
@article{shriberg2000prosody-based, title={Prosody-Based Automatic Segmentation of Speech into Sentences and Topics}, author={E. Shriberg and A. Stolcke and D. Hakkani-Tur and G. Tur}, journal={Speech Communication 32(1-2), 127-154, September 2000}, year={2000}, doi={10.1016/S0167-6393(00)00028-5}, archivePrefix={arXiv}, eprint={cs/0006036}, primaryClass={cs.CL} }
shriberg2000prosody-based
arxiv-669570
cs/0006037
A Decision-Theoretic Approach to Resource Allocation in Wireless Multimedia Networks
<|reference_start|>A Decision-Theoretic Approach to Resource Allocation in Wireless Multimedia Networks: The allocation of scarce spectral resources to support as many user applications as possible while maintaining reasonable quality of service is a fundamental problem in wireless communication. We argue that the problem is best formulated in terms of decision theory. We propose a scheme that takes decision-theoretic concerns (like preferences) into account and discuss the difficulties and subtleties involved in applying standard techniques from the theory of Markov Decision Processes (MDPs) in constructing an algorithm that is decision-theoretically optimal. As an example of the proposed framework, we construct such an algorithm under some simplifying assumptions. Additionally, we present analysis and simulation results that show that our algorithm meets its design goals. Finally, we investigate how far from optimal one well-known heuristic is. The main contribution of our results is in providing insight and guidance for the design of near-optimal admission-control policies.<|reference_end|>
arxiv
@article{haas2000a, title={A Decision-Theoretic Approach to Resource Allocation in Wireless Multimedia Networks}, author={Zygmunt Haas, Joseph Y. Halpern, Li Li, Stephen B. Wicker}, journal={arXiv preprint arXiv:cs/0006037}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006037}, primaryClass={cs.NI} }
haas2000a
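The decision-theoretic framing above reduces admission control to solving an MDP: states track current utilization, actions admit or reject an arriving request, and the optimal policy maximizes expected discounted utility. The sketch below runs plain value iteration on a tiny single-class channel model; the capacity, rates, rewards, and discount are all invented, and the paper's actual model (multiple classes, bandwidth adaptation, handoffs) is far richer.

```python
def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-8):
    """Plain value iteration; transition(s, a) yields (next_state, probability) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * sum(p * V[t] for t, p in transition(s, a))
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions(s),
                     key=lambda a: reward(s, a) + gamma * sum(p * V[t] for t, p in transition(s, a)))
              for s in states}
    return V, policy

CAPACITY, DEPART_PROB = 3, 0.2            # toy single-class cell: 3 channels, 0.2 completion rate

def actions(s):
    return ["admit", "reject"] if s < CAPACITY else ["reject"]

def reward(s, a):
    return 1.0 if a == "admit" else -0.5  # revenue for carrying a call, penalty for blocking

def transition(s, a):
    occupied = s + 1 if a == "admit" else s   # every epoch one new call arrives
    leave = DEPART_PROB * occupied            # chance that one ongoing call completes
    moves = [(occupied, 1.0 - leave)]
    if occupied > 0:
        moves.append((occupied - 1, leave))
    return moves

values, policy = value_iteration(list(range(CAPACITY + 1)), actions, transition, reward)
print(policy)     # e.g. admit while capacity remains, reject at full load
```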
arxiv-669571
cs/0006038
Approximation and Exactness in Finite State Optimality Theory
<|reference_start|>Approximation and Exactness in Finite State Optimality Theory: Previous work (Frank and Satta 1998; Karttunen, 1998) has shown that Optimality Theory with gradient constraints generally is not finite state. A new finite-state treatment of gradient constraints is presented which improves upon the approximation of Karttunen (1998). The method turns out to be exact, and very compact, for the syllabification analysis of Prince and Smolensky (1993).<|reference_end|>
arxiv
@article{gerdemann2000approximation, title={Approximation and Exactness in Finite State Optimality Theory}, author={Dale Gerdemann and Gertjan van Noord}, journal={arXiv preprint arXiv:cs/0006038}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006038}, primaryClass={cs.CL} }
gerdemann2000approximation
arxiv-669572
cs/0006039
Orthogonal Least Squares Algorithm for the Approximation of a Map and its Derivatives with a RBF Network
<|reference_start|>Orthogonal Least Squares Algorithm for the Approximation of a Map and its Derivatives with a RBF Network: Radial Basis Function Networks (RBFNs) are used primarily to solve curve-fitting problems and for non-linear system modeling. Several algorithms are known for the approximation of a non-linear curve from a sparse data set by means of RBFNs. However, there are no procedures that permit defining constraints on the derivatives of the curve. In this paper, the Orthogonal Least Squares algorithm for the identification of RBFNs is modified to provide the approximation of a non-linear 1-in 1-out map along with its derivatives, given a set of training data. The interest in the derivatives of non-linear functions concerns many identification and control tasks where the study of system stability and robustness is addressed. The effectiveness of the proposed algorithm is demonstrated by a study on the stability of a single loop feedback system.<|reference_end|>
arxiv
@article{drioli2000orthogonal, title={Orthogonal Least Squares Algorithm for the Approximation of a Map and its Derivatives with a RBF Network}, author={Carlo Drioli and Davide Rocchesso}, journal={arXiv preprint arXiv:cs/0006039}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006039}, primaryClass={cs.NE cs.SD} }
drioli2000orthogonal
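The record above modifies Orthogonal Least Squares so that the identified RBF network approximates both a 1-in/1-out map and its derivatives. The sketch below keeps only the flavor of that idea: a greedy forward selection of Gaussian centers (a crude stand-in for proper OLS with error-reduction ratios), followed by closed-form evaluation of the fitted model's derivative. The widths, center count, and sin() target are all invented for illustration.

```python
import numpy as np

def gaussian_rbf(x, c, width):
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

def gaussian_rbf_deriv(x, c, width):
    # d/dx of the Gaussian basis function, used to evaluate the model's derivative.
    return -(x - c) / width ** 2 * gaussian_rbf(x, c, width)

def forward_select_rbf(x, y, candidate_centers, width, n_centers):
    """Greedy forward selection of RBF centers: at each step add the candidate
    that most reduces the residual (the spirit, not the mechanics, of OLS)."""
    selected = []
    for _ in range(n_centers):
        best = None
        for c in candidate_centers:
            if c in selected:
                continue
            Phi = np.column_stack([gaussian_rbf(x, ci, width) for ci in selected + [c]])
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            resid = np.sum((y - Phi @ w) ** 2)
            if best is None or resid < best[0]:
                best = (resid, c)
        selected.append(best[1])
    Phi = np.column_stack([gaussian_rbf(x, ci, width) for ci in selected])
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return selected, weights

def rbf_eval(x, centers, weights, width):
    return sum(w * gaussian_rbf(x, c, width) for c, w in zip(centers, weights))

def rbf_deriv(x, centers, weights, width):
    return sum(w * gaussian_rbf_deriv(x, c, width) for c, w in zip(centers, weights))

# Toy 1-in/1-out map: y = sin(x), fitted from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x) + 0.01 * rng.standard_normal(x.shape)
centers, weights = forward_select_rbf(x, y, list(x), width=0.8, n_centers=8)
print(np.max(np.abs(rbf_eval(x, centers, weights, 0.8) - np.sin(x))))   # map error
print(np.max(np.abs(rbf_deriv(x, centers, weights, 0.8) - np.cos(x))))  # derivative error
```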
arxiv-669573
cs/0006040
Correlation over Decomposed Signals: A Non-Linear Approach to Fast and Effective Sequences Comparison
<|reference_start|>Correlation over Decomposed Signals: A Non-Linear Approach to Fast and Effective Sequences Comparison: A novel non-linear approach to fast and effective comparison of sequences is presented, compared to the traditional cross-correlation operator, and illustrated with respect to DNA sequences.<|reference_end|>
arxiv
@article{costa2000correlation, title={Correlation over Decomposed Signals: A Non-Linear Approach to Fast and Effective Sequences Comparison}, author={Luciano da Fontoura Costa}, journal={arXiv preprint arXiv:cs/0006040}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006040}, primaryClass={cs.CV cs.DS q-bio} }
costa2000correlation
arxiv-669574
cs/0006041
Using a Diathesis Model for Semantic Parsing
<|reference_start|>Using a Diathesis Model for Semantic Parsing: This paper presents a semantic parsing approach for unrestricted texts. Semantic parsing is one of the major bottlenecks of Natural Language Understanding (NLU) systems and usually requires building expensive resources not easily portable to other domains. Our approach obtains a case-role analysis, in which the semantic roles of the verb are identified. In order to cover all the possible syntactic realisations of a verb, our system combines its argument structure with a set of general semantically labelled diathesis models. Combining them, the system builds a set of syntactic-semantic patterns with their own role-case representation. Once the patterns are built, we use an approximate tree pattern-matching algorithm to identify the most reliable pattern for a sentence. The pattern matching is performed between the syntactic-semantic patterns and the feature-structure tree representing the morphological, syntactic and semantic information of the analysed sentence. For sentences assigned to the correct model, the semantic parsing system we are presenting correctly identifies more than 73% of possible semantic case-roles.<|reference_end|>
arxiv
@article{atserias2000using, title={Using a Diathesis Model for Semantic Parsing}, author={Jordi Atserias, Irene Castellon, Montse Civit, German Rigau}, journal={Proceedings of VEXTAL 1999, pp. 385-392}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006041}, primaryClass={cs.CL cs.AI} }
atserias2000using
arxiv-669575
cs/0006042
Semantic Parsing based on Verbal Subcategorization
<|reference_start|>Semantic Parsing based on Verbal Subcategorization: The aim of this work is to explore new methodologies for Semantic Parsing of unrestricted texts. Our approach follows the current trends in Information Extraction (IE) and is based on the application of a verbal subcategorization lexicon (LEXPIR) by means of complex pattern recognition techniques. LEXPIR is framed within the theoretical model of verbal subcategorization developed in the Pirapides project.<|reference_end|>
arxiv
@article{atserias2000semantic, title={Semantic Parsing based on Verbal Subcategorization}, author={Jordi Atserias, Irene Castellon, Montse Civit, German Rigau}, journal={Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2000), pp. 330-340}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006042}, primaryClass={cs.CL cs.AI} }
atserias2000semantic
arxiv-669576
cs/0006043
Constraint compiling into rules formalism for dynamic CSPs computing
<|reference_start|>Constraint compiling into rules formalism for dynamic CSPs computing: In this paper we present a rule-based formalism for filtering the variable domains of constraints. This formalism is well adapted to solving dynamic CSPs. We take diagnosis as an instance problem to illustrate the use of these rules. A diagnosis problem is seen as finding all the minimal sets of constraints to be relaxed in the constraint network that models the device to be diagnosed.<|reference_end|>
arxiv
@article{piechowiak2000constraint, title={Constraint compiling into rules formalism for dynamic CSPs computing}, author={S. Piechowiak, J. Rodriguez}, journal={arXiv preprint arXiv:cs/0006043}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006043}, primaryClass={cs.AI} }
piechowiak2000constraint
arxiv-669577
cs/0006044
Finite-State Non-Concatenative Morphotactics
<|reference_start|>Finite-State Non-Concatenative Morphotactics: Finite-state morphology in the general tradition of the Two-Level and Xerox implementations has proved very successful in the production of robust morphological analyzer-generators, including many large-scale commercial systems. However, it has long been recognized that these implementations have serious limitations in handling non-concatenative phenomena. We describe a new technique for constructing finite-state transducers that involves reapplying the regular-expression compiler to its own output. Implemented in an algorithm called compile-replace, this technique has proved useful for handling non-concatenative phenomena; and we demonstrate it on Malay full-stem reduplication and Arabic stem interdigitation.<|reference_end|>
arxiv
@article{beesley2000finite-state, title={Finite-State Non-Concatenative Morphotactics}, author={Kenneth R. Beesley and Lauri Karttunen}, journal={arXiv preprint arXiv:cs/0006044}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006044}, primaryClass={cs.CL} }
beesley2000finite-state
arxiv-669578
cs/0006045
Security Policy Consistency
<|reference_start|>Security Policy Consistency: With the advent of wide security platforms able to express simultaneously all the policies comprising an organization's global security policy, the problem of inconsistencies within security policies becomes harder and more relevant. We have defined a tool based on the CHR language which is able to detect several types of inconsistencies within and between security policies and other specifications, namely workflow specifications. Although the problem of security conflicts has been addressed by several authors, to our knowledge none has addressed the general problem of security inconsistencies, in its several definitions and target specifications.<|reference_end|>
arxiv
@article{ribeiro2000security, title={Security Policy Consistency}, author={Carlos Ribeiro, Andre Zuquete, Paulo Ferreira and Paulo Guedes}, journal={arXiv preprint arXiv:cs/0006045}, year={2000}, archivePrefix={arXiv}, eprint={cs/0006045}, primaryClass={cs.LO cs.CR} }
ribeiro2000security
arxiv-669579
cs/0006046
3-Coloring in Time O(1.3289^n)
<|reference_start|>3-Coloring in Time O(13289^n): We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas.<|reference_end|>
arxiv
@article{beigel20003-coloring, title={3-Coloring in Time O(1.3289^n)}, author={Richard Beigel and David Eppstein}, journal={J. Algorithms 54:2 (2005) 168-204}, year={2000}, doi={10.1016/j.jalgor.2004.06.008}, archivePrefix={arXiv}, eprint={cs/0006046}, primaryClass={cs.DS} }
beigel20003-coloring
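For contrast with the O(1.3289^n) bound claimed above, here is the naive backtracking 3-coloring that the paper improves upon; the graphs are toy examples, and none of the paper's (3,2)-CSP reductions or case analysis appears here.

```python
def three_color(n, edges):
    """Backtracking 3-coloring of an n-vertex graph given as a list of edges.

    This is the naive O(3^n) search; the paper's contribution is a much more
    careful case analysis on a (3,2)-CSP formulation that brings the bound
    down to O(1.3289^n).
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = {}

    def assign(v):
        if v == n:
            return True
        for c in range(3):
            if all(colors.get(u) != c for u in adj[v]):   # no conflict with colored neighbors
                colors[v] = c
                if assign(v + 1):
                    return True
                del colors[v]
        return False

    return dict(colors) if assign(0) else None

# A 5-cycle is 3-colorable but not 2-colorable.
print(three_color(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
# K4 needs 4 colors, so this returns None.
print(three_color(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```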
arxiv-669580
cs/0006047
Geometric Morphology of Granular Materials
<|reference_start|>Geometric Morphology of Granular Materials: We present a new method to transform the spectral pixel information of a micrograph into an affine geometric description, which allows us to analyze the morphology of granular materials. We use spectral and pulse-coupled neural network based segmentation techniques to generate blobs, and a newly developed algorithm to extract dilated contours. A constrained Delaunay tessellation of the contour points results in a triangular mesh. This mesh is the basic ingredient of the Chordal Axis Transform, which provides a morphological decomposition of shapes. Such a decomposition allows for grain separation and the efficient computation of the statistical features of granular materials.<|reference_end|>
arxiv
@article{schlei2000geometric, title={Geometric Morphology of Granular Materials}, author={B. R. Schlei, L. Prasad, A. N. Skourikhine}, journal={arXiv preprint arXiv:cs/0006047}, year={2000}, doi={10.1117/12.404821}, number={LA-UR-00-2839}, archivePrefix={arXiv}, eprint={cs/0006047}, primaryClass={cs.CV} }
schlei2000geometric
arxiv-669581
cs/0007001
Constraint Exploration and Envelope of Simulation Trajectories
<|reference_start|>Constraint Exploration and Envelope of Simulation Trajectories: The implicit theory that a simulation represents is precisely not in the individual choices but rather in the 'envelope' of possible trajectories - what is important is the shape of the whole envelope. Typically a huge amount of computation is required when experimenting with factors bearing on the dynamics of a simulation to tease out what affects the shape of this envelope. In this paper we present a methodology aimed at systematically exploring this envelope. We propose a method for searching for tendencies and proving their necessity relative to a range of parameterisations of the model and agents' choices, and to the logic of the simulation language. The exploration consists of a forward chaining generation of the trajectories associated to and constrained by such a range of parameterisations and choices. Additionally, we propose a computational procedure that helps implement this exploration by translating a Multi Agent System simulation into a constraint-based search over possible trajectories by 'compiling' the simulation rules into a more specific form, namely by partitioning the simulation rules using appropriate modularity in the simulation. An example of this procedure is exhibited. Keywords: Constraint Search, Constraint Logic Programming, Proof, Emergence, Tendencies<|reference_end|>
arxiv
@article{teran2000constraint, title={Constraint Exploration and Envelope of Simulation Trajectories}, author={Oswaldo Teran (1 and 2), Bruce Edmonds (1) and Steve Wallis (1) ((1) Manchester Metropolitan University. Manchester. UK, (2) Universidad de Los Andes. Merida. Venezuela)}, journal={arXiv preprint arXiv:cs/0007001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007001}, primaryClass={cs.PL cs.AI cs.LO} }
teran2000constraint
arxiv-669582
cs/0007002
Interval Constraint Solving for Camera Control and Motion Planning
<|reference_start|>Interval Constraint Solving for Camera Control and Motion Planning: Many problems in robust control and motion planning can be reduced to either find a sound approximation of the solution space determined by a set of nonlinear inequalities, or to the ``guaranteed tuning problem'' as defined by Jaulin and Walter, which amounts to finding a value for some tuning parameter such that a set of inequalities be verified for all the possible values of some perturbation vector. A classical approach to solve these problems, which satisfies the strong soundness requirement, involves some quantifier elimination procedure such as Collins' Cylindrical Algebraic Decomposition symbolic method. Sound numerical methods using interval arithmetic and local consistency enforcement to prune the search space are presented in this paper as much faster alternatives for both soundly solving systems of nonlinear inequalities, and addressing the guaranteed tuning problem whenever the perturbation vector has dimension one. The use of these methods in camera control is investigated, and experiments with the prototype of a declarative modeller to express camera motion using a cinematic language are reported and commented.<|reference_end|>
arxiv
@article{benhamou2000interval, title={Interval Constraint Solving for Camera Control and Motion Planning}, author={Frederic Benhamou, Frederic Goualard, Eric Languenou, Marc Christie}, journal={arXiv preprint arXiv:cs/0007002}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007002}, primaryClass={cs.AI cs.NA math.NA} }
benhamou2000interval
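Interval methods like the ones above get their soundness from evaluating expressions over whole boxes of values at once: if the interval evaluation of a constraint already satisfies the inequality, it holds for every point in the box. A bare-bones sketch follows (no outward rounding, no box/hull consistency pruning, and an invented example constraint), so it shows only the interval-evaluation step, not the paper's solvers.

```python
class Interval:
    """Closed interval [lo, hi] with elementary arithmetic (outward rounding ignored here)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def certainly_nonnegative(f, box):
    """True if f(x) >= 0 holds for every x in the box (sound, possibly conservative)."""
    return f(*box).lo >= 0

# g(x, y) = x*y + x: does g >= 0 hold for all x, y in the given box?
g = lambda x, y: x * y + x
print(certainly_nonnegative(g, (Interval(1, 2), Interval(0, 3))))    # True: holds on the whole box
print(certainly_nonnegative(g, (Interval(-1, 2), Interval(0, 3))))   # False: the box is too wide
```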
arxiv-669583
cs/0007003
Using compression to identify acronyms in text
<|reference_start|>Using compression to identify acronyms in text: Text mining is about looking for patterns in natural language text, and may be defined as the process of analyzing text to extract information from it for particular purposes. In previous work, we claimed that compression is a key technology for text mining, and backed this up with a study that showed how particular kinds of lexical tokens---names, dates, locations, etc.---can be identified and located in running text, using compression models to provide the leverage necessary to distinguish different token types (Witten et al., 1999)<|reference_end|>
arxiv
@article{yeates2000using, title={Using compression to identify acronyms in text}, author={Stuart Yeates and David Bainbridge and Ian H. Witten}, journal={arXiv preprint arXiv:cs/0007003}, year={2000}, number={Working Paper 00/01}, archivePrefix={arXiv}, eprint={cs/0007003}, primaryClass={cs.DL cs.IR} }
yeates2000using
arxiv-669584
cs/0007004
Brainstorm/J: a Java Framework for Intelligent Agents
<|reference_start|>Brainstorm/J: a Java Framework for Intelligent Agents: Despite the efforts of many researchers in the area of multi-agent systems (MAS) in designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development in specific application domains or for specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents named Brainstorm/J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, being flexible and reusable.<|reference_end|>
arxiv
@article{zunino2000brainstorm/j:, title={Brainstorm/J: a Java Framework for Intelligent Agents}, author={Alejandro Zunino and Analia Amandi}, journal={arXiv preprint arXiv:cs/0007004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007004}, primaryClass={cs.AI} }
zunino2000brainstorm/j:
arxiv-669585
cs/0007005
Systematic Testing of Multicast Routing Protocols: Analysis of Forward and Backward Search Techniques
<|reference_start|>Systematic Testing of Multicast Routing Protocols: Analysis of Forward and Backward Search Techniques: In this paper, we present a new methodology for developing systematic and automatic test generation algorithms for multipoint protocols. These algorithms attempt to synthesize network topologies and sequences of events that stress the protocol's correctness or performance. This problem can be viewed as a domain-specific search problem that suffers from the state space explosion problem. One goal of this work is to circumvent the state space explosion problem utilizing knowledge of network and fault modeling, and multipoint protocols. The two approaches investigated in this study are based on forward and backward search techniques. We use an extended finite state machine (FSM) model of the protocol. The first algorithm uses forward search to perform reduced reachability analysis. Using domain-specific information for multicast routing over LANs, the algorithm complexity is reduced from exponential to polynomial in the number of routers. This approach, however, does not fully automate topology synthesis. The second algorithm, the fault-oriented test generation, uses backward search for topology synthesis and uses backtracking to generate event sequences instead of searching forward from initial states. Using these algorithms, we have conducted studies for correctness of the multicast routing protocol PIM. We propose to extend these algorithms to study end-to-end multipoint protocols using a virtual LAN that represents delays of the underlying multicast distribution tree.<|reference_end|>
arxiv
@article{helmy2000systematic, title={Systematic Testing of Multicast Routing Protocols: Analysis of Forward and Backward Search Techniques}, author={Ahmed Helmy, Deborah Estrin, Sandeep Gupta}, journal={arXiv preprint arXiv:cs/0007005}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007005}, primaryClass={cs.NI cs.DS} }
helmy2000systematic
arxiv-669586
cs/0007006
DISCO: An object-oriented system for music composition and sound design
<|reference_start|>DISCO: An object-oriented system for music composition and sound design: This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.<|reference_end|>
arxiv
@article{kaper2000disco, title={DISCO: An object-oriented system for music composition and sound design}, author={Hans G. Kaper (1) and Sever Tipei (2) and Jeff M. Wright (2) ((1) Argonne National Laboratory, (2) University of Illinois at Urbana-Champaign)}, journal={arXiv preprint arXiv:cs/0007006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007006}, primaryClass={cs.SD cs.DS cs.SE} }
kaper2000disco
arxiv-669587
cs/0007007
Data sonification and sound visualization
<|reference_start|>Data sonification and sound visualization: This article describes a collaborative project between researchers in the Mathematics and Computer Science Division at Argonne National Laboratory and the Computer Music Project of the University of Illinois at Urbana-Champaign. The project focuses on the use of sound for the exploration and analysis of complex data sets in scientific computing. The article addresses digital sound synthesis in the context of DIASS (Digital Instrument for Additive Sound Synthesis) and sound visualization in a virtual-reality environment by means of M4CAVE. It describes the procedures and preliminary results of some experiments in scientific sonification and sound visualization.<|reference_end|>
arxiv
@article{kaper2000data, title={Data sonification and sound visualization}, author={Hans G. Kaper (1) and Sever Tipei (2) and Elizabeth Wiebel (1) ((1) Argonne National Laboratory, (2) University of Illinois at Urbana-Champaign)}, journal={Computing in Science and Engineering, Vol. 1 No. 4, July-August 1999, pp. 48-58}, year={2000}, number={ANL/MCS-P738-0199}, archivePrefix={arXiv}, eprint={cs/0007007}, primaryClass={cs.SD cs.HC cs.MM} }
kaper2000data
arxiv-669588
cs/0007008
Compiling Language Definitions: The ASF+SDF Compiler
<|reference_start|>Compiling Language Definitions: The ASF+SDF Compiler: The ASF+SDF Meta-Environment is an interactive language development environment whose main application areas are definition of domain-specific languages, generation of program analysis and transformation tools, production of software renovation tools, and general specification and prototyping. It uses conditional rewrite rules to define the dynamic semantics and other tool-oriented aspects of languages, so the effectiveness of the generated tools is critically dependent on the quality of the rewrite rule implementation. The ASF+SDF rewrite rule compiler generates C code, thus taking advantage of C's portability and the sophisticated optimization capabilities of current C compilers as well as avoiding potential abstract machine interface bottlenecks. It can handle large (10 000+ rule) language definitions and uses an efficient run-time storage scheme capable of handling large (1 000 000+ node) terms. Term storage uses maximal subterm sharing (hash-consing), which turns out to be more effective in the case of ASF+SDF than in Lisp or SML. Extensive benchmarking has shown the time and space performance of the generated code to be as good as or better than that of the best current rewrite rule and functional language compilers.<|reference_end|>
arxiv
@article{brand2000compiling, title={Compiling Language Definitions: The ASF+SDF Compiler}, author={M. G. J. van den Brand and J. Heering and P. Klint and P. A. Olivier}, journal={ACM Transactions on Programming Languages and Systems 24 4 (July 2002) 334-368}, year={2000}, number={SEN-R0014}, archivePrefix={arXiv}, eprint={cs/0007008}, primaryClass={cs.PL cs.SE} }
brand2000compiling
arxiv-669589
cs/0007009
Incremental construction of minimal acyclic finite-state automata
<|reference_start|>Incremental construction of minimal acyclic finite-state automata: In this paper, we describe a new method for constructing minimal, deterministic, acyclic finite-state automata from a set of strings. Traditional methods consist of two phases: the first to construct a trie, the second one to minimize it. Our approach is to construct a minimal automaton in a single phase by adding new strings one by one and minimizing the resulting automaton on-the-fly. We present a general algorithm as well as a specialization that relies upon the lexicographical ordering of the input strings.<|reference_end|>
arxiv
@article{daciuk2000incremental, title={Incremental construction of minimal acyclic finite-state automata}, author={Jan Daciuk and Stoyan Mihov and Bruce Watson and Richard Watson}, journal={Computational Linguistics, Vol. 26, Number 1, March 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007009}, primaryClass={cs.CL} }
daciuk2000incremental
arxiv-669590
cs/0007010
Boosting Applied to Word Sense Disambiguation
<|reference_start|>Boosting Applied to Word Sense Disambiguation: In this paper Schapire and Singer's AdaBoost.MH boosting algorithm is applied to the Word Sense Disambiguation (WSD) problem. Initial experiments on a set of 15 selected polysemous words show that the boosting approach surpasses Naive Bayes and Exemplar-based approaches, which represent state-of-the-art accuracy on supervised WSD. In order to make boosting practical for a real learning domain of thousands of words, several ways of accelerating the algorithm by reducing the feature space are studied. The best variant, which we call LazyBoosting, is tested on the largest sense-tagged corpus available containing 192,800 examples of the 191 most frequent and ambiguous English words. Again, boosting compares favourably to the other benchmark algorithms.<|reference_end|>
arxiv
@article{escudero2000boosting, title={Boosting Applied to Word Sense Disambiguation}, author={Gerard Escudero and Lluis Marquez and German Rigau}, journal={Proceedings of the 11th European Conference on Machine Learning, ECML'2000 pp. 129-141}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007010}, primaryClass={cs.CL cs.AI} }
escudero2000boosting
arxiv-669591
cs/0007011
Naive Bayes and Exemplar-Based approaches to Word Sense Disambiguation Revisited
<|reference_start|>Naive Bayes and Exemplar-Based approaches to Word Sense Disambiguation Revisited: This paper describes an experimental comparison between two standard supervised learning methods, namely Naive Bayes and Exemplar-based classification, on the Word Sense Disambiguation (WSD) problem. The aim of the work is twofold. Firstly, it attempts to help clarify some confusing information in the related literature about the comparison between the two methods. In doing so, several directions have been explored, including testing several modifications of the basic learning algorithms and varying the feature space. Secondly, an improvement of both algorithms is proposed in order to deal with large attribute sets. This modification, which basically consists in using only the positive information appearing in the examples, greatly improves the efficiency of the methods with no loss in accuracy. The experiments have been performed on the largest sense-tagged corpus available, containing the most frequent and ambiguous English words. Results show that the Exemplar-based approach to WSD is generally superior to the Bayesian approach, especially when a specific metric for dealing with symbolic attributes is used.<|reference_end|>
arxiv
@article{escudero2000naive, title={Naive Bayes and Exemplar-Based approaches to Word Sense Disambiguation Revisited}, author={Gerard Escudero and Lluis Marquez and German Rigau}, journal={Proceedings of the 14th European Conference on Artificial Intelligence, ECAI'2000 pp. 421-425}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007011}, primaryClass={cs.CL cs.AI} }
escudero2000naive
arxiv-669592
cs/0007012
Using Learning-based Filters to Detect Rule-based Filtering Obsolescence
<|reference_start|>Using Learning-based Filters to Detect Rule-based Filtering Obsolescence: For years, Caisse des Depots et Consignations has produced information filtering applications. To be operational, these applications require high filtering performance, which is achieved by using rule-based filters. With this technique, an administrator has to tune a set of rules for each topic. However, filters become obsolescent over time. The decrease in their performance is due to diachronic polysemy of terms, which involves a loss of precision, and to diachronic polymorphism of concepts, which involves a loss of recall. To help the administrator maintain his filters, we have developed a method that automatically detects filtering obsolescence. It consists in building a learning-based control filter from a set of documents which have already been categorised as relevant or not relevant by the rule-based filter. The idea is to supervise the rule-based filter by performing a differential comparison of its outcomes with those of the control filter. This method has many advantages. It is simple to implement since the training set used for learning is supplied by the rule-based filter. Thus, both the construction and the use of the control filter are fully automatic. With automatic detection of obsolescence, learning-based filtering finds a rich application which offers interesting prospects.<|reference_end|>
arxiv
@article{wolinski2000using, title={Using Learning-based Filters to Detect Rule-based Filtering Obsolescence}, author={Francis Wolinski and Frantz Vichot and Mathieu Stricker}, journal={arXiv preprint arXiv:cs/0007012}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007012}, primaryClass={cs.CL cs.AI} }
wolinski2000using
arxiv-669593
cs/0007013
Applying Constraint Handling Rules to HPSG
<|reference_start|>Applying Constraint Handling Rules to HPSG: Constraint Handling Rules (CHR) have provided a realistic solution to an over-arching problem in many fields that deal with constraint logic programming: how to combine recursive functions or relations with constraints while avoiding non-termination problems. This paper focuses on some other benefits that CHR, specifically their implementation in SICStus Prolog, have provided to computational linguists working on grammar design tools. CHR rules are applied by means of a subsumption check and this check is made only when their variables are instantiated or bound. The former functionality is at best difficult to simulate using more primitive coroutining statements such as SICStus when/2, and the latter simply did not exist in any form before CHR. For the sake of providing a case study in how these can be applied to grammar development, we consider the Attribute Logic Engine (ALE), a Prolog preprocessor for logic programming with typed feature structures, and its extension to a complete grammar development system for Head-driven Phrase Structure Grammar (HPSG), a popular constraint-based linguistic theory that uses typed feature structures. In this context, CHR can be used not only to extend the constraint language of feature structure descriptions to include relations in a declarative way, but also to provide support for constraints with complex antecedents and constraints on the co-occurrence of feature values that are necessary to interpret the type system of HPSG properly.<|reference_end|>
arxiv
@article{penn2000applying, title={Applying Constraint Handling Rules to HPSG}, author={Gerald Penn}, journal={arXiv preprint arXiv:cs/0007013}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007013}, primaryClass={cs.CL cs.PL} }
penn2000applying
arxiv-669594
cs/0007014
The Sound Manifesto
<|reference_start|>The Sound Manifesto: Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need co-ordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the co-operative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for a characterization of visual information.<|reference_end|>
arxiv
@article{o'donnell2000the, title={The Sound Manifesto}, author={Michael J. O'Donnell and Ilia Bisnovatyi}, journal={arXiv preprint arXiv:cs/0007014}, year={2000}, doi={10.1117/12.409214}, archivePrefix={arXiv}, eprint={cs/0007014}, primaryClass={cs.SD} }
o'donnell2000the
arxiv-669595
cs/0007015
Phase Clocks for Transient Fault Repair
<|reference_start|>Phase Clocks for Transient Fault Repair: Phase clocks are synchronization tools that implement a form of logical time in distributed systems. For systems tolerating transient faults by self-repair of damaged data, phase clocks can enable reasoning about the progress of distributed repair procedures. This paper presents a phase clock algorithm suited to the model of transient memory faults in asynchronous systems with read/write registers. The algorithm is self-stabilizing and guarantees accuracy of phase clocks within O(k) time following an initial state that is k-faulty. Composition theorems show how the algorithm can be used for the timing of distributed procedures that repair system outputs.<|reference_end|>
arxiv
@article{herman2000phase, title={Phase Clocks for Transient Fault Repair}, author={Ted Herman}, journal={arXiv preprint arXiv:cs/0007015}, year={2000}, number={TR99-08}, archivePrefix={arXiv}, eprint={cs/0007015}, primaryClass={cs.DC} }
herman2000phase
arxiv-669596
cs/0007016
Two Steps Feature Selection and Neural Network Classification for the TREC-8 Routing
<|reference_start|>Two Steps Feature Selection and Neural Network Classification for the TREC-8 Routing: For the TREC-8 routing, one specific filter is built for each topic. Each filter is a classifier trained to recognize the documents that are relevant to the topic. When presented with a document, each classifier estimates the probability for the document to be relevant to the topic for which it has been trained. Since the procedure for building a filter is topic-independent, the system is fully automatic. By making use of a sample of documents that have previously been evaluated as relevant or not relevant to a particular topic, a term selection is performed, and a neural network is trained. Each document is represented by a vector of frequencies of a list of selected terms. This list depends on the topic to be filtered; it is constructed in two steps. The first step defines the characteristic words used in the relevant documents of the corpus; the second one chooses, among the previous list, the most discriminant ones. The length of the vector is optimized automatically for each topic. At the end of the term selection, a vector of typically 25 words is defined for the topic, so that each document which has to be processed is represented by a vector of term frequencies. This vector is subsequently input to a classifier that is trained from the same sample. After training, the classifier estimates for each document of a test set its probability of being relevant; for submission to TREC, the top 1000 documents are ranked in order of decreasing relevance.<|reference_end|>
arxiv
@article{stricker2000two, title={Two Steps Feature Selection and Neural Network Classification for the TREC-8 Routing}, author={Mathieu Stricker and Frantz Vichot and Gerard Dreyfus and Francis Wolinski}, journal={arXiv preprint arXiv:cs/0007016}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007016}, primaryClass={cs.CL cs.AI} }
stricker2000two
arxiv-669597
cs/0007017
Fuzzy data: XML may handle it
<|reference_start|>Fuzzy data: XML may handle it: Data modeling is one of the most difficult tasks in application engineering. The engineer must be aware of the use cases and the required application services, and at a certain point in time he has to fix the data model that forms the basis for the application services. However, once the data model has been fixed, it is difficult to accommodate changing needs. This can be a problem in domains as dynamic as healthcare. By fuzzy data we mean all those data that are difficult to organize in a single database. In this paper we discuss a gradual and pragmatic approach that uses XML technology to gain more model flexibility. XML may provide the link between unstructured text data and structured database solutions and shift the paradigm from "organizing the data along a given model" towards "organizing the data along user requirements".<|reference_end|>
arxiv
@article{schweiger2000fuzzy, title={Fuzzy data: XML may handle it}, author={R. Schweiger (University Giessen) and S. Hoelzer (Giessen) and J. Dudeck (Giessen)}, journal={arXiv preprint arXiv:cs/0007017}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007017}, primaryClass={cs.IR} }
schweiger2000fuzzy
arxiv-669598
cs/0007018
Bootstrapping a Tagged Corpus through Combination of Existing Heterogeneous Taggers
<|reference_start|>Bootstrapping a Tagged Corpus through Combination of Existing Heterogeneous Taggers: This paper describes a new method, Combi-bootstrap, to exploit existing taggers and lexical resources for the annotation of corpora with new tagsets. Combi-bootstrap uses existing resources as features for a second level machine learning module, that is trained to make the mapping to the new tagset on a very small sample of annotated corpus material. Experiments show that Combi-bootstrap: i) can integrate a wide variety of existing resources, and ii) achieves much higher accuracy (up to 44.7 % error reduction) than both the best single tagger and an ensemble tagger constructed out of the same small training sample.<|reference_end|>
arxiv
@article{zavrel2000bootstrapping, title={Bootstrapping a Tagged Corpus through Combination of Existing Heterogeneous Taggers}, author={Jakub Zavrel and Walter Daelemans}, journal={Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), pp. 17--20}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007018}, primaryClass={cs.CL} }
zavrel2000bootstrapping
arxiv-669599
cs/0007019
Examples, Counterexamples, and Enumeration Results for Foldings and Unfoldings between Polygons and Polytopes
<|reference_start|>Examples, Counterexamples, and Enumeration Results for Foldings and Unfoldings between Polygons and Polytopes: We investigate how to make the surface of a convex polyhedron (a polytope) by folding up a polygon and gluing its perimeter shut, and the reverse process of cutting open a polytope and unfolding it to a polygon. We explore basic enumeration questions in both directions: Given a polygon, how many foldings are there? Given a polytope, how many unfoldings are there to simple polygons? Throughout we give special attention to convex polygons, and to regular polygons. We show that every convex polygon folds to an infinite number of distinct polytopes, but that their number of combinatorially distinct gluings is polynomial. There are, however, simple polygons with an exponential number of distinct gluings. In the reverse direction, we show that there are polytopes with an exponential number of distinct cuttings that lead to simple unfoldings. We establish necessary conditions for a polytope to have convex unfoldings, implying, for example, that among the Platonic solids, only the tetrahedron has a convex unfolding. We provide an inventory of the polytopes that may unfold to regular polygons, showing that, for n>6, there is essentially only one class of such polytopes.<|reference_end|>
arxiv
@article{demaine2000examples, title={Examples, Counterexamples, and Enumeration Results for Foldings and Unfoldings between Polygons and Polytopes}, author={Erik D. Demaine and Martin L. Demaine and Anna Lubiw and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0007019}, year={2000}, number={Smith Technical Report 069}, archivePrefix={arXiv}, eprint={cs/0007019}, primaryClass={cs.CG cs.DM} }
demaine2000examples
arxiv-669600
cs/0007020
Polynomial-time Computation via Local Inference Relations
<|reference_start|>Polynomial-time Computation via Local Inference Relations: We consider the concept of a local set of inference rules. A local rule set can be automatically transformed into a rule set for which bottom-up evaluation terminates in polynomial time. The local-rule-set transformation gives polynomial-time evaluation strategies for a large variety of rule sets that cannot be given terminating evaluation strategies by any other known automatic technique. This paper discusses three new results. First, it is shown that every polynomial-time predicate can be defined by an (unstratified) local rule set. Second, a new machine-recognizable subclass of the local rule sets is identified. Finally we show that locality, as a property of rule sets, is undecidable in general.<|reference_end|>
arxiv
@article{givan2000polynomial-time, title={Polynomial-time Computation via Local Inference Relations}, author={Robert Givan and David McAllester}, journal={arXiv preprint arXiv:cs/0007020}, year={2000}, archivePrefix={arXiv}, eprint={cs/0007020}, primaryClass={cs.LO cs.AI cs.PL} }
givan2000polynomial-time