corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-669701 | cs/0010011 | If P \neq NP then Some Strongly Noninvertible Functions are Invertible | <|reference_start|>If P \neq NP then Some Strongly Noninvertible Functions are Invertible: Rabi, Rivest, and Sherman alter the standard notion of noninvertibility to a new notion they call strong noninvertibility, and show -- via explicit cryptographic protocols for secret-key agreement ([RS93,RS97] attribute this to Rivest and Sherman) and digital signatures [RS93,RS97] -- that strongly noninvertible functions would be very useful components in protocol design. Their definition of strong noninvertibility has a small twist (``respecting the argument given'') that is needed to ensure cryptographic usefulness. In this paper, we show that this small twist has a large, unexpected consequence: Unless P=NP, some strongly noninvertible functions are invertible.<|reference_end|> | arxiv | @article{hemaspaandra2000if,
title={If P \neq NP then Some Strongly Noninvertible Functions are Invertible},
author={Lane A. Hemaspaandra, Kari Pasanen, and J\"org Rothe},
journal={arXiv preprint arXiv:cs/0010011},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010011},
primaryClass={cs.CC}
} | hemaspaandra2000if |
arxiv-669702 | cs/0010012 | Finding consensus in speech recognition: word error minimization and other applications of confusion networks | <|reference_start|>Finding consensus in speech recognition: word error minimization and other applications of confusion networks: We describe a new framework for distilling information from word lattices to improve the accuracy of speech recognition and obtain a more perspicuous representation of a set of alternative hypotheses. In the standard MAP decoding approach the recognizer outputs the string of words corresponding to the path with the highest posterior probability given the acoustics and a language model. However, even given optimal models, the MAP decoder does not necessarily minimize the commonly used performance metric, word error rate (WER). We describe a method for explicitly minimizing WER by extracting word hypotheses with the highest posterior probabilities from word lattices. We change the standard problem formulation by replacing global search over a large set of sentence hypotheses with local search over a small set of word candidates. In addition to improving the accuracy of the recognizer, our method produces a new representation of the set of candidate hypotheses that specifies the sequence of word-level confusions in a compact lattice format. We study the properties of confusion networks and examine their use for other tasks, such as lattice compression, word spotting, confidence annotation, and reevaluation of recognition hypotheses using higher-level knowledge sources.<|reference_end|> | arxiv | @article{mangu2000finding,
title={Finding consensus in speech recognition: word error minimization and
other applications of confusion networks},
author={L. Mangu, E. Brill, A. Stolcke},
journal={Computer Speech and Language 14(4), 373-400, October 2000},
year={2000},
doi={10.1006/csla.2000.0152},
archivePrefix={arXiv},
eprint={cs/0010012},
primaryClass={cs.CL}
} | mangu2000finding |
arxiv-669703 | cs/0010013 | A Public-key based Information Management Model for Mobile Agents | <|reference_start|>A Public-key based Information Management Model for Mobile Agents: Mobile code based computing requires development of protection schemes that allow digital signature and encryption of data collected by the agents in untrusted hosts. These algorithms could not rely on carrying encryption keys if these keys could be stolen or used to counterfeit data by hostile hosts and agents. As a consequence, both information and keys must be protected in a way that only authorized hosts, that is the host that provides information and the server that has sent the mobile agent, could modify (by changing or removing) retrieved data. The data management model proposed in this work allows the information collected by the agents to be protected against handling by other hosts in the information network. It has been done by using standard public-key cryptography modified to support protection of data in distributed environments without requiring an interactive protocol with the host that has dropped the agent. Their significance stands on the fact that it is the first model that supports a full-featured protection of mobile agents allowing remote hosts to change its own information if required before agent returns to its originating server.<|reference_end|> | arxiv | @article{rodriguez2000a,
title={A Public-key based Information Management Model for Mobile Agents},
author={Diego Rodriguez (University of Oviedo), Igor Sobrado (University of
Oviedo)},
journal={arXiv preprint arXiv:cs/0010013},
year={2000},
number={FFUOV-00/04},
archivePrefix={arXiv},
eprint={cs/0010013},
primaryClass={cs.CR cs.DC cs.IR cs.NI}
} | rodriguez2000a |
arxiv-669704 | cs/0010014 | On a cepstrum-based speech detector robust to white noise | <|reference_start|>On a cepstrum-based speech detector robust to white noise: We study effects of additive white noise on the cepstral representation of speech signals. Distribution of each individual cepstrum coefficient of speech is shown to depend strongly on noise and to overlap significantly with the cepstrum distribution of noise. Based on these studies, we suggest a scalar quantity, V, equal to the sum of weighted cepstral coefficients, which is able to classify frames containing speech against noise-like frames. The distributions of V for speech and noise frames are reasonably well separated above SNR = 5 dB, demonstrating the feasibility of robust speech detector based on V.<|reference_end|> | arxiv | @article{skorik2000on,
title={On a cepstrum-based speech detector robust to white noise},
author={Sergei Skorik and Frederic Berthommier},
journal={arXiv preprint arXiv:cs/0010014},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010014},
primaryClass={cs.CL cs.CV cs.HC}
} | skorik2000on |
arxiv-669705 | cs/0010015 | On Exponential-Time Completeness of the Circularity Problem for Attribute Grammars | <|reference_start|>On Exponential-Time Completeness of the Circularity Problem for Attribute Grammars: Attribute grammars (AGs) are a formal technique for defining semantics of programming languages. Existing complexity proofs on the circularity problem of AGs are based on automata theory, such as writing pushdown acceptor and alternating Turing machines. They reduced the acceptance problems of above automata, which are exponential-time (EXPTIME) complete, to the AG circularity problem. These proofs thus show that the circularity problem is EXPTIME-hard, at least as hard as the most difficult problems in EXPTIME. However, none has given a proof for the EXPTIME-completeness of the problem. This paper first presents an alternating Turing machine for the circularity problem. The alternating Turing machine requires polynomial space. Thus, the circularity problem is in EXPTIME and is then EXPTIME-complete.<|reference_end|> | arxiv | @article{wu2000on,
title={On Exponential-Time Completeness of the Circularity Problem for
Attribute Grammars},
author={Pei-Chi Wu},
journal={arXiv preprint arXiv:cs/0010015},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010015},
primaryClass={cs.PL cs.CC}
} | wu2000on |
arxiv-669706 | cs/0010016 | Towards rule-based visual programming of generic visual systems | <|reference_start|>Towards rule-based visual programming of generic visual systems: This paper illustrates how the diagram programming language DiaPlan can be used to program visual systems. DiaPlan is a visual rule-based language that is founded on the computational model of graph transformation. The language supports object-oriented programming since its graphs are hierarchically structured. Typing allows the shape of these graphs to be specified recursively in order to increase program security. Thanks to its genericity, DiaPlan allows to implement systems that represent and manipulate data in arbitrary diagram notations. The environment for the language exploits the diagram editor generator DiaGen for providing genericity, and for implementing its user interface and type checker.<|reference_end|> | arxiv | @article{hoffmann2000towards,
title={Towards rule-based visual programming of generic visual systems},
author={Berthold Hoffmann, Mark Minas},
journal={arXiv preprint arXiv:cs/0010016},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010016},
primaryClass={cs.PL}
} | hoffmann2000towards |
arxiv-669707 | cs/0010017 | Generalization of a 3-D resonator model for the simulation of spherical enclosures | <|reference_start|>Generalization of a 3-D resonator model for the simulation of spherical enclosures: A rectangular enclosure has such an even distribution of resonances that it can be accurately and efficiently modelled using a feedback delay network. Conversely, a non rectangular shape such as a sphere has a distribution of resonances that challenges the construction of an efficient model. This work proposes an extension of the already known feedback delay network structure to model the resonant properties of a sphere. A specific frequency distribution of resonances can be approximated, up to a certain frequency, by inserting an allpass filter of moderate order after each delay line of a feedback delay network. The structure used for rectangular boxes is therefore augmented with a set of allpass filters allowing parametric control over the enclosure size and the boundary properties. This work was motivated by informal listening tests which have shown that it is possible to identify a basic shape just from the distribution of its audible resonances.<|reference_end|> | arxiv | @article{rocchesso2000generalization,
title={Generalization of a 3-D resonator model for the simulation of spherical
enclosures},
author={Davide Rocchesso and Pierre Dutilleux},
journal={arXiv preprint arXiv:cs/0010017},
year={2000},
doi={10.1155/S1110865701000105},
archivePrefix={arXiv},
eprint={cs/0010017},
primaryClass={cs.SD}
} | rocchesso2000generalization |
arxiv-669708 | cs/0010018 | Internet Packet Filter Management and Rectangle Geometry | <|reference_start|>Internet Packet Filter Management and Rectangle Geometry: We consider rule sets for internet packet routing and filtering, where each rule consists of a range of source addresses, a range of destination addresses, a priority, and an action. A given packet should be handled by the action from the maximum priority rule that matches its source and destination. We describe new data structures for quickly finding the rule matching an incoming packet, in near-linear space, and a new algorithm for determining whether a rule set contains any conflicts, in time O(n^{3/2}).<|reference_end|> | arxiv | @article{eppstein2000internet,
title={Internet Packet Filter Management and Rectangle Geometry},
author={David Eppstein and S. Muthukrishnan},
journal={arXiv preprint arXiv:cs/0010018},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010018},
primaryClass={cs.CG cs.NI}
} | eppstein2000internet |
arxiv-669709 | cs/0010019 | The Random Oracle Methodology, Revisited | <|reference_start|>The Random Oracle Methodology, Revisited: We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.<|reference_end|> | arxiv | @article{canetti2000the,
title={The Random Oracle Methodology, Revisited},
author={Ran Canetti, Oded Goldreich and Shai Halevi},
journal={In Proceedings of 30th Annual ACM Symposium on the Theory of
Computing, pages 209-218, May 1998, ACM},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010019},
primaryClass={cs.CR}
} | canetti2000the |
arxiv-669710 | cs/0010020 | Using existing systems to supplement small amounts of annotated grammatical relations training data | <|reference_start|>Using existing systems to supplement small amounts of annotated grammatical relations training data: Grammatical relationships (GRs) form an important level of natural language processing, but different sets of GRs are useful for different purposes. Therefore, one may often only have time to obtain a small training corpus with the desired GR annotations. To boost the performance from using such a small training corpus on a transformation rule learner, we use existing systems that find related types of annotations.<|reference_end|> | arxiv | @article{yeh2000using,
title={Using existing systems to supplement small amounts of annotated
grammatical relations training data},
author={Alexander Yeh},
journal={38th Annual Meeting of the Association for Computational
Linguistics (ACL-2000), pages 126-132, Hong Kong, October, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010020},
primaryClass={cs.CL}
} | yeh2000using |
arxiv-669711 | cs/0010021 | Towards Understanding the Predictability of Stock Markets from the Perspective of Computational Complexity | <|reference_start|>Towards Understanding the Predictability of Stock Markets from the Perspective of Computational Complexity: This paper initiates a study into the century-old issue of market predictability from the perspective of computational complexity. We develop a simple agent-based model for a stock market where the agents are traders equipped with simple trading strategies, and their trades together determine the stock prices. Computer simulations show that a basic case of this model is already capable of generating price graphs which are visually similar to the recent price movements of high tech stocks. In the general model, we prove that if there are a large number of traders but they employ a relatively small number of strategies, then there is a polynomial-time algorithm for predicting future price movements with high accuracy. On the other hand, if the number of strategies is large, market prediction becomes complete in two new computational complexity classes CPP and BCPP, which are between P^NP[O(log n)] and PP. These computational completeness results open up a novel possibility that the price graph of an actual stock could be sufficiently deterministic for various prediction goals but appear random to all polynomial-time prediction algorithms.<|reference_end|> | arxiv | @article{aspnes2000towards,
title={Towards Understanding the Predictability of Stock Markets from the
Perspective of Computational Complexity},
author={James Aspnes, David F. Fischer, Michael J. Fischer, Ming-Yang Kao,
Alok Kumar},
journal={arXiv preprint arXiv:cs/0010021},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010021},
primaryClass={cs.CE cs.CC}
} | aspnes2000towards |
arxiv-669712 | cs/0010022 | Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model | <|reference_start|>Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model: We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of parity functions that depend on only the first O(log n log log n) bits of input. This is the first known instance of an efficient noise-tolerant algorithm for a concept class that is provably not learnable in the Statistical Query model of Kearns. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k by n codes in the presence of random noise for the case of k = c log n loglog n for some c > 0. (The case of k = O(log n) is trivial since one can just individually check each of the 2^k possible messages and choose the one that yields the closest codeword.) A natural extension of the statistical query model is to allow queries about statistical properties that involve t-tuples of examples (as opposed to single examples). The second result of this paper is to show that any class of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions.<|reference_end|> | arxiv | @article{blum2000noise-tolerant,
title={Noise-Tolerant Learning, the Parity Problem, and the Statistical Query
Model},
author={Avrim Blum, Adam Kalai, Hal Wasserman},
journal={arXiv preprint arXiv:cs/0010022},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010022},
primaryClass={cs.LG cs.AI cs.DS}
} | blum2000noise-tolerant |
arxiv-669713 | cs/0010023 | Oracle Complexity and Nontransitivity in Pattern Recognition | <|reference_start|>Oracle Complexity and Nontransitivity in Pattern Recognition: Different mathematical models of recognition processes are known. In the present paper we consider a pattern recognition algorithm as an oracle computation on a Turing machine. Such point of view seems to be useful in pattern recognition as well as in recursion theory. Use of recursion theory in pattern recognition shows connection between a recognition algorithm comparison problem and complexity problems of oracle computation. That is because in many cases we can take into account only the number of sign computations or in other words volume of oracle information needed. Therefore, the problem of recognition algorithm preference can be formulated as a complexity optimization problem of oracle computation. Furthermore, introducing a certain "natural" preference relation on a set of recognizing algorithms, we discover it to be nontransitive. This relates to the well known nontransitivity paradox in probability theory. Keywords: Pattern Recognition, Recursion Theory, Nontransitivity, Preference Relation<|reference_end|> | arxiv | @article{bulitko2000oracle,
title={Oracle Complexity and Nontransitivity in Pattern Recognition},
author={Vadim Bulitko},
journal={arXiv preprint arXiv:cs/0010023},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010023},
primaryClass={cs.CC cs.AI cs.CV cs.DS}
} | bulitko2000oracle |
arxiv-669714 | cs/0010024 | Exploring automatic word sense disambiguation with decision lists and the Web | <|reference_start|>Exploring automatic word sense disambiguation with decision lists and the Web: The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail.<|reference_end|> | arxiv | @article{agirre2000exploring,
title={Exploring automatic word sense disambiguation with decision lists and
the Web},
author={Eneko Agirre and David Martinez},
journal={Procedings of the COLING 2000 Workshop on Semantic Annotation and
Intelligent Content},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010024},
primaryClass={cs.CL}
} | agirre2000exploring |
arxiv-669715 | cs/0010025 | Extraction of semantic relations from a Basque monolingual dictionary using Constraint Grammar | <|reference_start|>Extraction of semantic relations from a Basque monolingual dictionary using Constraint Grammar: This paper deals with the exploitation of dictionaries for the semi-automatic construction of lexicons and lexical knowledge bases. The final goal of our research is to enrich the Basque Lexical Database with semantic information such as senses, definitions, semantic relations, etc., extracted from a Basque monolingual dictionary. The work here presented focuses on the extraction of the semantic relations that best characterise the headword, that is, those of synonymy, antonymy, hypernymy, and other relations marked by specific relators and derivation. All nominal, verbal and adjectival entries were treated. Basque uses morphological inflection to mark case, and therefore semantic relations have to be inferred from suffixes rather than from prepositions. Our approach combines a morphological analyser and surface syntax parsing (based on Constraint Grammar), and has proven very successful for highly inflected languages such as Basque. Both the effort to write the rules and the actual processing time of the dictionary have been very low. At present we have extracted 42,533 relations, leaving only 2,943 (9%) definitions without any extracted relation. The error rate is extremely low, as only 2.2% of the extracted relations are wrong.<|reference_end|> | arxiv | @article{agirre2000extraction,
title={Extraction of semantic relations from a Basque monolingual dictionary
using Constraint Grammar},
author={Eneko Agirre, Olatz Ansa, Xabier Arregi, Xabier Artola, Arantza Diaz
de Ilarraza, Mikel Lersundi, David Martinez, Kepa Sarasola, Ruben Urizar},
journal={Proceedings of EURALEX 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010025},
primaryClass={cs.CL}
} | agirre2000extraction |
arxiv-669716 | cs/0010026 | Enriching very large ontologies using the WWW | <|reference_start|>Enriching very large ontologies using the WWW: This paper explores the possibility to exploit text on the world wide web in order to enrich the concepts in existing ontologies. First, a method to retrieve documents from the WWW related to a concept is described. These document collections are used 1) to construct topic signatures (lists of topically related words) for each concept in WordNet, and 2) to build hierarchical clusters of the concepts (the word senses) that lexicalize a given word. The overall goal is to overcome two shortcomings of WordNet: the lack of topical links among concepts, and the proliferation of senses. Topic signatures are validated on a word sense disambiguation task with good results, which are improved when the hierarchical clusters are used.<|reference_end|> | arxiv | @article{agirre2000enriching,
title={Enriching very large ontologies using the WWW},
author={Eneko Agirre, Olatz Ansa, Eduard Hovy, David Martinez},
journal={Procedings of the ECAI 2000 Workshop on Ontology Learning},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010026},
primaryClass={cs.CL}
} | agirre2000enriching |
arxiv-669717 | cs/0010027 | One Sense per Collocation and Genre/Topic Variations | <|reference_start|>One Sense per Collocation and Genre/Topic Variations: This paper revisits the one sense per collocation hypothesis using fine-grained sense distinctions and two different corpora. We show that the hypothesis is weaker for fine-grained sense distinctions (70% vs. 99% reported earlier on 2-way ambiguities). We also show that one sense per collocation does hold across corpora, but that collocations vary from one corpus to the other, following genre and topic variations. This explains the low results when performing word sense disambiguation across corpora. In fact, we demonstrate that when two independent corpora share a related genre/topic, the word sense disambiguation results would be better. Future work on word sense disambiguation will have to take into account genre and topic as important parameters on their models.<|reference_end|> | arxiv | @article{martinez2000one,
title={One Sense per Collocation and Genre/Topic Variations},
author={David Martinez and Eneko Agirre},
journal={Proceedings of the Joint SIGDAT Conference on Empirical Methods in
Natural Language Processing and Very Large Corpora 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010027},
primaryClass={cs.CL}
} | martinez2000one |
arxiv-669718 | cs/0010028 | Sequence-Based Abstract Interpretation of Prolog | <|reference_start|>Sequence-Based Abstract Interpretation of Prolog: Many abstract interpretation frameworks and analyses for Prolog have been proposed, which seek to extract information useful for program optimization. Although motivated by practical considerations, notably making Prolog competitive with imperative languages, such frameworks fail to capture some of the control structures of existing implementations of the language. In this paper we propose a novel framework for the abstract interpretation of Prolog which handles the depth-first search rule and the cut operator. It relies on the notion of substitution sequence to model the result of the execution of a goal. The framework consists of (i) a denotational concrete semantics, (ii) a safe abstraction of the concrete semantics defined in terms of a class of post-fixpoints, and (iii) a generic abstract interpretation algorithm. We show that traditional abstract domains of substitutions may easily be adapted to the new framework, and provide experimental evidence of the effectiveness of our approach. We also show that previous work on determinacy analysis, that was not expressible by existing abstract interpretation frameworks, can be seen as an instance of our framework.<|reference_end|> | arxiv | @article{charlier2000sequence-based,
title={Sequence-Based Abstract Interpretation of Prolog},
author={Baudouin Le Charlier, Sabina Rossi and Pascal Van Hentenryck},
journal={arXiv preprint arXiv:cs/0010028},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010028},
primaryClass={cs.LO cs.PL}
} | charlier2000sequence-based |
arxiv-669719 | cs/0010029 | Using Modes to Ensure Subject Reduction for Typed Logic Programs with Subtyping | <|reference_start|>Using Modes to Ensure Subject Reduction for Typed Logic Programs with Subtyping: We consider a general prescriptive type system with parametric polymorphism and subtyping for logic programs. The property of subject reduction expresses the consistency of the type system w.r.t. the execution model: if a program is "well-typed", then all derivations starting in a "well-typed" goal are again "well-typed". It is well-established that without subtyping, this property is readily obtained for logic programs w.r.t. their standard (untyped) execution model. Here we give syntactic conditions that ensure subject reduction also in the presence of general subtyping relations between type constructors. The idea is to consider logic programs with a fixed dataflow, given by modes.<|reference_end|> | arxiv | @article{smaus2000using,
title={Using Modes to Ensure Subject Reduction for Typed Logic Programs with
Subtyping},
author={Jan-Georg Smaus, Francois Fages, Pierre Deransart},
journal={arXiv preprint arXiv:cs/0010029},
year={2000},
number={RR-4020},
archivePrefix={arXiv},
eprint={cs/0010029},
primaryClass={cs.LO}
} | smaus2000using |
arxiv-669720 | cs/0010030 | Reduction of Intermediate Alphabets in Finite-State Transducer Cascades | <|reference_start|>Reduction of Intermediate Alphabets in Finite-State Transducer Cascades: This article describes an algorithm for reducing the intermediate alphabets in cascades of finite-state transducers (FSTs). Although the method modifies the component FSTs, there is no change in the overall relation described by the whole cascade. No additional information or special algorithm, that could decelerate the processing of input, is required at runtime. Two examples from Natural Language Processing are used to illustrate the effect of the algorithm on the sizes of the FSTs and their alphabets. With some FSTs the number of arcs and symbols shrank considerably.<|reference_end|> | arxiv | @article{kempe2000reduction,
title={Reduction of Intermediate Alphabets in Finite-State Transducer Cascades},
author={Andre Kempe},
journal={Proc. TALN 2000, pp. 207-215, Lausanne, Switzerland. October 16-18},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010030},
primaryClass={cs.CL}
} | kempe2000reduction |
arxiv-669721 | cs/0010031 | Opportunity Cost Algorithms for Combinatorial Auctions | <|reference_start|>Opportunity Cost Algorithms for Combinatorial Auctions: Two general algorithms based on opportunity costs are given for approximating a revenue-maximizing set of bids an auctioneer should accept, in a combinatorial auction in which each bidder offers a price for some subset of the available goods and the auctioneer can only accept non-intersecting bids. Since this problem is difficult even to approximate in general, the algorithms are most useful when the bids are restricted to be connected node subsets of an underlying object graph that represents which objects are relevant to each other. The approximation ratios of the algorithms depend on structural properties of this graph and are small constants for many interesting families of object graphs. The running times of the algorithms are linear in the size of the bid graph, which describes the conflicts between bids. Extensions of the algorithms allow for efficient processing of additional constraints, such as budget constraints that associate bids with particular bidders and limit how many bids from a particular bidder can be accepted.<|reference_end|> | arxiv | @article{akcoglu2000opportunity,
title={Opportunity Cost Algorithms for Combinatorial Auctions},
author={Karhan Akcoglu, James Aspnes, Bhaskar DasGupta, and Ming-Yang Kao},
journal={arXiv preprint arXiv:cs/0010031},
year={2000},
number={DIMACS TR 2000-27},
archivePrefix={arXiv},
eprint={cs/0010031},
primaryClass={cs.CE cs.DS}
} | akcoglu2000opportunity |
arxiv-669722 | cs/0010032 | Super Logic Programs | <|reference_start|>Super Logic Programs: The Autoepistemic Logic of Knowledge and Belief (AELB) is a powerful nonmonotic formalism introduced by Teodor Przymusinski in 1994. In this paper, we specialize it to a class of theories called `super logic programs'. We argue that these programs form a natural generalization of standard logic programs. In particular, they allow disjunctions and default negation of arbibrary positive objective formulas. Our main results are two new and powerful characterizations of the static semant ics of these programs, one syntactic, and one model-theoretic. The syntactic fixed point characterization is much simpler than the fixed point construction of the static semantics for arbitrary AELB theories. The model-theoretic characterization via Kripke models allows one to construct finite representations of the inherently infinite static expansions. Both characterizations can be used as the basis of algorithms for query answering under the static semantics. We describe a query-answering interpreter for super programs which we developed based on the model-theoretic characterization and which is available on the web.<|reference_end|> | arxiv | @article{brass2000super,
title={Super Logic Programs},
author={Stefan Brass, Juergen Dix, Teodor C. Przymusinski},
journal={arXiv preprint arXiv:cs/0010032},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010032},
primaryClass={cs.AI cs.LO}
} | brass2000super |
arxiv-669723 | cs/0010033 | A Formal Framework for Linguistic Annotation (revised version) | <|reference_start|>A Formal Framework for Linguistic Annotation (revised version): `Linguistic annotation' covers any descriptive or analytic notations applied to raw language data. The basic data may be in the form of time functions - audio, video and/or physiological recordings - or it may be textual. The added notations may include transcriptions of all sorts (from phonetic features to discourse structures), part-of-speech and sense tagging, syntactic analysis, `named entity' identification, co-reference annotation, and so on. While there are several ongoing efforts to provide formats and tools for such annotations and to publish annotated linguistic databases, the lack of widely accepted standards is becoming a critical problem. Proposed standards, to the extent they exist, have focused on file formats. This paper focuses instead on the logical structure of linguistic annotations. We survey a wide variety of existing annotation formats and demonstrate a common conceptual core, the annotation graph. This provides a formal framework for constructing, maintaining and searching linguistic annotations, while remaining consistent with many alternative data structures and file formats.<|reference_end|> | arxiv | @article{bird2000a,
title={A Formal Framework for Linguistic Annotation (revised version)},
author={Steven Bird and Mark Liberman},
journal={arXiv preprint arXiv:cs/0010033},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010033},
primaryClass={cs.CL cs.DB cs.DS}
} | bird2000a |
arxiv-669724 | cs/0010034 | Static Analysis Techniques for Equational Logic Programming | <|reference_start|>Static Analysis Techniques for Equational Logic Programming: An equational logic program is a set of directed equations or rules, which are used to compute in the obvious way (by replacing equals with ``simpler'' equals). We present static analysis techniques for efficient equational logic programming, some of which have been implemented in $LR^2$, a laboratory for developing and evaluating fast, efficient, and practical rewriting techniques. Two novel features of $LR^2$ are that non-left-linear rules are allowed in most contexts and it has a tabling option based on the congruence-closure based algorithm to compute normal forms. Although, the focus of this research is on the tabling approach some of the techniques are applicable to the untabled approach as well. Our presentation is in the context of $LR^2$, which is an interpreter, but some of the techniques apply to compilation as well.<|reference_end|> | arxiv | @article{verma2000static,
title={Static Analysis Techniques for Equational Logic Programming},
author={Rakesh M. Verma},
journal={arXiv preprint arXiv:cs/0010034},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010034},
primaryClass={cs.LO cs.PL}
} | verma2000static |
arxiv-669725 | cs/0010035 | Proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000) | <|reference_start|>Proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000): Over the past decades automated debugging has seen major achievements. However, as debugging is by necessity attached to particular programming paradigms, the results are scattered. The aims of the workshop are to gather common themes and solutions across programming communities, and to cross-fertilize ideas. AADEBUG 2000 in Munich follows AADEBUG'93 in Linkoeping, Sweden; AADEBUG'95 in Saint Malo, France; AADEBUG'97 in Linkoeping, Sweden.<|reference_end|> | arxiv | @article{ducasse2000proceedings,
title={Proceedings of the Fourth International Workshop on Automated Debugging
(AADEBUG 2000)},
author={M. Ducasse (IRISA/INSA de Rennes)},
journal={arXiv preprint arXiv:cs/0010035},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010035},
primaryClass={cs.SE cs.PL}
} | ducasse2000proceedings |
arxiv-669726 | cs/0010036 | Lattice Structure and Convergence of a Game of Cards | <|reference_start|>Lattice Structure and Convergence of a Game of Cards: This paper is devoted to the study of the dynamics of a discrete system related to some self stabilizing protocol on a ring of processors.<|reference_end|> | arxiv | @article{goles2000lattice,
title={Lattice Structure and Convergence of a Game of Cards},
author={Eric Goles (Univ. de Chile), Michel Morvan (Univ. Paris 7 and IUF), Ha
Duong Phan (Univ. Paris 7)},
journal={arXiv preprint arXiv:cs/0010036},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010036},
primaryClass={cs.DM}
} | goles2000lattice |
arxiv-669727 | cs/0010037 | On the relationship between fuzzy logic and four-valued relevance logic | <|reference_start|>On the relationship between fuzzy logic and four-valued relevance logic: In fuzzy propositional logic, to a proposition a partial truth in [0,1] is assigned. It is well known that under certain circumstances, fuzzy logic collapses to classical logic. In this paper, we will show that under dual conditions, fuzzy logic collapses to four-valued (relevance) logic, where propositions have truth-value true, false, unknown, or contradiction. As a consequence, fuzzy entailment may be considered as ``in between'' four-valued (relevance) entailment and classical entailment.<|reference_end|> | arxiv | @article{straccia2000on,
title={On the relationship between fuzzy logic and four-valued relevance logic},
author={Umberto Straccia},
journal={arXiv preprint arXiv:cs/0010037},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010037},
primaryClass={cs.AI}
} | straccia2000on |
arxiv-669728 | cs/0010038 | Collecting Graphical Abstract Views of Mercury Program Executions | <|reference_start|>Collecting Graphical Abstract Views of Mercury Program Executions: A program execution monitor is a program that collects and abstracts information about program executions. The "collect" operator is a high level, general purpose primitive which lets users implement their own monitors. "Collect" is built on top of the Mercury trace. In previous work, we have demonstrated how this operator can be used to efficiently collect various kinds of statistics about Mercury program executions. In this article we further demonstrate the expressive power and effectiveness of "collect" by providing more monitor examples. In particular, we show how to implement monitors that generate graphical abstractions of program executions such as proof trees, control flow graphs and dynamic call graphs. We show how those abstractions can be easily modified and adapted, since those monitors only require several dozens of lines of code. Those abstractions are intended to serve as front-ends of software visualization tools. Although "collect" is currently implemented on top of the Mercury trace, none of its underlying concepts depend of Mercury and it can be implemented on top of any tracer for any programming language.<|reference_end|> | arxiv | @article{jahier2000collecting,
title={Collecting Graphical Abstract Views of Mercury Program Executions},
author={Erwan Jahier (IRISA/INSA de Rennes)},
journal={arXiv preprint arXiv:cs/0010038},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010038},
primaryClass={cs.SE cs.PL}
} | jahier2000collecting |
arxiv-669729 | cs/0010039 | Computational Geometry Column 40 | <|reference_start|>Computational Geometry Column 40: It has recently been established by Below, De Loera, and Richter-Gebert that finding a minimum size (or even just a small) triangulation of a convex polyhedron is NP-complete. Their 3SAT-reduction proof is discussed.<|reference_end|> | arxiv | @article{o'rourke2000computational,
title={Computational Geometry Column 40},
author={Joseph O'Rourke},
journal={arXiv preprint arXiv:cs/0010039},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010039},
primaryClass={cs.CG}
} | o'rourke2000computational |
arxiv-669730 | cs/0011001 | Utilizing the World Wide Web as an Encyclopedia: Extracting Term Descriptions from Semi-Structured Texts | <|reference_start|>Utilizing the World Wide Web as an Encyclopedia: Extracting Term Descriptions from Semi-Structured Texts: In this paper, we propose a method to extract descriptions of technical terms from Web pages in order to utilize the World Wide Web as an encyclopedia. We use linguistic patterns and HTML text structures to extract text fragments containing term descriptions. We also use a language model to discard extraneous descriptions, and a clustering method to summarize resultant descriptions. We show the effectiveness of our method by way of experiments.<|reference_end|> | arxiv | @article{fujii2000utilizing,
title={Utilizing the World Wide Web as an Encyclopedia: Extracting Term
Descriptions from Semi-Structured Texts},
author={Atsushi Fujii and Tetsuya Ishikawa},
journal={Proceedings of the 38th Annual Meeting of the Association for
Computational Linguistics (ACL-2000), pp.488-495, Oct. 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011001},
primaryClass={cs.CL}
} | fujii2000utilizing |
arxiv-669731 | cs/0011002 | A Novelty-based Evaluation Method for Information Retrieval | <|reference_start|>A Novelty-based Evaluation Method for Information Retrieval: In information retrieval research, precision and recall have long been used to evaluate IR systems. However, given that a number of retrieval systems resembling one another are already available to the public, it is valuable to retrieve novel relevant documents, i.e., documents that cannot be retrieved by those existing systems. In view of this problem, we propose an evaluation method that favors systems retrieving as many novel documents as possible. We also used our method to evaluate systems that participated in the IREX workshop.<|reference_end|> | arxiv | @article{fujii2000a,
title={A Novelty-based Evaluation Method for Information Retrieval},
author={Atsushi Fujii and Tetsuya Ishikawa},
journal={Proceedings of the 2nd International Conference on Language
Resources and Evaluation (LREC-2000), pp.1637-1641, Jun. 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011002},
primaryClass={cs.CL}
} | fujii2000a |
arxiv-669732 | cs/0011003 | Applying Machine Translation to Two-Stage Cross-Language Information Retrieval | <|reference_start|>Applying Machine Translation to Two-Stage Cross-Language Information Retrieval: Cross-language information retrieval (CLIR), where queries and documents are in different languages, needs a translation of queries and/or documents, so as to standardize both of them into a common representation. For this purpose, the use of machine translation is an effective approach. However, computational cost is prohibitive in translating large-scale document collections. To resolve this problem, we propose a two-stage CLIR method. First, we translate a given query into the document language, and retrieve a limited number of foreign documents. Second, we machine translate only those documents into the user language, and re-rank them based on the translation result. We also show the effectiveness of our method by way of experiments using Japanese queries and English technical documents.<|reference_end|> | arxiv | @article{fujii2000applying,
title={Applying Machine Translation to Two-Stage Cross-Language Information
Retrieval},
author={Atsushi Fujii and Tetsuya Ishikawa},
journal={Proceedings of the 4th Conference of the Association for Machine
Translation in the Americas (AMTA-2000), pp.13-24, Oct. 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011003},
primaryClass={cs.CL}
} | fujii2000applying |
arxiv-669733 | cs/0011004 | Anonymous Oblivious Transfer | <|reference_start|>Anonymous Oblivious Transfer: In this short note we want to introduce {\em anonymous oblivious transfer} a new cryptographic primitive which can be proven to be strictly more powerful than oblivious transfer. We show that all functions can be robustly realized by multi party protocols with {\em anonymous oblivious transfer}. No assumption about possible collusions of cheaters or disruptors have to be made. Furthermore we shortly discuss how to realize anonymous oblivious transfer with oblivious broadcast or by quantum cryptography. The protocol of anonymous oblivious transfer was inspired by a quantum protocol: the anonymous quantum channel.<|reference_end|> | arxiv | @article{mueller-quade2000anonymous,
title={Anonymous Oblivious Transfer},
author={J. Mueller-Quade and H. Imai},
journal={arXiv preprint arXiv:cs/0011004},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011004},
primaryClass={cs.CR}
} | mueller-quade2000anonymous |
arxiv-669734 | cs/0011005 | Non-intrusive on-the-fly data race detection using execution replay | <|reference_start|>Non-intrusive on-the-fly data race detection using execution replay: This paper presents a practical solution for detecting data races in parallel programs. The solution consists of a combination of execution replay (RecPlay) with automatic on-the-fly data race detection. This combination enables us to perform the data race detection on an unaltered execution (almost no probe effect). Furthermore, the usage of multilevel bitmaps and snooped matrix clocks limits the amount of memory used. As the record phase of RecPlay is highly efficient, there is no need to switch it off, hereby eliminating the possibility of Heisenbugs because tracing can be left on all the time.<|reference_end|> | arxiv | @article{ronsse2000non-intrusive,
title={Non-intrusive on-the-fly data race detection using execution replay},
author={Michiel Ronsse and Koen De Bosschere},
journal={arXiv preprint arXiv:cs/0011005},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011005},
primaryClass={cs.SE cs.PL}
} | ronsse2000non-intrusive |
arxiv-669735 | cs/0011006 | Execution replay and debugging | <|reference_start|>Execution replay and debugging: As most parallel and distributed programs are internally non-deterministic -- consecutive runs with the same input might result in a different program flow -- vanilla cyclic debugging techniques as such are useless. In order to use cyclic debugging tools, we need a tool that records information about an execution so that it can be replayed for debugging. Because recording information interferes with the execution, we must limit the amount of information and keep the processing of the information fast. This paper contains a survey of existing execution replay techniques and tools.<|reference_end|> | arxiv | @article{ronsse2000execution,
title={Execution replay and debugging},
author={Michiel Ronsse, Koen De Bosschere and Jacques Chassin de Kergommeaux},
journal={arXiv preprint arXiv:cs/0011006},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011006},
primaryClass={cs.SE cs.PL}
} | ronsse2000execution |
arxiv-669736 | cs/0011007 | Tree-gram Parsing: Lexical Dependencies and Structural Relations | <|reference_start|>Tree-gram Parsing: Lexical Dependencies and Structural Relations: This paper explores the kinds of probabilistic relations that are important in syntactic disambiguation. It proposes that two widely used kinds of relations, lexical dependencies and structural relations, have complementary disambiguation capabilities. It presents a new model based on structural relations, the Tree-gram model, and reports experiments showing that structural relations should benefit from enrichment by lexical dependencies.<|reference_end|> | arxiv | @article{sima'an2000tree-gram,
title={Tree-gram Parsing: Lexical Dependencies and Structural Relations},
author={Khalil Sima'an},
journal={arXiv preprint arXiv:cs/0011007},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011007},
primaryClass={cs.CL cs.AI cs.HC}
} | sima'an2000tree-gram |
arxiv-669737 | cs/0011008 | A Lambda-Calculus with letrec, case, constructors and non-determinism | <|reference_start|>A Lambda-Calculus with letrec, case, constructors and non-determinism: A non-deterministic call-by-need lambda-calculus \calc with case, constructors, letrec and a (non-deterministic) erratic choice, based on rewriting rules is investigated. A standard reduction is defined as a variant of left-most outermost reduction. The semantics is defined by contextual equivalence of expressions instead of using $\alpha\beta(\eta)$-equivalence. It is shown that several program transformations are correct, for example all (deterministic) rules of the calculus, and in addition the rules for garbage collection, removing indirections and unique copy. This shows that the combination of a context lemma and a meta-rewriting on reductions using complete sets of commuting (forking, resp.) diagrams is a useful and successful method for providing a semantics of a functional programming language and proving correctness of program transformations.<|reference_end|> | arxiv | @article{schmidt-schauß2000a,
title={A Lambda-Calculus with letrec, case, constructors and non-determinism},
author={Manfred Schmidt-Schau{\ss} and Michael Huber},
journal={arXiv preprint arXiv:cs/0011008},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011008},
primaryClass={cs.PL cs.AI cs.SC}
} | schmidt-schauß2000a |
arxiv-669738 | cs/0011009 | Small Maximal Independent Sets and Faster Exact Graph Coloring | <|reference_start|>Small Maximal Independent Sets and Faster Exact Graph Coloring: We show that, for any n-vertex graph G and integer parameter k, there are at most 3^{4k-n}4^{n-3k} maximal independent sets I \subset G with |I| <= k, and that all such sets can be listed in time O(3^{4k-n} 4^{n-3k}). These bounds are tight when n/4 <= k <= n/3. As a consequence, we show how to compute the exact chromatic number of a graph in time O((4/3 + 3^{4/3}/4)^n) ~= 2.4150^n, improving a previous O((1+3^{1/3})^n) ~= 2.4422^n algorithm of Lawler (1976).<|reference_end|> | arxiv | @article{eppstein2000small,
title={Small Maximal Independent Sets and Faster Exact Graph Coloring},
author={David Eppstein},
journal={J. Graph Algorithms & Applications 7(2):131-140, 2003},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011009},
primaryClass={cs.DS math.CO}
} | eppstein2000small |
arxiv-669739 | cs/0011010 | Extension Language Automation of Embedded System Debugging | <|reference_start|>Extension Language Automation of Embedded System Debugging: Embedded systems contain several layers of target processing abstraction. These layers include electronic circuit, binary machine code, mnemonic assembly code, and high-level procedural and object-oriented abstractions. Physical and temporal constraints and artifacts within physically embedded systems make it impossible for software engineers to operate at a single layer of processor abstraction. The Luxdbg embedded system debugger exposes these layers to debugger users, and it adds an additional layer, the extension language layer, that allows users to extend both the debugger and its target processor capabilities. Tcl is Luxdbg's extension language. Luxdbg users can apply Tcl to automate interactive debugging steps, to redirect and to interconnect target processor input-output facilities, to schedule multiple processor execution, to log and to react to target processing exceptions, and to automate target system testing. Inclusion of an extension language like Tcl in a debugger promises additional advantages for distributed debugging, where debuggers can pass extension language expressions across computer networks.<|reference_end|> | arxiv | @article{parson2000extension,
title={Extension Language Automation of Embedded System Debugging},
author={Dale Parson, Bryan Schlieder, Paul Beatty},
journal={arXiv preprint arXiv:cs/0011010},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011010},
primaryClass={cs.SE cs.PL}
} | parson2000extension |
arxiv-669740 | cs/0011011 | Formal Properties of XML Grammars and Languages | <|reference_start|>Formal Properties of XML Grammars and Languages: XML documents are described by a document type definition (DTD). An XML-grammar is a formal grammar that captures the syntactic features of a DTD. We investigate properties of this family of grammars. We show that every XML-language basically has a unique XML-grammar. We give two characterizations of languages generated by XML-grammars, one is set-theoretic, the other is by a kind of saturation property. We investigate decidability problems and prove that some properties that are undecidable for general context-free languages become decidable for XML-languages. We also characterize those XML-grammars that generate regular XML-languages.<|reference_end|> | arxiv | @article{berstel2000formal,
title={Formal Properties of XML Grammars and Languages},
author={Jean Berstel and Luc Boasson},
journal={arXiv preprint arXiv:cs/0011011},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011011},
primaryClass={cs.DM cs.CL}
} | berstel2000formal |
arxiv-669741 | cs/0011012 | Causes and Explanations: A Structural-Model Approach, Part I: Causes | <|reference_start|>Causes and Explanations: A Structural-Model Approach, Part I: Causes: We propose a new definition of actual cause, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account.<|reference_end|> | arxiv | @article{halpern2000causes,
title={Causes and Explanations: A Structural-Model Approach, Part I: Causes},
author={Joseph Y. Halpern and Judea Pearl},
journal={arXiv preprint arXiv:cs/0011012},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011012},
primaryClass={cs.AI}
} | halpern2000causes |
arxiv-669742 | cs/0011013 | Transformation-Based Bottom-Up Computation of the Well-Founded Model | <|reference_start|>Transformation-Based Bottom-Up Computation of the Well-Founded Model: We present a framework for expressing bottom-up algorithms to compute the well-founded model of non-disjunctive logic programs. Our method is based on the notion of conditional facts and elementary program transformations studied by Brass and Dix for disjunctive programs. However, even if we restrict their framework to nondisjunctive programs, their residual program can grow to exponential size, whereas for function-free programs our program remainder is always polynomial in the size of the extensional database (EDB). We show that particular orderings of our transformations (we call them strategies) correspond to well-known computational methods like the alternating fixpoint approach, the well-founded magic sets method and the magic alternating fixpoint procedure. However, due to the confluence of our calculi, we come up with computations of the well-founded model that are provably better than these methods. In contrast to other approaches, our transformation method treats magic set transformed programs correctly, i.e. it always computes a relevant part of the well-founded model of the original program.<|reference_end|> | arxiv | @article{brass2000transformation-based,
title={Transformation-Based Bottom-Up Computation of the Well-Founded Model},
author={Stefan Brass, Juergen Dix, Burkhard Freitag, Ulrich Zukowski},
journal={arXiv preprint arXiv:cs/0011013},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011013},
primaryClass={cs.LO}
} | brass2000transformation-based |
arxiv-669743 | cs/0011014 | Chip-level CMP Modeling and Smart Dummy for HDP and Conformal CVD Films | <|reference_start|>Chip-level CMP Modeling and Smart Dummy for HDP and Conformal CVD Films: Chip-level CMP modeling is investigated to obtain the post-CMP film profile thickness across a die from its design layout file and a few film deposition and CMP parameters. The work covers both HDP and conformal CVD film. The experimental CMP results agree well with the modeled results. Different algorithms for filling of dummy structure are compared. A smart algorithm for dummy filling is presented, which achieves maximal pattern-density uniformity and CMP planarity.<|reference_end|> | arxiv | @article{liu2000chip-level,
title={Chip-level CMP Modeling and Smart Dummy for HDP and Conformal CVD Films},
author={George Yong Liu (1), Ray F. Zhang (1), Kelvin Hsu (1) and Lawrence
Camilletti (2) ((1) CMP Technology, Inc., (2) Conexant Systems Inc.)},
journal={Proceedings of CMPMIC 99, pp120-127},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011014},
primaryClass={cs.CE}
} | liu2000chip-level |
arxiv-669744 | cs/0011015 | A Decomposition Theorem for Maximum Weight Bipartite Matchings | <|reference_start|>A Decomposition Theorem for Maximum Weight Bipartite Matchings: Let G be a bipartite graph with positive integer weights on the edges and without isolated nodes. Let n, N and W be the node count, the largest edge weight and the total weight of G. Let k(x,y) be log(x)/log(x^2/y). We present a new decomposition theorem for maximum weight bipartite matchings and use it to design an O(sqrt(n)W/k(n,W/N))-time algorithm for computing a maximum weight matching of G. This algorithm bridges a long-standing gap between the best known time complexity of computing a maximum weight matching and that of computing a maximum cardinality matching. Given G and a maximum weight matching of G, we can further compute the weight of a maximum weight matching of G-{u} for all nodes u in O(W) time.<|reference_end|> | arxiv | @article{kao2000a,
title={A Decomposition Theorem for Maximum Weight Bipartite Matchings},
author={Ming-Yang Kao, Tak-Wah Lam, Wing-Kin Sung, Hing-Fung Ting},
journal={arXiv preprint arXiv:cs/0011015},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011015},
primaryClass={cs.DS cs.DM}
} | kao2000a |
arxiv-669745 | cs/0011016 | Designing Proxies for Stock Market Indices is Computationally Hard | <|reference_start|>Designing Proxies for Stock Market Indices is Computationally Hard: In this paper, we study the problem of designing proxies (or portfolios) for various stock market indices based on historical data. We use four different methods for computing market indices, all of which are formulas used in actual stock market analysis. For each index, we consider three criteria for designing the proxy: the proxy must either track the market index, outperform the market index, or perform within a margin of error of the index while maintaining a low volatility. In eleven of the twelve cases (all combinations of four indices with three criteria except the problem of sacrificing return for less volatility using the price-relative index) we show that the problem is NP-hard, and hence most likely intractable.<|reference_end|> | arxiv | @article{kao2000designing,
title={Designing Proxies for Stock Market Indices is Computationally Hard},
author={Ming-Yang Kao, Stephen R. Tate},
journal={arXiv preprint arXiv:cs/0011016},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011016},
primaryClass={cs.CE cs.CC}
} | kao2000designing |
arxiv-669746 | cs/0011017 | Automatic Debugging Support for UML Designs | <|reference_start|>Automatic Debugging Support for UML Designs: Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the ``backward'' direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Fed back to the user, conflicts detected by our algorithm are the basis for deductive-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.<|reference_end|> | arxiv | @article{schumann2000automatic,
title={Automatic Debugging Support for UML Designs},
author={Johann Schumann},
journal={arXiv preprint arXiv:cs/0011017},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011017},
primaryClass={cs.SE cs.PL}
} | schumann2000automatic |
arxiv-669747 | cs/0011018 | Optimal Buy-and-Hold Strategies for Financial Markets with Bounded Daily Returns | <|reference_start|>Optimal Buy-and-Hold Strategies for Financial Markets with Bounded Daily Returns: In the context of investment analysis, we formulate an abstract online computing problem called a planning game and develop general tools for solving such a game. We then use the tools to investigate a practical buy-and-hold trading problem faced by long-term investors in stocks. We obtain the unique optimal static online algorithm for the problem and determine its exact competitive ratio. We also compare this algorithm with the popular dollar averaging strategy using actual market data.<|reference_end|> | arxiv | @article{chen2000optimal,
title={Optimal Buy-and-Hold Strategies for Financial Markets with Bounded Daily
Returns},
author={Gen-Huey Chen, Ming-Yang Kao, Yuh-Dauh Lyuu, Hsing-Kuo Wong},
journal={arXiv preprint arXiv:cs/0011018},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011018},
primaryClass={cs.CE cs.DS}
} | chen2000optimal |
arxiv-669748 | cs/0011019 | A Moment of Perfect Clarity II: Consequences of Sparse Sets Hard for NP with Respect to Weak Reductions | <|reference_start|>A Moment of Perfect Clarity II: Consequences of Sparse Sets Hard for NP with Respect to Weak Reductions: This paper discusses advances, due to the work of Cai, Naik, and Sivakumar and Glasser, in the complexity class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.<|reference_end|> | arxiv | @article{glasser2000a,
title={A Moment of Perfect Clarity II: Consequences of Sparse Sets Hard for NP
with Respect to Weak Reductions},
author={Christian Glasser and Lane A. Hemaspaandra},
journal={arXiv preprint arXiv:cs/0011019},
year={2000},
number={URCS-TR-2000-738},
archivePrefix={arXiv},
eprint={cs/0011019},
primaryClass={cs.CC cs.DS}
} | glasser2000a |
arxiv-669749 | cs/0011020 | The Use of Instrumentation in Grammar Engineering | <|reference_start|>The Use of Instrumentation in Grammar Engineering: This paper explores the usefulness of a technique from software engineering, code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test and corpus sentences is used to improve the grammar and the testsuite, as well as to adapt a grammar to a specific genre. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that 10--30% of testing time is redundant. Applied in this way, the methodology can be seen as a re-use of grammar-writing knowledge for testsuite compilation.<|reference_end|> | arxiv | @article{broeker2000the,
title={The Use of Instrumentation in Grammar Engineering},
author={Norbert Broeker},
journal={adapted from COLING2000, Saarbruecken/FRG, July 31--Aug 4, 2000,
pp.118-124},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011020},
primaryClass={cs.CL}
} | broeker2000the |
arxiv-669750 | cs/0011021 | On-the-fly Query-Based Debugging with Examples | <|reference_start|>On-the-fly Query-Based Debugging with Examples: Program errors are hard to find because of the cause-effect gap between the time when an error occurs and the time when the error becomes apparent to the programmer. Although debugging techniques such as conditional and data breakpoints help to find error causes in simple cases, they fail to effectively bridge the cause-effect gap in many situations. Query-based debuggers offer programmers an effective tool that provides instant error alert by continuously checking inter-object relationships while the debugged program is running. To enable the query-based debugger in the middle of program execution in a portable way, we propose efficient Java class file instrumentation and discuss alternative techniques. Although the on-the-fly debugger has a higher overhead than a dynamic query-based debugger, it offers additional interactive power and flexibility while maintaining complete portability. To speed up dynamic query evaluation, our debugger implemented in portable Java uses a combination of program instrumentation, load-time code generation, query optimization, and incremental reevaluation. This paper discusses on-the-fly debugging and demonstrates the query-based debugger application for debugging Java gas tank applet as well as SPECjvm98 suite applications.<|reference_end|> | arxiv | @article{lencevicius2000on-the-fly,
title={On-the-fly Query-Based Debugging with Examples},
author={Raimondas Lencevicius},
journal={arXiv preprint arXiv:cs/0011021},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011021},
primaryClass={cs.SE cs.PL}
} | lencevicius2000on-the-fly |
arxiv-669751 | cs/0011022 | Apache web server execution tracing using Third Eye | <|reference_start|>Apache web server execution tracing using Third Eye: Testing of modern software systems that integrate many components developed by different teams is a difficult task. Third Eye is a framework for tracing and validating software systems using application domain events. We use formal descriptions of the constraints between events to identify violations in execution traces. Third Eye is a flexible and modular framework that can be used in different products. We present the validation of the Apache Web Server access policy implementation. The results indicate that our tool is a helpful addition to software development infrastructure.<|reference_end|> | arxiv | @article{lencevicius2000apache,
title={Apache web server execution tracing using Third Eye},
author={Raimondas Lencevicius, Alexander Ran, Rahav Yairi},
journal={arXiv preprint arXiv:cs/0011022},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011022},
primaryClass={cs.SE cs.PL}
} | lencevicius2000apache |
arxiv-669752 | cs/0011023 | Optimal Bidding Algorithms Against Cheating in Multiple-Object Auctions | <|reference_start|>Optimal Bidding Algorithms Against Cheating in Multiple-Object Auctions: This paper studies some basic problems in a multiple-object auction model using methodologies from theoretical computer science. We are especially concerned with situations where an adversary bidder knows the bidding algorithms of all the other bidders. In the two-bidder case, we derive an optimal randomized bidding algorithm, by which the disadvantaged bidder can procure at least half of the auction objects despite the adversary's a priori knowledge of his algorithm. In the general $k$-bidder case, if the number of objects is a multiple of $k$, an optimal randomized bidding algorithm is found. If the $k-1$ disadvantaged bidders employ that same algorithm, each of them can obtain at least $1/k$ of the objects regardless of the bidding algorithm the adversary uses. These two algorithms are based on closed-form solutions to certain multivariate probability distributions. In situations where a closed-form solution cannot be obtained, we study a restricted class of bidding algorithms as an approximation to desired optimal algorithms.<|reference_end|> | arxiv | @article{kao2000optimal,
title={Optimal Bidding Algorithms Against Cheating in Multiple-Object Auctions},
author={Ming-Yang Kao, Junfeng Qi, Lei Tan},
journal={SIAM Journal on Computing, 28(3):955--969, 1999},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011023},
primaryClass={cs.CE cs.DS}
} | kao2000optimal |
arxiv-669753 | cs/0011024 | Algorithms for Rewriting Aggregate Queries Using Views | <|reference_start|>Algorithms for Rewriting Aggregate Queries Using Views: Queries involving aggregation are typical in database applications. One of the main ideas to optimize the execution of an aggregate query is to reuse results of previously answered queries. This leads to the problem of rewriting aggregate queries using views. Due to a lack of theory, algorithms for this problem were rather ad-hoc. They were sound, but were not proven to be complete. Recently we have given syntactic characterizations for the equivalence of aggregate queries and applied them to decide when there exist rewritings. However, these decision procedures do not lend themselves immediately to an implementation. In this paper, we present practical algorithms for rewriting queries with COUNT and SUM. Our algorithms are sound. They are also complete for important cases. Our techniques can be used to improve well-known procedures for rewriting non-aggregate queries. These procedures can then be adapted to obtain algorithms for rewriting queries with MIN and MAX. The algorithms presented are a basis for realizing optimizers that rewrite queries using views.<|reference_end|> | arxiv | @article{cohen2000algorithms,
title={Algorithms for Rewriting Aggregate Queries Using Views},
author={Sara Cohen, Werner Nutt, Alexander Serebrenik},
journal={arXiv preprint arXiv:cs/0011024},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011024},
primaryClass={cs.DB}
} | cohen2000algorithms |
arxiv-669754 | cs/0011025 | Termination analysis of logic programs using acceptability with general term orders | <|reference_start|>Termination analysis of logic programs using acceptability with general term orders: We present a new approach to termination analysis of logic programs. The essence of the approach is that we make use of general term-orderings (instead of level mappings), as is done in transformational approaches to logic program termination analysis, but that we apply these orderings directly to the logic program and not to the term-rewrite system obtained through some transformation. We define some variants of acceptability, based on general term-orderings, and show how they are equivalent to LD-termination. We develop a demand driven, constraint-based approach to verify these acceptability-variants. The advantage of the approach over standard acceptability is that in some cases, where complex level mappings are needed, fairly simple term-orderings may be easily generated. The advantage over transformational approaches is that it avoids the transformation step altogether.<|reference_end|> | arxiv | @article{serebrenik2000termination,
title={Termination analysis of logic programs using acceptability with general
term orders},
author={Alexander Serebrenik, Danny De Schreye},
journal={arXiv preprint arXiv:cs/0011025},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011025},
primaryClass={cs.PL}
} | serebrenik2000termination |
arxiv-669755 | cs/0011026 | When Can You Fold a Map? | <|reference_start|>When Can You Fold a Map?: We explore the following problem: given a collection of creases on a piece of paper, each assigned a folding direction of mountain or valley, is there a flat folding by a sequence of simple folds? There are several models of simple folds; the simplest one-layer simple fold rotates a portion of paper about a crease in the paper by +-180 degrees. We first consider the analogous questions in one dimension lower -- bending a segment into a flat object -- which lead to interesting problems on strings. We develop efficient algorithms for the recognition of simply foldable 1D crease patterns, and reconstruction of a sequence of simple folds. Indeed, we prove that a 1D crease pattern is flat-foldable by any means precisely if it is by a sequence of one-layer simple folds. Next we explore simple foldability in two dimensions, and find a surprising contrast: ``map'' folding and variants are polynomial, but slight generalizations are NP-complete. Specifically, we develop a linear-time algorithm for deciding foldability of an orthogonal crease pattern on a rectangular piece of paper, and prove that it is (weakly) NP-complete to decide foldability of (1) an orthogonal crease pattern on an orthogonal piece of paper, (2) a crease pattern of axis-parallel and diagonal (45-degree) creases on a square piece of paper, and (3) crease patterns without a mountain/valley assignment.<|reference_end|> | arxiv | @article{arkin2000when,
title={When Can You Fold a Map?},
author={Esther M. Arkin, Michael A. Bender, Erik D. Demaine, Martin L.
Demaine, Joseph S. B. Mitchell, Saurabh Sethia, Steven S. Skiena},
journal={arXiv preprint arXiv:cs/0011026},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011026},
primaryClass={cs.CG cs.DM}
} | arkin2000when |
arxiv-669756 | cs/0011027 | Extended Abstract - Model-Based Debugging of Java Programs | <|reference_start|>Extended Abstract - Model-Based Debugging of Java Programs: Model-based reasoning is a central concept in current research into intelligent diagnostic systems. It is based on the assumption that sources of incorrect behavior in technical devices can be located and identified via the existence of a model describing the basic properties of components of a certain application domain. When actual data concerning the misbehavior of a system composed from such components is available, a domain-independent diagnosis engine can be used to infer which parts of the system contribute to the observed behavior. This paper describes the application of the model-based approach to the debugging of Java programs written in a subset of Java. We show how a simple dependency model can be derived from a program, demonstrate the use of the model for debugging and reducing the required user interactions, give a comparison of the functional dependency model with program slicing, and finally discuss some current research issues.<|reference_end|> | arxiv | @article{mateis2000extended,
title={Extended Abstract - Model-Based Debugging of Java Programs},
author={Cristinel Mateis, Markus Stumptner, Dominik Wieland, and Franz Wotawa},
journal={arXiv preprint arXiv:cs/0011027},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011027},
primaryClass={cs.SE cs.PL}
} | mateis2000extended |
arxiv-669757 | cs/0011028 | Retrieval from Captioned Image Databases Using Natural Language Processing | <|reference_start|>Retrieval from Captioned Image Databases Using Natural Language Processing: It might appear that natural language processing should improve the accuracy of information retrieval systems, by making available a more detailed analysis of queries and documents. Although past results appear to show that this is not so, if the focus is shifted to short phrases rather than full documents, the situation becomes somewhat different. The ANVIL system uses a natural language technique to obtain high accuracy retrieval of images which have been annotated with a descriptive textual caption. The natural language techniques also allow additional contextual information to be derived from the relation between the query and the caption, which can help users to understand the overall collection of retrieval results. The techniques have been successfully used in an information retrieval system which forms both a testbed for research and the basis of a commercial system.<|reference_end|> | arxiv | @article{elworthy2000retrieval,
title={Retrieval from Captioned Image Databases Using Natural Language
Processing},
author={David Elworthy},
journal={arXiv preprint arXiv:cs/0011028},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011028},
primaryClass={cs.CL cs.IR}
} | elworthy2000retrieval |
arxiv-669758 | cs/0011029 | Systematic Debugging of Attribute Grammars | <|reference_start|>Systematic Debugging of Attribute Grammars: Although attribute grammars are commonly used for compiler construction, little investigation has been conducted on debugging attribute grammars. The paper proposes two types of systematic debugging methods, algorithmic debugging and slice-based debugging, both tailored for attribute grammars. By means of query-based interaction with the developer, our debugging methods effectively narrow the potential bug space in the attribute grammar description and eventually identify the incorrect attribution rule. We have incorporated this technology in our visual debugging tool called Aki.<|reference_end|> | arxiv | @article{ikezoe2000systematic,
title={Systematic Debugging of Attribute Grammars},
author={Yohei Ikezoe, Akira Sasaki, Yoshiki Ohshima, Ken Wakita, Masataka
Sassa},
journal={arXiv preprint arXiv:cs/0011029},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011029},
primaryClass={cs.SE}
} | ikezoe2000systematic |
arxiv-669759 | cs/0011030 | Logic Programming Approaches for Representing and Solving Constraint Satisfaction Problems: A Comparison | <|reference_start|>Logic Programming Approaches for Representing and Solving Constraint Satisfaction Problems: A Comparison: Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there is constraint logic programming which computes a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are systems based on stable model semantics, abductive systems, and first order logic model generators which compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems).<|reference_end|> | arxiv | @article{pelov2000logic,
title={Logic Programming Approaches for Representing and Solving Constraint
Satisfaction Problems: A Comparison},
author={Nikolay Pelov, Emmanuel De Mot, Marc Denecker},
journal={LPAR 2000, Lecture Notes in Artificial Intelligence, vol. 1955,
Springer, 2000, pp. 225-239},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011030},
primaryClass={cs.AI}
} | pelov2000logic |
arxiv-669760 | cs/0011031 | SimLab 1.1, Software for Sensitivity and Uncertainty Analysis, tool for sound modelling | <|reference_start|>SimLab 1.1, Software for Sensitivity and Uncertainty Analysis, tool for sound modelling: The aim of this paper is to present and describe SimLab 1.1 (Simulation Laboratory for Uncertainty and Sensitivity Analysis) software designed for Monte Carlo analysis that is based on performing multiple model evaluations with probabilistically selected model input. The results of these evaluations are used to determine both the uncertainty in model predictions and the input variables that drive this uncertainty. This methodology is essential in situations where a decision has to be taken based on the model results; typical examples include risk and emergency management systems, financial analysis and many others. It is also highly recommended as part of model validation, even where the models are used for diagnostic purposes, as an element of sound model building. SimLab allows an exploration of the space of possible alternative model assumptions and structure on the prediction of the model, thereby testing both the quality of the model and the robustness of the model based inference.<|reference_end|> | arxiv | @article{giglioli2000simlab,
title={SimLab 1.1, Software for Sensitivity and Uncertainty Analysis, tool for
sound modelling},
author={N. Giglioli, A. Saltelli (Joint Research Centre, Ispra, Italy)},
journal={arXiv preprint arXiv:cs/0011031},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011031},
primaryClass={cs.DM}
} | giglioli2000simlab |
arxiv-669761 | cs/0011032 | Top-down induction of clustering trees | <|reference_start|>Top-down induction of clustering trees: An approach to clustering is presented that adapts the basic top-down induction of decision trees method towards clustering. To this aim, it employs the principles of instance based learning. The resulting methodology is implemented in the TIC (Top down Induction of Clustering trees) system for first order clustering. The TIC system employs the first order logical decision tree representation of the inductive logic programming system Tilde. Various experiments with TIC are presented, in both propositional and relational domains.<|reference_end|> | arxiv | @article{blockeel2000top-down,
title={Top-down induction of clustering trees},
author={Hendrik Blockeel, Luc De Raedt and Jan Ramon},
journal={Machine Learning, Proceedings of the 15th International Conference
(J. Shavlik, ed.), Morgan Kaufmann, 1998, pp. 55-63},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011032},
primaryClass={cs.LG}
} | blockeel2000top-down |
arxiv-669762 | cs/0011033 | Web Mining Research: A Survey | <|reference_start|>Web Mining Research: A Survey: With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. Web mining research is at the crossroads of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.<|reference_end|> | arxiv | @article{kosala2000web,
title={Web Mining Research: A Survey},
author={Raymond Kosala, Hendrik Blockeel},
journal={ACM SIGKDD Explorations, 2(1):1-15, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011033},
primaryClass={cs.LG cs.DB}
} | kosala2000web |
arxiv-669763 | cs/0011034 | Semantic interpretation of temporal information by abductive inference | <|reference_start|>Semantic interpretation of temporal information by abductive inference: Besides temporal information explicitly available in verbs and adjuncts, the temporal interpretation of a text also depends on general world knowledge and default assumptions. We will present a theory for describing the relation between, on the one hand, verbs, their tenses and adjuncts and, on the other, the eventualities and periods of time they represent and their relative temporal locations. The theory is formulated in logic and is a practical implementation of the concepts described in Ness Schelkens et al. We will show how an abductive resolution procedure can be used on this representation to extract temporal information from texts.<|reference_end|> | arxiv | @article{verdoolaege2000semantic,
title={Semantic interpretation of temporal information by abductive inference},
author={Sven Verdoolaege, Marc Denecker, Ness Schelkens, Danny De Schreye and
Frank Van Eynde},
journal={arXiv preprint arXiv:cs/0011034},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011034},
primaryClass={cs.CL}
} | verdoolaege2000semantic |
arxiv-669764 | cs/0011035 | Abductive reasoning with temporal information | <|reference_start|>Abductive reasoning with temporal information: Texts in natural language contain a lot of temporal information, both explicit and implicit. Verbs and temporal adjuncts carry most of the explicit information, but for a full understanding general world knowledge and default assumptions have to be taken into account. We will present a theory for describing the relation between, on the one hand, verbs, their tenses and adjuncts and, on the other, the eventualities and periods of time they represent and their relative temporal locations, while allowing interaction with general world knowledge. The theory is formulated in an extension of first order logic and is a practical implementation of the concepts described in Van Eynde 2001 and Schelkens et al. 2000. We will show how an abductive resolution procedure can be used on this representation to extract temporal information from texts. The theory presented here is an extension of that in Verdoolaege et al. 2000, adapted to VanEynde 2001, with a simplified and extended analysis of adjuncts and with more emphasis on how a model can be constructed.<|reference_end|> | arxiv | @article{verdoolaege2000abductive,
title={Abductive reasoning with temporal information},
author={Sven Verdoolaege, Marc Denecker and Frank Van Eynde},
journal={arXiv preprint arXiv:cs/0011035},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011035},
primaryClass={cs.CL}
} | verdoolaege2000abductive |
arxiv-669765 | cs/0011036 | Automatic Termination Analysis of Programs Containing Arithmetic Predicates | <|reference_start|>Automatic Termination Analysis of Programs Containing Arithmetic Predicates: For logic programs with arithmetic predicates, showing termination is not easy, since the usual order for the integers is not well-founded. A new method, easily incorporated in the TermiLog system for automatic termination analysis, is presented for showing termination in this case. The method consists of the following steps: First, a finite abstract domain for representing the range of integers is deduced automatically. Based on this abstraction, abstract interpretation is applied to the program. The result is a finite number of atoms abstracting answers to queries which are used to extend the technique of query-mapping pairs. For each query-mapping pair that is potentially non-terminating, a bounded (integer-valued) termination function is guessed. If traversing the pair decreases the value of the termination function, then termination is established. Simple functions often suffice for each query-mapping pair, and that gives our approach an edge over the classical approach of using a single termination function for all loops, which must inevitably be more complicated and harder to guess automatically. It is worth noting that the termination of McCarthy's 91 function can be shown automatically using our method. In summary, the proposed approach is based on combining a finite abstraction of the integers with the technique of the query-mapping pairs, and is essentially capable of dividing a termination proof into several cases, such that a simple termination function suffices for each case. Consequently, the whole process of proving termination can be done automatically in the framework of TermiLog and similar systems.<|reference_end|> | arxiv | @article{dershowitz2000automatic,
title={Automatic Termination Analysis of Programs Containing Arithmetic
Predicates},
author={Nachum Dershowitz, Naomi Lindenstrauss, Yehoshua Sagiv, Alexander
Serebrenik},
journal={arXiv preprint arXiv:cs/0011036},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011036},
primaryClass={cs.PL}
} | dershowitz2000automatic |
arxiv-669766 | cs/0011037 | A syntactical analysis of non-size-increasing polynomial time computation | <|reference_start|>A syntactical analysis of non-size-increasing polynomial time computation: A syntactical proof is given that all functions definable in a certain affine linear typed lambda-calculus with iteration in all types are polynomial time computable. The proof provides explicit polynomial bounds that can easily be calculated.<|reference_end|> | arxiv | @article{aehlig2000a,
title={A syntactical analysis of non-size-increasing polynomial time
computation},
author={Klaus Aehlig, Helmut Schwichtenberg},
journal={ACM Transactions on Computational Logic 3(3), 383-401 (2002)},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011037},
primaryClass={cs.LO}
} | aehlig2000a |
arxiv-669767 | cs/0011038 | Provably Fast and Accurate Recovery of Evolutionary Trees through Harmonic Greedy Triplets | <|reference_start|>Provably Fast and Accurate Recovery of Evolutionary Trees through Harmonic Greedy Triplets: We give a greedy learning algorithm for reconstructing an evolutionary tree based on a certain harmonic average on triplets of terminal taxa. After the pairwise distances between terminal taxa are estimated from sequence data, the algorithm runs in O(n^2) time using O(n) work space, where n is the number of terminal taxa. These time and space complexities are optimal in the sense that the size of an input distance matrix is n^2 and the size of an output tree is n. Moreover, in the Jukes-Cantor model of evolution, the algorithm recovers the correct tree topology with high probability using sample sequences of length polynomial in (1) n, (2) the logarithm of the error probability, and (3) the inverses of two small parameters.<|reference_end|> | arxiv | @article{csuros2000provably,
title={Provably Fast and Accurate Recovery of Evolutionary Trees through
Harmonic Greedy Triplets},
author={Miklos Csuros, Ming-Yang Kao},
journal={arXiv preprint arXiv:cs/0011038},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011038},
primaryClass={cs.DS cs.LG}
} | csuros2000provably |
arxiv-669768 | cs/0011039 | A Complete Characterization of Complete Intersection-Type Theories | <|reference_start|>A Complete Characterization of Complete Intersection-Type Theories: We characterize those intersection-type theories which yield complete intersection-type assignment systems for lambda-calculi, with respect to the three canonical set-theoretical semantics for intersection-types: the inference semantics, the simple semantics and the F-semantics. These semantics arise by taking as interpretation of types subsets of applicative structures, as interpretation of the intersection constructor set-theoretic inclusion, and by taking the interpretation of the arrow constructor à la Scott, with respect to either any possible functionality set, or the largest one, or the least one. These results strengthen and generalize significantly all earlier results in the literature, to our knowledge, in at least three respects. First of all the inference semantics had not been considered before. Secondly, the characterizations are all given just in terms of simple closure conditions on the preorder relation on the types, rather than on the typing judgments themselves. The task of checking the condition is made therefore considerably more tractable. Lastly, we do not restrict attention just to lambda-models, but to arbitrary applicative structures which admit an interpretation function. Thus we allow also for the treatment of models of restricted lambda-calculi. Nevertheless the characterizations we give can be tailored just to the case of lambda-models.<|reference_end|> | arxiv | @article{dezani-ciancaglini2000a,
title={A Complete Characterization of Complete Intersection-Type Theories},
author={M. Dezani-Ciancaglini, F. Honsell and F. Alessi},
journal={arXiv preprint arXiv:cs/0011039},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011039},
primaryClass={cs.LO}
} | dezani-ciancaglini2000a |
arxiv-669769 | cs/0011040 | Do All Fragments Count? | <|reference_start|>Do All Fragments Count?: We aim at finding the minimal set of fragments which achieves maximal parse accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street Journal treebank show that counts of almost arbitrary fragments within parse trees are important, leading to improved parse accuracy over previous models tested on this treebank. We isolate a number of dependency relations which previous models neglect but which contribute to higher parse accuracy.<|reference_end|> | arxiv | @article{bod2000do,
title={Do All Fragments Count?},
author={Rens Bod},
journal={Technical Report COMP-11-12},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011040},
primaryClass={cs.CL}
} | bod2000do |
arxiv-669770 | cs/0011041 | EquiX---A Search and Query Language for XML | <|reference_start|>EquiX---A Search and Query Language for XML: EquiX is a search language for XML that combines the power of querying with the simplicity of searching. Requirements for such languages are discussed and it is shown that EquiX meets the necessary criteria. Both a graphical abstract syntax and a formal concrete syntax are presented for EquiX queries. In addition, the semantics is defined and an evaluation algorithm is presented. The evaluation algorithm is polynomial under combined complexity. EquiX combines pattern matching, quantification and logical expressions to query both the data and meta-data of XML documents. The result of a query in EquiX is a set of XML documents. A DTD describing the result documents is derived automatically from the query.<|reference_end|> | arxiv | @article{cohen2000equix---a,
title={EquiX---A Search and Query Language for XML},
author={Sara Cohen, Yaron Kanza, Yakov Kogan, Werner Nutt, Yehoshua Sagiv,
Alexander Serebrenik},
journal={arXiv preprint arXiv:cs/0011041},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011041},
primaryClass={cs.DB}
} | cohen2000equix---a |
arxiv-669771 | cs/0011042 | Order-consistent programs are cautiously monotonic | <|reference_start|>Order-consistent programs are cautiously monotonic: Some normal logic programs under the answer set (stable model) semantics lack the appealing property of "cautious monotonicity." That is, augmenting a program with one of its consequences may cause it to lose another of its consequences. The syntactic condition of "order-consistency" was shown by Fages to guarantee existence of an answer set. This note establishes that order-consistent programs are not only consistent, but cautiously monotonic. From this it follows that they are also "cumulative." That is, augmenting an order-consistent program with some of its consequences does not alter its consequences. In fact, as we show, its answer sets remain unchanged.<|reference_end|> | arxiv | @article{turner2000order-consistent,
title={Order-consistent programs are cautiously monotonic},
author={Hudson Turner},
journal={arXiv preprint arXiv:cs/0011042},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011042},
primaryClass={cs.LO cs.AI}
} | turner2000order-consistent |
arxiv-669772 | cs/0011043 | Rewriting Calculus: Foundations and Applications | <|reference_start|>Rewriting Calculus: Foundations and Applications: This thesis is devoted to the study of a calculus that describes the application of conditional rewriting rules and the obtained results at the same level of representation. We introduce the rewriting calculus, also called the rho-calculus, which generalizes first order term rewriting and the lambda-calculus, and makes possible the representation of non-determinism. In our approach the abstraction operator as well as the application operator are objects of the calculus. The result of a reduction in the rewriting calculus is either an empty set representing the application failure, or a singleton representing a deterministic result, or a set having several elements representing a non-deterministic choice of results. In this thesis we concentrate on the properties of the rewriting calculus where a syntactic matching is used in order to bind the variables to their current values. We define evaluation strategies ensuring the confluence of the calculus and we show that these strategies become trivial for restrictions of the general rewriting calculus to simpler calculi like the lambda-calculus. The rewriting calculus is not terminating in the untyped case but strong normalization is obtained for the simply typed calculus. In the rewriting calculus extended with an operator allowing us to test for application failure, we define terms representing innermost and outermost normalizations with respect to a set of rewriting rules. By using these terms, we obtain a natural and concise description of conditional rewriting. Finally, starting from the representation of the conditional rewriting rules, we show how the rewriting calculus can be used to give a semantics to ELAN, a language based on the application of rewriting rules controlled by strategies.<|reference_end|> | arxiv | @article{cirstea2000rewriting,
title={Rewriting Calculus: Foundations and Applications},
author={Horatiu Cirstea},
journal={arXiv preprint arXiv:cs/0011043},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011043},
primaryClass={cs.SC cs.LO cs.PL}
} | cirstea2000rewriting |
arxiv-669773 | cs/0011044 | Scaling Up Inductive Logic Programming by Learning from Interpretations | <|reference_start|>Scaling Up Inductive Logic Programming by Learning from Interpretations: When comparing inductive logic programming (ILP) and attribute-value learning techniques, there is a trade-off between expressive power and efficiency. Inductive logic programming techniques are typically more expressive but also less efficient. Therefore, the data sets handled by current inductive logic programming systems are small according to general standards within the data mining community. The main source of inefficiency lies in the assumption that several examples may be related to each other, so they cannot be handled independently. Within the learning from interpretations framework for inductive logic programming this assumption is unnecessary, which makes it possible to scale up existing ILP algorithms. In this paper we explain this learning setting in the context of relational databases. We relate the setting to propositional data mining and to the classical ILP setting, and show that learning from interpretations corresponds to learning from multiple relations and thus extends the expressiveness of propositional learning, while maintaining its efficiency to a large extent (which is not the case in the classical ILP setting). As a case study, we present two alternative implementations of the ILP system Tilde (Top-down Induction of Logical DEcision trees): Tilde-classic, which loads all data in main memory, and Tilde-LDS, which loads the examples one by one. We experimentally compare the implementations, showing that Tilde-LDS can handle large data sets (in the order of 100,000 examples or 100 MB) and indeed scales up linearly in the number of examples.<|reference_end|> | arxiv | @article{blockeel2000scaling,
title={Scaling Up Inductive Logic Programming by Learning from Interpretations},
author={Hendrik Blockeel (1), Luc De Raedt (1), Nico Jacobs (1), Bart Demoen
(1) ((1) Katholieke Universiteit Leuven, Dept. of Computer Science)},
journal={Data Mining and Knowledge Discovery 3(1), pp. 59-93, 1999},
year={2000},
number={CW-297},
archivePrefix={arXiv},
eprint={cs/0011044},
primaryClass={cs.LG}
} | blockeel2000scaling |
arxiv-669774 | cs/0011045 | Index Assignment for Multichannel Communication under Failure | <|reference_start|>Index Assignment for Multichannel Communication under Failure: We consider the problem of multiple description scalar quantizers and describing the achievable rate-distortion tuples in that setting. We formulate it as a combinatorial optimization problem of arranging numbers in a matrix to minimize the maximum difference between the largest and the smallest number in any row or column. We develop a technique for deriving lower bounds on the distortion at given channel rates. The approach is constructive, thus allowing an algorithm that gives a closely matching upper bound. For the case of two communication channels with equal rates, the bounds coincide, thus giving the precise lowest achievable distortion at fixed rates. The bounds are within a small constant for higher number of channels. To the best of our knowledge, this is the first result concerning systems with more than two communication channels. The problem is also equivalent to the bandwidth minimization problem of Hamming graphs.<|reference_end|> | arxiv | @article{berger-wolf2000index,
title={Index Assignment for Multichannel Communication under Failure},
author={Tanya Y. Berger-Wolf and Edward M. Reingold},
journal={arXiv preprint arXiv:cs/0011045},
year={2000},
archivePrefix={arXiv},
eprint={cs/0011045},
primaryClass={cs.DS cs.DM}
} | berger-wolf2000index |
arxiv-669775 | cs/0011046 | Available Stabilizing Heaps | <|reference_start|>Available Stabilizing Heaps: This paper describes a heap construction that supports insert and delete operations in arbitrary (possibly illegitimate) states. After any sequence of at most O(m) heap operations, the heap state is guaranteed to be legitimate, where m is the initial number of items in the heap. The response from each operation is consistent with its effect on the data structure, even for illegitimate states. The time complexity of each operation is O(lg K) where K is the capacity of the data structure; when the heap's state is legitimate the time complexity is O(lg n) for n equal to the number of items in the heap.<|reference_end|> | arxiv | @article{herman2000available,
title={Available Stabilizing Heaps},
author={Ted Herman, Toshimitsu Masuzawa},
journal={arXiv preprint arXiv:cs/0011046},
year={2000},
number={University of Iowa Department of Computer Science TR 00-03},
archivePrefix={arXiv},
eprint={cs/0011046},
primaryClass={cs.DC cs.DS}
} | herman2000available |
arxiv-669776 | cs/0011047 | Dancing links | <|reference_start|>Dancing links: The author presents two tricks to accelerate depth-first search algorithms for a class of combinatorial puzzle problems, such as tiling a tray by a fixed set of polyominoes. The first trick is to implement each assumption of the search with reversible local operations on doubly linked lists. By this trick, every step of the search affects the data incrementally. The second trick is to add a ghost square that represents the identity of each polyomino. This puts the rule that each polyomino be used once on the same footing as the rule that each square be covered once. The coding simplifies to a more abstract form which is equivalent to 0-1 integer programming. More significantly for the total computation time, the search can naturally switch between placing a fixed polyomino or covering a fixed square at different stages, according to a combined heuristic. Finally the author reports excellent performance for his algorithm for some familiar puzzles. These include tiling a hexagon by 19 hexiamonds and the N queens problem for N up to 18.<|reference_end|> | arxiv | @article{knuth2000dancing,
title={Dancing links},
author={Donald E. Knuth},
journal={Millenial Perspectives in Computer Science, 2000, 187--214},
year={2000},
number={Knuth migration 11/2004},
archivePrefix={arXiv},
eprint={cs/0011047},
primaryClass={cs.DS}
} | knuth2000dancing |
arxiv-669777 | cs/0012001 | Available and Stabilizing 2-3 Trees | <|reference_start|>Available and Stabilizing 2-3 Trees: Transient faults corrupt the content and organization of data structures. A recovery technique dealing with such faults is stabilization, which guarantees, following some number of operations on the data structure, that content of the data structure is legitimate. Another notion of fault tolerance is availability, which is the property that operations continue to be applied during the period of recovery after a fault, and successful updates are not lost while the data structure stabilizes to a legitimate state. The available, stabilizing 2-3 tree supports find, insert, and delete operations, each with O(lg n) complexity when the tree's state is legitimate and contains n items. For an illegitimate state, these operations have O(lg K) complexity where K is the maximum capacity of the tree. Within O(t) operations, the state of the tree is guaranteed to be legitimate, where t is the number of nodes accessible via some path from the tree's root at the initial state. This paper resolves, for the first time, issues of dynamic allocation and pointer organization in a stabilizing data structure.<|reference_end|> | arxiv | @article{herman2000available,
title={Available and Stabilizing 2-3 Trees},
author={Ted Herman, Toshimitsu Masuzawa},
journal={arXiv preprint arXiv:cs/0012001},
year={2000},
number={University of Iowa Department of Computer Science TR 00-04},
archivePrefix={arXiv},
eprint={cs/0012001},
primaryClass={cs.DC cs.DS}
} | herman2000available |
arxiv-669778 | cs/0012002 | Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme | <|reference_start|>Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme: In this paper we present a random shuffling scheme to be used with adaptive sorting algorithms. Adaptive sorting algorithms utilize the presortedness present in a given sequence. We have probabilistically increased the amount of presortedness present in a sequence by using a random shuffling technique that requires little computation. Theoretical analysis suggests that the proposed scheme can improve the performance of adaptive sorting. Experimental results show that it significantly reduces the amount of disorder present in a given sequence and improves the execution time of the adaptive sorting algorithm as well.<|reference_end|> | arxiv | @article{karim2000random,
title={Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme},
author={Md. Enamul Karim (1), Abdun Naser Mahmood (1) ((1) University of
Dhaka)},
journal={arXiv preprint arXiv:cs/0012002},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012002},
primaryClass={cs.DS}
} | karim2000random |
arxiv-669779 | cs/0012003 | Questions for a Materialist Philosophy Implying the Equivalence of Computers and Human Cognition | <|reference_start|>Questions for a Materialist Philosophy Implying the Equivalence of Computers and Human Cognition: Issues related to a materialist philosophy are explored as concerns the implied equivalence of computers running software and human observers. One issue explored concerns the measurement process in quantum mechanics. Another issue explored concerns the nature of experience as revealed by the existence of dreams. Some difficulties stemming from a materialist philosophy as regards these issues are pointed out. For example, a gedankenexperiment involving what has been called "negative" observation is discussed that illustrates the difficulty with a materialist assumption in quantum mechanics. Based on an exploration of these difficulties, specifications are outlined briefly that would provide a means to demonstrate the equivalence of computers running software and human experience given a materialist assumption.<|reference_end|> | arxiv | @article{snyder2000questions,
title={Questions for a Materialist Philosophy Implying the Equivalence of
Computers and Human Cognition},
author={Douglas M. Snyder},
journal={arXiv preprint arXiv:cs/0012003},
year={2000},
number={0196},
archivePrefix={arXiv},
eprint={cs/0012003},
primaryClass={cs.GL}
} | snyder2000questions |
arxiv-669780 | cs/0012004 | Improving Performance of heavily loaded agents | <|reference_start|>Improving Performance of heavily loaded agents: With the increase in agent-based applications, there are now agent systems that support concurrent client accesses. The ability to process large volumes of simultaneous requests is critical in many such applications. In such a setting, the traditional approach of serving these requests one at a time via queues (e.g. FIFO queues, priority queues) is insufficient. Alternative models are essential to improve the performance of such heavily loaded agents. In this paper, we propose a set of cost-based algorithms to optimize and merge multiple requests submitted to an agent. In order to merge a set of requests, one first needs to identify commonalities among such requests. First, we provide an application independent framework within which an agent developer may specify relationships (called invariants) between requests. Second, we provide two algorithms (and various accompanying heuristics) which allow an agent to automatically rewrite requests so as to avoid redundant work---these algorithms take invariants associated with the agent into account. Our algorithms are independent of any specific agent framework. For an implementation, we implemented both these algorithms on top of the IMPACT agent development platform, and on top of a (non-IMPACT) geographic database agent. Based on these implementations, we conducted experiments and show that our algorithms are considerably more efficient than methods that use the $A^*$ algorithm.<|reference_end|> | arxiv | @article{ozcan2000improving,
title={Improving Performance of heavily loaded agents},
author={Fatma Ozcan, VS Subrahmanian, Juergen Dix},
journal={arXiv preprint arXiv:cs/0012004},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012004},
primaryClass={cs.MA cs.AI}
} | ozcan2000improving |
arxiv-669781 | cs/0012005 | Value Withdrawal Explanation in CSP | <|reference_start|>Value Withdrawal Explanation in CSP: This work is devoted to constraint solving motivated by the debugging of constraint logic programs a la GNU-Prolog. The paper focuses only on the constraints. In this framework, constraint solving amounts to domain reduction. A computation is formalized by a chaotic iteration. The computed result is described as a closure. This model is well suited to the design of debugging notions and tools, for example failure explanations or error diagnosis. In this paper we detail an application of the model to an explanation of a value withdrawal in a domain. Some other works have already shown the interest of such a notion of explanation not only for failure analysis.<|reference_end|> | arxiv | @article{ferrand2000value,
title={Value Withdrawal Explanation in CSP},
author={Gerard Ferrand, Willy Lesaint and Alexandre Tessier},
journal={arXiv preprint arXiv:cs/0012005},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012005},
primaryClass={cs.SE cs.PL}
} | ferrand2000value |
arxiv-669782 | cs/0012006 | Support for Debugging Automatically Parallelized Programs | <|reference_start|>Support for Debugging Automatically Parallelized Programs: We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.<|reference_end|> | arxiv | @article{hood2000support,
title={Support for Debugging Automatically Parallelized Programs},
author={Robert Hood and Gabriele Jost},
journal={arXiv preprint arXiv:cs/0012006},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012006},
primaryClass={cs.SE cs.PL}
} | hood2000support |
arxiv-669783 | cs/0012007 | Kima - an Automated Error Correction System for Concurrent Logic Programs | <|reference_start|>Kima - an Automated Error Correction System for Concurrent Logic Programs: We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the ``reasons'' of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.<|reference_end|> | arxiv | @article{ajiro2000kima,
title={Kima - an Automated Error Correction System for Concurrent Logic
Programs},
author={Yasuhiro Ajiro and Kazunori Ueda},
journal={arXiv preprint arXiv:cs/0012007},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012007},
primaryClass={cs.SE cs.PL}
} | ajiro2000kima |
arxiv-669784 | cs/0012008 | A General Framework for Automatic Termination Analysis of Logic Programs | <|reference_start|>A General Framework for Automatic Termination Analysis of Logic Programs: This paper describes a general framework for automatic termination analysis of logic programs, where we understand by ``termination'' the finiteness of the LD-tree constructed for the program and a given query. A general property of mappings from a certain subset of the branches of an infinite LD-tree into a finite set is proved. From this result several termination theorems are derived, by using different finite sets. The first two are formulated for the predicate dependency and atom dependency graphs. Then a general result for the case of the query-mapping pairs relevant to a program is proved (cf. \cite{Sagiv,Lindenstrauss:Sagiv}). The correctness of the TermiLog system described in \cite{Lindenstrauss:Sagiv:Serebrenik} follows from it. In this system it is not possible to prove termination for programs involving arithmetic predicates, since the usual order for the integers is not well-founded. A new method, which can be easily incorporated in TermiLog or similar systems, is presented, which makes it possible to prove termination for programs involving arithmetic predicates. It is based on combining a finite abstraction of the integers with the technique of the query-mapping pairs, and is essentially capable of dividing a termination proof into several cases, such that a simple termination function suffices for each case. Finally several possible extensions are outlined.<|reference_end|> | arxiv | @article{dershowitz2000a,
title={A General Framework for Automatic Termination Analysis of Logic Programs},
author={Nachum Dershowitz, Naomi Lindenstrauss, Yehoshua Sagiv, Alexander
Serebrenik},
journal={Applicable Algebra in Engineering, Communication and Computing,
vol. 12, no. 1/2, pp. 117-156, 2001},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012008},
primaryClass={cs.PL}
} | dershowitz2000a |
arxiv-669785 | cs/0012009 | Finding Failure Causes through Automated Testing | <|reference_start|>Finding Failure Causes through Automated Testing: A program fails. Under which circumstances does this failure occur? One single algorithm, the delta debugging algorithm, suffices to determine these failure-inducing circumstances. Delta debugging tests a program systematically and automatically to isolate failure-inducing circumstances such as the program input, changes to the program code, or executed statements.<|reference_end|> | arxiv | @article{cleve2000finding,
title={Finding Failure Causes through Automated Testing},
author={Holger Cleve and Andreas Zeller},
journal={arXiv preprint arXiv:cs/0012009},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012009},
primaryClass={cs.SE}
} | cleve2000finding |
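Delta debugging, as described in the abstract above, is easy to picture with a small sketch. The following simplified reduction loop is in the spirit of Zeller's ddmin but is not the published algorithm verbatim; the `failing` predicate is a stand-in for actually running the program under test on a candidate input.

```python
def ddmin(data, failing):
    """Shrink `data` to a smaller input on which `failing` still returns True.
    Simplified delta-debugging loop: split into n chunks, try dropping chunks,
    and refine the granularity when nothing can be removed."""
    assert failing(data)
    n = 2
    while len(data) >= 2:
        chunk = max(1, len(data) // n)
        reduced = False
        for start in range(0, len(data), chunk):
            candidate = data[:start] + data[start + chunk:]   # drop one chunk
            if candidate and failing(candidate):
                data, n, reduced = candidate, max(2, n - 1), True
                break
        if not reduced:
            if chunk == 1:          # no single element can be removed: 1-minimal
                break
            n = min(len(data), n * 2)
    return data

# Toy use: the "program" fails whenever both 3 and 7 occur in the input.
failing = lambda xs: 3 in xs and 7 in xs
print(ddmin(list(range(10)), failing))   # -> [3, 7]
```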
arxiv-669786 | cs/0012010 | The Role of Commutativity in Constraint Propagation Algorithms | <|reference_start|>The Role of Commutativity in Constraint Propagation Algorithms: Constraint propagation algorithms form an important part of most of the constraint programming systems. We provide here a simple, yet very general framework that allows us to explain several constraint propagation algorithms in a systematic way. In this framework we proceed in two steps. First, we introduce a generic iteration algorithm on partial orderings and prove its correctness in an abstract setting. Then we instantiate this algorithm with specific partial orderings and functions to obtain specific constraint propagation algorithms. In particular, using the notions commutativity and semi-commutativity, we show that the {\tt AC-3}, {\tt PC-2}, {\tt DAC} and {\tt DPC} algorithms for achieving (directional) arc consistency and (directional) path consistency are instances of a single generic algorithm. The work reported here extends and simplifies that of Apt \citeyear{Apt99b}.<|reference_end|> | arxiv | @article{apt2000the,
title={The Role of Commutativity in Constraint Propagation Algorithms},
author={Krzysztof R. Apt},
journal={arXiv preprint arXiv:cs/0012010},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012010},
primaryClass={cs.PF cs.AI}
} | apt2000the |
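The "generic iteration algorithm" of the abstract, instantiated for arc consistency, can be pictured as a worklist loop that applies propagation functions until a common fixpoint is reached. The sketch below only illustrates that shape and is not the paper's exact algorithm; the brute-force rescheduling noted in the comment is exactly the cost that commutativity information helps to avoid. All names are invented.

```python
# Generic iteration sketch: apply functions until none changes the state any
# more (a common fixpoint). AC-3-like arc revision is one instantiation: the
# state is a dict of variable domains and each function revises one arc.
def iterate_to_fixpoint(state, functions):
    worklist = list(functions)
    while worklist:
        f = worklist.pop()
        new_state = f(state)
        if new_state != state:
            state = new_state
            # Brute force: re-schedule every function after any change. Using
            # (semi-)commutativity one can re-schedule far fewer functions.
            worklist = list(functions)
    return state

# Instantiation: x, y in {1..3}, constraint x < y, revised in both directions.
def revise_x_lt_y(s):
    return {"x": {a for a in s["x"] if any(a < b for b in s["y"])}, "y": s["y"]}

def revise_y_gt_x(s):
    return {"x": s["x"], "y": {b for b in s["y"] if any(a < b for a in s["x"])}}

start = {"x": {1, 2, 3}, "y": {1, 2, 3}}
print(iterate_to_fixpoint(start, [revise_x_lt_y, revise_y_gt_x]))
# -> {'x': {1, 2}, 'y': {2, 3}}
```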
arxiv-669787 | cs/0012011 | Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory | <|reference_start|>Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory: Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown distribution. We unify both theories and give strong arguments that the resulting universal AIXI model behaves optimal in any computable environment. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI^tl, which is still superior to any other time t and space l bounded agent. The computation time of AIXI^tl is of the order t x 2^l.<|reference_end|> | arxiv | @article{hutter2000towards,
title={Towards a Universal Theory of Artificial Intelligence based on
Algorithmic Probability and Sequential Decision Theory},
author={Marcus Hutter},
journal={Lecture Notes in Artificial Intelligence (LNAI 2167), Proc. 12th
European Conf. on Machine Learning, ECML (2001) 226--238},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012011},
primaryClass={cs.AI cs.CC cs.IT cs.LG math.IT}
} | hutter2000towards |
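For orientation, the two ingredients the abstract unifies can be stated schematically; the notation below is generic rather than the paper's own.

```latex
% Schematic only; notation is generic, not necessarily the paper's.
% Solomonoff's universal prior over observation strings x (universal monotone
% machine U, programs p of length \ell(p), ``x*'' = output starting with x):
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
% The AIXI agent replaces the unknown true environment by this mixture and, at
% each cycle, chooses the action with maximal M-expected future reward (an
% expectimax over future action/perception sequences). The bounded variant
% AIXI^{tl} considers only programs of length at most l, each run for at most
% t steps, consistent with the t x 2^l figure quoted in the abstract.
```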
arxiv-669788 | cs/0012012 | A brief overview of the MAD debugging activities | <|reference_start|>A brief overview of the MAD debugging activities: Debugging parallel and distributed programs is a difficult activity due to the multiplicity of sequential bugs, the existence of malign effects like race conditions and deadlocks, and the huge amounts of data that have to be processed. These problems are addressed by the Monitoring And Debugging environment MAD, which offers debugging functionality based on a graphical representation of a program's execution. The target applications of MAD are parallel programs applying the standard Message-Passing Interface MPI, which is used extensively in the high-performance computing domain. The highlights of MAD are interactive inspection mechanisms including visualization of distributed arrays, the possibility to graphically place breakpoints, a mechanism for monitor overhead removal, and the evaluation of racing messages occurring due to nondeterminism in the code.<|reference_end|> | arxiv | @article{kranzlmueller2000a,
title={A brief overview of the MAD debugging activities},
author={Dieter Kranzlmueller, Christian Schaubschlaeger, Jens Volkert (GUP
Linz, Joh. Kepler University Linz, Austria)},
journal={arXiv preprint arXiv:cs/0012012},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012012},
primaryClass={cs.SE cs.PL}
} | kranzlmueller2000a |
arxiv-669789 | cs/0012014 | Slicing of Constraint Logic Programs | <|reference_start|>Slicing of Constraint Logic Programs: Slicing is a program analysis technique originally developed for imperative languages. It facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described together with the results of some slicing experiments.<|reference_end|> | arxiv | @article{szilagyi2000slicing,
title={Slicing of Constraint Logic Programs},
author={Gyongyi Szilagyi, Tibor Gyimothy and Jan Maluszynski},
journal={arXiv preprint arXiv:cs/0012014},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012014},
primaryClass={cs.SE}
} | szilagyi2000slicing |
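Slicing driven by variable sharing can be illustrated with a generic dynamic-slicing sketch: walk the execution trace backwards and keep every step that shares a variable with the growing set of relevant variables. The paper's notions are declarative and further refined by groundness information; the trace and constraint names below are invented for the example.

```python
# Generic sketch of dynamic slicing based on variable sharing (not the paper's
# exact, declarative definitions). A trace is a list of
# (program_point, variables_touched) pairs in execution order.
def dynamic_slice(trace, criterion_var):
    relevant = {criterion_var}
    slice_points = []
    for point, variables in reversed(trace):
        if relevant & set(variables):       # shares a variable: keep the step
            slice_points.append(point)
            relevant |= set(variables)
    return list(reversed(slice_points))

# Toy trace of a constraint program: which steps can influence Z?
trace = [
    ("c1: X #= A + 1", {"X", "A"}),
    ("c2: Y #= B * 2", {"Y", "B"}),
    ("c3: Z #= X + 5", {"Z", "X"}),
]
print(dynamic_slice(trace, "Z"))   # -> ['c1: X #= A + 1', 'c3: Z #= X + 5']
```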
arxiv-669790 | cs/0012015 | Well-Typed Logic Programs Are not Wrong | <|reference_start|>Well-Typed Logic Programs Are not Wrong: We consider prescriptive type systems for logic programs (as in Goedel or Mercury). In such systems, the typing is static, but it guarantees an operational property: if a program is "well-typed", then all derivations starting in a "well-typed" query are again "well-typed". This property has been called subject reduction. We show that this property can also be phrased as a property of the proof-theoretic semantics of logic programs, thus abstracting from the usual operational (top-down) semantics. This proof-theoretic view leads us to questioning a condition which is usually considered necessary for subject reduction, namely the head condition. It states that the head of each clause must have a type which is a variant (and not a proper instance) of the declared type. We provide a more general condition, thus reestablishing a certain symmetry between heads and body atoms. The condition ensures that in a derivation, the types of two unified terms are themselves unifiable. We discuss possible implications of this result. We also discuss the relationship between the head condition and polymorphic recursion, a concept known in functional programming.<|reference_end|> | arxiv | @article{deransart2000well-typed,
title={Well-Typed Logic Programs Are not Wrong},
author={Pierre Deransart and Jan-Georg Smaus},
journal={arXiv preprint arXiv:cs/0012015},
year={2000},
number={RR-4082},
archivePrefix={arXiv},
eprint={cs/0012015},
primaryClass={cs.LO}
} | deransart2000well-typed |
arxiv-669791 | cs/0012016 | A Virtual Java Simulation Lab for Computer Science Students | <|reference_start|>A Virtual Java Simulation Lab for Computer Science Students: The VJ-Lab is a project oriented to improving the students' learning process in the Computer Science degree at the National University of La Plata. The VJ-Lab is a Web application with Java based simulations. Java can be used to provide simulation environments with simple pictorial interfaces that can help students to understand the subject. There are many fields in which it is difficult to give students a feel for the subject that they are learning. Computer based simulations offer a fun and effective way to enable students to learn by doing. Both practicing skills and applying knowledge are allowed in simulated worlds. We will focus on the VJ-Lab project overview, the work in progress and some Java based simulations running. They imitate the behavior of data network protocols and data structure algorithms. These applets are produced by the students of the 'Software Development Laboratory' course.<|reference_end|> | arxiv | @article{diaz2000a,
title={A Virtual Java Simulation Lab for Computer Science Students},
author={Javier Diaz, Claudia Queiruga, Villar Claudia and Laura Fava},
journal={arXiv preprint arXiv:cs/0012016},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012016},
primaryClass={cs.OH}
} | diaz2000a |
arxiv-669792 | cs/0012017 | Towards Robust Quantum Computation | <|reference_start|>Towards Robust Quantum Computation: Quantum computation is a subject of much theoretical promise, but has not been realized in large scale, despite the discovery of fault-tolerant procedures to overcome decoherence. Part of the reason is that the theoretically modest requirements still present daunting experimental challenges. The goal of this Dissertation is to reduce various resources required for robust quantum computation, focusing on quantum error correcting codes and solution NMR quantum computation. A variety of techniques have been developed, including high rate quantum codes for amplitude damping, relaxed criteria for quantum error correction, systematic construction of fault-tolerant gates, recipes for quantum process tomography, techniques in bulk thermal state computation, and efficient decoupling techniques to implement selective coupled logic gates. A detailed experimental study of a quantum error correcting code in NMR is also presented. The Dissertation clarifies and extends results previously reported in quant-ph/9610043, quant-ph/9704002, quant-ph/9811068, quant-ph/9904100, quant-ph/9906112, quant-ph/0002039. Additionally, a procedure for quantum process tomography using maximally entangled states, and a review on NMR quantum computation are included.<|reference_end|> | arxiv | @article{leung2000towards,
title={Towards Robust Quantum Computation},
author={Debbie W. Leung},
journal={arXiv preprint arXiv:cs/0012017},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012017},
primaryClass={cs.CC quant-ph}
} | leung2000towards |
arxiv-669793 | cs/0012018 | Resource-distribution via Boolean constraints | <|reference_start|>Resource-distribution via Boolean constraints: We consider the problem of searching for proofs in sequential presentations of logics with multiplicative (or intensional) connectives. Specifically, we start with the multiplicative fragment of linear logic and extend, on the one hand, to linear logic with its additives and, on the other, to the additives of the logic of bunched implications, BI. We give an algebraic method for calculating the distribution of the side-formulae in multiplicative rules which allows the occurrence or non-occurrence of a formula on a branch of a proof to be determined once sufficient information is available. Each formula in the conclusion of such a rule is assigned a Boolean expression. As a search proceeds, a set of Boolean constraint equations is generated. We show that a solution to such a set of equations determines a proof corresponding to the given search. We explain a range of strategies, from the lazy to the eager, for solving sets of constraint equations. We indicate how to apply our methods systematically to a large family of relevant systems.<|reference_end|> | arxiv | @article{harland2000resource-distribution,
title={Resource-distribution via Boolean constraints},
author={James Harland, David Pym},
journal={arXiv preprint arXiv:cs/0012018},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012018},
primaryClass={cs.LO}
} | harland2000resource-distribution |
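The core mechanism, assigning each side formula a Boolean variable that records which premise of a multiplicative rule receives it, can be shown with a toy enumerator. This sketch brute-forces assignments instead of solving the constraint equations lazily or eagerly as in the paper, and the context and required resources are invented for the example.

```python
# Sketch of the Boolean-constraint idea for multiplicative rules: when the
# tensor rule splits the context between two premises, give every side formula
# a Boolean variable (True = sent to the left premise, False = to the right).
# A solution to the resulting constraints fixes a resource distribution, i.e.
# a candidate proof.
from itertools import product

context = ["p", "q", "r"]                      # resources for proving (p ⊗ q) ⊗ r
left_needs, right_needs = {"p", "q"}, {"r"}    # what the two premises must receive

def distributions(context, left_needs, right_needs):
    for bits in product([True, False], repeat=len(context)):
        left = {f for f, b in zip(context, bits) if b}
        right = {f for f, b in zip(context, bits) if not b}
        if left == left_needs and right == right_needs:
            yield dict(zip(context, bits))

print(list(distributions(context, left_needs, right_needs)))
# -> [{'p': True, 'q': True, 'r': False}]
```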
arxiv-669794 | cs/0012019 | A Note on Power-Laws of Internet Topology | <|reference_start|>A Note on Power-Laws of Internet Topology: The three Power-Laws proposed by Faloutsos et al(1999) are important discoveries among many recent works on finding hidden rules in the seemingly chaotic Internet topology. In this note, we want to point out that the first two laws discovered by Faloutsos et al(1999, hereafter, {\it Faloutsos' Power Laws}) are in fact equivalent. That is, as long as any one of them is true, the other can be derived from it, and {\it vice versa}. Although these two laws are equivalent, they provide different ways to measure the exponents of their corresponding power law relations. We also show that these two measures will give equivalent results, but with different error bars. We argue that for nodes of not very large out-degree($\leq 32$ in our simulation), the first Faloutsos' Power Law is superior to the second one in giving a better estimate of the exponent, while for nodes of very large out-degree($> 32$) the power law relation may not be present, at least for the relation between the frequency of out-degree and node out-degree.<|reference_end|> | arxiv | @article{chou2000a,
title={A Note on Power-Laws of Internet Topology},
author={Hongsong Chou},
journal={arXiv preprint arXiv:cs/0012019},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012019},
primaryClass={cs.NI}
} | chou2000a |
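The equivalence claim can be checked numerically: if out-degree falls with rank as d ~ r^R, a standard Zipf-to-frequency conversion gives a frequency of out-degree d scaling as d^(1/R - 1), so either exponent determines the other. The following sketch, on synthetic degrees only, estimates both exponents and compares them; it is an illustration, not the note's own experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
degrees = rng.zipf(3.0, size=200_000)      # frequency of out-degree d ~ d^-3

# "Law 2" fit: log frequency of each out-degree vs log out-degree, restricted
# to d <= 32, the range in which the note trusts the relation.
values, counts = np.unique(degrees, return_counts=True)
mask = values <= 32
O = np.polyfit(np.log(values[mask]), np.log(counts[mask]), 1)[0]

# "Law 1" fit: log out-degree vs log rank, on the highest-degree nodes.
d_sorted = np.sort(degrees)[::-1]
top = 1000
R = np.polyfit(np.log(np.arange(1, top + 1)), np.log(d_sorted[:top]), 1)[0]

print(f"O ~ {O:.2f}   R ~ {R:.2f}   1/R - 1 ~ {1 / R - 1:.2f}")
# Both O and 1/R - 1 should land near -3, up to sampling and fitting noise.
```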
arxiv-669795 | cs/0012020 | Creativity and Delusions: A Neurocomputational Approach | <|reference_start|>Creativity and Delusions: A Neurocomputational Approach: Thinking is one of the most interesting mental processes. Its complexity is sometimes simplified and its different manifestations are classified into normal and abnormal, like the delusional and disorganized thought or the creative one. The boundaries between these facets of thinking are fuzzy causing difficulties in medical, academic, and philosophical discussions. Considering the dopaminergic signal-to-noise neuronal modulation in the central nervous system, and the existence of semantic maps in human brain, a self-organizing neural network model was developed to unify the different thought processes into a single neurocomputational substrate. Simulations were performed varying the dopaminergic modulation and observing the different patterns that emerged at the semantic map. Assuming that the thought process is the total pattern elicited at the output layer of the neural network, the model shows how the normal and abnormal thinking are generated and that there are no borders between their different manifestations. Actually, a continuum of different qualitative reasoning, ranging from delusion to disorganization of thought, and passing through the normal and the creative thinking, seems to be more plausible. The model is far from explaining the complexities of human thinking but, at least, it seems to be a good metaphorical and unifying view of the many facets of this phenomenon usually studied in separated settings.<|reference_end|> | arxiv | @article{mendes2000creativity,
title={Creativity and Delusions: A Neurocomputational Approach},
author={Daniele Quintella Mendes and Luis Alfredo Vidal de Carvalho},
journal={arXiv preprint arXiv:cs/0012020},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012020},
primaryClass={cs.NE cs.AI}
} | mendes2000creativity |
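The "signal-to-noise" role of dopaminergic modulation is often modelled as a gain parameter on a softmax; the toy below is in that conventional spirit and is not the paper's self-organizing network. Low gain spreads activation across the semantic map (looser associations), while high gain concentrates it on the strongest association.

```python
import numpy as np

def activation(similarity, gain):
    z = gain * similarity
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

similarity = np.array([3.0, 2.5, 1.0, 0.2])   # cue vs. four semantic units
for g in (0.3, 1.0, 4.0):
    print(g, np.round(activation(similarity, g), 2))
# g = 0.3 gives a nearly flat pattern; g = 4.0 puts most activation on one unit.
```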
arxiv-669796 | cs/0012021 | A Benchmark for Image Retrieval using Distributed Systems over the Internet: BIRDS-I | <|reference_start|>A Benchmark for Image Retrieval using Distributed Systems over the Internet: BIRDS-I: The performance of CBIR algorithms is usually measured on an isolated workstation. In a real-world environment the algorithms would only constitute a minor component among the many interacting components. The Internet dramatically changes many of the usual assumptions about measuring CBIR performance. Any CBIR benchmark should be designed from a networked systems standpoint. These benchmarks typically introduce communication overhead because the real systems they model are distributed applications. We present our implementation of a client/server benchmark called BIRDS-I to measure image retrieval performance over the Internet. It has been designed with the trend toward the use of small personalized wireless systems in mind. Web-based CBIR implies the use of heterogeneous image sets, imposing certain constraints on how the images are organized and the type of performance metrics applicable. BIRDS-I only requires controlled human intervention for the compilation of the image collection and none for the generation of ground truth in the measurement of retrieval accuracy. Benchmark image collections need to be evolved incrementally toward the storage of millions of images and that scaleup can only be achieved through the use of computer-aided compilation. Finally, our scoring metric introduces a tightly optimized image-ranking window.<|reference_end|> | arxiv | @article{gunther2000a,
title={A Benchmark for Image Retrieval using Distributed Systems over the
Internet: BIRDS-I},
author={Neil J. Gunther, and Giordano B. Beretta},
journal={arXiv preprint arXiv:cs/0012021},
year={2000},
doi={10.1117/12.411898},
number={HPL-2000-162},
archivePrefix={arXiv},
eprint={cs/0012021},
primaryClass={cs.IR cs.MM}
} | gunther2000a |
arxiv-669797 | cs/0012022 | Performance and Scalability Models for a Hypergrowth e-Commerce Web Site | <|reference_start|>Performance and Scalability Models for a Hypergrowth e-Commerce Web Site: The performance of successful Web-based e-commerce services has all the allure of a roller-coaster ride: accelerated fiscal growth combined with the ever-present danger of running out of server capacity. This chapter presents a case study based on the author's own capacity planning engagement with one of the hottest e-commerce Web sites in the world. Several spreadsheet techniques are presented for forecasting both short-term and long-term trends in the consumption of server capacity. Two new performance metrics are introduced for site planning and procurement: the effective demand, and the doubling period.<|reference_end|> | arxiv | @article{gunther2000performance,
title={Performance and Scalability Models for a Hypergrowth e-Commerce Web Site},
author={Neil J. Gunther},
journal={arXiv preprint arXiv:cs/0012022},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012022},
primaryClass={cs.PF cs.DC cs.SE}
} | gunther2000performance |
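The abstract names a "doubling period" metric without defining it here; a common way to obtain such a figure from a traffic trend is to fit exponential growth and report ln 2 divided by the fitted rate. The chapter's own definitions may differ, and the series below is synthetic.

```python
# Fit y ~ exp(a + b*t) via a straight-line fit to log(y) and report ln(2)/b.
import numpy as np

weeks = np.arange(12)
connections = 1000 * 1.15 ** weeks        # synthetic series growing 15% per week

b = np.polyfit(weeks, np.log(connections), 1)[0]
print(f"weekly growth rate ~ {b:.3f}, doubling period ~ {np.log(2) / b:.1f} weeks")
# 15% weekly growth -> ln(2)/ln(1.15) ~ 5.0 weeks to double.
```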
arxiv-669798 | cs/0012023 | The Tale of One-way Functions | <|reference_start|>The Tale of One-way Functions: The existence of one-way functions is arguably the most important problem in computer theory. The article discusses and refines a number of concepts relevant to this problem. For instance, it gives the first combinatorial complete owf, i.e., a function which is one-way if any function is. There are surprisingly many subtleties in basic definitions. Some of these subtleties are discussed or hinted at in the literature and some are overlooked. Here, a unified approach is attempted.<|reference_end|> | arxiv | @article{levin2000the,
title={The Tale of One-way Functions},
author={Leonid A. Levin},
journal={Problems of Information Transmission (= Problemy Peredachi
Informatsii), 39(1):92-103, 2003},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012023},
primaryClass={cs.CR cs.CC}
} | levin2000the |
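For orientation, the textbook formulation whose subtleties the article examines is roughly the following; the article itself refines and varies it.

```latex
% Stated only for orientation; not the article's refined definitions.
% f : \{0,1\}^* \to \{0,1\}^* is (strongly) one-way if
% (i) f is computable in polynomial time, and
% (ii) for every probabilistic polynomial-time algorithm A and every
%      polynomial p, for all sufficiently large n:
\[
  \Pr_{x \leftarrow \{0,1\}^n}\!\left[ A\bigl(f(x), 1^n\bigr) \in f^{-1}\bigl(f(x)\bigr) \right] \;<\; \frac{1}{p(n)}
\]
```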
arxiv-669799 | cs/0012024 | Byzantine Agreement with Faulty Majority using Bounded Broadcast | <|reference_start|>Byzantine Agreement with Faulty Majority using Bounded Broadcast: Byzantine Agreement, introduced in [Pease, Shostak, Lamport, 80], is a widely used building block of reliable distributed protocols. It simulates broadcast despite the presence of faulty parties within the network, traditionally using only private unicast links. Under such conditions, Byzantine Agreement requires more than 2/3 of the parties to be compliant. [Fitzi, Maurer, 00] constructed a Byzantine Agreement protocol for any compliant majority based on an additional primitive allowing transmission to any two parties simultaneously. They proposed a problem of generalizing these results to wider channels and fewer compliant parties. We prove that the condition 2f < kh is necessary and sufficient for implementing broadcast with h compliant and f faulty parties using k-cast channels.<|reference_end|> | arxiv | @article{considine2000byzantine,
title={Byzantine Agreement with Faulty Majority using Bounded Broadcast},
author={Jeffrey Considine, Leonid A. Levin, David Metcalf},
journal={Journal of Cryptology, 18/3:191-217, 2005},
year={2000},
archivePrefix={arXiv},
eprint={cs/0012024},
primaryClass={cs.DC}
} | considine2000byzantine |
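The stated condition can be sanity-checked against the two special cases the abstract itself mentions: k = 1 (plain unicast) recovers the classical more-than-two-thirds-compliant requirement, and k = 2 matches the Fitzi-Maurer compliant-majority result. A minimal check:

```python
def broadcast_possible(h, f, k):
    """2f < kh: h compliant parties, f faulty parties, k-cast channels."""
    return 2 * f < k * h

# k = 1 behaves like point-to-point channels: need h > 2f, i.e. n > 3f.
print(broadcast_possible(h=6, f=3, k=1))   # False: 6 is not greater than 2*3
# k = 2 (two-receiver channels): a compliant majority is enough.
print(broadcast_possible(h=6, f=3, k=2))   # True: 2*3 < 2*6
print(broadcast_possible(h=3, f=3, k=2))   # False: no compliant majority
```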
arxiv-669800 | cs/0101001 | Automatic Differentiation Tools in Optimization Software | <|reference_start|>Automatic Differentiation Tools in Optimization Software: We discuss the role of automatic differentiation tools in optimization software. We emphasize issues that are important to large-scale optimization and that have proved useful in the installation of nonlinear solvers in the NEOS Server. Our discussion centers on the computation of the gradient and Hessian matrix for partially separable functions and shows that the gradient and Hessian matrix can be computed with guaranteed bounds in time and memory requirements<|reference_end|> | arxiv | @article{moré2001automatic,
title={Automatic Differentiation Tools in Optimization Software},
author={Jorge J. Mor\'e},
journal={arXiv preprint arXiv:cs/0101001},
year={2001},
number={ANL/MCS-P859-1100},
archivePrefix={arXiv},
eprint={cs/0101001},
primaryClass={cs.MS}
} | moré2001automatic |
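Partial separability, f(x) = sum_i f_i(x restricted to S_i) with each element function touching only a few variables, is what makes gradients and Hessians cheap to assemble. The sketch below illustrates gradient assembly only, using central differences for the element derivatives purely for brevity; an AD tool would supply them exactly, and the example function is invented.

```python
import numpy as np

def element_grad(fi, xi, eps=1e-6):
    """Central-difference gradient of one element function on its few variables."""
    g = np.zeros_like(xi)
    for j in range(len(xi)):
        e = np.zeros_like(xi)
        e[j] = eps
        g[j] = (fi(xi + e) - fi(xi - e)) / (2 * eps)
    return g

def gradient(elements, x):
    """elements: list of (f_i, S_i) with S_i the indices f_i depends on.
    The full gradient is the sum of the scattered element gradients."""
    grad = np.zeros_like(x)
    for fi, Si in elements:
        grad[list(Si)] += element_grad(fi, x[list(Si)])
    return grad

# f(x) = (x0*x1 - 1)^2 + (x1 + x2^2)^2 : two elements, each on two variables.
elements = [
    (lambda v: (v[0] * v[1] - 1.0) ** 2, (0, 1)),
    (lambda v: (v[0] + v[1] ** 2) ** 2, (1, 2)),
]
x = np.array([1.0, 2.0, 3.0])
print(gradient(elements, x))   # ~ [4., 24., 132.]
```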