Columns:
  corpus_id      string, 7-12 chars
  paper_id       string, 9-16 chars
  title          string, 1-261 chars
  abstract       string, 70-4.02k chars
  source         string, 1 class
  bibtex         string, 208-20.9k chars
  citation_key   string, 6-100 chars
arxiv-669901
cs/0104010
Type Arithmetics: Computation based on the theory of types
<|reference_start|>Type Arithmetics: Computation based on the theory of types: The present paper shows how meta-programming turns into programming, rich enough to express arbitrary arithmetic computations. We demonstrate a type system that implements Peano arithmetic, slightly generalized to negative numbers. Certain types in this system denote numerals. Arithmetic operations on such type-numerals -- addition, subtraction, and even division -- are expressed as type reduction rules executed by a compiler. A remarkable trait is that division by zero becomes a type error, and is reported as such by the compiler.<|reference_end|>
arxiv
@article{kiselyov2001type, title={Type Arithmetics: Computation based on the theory of types}, author={Oleg Kiselyov}, journal={arXiv preprint arXiv:cs/0104010}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104010}, primaryClass={cs.CL} }
kiselyov2001type
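To make the abstract's central trick concrete: the paper performs arithmetic during type checking, so the computation happens at compile time. Python has no comparable static type-level reduction, so the following is only a runtime analogue that mirrors the Peano structure (not the paper's encoding); note how division by zero is rejected outright, echoing "division by zero becomes a type error".

```python
class Zero:
    """Peano numeral 0."""

class Succ:
    """Peano numeral n+1, wrapping its predecessor."""
    def __init__(self, pred):
        self.pred = pred

def to_int(n):
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

def add(a, b):
    # add(0, b) = b; add(S(a), b) = S(add(a, b))
    return b if isinstance(a, Zero) else Succ(add(a.pred, b))

def sub(a, b):
    # Truncated subtraction; assumes a >= b (the paper generalizes to negatives).
    return a if isinstance(b, Zero) else sub(a.pred, b.pred)

def less(a, b):
    if isinstance(b, Zero):
        return False
    return isinstance(a, Zero) or less(a.pred, b.pred)

def div(a, b):
    # Quotient by repeated subtraction; div(_, 0) is rejected outright,
    # mirroring the paper's "division by zero is a type error".
    if isinstance(b, Zero):
        raise TypeError("division by zero")
    return Zero() if less(a, b) else Succ(div(sub(a, b), b))

two = Succ(Succ(Zero()))
print(to_int(div(add(two, two), two)))  # 2
```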
arxiv-669902
cs/0104011
Potholes on the Royal Road
<|reference_start|>Potholes on the Royal Road: It is still unclear how an evolutionary algorithm (EA) searches a fitness landscape, and on what fitness landscapes a particular EA will do well. The validity of the building-block hypothesis, a major tenet of traditional genetic algorithm theory, remains controversial despite its continued use to justify claims about EAs. This paper outlines a research program to begin to answer some of these open questions, by extending the work done in the royal road project. The short-term goal is to find a simple class of functions which the simple genetic algorithm optimizes better than other optimization methods, such as hillclimbers. A dialectical heuristic for searching for such a class is introduced. As an example of using the heuristic, the simple genetic algorithm is compared with a set of hillclimbers on a simple subset of the hyperplane-defined functions, the pothole functions.<|reference_end|>
arxiv
@article{belding2001potholes, title={Potholes on the Royal Road}, author={Theodore C. Belding}, journal={arXiv preprint arXiv:cs/0104011}, year={2001}, number={CSCS-2001-001}, archivePrefix={arXiv}, eprint={cs/0104011}, primaryClass={cs.NE nlin.AO} }
belding2001potholes
arxiv-669903
cs/0104012
System Support for Bandwidth Management and Content Adaptation in Internet Applications
<|reference_start|>System Support for Bandwidth Management and Content Adaptation in Internet Applications: This paper describes the implementation and evaluation of an operating system module, the Congestion Manager (CM), which provides integrated network flow management and exports a convenient programming interface that allows applications to be notified of, and adapt to, changing network conditions. We describe the API by which applications interface with the CM, and the architectural considerations that factored into the design. To evaluate the architecture and API, we describe our implementations of TCP; a streaming layered audio/video application; and an interactive audio application using the CM, and show that they achieve adaptive behavior without incurring much end-system overhead. All flows including TCP benefit from the sharing of congestion information, and applications are able to incorporate new functionality such as congestion control and adaptive behavior.<|reference_end|>
arxiv
@article{andersen2001system, title={System Support for Bandwidth Management and Content Adaptation in Internet Applications}, author={David G. Andersen, Deepak Bansal, Dorothy Curtis, Srinivasan Seshan, Hari Balakrishnan}, journal={Proc. OSDI 2000}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104012}, primaryClass={cs.NI cs.OS} }
andersen2001system
arxiv-669904
cs/0104013
Shooting Over or Under the Mark: Towards a Reliable and Flexible Anticipation in the Economy
<|reference_start|>Shooting Over or Under the Mark: Towards a Reliable and Flexible Anticipation in the Economy: The real monetary economy is grounded upon monetary flow equilibration, that is, the activity of actualizing monetary flow continuity at each economic agent except the central bank. Every update of monetary flow continuity at each agent constantly causes monetary flow equilibration at neighboring agents. Every monetary flow equilibration, as the activity of shooting at the mark identified as monetary flow continuity, turns out to be off the mark, and constantly generates similar activities in sequence. Monetary flow equilibration ceaselessly reverberating in the economy performs two functions. One is to seek an organization on its own, and the other is to perturb the ongoing organization. Monetary flow equilibration as the agency of seeking and perturbing its organization also serves as a means of predicting its behavior. The likely organizational behavior could be the one that remains most robust against monetary flow equilibration as an agency of applying perturbations.<|reference_end|>
arxiv
@article{matsuno2001shooting, title={Shooting Over or Under the Mark: Towards a Reliable and Flexible Anticipation in the Economy}, author={Koichiro Matsuno}, journal={Int. J. Comp. Anticipatory Syst. 5 (2000), 305-314}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104013}, primaryClass={cs.CE} }
matsuno2001shooting
arxiv-669905
cs/0104014
Tracing a Faint Fingerprint of the Invisible Hand?
<|reference_start|>Tracing a Faint Fingerprint of the Invisible Hand?: Any economic agent constituting the monetary economy, except the central bank, maintains the activity of monetary flow equilibration to fulfill the condition of monetary flow continuity in the record. At the same time, monetary flow equilibration at one economic agent constantly induces further flow disequilibrium at other agents in the economy, to be eliminated subsequently. We propose the rate of monetary flow disequilibration as a figure measuring the progressive movement of the economy. The rate of disequilibration was read from both the Japanese and the United States monetary economies as recorded over the last fifty years.<|reference_end|>
arxiv
@article{matsuno2001tracing, title={Tracing a Faint Fingerprint of the Invisible Hand?}, author={Koichiro Matsuno}, journal={arXiv preprint arXiv:cs/0104014}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104014}, primaryClass={cs.CE} }
matsuno2001tracing
arxiv-669906
cs/0104015
Application of Support Vector Machine to detect an association between a disease or trait and multiple SNP variations
<|reference_start|>Application of Support Vector Machine to detect an association between a disease or trait and multiple SNP variations: After the completion of the human genome sequence was announced, it became evident that interpreting DNA sequences is an immediate task to work on. Understanding their signals requires improving present sequence analysis tools and developing new ones. Following this trend, we attack one of the fundamental questions: which set of SNP (single nucleotide polymorphism) variations is related to a specific disease or trait. Since people's DNA differs only at SNP locations, and the total number of SNPs is less than 5 million, finding an association between SNP variations and a certain disease or trait is believed to be one of the essential steps not only for genetic research but also for drug design and discovery. In this paper, we present a method for detecting whether there is an association between multiple SNP variations and a trait or disease. The method exploits the Support Vector Machine, which has recently been attracting a lot of attention.<|reference_end|>
arxiv
@article{kim2001application, title={Application of Support Vector Machine to detect an association between a disease or trait and multiple SNP variations}, author={Gene Kim and MyungHo Kim}, journal={arXiv preprint arXiv:cs/0104015}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104015}, primaryClass={cs.CC q-bio} }
kim2001application
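The general recipe behind this abstract is easy to sketch. The toy data, genotype coding, and feature layout below are hypothetical illustrations, not the authors' actual encoding or dataset:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: rows are individuals, columns are SNP locations,
# each genotype coded as 0/1/2 copies of the minor allele.
X = np.array([
    [0, 2, 1, 0],
    [1, 2, 0, 0],
    [0, 1, 2, 1],
    [2, 0, 0, 2],
    [2, 0, 1, 2],
    [1, 0, 0, 1],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = has the trait/disease, 0 = control

clf = SVC(kernel="linear").fit(X, y)
# Large-magnitude weights point at the SNPs driving the separation.
print(clf.coef_[0])
print(clf.predict([[0, 2, 2, 0]]))  # classify a new individual
```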
arxiv-669907
cs/0104016
The Gibbs Representation of 3D Rotations
<|reference_start|>The Gibbs Representation of 3D Rotations: This paper revisits the little-known Gibbs-Rodrigues representation of rotations in a three-dimensional space and demonstrates a set of algorithms for handling it. In this representation the rotation is itself represented as a three-dimensional vector. The vector is parallel to the axis of rotation and its three components transform covariantly on change of coordinates. The mapping from rotations to vectors is 1:1 apart from computation error. The discontinuities of the representation require special handling but are not problematic. The rotation matrix can be generated efficiently from the vector without the use of transcendental functions, and vice-versa. The representation is more efficient than Euler angles, has affinities with Hassenpflug's Argyris angles and is very closely related to the quaternion representation. While the quaternion representation avoids the discontinuities inherent in any 3-component representation, this problem is readily overcome. The present paper gives efficient algorithms for computing the set of rotations which map a given vector to another of the same length and the rotation which maps a given pair of vectors to another pair of the same length and subtended angle.<|reference_end|>
arxiv
@article{peterson2001the, title={The Gibbs Representation of 3D Rotations}, author={Ian R. Peterson}, journal={arXiv preprint arXiv:cs/0104016}, year={2001}, number={CMBE-01-IRP3}, archivePrefix={arXiv}, eprint={cs/0104016}, primaryClass={cs.DS cs.CG} }
peterson2001the
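The claim that the rotation matrix can be generated "without the use of transcendental functions" follows from the standard rational identity for the Gibbs vector g = axis * tan(theta/2). A minimal numpy sketch of that identity (generic, not the paper's own algorithms):

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def gibbs_to_matrix(g):
    """Rotation matrix from a Gibbs vector g = axis * tan(theta/2).
    Rational in g: no transcendental functions are needed."""
    g = np.asarray(g, dtype=float)
    s = g @ g
    return ((1.0 - s) * np.eye(3)
            + 2.0 * np.outer(g, g)
            + 2.0 * skew(g)) / (1.0 + s)

# Example: 90-degree rotation about z, so g = (0, 0, tan(45 deg)) = (0, 0, 1).
R = gibbs_to_matrix([0.0, 0.0, 1.0])
print(np.round(R, 6))  # maps the x axis onto the y axis
```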
arxiv-669908
cs/0104017
Local Search Techniques for Constrained Portfolio Selection Problems
<|reference_start|>Local Search Techniques for Constrained Portfolio Selection Problems: We consider the problem of selecting a portfolio of assets that provides the investor a suitable balance of expected return and risk. With respect to the seminal mean-variance model of Markowitz, we consider additional constraints on the cardinality of the portfolio and on the quantity of individual shares. Such constraints better capture real-world trading systems, but make the problem more difficult to solve with exact methods. We explore the use of local search techniques, mainly tabu search, for the portfolio selection problem. We compare and combine previous work on portfolio selection that makes use of the local search approach, and we propose new algorithms that combine different neighborhood relations. In addition, we show how the use of randomization and of a simple form of adaptiveness simplifies the setting of a large number of critical parameters. Finally, we show how our techniques perform on public benchmarks.<|reference_end|>
arxiv
@article{schaerf2001local, title={Local Search Techniques for Constrained Portfolio Selection Problems}, author={Andrea Schaerf (University of Udine, Italy)}, journal={arXiv preprint arXiv:cs/0104017}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104017}, primaryClass={cs.CE cs.AI} }
schaerf2001local
arxiv-669909
cs/0104018
Several new domain-type and boundary-type numerical discretization schemes with radial basis function
<|reference_start|>Several new domain-type and boundary-type numerical discretization schemes with radial basis function: This paper is concerned with a few novel RBF-based numerical schemes for discretizing partial differential equations. For boundary-type methods, we derive the indirect and direct symmetric boundary knot methods (BKM). The resulting interpolation matrix of both is always symmetric irrespective of boundary geometry and conditions. In particular, the direct BKM applies the practical physical variables rather than expansion coefficients and becomes very competitive with the boundary element method. On the other hand, based on the multiple reciprocity principle, we introduce the RBF-based boundary particle method (BPM) for general inhomogeneous problems without the need for inner nodes. The direct and symmetric BPM schemes are also developed. For domain-type RBF discretization schemes, by using the Green integral we develop a new Hermite RBF scheme called the modified Kansa method (MKM), which differs from the symmetric Hermite RBF scheme in that the MKM discretizes both the governing equation and the boundary conditions on the same boundary nodes. The local spline version of the MKM is named the finite knot method (FKM). Both the MKM and the FKM significantly reduce calculation errors at nodes adjacent to the boundary. In addition, the nonsingular high-order fundamental or general solution is strongly recommended as the RBF in the domain-type methods and in the dual reciprocity method approximation of the particular solution relating to the BKM. It is stressed that all the above discretization methods of boundary-type and domain-type are symmetric, meshless, and integration-free. The spline-based schemes will produce a desirable symmetric, sparse, banded interpolation matrix. In the appendix, we present a Hermite scheme to eliminate the edge effect on RBF geometric modeling and imaging.<|reference_end|>
arxiv
@article{chen2001several, title={Several new domain-type and boundary-type numerical discretization schemes with radial basis function}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0104018}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104018}, primaryClass={cs.NA cs.CE} }
chen2001several
arxiv-669910
cs/0104019
Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation
<|reference_start|>Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation: This paper presents a novel method of generating and applying hierarchical, dynamic topic-based language models. It proposes and evaluates new cluster generation, hierarchical smoothing and adaptive topic-probability estimation techniques. These combined models help capture long-distance lexical dependencies. Experiments on the Broadcast News corpus show significant improvement in perplexity (10.5% overall and 33.5% on target vocabulary).<|reference_end|>
arxiv
@article{florian2001dynamic, title={Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation}, author={Radu Florian and David Yarowsky}, journal={Proceedings of the 37th Annual Meeting of the ACL, pages 167-174, College Park, Maryland}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104019}, primaryClass={cs.CL} }
florian2001dynamic
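Several abstracts in this batch report perplexity gains from interpolating an adapted model with a baseline n-gram model. A worked miniature of that idea, with entirely made-up toy distributions and weight:

```python
import math

# Toy next-word distributions (hypothetical values, for illustration only).
p_topic   = {"stocks": 0.5, "rain": 0.1, "game": 0.4}   # topic-adapted model
p_trigram = {"stocks": 0.2, "rain": 0.4, "game": 0.4}   # baseline n-gram model
lam = 0.3  # interpolation weight; tuned on held-out data in practice

def p_mix(w):
    return lam * p_topic[w] + (1 - lam) * p_trigram[w]

def perplexity(model, words):
    h = -sum(math.log2(model(w)) for w in words) / len(words)  # cross-entropy
    return 2 ** h

test = ["stocks", "stocks", "game"]
print(perplexity(lambda w: p_trigram[w], test))  # baseline perplexity
print(perplexity(p_mix, test))                   # lower after interpolation
```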
arxiv-669911
cs/0104020
Coaxing Confidences from an Old Friend: Probabilistic Classifications from Transformation Rule Lists
<|reference_start|>Coaxing Confidences from an Old Friend: Probabilistic Classifications from Transformation Rule Lists: Transformation-based learning has been successfully employed to solve many natural language processing problems. It has many positive features, but one drawback is that it does not provide estimates of class membership probabilities. In this paper, we present a novel method for obtaining class membership probabilities from a transformation-based rule list classifier. Three experiments are presented which measure the modeling accuracy and cross-entropy of the probabilistic classifier on unseen data and the degree to which the output probabilities from the classifier can be used to estimate confidences in its classification decisions. The results of these experiments show that, for the task of text chunking, the estimates produced by this technique are more informative than those generated by a state-of-the-art decision tree.<|reference_end|>
arxiv
@article{florian2001coaxing, title={Coaxing Confidences from an Old Friend: Probabilistic Classifications from Transformation Rule Lists}, author={Radu Florian, John C. Henderson and Grace Ngai}, journal={Proceedings of the Fifth Conference on Empirical Methods in Natural Language Processing, pages 26-34, Hong Kong (2000)}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104020}, primaryClass={cs.CL cs.AI} }
florian2001coaxing
arxiv-669912
cs/0104021
Disjunction and modular goal-directed proof search
<|reference_start|>Disjunction and modular goal-directed proof search: This paper explores goal-directed proof search in first-order multi-modal logic. The key issue is to design a proof system that respects the modularity and locality of assumptions of many modal logics. By forcing ambiguities to be considered independently, modular disjunctions in particular can be used to construct efficiently executable specifications in reasoning tasks involving partial information that otherwise might require prohibitive search. To achieve this behavior requires prior proof-theoretic justifications of logic programming to be extended, strengthened, and combined with proof-theoretic analyses of modal deduction in a novel way.<|reference_end|>
arxiv
@article{stone2001disjunction, title={Disjunction and modular goal-directed proof search}, author={Matthew Stone}, journal={arXiv preprint arXiv:cs/0104021}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104021}, primaryClass={cs.LO} }
stone2001disjunction
arxiv-669913
cs/0104022
Microplanning with Communicative Intentions: The SPUD System
<|reference_start|>Microplanning with Communicative Intentions: The SPUD System: The process of microplanning encompasses a range of problems in Natural Language Generation (NLG), such as referring expression generation, lexical choice, and aggregation, problems in which a generator must bridge underlying domain-specific representations and general linguistic representations. In this paper, we describe a uniform approach to microplanning based on declarative representations of a generator's communicative intent. These representations describe the results of NLG: communicative intent associates the concrete linguistic structure planned by the generator with inferences that show how the meaning of that structure communicates needed information about some application domain in the current discourse context. Our approach, implemented in the SPUD (sentence planning using description) microplanner, uses the lexicalized tree-adjoining grammar formalism (LTAG) to connect structure to meaning and uses modal logic programming to connect meaning to context. At the same time, communicative intent representations provide a resource for the process of NLG. Using representations of communicative intent, a generator can augment the syntax, semantics and pragmatics of an incomplete sentence simultaneously, and can assess its progress on the various problems of microplanning incrementally. The declarative formulation of communicative intent translates into a well-defined methodology for designing grammatical and conceptual resources which the generator can use to achieve desired microplanning behavior in a specified domain.<|reference_end|>
arxiv
@article{stone2001microplanning, title={Microplanning with Communicative Intentions: The SPUD System}, author={Matthew Stone, Christine Doran, Bonnie Webber, Tonia Bleam and Martha Palmer}, journal={arXiv preprint arXiv:cs/0104022}, year={2001}, archivePrefix={arXiv}, eprint={cs/0104022}, primaryClass={cs.CL} }
stone2001microplanning
arxiv-669914
cs/0105001
Correction of Errors in a Modality Corpus Used for Machine Translation by Using Machine-learning Method
<|reference_start|>Correction of Errors in a Modality Corpus Used for Machine Translation by Using Machine-learning Method: We performed corpus correction on a modality corpus for machine translation by using such machine-learning methods as the maximum-entropy method. We thus constructed a high-quality modality corpus based on corpus correction. We compared several kinds of methods for corpus correction in our experiments and developed a good method for corpus correction.<|reference_end|>
arxiv
@article{murata2001correction, title={Correction of Errors in a Modality Corpus Used for Machine Translation by Using Machine-learning Method}, author={Masaki Murata, Masao Utiyama, Kiyotaka Uchimoto, Qing Ma, and Hitoshi Isahara}, journal={arXiv preprint arXiv:cs/0105001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105001}, primaryClass={cs.CL} }
murata2001correction
arxiv-669915
cs/0105002
Man [and Woman] vs Machine: A Case Study in Base Noun Phrase Learning
<|reference_start|>Man [and Woman] vs Machine: A Case Study in Base Noun Phrase Learning: A great deal of work has been done demonstrating the ability of machine learning algorithms to automatically extract linguistic knowledge from annotated corpora. Very little work has gone into quantifying the difference in ability at this task between a person and a machine. This paper is a first step in that direction.<|reference_end|>
arxiv
@article{brill2001man, title={Man [and Woman] vs. Machine: A Case Study in Base Noun Phrase Learning}, author={Eric Brill and Grace Ngai}, journal={Proceedings of the 37th Annual Meeting of the Association of Computational Linguistics, pages 65-72, College Park, MD, USA (1999)}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105002}, primaryClass={cs.CL} }
brill2001man
arxiv-669916
cs/0105003
Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun Phrase Chunking
<|reference_start|>Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun Phrase Chunking: This paper presents a comprehensive empirical comparison between two approaches for developing a base noun phrase chunker: human rule writing and active learning using interactive real-time human annotation. Several novel variations on active learning are investigated, and underlying cost models for cross-modal machine learning comparison are presented and explored. Results show that it is more efficient and more successful by several measures to train a system using active learning annotation rather than hand-crafted rule writing at a comparable level of human labor investment.<|reference_end|>
arxiv
@article{ngai2001rule, title={Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun Phrase Chunking}, author={Grace Ngai and David Yarowsky}, journal={Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 117-125, Hong Kong (2000)}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105003}, primaryClass={cs.CL cs.AI} }
ngai2001rule
arxiv-669917
cs/0105004
Parallel implementation of the TRANSIMS micro-simulation
<|reference_start|>Parallel implementation of the TRANSIMS micro-simulation: This paper describes the parallel implementation of the TRANSIMS traffic micro-simulation. The parallelization method is domain decomposition, which means that each CPU of the parallel computer is responsible for a different geographical area of the simulated region. We describe how information between domains is exchanged, and how the transportation network graph is partitioned. An adaptive scheme is used to optimize load balancing. We then demonstrate how computing speeds of our parallel micro-simulations can be systematically predicted once the scenario and the computer architecture are known. This makes it possible, for example, to decide if a certain study is feasible with a certain computing budget, and how to invest that budget. The main ingredients of the prediction are knowledge about the parallel implementation of the micro-simulation, knowledge about the characteristics of the partitioning of the transportation network graph, and knowledge about the interaction of these quantities with the computer system. In particular, we investigate the differences between switched and non-switched topologies, and the effects of 10 Mbit, 100 Mbit, and Gbit Ethernet. keywords: Traffic simulation, parallel computing, transportation planning, TRANSIMS<|reference_end|>
arxiv
@article{nagel2001parallel, title={Parallel implementation of the TRANSIMS micro-simulation}, author={Kai Nagel and Marcus Rickert}, journal={arXiv preprint arXiv:cs/0105004}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105004}, primaryClass={cs.CE} }
nagel2001parallel
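The speed prediction described above amounts to combining a computation term that shrinks with the number of CPUs and a communication term that does not. A generic back-of-the-envelope model in that spirit (all constants and the boundary-scaling assumption are hypothetical, not TRANSIMS' fitted values):

```python
def predicted_time_per_step(p, n_links, t_link, t_latency,
                            boundary_bytes, bandwidth):
    """Wall-clock seconds per simulation step on p CPUs under domain
    decomposition: perfectly split computation plus boundary messaging.
    A generic sketch of the prediction idea, not the paper's model."""
    t_comp = n_links * t_link / p
    # Assume total boundary traffic grows roughly with the cut size (~sqrt(p)).
    t_comm = t_latency + boundary_bytes * p ** 0.5 / bandwidth
    return t_comp + t_comm

for p in (1, 4, 16, 64, 256):
    t = predicted_time_per_step(p, n_links=20000, t_link=1e-5,
                                t_latency=5e-4, boundary_bytes=1e5,
                                bandwidth=1e7)  # ~100 Mbit Ethernet
    print(f"{p:4d} CPUs: {t * 1000:8.2f} ms/step")  # speedup saturates
```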
arxiv-669918
cs/0105005
A Complete WordNet15 to WordNet16 Mapping
<|reference_start|>A Complete WordNet15 to WordNet16 Mapping: We describe a robust approach for linking already existing lexical/semantic hierarchies. We use a constraint satisfaction algorithm (relaxation labelling) to select --among a set of candidates-- the node in a target taxonomy that best matches each node in a source taxonomy. In this paper we present the complete mapping of the nominal, verbal, adjectival and adverbial parts of WordNet 1.5 onto WordNet 1.6.<|reference_end|>
arxiv
@article{daudé2001a, title={A Complete WordNet1.5 to WordNet1.6 Mapping}, author={J. Daud'e, L. Padr'o and G. Rigau (TALP Research Center, Universitat Polit`ecnica de Catalunya)}, journal={arXiv preprint arXiv:cs/0105005}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105005}, primaryClass={cs.CL} }
daudé2001a
arxiv-669919
cs/0105006
Reverse Engineering from Assembler to Formal Specifications via Program Transformations
<|reference_start|>Reverse Engineering from Assembler to Formal Specifications via Program Transformations: The FermaT transformation system, based on research carried out over the last sixteen years at Durham University, De Montfort University and Software Migrations Ltd., is an industrial-strength formal transformation engine with many applications in program comprehension and language migration. This paper is a case study which uses automated plus manually-directed transformations and abstractions to convert an IBM 370 Assembler code program into a very high-level abstract specification.<|reference_end|>
arxiv
@article{ward2001reverse, title={Reverse Engineering from Assembler to Formal Specifications via Program Transformations}, author={M. P. Ward}, journal={7th Working Conference on Reverse Engineering 2000, 23--25 Nov 2000, Brisbane, Queensland, Australia. IEEE Computer Society}, year={2001}, doi={10.1109/WCRE.2000.891448}, archivePrefix={arXiv}, eprint={cs/0105006}, primaryClass={cs.SE cs.PL} }
ward2001reverse
arxiv-669920
cs/0105007
Analysis of Polymorphically Typed Logic Programs Using ACI-Unification
<|reference_start|>Analysis of Polymorphically Typed Logic Programs Using ACI-Unification: Analysis of (partial) groundness is an important application of abstract interpretation. There are several proposals for improving the precision of such an analysis by exploiting type information, including our own work with Hill and King, where we had shown how the information present in the type declarations of a program can be used to characterise the degree of instantiation of a term in a precise and yet inherently finite way. This approach worked for polymorphically typed programs as in Goedel or HAL. Here, we recast this approach following works by Codish, Lagoon and Stuckey. To formalise which properties of terms we want to characterise, we use labelling functions, which are functions that extract subterms from a term along certain paths. An abstract term collects the results of all labelling functions of a term. For the analysis, programs are executed on abstract terms instead of the concrete ones, and usual unification is replaced by unification modulo an equality theory which includes the well-known ACI-theory. Thus we generalise the works by Codish, Lagoon and Stuckey w.r.t. the type systems considered and relate the works among each other.<|reference_end|>
arxiv
@article{smaus2001analysis, title={Analysis of Polymorphically Typed Logic Programs Using ACI-Unification}, author={Jan-Georg Smaus}, journal={arXiv preprint arXiv:cs/0105007}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105007}, primaryClass={cs.LO} }
smaus2001analysis
arxiv-669921
cs/0105008
Applying Slicing Technique to Software Architectures
<|reference_start|>Applying Slicing Technique to Software Architectures: Software architecture is receiving increasing attention as a critical design level for software systems. As software architecture design resources (in the form of architectural specifications) accumulate, the development of techniques and tools to support architectural understanding, testing, reengineering, maintenance, and reuse will become an important issue. This paper introduces a new form of slicing, named architectural slicing, to aid architectural understanding and reuse. In contrast to traditional slicing, architectural slicing is designed to operate on the architectural specification of a software system, rather than the source code of a program. Architectural slicing provides knowledge about the high-level structure of a software system, rather than the low-level implementation details of a program. In order to compute an architectural slice, we present the architecture information flow graph, which can be used to represent information flows in a software architecture. Based on the graph, we give a two-phase algorithm to compute an architectural slice.<|reference_end|>
arxiv
@article{zhao2001applying, title={Applying Slicing Technique to Software Architectures}, author={Jianjun Zhao}, journal={Proceedings of the 4th IEEE International Conference on Engineering of Complex Computer Systems, pp.87-98, August 1998}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105008}, primaryClass={cs.SE} }
zhao2001applying
arxiv-669922
cs/0105009
Using Dependence Analysis to Support Software Architecture Understanding
<|reference_start|>Using Dependence Analysis to Support Software Architecture Understanding: Software architecture is receiving increasing attention as a critical design level for software systems. As software architecture design resources (in the form of architectural descriptions) accumulate, the development of techniques and tools to support architectural understanding, testing, reengineering, maintenance, and reuse will become an important issue. In this paper we introduce a new dependence analysis technique, named architectural dependence analysis, to support software architecture development. In contrast to traditional dependence analysis, architectural dependence analysis is designed to operate on an architectural description of a software system, rather than the source code of a conventional program. Architectural dependence analysis provides knowledge of dependences for the high-level architecture of a software system, rather than the low-level implementation details of a conventional program.<|reference_end|>
arxiv
@article{zhao2001using, title={Using Dependence Analysis to Support Software Architecture Understanding}, author={Jianjun Zhao}, journal={In M. Li (Ed.), "New Technologies on Computer Software," pp.135-142, International Academic Publishers, September 1997}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105009}, primaryClass={cs.SE} }
zhao2001using
arxiv-669923
cs/0105010
On Assessing the Complexity of Software Architectures
<|reference_start|>On Assessing the Complexity of Software Architectures: This paper proposes some new architectural metrics which are appropriate for evaluating the architectural attributes of a software system. The main feature of our approach is to assess the complexity of a software architecture by analyzing various types of architectural dependences in the architecture.<|reference_end|>
arxiv
@article{zhao2001on, title={On Assessing the Complexity of Software Architectures}, author={Jianjun Zhao}, journal={Proceedings of the 3rd International Software Architecture Workshop (ISAW3), pp.163-166, ACM SIGSOFT, November 1998}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105010}, primaryClass={cs.SE} }
zhao2001on
arxiv-669924
cs/0105011
Component Programming and Interoperability in Constraint Solver Design
<|reference_start|>Component Programming and Interoperability in Constraint Solver Design: Prolog was once the main host for implementing constraint solvers. It seems that it is no longer so. To be useful, constraint solvers have to be integrable into industrial applications written in imperative or object-oriented languages; to be efficient, they have to interact with other solvers. To meet these requirements, many solvers are now implemented in the form of extensible object-oriented libraries. Following Pfister and Szyperski, we argue that ``objects are not enough,'' and we propose to design solvers as component-oriented libraries. We illustrate our approach by the description of the architecture of a prototype, and we assess its strong points and weaknesses.<|reference_end|>
arxiv
@article{goualard2001component, title={Component Programming and Interoperability in Constraint Solver Design}, author={Frederic Goualard}, journal={arXiv preprint arXiv:cs/0105011}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105011}, primaryClass={cs.PL} }
goualard2001component
arxiv-669925
cs/0105012
Joint and conditional estimation of tagging and parsing models
<|reference_start|>Joint and conditional estimation of tagging and parsing models: This paper compares two different ways of estimating statistical language models. Many statistical NLP tagging and parsing models are estimated by maximizing the (joint) likelihood of the fully-observed training data. However, since these applications only require the conditional probability distributions, these distributions can in principle be learnt by maximizing the conditional likelihood of the training data. Perhaps somewhat surprisingly, models estimated by maximizing the joint were superior to models estimated by maximizing the conditional, even though some of the latter models intuitively had access to ``more information''.<|reference_end|>
arxiv
@article{johnson2001joint, title={Joint and conditional estimation of tagging and parsing models}, author={Mark Johnson}, journal={arXiv preprint arXiv:cs/0105012}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105012}, primaryClass={cs.CL} }
johnson2001joint
arxiv-669926
cs/0105013
Dijkstra's Self-Stabilizing Algorithm in Unsupportive Environments
<|reference_start|>Dijkstra's Self-Stabilizing Algorithm in Unsupportive Environments: The first self-stabilizing algorithm [Dij73] assumed the existence of a central daemon that activates one processor at a time to change state as a function of its own state and the state of a neighbor. Subsequent research has reconsidered this algorithm without the assumption of a central daemon, and under different forms of communication, such as the model of link registers. In all of these investigations, one common feature is the atomicity of communication, whether by shared variables or read/write registers. This paper weakens the atomicity assumptions for the communication model, proposing versions of [Dij73] that tolerate various weaker forms of atomicity. First, a solution for the case of regular registers is presented. Then the case of safe registers is considered, with both negative and positive results presented. The paper also presents an implementation of [Dij73] based on registers that have probabilistically correct behavior, which requires a notion of weak stabilization.<|reference_end|>
arxiv
@article{dolev2001dijkstra's, title={Dijkstra's Self-Stabilizing Algorithm in Unsupportive Environments}, author={Shlomi Dolev and Ted Herman}, journal={arXiv preprint arXiv:cs/0105013}, year={2001}, number={TR-01-03}, archivePrefix={arXiv}, eprint={cs/0105013}, primaryClass={cs.DC} }
dolev2001dijkstra's
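For reference, the [Dij73] algorithm the paper re-examines is Dijkstra's classic K-state token ring. A compact sketch under the original assumptions the paper then weakens (atomic reads of a neighbor's state, a central daemon activating one privileged machine at a time); from any corrupted start with K >= n it stabilizes to exactly one privilege circulating:

```python
import random

def privileged(s, i):
    # Machine 0 holds the privilege when it equals its left neighbor
    # (the last machine); any other machine when it differs from its own.
    return s[i] == s[-1] if i == 0 else s[i] != s[i - 1]

def fire(s, i, k):
    if i == 0:
        s[0] = (s[0] + 1) % k   # machine 0 increments modulo K
    else:
        s[i] = s[i - 1]         # others copy their left neighbor

n, k = 5, 6                     # K >= n machines guarantees stabilization
s = [random.randrange(k) for _ in range(n)]   # arbitrary (corrupted) start
for _ in range(100):            # central daemon: fire one privileged machine
    fire(s, random.choice([i for i in range(n) if privileged(s, i)]), k)
print(s, [i for i in range(n) if privileged(s, i)])  # exactly one privilege
```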
arxiv-669927
cs/0105014
Errata and supplements to: Orthonormal RBF Wavelet and Ridgelet-like Series and Transforms for High-Dimensional Problems
<|reference_start|>Errata and supplements to: Orthonormal RBF Wavelet and Ridgelet-like Series and Transforms for High-Dimensional Problems: In recent years some attempts have been made to relate RBFs with wavelets in handling high-dimensional multiscale problems. To the author's knowledge, however, orthonormal and bi-orthogonal RBF wavelets are still missing in the literature. By using the nonsingular general solution and singular fundamental solution of differential operators, the present author (ref. 3) recently made substantial headway in deriving orthonormal RBF wavelet series and transforms. The methodology can be generalized to create RBF wavelets by means of the orthogonal convolution kernel function of various integral operators. In particular, it is stressed that the presented RBF wavelets do not apply the tensor product to handle multivariate problems at all. This note corrects some errata in reference 3 and also supplies a few of the latest advances in the study of orthonormal RBF wavelet transforms.<|reference_end|>
arxiv
@article{chen2001errata, title={Errata and supplements to: Orthonormal RBF Wavelet and Ridgelet-like Series and Transforms for High-Dimensional Problems}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0105014}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105014}, primaryClass={cs.NA cs.CE} }
chen2001errata
arxiv-669928
cs/0105015
The alldifferent Constraint: A Survey
<|reference_start|>The alldifferent Constraint: A Survey: The constraint of difference has been known to the constraint programming community since Lauriere introduced Alice in 1978. Since then, several solving strategies have been designed for this constraint. In this paper we give both a practical overview and an abstract comparison of these different strategies.<|reference_end|>
arxiv
@article{van hoeve2001the, title={The alldifferent Constraint: A Survey}, author={W.J. van Hoeve}, journal={arXiv preprint arXiv:cs/0105015}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105015}, primaryClass={cs.PL cs.AI} }
van hoeve2001the
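To ground the survey's subject: the strongest filtering for alldifferent keeps a value in a variable's domain only if some all-different assignment uses it. The deliberately naive sketch below computes this by enumeration; the surveyed solvers achieve the same pruning efficiently (e.g., Regin's matching-based algorithm):

```python
from itertools import permutations

def alldifferent_gac(domains):
    """Brute-force generalized arc consistency for alldifferent:
    a value survives iff it appears in some all-different assignment.
    Exponential; real propagators use bipartite matching instead."""
    n = len(domains)
    keep = [set() for _ in range(n)]
    values = sorted(set().union(*domains))
    for perm in permutations(values, n):
        if all(perm[i] in domains[i] for i in range(n)):
            for i in range(n):
                keep[i].add(perm[i])
    return keep

print(alldifferent_gac([{1, 2}, {1, 2}, {1, 2, 3}]))
# -> [{1, 2}, {1, 2}, {3}]: value 3 is forced for the third variable
```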
arxiv-669929
cs/0105016
Probabilistic top-down parsing and language modeling
<|reference_start|>Probabilistic top-down parsing and language modeling: This paper describes the functioning of a broad-coverage probabilistic top-down parser, and its application to the problem of language modeling for speech recognition. The paper first introduces key notions in language modeling and probabilistic parsing, and briefly reviews some previous approaches to using syntactic structure for language modeling. A lexicalized probabilistic top-down parser is then presented, which performs very well, in terms of both the accuracy of returned parses and the efficiency with which they are found, relative to the best broad-coverage statistical parsers. A new language model which utilizes probabilistic top-down parsing is then outlined, and empirical results show that it improves upon previous work in test corpus perplexity. Interpolation with a trigram model yields an exceptional improvement relative to the improvement observed by other models, demonstrating the degree to which the information captured by our parsing model is orthogonal to that captured by a trigram model. A small recognition experiment also demonstrates the utility of the model.<|reference_end|>
arxiv
@article{roark2001probabilistic, title={Probabilistic top-down parsing and language modeling}, author={Brian Roark}, journal={arXiv preprint arXiv:cs/0105016}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105016}, primaryClass={cs.CL} }
roark2001probabilistic
arxiv-669930
cs/0105017
Optimization Over Zonotopes and Training Support Vector Machines
<|reference_start|>Optimization Over Zonotopes and Training Support Vector Machines: We make a connection between classical polytopes called zonotopes and Support Vector Machine (SVM) classifiers. We combine this connection with the ellipsoid method to give some new theoretical results on training SVMs. We also describe some special properties of soft margin C-SVMs as parameter C goes to infinity.<|reference_end|>
arxiv
@article{bern2001optimization, title={Optimization Over Zonotopes and Training Support Vector Machines}, author={Marshall Bern and David Eppstein}, journal={arXiv preprint arXiv:cs/0105017}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105017}, primaryClass={cs.CG cs.AI} }
bern2001optimization
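One of the limit properties the abstract mentions is easy to observe numerically: on separable data, the soft-margin C-SVM solution approaches the hard-margin one as C grows. A small sklearn sketch with toy data (illustrating only that limiting behavior, not the paper's zonotope machinery):

```python
import numpy as np
from sklearn.svm import SVC

# Linearly separable toy data: two classes at x = 0 and x = 2.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])

for C in (0.01, 1.0, 100.0, 1e6):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    # Margin width 2/||w|| converges to the hard-margin value as C grows.
    print(f"C={C:g}  margin width = {2.0 / np.linalg.norm(w):.3f}")
```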
arxiv-669931
cs/0105018
HTTP Cookies: Standards, Privacy, and Politics
<|reference_start|>HTTP Cookies: Standards, Privacy, and Politics: How did we get from a world where cookies were something you ate and where "non-techies" were unaware of "Netscape cookies" to a world where cookies are a hot-button privacy issue for many computer users? This paper will describe how HTTP "cookies" work, and how Netscape's original specification evolved into an IETF Proposed Standard. I will also offer a personal perspective on how what began as a straightforward technical specification turned into a political flashpoint when it tried to address non-technical issues such as privacy.<|reference_end|>
arxiv
@article{kristol2001http, title={HTTP Cookies: Standards, Privacy, and Politics}, author={David M. Kristol}, journal={ACM Transactions on Internet Technology, Vol. 1, #2, November 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105018}, primaryClass={cs.SE cs.CY} }
kristol2001http
arxiv-669932
cs/0105019
Robust Probabilistic Predictive Syntactic Processing
<|reference_start|>Robust Probabilistic Predictive Syntactic Processing: This thesis presents a broad-coverage probabilistic top-down parser, and its application to the problem of language modeling for speech recognition. The parser builds fully connected derivations incrementally, in a single pass from left-to-right across the string. We argue that the parsing approach that we have adopted is well-motivated from a psycholinguistic perspective, as a model that captures probabilistic dependencies between lexical items, as part of the process of building connected syntactic structures. The basic parser and conditional probability models are presented, and empirical results are provided for its parsing accuracy on both newspaper text and spontaneous telephone conversations. Modifications to the probability model are presented that lead to improved performance. A new language model which uses the output of the parser is then defined. Perplexity and word error rate reduction are demonstrated over trigram models, even when the trigram is trained on significantly more data. Interpolation on a word-by-word basis with a trigram model yields additional improvements.<|reference_end|>
arxiv
@article{roark2001robust, title={Robust Probabilistic Predictive Syntactic Processing}, author={Brian Roark}, journal={arXiv preprint arXiv:cs/0105019}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105019}, primaryClass={cs.CL} }
roark2001robust
arxiv-669933
cs/0105020
A Logical Framework for Convergent Infinite Computations
<|reference_start|>A Logical Framework for Convergent Infinite Computations: Classical computations cannot capture the essence of infinite computations very well. This paper focuses on a class of infinite computations called convergent infinite computations. A logic for convergent infinite computations is proposed by extending first order theories using Cauchy sequences, which has stronger expressive power than first order logic. A class of fixed points characterizing the logical properties of the limits can be represented by means of infinite-length terms defined by Cauchy sequences. We show that the limit of a sequence of first order theories can be defined in terms of distance, similar to the $\epsilon$-$N$ style definition of limits in real analysis. On the basis of infinitary terms, a computation model for convergent infinite computations is proposed. Finally, the interpretations of logic programs are extended by introducing real Herbrand models of logic programs, and a sufficient condition for computing a real Herbrand model of Horn logic programs using convergent infinite computations is given.<|reference_end|>
arxiv
@article{li2001a, title={A Logical Framework for Convergent Infinite Computations}, author={Wei Li, Shilong Ma, Yuefei Sui, Ke Xu}, journal={arXiv preprint arXiv:cs/0105020}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105020}, primaryClass={cs.LO cs.PL} }
li2001a
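The epsilon-N analogy in the abstract can be written out explicitly; a sketch, with d(.,.) standing in for the paper's distance between theories:

```latex
% Limit of a sequence of first-order theories, in epsilon-N style,
% where d(.,.) is a placeholder for the paper's distance on theories.
\[
\lim_{n \to \infty} T_n = T
\quad\Longleftrightarrow\quad
\forall \epsilon > 0 \;\; \exists N \;\; \forall n > N :\; d(T_n, T) < \epsilon .
\]
```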
arxiv-669934
cs/0105021
Solving Composed First-Order Constraints from Discrete-Time Robust Control
<|reference_start|>Solving Composed First-Order Constraints from Discrete-Time Robust Control: This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.<|reference_end|>
arxiv
@article{ratschan2001solving, title={Solving Composed First-Order Constraints from Discrete-Time Robust Control}, author={Stefan Ratschan and Luc Jaulin}, journal={arXiv preprint arXiv:cs/0105021}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105021}, primaryClass={cs.LO cs.AI cs.CE} }
ratschan2001solving
arxiv-669935
cs/0105022
Multi-Channel Parallel Adaptation Theory for Rule Discovery
<|reference_start|>Multi-Channel Parallel Adaptation Theory for Rule Discovery: In this paper, we introduce a new machine learning theory based on multi-channel parallel adaptation for rule discovery. This theory is distinguished from the familiar parallel-distributed adaptation theory of neural networks in terms of channel-based convergence to the target rules. We show how to realize this theory in a learning system named CFRule. CFRule is a parallel weight-based model, but it departs from traditional neural computing in that its internal knowledge is comprehensible. Furthermore, when the model converges upon training, each channel converges to a target rule. The model adaptation rule is derived by multi-level parallel weight optimization based on gradient descent. Since, however, gradient descent only guarantees local optimization, a multi-channel regression-based optimization strategy is developed to effectively deal with this problem. Formally, we prove that the CFRule model can explicitly and precisely encode any given rule set. Also, we prove a property related to asynchronous parallel convergence, which is a critical element of the multi-channel parallel adaptation theory for rule learning. Thanks to the quantizability nature of the CFRule model, rules can be extracted completely and soundly via a threshold-based mechanism. Finally, the practical application of the theory is demonstrated in DNA promoter recognition and hepatitis prognosis prediction.<|reference_end|>
arxiv
@article{fu2001multi-channel, title={Multi-Channel Parallel Adaptation Theory for Rule Discovery}, author={Li Min Fu}, journal={arXiv preprint arXiv:cs/0105022}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105022}, primaryClass={cs.AI} }
fu2001multi-channel
arxiv-669936
cs/0105023
Generating a 3D Simulation of a Car Accident from a Written Description in Natural Language: the CarSim System
<|reference_start|>Generating a 3D Simulation of a Car Accident from a Written Description in Natural Language: the CarSim System: This paper describes a prototype system to visualize and animate 3D scenes from car accident reports, written in French. The problem of generating such a 3D simulation can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two modules, we first designed a template formalism to represent a written accident report. The CarSim system first processes written reports, gathers relevant information, and converts it into a formal description. Then, it creates the corresponding 3D scene and animates the vehicles.<|reference_end|>
arxiv
@article{dupuy2001generating, title={Generating a 3D Simulation of a Car Accident from a Written Description in Natural Language: the CarSim System}, author={Sylvain Dupuy, Arjan Egges, Vincent Legendre and Pierre Nugues}, journal={arXiv preprint arXiv:cs/0105023}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105023}, primaryClass={cs.CL} }
dupuy2001generating
arxiv-669937
cs/0105024
Constraint Propagation in Presence of Arrays
<|reference_start|>Constraint Propagation in Presence of Arrays: We describe the use of array expressions as constraints, which represents a natural generalisation of the "element" constraint. Constraint propagation for array constraints is studied theoretically, and, for a set of domain reduction rules, the local consistency they enforce, arc consistency, is proved. An efficient algorithm is described that encapsulates the rule set and so inherits the capability to enforce arc consistency from the rules.<|reference_end|>
arxiv
@article{brand2001constraint, title={Constraint Propagation in Presence of Arrays}, author={Sebastian Brand}, journal={arXiv preprint arXiv:cs/0105024}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105024}, primaryClass={cs.PL cs.DS} }
brand2001constraint
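For the special case the "element" constraint covers (a fixed integer array A, with the constraint X = A[I]), the domain reduction rules that enforce arc consistency are one-liners: drop an index whose array entry has no support in dom(X), and drop a value no surviving index maps to. A minimal sketch of those two rules, not the paper's general array-expression algorithm:

```python
def element_propagate(dom_i, array, dom_x):
    """One round of domain reduction for element(I, A, X), X = A[I],
    with a fixed integer array A."""
    dom_i = {i for i in dom_i if 0 <= i < len(array) and array[i] in dom_x}
    dom_x = {x for x in dom_x if any(array[i] == x for i in dom_i)}
    return dom_i, dom_x

print(element_propagate({0, 1, 2, 3}, [5, 7, 7, 9], {7, 9}))
# -> ({1, 2, 3}, {7, 9}): index 0 is pruned because A[0] = 5 is unsupported
```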
arxiv-669938
cs/0105025
Market-Based Reinforcement Learning in Partially Observable Worlds
<|reference_start|>Market-Based Reinforcement Learning in Partially Observable Worlds: Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov Decision Processes (POMDPs), where an agent needs to learn short-term memories of relevant previous events in order to execute optimal actions. Most previous work, however, has focused on reactive settings (MDPs) instead of POMDPs. Here we reimplement a recent approach to market-based RL and for the first time evaluate it in a toy POMDP setting.<|reference_end|>
arxiv
@article{kwee2001market-based, title={Market-Based Reinforcement Learning in Partially Observable Worlds}, author={Ivo Kwee, Marcus Hutter, Juergen Schmidhuber}, journal={Lecture Notes in Computer Science (LNCS 2130), Proceeding of the International Conference on Artificial Neural Networks ICANN (2001) 865-873}, year={2001}, number={IDSIA-10-01}, archivePrefix={arXiv}, eprint={cs/0105025}, primaryClass={cs.AI cs.LG cs.MA cs.NE} }
kwee2001market-based
arxiv-669939
cs/0105026
Toward Natural Gesture/Speech Control of a Large Display
<|reference_start|>Toward Natural Gesture/Speech Control of a Large Display: In recent years, because of advances in computer vision research, free hand gestures have been explored as a means of human-computer interaction (HCI). Together with improved speech processing technology, this is an important step toward natural multimodal HCI. However, inclusion of non-predefined continuous gestures into a multimodal framework is a challenging problem. In this paper, we propose a structured approach for studying patterns of multimodal language in the context of 2D display control. We consider systematic analysis of gestures from observable kinematical primitives to their semantics as pertinent to a linguistic structure. The proposed semantic classification of co-verbal gestures distinguishes six categories based on their spatio-temporal deixis. We discuss the evolution of a computational framework for gesture and speech integration which was used to develop an interactive testbed (iMAP). The testbed enabled elicitation of adequate, non-sequential, multimodal patterns in a narrative mode of HCI. The user studies conducted illustrate the significance of accounting for the temporal alignment of gesture and speech parts in semantic mapping. Furthermore, co-occurrence analysis of gesture/speech production suggests syntactic organization of gestures at the lexical level.<|reference_end|>
arxiv
@article{kettebekov2001toward, title={Toward Natural Gesture/Speech Control of a Large Display}, author={S. Kettebekov, R. Sharma}, journal={arXiv preprint arXiv:cs/0105026}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105026}, primaryClass={cs.CV cs.HC} }
kettebekov2001toward
arxiv-669940
cs/0105027
Bounds on sample size for policy evaluation in Markov environments
<|reference_start|>Bounds on sample size for policy evaluation in Markov environments: Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that utilize data gathered when using one policy to estimate the value of using another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating sufficient experience and give PAC-style bounds.<|reference_end|>
arxiv
@article{peshkin2001bounds, title={Bounds on sample size for policy evaluation in Markov environments}, author={Leonid Peshkin and Sayan Mukherjee}, journal={COLT 2001: The Fourteenth Annual Conference on Computational Learning Theory}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105027}, primaryClass={cs.LG cs.AI cs.CC} }
peshkin2001bounds
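The generic form of "estimate one policy's value from another policy's data" is per-trajectory importance sampling: reweight each observed return by the likelihood ratio of the two policies. A sketch of that idea only; the paper's estimators and PAC bounds are developed rigorously:

```python
import numpy as np

def importance_sampling_value(trajectories, pi_b, pi_e, gamma=1.0):
    """Estimate the value of evaluation policy pi_e from trajectories
    collected under behavior policy pi_b. Each trajectory is a list of
    (state, action, reward) triples; pi(a, s) gives the action probability."""
    estimates = []
    for traj in trajectories:
        weight, ret, discount = 1.0, 0.0, 1.0
        for s, a, r in traj:
            weight *= pi_e(a, s) / pi_b(a, s)  # likelihood ratio
            ret += discount * r
            discount *= gamma
        estimates.append(weight * ret)
    return np.mean(estimates)

# Toy check: reward equals the action; behavior uniform, evaluation prefers a=1.
pi_b = lambda a, s: 0.5
pi_e = lambda a, s: 0.8 if a == 1 else 0.2
trajs = [[(0, a, float(a))] for a in (0, 1, 0, 1)]
print(importance_sampling_value(trajs, pi_b, pi_e))  # 0.8, the true value
```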
arxiv-669941
cs/0105028
When being Weak is Brave: Privacy in Recommender Systems
<|reference_start|>When being Weak is Brave: Privacy in Recommender Systems: We explore the conflict between personalization and privacy that arises from the existence of weak ties. A weak tie is an unexpected connection that provides serendipitous recommendations. However, information about weak ties could be used in conjunction with other sources of data to uncover identities and reveal other personal information. In this article, we use a graph-theoretic model to study the benefit and risk from weak ties.<|reference_end|>
arxiv
@article{ramakrishnan2001when, title={When being Weak is Brave: Privacy in Recommender Systems}, author={Naren Ramakrishnan, Benjamin J. Keller, Batul J. Mirza, Ananth Y. Grama, and George Karypis}, journal={arXiv preprint arXiv:cs/0105028}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105028}, primaryClass={cs.CR cs.DS} }
ramakrishnan2001when
arxiv-669942
cs/0105029
Coloring k-colorable graphs using relatively small palettes
<|reference_start|>Coloring k-colorable graphs using relatively small palettes: We obtain the following new coloring results: * A 3-colorable graph on $n$ vertices with maximum degree~$\Delta$ can be colored, in polynomial time, using $O((\Delta \log\Delta)^{1/3} \cdot\log{n})$ colors. This slightly improves an $O((\Delta^{1/3} \log^{1/2}\Delta)\cdot\log{n})$ bound given by Karger, Motwani and Sudan. More generally, $k$-colorable graphs with maximum degree $\Delta$ can be colored, in polynomial time, using $O((\Delta^{1-2/k}\log^{1/k}\Delta) \cdot\log{n})$ colors. * A 4-colorable graph on $n$ vertices can be colored, in polynomial time, using $\tilde{O}(n^{7/19})$ colors. This improves an $\tilde{O}(n^{2/5})$ bound given again by Karger, Motwani and Sudan. More generally, $k$-colorable graphs on $n$ vertices can be colored, in polynomial time, using $\tilde{O}(n^{\alpha_k})$ colors, where $\alpha_5=97/207$, $\alpha_6=43/79$, $\alpha_7=1391/2315$, $\alpha_8=175/271$, ... The first result is obtained by a slightly more refined probabilistic analysis of the semidefinite programming based coloring algorithm of Karger, Motwani and Sudan. The second result is obtained by combining the coloring algorithm of Karger, Motwani and Sudan, the combinatorial coloring algorithms of Blum, and an extension of a technique of Alon and Kahale (which is based on the Karger, Motwani and Sudan algorithm) for finding relatively large independent sets in graphs that are guaranteed to have very large independent sets. The extension of the Alon and Kahale result may be of independent interest.<|reference_end|>
arxiv
@article{halperin2001coloring, title={Coloring k-colorable graphs using relatively small palettes}, author={Eran Halperin, Ram Nathaniel and Uri Zwick}, journal={arXiv preprint arXiv:cs/0105029}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105029}, primaryClass={cs.DS cs.CC cs.DM} }
halperin2001coloring
arxiv-669943
cs/0105030
The OLAC Metadata Set and Controlled Vocabularies
<|reference_start|>The OLAC Metadata Set and Controlled Vocabularies: As language data and associated technologies proliferate and as the language resources community rapidly expands, it has become difficult to locate and reuse existing resources. Are there any lexical resources for such-and-such a language? What tool can work with transcripts in this particular format? What is a good format to use for linguistic data of this type? Questions like these dominate many mailing lists, since web search engines are an unreliable way to find language resources. This paper describes a new digital infrastructure for language resource discovery, based on the Open Archives Initiative, and called OLAC -- the Open Language Archives Community. The OLAC Metadata Set and the associated controlled vocabularies facilitate consistent description and focussed searching. We report progress on the metadata set and controlled vocabularies, describing current issues and soliciting input from the language resources community.<|reference_end|>
arxiv
@article{bird2001the, title={The OLAC Metadata Set and Controlled Vocabularies}, author={Steven Bird and Gary Simons}, journal={Proceedings of the ACL/EACL Workshop on Sharing Tools and Resources for Research and Education, Toulouse, July 2001, Association for Computational Linguistics}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105030}, primaryClass={cs.CL cs.DL} }
bird2001the
arxiv-669944
cs/0105031
State Analysis and Aggregation Study for Multicast-based Micro Mobility
<|reference_start|>State Analysis and Aggregation Study for Multicast-based Micro Mobility: IP mobility addresses the problem of changing the network point-of-attachment transparently during movement. Mobile IP is the standard proposed by the IETF. Several studies, however, have shown that Mobile IP has several drawbacks, such as triangle routing and poor handoff performance. Multicast-based mobility has been proposed as a promising solution to the above problems, incurring lower end-to-end delays and fast, smooth handoff. Nonetheless, such an architecture suffers from multicast state scalability problems as the number of mobile nodes grows. This architecture also requires ubiquitous multicast deployment and more complex security measures. To alleviate these problems, we propose an intra-domain multicast-based mobility solution. A mobility proxy allocates a multicast address for each mobile that moves into its domain. The mobile uses this multicast address within a domain for micro mobility. Aggregation is also considered to reduce the multicast state. We conduct a multicast state analysis to study the efficiency of several aggregation techniques. We use extensive simulation to evaluate our protocol's performance over a variety of real and generated topologies, taking aggregation gain as the metric for our evaluation. Our simulation results show that in general leaky aggregation obtains better gains than perfect aggregation. We also notice that aggregation gain increases with the number of visiting mobile nodes and with a decrease in the number of mobility proxies within a domain.<|reference_end|>
arxiv
@article{helmy2001state, title={State Analysis and Aggregation Study for Multicast-based Micro Mobility}, author={Ahmed Helmy}, journal={arXiv preprint arXiv:cs/0105031}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105031}, primaryClass={cs.NI} }
helmy2001state
arxiv-669945
cs/0105032
Learning to Cooperate via Policy Search
<|reference_start|>Learning to Cooperate via Policy Search: Cooperative games are those in which both agents share the same payoff structure. Value-based reinforcement-learning algorithms, such as variants of Q-learning, have been applied to learning cooperative games, but they only apply when the game state is completely observable to both agents. Policy search methods are a reasonable alternative to value-based methods for partially observable environments. In this paper, we provide a gradient-based distributed policy-search method for cooperative games and compare the notion of local optimum to that of Nash equilibrium. We demonstrate the effectiveness of this method experimentally in a small, partially observable simulated soccer domain.<|reference_end|>
arxiv
@article{peshkin2001learning, title={Learning to Cooperate via Policy Search}, author={Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau and Leslie Pack Kaelbling}, journal={Sixteenth Conference on Uncertainty in Artificial Intelligence, 2000}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105032}, primaryClass={cs.LG cs.MA} }
peshkin2001learning
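The gradient-based distributed policy search that Peshkin et al. describe can be illustrated on a toy cooperative matrix game. A minimal sketch, assuming a REINFORCE-style estimator with independent softmax policies; the payoff matrix, learning rate, and episode count are invented for the example, and this is not the authors' soccer-domain code:

    import numpy as np

    # Invented 2x2 cooperative payoff matrix; both agents get the same reward.
    PAYOFF = np.array([[1.0, 0.0],
                       [0.0, 0.5]])   # (0, 0) is globally best; (1, 1) is a local optimum

    def softmax(theta):
        e = np.exp(theta - theta.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    theta = [np.zeros(2), np.zeros(2)]   # independent parameters per agent
    alpha = 0.1

    for step in range(5000):
        probs = [softmax(t) for t in theta]
        acts = [rng.choice(2, p=p) for p in probs]
        r = PAYOFF[acts[0], acts[1]]     # common payoff (cooperative game)
        # Each agent ascends its own REINFORCE gradient, seeing only its action and r.
        for i in range(2):
            grad_logp = -probs[i]
            grad_logp[acts[i]] += 1.0
            theta[i] += alpha * r * grad_logp

    print([softmax(t).round(3) for t in theta])

Because each agent updates only its own parameters from the common payoff, the pair typically climbs to a local optimum, which, as the paper discusses, need not be the best Nash equilibrium.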
arxiv-669946
cs/0105033
Lectures on Reduce and Maple at UAM I - Mexico
<|reference_start|>Lectures on Reduce and Maple at UAM I - Mexico: These lectures give a brief introduction to the Computer Algebra systems Reduce and Maple. The aim is to provide a systematic survey of the most important commands and concepts. In particular, this includes a discussion of simplification schemes and the handling of simplification and substitution rules (e.g., a Lie Algebra is implemented in Reduce by means of simplification rules). Another emphasis is on the different implementations of tensor calculi and the exterior calculus in Reduce and Maple and their application in Gravitation theory and Differential Geometry. I gave the lectures at the Universidad Autonoma Metropolitana-Iztapalapa, Departamento de Fisica, Mexico, in November 1999.<|reference_end|>
arxiv
@article{toussaint2001lectures, title={Lectures on Reduce and Maple at UAM I - Mexico}, author={Marc Toussaint}, journal={arXiv preprint arXiv:cs/0105033}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105033}, primaryClass={cs.SC cs.MS gr-qc} }
toussaint2001lectures
arxiv-669947
cs/0105034
On the Area of Hypercube Layouts
<|reference_start|>On the Area of Hypercube Layouts: This paper precisely analyzes the wire density and required area in standard layout styles for the hypercube. The most natural, regular layout of a hypercube of N^2 nodes in the plane, in an N x N grid arrangement, uses floor(2N/3)+1 horizontal wiring tracks for each row of nodes. (The number of tracks per row can be reduced by 1 with a less regular design.) This paper also gives a simple formula for the wire density at any cut position and a full characterization of all places where the wire density is maximized (which does not occur at the bisection).<|reference_end|>
arxiv
@article{greenberg2001on, title={On the Area of Hypercube Layouts}, author={Ronald I. Greenberg and Lee Guan}, journal={condensed and revised in Information Processing Letters, v. 84, n. 1, pp. 41--46, Sep. 2002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105034}, primaryClass={cs.DC} }
greenberg2001on
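The track count Greenberg and Guan report for the regular layout is a closed form and easy to tabulate. A minimal sketch; the function name is ours:

    def tracks_per_row(n):
        """Horizontal wiring tracks per node row in the regular N x N layout
        of an N^2-node hypercube, per the bound quoted in the abstract."""
        return (2 * n) // 3 + 1

    for n in (4, 8, 16, 32):
        print(n, tracks_per_row(n))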
arxiv-669948
cs/0105035
Historical Dynamics of Lexical System as Random Walk Process
<|reference_start|>Historical Dynamics of Lexical System as Random Walk Process: We propose to consider changes of word meaning in diachrony as a semicontinuous random walk with reflecting and absorbing screens. The basic characteristics of the word life cycle are defined. The model has been verified on data on the distribution of Russian words across various age periods.<|reference_end|>
arxiv
@article{kromer2001historical, title={Historical Dynamics of Lexical System as Random Walk Process}, author={Victor Kromer}, journal={arXiv preprint arXiv:cs/0105035}, year={2001}, archivePrefix={arXiv}, eprint={cs/0105035}, primaryClass={cs.CL} }
kromer2001historical
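Kromer's model of the word life cycle as a random walk with a reflecting screen and an absorbing screen can be simulated directly. A minimal sketch; the screen positions, Gaussian step distribution, and starting point are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def lifetime(reflect=0.0, absorb=10.0, x0=1.0, sigma=0.5, max_steps=10_000):
        """One word 'life cycle': Gaussian increments, a reflecting screen at
        the bottom, an absorbing screen at the top; all numbers illustrative."""
        x = x0
        for t in range(max_steps):
            x += rng.normal(0.0, sigma)
            if x < reflect:           # reflecting screen
                x = 2 * reflect - x
            if x >= absorb:           # absorbing screen ends the cycle
                return t
        return max_steps

    samples = [lifetime() for _ in range(1000)]
    print("mean lifetime:", np.mean(samples))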
arxiv-669949
cs/0105036
Disjunctive Logic Programs with Inheritance
<|reference_start|>Disjunctive Logic Programs with Inheritance: The paper proposes a new knowledge representation language, called DLP<, which extends disjunctive logic programming (with strong negation) by inheritance. The addition of inheritance enhances the knowledge modeling features of the language providing a natural representation of default reasoning with exceptions. A declarative model-theoretic semantics of DLP< is provided, which is shown to generalize the Answer Set Semantics of disjunctive logic programs. The knowledge modeling features of the language are illustrated by encoding classical nonmonotonic problems in DLP<. The complexity of DLP< is analyzed, proving that inheritance does not cause any computational overhead, as reasoning in DLP< has exactly the same complexity as reasoning in disjunctive logic programming. This is confirmed by the existence of an efficient translation from DLP< to plain disjunctive logic programming. Using this translation, an advanced KR system supporting the DLP< language has been implemented on top of the DLV system and has subsequently been integrated into DLV.<|reference_end|>
arxiv
@article{buccafurri2001disjunctive, title={Disjunctive Logic Programs with Inheritance}, author={Francesco Buccafurri, Wolfgang Faber, Nicola Leone}, journal={Theory and Practice of Logic Programming 2(3):293-321, 2002}, year={2001}, doi={10.1017/S1471068402001394}, archivePrefix={arXiv}, eprint={cs/0105036}, primaryClass={cs.LO cs.AI} }
buccafurri2001disjunctive
arxiv-669950
cs/0105037
Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation
<|reference_start|>Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation: We present a probabilistic model that uses both prosodic and lexical cues for the automatic segmentation of speech into topically coherent units. We propose two methods for combining lexical and prosodic information using hidden Markov models and decision trees. Lexical information is obtained from a speech recognizer, and prosodic features are extracted automatically from speech waveforms. We evaluate our approach on the Broadcast News corpus, using the DARPA-TDT evaluation metrics. Results show that the prosodic model alone is competitive with word-based segmentation methods. Furthermore, we achieve a significant reduction in error by combining the prosodic and word-based knowledge sources.<|reference_end|>
arxiv
@article{tur2001integrating, title={Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation}, author={G. Tur, D. Hakkani-Tur, A. Stolcke, E. Shriberg}, journal={Computation Linguistics 27(1), 31-57, March 2001}, year={2001}, doi={10.1162/089120101300346796}, archivePrefix={arXiv}, eprint={cs/0105037}, primaryClass={cs.CL} }
tur2001integrating
arxiv-669951
cs/0106001
Approximating the satisfiability threshold for random k-XOR-formulas
<|reference_start|>Approximating the satisfiability threshold for random k-XOR-formulas: In this paper we study random linear systems with $k$ variables per equation over the finite field GF(2), or equivalently $k$-XOR-CNF formulas. In a previous paper Creignou and Daud\'e proved that the phase transition for the consistency (satisfiability) of such systems (formulas) exhibits a sharp threshold. Here we prove that the phase transition occurs when the number of equations (clauses) is proportional to the number of variables. For any $k\ge 3$ we establish the first estimates for the critical ratio. For $k=3$ we obtain 0.93 as an upper bound and 0.89 as a lower bound, whereas experiments suggest that the critical ratio is approximately 0.92.<|reference_end|>
arxiv
@article{creignou2001approximating, title={Approximating the satisfiability threshold for random k-XOR-formulas}, author={Nadia Creignou, Herve Daude and Olivier Dubois}, journal={arXiv preprint arXiv:cs/0106001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106001}, primaryClass={cs.DM} }
creignou2001approximating
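For the k-XOR entry above, consistency of a random GF(2) system is decidable in polynomial time by Gaussian elimination, so the threshold can be probed empirically. A minimal sketch with invented instance sizes; this is a naive eliminator, not the authors' experimental code:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_kxor(n, m, k=3):
        """m random k-XOR clauses over n variables, as a GF(2) system A x = b."""
        A = np.zeros((m, n), dtype=np.uint8)
        for i in range(m):
            A[i, rng.choice(n, size=k, replace=False)] = 1
        b = rng.integers(0, 2, size=m, dtype=np.uint8)
        return A, b

    def consistent_gf2(A, b):
        """Gaussian elimination over GF(2); True iff A x = b has a solution."""
        M = np.concatenate([A, b[:, None]], axis=1)
        m, cols = M.shape
        row = 0
        for col in range(cols - 1):
            piv = next((r for r in range(row, m) if M[r, col]), None)
            if piv is None:
                continue
            M[[row, piv]] = M[[piv, row]]
            for r in range(m):
                if r != row and M[r, col]:
                    M[r] ^= M[row]
            row += 1
        # inconsistent iff some row reads 0 = 1 after elimination
        return not any(M[r, -1] and not M[r, :-1].any() for r in range(m))

    n = 200
    for ratio in (0.7, 0.92, 1.1):   # straddling the conjectured threshold
        sat = sum(consistent_gf2(*random_kxor(n, int(ratio * n))) for _ in range(50))
        print(f"m/n = {ratio}: {sat}/50 consistent")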
arxiv-669952
cs/0106002
Solving Assembly Line Balancing Problems by Combining IP and CP
<|reference_start|>Solving Assembly Line Balancing Problems by Combining IP and CP: Assembly line balancing problems consist in partitioning the work necessary to assemble a number of products among different stations of an assembly line. We present a hybrid approach for solving such problems, which combines constraint programming and integer programming.<|reference_end|>
arxiv
@article{bockmayr2001solving, title={Solving Assembly Line Balancing Problems by Combining IP and CP}, author={Alexander Bockmayr and Nicolai Pisaruk}, journal={arXiv preprint arXiv:cs/0106002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106002}, primaryClass={cs.DM} }
bockmayr2001solving
arxiv-669953
cs/0106003
A note on radial basis function computing
<|reference_start|>A note on radial basis function computing: This note serves three purposes involving our latest advances on the radial basis function (RBF) approach. First, we introduce a new scheme applying the boundary knot method (BKM) to a nonlinear convection-diffusion problem. It is stressed that the new scheme directly results in a linear BKM formulation of nonlinear problems by using response point-dependent RBFs, which can be solved by any linear solver. We then only need to solve a single nonlinear algebraic equation for the desired unknown at an inner node of interest. The numerical results demonstrate the high accuracy and efficiency of this nonlinear BKM strategy. Second, we extend the concept of the distance function to include time-space and variable-transformation distance functions. Finally, we demonstrate that if the nodes are symmetrically placed, the RBF coefficient matrices have either centrosymmetric or skew-centrosymmetric structure. The factorization features of such matrices lead to a considerable reduction in the RBF computing effort. A simple approach is also presented to reduce the ill-conditioning of RBF interpolation matrices in general cases.<|reference_end|>
arxiv
@article{chen2001a, title={A note on radial basis function computing}, author={W. Chen and W. He}, journal={Int. J. Nonlinear Modelling in Sci. & Engng., 1(1), 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106003}, primaryClass={cs.CE cs.CG} }
chen2001a
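The centrosymmetry claim in the third part of Chen and He's note is easy to check numerically: for a radial kernel on nodes placed symmetrically about the origin, the interpolation matrix A satisfies JAJ = A for the exchange matrix J. A minimal sketch with an invented Gaussian kernel width:

    import numpy as np

    # Symmetrically placed 1-D nodes about the origin.
    x = np.linspace(-1.0, 1.0, 9)

    # RBF interpolation matrix A[i, j] = phi(|x_i - x_j|), Gaussian kernel.
    phi = lambda r: np.exp(-(r / 0.5) ** 2)
    A = phi(np.abs(x[:, None] - x[None, :]))

    # Centrosymmetry: J A J == A, where J is the exchange (row-reversal) matrix.
    J = np.eye(len(x))[::-1]
    print("centrosymmetric:", np.allclose(J @ A @ J, A))

This structure is what permits the factorization savings the note mentions, since a centrosymmetric system splits into two half-size systems.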
arxiv-669954
cs/0106004
Soft Scheduling
<|reference_start|>Soft Scheduling: Classical notions of disjunctive and cumulative scheduling are studied from the point of view of soft constraint satisfaction. Soft disjunctive scheduling is introduced as an instance of soft CSP, and the preferences included in this problem are applied to generate a lower bound based on the existing discrete capacity resource. Timetabling problems at Purdue University and at the Faculty of Informatics of Masaryk University, which consider students' individual course requirements, demonstrate practical problems that are solved via the proposed methods. The implementation of a general preference constraint solver is discussed, and first computational results for the timetabling problem are presented.<|reference_end|>
arxiv
@article{rudova2001soft, title={Soft Scheduling}, author={Hana Rudova}, journal={arXiv preprint arXiv:cs/0106004}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106004}, primaryClass={cs.AI cs.PL} }
rudova2001soft
arxiv-669955
cs/0106005
The Representation of Legal Contracts
<|reference_start|>The Representation of Legal Contracts: The paper outlines ongoing research on logic-based tools for the analysis and representation of legal contracts of the kind frequently encountered in large-scale engineering projects and complex, long-term trading agreements. We consider both contract formation and contract performance, in each case identifying the representational issues and the prospects for providing automated support tools.<|reference_end|>
arxiv
@article{daskalopulu2001the, title={The Representation of Legal Contracts}, author={Aspassia Daskalopulu, Marek Sergot}, journal={AI & Society, vol. 11, Nos 1/2, pp. 6-17, 1997}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106005}, primaryClass={cs.AI cs.CY} }
daskalopulu2001the
arxiv-669956
cs/0106006
A Constraint-Driven System for Contract Assembly
<|reference_start|>A Constraint-Driven System for Contract Assembly: We present an approach for modelling the structure and coarse content of legal documents with a view to providing automated support for the drafting of contracts and contract database retrieval. The approach is designed to be applicable where contract drafting is based on model-form contracts or on existing examples of a similar type. The main features of the approach are: (1) the representation addresses the structure and the interrelationships between the constituent parts of contracts, but not the text of the document itself; (2) the representation of documents is separated from the mechanisms that manipulate it; and (3) the drafting process is subject to a collection of explicitly stated constraints that govern the structure of the documents. We describe the representation of document instances and of 'generic documents', which are data structures used to drive the creation of new document instances, and we show extracts from a sample session to illustrate the features of a prototype system implemented in MacProlog.<|reference_end|>
arxiv
@article{daskalopulu2001a, title={A Constraint-Driven System for Contract Assembly}, author={Aspassia Daskalopulu, Marek Sergot}, journal={Proc. 5th International Conference on Artificial Intelligence and Law, ACM Press, pp. 62-69, 1995}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106006}, primaryClass={cs.AI} }
daskalopulu2001a
arxiv-669957
cs/0106007
Modelling Contractual Arguments
<|reference_start|>Modelling Contractual Arguments: One influential approach to assessing the "goodness" of arguments is offered by the Pragma-Dialectical school (p-d) (Eemeren & Grootendorst 1992). This can be compared with Rhetorical Structure Theory (RST) (Mann & Thompson 1988), an approach that originates in discourse analysis. In p-d terms an argument is good if it avoids committing a fallacy, whereas in RST terms an argument is good if it is coherent. RST has been criticised (Snoeck Henkemans 1997) for providing only a partially functional account of argument, and similar criticisms have been raised in the Natural Language Generation (NLG) community, particularly by Moore & Pollack (1992), with regard to its account of intentionality in text in general. Mann and Thompson themselves note that although RST can be successfully applied to a wide range of texts from diverse domains, it fails to characterise some types of text, most notably legal contracts. There is ongoing research in the Artificial Intelligence and Law community exploring the potential for providing electronic support to contract negotiators, focusing on long-term, complex engineering agreements (see for example Daskalopulu & Sergot 1997). This paper provides a brief introduction to RST and illustrates its shortcomings with respect to contractual text. An alternative approach for modelling argument structure is presented which not only caters for contractual text, but also overcomes the aforementioned limitations of RST.<|reference_end|>
arxiv
@article{reed2001modelling, title={Modelling Contractual Arguments}, author={Chris Reed, Aspassia Daskalopulu}, journal={Proceedings of the 4th International Conference on Argumentation, SICSAT, pp. 686-692, 1998}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106007}, primaryClass={cs.AI} }
reed2001modelling
arxiv-669958
cs/0106008
Computing Functional and Relational Box Consistency by Structured Propagation in Atomic Constraint Systems
<|reference_start|>Computing Functional and Relational Box Consistency by Structured Propagation in Atomic Constraint Systems: Box consistency has been observed to yield exponentially better performance than chaotic constraint propagation in the interval constraint system obtained by decomposing the original expression into primitive constraints. The claim was made that the improvement is due to avoiding decomposition. In this paper we argue that the improvement is due to replacing chaotic iteration by a more structured alternative. To this end we distinguish the existing notion of box consistency from relational box consistency. We show that from a computational point of view it is important to maintain the functional structure in constraint systems that are associated with a system of equations. So far, it has only been considered computationally important that constraint propagation be fair. With the additional structure of functional constraint systems, one can define and implement computationally effective, structured, truncated constraint propagations. The existing algorithm for box consistency is one such. Our results suggest that there are others worth investigating.<|reference_end|>
arxiv
@article{van emden2001computing, title={Computing Functional and Relational Box Consistency by Structured Propagation in Atomic Constraint Systems}, author={M.H. van Emden}, journal={arXiv preprint arXiv:cs/0106008}, year={2001}, number={Univ. of Victoria Computer Science Dept Technical Report DCS-266-IR}, archivePrefix={arXiv}, eprint={cs/0106008}, primaryClass={cs.PL cs.AI} }
van emden2001computing
arxiv-669959
cs/0106009
Model Checking Contractual Protocols
<|reference_start|>Model Checking Contractual Protocols: This paper discusses how model checking, a technique used for the verification of behavioural requirements of dynamic systems, can be usefully deployed for the verification of contracts. A process view of agreements between parties is taken, whereby a contract is modelled as it evolves over time in terms of actions or more generally events that effect changes in its state. Modelling is done with Petri Nets in the spirit of other research work on the representation of trade procedures. The paper illustrates all the phases of the verification technique through an example and argues that the approach is useful particularly in the context of pre-contractual negotiation and contract drafting. The work reported here is part of a broader project on the development of logic-based tools for the analysis and representation of legal contracts.<|reference_end|>
arxiv
@article{daskalopulu2001model, title={Model Checking Contractual Protocols}, author={Aspassia Daskalopulu}, journal={In Breuker, Leenes & Winkels (eds) Legal Knowledge and Information Systems, JURIX 2000: The 13th Annual Conference, Frontiers in Artificial Intelligence and Applications Series, IOS Press, pp. 35-47, 2000}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106009}, primaryClass={cs.SE cs.LO} }
daskalopulu2001model
arxiv-669960
cs/0106010
Modelling Legal Contracts as Processes
<|reference_start|>Modelling Legal Contracts as Processes: This paper concentrates on the representation of the legal relations that obtain between parties once they have entered a contractual agreement, and on their evolution as the agreement progresses through time. Contracts are regarded as processes, and they are analysed in terms of the obligations that are active at various points during their life span. An informal notation is introduced that conveniently summarizes the states of an agreement as it evolves over time. Such a representation enables us to determine what the status of an agreement is, given an event or a sequence of events that concern the performance of actions by the agents involved. This is useful both in the context of contract drafting (where parties might wish to preview how their agreement might evolve) and in the context of contract performance monitoring (where parties might wish to establish what their legal positions are once their agreement is in force). The discussion is based on an example that illustrates some typical patterns of contractual obligations.<|reference_end|>
arxiv
@article{daskalopulu2001modelling, title={Modelling Legal Contracts as Processes}, author={Aspassia Daskalopulu}, journal={11th International Conference and Workshop on Database and Expert Systems Applications, IEEE C. S. Press, pp. 1074-1079, 2000}, year={2001}, doi={10.1109/DEXA.2000.875160}, archivePrefix={arXiv}, eprint={cs/0106010}, primaryClass={cs.AI cs.LO} }
daskalopulu2001modelling
arxiv-669961
cs/0106011
Computational properties of environment-based disambiguation
<|reference_start|>Computational properties of environment-based disambiguation: The standard pipeline approach to semantic processing, in which sentences are morphologically and syntactically resolved to a single tree before they are interpreted, is a poor fit for applications such as natural language interfaces. This is because the environment information, in the form of the objects and events in the application's run-time environment, cannot be used to inform parsing decisions unless the input sentence is semantically analyzed, but this does not occur until after parsing in the single-tree semantic architecture. This paper describes the computational properties of an alternative architecture, in which semantic analysis is performed on all possible interpretations during parsing, in polynomial time.<|reference_end|>
arxiv
@article{schuler2001computational, title={Computational properties of environment-based disambiguation}, author={William Schuler}, journal={arXiv preprint arXiv:cs/0106011}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106011}, primaryClass={cs.CL cs.HC} }
schuler2001computational
arxiv-669962
cs/0106012
Computational Properties of Metaquerying Problems
<|reference_start|>Computational Properties of Metaquerying Problems: Metaquerying is a datamining technology by which hidden dependencies among several database relations can be discovered. This tool has already been successfully applied to several real-world applications. Recent papers provide only preliminary results about the complexity of metaquerying. In this paper we define several variants of metaquerying that encompass, as far as we know, all variants defined in the literature. We study both the combined complexity and the data complexity of these variants. We show that, under the combined complexity measure, metaquerying is generally intractable (unless P=NP), lying sometimes quite high in the complexity hierarchies (as high as NP^PP), depending on the characteristics of the plausibility index. However, we are able to single out some tractable and interesting metaquerying cases (whose combined complexity is LOGCFL-complete). As for the data complexity of metaquerying, we prove that, in general, this is in TC0, but lies within AC0 in some simpler cases. Finally, we discuss implementation of metaqueries, by providing algorithms to answer them.<|reference_end|>
arxiv
@article{angiulli2001computational, title={Computational Properties of Metaquerying Problems}, author={F. Angiulli, R. Ben-Eliyahu-Zohary, G. Ianni, L. Palopoli}, journal={arXiv preprint arXiv:cs/0106012}, year={2001}, number={ISICNR n.7, 2001}, archivePrefix={arXiv}, eprint={cs/0106012}, primaryClass={cs.DB cs.CC} }
angiulli2001computational
arxiv-669963
cs/0106013
The Set of Equations to Evaluate Objects
<|reference_start|>The Set of Equations to Evaluate Objects: The notion of an equational shell is studied as a way to involve objects and their environment. Appropriate methods are studied as valid embeddings of refined objects. The refinement process determines the linkages between the variety of possible representations, giving rise to variants of computation. The case study is equipped with adjusted equational systems that validate the initial applicative framework.<|reference_end|>
arxiv
@article{ismailova2001the, title={The Set of Equations to Evaluate Objects}, author={Larissa Ismailova}, journal={Proceedings of the 3-rd International Workshop on Computer Science and Information Technologies CSIT'2001, Ufa, Yangantau, Russia}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106013}, primaryClass={cs.LO cs.PL cs.SC} }
ismailova2001the
arxiv-669964
cs/0106014
L.T.Kuzin: Research Program
<|reference_start|>L.T.Kuzin: Research Program: Lev T. Kuzin (1928--1997) is one of the founders of modern cybernetics and information science in Russia. He was awarded the USSR State Prize for his inspiring vision of the future of technical cybernetics and for his invention and innovation of key technologies. In his last years he was interested in computational models of a geometrical and algebraic nature and in their applications in various branches of computer science and information technology. In recent years, interest in computational models based on the notion of object has grown tremendously, stimulating renewed interest in Kuzin's ideas. This year, marking the 50th Anniversary of Cybernetics and the occasion of his 70th birthday on September 12, 1998, seems especially appropriate for discussing Kuzin's Research Program.<|reference_end|>
arxiv
@article{wolfengagen2001l.t.kuzin:, title={L.T.Kuzin: Research Program}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 1st International Workshop on Computer Science and Information Technologies CSIT'99. Moscow, Russia, January 18--22, 1999. Vol. 1, pp. 97--106}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106014}, primaryClass={cs.DM cs.AI cs.SE} }
wolfengagen2001l.t.kuzin:
arxiv-669965
cs/0106015
Organizing Encyclopedic Knowledge based on the Web and its Application to Question Answering
<|reference_start|>Organizing Encyclopedic Knowledge based on the Web and its Application to Question Answering: We propose a method to generate large-scale encyclopedic knowledge, which is valuable for much NLP research, based on the Web. We first search the Web for pages containing a term in question. Then we use linguistic patterns and HTML structures to extract text fragments describing the term. Finally, we organize extracted term descriptions based on word senses and domains. In addition, we apply an automatically generated encyclopedia to a question answering system targeting the Japanese Information-Technology Engineers Examination.<|reference_end|>
arxiv
@article{fujii2001organizing, title={Organizing Encyclopedic Knowledge based on the Web and its Application to Question Answering}, author={Atsushi Fujii and Tetsuya Ishikawa}, journal={Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL-EACL 2001), pp.196-203, July. 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106015}, primaryClass={cs.CL} }
fujii2001organizing
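The pattern-based extraction step in Fujii and Ishikawa's pipeline can be sketched with toy English analogues of their (Japanese) linguistic patterns; the patterns and sample page below are invented, and the real system also exploits HTML structure:

    import re

    # Toy English analogues of the paper's linguistic patterns:
    # "X is a/an/the ...", "X, a/an ...,". Both patterns are invented here.
    PATTERNS = [
        r"{t}\s+is\s+(?:a|an|the)\s+([^.]+)\.",
        r"{t},\s+(?:a|an)\s+([^,]+),",
    ]

    def extract_descriptions(term, page_text):
        """Collect candidate description fragments for `term` from page text."""
        found = []
        for pat in PATTERNS:
            found += re.findall(pat.format(t=re.escape(term)), page_text, re.I)
        return found

    page = ("XML is a markup language for structured documents. "
            "Many tools accept XML, an interchange format, as input.")
    print(extract_descriptions("XML", page))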
arxiv-669966
cs/0106016
File mapping Rule-based DBMS and Natural Language Processing
<|reference_start|>File mapping Rule-based DBMS and Natural Language Processing: This paper describes a system for the storage, extraction and processing of information structured similarly to natural language. For recursive inference the system uses rules having the same representation as the data. The information storage environment is provided by the File Mapping (SHM) mechanism of the operating system. The paper states the main principles of constructing the dynamic data structure and the language for recording inference rules, considers the features of the available implementation, and describes an application realizing semantic information retrieval in natural language.<|reference_end|>
arxiv
@article{novikov2001file, title={File mapping Rule-based DBMS and Natural Language Processing}, author={Vjacheslav M. Novikov}, journal={arXiv preprint arXiv:cs/0106016}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106016}, primaryClass={cs.CL cs.AI cs.DB cs.IR cs.LG cs.PL} }
novikov2001file
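The Novikov entry builds on the operating system's file-mapping primitive; the snippet below shows only that primitive via Python's mmap module, not the authors' DBMS. The file name and record layout are invented:

    import mmap
    import os

    path = "facts.db"   # hypothetical storage file

    # Create a small fixed-size backing file.
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)

    # Map the file into memory and write a 'fact' through the mapping.
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as m:
            m[0:14] = b"cat isa animal"
            m.flush()               # persist the page to the backing file

    with open(path, "rb") as f:
        print(f.read(14))           # b'cat isa animal'
    os.remove(path)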
arxiv-669967
cs/0106017
An object evaluator to generate flexible applications
<|reference_start|>An object evaluator to generate flexible applications: This paper contains a brief discussion of an object evaluator based on principles of evaluation in a category. The main tool system, referred to as the Application Development Environment (ADE), is used to build database applications involving a graphical user interface (GUI). The separation of database access from the user interface is achieved by distinguishing potential and actual objects. A variety of applications may be generated that communicate with different and distinct desktop databases. The technique of commutative diagrams makes it possible to involve retrieval and invocation of delayed procedures.<|reference_end|>
arxiv
@article{ismailova2001an, title={An object evaluator to generate flexible applications}, author={Larissa Ismailova, Konstantin Zinchenko}, journal={Proceedings of the 1-st East-European Symposium on Advances in Databases and Information Systems, ADBIS'97, St.-Petersburg, September 2--5, 1997, Vol. 1, pp. 141--148}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106017}, primaryClass={cs.LO cs.PL} }
ismailova2001an
arxiv-669968
cs/0106018
Building the access pointers to a computation environment
<|reference_start|>Building the access pointers to a computation environment: A common object technique equipped with categorical and computational styles is briefly outlined. An object is evaluated by embedding it in a host computational environment, which is a domain-ranged structure. An embedded object is accessed by pointers generated within the host system. To assist with easy extraction of the evaluation result, a pre-embedded object is generated. It is observed as a decomposition into a substitutional part and an access-function part, which are generated during object evaluation.<|reference_end|>
arxiv
@article{wolfengagen2001building, title={Building the access pointers to a computation environment}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 1st East-European Symposium on Advances in Databases and Information Systems, ADBIS'97, St.-Petersburg, September 2--5, 1997, Russia. Vol. 1, pp. 117--122}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106018}, primaryClass={cs.LO cs.PL} }
wolfengagen2001building
arxiv-669969
cs/0106019
Playing Games with Algorithms: Algorithmic Combinatorial Game Theory
<|reference_start|>Playing Games with Algorithms: Algorithmic Combinatorial Game Theory: Combinatorial games lead to several interesting, clean problems in algorithms and complexity theory, many of which remain open. The purpose of this paper is to provide an overview of the area to encourage further research. In particular, we begin with general background in Combinatorial Game Theory, which analyzes ideal play in perfect-information games, and Constraint Logic, which provides a framework for showing hardness. Then we survey results about the complexity of determining ideal play in these games, and the related problems of solving puzzles, in terms of both polynomial-time algorithms and computational intractability results. Our review of background and survey of algorithmic results are by no means complete, but should serve as a useful primer.<|reference_end|>
arxiv
@article{demaine2001playing, title={Playing Games with Algorithms: Algorithmic Combinatorial Game Theory}, author={Erik D. Demaine and Robert A. Hearn}, journal={arXiv preprint arXiv:cs/0106019}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106019}, primaryClass={cs.CC cs.DM cs.DS math.CO} }
demaine2001playing
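For the perfect-information games the Demaine-Hearn survey covers, the basic algorithmic tool is Sprague-Grundy theory for sums of games. A minimal sketch computing Grundy values for a subtraction game and deciding a sum by XOR; the move set and heap sizes are arbitrary:

    from functools import lru_cache

    MOVES = (1, 2, 3)   # subtraction game: remove 1-3 tokens from one heap

    @lru_cache(maxsize=None)
    def grundy(n):
        """Sprague-Grundy value of a single heap of size n."""
        reachable = {grundy(n - m) for m in MOVES if m <= n}
        g = 0
        while g in reachable:       # mex: minimum excludant
            g += 1
        return g

    # A sum of independent games is a previous-player win (P-position)
    # iff the XOR of the components' Grundy values is zero.
    heaps = (5, 7, 9)
    x = 0
    for h in heaps:
        x ^= grundy(h)
    print("P-position:", x == 0)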
arxiv-669970
cs/0106020
Economic Models for Management of Resources in Grid Computing
<|reference_start|>Economic Models for Management of Resources in Grid Computing: The accelerated development in Grid and peer-to-peer computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources, which are owned by different organizations. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value, together with the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the use of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling, for two different optimization strategies, on the World Wide Grid (WWG) testbed.<|reference_end|>
arxiv
@article{buyya2001economic, title={Economic Models for Management of Resources in Grid Computing}, author={Rajkumar Buyya, Heinz Stockinger, Jonathan Giddy, and David Abrams}, journal={arXiv preprint arXiv:cs/0106020}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106020}, primaryClass={cs.DC} }
buyya2001economic
arxiv-669971
cs/0106021
Object-oriented solutions
<|reference_start|>Object-oriented solutions: This paper briefly outlines the motivations, the mathematical ideas in use, pre-formalization and assumptions, the object-as-functor construction, `soft' types and concept constructions, a case study of concepts based on variable domains, the extraction of a computational background, and examples of evaluations.<|reference_end|>
arxiv
@article{wolfengagen2001object-oriented, title={Object-oriented solutions}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 2-nd International Workshop on Advances in Databases and Information Systems, ADBIS'95, Moscow, June 27 --30, 1995, Vol. 1, pp. 235--246}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106021}, primaryClass={cs.LO cs.DB cs.PL} }
wolfengagen2001object-oriented
arxiv-669972
cs/0106022
One More Revolution to Make: Free Scientific Publishing
<|reference_start|>One More Revolution to Make: Free Scientific Publishing: Computer scientists are in the position to create new, free high-quality journals. So what would it take?<|reference_end|>
arxiv
@article{apt2001one, title={One More Revolution to Make: Free Scientific Publishing}, author={Krzysztof R. Apt}, journal={Communications of ACM, 44(5), pp. 25-28}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106022}, primaryClass={cs.GL} }
apt2001one
arxiv-669973
cs/0106023
Object-oriented tools for advanced applications
<|reference_start|>Object-oriented tools for advanced applications: This paper contains a brief discussion of the Application Development Environment (ADE) that is used to build database applications involving a graphical user interface (GUI). ADE computing separates database access from the user interface. A variety of applications may be generated that communicate with different and distinct desktop databases. The advanced techniques allow retrieval and invocation of remote or stored procedures.<|reference_end|>
arxiv
@article{ismailova2001object-oriented, title={Object-oriented tools for advanced applications}, author={Larissa Ismailova, Konstantin Zinchenko}, journal={Proceedings of the 3-rd International Workshop on Advances in Databases and Information Systems, ADBIS'96, Moscow, September 10 --13, 1996, Vol. 2, pp. 27--31}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106023}, primaryClass={cs.LO cs.DB cs.PL} }
ismailova2001object-oriented
arxiv-669974
cs/0106024
Objects and their computational framework
<|reference_start|>Objects and their computational framework: Most object notions are embedded into a logical domain, especially when dealing with database theory. Thus, their properties within a computational domain have not yet been studied properly. The main topic of this paper is to analyze different concepts of the distinct computational primitive frames in order to extract useful object properties and their possible advantages. Some important metaoperators are used to unify the approaches and to establish their possible correspondences.<|reference_end|>
arxiv
@article{wolfengagen2001objects, title={Objects and their computational framework}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 3-rd International Workshop on Advances in Databases and Information Systems, ADBIS'96, Moscow, September 10 --13, 1996, Vol. 1, pp. 66--74}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106024}, primaryClass={cs.LO cs.DB cs.PL} }
wolfengagen2001objects
arxiv-669975
cs/0106025
Information Integration and Computational Logic
<|reference_start|>Information Integration and Computational Logic: Information Integration is a young and exciting field with enormous research and commercial significance in the new world of the Information Society. It stands at the crossroads of Databases and Artificial Intelligence, requiring novel techniques that bring together different methods from these fields. Information from disparate heterogeneous sources, often with no a priori common schema, needs to be synthesized in a flexible, transparent and intelligent way in order to respond to the demands of a query, thus enabling a more informed decision by the user or application program. The field, although relatively young, has already found many practical applications, particularly for integrating information over the World Wide Web. This paper gives a brief introduction to the field, highlighting some of the main current and future research issues and application areas. It attempts to evaluate the current and potential role of Computational Logic in these and suggests some of the problems where logic-based techniques could be used.<|reference_end|>
arxiv
@article{dimopoulos2001information, title={Information Integration and Computational Logic}, author={Yannis Dimopoulos and Antonis Kakas}, journal={arXiv preprint arXiv:cs/0106025}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106025}, primaryClass={cs.AI} }
dimopoulos2001information
arxiv-669976
cs/0106026
Event Driven Computations for Relational Query Language
<|reference_start|>Event Driven Computations for Relational Query Language: This paper deals with an extended model of computation which uses parameterized families of entities for data objects; it reflects a preliminary outline of the problem. Some topics are selected, briefly analyzed and arranged to cover the general problem. The authors intend primarily to discuss the particular topics, their interconnection and their computational meaning, as a panel proposal, so this paper is not yet to be evaluated as a finished journal paper. To save space, all technical and implementation features are left for a future paper. A data object is a schematic entity and is modelled by a partial function. The notion of type is extended by variable domains, which depend on events and types. A variable domain is built from potential and schematic individuals and generates the valid families of types depending on a sequence of events. Each valid type consists of the actual individuals that are actual relative to the event or script. When a type depends on the script, a corresponding view of the data objects is attached; otherwise a snapshot is generated. The type thus determined gives an upper range for typed variables, so the local ranges are event driven, resulting in families of actual individuals. The expressive power of the query language is extended using extensional and intensional relations.<|reference_end|>
arxiv
@article{ismailova2001event, title={Event Driven Computations for Relational Query Language}, author={Larissa Ismailova, Konstantin Zinchenko, Lioubouv Bourmistrova}, journal={Proceedings of the 1-st International Workshop on Computer Science and Information Technologies CSIT'99, Moscow, Russia, 1999. Vol.1, pp. 43--52}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106026}, primaryClass={cs.LO cs.DB cs.PL} }
ismailova2001event
arxiv-669977
cs/0106027
Event Driven Objects
<|reference_start|>Event Driven Objects: A formal consideration is given in this paper to the essential notions used to characterize the object that is distinguished in a problem domain. The distinct object is represented by another idealized object, which is a schematic element. When the existence of an element is significant, the class of these partial elements is dropped down into actual, potential and virtual objects. The potential objects are gathered into variable domains, which are the extended ranges for unbound variables. The families of actual objects are shown to be parameterized with types and events. The transitions between events are shown to be driven by scripts. A computational framework arises which is described by commutative diagrams.<|reference_end|>
arxiv
@article{wolfengagen2001event, title={Event Driven Objects}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 1-st International Workshop on Computer Science and Information Technologies CSIT'99, Moscow, Russia, 1999. Vol.1, pp. 88--97}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106027}, primaryClass={cs.LO cs.DB cs.SE} }
wolfengagen2001event
arxiv-669978
cs/0106028
Pricing Virtual Paths with Quality-of-Service Guarantees as Bundle Derivatives
<|reference_start|>Pricing Virtual Paths with Quality-of-Service Guarantees as Bundle Derivatives: We describe a model of a communication network that allows us to price complex network services as financial derivative contracts based on the spot price of the capacity in individual routers. We prove a theorem on a Girsanov transform that is useful for pricing linear derivatives on underlying assets; it can be used to price many complex network services, and we use it to price an option that gives access to one of several virtual channels between two network nodes during a specified future time interval. We give the continuous-time hedging strategy, for which the option price is independent of the service provider's attitude towards risk. The option price contains the density function of a sum of lognormal variables, which has to be evaluated numerically.<|reference_end|>
arxiv
@article{rasmusson2001pricing, title={Pricing Virtual Paths with Quality-of-Service Guarantees as Bundle Derivatives}, author={Lars Rasmusson}, journal={arXiv preprint arXiv:cs/0106028}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106028}, primaryClass={cs.NI cs.CE} }
rasmusson2001pricing
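Absent the paper's closed-form machinery, an option on one of several virtual channels can at least be approximated by Monte Carlo under a risk-neutral lognormal model. A minimal sketch; every parameter is invented, the channels are treated as independent (the paper's router-based model would induce correlations), and the payoff is a simplified cap-style saving rather than the authors' contract:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented parameters: risk-neutral GBM for the capacity spot price.
    S0, r, sigma, T, K = 1.0, 0.02, 0.3, 0.5, 1.0
    n_channels, n_paths = 3, 100_000

    # Terminal price on each candidate channel (independent lognormals here).
    Z = rng.standard_normal((n_paths, n_channels))
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

    # Simplified payoff: the holder routes over the cheapest channel and
    # saves the difference against a posted price K, floored at zero.
    payoff = np.maximum(K - ST.min(axis=1), 0.0)
    price = np.exp(-r * T) * payoff.mean()
    print(f"Monte Carlo option value: {price:.4f}")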
arxiv-669979
cs/0106029
Building Views with Description Logics in ADE: Application Development Environment
<|reference_start|>Building Views with Description Logics in ADE: Application Development Environment: Each view is formally defined within description logics, which were established as a family of logics for modeling complex hereditary structures and have suitable expressive power. This paper considers the Application Development Environment (ADE) over generalized variable concepts, which is used to build database applications involving supporting views. The front-end user interacts with the database via a separate ADE access mechanism mediated by view support. A variety of applications may be generated that communicate with different and distinct desktop databases in a data warehouse. The advanced techniques allow retrieval and invocation of remote or stored procedures.<|reference_end|>
arxiv
@article{ismailova2001building, title={Building Views with Description Logics in ADE: Application Development Environment}, author={Larissa Ismailova, Sergey Kosikov, Konstantin Zinchenko, Alexey Mikhaylov, Lioubouv Bourmistrova, Anastassiya Berezovskaya}, journal={Proceedings of the 2-nd International Workshop on Computer Science and Information Technologies CSIT'2000, Ufa, Yangantau, Russia, 2000. Vol.1, pp. 153--161}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106029}, primaryClass={cs.LO cs.DB cs.DS} }
ismailova2001building
arxiv-669980
cs/0106030
Logic, Individuals and Concepts
<|reference_start|>Logic, Individuals and Concepts: This extended abstract gives a brief outline of the connections between descriptions and variable concepts. The notion of a concept is extended to include both syntactic and semantic features. The evaluation map in use is parameterized by a kind of computational environment, the index, giving rise to indexed concepts. Concepts are brought into the language by descriptions from higher-order logic. In general, the idea of object-as-functor should assist the designer in outlining a programming tool in a conceptual-shell style.<|reference_end|>
arxiv
@article{wolfengagen2001logic, title={Logic, Individuals and Concepts}, author={Viacheslav Wolfengagen}, journal={Proceedings of the 2-nd International Workshop on Computer Science and Information Technologies CSIT'2000, Ufa, Yangantau, Russia, 2000. Vol.1, pp. 141--145}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106030}, primaryClass={cs.LO cs.DB cs.DM cs.SE} }
wolfengagen2001logic
arxiv-669981
cs/0106031
Complexity Results and Practical Algorithms for Logics in Knowledge Representation
<|reference_start|>Complexity Results and Practical Algorithms for Logics in Knowledge Representation: Description Logics (DLs) are used in knowledge-based systems to represent and reason about terminological knowledge of the application domain in a semantically well-defined manner. In this thesis, we establish a number of novel complexity results and give practical algorithms for expressive DLs that provide different forms of counting quantifiers. We show that, in many cases, adding local counting in the form of qualifying number restrictions to DLs does not increase the complexity of the inference problems, even if binary coding of numbers in the input is assumed. On the other hand, we show that adding different forms of global counting restrictions to a logic may increase the complexity of the inference problems dramatically. We provide exact complexity results and a practical, tableau based algorithm for the DL SHIQ, which forms the basis of the highly optimized DL system iFaCT. Finally, we describe a tableau algorithm for the clique guarded fragment (CGF), which we hope will serve as the basis for an efficient implementation of a CGF reasoner.<|reference_end|>
arxiv
@article{tobies2001complexity, title={Complexity Results and Practical Algorithms for Logics in Knowledge Representation}, author={Stephan Tobies}, journal={arXiv preprint arXiv:cs/0106031}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106031}, primaryClass={cs.LO cs.AI} }
tobies2001complexity
arxiv-669982
cs/0106032
Hinged Kite Mirror Dissection
<|reference_start|>Hinged Kite Mirror Dissection: Any two polygons of equal area can be partitioned into congruent sets of polygonal pieces, and in many cases one can connect the pieces by flexible hinges while still allowing the connected set to form both polygons. However it is open whether such a hinged dissection always exists. We solve a special case of this problem, by showing that any asymmetric polygon always has a hinged dissection to its mirror image. Our dissection forms a chain of kite-shaped pieces, found by a circle-packing algorithm for quadrilateral mesh generation. A hinged mirror dissection of a polygon with n sides can be formed with O(n) kites in O(n log n) time.<|reference_end|>
arxiv
@article{eppstein2001hinged, title={Hinged Kite Mirror Dissection}, author={David Eppstein}, journal={arXiv preprint arXiv:cs/0106032}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106032}, primaryClass={cs.CG math.MG} }
eppstein2001hinged
arxiv-669983
cs/0106033
location.location.location: Internet Addresses as Evolving Property
<|reference_start|>location.location.location: Internet Addresses as Evolving Property: I describe recent developments in the rules governing registration and ownership of Internet and World Wide Web addresses or "domain names." I consider the idea that "virtual" properties like domain names are more similar to real estate than to trademarks. Therefore, it would be economically efficient to grant domain name owners stronger rights than those of trademark and copyright holders.<|reference_end|>
arxiv
@article{yee2001location.location.location:, title={location.location.location: Internet Addresses as Evolving Property}, author={Kenton K. Yee}, journal={Southern California Interdisciplinary Law Journal (1998) Volume 6, No. 2, pp. 201-243}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106033}, primaryClass={cs.CY cs.HC cs.IR} }
yee2001location.location.location:
arxiv-669984
cs/0106034
Solving equations in the relational algebra
<|reference_start|>Solving equations in the relational algebra: Enumerating all solutions of a relational algebra equation is a natural and powerful operation which, when added as a query language primitive to the nested relational algebra, yields a query language for nested relational databases, equivalent to the well-known powerset algebra. We study \emph{sparse} equations, which are equations with at most polynomially many solutions. We look at their complexity, and compare their expressive power with that of similar notions in the powerset algebra.<|reference_end|>
arxiv
@article{biskup2001solving, title={Solving equations in the relational algebra}, author={Joachim Biskup, Jan Paredaens, Thomas Schwentick, Jan Van den Bussche}, journal={arXiv preprint arXiv:cs/0106034}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106034}, primaryClass={cs.LO cs.DB} }
biskup2001solving
arxiv-669985
cs/0106035
Polymorphic type inference for the relational algebra
<|reference_start|>Polymorphic type inference for the relational algebra: We give a polymorphic account of the relational algebra. We introduce a formalism of ``type formulas'' specifically tuned for relational algebra expressions, and present an algorithm that computes the ``principal'' type for a given expression. The principal type of an expression is a formula that specifies, in a clear and concise manner, all assignments of types (sets of attributes) to relation names, under which a given relational algebra expression is well-typed, as well as the output type that expression will have under each of these assignments. Topics discussed include complexity and polymorphic expressive power.<|reference_end|>
arxiv
@article{bussche2001polymorphic, title={Polymorphic type inference for the relational algebra}, author={Jan Van den Bussche, Emmanuel Waller}, journal={arXiv preprint arXiv:cs/0106035}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106035}, primaryClass={cs.LO cs.DB} }
bussche2001polymorphic
arxiv-669986
cs/0106036
Convergence and Error Bounds for Universal Prediction of Nonbinary Sequences
<|reference_start|>Convergence and Error Bounds for Universal Prediction of Nonbinary Sequences: Solomonoff's uncomputable universal prediction scheme $\xi$ allows to predict the next symbol $x_k$ of a sequence $x_1...x_{k-1}$ for any Turing computable, but otherwise unknown, probabilistic environment $\mu$. This scheme will be generalized to arbitrary environmental classes, which, among others, allows the construction of computable universal prediction schemes $\xi$. Convergence of $\xi$ to $\mu$ in a conditional mean squared sense and with $\mu$ probability 1 is proven. It is shown that the average number of prediction errors made by the universal $\xi$ scheme rapidly converges to those made by the best possible informed $\mu$ scheme. The schemes, theorems and proofs are given for general finite alphabet, which results in additional complications as compared to the binary case. Several extensions of the presented theory and results are outlined. They include general loss functions and bounds, games of chance, infinite alphabet, partial and delayed prediction, classification, and more active systems.<|reference_end|>
arxiv
@article{hutter2001convergence, title={Convergence and Error Bounds for Universal Prediction of Nonbinary Sequences}, author={Marcus Hutter}, journal={Lecture Notes in Artificial Intelligence (LNAI 2167), Proc. 12th European Conf. on Machine Learning (ECML) (2001) 239-250}, year={2001}, number={IDSIA-07-01}, archivePrefix={arXiv}, eprint={cs/0106036}, primaryClass={cs.LG cs.AI cs.CC math.PR} }
hutter2001convergence
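The mixture predictor $\xi$ in Hutter's entry can be demonstrated on a finite class of i.i.d. environments over a nonbinary alphabet, a toy stand-in for the countable classes treated in the paper. A minimal sketch; the class, prior, and horizon are invented:

    import numpy as np

    rng = np.random.default_rng(0)

    # Finite class M of i.i.d. environments over a 3-letter alphabet,
    # a toy stand-in for the countable class of computable measures.
    CLASS = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [1/3, 1/3, 1/3]])
    w = np.full(len(CLASS), 1.0 / len(CLASS))   # uniform prior weights

    mu = CLASS[0]                               # true (unknown) environment

    for t in range(200):
        xi = w @ CLASS                          # mixture prediction xi(. | x_<t)
        x = rng.choice(3, p=mu)                 # observe a symbol drawn from mu
        w *= CLASS[:, x]                        # Bayes update of the weights
        w /= w.sum()

    print("final xi:", (w @ CLASS).round(3), "vs mu:", mu)

The posterior weight concentrates on the true environment, so $\xi$'s conditional predictions converge to $\mu$'s, mirroring the convergence results of the paper.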
arxiv-669987
cs/0106037
Using the No-Search Easy-Hard Technique for Downward Collapse
<|reference_start|>Using the No-Search Easy-Hard Technique for Downward Collapse: The top part of the preceding figure [figure appears in actual paper] shows some classes from the (truth-table) bounded-query and boolean hierarchies. It is well-known that if either of these hierarchies collapses at a given level, then all higher levels of that hierarchy collapse to that same level. This is a standard ``upward translation of equality'' that has been known for over a decade. The issue of whether these hierarchies can translate equality {\em downwards\/} has proven vastly more challenging. In particular, with regard to the figure above, consider the following claim: $$P_{m-tt}^{\Sigma_k^p} = P_{m+1-tt}^{\Sigma_k^p} \implies DIFF_m(\Sigma_k^p) = coDIFF_m(\Sigma_k^p) = BH(\Sigma_k^p). (*) $$ This claim, if true, says that equality translates downwards between levels of the bounded-query hierarchy and the boolean hierarchy levels that (before the fact) are immediately below them. Until recently, it was not known whether (*) {\em ever\/} held, except for the degenerate cases $m=0$ and $k=0$. Then Hemaspaandra, Hemaspaandra, and Hempel \cite{hem-hem-hem:j:downward-translation} proved that (*) holds for all $m$, for $k > 2$. Buhrman and Fortnow~\cite{buh-for:j:two-queries} then showed that, when $k=2$, (*) holds for the case $m = 1$. In this paper, we prove that for the case $k=2$, (*) holds for all values of $m$. Since there is an oracle relative to which ``for $k=1$, (*) holds for all $m$'' fails \cite{buh-for:j:two-queries}, our achievement of the $k=2$ case cannot be strengthened to $k=1$ by any relativizable proof technique. The new downward translation we obtain also tightens the collapse in the polynomial hierarchy implied by a collapse in the bounded-query hierarchy of the second level of the polynomial hierarchy.<|reference_end|>
arxiv
@article{hemaspaandra2001using, title={Using the No-Search Easy-Hard Technique for Downward Collapse}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, Harald Hempel}, journal={arXiv preprint arXiv:cs/0106037}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106037}, primaryClass={cs.CC} }
hemaspaandra2001using
arxiv-669988
cs/0106038
Simple and Effective Distributed Computing with a Scheduling Service
<|reference_start|>Simple and Effective Distributed Computing with a Scheduling Service: High-throughput computing projects require the solution of large numbers of problems. In many cases, these problems can be solved on desktop PCs, or can be broken down into independent "PC-solvable" sub-problems. In such cases, the projects are high-performance computing projects only because of the sheer number of calculations needed. We briefly describe our efforts to increase the throughput of one such project. We then explain how to easily set up a distributed computing facility composed of standard networked PCs running Windows 95, 98, 2000, or NT. The facility requires no special software or hardware, involves little or no re-coding of application software, and operates almost invisibly to the owners of the PCs. Depending on the number and quality of PCs recruited, performance can rival that of supercomputers.<|reference_end|>
arxiv
@article{mackie2001simple, title={Simple and Effective Distributed Computing with a Scheduling Service}, author={David M. Mackie}, journal={arXiv preprint arXiv:cs/0106038}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106038}, primaryClass={cs.DC} }
mackie2001simple
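The paper's facility uses only stock Windows machinery; as one hypothetical illustration of the general pattern (not the authors' setup), a shared network folder can serve as the task queue, with each PC's scheduling service periodically launching a script that claims a task by an atomic rename. All paths, the directory layout, and the application command below are invented for the sketch.

```python
import os, socket, subprocess, time

# A shared network folder acts as the queue. Each task is a file in todo/;
# a worker PC claims a task by atomically renaming it into running/, runs
# the (unmodified) application on it, and writes the result into done/.

QUEUE = r"\\fileserver\project"          # assumed shared directory
APP = ["solver.exe"]                     # assumed PC-solvable application

def claim_and_run():
    todo = os.path.join(QUEUE, "todo")
    for name in sorted(os.listdir(todo)):
        src = os.path.join(todo, name)
        dst = os.path.join(QUEUE, "running",
                           socket.gethostname() + "_" + name)
        try:
            os.rename(src, dst)          # atomic: only one worker wins the race
        except OSError:
            continue                     # another PC claimed this task first
        out = os.path.join(QUEUE, "done", name + ".out")
        with open(out, "w") as f:
            subprocess.run(APP + [dst], stdout=f, check=False)
        os.remove(dst)
        return True
    return False

while True:                              # launched e.g. by the scheduling service
    if not claim_and_run():
        time.sleep(60)                   # queue empty; poll again later
```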
arxiv-669989
cs/0106039
Iterative Residual Rescaling: An Analysis and Generalization of LSI
<|reference_start|>Iterative Residual Rescaling: An Analysis and Generalization of LSI: We consider the problem of creating document representations in which inter-document similarity measurements correspond to semantic similarity. We first present a novel subspace-based framework for formalizing this task. Using this framework, we derive a new analysis of Latent Semantic Indexing (LSI), showing a precise relationship between its performance and the uniformity of the underlying distribution of documents over topics. This analysis helps explain the improvements gained by Ando's (2000) Iterative Residual Rescaling (IRR) algorithm: IRR can compensate for distributional non-uniformity. A further benefit of our framework is that it provides a well-motivated, effective method for automatically determining the rescaling factor IRR depends on, leading to further improvements. A series of experiments over various settings and with several evaluation metrics validates our claims.<|reference_end|>
arxiv
@article{ando2001iterative, title={Iterative Residual Rescaling: An Analysis and Generalization of LSI}, author={Rie Kubota Ando and Lillian Lee}, journal={Proceedings of the 24th SIGIR, pp. 154--162, 2001.}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106039}, primaryClass={cs.CL cs.IR} }
ando2001iterative
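For orientation, a compact numpy sketch of the contrast the paper analyzes: LSI takes the top-k left singular vectors of the term-document matrix in one shot, whereas IRR extracts one basis vector per iteration from a residual matrix whose columns are rescaled by their norms raised to a power q (the rescaling factor the paper shows how to determine automatically; we simply fix it). This is a simplified reading of Ando's algorithm, not a reference implementation.

```python
import numpy as np

# D is a toy term-document matrix (terms x documents, columns are documents).

def lsi_basis(D, k):
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :k]                      # top-k left singular vectors at once

def irr_basis(D, k, q=2.0):
    R = D.astype(float).copy()           # residual matrix, updated each round
    basis = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        S = R * (norms ** q)             # rescale each residual column by |r|^q
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        b = U[:, 0]                      # most significant remaining direction
        basis.append(b)
        R = R - np.outer(b, b @ R)       # remove that direction from residuals
    return np.column_stack(basis)

rng = np.random.default_rng(0)
D = rng.random((50, 30))                 # toy 50-term, 30-document corpus
B = irr_basis(D, k=5)
docs = B.T @ D                           # documents in the reduced space
sims = docs.T @ docs                     # inter-document similarity matrix
print(sims.shape)
```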
arxiv-669990
cs/0106040
Stacking classifiers for anti-spam filtering of e-mail
<|reference_start|>Stacking classifiers for anti-spam filtering of e-mail: We evaluate empirically a scheme for combining classifiers, known as stacked generalization, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization. Unsolicited commercial e-mail, or "spam", floods mailboxes, causing frustration, wasting bandwidth, and exposing minors to unsuitable content. Using a public corpus, we show that stacking can improve the efficiency of automatically induced anti-spam filters, and that such filters can be used in real-life applications.<|reference_end|>
arxiv
@article{sakkis2001stacking, title={Stacking classifiers for anti-spam filtering of e-mail}, author={G. Sakkis, I. Androutsopoulos, G. Paliouras, V. Karkaletsis, C. D. Spyropoulos and P. Stamatopoulos}, journal={Proceedings of "Empirical Methods in Natural Language Processing" (EMNLP 2001), L. Lee and D. Harman (Eds.), pp. 44-50, Carnegie Mellon University, Pittsburgh, PA, 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106040}, primaryClass={cs.CL cs.AI} }
sakkis2001stacking
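A minimal scikit-learn sketch of stacked generalization in this setting (the four messages are toy stand-ins, not the paper's public corpus, and the base learners only roughly mirror a Bayesian and a memory-based filter): the meta-learner is trained on out-of-fold predictions of the base classifiers, so it never sees predictions made on data a base classifier was trained on.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

texts = ["cheap pills buy now", "meeting moved to 3pm",
         "win a free prize now", "draft of the paper attached"]
y = np.array([1, 0, 1, 0])               # 1 = spam, 0 = legitimate

X = TfidfVectorizer().fit_transform(texts)
bases = [MultinomialNB(), KNeighborsClassifier(n_neighbors=1)]

# Level-0: out-of-fold probability estimates from each base classifier
meta_features = np.column_stack([
    cross_val_predict(clf, X, y, cv=2, method="predict_proba")[:, 1]
    for clf in bases
])

# Level-1: the stacked (meta) classifier combines the base outputs
meta = LogisticRegression().fit(meta_features, y)

# At filtering time, refit the bases on all data and stack their outputs
stacked = np.column_stack([clf.fit(X, y).predict_proba(X)[:, 1]
                           for clf in bases])
print(meta.predict(stacked))             # cost-sensitive thresholds would go here
```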
arxiv-669991
cs/0106041
Computing Complete Graph Isomorphisms and Hamiltonian Cycles from Partial Ones
<|reference_start|>Computing Complete Graph Isomorphisms and Hamiltonian Cycles from Partial Ones: We prove that computing a single pair of vertices that are mapped onto each other by an isomorphism $\phi$ between two isomorphic graphs is as hard as computing $\phi$ itself. This result optimally improves upon a result of G\'{a}l et al. We establish a similar, albeit slightly weaker, result about computing complete Hamiltonian cycles of a graph from partial Hamiltonian cycles.<|reference_end|>
arxiv
@article{grosse2001computing, title={Computing Complete Graph Isomorphisms and Hamiltonian Cycles from Partial Ones}, author={Andr\'e Grosse, Joerg Rothe, and Gerd Wechsung}, journal={arXiv preprint arXiv:cs/0106041}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106041}, primaryClass={cs.CC} }
grosse2001computing
arxiv-669992
cs/0106042
MACE 2.0 Reference Manual and Guide
<|reference_start|>MACE 2.0 Reference Manual and Guide: MACE is a program that searches for finite models of first-order statements. The statement to be modeled is first translated to clauses, then to relational clauses; finally, for the given domain size, the ground instances are constructed. A Davis-Putnam-Loveland-Logemann procedure decides the propositional problem, and any models found are translated to first-order models. MACE is a useful complement to the theorem prover Otter, with Otter searching for proofs and MACE looking for countermodels.<|reference_end|>
arxiv
@article{mccune2001mace, title={MACE 2.0 Reference Manual and Guide}, author={William McCune}, journal={arXiv preprint arXiv:cs/0106042}, year={2001}, number={ANL/MCS-TM-249}, archivePrefix={arXiv}, eprint={cs/0106042}, primaryClass={cs.LO cs.SC} }
mccune2001mace
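The propositional core that MACE hands the ground clauses to is a Davis-Putnam-Logemann-Loveland procedure; a toy recursive version is sketched below. Clausification and grounding, the other stages of MACE's pipeline, are omitted, and this is our illustration rather than MACE's actual implementation.

```python
# Clauses are frozensets of nonzero ints; a negative int is a negated variable.

def dpll(clauses, assignment):
    # Unit propagation: a one-literal clause forces its literal true.
    units = {next(iter(c)) for c in clauses if len(c) == 1}
    while units:
        lit = units.pop()
        assignment[abs(lit)] = lit > 0
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(len(c) == 0 for c in clauses):
            return None                   # empty clause: contradiction
        units = {next(iter(c)) for c in clauses if len(c) == 1}
    if not clauses:
        return assignment                 # all clauses satisfied: model found
    var = abs(next(iter(clauses[0])))     # branch on some remaining variable
    for lit in (var, -var):
        result = dpll(list(clauses) + [frozenset([lit])], dict(assignment))
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r)
cnf = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]
print(dpll(cnf, {}))                      # a model, e.g. {1: True, 2: True, 3: True}
```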
arxiv-669993
cs/0106043
Using the Distribution of Performance for Studying Statistical NLP Systems and Corpora
<|reference_start|>Using the Distribution of Performance for Studying Statistical NLP Systems and Corpora: Statistical NLP systems are frequently evaluated and compared on the basis of their performances on a single split of training and test data. Results obtained using a single split are, however, subject to sampling noise. In this paper we argue in favour of reporting a distribution of performance figures, obtained by resampling the training data, rather than a single number. The additional information from distributions can be used to make statistically quantified statements about differences across parameter settings, systems, and corpora.<|reference_end|>
arxiv
@article{krymolowski2001using, title={Using the Distribution of Performance for Studying Statistical NLP Systems and Corpora}, author={Yuval Krymolowski}, journal={arXiv preprint arXiv:cs/0106043}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106043}, primaryClass={cs.CL} }
krymolowski2001using
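One simple way to realize the paper's proposal is to bootstrap-resample the training set and report the resulting distribution of scores rather than a single figure; the classifier and synthetic data below are placeholders for an NLP system and corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

scores = []
for _ in range(200):
    idx = rng.integers(0, len(X_train), size=len(X_train))   # bootstrap sample
    clf = LogisticRegression().fit(X_train[idx], y_train[idx])
    scores.append(accuracy_score(y_test, clf.predict(X_test)))

scores = np.array(scores)
# Report the distribution, not a single number: mean, spread, and an interval.
print(f"mean={scores.mean():.3f}  std={scores.std():.3f}  "
      f"95% interval=({np.quantile(scores, 0.025):.3f}, "
      f"{np.quantile(scores, 0.975):.3f})")
```

With the full distribution in hand, differences between two systems can be quantified (for example, by comparing intervals or by a paired test over resamples) instead of eyeballing two single numbers.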
arxiv-669994
cs/0106044
A Sequential Model for Multi-Class Classification
<|reference_start|>A Sequential Model for Multi-Class Classification: Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach -- a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in part-of-speech tagging.<|reference_end|>
arxiv
@article{even-zohar2001a, title={A Sequential Model for Multi-Class Classification}, author={Yair Even-Zohar and Dan Roth}, journal={arXiv preprint arXiv:cs/0106044}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106044}, primaryClass={cs.AI cs.CL cs.LG} }
even-zohar2001a
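A small sketch of the sequential idea (the models, the two-stage split, and the candidate-set size k are illustrative choices, not the paper's): a cheap first stage keeps the top-k classes per example, retaining the true class with high probability, and a second stage decides among the survivors only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=8, n_clusters_per_class=1, random_state=0)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

stage1 = GaussianNB().fit(X_tr, y_tr)                 # cheap candidate pruner
stage2 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

k = 3                                                 # surviving candidate set size
probs1 = stage1.predict_proba(X_te)
candidates = np.argsort(probs1, axis=1)[:, -k:]       # top-k classes per example

probs2 = stage2.predict_proba(X_te)
preds = []
for i in range(len(X_te)):
    cand = candidates[i]
    preds.append(cand[np.argmax(probs2[i, cand])])    # decide among survivors only

coverage = np.mean([y_te[i] in candidates[i] for i in range(len(y_te))])
accuracy = np.mean(np.array(preds) == y_te)
print(f"true class survives stage 1: {coverage:.2%}; final accuracy: {accuracy:.2%}")
```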
arxiv-669995
cs/0106045
A Note on the Complexity of Computing the Smallest Four-Coloring of Planar Graphs
<|reference_start|>A Note on the Complexity of Computing the Smallest Four-Coloring of Planar Graphs: We show that computing the lexicographically first four-coloring for planar graphs is P^{NP}-hard. This result optimally improves upon a result of Khuller and Vazirani, who prove this problem to be NP-hard, and conclude that it is not self-reducible in the sense of Schnorr, assuming P \neq NP. We discuss this application to non-self-reducibility and provide a general related result.<|reference_end|>
arxiv
@article{grosse2001a, title={A Note on the Complexity of Computing the Smallest Four-Coloring of Planar Graphs}, author={Andr\'e Grosse, Joerg Rothe, and Gerd Wechsung}, journal={arXiv preprint arXiv:cs/0106045}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106045}, primaryClass={cs.CC} }
grosse2001a
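Operationally, the lexicographically first four-coloring scans the vertices in a fixed order and gives each the smallest color that still extends to a proper 4-coloring of the whole graph, which is exactly where the NP oracle queries enter. In the sketch below the extension test is brute force (exponential, a stand-in for the oracle, not an algorithm the paper proposes).

```python
from itertools import product

def proper(graph, coloring):
    return all(coloring.get(u) is None or coloring.get(v) is None
               or coloring[u] != coloring[v]
               for u in graph for v in graph[u])

def extendable(graph, order, coloring):
    """Brute-force stand-in for an NP oracle query: does the partial
    coloring extend to a proper 4-coloring of all of graph?"""
    rest = [v for v in order if v not in coloring]
    for colors in product((1, 2, 3, 4), repeat=len(rest)):
        full = {**coloring, **dict(zip(rest, colors))}
        if proper(graph, full):
            return True
    return False

def lex_first_coloring(graph):
    order = sorted(graph)
    coloring = {}
    for v in order:
        for c in (1, 2, 3, 4):            # smallest color first => lex-first
            coloring[v] = c
            if proper(graph, coloring) and extendable(graph, order, coloring):
                break
            del coloring[v]
    return coloring

K4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(lex_first_coloring(K4))             # {0: 1, 1: 2, 2: 3, 3: 4}
```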
arxiv-669996
cs/0106046
Expressing the cone radius in the relational calculus with real polynomial constraints
<|reference_start|>Expressing the cone radius in the relational calculus with real polynomial constraints: We show that there is a query expressible in first-order logic over the reals that returns, on any given semi-algebraic set A, for every point of A a radius within which A is conical at that point. We obtain this result by combining famous results from calculus and real algebraic geometry, notably Sard's theorem and Thom's first isotopy lemma, with recent algorithmic results by Rannou.<|reference_end|>
arxiv
@article{geerts2001expressing, title={Expressing the cone radius in the relational calculus with real polynomial constraints}, author={Floris Geerts}, journal={arXiv preprint arXiv:cs/0106046}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106046}, primaryClass={cs.DB cs.LO} }
geerts2001expressing
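For readers outside real algebraic geometry, the quantity being computed is the radius in the local conic structure theorem; a hedged restatement in our notation, not the paper's:

```latex
% Local conic structure: for a semi-algebraic set A and a point a of A there
% is an r > 0 such that for every 0 < eps <= r, the part of A in the closed
% ball of radius eps around a is semi-algebraically homeomorphic to the cone
% with vertex a over the part of A on the sphere of radius eps. Any such r
% is a cone radius of A at a; the paper's query returns one for each a.
\[
  A \cap \overline{B}(a,\varepsilon)
  \;\cong\;
  a * \bigl(A \cap S(a,\varepsilon)\bigr)
  \qquad \text{for all } 0 < \varepsilon \le r .
\]
```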
arxiv-669997
cs/0106047
Modeling informational novelty in a conversational system with a hybrid statistical and grammar-based approach to natural language generation
<|reference_start|>Modeling informational novelty in a conversational system with a hybrid statistical and grammar-based approach to natural language generation: We present a hybrid statistical and grammar-based system for surface natural language generation (NLG) that uses grammar rules, conditions on using those grammar rules, and corpus statistics to determine the word order. We also describe how this surface NLG module is implemented in a prototype conversational system, and how it attempts to model informational novelty by varying the word order. Using a combination of rules and statistical information, the conversational system expresses the novel information differently than the given information, based on the run-time dialog state. We also discuss our plans for evaluating the generation strategy.<|reference_end|>
arxiv
@article{ratnaparkhi2001modeling, title={Modeling informational novelty in a conversational system with a hybrid statistical and grammar-based approach to natural language generation}, author={Adwait Ratnaparkhi}, journal={Proceedings of the NAACL Workshop on Adaptation in Dialogue Systems, June 4, 2001, Pittsburgh, PA, USA}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106047}, primaryClass={cs.CL} }
ratnaparkhi2001modeling
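As a toy rendering of the hybrid control flow (the candidate templates, scores, and dialog-state test are all invented for illustration): the grammar proposes the licensed word orders, corpus-derived scores choose among them, and the choice is conditioned on whether the attribute is novel in the run-time dialog state.

```python
# Word orders the (hypothetical) grammar rules license for a time slot.
CANDIDATES = {
    "time": ["flights leaving at {time}", "at {time} , flights leave"],
}

# Invented relative corpus scores per template, split by information status.
CORPUS_SCORE = {
    ("flights leaving at {time}", "given"): 0.8,
    ("flights leaving at {time}", "novel"): 0.3,
    ("at {time} , flights leave", "given"): 0.2,
    ("at {time} , flights leave", "novel"): 0.7,   # fronting marks novelty
}

def realize(attribute, value, dialog_state):
    status = "novel" if attribute not in dialog_state else "given"
    best = max(CANDIDATES[attribute],
               key=lambda t: CORPUS_SCORE[(t, status)])
    dialog_state.add(attribute)           # the attribute is now given
    return best.format(time=value)

state = set()
print(realize("time", "3 pm", state))     # novel: fronted order preferred
print(realize("time", "3 pm", state))     # given: default order preferred
```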
arxiv-669998
cs/0106048
On some optimization problems for star-free graphs
<|reference_start|>On some optimization problems for star-free graphs: It is shown that in star-free graphs the maximum independent set problem, the minimum dominating set problem and the minimum independent dominating set problem are approximable to within a constant factor by any maximal independent set.<|reference_end|>
arxiv
@article{naidenko2001on, title={On some optimization problems for star-free graphs}, author={V.G. Naidenko, Yu.L. Orlovich}, journal={arXiv preprint arXiv:cs/0106048}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106048}, primaryClass={cs.CC cs.DM} }
naidenko2001on
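Since the paper's bounds are achieved by any maximal independent set, the trivial greedy scan below already serves as the approximation algorithm for all three problems on the graphs considered (note it computes a maximal, not maximum, independent set).

```python
def maximal_independent_set(graph):
    chosen, blocked = set(), set()
    for v in sorted(graph):               # any scan order yields a maximal set
        if v not in blocked:
            chosen.add(v)
            blocked.add(v)
            blocked.update(graph[v])      # neighbors can no longer be chosen
    return chosen

# A path on 5 vertices: 0-1-2-3-4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
mis = maximal_independent_set(path)
print(mis)                                # e.g. {0, 2, 4}
# Maximality means every vertex outside is adjacent to one inside, so the
# set is also dominating -- the link to minimum (independent) domination.
assert all(v in mis or any(u in mis for u in path[v]) for v in path)
```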
arxiv-669999
cs/0106049
Recursively Undecidable Properties of NP
<|reference_start|>Recursively Undecidable Properties of NP: We show that there cannot be any algorithm that, for a given nondeterministic polynomial-time Turing machine, determines whether or not the language recognized by this machine belongs to P.<|reference_end|>
arxiv
@article{naidenko2001recursively, title={Recursively Undecidable Properties of NP}, author={V.G. Naidenko}, journal={arXiv preprint arXiv:cs/0106049}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106049}, primaryClass={cs.CC} }
naidenko2001recursively
arxiv-670000
cs/0106050
Classes of Terminating Logic Programs
<|reference_start|>Classes of Terminating Logic Programs: Termination of logic programs depends critically on the selection rule, i.e. the rule that determines which atom is selected in each resolution step. In this article, we classify programs (and queries) according to the selection rules for which they terminate. This is a survey and unified view on different approaches in the literature. For each class, we present a sufficient, for most classes even necessary, criterion for determining that a program is in that class. We study six classes: a program strongly terminates if it terminates for all selection rules; a program input terminates if it terminates for selection rules which only select atoms that are sufficiently instantiated in their input positions, so that these arguments do not get instantiated any further by the unification; a program local delay terminates if it terminates for local selection rules which only select atoms that are bounded w.r.t. an appropriate level mapping; a program left-terminates if it terminates for the usual left-to-right selection rule; a program exists-terminates if there exists a selection rule for which it terminates; finally, a program has bounded nondeterminism if it only has finitely many refutations. We propose a semantics-preserving transformation from programs with bounded nondeterminism into strongly terminating programs. Moreover, by unifying different formalisms and making appropriate assumptions, we are able to establish a formal hierarchy between the different classes.<|reference_end|>
arxiv
@article{pedreschi2001classes, title={Classes of Terminating Logic Programs}, author={Dino Pedreschi, Salvatore Ruggieri and Jan-Georg Smaus}, journal={Theory and Practice of Logic Programming, 2(3), 369-418, 2002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0106050}, primaryClass={cs.LO cs.PL} }
pedreschi2001classes