corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-669601 | cs/0007021 | PushPush and Push-1 are NP-hard in 2D | <|reference_start|>PushPush and Push-1 are NP-hard in 2D: We prove that two pushing-blocks puzzles are intractable in 2D. One of our constructions improves an earlier result that established intractability in 3D [OS99] for a puzzle inspired by the game PushPush. The second construction answers a question we raised in [DDO00] for a variant we call Push-1. Both puzzles consist of unit square blocks on an integer lattice; all blocks are movable. An agent may push blocks (but never pull them) in attempting to move between given start and goal positions. In the PushPush version, the agent can only push one block at a time, and moreover when a block is pushed it slides the maximal extent of its free range. In the Push-1 version, the agent can only push one block one square at a time, the minimal extent---one square. Both NP-hardness proofs are by reduction from SAT, and rely on a common construction.<|reference_end|> | arxiv | @article{demaine2000pushpush,
title={PushPush and Push-1 are NP-hard in 2D},
author={Erik D. Demaine, Martin L. Demaine, Joseph O'Rourke},
journal={arXiv preprint arXiv:cs/0007021},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007021},
primaryClass={cs.CG cs.DM}
} | demaine2000pushpush |
arxiv-669602 | cs/0007022 | ATLAS: A flexible and extensible architecture for linguistic annotation | <|reference_start|>ATLAS: A flexible and extensible architecture for linguistic annotation: We describe a formal model for annotating linguistic artifacts, from which we derive an application programming interface (API) to a suite of tools for manipulating these annotations. The abstract logical model provides for a range of storage formats and promotes the reuse of tools that interact through this API. We focus first on ``Annotation Graphs,'' a graph model for annotations on linear signals (such as text and speech) indexed by intervals, for which efficient database storage and querying techniques are applicable. We note how a wide range of existing annotated corpora can be mapped to this annotation graph model. This model is then generalized to encompass a wider variety of linguistic ``signals,'' including both naturally occurring phenomena (as recorded in images, video, multi-modal interactions, etc.), as well as the derived resources that are increasingly important to the engineering of natural language processing systems (such as word lists, dictionaries, aligned bilingual corpora, etc.). We conclude with a review of the current efforts towards implementing key pieces of this architecture.<|reference_end|> | arxiv | @article{bird2000atlas:,
title={ATLAS: A flexible and extensible architecture for linguistic annotation},
author={Steven Bird, David Day, John Garofolo, John Henderson, Christophe
Laprun, Mark Liberman},
journal={Proceedings of the Second International Conference on Language
Resources and Evaluation, pp. 1699-1706, Paris: European Language Resources
Association, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007022},
primaryClass={cs.CL}
} | bird2000atlas: |
arxiv-669603 | cs/0007023 | Towards a query language for annotation graphs | <|reference_start|>Towards a query language for annotation graphs: The multidimensional, heterogeneous, and temporal nature of speech databases raises interesting challenges for representation and query. Recently, annotation graphs have been proposed as a general-purpose representational framework for speech databases. Typical queries on annotation graphs require path expressions similar to those used in semistructured query languages. However, the underlying model is rather different from the customary graph models for semistructured data: the graph is acyclic and unrooted, and both temporal and inclusion relationships are important. We develop a query language and describe optimization techniques for an underlying relational representation.<|reference_end|> | arxiv | @article{bird2000towards,
title={Towards a query language for annotation graphs},
author={Steven Bird, Peter Buneman and Wang-Chiew Tan},
journal={Proceedings of the Second International Conference on Language
Resources and Evaluation, pp. 807-814, Paris: European Language Resources
Association, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007023},
primaryClass={cs.CL cs.DB}
} | bird2000towards |
arxiv-669604 | cs/0007024 | Many uses, many annotations for large speech corpora: Switchboard and TDT as case studies | <|reference_start|>Many uses, many annotations for large speech corpora: Switchboard and TDT as case studies: This paper discusses the challenges that arise when large speech corpora receive an ever-broadening range of diverse and distinct annotations. Two case studies of this process are presented: the Switchboard Corpus of telephone conversations and the TDT2 corpus of broadcast news. Switchboard has undergone two independent transcriptions and various types of additional annotation, all carried out as separate projects that were dispersed both geographically and chronologically. The TDT2 corpus has also received a variety of annotations, but all directly created or managed by a core group. In both cases, issues arise involving the propagation of repairs, consistency of references, and the ability to integrate annotations having different formats and levels of detail. We describe a general framework whereby these issues can be addressed successfully.<|reference_end|> | arxiv | @article{graff2000many,
title={Many uses, many annotations for large speech corpora: Switchboard and
TDT as case studies},
author={David Graff and Steven Bird},
journal={Proceedings of the Second International Conference on Language
Resources and Evaluation, pp. 427-433, Paris: European Language Resources
Association, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007024},
primaryClass={cs.CL}
} | graff2000many |
arxiv-669605 | cs/0007025 | A Moment of Perfect Clarity I: The Parallel Census Technique | <|reference_start|>A Moment of Perfect Clarity I: The Parallel Census Technique: We discuss the history and uses of the parallel census technique---an elegant tool in the study of certain computational objects having polynomially bounded census functions. A sequel will discuss advances (including Cai, Naik, and Sivakumar [CNS95] and Glasser [Gla00]), some related to the parallel census technique and some due to other approaches, in the complexity-class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.<|reference_end|> | arxiv | @article{glasser2000a,
title={A Moment of Perfect Clarity I: The Parallel Census Technique},
author={Christian Glasser and Lane A. Hemaspaandra},
journal={arXiv preprint arXiv:cs/0007025},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007025},
primaryClass={cs.CC}
} | glasser2000a |
arxiv-669606 | cs/0007026 | Integrating E-Commerce and Data Mining: Architecture and Challenges | <|reference_start|>Integrating E-Commerce and Data Mining: Architecture and Challenges: We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges.<|reference_end|> | arxiv | @article{ansari2000integrating,
title={Integrating E-Commerce and Data Mining: Architecture and Challenges},
author={Suhail Ansari, Ron Kohavi, Llew Mason, and Zijian Zheng},
journal={WEBKDD'2000 workshop: Web Mining for E-Commerce -- Challenges and
Opportunities},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007026},
primaryClass={cs.LG cs.AI cs.CV cs.DB}
} | ansari2000integrating |
arxiv-669607 | cs/0007027 | Efficient cache use for stencil operations on structured discretization grids | <|reference_start|>Efficient cache use for stencil operations on structured discretization grids: We derive tight bounds on cache misses for evaluation of explicit stencil operators on structured grids. Our lower bound is based on the isoperimetric property of the discrete octahedron. Our upper bound is based on the good surface-to-volume ratio of a parallelepiped spanned by a reduced basis of the interference lattice of a grid. Measurements show that our algorithm typically reduces the number of cache misses by a factor of three relative to compiler-optimized code. We show that stencil calculations on grids whose interference lattice has a short vector feature abnormally high numbers of cache misses. We call such grids unfavorable and suggest avoiding them in computations by appropriate padding. By direct measurements on a MIPS R10000 we show a good correlation between abnormally high cache misses and unfavorable three-dimensional grids.<|reference_end|> | arxiv | @article{frumkin2000efficient,
title={Efficient cache use for stencil operations on structured discretization
grids},
author={Michael A. Frumkin, Rob F. Van der Wijngaart},
journal={arXiv preprint arXiv:cs/0007027},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007027},
primaryClass={cs.PF cs.CC}
} | frumkin2000efficient |
arxiv-669608 | cs/0007028 | Base Encryption: Dynamic algorithms, Keys, and Symbol Set | <|reference_start|>Base Encryption: Dynamic algorithms, Keys, and Symbol Set: All the current modern encryption algorithms utilize fixed symbols for plaintext and cyphertext. What I mean by fixed is that there is a set and limited number of symbols to represent the characters, numbers, and punctuation. In addition, they are usually the same (the plaintext symbols have the same and equivalent counterpart in the cyphertext symbols). Almost all the encryption algorithms rely on a predefined keyspace and length for the encryption/decryption keys, and it is usually fixed (number of bits). In addition, the algorithms used by the encryptions are static. There is a predefined number of operators, and a predefined order (loops included) of operations. The algorithm stays the same, and the plaintext and cyphertext along with the key are churned through this cypherblock. Base Encryption does the opposite: It utilizes the novel concepts of base conversion, symbol remapping, and dynamic algorithms (dynamic operators and dynamic operations). Base Encryption solves the weakness in today's encryption schemes, namely... Fixed symbols (base) Fixed keylengths Fixed algorithms (fixed number of operations and operators) Unique features... Immune from plaintext attacks. Immune from brute-force attacks. Can utilize throwaway algorithms (as opposed to throwaway keys). Plug-And-Play engine (other cyphers can be augmented to it)<|reference_end|> | arxiv | @article{lin2000base,
title={Base Encryption: Dynamic algorithms, Keys, and Symbol Set},
author={Po-Han Lin},
journal={arXiv preprint arXiv:cs/0007028},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007028},
primaryClass={cs.CR cs.CC}
} | lin2000base |
arxiv-669609 | cs/0007029 | Dimension-Dependent behavior in the satisfiability of random k-Horn formulae | <|reference_start|>Dimension-Dependent behavior in the satisfiability of random k-Horn formulae: We determine the asymptotic satisfiability probability of a random at-most-k-Horn formula, via a probabilistic analysis of a simple version, called PUR, of positive unit resolution. We show that for k=k(n)->oo the problem can be ``reduced'' to the case k(n)=n, that was solved in cs.DS/9912001. On the other hand, in the case k = a constant the behavior of PUR is modeled by a simple queuing chain, leading to a closed-form solution when k=2. Our analysis predicts an ``easy-hard-easy'' pattern in this latter case. Under a rescaled parameter, the graphs of satisfaction probability corresponding to finite values of k converge to the one for the uniform case, a ``dimension-dependent behavior'' similar to the one found experimentally by Kirkpatrick and Selman (Science'94) for k-SAT. The phenomenon is qualitatively explained by a threshold property for the number of iterations PUR makes on random satisfiable Horn formulas.<|reference_end|> | arxiv | @article{istrate2000dimension-dependent,
title={Dimension-Dependent behavior in the satisfiability of random k-Horn
formulae},
author={Gabriel Istrate},
journal={arXiv preprint arXiv:cs/0007029},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007029},
primaryClass={cs.DS}
} | istrate2000dimension-dependent |
arxiv-669610 | cs/0007030 | A theory of normed simulations | <|reference_start|>A theory of normed simulations: In existing simulation proof techniques, a single step in a lower-level specification may be simulated by an extended execution fragment in a higher-level one. As a result, it is cumbersome to mechanize these techniques using general purpose theorem provers. Moreover, it is undecidable whether a given relation is a simulation, even if tautology checking is decidable for the underlying specification logic. This paper introduces various types of normed simulations. In a normed simulation, each step in a lower-level specification can be simulated by at most one step in the higher-level one, for any related pair of states. In earlier work we demonstrated that normed simulations are quite useful as a vehicle for the formalization of refinement proofs via theorem provers. Here we show that normed simulations also have pleasant theoretical properties: (1) under some reasonable assumptions, it is decidable whether a given relation is a normed forward simulation, provided tautology checking is decidable for the underlying logic; (2) at the semantic level, normed forward and backward simulations together form a complete proof method for establishing behavior inclusion, provided that the higher-level specification has finite invisible nondeterminism.<|reference_end|> | arxiv | @article{griffioen2000a,
title={A theory of normed simulations},
author={W.O.D. Griffioen and F.W. Vaandrager},
journal={ACM Trans. Comput. Log. 5(4): 577-610 (2004)},
year={2000},
doi={10.1145/1024922.1024923},
number={CSI-R0013},
archivePrefix={arXiv},
eprint={cs/0007030},
primaryClass={cs.LO}
} | griffioen2000a |
arxiv-669611 | cs/0007031 | Parameter-free Model of Rank Polysemantic Distribution | <|reference_start|>Parameter-free Model of Rank Polysemantic Distribution: A model of rank polysemantic distribution with a minimal number of fitting parameters is offered. In an ideal case a parameter-free description of the dependence on the basis of one or several immediate features of the distribution is possible.<|reference_end|> | arxiv | @article{kromer2000parameter-free,
title={Parameter-free Model of Rank Polysemantic Distribution},
author={Victor Kromer},
journal={Proceedings of the 4th conference of the International
Quantitative Linguistics Association (QUALICO 2000). Prague, August 24-26,
2000. P. 21-22. The full version (in Russian) is available in Web Journal
FCCL. See URL http://fccl.ksu.ru/fcclpap.htm},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007031},
primaryClass={cs.CL}
} | kromer2000parameter-free |
arxiv-669612 | cs/0007032 | Knowledge on Treelike Spaces | <|reference_start|>Knowledge on Treelike Spaces: This paper presents a bimodal logic for reasoning about knowledge during knowledge acquisition. One of the modalities represents (effort during) non-deterministic time and the other represents knowledge. The semantics of this logic are tree-like spaces which are a generalization of semantics used for modeling branching time and historical necessity. A finite system of axiom schemes is shown to be canonically complete for the aforementioned spaces. A characterization of the satisfaction relation implies the small model property and decidability for this system.<|reference_end|> | arxiv | @article{georgatos2000knowledge,
title={Knowledge on Treelike Spaces},
author={Konstantinos Georgatos},
journal={Studia Logica, 1(59), 1997},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007032},
primaryClass={cs.LO cs.AI}
} | georgatos2000knowledge |
arxiv-669613 | cs/0007033 | To Preference via Entrenchment | <|reference_start|>To Preference via Entrenchment: We introduce a simple generalization of Gardenfors and Makinson's epistemic entrenchment called partial entrenchment. We show that preferential inference can be generated as the sceptical counterpart of an inference mechanism defined directly on partial entrenchment.<|reference_end|> | arxiv | @article{georgatos2000to,
title={To Preference via Entrenchment},
author={Konstantinos Georgatos},
journal={Annals Of Pure And Applied Logic, (96)1-3, pages 141-155, 1999},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007033},
primaryClass={cs.LO cs.AI}
} | georgatos2000to |
arxiv-669614 | cs/0007034 | The Competitiveness of On-Line vis-a-vis Conventional Retailing: A Preliminary Study | <|reference_start|>The Competitiveness of On-Line vis-a-vis Conventional Retailing: A Preliminary Study: Previous research has directly studied whether on-line retailing is more competitive than conventional retail markets. The evidence from books and music CDs is mixed. Here, I use an indirect approach to compare the competitiveness of on-line with conventional markets. Focusing on the retail market for books, I identify a peculiarity in the pricing of bestsellers relative to other titles. Supposing that competitive barriers are lower in on-line retailing, I analyze how the lower barriers would affect the relative pricing of bestsellers. The empirical data indicates that on-line retailing is more competitive than conventional retailing.<|reference_end|> | arxiv | @article{png2000the,
title={The Competitiveness of On-Line vis-a-vis Conventional Retailing: A
Preliminary Study},
author={Ivan Png},
journal={arXiv preprint arXiv:cs/0007034},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007034},
primaryClass={cs.OH}
} | png2000the |
arxiv-669615 | cs/0007035 | Mapping WordNets Using Structural Information | <|reference_start|>Mapping WordNets Using Structural Information: We present a robust approach for linking already existing lexical/semantic hierarchies. We used a constraint satisfaction algorithm (relaxation labeling) to select --among a set of candidates-- the node in a target taxonomy that bests matches each node in a source taxonomy. In particular, we use it to map the nominal part of WordNet 1.5 onto WordNet 1.6, with a very high precision and a very low remaining ambiguity.<|reference_end|> | arxiv | @article{daude2000mapping,
title={Mapping WordNets Using Structural Information},
author={J. Daude, L. Padro & G. Rigau},
journal={38th Annual Meeting of the Association for Computational
Linguistics (ACL'2000). Hong Kong, October 2000.},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007035},
primaryClass={cs.CL}
} | daude2000mapping |
arxiv-669616 | cs/0007036 | Language identification of controlled systems: Modelling, control and anomaly detection | <|reference_start|>Language identification of controlled systems: Modelling, control and anomaly detection: Formal language techniques have been used in the past to study autonomous dynamical systems. However, for controlled systems, new features are needed to distinguish between information generated by the system and input control. We show how the modelling framework for controlled dynamical systems leads naturally to a formulation in terms of context-dependent grammars. A learning algorithm is proposed for on-line generation of the grammar productions, this formulation being then used for modelling, control and anomaly detection. Practical applications are described for electromechanical drives. Grammatical interpolation techniques yield accurate results and the pattern detection capabilities of the language-based formulation makes it a promising technique for the early detection of anomalies or faulty behaviour.<|reference_end|> | arxiv | @article{martins2000language,
title={Language identification of controlled systems: Modelling, control and
anomaly detection},
author={J. F. Martins (EST-IPS, Setubal), J. A. Dente (IST, Lisboa), A. J.
Pires (EST-IPS, Setubal) and R. Vilela Mendes (GFM, UL, Lisboa)},
journal={IEEE Trans. in Systems, Man and Cybernetics 31 (2001) 234},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007036},
primaryClass={cs.CL}
} | martins2000language |
arxiv-669617 | cs/0007037 | Knowledge Theoretic Properties of Topological Spaces | <|reference_start|>Knowledge Theoretic Properties of Topological Spaces: We study the topological models of a logic of knowledge for topological reasoning, introduced by Larry Moss and Rohit Parikh. Among our results are a solution of a conjecture by the aforementioned authors, the finite satisfiability property, and decidability for the theory of topological models.<|reference_end|> | arxiv | @article{georgatos2000knowledge,
title={Knowledge Theoretic Properties of Topological Spaces},
author={Konstantinos Georgatos},
journal={In Knowledge Representation and Uncertainty. M. Masuch and L.
Polos, Eds. Lecture Notes in Artificial Intelligence, vol. 808, pages
147-159, Springer-Verlag, 1994},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007037},
primaryClass={cs.LO}
} | georgatos2000knowledge |
arxiv-669618 | cs/0007038 | Modal Logics for Topological Spaces | <|reference_start|>Modal Logics for Topological Spaces: In this thesis we shall present two logical systems, MP and MP, for the purpose of reasoning about knowledge and effort. These logical systems will be interpreted in a spatial context and therefore, the abstract concepts of knowledge and effort will be defined by concrete mathematical concepts.<|reference_end|> | arxiv | @article{georgatos2000modal,
title={Modal Logics for Topological Spaces},
author={Konstantinos Georgatos},
journal={arXiv preprint arXiv:cs/0007038},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007038},
primaryClass={cs.LO cs.AI}
} | georgatos2000modal |
arxiv-669619 | cs/0007039 | Ordering-based Representations of Rational Inference | <|reference_start|>Ordering-based Representations of Rational Inference: Rational inference relations were introduced by Lehmann and Magidor as the ideal systems for drawing conclusions from a conditional base. However, there has been no simple characterization of these relations, other than its original representation by preferential models. In this paper, we shall characterize them with a class of total preorders of formulas by improving and extending Gardenfors and Makinson's results for expectation inference relations. A second representation is application-oriented and is obtained by considering a class of consequence operators that grade sets of defaults according to our reliance on them. The finitary fragment of this class of consequence operators has been employed by recent default logic formalisms based on maxiconsistency.<|reference_end|> | arxiv | @article{georgatos2000ordering-based,
title={Ordering-based Representations of Rational Inference},
author={Konstantinos Georgatos},
journal={In the Proceedings of the European Workshop on Logics in AI (JELIA
'96). J.J. Alferes, L.M. Pereira and E. Orlowska, Eds. Lecture Notes in
Artificial Intelligence, vol. 1126, pages 176-191, Springer-Verlag, 1996},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007039},
primaryClass={cs.LO cs.AI}
} | georgatos2000ordering-based |
arxiv-669620 | cs/0007040 | Entrenchment Relations: A Uniform Approach to Nonmonotonicity | <|reference_start|>Entrenchment Relations: A Uniform Approach to Nonmonotonicity: We show that Gabbay's nonmonotonic consequence relations can be reduced to a new family of relations, called entrenchment relations. Entrenchment relations provide a direct generalization of epistemic entrenchment and expectation ordering introduced by Gardenfors and Makinson for the study of belief revision and expectation inference, respectively.<|reference_end|> | arxiv | @article{georgatos2000entrenchment,
title={Entrenchment Relations: A Uniform Approach to Nonmonotonicity},
author={Konstantinos Georgatos},
journal={In the Proceedings of the International Joint Conference on
Qualitative and Quantitative Practical Reasoning (ESCQARU/FAPR 97), Lecture
Notes in Artificial Intelligence, vol. 1244, pages 282-297, Springer-Verlag,
1997},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007040},
primaryClass={cs.LO cs.AI}
} | georgatos2000entrenchment |
arxiv-669621 | cs/0007041 | Relevance as Deduction: A Logical View of Information Retrieval | <|reference_start|>Relevance as Deduction: A Logical View of Information Retrieval: The problem of Information Retrieval is, given a set of documents D and a query q, providing an algorithm for retrieving all documents in D relevant to q. However, retrieval should depend on, and be updated with, any preferred set of relevant documents that the user is able to provide as input; this process is known as relevance feedback. Recent work in IR has been paying great attention to models which employ a logical approach; the advantage being that one can have a simple computable characterization of retrieval on the basis of a pure logical analysis of retrieval. Most of the logical models make use of probabilities or similar belief functions in order to introduce the inductive component whereby uncertainty is treated. Their general paradigm is the following: find the nature of the conditional $d \to q$ and then define a probability on top of it. We just reverse this point of view; first use the numerical information, frequencies or probabilities, then define your own logical consequence. More generally, we claim that retrieval is a form of deduction. We introduce a simple but powerful logical framework of relevance feedback, derived from the well-founded area of nonmonotonic logic. This description can help us evaluate, describe and compare from a theoretical point of view previous approaches based on conditionals or probabilities.<|reference_end|> | arxiv | @article{amati2000relevance,
title={Relevance as Deduction: A Logical View of Information Retrieval},
author={Gianni Amati, Konstantinos Georgatos},
journal={In F. Crestani and M. Lalmas, editors, Proceedings of the Second
Workshop on Information Retrieval, Uncertainty and Logic WIRUL'96, pages
21--26. University of Glasgow, Glasgow, Scotland, 1996},
year={2000},
number={TR-1996-29},
archivePrefix={arXiv},
eprint={cs/0007041},
primaryClass={cs.IR cs.LO}
} | amati2000relevance |
arxiv-669622 | cs/0007042 | Computational Geometry Column 39 | <|reference_start|>Computational Geometry Column 39: The resolution of a decades-old open problem is described: polygonal chains cannot lock in the plane.<|reference_end|> | arxiv | @article{o'rourke2000computational,
title={Computational Geometry Column 39},
author={Joseph O'Rourke},
journal={SIGACT News 31(3): 47-49 (2000)},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007042},
primaryClass={cs.CG cs.DM}
} | o'rourke2000computational |
arxiv-669623 | cs/0007043 | Min-Max Fine Heaps | <|reference_start|>Min-Max Fine Heaps: In this paper we present a new data structure for double ended priority queue, called min-max fine heap, which combines the techniques used in fine heap and traditional min-max heap. The standard operations on this proposed structure are also presented, and their analysis indicates that the new structure outperforms the traditional one.<|reference_end|> | arxiv | @article{nath2000min-max,
title={Min-Max Fine Heaps},
author={Suman Kumar Nath, Rezaul Alam Chowdhury, M. Kaykobad},
journal={arXiv preprint arXiv:cs/0007043},
year={2000},
archivePrefix={arXiv},
eprint={cs/0007043},
primaryClass={cs.DS}
} | nath2000min-max |
arxiv-669624 | cs/0007044 | Managing Periodically Updated Data in Relational Databases: A Stochastic Modeling Approach | <|reference_start|>Managing Periodically Updated Data in Relational Databases: A Stochastic Modeling Approach: Recent trends in information management involve the periodic transcription of data onto secondary devices in a networked environment, and the proper scheduling of these transcriptions is critical for efficient data management. To assist in the scheduling process, we are interested in modeling the reduction of consistency over time between a relation and its replica, termed obsolescence of data. The modeling is based on techniques from the field of stochastic processes, and provides several stochastic models for content evolution in the base relations of a database, taking referential integrity constraints into account. These models are general enough to accommodate most of the common scenarios in databases, including batch insertions and life spans both with and without memory. As an initial "proof of concept" of the applicability of our approach, we validate the insertion portion of our model framework via experiments with real data feeds. We also discuss a set of transcription protocols which make use of the proposed stochastic model.<|reference_end|> | arxiv | @article{gal2000managing,
title={Managing Periodically Updated Data in Relational Databases: A Stochastic
Modeling Approach},
author={Avigdor Gal and Jonathan Eckstein},
journal={arXiv preprint arXiv:cs/0007044},
year={2000},
number={RRR-37-2000},
archivePrefix={arXiv},
eprint={cs/0007044},
primaryClass={cs.DB}
} | gal2000managing |
arxiv-669625 | cs/0008001 | Boolean Satisfiability with Transitivity Constraints | <|reference_start|>Boolean Satisfiability with Transitivity Constraints: We consider a variant of the Boolean satisfiability problem where a subset E of the propositional variables appearing in formula Fsat encode a symmetric, transitive, binary relation over N elements. Each of these relational variables, e[i,j], for 1 <= i < j <= N, expresses whether or not the relation holds between elements i and j. The task is to either find a satisfying assignment to Fsat that also satisfies all transitivity constraints over the relational variables (e.g., e[1,2] & e[2,3] ==> e[1,3]), or to prove that no such assignment exists. Solving this satisfiability problem is the final and most difficult step in our decision procedure for a logic of equality with uninterpreted functions. This procedure forms the core of our tool for verifying pipelined microprocessors. To use a conventional Boolean satisfiability checker, we augment the set of clauses expressing Fsat with clauses expressing the transitivity constraints. We consider methods to reduce the number of such clauses based on the sparse structure of the relational variables. To use Ordered Binary Decision Diagrams (OBDDs), we show that for some sets E, the OBDD representation of the transitivity constraints has exponential size for all possible variable orderings. By considering only those relational variables that occur in the OBDD representation of Fsat, our experiments show that we can readily construct an OBDD representation of the relevant transitivity constraints and thus solve the constrained satisfiability problem.<|reference_end|> | arxiv | @article{bryant2000boolean,
title={Boolean Satisfiability with Transitivity Constraints},
author={Randal E. Bryant, Miroslav N. Velev},
journal={arXiv preprint arXiv:cs/0008001},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008001},
primaryClass={cs.LO}
} | bryant2000boolean |
arxiv-669626 | cs/0008002 | Structure of some sand pile model | <|reference_start|>Structure of some sand pile model: SPM (Sand Pile Model) is a simple discrete dynamical system used in physics to represent granular objects. It is deeply related to integer partitions, and many other combinatorial problems, such as tilings or rewriting systems. The evolution of the system started with n stacked grains generates a lattice, denoted by SPM(n). We study here the structure of this lattice. We first explain how it can be constructed, by showing its strong self-similarity property. Then, we define SPM(infinity), a natural extension of SPM when one starts with an infinite number of grains. Again, we give an efficient construction algorithm and a coding of this lattice using a self-similar tree. The two approaches give different recursive formulae for the cardinality of SPM(n), for which no closed formula has ever been found.<|reference_end|> | arxiv | @article{latapy2000structure,
title={Structure of some sand pile model},
author={M.Latapy, R.Mantaci, M.Morvan and H.D.Phan},
journal={arXiv preprint arXiv:cs/0008002},
year={2000},
number={LIAFA Technical Report 99/22},
archivePrefix={arXiv},
eprint={cs/0008002},
primaryClass={cs.DM cond-mat.stat-mech cs.DS math.CO}
} | latapy2000structure |
arxiv-669627 | cs/0008003 | Interfacing Constraint-Based Grammars and Generation Algorithms | <|reference_start|>Interfacing Constraint-Based Grammars and Generation Algorithms: Constraint-based grammars can, in principle, serve as the major linguistic knowledge source for both parsing and generation. Surface generation starts from input semantics representations that may vary across grammars. For many declarative grammars, the concept of derivation implicitly built in is that of parsing. They may thus not be interpretable by a generation algorithm. We show that linguistically plausible semantic analyses can cause severe problems for semantic-head-driven approaches for generation (SHDG). We use SeReal, a variant of SHDG and the DISCO grammar of German as our source of examples. We propose a new, general approach that explicitly accounts for the interface between the grammar and the generation algorithm by adding a control-oriented layer to the linguistic knowledge base that reorganizes the semantics in a way suitable for generation.<|reference_end|> | arxiv | @article{busemann2000interfacing,
title={Interfacing Constraint-Based Grammars and Generation Algorithms},
author={Stephan Busemann},
journal={Proc. Workshop on Analysis for Generation, 1st International
Natural Language Generation Conference, Mitzpe Ramon, Israel, June 12, 2000.
pp. 14-21},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008003},
primaryClass={cs.CL}
} | busemann2000interfacing |
arxiv-669628 | cs/0008004 | Comparing two trainable grammatical relations finders | <|reference_start|>Comparing two trainable grammatical relations finders: Grammatical relationships (GRs) form an important level of natural language processing, but different sets of GRs are useful for different purposes. Therefore, one may often only have time to obtain a small training corpus with the desired GR annotations. On such a small training corpus, we compare two systems. They use different learning techniques, but we find that this difference by itself only has a minor effect. A larger factor is that in English, a different GR length measure appears better suited for finding simple argument GRs than for finding modifier GRs. We also find that partitioning the data may help memory-based learning.<|reference_end|> | arxiv | @article{yeh2000comparing,
title={Comparing two trainable grammatical relations finders},
author={Alexander Yeh},
journal={18th International Conference on Computational Linguistics (COLING
2000), pages 1146-1150, Saarbruecken, Germany, July, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008004},
primaryClass={cs.CL}
} | yeh2000comparing |
arxiv-669629 | cs/0008005 | More accurate tests for the statistical significance of result differences | <|reference_start|>More accurate tests for the statistical significance of result differences: Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.<|reference_end|> | arxiv | @article{yeh2000more,
title={More accurate tests for the statistical significance of result
differences},
author={Alexander Yeh},
journal={18th International Conference on Computational Linguistics (COLING
2000), pages 947-953, Saarbruecken, Germany, July, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008005},
primaryClass={cs.CL}
} | yeh2000more |
arxiv-669630 | cs/0008006 | Algorithms for Analysing Firewall and Router Access Lists | <|reference_start|>Algorithms for Analysing Firewall and Router Access Lists: Network firewalls and routers use a rule database to decide which packets will be allowed from one network onto another. By filtering packets the firewalls and routers can improve security and performance. However, as the size of the rule list increases, it becomes difficult to maintain and validate the rules, and lookup latency may increase significantly. Ordered binary decision diagrams (BDDs) - a compact method of representing and manipulating boolean expressions - are a potential method of representing the rules. This paper presents a new algorithm for representing such lists as a BDD and then shows how the resulting boolean expression can be used to analyse rule sets.<|reference_end|> | arxiv | @article{hazelhurst2000algorithms,
title={Algorithms for Analysing Firewall and Router Access Lists},
author={Scott Hazelhurst},
journal={arXiv preprint arXiv:cs/0008006},
year={2000},
number={TR-Wits-CS-1999-5},
archivePrefix={arXiv},
eprint={cs/0008006},
primaryClass={cs.NI}
} | hazelhurst2000algorithms |
arxiv-669631 | cs/0008007 | Tagger Evaluation Given Hierarchical Tag Sets | <|reference_start|>Tagger Evaluation Given Hierarchical Tag Sets: We present methods for evaluating human and automatic taggers that extend current practice in three ways. First, we show how to evaluate taggers that assign multiple tags to each test instance, even if they do not assign probabilities. Second, we show how to accommodate a common property of manually constructed ``gold standards'' that are typically used for objective evaluation, namely that there is often more than one correct answer. Third, we show how to measure performance when the set of possible tags is tree-structured in an IS-A hierarchy. To illustrate how our methods can be used to measure inter-annotator agreement, we show how to compute the kappa coefficient over hierarchical tag sets.<|reference_end|> | arxiv | @article{melamed2000tagger,
title={Tagger Evaluation Given Hierarchical Tag Sets},
author={I. Dan Melamed and Philip Resnik},
journal={Computers and the Humanities 34(1-2). Special issue on SENSEVAL.
pp. 79-84},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008007},
primaryClass={cs.CL}
} | melamed2000tagger |
arxiv-669632 | cs/0008008 | On the Average Similarity Degree between Solutions of Random k-SAT and Random CSPs | <|reference_start|>On the Average Similarity Degree between Solutions of Random k-SAT and Random CSPs: To study the structure of solutions for random k-SAT and random CSPs, this paper introduces the concept of average similarity degree to characterize how solutions are similar to each other. It is proved that under certain conditions, as r (i.e. the ratio of constraints to variables) increases, the limit of average similarity degree when the number of variables approaches infinity exhibits phase transitions at a threshold point, shifting from a smaller value to a larger value abruptly. For random k-SAT this phenomenon will occur when k>4 . It is further shown that this threshold point is also a singular point with respect to r in the asymptotic estimate of the second moment of the number of solutions. Finally, we discuss how this work is helpful to understand the hardness of solving random instances and a possible application of it to the design of search algorithms.<|reference_end|> | arxiv | @article{xu2000on,
title={On the Average Similarity Degree between Solutions of Random k-SAT and
Random CSPs},
author={Ke Xu and Wei Li},
journal={Discrete Applied Mathematics, 136(2004):125-149.},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008008},
primaryClass={cs.AI cs.CC cs.DM}
} | xu2000on |
arxiv-669633 | cs/0008009 | Data Mining to Measure and Improve the Success of Web Sites | <|reference_start|>Data Mining to Measure and Improve the Success of Web Sites: For many companies, competitiveness in e-commerce requires a successful presence on the web. Web sites are used to establish the company's image, to promote and sell goods and to provide customer support. The success of a web site affects and reflects directly the success of the company in the electronic market. In this study, we propose a methodology to improve the ``success'' of web sites, based on the exploitation of navigation pattern discovery. In particular, we present a theory, in which success is modelled on the basis of the navigation behaviour of the site's users. We then exploit WUM, a navigation pattern discovery miner, to study how the success of a site is reflected in the users' behaviour. With WUM we measure the success of a site's components and obtain concrete indications of how the site should be improved. We report on our first experiments with an online catalog, the success of which we have studied. Our mining analysis has shown very promising results, on the basis of which the site is currently undergoing concrete improvements.<|reference_end|> | arxiv | @article{spiliopoulou2000data,
title={Data Mining to Measure and Improve the Success of Web Sites},
author={Myra Spiliopoulou and Carsten Pohle (Institute of Information Systems,
Humboldt University Berlin)},
journal={arXiv preprint arXiv:cs/0008009},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008009},
primaryClass={cs.LG cs.DB}
} | spiliopoulou2000data |
arxiv-669634 | cs/0008010 | Flipturning polygons | <|reference_start|>Flipturning polygons: A flipturn is an operation that transforms a nonconvex simple polygon into another simple polygon, by rotating a concavity 180 degrees around the midpoint of its bounding convex hull edge. Joss and Shannon proved in 1973 that a sequence of flipturns eventually transforms any simple polygon into a convex polygon. This paper describes several new results about such flipturn sequences. We show that any orthogonal polygon is convexified after at most n-5 arbitrary flipturns, or at most 5(n-4)/6 well-chosen flipturns, improving the previously best upper bound of (n-1)!/2. We also show that any simple polygon can be convexified by at most n^2-4n+1 flipturns, generalizing earlier results of Ahn et al. These bounds depend critically on how degenerate cases are handled; we carefully explore several possibilities. We describe how to maintain both a simple polygon and its convex hull in O(log^4 n) time per flipturn, using a data structure of size O(n). We show that although flipturn sequences for the same polygon can have very different lengths, the shape and position of the final convex polygon is the same for all sequences and can be computed in O(n log n) time. Finally, we demonstrate that finding the longest convexifying flipturn sequence of a simple polygon is NP-hard.<|reference_end|> | arxiv | @article{aichholzer2000flipturning,
title={Flipturning polygons},
author={Oswin Aichholzer, Carmen Cortes, Erik D. Demaine, Vida Dujmovic, Jeff
Erickson, Henk Meijer, Mark Overmars, Belen Palop, Suneeta Ramaswami, and
Godfried T. Toussaint},
journal={arXiv preprint arXiv:cs/0008010},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008010},
primaryClass={cs.CG cs.DM math.MG}
} | aichholzer2000flipturning |
arxiv-669635 | cs/0008011 | All Pairs Shortest Paths using Bridging Sets and Rectangular Matrix Multiplication | <|reference_start|>All Pairs Shortest Paths using Bridging Sets and Rectangular Matrix Multiplication: We present two new algorithms for solving the {\em All Pairs Shortest Paths} (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms. The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in $\tilde{O}(n^{2+\mu})$ time, where $\mu$ satisfies the equation $\omega(1,\mu,1)=1+2\mu$ and $\omega(1,\mu,1)$ is the exponent of the multiplication of an $n\times n^\mu$ matrix by an $n^\mu \times n$ matrix. Currently, the best available bounds on $\omega(1,\mu,1)$, obtained by Coppersmith, imply that $\mu<0.575$. The running time of our algorithm is therefore $O(n^{2.575})$. Our algorithm improves on the $\tilde{O}(n^{(3+\omega)/2})$ time algorithm, where $\omega=\omega(1,1,1)<2.376$ is the usual exponent of matrix multiplication, obtained by Alon, Galil and Margalit, whose running time is only known to be $O(n^{2.688})$. The second algorithm solves the APSP problem {\em almost} exactly for directed graphs with {\em arbitrary} non-negative real weights. The algorithm runs in $\tilde{O}((n^\omega/\varepsilon)\log (W/\varepsilon))$ time, where $\varepsilon>0$ is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most $1+\varepsilon$. Corresponding paths can also be found efficiently.<|reference_end|> | arxiv | @article{zwick2000all,
title={All Pairs Shortest Paths using Bridging Sets and Rectangular Matrix
Multiplication},
author={Uri Zwick},
journal={arXiv preprint arXiv:cs/0008011},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008011},
primaryClass={cs.DS}
} | zwick2000all |
arxiv-669636 | cs/0008012 | Applying System Combination to Base Noun Phrase Identification | <|reference_start|>Applying System Combination to Base Noun Phrase Identification: We use seven machine learning algorithms for one task: identifying base noun phrases. The results have been processed by different system combination methods and all of these outperformed the best individual result. We have applied the seven learners with the best combinator, a majority vote of the top five systems, to a standard data set and managed to improve the best published result for this data set.<|reference_end|> | arxiv | @article{sang2000applying,
title={Applying System Combination to Base Noun Phrase Identification},
author={Erik F. Tjong Kim Sang, Walter Daelemans, Herve Dejean, Rob Koeling,
Yuval Krymolowski, Vasin Punyakanok and Dan Roth},
journal={Proceedings of COLING 2000, Saarbruecken, Germany},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008012},
primaryClass={cs.CL}
} | sang2000applying |
arxiv-669637 | cs/0008013 | Meta-Learning for Phonemic Annotation of Corpora | <|reference_start|>Meta-Learning for Phonemic Annotation of Corpora: We apply rule induction, classifier combination and meta-learning (stacked classifiers) to the problem of bootstrapping high accuracy automatic annotation of corpora with pronunciation information. The task we address in this paper consists of generating phonemic representations reflecting the Flemish and Dutch pronunciations of a word on the basis of its orthographic representation (which in turn is based on the actual speech recordings). We compare several possible approaches to achieve the text-to-pronunciation mapping task: memory-based learning, transformation-based learning, rule induction, maximum entropy modeling, combination of classifiers in stacked learning, and stacking of meta-learners. We are interested both in optimal accuracy and in obtaining insight into the linguistic regularities involved. As far as accuracy is concerned, an already high accuracy level (93% for Celex and 86% for Fonilex at word level) for single classifiers is boosted significantly with additional error reductions of 31% and 38% respectively using combination of classifiers, and a further 5% using combination of meta-learners, bringing overall word level accuracy to 96% for the Dutch variant and 92% for the Flemish variant. We also show that the application of machine learning methods indeed leads to increased insight into the linguistic regularities determining the variation between the two pronunciation variants studied.<|reference_end|> | arxiv | @article{hoste2000meta-learning,
title={Meta-Learning for Phonemic Annotation of Corpora},
author={Veronique Hoste, Walter Daelemans, Erik Tjong Kim Sang, Steven Gillis},
journal={Proceedings of ICML-2000, Stanford University, CA, USA},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008013},
primaryClass={cs.CL}
} | hoste2000meta-learning |
arxiv-669638 | cs/0008014 | Aspects of Pattern-Matching in Data-Oriented Parsing | <|reference_start|>Aspects of Pattern-Matching in Data-Oriented Parsing: Data-Oriented Parsing (DOP) ranks among the best parsing schemes, pairing state-of-the-art parsing accuracy to the psycholinguistic insight that larger chunks of syntactic structures are relevant grammatical and probabilistic units. Parsing with the DOP-model, however, seems to involve a lot of CPU cycles and a considerable amount of double work, brought on by the concept of multiple derivations, which is necessary for probabilistic processing, but which is not convincingly related to a proper linguistic backbone. It is however possible to re-interpret the DOP-model as a pattern-matching model, which tries to maximize the size of the substructures that construct the parse, rather than the probability of the parse. By emphasizing this memory-based aspect of the DOP-model, it is possible to do away with multiple derivations, opening up possibilities for efficient Viterbi-style optimizations, while still retaining acceptable parsing accuracy through enhanced context-sensitivity.<|reference_end|> | arxiv | @article{de pauw2000aspects,
title={Aspects of Pattern-Matching in Data-Oriented Parsing},
author={Guy De Pauw},
journal={Proceedings of the 18th International Conference on Computational
Linguistics},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008014},
primaryClass={cs.CL}
} | de pauw2000aspects |
arxiv-669639 | cs/0008015 | Temiar Reduplication in One-Level Prosodic Morphology | <|reference_start|>Temiar Reduplication in One-Level Prosodic Morphology: Temiar reduplication is a difficult piece of prosodic morphology. This paper presents the first computational analysis of Temiar reduplication, using the novel finite-state approach of One-Level Prosodic Morphology originally developed by Walther (1999b, 2000). After reviewing both the data and the basic tenets of One-level Prosodic Morphology, the analysis is laid out in some detail, using the notation of the FSA Utilities finite-state toolkit (van Noord 1997). One important discovery is that in this approach one can easily define a regular expression operator which ambiguously scans a string in the left- or rightward direction for a certain prosodic property. This yields an elegant account of base-length-dependent triggering of reduplication as found in Temiar.<|reference_end|> | arxiv | @article{walther2000temiar,
title={Temiar Reduplication in One-Level Prosodic Morphology},
author={Markus Walther (University of Marburg)},
journal={arXiv preprint arXiv:cs/0008015},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008015},
primaryClass={cs.CL}
} | walther2000temiar |
arxiv-669640 | cs/0008016 | Processing Self Corrections in a speech to speech system | <|reference_start|>Processing Self Corrections in a speech to speech system: Speech repairs occur often in spontaneous spoken dialogues. The ability to detect and correct those repairs is necessary for any spoken language system. We present a framework to detect and correct speech repairs where all relevant levels of information, i.e., acoustics, lexis, syntax and semantics can be integrated. The basic idea is to reduce the search space for repairs as soon as possible by cascading filters that involve more and more features. At first an acoustic module generates hypotheses about the existence of a repair. Second a stochastic model suggests a correction for every hypothesis. Well scored corrections are inserted as new paths in the word lattice. Finally a lattice parser decides on accepting the repair.<|reference_end|> | arxiv | @article{spilker2000processing,
title={Processing Self Corrections in a speech to speech system},
author={Joerg Spilker, Martin Klarner, Guenther Goerz},
journal={Proceedings of COLING 2000, Saarbruecken, Germany; 31.7-4.8; pp
1116-1120},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008016},
primaryClass={cs.CL cs.AI}
} | spilker2000processing |
arxiv-669641 | cs/0008017 | Efficient probabilistic top-down and left-corner parsing | <|reference_start|>Efficient probabilistic top-down and left-corner parsing: This paper examines efficient predictive broad-coverage parsing without dynamic programming. In contrast to bottom-up methods, depth-first top-down parsing produces partial parses that are fully connected trees spanning the entire left context, from which any kind of non-local dependency or partial semantic interpretation can in principle be read. We contrast two predictive parsing approaches, top-down and left-corner parsing, and find both to be viable. In addition, we find that enhancement with non-local information not only improves parser accuracy, but also substantially improves the search efficiency.<|reference_end|> | arxiv | @article{roark2000efficient,
title={Efficient probabilistic top-down and left-corner parsing},
author={Brian Roark and Mark Johnson},
journal={Proceedings of the 37th Annual Meeting of the Association for
Computational Linguistics, 1999, pages 421-428},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008017},
primaryClass={cs.CL}
} | roark2000efficient |
arxiv-669642 | cs/0008018 | The Bisimulation Problem for equational graphs of finite out-degree | <|reference_start|>The Bisimulation Problem for equational graphs of finite out-degree: The "bisimulation problem" for equational graphs of finite out-degree is shown to be decidable. We reduce this problem to the bisimulation problem for deterministic rational (vectors of) boolean series on the alphabet of a dpda M. We then exhibit a complete formal system for deducing equivalent pairs of such vectors.<|reference_end|> | arxiv | @article{senizergues2000the,
title={The Bisimulation Problem for equational graphs of finite out-degree},
author={G. Senizergues},
journal={arXiv preprint arXiv:cs/0008018},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008018},
primaryClass={cs.LO cs.DM}
} | senizergues2000the |
arxiv-669643 | cs/0008019 | An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam Filtering with Personal E-mail Messages | <|reference_start|>An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam Filtering with Personal E-mail Messages: The growing problem of unsolicited bulk e-mail, also known as "spam", has generated a need for reliable anti-spam e-mail filters. Filters of this type have so far been based mostly on manually constructed keyword patterns. An alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. We test this approach on a large collection of personal e-mail messages, which we make publicly available in "encrypted" form contributing towards standard benchmarks. We introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. Finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns, and which is part of a widely used e-mail reader.<|reference_end|> | arxiv | @article{androutsopoulos2000an,
title={An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam
Filtering with Personal E-mail Messages},
author={Ion Androutsopoulos, John Koutsias, Konstantinos V. Chandrinos and
Constantine D. Spyropoulos},
journal={Proceedings of the 23rd Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval, N.J. Belkin, P.
Ingwersen and M.-K. Leong (Eds.), Athens, Greece, July 24-28, 2000, pages
160-167},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008019},
primaryClass={cs.CL cs.IR cs.LG}
} | androutsopoulos2000an |
arxiv-669644 | cs/0008020 | Explaining away ambiguity: Learning verb selectional preference with Bayesian networks | <|reference_start|>Explaining away ambiguity: Learning verb selectional preference with Bayesian networks: This paper presents a Bayesian model for unsupervised learning of verb selectional preferences. For each verb the model creates a Bayesian network whose architecture is determined by the lexical hierarchy of Wordnet and whose parameters are estimated from a list of verb-object pairs found from a corpus. ``Explaining away'', a well-known property of Bayesian networks, helps the model deal in a natural fashion with word sense ambiguity in the training data. On a word sense disambiguation test our model performed better than other state of the art systems for unsupervised learning of selectional preferences. Computational complexity problems, ways of improving this approach and methods for implementing ``explaining away'' in other graphical frameworks are discussed.<|reference_end|> | arxiv | @article{ciaramita2000explaining,
title={Explaining away ambiguity: Learning verb selectional preference with
Bayesian networks},
author={Massimiliano Ciaramita and Mark Johnson},
journal={Proceedings of the 18th International Conference on Computational
Linguistics, Saarbrucken, Germany, Vol.1, 2000, p.187},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008020},
primaryClass={cs.CL cs.AI}
} | ciaramita2000explaining |
arxiv-669645 | cs/0008021 | Compact non-left-recursive grammars using the selective left-corner transform and factoring | <|reference_start|>Compact non-left-recursive grammars using the selective left-corner transform and factoring: The left-corner transform removes left-recursion from (probabilistic) context-free grammars and unification grammars, permitting simple top-down parsing techniques to be used. Unfortunately the grammars produced by the standard left-corner transform are usually much larger than the original. The selective left-corner transform described in this paper produces a transformed grammar which simulates left-corner recognition of a user-specified set of the original productions, and top-down recognition of the others. Combined with two factorizations, it produces non-left-recursive grammars that are not much larger than the original.<|reference_end|> | arxiv | @article{johnson2000compact,
title={Compact non-left-recursive grammars using the selective left-corner
transform and factoring},
author={Mark Johnson and Brian Roark},
journal={Proceedings of the 18th International Conference on Computational
Linguistics (COLING), 2000, pages 355-361},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008021},
primaryClass={cs.CL}
} | johnson2000compact |
arxiv-669646 | cs/0008022 | A Learning Approach to Shallow Parsing | <|reference_start|>A Learning Approach to Shallow Parsing: A SNoW based learning approach to shallow parsing tasks is presented and studied experimentally. The approach learns to identify syntactic patterns by combining simple predictors to produce a coherent inference. Two instantiations of this approach are studied and experimental results for Noun-Phrases (NP) and Subject-Verb (SV) phrases that compare favorably with the best published results are presented. In doing that, we compare two ways of modeling the problem of learning to recognize patterns and suggest that shallow parsing patterns are better learned using open/close predictors than using inside/outside predictors.<|reference_end|> | arxiv | @article{muñoz2000a,
title={A Learning Approach to Shallow Parsing},
author={Marcia Mu\~{n}oz, Vasin Punyakanok, Dan Roth and Dav Zimak},
journal={Proceedings of EMNLP-VLC'99, pages 168-178},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008022},
primaryClass={cs.LG cs.CL}
} | muñoz2000a |
arxiv-669647 | cs/0008023 | Selectional Restrictions in HPSG | <|reference_start|>Selectional Restrictions in HPSG: Selectional restrictions are semantic sortal constraints imposed on the participants of linguistic constructions to capture contextually-dependent constraints on interpretation. Despite their limitations, selectional restrictions have proven very useful in natural language applications, where they have been used frequently in word sense disambiguation, syntactic disambiguation, and anaphora resolution. Given their practical value, we explore two methods to incorporate selectional restrictions in the HPSG theory, assuming that the reader is familiar with HPSG. The first method employs HPSG's Background feature and a constraint-satisfaction component pipe-lined after the parser. The second method uses subsorts of referential indices, and blocks readings that violate selectional restrictions during parsing. While theoretically less satisfactory, we have found the second method particularly useful in the development of practical systems.<|reference_end|> | arxiv | @article{androutsopoulos2000selectional,
title={Selectional Restrictions in HPSG},
author={Ion Androutsopoulos and Robert Dale},
journal={Proceedings of the 18th International Conference on Computational
Linguistics (COLING), Saarbrucken, Germany, 31 July - 4 August 2000, pages
15-20},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008023},
primaryClass={cs.CL}
} | androutsopoulos2000selectional |
arxiv-669648 | cs/0008024 | Estimation of Stochastic Attribute-Value Grammars using an Informative Sample | <|reference_start|>Estimation of Stochastic Attribute-Value Grammars using an Informative Sample: We argue that some of the computational complexity associated with estimation of stochastic attribute-value grammars can be reduced by training upon an informative subset of the full training set. Results using the parsed Wall Street Journal corpus show that in some circumstances, it is possible to obtain better estimation results using an informative sample than when training upon all the available material. Further experimentation demonstrates that with unlexicalised models, a Gaussian Prior can reduce overfitting. However, when models are lexicalised and contain overlapping features, overfitting does not seem to be a problem, and a Gaussian Prior makes minimal difference to performance. Our approach is applicable for situations when there are an infeasibly large number of parses in the training set, or else for when recovery of these parses from a packed representation is itself computationally expensive.<|reference_end|> | arxiv | @article{osborne2000estimation,
title={Estimation of Stochastic Attribute-Value Grammars using an Informative
Sample},
author={Miles Osborne},
journal={Coling 2000, Saarbr\"{u}cken, Germany. pp 586--592},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008024},
primaryClass={cs.CL}
} | osborne2000estimation |
arxiv-669649 | cs/0008025 | Phutball Endgames are Hard | <|reference_start|>Phutball Endgames are Hard: We show that, in John Conway's board game Phutball (or Philosopher's Football), it is NP-complete to determine whether the current player has a move that immediately wins the game. In contrast, the similar problems of determining whether there is an immediately winning move in checkers, or a move that kings a man, are both solvable in polynomial time.<|reference_end|> | arxiv | @article{demaine2000phutball,
title={Phutball Endgames are Hard},
author={Erik D. Demaine, Martin L. Demaine, and David Eppstein},
journal={More Games of No Chance, MSRI Publications 42, 2002, pp. 351-360},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008025},
primaryClass={cs.CC cs.GT}
} | demaine2000phutball |
arxiv-669650 | cs/0008026 | Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction | <|reference_start|>Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction: Generating semantic lexicons semi-automatically could be a great time saver, relative to creating them by hand. In this paper, we present an algorithm for extracting potential entries for a category from an on-line corpus, based upon a small set of exemplars. Our algorithm finds more correct terms and fewer incorrect ones than previous work in this area. Additionally, the entries that are generated potentially provide broader coverage of the category than would occur to an individual coding them by hand. Our algorithm finds many terms not included within Wordnet (many more than previous algorithms), and could be viewed as an ``enhancer'' of existing broad-coverage resources.<|reference_end|> | arxiv | @article{roark2000noun-phrase,
title={Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon
construction},
author={Brian Roark and Eugene Charniak},
journal={Proceedings of the 36th Annual Meeting of the Association for
Computational Linguistics and 17th International Conference on Computational
Linguistics (COLING-ACL), 1998, pages 1110-1116},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008026},
primaryClass={cs.CL}
} | roark2000noun-phrase |
arxiv-669651 | cs/0008027 | Measuring efficiency in high-accuracy, broad-coverage statistical parsing | <|reference_start|>Measuring efficiency in high-accuracy, broad-coverage statistical parsing: Very little attention has been paid to the comparison of efficiency between high accuracy statistical parsers. This paper proposes one machine-independent metric that is general enough to allow comparisons across very different parsing architectures. This metric, which we call ``events considered'', measures the number of ``events'', however they are defined for a particular parser, for which a probability must be calculated, in order to find the parse. It is applicable to single-pass or multi-stage parsers. We discuss the advantages of the metric, and demonstrate its usefulness by using it to compare two parsers which differ in several fundamental ways.<|reference_end|> | arxiv | @article{roark2000measuring,
title={Measuring efficiency in high-accuracy, broad-coverage statistical
parsing},
author={Brian Roark and Eugene Charniak},
journal={Proceedings of the COLING 2000 Workshop on Efficiency in
Large-Scale Parsing Systems, 2000, pages 29-36},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008027},
primaryClass={cs.CL}
} | roark2000measuring |
arxiv-669652 | cs/0008028 | Estimators for Stochastic ``Unification-Based'' Grammars | <|reference_start|>Estimators for Stochastic ``Unification-Based'' Grammars: Log-linear models provide a statistically sound framework for Stochastic ``Unification-Based'' Grammars (SUBGs) and stochastic versions of other kinds of grammars. We describe two computationally-tractable ways of estimating the parameters of such grammars from a training corpus of syntactic analyses, and apply these to estimate a stochastic version of Lexical-Functional Grammar.<|reference_end|> | arxiv | @article{johnson2000estimators,
title={Estimators for Stochastic ``Unification-Based'' Grammars},
author={Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi and Stefan
Riezler},
journal={Proc 37th Annual Conference of the Association for Computational
Linguistics, 1999, pages 535-541},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008028},
primaryClass={cs.CL}
} | johnson2000estimators |
arxiv-669653 | cs/0008029 | Exploiting auxiliary distributions in stochastic unification-based grammars | <|reference_start|>Exploiting auxiliary distributions in stochastic unification-based grammars: This paper describes a method for estimating conditional probability distributions over the parses of ``unification-based'' grammars which can utilize auxiliary distributions that are estimated by other means. We show how this can be used to incorporate information about lexical selectional preferences gathered from other sources into Stochastic ``Unification-based'' Grammars (SUBGs). While we apply this estimator to a Stochastic Lexical-Functional Grammar, the method is general, and should be applicable to stochastic versions of HPSGs, categorial grammars and transformational grammars.<|reference_end|> | arxiv | @article{johnson2000exploiting,
title={Exploiting auxiliary distributions in stochastic unification-based
grammars},
author={Mark Johnson and Stefan Riezler},
journal={Proc 1st NAACL, 2000, pages 154-161},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008029},
primaryClass={cs.CL}
} | johnson2000exploiting |
arxiv-669654 | cs/0008030 | Metonymy Interpretation Using X NO Y Examples | <|reference_start|>Metonymy Interpretation Using X NO Y Examples: We developed an example-based method of metonymy interpretation. One advantage of this method is that a hand-built database of metonymy is not necessary because it instead uses examples in the form ``Noun X no Noun Y (Noun Y of Noun X).'' Another advantage is that we will be able to interpret newly-coined metonymic sentences by using a new corpus. We experimented with metonymy interpretation and obtained a precision rate of 66% when using this method.<|reference_end|> | arxiv | @article{murata2000metonymy,
title={Metonymy Interpretation Using X NO Y Examples},
author={Masaki Murata, Qing Ma, Atsumu Yamamoto, Hitoshi Isahara},
journal={SNLP2000, Chiang Mai, Thailand, May 10, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008030},
primaryClass={cs.CL}
} | murata2000metonymy |
arxiv-669655 | cs/0008031 | Bunsetsu Identification Using Category-Exclusive Rules | <|reference_start|>Bunsetsu Identification Using Category-Exclusive Rules: This paper describes two new bunsetsu identification methods using supervised learning. Since Japanese syntactic analysis is usually done after bunsetsu identification, bunsetsu identification is important for analyzing Japanese sentences. In experiments comparing the four previously available machine-learning methods (decision tree, maximum-entropy method, example-based approach and decision list) and two new methods using category-exclusive rules, the new method using the category-exclusive rules with the highest similarity performed best.<|reference_end|> | arxiv | @article{murata2000bunsetsu,
title={Bunsetsu Identification Using Category-Exclusive Rules},
author={Masaki Murata, Kiyotaka Uchimoto, Qing Ma, Hitoshi Isahara},
journal={COLING'2000, Saarbrucken, Germany, August, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008031},
primaryClass={cs.CL}
} | murata2000bunsetsu |
arxiv-669656 | cs/0008032 | Japanese Probabilistic Information Retrieval Using Location and Category Information | <|reference_start|>Japanese Probabilistic Information Retrieval Using Location and Category Information: Robertson's 2-Poisson information retrieval model does not use location and category information. We constructed a framework using location and category information in a 2-Poisson model. We submitted two systems based on this framework to the IREX contest, a Japanese-language information retrieval contest held in Japan in 1999. For precision in the A-judgement measure they scored 0.4926 and 0.4827, the highest values among the 15 teams and 22 systems that participated in the IREX contest. We describe our systems and the comparative experiments done when various parameters were changed. These experiments confirmed the effectiveness of using location and category information.<|reference_end|> | arxiv | @article{murata2000japanese,
title={Japanese Probabilistic Information Retrieval Using Location and Category
Information},
author={Masaki Murata, Qing Ma, Kiyotaka Uchimoto, Hiromi Ozaku, Masao Utiyama
and Hitoshi Isahara},
journal={arXiv preprint arXiv:cs/0008032},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008032},
primaryClass={cs.CL}
} | murata2000japanese |
arxiv-669657 | cs/0008033 | Temporal Expressions in Japanese-to-English Machine Translation | <|reference_start|>Temporal Expressions in Japanese-to-English Machine Translation: This paper describes in outline a method for translating Japanese temporal expressions into English. We argue that temporal expressions form a special subset of language that is best handled as a special module in machine translation. The paper deals with problems of lexical idiosyncrasy as well as the choice of articles and prepositions within temporal expressions. In addition temporal expressions are considered as parts of larger structures, and the question of whether to translate them as noun phrases or adverbials is addressed.<|reference_end|> | arxiv | @article{bond2000temporal,
title={Temporal Expressions in Japanese-to-English Machine Translation},
author={Francis Bond, Kentaro Ogura and Hajime Uchino},
journal={Seventh International Conference on Theoretical and Methodological
Issues in Machine Translation: TMI-97, Santa Fe, July 1997, pp 55--62},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008033},
primaryClass={cs.CL}
} | bond2000temporal |
arxiv-669658 | cs/0008034 | Lexicalized Stochastic Modeling of Constraint-Based Grammars using Log-Linear Measures and EM Training | <|reference_start|>Lexicalized Stochastic Modeling of Constraint-Based Grammars using Log-Linear Measures and EM Training: We present a new approach to stochastic modeling of constraint-based grammars that is based on log-linear models and uses EM for estimation from unannotated data. The techniques are applied to an LFG grammar for German. Evaluation on an exact match task yields 86% precision for an ambiguity rate of 5.4, and 90% precision on a subcat frame match for an ambiguity rate of 25. Experimental comparison to training from a parsebank shows a 10% gain from EM training. Also, a new class-based grammar lexicalization is presented, showing a 10% gain over unlexicalized models.<|reference_end|> | arxiv | @article{riezler2000lexicalized,
title={Lexicalized Stochastic Modeling of Constraint-Based Grammars using
Log-Linear Measures and EM Training},
author={Stefan Riezler, Detlef Prescher, Jonas Kuhn, Mark Johnson},
journal={Proceedings of the 38th Annual Meeting of the ACL, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008034},
primaryClass={cs.CL}
} | riezler2000lexicalized |
arxiv-669659 | cs/0008035 | Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity Resolution | <|reference_start|>Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity Resolution: This paper presents the use of probabilistic class-based lexica for disambiguation in target-word selection. Our method employs minimal but precise contextual information for disambiguation. That is, only information provided by the target-verb, enriched by the condensed information of a probabilistic class-based lexicon, is used. Induction of classes and fine-tuning to verbal arguments is done in an unsupervised manner by EM-based clustering techniques. The method shows promising results in an evaluation on real-world translations.<|reference_end|> | arxiv | @article{prescher2000using,
title={Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity
Resolution},
author={Detlef Prescher, Stefan Riezler, Mats Rooth},
journal={Proceedings of the 18th COLING, 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008035},
primaryClass={cs.CL}
} | prescher2000using |
arxiv-669660 | cs/0008036 | Probabilistic Constraint Logic Programming Formal Foundations of Quantitative and Statistical Inference in Constraint-Based Natural Language Processing | <|reference_start|>Probabilistic Constraint Logic Programming Formal Foundations of Quantitative and Statistical Inference in Constraint-Based Natural Language Processing: In this thesis, we present two approaches to a rigorous mathematical and algorithmic foundation of quantitative and statistical inference in constraint-based natural language processing. The first approach, called quantitative constraint logic programming, is conceptualized in a clear logical framework, and presents a sound and complete system of quantitative inference for definite clauses annotated with subjective weights. This approach combines a rigorous formal semantics for quantitative inference based on subjective weights with efficient weight-based pruning for constraint-based systems. The second approach, called probabilistic constraint logic programming, introduces a log-linear probability distribution on the proof trees of a constraint logic program and an algorithm for statistical inference of the parameters and properties of such probability models from incomplete, i.e., unparsed data. The possibility of defining arbitrary properties of proof trees as properties of the log-linear probability model and efficiently estimating appropriate parameter values for them permits the probabilistic modeling of arbitrary context-dependencies in constraint logic programs. The usefulness of these ideas is evaluated empirically in a small-scale experiment on finding the correct parses of a constraint-based grammar. In addition, we address the problem of computational intractability of the calculation of expectations in the inference task and present various techniques to approximately solve this task. Moreover, we present an approximate heuristic technique for searching for the most probable analysis in probabilistic constraint logic programs.<|reference_end|> | arxiv | @article{riezler2000probabilistic,
title={Probabilistic Constraint Logic Programming. Formal Foundations of
Quantitative and Statistical Inference in Constraint-Based Natural Language
Processing},
author={Stefan Riezler},
journal={arXiv preprint arXiv:cs/0008036},
year={2000},
archivePrefix={arXiv},
eprint={cs/0008036},
primaryClass={cs.CL}
} | riezler2000probabilistic |
arxiv-669661 | cs/0009001 | Complexity analysis for algorithmically simple strings | <|reference_start|>Complexity analysis for algorithmically simple strings: Given a reference computer, Kolmogorov complexity is a well defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a {\it natural} way modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus. Keywords: Kolmogorov complexity; Algorithmic information theory.<|reference_end|> | arxiv | @article{soklakov2000complexity,
title={Complexity analysis for algorithmically simple strings},
author={Andrei N. Soklakov (Royal Holloway, University of London)},
journal={arXiv preprint arXiv:cs/0009001},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009001},
primaryClass={cs.LG}
} | soklakov2000complexity |
arxiv-669662 | cs/0009002 | Succinct quantum proofs for properties of finite groups | <|reference_start|>Succinct quantum proofs for properties of finite groups: In this paper we consider a quantum computational variant of nondeterminism based on the notion of a quantum proof, which is a quantum state that plays a role similar to a certificate in an NP-type proof. Specifically, we consider quantum proofs for properties of black-box groups, which are finite groups whose elements are encoded as strings of a given length and whose group operations are performed by a group oracle. We prove that for an arbitrary group oracle there exist succinct (polynomial-length) quantum proofs for the Group Non-Membership problem that can be checked with small error in polynomial time on a quantum computer. Classically this is impossible--it is proved that there exists a group oracle relative to which this problem does not have succinct proofs that can be checked classically with bounded error in polynomial time (i.e., the problem is not in MA relative to the group oracle constructed). By considering a certain subproblem of the Group Non-Membership problem we obtain a simple proof that there exists an oracle relative to which BQP is not contained in MA. Finally, we show that quantum proofs for non-membership and classical proofs for various other group properties can be combined to yield succinct quantum proofs for other group properties not having succinct proofs in the classical setting, such as verifying that a number divides the order of a group and verifying that a group is not a simple group.<|reference_end|> | arxiv | @article{watrous2000succinct,
title={Succinct quantum proofs for properties of finite groups},
author={John Watrous (University of Calgary)},
journal={arXiv preprint arXiv:cs/0009002},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009002},
primaryClass={cs.CC quant-ph}
} | watrous2000succinct |
arxiv-669663 | cs/0009003 | Automatic Extraction of Subcategorization Frames for Czech | <|reference_start|>Automatic Extraction of Subcategorization Frames for Czech: We present some novel machine learning techniques for the identification of subcategorization information for verbs in Czech. We compare three different statistical techniques applied to this problem. We show how the learning algorithm can be used to discover previously unknown subcategorization frames from the Czech Prague Dependency Treebank. The algorithm can then be used to label dependents of a verb in the Czech treebank as either arguments or adjuncts. Using our techniques, we are able to achieve 88% precision on unseen parsed text.<|reference_end|> | arxiv | @article{sarkar2000automatic,
title={Automatic Extraction of Subcategorization Frames for Czech},
author={Anoop Sarkar, Daniel Zeman},
journal={Proceedings of the 18th International Conference on Computational
Linguistics (Coling 2000), Universit},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009003},
primaryClass={cs.CL}
} | sarkar2000automatic |
arxiv-669664 | cs/0009004 | A usage based analysis of CoRR | <|reference_start|>A usage based analysis of CoRR: Based on an empirical analysis of author usage of CoRR, and of its predecessor in the Los Alamos eprint archives, it is shown that CoRR has not yet been able to match the early growth of the Los Alamos physics archives. Some of the reasons are implicit in Halpern's paper, and we explore them further here. In particular we refer to the need to promote CoRR more effectively for its intended community - computer scientists in universities, industrial research labs and in government. We take up some points of detail on this new world of open archiving concerning central versus distributed self-archiving, publication, the restructuring of the journal publishers' niche, peer review and copyright.<|reference_end|> | arxiv | @article{carr2000a,
title={A usage based analysis of CoRR},
author={Les Carr, Steve Hitchcock, Wendy Hall and Stevan Harnad},
journal={ACM Journal of Computer Documentation, Vol. 24, No. 2, May 2000,
54-59},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009004},
primaryClass={cs.DL}
} | carr2000a |
arxiv-669665 | cs/0009005 | Fast Approximation of Centrality | <|reference_start|>Fast Approximation of Centrality: Social studies researchers use graphs to model group activities in social networks. An important property in this context is the centrality of a vertex: the inverse of the average distance to each other vertex. We describe a randomized approximation algorithm for centrality in weighted graphs. For graphs exhibiting the small world phenomenon, our method estimates the centrality of all vertices with high probability within a (1+epsilon) factor in near-linear time.<|reference_end|> | arxiv | @article{eppstein2000fast,
title={Fast Approximation of Centrality},
author={David Eppstein and Joseph Wang},
journal={J. Graph Algorithms & Applications 8(1):27-38, 2004},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009005},
primaryClass={cs.DS cond-mat.dis-nn cs.SI}
} | eppstein2000fast |
arxiv-669666 | cs/0009006 | Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction | <|reference_start|>Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction: We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems; 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas.<|reference_end|> | arxiv | @article{eppstein2000improved,
title={Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint
Satisfaction},
author={David Eppstein},
journal={arXiv preprint arXiv:cs/0009006},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009006},
primaryClass={cs.DS}
} | eppstein2000improved |
arxiv-669667 | cs/0009007 | Robust Classification for Imprecise Environments | <|reference_start|>Robust Classification for Imprecise Environments: In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems.<|reference_end|> | arxiv | @article{provost2000robust,
title={Robust Classification for Imprecise Environments},
author={Foster Provost and Tom Fawcett},
journal={arXiv preprint arXiv:cs/0009007},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009007},
primaryClass={cs.LG}
} | provost2000robust |
arxiv-669668 | cs/0009008 | Introduction to the CoNLL-2000 Shared Task: Chunking | <|reference_start|>Introduction to the CoNLL-2000 Shared Task: Chunking: We describe the CoNLL-2000 shared task: dividing text into syntactically related non-overlapping groups of words, so-called text chunking. We give background information on the data sets, present a general overview of the systems that have taken part in the shared task and briefly discuss their performance.<|reference_end|> | arxiv | @article{sang2000introduction,
title={Introduction to the CoNLL-2000 Shared Task: Chunking},
author={Erik F. Tjong Kim Sang and Sabine Buchholz},
journal={Proceedings of CoNLL-2000 and LLL-2000, Lisbon, Portugal},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009008},
primaryClass={cs.CL}
} | sang2000introduction |
arxiv-669669 | cs/0009009 | Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a Memory-Based Approach | <|reference_start|>Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a Memory-Based Approach: We investigate the performance of two machine learning algorithms in the context of anti-spam filtering. The increasing volume of unsolicited bulk e-mail (spam) has generated a need for reliable anti-spam filters. Filters of this type have so far been based mostly on keyword patterns that are constructed by hand and perform poorly. The Naive Bayesian classifier has recently been suggested as an effective method to construct automatically anti-spam filters with superior performance. We investigate thoroughly the performance of the Naive Bayesian filter on a publicly available corpus, contributing towards standard benchmarks. At the same time, we compare the performance of the Naive Bayesian filter to an alternative memory-based learning approach, after introducing suitable cost-sensitive evaluation measures. Both methods achieve very accurate spam filtering, outperforming clearly the keyword-based filter of a widely used e-mail reader.<|reference_end|> | arxiv | @article{androutsopoulos2000learning,
title={Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a
Memory-Based Approach},
author={Ion Androutsopoulos, Georgios Paliouras, Vangelis Karkaletsis,
Georgios Sakkis, Constantine D. Spyropoulos and Panagiotis Stamatopoulos},
journal={Proceedings of the workshop "Machine Learning and Textual
Information Access", 4th European Conference on Principles and Practice of
Knowledge Discovery in Databases (PKDD-2000), H. Zaragoza, P. Gallinari and
M. Rajman (Eds.), Lyon, France, September 2000, pp. 1-13},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009009},
primaryClass={cs.CL cs.IR cs.LG}
} | androutsopoulos2000learning |
arxiv-669670 | cs/0009010 | Computing Crossing Numbers in Quadratic Time | <|reference_start|>Computing Crossing Numbers in Quadratic Time: We show that for every fixed non-negative integer k there is a quadratic time algorithm that decides whether a given graph has crossing number at most k and, if this is the case, computes a drawing of the graph in the plane with at most k crossings.<|reference_end|> | arxiv | @article{grohe2000computing,
title={Computing Crossing Numbers in Quadratic Time},
author={Martin Grohe},
journal={arXiv preprint arXiv:cs/0009010},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009010},
primaryClass={cs.DS cs.DM}
} | grohe2000computing |
arxiv-669671 | cs/0009011 | Anaphora Resolution in Japanese Sentences Using Surface Expressions and Examples | <|reference_start|>Anaphora Resolution in Japanese Sentences Using Surface Expressions and Examples: Anaphora resolution is one of the major problems in natural language processing. It is also one of the important tasks in machine translation and man/machine dialogue. We solve the problem by using surface expressions and examples. Surface expressions are the words in sentences which provide clues for anaphora resolution. Examples are linguistic data which are actually used in conversations and texts. The method using surface expressions and examples is a practical method. This thesis handles almost all kinds of anaphora: i. The referential property and number of a noun phrase ii. Noun phrase direct anaphora iii. Noun phrase indirect anaphora iv. Pronoun anaphora v. Verb phrase ellipsis<|reference_end|> | arxiv | @article{murata2000anaphora,
title={Anaphora Resolution in Japanese Sentences Using Surface Expressions and
Examples},
author={Masaki Murata},
journal={arXiv preprint arXiv:cs/0009011},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009011},
primaryClass={cs.CL}
} | murata2000anaphora |
arxiv-669672 | cs/0009012 | Modeling Ambiguity in a Multi-Agent System | <|reference_start|>Modeling Ambiguity in a Multi-Agent System: This paper investigates the formal pragmatics of ambiguous expressions by modeling ambiguity in a multi-agent system. Such a framework allows us to give a more refined notion of the kind of information that is conveyed by ambiguous expressions. We analyze how ambiguity affects the knowledge of the dialog participants and, especially, what they know about each other after an ambiguous sentence has been uttered. The agents communicate with each other by means of a TELL-function, whose application is constrained by an implementation of some of Grice's maxims. The information states of the multi-agent system itself are represented as Kripke structures, and TELL is an update function on those structures. This framework enables us to distinguish between the information conveyed by ambiguous sentences vs. the information conveyed by disjunctions, and between semantic ambiguity vs. perceived ambiguity.<|reference_end|> | arxiv | @article{monz2000modeling,
title={Modeling Ambiguity in a Multi-Agent System},
author={Christof Monz},
journal={arXiv preprint arXiv:cs/0009012},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009012},
primaryClass={cs.CL cs.AI cs.MA}
} | monz2000modeling |
arxiv-669673 | cs/0009013 | Pattern Matching for sets of segments | <|reference_start|>Pattern Matching for sets of segments: In this paper we present algorithms for a number of problems in geometric pattern matching where the input consists of a collection of segments in the plane. Our work consists of two main parts. In the first, we address problems and measures that relate to collections of orthogonal line segments in the plane. Such collections arise naturally from problems in mapping buildings and robot exploration. We propose a new measure of segment similarity called a \emph{coverage measure}, and present efficient algorithms for maximising this measure between sets of axis-parallel segments under translations. Our algorithms run in time $O(n^3\,\mathrm{polylog}\,n)$ in the general case, and run in time $O(n^2\,\mathrm{polylog}\,n)$ for the case when all segments are horizontal. In addition, we show that when restricted to translations that are only vertical, the Hausdorff distance between two sets of horizontal segments can be computed in time roughly $O(n^{3/2}\,\mathrm{polylog}\,n)$. These algorithms form significant improvements over the general algorithm of Chew et al. that takes time $O(n^4 \log^2 n)$. In the second part of this paper we address the problem of matching polygonal chains. We study the well-known Fréchet distance, and present the first algorithm for computing the Fréchet distance under general translations. Our methods also yield algorithms for computing a generalization of the Fréchet distance, and we also present a simple approximation algorithm for the Fréchet distance that runs in time $O(n^2\,\mathrm{polylog}\,n)$.<|reference_end|> | arxiv | @article{efrat2000pattern,
title={Pattern Matching for sets of segments},
author={Alon Efrat, Piotr Indyk and Suresh Venkatasubramanian},
journal={arXiv preprint arXiv:cs/0009013},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009013},
primaryClass={cs.CG}
} | efrat2000pattern |
arxiv-669674 | cs/0009014 | Combining Linguistic and Spatial Information for Document Analysis | <|reference_start|>Combining Linguistic and Spatial Information for Document Analysis: We present a framework to analyze color documents of complex layout. In addition, no assumption is made on the layout. Our framework combines in a content-driven bottom-up approach two different sources of information: textual and spatial. To analyze the text, shallow natural language processing tools, such as taggers and partial parsers, are used. To infer relations of the logical layout we resort to a qualitative spatial calculus closely related to Allen's calculus. We evaluate the system against documents from a color journal and present the results of extracting the reading order from the journal's pages. In this case, our analysis is successful as it extracts the intended reading order from the document.<|reference_end|> | arxiv | @article{aiello2000combining,
title={Combining Linguistic and Spatial Information for Document Analysis},
author={Marco Aiello, Christof Monz, Leon Todoran},
journal={arXiv preprint arXiv:cs/0009014},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009014},
primaryClass={cs.CL cs.DL}
} | aiello2000combining |
arxiv-669675 | cs/0009015 | A Tableaux Calculus for Ambiguous Quantification | <|reference_start|>A Tableaux Calculus for Ambiguous Quantification: Coping with ambiguity has recently received a lot of attention in natural language processing. Most work focuses on the semantic representation of ambiguous expressions. In this paper we complement this work in two ways. First, we provide an entailment relation for a language with ambiguous expressions. Second, we give a sound and complete tableaux calculus for reasoning with statements involving ambiguous quantification. The calculus interleaves partial disambiguation steps with steps in a traditional deductive process, so as to minimize and postpone branching in the proof process, and thereby increases its efficiency.<|reference_end|> | arxiv | @article{monz2000a,
title={A Tableaux Calculus for Ambiguous Quantification},
author={Christof Monz, Maarten de Rijke},
journal={arXiv preprint arXiv:cs/0009015},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009015},
primaryClass={cs.CL}
} | monz2000a |
arxiv-669676 | cs/0009016 | Contextual Inference in Computational Semantics | <|reference_start|>Contextual Inference in Computational Semantics: In this paper, an application of automated theorem proving techniques to computational semantics is considered. In order to compute the presuppositions of a natural language discourse, several inference tasks arise. Instead of treating these inferences independently of each other, we show how integrating techniques from formal approaches to context into deduction can help to compute presuppositions more efficiently. Contexts are represented as Discourse Representation Structures and the way they are nested is made explicit. In addition, a tableau calculus is presented which keeps track of contextual information, and thereby allows us to avoid carrying out redundant inference steps, as happens in approaches that neglect explicit nesting of contexts.<|reference_end|> | arxiv | @article{monz2000contextual,
title={Contextual Inference in Computational Semantics},
author={Christof Monz},
journal={In: P. Bouquet, P. Brezillon, L. Serafini, M. Benerecetti, F.
Castellani (Eds.) 2nd International and Interdisciplinary Conference on
Modeling and Using Context (CONTEXT'99). Lecture Notes in Artificial
Intelligence 1688, Springer, 1999, pages 242-255},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009016},
primaryClass={cs.CL cs.AI}
} | monz2000contextual |
arxiv-669677 | cs/0009017 | A Tableau Calculus for Pronoun Resolution | <|reference_start|>A Tableau Calculus for Pronoun Resolution: We present a tableau calculus for reasoning in fragments of natural language. We focus on the problem of pronoun resolution and the way in which it complicates automated theorem proving for natural language processing. A method for explicitly manipulating contextual information during deduction is proposed, where pronouns are resolved against this context during deduction. As a result, pronoun resolution and deduction can be interleaved in such a way that pronouns are only resolved if this is licensed by a deduction rule; this helps us to avoid the combinatorial complexity of total pronoun disambiguation.<|reference_end|> | arxiv | @article{monz2000a,
title={A Tableau Calculus for Pronoun Resolution},
author={Christof Monz, Maarten de Rijke},
journal={In: N.V. Murray (ed.) Automated Reasoning with Analytic Tableaux
and Related Methods. Lecture Notes in Artificial Intelligence 1617, Springer,
1999, pages 247-262},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009017},
primaryClass={cs.CL cs.AI}
} | monz2000a |
arxiv-669678 | cs/0009018 | A Resolution Calculus for Dynamic Semantics | <|reference_start|>A Resolution Calculus for Dynamic Semantics: This paper applies resolution theorem proving to natural language semantics. The aim is to circumvent the computational complexity triggered by natural language ambiguities like pronoun binding, by interleaving pronoun binding with resolution deduction. Therefore, disambiguation is only applied to expressions that actually occur during derivations.<|reference_end|> | arxiv | @article{monz2000a,
title={A Resolution Calculus for Dynamic Semantics},
author={Christof Monz, Maarten de Rijke},
journal={In: J. Dix, L. Farinas del Cerro, and U. Furbach (eds.) Logics in
Artificial Intelligence (JELIA'98). Lecture Notes in Artificial Intelligence
1489, Springer, 1998, pp. 184-198},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009018},
primaryClass={cs.CL cs.AI}
} | monz2000a |
arxiv-669679 | cs/0009019 | Computing Presuppositions by Contextual Reasoning | <|reference_start|>Computing Presuppositions by Contextual Reasoning: This paper describes how automated deduction methods for natural language processing can be applied more efficiently by encoding context in a more elaborate way. Our work is based on formal approaches to context, and we provide a tableau calculus for contextual reasoning. This is explained by considering an example from the problem area of presupposition projection.<|reference_end|> | arxiv | @article{monz2000computing,
title={Computing Presuppositions by Contextual Reasoning},
author={Christof Monz},
journal={In: P. Brezillon, R. Turner, J-C. Pomerol and E. Turner (Eds.)
Proceedings of the AAAI-99 Workshop on Reasoning in Context for AI
Applications, AAAI Press, 1999, pp. 75-79},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009019},
primaryClass={cs.AI cs.CL}
} | monz2000computing |
arxiv-669680 | cs/0009020 | Cluster Computing: A High-Performance Contender | <|reference_start|>Cluster Computing: A High-Performance Contender: When you first heard people speak of Piles of PCs, the first thing that came to mind may have been a cluttered computer room with processors, monitors, and snarls of cables all around. Collections of computers have undoubtedly become more sophisticated than in the early days of shared drives and modem connections. No matter what you call them, Clusters of Workstations (COW), Networks of Workstations (NOW), Workstation Clusters (WCs), Clusters of PCs (CoPs), clusters of computers are now filling the processing niche once occupied by more powerful stand-alone machines. This article discusses the need for cluster computing technology; Technologies, Components, and Applications; Supercluster Systems and Issues; The Need for a New Task Force; and Cluster Computing Educational Resources.<|reference_end|> | arxiv | @article{baker2000cluster,
title={Cluster Computing: A High-Performance Contender},
author={Mark Baker, Rajkumar Buyya, Dan Hyde},
journal={arXiv preprint arXiv:cs/0009020},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009020},
primaryClass={cs.DC cs.AR}
} | baker2000cluster |
arxiv-669681 | cs/0009021 | Nimrod/G: An Architecture of a Resource Management and Scheduling System in a Global Computational Grid | <|reference_start|>Nimrod/G: An Architecture of a Resource Management and Scheduling System in a Global Computational Grid: The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).<|reference_end|> | arxiv | @article{buyya2000nimrod/g:,
title={Nimrod/G: An Architecture of a Resource Management and Scheduling System
in a Global Computational Grid},
author={Rajkumar Buyya, David Abramson, Jon Giddy},
journal={HPC Asia 2000, IEEE Press},
year={2000},
doi={10.1109/HPC.2000.846563},
archivePrefix={arXiv},
eprint={cs/0009021},
primaryClass={cs.DC}
} | buyya2000nimrod/g: |
arxiv-669682 | cs/0009022 | A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation | <|reference_start|>A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation: This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNoW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.<|reference_end|> | arxiv | @article{escudero2000a,
title={A Comparison between Supervised Learning Algorithms for Word Sense
Disambiguation},
author={Gerard Escudero, Lluis Marquez, German Rigau},
journal={Proceedings of the 4th Conference on Computational Natural
Language Learning, CoNLL'2000, pp. 31-36},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009022},
primaryClass={cs.CL cs.AI}
} | escudero2000a |
arxiv-669683 | cs/0009023 | The Rectilinear Crossing Number of K_10 is 62 | <|reference_start|>The Rectilinear Crossing Number of K_10 is 62: A drawing of a graph G in the plane is said to be a rectilinear drawing of G if the edges are required to be line segments (as opposed to Jordan curves). We assume no three vertices are collinear. The rectilinear crossing number of G is the fewest number of edge crossings attainable over all rectilinear drawings of G. Thanks to Richard Guy, exact values of the rectilinear crossing number of K_n, the complete graph on n vertices, for n = 3,...,9, are known (Guy 1972, White and Beinke 1978, Finch 2000, Sloanes A014540). Since 1971, thanks to the work of David Singer (1971, Gardiner 1986), the rectilinear crossing number of K_10 has been known to be either 61 or 62, a deceptively innocent and tantalizing statement. The difficulty of determining the correct value is evidenced by the fact that Singer's result has withstood the test of time. In this paper we use a purely combinatorial argument to show that the rectilinear crossing number of K_10 is 62. Moreover, using this result, we improve an asymptotic lower bound for a related problem. Finally, we close with some new and old open questions that were provoked, in part, by the results of this paper, and by the tangled history of the problem itself.<|reference_end|> | arxiv | @article{brodsky2000the,
title={The Rectilinear Crossing Number of K_10 is 62},
author={Alex Brodsky, Stephane Durocher, Ellen Gethner},
journal={Electronic Journal of Combinatorics. 8(1):R23 1-30. 2001},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009023},
primaryClass={cs.DM math.CO}
} | brodsky2000the |
arxiv-669684 | cs/0009024 | Computing the Depth of a Flat | <|reference_start|>Computing the Depth of a Flat: We give algorithms for computing the regression depth of a k-flat for a set of n points in R^d. The running time is O(n^(d-2) + n log n) when 0 < k < d-1, faster than the best time bound for hyperplane regression or for data depth.<|reference_end|> | arxiv | @article{bern2000computing,
title={Computing the Depth of a Flat},
author={Marshall Bern, David Eppstein},
journal={arXiv preprint arXiv:cs/0009024},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009024},
primaryClass={cs.CG}
} | bern2000computing |
arxiv-669685 | cs/0009025 | Parsing with the Shortest Derivation | <|reference_start|>Parsing with the Shortest Derivation: Common wisdom has it that the bias of stochastic grammars in favor of shorter derivations of a sentence is harmful and should be redressed. We show that the common wisdom is wrong for stochastic grammars that use elementary trees instead of context-free rules, such as Stochastic Tree-Substitution Grammars used by Data-Oriented Parsing models. For such grammars a non-probabilistic metric based on the shortest derivation outperforms a probabilistic metric on the ATIS and OVIS corpora, while it obtains very competitive results on the Wall Street Journal corpus. This paper also contains the first published experiments with DOP on the Wall Street Journal.<|reference_end|> | arxiv | @article{bod2000parsing,
title={Parsing with the Shortest Derivation},
author={Rens Bod},
journal={Proceedings COLING'2000, with a minor correction},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009025},
primaryClass={cs.CL}
} | bod2000parsing |
arxiv-669686 | cs/0009026 | An improved parser for data-oriented lexical-functional analysis | <|reference_start|>An improved parser for data-oriented lexical-functional analysis: We present an LFG-DOP parser which uses fragments from LFG-annotated sentences to parse new sentences. Experiments with the Verbmobil and Homecentre corpora show that (1) Viterbi n-best search performs about 100 times faster than Monte Carlo search while both achieve the same accuracy; (2) the DOP hypothesis which states that parse accuracy increases with increasing fragment size is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator performs worse than a discounted frequency estimator; and (4) LFG-DOP significantly outperforms Tree-DOP if evaluated on tree structures only.<|reference_end|> | arxiv | @article{bod2000an,
title={An improved parser for data-oriented lexical-functional analysis},
author={Rens Bod},
journal={Proceedings ACL'2000, Hong Kong},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009026},
primaryClass={cs.CL}
} | bod2000an |
arxiv-669687 | cs/0009027 | A Classification Approach to Word Prediction | <|reference_start|>A Classification Approach to Word Prediction: The eventual goal of a language model is to accurately predict the value of a missing word given its context. We present an approach to word prediction that is based on learning a representation for each word as a function of words and linguistics predicates in its context. This approach raises a few new questions that we address. First, in order to learn good word representations it is necessary to use an expressive representation of the context. We present a way that uses external knowledge to generate expressive context representations, along with a learning method capable of handling the large number of features generated this way that can, potentially, contribute to each prediction. Second, since the number of words ``competing'' for each prediction is large, there is a need to ``focus the attention'' on a smaller subset of these. We exhibit the contribution of a ``focus of attention'' mechanism to the performance of the word predictor. Finally, we describe a large scale experimental study in which the approach presented is shown to yield significant improvements in word prediction tasks.<|reference_end|> | arxiv | @article{even-zohar2000a,
title={A Classification Approach to Word Prediction},
author={Yair Even-Zohar and Dan Roth},
journal={NAACL 2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009027},
primaryClass={cs.CL cs.AI cs.LG}
} | even-zohar2000a |
arxiv-669688 | cs/0009028 | Toward the Rectilinear Crossing Number of $K_n$: New Drawings, Upper Bounds, and Asymptotics | <|reference_start|>Toward the Rectilinear Crossing Number of $K_n$: New Drawings, Upper Bounds, and Asymptotics: Scheinerman and Wilf (1994) assert that `an important open problem in the study of graph embeddings is to determine the rectilinear crossing number of the complete graph K_n.' A rectilinear drawing of K_n is an arrangement of n vertices in the plane, every pair of which is connected by an edge that is a line segment. We assume that no three vertices are collinear, and that no three edges intersect in a point unless that point is an endpoint of all three. The rectilinear crossing number of K_n is the fewest number of edge crossings attainable over all rectilinear drawings of K_n. For each n we construct a rectilinear drawing of K_n that has the fewest number of edge crossings and the best asymptotics known to date. Moreover, we give some alternative infinite families of drawings of K_n with good asymptotics. Finally, we mention some old and new open problems.<|reference_end|> | arxiv | @article{brodsky2000toward,
title={Toward the Rectilinear Crossing Number of $K_n$: New Drawings, Upper
Bounds, and Asymptotics},
author={Alex Brodsky, Stephane Durocher, Ellen Gethner},
journal={Discrete Mathematics. 262(1-3):59-77. 2003},
year={2000},
doi={10.1016/S0012-365X(02)00491-0},
archivePrefix={arXiv},
eprint={cs/0009028},
primaryClass={cs.DM cs.CG math.CO}
} | brodsky2000toward |
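The entry above concerns rectilinear drawings of K_n and their crossing numbers. As a companion to make the quantity concrete, here is a hypothetical brute-force counter for the crossings of a given straight-line drawing; it is not one of the paper's constructions, and the convex five-point example is invented for the demo.

```python
# Count edge crossings in a rectilinear (straight-line) drawing of K_n given
# its vertex coordinates. Brute force over all pairs of vertex-disjoint edges;
# assumes general position (no three collinear points), as in the paper's model.
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right turn."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segments ab and cd properly intersect (general position assumed)."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def rectilinear_crossings(points):
    """Number of crossings in the complete straight-line graph on `points`."""
    edges = list(combinations(range(len(points)), 2))
    count = 0
    for (i, j), (k, l) in combinations(edges, 2):
        if len({i, j, k, l}) == 4:  # only vertex-disjoint edge pairs can cross
            if segments_cross(points[i], points[j], points[k], points[l]):
                count += 1
    return count

if __name__ == "__main__":
    # Five points in convex position: each of the C(5,4)=5 quadruples of
    # vertices contributes exactly one crossing, so the answer is 5.
    convex5 = [(0, 0), (4, 1), (5, 4), (2, 6), (-1, 3)]
    print(rectilinear_crossings(convex5))  # -> 5
```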
arxiv-669689 | cs/0009029 | The Concurrent Language Aldwych | <|reference_start|>The Concurrent Language Aldwych: Aldwych is proposed as the foundation of a general purpose language for parallel applications. It works on a rule-based principle, and has aspects variously of concurrent functional, logic and object-oriented languages, yet it forms an integrated whole. It is intended to be applicable both for small-scale parallel programming, and for large-scale open systems.<|reference_end|> | arxiv | @article{huntbach2000the,
title={The Concurrent Language Aldwych},
author={Matthew Huntbach},
journal={arXiv preprint arXiv:cs/0009029},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009029},
primaryClass={cs.PL}
} | huntbach2000the |
arxiv-669690 | cs/0009030 | From Syntactic Theories to Interpreters: A Specification Language and Its Compilation | <|reference_start|>From Syntactic Theories to Interpreters: A Specification Language and Its Compilation: Recent years have seen an increasing need of high-level specification languages and tools generating code from specifications. In this paper, we introduce a specification language, {\splname}, which is tailored to the writing of syntactic theories of language semantics. More specifically, the language supports specifying primitive notions such as dynamic constraints, contexts, axioms, and inference rules. We also introduce a system which generates interpreters from {\splname} specifications. A prototype system is implemented and has been tested on a number of examples, including a syntactic theory for Verilog.<|reference_end|> | arxiv | @article{xiao2000from,
title={From Syntactic Theories to Interpreters: A Specification Language and
Its Compilation},
author={Yong Xiao, Zena M. Ariola and Michel Mauny},
journal={arXiv preprint arXiv:cs/0009030},
year={2000},
archivePrefix={arXiv},
eprint={cs/0009030},
primaryClass={cs.PL cs.SE}
} | xiao2000from |
arxiv-669691 | cs/0010001 | Design of an Electro-Hydraulic System Using Neuro-Fuzzy Techniques | <|reference_start|>Design of an Electro-Hydraulic System Using Neuro-Fuzzy Techniques: Increasing demands in performance and quality make drive systems fundamental parts in the progressive automation of industrial processes. Their conventional models become inappropriate and have limited scope if one requires a precise and fast performance. So, it is important to incorporate learning capabilities into drive systems in such a way that they improve their accuracy in real time, becoming more autonomous agents with some degree of intelligence. To investigate this challenge, this chapter presents the development of a learning control system that uses neuro-fuzzy techniques in the design of a tracking controller for an experimental electro-hydraulic actuator. We begin the chapter by presenting the neuro-fuzzy modeling process of the actuator. This part surveys the learning algorithm, describes the laboratory system, and presents the modeling steps: the choice of representative actuator variables, the acquisition of training and testing data sets, and the acquisition of the neuro-fuzzy inverse-model of the actuator. In the second part of the chapter, we use the extracted neuro-fuzzy model and its learning capabilities to design the actuator position controller based on the feedback-error-learning technique. Through a set of experimental results, we show the generalization properties of the controller, its capability to update the initial neuro-fuzzy inverse-model in real time, and its compensation action, which improves the actuator's tracking performance.<|reference_end|> | arxiv | @article{branco2000design,
title={Design of an Electro-Hydraulic System Using Neuro-Fuzzy Techniques},
author={P. J. Costa Branco, J. A. Dente},
journal={In: Fusion of Neural Networks, Fuzzy Sets & Genetic Algorithms:
Industrial Applications, Chapter 4, CRC Press, Boca Raton, Florida, USA.,
1999},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010001},
primaryClass={cs.RO cs.LG}
} | branco2000design |
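The entry above relies on the feedback-error-learning scheme, in which the feedback controller's output serves as the training signal for an adaptive feedforward (inverse) model. The sketch below is a minimal, hypothetical illustration of that scheme only: a linear-in-features adaptive term and a toy first-order plant stand in for the paper's neuro-fuzzy model and hydraulic actuator, and all gains and the reference trajectory are invented.

```python
# Feedback-error-learning sketch (hypothetical, simplified). The plant and the
# adaptive model are stand-ins for the paper's actuator and neuro-fuzzy inverse
# model: u = u_ff + u_fb, and the feedback term u_fb both stabilizes tracking
# and drives the update of the feedforward weights.
import numpy as np

def simulate(steps=6000, dt=0.01, kp=2.0, lr=0.5):
    a, b = -1.0, 2.0                       # plant y' = a*y + b*u (unknown to the learner)
    w = np.zeros(3)                        # feedforward weights over features [y_ref', y_ref, 1]
    y = 0.0
    err_early, err_late = 0.0, 0.0
    for k in range(steps):
        t = k * dt
        y_ref, y_ref_dot = np.sin(t), np.cos(t)   # reference trajectory
        phi = np.array([y_ref_dot, y_ref, 1.0])
        u_ff = float(w @ phi)              # adaptive feedforward (inverse-model stand-in)
        u_fb = kp * (y_ref - y)            # conventional feedback term
        u = u_ff + u_fb
        w += lr * u_fb * phi * dt          # feedback error is the learning signal
        y += dt * (a * y + b * u)          # Euler integration of the plant
        if k < steps // 10:
            err_early += abs(y_ref - y) * dt
        if k >= steps - steps // 10:
            err_late += abs(y_ref - y) * dt
    return err_early, err_late, w

if __name__ == "__main__":
    e0, e1, w = simulate()
    print(f"accumulated tracking error, first 10% of run: {e0:.3f}")
    print(f"accumulated tracking error, last 10% of run:  {e1:.3f}")
    print("learned feedforward weights:", w.round(3), "(exact inverse here: [0.5, 0.5, 0.0])")
```

With the stated plant, the exact inverse model is u = 0.5*y_ref' + 0.5*y_ref, so the run shows the feedforward weights drifting toward [0.5, 0.5, 0] while the tracking error shrinks.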
arxiv-669692 | cs/0010002 | Noise Effects in Fuzzy Modelling Systems | <|reference_start|>Noise Effects in Fuzzy Modelling Systems: Noise is a source of ambiguity for fuzzy systems. Although it is an important aspect, the effects of noise in fuzzy modeling have been little investigated. This paper presents a set of tests using three well-known fuzzy modeling algorithms. These evaluate perturbations in the extracted rule-bases caused by noise polluting the learning data, and the corresponding deformations in each learned functional relation. We present results to show: 1) how these fuzzy modeling systems deal with noise; 2) how the established fuzzy model structure influences the noise sensitivity of each algorithm; and 3) which characteristics of the learning algorithms are relevant to noise attenuation.<|reference_end|> | arxiv | @article{branco2000noise,
title={Noise Effects in Fuzzy Modelling Systems},
author={P. J. Costa Branco, J. A. Dente},
journal={In: Computational Intelligence and Applications, pp. 103-108,
World Scientific and Engineering Society Press, Danvers, USA, 1999},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010002},
primaryClass={cs.NE cs.LG}
} | branco2000noise |
arxiv-669693 | cs/0010003 | Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation | <|reference_start|>Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation: A simple power electronic drive circuit and the fault tolerance of the converter are specific advantages of SRM drives, but excessive torque ripple has limited their use to special applications. It is well known that controlling the current shape adequately can minimize the torque ripple. This paper presents a new method for shaping the motor currents to minimize the torque ripple, using a neuro-fuzzy compensator. In the proposed method, a compensating signal is added to the output of a PI controller, in a current-regulated speed control loop. Numerical results are presented in this paper, with an analysis of the effects of changing the form of the membership function of the neuro-fuzzy compensator.<|reference_end|> | arxiv | @article{henriques2000torque,
title={Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy
Compensation},
author={L. Henriques, L. Rolim, W. Suemitsu, P. J. Costa Branco, J. A. Dente},
journal={arXiv preprint arXiv:cs/0010003},
year={2000},
doi={10.1109/20.908911},
archivePrefix={arXiv},
eprint={cs/0010003},
primaryClass={cs.RO cs.LG}
} | henriques2000torque |
arxiv-669694 | cs/0010004 | A Fuzzy Relational Identification Algorithm and Its Application to Predict The Behaviour of a Motor Drive System | <|reference_start|>A Fuzzy Relational Identification Algorithm and Its Application to Predict The Behaviour of a Motor Drive System: Fuzzy relational identification builds a relational model describing a system's behaviour by a nonlinear mapping between its variables. In this paper, we propose a new fuzzy relational algorithm based on a simplified max-min relational equation. The algorithm presents an adaptation method applied to the gravity-center of each fuzzy set, based on the integral value of the error between measured and predicted system output, and uses the concept of time-variant universes of discourse. The identification algorithm also includes a method to attenuate noise influence in the extracted system relational model using a fuzzy filtering mechanism. The algorithm is applied to one-step forward prediction of a simulated and experimental motor drive system. The identified model has its input-output variables (stator-reference current and motor speed signal) treated as fuzzy sets, whereas the relations existing between them are described by means of a matrix R defining the relational model extracted by the algorithm. The results show the good potential of the algorithm in predicting the behaviour of the system and in attenuating, through the fuzzy filtering method, possible noise distortions in the relational model.<|reference_end|> | arxiv | @article{branco2000a,
title={A Fuzzy Relational Identification Algorithm and Its Application to
Predict The Behaviour of a Motor Drive System},
author={P. J. Costa Branco, J. A. Dente},
journal={In: Fuzzy Sets and Systems, Vol. 109, No. 3, pp. 41-52, Elsevier,
2000},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010004},
primaryClass={cs.RO cs.LG}
} | branco2000a |
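The entry above predicts with a relational matrix R linking fuzzified input and output through a max-min relational equation. Below is a minimal numpy sketch of just that composition step, with triangular membership functions, a hand-made R, and centre-of-gravity defuzzification; the identification itself, the centre adaptation, and the fuzzy filtering of the paper are omitted, and all numbers are invented.

```python
# Max-min fuzzy relational inference sketch (illustrative only). The scalar
# input is fuzzified over triangular sets, composed with a relational matrix R
# by max-min composition, and defuzzified by centre of gravity.
import numpy as np

def tri_memberships(x, centres):
    """Triangular membership degrees of scalar x over evenly spaced centres."""
    width = centres[1] - centres[0]
    return np.clip(1.0 - np.abs(x - centres) / width, 0.0, 1.0)

def max_min_compose(mu_x, R):
    """Output fuzzy set: mu_y[j] = max_i min(mu_x[i], R[i, j])."""
    return np.max(np.minimum(mu_x[:, None], R), axis=0)

def centre_of_gravity(mu_y, centres):
    return float(np.sum(mu_y * centres) / np.sum(mu_y))

if __name__ == "__main__":
    in_centres = np.linspace(0.0, 1.0, 5)      # fuzzy partition of the input range
    out_centres = np.linspace(0.0, 2.0, 5)     # fuzzy partition of the output range
    # A hand-made relational matrix roughly encoding y = 2*x: strong diagonal,
    # weaker neighbouring entries.
    R = np.eye(5) + 0.3 * np.eye(5, k=1) + 0.3 * np.eye(5, k=-1)
    for x in (0.1, 0.5, 0.9):
        mu_x = tri_memberships(x, in_centres)
        mu_y = max_min_compose(mu_x, R)
        print(f"x = {x:.1f} -> predicted y = {centre_of_gravity(mu_y, out_centres):.2f}")
```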
arxiv-669695 | cs/0010005 | Low Ambiguity in Strong, Total, Associative, One-Way Functions | <|reference_start|>Low Ambiguity in Strong, Total, Associative, One-Way Functions: Rabi and Sherman present a cryptographic paradigm based on associative, one-way functions that are strong (i.e., hard to invert even if one of their arguments is given) and total. Hemaspaandra and Rothe proved that such powerful one-way functions exist exactly if (standard) one-way functions exist, thus showing that the associative one-way function approach is as plausible as previous approaches. In the present paper, we study the degree of ambiguity of one-way functions. Rabi and Sherman showed that no associative one-way function (over a universe having at least two elements) can be unambiguous (i.e., one-to-one). Nonetheless, we prove that if standard, unambiguous, one-way functions exist, then there exist strong, total, associative, one-way functions that are $\mathcal{O}(n)$-to-one. This puts a reasonable upper bound on the ambiguity.<|reference_end|> | arxiv | @article{homan2000low,
title={Low Ambiguity in Strong, Total, Associative, One-Way Functions},
author={Christopher M. Homan},
journal={arXiv preprint arXiv:cs/0010005},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010005},
primaryClass={cs.CC}
} | homan2000low |
arxiv-669696 | cs/0010006 | Applications of Data Mining to Electronic Commerce | <|reference_start|>Applications of Data Mining to Electronic Commerce: Electronic commerce is emerging as the killer domain for data mining technology. The following are five desiderata for success. Seldom are they all present in one data mining application. 1. Data with rich descriptions. For example, wide customer records with many potentially useful fields allow data mining algorithms to search beyond obvious correlations. 2. A large volume of data. The large model spaces corresponding to rich data demand many training instances to build reliable models. 3. Controlled and reliable data collection. Manual data entry and integration from legacy systems both are notoriously problematic; fully automated collection is considerably better. 4. The ability to evaluate results. Substantial, demonstrable return on investment can be very convincing. 5. Ease of integration with existing processes. Even if pilot studies show potential benefit, deploying automated solutions to previously manual processes is rife with pitfalls. Building a system to take advantage of the mined knowledge can be a substantial undertaking. Furthermore, one often must deal with social and political issues involved in the automation of a previously manual business process.<|reference_end|> | arxiv | @article{kohavi2000applications,
title={Applications of Data Mining to Electronic Commerce},
author={Ron Kohavi and Foster Provost},
journal={arXiv preprint arXiv:cs/0010006},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010006},
primaryClass={cs.LG cs.DB}
} | kohavi2000applications |
arxiv-669697 | cs/0010007 | Towards a Theory of Cache-Efficient Algorithms | <|reference_start|>Towards a Theory of Cache-Efficient Algorithms: We describe a model that enables us to analyze the running time of an algorithm in a computer with a memory hierarchy with limited associativity, in terms of various cache parameters. Our model, an extension of Aggarwal and Vitter's I/O model, enables us to establish useful relationships between the cache complexity and the I/O complexity of computations. As a corollary, we obtain cache-optimal algorithms for some fundamental problems like sorting, FFT, and an important subclass of permutations in the single-level cache model. We also show that ignoring associativity concerns could lead to inferior performance, by analyzing the average-case cache behavior of mergesort. We further extend our model to multiple levels of cache with limited associativity and present optimal algorithms for matrix transpose and sorting. Our techniques may be used for systematic exploitation of the memory hierarchy starting from the algorithm design stage, and dealing with the hitherto unresolved problem of limited associativity.<|reference_end|> | arxiv | @article{sen2000towards,
title={Towards a Theory of Cache-Efficient Algorithms},
author={Sandeep Sen, Siddhartha Chatterjee, Neeraj Dumir},
journal={arXiv preprint arXiv:cs/0010007},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010007},
primaryClass={cs.AR cs.DS}
} | sen2000towards |
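The entry above analyses algorithms such as matrix transpose in a cache model with limited associativity. As a hypothetical illustration of the kind of loop structure such analyses favour, here is the standard blocked (tiled) transpose; the block size B is a stand-in for a cache-derived parameter, the associativity issues the paper treats are not modelled, and in interpreted Python the point is the loop structure rather than measurable cache behaviour.

```python
# Blocked (tiled) matrix transpose: a standard cache-conscious arrangement of
# the loops, shown only to illustrate the style of algorithm the cache model
# analyses. B stands in for a parameter chosen from the cache size.
def transpose_blocked(A, n, B=32):
    """Transpose an n x n matrix stored as a flat row-major list, tile by tile."""
    T = [0] * (n * n)
    for ii in range(0, n, B):
        for jj in range(0, n, B):
            for i in range(ii, min(ii + B, n)):
                for j in range(jj, min(jj + B, n)):
                    T[j * n + i] = A[i * n + j]
    return T

if __name__ == "__main__":
    n = 100
    A = list(range(n * n))
    T = transpose_blocked(A, n)
    assert all(T[j * n + i] == A[i * n + j] for i in range(n) for j in range(n))
    print("blocked transpose OK")
```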
arxiv-669698 | cs/0010008 | The Light Lexicographic path Ordering | <|reference_start|>The Light Lexicographic path Ordering: We introduce syntactic restrictions of the lexicographic path ordering to obtain the Light Lexicographic Path Ordering. We show that the light lexicographic path ordering leads to a characterisation of the functions computable in space bounded by a polynomial in the size of the inputs.<|reference_end|> | arxiv | @article{cichon2000the,
title={The Light Lexicographic path Ordering},
author={E.A. Cichon and J-Y. Marion},
journal={arXiv preprint arXiv:cs/0010008},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010008},
primaryClass={cs.PL cs.CC}
} | cichon2000the |
arxiv-669699 | cs/0010009 | An Approach to the Implementation of Overlapping Rules in Standard ML | <|reference_start|>An Approach to the Implementation of Overlapping Rules in Standard ML: We describe an approach to programming rule-based systems in Standard ML, with a focus on so-called overlapping rules, that is, rules that can still be active when other rules are fired. Such rules are useful when implementing rule-based reactive systems, and to that effect we show a simple implementation of Loyall's Active Behavior Trees, used to control goal-directed agents in the Oz virtual environment. We discuss an implementation of our framework using a reactive library geared towards implementing those kinds of systems.<|reference_end|> | arxiv | @article{pucella2000an,
title={An Approach to the Implementation of Overlapping Rules in Standard ML},
author={Riccardo Pucella},
journal={arXiv preprint arXiv:cs/0010009},
year={2000},
archivePrefix={arXiv},
eprint={cs/0010009},
primaryClass={cs.PL}
} | pucella2000an |
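The entry above centres on "overlapping" rules that remain active while other rules fire. The sketch below is a minimal, hypothetical tick-based engine illustrating only that property, written in Python rather than the paper's Standard ML and without its reactive library; the rules and state are invented.

```python
# Minimal overlapping-rules sketch (hypothetical). On every tick, *all* rules
# whose conditions hold fire against the shared state -- firing one rule does
# not retire or pre-empt the others.
class Rule:
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

def tick(state, rules):
    fired = [r for r in rules if r.condition(state)]   # evaluate all conditions first
    for r in fired:                                    # then let every enabled rule act
        r.action(state)
    return [r.name for r in fired]

if __name__ == "__main__":
    state = {"temperature": 80, "fan": 0, "alarm": False}
    rules = [
        Rule("cool", lambda s: s["temperature"] > 70,
                     lambda s: s.update(fan=s["fan"] + 1)),
        Rule("warn", lambda s: s["temperature"] > 75,
                     lambda s: s.update(alarm=True)),
        Rule("vent", lambda s: s["fan"] > 0,
                     lambda s: s.update(temperature=s["temperature"] - 5)),
    ]
    for step in range(4):
        print(f"step {step}: fired {tick(state, rules)} -> {state}")
```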
arxiv-669700 | cs/0010010 | Fault Detection using Immune-Based Systems and Formal Language Algorithms | <|reference_start|>Fault Detection using Immune-Based Systems and Formal Language Algorithms: This paper describes two approaches for fault detection: an immune-based mechanism and a formal language algorithm. The first one is based on the ability of immune systems to distinguish any foreign cell from the body's own cells. The formal language approach treats the system as a linguistic source capable of generating a certain language, characterised by a grammar. Each algorithm has particular characteristics, which are analysed in the paper, namely in which cases it can be used with advantage. To test their practicality, both approaches were applied to the problem of fault detection in an induction motor.<|reference_end|> | arxiv | @article{martins2000fault,
title={Fault Detection using Immune-Based Systems and Formal Language
Algorithms},
author={J.F. Martins, P. J. Costa Branco, A.J. Pires, J.A. Dente},
journal={arXiv preprint arXiv:cs/0010010},
year={2000},
doi={10.1109/CDC.2000.914202},
archivePrefix={arXiv},
eprint={cs/0010010},
primaryClass={cs.CE cs.LG}
} | martins2000fault |
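The entry above builds on the immune system's self/non-self discrimination. Below is a minimal sketch of negative selection, the classic immune-inspired scheme for that idea; the paper's own mechanism may differ in its details, and the 2-D "self" data, radius, and detector count here are invented for the demo.

```python
# Negative-selection sketch for fault/anomaly detection (a classic immune-
# inspired scheme; illustrative only). Random detectors are kept only if they
# lie far from all "self" (normal-operation) samples; a new sample that matches
# any surviving detector is flagged as a possible fault.
import random
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_detectors(self_samples, n_detectors=200, radius=0.15, seed=1):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = (rng.random(), rng.random())            # candidate in the unit square
        if all(dist(cand, s) > radius for s in self_samples):
            detectors.append(cand)                     # survives negative selection
    return detectors

def is_fault(sample, detectors, radius=0.15):
    return any(dist(sample, d) <= radius for d in detectors)

if __name__ == "__main__":
    # "Self" = normal operating points clustered around (0.3, 0.3).
    rng = random.Random(0)
    normal = [(0.3 + rng.gauss(0, 0.03), 0.3 + rng.gauss(0, 0.03)) for _ in range(100)]
    detectors = generate_detectors(normal)
    print("normal sample flagged:", is_fault((0.31, 0.29), detectors))   # expected False
    print("faulty sample flagged:", is_fault((0.8, 0.8), detectors))     # expected True
```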