corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-670001 | cs/0106051 | Users Guide for SnadiOpt: A Package Adding Automatic Differentiation to Snopt | <|reference_start|>Users Guide for SnadiOpt: A Package Adding Automatic Differentiation to Snopt: SnadiOpt is a package that supports the use of the automatic differentiation package ADIFOR with the optimization package Snopt. Snopt is a general-purpose system for solving optimization problems with many variables and constraints. It minimizes a linear or nonlinear function subject to bounds on the variables and sparse linear or nonlinear constraints. It is suitable for large-scale linear and quadratic programming and for linearly constrained optimization, as well as for general nonlinear programs. The method used by Snopt requires the first derivatives of the objective and constraint functions to be available. The SnadiOpt package allows users to avoid the time-consuming and error-prone process of evaluating and coding these derivatives. Given Fortran code for evaluating only the values of the objective and constraints, SnadiOpt automatically generates the code for evaluating the derivatives and builds the relevant Snopt input files and sparse data structures.<|reference_end|> | arxiv | @article{gertz2001users,
title={Users Guide for SnadiOpt: A Package Adding Automatic Differentiation to
Snopt},
author={E. Michael Gertz and Philip E. Gill and Julia Muetherig},
journal={arXiv preprint arXiv:cs/0106051},
year={2001},
number={ANL/MCS-TM-245},
archivePrefix={arXiv},
eprint={cs/0106051},
primaryClass={cs.MS}
} | gertz2001users |
arxiv-670002 | cs/0106052 | Acceptability with general orderings | <|reference_start|>Acceptability with general orderings: We present a new approach to termination analysis of logic programs. The essence of the approach is that we make use of general orderings (instead of level mappings), as is done in transformational approaches to logic program termination analysis, but we apply these orderings directly to the logic program and not to the term-rewrite system obtained through some transformation. We define some variants of acceptability, based on general orderings, and show how they are equivalent to LD-termination. We develop a demand-driven, constraint-based approach to verify these acceptability-variants. The advantage of the approach over standard acceptability is that in some cases, where complex level mappings are needed, fairly simple orderings may be easily generated. The advantage over transformational approaches is that it avoids the transformation step altogether. {\bf Keywords:} termination analysis, acceptability, orderings.<|reference_end|> | arxiv | @article{de schreye2001acceptability,
title={Acceptability with general orderings},
author={Danny De Schreye and Alexander Serebrenik},
journal={arXiv preprint arXiv:cs/0106052},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106052},
primaryClass={cs.PL cs.LO}
} | de schreye2001acceptability |
arxiv-670003 | cs/0106053 | Inference of termination conditions for numerical loops | <|reference_start|>Inference of termination conditions for numerical loops: We present a new approach to termination analysis of numerical computations in logic programs. Traditional approaches fail to analyse them due to the non-well-foundedness of the integers. We present a technique that allows us to overcome these difficulties. Our approach is based on transforming a program in a way that allows integrating and extending techniques originally developed for analysis of numerical computations in the framework of query-mapping pairs with the well-known framework of acceptability. Such an integration not only contributes to the understanding of termination behaviour of numerical computations, but also allows us to perform a correct analysis of such computations automatically, thus extending previous work on a constraints-based approach to termination. In the last section of the paper we discuss possible extensions of the technique, including incorporating general term orderings.<|reference_end|> | arxiv | @article{serebrenik2001inference,
title={Inference of termination conditions for numerical loops},
author={Alexander Serebrenik and Danny De Schreye},
journal={arXiv preprint arXiv:cs/0106053},
year={2001},
number={CW 308},
archivePrefix={arXiv},
eprint={cs/0106053},
primaryClass={cs.PL cs.LO}
} | serebrenik2001inference |
arxiv-670004 | cs/0106054 | Software Toolkit for Building Embedded and Distributed Knowledge-based Systems | <|reference_start|>Software Toolkit for Building Embedded and Distributed Knowledge-based Systems: The paper discusses the basic principles and the architecture of the software toolkit for constructing knowledge-based systems which can be used cooperatively over computer networks and also embedded into larger software systems in different ways. The presented architecture is based on frame knowledge representation and production rules, and also allows interfacing high-level programming languages and relational databases by exposing corresponding classes or database tables as frames. Frames located on remote computers can also be transparently accessed and used in inference, and the dynamic knowledge for specific frames can also be transferred over the network. The implementation issues of such a system, which uses the Java programming language, CORBA and XML for external knowledge representation, are addressed. Finally, some applications of the toolkit are considered, including an e-business approach to knowledge sharing, intelligent web behaviours, etc.<|reference_end|> | arxiv | @article{soshnikov2001software,
title={Software Toolkit for Building Embedded and Distributed Knowledge-based
Systems},
author={Dmitri Soshnikov},
journal={Soshnikov D. Software Toolkit for Building Distributed and
Embedded Knowledge-Based Systems. In Proceedings of the 2nd International
Workshop on Computer Science and Information Technologies, Ufa, USATU
Publishers, 2000. pp. 103--111},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106054},
primaryClass={cs.AI cs.DC cs.MA}
} | soshnikov2001software |
arxiv-670005 | cs/0106055 | A Seamless Integration of Association Rule Mining with Database Systems | <|reference_start|>A Seamless Integration of Association Rule Mining with Database Systems: The need for Knowledge and Data Discovery Management Systems (KDDMS) that support ad hoc data mining queries has been long recognized. A significant amount of research has gone into building tightly coupled systems that integrate association rule mining with database systems. In this paper, we describe a seamless integration scheme for database queries and association rule discovery using a common query optimizer for both. Query trees of expressions in an extended algebra are used for internal representation in the optimizer. The algebraic representation is flexible enough to deal with constrained association rule queries and other variations of association rule specifications. We propose modularization to simplify the query tree for complex tasks in data mining. It paves the way for making use of existing algorithms for constructing query plans in the optimization process. How the integration scheme we present will facilitate greater user control over the data mining process is also discussed. The work described in this paper forms part of a larger project for fully integrating data mining with database management.<|reference_end|> | arxiv | @article{gopalan2001a,
title={A Seamless Integration of Association Rule Mining with Database Systems},
author={Raj P. Gopalan and Tariq Nuruddin and Yudho Giri Sucahyo},
journal={arXiv preprint arXiv:cs/0106055},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106055},
primaryClass={cs.DB}
} | gopalan2001a |
arxiv-670006 | cs/0106056 | Randomized Two-Process Wait-Free Test-and-Set | <|reference_start|>Randomized Two-Process Wait-Free Test-and-Set: We present the first explicit, and currently simplest, randomized algorithm for 2-process wait-free test-and-set. It is implemented with two 4-valued single writer single reader atomic variables. A test-and-set takes at most 11 expected elementary steps, while a reset takes exactly 1 elementary step. Based on a finite-state analysis, the proofs of correctness and expected length are compressed into one table.<|reference_end|> | arxiv | @article{tromp2001randomized,
title={Randomized Two-Process Wait-Free Test-and-Set},
author={John Tromp (CWI and BioInformatics Solutions) and Paul Vitanyi (CWI
and University of Amsterdam)},
journal={arXiv preprint arXiv:cs/0106056},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106056},
primaryClass={cs.DC}
} | tromp2001randomized |
arxiv-670007 | cs/0106057 | Exposing and harvesting metadata using the OAI metadata harvesting protocol: A tutorial | <|reference_start|>Exposing and harvesting metadata using the OAI metadata harvesting protocol: A tutorial: In this article I outline the ideas behind the Open Archives Initiative metadata harvesting protocol (OAIMH), and attempt to clarify some common misconceptions. I then consider how the OAIMH protocol can be used to expose and harvest metadata. Perl code examples are given as practical illustration.<|reference_end|> | arxiv | @article{warner2001exposing,
title={Exposing and harvesting metadata using the OAI metadata harvesting
protocol: A tutorial},
author={Simeon Warner (LANL)},
journal={High Energy Physics Libraries Webzine, Issue 4, June 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106057},
primaryClass={cs.DL}
} | warner2001exposing |
arxiv-670008 | cs/0106058 | Enabling the Long-Term Archival of Signed Documents through Time Stamping | <|reference_start|>Enabling the Long-Term Archival of Signed Documents through Time Stamping: In this paper we describe how to build a trusted reliable distributed service across administrative domains in a peer-to-peer network. The application we use to motivate our work is a public key time stamping service called Prokopius. The service provides a secure, verifiable but distributable stable archive that maintains time stamped snapshots of public keys over time. This in turn allows clients to verify time stamped documents or certificates that rely on formerly trusted public keys that are no longer in service or where the signer no longer exists. We find that such a service can time stamp the snapshots of public keys in a network of 148 nodes at the granularity of a couple of days, even in the worst case where an adversary causes the maximal amount of damage allowable within our fault model.<|reference_end|> | arxiv | @article{maniatis2001enabling,
title={Enabling the Long-Term Archival of Signed Documents through Time
Stamping},
author={Petros Maniatis and T.J. Giuli and Mary Baker},
journal={arXiv preprint arXiv:cs/0106058},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106058},
primaryClass={cs.DC cs.CR}
} | maniatis2001enabling |
arxiv-670009 | cs/0106059 | CHR as grammar formalism A first report | <|reference_start|>CHR as grammar formalism A first report: Grammars written as Constraint Handling Rules (CHR) can be executed as efficient and robust bottom-up parsers that provide a straightforward, non-backtracking treatment of ambiguity. Abduction with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified for anaphora resolution, coordination and text interpretation.<|reference_end|> | arxiv | @article{christiansen2001chr,
title={CHR as grammar formalism. A first report},
author={Henning Christiansen},
journal={Proc. of ERCIM Workshop on Constraints, Prague, Czech Republic,
June 18-20, 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0106059},
primaryClass={cs.PL cs.CL}
} | christiansen2001chr |
arxiv-670010 | cs/0107001 | Analysis of Network Traffic in Switched Ethernet Systems | <|reference_start|>Analysis of Network Traffic in Switched Ethernet Systems: A 100 Mbps Ethernet link between a college campus and the outside world was monitored with a dedicated PC and the measured data analysed for its statistical properties. Similar measurements were taken at an internal node of the network. The networks in both cases are a full-duplex switched Ethernet. Inter-event interval histograms and power spectra of the throughput aggregated for 10ms bins were used to analyse the measured traffic. For most investigated cases both methods reveal that the traffic behaves according to a power law. The results will be used in later studies to parameterise models for network traffic.<|reference_end|> | arxiv | @article{field2001analysis,
title={Analysis of Network Traffic in Switched Ethernet Systems},
author={Tony Field, Uli Harder and Peter Harrison},
journal={arXiv preprint arXiv:cs/0107001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107001},
primaryClass={cs.PF cs.NI}
} | field2001analysis |
arxiv-670011 | cs/0107002 | Enhancing Constraint Propagation with Composition Operators | <|reference_start|>Enhancing Constraint Propagation with Composition Operators: Constraint propagation is a general algorithmic approach for pruning the search space of a CSP. In a uniform way, K. R. Apt has defined a computation as an iteration of reduction functions over a domain. He has also demonstrated the need for integrating static properties of reduction functions (commutativity and semi-commutativity) to design specialized algorithms such as AC3 and DAC. We introduce here a set of operators for modeling compositions of reduction functions. Two of the major goals are to tackle parallel computations, and dynamic behaviours (such as slow convergence).<|reference_end|> | arxiv | @article{granvilliers2001enhancing,
title={Enhancing Constraint Propagation with Composition Operators},
author={Laurent Granvilliers and Eric Monfroy},
journal={arXiv preprint arXiv:cs/0107002},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107002},
primaryClass={cs.AI}
} | granvilliers2001enhancing |
arxiv-670012 | cs/0107003 | Lower Bounds for Zero-knowledge on the Internet | <|reference_start|>Lower Bounds for Zero-knowledge on the Internet: We consider zero knowledge interactive proofs in a richer, more realistic communication environment. In this setting, one may simultaneously engage in many interactive proofs, and these proofs may take place in an asynchronous fashion. It is known that zero-knowledge is not necessarily preserved in such an environment; we show that for a large class of protocols, it cannot be preserved. Any 4 round (computational) zero-knowledge interactive proof (or argument) for a non-trivial language L is not black-box simulatable in the asynchronous setting.<|reference_end|> | arxiv | @article{kilian2001lower,
title={Lower Bounds for Zero-knowledge on the Internet},
author={Joe Kilian and Erez Petrank and Charles Rackoff},
journal={arXiv preprint arXiv:cs/0107003},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107003},
primaryClass={cs.CR}
} | kilian2001lower |
arxiv-670013 | cs/0107004 | On Concurrent and Resettable Zero-Knowledge Proofs for NP | <|reference_start|>On Concurrent and Resettable Zero-Knowledge Proofs for NP: A proof is concurrent zero-knowledge if it remains zero-knowledge when many copies of the proof are run in an asynchronous environment, such as the Internet. It is known that zero-knowledge is not necessarily preserved in such an environment. Designing concurrent zero-knowledge proofs is a fundamental issue in the study of zero-knowledge since known zero-knowledge protocols cannot be run in a realistic modern computing environment. In this paper we present a concurrent zero-knowledge proof system for all languages in NP. Currently, the proof system we present is the only known proof system that retains the zero-knowledge property when copies of the proof are allowed to run in an asynchronous environment. Our proof system has $\tilde{O}(\log^2 k)$ rounds (for a security parameter $k$), which is almost optimal, as shown by Canetti, Kilian, Petrank and Rosen that black-box concurrent zero-knowledge requires $\tilde{\Omega}(\log k)$ rounds. Canetti, Goldreich, Goldwasser and Micali introduced the notion of {\em resettable} zero-knowledge, and modified an earlier version of our proof system to obtain the first resettable zero-knowledge proof system. This protocol requires $k^{\theta(1)}$ rounds. We note that their technique also applies to our current proof system, yielding a resettable zero-knowledge proof for NP with $\tilde{O}(\log^2 k)$ rounds.<|reference_end|> | arxiv | @article{kilian2001on,
title={On Concurrent and Resettable Zero-Knowledge Proofs for NP},
author={Joe Kilian and Erez Petrank and Ransom Richardson},
journal={arXiv preprint arXiv:cs/0107004},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107004},
primaryClass={cs.CR}
} | kilian2001on |
arxiv-670014 | cs/0107005 | The Role of Conceptual Relations in Word Sense Disambiguation | <|reference_start|>The Role of Conceptual Relations in Word Sense Disambiguation: We explore many ways of using conceptual distance measures in Word Sense Disambiguation, starting with the Agirre-Rigau conceptual density measure. We use a generalized form of this measure, introducing many (parameterized) refinements and performing an exhaustive evaluation of all meaningful combinations. We finally obtain a 42% improvement over the original algorithm, and show that measures of conceptual distance are not worse indicators for sense disambiguation than measures based on word co-occurrence (exemplified by the Lesk algorithm). Our results, however, reinforce the idea that only a combination of different sources of knowledge might eventually lead to accurate word sense disambiguation.<|reference_end|> | arxiv | @article{fernandez-amoros2001the,
title={The Role of Conceptual Relations in Word Sense Disambiguation},
author={David Fernandez-Amoros and Julio Gonzalo and Felisa Verdejo},
journal={arXiv preprint arXiv:cs/0107005},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107005},
primaryClass={cs.CL}
} | fernandez-amoros2001the |
arxiv-670015 | cs/0107006 | Looking Under the Hood : Tools for Diagnosing your Question Answering Engine | <|reference_start|>Looking Under the Hood : Tools for Diagnosing your Question Answering Engine: In this paper we analyze two question answering tasks : the TREC-8 question answering task and a set of reading comprehension exams. First, we show that Q/A systems perform better when there are multiple answer opportunities per question. Next, we analyze common approaches to two subproblems: term overlap for answer sentence identification, and answer typing for short answer extraction. We present general tools for analyzing the strengths and limitations of techniques for these subproblems. Our results quantify the limitations of both term overlap and answer typing to distinguish between competing answer candidates.<|reference_end|> | arxiv | @article{breck2001looking,
title={Looking Under the Hood : Tools for Diagnosing your Question Answering
Engine},
author={Eric Breck and Marc Light and Gideon S. Mann and Ellen Riloff and
Brianne Brown and Pranav Anand and Mats Rooth and Michael Thelen},
journal={arXiv preprint arXiv:cs/0107006},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107006},
primaryClass={cs.CL}
} | breck2001looking |
arxiv-670016 | cs/0107007 | The Risk Profile Problem for Stock Portfolio Optimization | <|reference_start|>The Risk Profile Problem for Stock Portfolio Optimization: This work initiates research into the problem of determining an optimal investment strategy for investors with different attitudes towards the trade-offs of risk and profit. The probability distributions of the return values of the stocks that are considered by the investor are assumed to be known, while the joint distribution is unknown. The problem is to find the best investment strategy in order to minimize the probability of losing a certain percentage of the invested capital based on different attitudes of the investors towards future outcomes of the stock market. For portfolios made up of two stocks, this work shows how to exactly and quickly solve the problem of finding an optimal portfolio for aggressive or risk-averse investors, using an algorithm based on a fast greedy solution to a maximum flow problem. However, an investor looking for an average-case guarantee (so is neither aggressive nor risk-averse) must deal with a more difficult problem. In particular, it is #P-complete to compute the distribution function associated with the average-case bound. On the positive side, approximate answers can be computed by using random sampling techniques similar to those for high-dimensional volume estimation. When k>2 stocks are considered, it is proved that a simple solution based on the same flow concepts as the 2-stock algorithm would imply that P = NP, and so is highly unlikely. This work gives approximation algorithms for this case as well as exact algorithms for some important special cases.<|reference_end|> | arxiv | @article{kao2001the,
title={The Risk Profile Problem for Stock Portfolio Optimization},
author={Ming-Yang Kao and Andreas Nolte and Stephen R. Tate},
journal={arXiv preprint arXiv:cs/0107007},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107007},
primaryClass={cs.CE cs.DM cs.DS}
} | kao2001the |
arxiv-670017 | cs/0107008 | Complex Tilings | <|reference_start|>Complex Tilings: We study the minimal complexity of tilings of a plane with a given tile set. We note that every tile set admits either no tiling or some tiling with O(n) Kolmogorov complexity of its n-by-n squares. We construct tile sets for which this bound is tight: all n-by-n squares in all tilings have complexity at least n. This adds a quantitative angle to classical results on non-recursivity of tilings -- that we also develop in terms of Turing degrees of unsolvability. Keywords: Tilings, Kolmogorov complexity, recursion theory<|reference_end|> | arxiv | @article{durand2001complex,
title={Complex Tilings},
author={Bruno Durand and Leonid A. Levin and Alexander Shen},
journal={Journal of Symbolic Logic, 73(2):593-613, 2008},
year={2001},
doi={10.2178/jsl/1208359062},
archivePrefix={arXiv},
eprint={cs/0107008},
primaryClass={cs.CC cs.DM}
} | durand2001complex |
arxiv-670018 | cs/0107009 | A Blueprint for Building Serverless Applications on the Net | <|reference_start|>A Blueprint for Building Serverless Applications on the Net: A peer-to-peer application architecture is proposed that has the potential to eliminate the back-end servers for hosting services on the Internet. The proposed application architecture has been modeled as a distributed system for delivering an Internet service. The service thus created, though chaotic and fraught with uncertainties, would be highly scalable and capable of achieving unprecedented levels of robustness and viability with the increase in the number of the users. The core issues relating to the architecture, such as service discovery, distributed application architecture components, and inter-application communications, have been analysed. It is shown that the communications for the coordination of various functions, among the cooperating instances of the application, may be optimised using a divide-and-conquer strategy. Finally, the areas where future work needs to be directed have been identified.<|reference_end|> | arxiv | @article{khan2001a,
title={A Blueprint for Building Serverless Applications on the Net},
author={A. I. Khan and R. Spindler},
journal={arXiv preprint arXiv:cs/0107009},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107009},
primaryClass={cs.DC cs.NI}
} | khan2001a |
arxiv-670019 | cs/0107010 | Algorithms for Boolean Function Query Properties | <|reference_start|>Algorithms for Boolean Function Query Properties: We present new algorithms to compute fundamental properties of a Boolean function given in truth-table form. Specifically, we give an O(N^2.322 log N) algorithm for block sensitivity, an O(N^1.585 log N) algorithm for `tree decomposition,' and an O(N) algorithm for `quasisymmetry.' These algorithms are based on new insights into the structure of Boolean functions that may be of independent interest. We also give a subexponential-time algorithm for the space-bounded quantum query complexity of a Boolean function. To prove this algorithm correct, we develop a theory of limited-precision representation of unitary operators, building on work of Bernstein and Vazirani.<|reference_end|> | arxiv | @article{aaronson2001algorithms,
title={Algorithms for Boolean Function Query Properties},
author={Scott Aaronson (UC Berkeley)},
journal={arXiv preprint arXiv:cs/0107010},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107010},
primaryClass={cs.CC cs.DS}
} | aaronson2001algorithms |
arxiv-670020 | cs/0107011 | Distributed Broadcast in Wireless Networks with Unknown Topology | <|reference_start|>Distributed Broadcast in Wireless Networks with Unknown Topology: A multi-hop synchronous wireless network is said to be unknown if the nodes have no knowledge of the topology. A basic task in wireless networks is that of broadcasting a message (created by a fixed source node) to all nodes of the network. The multi-broadcast operation consists in performing a set of r independent broadcasts. In this paper, we study the completion and the termination time of distributed protocols for both the (single) broadcast and the multi-broadcast operations on unknown networks as functions of the number of nodes n, the maximum eccentricity D, the maximum in-degree Delta, and the congestion c of the networks. We establish new connections between these operations and some combinatorial concepts, such as selective families, strongly-selective families (also known as superimposed codes), and pairwise r-different families. Such connections, combined with a set of new lower and upper bounds on the size of the above families, allow us to derive new lower bounds and new distributed protocols for the broadcast and multi-broadcast operations. In particular, our upper bounds are almost tight and improve exponentially over the previous bounds when D and Delta are polylogarithmic in n. Network topologies having ``small'' eccentricity and ``small'' degree (such as bounded-degree expanders) are often used in practice to achieve efficient communication.<|reference_end|> | arxiv | @article{clementi2001distributed,
title={Distributed Broadcast in Wireless Networks with Unknown Topology},
author={Andrea E.F. Clementi and Angelo Monti and Riccardo Silvestri},
journal={arXiv preprint arXiv:cs/0107011},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107011},
primaryClass={cs.DS cs.DM}
} | clementi2001distributed |
arxiv-670021 | cs/0107012 | Three-Stage Quantitative Neural Network Model of the Tip-of-the-Tongue Phenomenon | <|reference_start|>Three-Stage Quantitative Neural Network Model of the Tip-of-the-Tongue Phenomenon: A new three-stage computer artificial neural network model of the tip-of-the-tongue phenomenon is briefly described, and its stochastic nature is demonstrated. A way to calculate the strength and appearance probability of tip-of-the-tongue states and a neural network mechanism for the feeling-of-knowing phenomenon are proposed. The model synthesizes memory, psycholinguistic, and metamemory approaches, and bridges the speech-error and naming-chronometry research traditions. A model analysis of a tip-of-the-tongue case from Anton Chekhov's short story 'A Horsey Name' is performed. A new 'throw-up-one's-arms effect' is defined.<|reference_end|> | arxiv | @article{gopych2001three-stage,
title={Three-Stage Quantitative Neural Network Model of the Tip-of-the-Tongue
Phenomenon},
author={Petro M. Gopych},
journal={arXiv preprint arXiv:cs/0107012},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107012},
primaryClass={cs.CL cs.AI q-bio.NC q-bio.QM}
} | gopych2001three-stage |
arxiv-670022 | cs/0107013 | The Logic Programming Paradigm and Prolog | <|reference_start|>The Logic Programming Paradigm and Prolog: This is a tutorial on logic programming and Prolog appropriate for a course on programming languages for students familiar with imperative programming.<|reference_end|> | arxiv | @article{apt2001the,
title={The Logic Programming Paradigm and Prolog},
author={Krzysztof R. Apt},
journal={arXiv preprint arXiv:cs/0107013},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107013},
primaryClass={cs.PL cs.AI}
} | apt2001the |
arxiv-670023 | cs/0107014 | Transformations of CCP programs | <|reference_start|>Transformations of CCP programs: We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.<|reference_end|> | arxiv | @article{etalle2001transformations,
title={Transformations of CCP programs},
author={Sandro Etalle and Maurizio Gabbrielli and Maria Chiara Meo},
journal={arXiv preprint arXiv:cs/0107014},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107014},
primaryClass={cs.PL cs.AI cs.LO}
} | etalle2001transformations |
arxiv-670024 | cs/0107015 | From Neel to NPC: Colouring Small Worlds | <|reference_start|>From Neel to NPC: Colouring Small Worlds: In this note, we present results for the colouring problem on small world graphs created by rewiring square, triangular, and two kinds of cubic (with coordination numbers 5 and 6) lattices. As the rewiring parameter p tends to 1, we find the expected crossover to the behaviour of random graphs with corresponding connectivity. However, for the cubic lattices there is a region near p=0 for which the graphs are colourable. This could in principle be used as an additional heuristic for solving real world colouring or scheduling problems. Small worlds with connectivity 5 and p ~ 0.1 provide an interesting ensemble of graphs whose colourability is hard to determine. For square lattices, we get good data collapse plotting the fraction of colourable graphs against the rescaled parameter $p N^{-\nu}$ with $\nu = 1.35$. No such collapse can be obtained for the data from lattices with coordination number 5 or 6.<|reference_end|> | arxiv | @article{svenson2001from,
title={From Neel to NPC: Colouring Small Worlds},
author={Pontus Svenson},
journal={arXiv preprint arXiv:cs/0107015},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107015},
primaryClass={cs.CC cond-mat.stat-mech}
} | svenson2001from |
arxiv-670025 | cs/0107016 | Introduction to the CoNLL-2001 Shared Task: Clause Identification | <|reference_start|>Introduction to the CoNLL-2001 Shared Task: Clause Identification: We describe the CoNLL-2001 shared task: dividing text into clauses. We give background information on the data sets, present a general overview of the systems that have taken part in the shared task and briefly discuss their performance.<|reference_end|> | arxiv | @article{sang2001introduction,
title={Introduction to the CoNLL-2001 Shared Task: Clause Identification},
author={Erik F. Tjong Kim Sang and Herve Dejean},
journal={In: Walter Daelemans and Remi Zajac (eds.), Proceedings of
CoNLL-2001, Toulouse, France, 2001, pp. 53-57},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107016},
primaryClass={cs.CL}
} | sang2001introduction |
arxiv-670026 | cs/0107017 | Learning Computational Grammars | <|reference_start|>Learning Computational Grammars: This paper reports on the "Learning Computational Grammars" (LCG) project, a postdoc network devoted to studying the application of machine learning techniques to grammars suitable for computational use. We were interested in a more systematic survey to understand the relevance of many factors to the success of learning, esp. the availability of annotated data, the kind of dependencies in the data, and the availability of knowledge bases (grammars). We focused on syntax, esp. noun phrase (NP) syntax.<|reference_end|> | arxiv | @article{nerbonne2001learning,
title={Learning Computational Grammars},
author={John Nerbonne and Anja Belz and Nicola Cancedda and Herve Dejean and
James Hammerton and Rob Koeling and Stasinos Konstantopoulos and Miles
Osborne and Franck Thollard and Erik F. Tjong Kim Sang},
journal={In: Walter Daelemans and Remi Zajac (eds.), Proceedings of
CoNLL-2001, Toulouse, France, 2001, pp. 97-104},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107017},
primaryClass={cs.CL}
} | nerbonne2001learning |
arxiv-670027 | cs/0107018 | Combining a self-organising map with memory-based learning | <|reference_start|>Combining a self-organising map with memory-based learning: Memory-based learning (MBL) has enjoyed considerable success in corpus-based natural language processing (NLP) tasks and is thus a reliable method of getting a high level of performance when building corpus-based NLP systems. However, there is a bottleneck in MBL whereby any novel testing item has to be compared against all the training items in the memory base. For this reason there has been some interest in various forms of memory editing whereby some method of selecting a subset of the memory base is employed to reduce the number of comparisons. This paper investigates the use of a modified self-organising map (SOM) to select a subset of the memory items for comparison. This method involves reducing the number of comparisons to a value proportional to the square root of the number of training items. The method is tested on the identification of base noun-phrases in the Wall Street Journal corpus, using sections 15 to 18 for training and section 20 for testing.<|reference_end|> | arxiv | @article{hammerton2001combining,
title={Combining a self-organising map with memory-based learning},
author={James Hammerton and Erik F. Tjong Kim Sang},
journal={In: Walter Daelemans and Remi Zajac (eds.), Proceedings of
CoNLL-2001, Toulouse, France, 2001, pp. 9-14},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107018},
primaryClass={cs.CL}
} | hammerton2001combining |
arxiv-670028 | cs/0107019 | Applying Natural Language Generation to Indicative Summarization | <|reference_start|>Applying Natural Language Generation to Indicative Summarization: The task of creating indicative summaries that help a searcher decide whether to read a particular document is a difficult task. This paper examines the indicative summarization task from a generation perspective, by first analyzing its required content via published guidelines and corpus analysis. We show how these summaries can be factored into a set of document features, and how an implemented content planner uses the topicality document feature to create indicative multidocument query-based summaries.<|reference_end|> | arxiv | @article{kan2001applying,
title={Applying Natural Language Generation to Indicative Summarization},
author={Min-Yen Kan and Kathleen R. McKeown and Judith L. Klavans},
journal={arXiv preprint arXiv:cs/0107019},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107019},
primaryClass={cs.CL}
} | kan2001applying |
arxiv-670029 | cs/0107020 | Transformation-Based Learning in the Fast Lane | <|reference_start|>Transformation-Based Learning in the Fast Lane: Transformation-based learning has been successfully employed to solve many natural language processing problems. It achieves state-of-the-art performance on many natural language processing tasks and does not overtrain easily. However, it does have a serious drawback: the training time is often intolerably long, especially on the large corpora which are often used in NLP. In this paper, we present a novel and realistic method for speeding up the training time of a transformation-based learner without sacrificing performance. The paper compares and contrasts the training time needed and performance achieved by our modified learner with two other systems: a standard transformation-based learner, and the ICA system \cite{hepple00:tbl}. The results of these experiments show that our system is able to achieve a significant improvement in training time while still achieving the same performance as a standard transformation-based learner. This is a valuable contribution to systems and algorithms which utilize transformation-based learning at any part of the execution.<|reference_end|> | arxiv | @article{ngai2001transformation-based,
title={Transformation-Based Learning in the Fast Lane},
author={Grace Ngai and Radu Florian},
journal={Proceedings of the Second Conference of the North American Chapter
of the Association for Computational Linguistics, pages 40-47, Pittsburgh,
PA, USA},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107020},
primaryClass={cs.CL}
} | ngai2001transformation-based |
arxiv-670030 | cs/0107021 | Multidimensional Transformation-Based Learning | <|reference_start|>Multidimensional Transformation-Based Learning: This paper presents a novel method that allows a machine learning algorithm following the transformation-based learning paradigm \cite{brill95:tagging} to be applied to multiple classification tasks by training jointly and simultaneously on all fields. The motivation for constructing such a system stems from the observation that many tasks in natural language processing are naturally composed of multiple subtasks which need to be resolved simultaneously; also tasks usually learned in isolation can possibly benefit from being learned in a joint framework, as the signals for the extra tasks usually constitute inductive bias. The proposed algorithm is evaluated in two experiments: in one, the system is used to jointly predict the part-of-speech and text chunks/baseNP chunks of an English corpus; and in the second it is used to learn the joint prediction of word segment boundaries and part-of-speech tagging for Chinese. The results show that the simultaneous learning of multiple tasks does achieve an improvement in each task upon training the same tasks sequentially. The part-of-speech tagging result of 96.63% is state-of-the-art for individual systems on the particular train/test split.<|reference_end|> | arxiv | @article{florian2001multidimensional,
title={Multidimensional Transformation-Based Learning},
author={Radu Florian and Grace Ngai},
journal={Proceedings of the 5th Computational Natural Language Learning
Workshop (CoNNL-2001), pages 1-8, Toulouse, France},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107021},
primaryClass={cs.CL}
} | florian2001multidimensional |
arxiv-670031 | cs/0107022 | An interactive semantics of logic programming | <|reference_start|>An interactive semantics of logic programming: We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification, and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard results, as it reduces to check that the tile systems associated to logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed.<|reference_end|> | arxiv | @article{bruni2001an,
title={An interactive semantics of logic programming},
author={Roberto Bruni and Ugo Montanari and Francesca Rossi},
journal={arXiv preprint arXiv:cs/0107022},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107022},
primaryClass={cs.LO cs.PL}
} | bruni2001an |
arxiv-670032 | cs/0107023 | Vertex-Unfoldings of Simplicial Polyhedra | <|reference_start|>Vertex-Unfoldings of Simplicial Polyhedra: We present two algorithms for unfolding the surface of any polyhedron, all of whose faces are triangles, to a nonoverlapping, connected planar layout. The surface is cut only along polyhedron edges. The layout is connected, but it may have a disconnected interior: the triangles are connected at vertices, but not necessarily joined along edges.<|reference_end|> | arxiv | @article{demaine2001vertex-unfoldings,
title={Vertex-Unfoldings of Simplicial Polyhedra},
author={Erik D. Demaine and David Eppstein and Jeff Erickson and George W.
Hart and Joseph O'Rourke},
journal={Discrete Geometry: In honor of W. Kuperberg's 60th birthday, Pure
and Appl. Math. 253, Marcel Dekker, pp. 215-228, 2003},
year={2001},
number={Smith Tech. Rep. 071},
archivePrefix={arXiv},
eprint={cs/0107023},
primaryClass={cs.CG cs.DM}
} | demaine2001vertex-unfoldings |
arxiv-670033 | cs/0107024 | Enumerating Foldings and Unfoldings between Polygons and Polytopes | <|reference_start|>Enumerating Foldings and Unfoldings between Polygons and Polytopes: We pose and answer several questions concerning the number of ways to fold a polygon to a polytope, and how many polytopes can be obtained from one polygon; and the analogous questions for unfolding polytopes to polygons. Our answers are, roughly: exponentially many, or nondenumerably infinite.<|reference_end|> | arxiv | @article{demaine2001enumerating,
title={Enumerating Foldings and Unfoldings between Polygons and Polytopes},
author={Erik D. Demaine and Martin L. Demaine and Anna Lubiw and Joseph
O'Rourke},
journal={Graphs and Combinatorics 18(1) 93-104 (2002)},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107024},
primaryClass={cs.CG cs.DM}
} | demaine2001enumerating |
arxiv-670034 | cs/0107025 | Computer validated proofs of a toolset for adaptable arithmetic | <|reference_start|>Computer validated proofs of a toolset for adaptable arithmetic: Most existing implementations of multiple precision arithmetic demand that the user sets the precision {\em a priori}. Some libraries are said adaptable in the sense that they dynamically change the precision of each intermediate operation individually to deliver the target accuracy according to the actual inputs. We present in this text a new adaptable numeric core inspired both from floating point expansions and from on-line arithmetic. The numeric core is cut down to four tools. The tool that contains arithmetic operations is proved to be correct. The proofs have been formally checked by the Coq assistant. Developing the proofs, we have formally proved many results published in the literature and we have extended a few of them. This work may let users (i) develop application specific adaptable libraries based on the toolset and / or (ii) write new formal proofs based on the set of validated facts.<|reference_end|> | arxiv | @article{boldo2001computer,
title={Computer validated proofs of a toolset for adaptable arithmetic},
author={Sylvie Boldo and Marc Daumas and Claire Moreau-Finot and Laurent
Thery},
journal={arXiv preprint arXiv:cs/0107025},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107025},
primaryClass={cs.MS}
} | boldo2001computer |
arxiv-670035 | cs/0107026 | Annotated revision programs | <|reference_start|>Annotated revision programs: Revision programming is a formalism to describe and enforce updates of belief sets and databases. That formalism was extended by Fitting who assigned annotations to revision atoms. Annotations provide a way to quantify the confidence (probability) that a revision atom holds. The main goal of our paper is to reexamine the work of Fitting, argue that his semantics does not always provide results consistent with intuition, and to propose an alternative treatment of annotated revision programs. Our approach differs from that proposed by Fitting in two key aspects: we change the notion of a model of a program and we change the notion of a justified revision. We show that under this new approach fundamental properties of justified revisions of standard revision programs extend to the annotated case.<|reference_end|> | arxiv | @article{marek2001annotated,
title={Annotated revision programs},
author={Victor Marek and Inna Pivkina and Miroslaw Truszczynski},
journal={Artificial Intelligence Journal, 138 (2002), pp. 149-180.},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107026},
primaryClass={cs.AI cs.LO}
} | marek2001annotated |
arxiv-670036 | cs/0107027 | Fixed-parameter complexity of semantics for logic programs | <|reference_start|>Fixed-parameter complexity of semantics for logic programs: A decision problem is called parameterized if its input is a pair of strings. One of these strings is referred to as a parameter. The problem: given a propositional logic program P and a non-negative integer k, decide whether P has a stable model of size no more than k, is an example of a parameterized decision problem with k serving as a parameter. Parameterized problems that are NP-complete often become solvable in polynomial time if the parameter is fixed. The problem to decide whether a program P has a stable model of size no more than k, where k is fixed and not a part of input, can be solved in time O(mn^k), where m is the size of P and n is the number of atoms in P. Thus, this problem is in the class P. However, algorithms with the running time given by a polynomial of order k are not satisfactory even for relatively small values of k. The key question then is whether significantly better algorithms (with the degree of the polynomial not dependent on k) exist. To tackle it, we use the framework of fixed-parameter complexity. We establish the fixed-parameter complexity for several parameterized decision problems involving models, supported models and stable models of logic programs. We also establish the fixed-parameter complexity for variants of these problems resulting from restricting attention to Horn programs and to purely negative programs. Most of the problems considered in the paper have high fixed-parameter complexity. Thus, it is unlikely that fixing bounds on models (supported models, stable models) will lead to fast algorithms to decide the existence of such models.<|reference_end|> | arxiv | @article{lonc2001fixed-parameter,
title={Fixed-parameter complexity of semantics for logic programs},
author={Zbigniew Lonc and Miroslaw Truszczynski},
journal={ACM Transactions on Computational Logic, 4 (2003), pp. 91-119.},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107027},
primaryClass={cs.LO cs.AI}
} | lonc2001fixed-parameter |
arxiv-670037 | cs/0107028 | Propositional satisfiability in answer-set programming | <|reference_start|>Propositional satisfiability in answer-set programming: We show that propositional logic and its extensions can support answer-set programming in the same way stable logic programming and disjunctive logic programming do. To this end, we introduce a logic based on the logic of propositional schemata and on a version of the Closed World Assumption. We call it the extended logic of propositional schemata with CWA (PS+, in symbols). An important feature of this logic is that it supports explicit modeling of constraints on cardinalities of sets. In the paper, we characterize the class of problems that can be solved by finite PS+ theories. We implement a programming system based on the logic PS+ and design and implement a solver for processing theories in PS+. We present encouraging performance results for our approach --- we show it to be competitive with smodels, a state-of-the-art answer-set programming system based on stable logic programming.<|reference_end|> | arxiv | @article{east2001propositional,
title={Propositional satisfiability in answer-set programming},
author={Deborah East and Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0107028},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107028},
primaryClass={cs.AI cs.LO}
} | east2001propositional |
arxiv-670038 | cs/0107029 | aspps --- an implementation of answer-set programming with propositional schemata | <|reference_start|>aspps --- an implementation of answer-set programming with propositional schemata: We present an implementation of an answer-set programming paradigm, called aspps (short for answer-set programming with propositional schemata). The system aspps is designed to process PS+ theories. It consists of two basic modules. The first module, psgrnd, grounds a PS+ theory. The second module, referred to as aspps, is a solver. It computes models of ground PS+ theories.<|reference_end|> | arxiv | @article{truszczynski2001aspps,
title={aspps --- an implementation of answer-set programming with propositional
schemata},
author={Deborah East and Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0107029},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107029},
primaryClass={cs.AI cs.LO}
} | truszczynski2001aspps |
arxiv-670039 | cs/0107030 | Reconciliation of a Quantum-Distributed Gaussian Key | <|reference_start|>Reconciliation of a Quantum-Distributed Gaussian Key: Two parties, Alice and Bob, wish to distill a binary secret key out of a list of correlated variables that they share after running a quantum key distribution protocol based on continuous-spectrum quantum carriers. We present a novel construction that allows the legitimate parties to get equal bit strings out of correlated variables by using a classical channel, with as little leaked information as possible. This opens the way to securely correcting non-binary key elements. In particular, the construction is refined to the case of Gaussian variables as it applies directly to recent continuous-variable protocols for quantum key distribution.<|reference_end|> | arxiv | @article{van assche2001reconciliation,
title={Reconciliation of a Quantum-Distributed Gaussian Key},
author={G. Van Assche (1) and J. Cardinal (1) and N. J. Cerf (1 and 2) ((1)
ULB, (2) JPL/Caltech)},
journal={IEEE Trans. Inform. Theory, vol. 50, p. 394, Feb. 2004},
year={2001},
doi={10.1109/TIT.2003.822618},
archivePrefix={arXiv},
eprint={cs/0107030},
primaryClass={cs.CR quant-ph}
} | van assche2001reconciliation |
arxiv-670040 | cs/0107031 | The Complexity of Clickomania | <|reference_start|>The Complexity of Clickomania: We study a popular puzzle game known variously as Clickomania and Same Game. Basically, a rectangular grid of blocks is initially colored with some number of colors, and the player repeatedly removes a chosen connected monochromatic group of at least two square blocks, and any blocks above it fall down. We show that one-column puzzles can be solved, i.e., the maximum possible number of blocks can be removed, in linear time for two colors, and in polynomial time for an arbitrary number of colors. On the other hand, deciding whether a puzzle is solvable (all blocks can be removed) is NP-complete for two columns and five colors, or five columns and three colors.<|reference_end|> | arxiv | @article{biedl2001the,
title={The Complexity of Clickomania},
author={Therese C. Biedl and Erik D. Demaine and Martin L. Demaine and
Rudolf Fleischer and Lars Jacobsen and J. Ian Munro},
journal={arXiv preprint arXiv:cs/0107031},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107031},
primaryClass={cs.CC cs.DM cs.DS}
} | biedl2001the |
arxiv-670041 | cs/0107032 | Coupled Clustering: a Method for Detecting Structural Correspondence | <|reference_start|>Coupled Clustering: a Method for Detecting Structural Correspondence: This paper proposes a new paradigm and computational framework for identification of correspondences between sub-structures of distinct composite systems. For this, we define and investigate a variant of traditional data clustering, termed coupled clustering, which simultaneously identifies corresponding clusters within two data sets. The presented method is demonstrated and evaluated for detecting topical correspondences in textual corpora.<|reference_end|> | arxiv | @article{marx2001coupled,
title={Coupled Clustering: a Method for Detecting Structural Correspondence},
author={Zvika Marx and Ido Dagan and Joachim Buhmann},
journal={In: C. E. Brodley and A. P. Danyluk (eds.), Proceedings of the
18th International Conference on Machine Learning (ICML 2001), pp. 353-360},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107032},
primaryClass={cs.LG cs.CL cs.IR}
} | marx2001coupled |
arxiv-670042 | cs/0107033 | Yet another zeta function and learning | <|reference_start|>Yet another zeta function and learning: We study the convergence speed of the batch learning algorithm, and compare its speed to that of the memoryless learning algorithm and of learning with memory (as analyzed in joint work with N. Komarova). We obtain precise results and show in particular that the batch learning algorithm is never worse than the memoryless learning algorithm (at least asymptotically). Its performance vis-a-vis learning with full memory is less clearcut, and depends on certain probabilistic assumptions. These results necessitate the introduction of the moment zeta function of a probability distribution and the study of some of its properties.<|reference_end|> | arxiv | @article{rivin2001yet,
title={Yet another zeta function and learning},
author={Igor Rivin},
journal={arXiv preprint arXiv:cs/0107033},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107033},
primaryClass={cs.LG cs.DM math.PR}
} | rivin2001yet |
arxiv-670043 | cs/0107034 | NEOS Server 4.0 Administrative Guide | <|reference_start|>NEOS Server 4.0 Administrative Guide: The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and trouble-shooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver administrators such as maintaining security, providing usage instructions, and enforcing reasonable restrictions on jobs. The administrative guide is intended both as an introduction to the NEOS Server and as a reference for use when running the Server.<|reference_end|> | arxiv | @article{dolan2001neos,
title={NEOS Server 4.0 Administrative Guide},
author={Elizabeth D. Dolan},
journal={arXiv preprint arXiv:cs/0107034},
year={2001},
number={ANL/MCS-TM-250},
archivePrefix={arXiv},
eprint={cs/0107034},
primaryClass={cs.DC}
} | dolan2001neos |
arxiv-670044 | cs/0107035 | Semantic Web Content Accessibility Guidelines for Current Research Information Systems (CRIS) | <|reference_start|>Semantic Web Content Accessibility Guidelines for Current Research Information Systems (CRIS): The most exciting challenge for CRIS is to create a service for research information which is widespread, distributed and current like Google, but at the same time structured and trusted, with complex search and navigation similar to today's CRIS applications. The core technology for such a "new" CRIS is semantic web technology, which integrates database contents with HTML and XML web pages so that they can be provided to the research-interested public. One possible way (at the moment the best) is to use RDF (Resource Description Framework), which is also recommended by the W3 consortium.<|reference_end|> | arxiv | @article{lopatenko2001semantic,
title={Semantic Web Content Accessibility Guidelines for Current Research
Information Systems (CRIS)},
author={A. Lopatenko},
journal={Second Interim Report of Extension Centre, Vienna University of
Technology, 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107035},
primaryClass={cs.NI cs.DL}
} | lopatenko2001semantic |
arxiv-670045 | cs/0107036 | TeXmacs interfaces to Maxima, MuPAD and REDUCE | <|reference_start|>TeXmacs interfaces to Maxima, MuPAD and REDUCE: GNU TeXmacs is a free wysiwyg word processor providing an excellent typesetting quality of texts and formulae. It can also be used as an interface to Computer Algebra Systems (CASs). In the present work, interfaces to three general-purpose CASs have been implemented.<|reference_end|> | arxiv | @article{grozin2001texmacs,
title={TeXmacs interfaces to Maxima, MuPAD and REDUCE},
author={A.G. Grozin},
journal={arXiv preprint arXiv:cs/0107036},
year={2001},
archivePrefix={arXiv},
eprint={cs/0107036},
primaryClass={cs.SC cs.MS hep-ph}
} | grozin2001texmacs |
arxiv-670046 | cs/0108001 | The Cactus Worm: Experiments with Dynamic Resource Discovery and Allocation in a Grid Environment | <|reference_start|>The Cactus Worm: Experiments with Dynamic Resource Discovery and Allocation in a Grid Environment: The ability to harness heterogeneous, dynamically available "Grid" resources is attractive to typically resource-starved computational scientists and engineers, as in principle it can increase, by significant factors, the number of cycles that can be delivered to applications. However, new adaptive application structures and dynamic runtime system mechanisms are required if we are to operate effectively in Grid environments. In order to explore some of these issues in a practical setting, we are developing an experimental framework, called Cactus, that incorporates both adaptive application structures for dealing with changing resource characteristics and adaptive resource selection mechanisms that allow applications to change their resource allocations (e.g., via migration) when performance falls outside specified limits. We describe here the adaptive resource selection mechanisms and describe how they are used to achieve automatic application migration to "better" resources following performance degradation. Our results provide insights into the architectural structures required to support adaptive resource selection. In addition, we suggest that this "Cactus Worm" is an interesting challenge problem for Grid computing.<|reference_end|> | arxiv | @article{allen2001the,
title={The Cactus Worm: Experiments with Dynamic Resource Discovery and
Allocation in a Grid Environment},
author={Gabrielle Allen (1), David Angulo (2), Ian Foster (2 and 3), Gerd
Lanfermann (1), Chuang Liu (2), Thomas Radke (1), Ed Seidel (1), John Shalf
(4) ((1) Max-Planck-Institut f\"ur Gravitationsphysik, (2) Univ of Chicago,
(3) Argonne Natnl Lab, (4) Lawrence Berkeley Natnl Lab)},
journal={arXiv preprint arXiv:cs/0108001},
year={2001},
number={TR-2001-28},
archivePrefix={arXiv},
eprint={cs/0108001},
primaryClass={cs.DC}
} | allen2001the |
arxiv-670047 | cs/0108002 | Bounded Concurrent Timestamp Systems Using Vector Clocks | <|reference_start|>Bounded Concurrent Timestamp Systems Using Vector Clocks: Shared registers are basic objects used as communication media in asynchronous concurrent computation. A concurrent timestamp system is a higher typed communication object, and has been shown to be a powerful tool for solving many concurrency control problems. It has turned out to be possible to construct such higher typed objects from primitive lower typed ones. The next step is to find efficient constructions. We propose a very efficient wait-free construction of bounded concurrent timestamp systems from 1-writer multireader registers. This finalizes, corrects, and extends a preliminary bounded multiwriter construction proposed by the second author in 1986. That work partially initiated the current interest in wait-free concurrent objects, and introduced a notion of discrete vector clocks in distributed algorithms.<|reference_end|> | arxiv | @article{haldar2001bounded,
title={Bounded Concurrent Timestamp Systems Using Vector Clocks},
author={Sibsankar Haldar (Bell Labs) and Paul Vitanyi (CWI and University of
Amsterdam)},
journal={arXiv preprint arXiv:cs/0108002},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108002},
primaryClass={cs.DC}
} | haldar2001bounded |
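The vector clocks mentioned at the end of this abstract are easy to sketch in their standard, unbounded form; the paper's contribution is a *bounded* wait-free construction, which this illustrative fragment does not attempt:
```python
class VectorClock:
    """Standard (unbounded) vector clock for n processes; events are
    partially ordered by componentwise comparison of their stamps."""

    def __init__(self, n, pid):
        self.clock = [0] * n
        self.pid = pid

    def tick(self):
        self.clock[self.pid] += 1

    def send(self):
        self.tick()
        return list(self.clock)        # timestamp attached to the message

    def receive(self, stamp):
        self.clock = [max(a, b) for a, b in zip(self.clock, stamp)]
        self.tick()

def happens_before(u, v):
    # u -> v iff u <= v componentwise and u != v
    return all(a <= b for a, b in zip(u, v)) and u != v

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
m = p0.send()       # p0 is now [1, 0]
p1.tick()           # p1 is now [0, 1]
p1.receive(m)       # p1 is now [1, 2]
print(happens_before(m, p1.clock))   # True: the send precedes the receive
```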
arxiv-670048 | cs/0108003 | The Partial Evaluation Approach to Information Personalization | <|reference_start|>The Partial Evaluation Approach to Information Personalization: Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology - PIPE (`Personalization is Partial Evaluation') - for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable.<|reference_end|> | arxiv | @article{ramakrishnan2001the,
title={The Partial Evaluation Approach to Information Personalization},
author={Naren Ramakrishnan and Saverio Perugini},
journal={arXiv preprint arXiv:cs/0108003},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108003},
primaryClass={cs.IR cs.PL}
} | ramakrishnan2001the |
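To make the "personalization is partial evaluation" idea concrete, here is a toy specializer over a site modeled as nested dicts; the dict encoding and attribute syntax are our own illustration, not PIPE's programmatic representation:
```python
def specialize(site, known):
    """Toy partial evaluator: a hierarchical site is a nested dict keyed
    by 'attribute=value' choices. Fixing an attribute in `known` prunes
    contradicting branches and splices through matching ones; the
    residual structure is the personalized site."""
    if not isinstance(site, dict):
        return site                        # a leaf page
    residual = {}
    for choice, subtree in site.items():
        attr, _, value = choice.partition("=")
        if attr in known:
            if known[attr] == value:
                return specialize(subtree, known)   # choice already made
            # otherwise: branch contradicts the user, prune it
        else:
            residual[choice] = specialize(subtree, known)
    return residual

site = {
    "language=python": {"level=intro": "py101.html",
                        "level=advanced": "py301.html"},
    "language=java":   {"level=intro": "j101.html"},
}
# A user known to care about Python sees only the pruned hierarchy:
print(specialize(site, {"language": "python"}))
# {'level=intro': 'py101.html', 'level=advanced': 'py301.html'}
```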
arxiv-670049 | cs/0108004 | Links tell us about lexical and semantic Web content | <|reference_start|>Links tell us about lexical and semantic Web content: The latest generation of Web search tools is beginning to exploit hypertext link information to improve ranking\cite{Brin98,Kleinberg98} and crawling\cite{Menczer00,Ben-Shaul99etal,Chakrabarti99} algorithms. The hidden assumption behind such approaches, a correlation between the graph structure of the Web and its content, has not been tested explicitly despite increasing research on Web topology\cite{Lawrence98,Albert99,Adamic99,Butler00}. Here I formalize and quantitatively validate two conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, i.e., one can infer the lexical content of a page by looking at the pages that link to it. I also show that lexical inferences based on link cues are quite heterogeneous across Web communities. The link-cluster conjecture states that pages about the same topic are clustered together, i.e., one can infer the meaning of a page by looking at its neighbours. These results explain the success of the newest search technologies and open the way for more dynamic and scalable methods to locate information in a topic- or user-driven way.<|reference_end|> | arxiv | @article{menczer2001links,
title={Links tell us about lexical and semantic Web content},
author={Filippo Menczer},
journal={arXiv preprint arXiv:cs/0108004},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108004},
primaryClass={cs.IR cs.DL}
} | menczer2001links |
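The link-content conjecture suggests a direct measurement: compare a page's term vector with those of the pages linking to it. A minimal sketch on a toy three-page corpus (a real validation would use TF-IDF weighting over crawled pages):
```python
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

pages = {  # toy corpus: page id -> text
    "p1": "neural networks learn representations",
    "p2": "deep neural networks and learning",
    "p3": "stock market prices and trading",
}
links = {"p1": ["p2", "p3"]}   # p2 and p3 both link to p1

vec = {p: Counter(t.split()) for p, t in pages.items()}
for src in links["p1"]:
    print(f"sim(p1, {src}) = {cosine(vec['p1'], vec[src]):.2f}")
# Under the link-content conjecture we expect sim(p1,p2) > sim(p1,p3).
```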
arxiv-670050 | cs/0108005 | A Bit of Progress in Language Modeling | <|reference_start|>A Bit of Progress in Language Modeling: In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We find some significant interactions, especially with smoothing and clustering techniques. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38% and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8.9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. This is the extended version of the paper; it contains additional details and proofs, and is designed to be a good introduction to the state of the art in language modeling.<|reference_end|> | arxiv | @article{goodman2001a,
title={A Bit of Progress in Language Modeling},
author={Joshua Goodman},
journal={arXiv preprint arXiv:cs/0108005},
year={2001},
number={MSR-TR-2001-72},
archivePrefix={arXiv},
eprint={cs/0108005},
primaryClass={cs.CL}
} | goodman2001a |
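Of the techniques combined in this paper, interpolated Kneser-Ney smoothing is the most self-contained to sketch. A bigram-only version follows; the paper works with higher orders and count cutoffs, and the discount D=0.75 and toy corpus here are illustrative choices:
```python
from collections import Counter

def kneser_ney_bigram(tokens, D=0.75):
    """Interpolated Kneser-Ney for bigrams: discount every bigram count
    by D and interpolate with a continuation unigram (how many distinct
    histories precede w), not the raw unigram frequency."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigram = Counter(tokens[:-1])                 # history token counts
    histories = Counter(w for (_, w) in bigrams)   # N1+(. w)
    followers = Counter(u for (u, _) in bigrams)   # N1+(u .)
    total_types = len(bigrams)                     # N1+(. .)

    def prob(u, w):
        # assumes u occurred as a history in the training data
        disc = max(bigrams[(u, w)] - D, 0) / unigram[u]
        lam = D * followers[u] / unigram[u]        # mass reserved for backoff
        return disc + lam * histories[w] / total_types
    return prob

toks = "the cat sat on the mat the cat ran".split()
p = kneser_ney_bigram(toks)
print(f"P(cat | the) = {p('the', 'cat'):.3f}")
print(f"P(mat | the) = {p('the', 'mat'):.3f}")
```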
arxiv-670051 | cs/0108006 | Classes for Fast Maximum Entropy Training | <|reference_start|>Classes for Fast Maximum Entropy Training: Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a novel speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer non-zero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling.<|reference_end|> | arxiv | @article{goodman2001classes,
title={Classes for Fast Maximum Entropy Training},
author={Joshua Goodman},
journal={Proceedings of ICASSP-2001, Utah, May 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108006},
primaryClass={cs.CL}
} | goodman2001classes |
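The speedup comes from factoring P(w|h) = P(class(w)|h) * P(w|class(w),h), which shrinks each normalization from |V| terms to about |C| + |V|/|C| terms. A numpy sketch of the factored prediction step, with random stand-in scores where a real model would use maxent feature weights:
```python
import numpy as np

rng = np.random.default_rng(0)
V, C = 10000, 100                       # vocabulary size, number of classes
cls = np.repeat(np.arange(C), V // C)   # word -> class, 100 words per class

word_scores = rng.normal(size=V)        # stand-ins for learned scores
class_scores = rng.normal(size=C)

def p_word_flat(w):
    """Single softmax over all V words: O(V) normalization per word."""
    z = np.exp(word_scores)
    return z[w] / z.sum()

def p_word_classed(w):
    """Two-level factoring: normalize over C classes, then over the
    ~V/C words inside w's class."""
    zc = np.exp(class_scores)
    p_class = zc[cls[w]] / zc.sum()
    members = np.flatnonzero(cls == cls[w])
    zw = np.exp(word_scores[members])
    return p_class * np.exp(word_scores[w]) / zw.sum()

print(p_word_flat(42), p_word_classed(42))
# Normalization touches V = 10000 terms vs C + V/C = 200 terms per word.
```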
arxiv-670052 | cs/0108007 | Abstract versus Concrete Computation on Metric Partial Algebras | <|reference_start|>Abstract versus Concrete Computation on Metric Partial Algebras: A model of computation is abstract if, when applied to any algebra, the resulting programs for computable functions and sets on that algebra are invariant under isomorphisms, and hence do not depend on a representation for the algebra. Otherwise it is concrete. Intuitively, concrete models depend on the implementation of the algebra. The difference is particularly striking in the case of topological partial algebras, and notably in algebras over the reals. We investigate the relationship between abstract and concrete models of partial metric algebras. In the course of this investigation, interesting aspects of continuity, extensionality and non-determinism are uncovered.<|reference_end|> | arxiv | @article{tucker2001abstract,
title={Abstract versus Concrete Computation on Metric Partial Algebras},
author={J.V. Tucker (University of Wales, Swansea) and J.I. Zucker (McMaster
University, Hamilton, Canada)},
journal={arXiv preprint arXiv:cs/0108007},
year={2001},
number={McMaster Dept of Computing & Software Tech Report CAS-01-01-JZ},
archivePrefix={arXiv},
eprint={cs/0108007},
primaryClass={cs.LO}
} | tucker2001abstract |
arxiv-670053 | cs/0108008 | Using Methods of Declarative Logic Programming for Intelligent Information Agents | <|reference_start|>Using Methods of Declarative Logic Programming for Intelligent Information Agents: The search for information on the web is faced with several problems, which arise on the one hand from the vast number of available sources, and on the other hand from their heterogeneity. A promising approach is the use of multi-agent systems of information agents, which cooperatively solve advanced information-retrieval problems. This requires capabilities to address complex tasks, such as search and assessment of sources, query planning, information merging and fusion, dealing with incomplete information, and handling of inconsistency. In this paper, our interest is in the role which some methods from the field of declarative logic programming can play in the realization of reasoning capabilities for information agents. In particular, we are interested in how they can be used and further developed for the specific needs of this application domain. We review some existing systems and current projects, which address information-integration problems. We then focus on declarative knowledge-representation methods, and review and evaluate approaches from logic programming and nonmonotonic reasoning for information agents. We discuss advantages and drawbacks, and point out possible extensions and open issues.<|reference_end|> | arxiv | @article{eiter2001using,
title={Using Methods of Declarative Logic Programming for Intelligent
Information Agents},
author={T. Eiter, M. Fink, G. Sabbatini and H. Tompits},
journal={arXiv preprint arXiv:cs/0108008},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108008},
primaryClass={cs.MA cs.AI}
} | eiter2001using |
arxiv-670054 | cs/0108009 | Artificial Neurons with Arbitrarily Complex Internal Structures | <|reference_start|>Artificial Neurons with Arbitrarily Complex Internal Structures: Artificial neurons with arbitrarily complex internal structure are introduced. The neurons can be described in terms of a set of internal variables, a set of activation functions which describe the time evolution of these variables, and a set of characteristic functions which control how the neurons interact with one another. The information capacity of attractor networks composed of these generalized neurons is shown to reach the maximum allowed bound. A simple example taken from the domain of pattern recognition demonstrates the increased computational power of these neurons. Furthermore, a specific class of generalized neurons gives rise to a simple transformation relating attractor networks of generalized neurons to standard three-layer feed-forward networks. Given this correspondence, we conjecture that the maximum information capacity of a three-layer feed-forward network is 2 bits per weight.<|reference_end|> | arxiv | @article{kohring2001artificial,
title={Artificial Neurons with Arbitrarily Complex Internal Structures},
author={G.A. Kohring},
journal={Neurocomputing, vol. 47, pp. 103-118 (2002).},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108009},
primaryClass={cs.NE q-bio.NC}
} | kohring2001artificial |
arxiv-670055 | cs/0108010 | A Note on Tiling under Tomographic Constraints | <|reference_start|>A Note on Tiling under Tomographic Constraints: Given a tiling of a 2D grid with several types of tiles, we can count for every row and column how many tiles of each type it intersects. These numbers are called the _projections_. We are interested in the problem of reconstructing a tiling which has given projections. Some simple variants of this problem, involving tiles that are 1x1 or 1x2 rectangles, have been studied in the past, and were proved to be either solvable in polynomial time or NP-complete. In this note we make progress toward a comprehensive classification of various tiling reconstruction problems, by proving NP-completeness results for several sets of tiles.<|reference_end|> | arxiv | @article{chrobak2001a,
title={A Note on Tiling under Tomographic Constraints},
author={Marek Chrobak, Peter Couperus, Christoph Durr and Gerhard Woeginger},
journal={arXiv preprint arXiv:cs/0108010},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108010},
primaryClass={cs.CC}
} | chrobak2001a |
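The tomographic data the problem starts from is easy to state in code. This toy helper (our own encoding of tilings as cell sets) computes the row and column projections of a given tiling; reconstruction in the other direction is the hard, NP-complete part the paper studies:
```python
def projections(tiling, rows, cols, types):
    """tiling: list of (tile_type, cells) pairs on a rows x cols grid.
    Returns, for every row and column, how many tiles of each type
    intersect it."""
    row_proj = [{t: 0 for t in types} for _ in range(rows)]
    col_proj = [{t: 0 for t in types} for _ in range(cols)]
    for t, cells in tiling:
        for r in {r for r, _ in cells}:
            row_proj[r][t] += 1
        for c in {c for _, c in cells}:
            col_proj[c][t] += 1
    return row_proj, col_proj

# A 2x2 grid covered by two horizontal 1x2 dominoes:
tiling = [("H", {(0, 0), (0, 1)}), ("H", {(1, 0), (1, 1)})]
rows_p, cols_p = projections(tiling, 2, 2, {"H"})
print(rows_p)   # [{'H': 1}, {'H': 1}] : each row meets one domino
print(cols_p)   # [{'H': 2}, {'H': 2}] : each column meets both
```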
arxiv-670056 | cs/0108011 | On Classes of Functions for which No Free Lunch Results Hold | <|reference_start|>On Classes of Functions for which No Free Lunch Results Hold: In a recent paper it was shown that No Free Lunch results hold for any subset F of the set of all possible functions from a finite set X to a finite set Y iff F is closed under permutation of X. In this article, we prove that the number of those subsets can be neglected compared to the overall number of possible subsets. Further, we present some arguments why problem classes relevant in practice are not likely to be closed under permutation.<|reference_end|> | arxiv | @article{igel2001on,
title={On Classes of Functions for which No Free Lunch Results Hold},
author={Christian Igel and Marc Toussaint},
journal={arXiv preprint arXiv:cs/0108011},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108011},
primaryClass={cs.NE math.OC nlin.AO}
} | igel2001on |
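Both claims can be checked exhaustively for tiny X and Y. The sketch below tests closure under input permutation and counts the closed subsets by brute force; already for |X|=3, |Y|=2 they are a small minority, a miniature of the paper's counting argument:
```python
from itertools import combinations, permutations, product

X, Y = range(3), range(2)
all_fns = list(product(Y, repeat=len(X)))   # f encoded as (f(0), f(1), f(2))

def permute(f, pi):
    # (f o pi)(x) = f(pi(x)): permuting the inputs of f
    return tuple(f[pi[x]] for x in X)

def closed_under_permutation(F):
    Fs = set(F)
    return all(permute(f, pi) in Fs for f in Fs for pi in permutations(X))

closed = sum(closed_under_permutation(sub)
             for r in range(len(all_fns) + 1)
             for sub in combinations(all_fns, r))
print(f"{closed} of {2 ** len(all_fns)} subsets are closed under permutation")
# -> 16 of 256: the closed subsets are exactly the unions of the four
#    orbits of S_3 acting on {0,1}^3, hence 2^4 of the 2^8 subsets.
```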
arxiv-670057 | cs/0108012 | A polynomial axles-detection algorithm for a four-contacts treadle | <|reference_start|>A polynomial axles-detection algorithm for a four-contacts treadle: This submission was removed because it contained proprietary information that was distributed without permission.<|reference_end|> | arxiv | @article{crocetti2001a,
title={A polynomial axles-detection algorithm for a four-contacts treadle},
author={Giancarlo Crocetti},
journal={arXiv preprint arXiv:cs/0108012},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108012},
primaryClass={cs.OH}
} | crocetti2001a |
arxiv-670058 | cs/0108013 | Convergent Approximate Solving of First-Order Constraints by Approximate Quantifiers | <|reference_start|>Convergent Approximate Solving of First-Order Constraints by Approximate Quantifiers: Exactly solving first-order constraints (i.e., first-order formulas over a certain predefined structure) can be a very hard, or even undecidable problem. In continuous structures like the real numbers it is promising to compute approximate solutions instead of exact ones. However, the quantifiers of the first-order predicate language are an obstacle to allowing approximations to arbitrary small error bounds. In this paper we solve the problem by modifying the first-order language and replacing the classical quantifiers with approximate quantifiers. These also have two additional advantages: First, they are tunable, in the sense that they allow the user to decide on the trade-off between precision and efficiency. Second, they introduce additional expressivity into the first-order language by allowing reasoning over the size of solution sets.<|reference_end|> | arxiv | @article{ratschan2001convergent,
title={Convergent Approximate Solving of First-Order Constraints by Approximate
Quantifiers},
author={Stefan Ratschan},
journal={arXiv preprint arXiv:cs/0108013},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108013},
primaryClass={cs.LO cs.AI}
} | ratschan2001convergent |
arxiv-670059 | cs/0108014 | What's Fit To Print: The Effect Of Ownership Concentration On Product Variety In Daily Newspaper Markets | <|reference_start|>What's Fit To Print: The Effect Of Ownership Concentration On Product Variety In Daily Newspaper Markets: This paper examines the effect of ownership concentration on product position, product variety and readership in markets for daily newspapers. US antitrust policy presumes that mergers reduce the amount and diversity of content available to consumers. However, the effects of consolidation in differentiated product markets cannot be determined solely from theory. Because multi-product firms internalize business stealing, mergers may encourage firms to reposition products, leading to more, not less, variety. Using data on reporter assignments from 1993-1999, results show that differentiation and variety increase with concentration. Moreover, there is evidence that additional variety increases readership, suggesting that concentration benefits consumers.<|reference_end|> | arxiv | @article{george2001what's,
title={What's Fit To Print: The Effect Of Ownership Concentration On Product
Variety In Daily Newspaper Markets},
author={Lisa M. George},
journal={arXiv preprint arXiv:cs/0108014},
year={2001},
number={TPRC-2001-097},
archivePrefix={arXiv},
eprint={cs/0108014},
primaryClass={cs.CY}
} | george2001what's |
arxiv-670060 | cs/0108015 | Spiders and Crawlers and Bots, Oh My: The Economic Efficiency and Public Policy of Contracts that Restrict Data Collection | <|reference_start|>Spiders and Crawlers and Bots, Oh My: The Economic Efficiency and Public Policy of Contracts that Restrict Data Collection: Recent trends reveal the search by companies for a legal hook to prevent the undesired and unauthorized copying of information posted on websites. In the center of this controversy are metasites, websites that display prices for a variety of vendors. Metasites function by implementing shopbots, which extract pricing data from other vendors' websites. Technological mechanisms have proved unsuccessful in blocking shopbots, and in response, websites have asserted a variety of legal claims. Two recent cases, which rely on the troublesome trespass to chattels doctrine, suggest that contract law may provide a less demanding legal method of preventing the search of websites by data robots. If blocking collection of pricing data is as simple as posting an online contract, the question arises whether this end result is desirable and legally viable.<|reference_end|> | arxiv | @article{rosenfeld2001spiders,
title={Spiders and Crawlers and Bots, Oh My: The Economic Efficiency and Public
Policy of Contracts that Restrict Data Collection},
author={Jeffrey M. Rosenfeld},
journal={arXiv preprint arXiv:cs/0108015},
year={2001},
number={TPRC-2001-XXX},
archivePrefix={arXiv},
eprint={cs/0108015},
primaryClass={cs.CY}
} | rosenfeld2001spiders |
arxiv-670061 | cs/0108016 | Verifying Sequential Consistency on Shared-Memory Multiprocessors by Model Checking | <|reference_start|>Verifying Sequential Consistency on Shared-Memory Multiprocessors by Model Checking: The memory model of a shared-memory multiprocessor is a contract between the designer and programmer of the multiprocessor. The sequential consistency memory model specifies a total order among the memory (read and write) events performed at each processor. A trace of a memory system satisfies sequential consistency if there exists a total order of all memory events in the trace that is both consistent with the total order at each processor and has the property that every read event to a location returns the value of the last write to that location. Descriptions of shared-memory systems are typically parameterized by the number of processors, the number of memory locations, and the number of data values. It has been shown that even for finite parameter values, verifying sequential consistency on general shared-memory systems is undecidable. We observe that, in practice, shared-memory systems satisfy the properties of causality and data independence. Causality is the property that values of read events flow from values of write events. Data independence is the property that all traces can be generated by renaming data values from traces where the written values are distinct from each other. If a causal and data independent system also has the property that the logical order of write events to each location is identical to their temporal order, then sequential consistency can be verified algorithmically. Specifically, we present a model checking algorithm to verify sequential consistency on such systems for a finite number of processors and memory locations and an arbitrary number of data values.<|reference_end|> | arxiv | @article{qadeer2001verifying,
title={Verifying Sequential Consistency on Shared-Memory Multiprocessors by
Model Checking},
author={Shaz Qadeer},
journal={arXiv preprint arXiv:cs/0108016},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108016},
primaryClass={cs.DC cs.AR}
} | qadeer2001verifying |
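The definition of sequential consistency can be executed directly on tiny traces: search for an interleaving, consistent with each processor's order, in which every read returns the last written value. This brute-force checker is definition-only and is not the paper's model checking algorithm; the Dekker-style litmus test below is a standard example:
```python
def interleavings(procs, prefix=()):
    """All merges of the per-processor event lists that preserve each
    processor's own order (exponential; toy traces only)."""
    if all(not p for p in procs):
        yield prefix
        return
    for i, p in enumerate(procs):
        if p:
            rest = [q[1:] if j == i else q for j, q in enumerate(procs)]
            yield from interleavings(rest, prefix + (p[0],))

def legal(order):
    """Every read returns the last write to its location (0 if none)."""
    mem = {}
    for op, loc, val in order:
        if op == 'w':
            mem[loc] = val
        elif mem.get(loc, 0) != val:
            return False
    return True

def is_seq_consistent(procs):
    return any(legal(o) for o in interleavings([list(p) for p in procs]))

# Dekker litmus test: both reads returning 0 is impossible under SC.
p0 = [('w', 'x', 1), ('r', 'y', 0)]
p1 = [('w', 'y', 1), ('r', 'x', 0)]
print(is_seq_consistent([p0, p1]))   # False
```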
arxiv-670062 | cs/0108017 | Security Considerations for Remote Electronic Voting over the Internet | <|reference_start|>Security Considerations for Remote Electronic Voting over the Internet: This paper discusses the security considerations for remote electronic voting in public elections. In particular, we examine the feasibility of running national federal elections over the Internet. The focus of this paper is on the limitations of the current deployed infrastructure in terms of the security of the hosts and the Internet itself. We conclude that at present, our infrastructure is inadequate for remote Internet voting.<|reference_end|> | arxiv | @article{rubin2001security,
title={Security Considerations for Remote Electronic Voting over the Internet},
author={Aviel D. Rubin},
journal={arXiv preprint arXiv:cs/0108017},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108017},
primaryClass={cs.CR}
} | rubin2001security |
arxiv-670063 | cs/0108018 | Bipartite graph partitioning and data clustering | <|reference_start|>Bipartite graph partitioning and data clustering: Many data types arising from data mining applications can be modeled as bipartite graphs, examples include terms and documents in a text corpus, customers and purchasing items in market basket analysis and reviewers and movies in a movie recommender system. In this paper, we propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. We show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. We point out the connection of our clustering algorithm to correspondence analysis used in multivariate analysis. We also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, we apply our clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency.<|reference_end|> | arxiv | @article{zha2001bipartite,
title={Bipartite graph partitioning and data clustering},
author={H. Zha, X. He, C. Ding, M. Gu and H. Simon},
journal={arXiv preprint arXiv:cs/0108018},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108018},
primaryClass={cs.IR cs.LG}
} | zha2001bipartite |
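The SVD-based approximation is a few lines of numpy. A sketch on a toy document-term matrix, following the degree-scaling the abstract describes; thresholding the second singular vectors at zero is the simplest rounding, and details may differ from the paper's:
```python
import numpy as np

# Toy matrix: rows are documents, columns are terms.
A = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [0, 0, 1, 2],
              [0, 1, 2, 1]], dtype=float)

# Scale by square roots of row/column sums (degree normalization).
d1, d2 = A.sum(axis=1), A.sum(axis=0)
An = A / np.sqrt(d1)[:, None] / np.sqrt(d2)[None, :]

U, s, Vt = np.linalg.svd(An)
docs = U[:, 1] < 0      # second left singular vector splits the documents
terms = Vt[1, :] < 0    # second right singular vector splits the terms
print("documents:", docs.astype(int))   # two blocks, e.g. [0 0 1 1]
print("terms:    ", terms.astype(int))  # matching term blocks
```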
arxiv-670064 | cs/0108019 | Scalable Unix Commands for Parallel Processors: A High-Performance Implementation | <|reference_start|>Scalable Unix Commands for Parallel Processors: A High-Performance Implementation: We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.<|reference_end|> | arxiv | @article{ong2001scalable,
title={Scalable Unix Commands for Parallel Processors: A High-Performance
Implementation},
author={E. Ong, E. Lusk, and W. Gropp},
journal={in Recent Advances in Parallel Virtual Machine and Message Passing
Interface, eds. Y. Cotronis and J. Dongarra, Lecture Notes in Computer
Science, Vol. 2131, Springer-Verlag, pp. 410-418, Sept. 2001.},
year={2001},
number={ANL/MCS-P885-0601},
archivePrefix={arXiv},
eprint={cs/0108019},
primaryClass={cs.DC}
} | ong2001scalable |
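The flavor of these commands is easy to convey with mpi4py; the paper's implementations are C MPI programs, so the script name and output format below are our own. Each rank lists a directory on its node and rank 0 prints the gathered results in order:
```python
# Sketch of a "parallel ls" in the spirit of the paper's commands.
# Run with: mpiexec -n 4 python pls.py /tmp
import os
import socket
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
path = sys.argv[1] if len(sys.argv) > 1 else "."

try:
    listing = sorted(os.listdir(path))
except OSError as e:
    listing = [f"error: {e}"]

# Gather every node's listing at rank 0 and print them merged, tagged
# by host, so output is ordered rather than interleaved.
results = comm.gather((socket.gethostname(), comm.rank, listing), root=0)
if comm.rank == 0:
    for host, rank, names in results:
        print(f"--- {host} (rank {rank}): {len(names)} entries ---")
        for n in names:
            print("   ", n)
```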
arxiv-670065 | cs/0108020 | Flipping Cubical Meshes | <|reference_start|>Flipping Cubical Meshes: We define and examine flip operations for quadrilateral and hexahedral meshes, similar to the flipping transformations previously used in triangular and tetrahedral mesh generation.<|reference_end|> | arxiv | @article{bern2001flipping,
title={Flipping Cubical Meshes},
author={Marshall Bern, David Eppstein, Jeff Erickson},
journal={Engineering with Computers 18(3):173-187, 2002},
year={2001},
doi={10.1007/s003660200016},
archivePrefix={arXiv},
eprint={cs/0108020},
primaryClass={cs.CG}
} | bern2001flipping |
arxiv-670066 | cs/0108021 | Computational Geometry Column 42 | <|reference_start|>Computational Geometry Column 42: A compendium of thirty previously published open problems in computational geometry is presented.<|reference_end|> | arxiv | @article{mitchell2001computational,
title={Computational Geometry Column 42},
author={Joseph S. B. Mitchell and Joseph O'Rourke},
journal={SIGACT News, 32(3), Issue 120, Sep. 2001, 63--72},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108021},
primaryClass={cs.CG cs.DM}
} | mitchell2001computational |
arxiv-670067 | cs/0108022 | Portability of Syntactic Structure for Language Modeling | <|reference_start|>Portability of Syntactic Structure for Language Modeling: The paper presents a study on the portability of statistical syntactic knowledge in the framework of the structured language model (SLM). We investigate the impact of porting SLM statistics from the Wall Street Journal (WSJ) to the Air Travel Information System (ATIS) domain. We compare this approach to applying the Microsoft rule-based parser (NLPwin) for the ATIS data and to using a small amount of data manually parsed at UPenn for gathering the initial SLM statistics. Surprisingly, despite the fact that it performs modestly in perplexity (PPL), the model initialized on WSJ parses outperforms the other initialization methods based on in-domain annotated data, achieving a significant 0.4% absolute and 7% relative reduction in word error rate (WER) over a baseline system whose word error rate is 5.8%; the improvement measured relative to the minimum WER achievable on the N-best lists we worked with is 12%.<|reference_end|> | arxiv | @article{chelba2001portability,
title={Portability of Syntactic Structure for Language Modeling},
author={Ciprian Chelba},
journal={ICASSP 2001 Proceedings},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108022},
primaryClass={cs.CL}
} | chelba2001portability |
arxiv-670068 | cs/0108023 | Information Extraction Using the Structured Language Model | <|reference_start|>Information Extraction Using the Structured Language Model: The paper presents a data-driven approach to information extraction (viewed as template filling) using the structured language model (SLM) as a statistical parser. The task of template filling is cast as constrained parsing using the SLM. The model is automatically trained from a set of sentences annotated with frame/slot labels and spans. Training proceeds in stages: first a constrained syntactic parser is trained such that the parses on training data meet the specified semantic spans, then the non-terminal labels are enriched to contain semantic information and finally a constrained syntactic+semantic parser is trained on the parse trees resulting from the previous stage. Despite the small amount of training data used, the model is shown to outperform the slot level accuracy of a simple semantic grammar authored manually for the MiPad --- personal information management --- task.<|reference_end|> | arxiv | @article{chelba2001information,
title={Information Extraction Using the Structured Language Model},
author={Ciprian Chelba and Milind Mahajan},
journal={EMNLP/NAACL 2001 Conference Proceedings},
year={2001},
archivePrefix={arXiv},
eprint={cs/0108023},
primaryClass={cs.CL cs.IR}
} | chelba2001information |
arxiv-670069 | cs/0109001 | Abstract Computability, Algebraic Specification and Initiality | <|reference_start|>Abstract Computability, Algebraic Specification and Initiality: Abstract computable functions are defined by abstract finite deterministic algorithms on many-sorted algebras. We show that there exist finite universal algebraic specifications that specify uniquely (up to isomorphism) (i) all abstract computable functions on any many-sorted algebra; and (ii) all functions effectively approximable by abstract computable functions on any metric algebra. We show that there exist universal algebraic specifications for all the classically computable functions on the set R of real numbers. The algebraic specifications used are mainly bounded universal equations and conditional equations. We investigate the initial algebra semantics of these specifications, and derive situations where algebraic specifications define precisely the computable functions.<|reference_end|> | arxiv | @article{tucker2001abstract,
title={Abstract Computability, Algebraic Specification and Initiality},
author={J.V. Tucker (University of Wales, Swansea) and J.I. Zucker (McMaster
University, Hamilton, Canada)},
journal={arXiv preprint arXiv:cs/0109001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109001},
primaryClass={cs.LO}
} | tucker2001abstract |
arxiv-670070 | cs/0109002 | Probabilistic asynchronous pi-calculus | <|reference_start|>Probabilistic asynchronous pi-calculus: We propose an extension of the asynchronous pi-calculus with a notion of random choice. We define an operational semantics which distinguishes between probabilistic choice, made internally by the process, and nondeterministic choice, made externally by an adversarial scheduler. This distinction will allow us to reason about the probabilistic correctness of algorithms under certain schedulers. We show that in this language we can solve the electoral problem, which was proved to be impossible in the asynchronous $\pi$-calculus. Finally, we show an implementation of the probabilistic asynchronous pi-calculus in a Java-like language.<|reference_end|> | arxiv | @article{herescu2001probabilistic,
title={Probabilistic asynchronous pi-calculus},
author={Oltea Mihaela Herescu and Catuscia Palamidessi},
journal={Jerzy Tiuryn, editor, Proceedings of FOSSACS 2000 (Part of ETAPS
2000), volume 1784 of Lecture Notes in Computer Science, pages 146--160.
Springer-Verlag, 2000},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109002},
primaryClass={cs.PL}
} | herescu2001probabilistic |
arxiv-670071 | cs/0109003 | On the generalized dining philosophers problem | <|reference_start|>On the generalized dining philosophers problem: We consider a generalization of the dining philosophers problem to arbitrary connection topologies. We focus on symmetric, fully distributed systems, and we address the problem of guaranteeing progress and lockout-freedom, even in the presence of adversarial schedulers, by using randomized algorithms. We show that the well-known algorithms of Lehmann and Rabin do not work in the generalized case, and we propose an alternative algorithm based on the idea of letting the philosophers assign a random priority to their adjacent forks.<|reference_end|> | arxiv | @article{herescu2001on,
title={On the generalized dining philosophers problem},
author={Oltea Mihaela Herescu and Catuscia Palamidessi},
journal={Proc. of the 20th ACM Symposium on Principles of Distributed
Computing (PODC), pages 81-89, ACM, 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109003},
primaryClass={cs.PL}
} | herescu2001on |
arxiv-670072 | cs/0109004 | Parallel Computing on a PC Cluster | <|reference_start|>Parallel Computing on a PC Cluster: The tremendous advance in computer technology in the past decade has made it possible to achieve the performance of a supercomputer on a very small budget. We have built a multi-CPU cluster of Pentium PCs capable of parallel computations using the Message Passing Interface (MPI). We will discuss the configuration, performance, and application of the cluster to our work in physics.<|reference_end|> | arxiv | @article{luo2001parallel,
title={Parallel Computing on a PC Cluster},
author={X.Q. Luo (1), E.B. Gregory (1), J. C. Yang (2), Y. L. Wang (2), D.
Chang (2), and Y. Lin (2) ((1) Zhongshan University, (2) Guoxun Ltd.)},
journal={Advanced Computing and Analysis Techniques in Physics Research:
VII International Workshop; ACAT 2000, American Institute of Physics (2001)
270-272},
year={2001},
doi={10.1063/1.1405325},
archivePrefix={arXiv},
eprint={cs/0109004},
primaryClass={cs.DC hep-ph}
} | luo2001parallel |
arxiv-670073 | cs/0109005 | Architectural Framework for Large-Scale Multicast in Mobile Ad Hoc Networks | <|reference_start|>Architectural Framework for Large-Scale Multicast in Mobile Ad Hoc Networks: Emerging ad hoc networks are infrastructure-less networks consisting of wireless devices with various power constraints, capabilities and mobility characteristics. An essential capability in future ad hoc networks is the ability to provide scalable multicast services. This paper presents a novel adaptive architecture to support multicast services in large-scale wide-area ad hoc networks. Existing works on multicast in ad hoc networks address only small size networks. Our main design goals are scalability, robustness and efficiency. We propose a self-configuring hierarchy extending zone-based routing with the notion of contacts based on the small world graphs phenomenon and new metrics of stability and mobility. We introduce a new geographic-based multicast address allocation scheme coupled with adaptive anycast based on group popularity. Our scheme is the first of its kind and promises efficient and robust operation in the common case. Also, based on the new concept of rendezvous regions, we provide a bootstrap mechanism for the multicast service; a challenge generally ignored in previous work.<|reference_end|> | arxiv | @article{helmy2001architectural,
title={Architectural Framework for Large-Scale Multicast in Mobile Ad Hoc
Networks},
author={Ahmed Helmy (University of Southern California)},
journal={arXiv preprint arXiv:cs/0109005},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109005},
primaryClass={cs.NI}
} | helmy2001architectural |
arxiv-670074 | cs/0109006 | On Properties of Update Sequences Based on Causal Rejection | <|reference_start|>On Properties of Update Sequences Based on Causal Rejection: We consider an approach to update nonmonotonic knowledge bases represented as extended logic programs under answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle enforcing that, in case of conflicts, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to update logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of properties of our approach, to get a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and some new properties are considered as well. We then consider refinements of our semantics which incorporate a notion of minimality of change. As well, we investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al. and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of causal rejection principle apply to these approaches as well. Finally, we deal with computational complexity of our approach, and outline how the update semantics and its refinements can be implemented on top of existing logic programming engines.<|reference_end|> | arxiv | @article{eiter2001on,
title={On Properties of Update Sequences Based on Causal Rejection},
author={T. Eiter, M. Fink, G. Sabbatini and H. Tompits},
journal={arXiv preprint arXiv:cs/0109006},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109006},
primaryClass={cs.AI}
} | eiter2001on |
arxiv-670075 | cs/0109007 | Voice vs Data: Estimates of Media Usage and Network Traffic | <|reference_start|>Voice vs Data: Estimates of Media Usage and Network Traffic: The popular conception is that data traffic nearly, if not already, exceeds voice traffic on backbone networks. However, the results of research reported in this paper imply that voice traffic greatly exceeds data traffic when real users are asked to estimate their usage of a wide variety of media. Media usage was surveyed for students in New York City and in Los Angeles. Other than significant differences in radio listening, e-mails, and downloads, the usage was quite similar. Telephone usage (wired and wireless) was nearly an hour per day. When converted to bits, the telephone traffic was much greater than the data traffic over the Internet. This paper reports on the details of the two user studies. The traffic implications of the results are estimated. The finding that voice exceeds data will then be reconciled with the popular opposite conception.<|reference_end|> | arxiv | @article{noll2001voice,
title={Voice vs Data: Estimates of Media Usage and Network Traffic},
author={A. Michael Noll},
journal={arXiv preprint arXiv:cs/0109007},
year={2001},
number={TPRC-2001-005},
archivePrefix={arXiv},
eprint={cs/0109007},
primaryClass={cs.CY}
} | noll2001voice |
arxiv-670076 | cs/0109008 | The Role of Incentives for Opening Monopoly Markets: Comparing GTE and BOC Cooperation with Local Entrants | <|reference_start|>The Role of Incentives for Opening Monopoly Markets: Comparing GTE and BOC Cooperation with Local Entrants: While the 1996 Telecommunications Act requires all incumbent local telephone companies to cooperate with local entrants, section 271 of the Act provides the Bell companies (but not GTE) additional incentives to cooperate. Using an original data set, I compare the negotiations of AT&T, as a local entrant, with GTE and with the Bell companies in states where both operate. My results suggest that the differential incentives matter: The Bells accommodate entry more than does GTE, as evidenced in quicker agreements, less litigation, and more favorable prices offered for network access. Consistent with this, there is more entry into Bell territories.<|reference_end|> | arxiv | @article{mini2001the,
title={The Role of Incentives for Opening Monopoly Markets: Comparing GTE and
BOC Cooperation with Local Entrants},
author={Federico Mini},
journal={arXiv preprint arXiv:cs/0109008},
year={2001},
number={TPRC-2001-100},
archivePrefix={arXiv},
eprint={cs/0109008},
primaryClass={cs.CY}
} | mini2001the |
arxiv-670077 | cs/0109009 | The Effect of Native Language on Internet Usage | <|reference_start|>The Effect of Native Language on Internet Usage: Our goal is to distinguish between the following two hypotheses: (A) The Internet will remain disproportionately in English and will, over time, cause more people to learn English as second language and thus solidify the role of English as a global language. This outcome will prevail even though there are more native Chinese and Spanish speakers than there are native English speakers. (B) As the Internet matures, it will more accurately reflect the native languages spoken around the world (perhaps weighted by purchasing power) and will not promote English as a global language. English's "early lead" on the web is more likely to persist if those who are not native English speakers frequently access the large number of English language web sites that are currently available. In that case, many existing web sites will have little incentive to develop non-English versions of their sites, and new sites will tend to gravitate towards English. The key empirical question, therefore, is whether individuals whose native language is not English use the Web, or certain types of Web sites, less than do native English speakers. In order to examine this issue empirically, we employ a unique data set on Internet use at the individual level in Canada from Media Metrix. Canada provides an ideal setting to examine this issue because English is one of the two official languages. Our preliminary results suggest that English web sites are not a barrier to Internet use for French-speaking Quebecois. These preliminary results are consistent with the scenario in which the Internet will promote English as a global language.<|reference_end|> | arxiv | @article{gandal2001the,
title={The Effect of Native Language on Internet Usage},
author={Neil Gandal and Carl Shapiro},
journal={arXiv preprint arXiv:cs/0109009},
year={2001},
number={TPRC-2001-038},
archivePrefix={arXiv},
eprint={cs/0109009},
primaryClass={cs.CY}
} | gandal2001the |
arxiv-670078 | cs/0109010 | Anaphora and Discourse Structure | <|reference_start|>Anaphora and Discourse Structure: We argue in this paper that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure, instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics, and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalised grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution and inference.<|reference_end|> | arxiv | @article{webber2001anaphora,
title={Anaphora and Discourse Structure},
author={Bonnie Webber, Matthew Stone, Aravind Joshi and Alistair Knott},
journal={arXiv preprint arXiv:cs/0109010},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109010},
primaryClass={cs.CL}
} | webber2001anaphora |
arxiv-670079 | cs/0109011 | Communication Complexity and Secure Function Evaluation | <|reference_start|>Communication Complexity and Secure Function Evaluation: We suggest two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) of f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.<|reference_end|> | arxiv | @article{naor2001communication,
title={Communication Complexity and Secure Function Evaluation},
author={Moni Naor and Kobbi Nissim},
journal={arXiv preprint arXiv:cs/0109011},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109011},
primaryClass={cs.CR cs.CC}
} | naor2001communication |
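The first methodology starts from the communication protocol tree of f. The sketch below builds and evaluates such a tree in the clear for the greater-than ("millionaires") predicate on small bit-widths; the paper's contribution, compiling the tree into a protocol that hides everything but the answer, is not attempted here:
```python
def gt_tree(bit):
    """Protocol tree for [x > y], comparing from the most significant
    bit down. A node is (speaker, message_fn, children); a leaf is the
    boolean answer. Tree size is exponential in the bit width, so keep
    the toy inputs small."""
    if bit < 0:
        return False                        # all bits equal: x > y fails
    a_bit = lambda x, b=bit: (x >> b) & 1
    b_bit = lambda y, b=bit: (y >> b) & 1
    return ('A', a_bit,
            {0: ('B', b_bit, {0: gt_tree(bit - 1), 1: False}),
             1: ('B', b_bit, {0: True, 1: gt_tree(bit - 1)})})

def run(tree, x, y, transcript):
    while not isinstance(tree, bool):
        who, f, children = tree
        b = f(x) if who == 'A' else f(y)
        transcript.append((who, b))         # this bit is communicated
        tree = children[b]
    return tree

t = []
print(run(gt_tree(1), 2, 1, t), t)   # True [('A', 1), ('B', 0)]
```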
arxiv-670080 | cs/0109012 | Is There a There There: Towards Greater Certainty for Internet Jurisdiction | <|reference_start|>Is There a There There: Towards Greater Certainty for Internet Jurisdiction: The unique challenge presented by the Internet is that compliance with local laws is rarely sufficient to assure a business that it has limited its exposure to legal risk. The paper identifies why the challenge of adequately accounting for the legal risk arising from Internet jurisdiction has been aggravated in recent years by the adoption of the Zippo legal framework, commonly referred to as the passive versus active test. The test provides parties with only limited guidance and often results in detrimental judicial decisions from a policy perspective. Given the inadequacies of the Zippo passive versus active test, the paper argues that it is now fitting to identify a more effective standard for determining when it is appropriate to assert jurisdiction in cases involving predominantly Internet-based contacts. The solution submitted in the paper is to move toward a targeting-based analysis. Unlike the Zippo approach, a targeting analysis would seek to identify the intentions of the parties and to assess the steps taken to either enter or avoid a particular jurisdiction. Targeting would also lessen the reliance on effects-based analysis, the source of considerable uncertainty since Internet-based activity can ordinarily be said to create some effects in most jurisdictions. To identify the appropriate criteria for a targeting test, the paper recommends returning to the core jurisdictional principle -- foreseeability. Foreseeability in the targeting context depends on three factors -- contracts, technology, and actual or implied knowledge.<|reference_end|> | arxiv | @article{geist2001is,
title={Is There a There There: Towards Greater Certainty for Internet
Jurisdiction},
author={Michael Geist},
journal={16 (3) Berkeley Tech. LJ (forthcoming 2001)},
year={2001},
number={TPRC-2001-017},
archivePrefix={arXiv},
eprint={cs/0109012},
primaryClass={cs.CY}
} | geist2001is |
arxiv-670081 | cs/0109013 | Conceptual Analysis of Lexical Taxonomies: The Case of WordNet Top-Level | <|reference_start|>Conceptual Analysis of Lexical Taxonomies: The Case of WordNet Top-Level: In this paper we propose an analysis and an upgrade of WordNet's top-level synset taxonomy. We briefly review WordNet and identify its main semantic limitations. Some principles from a forthcoming OntoClean methodology are applied to the ontological analysis of WordNet. A revised top-level taxonomy is proposed, which is meant to be more conceptually rigorous, cognitively transparent, and efficiently exploitable in several applications.<|reference_end|> | arxiv | @article{gangemi2001conceptual,
title={Conceptual Analysis of Lexical Taxonomies: The Case of WordNet Top-Level},
author={Aldo Gangemi, Nicola Guarino, Alessandro Oltramari},
journal={arXiv preprint arXiv:cs/0109013},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109013},
primaryClass={cs.CL cs.IR}
} | gangemi2001conceptual |
arxiv-670082 | cs/0109014 | Assigning Satisfaction Values to Constraints: An Algorithm to Solve Dynamic Meta-Constraints | <|reference_start|>Assigning Satisfaction Values to Constraints: An Algorithm to Solve Dynamic Meta-Constraints: The model of Dynamic Meta-Constraints has special activity constraints which can activate other constraints. It also has meta-constraints which range over other constraints. An algorithm is presented in which constraints can be assigned one of five different satisfaction values, which leads to the assignment of domain values to the variables in the CSP. An outline of the model and the algorithm is presented, followed by some initial results for two problems: a simple classic CSP and the Car Configuration Problem. The algorithm is shown to perform few backtracks per solution, but to have overheads in the form of historical records required for the implementation of state.<|reference_end|> | arxiv | @article{van der linden2001assigning,
title={Assigning Satisfaction Values to Constraints: An Algorithm to Solve
Dynamic Meta-Constraints},
author={Janet van der Linden (The Open University)},
journal={arXiv preprint arXiv:cs/0109014},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109014},
primaryClass={cs.PL cs.AI}
} | van der linden2001assigning |
arxiv-670083 | cs/0109015 | Boosting Trees for Anti-Spam Email Filtering | <|reference_start|>Boosting Trees for Anti-Spam Email Filtering: This paper describes a set of comparative experiments for the problem of automatically filtering unwanted electronic mail messages. Several variants of the AdaBoost algorithm with confidence-rated predictions [Schapire & Singer, 99] have been applied, which differ in the complexity of the base learners considered. Two main conclusions can be drawn from our experiments: a) The boosting-based methods clearly outperform the baseline learning algorithms (Naive Bayes and Induction of Decision Trees) on the PU1 corpus, achieving very high levels of the F1 measure; b) Increasing the complexity of the base learners allows to obtain better ``high-precision'' classifiers, which is a very important issue when misclassification costs are considered.<|reference_end|> | arxiv | @article{carreras2001boosting,
title={Boosting Trees for Anti-Spam Email Filtering},
author={Xavier Carreras and Lluis Marquez},
journal={Proceedings of RANLP-2001, pp. 58-64, Bulgaria, 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109015},
primaryClass={cs.CL}
} | carreras2001boosting |
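A minimal sketch of the simplest base learners used in the paper, confidence-rated AdaBoost with one-feature stumps over binary (word-presence) features, in the style of Schapire and Singer; the smoothing constant and the four-message toy data set are our own illustrative choices:
```python
import math

def train_adaboost(X, y, T=10, eps=0.5):
    """X: list of word sets, y: labels in {-1,+1}. Each round picks the
    feature minimizing Z = sum over branches of 2*sqrt(W+ W-) and emits
    a real-valued vote 0.5*ln(W+/W-) per branch (smoothed by eps/n)."""
    n = len(X)
    w = [1.0 / n] * n
    features = set().union(*X)
    H = []
    for _ in range(T):
        best = None
        for f in features:
            Wp, Wm = [1e-12] * 2, [1e-12] * 2    # [absent, present] masses
            for wi, xi, yi in zip(w, X, y):
                (Wp if yi > 0 else Wm)[int(f in xi)] += wi
            Z = sum(2 * math.sqrt(Wp[b] * Wm[b]) for b in (0, 1))
            if best is None or Z < best[0]:
                votes = [0.5 * math.log((Wp[b] + eps / n) / (Wm[b] + eps / n))
                         for b in (0, 1)]
                best = (Z, f, votes)
        _, f, votes = best
        H.append((f, votes))
        w = [wi * math.exp(-yi * votes[int(f in xi)])
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return H

def predict(H, x):
    return sum(votes[int(f in x)] for f, votes in H)

X = [{"free", "money"}, {"meeting", "tomorrow"}, {"free", "offer"},
     {"project", "meeting"}]
y = [1, -1, 1, -1]                        # +1 = spam
H = train_adaboost(X, y, T=5)
print(predict(H, {"free", "prize"}) > 0)  # True: classified as spam
```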
arxiv-670084 | cs/0109016 | Communications Convergence, Spectrum Use and Regulatory Constraints, Or Property Rights, Flexible Spectrum Use and Satellite v Terrestrial Uses and Users | <|reference_start|>Communications Convergence, Spectrum Use and Regulatory Constraints, Or Property Rights, Flexible Spectrum Use and Satellite v Terrestrial Uses and Users: As far as many consumers and businessmen and women are concerned, increasingly wireline and wireless services, including those provided by terrestrial and satellite systems, are considered to be substitutes and sometimes complements, regardless of the laws and regulations applicable to them. At the same time, many writers and even government agencies (such as the FCC) have suggested that users of the spectrum should be given more property-like rights in the use of the spectrum and at a minimum should be given much more flexibility in how they may use the spectrum. Two recent developments have important implications with respect to "convergence," spectrum property rights and flexible use of the spectrum. The first development involves several proposals to provide terrestrial wireless services within spectrum in use or planned to be used to provide satellite services. The second development is the passage of the 2000 ORBIT Act which specifically forbids the use of license auctions to select among mutually exclusive applicants to provide international or global satellite communications service. The purpose of this paper is to discuss some of the questions raised by these two events, but not necessarily to provide definitive answers or solutions.<|reference_end|> | arxiv | @article{webbink2001communications,
title={Communications Convergence, Spectrum Use and Regulatory Constraints, Or
Property Rights, Flexible Spectrum Use and Satellite v. Terrestrial Uses and
Users},
author={Douglas W. Webbink},
journal={arXiv preprint arXiv:cs/0109016},
year={2001},
number={TPRC-2001-030},
archivePrefix={arXiv},
eprint={cs/0109016},
primaryClass={cs.CY}
} | webbink2001communications |
arxiv-670085 | cs/0109017 | Learning from the Success of MPI | <|reference_start|>Learning from the Success of MPI: The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has occurred in spite of the view of many that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model.<|reference_end|> | arxiv | @article{gropp2001learning,
title={Learning from the Success of MPI},
author={William D. Gropp},
journal={arXiv preprint arXiv:cs/0109017},
year={2001},
number={ANL/MCS-P903-0801},
archivePrefix={arXiv},
eprint={cs/0109017},
primaryClass={cs.DC}
} | gropp2001learning |
arxiv-670086 | cs/0109018 | Exact Complexity of Exact-Four-Colorability | <|reference_start|>Exact Complexity of Exact-Four-Colorability: Let $M_k \subseteq \mathbb{N}$ be a given set that consists of $k$ noncontiguous integers. Define Exact-$M_k$-Colorability to be the problem of determining whether $\chi(G)$, the chromatic number of a given graph $G$, equals one of the $k$ elements of the set $M_k$ exactly. In 1987, Wagner \cite{wag:j:min-max} proved that Exact-$M_k$-Colorability is $\mathrm{BH}_{2k}(\mathrm{NP})$-complete, where $M_k = \{6k+1, 6k+3, \ldots, 8k-1\}$ and $\mathrm{BH}_{2k}(\mathrm{NP})$ is the $2k$th level of the boolean hierarchy over NP. In particular, for $k = 1$, it is DP-complete to determine whether $\chi(G) = 7$, where $\mathrm{DP} = \mathrm{BH}_{2}(\mathrm{NP})$. Wagner raised the question of how small the numbers in a $k$-element set $M_k$ can be chosen such that Exact-$M_k$-Colorability still is $\mathrm{BH}_{2k}(\mathrm{NP})$-complete. In particular, for $k = 1$, he asked if it is DP-complete to determine whether $\chi(G) = 4$. In this note, we solve this question of Wagner and determine the precise threshold $t \in \{4, 5, 6, 7\}$ for which the problem Exact-$\{t\}$-Colorability jumps from NP to DP-completeness: It is DP-complete to determine whether $\chi(G) = 4$, yet Exact-$\{3\}$-Colorability is in NP. More generally, for each $k \geq 1$, we show that Exact-$M_k$-Colorability is $\mathrm{BH}_{2k}(\mathrm{NP})$-complete for $M_k = \{3k+1, 3k+3, \ldots, 5k-1\}$.<|reference_end|> | arxiv | @article{rothe2001exact,
title={Exact Complexity of Exact-Four-Colorability},
author={J\"org Rothe},
journal={arXiv preprint arXiv:cs/0109018},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109018},
primaryClass={cs.CC}
} | rothe2001exact |
arxiv-670087 | cs/0109019 | Tracing Execution of Software for Design Coverage | <|reference_start|>Tracing Execution of Software for Design Coverage: Test suites are designed to validate the operation of a system against requirements. One important aspect of test suite design is to ensure that system operation logic is tested completely. A test suite should drive a system through all abstract states to exercise all possible cases of its operation. This is a difficult task. Code coverage tools support test suite designers by providing information about which parts of the source code are covered during system execution. Unfortunately, code coverage tools produce only source code coverage information. For a test engineer it is often hard to understand what the uncovered parts of the source code do and how they relate to requirements. We propose a generic approach that provides design coverage of the executed software, simplifying the development of new test suites. We demonstrate our approach on common design abstractions such as statecharts, activity diagrams, message sequence charts and structure diagrams. We implement design coverage using the Third Eye tracing and trace analysis framework. Using design coverage, test suites can be created faster by focusing on untested design elements.<|reference_end|> | arxiv | @article{lencevicius2001tracing,
title={Tracing Execution of Software for Design Coverage},
author={Raimondas Lencevicius, Edu Metz, and Alexander Ran},
journal={arXiv preprint arXiv:cs/0109019},
year={2001},
doi={10.1109/ASE.2001.989822},
archivePrefix={arXiv},
eprint={cs/0109019},
primaryClass={cs.SE}
} | lencevicius2001tracing |
arxiv-670088 | cs/0109020 | Modelling Semantic Association and Conceptual Inheritance for Semantic Analysis | <|reference_start|>Modelling Semantic Association and Conceptual Inheritance for Semantic Analysis: Allowing users to interact through language borders is an interesting challenge for information technology. For the purpose of a computer assisted language learning system, we have chosen icons for representing meaning on the input interface, since icons do not depend on a particular language. However, a key limitation of this type of communication is the expression of articulated ideas instead of isolated concepts. We propose a method to interpret sequences of icons as complex messages by reconstructing the relations between concepts, so as to build conceptual graphs able to represent meaning and to be used for natural language sentence generation. This method is based on an electronic dictionary containing semantic information.<|reference_end|> | arxiv | @article{vaillant2001modelling,
title={Modelling Semantic Association and Conceptual Inheritance for Semantic
Analysis},
author={Pascal Vaillant},
journal={Springer LNCS (LNAI) 2166 (2001), 54-61},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109020},
primaryClass={cs.CL}
} | vaillant2001modelling |
arxiv-670089 | cs/0109021 | Competing DNS Roots: Creative Destruction or Just Plain Destruction? | <|reference_start|>Competing DNS Roots: Creative Destruction or Just Plain Destruction?: The Internet Domain Name System (DNS) is a hierarchical name space that enables the assignment of unique, mnemonic identifiers to Internet hosts and the consistent mapping of these names to IP addresses. The root of the domain name system is the top of the hierarchy and is currently managed by a quasi-private centralized regulatory authority, the Internet Corporation for Assigned Names and Numbers (ICANN). This paper identifies and discusses the economic and policy issues raised by competing DNS roots. The paper provides a precise definition of root-competition and shows that multiple roots are a species of standards competition, in which network externalities play a major role. The paper performs a structural analysis of the different forms that competing DNS roots can take and their effects on end-user compatibility. It then explores the policy implications of the various forms of competition. The thesis of the paper is that root competition is caused by a severe disjunction between the demand for and supply of top-level domain names. ICANN has authorized a tiny number of new top-level domains (7) and subjected their operators to excruciatingly slow and expensive contractual negotiations. The growth of alternate DNS roots is an attempt to bypass that bottleneck. The paper arrives at the policy conclusion that competition among DNS roots should be permitted and is a healthy outlet for inefficiency or abuses of power by the dominant root administrator.<|reference_end|> | arxiv | @article{mueller2001competing,
title={Competing DNS Roots: Creative Destruction or Just Plain Destruction?},
author={Milton L. Mueller},
journal={arXiv preprint arXiv:cs/0109021},
year={2001},
number={TPRC-2001-029},
archivePrefix={arXiv},
eprint={cs/0109021},
primaryClass={cs.CY}
} | mueller2001competing |
arxiv-670090 | cs/0109022 | Interactive Timetabling | <|reference_start|>Interactive Timetabling: Timetabling is a typical application of constraint programming whose task is to allocate activities to slots in available resources while respecting various constraints such as precedence and capacity. In this paper we present a basic concept, a constraint model, and the solving algorithms for interactive timetabling. Interactive timetabling combines automated timetabling (the machine allocates the activities) with user interaction (the user can interfere with the process of timetabling). Because the user can see how the timetabling proceeds and can intervene in this process, we believe that such an approach is more convenient than fully automated timetabling, which behaves like a black box. The contribution of this paper is twofold: we present a generic model to describe timetabling (and scheduling in general) problems, and we propose an interactive algorithm for solving such problems.<|reference_end|> | arxiv | @article{muller2001interactive,
title={Interactive Timetabling},
author={Tomas Muller, Roman Bartak},
journal={arXiv preprint arXiv:cs/0109022},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109022},
primaryClass={cs.PL cs.AI}
} | muller2001interactive |
arxiv-670091 | cs/0109023 | Integrating Multiple Knowledge Sources for Robust Semantic Parsing | <|reference_start|>Integrating Multiple Knowledge Sources for Robust Semantic Parsing: This work explores a new robust approach for Semantic Parsing of unrestricted texts. Our approach considers Semantic Parsing as a Consistent Labelling Problem (CLP), allowing the integration of several knowledge types (syntactic and semantic) obtained from different sources (linguistic and statistic). The current implementation obtains 95% accuracy in model identification and 72% in case-role filling.<|reference_end|> | arxiv | @article{atserias2001integrating,
title={Integrating Multiple Knowledge Sources for Robust Semantic Parsing},
author={Jordi Atserias, Lluis Padro and German Rigau},
journal={Proceedings of Euroconference on Recent Advances in Natural
Language Processing (RANLP'01), p.8-14. Tzigov Chark, Bulgaria. Sept. 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109023},
primaryClass={cs.CL cs.AI}
} | atserias2001integrating |
arxiv-670092 | cs/0109024 | Verification of Timed Automata Using Rewrite Rules and Strategies | <|reference_start|>Verification of Timed Automata Using Rewrite Rules and Strategies: ELAN is a powerful language and environment for specifying and prototyping deduction systems in a language based on rewrite rules controlled by strategies. Timed automata are a class of continuous real-time models of reactive systems for which efficient model-checking algorithms have been devised. In this paper, we show that these algorithms can very easily be prototyped in the ELAN system. Through this example, the paper argues that rewriting-based systems relying on rules and strategies are a good framework in which to prototype, study and test, rather efficiently, symbolic model-checking algorithms, i.e. algorithms which involve a combination of graph exploration rules, deduction rules, constraint solving techniques and decision procedures.<|reference_end|> | arxiv | @article{beffara2001verification,
title={Verification of Timed Automata Using Rewrite Rules and Strategies},
author={Emmanuel Beffara, Olivier Bournez, Hassen Kacem, Claude Kirchner},
journal={arXiv preprint arXiv:cs/0109024},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109024},
primaryClass={cs.PL}
} | beffara2001verification |
arxiv-670093 | cs/0109025 | Dynamic Global Constraints: A First View | <|reference_start|>Dynamic Global Constraints: A First View: Global constraints have proved to be an efficient tool for modelling and solving large-scale real-life combinatorial problems. They encapsulate a set of binary constraints and, using global reasoning about this set, filter the domains of the involved variables better than arc consistency over the set of binary constraints. Moreover, global constraints exploit semantic information to achieve more efficient filtering than generalised consistency algorithms for n-ary constraints. The continued expansion of constraint programming (CP) to various application areas brings new challenges for the design of global constraints. In particular, the application of CP to advanced planning and scheduling (APS) requires the dynamic addition of new variables and constraints during the process of constraint satisfaction; thus, it would be helpful if global constraints could adopt new variables. In the paper, we give a motivation for such dynamic global constraints and we describe a dynamic version of the well-known alldifferent constraint.<|reference_end|> | arxiv | @article{bartak2001dynamic,
title={Dynamic Global Constraints: A First View},
author={Roman Bartak},
journal={arXiv preprint arXiv:cs/0109025},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109025},
primaryClass={cs.PL cs.AI}
} | bartak2001dynamic |
arxiv-670094 | cs/0109026 | Conceptualising Regulatory Change - Explaining Shifts in Telecommunications Governance | <|reference_start|>Conceptualising Regulatory Change - Explaining Shifts in Telecommunications Governance: Drawing on perspectives from telecommunications policy and neo-Gramscian understandings of international political economy, this paper offers an explanation and analysis of the shifting patterns of regulation which have been evident in the telecommunications sector in recent years. It aims to illustrate, explain and explore the implications of the movement of regulatory sovereignty away from the nation-state, through regional conduits, to global organisations in the crystallisation of a world system of telecommunications governance. Our central argument is that telecommunications governance has evolved from a regulatory arena characterised, in large part, by national diversity, to one wherein a more convergent global multilayered system is emerging. We suggest that the epicentre of this regulatory system is the relatively new World Trade Organisation (WTO). Working in concert with the WTO are existing well-established nodes of regulation. As a further complement, we see regional regulatory projects, notably the European Union (EU), as important conduits and nodes of regulation in the consolidation of a global regulatory regime. By way of procedure, we first explore the utility of a neo-Gramscian approach for understanding the development of global regulatory frameworks. Second, we survey something of the recent history - and, by extension, conventional wisdom - of telecommunications regulation at national and regional levels. Third, we demonstrate how a multilayered system of global telecommunications regulation has emerged, centred around the regulatory authority of the WTO. Finally, we offer our concluding comments.<|reference_end|> | arxiv | @article{simpson2001conceptualising,
title={Conceptualising Regulatory Change - Explaining Shifts in
Telecommunications Governance},
author={Seamus Simpson and Rorden Wilkinson},
journal={arXiv preprint arXiv:cs/0109026},
year={2001},
number={TPRC-2001-043},
archivePrefix={arXiv},
eprint={cs/0109026},
primaryClass={cs.CY}
} | simpson2001conceptualising |
arxiv-670095 | cs/0109027 | Routing Permutations in Partitioned Optical Passive Star Networks | <|reference_start|>Routing Permutations in Partitioned Optical Passive Star Networks: It is shown that a POPS network with g groups and d processors per group can efficiently route any permutation among the n=dg processors. The number of slots used is optimal in the worst case, and is at most twice the optimum for all permutations p such that p(i) ≠ i for all i.<|reference_end|> | arxiv | @article{mei2001routing,
title={Routing Permutations in Partitioned Optical Passive Star Networks},
author={Alessandro Mei and Romeo Rizzi},
journal={arXiv preprint arXiv:cs/0109027},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109027},
primaryClass={cs.DC cs.DS}
} | mei2001routing |
arxiv-670096 | cs/0109028 | Random Walks in Routing Landscapes | <|reference_start|>Random Walks in Routing Landscapes: In this paper we present a combinatorial optimisation view of the routing problem for connectionless packet networks, using the metaphor of a landscape. We examine the main properties of the routing landscapes as we define them, and how these properties can help us evaluate problem difficulty and generate effective algorithms. We also present the random walk statistical technique for evaluating the main properties of those landscapes, together with a number of examples that demonstrate the use of the method.<|reference_end|> | arxiv | @article{michalareas2001random,
title={Random Walks in Routing Landscapes},
author={T. Michalareas, L. Sacks},
journal={arXiv preprint arXiv:cs/0109028},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109028},
primaryClass={cs.NI cs.CC}
} | michalareas2001random |
arxiv-670097 | cs/0109029 | Learning class-to-class selectional preferences | <|reference_start|>Learning class-to-class selectional preferences: Selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class. This paper extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs. The motivation is twofold: different senses of a verb may have different preferences, and some classes of verbs can share preferences. The model is tested on a word sense disambiguation task which uses subject-verb and object-verb relationships extracted from a small sense-disambiguated corpus.<|reference_end|> | arxiv | @article{agirre2001learning,
title={Learning class-to-class selectional preferences},
author={E. Agirre, D. Martinez},
journal={Proceedings of the Workshop "Computational Natural Language
Learning" (CoNLL-2001). In conjunction with ACL'2001/EACL'2001. Toulouse,
France. 6-7th July 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109029},
primaryClass={cs.CL}
} | agirre2001learning |
arxiv-670098 | cs/0109030 | Knowledge Sources for Word Sense Disambiguation | <|reference_start|>Knowledge Sources for Word Sense Disambiguation: Two kinds of systems have been defined during the long history of WSD: principled systems that define which knowledge types are useful for WSD, and robust systems that use the information sources at hand, such as dictionaries, light-weight ontologies or hand-tagged corpora. This paper tries to systematize the relation between desired knowledge types and actual information sources. We also compare the results for a wide range of algorithms that have been evaluated on a common test setting in our research group. We hope that this analysis will help shift the focus from systems based on information sources to systems based on knowledge sources. This study might also shed some light on the semi-automatic acquisition of desired knowledge types from existing resources.<|reference_end|> | arxiv | @article{agirre2001knowledge,
title={Knowledge Sources for Word Sense Disambiguation},
author={Eneko Agirre and David Martinez},
journal={Proceedings of the Fourth International Conference TSD 2001, Plzen
(Pilsen), Czech Republic, September 2001. Published in the Springer Verlag
Lecture Notes in Computer Science series. Vaclav Matousek, Pavel Mautner,
Roman Moucek, Karel Tauser (eds.)},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109030},
primaryClass={cs.CL}
} | agirre2001knowledge |
arxiv-670099 | cs/0109031 | Enriching WordNet concepts with topic signatures | <|reference_start|>Enriching WordNet concepts with topic signatures: This paper explores the possibility of enriching the content of existing ontologies. The overall goal is to overcome the lack of topical links among concepts in WordNet. Each concept is to be associated to a topic signature, i.e., a set of related words with associated weights. The signatures can be automatically constructed from the WWW or from sense-tagged corpora. Both approaches are compared and evaluated on a word sense disambiguation task. The results show that it is possible to construct clean signatures from the WWW using some filtering techniques.<|reference_end|> | arxiv | @article{agirre2001enriching,
title={Enriching WordNet concepts with topic signatures},
author={Eneko Agirre, Olatz Ansa, Eduard Hovy and David Martinez},
journal={Proceedings of the NAACL workshop on WordNet and Other Lexical
Resources: Applications, Extensions and Customizations. Pittsburgh, 2001},
year={2001},
archivePrefix={arXiv},
eprint={cs/0109031},
primaryClass={cs.CL}
} | agirre2001enriching |
arxiv-670100 | cs/0109032 | The Internet, 1995-2000: Access, Civic Involvement, and Social Interaction | <|reference_start|>The Internet, 1995-2000: Access, Civic Involvement, and Social Interaction: Our research, which began fielding surveys in 1995 and repeated them with variation in 1996, 1997 and 2000, was apparently the first to use national random telephone survey methods to track social and community aspects of Internet use, and to compare users and non-users. It also seems to be among the first to use these methods to compare users with non-users with regard to communication, social and community issues. The work has been largely supported by grants from the Markle Foundation of New York City as well as the Robert Wood Johnson Foundation. Abridged; see full text for the complete abstract.<|reference_end|> | arxiv | @article{katz2001the,
title={The Internet, 1995-2000: Access, Civic Involvement, and Social
Interaction},
author={James Katz, Ronald E. Rice, and Philip Aspden},
journal={arXiv preprint arXiv:cs/0109032},
year={2001},
number={TPRC-2001-015},
archivePrefix={arXiv},
eprint={cs/0109032},
primaryClass={cs.CY}
} | katz2001the |