corpus_id: stringlengths 7 to 12
paper_id: stringlengths 9 to 16
title: stringlengths 1 to 261
abstract: stringlengths 70 to 4.02k
source: stringclasses (1 value)
bibtex: stringlengths 208 to 20.9k
citation_key: stringlengths 6 to 100
arxiv-670401
cs/0202034
Covariance Plasticity and Regulated Criticality
<|reference_start|>Covariance Plasticity and Regulated Criticality: We propose that a regulation mechanism based on Hebbian covariance plasticity may cause the brain to operate near criticality. We analyze the effect of such a regulation on the dynamics of a network with excitatory and inhibitory neurons and uniform connectivity within and across the two populations. We show that, under broad conditions, the system converges to a critical state lying at the common boundary of three regions in parameter space; these correspond to three modes of behavior: high activity, low activity, oscillation.<|reference_end|>
arxiv
@article{bienenstock2002covariance, title={Covariance Plasticity and Regulated Criticality}, author={Elie Bienenstock and Daniel Lehmann}, journal={Advances in Complex Systems, 1(4) (1998) pp. 361-384}, year={2002}, number={Center for Neural Computation, Hebrew University, Jerusalem TR-95-1}, archivePrefix={arXiv}, eprint={cs/0202034}, primaryClass={cs.NE cs.AI nlin.AO q-bio} }
bienenstock2002covariance
arxiv-670402
cs/0202035
Sprinkling Selections over Join DAGs for Efficient Query Optimization
<|reference_start|>Sprinkling Selections over Join DAGs for Efficient Query Optimization: In optimizing queries, solutions based on AND/OR DAGs can generate all possible join orderings and selection placements before searching for an optimal query execution strategy. But as the number of joins and selection conditions increases, the space and time complexity of generating an optimal query plan increases exponentially. In this paper, we use the join graph of a relational database schema either to pre-compute all possible join orderings that can be executed and store them as a join DAG, or to extract the joins in the queries and incrementally build a history join DAG as and when the queries are executed. The selection conditions in the queries are appropriately placed in the retrieved join DAG (or history join DAG) to generate an optimal query execution strategy. We experimentally evaluate our query optimization technique on TPC-D/H query sets to show its effectiveness over the AND/OR DAG query optimization strategy. Finally, we illustrate how our technique can be used for efficient multiple query optimization and selection of materialized views in data warehousing environments.<|reference_end|>
arxiv
@article{valluri2002sprinkling, title={Sprinkling Selections over Join DAGs for Efficient Query Optimization}, author={Satyanarayana R Valluri, Soujanya Vadapalli, Kamalakar Karlapalem}, journal={arXiv preprint arXiv:cs/0202035}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202035}, primaryClass={cs.DB} }
valluri2002sprinkling
arxiv-670403
cs/0202036
Equivalence and Isomorphism for Boolean Constraint Satisfaction
<|reference_start|>Equivalence and Isomorphism for Boolean Constraint Satisfaction: A Boolean constraint satisfaction instance is a conjunction of constraint applications, where the allowed constraints are drawn from a fixed set B of Boolean functions. We consider the problem of determining whether two given constraint satisfaction instances are equivalent and prove a Dichotomy Theorem by showing that for all sets B of allowed constraints, this problem is either polynomial-time solvable or coNP-complete, and we give a simple criterion to determine which case holds. A more general problem addressed in this paper is the isomorphism problem, the problem of determining whether there exists a renaming of the variables that makes two given constraint satisfaction instances equivalent in the above sense. We prove that this problem is coNP-hard if the corresponding equivalence problem is coNP-hard, and polynomial-time many-one reducible to the graph isomorphism problem in all other cases.<|reference_end|>
arxiv
@article{boehler2002equivalence, title={Equivalence and Isomorphism for Boolean Constraint Satisfaction}, author={E. Boehler, E. Hemaspaandra, Steffen Reith, Heribert Vollmer}, journal={arXiv preprint arXiv:cs/0202036}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202036}, primaryClass={cs.CC cs.LO} }
boehler2002equivalence
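To make the equivalence problem above concrete: two instances over the same variables are equivalent iff they have the same satisfying assignments. The following minimal brute-force check in Python is only a definition-level illustration; the representation (a list of (function, variable-tuple) applications) and all names are assumptions, and the exponential loop is precisely what the paper's dichotomy classifies into polynomial-time and coNP-complete cases.

```python
from itertools import product

# A constraint application pairs a Boolean function (over k inputs)
# with the tuple of variable names it is applied to.
def satisfies(instance, assignment):
    """True iff every constraint application holds under the assignment."""
    return all(f(*(assignment[v] for v in vars_)) for f, vars_ in instance)

def equivalent(inst1, inst2, variables):
    """Definition-level check: same satisfying assignments over `variables`.
    Exponential in len(variables); the paper shows the decision problem is
    either polynomial-time solvable or coNP-complete, depending on the
    allowed constraint set."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if satisfies(inst1, a) != satisfies(inst2, a):
            return False
    return True

# Example: x OR y is equivalent to y OR x, but not to x AND y.
OR = lambda x, y: x or y
AND = lambda x, y: x and y
print(equivalent([(OR, ("x", "y"))], [(OR, ("y", "x"))], ["x", "y"]))   # True
print(equivalent([(OR, ("x", "y"))], [(AND, ("x", "y"))], ["x", "y"]))  # False
```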
arxiv-670404
cs/0202037
Towards practical meta-querying
<|reference_start|>Towards practical meta-querying: We describe a meta-querying system for databases containing queries in addition to ordinary data. In the context of such databases, a meta-query is a query about queries. Representing stored queries in XML, and using the standard XML manipulation language XSLT as a sublanguage, we show that just a few features need to be added to SQL to turn it into a fully-fledged meta-query language. The good news is that these features can be directly supported by extensible database technology.<|reference_end|>
arxiv
@article{bussche2002towards, title={Towards practical meta-querying}, author={Jan Van den Bussche, Stijn Vansummeren, Gottfried Vossen}, journal={Information Systems, Volume 30, Issue 4 , June 2005, Pages 317-332}, year={2002}, doi={10.1016/j.is.2004.04.001}, archivePrefix={arXiv}, eprint={cs/0202037}, primaryClass={cs.DB} }
bussche2002towards
arxiv-670405
cs/0202038
The efficient generation of unstructured control volumes in 2D and 3D
<|reference_start|>The efficient generation of unstructured control volumes in 2D and 3D: Many problems in engineering, chemistry and physics require the representation of solutions in complex geometries. In this paper we deal with the problem of unstructured mesh generation for the control volume method. We propose an algorithm based on generating spheres at the central points of the control volumes.<|reference_end|>
arxiv
@article{jacek2002the, title={The efficient generation of unstructured control volumes in 2D and 3D}, author={Leszczynski Jacek, Pluta Sebastian}, journal={Lecture Notes in Computer Science (LNCS), Springer-Verlag, 2328, 2001, pp. 682-689}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202038}, primaryClass={cs.CG cs.CE cs.NA math.NA physics.comp-ph} }
jacek2002the
arxiv-670406
cs/0202039
Generalized Cores
<|reference_start|>Generalized Cores: Cores are, besides connectivity components, one of the few concepts that provide us with efficient decompositions of large graphs and networks. In this paper a generalization of the notion of core of a graph, based on a vertex property function, is presented. It is shown that for local monotone vertex property functions the corresponding cores can be determined in $O(m \max (\Delta, \log n))$ time.<|reference_end|>
arxiv
@article{batagelj2002generalized, title={Generalized Cores}, author={V. Batagelj and M. Zaverv{s}nik}, journal={Advances in Data Analysis and Classification, 2011. Volume 5, Number 2, 129-145}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202039}, primaryClass={cs.DS cs.DM} }
batagelj2002generalized
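The bound quoted in this abstract comes from a peeling procedure that generalizes classical k-core decomposition: repeatedly remove a vertex minimizing the property function and update its neighbours. Below is a hedged Python sketch (a lazy min-heap variant, roughly O(m log n); all function and variable names are illustrative). With p = degree it reduces to ordinary core numbers, which is a handy sanity check.

```python
import heapq

def generalized_core_numbers(adj, p):
    """Peel vertices in increasing order of the property function p.
    adj: dict mapping vertex -> set of neighbours.
    p:   p(v, neighbours) -> number; per the paper, p must be local and
         monotone for the peeling order to yield generalized core numbers."""
    alive = {v: set(ns) for v, ns in adj.items()}
    heap = [(p(v, alive[v]), v) for v in alive]
    heapq.heapify(heap)
    core = {}
    level = float("-inf")
    while heap:
        val, v = heapq.heappop(heap)
        if v not in alive or val != p(v, alive[v]):
            continue  # stale heap entry from an earlier re-key
        level = max(level, val)   # core numbers are monotone along the peel
        core[v] = level
        for u in alive[v]:
            alive[u].discard(v)
            heapq.heappush(heap, (p(u, alive[u]), u))  # lazy re-key
        del alive[v]
    return core

# With p = degree this is the usual k-core decomposition:
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(generalized_core_numbers(adj, lambda v, ns: len(ns)))
# {4: 1, 1: 2, 2: 2, 3: 2}: the triangle is the 2-core, the pendant is 1-core.
```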
arxiv-670407
cs/0203001
Towards Generic Refactoring
<|reference_start|>Towards Generic Refactoring: We study program refactoring while considering the language or even the programming paradigm as a parameter. We use typed functional programs, namely Haskell programs, as the specification medium for a corresponding refactoring framework. In order to detach ourselves from language syntax, our specifications adhere to the following style. (I) As for primitive algorithms for program analysis and transformation, we employ generic function combinators supporting generic traversal and polymorphic functions refined by ad-hoc cases. (II) As for the language abstractions involved in refactorings, we design a dedicated multi-parameter class. This class can be instantiated for abstractions as present in various languages, e.g., Java, Prolog or Haskell.<|reference_end|>
arxiv
@article{laemmel2002towards, title={Towards Generic Refactoring}, author={Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0203001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203001}, primaryClass={cs.PL} }
laemmel2002towards
arxiv-670408
cs/0203002
Another perspective on Default Reasoning
<|reference_start|>Another perspective on Default Reasoning: The lexicographic closure of any given finite set D of normal defaults is defined. A conditional assertion "if a then b" is in this lexicographic closure if, given the defaults D and the fact a, one would conclude b. The lexicographic closure is essentially a rational extension of D, and of its rational closure, defined in a previous paper. It provides a logic of normal defaults that is different from the one proposed by R. Reiter and that is rich enough not to require the consideration of non-normal defaults. A large number of examples are provided to show that the lexicographic closure corresponds to the basic intuitions behind Reiter's logic of defaults.<|reference_end|>
arxiv
@article{lehmann2002another, title={Another perspective on Default Reasoning}, author={Daniel Lehmann}, journal={Annals of Mathematics and Artificial Intelligence, 15(1) (1995) pp. 61-82}, year={2002}, number={Leibniz Center for Research in Computer Science TR-92-12, July 1992}, archivePrefix={arXiv}, eprint={cs/0203002}, primaryClass={cs.AI} }
lehmann2002another
arxiv-670409
cs/0203003
Deductive Nonmonotonic Inference Operations: Antitonic Representations
<|reference_start|>Deductive Nonmonotonic Inference Operations: Antitonic Representations: We provide a characterization of those nonmonotonic inference operations C for which C(X) may be described as the set of all logical consequences of X together with some set of additional assumptions S(X) that depends anti-monotonically on X (i.e., X is a subset of Y implies that S(Y) is a subset of S(X)). The operations represented are exactly characterized in terms of properties most of which have been studied in Freund-Lehmann (cs.AI/0202031). Similar characterizations of right-absorbing and cumulative operations are also provided. For cumulative operations, our results fit in closely with those of Freund. We then discuss extending finitary operations to infinitary operations in a canonical way and discuss co-compactness properties. Our results provide a satisfactory notion of pseudo-compactness, generalizing to deductive nonmonotonic operations the notion of compactness for monotonic operations. They also provide an alternative, more elegant and more general, proof of the existence of an infinitary deductive extension for any finitary deductive operation (Theorem 7.9 of Freund-Lehmann).<|reference_end|>
arxiv
@article{kaluzhny2002deductive, title={Deductive Nonmonotonic Inference Operations: Antitonic Representations}, author={Yuri Kaluzhny and Daniel Lehmann}, journal={Journal of Logic and Computation, 5(1) (1995) pp. 111-122}, year={2002}, number={Leibniz Center for Research in Computer Science TR-94-3, March 1994}, archivePrefix={arXiv}, eprint={cs/0203003}, primaryClass={cs.AI} }
kaluzhny2002deductive
arxiv-670410
cs/0203004
Stereotypical Reasoning: Logical Properties
<|reference_start|>Stereotypical Reasoning: Logical Properties: Stereotypical reasoning assumes that the situation at hand is one of a kind and that it enjoys the properties generally associated with that kind of situation. It is one of the most basic forms of nonmonotonic reasoning. A formal model for stereotypical reasoning is proposed and the logical properties of this form of reasoning are studied. Stereotypical reasoning is shown to be cumulative under weak assumptions.<|reference_end|>
arxiv
@article{lehmann2002stereotypical, title={Stereotypical Reasoning: Logical Properties}, author={Daniel Lehmann}, journal={Logic Journal of the Interest Group in Pure and Applied Logics (IGPL), 6(1) (1998) pp. 49-58}, year={2002}, number={Leibniz Center for Research in Computer Science TR-97-10}, archivePrefix={arXiv}, eprint={cs/0203004}, primaryClass={cs.AI} }
lehmann2002stereotypical
arxiv-670411
cs/0203005
A Framework for Compiling Preferences in Logic Programs
<|reference_start|>A Framework for Compiling Preferences in Logic Programs: We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s < t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.<|reference_end|>
arxiv
@article{delgrande2002a, title={A Framework for Compiling Preferences in Logic Programs}, author={J. P. Delgrande, T. Schaub, and H. Tompits}, journal={arXiv preprint arXiv:cs/0203005}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203005}, primaryClass={cs.AI} }
delgrande2002a
arxiv-670412
cs/0203006
Composing Programs in a Rewriting Logic for Declarative Programming
<|reference_start|>Composing Programs in a Rewriting Logic for Declarative Programming: Constructor-Based Conditional Rewriting Logic is a general framework for integrating first-order functional and logic programming which gives an algebraic semantics for non-deterministic functional-logic programs. In the context of this formalism, we introduce a simple notion of program module as an open program which can be extended, together with several mechanisms for combining such modules. These mechanisms are based on a reduced set of operations. However, the high expressiveness of these operations enables us to model typical constructs for program modularization like hiding, export/import, genericity/instantiation, and inheritance in a simple way. We also deal with the semantic aspects of the proposal by introducing an immediate consequence operator, and studying several alternative semantics for a program module, based on this operator, in the line of logic programming: the operator itself, its least fixpoint (the least model of the module), the set of its pre-fixpoints (term models of the module), and some other variations in order to find a compositional and fully abstract semantics wrt the set of operations and a natural notion of observability.<|reference_end|>
arxiv
@article{molina2002composing, title={Composing Programs in a Rewriting Logic for Declarative Programming}, author={Juan M. Molina and Ernesto Pimentel}, journal={arXiv preprint arXiv:cs/0203006}, year={2002}, number={LCC852}, archivePrefix={arXiv}, eprint={cs/0203006}, primaryClass={cs.LO cs.PL} }
molina2002composing
arxiv-670413
cs/0203007
Two results for prioritized logic programming
<|reference_start|>Two results for prioritized logic programming: Prioritized default reasoning has demonstrated its rich expressiveness and flexibility in knowledge representation and reasoning. However, many important aspects of prioritized default reasoning have yet to be thoroughly explored. In this paper, we investigate two properties of prioritized logic programs in the context of answer set semantics. Specifically, we reveal a close relationship between mutual defeasibility and uniqueness of the answer set for a prioritized logic program. We then explore how the splitting technique for extended logic programs can be extended to prioritized logic programs. We prove splitting theorems that can be used to simplify the evaluation of a prioritized logic program under certain conditions.<|reference_end|>
arxiv
@article{zhang2002two, title={Two results for prioritized logic programming}, author={Yan Zhang}, journal={arXiv preprint arXiv:cs/0203007}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203007}, primaryClass={cs.AI} }
zhang2002two
arxiv-670414
cs/0203008
Computational Geometry Column 43
<|reference_start|>Computational Geometry Column 43: The concept of pointed pseudo-triangulations is defined and a few of its applications described.<|reference_end|>
arxiv
@article{o'rourke2002computational, title={Computational Geometry Column 43}, author={Joseph O'Rourke}, journal={SIGACT News, 33(1) Issue 122, Mar. 2002, 58-60}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203008}, primaryClass={cs.CG cs.DM} }
o'rourke2002computational
arxiv-670415
cs/0203009
SPINning Parallel Systems Software
<|reference_start|>SPINning Parallel Systems Software: We describe our experiences in using SPIN to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the SPIN/PROMELA framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with SPIN.<|reference_end|>
arxiv
@article{matlin2002spinning, title={SPINning Parallel Systems Software}, author={O. S. Matlin, E. Lusk, and W. McCune}, journal={arXiv preprint arXiv:cs/0203009}, year={2002}, number={ANL/MCS-P921-1201}, archivePrefix={arXiv}, eprint={cs/0203009}, primaryClass={cs.LO cs.DC} }
matlin2002spinning
arxiv-670416
cs/0203010
On Learning by Exchanging Advice
<|reference_start|>On Learning by Exchanging Advice: One of the main questions concerning learning in Multi-Agent Systems is: (How) can agents benefit from mutual interaction during the learning process? This paper describes the study of an interactive advice-exchange mechanism as a possible way to improve agents' learning performance. The advice-exchange technique, discussed here, uses supervised learning (backpropagation), where reinforcement does not come directly from the environment but is based on advice given by peers with a better performance score (higher confidence), to enhance the performance of a heterogeneous group of Learning Agents (LAs). The LAs are facing similar problems, in an environment where only reinforcement information is available. Each LA applies a different, well known, learning technique: Random Walk (hill-climbing), Simulated Annealing, Evolutionary Algorithms and Q-Learning. The problem used for evaluation is a simplified traffic-control simulation. Initial results indicate that advice-exchange can improve learning speed, although bad advice and/or blind reliance can disturb the learning performance.<|reference_end|>
arxiv
@article{nunes2002on, title={On Learning by Exchanging Advice}, author={L. Nunes, E. Oliveira}, journal={arXiv preprint arXiv:cs/0203010}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203010}, primaryClass={cs.LG cs.MA} }
nunes2002on
arxiv-670417
cs/0203011
Capturing Knowledge of User Preferences: ontologies on recommender systems
<|reference_start|>Capturing Knowledge of User Preferences: ontologies on recommender systems: Tools for filtering the World Wide Web exist, but they are hampered by the difficulty of capturing user preferences in such a dynamic environment. We explore the acquisition of user profiles by unobtrusive monitoring of browsing behaviour and application of supervised machine-learning techniques coupled with an ontological representation to extract user preferences. A multi-class approach to paper classification is used, allowing the paper topic taxonomy to be utilised during profile construction. The Quickstep recommender system is presented and two empirical studies evaluate it in a real work setting, measuring the effectiveness of using a hierarchical topic ontology compared with an extendable flat list.<|reference_end|>
arxiv
@article{middleton2002capturing, title={Capturing Knowledge of User Preferences: ontologies on recommender systems}, author={S.E. Middleton, D.C. De Roure, N.R. Shadbolt}, journal={arXiv preprint arXiv:cs/0203011}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203011}, primaryClass={cs.LG cs.MA} }
middleton2002capturing
arxiv-670418
cs/0203012
Interface agents: A review of the field
<|reference_start|>Interface agents: A review of the field: This paper reviews the origins of interface agents, discusses challenges that exist within the interface agent field and presents a survey of current attempts to find solutions to these challenges. A history of agent systems from their birth in the 1960's to the current day is described, along with the issues they try to address. A taxonomy of interface agent systems is presented, and today's agent systems categorized accordingly. Lastly, an analysis of the machine learning and user modelling techniques used by today's agents is presented.<|reference_end|>
arxiv
@article{middleton2002interface, title={Interface agents: A review of the field}, author={Stuart E. Middleton}, journal={arXiv preprint arXiv:cs/0203012}, year={2002}, number={ECSTR-IAM01-001}, archivePrefix={arXiv}, eprint={cs/0203012}, primaryClass={cs.MA cs.LG} }
middleton2002interface
arxiv-670419
cs/0203013
Representing and Aggregating Conflicting Beliefs
<|reference_start|>Representing and Aggregating Conflicting Beliefs: We consider the two-fold problem of representing collective beliefs and aggregating these beliefs. We propose modular, transitive relations for collective beliefs. They allow us to represent conflicting opinions and they have a clear semantics. We compare them with the quasi-transitive relations often used in Social Choice. Then, we describe a way to construct the belief state of an agent informed by a set of sources of varying degrees of reliability. This construction circumvents Arrow's Impossibility Theorem in a satisfactory manner. Finally, we give a simple set-theory-based operator for combining the information of multiple agents. We show that this operator satisfies the desirable invariants of idempotence, commutativity, and associativity, and, thus, is well-behaved when iterated, and we describe a computationally effective way of computing the resulting belief state.<|reference_end|>
arxiv
@article{maynard-reid2002representing, title={Representing and Aggregating Conflicting Beliefs}, author={Pedrito Maynard-Reid II (Miami University), Daniel Lehmann (Hebrew University)}, journal={Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2000), April 2000, pp. 153-164}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203013}, primaryClass={cs.AI cs.LO} }
maynard-reid2002representing
arxiv-670420
cs/0203014
Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance
<|reference_start|>Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance: Research into active networking has provided the incentive to re-visit what has traditionally been classified as distinct properties and characteristics of information transfer such as protocol versus service; at a more fundamental level this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking and Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used and experimentally validated as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.<|reference_end|>
arxiv
@article{bush2002active, title={Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance}, author={Stephen F. Bush}, journal={IEEE Computer Society Press, Proceedings of the 2002 DARPA Active Networks Conference and Exposition (DANCE 2002), May 29-31, 2002, San Francisco, California, USA}, year={2002}, doi={10.1109/DANCE.2002.1003518}, archivePrefix={arXiv}, eprint={cs/0203014}, primaryClass={cs.CC cs.NI} }
bush2002active
arxiv-670421
cs/0203015
Towards Experimental Nanosound Using Almost Disjoint Set Theory
<|reference_start|>Towards Experimental Nanosound Using Almost Disjoint Set Theory: Music composition using digital audio sequence editors is increasingly performed in a visual workspace where sound complexes are built from discrete sound objects, called gestures, that are arranged in time and space to generate a continuous composition. The visual workspace, common to most industry-standard audio loop sequencing software, is premised on the arrangement of gestures defined with geometric shape properties. Here, one aspect of fractal set theory was validated using audio-frequency sets to evaluate self-affine scaling behavior when new sound complexes are built through union and intersection operations on discrete musical gestures. Results showed that the intersection of two sets revealed lower complexity compared with the union operator, meaning that the intersection of two sound gestures is an almost disjoint set, in accord with formal logic. These results are also discussed with reference to fuzzy sets, cellular automata, nanotechnology and self-organization to further explore the link between sequenced notation and complexity.<|reference_end|>
arxiv
@article{jones2002towards, title={Towards Experimental Nanosound Using Almost Disjoint Set Theory}, author={Cameron L Jones}, journal={arXiv preprint arXiv:cs/0203015}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203015}, primaryClass={cs.SD cs.LO} }
jones2002towards
arxiv-670422
cs/0203016
Dimension in Complexity Classes
<|reference_start|>Dimension in Complexity Classes: A theory of resource-bounded dimension is developed using gales, which are natural generalizations of martingales. When the resource bound \Delta (a parameter of the theory) is unrestricted, the resulting dimension is precisely the classical Hausdorff dimension (sometimes called fractal dimension). Other choices of the parameter \Delta yield internal dimension theories in E, E2, ESPACE, and other complexity classes, and in the class of all decidable problems. In general, if C is such a class, then every set X of languages has a dimension in C, which is a real number dim(X|C) in [0,1]. Along with the elements of this theory, two preliminary applications are presented: 1. For every real number \alpha in (0,1/2), the set FREQ(<=\alpha), consisting of all languages that asymptotically contain at most \alpha of all strings, has dimension H(\alpha) -- the binary entropy of \alpha -- in E and in E2. 2. For every real number \alpha in (0,1), the set SIZE(\alpha*(2^n)/n), consisting of all languages decidable by Boolean circuits of at most \alpha*(2^n)/n gates, has dimension \alpha in ESPACE.<|reference_end|>
arxiv
@article{lutz2002dimension, title={Dimension in Complexity Classes}, author={Jack H. Lutz}, journal={arXiv preprint arXiv:cs/0203016}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203016}, primaryClass={cs.CC} }
lutz2002dimension
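For reference, the binary entropy function H(α) in application 1 of this abstract is the standard one; written out, the stated result reads:

```latex
H(\alpha) = \alpha \log_2 \frac{1}{\alpha} + (1-\alpha) \log_2 \frac{1}{1-\alpha},
\qquad
\dim\bigl(\mathrm{FREQ}(\le\alpha) \mid \mathrm{E}\bigr)
  = \dim\bigl(\mathrm{FREQ}(\le\alpha) \mid \mathrm{E_2}\bigr)
  = H(\alpha)
\quad \text{for } \alpha \in (0, 1/2).
```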
arxiv-670423
cs/0203017
The Dimensions of Individual Strings and Sequences
<|reference_start|>The Dimensions of Individual Strings and Sequences: A constructive version of Hausdorff dimension is developed using constructive supergales, which are betting strategies that generalize the constructive supermartingales used in the theory of individual random sequences. This constructive dimension is used to assign every individual (infinite, binary) sequence S a dimension, which is a real number dim(S) in the interval [0,1]. Sequences that are random (in the sense of Martin-Lof) have dimension 1, while sequences that are decidable, \Sigma^0_1, or \Pi^0_1 have dimension 0. It is shown that for every \Delta^0_2-computable real number \alpha in [0,1] there is a \Delta^0_2 sequence S such that dim(S) = \alpha. A discrete version of constructive dimension is also developed using termgales, which are supergale-like functions that bet on the terminations of (finite, binary) strings as well as on their successive bits. This discrete dimension is used to assign each individual string w a dimension, which is a nonnegative real number dim(w). The dimension of a sequence is shown to be the limit infimum of the dimensions of its prefixes. The Kolmogorov complexity of a string is proven to be the product of its length and its dimension. This gives a new characterization of algorithmic information and a new proof of Mayordomo's recent theorem stating that the dimension of a sequence is the limit infimum of the average Kolmogorov complexity of its first n bits. Every sequence that is random relative to any computable sequence of coin-toss biases that converge to a real number \beta in (0,1) is shown to have dimension H(\beta), the binary entropy of \beta.<|reference_end|>
arxiv
@article{lutz2002the, title={The Dimensions of Individual Strings and Sequences}, author={Jack H. Lutz}, journal={arXiv preprint arXiv:cs/0203017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203017}, primaryClass={cs.CC} }
lutz2002the
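The last results in this abstract combine into a compact characterization worth displaying; K(w) denotes the Kolmogorov complexity of w and S[0..n-1] the first n bits of S (equalities as stated in the abstract, i.e., up to the paper's conventions for additive constants):

```latex
K(w) = |w| \cdot \dim(w)
\qquad\text{and}\qquad
\dim(S) = \liminf_{n\to\infty} \dim\bigl(S[0..n-1]\bigr)
        = \liminf_{n\to\infty} \frac{K\bigl(S[0..n-1]\bigr)}{n}.
```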
arxiv-670424
cs/0203018
Improving Table Compression with Combinatorial Optimization
<|reference_start|>Improving Table Compression with Combinatorial Optimization: We study the problem of compressing massive tables within the partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which a table is partitioned by an off-line training procedure into disjoint intervals of columns, each of which is compressed separately by a standard, on-line compressor like gzip. We provide a new theory that unifies previous experimental observations on partitioning and heuristic observations on column permutation, all of which are used to improve compression rates. Based on the theory, we devise the first on-line training algorithms for table compression, which can be applied to individual files, not just continuously operating sources; and also a new, off-line training algorithm, based on a link to the asymmetric traveling salesman problem, which improves on prior work by rearranging columns prior to partitioning. We demonstrate these results experimentally. On various test files, the on-line algorithms provide 35-55% improvement over gzip with negligible slowdown; the off-line reordering provides up to 20% further improvement over partitioning alone. We also show that a variation of the table compression problem is MAX-SNP hard.<|reference_end|>
arxiv
@article{buchsbaum2002improving, title={Improving Table Compression with Combinatorial Optimization}, author={Adam L. Buchsbaum, Glenn S. Fowler, Raffaele Giancarlo}, journal={JACM 50(6):825-851, 2003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203018}, primaryClass={cs.DS} }
buchsbaum2002improving
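The partition-training idea above lends itself to a textbook interval dynamic program: choose column boundaries so the total compressed size of the intervals is minimal. A minimal Python sketch under assumed names, with zlib standing in for gzip; this illustrates only the partitioning step, not the paper's on-line training or column-reordering contributions.

```python
import zlib

def compressed_size(table, i, j):
    """Bytes needed to compress columns i..j-1, serialized row by row."""
    blob = "\n".join(",".join(row[i:j]) for row in table).encode()
    return len(zlib.compress(blob))

def best_partition(table):
    """O(c^2) dynamic program over the c columns:
    cost[j] = min over i < j of cost[i] + size of interval [i, j)."""
    c = len(table[0])
    cost = [0] + [float("inf")] * c
    cut = [0] * (c + 1)
    for j in range(1, c + 1):
        for i in range(j):
            s = cost[i] + compressed_size(table, i, j)
            if s < cost[j]:
                cost[j], cut[j] = s, i
    # Recover the interval boundaries by walking the cut pointers back.
    bounds, j = [], c
    while j > 0:
        bounds.append((cut[j], j))
        j = cut[j]
    return cost[c], bounds[::-1]

table = [["a", "1", "x"], ["a", "2", "x"], ["b", "3", "y"]]
print(best_partition(table))  # total compressed bytes, column intervals
```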
arxiv-670425
cs/0203019
GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing
<|reference_start|>GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing: Clusters, grids, and peer-to-peer (P2P) networks have emerged as popular paradigms for next generation parallel and distributed computing. The management of resources and scheduling of applications in such large-scale distributed systems is a complex undertaking. In order to prove the effectiveness of resource brokers and associated scheduling algorithms, their performance needs to be evaluated under different scenarios such as a varying number of resources and users with different requirements. In a grid environment, it is hard or even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. To overcome this limitation, we have developed a Java-based discrete-event grid simulation toolkit called GridSim. The toolkit supports modeling and simulation of heterogeneous grid resources (both time- and space-shared), users and application models. It provides primitives for creation of application tasks, mapping of tasks to resources, and their management. To demonstrate the suitability of the GridSim toolkit, we have simulated a Nimrod-G like grid resource broker and evaluated the performance of deadline and budget constrained cost- and time-minimization scheduling algorithms.<|reference_end|>
arxiv
@article{buyya2002gridsim:, title={GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing}, author={Rajkumar Buyya and Manzur Murshed}, journal={Concurrency and Computation: Practice and Experience, Wiley, May 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203019}, primaryClass={cs.DC} }
buyya2002gridsim:
arxiv-670426
cs/0203020
A Deadline and Budget Constrained Cost-Time Optimisation Algorithm for Scheduling Task Farming Applications on Global Grids
<|reference_start|>A Deadline and Budget Constrained Cost-Time Optimisation Algorithm for Scheduling Task Farming Applications on Global Grids: Computational Grids and peer-to-peer (P2P) networks enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The management and composition of resources and services for scheduling applications, however, becomes a complex undertaking. We have proposed a computational economy framework for regulating the supply and demand for resources and allocating them to applications based on the users' quality-of-service requirements. The framework requires economy-driven deadline and budget constrained (DBC) scheduling algorithms for allocating resources to application jobs in such a way that the users' requirements are met. In this paper, we propose a new scheduling algorithm, called DBC cost-time optimisation, which extends the DBC cost-optimisation algorithm to optimise for time, keeping the cost of computation at the minimum. The superiority of this new scheduling algorithm, in achieving lower job completion time, is demonstrated by simulating the World-Wide Grid and scheduling task-farming applications for different deadline and budget scenarios using both this new and the cost optimisation scheduling algorithms.<|reference_end|>
arxiv
@article{buyya2002a, title={A Deadline and Budget Constrained Cost-Time Optimisation Algorithm for Scheduling Task Farming Applications on Global Grids}, author={Rajkumar Buyya and Manzur Murshed}, journal={Technical Report, Monash University, March 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203020}, primaryClass={cs.DC} }
buyya2002a
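A hedged sketch of how such a DBC cost-time policy can be read (the technical report is authoritative; the Resource fields and pricing model below are invented for illustration): jobs go to the cheapest resources that still meet the deadline and budget, with ties on cost broken in favour of faster resources, improving completion time at no extra cost.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cost_per_job: float      # illustrative per-job price
    secs_per_job: float      # effective time per job on this resource
    busy_until: float = 0.0  # when the resource next becomes free

def dbc_cost_time_schedule(n_jobs, resources, deadline, budget):
    """Sketch of deadline-and-budget-constrained cost-time optimisation:
    sort by (cost, speed) so that among equally cheap resources the faster
    one is used first, then greedily assign jobs that still fit both the
    deadline and the remaining budget."""
    plan, spent = [], 0.0
    for _ in range(n_jobs):
        for r in sorted(resources, key=lambda r: (r.cost_per_job, r.secs_per_job)):
            finish = r.busy_until + r.secs_per_job
            if finish <= deadline and spent + r.cost_per_job <= budget:
                r.busy_until = finish
                spent += r.cost_per_job
                plan.append((r.name, finish))
                break
        else:
            return None  # infeasible under the given deadline and budget
    return plan, spent

res = [Resource("cheap-slow", 1, 10), Resource("cheap-fast", 1, 5), Resource("dear", 3, 2)]
print(dbc_cost_time_schedule(4, res, deadline=20, budget=10))
```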
arxiv-670427
cs/0203021
NetNeg: A Connectionist-Agent Integrated System for Representing Musical Knowledge
<|reference_start|>NetNeg: A Connectionist-Agent Integrated System for Representing Musical Knowledge: The system presented here shows the feasibility of modeling the knowledge involved in a complex musical activity by integrating sub-symbolic and symbolic processes. This research focuses on the question of whether there is any advantage in integrating a neural network together with a distributed artificial intelligence approach within the music domain. The primary purpose of our work is to design a model that describes the different aspects a user might be interested in considering when involved in a musical activity. The approach we suggest in this work enables the musician to encode his knowledge, intuitions, and aesthetic taste into different modules. The system captures these aspects by computing and applying three distinct functions: rules, fuzzy concepts, and learning. As a case study, we began experimenting with first species two-part counterpoint melodies. We have developed a hybrid system composed of a connectionist module and an agent-based module to combine the sub-symbolic and symbolic levels to achieve this task. The technique presented here to represent musical knowledge constitutes a new approach for composing polyphonic music.<|reference_end|>
arxiv
@article{goldman2002netneg:, title={NetNeg: A Connectionist-Agent Integrated System for Representing Musical Knowledge}, author={Claudia V. Goldman, Dan Gang, Jeffrey S. Rosenschein and Daniel Lehmann}, journal={Annals of Mathematics and Artificial Intelligence, 25 (1999) pp. 69-90}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203021}, primaryClass={cs.AI cs.MA} }
goldman2002netneg:
arxiv-670428
cs/0203022
Three Optimisations for Sharing
<|reference_start|>Three Optimisations for Sharing: In order to improve precision and efficiency, sharing analysis should track both freeness and linearity. The abstract unification algorithms for these combined domains are suboptimal, hence there is scope for improving precision. This paper proposes three optimisations for tracing sharing in combination with freeness and linearity. A novel connection between equations and sharing abstractions is used to establish correctness of these optimisations even in the presence of rational trees. A method for pruning intermediate sharing abstractions to improve efficiency is also proposed. The optimisations are lightweight and therefore some, if not all, of these optimisations will be of interest to the implementor.<|reference_end|>
arxiv
@article{howe2002three, title={Three Optimisations for Sharing}, author={Jacob M. Howe and Andy King}, journal={arXiv preprint arXiv:cs/0203022}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203022}, primaryClass={cs.PL} }
howe2002three
arxiv-670429
cs/0203023
Agent trade servers in financial exchange systems
<|reference_start|>Agent trade servers in financial exchange systems: New services based on the best-effort paradigm could complement the current deterministic services of an electronic financial exchange. Four crucial aspects of such systems would benefit from a hybrid stance: proper use of processing resources, bandwidth management, fault tolerance, and exception handling. We argue that a more refined view on Quality-of-Service control for exchange systems, in which the principal ambition of upholding a fair and orderly marketplace is left uncompromised, would benefit all interested parties.<|reference_end|>
arxiv
@article{lyback2002agent, title={Agent trade servers in financial exchange systems}, author={David Lyback and Magnus Boman}, journal={arXiv preprint arXiv:cs/0203023}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203023}, primaryClass={cs.CE} }
lyback2002agent
arxiv-670430
cs/0203024
The structure of broad topics on the Web
<|reference_start|>The structure of broad topics on the Web: The Web graph is a giant social network whose properties have been measured and modeled extensively in recent years. Most such studies concentrate on the graph structure alone, and do not consider textual properties of the nodes. Consequently, Web communities have been characterized purely in terms of graph structure and not on page content. We propose that a topic taxonomy such as Yahoo! or the Open Directory provides a useful framework for understanding the structure of content-based clusters and communities. In particular, using a topic taxonomy and an automatic classifier, we can measure the background distribution of broad topics on the Web, and analyze the capability of recent random walk algorithms to draw samples which follow such distributions. In addition, we can measure the probability that a page about one broad topic will link to another broad topic. Extending this experiment, we can measure how quickly topic context is lost while walking randomly on the Web graph. Estimates of this topic mixing distance may explain why a global PageRank is still meaningful in the context of broad queries. In general, our measurements may prove valuable in the design of community-specific crawlers and link-based ranking systems.<|reference_end|>
arxiv
@article{chakrabarti2002the, title={The structure of broad topics on the Web}, author={Soumen Chakrabarti, Mukul M. Joshi, Kunal Punera, David M. Pennock}, journal={arXiv preprint arXiv:cs/0203024}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203024}, primaryClass={cs.IR cs.DL} }
chakrabarti2002the
arxiv-670431
cs/0203025
Sufficiently Fat Polyhedra are not 2-castable
<|reference_start|>Sufficiently Fat Polyhedra are not 2-castable: In this note we consider the problem of manufacturing a convex polyhedral object via casting. We consider a generalization of the sand casting process where the object is manufactured by gluing together two identical faces of parts cast with a single piece mold. In this model we show that the class of convex polyhedra that can be enclosed between two concentric spheres with a radius ratio of less than 1.07 cannot be manufactured using only two cast parts.<|reference_end|>
arxiv
@article{bremner2002sufficiently, title={Sufficiently Fat Polyhedra are not 2-castable}, author={David Bremner and Alexander Golynski}, journal={arXiv preprint arXiv:cs/0203025}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203025}, primaryClass={cs.CG} }
bremner2002sufficiently
arxiv-670432
cs/0203026
Conformal Geometry, Euclidean Space and Geometric Algebra
<|reference_start|>Conformal Geometry, Euclidean Space and Geometric Algebra: Projective geometry provides the preferred framework for most implementations of Euclidean space in graphics applications. Translations and rotations are both linear transformations in projective geometry, which helps when it comes to programming complicated geometrical operations. But there is a fundamental weakness in this approach - the Euclidean distance between points is not handled in a straightforward manner. Here we discuss a solution to this problem, based on conformal geometry. The language of geometric algebra is best suited to exploiting this geometry, as it handles the interior and exterior products in a single, unified framework. A number of applications are discussed, including a compact formula for reflecting a line off a general spherical surface.<|reference_end|>
arxiv
@article{doran2002conformal, title={Conformal Geometry, Euclidean Space and Geometric Algebra}, author={Chris Doran, Anthony Lasenby and Joan Lasenby}, journal={arXiv preprint arXiv:cs/0203026}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203026}, primaryClass={cs.CG cs.GR math.MG} }
doran2002conformal
arxiv-670433
cs/0203027
The Algorithms of Updating Sequential Patterns
<|reference_start|>The Algorithms of Updating Sequential Patterns: Because the data being mined in the temporal database will evolve with time, many researchers have focused on the incremental mining of frequent sequences in temporal database. In this paper, we propose an algorithm called IUS, using the frequent and negative border sequences in the original database for incremental sequence mining. To deal with the case where some data need to be updated from the original database, we present an algorithm called DUS to maintain sequential patterns in the updated database. We also define the negative border sequence threshold: Min_nbd_supp to control the number of sequences in the negative border.<|reference_end|>
arxiv
@article{zheng2002the, title={The Algorithms of Updating Sequential Patterns}, author={Qingguo Zheng, Ke Xu, Shilong Ma, Weifeng Lv}, journal={The Second SIAM Data mining2002: workshop HPDM}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203027}, primaryClass={cs.DB cs.AI} }
zheng2002the
arxiv-670434
cs/0203028
When to Update the sequential patterns of stream data?
<|reference_start|>When to Update the sequential patterns of stream data?: In this paper, we first define a difference measure between the old and new sequential patterns of stream data, which is proved to be a distance. Then we propose an experimental method, called TPD (Tradeoff between Performance and Difference), to decide when to update the sequential patterns of stream data by making a tradeoff between the performance of incremental updating algorithms and the difference of sequential patterns. The experiments for the incremental updating algorithm IUS on two data sets show that, generally, as the size of incremental windows grows, the values of the speedup and the values of the difference will decrease and increase respectively. It is also shown experimentally that the incremental ratio determined by the TPD method does not monotonically increase or decrease but changes in a range between 20 and 30 percent for the IUS algorithm.<|reference_end|>
arxiv
@article{zheng2002when, title={When to Update the sequential patterns of stream data?}, author={Qingguo Zheng, Ke Xu, Shilong Ma}, journal={arXiv preprint arXiv:cs/0203028}, year={2002}, number={NLSDE_01_2002_3_27}, archivePrefix={arXiv}, eprint={cs/0203028}, primaryClass={cs.DB cs.AI} }
zheng2002when
arxiv-670435
cs/0203029
Forbidden Information
<|reference_start|>Forbidden Information: Goedel Incompleteness Theorem leaves open a way around it, vaguely perceived for a long time but not clearly identified. (Thus, Goedel believed informal arguments can answer any math question.) Closing this loophole does not seem obvious and involves Kolmogorov complexity. (This is unrelated to, well studied before, complexity quantifications of the usual Goedel effects.) I consider extensions U of the universal partial recursive predicate (or, say, Peano Arithmetic). I prove that any U either leaves an n-bit input (statement) unresolved or contains nearly all information about the n-bit prefix of any r.e. real r (which is n bits for some r). I argue that creating significant information about a SPECIFIC math sequence is impossible regardless of the methods used. Similar problems and answers apply to other unsolvability results for tasks allowing multiple solutions, e.g. non-recursive tilings.<|reference_end|>
arxiv
@article{levin2002forbidden, title={Forbidden Information}, author={Leonid A. Levin}, journal={JACM, 60(2), 2013}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203029}, primaryClass={cs.CC} }
levin2002forbidden
arxiv-670436
cs/0203030
Source Routing and Scheduling in Packet Networks
<|reference_start|>Source Routing and Scheduling in Packet Networks: We study {\em routing} and {\em scheduling} in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is {\em admissible} if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms. When the paths are known (either given by the adversary or computed as above) our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this paper we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet. Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.<|reference_end|>
arxiv
@article{andrews2002source, title={Source Routing and Scheduling in Packet Networks}, author={Matthew Andrews, Antonio Fernandez, Ashish Goel, and Lisa Zhang}, journal={arXiv preprint arXiv:cs/0203030}, year={2002}, archivePrefix={arXiv}, eprint={cs/0203030}, primaryClass={cs.NI cs.DC} }
andrews2002source
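The routing half of this paper computes, per packet, a shortest path under link lengths that reflect current congestion. Below is a small Python sketch of that step (Dijkstra with an exponential length function, a common choice in this line of work; the exact length function and all names here are assumptions, not the paper's).

```python
import heapq

def route(graph, load, src, dst, mu=2.0):
    """Dijkstra where edge (u, v) has length mu ** load[(u, v)]:
    heavily loaded links look long, so new paths avoid congestion."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in graph[u]:
            nd = d + mu ** load.get((u, v), 0)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, v = [dst], dst          # walk predecessors back to the source
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1]

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
load = {("s", "a"): 3}               # the s-a link is congested
print(route(graph, load, "s", "t"))  # ['s', 'b', 't']
```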
arxiv-670437
cs/0204001
A steady state model for graph power laws
<|reference_start|>A steady state model for graph power laws: Power law distribution seems to be an important characteristic of web graphs. Several existing web graph models generate power law graphs by adding new vertices and non-uniform edge connectivities to existing graphs. Researchers have conjectured that preferential connectivity and incremental growth are both required for the power law distribution. In this paper, we propose a different web graph model with power law distribution that does not require incremental growth. We also provide a comparison of our model with several others in their ability to predict web graph clustering behavior.<|reference_end|>
arxiv
@article{eppstein2002a, title={A steady state model for graph power laws}, author={David Eppstein and Joseph Wang}, journal={arXiv preprint arXiv:cs/0204001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204001}, primaryClass={cs.DM cond-mat.dis-nn cs.SI} }
eppstein2002a
arxiv-670438
cs/0204002
Coin-Moving Puzzles
<|reference_start|>Coin-Moving Puzzles: We introduce a new family of one-player games, involving the movement of coins from one configuration to another. Moves are restricted so that a coin can be placed only in a position that is adjacent to at least two other coins. The goal of this paper is to specify exactly which of these games are solvable. By introducing the notion of a constant number of extra coins, we give tight theorems characterizing solvable puzzles on the square grid and equilateral-triangle grid. These existence results are supplemented by polynomial-time algorithms for finding a solution.<|reference_end|>
arxiv
@article{demaine2002coin-moving, title={Coin-Moving Puzzles}, author={Erik D. Demaine, Martin L. Demaine, Helena A. Verrill}, journal={arXiv preprint arXiv:cs/0204002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204002}, primaryClass={cs.DM cs.CG cs.GT} }
demaine2002coin-moving
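The placement rule in this puzzle family is compact enough to state in code: a lifted coin may be put down only on a spot adjacent to at least two other coins. Here is a tiny Python check on the square grid (4-adjacency is assumed for illustration; the paper's precise adjacency convention governs).

```python
GRID_NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # square-grid adjacency

def legal_move(coins, src, dst):
    """A coin at src may move to dst only if, after lifting it, dst is
    adjacent to at least two other coins (the puzzle's placement rule)."""
    if src not in coins or dst in coins:
        return False
    rest = coins - {src}
    touching = sum((dst[0] + dx, dst[1] + dy) in rest for dx, dy in GRID_NBRS)
    return touching >= 2

coins = {(0, 0), (1, 0), (2, 0), (1, 1)}
print(legal_move(coins, (0, 0), (2, 1)))  # True: (2,1) touches (2,0) and (1,1)
print(legal_move(coins, (0, 0), (3, 0)))  # False: only (2,0) is adjacent
```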
arxiv-670439
cs/0204003
Blind Normalization of Speech From Different Channels and Speakers
<|reference_start|>Blind Normalization of Speech From Different Channels and Speakers: This paper describes representations of time-dependent signals that are invariant under any invertible time-independent transformation of the signal time series. Such a representation is created by rescaling the signal in a non-linear dynamic manner that is determined by recently encountered signal levels. This technique may make it possible to normalize signals that are related by channel-dependent and speaker-dependent transformations, without having to characterize the form of the signal transformations, which remain unknown. The technique is illustrated by applying it to the time-dependent spectra of speech that has been filtered to simulate the effects of different channels. The experimental results show that the rescaled speech representations are largely normalized (i.e., channel-independent), despite the channel-dependence of the raw (unrescaled) speech.<|reference_end|>
arxiv
@article{levin2002blind, title={Blind Normalization of Speech From Different Channels and Speakers}, author={David N. Levin (U. of Chicago)}, journal={arXiv preprint arXiv:cs/0204003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204003}, primaryClass={cs.CL} }
levin2002blind
arxiv-670440
cs/0204004
Models and Tools for Collaborative Annotation
<|reference_start|>Models and Tools for Collaborative Annotation: The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.<|reference_end|>
arxiv
@article{ma2002models, title={Models and Tools for Collaborative Annotation}, author={Xiaoyi Ma, Haejoong Lee, Steven Bird and Kazuaki Maeda}, journal={Proceedings of the Third International Conference on Language Resources and Evaluation, Paris: European Language Resources Association, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204004}, primaryClass={cs.CL cs.SD} }
ma2002models
arxiv-670441
cs/0204005
Creating Annotation Tools with the Annotation Graph Toolkit
<|reference_start|>Creating Annotation Tools with the Annotation Graph Toolkit: The Annotation Graph Toolkit is a collection of software supporting the development of annotation tools based on the annotation graph model. The toolkit includes application programming interfaces for manipulating annotation graph data and for importing data from other formats. There are interfaces for the scripting languages Tcl and Python, a database interface, specialized graphical user interfaces for a variety of annotation tasks, and several sample applications. This paper describes all the toolkit components for the benefit of would-be application developers.<|reference_end|>
arxiv
@article{maeda2002creating, title={Creating Annotation Tools with the Annotation Graph Toolkit}, author={Kazuaki Maeda, Steven Bird, Xiaoyi Ma, and Haejoong Lee}, journal={Proceedings of the Third International Conference on Language Resources and Evaluation, Paris: European Language Resources Association, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204005}, primaryClass={cs.CL cs.SD} }
maeda2002creating
arxiv-670442
cs/0204006
TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse Tools Built on the Annotation Graph Toolkit
<|reference_start|>TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse Tools Built on the Annotation Graph Toolkit: Four diverse tools built on the Annotation Graph Toolkit are described. Each tool associates linguistic codes and structures with time-series data. All are based on the same software library and tool architecture. TableTrans is for observational coding, using a spreadsheet whose rows are aligned to a signal. MultiTrans is for transcribing multi-party communicative interactions recorded using multi-channel signals. InterTrans is for creating interlinear text aligned to audio. TreeTrans is for creating and manipulating syntactic trees. This work demonstrates that the development of diverse tools and re-use of software components is greatly facilitated by a common high-level application programming interface for representing the data and managing input/output, together with a common architecture for managing the interaction of multiple components.<|reference_end|>
arxiv
@article{bird2002tabletrans, title={TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse Tools Built on the Annotation Graph Toolkit}, author={Steven Bird, Kazuaki Maeda, Xiaoyi Ma, Haejoong Lee, Beth Randall, and Salim Zayat}, journal={Proceedings of the Third International Conference on Language Resources and Evaluation, Paris: European Language Resources Association, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204006}, primaryClass={cs.CL cs.SD} }
bird2002tabletrans,
arxiv-670443
cs/0204007
An Integrated Framework for Treebanks and Multilayer Annotations
<|reference_start|>An Integrated Framework for Treebanks and Multilayer Annotations: Treebank formats and associated software tools are proliferating rapidly, with little consideration for interoperability. We survey a wide variety of treebank structures and operations, and show how they can be mapped onto the annotation graph model, leading to an integrated framework encompassing tree and non-tree annotations alike. This development opens up new possibilities for managing and exploiting multilayer annotations.<|reference_end|>
arxiv
@article{cotton2002an, title={An Integrated Framework for Treebanks and Multilayer Annotations}, author={Scott Cotton and Steven Bird}, journal={Proceedings of the Third International Conference on Language Resources and Evaluation, Paris: European Language Resources Association, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204007}, primaryClass={cs.CL} }
cotton2002an
arxiv-670444
cs/0204008
The tip-of-the-tongue phenomenon: Irrelevant neural network localization or disruption of its interneuron links ?
<|reference_start|>The tip-of-the-tongue phenomenon: Irrelevant neural network localization or disruption of its interneuron links ?: On the basis of a recently proposed three-stage quantitative neural network model of the tip-of-the-tongue (TOT) phenomenon, the possibility of TOT states caused by the disruption of a neural network's interneuron links has been studied. Using a numerical example, it was found that TOTs caused by interneuron links' disruption are (1.5 +- 0.3)x1000 times less probable than those caused by irrelevant (incomplete) neural network localization. It was shown that the etiology of delayed TOT states cannot be related to the disruption of a neural network's interneuron links.<|reference_end|>
arxiv
@article{gopych2002the, title={The tip-of-the-tongue phenomenon: Irrelevant neural network localization or disruption of its interneuron links ?}, author={Petro M. Gopych}, journal={arXiv preprint arXiv:cs/0204008}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204008}, primaryClass={cs.CL cs.AI q-bio.NC q-bio.QM} }
gopych2002the
arxiv-670445
cs/0204009
New Results on Monotone Dualization and Generating Hypergraph Transversals
<|reference_start|>New Results on Monotone Dualization and Generating Hypergraph Transversals: We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem.<|reference_end|>
arxiv
@article{eiter2002new, title={New Results on Monotone Dualization and Generating Hypergraph Transversals}, author={Thomas Eiter, Georg Gottlob, and Kazuhisa Makino}, journal={arXiv preprint arXiv:cs/0204009}, year={2002}, number={INFSYS RR-1843-02-05, Institut f. Informationssysteme, TU Wien, April 2002}, archivePrefix={arXiv}, eprint={cs/0204009}, primaryClass={cs.DS cs.CC} }
eiter2002new
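As a quick numeric illustration of the bound in the abstract above (our own sketch, not code from the paper): the quantity chi(n), defined implicitly by chi(n)^chi(n) = n, can be computed by solving the equivalent equation x*ln(x) = ln(n), and tabulating it next to ln(n) makes the claim chi(n) = o(log n) concrete.

```python
import math

def chi(n: float) -> float:
    """Solve x**x = n, i.e. x*ln(x) = ln(n), for x >= 1 by bisection."""
    target = math.log(n)
    lo, hi = 1.0, target + 1.0          # chi(n) < ln(n) once n >= 16
    for _ in range(80):                  # plenty of iterations to converge
        mid = (lo + hi) / 2
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (10**3, 10**6, 10**9, 10**12):
    print(f"n=10^{round(math.log10(n))}  chi(n)={chi(n):6.3f}  ln(n)={math.log(n):6.3f}")
```

For n = 10^12, for example, chi(n) is about 11.4 while ln(n) is about 27.6, and the gap keeps widening as n grows.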
arxiv-670446
cs/0204010
On the Computational Complexity of Consistent Query Answers
<|reference_start|>On the Computational Complexity of Consistent Query Answers: We consider here the problem of obtaining reliable, consistent information from inconsistent databases -- databases that do not have to satisfy given integrity constraints. We use the notion of consistent query answer -- a query answer which is true in every (minimal) repair of the database. We provide a complete classification of the computational complexity of consistent answers to first-order queries w.r.t. functional dependencies and denial constraints. We show how the complexity depends on the {\em type} of the constraints considered, their {\em number}, and the {\em size} of the query. We obtain several new PTIME cases, using new algorithms.<|reference_end|>
arxiv
@article{chomicki2002on, title={On the Computational Complexity of Consistent Query Answers}, author={Jan Chomicki, Jerzy Marcinkowski}, journal={arXiv preprint arXiv:cs/0204010}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204010}, primaryClass={cs.DB} }
chomicki2002on
arxiv-670447
cs/0204011
Fair Stateless Aggregate Traffic Marking using Active Queue Management Techniques
<|reference_start|>Fair Stateless Aggregate Traffic Marking using Active Queue Management Techniques: In heterogeneous networks such as today's Internet, the differentiated services architecture promises to provide QoS guarantees through scalable service differentiation. Traffic marking is an important component of this framework. In this paper, we propose two new aggregate markers that are stateless, scalable and fair. We leverage stateless Active Queue Management (AQM) algorithms to enable fair and efficient token distribution among individual flows of an aggregate. The first marker, Probabilistic Aggregate Marker (PAM), uses the Token Bucket burst size to probabilistically mark incoming packets to ensure TCP-friendly and proportionally fair marking. The second marker, Stateless Aggregate Fair Marker (F-SAM), approximates fair queueing techniques to isolate flows while marking packets of the aggregate. It distributes tokens evenly among the flows without maintaining per-flow state. Our simulation results show that our marking strategies achieve up to 30% improvement over other commonly used markers while marking flow aggregates. These improvements are in terms of better average throughput and fairness indices, in scenarios containing heterogeneous traffic consisting of TCP (both long-lived elephants and short-lived mice) and misbehaving UDP flows. As a bonus, F-SAM helps the mice to win the war against elephants.<|reference_end|>
arxiv
@article{das2002fair, title={Fair Stateless Aggregate Traffic Marking using Active Queue Management Techniques}, author={Abhimanyu Das, Deboyjoti Dutta, Ahmed Helmy}, journal={arXiv preprint arXiv:cs/0204011}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204011}, primaryClass={cs.NI} }
das2002fair
arxiv-670448
cs/0204012
Exploiting Synergy Between Ontologies and Recommender Systems
<|reference_start|>Exploiting Synergy Between Ontologies and Recommender Systems: Recommender systems learn about user preferences over time, automatically finding things of similar interest. This reduces the burden of creating explicit queries. Recommender systems do, however, suffer from cold-start problems where no initial information is available early on upon which to base recommendations. Semantic knowledge structures, such as ontologies, can provide valuable domain knowledge and user information. However, acquiring such knowledge and keeping it up to date is not a trivial task and user interests are particularly difficult to acquire and maintain. This paper investigates the synergy between a web-based research paper recommender system and an ontology containing information automatically extracted from departmental databases available on the web. The ontology is used to address the recommender systems cold-start problem. The recommender system addresses the ontology's interest-acquisition problem. An empirical evaluation of this approach is conducted and the performance of the integrated systems measured.<|reference_end|>
arxiv
@article{middleton2002exploiting, title={Exploiting Synergy Between Ontologies and Recommender Systems}, author={Stuart E. Middleton, Harith Alani, David C. De Roure}, journal={arXiv preprint arXiv:cs/0204012}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204012}, primaryClass={cs.LG cs.MA} }
middleton2002exploiting
arxiv-670449
cs/0204013
The Sketch of a Polymorphic Symphony
<|reference_start|>The Sketch of a Polymorphic Symphony: In previous work, we have introduced functional strategies, that is, first-class generic functions that can traverse into terms of any type while mixing uniform and type-specific behaviour. In the present paper, we give a detailed description of one particular Haskell-based model of functional strategies. This model is characterised as follows. Firstly, we employ first-class polymorphism as a form of second-order polymorphism as for the mere types of functional strategies. Secondly, we use an encoding scheme of run-time type case for mixing uniform and type-specific behaviour. Thirdly, we base all traversal on a fundamental combinator for folding over constructor applications. Using this model, we capture common strategic traversal schemes in a highly parameterised style. We study two original forms of parameterisation. Firstly, we design parameters for the specific control-flow, data-flow and traversal characteristics of more concrete traversal schemes. Secondly, we use overloading to postpone commitment to a specific type scheme of traversal. The resulting portfolio of traversal schemes can be regarded as a challenging benchmark for setups for typed generic programming. The way we develop the model and the suite of traversal schemes, it becomes clear that parameterised + typed strategic programming is best viewed as a potent combination of certain bits of parametric, intensional, polytypic, and ad-hoc polymorphism.<|reference_end|>
arxiv
@article{laemmel2002the, title={The Sketch of a Polymorphic Symphony}, author={Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0204013}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204013}, primaryClass={cs.PL} }
laemmel2002the
arxiv-670450
cs/0204014
An Assessment of the Consistency for Software Measurement Methods
<|reference_start|>An Assessment of the Consistency for Software Measurement Methods: Consistency, defined as the requirement that a series of measurements of the same project carried out by different raters using the same method should produce similar results, is one of the most important aspects to be taken into account in software measurement methods. In spite of this, there is a widespread view that many measurement methods introduce an undesirable amount of subjectivity into the measurement process. This perception has made several organizations develop revisions of the standard methods whose main aim is to improve their consistency by introducing suitable modifications of those aspects which are believed to introduce a greater degree of subjectivity. Each revision of a method must be empirically evaluated to determine to what extent the aim of improving its consistency is achieved. In this article we define a homogeneous statistic intended to describe the consistency level of a method, and we develop the statistical analysis which should be carried out in order to conclude whether or not one measurement method is more consistent than another.<|reference_end|>
arxiv
@article{monge2002an, title={An Assessment of the Consistency for Software Measurement Methods}, author={R. Asensio Monge (U. of Oviedo), F. Sanchis Marco (U.P. of Madrid), F. Torre Cervigon (U. of Oviedo)}, journal={arXiv preprint arXiv:cs/0204014}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204014}, primaryClass={cs.SE} }
monge2002an
arxiv-670451
cs/0204015
Design Patterns for Functional Strategic Programming
<|reference_start|>Design Patterns for Functional Strategic Programming: In previous work, we introduced the fundamentals and a supporting combinator library for \emph{strategic programming}. This is an idiom for generic programming based on the notion of a \emph{functional strategy}: a first-class generic function that can not only be applied to terms of any type, but also allows generic traversal into subterms and can be customized with type-specific behaviour. This paper seeks to provide practicing functional programmers with pragmatic guidance in crafting their own strategic programs. We present the fundamentals and the support from a user's perspective, and we initiate a catalogue of \emph{strategy design patterns}. These design patterns aim at consolidating strategic programming expertise in accessible form.<|reference_end|>
arxiv
@article{laemmel2002design, title={Design Patterns for Functional Strategic Programming}, author={Ralf Laemmel and Joost Visser}, journal={arXiv preprint arXiv:cs/0204015}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204015}, primaryClass={cs.PL} }
laemmel2002design
arxiv-670452
cs/0204016
Making Abstract Domains Condensing
<|reference_start|>Making Abstract Domains Condensing: In this paper we show that reversible analysis of logic languages by abstract interpretation can be performed without loss of precision by systematically refining abstract domains. The idea is to include semantic structures into abstract domains in such a way that the refined abstract domain becomes rich enough to allow approximate bottom-up and top-down semantics to agree. These domains are known as condensing abstract domains. Essentially, an abstract domain is condensing if goal-driven and goal-independent analyses agree, namely no loss of precision is introduced by approximating queries in a goal-independent analysis. We prove that condensation is an abstract domain property and that the problem of making an abstract domain condensing boils down to the problem of making the domain complete with respect to unification. In a general abstract interpretation setting we show that when concrete domains and operations give rise to quantales, i.e. models of propositional linear logic, objects in a complete refined abstract domain can be explicitly characterized by linear logic-based formulations. This is the case for abstract domains for logic program analysis approximating computed answer substitutions, where unification plays the role of multiplicative conjunction in a quantale of idempotent substitutions. Condensing abstract domains can therefore be systematically derived by minimally extending any (generally non-condensing) domain by a simple domain refinement operator.<|reference_end|>
arxiv
@article{giacobazzi2002making, title={Making Abstract Domains Condensing}, author={R. Giacobazzi, F. Ranzato and F. Scozzari}, journal={arXiv preprint arXiv:cs/0204016}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204016}, primaryClass={cs.PL cs.LO} }
giacobazzi2002making
arxiv-670453
cs/0204017
Solitaire Clobber
<|reference_start|>Solitaire Clobber: Clobber is a new two-player board game. In this paper, we introduce the one-player variant Solitaire Clobber where the goal is to remove as many stones as possible from the board by alternating white and black moves. We show that a checkerboard configuration on a single row (or single column) can be reduced to about n/4 stones. For boards with at least two rows and two columns, we show that a checkerboard configuration can be reduced to a single stone if and only if the number of stones is not a multiple of three, and otherwise it can be reduced to two stones. We also show that in general it is NP-complete to decide whether an arbitrary Clobber configuration can be reduced to a single stone.<|reference_end|>
arxiv
@article{demaine2002solitaire, title={Solitaire Clobber}, author={Erik D. Demaine, Martin L. Demaine, Rudolf Fleischer}, journal={arXiv preprint arXiv:cs/0204017}, year={2002}, number={HKUST-TCSC-2002-05}, archivePrefix={arXiv}, eprint={cs/0204017}, primaryClass={cs.DM cs.CG cs.GT} }
demaine2002solitaire
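The exact two-dimensional result quoted above has a closed form that is trivial to encode; the snippet below is our own illustration (function name hypothetical), not code from the paper.

```python
def min_stones_checkerboard(rows: int, cols: int) -> int:
    """Fewest stones to which a full rows x cols checkerboard Clobber
    position can be reduced, for boards with at least two rows and two
    columns (the case settled exactly in the abstract above)."""
    if rows < 2 or cols < 2:
        raise ValueError("closed form covers only boards with >= 2 rows and columns")
    n = rows * cols              # a checkerboard start has one stone per square
    return 1 if n % 3 != 0 else 2
```

So an 8x8 board (64 stones) reduces to a single stone, while a 3x4 board (12 stones, a multiple of three) bottoms out at two.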
arxiv-670454
cs/0204018
A Framework for Datatype Transformation
<|reference_start|>A Framework for Datatype Transformation: We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.<|reference_end|>
arxiv
@article{kort2002a, title={A Framework for Datatype Transformation}, author={Jan Kort and Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0204018}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204018}, primaryClass={cs.PL} }
kort2002a
arxiv-670455
cs/0204019
Fast Universalization of Investment Strategies with Provably Good Relative Returns
<|reference_start|>Fast Universalization of Investment Strategies with Provably Good Relative Returns: A universalization of a parameterized investment strategy is an online algorithm whose average daily performance approaches that of the strategy operating with the optimal parameters determined offline in hindsight. We present a general framework for universalizing investment strategies and discuss conditions under which investment strategies are universalizable. We present examples of common investment strategies that fit into our framework. The examples include both trading strategies that decide positions in individual stocks, and portfolio strategies that allocate wealth among multiple stocks. This work extends Cover's universal portfolio work. We also discuss the runtime efficiency of universalization algorithms. While a straightforward implementation of our algorithms runs in time exponential in the number of parameters, we show that the efficient universal portfolio computation technique of Kalai and Vempala involving the sampling of log-concave functions can be generalized to other classes of investment strategies.<|reference_end|>
arxiv
@article{akcoglu2002fast, title={Fast Universalization of Investment Strategies with Provably Good Relative Returns}, author={Karhan Akcoglu, Petros Drineas, Ming-Yang Kao}, journal={arXiv preprint arXiv:cs/0204019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204019}, primaryClass={cs.CE cs.DS} }
akcoglu2002fast
arxiv-670456
cs/0204020
Seven Dimensions of Portability for Language Documentation and Description
<|reference_start|>Seven Dimensions of Portability for Language Documentation and Description: The process of documenting and describing the world's languages is undergoing radical transformation with the rapid uptake of new digital technologies for capture, storage, annotation and dissemination. However, uncritical adoption of new tools and technologies is leading to resources that are difficult to reuse and which are less portable than the conventional printed resources they replace. We begin by reviewing current uses of software tools and digital technologies for language documentation and description. This sheds light on how digital language documentation and description are created and managed, leading to an analysis of seven portability problems under the following headings: content, format, discovery, access, citation, preservation and rights. After characterizing each problem we provide a series of value statements, and this provides the framework for a broad range of best practice recommendations.<|reference_end|>
arxiv
@article{bird2002seven, title={Seven Dimensions of Portability for Language Documentation and Description}, author={Steven Bird and Gary Simons}, journal={Proceedings of the Workshop on Portability Issues in Human Language Technologies, Third International Conference on Language Resources and Evaluation, Paris: European Language Resources Association, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204020}, primaryClass={cs.CL cs.DL} }
bird2002seven
arxiv-670457
cs/0204021
Factual and Legal Risks regarding wireless Computer Networks
<|reference_start|>Factual and Legal Risks regarding wireless Computer Networks: The IEEE 802.11b wireless ethernet standard has several serious security flaws. This paper describes these flaws, surveys wireless networks in the Cologne/Bonn area to assess the security configurations of fielded networks, and analyzes the legal protections provided to wireless ethernet operators by German law. We conclude that wireless ethernets without additional security measures are not usable for any transmissions which are not meant for a public audience.<|reference_end|>
arxiv
@article{dornseif2002factual, title={Factual and Legal Risks regarding wireless Computer Networks}, author={Maximillian Dornseif and Kay Schumann and Christian Klein}, journal={DuD - Datenschutz und Datensicherheit, 4/2002, S. 226ff, Vieweg, ISSN 0724-4371}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204021}, primaryClass={cs.CY cs.CR} }
dornseif2002factual
arxiv-670458
cs/0204022
Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure for Interdisciplinary Education, Research and Development
<|reference_start|>Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure for Interdisciplinary Education, Research and Development: Annotation graphs and annotation servers offer infrastructure to support the analysis of human language resources in the form of time-series data such as text, audio and video. This paper outlines areas of common need among empirical linguists and computational linguists. After reviewing examples of data and tools used or under development for each of several areas, it proposes a common framework for future tool development, data annotation and resource sharing based upon annotation graphs and servers.<|reference_end|>
arxiv
@article{cieri2002annotation, title={Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure for Interdisciplinary Education, Research and Development}, author={Christopher Cieri and Steven Bird}, journal={Proceedings of ACL Workshop on Sharing Tools and Resources for Research and Education, Toulouse, July 2001, pp 23-30}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204022}, primaryClass={cs.CL} }
cieri2002annotation
arxiv-670459
cs/0204023
Computational Phonology
<|reference_start|>Computational Phonology: Phonology, as it is practiced, is deeply computational. Phonological analysis is data-intensive and the resulting models are nothing other than specialized data structures and algorithms. In the past, phonological computation - managing data and developing analyses - was done manually with pencil and paper. Increasingly, with the proliferation of affordable computers, IPA fonts and drawing software, phonologists are seeking to move their computation work online. Computational Phonology provides the theoretical and technological framework for this migration, building on methodologies and tools from computational linguistics. This piece consists of an apology for computational phonology, a history, and an overview of current research.<|reference_end|>
arxiv
@article{bird2002computational, title={Computational Phonology}, author={Steven Bird}, journal={Oxford International Encyclopedia of Linguistics, 2nd Edition, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204023}, primaryClass={cs.CL} }
bird2002computational
arxiv-670460
cs/0204024
The Geometric Maximum Traveling Salesman Problem
<|reference_start|>The Geometric Maximum Traveling Salesman Problem: We consider the traveling salesman problem when the cities are points in R^d for some fixed d and distances are computed according to geometric distances, determined by some norm. We show that for any polyhedral norm, the problem of finding a tour of maximum length can be solved in polynomial time. If arithmetic operations are assumed to take unit time, our algorithms run in time O(n^{f-2} log n), where f is the number of facets of the polyhedron determining the polyhedral norm. Thus for example we have O(n^2 log n) algorithms for the cases of points in the plane under the Rectilinear and Sup norms. This is in contrast to the fact that finding a minimum length tour in each case is NP-hard. Our approach can be extended to the more general case of quasi-norms with not necessarily symmetric unit ball, where we get a complexity of O(n^{2f-2} log n). For the special case of two-dimensional metrics with f=4 (which includes the Rectilinear and Sup norms), we present a simple algorithm with O(n) running time. The algorithm does not use any indirect addressing, so its running time remains valid even in comparison based models in which sorting requires Omega(n \log n) time. The basic mechanism of the algorithm provides some intuition on why polyhedral norms allow fast algorithms. Complementing the results on simplicity for polyhedral norms, we prove that for the case of Euclidean distances in R^d for d>2, the Maximum TSP is NP-hard. This sheds new light on the well-studied difficulties of Euclidean distances.<|reference_end|>
arxiv
@article{barvinok2002the, title={The Geometric Maximum Traveling Salesman Problem}, author={Alexander Barvinok, Sandor P. Fekete, David S. Johnson, Arie Tamir, Gerhard J. Woeginger, Russ Woodroofe}, journal={Journal of the ACM, 50 (5) 2003, 641-664.}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204024}, primaryClass={cs.DS cs.CC} }
barvinok2002the
arxiv-670461
cs/0204025
Phonology
<|reference_start|>Phonology: Phonology is the systematic study of the sounds used in language, their internal structure, and their composition into syllables, words and phrases. Computational phonology is the application of formal and computational techniques to the representation and processing of phonological information. This chapter will present the fundamentals of descriptive phonology along with a brief overview of computational phonology.<|reference_end|>
arxiv
@article{bird2002phonology, title={Phonology}, author={Steven Bird}, journal={In Ruslan Mitkov (ed) (2002). Oxford Handbook of Computational Linguistics}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204025}, primaryClass={cs.CL} }
bird2002phonology
arxiv-670462
cs/0204026
Querying Databases of Annotated Speech
<|reference_start|>Querying Databases of Annotated Speech: Annotated speech corpora are databases consisting of signal data along with time-aligned symbolic `transcriptions'. Such databases are typically multidimensional, heterogeneous and dynamic. These properties present a number of tough challenges for representation and query. The temporal nature of the data adds an additional layer of complexity. This paper presents and harmonises two independent efforts to model annotated speech databases, one at Macquarie University and one at the University of Pennsylvania. Various query languages are described, along with illustrative applications to a variety of analytical problems. The research reported here forms a part of several ongoing projects to develop platform-independent open-source tools for creating, browsing, searching, querying and transforming linguistic databases, and to disseminate large linguistic databases over the internet.<|reference_end|>
arxiv
@article{cassidy2002querying, title={Querying Databases of Annotated Speech}, author={Steve Cassidy and Steven Bird}, journal={Database Technologies: Proceedings of the Eleventh Australasian Database Conference, pp. 12-20, IEEE Computer Society, 2000}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204026}, primaryClass={cs.CL cs.DB} }
cassidy2002querying
arxiv-670463
cs/0204027
Integrating selectional preferences in WordNet
<|reference_start|>Integrating selectional preferences in WordNet: Selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class. This paper extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs, together with an algorithm to integrate the learned preferences in WordNet. The theoretical motivation is twofold: different senses of a verb may have different preferences, and classes of verbs may share preferences. On the practical side, class-to-class selectional preferences can be learned from untagged corpora (the same as word-to-class preferences); they provide selectional preferences for less frequent word senses via inheritance; and, more importantly, they allow for easy integration in WordNet. The model is trained on subject-verb and object-verb relationships extracted from a small corpus disambiguated with WordNet senses. Examples are provided illustrating that the theoretical motivations are well founded, and showing that the approach is feasible. Experimental results on a word sense disambiguation task are also provided.<|reference_end|>
arxiv
@article{agirre2002integrating, title={Integrating selectional preferences in WordNet}, author={Eneko Agirre and David Martinez}, journal={Proceedings of First International WordNet Conference. Mysore (India). 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204027}, primaryClass={cs.CL} }
agirre2002integrating
arxiv-670464
cs/0204028
Decision Lists for English and Basque
<|reference_start|>Decision Lists for English and Basque: In this paper we describe the systems we developed for the English (lexical and all-words) and Basque tasks. They were all supervised systems based on Yarowsky's Decision Lists. We used Semcor for training in the English all-words task. We defined different feature sets for each language. For Basque, in order to extract all the information from the text, we defined features that have not been used before in the literature, using a morphological analyzer. We also implemented systems that automatically selected good features and were able to obtain a preset precision (85%) at the cost of coverage. The systems that used all the features were identified as BCU-ehu-dlist-all and the systems that selected some features as BCU-ehu-dlist-best.<|reference_end|>
arxiv
@article{agirre2002decision, title={Decision Lists for English and Basque}, author={Eneko Agirre and David Martinez}, journal={Proceedings of the SENSEVAL-2 Workshop. In conjunction with ACL'2001/EACL'2001. Toulouse}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204028}, primaryClass={cs.CL} }
agirre2002decision
arxiv-670465
cs/0204029
The Basque task: did systems perform in the upperbound?
<|reference_start|>The Basque task: did systems perform in the upperbound?: In this paper we describe the Senseval 2 Basque lexical-sample task. The task comprised 40 words (15 nouns, 15 verbs and 10 adjectives) selected from Euskal Hiztegia, the main Basque dictionary. Most examples were taken from the Egunkaria newspaper. The method used to hand-tag the examples produced low inter-tagger agreement (75%) before arbitration. The four competing systems attained results well above the most frequent baseline and the best system scored 75% precision at 100% coverage. The paper includes an analysis of the tagging procedure used, as well as the performance of the competing systems. In particular, we argue that inter-tagger agreement is not a real upperbound for the Basque WSD task.<|reference_end|>
arxiv
@article{agirre2002the, title={The Basque task: did systems perform in the upperbound?}, author={Eneko Agirre, Elena Garcia, Mikel Lersundi, David Martinez and Eli Pociello}, journal={Proceedings of the SENSEVAL-2 Workshop. In conjunction with ACL'2001/EACL'2001. Toulouse}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204029}, primaryClass={cs.CL} }
agirre2002the
arxiv-670466
cs/0204030
Fast Hands-free Writing by Gaze Direction
<|reference_start|>Fast Hands-free Writing by Gaze Direction: We describe a method for text entry based on inverse arithmetic coding that relies on gaze direction and which is faster and more accurate than using an on-screen keyboard. These benefits are derived from two innovations: the writing task is matched to the capabilities of the eye, and a language model is used to make predictable words and phrases easier to write.<|reference_end|>
arxiv
@article{ward2002fast, title={Fast Hands-free Writing by Gaze Direction}, author={David J. Ward and David J.C. MacKay}, journal={Nature 418, 2002 p. 838 (22nd August 2002) www.nature.com}, year={2002}, doi={10.1038/418838a}, archivePrefix={arXiv}, eprint={cs/0204030}, primaryClass={cs.HC cs.AI} }
ward2002fast
arxiv-670467
cs/0204031
A Dynamic Approach to Characterizing Termination of General Logic Programs
<|reference_start|>A Dynamic Approach to Characterizing Termination of General Logic Programs: We present a new characterization of termination of general logic programs. Most existing termination analysis approaches rely on some static information about the structure of the source code of a logic program, such as modes/types, norms/level mappings, models/interargument relations, and the like. We propose a dynamic approach which employs some key dynamic features of an infinite (generalized) SLDNF-derivation, such as repetition of selected subgoals and recursive increase in term size. We also introduce a new formulation of SLDNF-trees, called generalized SLDNF-trees. Generalized SLDNF-trees deal with negative subgoals in the same way as Prolog and exist for any general logic programs.<|reference_end|>
arxiv
@article{shen2002a, title={A Dynamic Approach to Characterizing Termination of General Logic Programs}, author={Yi-Dong Shen, Jia-Huai You, Li-Yan Yuan, Samuel S. P. Shen and Qiang Yang}, journal={ACM Transactions on Computational Logic 4(4):417-430, 2003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204031}, primaryClass={cs.LO cs.PL} }
shen2002a
arxiv-670468
cs/0204032
Belief Revision and Rational Inference
<|reference_start|>Belief Revision and Rational Inference: The (extended) AGM postulates for belief revision seem to deal with the revision of a given theory K by an arbitrary formula, but not to constrain the revisions of two different theories by the same formula. A new postulate is proposed and compared with other similar postulates that have been proposed in the literature. The AGM revisions that satisfy this new postulate stand in one-to-one correspondence with the rational, consistency-preserving relations. This correspondence is described explicitly. Two viewpoints on iterative revisions are distinguished and discussed.<|reference_end|>
arxiv
@article{freund2002belief, title={Belief Revision and Rational Inference}, author={Michael Freund and Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0204032}, year={2002}, number={Leibniz Center for Research in Computer Science, Hebrew University: TR-94-16, July 1994}, archivePrefix={arXiv}, eprint={cs/0204032}, primaryClass={cs.AI} }
freund2002belief
arxiv-670469
cs/0204033
Randomized selection revisited
<|reference_start|>Randomized selection revisited: We show that several versions of Floyd and Rivest's algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Our computational results confirm that Select may be the best algorithm in practice.<|reference_end|>
arxiv
@article{kiwiel2002randomized, title={Randomized selection revisited}, author={Krzysztof C. Kiwiel}, journal={arXiv preprint arXiv:cs/0204033}, year={2002}, number={PMMO-02-01}, archivePrefix={arXiv}, eprint={cs/0204033}, primaryClass={cs.DS} }
kiwiel2002randomized
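For readers who have not seen Select, the following Python sketch follows the widely published Floyd-Rivest pseudocode; it assumes distinct, mutually comparable elements (the nondistinct case analyzed in the paper needs additional care) and is an illustration, not the paper's tuned implementation.

```python
import math

def select(a, k, left=0, right=None):
    """Rearrange a in place so that a[k] is the (k+1)-th smallest element."""
    right = len(a) - 1 if right is None else right
    while right > left:
        if right - left > 600:           # sample a narrow interval around k
            n = right - left + 1
            i = k - left + 1
            z = math.log(n)
            s = 0.5 * math.exp(2 * z / 3)
            sd = 0.5 * math.sqrt(z * s * (n - s) / n) * (1 if i - n / 2 >= 0 else -1)
            select(a, k, max(left, int(k - i * s / n + sd)),
                   min(right, int(k + (n - i) * s / n + sd)))
        t = a[k]                          # partition around the pivot a[k]
        i, j = left, right
        a[left], a[k] = a[k], a[left]
        if a[right] > t:
            a[right], a[left] = a[left], a[right]
        while i < j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
            while a[i] < t:
                i += 1
            while a[j] > t:
                j -= 1
        if a[left] == t:
            a[left], a[j] = a[j], a[left]
        else:
            j += 1
            a[j], a[right] = a[right], a[j]
        if j <= k:                        # shrink towards the kth position
            left = j + 1
        if k <= j:
            right = j - 1
    return a[k]
```

Calling select(a, len(a) // 2) on a list of distinct numbers leaves the median at the middle index; the point of the paper's analysis is that such schemes use at most n + min(k, n-k) + o(n) comparisons on average and with high probability.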
arxiv-670470
cs/0204034
Monitoring and Debugging Concurrent and Distributed Object-Oriented Systems
<|reference_start|>Monitoring and Debugging Concurrent and Distributed Object-Oriented Systems: A major part of debugging, testing, and analyzing a complex software system is understanding what is happening within the system at run-time. Some developers advocate running within a debugger to better understand the system at this level. Others embed logging statements, even in the form of hard-coded calls to print functions, throughout the code. These techniques are all general, rough forms of what we call system monitoring, and, while they have limited usefulness in simple, sequential systems, they are nearly useless in complex, concurrent ones. We propose a set of new mechanisms, collectively known as a monitoring system, for understanding such complex systems, and we describe an example implementation of such a system, called IDebug, for the Java programming language.<|reference_end|>
arxiv
@article{kiniry2002monitoring, title={Monitoring and Debugging Concurrent and Distributed Object-Oriented Systems}, author={Joseph R. Kiniry}, journal={arXiv preprint arXiv:cs/0204034}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204034}, primaryClass={cs.SE} }
kiniry2002monitoring
arxiv-670471
cs/0204035
Semantic Properties for Lightweight Specification in Knowledgeable Development Environments
<|reference_start|>Semantic Properties for Lightweight Specification in Knowledgeable Development Environments: Semantic properties are domain-specific specification constructs used to augment an existing language with richer semantics. These properties are taken advantage of in system analysis, design, implementation, testing, and maintenance through the use of documentation and source-code transformation tools. Semantic properties are themselves specified at two levels: loosely with precise natural language, and formally within the problem domain. The refinement relationships between these specification levels, as well as between a semantic property's use and its realization in program code via tools, is specified with a new formal method for reuse called kind theory.<|reference_end|>
arxiv
@article{kiniry2002semantic, title={Semantic Properties for Lightweight Specification in Knowledgeable Development Environments}, author={Joseph R. Kiniry}, journal={arXiv preprint arXiv:cs/0204035}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204035}, primaryClass={cs.SE} }
kiniry2002semantic
arxiv-670472
cs/0204036
Semantic Component Composition
<|reference_start|>Semantic Component Composition: Building complex software systems necessitates the use of component-based architectures. In theory, of the set of components needed for a design, only some small portion of them are "custom"; the rest are reused or refactored existing pieces of software. Unfortunately, this is an idealized situation. Just because two components should work together does not mean that they will work together. The "glue" that holds components together is not just technology. The contracts that bind complex systems together implicitly define more than their explicit type. These "conceptual contracts" describe essential aspects of extra-system semantics: e.g., object models, type systems, data representation, interface action semantics, legal and contractual obligations, and more. Designers and developers spend inordinate amounts of time technologically duct-taping systems to fulfill these conceptual contracts because system-wide semantics have not been rigorously characterized or codified. This paper describes a formal characterization of the problem and discusses an initial implementation of the resulting theoretical system.<|reference_end|>
arxiv
@article{kiniry2002semantic, title={Semantic Component Composition}, author={Joseph R. Kiniry}, journal={arXiv preprint arXiv:cs/0204036}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204036}, primaryClass={cs.SE} }
kiniry2002semantic
arxiv-670473
cs/0204037
Kolmogorov's Structure Functions and Model Selection
<|reference_start|>Kolmogorov's Structure Functions and Model Selection: In 1974 Kolmogorov proposed a non-probabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The ``structure function'' of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the ``true'' model is in the model class considered or not. In this setting, this happens {\em with certainty}, rather than with high probability as is in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that--within the obvious constraints--every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the ``algorithmic minimal sufficient statistic.''<|reference_end|>
arxiv
@article{vereshchagin2002kolmogorov's, title={Kolmogorov's Structure Functions and Model Selection}, author={Nikolai Vereshchagin (Moscow State University) and Paul Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0204037}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204037}, primaryClass={cs.CC math.PR physics.data-an} }
vereshchagin2002kolmogorov's
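In symbols, the structure function described above is standardly written as follows (our rendering, consistent with the abstract; K denotes Kolmogorov complexity and S ranges over finite sets of binary strings):

```latex
h_x(\alpha) \;=\; \min_{S}\,\{\, \log_2 |S| \;:\; x \in S,\ K(S) \le \alpha \,\}
```

Here \alpha is the complexity-level constraint on the model class and h_x(\alpha) is the least log-cardinality of a model in that class containing the data x; models with h_x(\alpha) + \alpha close to K(x) are the sufficient statistics, and the least such \alpha yields the algorithmic minimal sufficient statistic mentioned in the abstract.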
arxiv-670474
cs/0204038
Technology For Information Engineering (TIE): A New Way of Storing, Retrieving and Analyzing Information
<|reference_start|>Technology For Information Engineering (TIE): A New Way of Storing, Retrieving and Analyzing Information: The theoretical foundations of a new model and paradigm (called TIE) for data storage and access are introduced. Associations between data elements are stored in a single Matrix table, which is usually kept entirely in RAM for quick access. The model ties a very intuitive "guided" GUI to the Matrix structure, allowing extremely easy complex searches through the data. Although it is an "Associative Model" in that it stores the data associations separately from the data itself, in contrast to other implementations of that model, TIE guides the user to only the available information, ensuring that every search is always fruitful. Many diverse applications of the technology are reviewed.<|reference_end|>
arxiv
@article{lewak2002technology, title={Technology For Information Engineering (TIE): A New Way of Storing, Retrieving and Analyzing Information}, author={Jerzy Lewak}, journal={arXiv preprint arXiv:cs/0204038}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204038}, primaryClass={cs.DB cs.IR} }
lewak2002technology
arxiv-670475
cs/0204039
Precongruence Formats for Decorated Trace Semantics
<|reference_start|>Precongruence Formats for Decorated Trace Semantics: This paper explores the connection between semantic equivalences and preorders for concrete sequential processes, represented by means of labelled transition systems, and formats of transition system specifications using Plotkin's structural approach. For several preorders in the linear time - branching time spectrum a format is given, as general as possible, such that this preorder is a precongruence for all operators specifiable in that format. The formats are derived using the modal characterizations of the corresponding preorders.<|reference_end|>
arxiv
@article{bloom2002precongruence, title={Precongruence Formats for Decorated Trace Semantics}, author={B. Bloom (Cornell -> IBM), W.J. Fokkink (CWI) & R.J. van Glabbeek (Stanford)}, journal={arXiv preprint arXiv:cs/0204039}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204039}, primaryClass={cs.LO} }
bloom2002precongruence
arxiv-670476
cs/0204040
Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures
<|reference_start|>Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures: The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle $t$ action $y_t$ results in perception $x_t$ and reward $r_t$, where all quantities in general may depend on the complete history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive) environmental probability distribution $\mu$. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if $\mu$ is known. Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian approach one defines a mixture distribution $\xi$ as a weighted sum of distributions $\nu\in\mathcal{M}$, where $\mathcal{M}$ is any class of distributions including the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based on the mixture $\xi$ is self-optimizing in the sense that the average value converges asymptotically for all $\mu\in\mathcal{M}$ to the optimal value achieved by the (infeasible) Bayes-optimal policy $p^\mu$ which knows $\mu$ in advance. We show that the necessary condition that $\mathcal{M}$ admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on $\mathcal{M}$. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in {\em all} environments $\nu\in\mathcal{M}$ and a strictly higher value in at least one.<|reference_end|>
arxiv
@article{hutter2002self-optimizing, title={Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures}, author={Marcus Hutter}, journal={Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT-2002) 364-379}, year={2002}, number={IDSIA-04-02}, archivePrefix={arXiv}, eprint={cs/0204040}, primaryClass={cs.AI cs.LG math.OC math.PR} }
hutter2002self-optimizing
arxiv-670477
cs/0204041
Trust Brokerage Systems for the Internet
<|reference_start|>Trust Brokerage Systems for the Internet: This thesis addresses the problem of providing trusted individuals with confidential information about other individuals, in particular, granting access to databases of personal records using the World-Wide Web. It proposes an access rights management system for distributed databases which aims to create and implement organisation structures based on the wishes of the owners and of demands of the users of the databases. The dissertation describes how current software components could be used to implement this system; it re-examines the theory of collective choice to develop mechanisms for generating hierarchies of authorities; it analyses organisational processes for stability and develops a means of measuring the similarity of their hierarchies.<|reference_end|>
arxiv
@article{eaves2002trust, title={Trust Brokerage Systems for the Internet}, author={Walter Eaves}, journal={arXiv preprint arXiv:cs/0204041}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204041}, primaryClass={cs.CR cs.GT cs.NE} }
eaves2002trust
arxiv-670478
cs/0204042
Preprocessing Chains for Fast Dihedral Rotations Is Hard or Even Impossible
<|reference_start|>Preprocessing Chains for Fast Dihedral Rotations Is Hard or Even Impossible: We examine a computational geometric problem concerning the structure of polymers. We model a polymer as a polygonal chain in three dimensions. Each edge splits the polymer into two subchains, and a dihedral rotation rotates one of these chains rigidly about this edge. The problem is to determine, given a chain, an edge, and an angle of rotation, if the motion can be performed without causing the chain to self-intersect. An Omega(n log n) lower bound on the time complexity of this problem is known. We prove that preprocessing a chain of n edges and answering n dihedral rotation queries is 3SUM-hard, giving strong evidence that solving n queries requires Omega(n^2) time in the worst case. For dynamic queries, which also modify the chain if the requested dihedral rotation is feasible, we show that answering n queries is by itself 3SUM-hard, suggesting that sublinear query time is impossible after any amount of preprocessing.<|reference_end|>
arxiv
@article{soss2002preprocessing, title={Preprocessing Chains for Fast Dihedral Rotations Is Hard or Even Impossible}, author={Michael Soss, Jeff Erickson, and Mark Overmars}, journal={arXiv preprint arXiv:cs/0204042}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204042}, primaryClass={cs.CG} }
soss2002preprocessing
arxiv-670479
cs/0204043
Learning from Scarce Experience
<|reference_start|>Learning from Scarce Experience: Searching the space of policies directly for the optimal policy has been one popular method for solving partially observable reinforcement learning problems. Typically, with each change of the target policy, its value is estimated from the results of following that very policy. This requires a large number of interactions with the environment as different policies are considered. We present a family of algorithms based on likelihood ratio estimation that use data gathered when executing one policy (or collection of policies) to estimate the value of a different policy. The algorithms combine estimation and optimization stages. The former utilizes experience to build a non-parametric representation of an optimized function. The latter performs optimization on this estimate. We show positive empirical results and provide a sample complexity bound.<|reference_end|>
arxiv
@article{peshkin2002learning, title={Learning from Scarce Experience}, author={Leonid Peshkin and Christian R. Shelton}, journal={arXiv preprint arXiv:cs/0204043}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204043}, primaryClass={cs.AI cs.LG cs.NE cs.RO} }
peshkin2002learning
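The basic likelihood-ratio estimator underlying this family of algorithms fits in a few lines; the sketch below is a minimal rendering of the idea (our own names and interfaces, not the paper's code), assuming stochastic policies that expose their action probabilities.

```python
def is_value_estimate(trajectories, pi_behavior, pi_target):
    """Importance-sampling estimate of the target policy's expected return
    from trajectories gathered under a different behavior policy.

    trajectories: list of episodes, each a list of (obs, action, reward);
    pi_behavior, pi_target: functions (obs, action) -> action probability.
    """
    total = 0.0
    for episode in trajectories:
        weight, ret = 1.0, 0.0
        for obs, action, reward in episode:
            # reweight by how much likelier the target policy is to act this way
            weight *= pi_target(obs, action) / pi_behavior(obs, action)
            ret += reward
        total += weight * ret
    return total / len(trajectories)
```

The estimate is unbiased whenever the behavior policy assigns positive probability to every action the target policy can take, but its variance grows with trajectory length, which is why coupling the estimation stage with careful optimization, as the abstract describes, matters in practice.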
arxiv-670480
cs/0204044
Robust Global Localization Using Clustered Particle Filtering
<|reference_start|>Robust Global Localization Using Clustered Particle Filtering: Global mobile robot localization is the problem of determining a robot's pose in an environment, using sensor data, when the starting position is unknown. A family of probabilistic algorithms known as Monte Carlo Localization (MCL) is currently among the most popular methods for solving this problem. MCL algorithms represent a robot's belief by a set of weighted samples, which approximate the posterior probability of where the robot is located by using a Bayesian formulation of the localization problem. This article presents an extension to the MCL algorithm, which addresses its problems when localizing in highly symmetrical environments, a situation where MCL is often unable to correctly track equally probable poses for the robot. The problem arises from the fact that sample sets in MCL often become impoverished when samples are generated according to their posterior likelihood. Our approach incorporates the idea of clusters of samples and modifies the proposal distribution considering the probability mass of those clusters. Experimental results are presented that show that this new extension to the MCL algorithm successfully localizes in symmetric environments where ordinary MCL often fails.<|reference_end|>
arxiv
@article{sanchez2002robust, title={Robust Global Localization Using Clustered Particle Filtering}, author={Javier Nicolas Sanchez, Adam Milstein, Evan Williamson}, journal={arXiv preprint arXiv:cs/0204044}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204044}, primaryClass={cs.RO cs.AI} }
sanchez2002robust
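One way to picture the cluster idea in the abstract is at the resampling step: allocate samples per cluster in proportion to the cluster's probability mass, so that several equally plausible pose hypotheses survive instead of one swallowing the sample set. The sketch below is our guess at that flavor (names and the clustering interface are assumptions, not the paper's algorithm).

```python
import random
from collections import defaultdict

def clustered_resample(particles, weights, cluster_of, n_out):
    """Resample n_out particles, giving each cluster a quota proportional
    to its total weight; cluster_of maps a particle to a cluster id."""
    mass, members = defaultdict(float), defaultdict(list)
    for p, w in zip(particles, weights):
        c = cluster_of(p)
        mass[c] += w
        members[c].append((p, w))
    total = sum(mass.values())
    out = []
    for c, m in mass.items():
        quota = max(1, round(n_out * m / total))   # every cluster keeps >= 1 sample
        pop, ws = zip(*members[c])
        out.extend(random.choices(pop, weights=ws, k=quota))
    while len(out) < n_out:                        # top up after rounding
        out.extend(random.choices(particles, weights=weights, k=1))
    return out[:n_out]
```

In a symmetric corridor, plain importance resampling tends to collapse onto one of two mirror-image poses; a per-cluster quota keeps both hypotheses alive until disambiguating evidence arrives.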
arxiv-670481
cs/0204045
Some applications of logic to feasibility in higher types
<|reference_start|>Some applications of logic to feasibility in higher types: In this paper we demonstrate that the class of basic feasible functionals has recursion theoretic properties which naturally generalize the corresponding properties of the class of feasible functions. We also improve the Kapron-Cook result on machine representation of basic feasible functionals. Our proofs are based on essential applications of logic. We introduce a weak fragment of second order arithmetic with second order variables ranging over functions from N into N which suitably characterizes basic feasible functionals, and show that it is a useful tool for investigating the properties of basic feasible functionals. In particular, we provide an example of how one can extract feasible "programs" from mathematical proofs which use non-feasible functionals (like second order polynomials).<|reference_end|>
arxiv
@article{ignjatovic2002some, title={Some applications of logic to feasibility in higher types}, author={Aleksandar Ignjatovic and Arun Sharma}, journal={arXiv preprint arXiv:cs/0204045}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204045}, primaryClass={cs.LO} }
ignjatovic2002some
arxiv-670482
cs/0204046
Optimal Aggregation Algorithms for Middleware
<|reference_start|>Optimal Aggregation Algorithms for Middleware: Let D be a database of N objects where each object has m fields. The objects are given in m sorted lists (where the ith list is sorted according to the ith field). Our goal is to find the top k objects according to a monotone aggregation function t, while minimizing access to the lists. The problem arises in several contexts. In particular, Fagin (JCSS 1999) considered it for the purpose of aggregating information in a multimedia database system. We are interested in instance optimality, i.e., that our algorithm will be as good as any other (correct) algorithm on any instance. We provide and analyze several instance optimal algorithms for the task, with various access costs and models.<|reference_end|>
arxiv
@article{fagin2002optimal, title={Optimal Aggregation Algorithms for Middleware}, author={Ron Fagin, Amnon Lotem and Moni Naor}, journal={arXiv preprint arXiv:cs/0204046}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204046}, primaryClass={cs.DB cs.DS} }
fagin2002optimal
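The halting rule that drives instance optimality is easy to see in code. The sketch below is our own minimal rendering of a threshold-style top-k scan in the spirit of the algorithms the paper analyzes (data layout and names are assumptions): sorted access walks the m lists in lockstep, random access fills in the remaining fields of each newly seen object, and the scan stops once k objects grade at least as high as the aggregate of the last values seen under sorted access.

```python
import heapq

def topk_threshold(lists, grade, k):
    """lists: m lists of (object_id, value), each sorted by value descending,
    with every object present in every list; grade: monotone aggregation
    function of m values. Returns the top k (grade, object_id) pairs."""
    m = len(lists)
    by_id = [dict(lst) for lst in lists]     # simulated random access
    seen, top = set(), []                    # top is a min-heap of size <= k
    for depth in range(len(lists[0])):
        last = [lists[i][depth][1] for i in range(m)]   # sorted-access frontier
        for i in range(m):
            obj = lists[i][depth][0]
            if obj not in seen:
                seen.add(obj)
                g = grade(*(by_id[j][obj] for j in range(m)))
                heapq.heappush(top, (g, obj))
                if len(top) > k:
                    heapq.heappop(top)
        threshold = grade(*last)             # no unseen object can beat this
        if len(top) == k and top[0][0] >= threshold:
            break                             # k objects at or above threshold
    return sorted(top, reverse=True)
```

With grade = min this performs a fuzzy conjunction over the fields; monotonicity of the aggregation function is what guarantees that no unseen object can score above the threshold, so stopping early is safe.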
arxiv-670483
cs/0204047
Sampling Strategies for Mining in Data-Scarce Domains
<|reference_start|>Sampling Strategies for Mining in Data-Scarce Domains: Data mining has traditionally focused on the task of drawing inferences from large datasets. However, many scientific and engineering domains, such as fluid dynamics and aircraft design, are characterized by scarce data, due to the expense and complexity of associated experiments and simulations. In such data-scarce domains, it is advantageous to focus the data collection effort on only those regions deemed most important to support a particular data mining objective. This paper describes a mechanism that interleaves bottom-up data mining, to uncover multi-level structures in spatial data, with top-down sampling, to clarify difficult decisions in the mining process. The mechanism exploits relevant physical properties, such as continuity, correspondence, and locality, in a unified framework. This leads to effective mining and sampling decisions that are explainable in terms of domain knowledge and data characteristics. This approach is demonstrated in two diverse applications -- mining pockets in spatial data, and qualitative determination of Jordan forms of matrices.<|reference_end|>
arxiv
@article{ramakrishnan2002sampling, title={Sampling Strategies for Mining in Data-Scarce Domains}, author={Naren Ramakrishnan and Chris Bailey-Kellogg}, journal={arXiv preprint arXiv:cs/0204047}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204047}, primaryClass={cs.CE cs.AI} }
ramakrishnan2002sampling
arxiv-670484
cs/0204048
Economic-based Distributed Resource Management and Scheduling for Grid Computing
<|reference_start|>Economic-based Distributed Resource Management and Scheduling for Grid Computing: Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time, the management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates the users to trade off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.<|reference_end|>
arxiv
@article{buyya2002economic-based, title={Economic-based Distributed Resource Management and Scheduling for Grid Computing}, author={Rajkumar Buyya}, journal={Monash University, Melbourne, Australia, April 2002}, year={2002}, number={PhD Thesis, April 2002}, archivePrefix={arXiv}, eprint={cs/0204048}, primaryClass={cs.DC} }
buyya2002economic-based
arxiv-670485
cs/0204049
Memory-Based Shallow Parsing
<|reference_start|>Memory-Based Shallow Parsing: We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods to improve the performance of the memory-based learner. Our approach is evaluated on standard data sets and the results are compared with those of other systems. This reveals that our approach works well for base phrase identification, while its application to recognizing embedded structures leaves some room for improvement.<|reference_end|>
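Memory-based learning here means storing every training instance and classifying new ones by their nearest neighbors. A minimal sketch, assuming simple word/POS window features and a flat overlap metric (real systems such as TiMBL add feature weighting and faster indexing):

```python
from collections import Counter

def window_features(tokens, i, width=2):
    """Flatten the (word, POS) pairs in a window around position i;
    tokens is a list of (word, pos) pairs."""
    pad = [("<s>", "<s>")] * width
    seq = pad + tokens + pad
    return tuple(x for pair in seq[i : i + 2 * width + 1] for x in pair)

def mbl_tag(train, sentence, k=3):
    """train: list of (tokens, chunk_tags) pairs. Tag each position of
    `sentence` by majority vote among the k stored instances whose
    feature values overlap most with the current window."""
    memory = [(window_features(toks, i), tags[i])
              for toks, tags in train for i in range(len(toks))]
    out = []
    for i in range(len(sentence)):
        f = window_features(sentence, i)
        nearest = sorted(memory,
                         key=lambda m: -sum(a == b for a, b in zip(m[0], f)))
        out.append(Counter(t for _, t in nearest[:k]).most_common(1)[0][0])
    return out
```

With IOB-style chunk tags as the class labels, a single classifier of this shape already covers base phrase identification; the harder tasks stack and combine such learners.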
arxiv
@article{sang2002memory-based, title={Memory-Based Shallow Parsing}, author={Erik F. Tjong Kim Sang}, journal={Journal of Machine Learning Research, volume 2 (March), 2002, pp. 559-594}, year={2002}, number={jmlr-2002-tks}, archivePrefix={arXiv}, eprint={cs/0204049}, primaryClass={cs.CL} }
sang2002memory-based
arxiv-670486
cs/0204050
Computing Homotopic Shortest Paths Efficiently
<|reference_start|>Computing Homotopic Shortest Paths Efficiently: This paper addresses the problem of finding shortest paths homotopic to a given disjoint set of paths that wind amongst point obstacles in the plane. We present an algorithm that is faster than those previously known.<|reference_end|>
arxiv
@article{efrat2002computing, title={Computing Homotopic Shortest Paths Efficiently}, author={Alon Efrat, Stephen G. Kobourov and Anna Lubiw}, journal={arXiv preprint arXiv:cs/0204050}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204050}, primaryClass={cs.CG} }
efrat2002computing
arxiv-670487
cs/0204051
Parrondo Strategies for Artificial Traders
<|reference_start|>Parrondo Strategies for Artificial Traders: On markets with receding prices, artificial noise traders may consider alternatives to buy-and-hold. By simulating variations of the Parrondo strategy, using real data from the Swedish stock market, we obtain first indications that a buy-low-sell-random Parrondo variation outperforms buy-and-hold. Subject to our assumptions, buy-low-sell-random also outperforms the traditional value and trend investor strategies. We measure the success of the Parrondo variations not only through their performance compared to other kinds of strategies, but also relative to varying levels of perfect information, received through messages within a multi-agent system of artificial traders.<|reference_end|>
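An illustrative backtest of the buy-low-sell-random idea under simplifications of our own (single asset, running-quantile buy threshold, memoryless random sells); the parameter values are hypothetical, not the paper's.

```python
import random

def buy_low_sell_random(prices, low_q=0.2, sell_prob=0.05, cash=1000.0):
    """Buy when the price dips below a running low quantile; while
    holding, sell at a random time. Returns final portfolio value."""
    units, history = 0.0, []
    for p in prices:
        history.append(p)
        low = sorted(history)[int(low_q * (len(history) - 1))]
        if units == 0 and cash > 0 and p <= low:
            units, cash = cash / p, 0.0          # buy low
        elif units > 0 and random.random() < sell_prob:
            cash, units = units * p, 0.0         # sell at a random time
    return cash + units * prices[-1]

def buy_and_hold(prices, cash=1000.0):
    return cash / prices[0] * prices[-1]
```

Averaging both functions over many seeds and price series is the style of comparison the paper runs on Swedish market data.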
arxiv
@article{boman2002parrondo, title={Parrondo Strategies for Artificial Traders}, author={Magnus Boman, Stefan Johansson, David Lyback}, journal={Intelligent Agent Technology; Zhong, Liu, Ohsuga, Bradshaw (eds); 150-159; World Scientific, 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204051}, primaryClass={cs.CE} }
boman2002parrondo
arxiv-670488
cs/0204052
Required sample size for learning sparse Bayesian networks with many variables
<|reference_start|>Required sample size for learning sparse Bayesian networks with many variables: Learning joint probability distributions on n random variables requires exponential sample size in the generic case. Here we consider the case that a temporal (or causal) order of the variables is known and that the (unknown) graph of causal dependencies has bounded in-degree Delta. Then the joint measure is uniquely determined by the probabilities of all (2 Delta+1)-tuples. Upper bounds on the sample size required for estimating their probabilities can be given in terms of the VC-dimension of the set of corresponding cylinder sets. The sample size grows less than linearly with n.<|reference_end|>
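For orientation, the standard uniform-convergence bound that this style of argument rests on, in our hedged rendering (the paper's exact class definition and constants differ): the class $\mathcal{C}$ of cylinder sets over $(2\Delta+1)$-tuples is finite, so its VC-dimension satisfies $d \le \log_2 |\mathcal{C}| = O(\Delta \log n)$, and estimating all tuple probabilities to within $\varepsilon$, with probability at least $1-\delta$, requires a sample of size

```latex
m \;=\; O\!\left(\frac{d\,\log(1/\varepsilon) + \log(1/\delta)}{\varepsilon^{2}}\right)
  \;=\; O\!\left(\frac{\Delta \log n \,\log(1/\varepsilon) + \log(1/\delta)}{\varepsilon^{2}}\right).
```

The $\log n$ dependence is what makes the growth less than linear in the number of variables.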
arxiv
@article{wocjan2002required, title={Required sample size for learning sparse Bayesian networks with many variables}, author={Pawel Wocjan, Dominik Janzing, and Thomas Beth (Universitaet Karlsruhe)}, journal={arXiv preprint arXiv:cs/0204052}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204052}, primaryClass={cs.LG math.PR} }
wocjan2002required
arxiv-670489
cs/0204053
Qualitative Analysis of Correspondence for Experimental Algorithmics
<|reference_start|>Qualitative Analysis of Correspondence for Experimental Algorithmics: Correspondence identifies relationships among objects via similarities among their components; it is ubiquitous in the analysis of spatial datasets, including images, weather maps, and computational simulations. This paper develops a novel multi-level mechanism for qualitative analysis of correspondence. Operators leverage domain knowledge to establish correspondence, evaluate implications for model selection, and leverage identified weaknesses to focus additional data collection. The utility of the mechanism is demonstrated in two applications from experimental algorithmics -- matrix spectral portrait analysis and graphical assessment of Jordan forms of matrices. Results show that the mechanism efficiently samples computational experiments and successfully uncovers high-level problem properties. It overcomes noise and data sparsity by leveraging domain knowledge to detect mutually reinforcing interpretations of spatial data.<|reference_end|>
arxiv
@article{bailey-kellogg2002qualitative, title={Qualitative Analysis of Correspondence for Experimental Algorithmics}, author={Chris Bailey-Kellogg, Naren Ramakrishnan}, journal={arXiv preprint arXiv:cs/0204053}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204053}, primaryClass={cs.AI cs.CE} }
bailey-kellogg2002qualitative
arxiv-670490
cs/0204054
Navigating the Small World Web by Textual Cues
<|reference_start|>Navigating the Small World Web by Textual Cues: Can a Web crawler efficiently locate an unknown relevant page? While this question is receiving much empirical attention due to its considerable commercial value in the search engine community [Cho98,Chakrabarti99,Menczer00,Menczer01], theoretical efforts to bound the performance of focused navigation have only exploited the link structure of the Web graph, neglecting other features [Kleinberg01,Adamic01,Kim02]. Here I investigate the connection between linkage and a content-induced topology of Web pages, suggesting that efficient paths can be discovered by decentralized navigation algorithms based on textual cues.<|reference_end|>
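A decentralized greedy walker of the kind the abstract alludes to, under toy assumptions of ours (each page exposes its outlinks and a bag-of-words vector; similarity is cosine); it uses only local information, following at each step the link whose page text looks most like the target.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency dicts."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def greedy_navigate(start, target, outlinks, page_text, max_steps=50):
    """outlinks[p]: neighbor pages; page_text[p]: term-frequency dict.
    Walk greedily toward `target` (a page id) by textual similarity."""
    current, visited = start, {start}
    goal_text = page_text[target]
    for _ in range(max_steps):
        if current == target:
            return current
        frontier = [p for p in outlinks[current] if p not in visited]
        if not frontier:
            return None                  # greedy walker hit a dead end
        current = max(frontier, key=lambda p: cosine(page_text[p], goal_text))
        visited.add(current)
    return None
```

Whether such purely local, text-guided walks find short paths on the real Web graph is precisely the empirical question at stake.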
arxiv
@article{menczer2002navigating, title={Navigating the Small World Web by Textual Cues}, author={Filippo Menczer}, journal={arXiv preprint arXiv:cs/0204054}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204054}, primaryClass={cs.IR cs.NI} }
menczer2002navigating
arxiv-670491
cs/0204055
Intelligent Search of Correlated Alarms for GSM Networks with Model-based Constraints
<|reference_start|>Intelligent Search of Correlated Alarms for GSM Networks with Model-based Constraints: In order to control the process of data mining and focus on the things of interest to us, many kinds of constraints have been added to data mining algorithms. Discovering correlated alarms in an alarm database, however, requires deep domain constraints, because correlated alarms depend heavily on the logical and physical architecture of the network. We therefore use the network model as the source of constraints, namely Scope, Inter-correlated, and Intra-correlated constraints, in our proposed algorithm SMC (Search with Model-based Constraints). The experiments show that the SMC algorithm with an Inter-correlated or Intra-correlated constraint is about two times faster than the algorithm with no constraints.<|reference_end|>
arxiv
@article{zheng2002intelligent, title={Intelligent Search of Correlated Alarms for GSM Networks with Model-based Constraints}, author={Qingguo Zheng, Ke Xu, Weifeng Lv, Shilong Ma}, journal={The 9th IEEE International Conference on Telecommunications, June 2002, Beijing, China}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204055}, primaryClass={cs.NI cs.AI} }
zheng2002intelligent
arxiv-670492
cs/0204056
Trading Agents for Roaming Users
<|reference_start|>Trading Agents for Roaming Users: Some roaming users need services for manipulating autonomous processes. Trading agents running on agent trade servers are used as a case in point. We present a solution that provides agent owners with the means to keep up their desktop environment and to maintain their agent trade server processes, via a briefcase service.<|reference_end|>
arxiv
@article{boman2002trading, title={Trading Agents for Roaming Users}, author={Magnus Boman, Markus Bylund, Fredrik Espinoza, Mats Danielson, David Lyback}, journal={arXiv preprint arXiv:cs/0204056}, year={2002}, archivePrefix={arXiv}, eprint={cs/0204056}, primaryClass={cs.CE} }
boman2002trading
arxiv-670493
cs/0205001
A Calculus for End-to-end Statistical Service Guarantees
<|reference_start|>A Calculus for End-to-end Statistical Service Guarantees: The deterministic network calculus offers an elegant framework for determining delays and backlog in a network with deterministic service guarantees to individual traffic flows. This paper addresses the problem of extending the network calculus to a probabilistic framework with statistical service guarantees. Here, the key difficulty relates to expressing, in a statistical setting, an end-to-end (network) service curve as a concatenation of per-node service curves. The notion of an effective service curve is developed as a probabilistic bound on the service received by an individual flow. It is shown that per-node effective service curves can be concatenated to yield a network effective service curve.<|reference_end|>
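For context, the deterministic concatenation property that the paper seeks a probabilistic analogue of (standard min-plus network calculus, stated in our notation): if node $i$ offers service curve $S_i$ to a flow traversing $H$ nodes, the end-to-end service curve is the min-plus convolution

```latex
S^{\mathrm{net}}(t) \;=\; \bigl(S_1 \otimes S_2 \otimes \cdots \otimes S_H\bigr)(t),
\qquad
(f \otimes g)(t) \;=\; \inf_{0 \le s \le t} \bigl\{ f(s) + g(t-s) \bigr\}.
```

The paper's effective service curves recover a statement of this shape when each per-node curve holds only with high probability.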
arxiv
@article{burchard2002a, title={A Calculus for End-to-end Statistical Service Guarantees}, author={A. Burchard, J. Liebeherr, S. D. Patek}, journal={IEEE Transactions on Information Theory 52:4105-4114 (2006)}, year={2002}, number={University of Virginia CS-2001-19 (2nd revised version)}, archivePrefix={arXiv}, eprint={cs/0205001}, primaryClass={cs.NI} }
burchard2002a
arxiv-670494
cs/0205002
A Polynomial Description of the Rijndael Advanced Encryption Standard
<|reference_start|>A Polynomial Description of the Rijndael Advanced Encryption Standard: The paper gives a polynomial description of the Rijndael Advanced Encryption Standard recently adopted by the National Institute of Standards and Technology. Special attention is given to the structure of the S-Box.<|reference_end|>
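As background on the S-box structure the abstract highlights, here is the standard description of the Rijndael S-box in the usual conventions (our rendering, not necessarily the paper's notation): bytes are elements of $GF(2^8) = GF(2)[x]/(x^8 + x^4 + x^3 + x + 1)$, and the S-box composes patched inversion with a $GF(2)$-affine map,

```latex
S(a) \;=\; L\!\left(a^{254}\right) \oplus \mathtt{0x63},
\qquad a^{254} = a^{-1} \ \text{for } a \neq 0, \quad 0^{254} = 0,
```

where $L$ is a fixed $GF(2)$-linear map on bytes. Since $a \mapsto a^{254}$ is itself a polynomial over $GF(2^8)$, the entire S-box, and with it the round function, admits a polynomial description of the kind the paper develops.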
arxiv
@article{rosenthal2002a, title={A Polynomial Description of the Rijndael Advanced Encryption Standard}, author={Joachim Rosenthal}, journal={arXiv preprint arXiv:cs/0205002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205002}, primaryClass={cs.CR math.AC math.RA} }
rosenthal2002a
arxiv-670495
cs/0205003
The prospects for mathematical logic in the twenty-first century
<|reference_start|>The prospects for mathematical logic in the twenty-first century: The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently.<|reference_end|>
arxiv
@article{buss2002the, title={The prospects for mathematical logic in the twenty-first century}, author={Samuel R. Buss and Alexander S. Kechris and Anand Pillay and Richard A. Shore}, journal={Bulletin of Symbolic Logic 7 (2001) 169-196}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205003}, primaryClass={cs.LO} }
buss2002the
arxiv-670496
cs/0205004
Weaves: A Novel Direct Code Execution Interface for Parallel High Performance Scientific Codes
<|reference_start|>Weaves: A Novel Direct Code Execution Interface for Parallel High Performance Scientific Codes: Scientific codes are increasingly being used in compositional settings, especially problem solving environments (PSEs). Typical compositional modeling frameworks require significant buy-in, in the form of commitment to a particular style of programming (e.g., distributed object components). While this solution is feasible for newer generations of component-based scientific codes, large legacy code bases present a veritable software engineering nightmare. We introduce Weaves, a novel framework that enables modeling, composition, direct code execution, performance characterization, adaptation, and control of unmodified high performance scientific codes. Weaves is an efficient generalized framework for parallel compositional modeling that is a proper superset of the threads and processes models of programming. In this paper, our focus is on the transparent code execution interface enabled by Weaves. We identify design constraints and their impact on implementation alternatives, describe configuration scenarios, and present results from a prototype implementation on Intel x86 architectures.<|reference_end|>
arxiv
@article{varadarajan2002weaves:, title={Weaves: A Novel Direct Code Execution Interface for Parallel High Performance Scientific Codes}, author={Srinidhi Varadarajan, Joy Mukherjee, Naren Ramakrishnan}, journal={arXiv preprint arXiv:cs/0205004}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205004}, primaryClass={cs.DC cs.PF} }
varadarajan2002weaves:
arxiv-670497
cs/0205005
PSPACE-Completeness of Sliding-Block Puzzles and Other Problems through the Nondeterministic Constraint Logic Model of Computation
<|reference_start|>PSPACE-Completeness of Sliding-Block Puzzles and Other Problems through the Nondeterministic Constraint Logic Model of Computation: We present a nondeterministic model of computation based on reversing edge directions in weighted directed graphs with minimum in-flow constraints on vertices. Deciding whether this simple graph model can be manipulated in order to reverse the direction of a particular edge is shown to be PSPACE-complete by a reduction from Quantified Boolean Formulas. We prove this result in a variety of special cases including planar graphs and highly restricted vertex configurations, some of which correspond to a kind of passive constraint logic. Our framework is inspired by (and indeed a generalization of) the ``Generalized Rush Hour Logic'' developed by Flake and Baum. We illustrate the importance of our model of computation by giving simple reductions to show that several motion-planning problems are PSPACE-hard. Our main result along these lines is that classic unrestricted sliding-block puzzles are PSPACE-hard, even if the pieces are restricted to be all dominoes (1x2 blocks) and the goal is simply to move a particular piece. No prior complexity results were known about these puzzles. This result can be seen as a strengthening of the existing result that the restricted Rush Hour puzzles are PSPACE-complete, of which we also give a simpler proof. Finally, we strengthen the existing result that the pushing-blocks puzzle Sokoban is PSPACE-complete, by showing that it is PSPACE-complete even if no barriers are allowed.<|reference_end|>
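To make the model concrete, a brute-force sketch under an encoding of our own (edges as two-element frozensets; an orientation maps each edge to its head vertex): breadth-first search over legal single-edge reversals decides whether a target edge can ever be reversed. It is exponential, of course; the hardness result says we should not expect substantially better.

```python
from collections import deque

def edge_reversible(edges, weight, min_inflow, start, target):
    """edges: set of frozensets {u, v}; weight[e]: weight of edge e;
    min_inflow[v]: minimum in-flow required at vertex v (all vertices
    present); start: dict edge -> head vertex. Decide whether `target`
    can ever point away from its initial head via legal reversals."""
    def legal(orient):
        inflow = {v: 0 for v in min_inflow}
        for e, head in orient.items():
            inflow[head] += weight[e]
        return all(inflow[v] >= min_inflow[v] for v in min_inflow)

    assert legal(start)
    goal_head = start[target]
    queue, seen = deque([start]), {frozenset(start.items())}
    while queue:
        orient = queue.popleft()
        if orient[target] != goal_head:
            return True
        for e in edges:                  # try reversing each edge
            u, v = tuple(e)
            nxt = dict(orient)
            nxt[e] = u if orient[e] == v else v
            key = frozenset(nxt.items())
            if legal(nxt) and key not in seen:
                seen.add(key)
                queue.append(nxt)
    return False
```

Every state the search visits satisfies the in-flow constraints, matching the requirement that constraints hold at all times, not just at the start and end.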
arxiv
@article{hearn2002pspace-completeness, title={PSPACE-Completeness of Sliding-Block Puzzles and Other Problems through the Nondeterministic Constraint Logic Model of Computation}, author={Robert A. Hearn and Erik D. Demaine}, journal={arXiv preprint arXiv:cs/0205005}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205005}, primaryClass={cs.CC cs.GT} }
hearn2002pspace-completeness
arxiv-670498
cs/0205006
Unsupervised discovery of morphologically related words based on orthographic and semantic similarity
<|reference_start|>Unsupervised discovery of morphologically related words based on orthographic and semantic similarity: We present an algorithm that takes an unannotated corpus as its input, and returns a ranked list of probable morphologically related pairs as its output. The algorithm tries to discover morphologically related pairs by looking for pairs that are both orthographically and semantically similar, where orthographic similarity is measured in terms of minimum edit distance, and semantic similarity is measured in terms of mutual information. The procedure does not rely on a morpheme concatenation model, nor on distributional properties of word substrings (such as affix frequency). Experiments with German and English input give encouraging results, both in terms of precision (proportion of good pairs found at various cutoff points of the ranked list), and in terms of a qualitative analysis of the types of morphological patterns discovered by the algorithm.<|reference_end|>
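A compact sketch of the two similarity signals, combined with a scoring rule of our own (the paper ranks pairs more carefully): minimum edit distance for orthography, pointwise mutual information over co-occurrence counts for semantics.

```python
import math
from itertools import combinations

def edit_distance(a, b):
    """Classic Levenshtein dynamic program, one row at a time."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (ca != cb))
    return d[-1]

def pmi(w1, w2, cooc, count, windows):
    """Pointwise mutual information from co-occurrence window counts."""
    p12 = cooc.get((w1, w2), 0) / windows
    p1, p2 = count[w1] / windows, count[w2] / windows
    return math.log2(p12 / (p1 * p2)) if p12 else float("-inf")

def rank_related(vocab, cooc, count, windows, max_dist=3):
    """Candidate pairs must be orthographically close AND have high PMI."""
    scored = []
    for w1, w2 in combinations(sorted(vocab), 2):
        dist = edit_distance(w1, w2)
        if dist <= max_dist:
            scored.append((pmi(w1, w2, cooc, count, windows) - dist, w1, w2))
    return sorted(scored, reverse=True)
```

The key property carries over from the paper: no affix list or concatenation model is assumed; related pairs simply emerge where the two similarity signals agree.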
arxiv
@article{baroni2002unsupervised, title={Unsupervised discovery of morphologically related words based on orthographic and semantic similarity}, author={Marco Baroni, Johannes Matiasek and Harald Trost}, journal={arXiv preprint arXiv:cs/0205006}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205006}, primaryClass={cs.CL} }
baroni2002unsupervised
arxiv-670499
cs/0205007
On-Line Paging against Adversarially Biased Random Inputs
<|reference_start|>On-Line Paging against Adversarially Biased Random Inputs: In evaluating an algorithm, worst-case analysis can be overly pessimistic. Average-case analysis can be overly optimistic. An intermediate approach is to show that an algorithm does well on a broad class of input distributions. Koutsoupias and Papadimitriou recently analyzed the least-recently-used (LRU) paging strategy in this manner, analyzing its performance on an input sequence generated by a so-called diffuse adversary -- one that must choose each request probabilistically so that no page is chosen with probability more than some fixed epsilon>0. They showed that LRU achieves the optimal competitive ratio (for deterministic on-line algorithms), but they did not determine the actual ratio. In this paper we estimate the optimal ratios to within roughly a factor of two for both deterministic strategies (e.g., least-recently-used and first-in-first-out) and randomized strategies. Around the threshold epsilon ~ 1/k (where k is the cache size), the optimal ratios are both Theta(ln k). Below the threshold the ratios tend rapidly to O(1). Above the threshold the ratio is unchanged for randomized strategies but tends rapidly to Theta(k) for deterministic ones. We also give an alternate proof of the optimality of LRU.<|reference_end|>
arxiv
@article{young2002on-line, title={On-Line Paging against Adversarially Biased Random Inputs}, author={Neal E. Young}, journal={J. Algorithms 37(1) pp 218-235 (2000)}, year={2002}, doi={10.1006/jagm.2000.1099}, archivePrefix={arXiv}, eprint={cs/0205007}, primaryClass={cs.DS cs.CC} }
young2002on-line
arxiv-670500
cs/0205008
Improved Bicriteria Existence Theorems for Scheduling
<|reference_start|>Improved Bicriteria Existence Theorems for Scheduling: Two common objectives for evaluating a schedule are the makespan, or schedule length, and the average completion time. This short note gives improved bounds on the existence of schedules that simultaneously optimize both criteria. In particular, for any rho > 0, there exists a schedule of makespan at most 1+rho times the minimum, with average completion time at most 1/(1-e^{-rho}) times the minimum. The proof uses an infinite-dimensional linear program to generalize and strengthen a previous analysis by Cliff Stein and Joel Wein (1997).<|reference_end|>
arxiv
@article{aslam2002improved, title={Improved Bicriteria Existence Theorems for Scheduling}, author={Javed Aslam, April Rasala, Cliff Stein, Neal Young}, journal={ACM-SIAM Symposium on Discrete Algorithms, pp. 846-847 (1999)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205008}, primaryClass={cs.DS cs.CC} }
aslam2002improved