corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses, 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100) |
---|---|---|---|---|---|---|
arxiv-670801 | cs/0210013 | On the Sum-of-Squares Algorithm for Bin Packing | <|reference_start|>On the Sum-of-Squares Algorithm for Bin Packing: In this paper we present a theoretical analysis of the deterministic on-line {\em Sum of Squares} algorithm ($SS$) for bin packing introduced and studied experimentally in \cite{CJK99}, along with several new variants. $SS$ is applicable to any instance of bin packing in which the bin capacity $B$ and item sizes $s(a)$ are integral (or can be scaled to be so), and runs in time $O(nB)$. It performs remarkably well from an average case point of view: For any discrete distribution in which the optimal expected waste is sublinear, $SS$ also has sublinear expected waste. For any discrete distribution where the optimal expected waste is bounded, $SS$ has expected waste at most $O(\log n)$. In addition, we discuss several interesting variants on $SS$, including a randomized $O(nB\log B)$-time on-line algorithm $SS^*$, based on $SS$, whose expected behavior is essentially optimal for all discrete distributions. Algorithm $SS^*$ also depends on a new linear-programming-based pseudopolynomial-time algorithm for solving the NP-hard problem of determining, given a discrete distribution $F$, just what is the growth rate for the optimal expected waste. This article is a greatly expanded version of the conference paper \cite{sumsq2000}.<|reference_end|> | arxiv | @article{csirik2002on,
title={On the Sum-of-Squares Algorithm for Bin Packing},
author={Janos Csirik, David S. Johnson, Claire Kenyon, James B. Orlin, Peter
W. Shor, and Richard R. Weber},
journal={arXiv preprint arXiv:cs/0210013},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210013},
primaryClass={cs.DS}
} | csirik2002on |
arxiv-670802 | cs/0210014 | Current state of the Sonix -- the IBR-2 instrument control software and plans for future developments | <|reference_start|>Current state of the Sonix -- the IBR-2 instrument control software and plans for future developments: The Sonix is the main control software for the IBR-2 instruments. This is a modular configurable and flexible system created using the Varman (real time database) and the X11/OS9 graphical package in the OS-9 environment. In the last few years we were mostly focused on making this system more reliable and user friendly. Because the VME hardware and software upgrade is rather expensive we would like to replace existing VME + OS9 control computers with the PC+Windows XP ones in the future. This could be done with the help of VME-PCI adapters.<|reference_end|> | arxiv | @article{kirilov2002current,
title={Current state of the Sonix -- the IBR-2 instrument control software and
plans for future developments},
author={A.S.Kirilov},
journal={arXiv preprint arXiv:cs/0210014},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210014},
primaryClass={cs.HC}
} | kirilov2002current |
arxiv-670803 | cs/0210015 | New Developments in Interval Arithmetic and Their Implications for Floating-Point Standardization | <|reference_start|>New Developments in Interval Arithmetic and Their Implications for Floating-Point Standardization: We consider the prospect of a processor that can perform interval arithmetic at the same speed as conventional floating-point arithmetic. This makes it possible for all arithmetic to be performed with the superior security of interval methods without any penalty in speed. In such a situation the IEEE floating-point standard needs to be compared with a version of floating-point arithmetic that is ideal for the purpose of interval arithmetic. Such a comparison requires a succinct and complete exposition of interval arithmetic according to its recent developments. We present such an exposition in this paper. We conclude that the directed roundings toward the infinities and the definition of division by the signed zeros are valuable features of the standard. Because the operations of interval arithmetic are always defined, exceptions do not arise. As a result neither Nans nor exceptions are needed. Of the status flags, only the inexact flag may be useful. Denormalized numbers seem to have no use for interval arithmetic; in the use of interval constraints, they are a handicap.<|reference_end|> | arxiv | @article{van emden2002new,
title={New Developments in Interval Arithmetic and Their Implications for
Floating-Point Standardization},
author={M.H. van Emden},
journal={arXiv preprint arXiv:cs/0210015},
year={2002},
number={DCS-273-IR},
archivePrefix={arXiv},
eprint={cs/0210015},
primaryClass={cs.NA}
} | van emden2002new |
arxiv-670804 | cs/0210016 | Compact Floor-Planning via Orderly Spanning Trees | <|reference_start|>Compact Floor-Planning via Orderly Spanning Trees: Floor-planning is a fundamental step in VLSI chip design. Based upon the concept of orderly spanning trees, we present a simple O(n)-time algorithm to construct a floor-plan for any n-node plane triangulation. In comparison with previous floor-planning algorithms in the literature, our solution is not only simpler in the algorithm itself, but also produces floor-plans which require fewer module types. An equally important aspect of our new algorithm lies in its ability to fit the floor-plan area in a rectangle of size (n-1)x(2n+1)/3. Lower bounds on the worst-case area for floor-planning any plane triangulation are also provided in the paper.<|reference_end|> | arxiv | @article{liao2002compact,
title={Compact Floor-Planning via Orderly Spanning Trees},
author={Chien-Chih Liao, Hsueh-I Lu, Hsu-Chun Yen},
journal={Journal of Algorithms, 48(2):441-451, 2003},
year={2002},
doi={10.1016/S0196-6774(03)00057-9},
archivePrefix={arXiv},
eprint={cs/0210016},
primaryClass={cs.DS cs.CG}
} | liao2002compact |
arxiv-670805 | cs/0210017 | A New Interpretation of Amdahl's Law and Geometric Scalability | <|reference_start|>A New Interpretation of Amdahl's Law and Geometric Scalability: The multiprocessor effect refers to the loss of computing cycles due to processing overhead. Amdahl's law and the Multiprocessing Factor (MPF) are two scaling models used in industry and academia for estimating multiprocessor capacity in the presence of this multiprocessor effect. Both models express different laws of diminishing returns. Amdahl's law identifies diminishing processor capacity with a fixed degree of serialization in the workload, while the MPF model treats it as a constant geometric ratio. The utility of both models for performance evaluation stems from the presence of a single parameter that can be determined easily from a small set of benchmark measurements. This utility, however, is marred by a dilemma. The two models produce different results, especially for large processor configurations that are so important for today's applications. The question naturally arises: Which of these two models is the correct one to use? Ignoring this question merely reduces capacity prediction to arbitrary curve-fitting. Removing the dilemma requires a dynamical interpretation of these scaling models. We present a physical interpretation based on queueing theory and show that Amdahl's law corresponds to synchronous queueing in a bus model while the MPF model belongs to a Coxian server model. The latter exhibits unphysical effects such as sublinear response times hence, we caution against its use for large multiprocessor configurations.<|reference_end|> | arxiv | @article{gunther2002a,
title={A New Interpretation of Amdahl's Law and Geometric Scalability},
author={Neil J. Gunther},
journal={arXiv preprint arXiv:cs/0210017},
year={2002},
number={PDC-TR190402},
archivePrefix={arXiv},
eprint={cs/0210017},
primaryClass={cs.DC cs.PF}
} | gunther2002a |
arxiv-670806 | cs/0210018 | User software for the next generation | <|reference_start|>User software for the next generation: New generations of neutron scattering sources and instrumentation are providing challenges in data handling for user software. Time-of-Flight instruments used at pulsed sources typically produce hundreds or thousands of channels of data for each detector segment. New instruments are being designed with thousands to hundreds of thousands of detector segments. High intensity neutron sources make possible parametric studies and texture studies which further increase data handling requirements. The Integrated Spectral Analysis Workbench (ISAW) software developed at Argonne handles large numbers of spectra simultaneously while providing operations to reduce, sort, combine and export the data. It includes viewers to inspect the data in detail in real time. ISAW uses existing software components and packages where feasible and takes advantage of the excellent support for user interface design and network communication in Java. The included scripting language simplifies repetitive operations for analyzing many files related to a given experiment. Recent additions to ISAW include a contour view, a time-slice table view, routines for finding and fitting peaks in data, and support for data from other facilities using the NeXus format. In this paper, I give an overview of features and planned improvements of ISAW. Details of some of the improvements are covered in other presentations at this conference.<|reference_end|> | arxiv | @article{worlton2002user,
title={User software for the next generation},
author={T. G. Worlton, A. Chatterjee, J. P. Hammonds, P. F. Peterson, D. J.
Mikkelson, and R. L. Mikkelson},
journal={arXiv preprint arXiv:cs/0210018},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210018},
primaryClass={cs.GR cs.CE}
} | worlton2002user |
arxiv-670807 | cs/0210019 | A Historic Name-Trail Service | <|reference_start|>A Historic Name-Trail Service: People change the identifiers through which they are reachable online as they change jobs or residences or Internet service providers. This kind of personal mobility makes reaching people online error-prone. As people move, they do not always know who or what has cached their now obsolete identifiers so as to inform them of the move. Use of these old identifiers can cause delivery failure of important messages, or worse, may cause delivery of messages to unintended recipients. For example, a sensitive email message sent to my now obsolete work address at a former place of employment may reach my unfriendly former boss instead of me. In this paper we describe HINTS, a historic name-trail service. This service provides a persistent way to name willing participants online using today's transient online identifiers. HINTS accomplishes this by connecting together the names a person uses along with the times during which those names were valid for the person, thus giving people control over the historic use of their names. A correspondent who wishes to reach a mobile person can use an obsolete online name for that person, qualified with a time at which the online name was successfully used; HINTS resolves this historic name to a current valid online identifier for the intended recipient, if that recipient has chosen to leave a name trail in HINTS.<|reference_end|> | arxiv | @article{maniatis2002a,
title={A Historic Name-Trail Service},
author={Petros Maniatis and Mary Baker},
journal={arXiv preprint arXiv:cs/0210019},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210019},
primaryClass={cs.NI cs.DC}
} | maniatis2002a |
arxiv-670808 | cs/0210020 | Tetris is Hard, Even to Approximate | <|reference_start|>Tetris is Hard, Even to Approximate: In the popular computer game of Tetris, the player is given a sequence of tetromino pieces and must pack them into a rectangular gameboard initially occupied by a given configuration of filled squares; any completely filled row of the gameboard is cleared and all pieces above it drop by one row. We prove that in the offline version of Tetris, it is NP-complete to maximize the number of cleared rows, maximize the number of tetrises (quadruples of rows simultaneously filled and cleared), minimize the maximum height of an occupied square, or maximize the number of pieces placed before the game ends. We furthermore show the extreme inapproximability of the first and last of these objectives to within a factor of p^(1-epsilon), when given a sequence of p pieces, and the inapproximability of the third objective to within a factor of (2 - epsilon), for any epsilon>0. Our results hold under several variations on the rules of Tetris, including different models of rotation, limitations on player agility, and restricted piece sets.<|reference_end|> | arxiv | @article{demaine2002tetris,
title={Tetris is Hard, Even to Approximate},
author={Erik D. Demaine, Susan Hohenberger, David Liben-Nowell},
journal={arXiv preprint arXiv:cs/0210020},
year={2002},
number={MIT-LCS-TR-865},
archivePrefix={arXiv},
eprint={cs/0210020},
primaryClass={cs.CC cs.CG cs.DM}
} | demaine2002tetris |
arxiv-670809 | cs/0210021 | Reconciling MPEG-7 and MPEG-21 Semantics through a Common Event-Aware Metadata Model | <|reference_start|>Reconciling MPEG-7 and MPEG-21 Semantics through a Common Event-Aware Metadata Model: The "event" concept appears repeatedly when developing metadata models for the description and management of multimedia content. During the typical life cycle of multimedia content, events occur at many different levels - from the events which happen during content creation (directing, acting, camera panning and zooming) to the events which happen to the physical form (acquisition, relocation, damage of film or video) to the digital conversion, reformatting, editing and repackaging events, to the events which are depicted in the actual content (political, news, sporting) to the usage, ownership and copyright agreement events and even the metadata attribution events. Support is required within both MPEG-7 and MPEG-21 for the clear and unambiguous description of all of these event types which may occur at widely different levels of nesting and granularity. In this paper we first describe an event-aware model (the ABC model) which is capable of modeling and yet clearly differentiating between all of these, often recursive and overlapping events. We then illustrate how this model can be used as the foundation to facilitate semantic interoperability between MPEG-7 and MPEG-21. By expressing the semantics of both MPEG-7 and MPEG-21 metadata terms in RDF Schema (and some DAML+OIL extensions) and attaching the MPEG-7 and MPEG-21 class and property hierarchies to the appropriate top-level classes and properties of the ABC model, we are essentially able to define a single distributed machine-understandable ontology, which will enable interoperability of data and services across the entire multimedia content delivery chain.<|reference_end|> | arxiv | @article{hunter2002reconciling,
title={Reconciling MPEG-7 and MPEG-21 Semantics through a Common Event-Aware
Metadata Model},
author={Jane Hunter},
journal={arXiv preprint arXiv:cs/0210021},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210021},
primaryClass={cs.MM cs.DL}
} | hunter2002reconciling |
arxiv-670810 | cs/0210022 | An Elementary Fragment of Second-Order Lambda Calculus | <|reference_start|>An Elementary Fragment of Second-Order Lambda Calculus: A fragment of second-order lambda calculus (System F) is defined that characterizes the elementary recursive functions. Type quantification is restricted to be non-interleaved and stratified, i.e., the types are assigned levels, and a quantified variable can only be instantiated by a type of smaller level, with a slightly liberalized treatment of the level zero.<|reference_end|> | arxiv | @article{aehlig2002an,
title={An Elementary Fragment of Second-Order Lambda Calculus},
author={Klaus Aehlig, Jan Johannsen},
journal={arXiv preprint arXiv:cs/0210022},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210022},
primaryClass={cs.LO}
} | aehlig2002an |
arxiv-670811 | cs/0210023 | Geometric Aspects of Multiagent Systems | <|reference_start|>Geometric Aspects of Multiagent Systems: Recent advances in Multiagent Systems (MAS) and Epistemic Logic within Distributed Systems Theory, have used various combinatorial structures that model both the geometry of the systems and the Kripke model structure of models for the logic. Examining one of the simpler versions of these models, interpreted systems, and the related Kripke semantics of the logic $S5_n$ (an epistemic logic with $n$-agents), the similarities with the geometric / homotopy theoretic structure of groupoid atlases is striking. These latter objects arise in problems within algebraic K-theory, an area of algebra linked to the study of decomposition and normal form theorems in linear algebra. They have a natural well structured notion of path and constructions of path objects, etc., that yield a rich homotopy theory.<|reference_end|> | arxiv | @article{porter2002geometric,
title={Geometric Aspects of Multiagent Systems},
author={Timothy Porter},
journal={arXiv preprint arXiv:cs/0210023},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210023},
primaryClass={cs.MA cs.AI}
} | porter2002geometric |
arxiv-670812 | cs/0210024 | The Lazy Bureaucrat Scheduling Problem | <|reference_start|>The Lazy Bureaucrat Scheduling Problem: We introduce a new class of scheduling problems in which the optimization is performed by the worker (single ``machine'') who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is ``lazy''), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise both in the preemptive and nonpreemptive settings. The resulting class of ``perverse'' scheduling problems, which we denote ``Lazy Bureaucrat Problems,'' gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules.<|reference_end|> | arxiv | @article{arkin2002the,
title={The Lazy Bureaucrat Scheduling Problem},
author={Esther M. Arkin, Michael A. Bender, Joseph S. B. Mitchell and Steven
S. Skiena},
journal={arXiv preprint arXiv:cs/0210024},
year={2002},
doi={10.1016/S0890-5401(03)00060-9},
archivePrefix={arXiv},
eprint={cs/0210024},
primaryClass={cs.DS cs.DM}
} | arkin2002the |
arxiv-670813 | cs/0210025 | An Algorithm for Pattern Discovery in Time Series | <|reference_start|>An Algorithm for Pattern Discovery in Time Series: We present a new algorithm for discovering patterns in time series and other sequential data. We exhibit a reliable procedure for building the minimal set of hidden, Markovian states that is statistically capable of producing the behavior exhibited in the data -- the underlying process's causal states. Unlike conventional methods for fitting hidden Markov models (HMMs) to data, our algorithm makes no assumptions about the process's causal architecture (the number of hidden states and their transition structure), but rather infers it from the data. It starts with assumptions of minimal structure and introduces complexity only when the data demand it. Moreover, the causal states it infers have important predictive optimality properties that conventional HMM states lack. We introduce the algorithm, review the theory behind it, prove its asymptotic reliability, use large deviation theory to estimate its rate of convergence, and compare it to other algorithms which also construct HMMs from data. We also illustrate its behavior on an example process, and report selected numerical results from an implementation.<|reference_end|> | arxiv | @article{shalizi2002an,
title={An Algorithm for Pattern Discovery in Time Series},
author={Cosma Rohilla Shalizi, Kristina Lisa Shalizi, and James P. Crutchfield},
journal={arXiv preprint arXiv:cs/0210025},
year={2002},
number={SFI Working Paper 02-10-060},
archivePrefix={arXiv},
eprint={cs/0210025},
primaryClass={cs.LG cs.CL}
} | shalizi2002an |
arxiv-670814 | cs/0210026 | Encoding a Taxonomy of Web Attacks with Different-Length Vectors | <|reference_start|>Encoding a Taxonomy of Web Attacks with Different-Length Vectors: Web attacks, i.e. attacks exclusively using the HTTP protocol, are rapidly becoming one of the fundamental threats for information systems connected to the Internet. When the attacks suffered by web servers through the years are analyzed, it is observed that most of them are very similar, using a reduced number of attacking techniques. It is generally agreed that classification can help designers and programmers to better understand attacks and build more secure applications. As an effort in this direction, a new taxonomy of web attacks is proposed in this paper, with the objective of obtaining a practically useful reference framework for security applications. The use of the taxonomy is illustrated by means of multiplatform real world web attack examples. Along with this taxonomy, important features of each attack category are discussed. A suitable semantic-dependent web attack encoding scheme is defined that uses different-length vectors. Possible applications are described, which might benefit from this taxonomy and encoding scheme, such as intrusion detection systems and application firewalls.<|reference_end|> | arxiv | @article{alvarez2002encoding,
title={Encoding a Taxonomy of Web Attacks with Different-Length Vectors},
author={Gonzalo Alvarez and Slobodan Petrovic},
journal={Computers and Security 22, 435-449, 2003},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210026},
primaryClass={cs.CR cs.AI}
} | alvarez2002encoding |
arxiv-670815 | cs/0210027 | A uniform approach to logic programming semantics | <|reference_start|>A uniform approach to logic programming semantics: Part of the theory of logic programming and nonmonotonic reasoning concerns the study of fixed-point semantics for these paradigms. Several different semantics have been proposed during the last two decades, and some have been more successful and acknowledged than others. The rationales behind those various semantics have been manifold, depending on one's point of view, which may be that of a programmer or inspired by commonsense reasoning, and consequently the constructions which lead to these semantics are technically very diverse, and the exact relationships between them have not yet been fully understood. In this paper, we present a conceptually new method, based on level mappings, which allows to provide uniform characterizations of different semantics for logic programs. We will display our approach by giving new and uniform characterizations of some of the major semantics, more particular of the least model semantics for definite programs, of the Fitting semantics, and of the well-founded semantics. A novel characterization of the weakly perfect model semantics will also be provided.<|reference_end|> | arxiv | @article{hitzler2002a,
title={A uniform approach to logic programming semantics},
author={Pascal Hitzler and Matthias Wendt (Artificial Intelligence Institute,
Dresden University of Technology, Germany)},
journal={arXiv preprint arXiv:cs/0210027},
year={2002},
number={WV-02-14},
archivePrefix={arXiv},
eprint={cs/0210027},
primaryClass={cs.AI cs.LO}
} | hitzler2002a |
arxiv-670816 | cs/0210028 | Equivalences Among Aggregate Queries with Negation | <|reference_start|>Equivalences Among Aggregate Queries with Negation: Query equivalence is investigated for disjunctive aggregate queries with negated subgoals, constants and comparisons. A full characterization of equivalence is given for the aggregation functions count, max, sum, prod, toptwo and parity. A related problem is that of determining, for a given natural number N, whether two given queries are equivalent over all databases with at most N constants. We call this problem bounded equivalence. A complete characterization of decidability of bounded equivalence is given. In particular, it is shown that this problem is decidable for all the above aggregation functions as well as for count distinct and average. For quasilinear queries (i.e., queries where predicates that occur positively are not repeated) it is shown that equivalence can be decided in polynomial time for the aggregation functions count, max, sum, parity, prod, toptwo and average. A similar result holds for count distinct provided that a few additional conditions hold. The results are couched in terms of abstract characteristics of aggregation functions, and new proof techniques are used. Finally, the results above also imply that equivalence, under bag-set semantics, is decidable for non-aggregate queries with negation.<|reference_end|> | arxiv | @article{cohen2002equivalences,
title={Equivalences Among Aggregate Queries with Negation},
author={Sara Cohen, Werner Nutt, Yehoshua Sagiv},
journal={arXiv preprint arXiv:cs/0210028},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210028},
primaryClass={cs.DB cs.LO}
} | cohen2002equivalences |
arxiv-670817 | cs/0210029 | Integration and interoperability accessing electronic information resources in science and technology: the proposal of Brazilian Digital Library | <|reference_start|>Integration and interoperability accessing electronic information resources in science and technology: the proposal of Brazilian Digital Library: This paper describes technological and methodological options to achieve interoperability in accessing electronic information resources, available in Internet, in the scope of Brazilian Digital Library in Science and Technology Project - BDL, developed by Brazilian Institute for Scientific and Technical Information - IBICT. It stresses the impact of the Web in the publishing and communication processes in science and technology and also in the information systems and libraries. The work points out the two major objectives of the BDL Project: facilitates electronic publishing of different full text materials such as theses, journal articles, conference papers,grey literature - by Brazilian scientific community, so amplifying their nationally and internationally visibility; and achieving, through a unified gateway, thus avoiding a user to navigate and query across different information resources individually. The work explains technological options and standards that will assure interoperability in this context.<|reference_end|> | arxiv | @article{marcondes2002integration,
title={Integration and interoperability accessing electronic information
resources in science and technology: the proposal of Brazilian Digital
Library},
author={Carlos H. Marcondes, Luis Fernando Sayao},
journal={arXiv preprint arXiv:cs/0210029},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210029},
primaryClass={cs.DL}
} | marcondes2002integration |
arxiv-670818 | cs/0210030 | Intelligence and Cooperative Search by Coupled Local Minimizers | <|reference_start|>Intelligence and Cooperative Search by Coupled Local Minimizers: We show how coupling of local optimization processes can lead to better solutions than multi-start local optimization consisting of independent runs. This is achieved by minimizing the average energy cost of the ensemble, subject to synchronization constraints between the state vectors of the individual local minimizers. From an augmented Lagrangian which incorporates the synchronization constraints both as soft and hard constraints, a network is derived wherein the local minimizers interact and exchange information through the synchronization constraints. From the viewpoint of neural networks, the array can be considered as a Lagrange programming network for continuous optimization and as a cellular neural network (CNN). The penalty weights associated with the soft state synchronization constraints follow from the solution to a linear program. This expresses that the energy cost of the ensemble should maximally decrease. In this way successful local minimizers can implicitly impose their state to the others through a mechanism of master-slave dynamics resulting into a cooperative search mechanism. Improved information spreading within the ensemble is obtained by applying the concept of small-world networks. This work suggests, in an interdisciplinary context, the importance of information exchange and state synchronization within ensembles, towards issues as evolution, collective behaviour, optimality and intelligence.<|reference_end|> | arxiv | @article{suykens2002intelligence,
title={Intelligence and Cooperative Search by Coupled Local Minimizers},
author={J.A.K. Suykens, J. Vandewalle, B. De Moor},
journal={Int. J. Bifurcation and Chaos, Vol.11, No.8, pp.2133-2144, 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210030},
primaryClass={cs.AI cs.MA cs.NE}
} | suykens2002intelligence |
arxiv-670819 | cs/0210031 | The Weaves Reconfigurable Programming Framework | <|reference_start|>The Weaves Reconfigurable Programming Framework: This research proposes a language independent intra-process framework for object based composition of unmodified code modules. Intuitively, the two major programming models, threads and processes, can be considered as extremes along a sharing axis. Multiple threads through a process share all global state, whereas instances of a process (or independent processes) share no global state. Weaves provide the generalized framework that allows arbitrary (selective) sharing of state between multiple control flows through a process. The Weaves framework supports multiple independent components in a single process, with flexible state sharing and scheduling, all of which is achieved without requiring any modification to existing code bases. Furthermore, the framework allows dynamic instantiation of code modules and control flows through them. In effect, weaves create intra-process modules (similar to objects in OOP) from code written in any language. The Weaves paradigm allows objects to be arbitrarily shared, it is a true superset of both processes as well as threads, with code sharing and fast context switching time similar to threads. Weaves does not require any special support from either the language or application code, practically any code can be weaved. Weaves also include support for fast automatic checkpointing and recovery with no application support. This paper presents the elements of the Weaves framework and results from our implementation that works by reverse-analyzing source-code independent ELF object files. The current implementation has been validated over Sweep3D, a benchmark for 3D discrete ordinates neutron transport [Koch et al., 1992], and a user-level port of the Linux 2.4 family kernel TCP/IP protocol stack.<|reference_end|> | arxiv | @article{varadarajan2002the,
title={The Weaves Reconfigurable Programming Framework},
author={Srinidhi Varadarajan},
journal={arXiv preprint arXiv:cs/0210031},
year={2002},
archivePrefix={arXiv},
eprint={cs/0210031},
primaryClass={cs.PL cs.OS}
} | varadarajan2002the |
arxiv-670820 | cs/0211001 | Fast and Simple Computation of All Longest Common Subsequences | <|reference_start|>Fast and Simple Computation of All Longest Common Subsequences: This paper shows that a simple algorithm produces the {\em all-prefixes-LCSs-graph} in $O(mn)$ time for two input sequences of size $m$ and $n$. Given any prefix $p$ of the first input sequence and any prefix $q$ of the second input sequence, all longest common subsequences (LCSs) of $p$ and $q$ can be generated in time proportional to the output size, once the all-prefixes-LCSs-graph has been constructed. The problem can be solved in the context of generating all the distinct character strings that represent an LCS or in the context of generating all ways of embedding an LCS in the two input strings.<|reference_end|> | arxiv | @article{greenberg2002fast,
title={Fast and Simple Computation of All Longest Common Subsequences},
author={Ronald I. Greenberg},
journal={arXiv preprint arXiv:cs/0211001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211001},
primaryClass={cs.DS}
} | greenberg2002fast |
arxiv-670821 | cs/0211002 | Programming and Verifying Subgame Perfect Mechanisms | <|reference_start|>Programming and Verifying Subgame Perfect Mechanisms: An extension of the WHILE-language is developed for programming game-theoretic mechanisms involving multiple agents. Examples of such mechanisms include auctions, voting procedures, and negotiation protocols. A structured operational semantics is provided in terms of extensive games of almost perfect information. Hoare-style partial correctness assertions are proposed to reason about the correctness of these mechanisms, where correctness is interpreted as the existence of a subgame perfect equilibrium. Using an extensional approach to pre- and postconditions, we show that an extension of Hoare's original calculus is sound and complete for reasoning about subgame perfect equilibria in game-theoretic mechanisms.<|reference_end|> | arxiv | @article{pauly2002programming,
title={Programming and Verifying Subgame Perfect Mechanisms},
author={Marc Pauly},
journal={arXiv preprint arXiv:cs/0211002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211002},
primaryClass={cs.LO cs.GT}
} | pauly2002programming |
arxiv-670822 | cs/0211003 | Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm | <|reference_start|>Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm: The Markov Blanket Bayesian Classifier is a recently-proposed algorithm for construction of probabilistic classifiers. This paper presents an empirical comparison of the MBBC algorithm with three other Bayesian classifiers: Naive Bayes, Tree-Augmented Naive Bayes and a general Bayesian network. All of these are implemented using the K2 framework of Cooper and Herskovits. The classifiers are compared in terms of their performance (using simple accuracy measures and ROC curves) and speed, on a range of standard benchmark data sets. It is concluded that MBBC is competitive in terms of speed and accuracy with the other algorithms considered.<|reference_end|> | arxiv | @article{madden2002evaluation,
title={Evaluation of the Performance of the Markov Blanket Bayesian Classifier
Algorithm},
author={Michael G. Madden},
journal={arXiv preprint arXiv:cs/0211003},
year={2002},
number={NUIG-IT-011002},
archivePrefix={arXiv},
eprint={cs/0211003},
primaryClass={cs.LG}
} | madden2002evaluation |
arxiv-670823 | cs/0211004 | The DLV System for Knowledge Representation and Reasoning | <|reference_start|>The DLV System for Knowledge Representation and Reasoning: This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to $\Delta^P_3$-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report about thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.<|reference_end|> | arxiv | @article{leone2002the,
title={The DLV System for Knowledge Representation and Reasoning},
author={Nicola Leone, Gerald Pfeifer, Wolfgang Faber, Thomas Eiter, Georg
Gottlob, Simona Perri, Francesco Scarcello},
journal={ACM Transactions on Computational Logic 7(3):499-562, 2006},
year={2002},
doi={10.1145/1149114.1149117},
archivePrefix={arXiv},
eprint={cs/0211004},
primaryClass={cs.AI cs.LO cs.PL}
} | leone2002the |
arxiv-670824 | cs/0211005 | Prosody Based Co-analysis for Continuous Recognition of Coverbal Gestures | <|reference_start|>Prosody Based Co-analysis for Continuous Recognition of Coverbal Gestures: Although speech and gesture recognition has been studied extensively, all the successful attempts of combining them in the unified framework were semantically motivated, e.g., keyword-gesture cooccurrence. Such formulations inherited the complexity of natural language processing. This paper presents a Bayesian formulation that uses a phenomenon of gesture and speech articulation for improving accuracy of automatic recognition of continuous coverbal gestures. The prosodic features from the speech signal were coanalyzed with the visual signal to learn the prior probability of co-occurrence of the prominent spoken segments with the particular kinematical phases of gestures. It was found that the above co-analysis helps in detecting and disambiguating visually small gestures, which subsequently improves the rate of continuous gesture recognition. The efficacy of the proposed approach was demonstrated on a large database collected from the weather channel broadcast. This formulation opens new avenues for bottom-up frameworks of multimodal integration.<|reference_end|> | arxiv | @article{kettebekov2002prosody,
title={Prosody Based Co-analysis for Continuous Recognition of Coverbal
Gestures},
author={Sanshzar Kettebekov, Mohammed Yeasin, Rajeev Sharma},
journal={S. Kettebekov, M. Yeasin, and R. Sharma, "Prosody Based
Co-analysis for Continuous Recognition of Coverbal Gestures," presented at
International Conference on Multimodal Interfaces (ICMI'02), Pittsburgh, USA,
2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211005},
primaryClass={cs.CV cs.HC}
} | kettebekov2002prosody |
arxiv-670825 | cs/0211006 | Maximing the Margin in the Input Space | <|reference_start|>Maximing the Margin in the Input Space: We propose a novel criterion for support vector machine learning: maximizing the margin in the input space, not in the feature (Hilbert) space. This criterion is a discriminative version of the principal curve proposed by Hastie et al. The criterion is appropriate in particular when the input space is already a well-designed feature space with rather small dimensionality. The definition of the margin is generalized in order to represent prior knowledge. The derived algorithm consists of two alternating steps to estimate the dual parameters. Firstly, the parameters are initialized by the original SVM. Then one set of parameters is updated by Newton-like procedure, and the other set is updated by solving a quadratic programming problem. The algorithm converges in a few steps to a local optimum under mild conditions and it preserves the sparsity of support vectors. Although the complexity to calculate temporal variables increases the complexity to solve the quadratic programming problem for each step does not change. It is also shown that the original SVM can be seen as a special case. We further derive a simplified algorithm which enables us to use the existing code for the original SVM.<|reference_end|> | arxiv | @article{akaho2002maximing,
title={Maximing the Margin in the Input Space},
author={Shotaro Akaho (AIST Neuroscience Research Institute)},
journal={arXiv preprint arXiv:cs/0211006},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211006},
primaryClass={cs.AI cs.LG}
} | akaho2002maximing |
arxiv-670826 | cs/0211007 | Approximating Incomplete Kernel Matrices by the em Algorithm | <|reference_start|>Approximating Incomplete Kernel Matrices by the em Algorithm: In biological data, it is often the case that observed data are available only for a subset of samples. When a kernel matrix is derived from such data, we have to leave the entries for unavailable samples as missing. In this paper, we make use of a parametric model of kernel matrices, and estimate missing entries by fitting the model to existing entries. The parametric model is created as a set of spectral variants of a complete kernel matrix derived from another information source. For model fitting, we adopt the em algorithm based on the information geometry of positive definite matrices. We will report promising results on bacteria clustering experiments using two marker sequences: 16S and gyrB.<|reference_end|> | arxiv | @article{tsuda2002approximating,
title={Approximating Incomplete Kernel Matrices by the em Algorithm},
author={Koji Tsuda, Shotaro Akaho and Kiyoshi Asai (AIST)},
journal={arXiv preprint arXiv:cs/0211007},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211007},
primaryClass={cs.LG}
} | tsuda2002approximating |
arxiv-670827 | cs/0211008 | Can the whole brain be simpler than its "parts"? | <|reference_start|>Can the whole brain be simpler than its "parts"?: This is the first in a series of connected papers discussing the problem of a dynamically reconfigurable universal learning neurocomputer that could serve as a computational model for the whole human brain. The whole series is entitled "The Brain Zero Project. My Brain as a Dynamically Reconfigurable Universal Learning Neurocomputer." (For more information visit the website www.brain0.com.) This introductory paper is concerned with general methodology. Its main goal is to explain why it is critically important for both neural modeling and cognitive modeling to pay much attention to the basic requirements of the whole brain as a complex computing system. The author argues that it can be easier to develop an adequate computational model for the whole "unprogrammed" (untrained) human brain than to find adequate formal representations of some nontrivial parts of brain's performance. (In the same way as, for example, it is easier to describe the behavior of a complex analytical function than the behavior of its real and/or imaginary part.) The "curse of dimensionality" that plagues purely phenomenological ("brainless") cognitive theories is a natural penalty for an attempt to represent insufficiently large parts of brain's performance in a state space of insufficiently high dimensionality. A "partial" modeler encounters "Catch 22." An attempt to simplify a cognitive problem by artificially reducing its dimensionality makes the problem more difficult.<|reference_end|> | arxiv | @article{eliashberg2002can,
title={Can the whole brain be simpler than its "parts"?},
author={Victor Eliashberg},
journal={arXiv preprint arXiv:cs/0211008},
year={2002},
number={AER0-02-10},
archivePrefix={arXiv},
eprint={cs/0211008},
primaryClass={cs.AI}
} | eliashberg2002can |
arxiv-670828 | cs/0211009 | Improved Phylogeny Comparisons: Non-Shared Edges Nearest Neighbor Interchanges, and Subtree Transfers | <|reference_start|>Improved Phylogeny Comparisons: Non-Shared Edges Nearest Neighbor Interchanges, and Subtree Transfers: The number of the non-shared edges of two phylogenies is a basic measure of the dissimilarity between the phylogenies. The non-shared edges are also the building block for approximating a more sophisticated metric called the nearest neighbor interchange (NNI) distance. In this paper, we give the first subquadratic-time algorithm for finding the non-shared edges, which are then used to speed up the existing approximating algorithm for the NNI distance from $O(n^2)$ time to $O(n \log n)$ time. Another popular distance metric for phylogenies is the subtree transfer (STT) distance. Previous work on computing the STT distance considered degree-3 trees only. We give an approximation algorithm for the STT distance for degree-$d$ trees with arbitrary $d$ and with generalized STT operations.<|reference_end|> | arxiv | @article{hon2002improved,
title={Improved Phylogeny Comparisons: Non-Shared Edges Nearest Neighbor
Interchanges, and Subtree Transfers},
author={Wing-Kai Hon, Ming-Yang Kao, Tak-Wah Lam, Wing-Kin Sung, Siu-Ming Yiu},
journal={arXiv preprint arXiv:cs/0211009},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211009},
primaryClass={cs.DS}
} | hon2002improved |
arxiv-670829 | cs/0211010 | Efficient Tree Layout in a Multilevel Memory Hierarchy | <|reference_start|>Efficient Tree Layout in a Multilevel Memory Hierarchy: We consider the problem of laying out a tree with fixed parent/child structure in hierarchical memory. The goal is to minimize the expected number of block transfers performed during a search along a root-to-leaf path, subject to a given probability distribution on the leaves. This problem was previously considered by Gil and Itai, who developed optimal but slow algorithms when the block-transfer size B is known. We present faster but approximate algorithms for the same problem; the fastest such algorithm runs in linear time and produces a solution that is within an additive constant of optimal. In addition, we show how to extend any approximately optimal algorithm to the cache-oblivious setting in which the block-transfer size is unknown to the algorithm. The query performance of the cache-oblivious layout is within a constant factor of the query performance of the optimal known-block-size layout. Computing the cache-oblivious layout requires only logarithmically many calls to the layout algorithm for known block size; in particular, the cache-oblivious layout can be computed in O(N lg N) time, where N is the number of nodes. Finally, we analyze two greedy strategies, and show that they have a performance ratio between Omega(lg B / lg lg B) and O(lg B) when compared to the optimal layout.<|reference_end|> | arxiv | @article{alstrup2002efficient,
title={Efficient Tree Layout in a Multilevel Memory Hierarchy},
author={Stephen Alstrup, Michael A. Bender, Erik D. Demaine, Martin
Farach-Colton, Theis Rauhe, Mikkel Thorup},
journal={arXiv preprint arXiv:cs/0211010},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211010},
primaryClass={cs.DS}
} | alstrup2002efficient |
arxiv-670830 | cs/0211011 | Intersection Types and Lambda Theories | <|reference_start|>Intersection Types and Lambda Theories: We illustrate the use of intersection types as a semantic tool for showing properties of the lattice of lambda theories. Relying on the notion of easy intersection type theory we successfully build a filter model in which the interpretation of an arbitrary simple easy term is any filter which can be described in an uniform way by a predicate. This allows us to prove the consistency of a well-know lambda theory: this consistency has interesting consequences on the algebraic structure of the lattice of lambda theories.<|reference_end|> | arxiv | @article{dezani-ciancaglini2002intersection,
title={Intersection Types and Lambda Theories},
author={M.Dezani-Ciancaglini and S.Lusin},
journal={arXiv preprint arXiv:cs/0211011},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211011},
primaryClass={cs.LO}
} | dezani-ciancaglini2002intersection |
arxiv-670831 | cs/0211012 | Phase Transitions and all that | <|reference_start|>Phase Transitions and all that: The paper (as posted originally) contains several errors. It has been subsequently split into two papers, the corrected (and accepted for publication) versions appear in the archive as papers cs.CC/0503082 and cs.DM/0503083.<|reference_end|> | arxiv | @article{istrate2002phase,
title={Phase Transitions and all that},
author={Gabriel Istrate},
journal={arXiv preprint arXiv:cs/0211012},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211012},
primaryClass={cs.CC}
} | istrate2002phase |
arxiv-670832 | cs/0211013 | Algorithmic scalability in globally constrained conservative parallel discrete event simulations of asynchronous systems | <|reference_start|>Algorithmic scalability in globally constrained conservative parallel discrete event simulations of asynchronous systems: We consider parallel simulations for asynchronous systems employing L processing elements that are arranged on a ring. Processors communicate only among the nearest neighbors and advance their local simulated time only if it is guaranteed that this does not violate causality. In simulations with no constraints, in the infinite L-limit the utilization scales (Korniss et al, PRL 84, 2000); but, the width of the virtual time horizon diverges (i.e., the measurement phase of the algorithm does not scale). In this work, we introduce a moving window global constraint, which modifies the algorithm so that the measurement phase scales as well. We present results of systematic studies in which the system size (i.e., L and the volume load per processor) as well as the constraint are varied. The constraint eliminates the extreme fluctuations in the virtual time horizon, provides a bound on its width, and controls the average progress rate. The width of the window constraint can serve as a tuning parameter that, for a given volume load per processor, could be adjusted to optimize the utilization so as to maximize the efficiency. This result may find numerous applications in modeling the evolution of general spatially extended short-range interacting systems with asynchronous dynamics, including dynamic Monte Carlo studies.<|reference_end|> | arxiv | @article{kolakowska2002algorithmic,
title={Algorithmic scalability in globally constrained conservative parallel
discrete event simulations of asynchronous systems},
author={A. Kolakowska, M. A. Novotny, G. Korniss},
journal={Phys. Rev. E 67, 046703 (2003)},
year={2002},
doi={10.1103/PhysRevE.67.046703},
archivePrefix={arXiv},
eprint={cs/0211013},
primaryClass={cs.DC cond-mat.stat-mech cs.DS physics.comp-ph}
} | kolakowska2002algorithmic |
arxiv-670833 | cs/0211014 | Vanquishing the XCB Question: The Methodology Discovery of the Last Shortest Single Axiom for the Equivalential Calculus | <|reference_start|>Vanquishing the XCB Question: The Methodology Discovery of the Last Shortest Single Axiom for the Equivalential Calculus: With the inclusion of an effective methodology, this article answers in detail a question that, for a quarter of a century, remained open despite intense study by various researchers. Is the formula XCB = e(x,e(e(e(x,y),e(z,y)),z)) a single axiom for the classical equivalential calculus when the rules of inference consist of detachment (modus ponens) and substitution? Where the function e represents equivalence, this calculus can be axiomatized quite naturally with the formulas e(x,x), e(e(x,y),e(y,x)), and e(e(x,y),e(e(y,z),e(x,z))), which correspond to reflexivity, symmetry, and transitivity, respectively. (We note that e(x,x) is dependent on the other two axioms.) Heretofore, thirteen shortest single axioms for classical equivalence of length eleven had been discovered, and XCB was the only remaining formula of that length whose status was undetermined. To show that XCB is indeed such a single axiom, we focus on the rule of condensed detachment, a rule that captures detachment together with an appropriately general, but restricted, form of substitution. The proof we present in this paper consists of twenty-five applications of condensed detachment, completing with the deduction of transitivity followed by a deduction of symmetry. We also discuss some factors that may explain in part why XCB resisted relinquishing its treasure for so long. Our approach relied on diverse strategies applied by the automated reasoning program OTTER. Thus ends the search for shortest single axioms for the equivalential calculus.<|reference_end|> | arxiv | @article{wos2002vanquishing,
title={Vanquishing the XCB Question: The Methodology Discovery of the Last
Shortest Single Axiom for the Equivalential Calculus},
author={Larry Wos, Dolph Ulrich, Branden Fitelson},
journal={arXiv preprint arXiv:cs/0211014},
year={2002},
number={Preprint ANL/MCS-P971-0702},
archivePrefix={arXiv},
eprint={cs/0211014},
primaryClass={cs.LO cs.AI}
} | wos2002vanquishing |
arxiv-670834 | cs/0211015 | XCB, the Last of the Shortest Single Axioms for the Classical Equivalential Calculus | <|reference_start|>XCB, the Last of the Shortest Single Axioms for the Classical Equivalential Calculus: It has long been an open question whether the formula XCB = EpEEEpqErqr is, with the rules of substitution and detachment, a single axiom for the classical equivalential calculus. This paper answers that question affirmatively, thus completing a search for all such eleven-symbol single axioms that began seventy years ago.<|reference_end|> | arxiv | @article{wos2002xcb,,
title={XCB, the Last of the Shortest Single Axioms for the Classical
Equivalential Calculus},
author={Larry Wos, Dolph Ulrich, Branden Fitelson},
journal={arXiv preprint arXiv:cs/0211015},
year={2002},
number={ANL/MCS-P966-0602},
archivePrefix={arXiv},
eprint={cs/0211015},
primaryClass={cs.LO cs.AI}
} | wos2002xcb |
arxiv-670835 | cs/0211016 | Efficient Solving of Quantified Inequality Constraints over the Real Numbers | <|reference_start|>Efficient Solving of Quantified Inequality Constraints over the Real Numbers: Let a quantified inequality constraint over the reals be a formula in the first-order predicate language over the structure of the real numbers, where the allowed predicate symbols are $\leq$ and $<$. Solving such constraints is an undecidable problem when allowing function symbols such $\sin$ or $\cos$. In the paper we give an algorithm that terminates with a solution for all, except for very special, pathological inputs. We ensure the practical efficiency of this algorithm by employing constraint programming techniques.<|reference_end|> | arxiv | @article{ratschan2002efficient,
title={Efficient Solving of Quantified Inequality Constraints over the Real
Numbers},
author={Stefan Ratschan},
journal={arXiv preprint arXiv:cs/0211016},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211016},
primaryClass={cs.LO cs.NA}
} | ratschan2002efficient |
arxiv-670836 | cs/0211017 | Probabilistic Parsing Strategies | <|reference_start|>Probabilistic Parsing Strategies: We present new results on the relation between purely symbolic context-free parsing strategies and their probabilistic counter-parts. Such parsing strategies are seen as constructions of push-down devices from grammars. We show that preservation of probability distribution is possible under two conditions, viz. the correct-prefix property and the property of strong predictiveness. These results generalize existing results in the literature that were obtained by considering parsing strategies in isolation. From our general results we also derive negative results on so-called generalized LR parsing.<|reference_end|> | arxiv | @article{nederhof2002probabilistic,
title={Probabilistic Parsing Strategies},
author={Mark-Jan Nederhof and Giorgio Satta},
journal={arXiv preprint arXiv:cs/0211017},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211017},
primaryClass={cs.CL}
} | nederhof2002probabilistic |
arxiv-670837 | cs/0211018 | Indexing schemes for similarity search: an illustrated paradigm | <|reference_start|>Indexing schemes for similarity search: an illustrated paradigm: We suggest a variation of the Hellerstein--Koutsoupias--Papadimitriou indexability model for datasets equipped with a similarity measure, with the aim of better understanding the structure of indexing schemes for similarity-based search and the geometry of similarity workloads. This in particular provides a unified approach to a great variety of schemes used to index into metric spaces and facilitates their transfer to more general similarity measures such as quasi-metrics. We discuss links between performance of indexing schemes and high-dimensional geometry. The concepts and results are illustrated on a very large concrete dataset of peptide fragments equipped with a biologically significant similarity measure.<|reference_end|> | arxiv | @article{pestov2002indexing,
title={Indexing schemes for similarity search: an illustrated paradigm},
author={Vladimir Pestov and Aleksandar Stojmirovic},
journal={Fundamenta Informaticae Vol. 70 (2006), No. 4, 367-385},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211018},
primaryClass={cs.DS}
} | pestov2002indexing |
arxiv-670838 | cs/0211019 | Schedulers for Rule-based Constraint Programming | <|reference_start|>Schedulers for Rule-based Constraint Programming: We study here schedulers for a class of rules that naturally arise in the context of rule-based constraint programming. We systematically derive a scheduler for them from a generic iteration algorithm of Apt [2000]. We apply this study to so-called membership rules of Apt and Monfroy [2001]. This leads to an implementation that yields for these rules a considerably better performance than their execution as standard CHR rules.<|reference_end|> | arxiv | @article{apt2002schedulers,
title={Schedulers for Rule-based Constraint Programming},
author={Krzysztof R. Apt, Sebastian Brand},
journal={arXiv preprint arXiv:cs/0211019},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211019},
primaryClass={cs.DS cs.PL}
} | apt2002schedulers |
arxiv-670839 | cs/0211020 | Monadic Datalog and the Expressive Power of Languages for Web Information Extraction | <|reference_start|>Monadic Datalog and the Expressive Power of Languages for Web Information Extraction: Research on information extraction from Web pages (wrapping) has seen much activity recently (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping. In this paper, we first study monadic datalog over trees as a wrapping language. We show that this simple language is equivalent to monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO has the right expressiveness required for Web information extraction and propose MSO as a yardstick for evaluating and comparing wrappers. Along the way, several other results on the complexity of query evaluation and query containment for monadic datalog over trees are established, and a simple normal form for this language is presented. Using the above results, we subsequently study the kernel fragment Elog$^-$ of the Elog wrapping language used in the Lixto system (a visual wrapper generator). Curiously, Elog$^-$ exactly captures MSO, yet is easier to use. Indeed, programs in this language can be entirely visually specified.<|reference_end|> | arxiv | @article{gottlob2002monadic,
title={Monadic Datalog and the Expressive Power of Languages for Web
Information Extraction},
author={Georg Gottlob and Christoph Koch},
journal={arXiv preprint arXiv:cs/0211020},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211020},
primaryClass={cs.DB}
} | gottlob2002monadic |
arxiv-670840 | cs/0211021 | Sequent and Hypersequent Calculi for Abelian and Lukasiewicz Logics | <|reference_start|>Sequent and Hypersequent Calculi for Abelian and Lukasiewicz Logics: We present two embeddings of infinite-valued Lukasiewicz logic L into Meyer and Slaney's abelian logic A, the logic of lattice-ordered abelian groups. We give new analytic proof systems for A and use the embeddings to derive corresponding systems for L. These include: hypersequent calculi for A and L and terminating versions of these calculi; labelled single sequent calculi for A and L of complexity co-NP; unlabelled single sequent calculi for A and L.<|reference_end|> | arxiv | @article{metcalfe2002sequent,
title={Sequent and Hypersequent Calculi for Abelian and Lukasiewicz Logics},
author={G. Metcalfe, N. Olivetti and D. Gabbay},
journal={arXiv preprint arXiv:cs/0211021},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211021},
primaryClass={cs.LO}
} | metcalfe2002sequent |
arxiv-670841 | cs/0211022 | Arithmetic, First-Order Logic, and Counting Quantifiers | <|reference_start|>Arithmetic, First-Order Logic, and Counting Quantifiers: This paper gives a thorough overview of what is known about first-order logic with counting quantifiers and with arithmetic predicates. As a main theorem we show that Presburger arithmetic is closed under unary counting quantifiers. Precisely, this means that for every first-order formula phi(y,z_1,...,z_k) over the signature {<,+} there is a first-order formula psi(x,z_1,...,z_k) which expresses over the structure <Nat,<,+> (respectively, over initial segments of this structure) that the variable x is interpreted exactly by the number of possible interpretations of the variable y for which the formula phi(y,z_1,...,z_k) is satisfied. Applying this theorem, we obtain an easy proof of Ruhl's result that reachability (and similarly, connectivity) in finite graphs is not expressible in first-order logic with unary counting quantifiers and addition. Furthermore, the above result on Presburger arithmetic helps to show the failure of a particular version of the Crane Beach conjecture.<|reference_end|> | arxiv | @article{schweikardt2002arithmetic,
title={Arithmetic, First-Order Logic, and Counting Quantifiers},
author={Nicole Schweikardt},
journal={arXiv preprint arXiv:cs/0211022},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211022},
primaryClass={cs.LO}
} | schweikardt2002arithmetic
arxiv-670842 | cs/0211023 | SkyQuery: A WebService Approach to Federate Databases | <|reference_start|>SkyQuery: A WebService Approach to Federate Databases: Traditional science searched for new objects and phenomena that led to discoveries. Tomorrow's science will combine together the large pool of information in scientific archives and make discoveries. Scientists are currently keen to federate together the existing scientific databases. The major challenge in building a federation of these autonomous and heterogeneous databases is system integration. Ineffective integration will result in defunct federations and underutilized scientific data. Astronomy, in particular, has many autonomous archives spread over the Internet. It is now seeking to federate these, with minimal effort, into a Virtual Observatory that will solve complex distributed computing tasks such as answering federated spatial join queries. In this paper, we present SkyQuery, a successful prototype of an evolving federation of astronomy archives. It interoperates using the emerging Web services standard. We describe the SkyQuery architecture and show how it efficiently evaluates a probabilistic federated spatial join query.<|reference_end|> | arxiv | @article{malik2002skyquery:,
title={SkyQuery: A WebService Approach to Federate Databases},
author={Tanu Malik, Alex S. Szalay, Tamas Budavari, Ani R. Thakar},
journal={arXiv preprint arXiv:cs/0211023},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211023},
primaryClass={cs.DB cs.CE}
} | malik2002skyquery: |
arxiv-670843 | cs/0211024 | Narses: A Scalable Flow-Based Network Simulator | <|reference_start|>Narses: A Scalable Flow-Based Network Simulator: Most popular, modern network simulators, such as ns, are targeted towards simulating low-level protocol details. These existing simulators are not intended for simulating large distributed applications with many hosts and many concurrent connections over long periods of simulated time. We introduce a new simulator, Narses, targeted towards large distributed applications. The goal of Narses is to simulate and validate large applications efficiently using network models of varying levels of detail. We introduce several simplifying assumptions that allow our simulator to scale to the needs of large distributed applications while maintaining a reasonable degree of accuracy. Initial results show up to a 45 times speedup while consuming 28% of the memory used by ns. Narses maintains a reasonable degree of accuracy -- within 8% on average.<|reference_end|> | arxiv | @article{giuli2002narses:,
title={Narses: A Scalable Flow-Based Network Simulator},
author={TJ Giuli, Mary Baker},
journal={arXiv preprint arXiv:cs/0211024},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211024},
primaryClass={cs.PF cs.NI}
} | giuli2002narses: |
arxiv-670844 | cs/0211025 | Effective Strong Dimension, Algorithmic Information, and Computational Complexity | <|reference_start|>Effective Strong Dimension, Algorithmic Information, and Computational Complexity: The two most important notions of fractal dimension are {\it Hausdorff dimension}, developed by Hausdorff (1919), and {\it packing dimension}, developed by Tricot (1982). Lutz (2000) has recently proven a simple characterization of Hausdorff dimension in terms of {\it gales}, which are betting strategies that generalize martingales. Imposing various computability and complexity constraints on these gales produces a spectrum of effective versions of Hausdorff dimension. In this paper we show that packing dimension can also be characterized in terms of gales. Moreover, even though the usual definition of packing dimension is considerably more complex than that of Hausdorff dimension, our gale characterization of packing dimension is an exact dual of -- and every bit as simple as -- the gale characterization of Hausdorff dimension. Effectivizing our gale characterization of packing dimension produces a variety of {\it effective strong dimensions}, which are exact duals of the effective dimensions mentioned above. We develop the basic properties of effective strong dimensions and prove a number of results relating them to fundamental aspects of randomness, Kolmogorov complexity, prediction, Boolean circuit-size complexity, polynomial-time degrees, and data compression.<|reference_end|> | arxiv | @article{athreya2002effective,
title={Effective Strong Dimension, Algorithmic Information, and Computational
Complexity},
author={Krishna B. Athreya, John M. Hitchcock, Jack H. Lutz, and Elvira
Mayordomo},
journal={arXiv preprint arXiv:cs/0211025},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211025},
primaryClass={cs.CC}
} | athreya2002effective |
arxiv-670845 | cs/0211026 | How long is a Proof? - A short note | <|reference_start|>How long is a Proof? - A short note: Withdrawn. Silly notion and out of context.<|reference_end|> | arxiv | @article{yaneff2002how,
title={How long is a Proof? - A short note},
author={A. G. Yaneff},
journal={arXiv preprint arXiv:cs/0211026},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211026},
primaryClass={cs.CC}
} | yaneff2002how |
arxiv-670846 | cs/0211027 | Adaptive Development of Koncepts in Virtual Animats: Insights into the Development of Knowledge | <|reference_start|>Adaptive Development of Koncepts in Virtual Animats: Insights into the Development of Knowledge: As part of our effort to study the evolution and development of cognition, we present results derived from synthetic experiments in a virtual laboratory where animats develop koncepts adaptively and ground their meaning through action. We introduce the term "koncept" to avoid the confusion and ambiguity derived from the wide use of the word "concept". We present the models which our animats use to abstract koncepts from perceptions, plastically adapt koncepts, and associate koncepts with actions. In a more philosophical vein, we suggest that knowledge is a property of a cognitive system, not an element, and is therefore observer-dependent.<|reference_end|> | arxiv | @article{gershenson2002adaptive,
title={Adaptive Development of Koncepts in Virtual Animats: Insights into the
Development of Knowledge},
author={Carlos Gershenson},
journal={arXiv preprint arXiv:cs/0211027},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211027},
primaryClass={cs.AI}
} | gershenson2002adaptive |
arxiv-670847 | cs/0211028 | Thinking Adaptive: Towards a Behaviours Virtual Laboratory | <|reference_start|>Thinking Adaptive: Towards a Behaviours Virtual Laboratory: In this paper we name some of the advantages of virtual laboratories, and propose that a Behaviours Virtual Laboratory should be useful for both biologists and AI researchers, offering a new perspective for understanding adaptive behaviour. We present our development of a Behaviours Virtual Laboratory, which at this stage is focused on action selection, and show some experiments to illustrate the properties of our proposal, which can be accessed via the Internet.<|reference_end|> | arxiv | @article{gershenson2002thinking,
title={Thinking Adaptive: Towards a Behaviours Virtual Laboratory},
author={Carlos Gershenson, Pedro Pablo Gonzalez, Jose Negrete},
journal={arXiv preprint arXiv:cs/0211028},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211028},
primaryClass={cs.AI cs.MA}
} | gershenson2002thinking |
arxiv-670848 | cs/0211029 | Modelling intracellular signalling networks using behaviour-based systems and the blackboard architecture | <|reference_start|>Modelling intracellular signalling networks using behaviour-based systems and the blackboard architecture: This paper proposes to model the intracellular signalling networks using a fusion of behaviour-based systems and the blackboard architecture. By virtue of this fusion, the model developed by us, which has been named Cellulat, makes it possible to take into account two essential aspects of the intracellular signalling networks: (1) the cognitive capabilities of certain types of networks' components and (2) the high level of spatial organization of these networks. A simple example of modelling of Ca2+ signalling pathways using Cellulat is presented here. An intracellular signalling virtual laboratory is being developed from Cellulat.<|reference_end|> | arxiv | @article{perez2002modelling,
title={Modelling intracellular signalling networks using behaviour-based
systems and the blackboard architecture},
author={Pedro Pablo Gonzalez Perez, Carlos Gershenson, Maura Cardenas Garcia,
and Jaime Lagunez Otero},
journal={arXiv preprint arXiv:cs/0211029},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211029},
primaryClass={cs.MA q-bio.CB}
} | perez2002modelling |
arxiv-670849 | cs/0211030 | Integration of Computational Techniques for the Modelling of Signal Transduction | <|reference_start|>Integration of Computational Techniques for the Modelling of Signal Transduction: A cell can be seen as an adaptive autonomous agent or as a society of adaptive autonomous agents, where each can exhibit a particular behaviour depending on its cognitive capabilities. We present an intracellular signalling model obtained by integrating several computational techniques into an agent-based paradigm. Cellulat, the model, takes into account two essential aspects of the intracellular signalling networks: cognitive capacities and a spatial organization. Exemplifying the functionality of the system by modelling the EGFR signalling pathway, we discuss the methodology as well as the purposes of an intracellular signalling virtual laboratory, presently under development.<|reference_end|> | arxiv | @article{perez2002integration,
title={Integration of Computational Techniques for the Modelling of Signal
Transduction},
author={Pedro Pablo Gonzalez Perez, Maura Cardenas Garcia, Carlos Gershenson,
Jaime Lagunez-Otero},
journal={arXiv preprint arXiv:cs/0211030},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211030},
primaryClass={cs.MA q-bio.CB}
} | perez2002integration |
arxiv-670850 | cs/0211031 | Redundancy in Logic I: CNF Propositional Formulae | <|reference_start|>Redundancy in Logic I: CNF Propositional Formulae: A knowledge base is redundant if it contains parts that can be inferred from the rest of it. We study the problem of checking whether a CNF formula (a set of clauses) is redundant, that is, it contains clauses that can be derived from the other ones. Any CNF formula can be made irredundant by deleting some of its clauses: what results is an irredundant equivalent subset (I.E.S.) We study the complexity of some related problems: verification, checking existence of a I.E.S. with a given size, checking necessary and possible presence of clauses in I.E.S.'s, and uniqueness. We also consider the problem of redundancy with different definitions of equivalence.<|reference_end|> | arxiv | @article{liberatore2002redundancy,
title={Redundancy in Logic I: CNF Propositional Formulae},
author={Paolo Liberatore},
journal={arXiv preprint arXiv:cs/0211031},
year={2002},
doi={10.1016/j.artint.2004.11.002},
archivePrefix={arXiv},
eprint={cs/0211031},
primaryClass={cs.AI cs.CC}
} | liberatore2002redundancy |
arxiv-670851 | cs/0211032 | Solution Bounds for a Hypothetical Polynomial Time Aproximation Algorithm for the TSP | <|reference_start|>Solution Bounds for a Hypothetical Polynomial Time Aproximation Algorithm for the TSP: Bounds for the optimal tour length for a hypothetical TSP algorithm are derived.<|reference_end|> | arxiv | @article{yaneff2002solution,
title={Solution Bounds for a Hypothetical Polynomial Time Aproximation
Algorithm for the TSP},
author={A. G. Yaneff},
journal={arXiv preprint arXiv:cs/0211032},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211032},
primaryClass={cs.CC}
} | yaneff2002solution |
arxiv-670852 | cs/0211033 | Propositional satisfiability in declarative programming | <|reference_start|>Propositional satisfiability in declarative programming: Answer-set programming (ASP) paradigm is a way of using logic to solve search problems. Given a search problem, to solve it one designs a theory in the logic so that models of this theory represent problem solutions. To compute a solution to a problem one needs to compute a model of the corresponding theory. Several answer-set programming formalisms have been developed on the basis of logic programming with the semantics of stable models. In this paper we show that also the logic of predicate calculus gives rise to effective implementations of the ASP paradigm, similar in spirit to logic programming with stable model semantics and with a similar scope of applicability. Specifically, we propose two logics based on predicate calculus as formalisms for encoding search problems. We show that the expressive power of these logics is given by the class NP-search. We demonstrate how to use them in programming and develop computational tools for model finding. In the case of one of the logics our techniques reduce the problem to that of propositional satisfiability and allow one to use off-the-shelf satisfiability solvers. The language of the other logic has more complex syntax and provides explicit means to model some high-level constraints. For theories in this logic, we designed our own solver that takes advantage of the expanded syntax. We present experimental results demonstrating computational effectiveness of the overall approach.<|reference_end|> | arxiv | @article{east2002propositional,
title={Propositional satisfiability in declarative programming},
author={Deborah East and Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0211033},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211033},
primaryClass={cs.LO cs.AI}
} | east2002propositional |
arxiv-670853 | cs/0211035 | Monadic Style Control Constructs for Inference Systems | <|reference_start|>Monadic Style Control Constructs for Inference Systems: Recent advances in programming languages study and design have established a standard way of grounding computational systems representation in category theory. These formal results led to a better understanding of issues of control and side-effects in functional and imperative languages. Another benefit is a better way of modelling computational effects in logical frameworks. With this analogy in mind, we embark on an investigation of inference systems based on considering inference behaviour as a form of computation. We delineate a categorical formalisation of control constructs in inference systems. This representation emphasises the parallel between the modular articulation of the categorical building blocks (triples) used to account for the inference architecture and the modular composition of cognitive processes.<|reference_end|> | arxiv | @article{chauvet2002monadic,
title={Monadic Style Control Constructs for Inference Systems},
author={Jean-Marie Chauvet},
journal={arXiv preprint arXiv:cs/0211035},
year={2002},
number={DD-2002-1},
archivePrefix={arXiv},
eprint={cs/0211035},
primaryClass={cs.AI cs.PL}
} | chauvet2002monadic |
arxiv-670854 | cs/0211036 | Typical random 3-SAT formulae and the satisfiability threshold | <|reference_start|>Typical random 3-SAT formulae and the satisfiability threshold: We present a new structural (or syntactic) approach for estimating the satisfiability threshold of random 3-SAT formulae. We show its efficiency in obtaining a jump from the previous upper bounds, lowering them to 4.506. The method combines well with other techniques, and also applies to other problems, such as the 3-colourability of random graphs.<|reference_end|> | arxiv | @article{dubois2002typical,
title={Typical random 3-SAT formulae and the satisfiability threshold},
author={Olivier Dubois, Yacine Boufkhad, Jacques Mandler},
journal={arXiv preprint arXiv:cs/0211036},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211036},
primaryClass={cs.DM cs.CC}
} | dubois2002typical |
arxiv-670855 | cs/0211037 | Annotations for HTML to VoiceXML Transcoding: Producing Voice WebPages with Usability in Mind | <|reference_start|>Annotations for HTML to VoiceXML Transcoding: Producing Voice WebPages with Usability in Mind: Web pages contain a large variety of information, but are largely designed for use by graphical web browsers. Mobile access to web-based information often requires presenting HTML web pages using channels that are limited in their graphical capabilities such as small-screens or audio-only interfaces. Content transcoding and annotations have been explored as methods for intelligently presenting HTML documents. Much of this work has focused on transcoding for small-screen devices such as are found on PDAs and cell phones. Here, we focus on the use of annotations and transcoding for presenting HTML content through a voice user interface instantiated in VoiceXML. This transcoded voice interface is designed with an assumption that it will not be used for extended web browsing by voice, but rather to quickly gain directed access to information on web pages. We have found repeated structures that are common in the presentation of data on web pages that are well suited for voice presentation and navigation. In this paper, we describe these structures and their use in an annotation system we have implemented that produces a VoiceXML interface to information originally embedded in HTML documents. We describe the transcoding process used to translate HTML into VoiceXML, including transcoding features we have designed to lead to highly usable VoiceXML code.<|reference_end|> | arxiv | @article{shao2002annotations,
title={Annotations for HTML to VoiceXML Transcoding: Producing Voice WebPages
with Usability in Mind},
author={Zhiyan Shao, Robert Capra, Manuel A. Perez-Quinones},
journal={arXiv preprint arXiv:cs/0211037},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211037},
primaryClass={cs.HC}
} | shao2002annotations |
arxiv-670856 | cs/0211038 | Dynamic Adjustment of the Motivation Degree in an Action Selection Mechanism | <|reference_start|>Dynamic Adjustment of the Motivation Degree in an Action Selection Mechanism: This paper presents a model for dynamic adjustment of the motivation degree, using a reinforcement learning approach, in an action selection mechanism previously developed by the authors. The learning takes place in the modification of a parameter of the model of combination of internal and external stimuli. Experiments that show the claimed properties are presented, using a VR simulation developed for such purposes. The importance of adaptation by learning in action selection is also discussed.<|reference_end|> | arxiv | @article{gershenson2002dynamic,
title={Dynamic Adjustment of the Motivation Degree in an Action Selection
Mechanism},
author={Carlos Gershenson, Pedro Pablo Gonzalez},
journal={arXiv preprint arXiv:cs/0211038},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211038},
primaryClass={cs.AI}
} | gershenson2002dynamic |
arxiv-670857 | cs/0211039 | Action Selection Properties in a Software Simulated Agent | <|reference_start|>Action Selection Properties in a Software Simulated Agent: This article analyses the properties of the Internal Behaviour network, an action selection mechanism previously proposed by the authors, with the aid of a simulation developed for such ends. A brief review of the Internal Behaviour network is followed by the explanation of the implementation of the simulation. Then, experiments are presented and discussed analysing the properties of the action selection in the proposed model.<|reference_end|> | arxiv | @article{garcia2002action,
title={Action Selection Properties in a Software Simulated Agent},
author={Carlos Gershenson Garcia, Pedro Pablo Gonzalez Perez, Jose Negrete
Martinez},
journal={MICAI 2000: Advances in Artificial Intelligence. Lecture Notes
in Artificial Intelligence 1793, pp. 634-648. Springer-Verlag},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211039},
primaryClass={cs.AI}
} | garcia2002action |
arxiv-670858 | cs/0211040 | A Model for Combination of External and Internal Stimuli in the Action Selection of an Autonomous Agent | <|reference_start|>A Model for Combination of External and Internal Stimuli in the Action Selection of an Autonomous Agent: This paper proposes a model for combination of external and internal stimuli for the action selection in an autonomous agent, based on an action selection mechanism previously proposed by the authors. This combination model includes additive and multiplicative elements, which makes it possible to incorporate new properties that enhance the action selection. A given parameter a, which is part of the proposed model, makes it possible to regulate the degree to which the observed external behaviour depends on the internal states of the entity.<|reference_end|> | arxiv | @article{perez2002a,
title={A Model for Combination of External and Internal Stimuli in the Action
Selection of an Autonomous Agent},
author={Pedro Pablo Gonzalez Perez, Jose Negrete Martinez, Ariel Barreiro
Garcia, Carlos Gershenson Garcia},
journal={MICAI 2000: Advances in Artificial Intelligence. Lecture Notes in
Artificial Intelligence 1793, pp. 621-633. Springer-Verlag},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211040},
primaryClass={cs.AI}
} | perez2002a |
arxiv-670859 | cs/0211041 | An Approach to Automatic Indexing of Scientific Publications in High Energy Physics for Database SPIRES HEP | <|reference_start|>An Approach to Automatic Indexing of Scientific Publications in High Energy Physics for Database SPIRES HEP: We introduce an approach to automatic indexing of e-prints based on a pattern-matching technique making extensive use of an Associative Patterns Dictionary (APD), developed by us. Entries in the APD consist of natural language phrases with the same semantic interpretation as a set of keywords from a controlled vocabulary. The method also makes it possible to recognize, within e-prints, formulae written in TeX notation that might also appear as keywords. We present an automatic indexing system, AUTEX, which we have applied to keyword-index e-prints in selected areas in high energy physics (HEP), making use of the DESY-HEPI thesaurus as a controlled vocabulary.<|reference_end|> | arxiv | @article{averin2002an,
title={An Approach to Automatic Indexing of Scientific Publications in High
Energy Physics for Database SPIRES HEP},
author={A.V. Averin (NSI, Moscow), L.A. Vassilevskaya (DESY, Hamburg)},
journal={arXiv preprint arXiv:cs/0211041},
year={2002},
number={DESY L-02-02 (November 2002)},
archivePrefix={arXiv},
eprint={cs/0211041},
primaryClass={cs.IR cs.DL}
} | averin2002an |
arxiv-670860 | cs/0211042 | Database Repairs and Analytic Tableaux | <|reference_start|>Database Repairs and Analytic Tableaux: In this article, we characterize in terms of analytic tableaux the repairs of inconsistent relational databases, that is databases that do not satisfy a given set of integrity constraints. For this purpose we provide closing and opening criteria for branches in tableaux that are built for database instances and their integrity constraints. We use the tableaux based characterization as a basis for consistent query answering, that is for retrieving from the database answers to queries that are consistent wrt the integrity constraints.<|reference_end|> | arxiv | @article{bertossi2002database,
title={Database Repairs and Analytic Tableaux},
author={Leopoldo Bertossi and Camilla Schwind},
journal={arXiv preprint arXiv:cs/0211042},
year={2002},
archivePrefix={arXiv},
eprint={cs/0211042},
primaryClass={cs.DB cs.LO}
} | bertossi2002database |
arxiv-670861 | cs/0212001 | Traveling Salesmen in the Presence of Competition | <|reference_start|>Traveling Salesmen in the Presence of Competition: We propose the ``Competing Salesmen Problem'' (CSP), a 2-player competitive version of the classical Traveling Salesman Problem. This problem arises when considering two competing salesmen instead of just one. The concern for a shortest tour is replaced by the necessity to reach any of the customers before the opponent does. In particular, we consider the situation where players take turns, moving along one edge at a time within a graph G=(V,E). The set of customers is given by a subset V_C ⊆ V of the vertices. At any given time, both players know of their opponent's position. A player wins if he is able to reach a majority of the vertices in V_C before the opponent does. We prove that the CSP is PSPACE-complete, even if the graph is bipartite, and both players start at distance 2 from each other. We show that the starting player may lose the game, even if both players start from the same vertex. For bipartite graphs, we show that the starting player can always avoid a loss. We also show that the second player can avoid losing by more than one customer, when play takes place on a graph that is a tree T, and V_C consists of leaves of T. For the case where T is a star and V_C consists of n leaves of T, we give a simple and fast strategy which is optimal for both players. If V_C consists not only of leaves, the situation is more involved.<|reference_end|> | arxiv | @article{fekete2002traveling,
title={Traveling Salesmen in the Presence of Competition},
author={Sandor P. Fekete, Rudolf Fleischer, Aviezri Fraenkel, and Matthias
Schmitt},
journal={Theoretical Computer Science, 313 (2004), 377-392.},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212001},
primaryClass={cs.CC}
} | fekete2002traveling |
arxiv-670862 | cs/0212002 | Survey propagation: an algorithm for satisfiability | <|reference_start|>Survey propagation: an algorithm for satisfiability: We study the satisfiability of randomly generated formulas formed by $M$ clauses of exactly $K$ literals over $N$ Boolean variables. For a given value of $N$ the problem is known to be most difficult with $\alpha=M/N$ close to the experimental threshold $\alpha_c$ separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when $\alpha$ is close to (but smaller than) $\alpha_c$. We introduce a new type of message passing algorithm which allows to find efficiently a satisfiable assignment of the variables in the difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: it passes messages that are surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic.<|reference_end|> | arxiv | @article{braunstein2002survey,
title={Survey propagation: an algorithm for satisfiability},
author={A. Braunstein, M. Mezard, R. Zecchina},
journal={Random Structures and Algorithms 27, 201-226 (2005)},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212002},
primaryClass={cs.CC cond-mat.stat-mech}
} | braunstein2002survey |
arxiv-670863 | cs/0212003 | Ownership Confinement Ensures Representation Independence for Object-Oriented Programs | <|reference_start|>Ownership Confinement Ensures Representation Independence for Object-Oriented Programs: Dedicated to the memory of Edsger W.Dijkstra. Representation independence or relational parametricity formally characterizes the encapsulation provided by language constructs for data abstraction and justifies reasoning by simulation. Representation independence has been shown for a variety of languages and constructs but not for shared references to mutable state; indeed it fails in general for such languages. This paper formulates representation independence for classes, in an imperative, object-oriented language with pointers, subclassing and dynamic dispatch, class oriented visibility control, recursive types and methods, and a simple form of module. An instance of a class is considered to implement an abstraction using private fields and so-called representation objects. Encapsulation of representation objects is expressed by a restriction, called confinement, on aliasing. Representation independence is proved for programs satisfying the confinement condition. A static analysis is given for confinement that accepts common designs such as the observer and factory patterns. The formalization takes into account not only the usual interface between a client and a class that provides an abstraction but also the interface (often called ``protected'') between the class and its subclasses.<|reference_end|> | arxiv | @article{banerjee2002ownership,
title={Ownership Confinement Ensures Representation Independence for
Object-Oriented Programs},
author={Anindya Banerjee (1), David A. Naumann (2) ((1) Kansas State
University, (2) Stevens Institute of Technology)},
journal={arXiv preprint arXiv:cs/0212003},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212003},
primaryClass={cs.PL}
} | banerjee2002ownership |
arxiv-670864 | cs/0212004 | Minimal-Change Integrity Maintenance Using Tuple Deletions | <|reference_start|>Minimal-Change Integrity Maintenance Using Tuple Deletions: We address the problem of minimal-change integrity maintenance in the context of integrity constraints in relational databases. We assume that integrity-restoration actions are limited to tuple deletions. We identify two basic computational issues: repair checking (is a database instance a repair of a given database?) and consistent query answers (is a tuple an answer to a given query in every repair of a given database?). We study the computational complexity of both problems, delineating the boundary between the tractable and the intractable. We consider denial constraints, general functional and inclusion dependencies, as well as key and foreign key constraints. Our results shed light on the computational feasibility of minimal-change integrity maintenance. The tractable cases should lead to practical implementations. The intractability results highlight the inherent limitations of any integrity enforcement mechanism, e.g., triggers or referential constraint actions, as a way of performing minimal-change integrity maintenance.<|reference_end|> | arxiv | @article{chomicki2002minimal-change,
title={Minimal-Change Integrity Maintenance Using Tuple Deletions},
author={Jan Chomicki and Jerzy Marcinkowski},
journal={arXiv preprint arXiv:cs/0212004},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212004},
primaryClass={cs.DB}
} | chomicki2002minimal-change |
arxiv-670865 | cs/0212005 | Retractions of Types with Many Atoms | <|reference_start|>Retractions of Types with Many Atoms: We define a sound and complete proof system for affine beta-eta-retractions in simple types built over many atoms, and we state simple necessary conditions for arbitrary beta-eta-retractions in simple and polymorphic types.<|reference_end|> | arxiv | @article{regnier2002retractions,
title={Retractions of Types with Many Atoms},
author={Laurent Regnier, Pawel Urzyczyn},
journal={arXiv preprint arXiv:cs/0212005},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212005},
primaryClass={cs.LO}
} | regnier2002retractions |
arxiv-670866 | cs/0212006 | Use of openMosix for parallel I/O balancing on storage in Linux cluster | <|reference_start|>Use of openMosix for parallel I/O balancing on storage in Linux cluster: In this paper I present some experiences with I/O for Linux clustering. In particular, I illustrate the use of the openMosix package, a workload balancer for processes running in a cluster of nodes. I describe some tests for balancing the load of massive I/O storage processes in a cluster with four components. This work has been written for the proceedings of the workshop Linux cluster: the openMosix approach, held at CINECA, Bologna, Italy, on 28 November 2002.<|reference_end|> | arxiv | @article{argentini2002use,
title={Use of openMosix for parallel I/O balancing on storage in Linux cluster},
author={Gianluca Argentini},
journal={arXiv preprint arXiv:cs/0212006},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212006},
primaryClass={cs.DC cs.DB}
} | argentini2002use |
arxiv-670867 | cs/0212007 | Optimized Color Gamuts for Tiled Displays | <|reference_start|>Optimized Color Gamuts for Tiled Displays: We consider the problem of finding a large color space that can be generated by all units in multi-projector tiled display systems. Viewing the problem geometrically as one of finding a large parallelepiped within the intersection of multiple parallelepipeds, and using colorimetric principles to define a volume-based objective function for comparing feasible solutions, we develop an algorithm for finding the optimal gamut in time O(n^3), where n denotes the number of projectors in the system. We also discuss more efficient quasiconvex programming algorithms for alternative objective functions based on maximizing the quality of the color space extrema.<|reference_end|> | arxiv | @article{bern2002optimized,
title={Optimized Color Gamuts for Tiled Displays},
author={Marshall Bern and David Eppstein},
journal={arXiv preprint arXiv:cs/0212007},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212007},
primaryClass={cs.CG cs.GR}
} | bern2002optimized |
arxiv-670868 | cs/0212008 | Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment | <|reference_start|>Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment: Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.<|reference_end|> | arxiv | @article{zhang2002principal,
title={Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent
Space Alignment},
author={Zhenyue Zhang and Hongyuan Zha},
journal={arXiv preprint arXiv:cs/0212008},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212008},
primaryClass={cs.LG cs.AI}
} | zhang2002principal |
arxiv-670869 | cs/0212009 | On the survey-propagation equations for the random K-satisfiability problem | <|reference_start|>On the survey-propagation equations for the random K-satisfiability problem: In this note we study the existence of a solution to the survey-propagation equations for the random K-satisfiability problem for a given instance. We conjecture that when the number of variables goes to infinity, the solution of these equations for a given instance can be approximated by the solution of the corresponding equations on an infinite tree. We conjecture (and we present numerical evidence) that the survey-propagation equations on the infinite tree have a unique solution in the suitable range of parameters.<|reference_end|> | arxiv | @article{parisi2002on,
title={On the survey-propagation equations for the random K-satisfiability
problem},
author={Giorgio Parisi},
journal={arXiv preprint arXiv:cs/0212009},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212009},
primaryClass={cs.CC cond-mat.dis-nn}
} | parisi2002on |
arxiv-670870 | cs/0212010 | JohnnyVon: Self-Replicating Automata in Continuous Two-Dimensional Space | <|reference_start|>JohnnyVon: Self-Replicating Automata in Continuous Two-Dimensional Space: JohnnyVon is an implementation of self-replicating automata in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines but their external relationships are governed by a simulated physics that includes brownian motion, viscosity, and spring-like attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary "seed" pattern is put in a "soup" of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.<|reference_end|> | arxiv | @article{smith2002johnnyvon:,
title={JohnnyVon: Self-Replicating Automata in Continuous Two-Dimensional Space},
author={Arnold Smith (National Research Council of Canada), Peter Turney
(National Research Council of Canada), Robert Ewaschuk (University of
Waterloo)},
journal={arXiv preprint arXiv:cs/0212010},
year={2002},
number={NRC-44953},
archivePrefix={arXiv},
eprint={cs/0212010},
primaryClass={cs.NE cs.CE}
} | smith2002johnnyvon: |
arxiv-670871 | cs/0212011 | Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction: Learning from Labeled and Unlabeled Data | <|reference_start|>Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction: Learning from Labeled and Unlabeled Data: Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. Good performance on this task has been obtained by approaching it as a supervised learning problem. An input document is treated as a set of candidate phrases that must be classified as either keyphrases or non-keyphrases. To classify a candidate phrase as a keyphrase, the most important features (attributes) appear to be the frequency and location of the candidate phrase in the document. Recent work has demonstrated that it is also useful to know the frequency of the candidate phrase as a manually assigned keyphrase for other documents in the same domain as the given document (e.g., the domain of computer science). Unfortunately, this keyphrase-frequency feature is domain-specific (the learning process must be repeated for each new domain) and training-intensive (good performance requires a relatively large number of training documents in the given domain, with manually assigned keyphrases). The aim of the work described here is to remove these limitations. In this paper, I introduce new features that are derived by mining lexical knowledge from a very large collection of unlabeled data, consisting of approximately 350 million Web pages without manually assigned keyphrases. I present experiments that show that the new features result in improved keyphrase extraction, although they are neither domain-specific nor training-intensive.<|reference_end|> | arxiv | @article{turney2002mining,
title={Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction:
Learning from Labeled and Unlabeled Data},
author={Peter D. Turney (National Research Council of Canada)},
journal={arXiv preprint arXiv:cs/0212011},
year={2002},
number={NRC-44947},
archivePrefix={arXiv},
eprint={cs/0212011},
primaryClass={cs.LG cs.IR}
} | turney2002mining |
arxiv-670872 | cs/0212012 | Unsupervised Learning of Semantic Orientation from a Hundred-Billion-Word Corpus | <|reference_start|>Unsupervised Learning of Semantic Orientation from a Hundred-Billion-Word Corpus: The evaluative character of a word is called its semantic orientation. A positive semantic orientation implies desirability (e.g., "honest", "intrepid") and a negative semantic orientation implies undesirability (e.g., "disturbing", "superfluous"). This paper introduces a simple algorithm for unsupervised learning of semantic orientation from extremely large corpora. The method involves issuing queries to a Web search engine and using pointwise mutual information to analyse the results. The algorithm is empirically evaluated using a training corpus of approximately one hundred billion words -- the subset of the Web that is indexed by the chosen search engine. Tested with 3,596 words (1,614 positive and 1,982 negative), the algorithm attains an accuracy of 80%. The 3,596 test words include adjectives, adverbs, nouns, and verbs. The accuracy is comparable with the results achieved by Hatzivassiloglou and McKeown (1997), using a complex four-stage supervised learning algorithm that is restricted to determining the semantic orientation of adjectives.<|reference_end|> | arxiv | @article{turney2002unsupervised,
title={Unsupervised Learning of Semantic Orientation from a
Hundred-Billion-Word Corpus},
author={Peter D. Turney (National Research Council of Canada), Michael L.
Littman (Stowe Research)},
journal={arXiv preprint arXiv:cs/0212012},
year={2002},
number={NRC-44929},
archivePrefix={arXiv},
eprint={cs/0212012},
primaryClass={cs.LG cs.IR}
} | turney2002unsupervised |
arxiv-670873 | cs/0212013 | Learning to Extract Keyphrases from Text | <|reference_start|>Learning to Extract Keyphrases from Text: Many academic journals ask their authors to provide a list of about five to fifteen key words, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a surprisingly wide variety of tasks for which keyphrases are useful, as we discuss in this paper. Recent commercial software, such as Microsoft's Word 97 and Verity's Search 97, includes algorithms that automatically extract keyphrases from documents. In this paper, we approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for this task. The third set of experiments examines the performance of GenEx on the task of metadata generation, relative to the performance of Microsoft's Word 97. The fourth and final set of experiments investigates the performance of GenEx on the task of highlighting, relative to Verity's Search 97. The experimental results support the claim that a specialized learning algorithm (GenEx) can generate better keyphrases than a general-purpose learning algorithm (C4.5) and the non-learning algorithms that are used in commercial software (Word 97 and Search 97).<|reference_end|> | arxiv | @article{turney2002learning,
title={Learning to Extract Keyphrases from Text},
author={Peter D. Turney (National Research Council of Canada)},
journal={arXiv preprint arXiv:cs/0212013},
year={2002},
number={NRC-41622},
archivePrefix={arXiv},
eprint={cs/0212013},
primaryClass={cs.LG cs.IR}
} | turney2002learning |
arxiv-670874 | cs/0212014 | Extraction of Keyphrases from Text: Evaluation of Four Algorithms | <|reference_start|>Extraction of Keyphrases from Text: Evaluation of Four Algorithms: This report presents an empirical evaluation of four algorithms for automatically extracting keywords and keyphrases from documents. The four algorithms are compared using five different collections of documents. For each document, we have a target set of keyphrases, which were generated by hand. The target keyphrases were generated for human readers; they were not tailored for any of the four keyphrase extraction algorithms. Each of the algorithms was evaluated by the degree to which the algorithm's keyphrases matched the manually generated keyphrases. The four algorithms were (1) the AutoSummarize feature in Microsoft's Word 97, (2) an algorithm based on Eric Brill's part-of-speech tagger, (3) the Summarize feature in Verity's Search 97, and (4) NRC's Extractor algorithm. For all five document collections, NRC's Extractor yields the best match with the manually generated keyphrases.<|reference_end|> | arxiv | @article{turney2002extraction,
title={Extraction of Keyphrases from Text: Evaluation of Four Algorithms},
author={Peter D. Turney (National Research Council of Canada)},
journal={arXiv preprint arXiv:cs/0212014},
year={2002},
number={NRC-41550},
archivePrefix={arXiv},
eprint={cs/0212014},
primaryClass={cs.LG cs.IR}
} | turney2002extraction |
arxiv-670875 | cs/0212015 | Answering Subcognitive Turing Test Questions: A Reply to French | <|reference_start|>Answering Subcognitive Turing Test Questions: A Reply to French: Robert French has argued that a disembodied computer is incapable of passing a Turing Test that includes subcognitive questions. Subcognitive questions are designed to probe the network of cultural and perceptual associations that humans naturally develop as we live, embodied and embedded in the world. In this paper, I show how it is possible for a disembodied computer to answer subcognitive questions appropriately, contrary to French's claim. My approach to answering subcognitive questions is to use statistical information extracted from a very large collection of text. In particular, I show how it is possible to answer a sample of subcognitive questions taken from French, by issuing queries to a search engine that indexes about 350 million Web pages. This simple algorithm may shed light on the nature of human (sub-) cognition, but the scope of this paper is limited to demonstrating that French is mistaken: a disembodied computer can answer subcognitive questions.<|reference_end|> | arxiv | @article{turney2002answering,
title={Answering Subcognitive Turing Test Questions: A Reply to French},
author={Peter D. Turney (National Research Council of Canada)},
journal={Journal of Experimental and Theoretical Artificial Intelligence,
(2001), 13 (4), 409-419},
year={2002},
number={NRC-44898},
archivePrefix={arXiv},
eprint={cs/0212015},
primaryClass={cs.CL}
} | turney2002answering |
arxiv-670876 | cs/0212016 | Complexity of the Exact Domatic Number Problem and of the Exact Conveyor Flow Shop Problem | <|reference_start|>Complexity of the Exact Domatic Number Problem and of the Exact Conveyor Flow Shop Problem: We prove that the exact versions of the domatic number problem are complete for the levels of the boolean hierarchy over NP. The domatic number problem, which arises in the area of computer networks, is the problem of partitioning a given graph into a maximum number of disjoint dominating sets. This number is called the domatic number of the graph. We prove that the problem of determining whether or not the domatic number of a given graph is {\em exactly} one of k given values is complete for the 2k-th level of the boolean hierarchy over NP. In particular, for k = 1, it is DP-complete to determine whether or not the domatic number of a given graph equals exactly a given integer. Note that DP is the second level of the boolean hierarchy over NP. We obtain similar results for the exact versions of generalized dominating set problems and of the conveyor flow shop problem. Our reductions apply Wagner's conditions sufficient to prove hardness for the levels of the boolean hierarchy over NP.<|reference_end|> | arxiv | @article{riege2002complexity,
title={Complexity of the Exact Domatic Number Problem and of the Exact Conveyor
Flow Shop Problem},
author={Tobias Riege and Jörg Rothe},
journal={arXiv preprint arXiv:cs/0212016},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212016},
primaryClass={cs.CC}
} | riege2002complexity |
arxiv-670877 | cs/0212017 | Classes of Spatiotemporal Objects and Their Closure Properties | <|reference_start|>Classes of Spatiotemporal Objects and Their Closure Properties: We present a data model for spatio-temporal databases. In this model spatio-temporal data is represented as a finite union of objects described by means of a spatial reference object, a temporal object and a geometric transformation function that determines the change or movement of the reference object in time. We define a number of practically relevant classes of spatio-temporal objects, and give complete results concerning closure under Boolean set operators for these classes. Since only few classes are closed under all set operators, we suggest an extension of the model, which leads to better closure properties, and therefore increased practical applicability. We also discuss a normal form for this extended data model.<|reference_end|> | arxiv | @article{chomicki2002classes,
title={Classes of Spatiotemporal Objects and Their Closure Properties},
author={Jan Chomicki, Sofie Haesevoets, Bart Kuijpers, Peter Revesz},
journal={arXiv preprint arXiv:cs/0212017},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212017},
primaryClass={cs.DB}
} | chomicki2002classes |
arxiv-670878 | cs/0212018 | Real numbers having ultimately periodic representations in abstract numeration systems | <|reference_start|>Real numbers having ultimately periodic representations in abstract numeration systems: Using a genealogically ordered infinite regular language, we know how to represent an interval of R. Numbers having an ultimately periodic representation play a special role in classical numeration systems. The aim of this paper is to characterize the numbers having an ultimately periodic representation in generalized systems built on a regular language. The syntactical properties of these words are also investigated. Finally, we show the equivalence of the classical "theta"-expansions with our generalized representations in some special case related to a Pisot number "theta".<|reference_end|> | arxiv | @article{lecomte2002real,
title={Real numbers having ultimately periodic representations in abstract
numeration systems},
author={P. Lecomte, M. Rigo},
journal={arXiv preprint arXiv:cs/0212018},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212018},
primaryClass={cs.CC cs.CL}
} | lecomte2002real |
arxiv-670879 | cs/0212019 | Thinking, Learning, and Autonomous Problem Solving | <|reference_start|>Thinking, Learning, and Autonomous Problem Solving: Ever increasing computational power will require methods for automatic programming. We present an alternative to genetic programming, based on a general model of thinking and learning. The advantage is that evolution takes place in the space of constructs and can thus exploit the mathematical structures of this space. The model is formalized, and a macro language is presented which allows for a formal yet intuitive description of the problem under consideration. A prototype has been developed to implement the scheme in PERL. This method will lead to a concentration on the analysis of problems, to a more rapid prototyping, to the treatment of new problem classes, and to the investigation of philosophical problems. We see fields of application in nonlinear differential equations, pattern recognition, robotics, model building, and animated pictures.<|reference_end|> | arxiv | @article{becker2002thinking,
title={Thinking, Learning, and Autonomous Problem Solving},
author={Joerg D. Becker},
journal={arXiv preprint arXiv:cs/0212019},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212019},
primaryClass={cs.NE}
} | becker2002thinking
arxiv-670880 | cs/0212020 | Learning Algorithms for Keyphrase Extraction | <|reference_start|>Learning Algorithms for Keyphrase Extraction: Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by Extractor suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications.<|reference_end|> | arxiv | @article{turney2002learning,
title={Learning Algorithms for Keyphrase Extraction},
author={Peter D. Turney (National Research Council of Canada)},
journal={Information Retrieval, (2000), 2 (4), 303-336},
year={2002},
number={NRC-44105},
archivePrefix={arXiv},
eprint={cs/0212020},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2002learning |
arxiv-670881 | cs/0212021 | A Simple Model of Unbounded Evolutionary Versatility as a Largest-Scale Trend in Organismal Evolution | <|reference_start|>A Simple Model of Unbounded Evolutionary Versatility as a Largest-Scale Trend in Organismal Evolution: The idea that there are any large-scale trends in the evolution of biological organisms is highly controversial. It is commonly believed, for example, that there is a large-scale trend in evolution towards increasing complexity, but empirical and theoretical arguments undermine this belief. Natural selection results in organisms that are well adapted to their local environments, but it is not clear how local adaptation can produce a global trend. In this paper, I present a simple computational model, in which local adaptation to a randomly changing environment results in a global trend towards increasing evolutionary versatility. In this model, for evolutionary versatility to increase without bound, the environment must be highly dynamic. The model also shows that unbounded evolutionary versatility implies an accelerating evolutionary pace. I believe that unbounded increase in evolutionary versatility is a large-scale trend in evolution. I discuss some of the testable predictions about organismal evolution that are suggested by the model.<|reference_end|> | arxiv | @article{turney2002a,
title={A Simple Model of Unbounded Evolutionary Versatility as a Largest-Scale
Trend in Organismal Evolution},
author={Peter D. Turney (National Research Council of Canada)},
journal={Artificial Life, (2000), 6, 109-128},
year={2002},
doi={10.1162/106454600568357},
number={NRC-43672},
archivePrefix={arXiv},
eprint={cs/0212021},
primaryClass={cs.NE cs.CE q-bio.PE}
} | turney2002a |
arxiv-670882 | cs/0212022 | Algorithms for Rapidly Dispersing Robot Swarms in Unknown Environments | <|reference_start|>Algorithms for Rapidly Dispersing Robot Swarms in Unknown Environments: We develop and analyze algorithms for dispersing a swarm of primitive robots in an unknown environment, R. The primary objective is to minimize the makespan, that is, the time to fill the entire region. An environment is composed of pixels that form a connected subset of the integer grid. There is at most one robot per pixel and robots move horizontally or vertically at unit speed. Robots enter R by means of k>=1 door pixels Robots are primitive finite automata, only having local communication, local sensors, and a constant-sized memory. We first give algorithms for the single-door case (i.e., k=1), analyzing the algorithms both theoretically and experimentally. We prove that our algorithms have optimal makespan 2A-1, where A is the area of R. We next give an algorithm for the multi-door case (k>1), based on a wall-following version of the leader-follower strategy. We prove that our strategy is O(log(k+1))-competitive, and that this bound is tight for our strategy and other related strategies.<|reference_end|> | arxiv | @article{hsiang2002algorithms,
title={Algorithms for Rapidly Dispersing Robot Swarms in Unknown Environments},
author={Tien-Ruey Hsiang, Esther M. Arkin, Michael Bender, Sandor P. Fekete,
and Joseph S.B. Mitchell},
journal={arXiv preprint arXiv:cs/0212022},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212022},
primaryClass={cs.RO}
} | hsiang2002algorithms |
arxiv-670883 | cs/0212023 | How to Shift Bias: Lessons from the Baldwin Effect | <|reference_start|>How to Shift Bias: Lessons from the Baldwin Effect: An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias shifting algorithms. The Baldwin effect was proposed in 1896, to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, which we constructed explicitly to illustrate the lessons that the Baldwin effect has for research in bias shifting algorithms. The main lesson is that it appears that a good strategy for shift of bias in a learning algorithm is to begin with a weak bias and gradually shift to a strong bias.<|reference_end|> | arxiv | @article{turney2002how,
title={How to Shift Bias: Lessons from the Baldwin Effect},
author={Peter D. Turney (National Research Council of Canada)},
journal={Evolutionary Computation, (1996), 4 (3), 271-295},
year={2002},
number={NRC-40180},
archivePrefix={arXiv},
eprint={cs/0212023},
primaryClass={cs.LG cs.NE}
} | turney2002how |
arxiv-670884 | cs/0212024 | Unsupervised Language Acquisition: Theory and Practice | <|reference_start|>Unsupervised Language Acquisition: Theory and Practice: In this thesis I present various algorithms for the unsupervised machine learning of aspects of natural languages using a variety of statistical models. The scientific object of the work is to examine the validity of the so-called Argument from the Poverty of the Stimulus advanced in favour of the proposition that humans have language-specific innate knowledge. I start by examining an a priori argument based on Gold's theorem, that purports to prove that natural languages cannot be learned, and some formal issues related to the choice of statistical grammars rather than symbolic grammars. I present three novel algorithms for learning various parts of natural languages: first, an algorithm for the induction of syntactic categories from unlabelled text using distributional information, that can deal with ambiguous and rare words; secondly, a set of algorithms for learning morphological processes in a variety of languages, including languages such as Arabic with non-concatenative morphology; thirdly an algorithm for the unsupervised induction of a context-free grammar from tagged text. I carefully examine the interaction between the various components, and show how these algorithms can form the basis for a empiricist model of language acquisition. I therefore conclude that the Argument from the Poverty of the Stimulus is unsupported by the evidence.<|reference_end|> | arxiv | @article{clark2002unsupervised,
title={Unsupervised Language Acquisition: Theory and Practice},
author={Alexander Clark},
journal={D. Phil., School of Cognitive and Computing Sciences, University
of Sussex, 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212024},
primaryClass={cs.CL cs.LG}
} | clark2002unsupervised |
arxiv-670885 | cs/0212025 | Searching for Plannable Domains can Speed up Reinforcement Learning | <|reference_start|>Searching for Plannable Domains can Speed up Reinforcement Learning: Reinforcement learning (RL) involves sequential decision making in uncertain environments. The aim of the decision-making agent is to maximize the benefit of acting in its environment over an extended period of time. Finding an optimal policy in RL may be very slow. To speed up learning, one often used solution is the integration of planning, for example, Sutton's Dyna algorithm, or various other methods using macro-actions. Here we suggest to separate plannable, i.e., close to deterministic parts of the world, and focus planning efforts in this domain. A novel reinforcement learning method called plannable RL (pRL) is proposed here. pRL builds a simple model, which is used to search for macro actions. The simplicity of the model makes planning computationally inexpensive. It is shown that pRL finds an optimal policy, and that plannable macro actions found by pRL are near-optimal. In turn, it is unnecessary to try large numbers of macro actions, which enables fast learning. The utility of pRL is demonstrated by computer simulations.<|reference_end|> | arxiv | @article{szita2002searching,
title={Searching for Plannable Domains can Speed up Reinforcement Learning},
author={Istvan Szita, Balint Takacs and Andras Lorincz},
journal={arXiv preprint arXiv:cs/0212025},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212025},
primaryClass={cs.AI}
} | szita2002searching |
arxiv-670886 | cs/0212026 | A Generalization of the Lifting Lemma for Logic Programming | <|reference_start|>A Generalization of the Lifting Lemma for Logic Programming: Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.<|reference_end|> | arxiv | @article{payet2002a,
title={A Generalization of the Lifting Lemma for Logic Programming},
author={Etienne Payet and Fred Mesnard},
journal={arXiv preprint arXiv:cs/0212026},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212026},
primaryClass={cs.LO}
} | payet2002a |
arxiv-670887 | cs/0212027 | Qualitative Study of a Robot Arm as a Hamiltonian System | <|reference_start|>Qualitative Study of a Robot Arm as a Hamiltonian System: A double pendulum subject to external torques is used as a model to study the stability of a planar manipulator with two links and two rotational driven joints. The hamiltonian equations of motion and the fixed points (stationary solutions) in phase space are determined. Under suitable conditions, the presence of constant torques does not change the number of fixed points, and preserves the topology of orbits in their linear neighborhoods; two equivalent invariant manifolds are observed, each corresponding to a saddle-center fixed point.<|reference_end|> | arxiv | @article{monerat2002qualitative,
title={Qualitative Study of a Robot Arm as a Hamiltonian System},
author={G. A. Monerat, E. V. Correa Silva, A. G. Cyrino},
journal={arXiv preprint arXiv:cs/0212027},
year={2002},
archivePrefix={arXiv},
eprint={cs/0212027},
primaryClass={cs.RO}
} | monerat2002qualitative |
arxiv-670888 | cs/0212028 | Technical Note: Bias and the Quantification of Stability | <|reference_start|>Technical Note: Bias and the Quantification of Stability: Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias.<|reference_end|> | arxiv | @article{turney2002technical,
title={Technical Note: Bias and the Quantification of Stability},
author={Peter D. Turney (National Research Council of Canada)},
journal={Machine Learning, (1995), 20, 23-33},
year={2002},
number={NRC-38313},
archivePrefix={arXiv},
eprint={cs/0212028},
primaryClass={cs.LG cs.CV}
} | turney2002technical |
arxiv-670889 | cs/0212029 | A Theory of Cross-Validation Error | <|reference_start|>A Theory of Cross-Validation Error: This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-based learning.<|reference_end|> | arxiv | @article{turney2002a,
title={A Theory of Cross-Validation Error},
author={Peter D. Turney (National Research Council of Canada)},
journal={Journal of Experimental and Theoretical Artificial Intelligence,
(1994), 6, 361-391},
year={2002},
number={NRC-35072},
archivePrefix={arXiv},
eprint={cs/0212029},
primaryClass={cs.LG cs.CV}
} | turney2002a |
arxiv-670890 | cs/0212030 | Theoretical Analyses of Cross-Validation Error and Voting in Instance-Based Learning | <|reference_start|>Theoretical Analyses of Cross-Validation Error and Voting in Instance-Based Learning: This paper begins with a general theory of error in cross-validation testing of algorithms for supervised learning from examples. It is assumed that the examples are described by attribute-value pairs, where the values are symbolic. Cross-validation requires a set of training examples and a set of testing examples. The value of the attribute that is to be predicted is known to the learner in the training set, but unknown in the testing set. The theory demonstrates that cross-validation error has two components: error on the training set (inaccuracy) and sensitivity to noise (instability). This general theory is then applied to voting in instance-based learning. Given an example in the testing set, a typical instance-based learning algorithm predicts the designated attribute by voting among the k nearest neighbors (the k most similar examples) to the testing example in the training set. Voting is intended to increase the stability (resistance to noise) of instance-based learning, but a theoretical analysis shows that there are circumstances in which voting can be destabilizing. The theory suggests ways to minimize cross-validation error, by insuring that voting is stable and does not adversely affect accuracy.<|reference_end|> | arxiv | @article{turney2002theoretical,
title={Theoretical Analyses of Cross-Validation Error and Voting in
Instance-Based Learning},
author={Peter D. Turney (National Research Council of Canada)},
journal={Journal of Experimental and Theoretical Artificial Intelligence,
(1994), 6, 331-360},
year={2002},
number={NRC-35073},
archivePrefix={arXiv},
eprint={cs/0212030},
primaryClass={cs.LG cs.CV}
} | turney2002theoretical |
arxiv-670891 | cs/0212031 | Contextual Normalization Applied to Aircraft Gas Turbine Engine Diagnosis | <|reference_start|>Contextual Normalization Applied to Aircraft Gas Turbine Engine Diagnosis: Diagnosing faults in aircraft gas turbine engines is a complex problem. It involves several tasks, including rapid and accurate interpretation of patterns in engine sensor data. We have investigated contextual normalization for the development of a software tool to help engine repair technicians with interpretation of sensor data. Contextual normalization is a new strategy for employing machine learning. It handles variation in data that is due to contextual factors, rather than the health of the engine. It does this by normalizing the data in a context-sensitive manner. This learning strategy was developed and tested using 242 observations of an aircraft gas turbine engine in a test cell, where each observation consists of roughly 12,000 numbers, gathered over a 12 second interval. There were eight classes of observations: seven deliberately implanted classes of faults and a healthy class. We compared two approaches to implementing our learning strategy: linear regression and instance-based learning. We have three main results. (1) For the given problem, instance-based learning works better than linear regression. (2) For this problem, contextual normalization works better than other common forms of normalization. (3) The algorithms described here can be the basis for a useful software tool for assisting technicians with the interpretation of sensor data.<|reference_end|> | arxiv | @article{turney2002contextual,
title={Contextual Normalization Applied to Aircraft Gas Turbine Engine
Diagnosis},
author={Peter D. Turney (National Research Council of Canada), Michael Halasz
(National Research Council of Canada)},
journal={Journal of Applied Intelligence, (1993), 3, 109-129},
year={2002},
number={NRC-35028},
archivePrefix={arXiv},
eprint={cs/0212031},
primaryClass={cs.LG cs.CE cs.CV}
} | turney2002contextual |
arxiv-670892 | cs/0212032 | Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews | <|reference_start|>Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews: This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.<|reference_end|> | arxiv | @article{turney2002thumbs,
title={Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised
Classification of Reviews},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the 40th Annual Meeting of the Association for
Computational Linguistics, (2002), Philadelphia, Pennsylvania, 417-424},
year={2002},
number={NRC-44946},
archivePrefix={arXiv},
eprint={cs/0212032},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2002thumbs |
arxiv-670893 | cs/0212033 | Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL | <|reference_start|>Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL: This paper presents a simple unsupervised learning algorithm for recognizing synonyms, based on statistical data acquired by querying a Web search engine. The algorithm, called PMI-IR, uses Pointwise Mutual Information (PMI) and Information Retrieval (IR) to measure the similarity of pairs of words. PMI-IR is empirically evaluated using 80 synonym test questions from the Test of English as a Foreign Language (TOEFL) and 50 synonym test questions from a collection of tests for students of English as a Second Language (ESL). On both tests, the algorithm obtains a score of 74%. PMI-IR is contrasted with Latent Semantic Analysis (LSA), which achieves a score of 64% on the same 80 TOEFL questions. The paper discusses potential applications of the new unsupervised learning algorithm and some implications of the results for LSA and LSI (Latent Semantic Indexing).<|reference_end|> | arxiv | @article{turney2002mining,
title={Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the Twelfth European Conference on Machine
Learning, (2001), Freiburg, Germany, 491-502},
year={2002},
number={NRC-44893},
archivePrefix={arXiv},
eprint={cs/0212033},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2002mining |
arxiv-670894 | cs/0212034 | Types of Cost in Inductive Concept Learning | <|reference_start|>Types of Cost in Inductive Concept Learning: Inductive concept learning is the task of learning to assign cases to a discrete set of classes. In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth.<|reference_end|> | arxiv | @article{turney2002types,
title={Types of Cost in Inductive Concept Learning},
author={Peter D. Turney (National Research Council of Canada)},
journal={Workshop on Cost-Sensitive Learning at the Seventeenth
International Conference on Machine Learning, (2000), Stanford University,
California, 15-21},
year={2002},
number={NRC-43671},
archivePrefix={arXiv},
eprint={cs/0212034},
primaryClass={cs.LG cs.CV}
} | turney2002types |
arxiv-670895 | cs/0212035 | Exploiting Context When Learning to Classify | <|reference_start|>Exploiting Context When Learning to Classify: This paper addresses the problem of classifying observations when features are context-sensitive, specifically when the testing set involves a context that is different from the training set. The paper begins with a precise definition of the problem, then general strategies are presented for enhancing the performance of classification algorithms on this type of problem. These strategies are tested on two domains. The first domain is the diagnosis of gas turbine engines. The problem is to diagnose a faulty engine in one context, such as warm weather, when the fault has previously been seen only in another context, such as cold weather. The second domain is speech recognition. The problem is to recognize words spoken by a new speaker, not represented in the training set. For both domains, exploiting context results in substantially more accurate classification.<|reference_end|> | arxiv | @article{turney2002exploiting,
title={Exploiting Context When Learning to Classify},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the European Conference on Machine Learning,
Vienna, Austria, (1993), 402-407},
year={2002},
number={NRC-35058},
archivePrefix={arXiv},
eprint={cs/0212035},
primaryClass={cs.LG cs.CV}
} | turney2002exploiting |
arxiv-670896 | cs/0212036 | Myths and Legends of the Baldwin Effect | <|reference_start|>Myths and Legends of the Baldwin Effect: This position paper argues that the Baldwin effect is widely misunderstood by the evolutionary computation community. The misunderstandings appear to fall into two general categories. Firstly, it is commonly believed that the Baldwin effect is concerned with the synergy that results when there is an evolving population of learning individuals. This is only half of the story. The full story is more complicated and more interesting. The Baldwin effect is concerned with the costs and benefits of lifetime learning by individuals in an evolving population. Several researchers have focussed exclusively on the benefits, but there is much to be gained from attention to the costs. This paper explains the two sides of the story and enumerates ten of the costs and benefits of lifetime learning by individuals in an evolving population. Secondly, there is a cluster of misunderstandings about the relationship between the Baldwin effect and Lamarckian inheritance of acquired characteristics. The Baldwin effect is not Lamarckian. A Lamarckian algorithm is not better for most evolutionary computing problems than a Baldwinian algorithm. Finally, Lamarckian inheritance is not a better model of memetic (cultural) evolution than the Baldwin effect.<|reference_end|> | arxiv | @article{turney2002myths,
title={Myths and Legends of the Baldwin Effect},
author={Peter D. Turney (National Research Council of Canada)},
journal={13th International Conference on Machine Learning, Workshop on
Evolutionary Computation and Machine Learning, Bari, Italy, (1996), 135-142},
year={2002},
number={NRC-39220},
archivePrefix={arXiv},
eprint={cs/0212036},
primaryClass={cs.LG cs.NE}
} | turney2002myths |
arxiv-670897 | cs/0212037 | The Management of Context-Sensitive Features: A Review of Strategies | <|reference_start|>The Management of Context-Sensitive Features: A Review of Strategies: In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context-sensitive learning.<|reference_end|> | arxiv | @article{turney2002the,
title={The Management of Context-Sensitive Features: A Review of Strategies},
author={Peter D. Turney (National Research Council of Canada)},
journal={13th International Conference on Machine Learning, Workshop on
Learning in Context-Sensitive Domains, Bari, Italy, (1996), 60-66},
year={2002},
number={NRC-39221},
archivePrefix={arXiv},
eprint={cs/0212037},
primaryClass={cs.LG cs.CV}
} | turney2002the |
arxiv-670898 | cs/0212038 | The Identification of Context-Sensitive Features: A Formal Definition of Context for Concept Learning | <|reference_start|>The Identification of Context-Sensitive Features: A Formal Definition of Context for Concept Learning: A large body of research in machine learning is concerned with supervised learning from examples. The examples are typically represented as vectors in a multi-dimensional feature space (also known as attribute-value descriptions). A teacher partitions a set of training examples into a finite number of classes. The task of the learning algorithm is to induce a concept from the training examples. In this paper, we formally distinguish three types of features: primary, contextual, and irrelevant features. We also formally define what it means for one feature to be context-sensitive to another feature. Context-sensitive features complicate the task of the learner and potentially impair the learner's performance. Our formal definitions make it possible for a learner to automatically identify context-sensitive features. After context-sensitive features have been identified, there are several strategies that the learner can employ for managing the features; however, a discussion of these strategies is outside of the scope of this paper. The formal definitions presented here correct a flaw in previously proposed definitions. We discuss the relationship between our work and a formal definition of relevance.<|reference_end|> | arxiv | @article{turney2002the,
title={The Identification of Context-Sensitive Features: A Formal Definition of
Context for Concept Learning},
author={Peter D. Turney (National Research Council of Canada)},
journal={13th International Conference on Machine Learning, Workshop on
Learning in Context-Sensitive Domains, Bari, Italy, (1996), 53-59},
year={2002},
number={NRC-39222},
archivePrefix={arXiv},
eprint={cs/0212038},
primaryClass={cs.LG cs.CV}
} | turney2002the |
arxiv-670899 | cs/0212039 | Low Size-Complexity Inductive Logic Programming: The East-West Challenge Considered as a Problem in Cost-Sensitive Classification | <|reference_start|>Low Size-Complexity Inductive Logic Programming: The East-West Challenge Considered as a Problem in Cost-Sensitive Classification: The Inductive Logic Programming community has considered proof-complexity and model-complexity, but, until recently, size-complexity has received little attention. Recently a challenge was issued "to the international computing community" to discover low size-complexity Prolog programs for classifying trains. The challenge was based on a problem first proposed by Ryszard Michalski, 20 years ago. We interpreted the challenge as a problem in cost-sensitive classification and we applied a recently developed cost-sensitive classifier to the competition. Our algorithm was relatively successful (we won a prize). This paper presents our algorithm and analyzes the results of the competition.<|reference_end|> | arxiv | @article{turney2002low,
title={Low Size-Complexity Inductive Logic Programming: The East-West Challenge
Considered as a Problem in Cost-Sensitive Classification},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the Fifth International Inductive Logic Programming
Workshop, Leuven, Belgium, (1995), 247-263},
year={2002},
number={NRC-39164},
archivePrefix={arXiv},
eprint={cs/0212039},
primaryClass={cs.LG cs.NE}
} | turney2002low |
arxiv-670900 | cs/0212040 | Data Engineering for the Analysis of Semiconductor Manufacturing Data | <|reference_start|>Data Engineering for the Analysis of Semiconductor Manufacturing Data: We have analyzed manufacturing data from several different semiconductor manufacturing plants, using decision tree induction software called Q-YIELD. The software generates rules for predicting when a given product should be rejected. The rules are intended to help the process engineers improve the yield of the product, by helping them to discover the causes of rejection. Experience with Q-YIELD has taught us the importance of data engineering -- preprocessing the data to enable or facilitate decision tree induction. This paper discusses some of the data engineering problems we have encountered with semiconductor manufacturing data. The paper deals with two broad classes of problems: engineering the features in a feature vector representation and engineering the definition of the target concept (the classes). Manufacturing process data present special problems for feature engineering, since the data have multiple levels of granularity (detail, resolution). Engineering the target concept is important, due to our focus on understanding the past, as opposed to the more common focus in machine learning on predicting the future.<|reference_end|> | arxiv | @article{turney2002data,
title={Data Engineering for the Analysis of Semiconductor Manufacturing Data},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the IJCAI-95 Workshop on Data Engineering for
Inductive Learning, Montreal, Quebec, (1995), 50-59},
year={2002},
number={NRC-39163},
archivePrefix={arXiv},
eprint={cs/0212040},
primaryClass={cs.LG cs.CE cs.CV}
} | turney2002data |