corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-671501 | cs/0310057 | An Introduction to Using Software Tools for Automatic Differentiation | <|reference_start|>An Introduction to Using Software Tools for Automatic Differentiation: We give a gentle introduction to using various software tools for automatic differentiation (AD). Ready-to-use examples are discussed, and links to further information are presented. Our target audience includes all those who are looking for a straightforward way to get started using the available AD technology. The document is dynamic in the sense that its content will be updated as the AD software evolves.<|reference_end|> | arxiv | @article{naumann2003an,
title={An Introduction to Using Software Tools for Automatic Differentiation},
author={Uwe Naumann and Andrea Walther},
journal={arXiv preprint arXiv:cs/0310057},
year={2003},
number={ANL/MCS-TM-254},
archivePrefix={arXiv},
eprint={cs/0310057},
primaryClass={cs.MS}
} | naumann2003an |
arxiv-671502 | cs/0310058 | Application Architecture for Spoken Language Resources in Organisational Settings | <|reference_start|>Application Architecture for Spoken Language Resources in Organisational Settings: Special technologies need to be used to take advantage of, and overcome, the challenges associated with acquiring, transforming, storing, processing, and distributing spoken language resources in organisations. This paper introduces an application architecture consisting of tools and supporting utilities for indexing and transcription, and describes how these tools, together with downstream processing and distribution systems, can be integrated into a workflow. Two sample applications for this architecture are outlined: the analysis of decision-making processes in organisations and the deployment of systems development methods by designers in the field.<|reference_end|> | arxiv | @article{clarke2003application,
title={Application Architecture for Spoken Language Resources in Organisational
Settings},
author={Rodney J. Clarke, Dali Dong and Philip C. Windridge},
journal={arXiv preprint arXiv:cs/0310058},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310058},
primaryClass={cs.CL}
} | clarke2003application |
arxiv-671503 | cs/0310059 | Design and Implementation of MPICH2 over InfiniBand with RDMA Support | <|reference_start|>Design and Implementation of MPICH2 over InfiniBand with RDMA Support: For several years, MPI has been the de facto standard for writing parallel applications. One of the most popular MPI implementations is MPICH. Its successor, MPICH2, features a completely new design that provides more performance and flexibility. To ensure portability, it has a hierarchical structure based on which porting can be done at different levels. In this paper, we present our experiences designing and implementing MPICH2 over InfiniBand. Because of its high performance and open standard, InfiniBand is gaining popularity in the area of high-performance computing. Our study focuses on optimizing the performance of MPI-1 functions in MPICH2. One of our objectives is to exploit Remote Direct Memory Access (RDMA) in Infiniband to achieve high performance. We have based our design on the RDMA Channel interface provided by MPICH2, which encapsulates architecture-dependent communication functionalities into a very small set of functions. Starting with a basic design, we apply different optimizations and also propose a zero-copy-based design. We characterize the impact of our optimizations and designs using microbenchmarks. We have also performed an application-level evaluation using the NAS Parallel Benchmarks. Our optimized MPICH2 implementation achieves 7.6 $\mu$s latency and 857 MB/s bandwidth, which are close to the raw performance of the underlying InfiniBand layer. Our study shows that the RDMA Channel interface in MPICH2 provides a simple, yet powerful, abstraction that enables implementations with high performance by exploiting RDMA operations in InfiniBand. To the best of our knowledge, this is the first high-performance design and implementation of MPICH2 on InfiniBand using RDMA support.<|reference_end|> | arxiv | @article{liu2003design,
title={Design and Implementation of MPICH2 over InfiniBand with RDMA Support},
author={Jiuxing Liu, Weihang Jiang, Pete Wyckoff, Dhabaleswar K. Panda, David
Ashton, Darius Buntinas, William Gropp, Brian Toonen},
journal={arXiv preprint arXiv:cs/0310059},
year={2003},
number={Preprint ANL/MCS-P1103-1003},
archivePrefix={arXiv},
eprint={cs/0310059},
primaryClass={cs.AR cs.DC}
} | liu2003design |
arxiv-671504 | cs/0310060 | Puzzle: Zermelo-Fraenkel set theory is inconsistent | <|reference_start|>Puzzle: Zermelo-Fraenkel set theory is inconsistent: In this note, we present a puzzle. We prove that Zermelo-Fraenkel set theory is inconsistent by proving, using Zermelo-Fraenkel set theory, the false statement that any algorithm that determines whether any $n \times n$ matrix over $\mathbb F_2$, the finite field of order 2, is nonsingular must run in exponential time in the worst-case scenario. The object of the puzzle is to find the error in the proof.<|reference_end|> | arxiv | @article{feinstein2003puzzle:,
title={Puzzle: Zermelo-Fraenkel set theory is inconsistent},
author={Craig Alan Feinstein},
journal={arXiv preprint arXiv:cs/0310060},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310060},
primaryClass={cs.CC}
} | feinstein2003puzzle: |
arxiv-671505 | cs/0310061 | Local-search techniques for propositional logic extended with cardinality constraints | <|reference_start|>Local-search techniques for propositional logic extended with cardinality constraints: We study local-search satisfiability solvers for propositional logic extended with cardinality atoms, that is, expressions that provide explicit ways to model constraints on cardinalities of sets. Adding cardinality atoms to the language of propositional logic facilitates modeling search problems and often results in concise encodings. We propose two "native" local-search solvers for theories in the extended language. We also describe techniques that reduce the problem to standard propositional satisfiability and allow us to use off-the-shelf SAT solvers. We study these methods experimentally. Our general finding is that native solvers designed specifically for the extended language perform better than indirect methods relying on SAT solvers.<|reference_end|> | arxiv | @article{liu2003local-search,
title={Local-search techniques for propositional logic extended with
cardinality constraints},
author={Lengning Liu, Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0310061},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310061},
primaryClass={cs.AI}
} | liu2003local-search |
arxiv-671506 | cs/0310062 | WSAT(cc) - a fast local-search ASP solver | <|reference_start|>WSAT(cc) - a fast local-search ASP solver: We describe WSAT(cc), a local-search solver for computing models of theories in the language of propositional logic extended by cardinality atoms. WSAT(cc) is a processing back-end for the logic PS+, a recently proposed formalism for answer-set programming.<|reference_end|> | arxiv | @article{liu2003wsat(cc),
title={WSAT(cc) - a fast local-search ASP solver},
author={Lengning Liu, Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0310062},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310062},
primaryClass={cs.AI}
} | liu2003wsat(cc) |
arxiv-671507 | cs/0310063 | Logic programs with monotone cardinality atoms | <|reference_start|>Logic programs with monotone cardinality atoms: We investigate mca-programs, that is, logic programs with clauses built of monotone cardinality atoms of the form kX, where k is a non-negative integer and X is a finite set of propositional atoms. We develop a theory of mca-programs. We demonstrate that the operational concept of the one-step provability operator generalizes to mca-programs, but the generalization involves nondeterminism. Our main results show that the formalism of mca-programs is a common generalization of (1) normal logic programming with its semantics of models, supported models and stable models, (2) logic programming with cardinality atoms and with the semantics of stable models, as defined by Niemela, Simons and Soininen, and (3) of disjunctive logic programming with the possible-model semantics of Sakama and Inoue.<|reference_end|> | arxiv | @article{marek2003logic,
title={Logic programs with monotone cardinality atoms},
author={Victor W. Marek, Ilkka Niemela, Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0310063},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310063},
primaryClass={cs.LO}
} | marek2003logic |
arxiv-671508 | cs/0310064 | Satisfiability and computing van der Waerden numbers | <|reference_start|>Satisfiability and computing van der Waerden numbers: In this paper we bring together the areas of combinatorics and propositional satisfiability. Many combinatorial theorems establish, often constructively, the existence of positive integer functions, without actually providing their closed algebraic form or tight lower and upper bounds. The area of Ramsey theory is especially rich in such results. Using the problem of computing van der Waerden numbers as an example, we show that these problems can be represented by parameterized propositional theories in such a way that decisions concerning their satisfiability determine the numbers (function) in question. We show that by using general-purpose complete and local-search techniques for testing propositional satisfiability, this approach becomes effective -- competitive with specialized approaches. By following it, we were able to obtain several new results pertaining to the problem of computing van der Waerden numbers. We also note that due to their properties, especially their structural simplicity and computational hardness, propositional theories that arise in this research can be of use in development, testing and benchmarking of SAT solvers.<|reference_end|> | arxiv | @article{dransfield2003satisfiability,
title={Satisfiability and computing van der Waerden numbers},
author={Michael R. Dransfield, Victor W. Marek, Miroslaw Truszczynski},
journal={arXiv preprint arXiv:cs/0310064},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310064},
primaryClass={cs.LO}
} | dransfield2003satisfiability |
arxiv-671509 | cs/0310065 | Maintaining Information in Fully-Dynamic Trees with Top Trees | <|reference_start|>Maintaining Information in Fully-Dynamic Trees with Top Trees: We introduce top trees as a design of a new simpler interface for data structures maintaining information in a fully-dynamic forest. We demonstrate how easy and versatile they are to use on a host of different applications. For example, we show how to maintain the diameter, center, and median of each tree in the forest. The forest can be updated by insertion and deletion of edges and by changes to vertex and edge weights. Each update is supported in O(log n) time, where n is the size of the tree(s) involved in the update. Also, we show how to support nearest common ancestor queries and level ancestor queries with respect to arbitrary roots in O(log n) time. Finally, with marked and unmarked vertices, we show how to compute distances to a nearest marked vertex. The latter has applications to approximate nearest marked vertex in general graphs, and thereby to static optimization problems over shortest path metrics. Technically speaking, top trees are easily implemented either with Frederickson's topology trees [Ambivalent Data Structures for Dynamic 2-Edge-Connectivity and k Smallest Spanning Trees, SIAM J. Comput. 26 (2) pp. 484-538, 1997] or with Sleator and Tarjan's dynamic trees [A Data Structure for Dynamic Trees. J. Comput. Syst. Sc. 26 (3) pp. 362-391, 1983]. However, we claim that the interface is simpler for many applications, and indeed our new bounds are quadratic improvements over previous bounds where they exist.<|reference_end|> | arxiv | @article{alstrup2003maintaining,
title={Maintaining Information in Fully-Dynamic Trees with Top Trees},
author={Stephen Alstrup, Jacob Holm, Kristian de Lichtenberg, Mikkel Thorup},
journal={arXiv preprint arXiv:cs/0310065},
year={2003},
archivePrefix={arXiv},
eprint={cs/0310065},
primaryClass={cs.DS}
} | alstrup2003maintaining |
arxiv-671510 | cs/0311001 | Modeling State in Software Debugging of VHDL-RTL Designs -- A Model-Based Diagnosis Approach | <|reference_start|>Modeling State in Software Debugging of VHDL-RTL Designs -- A Model-Based Diagnosis Approach: In this paper we outline an approach of applying model-based diagnosis to the field of automatic software debugging of hardware designs. We present our value-level model for debugging VHDL-RTL designs and show how to localize the erroneous component responsible for an observed misbehavior. Furthermore, we discuss an extension of our model that supports the debugging of sequential circuits, not only at a given point in time, but also allows for considering the temporal behavior of VHDL-RTL designs. The introduced model is capable of handling state inherently present in every sequential circuit. The principal applicability of the new model is outlined briefly and we use industrial-sized real world examples from the ISCAS'85 benchmark suite to discuss the scalability of our approach.<|reference_end|> | arxiv | @article{peischl2003modeling,
title={Modeling State in Software Debugging of VHDL-RTL Designs -- A
Model-Based Diagnosis Approach},
author={Bernhard Peischl, Franz Wotawa},
journal={arXiv preprint arXiv:cs/0311001},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311001},
primaryClass={cs.AI cs.SE}
} | peischl2003modeling |
arxiv-671511 | cs/0311002 | Computing Convex Hulls with a Linear Solver | <|reference_start|>Computing Convex Hulls with a Linear Solver: A programming tactic involving polyhedra is reported that has been widely applied in the polyhedral analysis of (constraint) logic programs. The method enables the computations of convex hulls that are required for polyhedral analysis to be coded with linear constraint solving machinery that is available in many Prolog systems. To appear in Theory and Practice of Logic Programming (TPLP)<|reference_end|> | arxiv | @article{benoy2003computing,
title={Computing Convex Hulls with a Linear Solver},
author={Florence Benoy and Andy King and Fred Mesnard},
journal={arXiv preprint arXiv:cs/0311002},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311002},
primaryClass={cs.PL}
} | benoy2003computing |
arxiv-671512 | cs/0311003 | Enhancing a Search Algorithm to Perform Intelligent Backtracking | <|reference_start|>Enhancing a Search Algorithm to Perform Intelligent Backtracking: This paper illustrates how a Prolog program, using chronological backtracking to find a solution in some search space, can be enhanced to perform intelligent backtracking. The enhancement crucially relies on the impurity of Prolog that allows a program to store information when a dead end is reached. To illustrate the technique, a simple search program is enhanced. To appear in Theory and Practice of Logic Programming. Keywords: intelligent backtracking, dependency-directed backtracking, backjumping, conflict-directed backjumping, nogood sets, look-back.<|reference_end|> | arxiv | @article{bruynooghe2003enhancing,
title={Enhancing a Search Algorithm to Perform Intelligent Backtracking},
author={Maurice Bruynooghe},
journal={arXiv preprint arXiv:cs/0311003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311003},
primaryClass={cs.AI cs.LO}
} | bruynooghe2003enhancing |
arxiv-671513 | cs/0311004 | Utility-Probability Duality | <|reference_start|>Utility-Probability Duality: This paper presents duality between probability distributions and utility functions.<|reference_end|> | arxiv | @article{abbas2003utility-probability,
title={Utility-Probability Duality},
author={Ali Abbas, Jim Matheson},
journal={arXiv preprint arXiv:cs/0311004},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311004},
primaryClass={cs.AI}
} | abbas2003utility-probability |
arxiv-671514 | cs/0311005 | On The Cost Distribution of a Memory Bound Function | <|reference_start|>On The Cost Distribution of a Memory Bound Function: Memory Bound Functions have been proposed for fighting spam, resisting Sybil attacks and other purposes. A particular implementation of such functions has been proposed in which the average effort required to generate a proof of effort is set by parameters E and l to E * l. The distribution of effort required to generate an individual proof about this average is fairly broad. When particular uses of these functions are envisaged, the choice of E and l, and the system design surrounding the generation and verification of proofs of effort, need to take the breadth of the distribution into account. We show the distribution for this implementation, discuss the system design issues in the context of two proposed applications, and suggest an improved implementation.<|reference_end|> | arxiv | @article{rosenthal2003on,
title={On The Cost Distribution of a Memory Bound Function},
author={David S. H. Rosenthal},
journal={arXiv preprint arXiv:cs/0311005},
year={2003},
number={LOCKSS TR2003-02},
archivePrefix={arXiv},
eprint={cs/0311005},
primaryClass={cs.CR cs.DL}
} | rosenthal2003on |
arxiv-671515 | cs/0311006 | How Push-To-Talk Makes Talk Less Pushy | <|reference_start|>How Push-To-Talk Makes Talk Less Pushy: This paper presents an exploratory study of college-age students using two-way, push-to-talk cellular radios. We describe the observed and reported use of cellular radio by the participants. We discuss how the half-duplex, lightweight cellular radio communication was associated with reduced interactional commitment, which meant the cellular radios could be used for a wide range of conversation styles. One such style, intermittent conversation, is characterized by response delays. Intermittent conversation is surprising in an audio medium, since it is typically associated with textual media such as instant messaging. We present design implications of our findings.<|reference_end|> | arxiv | @article{woodruff2003how,
title={How Push-To-Talk Makes Talk Less Pushy},
author={Allison Woodruff and Paul M. Aoki},
journal={Proc. ACM SIGGROUP Conf. on Supporting Group Work, Sanibel Island,
FL, Nov. 2003, 170-179. ACM Press.},
year={2003},
doi={10.1145/958160.958187},
archivePrefix={arXiv},
eprint={cs/0311006},
primaryClass={cs.HC}
} | woodruff2003how |
arxiv-671516 | cs/0311007 | Parametric Connectives in Disjunctive Logic Programming | <|reference_start|>Parametric Connectives in Disjunctive Logic Programming: Disjunctive Logic Programming (DLP) is an advanced formalism for Knowledge Representation and Reasoning (KRR). DLP is very expressive in a precise mathematical sense: it allows one to express every property of finite structures that is decidable in the complexity class $\Sigma^P_2$ ($\mathrm{NP}^{\mathrm{NP}}$). Importantly, the DLP encodings are often simple and natural. In this paper, we single out some limitations of DLP for KRR, which cannot naturally express problems where the size of the disjunction is not known "a priori" (like N-Coloring), but is part of the input. To overcome these limitations, we further enhance the knowledge modelling abilities of DLP by extending this language with Parametric Connectives (OR and AND). These connectives allow us to represent compactly the disjunction/conjunction of a set of atoms having a given property. We formally define the semantics of the new language, named $DLP^{\bigvee,\bigwedge}$, and we show the usefulness of the new constructs on relevant knowledge-based problems. We address implementation issues and discuss related work.<|reference_end|> | arxiv | @article{perri2003parametric,
title={Parametric Connectives in Disjunctive Logic Programming},
author={Simona Perri, Nicola Leone},
journal={arXiv preprint arXiv:cs/0311007},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311007},
primaryClass={cs.AI}
} | perri2003parametric |
arxiv-671517 | cs/0311008 | A Parameterised Hierarchy of Argumentation Semantics for Extended Logic Programming and its Application to the Well-founded Semantics | <|reference_start|>A Parameterised Hierarchy of Argumentation Semantics for Extended Logic Programming and its Application to the Well-founded Semantics: Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attacks on an argument's premise) and rebuts (attacks on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX$_p$. Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.<|reference_end|> | arxiv | @article{schweimeier2003a,
title={A Parameterised Hierarchy of Argumentation Semantics for Extended Logic
Programming and its Application to the Well-founded Semantics},
author={Ralf Schweimeier and Michael Schroeder},
journal={arXiv preprint arXiv:cs/0311008},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311008},
primaryClass={cs.LO cs.AI}
} | schweimeier2003a |
arxiv-671518 | cs/0311009 | OGSA/Globus Evaluation for Data Intensive Applications | <|reference_start|>OGSA/Globus Evaluation for Data Intensive Applications: We present the architecture of a Globus Toolkit 3 based testbed intended for evaluating the applicability of the Open Grid Service Architecture (OGSA) for data-intensive applications.<|reference_end|> | arxiv | @article{demichev2003ogsa/globus,
title={OGSA/Globus Evaluation for Data Intensive Applications},
author={A. Demichev (1), D. Foster (2), V. Kalyaev (1), A. Kryukov (1), M.
Lamanna (2), V. Pose (3), R. B. Da Rocha (2) and C. Wang (4) ((1) Skobeltsyn
Institute of Nuclear Physics, Moscow State University, Moscow, Russia, (2)
CERN-IT, Geneva, Switzerland, (3) JINR LIT, Dubna, Russia, (4) Academia
Sinica, Taipei, Taiwan)},
journal={arXiv preprint arXiv:cs/0311009},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311009},
primaryClass={cs.DC}
} | demichev2003ogsa/globus |
arxiv-671519 | cs/0311010 | Problem of Application Job Monitoring in GRID Systems | <|reference_start|>Problem of Application Job Monitoring in GRID Systems: We present a new approach to monitoring the execution process of an application job in the GRID environment. The main point of the approach is the use of GRID services to access monitoring information with the security level available in GRID.<|reference_end|> | arxiv | @article{kalyaev2003problem,
title={Problem of Application Job Monitoring in GRID Systems},
author={V. Kalyaev, A. Kryukov (Skobeltsyn Institute of Nuclear Physics, Moscow
State University, Moscow, Russia)},
journal={arXiv preprint arXiv:cs/0311010},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311010},
primaryClass={cs.DC}
} | kalyaev2003problem |
arxiv-671520 | cs/0311011 | On an explicit finite difference method for fractional diffusion equations | <|reference_start|>On an explicit finite difference method for fractional diffusion equations: A numerical method to solve the fractional diffusion equation, which could also be easily extended to many other fractional dynamics equations, is considered. These fractional equations have been proposed in order to describe anomalous transport characterized by non-Markovian kinetics and the breakdown of Fick's law. In this paper we combine the forward time centered space (FTCS) method, well known for the numerical integration of ordinary diffusion equations, with the Grunwald-Letnikov definition of the fractional derivative operator to obtain an explicit fractional FTCS scheme for solving the fractional diffusion equation. The resulting method is amenable to a stability analysis a la von Neumann. We show that the analytical stability bounds are in excellent agreement with numerical tests. Comparisons between exact analytical solutions and numerical predictions are made.<|reference_end|> | arxiv | @article{yuste2003on,
title={On an explicit finite difference method for fractional diffusion
equations},
author={S. B. Yuste and L. Acedo},
journal={SIAM J. Numer. Anal. Vol. 42, No. 5, pp. 1862--1874 (2005)},
year={2003},
doi={10.1137/030602666},
archivePrefix={arXiv},
eprint={cs/0311011},
primaryClass={cs.NA cond-mat.stat-mech cs.CE physics.comp-ph}
} | yuste2003on |
arxiv-671521 | cs/0311012 | A rigorous definition of axial lines: ridges on isovist fields | <|reference_start|>A rigorous definition of axial lines: ridges on isovist fields: We suggest that 'axial lines' defined by (Hillier and Hanson, 1984) as lines of uninterrupted movement within urban streetscapes or buildings, appear as ridges in isovist fields (Benedikt, 1979). These are formed from the maximum diametric lengths of the individual isovists, sometimes called viewsheds, that make up these fields (Batty and Rana, 2004). We present an image processing technique for the identification of lines from ridges, discuss current strengths and weaknesses of the method, and show how it can be implemented easily and effectively.<|reference_end|> | arxiv | @article{carvalho2003a,
title={A rigorous definition of axial lines: ridges on isovist fields},
author={Rui Carvalho and Michael Batty},
journal={arXiv preprint arXiv:cs/0311012},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311012},
primaryClass={cs.CV cs.CG}
} | carvalho2003a |
arxiv-671522 | cs/0311013 | Optimized Flooding Protocol for Ad hoc Networks | <|reference_start|>Optimized Flooding Protocol for Ad hoc Networks: Flooding provides important control and route establishment functionality for a number of unicast and multicast protocols in Mobile Ad Hoc Networks. Considering its wide use as a building block for other network layer protocols, the flooding methodology should deliver a packet from one node to all other network nodes using as few messages as possible. In this paper, we propose the Optimized Flooding Protocol (OFP), based on a variation of The Covering Problem that is encountered in geometry, to minimize the unnecessary transmissions drastically and still be able to cover the whole region. OFP does not need hello messages and hence OFP saves a significant amount of wireless bandwidth and incurs less overhead. We present simulation results to show the efficiency of OFP in both ideal cases and randomly distributed networks. Moreover, OFP is scalable with respect to density; in fact OFP requires fewer transmissions at higher densities. OFP is also resilient to transmission errors.<|reference_end|> | arxiv | @article{paruchuri2003optimized,
title={Optimized Flooding Protocol for Ad hoc Networks},
author={Vamsi Paruchuri, Arjan Durresi, Raj Jain},
journal={arXiv preprint arXiv:cs/0311013},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311013},
primaryClass={cs.NI}
} | paruchuri2003optimized |
arxiv-671523 | cs/0311014 | Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet | <|reference_start|>Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet: Various optimality properties of universal sequence predictors based on Bayes-mixtures in general, and Solomonoff's prediction scheme in particular, will be studied. The probability of observing $x_t$ at time $t$, given past observations $x_1...x_{t-1}$, can be computed with the chain rule if the true generating distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. If $\mu$ is unknown, but known to belong to a countable or continuous class $\mathcal{M}$, one can base one's prediction on the Bayes-mixture $\xi$ defined as a $w_\nu$-weighted sum or integral of distributions $\nu\in\mathcal{M}$. The cumulative expected loss of the Bayes-optimal universal prediction scheme based on $\xi$ is shown to be close to the loss of the Bayes-optimal, but infeasible prediction scheme based on $\mu$. We show that the bounds are tight and that no other predictor can lead to significantly smaller bounds. Furthermore, for various performance measures, we show Pareto-optimality of $\xi$ and give an Occam's razor argument that the choice $w_\nu\sim 2^{-K(\nu)}$ for the weights is optimal, where $K(\nu)$ is the length of the shortest program describing $\nu$. The results are applied to games of chance, defined as a sequence of bets, observations, and rewards. The prediction schemes (and bounds) are compared to the popular predictors based on expert advice. Extensions to infinite alphabets, partial, delayed and probabilistic prediction, classification, and more active systems are briefly discussed.<|reference_end|> | arxiv | @article{hutter2003optimality,
title={Optimality of Universal Bayesian Sequence Prediction for General Loss
and Alphabet},
author={Marcus Hutter},
journal={Journal of Machine Learning Research 4 (2003) 971-1000},
year={2003},
number={IDSIA-02-02},
archivePrefix={arXiv},
eprint={cs/0311014},
primaryClass={cs.LG cs.AI math.PR}
} | hutter2003optimality |
arxiv-671524 | cs/0311015 | Make search become the internal function of Internet | <|reference_start|>Make search become the internal function of Internet: Domain Resource Integrated System (DRIS) is introduced in this paper. DRIS is a distributed information retrieval system, which addresses problems such as the poor coverage and long update intervals of current web search systems. The most distinctive characteristic of DRIS is that it is a publicly open system that acts as an internal component of the Internet, rather than the product of a single company. The implementation of DRIS is also presented.<|reference_end|> | arxiv | @article{wang2003make,
title={Make search become the internal function of Internet},
author={Liang Wang, Yiping Guo, Ming Fang},
journal={arXiv preprint arXiv:cs/0311015},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311015},
primaryClass={cs.IR cs.DL cs.NI}
} | wang2003make |
arxiv-671525 | cs/0311016 | Generic and Efficient Program Monitoring by trace analysis | <|reference_start|>Generic and Efficient Program Monitoring by trace analysis: Program execution monitoring consists of checking whole executions for given properties in order to collect global run-time information. Monitoring is very useful to maintain programs. However, application developers face the following dilemma: either they use existing tools which never exactly fit their needs, or they invest a lot of effort to implement monitoring code. In this article we argue that, when an event-oriented tracer exists, the compiler developers can enable the application developers to easily code their own, relevant, monitors which will run efficiently. We propose a high-level operator, called foldt, which operates on execution traces. One of the key advantages of our approach is that it allows a clean separation of concerns; the definition of monitors is neither intertwined in the user source code nor in the language compiler. We give a number of applications of the foldt operator to compute monitors for Mercury program executions: execution profiles, graphical abstract views, and two test coverage measurements. Each example is implemented by a few simple lines of Mercury. Detailed measurements show acceptable performance of the basic mechanism of foldt for executions of several millions of execution events.<|reference_end|> | arxiv | @article{jahier2003generic,
title={Generic and Efficient Program Monitoring by trace analysis},
author={Erwan Jahier and Mireille Ducassé},
journal={E. Jahier and M. Ducassé "Generic Program Monitoring by Trace
Analysis" in the Theory and Practice of Logic Programming journal, Volume 2
part 4&5, pp 613-645, September 2002, Special Issue Program Development,
Cambridge University Press},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311016},
primaryClass={cs.PL}
} | jahier2003generic |
arxiv-671526 | cs/0311017 | 2 P2P or Not 2 P2P? | <|reference_start|>2 P2P or Not 2 P2P?: In the hope of stimulating discussion, we present a heuristic decision tree that designers can use to judge the likely suitability of a P2P architecture for their applications. It is based on the characteristics of a wide range of P2P systems from the literature, both proposed and deployed.<|reference_end|> | arxiv | @article{roussopoulos20032,
title={2 P2P or Not 2 P2P?},
author={Mema Roussopoulos (Harvard University), Mary Baker (HP Labs), David S.
H. Rosenthal (Stanford University Libraries), TJ Giuli (Stanford University),
Petros Maniatis (Intel Research), Jeff Mogul (HP Labs)},
journal={arXiv preprint arXiv:cs/0311017},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311017},
primaryClass={cs.NI cs.AR}
} | roussopoulos20032 |
arxiv-671527 | cs/0311018 | Ackermann Encoding, Bisimulations, and OBDDs | <|reference_start|>Ackermann Encoding, Bisimulations, and OBDDs: We propose an alternative way to represent graphs via OBDDs based on the observation that a partition of the graph nodes allows sharing among the employed OBDDs. In the second part of the paper we present a method to compute at the same time the quotient w.r.t. the maximum bisimulation and the OBDD representation of a given graph. The proposed computation is based on an OBDD-rewriting of the notion of Ackermann encoding of hereditarily finite sets into natural numbers.<|reference_end|> | arxiv | @article{piazza2003ackermann,
title={Ackermann Encoding, Bisimulations, and OBDDs},
author={Carla Piazza (1) and Alberto Policriti (2) ((1) Universita' Ca'
Foscari di Venezia (2) Universita' degli Studi di Udine)},
journal={arXiv preprint arXiv:cs/0311018},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311018},
primaryClass={cs.LO cs.DS}
} | piazza2003ackermann |
arxiv-671528 | cs/0311019 | Replay Debugging of Complex Real-Time Systems: Experiences from Two Industrial Case Studies | <|reference_start|>Replay Debugging of Complex Real-Time Systems: Experiences from Two Industrial Case Studies: Deterministic replay is a method for allowing complex multitasking real-time systems to be debugged using standard interactive debuggers. Even though several replay techniques have been proposed for parallel, multi-tasking and real-time systems, the solutions have so far lingered on a prototype academic level, with very little results to show from actual state-of-the-practice commercial applications. This paper describes a major deterministic replay debugging case study performed on a full-scale industrial robot control system, as well as a minor replay instrumentation case study performed on a military aircraft radar system. In this article, we will show that replay debugging is feasible in complex multi-million lines of code software projects running on top of off-the-shelf real-time operating systems. Furthermore, we will discuss how replay debugging can be introduced in existing systems without impracticable analysis efforts. In addition, we will present benchmarking results from both studies, indicating that the instrumentation overhead is acceptable and affordable.<|reference_end|> | arxiv | @article{sundmark2003replay,
title={Replay Debugging of Complex Real-Time Systems: Experiences from Two
Industrial Case Studies},
author={Daniel Sundmark, Henrik Thane, Joel Huselius, Anders Pettersson, Roger
Mellander, Ingemar Reiyer, Mattias Kallvi},
journal={arXiv preprint arXiv:cs/0311019},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311019},
primaryClass={cs.RO}
} | sundmark2003replay |
arxiv-671529 | cs/0311020 | An Optimal Algorithm for the Maximum-Density Segment Problem | <|reference_start|>An Optimal Algorithm for the Maximum-Density Segment Problem: We address a fundamental problem arising from analysis of biomolecular sequences. The input consists of two numbers $w_{\min}$ and $w_{\max}$ and a sequence $S$ of $n$ number pairs $(a_i,w_i)$ with $w_i>0$. Let {\em segment} $S(i,j)$ of $S$ be the consecutive subsequence of $S$ between indices $i$ and $j$. The {\em density} of $S(i,j)$ is $d(i,j)=(a_i+a_{i+1}+...+a_j)/(w_i+w_{i+1}+...+w_j)$. The {\em maximum-density segment problem} is to find a maximum-density segment over all segments $S(i,j)$ with $w_{\min}\leq w_i+w_{i+1}+...+w_j \leq w_{\max}$. The best previously known algorithm for the problem, due to Goldwasser, Kao, and Lu, runs in $O(n\log(w_{\max}-w_{\min}+1))$ time. In the present paper, we solve the problem in O(n) time. Our approach bypasses the complicated {\em right-skew decomposition}, introduced by Lin, Jiang, and Chao. As a result, our algorithm has the capability to process the input sequence in an online manner, which is an important feature for dealing with genome-scale sequences. Moreover, for a type of input sequences $S$ representable in $O(m)$ space, we show how to exploit the sparsity of $S$ and solve the maximum-density segment problem for $S$ in $O(m)$ time.<|reference_end|> | arxiv | @article{chung2003an,
title={An Optimal Algorithm for the Maximum-Density Segment Problem},
author={Kai-min Chung and Hsueh-I Lu},
journal={SIAM Journal on Computing, 34(2):373-387, 2004},
year={2003},
doi={10.1137/S0097539704440430},
archivePrefix={arXiv},
eprint={cs/0311020},
primaryClass={cs.DS cs.DM}
} | chung2003an |
arxiv-671530 | cs/0311021 | LCG-1 Deployment and usage experience | <|reference_start|>LCG-1 Deployment and usage experience: LCG-1 is the second release of the software framework for the LHC Computing Grid project. In our work we describe the installation process, arising problems and their solutions, and configuration tuning details of the complete LCG-1 site, including all LCG elements required for the self-sufficient site.<|reference_end|> | arxiv | @article{shamardin2003lcg-1,
title={LCG-1 Deployment and usage experience},
author={L. Shamardin (Skobeltsyn Institute of Nuclear Physics, Moscow State
University, Moscow, Russia)},
journal={arXiv preprint arXiv:cs/0311021},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311021},
primaryClass={cs.DC}
} | shamardin2003lcg-1 |
arxiv-671531 | cs/0311022 | Temporalized logics and automata for time granularity | <|reference_start|>Temporalized logics and automata for time granularity: Suitable extensions of the monadic second-order theory of k successors have been proposed in the literature to capture the notion of time granularity. In this paper, we provide the monadic second-order theories of downward unbounded layered structures, which are infinitely refinable structures consisting of a coarsest domain and an infinite number of finer and finer domains, and of upward unbounded layered structures, which consist of a finest domain and an infinite number of coarser and coarser domains, with expressively complete and elementarily decidable temporal logic counterparts. We obtain such a result in two steps. First, we define a new class of combined automata, called temporalized automata, which can be proved to be the automata-theoretic counterpart of temporalized logics, and show that relevant properties, such as closure under Boolean operations, decidability, and expressive equivalence with respect to temporal logics, transfer from component automata to temporalized ones. Then, we exploit the correspondence between temporalized logics and automata to reduce the task of finding the temporal logic counterparts of the given theories of time granularity to the easier one of finding temporalized automata counterparts of them.<|reference_end|> | arxiv | @article{franceschet2003temporalized,
title={Temporalized logics and automata for time granularity},
author={M. Franceschet and A. Montanari},
journal={arXiv preprint arXiv:cs/0311022},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311022},
primaryClass={cs.LO}
} | franceschet2003temporalized |
arxiv-671532 | cs/0311023 | The Chameleon Type Debugger (Tool Demonstration) | <|reference_start|>The Chameleon Type Debugger (Tool Demonstration): In this tool demonstration, we give an overview of the Chameleon type debugger. The type debugger's primary use is to identify locations within a source program which are involved in a type error. By further examining these (potentially) problematic program locations, users gain a better understanding of their program and are able to work towards the actual mistake which was the cause of the type error. The debugger is interactive, allowing the user to provide additional information to narrow down the search space. One of the novel aspects of the debugger is the ability to explain erroneous-looking types. In the event that an unexpected type is inferred, the debugger can highlight program locations which contributed to that result. Furthermore, due to the flexible constraint-based foundation that the debugger is built upon, it can naturally handle advanced type system features such as Haskell's type classes and functional dependencies.<|reference_end|> | arxiv | @article{stuckey2003the,
title={The Chameleon Type Debugger (Tool Demonstration)},
author={Peter J. Stuckey, Martin Sulzmann, Jeremy Wazny},
journal={arXiv preprint arXiv:cs/0311023},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311023},
primaryClass={cs.PL}
} | stuckey2003the |
arxiv-671533 | cs/0311024 | Logic-Based Specification Languages for Intelligent Software Agents | <|reference_start|>Logic-Based Specification Languages for Intelligent Software Agents: The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logic foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping.<|reference_end|> | arxiv | @article{mascardi2003logic-based,
title={Logic-Based Specification Languages for Intelligent Software Agents},
author={Viviana Mascardi, Maurizio Martelli, Leon Sterling},
journal={arXiv preprint arXiv:cs/0311024},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311024},
primaryClass={cs.AI}
} | mascardi2003logic-based |
arxiv-671534 | cs/0311025 | Fine-Grained Authorization for Job Execution in the Grid: Design and Implementation | <|reference_start|>Fine-Grained Authorization for Job Execution in the Grid: Design and Implementation: In this paper we describe our work on enabling fine-grained authorization for resource usage and management. We address the need of virtual organizations to enforce their own policies in addition to those of the resource owners, in regard to both resource consumption and job management. To implement this design, we propose changes and extensions to the Globus Toolkit's version 2 resource management mechanism. We describe the prototype and the policy language that we designed to express fine-grained policies, and we present an analysis of our solution.<|reference_end|> | arxiv | @article{keahey2003fine-grained,
title={Fine-Grained Authorization for Job Execution in the Grid: Design and
Implementation},
author={K. Keahey, V. Welch, S. Lang, B. Liu, and S. Meder},
journal={arXiv preprint arXiv:cs/0311025},
year={2003},
number={Preprint ANL/MCS-P1094-0903},
archivePrefix={arXiv},
eprint={cs/0311025},
primaryClass={cs.CR cs.DC}
} | keahey2003fine-grained |
arxiv-671535 | cs/0311026 | Great Expectations Part I: On the Customizability of Generalized Expected Utility | <|reference_start|>Great Expectations Part I: On the Customizability of Generalized Expected Utility: We propose a generalization of expected utility that we call generalized EU (GEU), where a decision maker's beliefs are represented by plausibility measures, and the decision maker's tastes are represented by general (i.e.,not necessarily real-valued) utility functions. We show that every agent, ``rational'' or not, can be modeled as a GEU maximizer. We then show that we can customize GEU by selectively imposing just the constraints we want. In particular, we show how each of Savage's postulates corresponds to constraints on GEU.<|reference_end|> | arxiv | @article{chu2003great,
title={Great Expectations. Part I: On the Customizability of Generalized
Expected Utility},
author={Francis C. Chu, Joseph Y. Halpern},
journal={arXiv preprint arXiv:cs/0311026},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311026},
primaryClass={cs.AI}
} | chu2003great |
arxiv-671536 | cs/0311027 | Great Expectations Part II: Generalized Expected Utility as a Universal Decision Rule | <|reference_start|>Great Expectations Part II: Generalized Expected Utility as a Universal Decision Rule: Many different rules for decision making have been introduced in the literature. We show that a notion of generalized expected utility proposed in Part I of this paper is a universal decision rule, in the sense that it can represent essentially all other decision rules.<|reference_end|> | arxiv | @article{chu2003great,
title={Great Expectations. Part II: Generalized Expected Utility as a Universal
Decision Rule},
author={Francis C. Chu, Joseph Y. Halpern},
journal={arXiv preprint arXiv:cs/0311027},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311027},
primaryClass={cs.AI}
} | chu2003great |
arxiv-671537 | cs/0311028 | Using Counterfactuals in Knowledge-Based Programming | <|reference_start|>Using Counterfactuals in Knowledge-Based Programming: This paper adds counterfactuals to the framework of knowledge-based programs of Fagin, Halpern, Moses, and Vardi. The use of counterfactuals is illustrated by designing a protocol in which an agent stops sending messages once it knows that it is safe to do so. Such behavior is difficult to capture in the original framework because it involves reasoning about counterfactual executions, including ones that are not consistent with the protocol. Attempts to formalize these notions without counterfactuals are shown to lead to rather counterintuitive behavior.<|reference_end|> | arxiv | @article{halpern2003using,
title={Using Counterfactuals in Knowledge-Based Programming},
author={Joseph Y. Halpern and Yoram Moses},
journal={arXiv preprint arXiv:cs/0311028},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311028},
primaryClass={cs.DC cs.AI}
} | halpern2003using |
arxiv-671538 | cs/0311029 | Staging Transformations for Multimodal Web Interaction Management | <|reference_start|>Staging Transformations for Multimodal Web Interaction Management: Multimodal interfaces are becoming increasingly ubiquitous with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. In addition to improving access and delivery capabilities, such interfaces enable flexible and personalized dialogs with websites, much like a conversation between humans. In this paper, we present a software framework for multimodal web interaction management that supports mixed-initiative dialogs between users and websites. A mixed-initiative dialog is one where the user and the website take turns changing the flow of interaction. The framework supports the functional specification and realization of such dialogs using staging transformations -- a theory for representing and reasoning about dialogs based on partial input. It supports multiple interaction interfaces, and offers sessioning, caching, and co-ordination functions through the use of an interaction manager. Two case studies are presented to illustrate the promise of this approach.<|reference_end|> | arxiv | @article{narayan2003staging,
title={Staging Transformations for Multimodal Web Interaction Management},
author={Michael Narayan, Chris Williams, Saverio Perugini, and Naren
Ramakrishnan},
journal={arXiv preprint arXiv:cs/0311029},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311029},
primaryClass={cs.IR cs.PL}
} | narayan2003staging |
arxiv-671539 | cs/0311030 | Set K-Cover Algorithms for Energy Efficient Monitoring in Wireless Sensor Networks | <|reference_start|>Set K-Cover Algorithms for Energy Efficient Monitoring in Wireless Sensor Networks: Wireless sensor networks (WSNs) are emerging as an effective means for environment monitoring. This paper investigates a strategy for energy efficient monitoring in WSNs that partitions the sensors into covers, and then activates the covers iteratively in a round-robin fashion. This approach takes advantage of the overlap created when many sensors monitor a single area. Our work builds upon previous work in "Power Efficient Organization of Wireless Sensor Networks" by Slijepcevic and Potkonjak, where the model is first formulated. We have designed three approximation algorithms for a variation of the SET K-COVER problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. The first algorithm is randomized and partitions the sensors, in expectation, within a fraction 1 - 1/e (~.63) of the optimum. We present two other deterministic approximation algorithms. One is a distributed greedy algorithm with a 1/2 approximation ratio and the other is a centralized greedy algorithm with a 1 - 1/e approximation ratio. We show that it is NP-Complete to guarantee better than 15/16 of the optimal coverage, indicating that all three algorithms perform well with respect to the best approximation algorithm possible. Simulations indicate that in practice, the deterministic algorithms perform far above their worst case bounds, consistently covering more than 72% of what is covered by an optimum solution. Simulations also indicate that the increase in longevity is proportional to the amount of overlap amongst the sensors. The algorithms are fast, easy to use, and according to simulations, significantly increase the longevity of sensor networks. The randomized algorithm in particular seems quite practical.<|reference_end|> | arxiv | @article{abrams2003set,
title={Set K-Cover Algorithms for Energy Efficient Monitoring in Wireless
Sensor Networks},
author={Zoe Abrams, Ashish Goel, Serge Plotkin},
journal={arXiv preprint arXiv:cs/0311030},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311030},
primaryClass={cs.DS}
} | abrams2003set |
arxiv-671540 | cs/0311031 | Towards an Intelligent Database System Founded on the SP Theory of Computing and Cognition | <|reference_start|>Towards an Intelligent Database System Founded on the SP Theory of Computing and Cognition: The SP theory of computing and cognition, described in previous publications, is an attractive model for intelligent databases because it provides a simple but versatile format for different kinds of knowledge, it has capabilities in artificial intelligence, and it can also function like established database models when that is required. This paper describes how the SP model can emulate other models used in database applications and compares the SP model with those other models. The artificial intelligence capabilities of the SP model are reviewed and its relationship with other artificial intelligence systems is described. Also considered are ways in which current prototypes may be translated into an 'industrial strength' working system.<|reference_end|> | arxiv | @article{wolff2003towards,
title={Towards an Intelligent Database System Founded on the SP Theory of
Computing and Cognition},
author={J. Gerard Wolff},
journal={J G Wolff, Data & Knowledge Engineering 60, 596-624, 2007},
year={2003},
doi={10.1016/j.datak.2006.04.003},
archivePrefix={arXiv},
eprint={cs/0311031},
primaryClass={cs.DB cs.AI}
} | wolff2003towards |
arxiv-671541 | cs/0311032 | A Very Short Self-Interpreter | <|reference_start|>A Very Short Self-Interpreter: In this paper we would like to present a very short (possibly the shortest) self-interpreter, based on a simplistic Turing-complete imperative language. This interpreter explicitly processes the statements of the language, which means the interpreter constitutes a description of the language inside that same language. The paper does not require any specific knowledge; however, experience in programming and a vivid imagination are beneficial.<|reference_end|> | arxiv | @article{mazonka2003a,
title={A Very Short Self-Interpreter},
author={Oleg Mazonka, Daniel B. Cristofani},
journal={arXiv preprint arXiv:cs/0311032},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311032},
primaryClass={cs.PL}
} | mazonka2003a |
arxiv-671542 | cs/0311033 | The Rank-Frequency Analysis for the Functional Style Corpora in the Ukrainian Language | <|reference_start|>The Rank-Frequency Analysis for the Functional Style Corpora in the Ukrainian Language: We use the rank-frequency analysis for the estimation of Kernel Vocabulary size within specific corpora of Ukrainian. The extrapolation of high-rank behaviour is utilized for estimation of the total vocabulary size.<|reference_end|> | arxiv | @article{buk2003the,
title={The Rank-Frequency Analysis for the Functional Style Corpora in the
Ukrainian Language},
author={Solomija N. Buk, Andrij A. Rovenchak},
journal={Journal of Quantitative Linguistics, Vol. 11, No. 3, P. 161-171
(2004)},
year={2003},
doi={10.1080/0929617042000314912},
archivePrefix={arXiv},
eprint={cs/0311033},
primaryClass={cs.CL}
} | buk2003the |
arxiv-671543 | cs/0311034 | Visualization of variations in human brain morphology using differentiating reflection functions | <|reference_start|>Visualization of variations in human brain morphology using differentiating reflection functions: Conventional visualization media such as MRI prints and computer screens are inherently two dimensional, making them incapable of displaying true 3D volume data sets. Applying only transparency or intensity projection, and ignoring light-matter interaction, is likely to give suboptimal results. Little research has been done on using reflectance functions to visually separate the various segments of an MRI volume. We will explore whether applying specific reflectance functions to individual anatomical structures can help in building an intuitive 2D image from a 3D dataset. We will test our hypothesis by visualizing a statistical analysis of the genetic influences on variations in human brain morphology, because it inherently contains many different and complex types of data, making it a good candidate for our approach.<|reference_end|> | arxiv | @article{koldenhof2003visualization,
title={Visualization of variations in human brain morphology using
differentiating reflection functions},
author={Gibby Koldenhof},
journal={arXiv preprint arXiv:cs/0311034},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311034},
primaryClass={cs.GR}
} | koldenhof2003visualization |
arxiv-671544 | cs/0311035 | Improving TCP/IP Performance over Wireless IEEE 802.11 Link | <|reference_start|>Improving TCP/IP Performance over Wireless IEEE 802.11 Link: Cellular phones, wireless laptops, and personal portable devices that support both voice and data access are all examples of communicating devices that use wireless communication. Since TCP/IP (and UDP) is the dominant technology in use in the Internet, it is expected that they will be used (and they currently are) over wireless connections. In this paper, we investigate the performance of TCP (and UDP) over the IEEE 802.11 wireless MAC protocol. We investigate the performance of TCP and UDP assuming three different traffic patterns. First, bulk transmission, where the main concern is the throughput. Second, real-time audio (using UDP) in the presence of bulk TCP transmission, where the main concern is the packet loss for audio traffic. Finally, web traffic, where the main concern is the response time. We also investigate the effect of using the Forward Error Correction (FEC) technique and the MAC sublayer parameters on the throughput and response time.<|reference_end|> | arxiv | @article{petrovic2003improving,
title={Improving TCP/IP Performance over Wireless IEEE 802.11 Link},
author={Milenko Petrovic and Mokhtar Aboelaze},
journal={arXiv preprint arXiv:cs/0311035},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311035},
primaryClass={cs.NI cs.PF}
} | petrovic2003improving |
arxiv-671545 | cs/0311036 | Measuring the Functional Load of Phonological Contrasts | <|reference_start|>Measuring the Functional Load of Phonological Contrasts: Frequency counts are a measure of how much use a language makes of a linguistic unit, such as a phoneme or word. However, what is often important is not the units themselves, but the contrasts between them. A measure is therefore needed for how much use a language makes of a contrast, i.e. the functional load (FL) of the contrast. We generalize previous work in linguistics and speech recognition and propose a family of measures for the FL of several phonological contrasts, including phonemic oppositions, distinctive features, suprasegmentals, and phonological rules. We then test it for robustness to changes of corpora. Finally, we provide examples in Cantonese, Dutch, English, German and Mandarin, in the context of historical linguistics, language acquisition and speech recognition. More information can be found at http://dinoj.info/research/fload<|reference_end|> | arxiv | @article{surendran2003measuring,
title={Measuring the Functional Load of Phonological Contrasts},
author={Dinoj Surendran and Partha Niyogi},
journal={arXiv preprint arXiv:cs/0311036},
year={2003},
number={TR-2003-12},
archivePrefix={arXiv},
eprint={cs/0311036},
primaryClass={cs.CL}
} | surendran2003measuring |
arxiv-671546 | cs/0311037 | DUCT: An Interactive Define-Use Chain Navigation Tool for Relative Debugging | <|reference_start|>DUCT: An Interactive Define-Use Chain Navigation Tool for Relative Debugging: This paper describes an interactive tool that facilitates following define-use chains in large codes. The motivation for the work is to support relative debugging, where it is necessary to iteratively refine a set of assertions between different versions of a program. DUCT is novel because it exploits the Microsoft Intermediate Language (MSIL) that underpins the .NET Framework. Accordingly, it works on a wide range of programming languages without any modification. The paper describes the design and implementation of DUCT, and then illustrates its use with a small case study.<|reference_end|> | arxiv | @article{searle2003duct:,
title={DUCT: An Interactive Define-Use Chain Navigation Tool for Relative
Debugging},
author={Aaron Searle, John Gough, David Abramson},
journal={In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth
International Workshop on Automated Debugging (AADEBUG 2003), September
2003, Ghent},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311037},
primaryClass={cs.SE}
} | searle2003duct: |
arxiv-671547 | cs/0311038 | XPath-Logic and XPathLog: A Logic-Programming Style XML Data Manipulation Language | <|reference_start|>XPath-Logic and XPathLog: A Logic-Programming Style XML Data Manipulation Language: We define XPathLog as a Datalog-style extension of XPath. XPathLog provides a clear, declarative language for querying and manipulating XML whose perspectives are especially in XML data integration. In our characterization, the formal semantics is defined wrt. an edge-labeled graph-based model which covers the XML data model. We give a complete, logic-based characterization of XML data and the main language concept for XML, XPath. XPath-Logic extends the XPath language with variable bindings and embeds it into first-order logic. XPathLog is then the Horn fragment of XPath-Logic, providing a Datalog-style, rule-based language for querying and manipulating XML data. The model-theoretic semantics of XPath-Logic serves as the base of XPathLog as a logic-programming language, whereas also an equivalent answer-set semantics for evaluating XPathLog queries is given. In contrast to other approaches, the XPath syntax and semantics is also used for a declarative specification how the database should be updated: when used in rule heads, XPath filters are interpreted as specifications of elements and properties which should be added to the database.<|reference_end|> | arxiv | @article{may2003xpath-logic,
title={XPath-Logic and XPathLog: A Logic-Programming Style XML Data
Manipulation Language},
author={Wolfgang May},
journal={arXiv preprint arXiv:cs/0311038},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311038},
primaryClass={cs.DB}
} | may2003xpath-logic |
arxiv-671548 | cs/0311039 | Quantum m-out-of-n Oblivious Transfer | <|reference_start|>Quantum m-out-of-n Oblivious Transfer: In the m-out-of-n oblivious transfer (OT) model, one party, Alice, sends n bits to another party, Bob, and Bob can get only m bits from the n bits. However, Alice cannot know which m bits Bob received. Y. Mu [MJV02] and Naor [Naor01] presented classical m-out-of-n oblivious transfer based on the discrete logarithm. Following the work of Shor [Shor94], the discrete logarithm can be solved in polynomial time by quantum computers, so such OTs are unsafe against quantum computers. In this paper, we construct a quantum m-out-of-n OT (QOT) scheme based on the transmission of polarized light and show that the scheme is robust to general attacks, i.e. the QOT scheme satisfies statistical correctness and statistical privacy.<|reference_end|> | arxiv | @article{chen2003quantum,
title={Quantum m-out-of-n Oblivious Transfer},
author={Zhide Chen, Hong Zhu},
journal={arXiv preprint arXiv:cs/0311039},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311039},
primaryClass={cs.CR quant-ph}
} | chen2003quantum |
arxiv-671549 | cs/0311040 | Idempotent I/O for safe time travel | <|reference_start|>Idempotent I/O for safe time travel: Debuggers for logic programming languages have traditionally had a capability most other debuggers did not: the ability to jump back to a previous state of the program, effectively travelling back in time in the history of the computation. This ``retry'' capability is very useful, allowing programmers to examine in detail a part of the computation that they previously stepped over. Unfortunately, it also creates a problem: while the debugger may be able to restore the previous values of variables, it cannot restore the part of the program's state that is affected by I/O operations. If the part of the computation being jumped back over performs I/O, then the program will perform these I/O operations twice, which will result in unwanted effects ranging from the benign (e.g. output appearing twice) to the fatal (e.g. trying to close an already closed file). We present a simple mechanism for ensuring that every I/O action called for by the program is executed at most once, even if the programmer asks the debugger to travel back in time from after the action to before the action. The overhead of this mechanism is low enough and can be controlled well enough to make it practical to use it to debug computations that do significant amounts of I/O.<|reference_end|> | arxiv | @article{somogyi2003idempotent,
title={Idempotent I/O for safe time travel},
author={Zoltan Somogyi},
journal={arXiv preprint arXiv:cs/0311040},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311040},
primaryClass={cs.PL cs.SE}
} | somogyi2003idempotent |
arxiv-671550 | cs/0311041 | S-ToPSS: Semantic Toronto Publish/Subscribe System | <|reference_start|>S-ToPSS: Semantic Toronto Publish/Subscribe System: The increase in the amount of data on the Internet has led to the development of a new generation of applications based on selective information dissemination, where data is distributed only to interested clients. Such applications require a new middleware architecture that can efficiently match user interests with available information. Middleware that can satisfy this requirement includes event-based architectures such as publish-subscribe systems. In this demonstration paper we address the problem of semantic matching. We investigate how current publish/subscribe systems can be extended with semantic capabilities. Our main contribution is the development and validation (through demonstration) of a semantic pub/sub system prototype S-ToPSS (Semantic Toronto Publish/Subscribe System).<|reference_end|> | arxiv | @article{petrovic2003s-topss:,
title={S-ToPSS: Semantic Toronto Publish/Subscribe System},
author={Milenko Petrovic and Ioana Burcea and Hans-Arno Jacobsen},
journal={arXiv preprint arXiv:cs/0311041},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311041},
primaryClass={cs.DC cs.DB}
} | petrovic2003s-topss: |
arxiv-671551 | cs/0311042 | Toward Attribute Efficient Learning Algorithms | <|reference_start|>Toward Attribute Efficient Learning Algorithms: We make progress on two important problems regarding attribute efficient learnability. First, we give an algorithm for learning decision lists of length $k$ over $n$ variables using $2^{\tilde{O}(k^{1/3})} \log n$ examples and time $n^{\tilde{O}(k^{1/3})}$. This is the first algorithm for learning decision lists that has both subexponential sample complexity and subexponential running time in the relevant parameters. Our approach establishes a relationship between attribute efficient learning and polynomial threshold functions and is based on a new construction of low degree, low weight polynomial threshold functions for decision lists. For a wide range of parameters our construction matches a 1994 lower bound due to Beigel for the ODDMAXBIT predicate and gives an essentially optimal tradeoff between polynomial threshold function degree and weight. Second, we give an algorithm for learning an unknown parity function on $k$ out of $n$ variables using $O(n^{1-1/k})$ examples in time polynomial in $n$. For $k=o(\log n)$ this yields a polynomial time algorithm with sample complexity $o(n)$. This is the first polynomial time algorithm for learning parity on a superconstant number of variables with sublinear sample complexity.<|reference_end|> | arxiv | @article{klivans2003toward,
title={Toward Attribute Efficient Learning Algorithms},
author={Adam R. Klivans and Rocco A. Servedio},
journal={arXiv preprint arXiv:cs/0311042},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311042},
primaryClass={cs.LG}
} | klivans2003toward |
arxiv-671552 | cs/0311043 | Combining Logic Programs and Monadic Second Order Logics by Program Transformation | <|reference_start|>Combining Logic Programs and Monadic Second Order Logics by Program Transformation: We present a program synthesis method based on unfold/fold transformation rules which can be used for deriving terminating definite logic programs from formulas of the Weak Monadic Second Order theory of one successor (WS1S). This synthesis method can also be used as a proof method which is a decision procedure for closed formulas of WS1S. We apply our synthesis method for translating CLP(WS1S) programs into logic programs and we use it also as a proof method for verifying safety properties of infinite state systems.<|reference_end|> | arxiv | @article{fioravanti2003combining,
title={Combining Logic Programs and Monadic Second Order Logics by Program
Transformation},
author={F. Fioravanti, A. Pettorossi, M. Proietti},
journal={arXiv preprint arXiv:cs/0311043},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311043},
primaryClass={cs.PL cs.LO}
} | fioravanti2003combining |
arxiv-671553 | cs/0311044 | Derivation of Efficient Logic Programs by Specialization and Reduction of Nondeterminism | <|reference_start|>Derivation of Efficient Logic Programs by Specialization and Reduction of Nondeterminism: Program specialization is a program transformation methodology which improves program efficiency by exploiting the information about the input data which are available at compile time. We show that current techniques for program specialization based on partial evaluation do not perform well on nondeterministic logic programs. We then consider a set of transformation rules which extend the ones used for partial evaluation, and we propose a strategy for guiding the application of these extended rules so as to derive very efficient specialized programs. The efficiency improvements, which sometimes are exponential, are due to the reduction of nondeterminism and to the fact that the computations which are performed by the initial programs in different branches of the computation trees are performed by the specialized programs within single branches. In order to reduce nondeterminism we also make use of mode information for guiding the unfolding process. To exemplify our technique, we show that we can automatically derive very efficient matching programs and parsers for regular languages. The derivations we have performed could not have been done by previously known partial evaluation techniques.<|reference_end|> | arxiv | @article{pettorossi2003derivation,
title={Derivation of Efficient Logic Programs by Specialization and Reduction
of Nondeterminism},
author={Alberto Pettorossi, Maurizio Proietti, Sophie Renault},
journal={arXiv preprint arXiv:cs/0311044},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311044},
primaryClass={cs.PL cs.LO}
} | pettorossi2003derivation |
arxiv-671554 | cs/0311045 | Unsupervised Grammar Induction in a Framework of Information Compression by Multiple Alignment, Unification and Search | <|reference_start|>Unsupervised Grammar Induction in a Framework of Information Compression by Multiple Alignment, Unification and Search: This paper describes a novel approach to grammar induction that has been developed within a framework designed to integrate learning with other aspects of computing, AI, mathematics and logic. This framework, called "information compression by multiple alignment, unification and search" (ICMAUS), is founded on principles of Minimum Length Encoding pioneered by Solomonoff and others. Most of the paper describes SP70, a computer model of the ICMAUS framework that incorporates processes for unsupervised learning of grammars. An example is presented to show how the model can infer a plausible grammar from appropriate input. Limitations of the current model and how they may be overcome are briefly discussed.<|reference_end|> | arxiv | @article{wolff2003unsupervised,
title={Unsupervised Grammar Induction in a Framework of Information Compression
by Multiple Alignment, Unification and Search},
author={J Gerard Wolff},
journal={Proceedings of the Workshop and Tutorial on Learning Context-Free
Grammars (in association with the 14th European Conference on Machine
Learning and the 7th European Conference on Principles and Practice of
Knowledge Discovery in Databases (ECML/PKDD 2003), September 2003,
Cavtat-Dubrovnik, Croatia), editors: C. de la Higuera and P. Adriaans and M.
van Zaanen and J. Oncina, pp 113-124},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311045},
primaryClass={cs.AI}
} | wolff2003unsupervised |
arxiv-671555 | cs/0311046 | Algebras for Agent Norm-Regulation | <|reference_start|>Algebras for Agent Norm-Regulation: An abstract architecture for idealized multi-agent systems whose behaviour is regulated by normative systems is developed and discussed. Agent choices are determined partially by the preference ordering of possible states and partially by normative considerations: The agent chooses that act which leads to the best outcome of all permissible actions. Whether an action is non-permissible depends on whether the result of performing that action leads to a state satisfying a condition which is forbidden, according to the norms regulating the multi-agent system. This idea is formalized by defining set-theoretic predicates characterizing multi-agent systems. The definition of the predicate uses decision theory, the Kanger-Lindahl theory of normative positions, and an algebraic representation of normative systems.<|reference_end|> | arxiv | @article{odelstad2003algebras,
title={Algebras for Agent Norm-Regulation},
author={Jan Odelstad, Magnus Boman},
journal={arXiv preprint arXiv:cs/0311046},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311046},
primaryClass={cs.LO}
} | odelstad2003algebras |
arxiv-671556 | cs/0311047 | I know what you mean: semantic issues in Internet-scale publish/subscribe systems | <|reference_start|>I know what you mean: semantic issues in Internet-scale publish/subscribe systems: In recent years, the amount of information on the Internet has increased exponentially, generating great interest in selective information dissemination systems. The publish/subscribe paradigm is particularly suited for designing systems for routing information and requests according to their content throughout a wide-area network of brokers. Current publish/subscribe systems use limited syntax-based content routing, but since publishers and subscribers are anonymous and decoupled in time, space and location, often across wide-area network boundaries, they do not necessarily speak the same language. Consequently, adding semantics to current publish/subscribe systems is important. In this paper we identify and examine the issues in developing semantic-based content routing for publish/subscribe broker networks.<|reference_end|> | arxiv | @article{burcea2003i,
title={I know what you mean: semantic issues in Internet-scale
publish/subscribe systems},
author={Ioana Burcea, Milenko Petrovic and Hans-Arno Jacobsen},
journal={arXiv preprint arXiv:cs/0311047},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311047},
primaryClass={cs.DC cs.DB}
} | burcea2003i |
arxiv-671557 | cs/0311048 | Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions | <|reference_start|>Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions: We present an unusual algorithm involving classification trees where two trees are grown in opposite directions so that they are matched at their leaves. This approach finds application in a new data mining task we formulate, called "redescription mining". A redescription is a shift-of-vocabulary, or a different way of communicating information about a given subset of data; the goal of redescription mining is to find subsets of data that afford multiple descriptions. We highlight the importance of this problem in domains such as bioinformatics, which exhibit an underlying richness and diversity of data descriptors (e.g., genes can be studied in a variety of ways). Our approach helps integrate multiple forms of characterizing datasets, situates the knowledge gained from one dataset in the context of others, and harnesses high-level abstractions for uncovering cryptic and subtle features of data. Algorithm design decisions, implementation details, and experimental results are presented.<|reference_end|> | arxiv | @article{kumar2003turning,
title={Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions},
author={Deept Kumar, Naren Ramakrishnan, Malcolm Potts, and Richard F. Helm},
journal={arXiv preprint arXiv:cs/0311048},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311048},
primaryClass={cs.CE cs.AI}
} | kumar2003turning |
arxiv-671558 | cs/0311049 | Performance of TCP/UDP under Ad Hoc IEEE802.11 | <|reference_start|>Performance of TCP/UDP under Ad Hoc IEEE802.11: TCP is the de facto standard connection-oriented transport layer protocol, while UDP is the de facto standard transport layer protocol used with real-time traffic for audio and video. Although there have been many attempts to measure and analyze the performance of the TCP protocol in wireless networks, very little research has been done on UDP or the interaction between TCP and UDP traffic over the wireless link. In this paper, we study the performance of TCP and UDP over an IEEE802.11 ad hoc network. We used two topologies, a string and a mesh topology. Our work indicates that IEEE802.11 as an ad hoc network is not very suitable for bulk transfer using TCP. It also indicates that it is much better for real-time audio, although one has to be careful here since real-time audio does require much less bandwidth than the wireless link bandwidth. Careful and detailed studies are needed to further clarify that issue.<|reference_end|> | arxiv | @article{petrovic2003performance,
title={Performance of TCP/UDP under Ad Hoc IEEE802.11},
author={Milenko Petrovic and Mokhtar Aboelaze},
journal={arXiv preprint arXiv:cs/0311049},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311049},
primaryClass={cs.NI cs.PF}
} | petrovic2003performance |
arxiv-671559 | cs/0311050 | Data mining and Privacy in Public Sector using Intelligent Agents (discussion paper) | <|reference_start|>Data mining and Privacy in Public Sector using Intelligent Agents (discussion paper): The public sector comprises government agencies, ministries, education institutions, health providers and other types of government, commercial and not-for-profit organisations. Unlike commercial enterprises, this environment is highly heterogeneous in all aspects. This forms a complex network which is not always optimised. A lack of optimisation and communication hinders information sharing between the network nodes, limiting the flow of information. Another limiting aspect is privacy of personal information and security of operations of some nodes or segments of the network. Attempts to reorganise the network or improve communications to make more information available for sharing and analysis may be hindered or completely halted by public concerns over privacy, political agendas, social and technological barriers. This paper discusses a technical solution for information sharing while addressing the privacy concerns with no need for reorganisation of the existing public sector infrastructure. The solution is based on imposing an additional layer of Intelligent Software Agents and Knowledge Bases for data mining and analysis.<|reference_end|> | arxiv | @article{voskob2003data,
title={Data mining and Privacy in Public Sector using Intelligent Agents
(discussion paper)},
author={Max Voskob, Nuck Punin},
journal={arXiv preprint arXiv:cs/0311050},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311050},
primaryClass={cs.CY cs.AI cs.IR cs.MA}
} | voskob2003data |
arxiv-671560 | cs/0311051 | Integrating existing cone-shaped and projection-based cardinal direction relations and a TCSP-like decidable generalisation | <|reference_start|>Integrating existing cone-shaped and projection-based cardinal direction relations and a TCSP-like decidable generalisation: We consider the integration of existing cone-shaped and projection-based calculi of cardinal direction relations, well-known in QSR. The more general, integrating language we consider is based on convex constraints of the qualitative form $r(x,y)$, $r$ being a cone-shaped or projection-based cardinal direction atomic relation, or of the quantitative form $(\alpha ,\beta)(x,y)$, with $\alpha ,\beta\in [0,2\pi)$ and $(\beta -\alpha)\in [0,\pi ]$: the meaning of the quantitative constraint, in particular, is that point $x$ belongs to the (convex) cone-shaped area rooted at $y$, and bounded by angles $\alpha$ and $\beta$. The general form of a constraint is a disjunction of the form $[r_1\vee...\vee r_{n_1}\vee (\alpha_1,\beta_1)\vee...\vee (\alpha _{n_2},\beta_{n_2})](x,y)$, with $r_i(x,y)$, $i=1... n_1$, and $(\alpha _i,\beta_i)(x,y)$, $i=1... n_2$, being convex constraints as described above: the meaning of such a general constraint is that, for some $i=1... n_1$, $r_i(x,y)$ holds, or, for some $i=1... n_2$, $(\alpha_i,\beta_i)(x,y)$ holds. A conjunction of such general constraints is a TCSP-like CSP, which we will refer to as an SCSP (Spatial Constraint Satisfaction Problem). An effective solution search algorithm for an SCSP will be described, which uses (1) constraint propagation, based on a composition operation to be defined, as the filtering method during the search, and (2) the Simplex algorithm, guaranteeing completeness, at the leaves of the search tree. The approach is particularly suited for large-scale high-level vision, such as, e.g., satellite-like surveillance of a geographic area.<|reference_end|> | arxiv | @article{isli2003integrating,
title={Integrating existing cone-shaped and projection-based cardinal direction
relations and a TCSP-like decidable generalisation},
author={Amar Isli},
journal={arXiv preprint arXiv:cs/0311051},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311051},
primaryClass={cs.AI}
} | isli2003integrating |
arxiv-671561 | cs/0311052 | A Situation Calculus-based Approach To Model Ubiquitous Information Services | <|reference_start|>A Situation Calculus-based Approach To Model Ubiquitous Information Services: This paper presents an augmented situation calculus-based approach to model autonomous computing paradigm in ubiquitous information services. To make it practical for commercial development and easier to support autonomous paradigm imposed by ubiquitous information services, we made improvements based on Reiter's standard situation calculus. First we explore the inherent relationship between fluents and evolution: since not all fluents contribute to systems' evolution and some fluents can be derived from some others, we define those fluents that are sufficient and necessary to determine evolutional potential as decisive fluents, and then we prove that their successor states wrt to deterministic complex actions satisfy Markov property. Then, within the calculus framework we build, we introduce validity theory to model the autonomous services with application-specific validity requirements, including: validity fluents to axiomatize validity requirements, heuristic multiple alternative service choices ranging from complete acceptance, partial acceptance, to complete rejection, and validity-ensured policy to comprise such alternative service choices into organic, autonomously-computable services. Our approach is demonstrated by a ubiquitous calendaring service, ACS, throughout the paper.<|reference_end|> | arxiv | @article{wen-yu2003a,
title={A Situation Calculus-based Approach To Model Ubiquitous Information
Services},
author={Dong Wen-Yu, Xu Ke, Lin Meng-Xiang},
journal={arXiv preprint arXiv:cs/0311052},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311052},
primaryClass={cs.AI cs.HC}
} | wen-yu2003a |
arxiv-671562 | cs/0311053 | Weak Bezout inequality for D-modules | <|reference_start|>Weak Bezout inequality for D-modules: Let $\{w_{i,j}\}_{1\leq i\leq n, 1\leq j\leq s} \subset L_m=F(X_1,...,X_m)[{\partial \over \partial X_1},..., {\partial \over \partial X_m}]$ be linear partial differential operators of orders with respect to ${\partial \over \partial X_1},..., {\partial \over \partial X_m}$ at most $d$. We prove an upper bound $n(4m^2d\min\{n,s\})^{4^{m-t-1}(2(m-t))}$ on the leading coefficient of the Hilbert-Kolchin polynomial of the left $L_m$-module $<\{w_{1,j}, ..., w_{n,j}\}_{1\leq j \leq s} > \subset L_m^n$ having the differential type $t$ (also being equal to the degree of the Hilbert-Kolchin polynomial). The main technical tool is the complexity bound on solving systems of linear equations over {\it algebras of fractions} of the form $$L_m(F[X_1,..., X_m, {\partial \over \partial X_1},..., {\partial \over \partial X_k}])^{-1}.$$<|reference_end|> | arxiv | @article{grigoriev2003weak,
title={Weak Bezout inequality for D-modules},
author={Dima Grigoriev},
journal={arXiv preprint arXiv:cs/0311053},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311053},
primaryClass={cs.SC cs.CC}
} | grigoriev2003weak |
arxiv-671563 | cs/0311054 | Copyright and Creativity: Authors and Photographers | <|reference_start|>Copyright and Creativity: Authors and Photographers: The history of the occupations "author" and "photographer" provides an insightful perspective on copyright and creativity. The concept of the romantic author, associated with personal creative genius, gained prominence in the eighteenth century. However, in the U.S. in 1900 only about three thousand persons professed their occupation to be "author." Self-professed "photographers" were then about ten times as numerous as authors. Being a photographer was associated with manufacturing and depended only on mastering technical skills and making a living. Being an author, in contrast, was an elite status associated with science and literature. Across the twentieth century, the number of writers and authors grew much more rapidly than the number of photographers. The relative success of writers and authors in creating jobs seems to have depended not on differences in copyright or possibilities for self-production, but on greater occupational innovation. Creativity in organizing daily work is an important form of creativity.<|reference_end|> | arxiv | @article{galbi2003copyright,
title={Copyright and Creativity: Authors and Photographers},
author={Douglas A. Galbi},
journal={arXiv preprint arXiv:cs/0311054},
year={2003},
archivePrefix={arXiv},
eprint={cs/0311054},
primaryClass={cs.CY cs.DL}
} | galbi2003copyright |
arxiv-671564 | cs/0312001 | The concept of strong and weak virtual reality | <|reference_start|>The concept of strong and weak virtual reality: We approach the virtual reality phenomenon by studying its relationship to set theory, and we investigate the case where this is done using the wellfoundedness property of sets. Our hypothesis is that non-wellfounded sets (hypersets) give rise to a different quality of virtual reality than do familiar wellfounded sets. We initially provide an alternative approach to virtual reality based on Sommerhoff's idea of first and second order self-awareness; both categories of self-awareness are considered as necessary conditions for consciousness in terms of higher cognitive functions. We then introduce a representation of first and second order self-awareness through sets, and assume that these sets, which we call events, originally form a collection of wellfounded sets. Strong virtual reality characterizes virtual reality environments which have the limited capacity to create only events associated with wellfounded sets. In contrast, the more general concept of weak virtual reality characterizes collections of virtual reality mediated events altogether forming an entirety larger than any collection of wellfounded sets. By giving reference to Aczel's hyperset theory we indicate that this definition is not empty, because hypersets encompass wellfounded sets already. Moreover, we argue that weak virtual reality could be realized in human history through continued progress in computer technology. Finally, we reformulate our characterization into a more general framework, and use Baltag's Structural Theory of Sets (STS) to show that within this general hyperset theory Sommerhoff's first and second order self-awareness as well as both concepts of virtual reality admit a consistent mathematical representation.<|reference_end|> | arxiv | @article{lisewski2003the,
title={The concept of strong and weak virtual reality},
author={A. M. Lisewski},
journal={Minds and Machines, 16 (2), 201-219 (2006)},
year={2003},
doi={10.1007/s11023-006-9037-z},
archivePrefix={arXiv},
eprint={cs/0312001},
primaryClass={cs.LO nlin.AO physics.comp-ph}
} | lisewski2003the |
arxiv-671565 | cs/0312002 | On Structuring Proof Search for First Order Linear Logic | <|reference_start|>On Structuring Proof Search for First Order Linear Logic: Full first order linear logic can be presented as an abstract logic programming language in Miller's system Forum, which yields a sensible operational interpretation in the 'proof search as computation' paradigm. However, Forum still has to deal with syntactic details that would normally be ignored by a reasonable operational semantics. In this respect, Forum improves on Gentzen systems for linear logic by restricting the language and the form of inference rules. We further improve on Forum by restricting the class of formulae allowed, in a system we call G-Forum, which is still equivalent to full first order linear logic. The only formulae allowed in G-Forum have the same shape as Forum sequents: the restriction does not diminish expressiveness and makes G-Forum amenable to proof theoretic analysis. G-Forum consists of two (big) inference rules, for which we show a cut elimination procedure. This does not need to appeal to finer detail in formulae and sequents than is provided by G-Forum, thus successfully testing the internal symmetries of our system.<|reference_end|> | arxiv | @article{bruscoli2003on,
title={On Structuring Proof Search for First Order Linear Logic},
author={Paola Bruscoli and Alessio Guglielmi},
journal={Theoretical computer science 360 (1-3), pp. 42-76. 2006},
year={2003},
doi={10.1016/j.tcs.2005.11.047},
number={Technical Report WV-03-10 TU Dresden},
archivePrefix={arXiv},
eprint={cs/0312002},
primaryClass={cs.LO}
} | bruscoli2003on |
arxiv-671566 | cs/0312003 | Hybrid LQG-Neural Controller for Inverted Pendulum System | <|reference_start|>Hybrid LQG-Neural Controller for Inverted Pendulum System: The paper presents a hybrid system controller, incorporating a neural and an LQG controller. The neural controller has been optimized by genetic algorithms directly on the inverted pendulum system. The failure free optimization process stipulated a relatively small region of the asymptotic stability of the neural controller, which is concentrated around the regulation point. The presented hybrid controller combines benefits of a genetically optimized neural controller and an LQG controller in a single system controller. High quality of the regulation process is achieved through utilization of the neural controller, while stability of the system during transient processes and a wide range of operation are assured through application of the LQG controller. The hybrid controller has been validated by applying it to a simulation model of an inherently unstable system of inverted pendulum.<|reference_end|> | arxiv | @article{sazonov2003hybrid,
title={Hybrid LQG-Neural Controller for Inverted Pendulum System},
author={E.S. Sazonov, P. Klinkhachorn and R. L. Klein},
journal={Proceedings of 35th Southeastern Symposium on System Theory
(SSST), Morgantown, WV, March 2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312003},
primaryClass={cs.NE cs.LG}
} | sazonov2003hybrid |
arxiv-671567 | cs/0312004 | Improving spam filtering by combining Naive Bayes with simple k-nearest neighbor searches | <|reference_start|>Improving spam filtering by combining Naive Bayes with simple k-nearest neighbor searches: Using naive Bayes for email classification has become very popular within the last few months. Naive Bayes classifiers are quite easy to implement and very efficient. In this paper we want to present empirical results of email classification using a combination of naive Bayes and k-nearest neighbor searches. Using this technique we show that the accuracy of a Bayes filter can be improved slightly for a high number of features and significantly for a small number of features.<|reference_end|> | arxiv | @article{etzold2003improving,
title={Improving spam filtering by combining Naive Bayes with simple k-nearest
neighbor searches},
author={Daniel Etzold},
journal={arXiv preprint arXiv:cs/0312004},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312004},
primaryClass={cs.LG}
} | etzold2003improving |
arxiv-671568 | cs/0312005 | A Cartography for 2x2 Symmetric Games | <|reference_start|>A Cartography for 2x2 Symmetric Games: A bidimensional representation of the space of 2x2 Symmetric Games in the strategic representation is proposed. This representation provides a tool for the classification of 2x2 symmetric games, quantification of the fraction of them having a certain feature, and predictions of changes in the characteristics of a game when a change is made to the payoff matrix that defines it.<|reference_end|> | arxiv | @article{huertas-rosero2003a,
title={A Cartography for 2x2 Symmetric Games},
author={Alvaro Francisco Huertas-Rosero},
journal={arXiv preprint arXiv:cs/0312005},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312005},
primaryClass={cs.GT}
} | huertas-rosero2003a |
arxiv-671569 | cs/0312006 | Benchmarking and Implementation of Probability-Based Simulations on Programmable Graphics Cards | <|reference_start|>Benchmarking and Implementation of Probability-Based Simulations on Programmable Graphics Cards: The latest Graphics Processing Units (GPUs) are reported to reach up to 200 billion floating point operations per second (200 Gflops) and to have price performance of 0.1 cents per M flop. These facts raise great interest in the plausibility of extending the GPUs' use to non-graphics applications, in particular numerical simulations on structured grids (lattice). We review previous work on using GPUs for non-graphics applications, implement probability-based simulations on the GPU, namely the Ising and percolation models, implement vector operation benchmarks for the GPU, and finally compare the CPU's and GPU's performance. A general conclusion from the results obtained is that moving computations from the CPU to the GPU is feasible, yielding good time and price performance, for certain lattice computations. Preliminary results also show that it is feasible to use them in parallel<|reference_end|> | arxiv | @article{tomov2003benchmarking,
title={Benchmarking and Implementation of Probability-Based Simulations on
Programmable Graphics Cards},
author={S. Tomov (1), M. McGuigan (1), R. Bennett (1), G. Smith (1), J.
Spiletic (1) ((1) Brookhaven National Laboratory, Data Analysis and
Visualization, Upton, NY)},
journal={arXiv preprint arXiv:cs/0312006},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312006},
primaryClass={cs.GR cs.PF}
} | tomov2003benchmarking |
arxiv-671570 | cs/0312007 | Counting complexity classes for numeric computations II: algebraic and semialgebraic sets | <|reference_start|>Counting complexity classes for numeric computations II: algebraic and semialgebraic sets: We define counting classes #P_R and #P_C in the Blum-Shub-Smale setting of computations over the real or complex numbers, respectively. The problems of counting the number of solutions of systems of polynomial inequalities over R, or of systems of polynomial equalities over C, respectively, turn out to be natural complete problems in these classes. We investigate to what extent the new counting classes capture the complexity of computing basic topological invariants of semialgebraic sets (over R) and algebraic sets (over C). We prove that the problem of computing the (modified) Euler characteristic of semialgebraic sets is FP_R^{#P_R}-complete, and that the problem of computing the geometric degree of complex algebraic sets is FP_C^{#P_C}-complete. We also define new counting complexity classes in the classical Turing model via taking Boolean parts of the classes above, and show that the problems of computing the Euler characteristic and the geometric degree of (semi)algebraic sets given by integer polynomials are complete in these classes. We complement the results in the Turing model by proving, for all k in N, the FPSPACE-hardness of the problem of computing the k-th Betti number of the set of real zeros of a given integer polynomial. This holds with respect to the singular homology as well as for the Borel-Moore homology.<|reference_end|> | arxiv | @article{buergisser2003counting,
title={Counting complexity classes for numeric computations II: algebraic and
semialgebraic sets},
author={Peter Buergisser and Felipe Cucker},
journal={Journal of Complexity 22(2): 147-191 (2006)},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312007},
primaryClass={cs.CC math.AT}
} | buergisser2003counting |
arxiv-671571 | cs/0312008 | Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval | <|reference_start|>Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval: Although more and more language pairs are covered by machine translation services, there are still many pairs that lack translation resources. Cross-language information retrieval (CLIR) is an application which needs translation functionality of a relatively low level of sophistication since current models for information retrieval (IR) are still based on a bag-of-words. The Web provides a vast resource for the automatic construction of parallel corpora which can be used to train statistical translation models automatically. The resulting translation models can be embedded in several ways in a retrieval model. In this paper, we will investigate the problem of automatically mining parallel texts from the Web and different ways of integrating the translation models within the retrieval process. Our experiments on standard test collections for CLIR show that the Web-based translation models can surpass commercial MT systems in CLIR tasks. These results open the perspective of constructing a fully automatic query translation device for CLIR at a very low cost.<|reference_end|> | arxiv | @article{kraaij2003embedding,
title={Embedding Web-based Statistical Translation Models in Cross-Language
Information Retrieval},
author={Wessel Kraaij, Jian-Yun Nie and Michel Simard},
journal={Computational Linguistics 29(3) september 2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312008},
primaryClass={cs.CL cs.IR}
} | kraaij2003embedding |
arxiv-671572 | cs/0312009 | Failure-Free Genetic Algorithm Optimization of a System Controller Using SAFE/LEARNING Controllers in Tandem | <|reference_start|>Failure-Free Genetic Algorithm Optimization of a System Controller Using SAFE/LEARNING Controllers in Tandem: The paper presents a method for failure free genetic algorithm optimization of a system controller. Genetic algorithms present a powerful tool that facilitates producing near-optimal system controllers. Applied to such methods of computational intelligence as neural networks or fuzzy logic, these methods are capable of combining the non-linear mapping capabilities of the latter with learning the system behavior directly, that is, without a prior model. At the same time, genetic algorithms routinely produce solutions that lead to the failure of the controlled system. Such solutions are generally unacceptable for applications where safe operation must be guaranteed. We present here a method of design, which allows failure-free application of genetic algorithms through utilization of SAFE and LEARNING controllers in tandem, where the SAFE controller recovers the system from dangerous states while the LEARNING controller learns its behavior. The method has been validated by applying it to an inherently unstable system of inverted pendulum.<|reference_end|> | arxiv | @article{sazonov2003failure-free,
title={Failure-Free Genetic Algorithm Optimization of a System Controller Using
SAFE/LEARNING Controllers in Tandem},
author={E.S.Sazonov, D. Del Gobbo, P. Klinkhachorn and R. L. Klein},
journal={Proceedings of 34th Southeastern Symposium on System Theory
(SSST), Huntsville, AL, March 2002},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312009},
primaryClass={cs.NE cs.LG}
} | sazonov2003failure-free |
arxiv-671573 | cs/0312010 | Designing of a Community-based Translation Center | <|reference_start|>Designing of a Community-based Translation Center: Interfaces that support multi-lingual content can reach a broader community. We wish to extend the reach of CITIDEL, a digital library for computing education materials, to support multiple languages. By doing so, we hope that it will increase the number of users, and in turn the number of resources. This paper discusses three approaches to translation (automated translation, developer-based, and community-based), and a brief evaluation of these approaches. It proposes a design for an online community translation center where volunteers help translate interface components and educational materials available in CITIDEL.<|reference_end|> | arxiv | @article{mcdevitt2003designing,
title={Designing of a Community-based Translation Center},
author={Kathleen McDevitt, Manuel A. Perez-Quinones, Olga I. Padilla-Falto},
journal={arXiv preprint arXiv:cs/0312010},
year={2003},
number={TR-03-30},
archivePrefix={arXiv},
eprint={cs/0312010},
primaryClass={cs.HC cs.DL}
} | mcdevitt2003designing |
arxiv-671574 | cs/0312011 | Constraint Optimization and Statistical Mechanics | <|reference_start|>Constraint Optimization and Statistical Mechanics: In these lectures I will present an introduction to the results that have been recently obtained in constraint optimization of random problems using statistical mechanics techniques. After presenting the general results, in order to simplify the presentation, I will describe in detail only the problems related to the coloring of a random graph.<|reference_end|> | arxiv | @article{parisi2003constraint,
title={Constraint Optimization and Statistical Mechanics},
author={Giorgio Parisi},
journal={arXiv preprint arXiv:cs/0312011},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312011},
primaryClass={cs.CC cond-mat.dis-nn cs.DS}
} | parisi2003constraint |
arxiv-671575 | cs/0312012 | Methods to Model-Check Parallel Systems Software | <|reference_start|>Methods to Model-Check Parallel Systems Software: We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.<|reference_end|> | arxiv | @article{matlin2003methods,
title={Methods to Model-Check Parallel Systems Software},
author={Olga Shumsky Matlin, William McCune, and Ewing Lusk},
journal={arXiv preprint arXiv:cs/0312012},
year={2003},
number={ANL/MCS-TM-261},
archivePrefix={arXiv},
eprint={cs/0312012},
primaryClass={cs.LO cs.DC}
} | matlin2003methods |
arxiv-671576 | cs/0312013 | Fuzziness versus probability again | <|reference_start|>Fuzziness versus probability again: A construction of a fuzzy logic controller based on an analogy between fuzzy conditional rule of inference and marginal probability in terms of the conditional probability function has been proposed.<|reference_end|> | arxiv | @article{jurkovic2003fuzziness,
title={Fuzziness versus probability again},
author={F.Jurkovic},
journal={arXiv preprint arXiv:cs/0312013},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312013},
primaryClass={cs.LO}
} | jurkovic2003fuzziness |
arxiv-671577 | cs/0312014 | Logical Characterizations of Heap Abstractions | <|reference_start|>Logical Characterizations of Heap Abstractions: Shape analysis concerns the problem of determining "shape invariants" for programs that perform destructive updating on dynamically allocated storage. In recent work, we have shown how shape analysis can be performed, using an abstract interpretation based on 3-valued first-order logic. In that work, concrete stores are finite 2-valued logical structures, and the sets of stores that can possibly arise during execution are represented (conservatively) using a certain family of finite 3-valued logical structures. In this paper, we show how 3-valued structures that arise in shape analysis can be characterized using formulas in first-order logic with transitive closure. We also define a non-standard ("supervaluational") semantics for 3-valued first-order logic that is more precise than a conventional 3-valued semantics, and demonstrate that the supervaluational semantics can be effectively implemented using existing theorem provers.<|reference_end|> | arxiv | @article{yorsh2003logical,
title={Logical Characterizations of Heap Abstractions},
author={G. Yorsh, T. Reps, M. Sagiv, R. Wilhelm},
journal={arXiv preprint arXiv:cs/0312014},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312014},
primaryClass={cs.LO}
} | yorsh2003logical |
arxiv-671578 | cs/0312015 | Soft lambda-calculus: a language for polynomial time computation | <|reference_start|>Soft lambda-calculus: a language for polynomial time computation: Soft linear logic ([Lafont02]) is a subsystem of linear logic characterizing the class PTIME. We introduce Soft lambda-calculus as a calculus typable in the intuitionistic and affine variant of this logic. We prove that the (untyped) terms of this calculus are reducible in polynomial time. We then extend the type system of Soft logic with recursive types. This allows us to consider non-standard types for representing lists. Using these datatypes we examine the concrete expressivity of Soft lambda-calculus with the example of the insertion sort algorithm.<|reference_end|> | arxiv | @article{baillot2003soft,
title={Soft lambda-calculus: a language for polynomial time computation},
author={Patrick Baillot, Virgile Mogbil},
journal={arXiv preprint arXiv:cs/0312015},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312015},
primaryClass={cs.LO cs.CC}
} | baillot2003soft |
arxiv-671579 | cs/0312016 | Taking the Initiative with Extempore: Exploring Out-of-Turn Interactions with Websites | <|reference_start|>Taking the Initiative with Extempore: Exploring Out-of-Turn Interactions with Websites: We present the first study to explore the use of out-of-turn interaction in websites. Out-of-turn interaction is a technique which empowers the user to supply unsolicited information while browsing. This approach helps flexibly bridge any mental mismatch between the user and the website, in a manner fundamentally different from faceted browsing and site-specific search tools. We built a user interface (Extempore) which accepts out-of-turn input via voice or text; and employed it in a US congressional website, to determine if users utilize out-of-turn interaction for information-finding tasks, and their rationale for doing so. The results indicate that users are adept at discerning when out-of-turn interaction is necessary in a particular task, and actively interleaved it with browsing. However, users found cascading information across information-finding subtasks challenging. Therefore, this work not only improves our understanding of out-of-turn interaction, but also suggests further opportunities to enrich browsing experiences for users.<|reference_end|> | arxiv | @article{perugini2003taking,
title={Taking the Initiative with Extempore: Exploring Out-of-Turn Interactions
with Websites},
author={Saverio Perugini, Mary E. Pinney, Naren Ramakrishnan, Manuel A.
Perez-Quinones, and Mary Beth Rosson},
journal={arXiv preprint arXiv:cs/0312016},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312016},
primaryClass={cs.HC cs.IR}
} | perugini2003taking |
arxiv-671580 | cs/0312017 | An Exploratory Study of Mobile Computing Use by Knowledge Workers | <|reference_start|>An Exploratory Study of Mobile Computing Use by Knowledge Workers: This paper describes some preliminary results from a 20-week study on the use of Compaq iPAQ Personal Digital Assistants (PDAs) by 10 senior developers, analysts, technical managers, and senior organisational managers. The goal of the study was to identify what applications were used, how and where they were used, the problems and issues that arose, and how use of the iPAQs changed over the study period. The paper highlights some interesting uses of the iPAQs, and identifies some of the characteristics of successful mobile applications.<|reference_end|> | arxiv | @article{prekop2003an,
title={An Exploratory Study of Mobile Computing Use by Knowledge Workers},
author={Paul Prekop},
journal={arXiv preprint arXiv:cs/0312017},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312017},
primaryClass={cs.HC}
} | prekop2003an |
arxiv-671581 | cs/0312018 | Mapping Subsets of Scholarly Information | <|reference_start|>Mapping Subsets of Scholarly Information: We illustrate the use of machine learning techniques to analyze, structure, maintain, and evolve a large online corpus of academic literature. An emerging field of research can be identified as part of an existing corpus, permitting the implementation of a more coherent community structure for its practitioners.<|reference_end|> | arxiv | @article{ginsparg2003mapping,
title={Mapping Subsets of Scholarly Information},
author={Paul Ginsparg, Paul Houle, Thorsten Joachims, and Jae-Hoon Sul
(Cornell University)},
journal={arXiv preprint arXiv:cs/0312018},
year={2003},
doi={10.1073/pnas.0308253100},
archivePrefix={arXiv},
eprint={cs/0312018},
primaryClass={cs.IR cs.LG}
} | ginsparg2003mapping |
arxiv-671582 | cs/0312019 | Verification of recursive parallel systems | <|reference_start|>Verification of recursive parallel systems: In this paper we consider the problem of proving properties of infinite behaviour of formalisms suitable to describe (infinite state) systems with recursion and parallelism. As a formal setting, we consider the framework of Process Rewriting Systems (PRSs). For a meaningful fragment of PRSs, which accommodates both Pushdown Automata and Petri Nets, we state decidability results for a class of properties about infinite derivations (infinite term rewritings). The given results can be exploited for the automatic verification of some classes of linear time properties of infinite state systems described by PRSs. In order to exemplify the assessed results, we introduce a meaningful automaton-based formalism which allows expressing both recursion and multi-threading.<|reference_end|> | arxiv | @article{bozzelli2003verification,
title={Verification of recursive parallel systems},
author={Laura Bozzelli, Massimo Benerecetti and Adriano Peron},
journal={arXiv preprint arXiv:cs/0312019},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312019},
primaryClass={cs.LO}
} | bozzelli2003verification |
arxiv-671583 | cs/0312020 | Modeling Object Oriented Constraint Programs in Z | <|reference_start|>Modeling Object Oriented Constraint Programs in Z: Object oriented constraint programs (OOCPs) emerge as a leading evolution of constraint programming and artificial intelligence, first applied to a range of industrial applications called configuration problems. The rich variety of technical approaches to solving configuration problems (CLP(FD), CC(FD), DCSP, Terminological systems, constraint programs with set variables ...) is a source of difficulty. No universally accepted formal language exists for communicating about OOCPs, which makes the comparison of systems difficult. We present here a Z-based specification of OOCPs which avoids the pitfall of hidden object semantics. The object system is part of the specification, and captures all of the most advanced notions from the object oriented modeling standard UML. The paper illustrates these issues and the conciseness and precision of Z by the specification of a working OOCP that solves a historical AI problem: parsing a context free grammar. Being written in Z, an OOCP specification also supports formal proofs. The whole builds the foundation of an adaptive and evolving framework for communicating about constrained object models and programs.<|reference_end|> | arxiv | @article{henocque2003modeling,
title={Modeling Object Oriented Constraint Programs in Z},
author={Laurent Henocque},
journal={arXiv preprint arXiv:cs/0312020},
year={2003},
number={RR-LSIS-03-006},
archivePrefix={arXiv},
eprint={cs/0312020},
primaryClass={cs.AI}
} | henocque2003modeling |
arxiv-671584 | cs/0312021 | ICT-based planning and the missing educational link | <|reference_start|>ICT-based planning and the missing educational link: The past century ended with an unexpected explosion of Information and Communication Technologies (ICT), both in planning/managing public policies, and in exchanging knowledge. However, the extent to which ICT-based tools increase the level of public knowledge, or help decision makers, is still uncertain. Although indirectly, the overload of unfiltered Web-based information seems able to hamper the knowledge growth of people, particularly in some developing communities, whereas Decision Support Systems (DSS) and Geographical Information Systems (GIS) prove to be ineffective if managed by unskilled planning bodies. Given such warnings, this paper outlines how the different social and cultural awareness of local communities can affect the outcomes of ICT-based tools. It further explores the impacts of ICT-based tools on community development and spatial planning, emphasizing the role of proper literacy and education for effective management.<|reference_end|> | arxiv | @article{camarda2003ict-based,
title={ICT-based planning and the missing educational link},
author={Domenico Camarda},
journal={arXiv preprint arXiv:cs/0312021},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312021},
primaryClass={cs.CY}
} | camarda2003ict-based |
arxiv-671585 | cs/0312022 | GridEmail: A Case for Economically Regulated Internet-based Interpersonal Communications | <|reference_start|>GridEmail: A Case for Economically Regulated Internet-based Interpersonal Communications: Email has emerged as a dominant form of electronic communication between people. Spam is a major problem for email users, with estimates of up to 56% of email falling into that category. Control of Spam is being attempted with technical and legislative methods. In this paper we look at email and spam from a supply-demand perspective. We propose Gridemail, an email system based on an economy of communicating parties, where participants' motivations are represented as pricing policies and profiles. This system is expected to help people regulate their personal communications to suit their conditions, and help in removing unwanted messages.<|reference_end|> | arxiv | @article{soysa2003gridemail:,
title={GridEmail: A Case for Economically Regulated Internet-based
Interpersonal Communications},
author={Manjuka Soysa, Rajkumar Buyya, and Baikunth Nath},
journal={arXiv preprint arXiv:cs/0312022},
year={2003},
number={GRIDS-TR-2003-6},
archivePrefix={arXiv},
eprint={cs/0312022},
primaryClass={cs.DC}
} | soysa2003gridemail: |
arxiv-671586 | cs/0312023 | Inferring Termination Conditions for Logic Programs using Backwards Analysis | <|reference_start|>Inferring Termination Conditions for Logic Programs using Backwards Analysis: This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser to also perform termination inference is demonstrated.<|reference_end|> | arxiv | @article{genaim2003inferring,
title={Inferring Termination Conditions for Logic Programs using Backwards
Analysis},
author={Samir Genaim and Michael Codish},
journal={arXiv preprint arXiv:cs/0312023},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312023},
primaryClass={cs.PL}
} | genaim2003inferring |
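To make the checking/inference distinction above concrete, a small illustrative sketch follows, using the standard append/3 program; the inferred condition shown is the textbook one and is assumed here rather than quoted from the analyser described in the paper.

  % append/3: list concatenation
  append([], Ys, Ys).
  append([X|Xs], Ys, [X|Zs]) :-
      append(Xs, Ys, Zs).

  % Termination checking verifies one given mode, e.g. append(ground, any, any).
  % Termination inference instead derives a condition covering all terminating
  % modes; for append/3 it is, informally: a call terminates whenever the first
  % argument or the third argument is bound to a (finite) list.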
arxiv-671587 | cs/0312024 | Evolution: Google vs DRIS | <|reference_start|>Evolution: Google vs. DRIS: This paper presents an entirely new search system that builds an information retrieval infrastructure for the Internet. Most search engine companies today are mainly concerned with how to profit from corporate customers through advertising and ranking prominence, and rarely consider how their real users feel. A web search engine can be sold for billions of dollars at the cost of inconveniencing most Internet users, rather than on the strength of its search service. When we have to tolerate bothersome advertisements in poor results and have no alternative, the Internet as a public good will surely be undermined. If the current Internet cannot fully ensure our right to know, it may need some sound improvements or a revolution.<|reference_end|> | arxiv | @article{liang2003evolution:,
title={Evolution: Google vs. DRIS},
author={Wang Liang, Guo Yiping, Fang Ming},
journal={arXiv preprint arXiv:cs/0312024},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312024},
primaryClass={cs.DL cs.IR cs.NI}
} | liang2003evolution: |
arxiv-671588 | cs/0312025 | Soft Constraint Programming to Analysing Security Protocols | <|reference_start|>Soft Constraint Programming to Analysing Security Protocols: Security protocols stipulate how the remote principals of a computer network should interact in order to obtain specific security goals. The crucial goals of confidentiality and authentication may be achieved in various forms, each of different strength. Using soft (rather than crisp) constraints, we develop a uniform formal notion for the two goals. They are no longer formalised as mere yes/no properties as in the existing literature, but gain an extra parameter, the security level. For example, different messages can enjoy different levels of confidentiality, or a principal can achieve different levels of authentication with different principals. The goals are formalised within a general framework for protocol analysis that is amenable to mechanisation by model checking. Following the application of the framework to analysing the asymmetric Needham-Schroeder protocol, we have recently discovered a new attack on that protocol as a form of retaliation by principals who have been attacked previously. Having commented on that attack, we then demonstrate the framework on a larger, widely deployed protocol consisting of three phases, Kerberos.<|reference_end|> | arxiv | @article{bella2003soft,
title={Soft Constraint Programming to Analysing Security Protocols},
author={Giampaolo Bella and Stefano Bistarelli},
journal={TPLP 4(5-6): 545-572 (2004)},
year={2003},
doi={10.1017/S1471068404002121},
archivePrefix={arXiv},
eprint={cs/0312025},
primaryClass={cs.CR cs.AI}
} | bella2003soft |
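As background for the entry above, the standard c-semiring setting that soft constraint programming builds on is sketched here; the particular semiring of security levels used in the paper may differ in its carrier and operations. A c-semiring is a structure \langle A, +, \times, 0, 1 \rangle in which + induces the preference order a \le b iff a + b = b, and \times combines constraint values. A common instance is the fuzzy semiring \langle [0,1], max, min, 0, 1 \rangle: each soft constraint maps every assignment of its variables to a level in [0,1], the value of a complete assignment is the min of the levels given by all constraints, and higher values are preferred.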
arxiv-671589 | cs/0312026 | Speedup of Logic Programs by Binarization and Partial Deduction | <|reference_start|>Speedup of Logic Programs by Binarization and Partial Deduction: Binary logic programs can be obtained from ordinary logic programs by a binarizing transformation. In most cases, binary programs obtained this way are less efficient than the original programs. (Demoen, 1992) showed an interesting example of a logic program whose computational behaviour was improved when it was transformed to a binary program and then specialized by partial deduction. The class of B-stratifiable logic programs is defined. It is shown that for every B-stratifiable logic program, binarization and subsequent partial deduction produce a binary program which does not contain variables for continuations introduced by binarization. Such programs usually have a better computational behaviour than the original ones. Both binarization and partial deduction can be easily automated. A comparison with other related approaches to program transformation is given.<|reference_end|> | arxiv | @article{hruza2003speedup,
title={Speedup of Logic Programs by Binarization and Partial Deduction},
author={Jan Hruza, Petr Stepanek},
journal={arXiv preprint arXiv:cs/0312026},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312026},
primaryClass={cs.PL cs.AI}
} | hruza2003speedup |
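A schematic example of the binarizing transformation discussed above, in the usual continuation-passing style; the exact treatment of facts and of the top-level query varies between formulations, so this is an illustration rather than the paper's definition.

  % Original program
  p(X) :- q(X), r(X).
  q(a).
  r(a).

  % Binarized program: each clause has exactly one body atom, and the rest of
  % the computation is passed along as a continuation term.
  p(X, Cont) :- q(X, r(X, Cont)).
  q(a, Cont) :- call(Cont).
  r(a, Cont) :- call(Cont).

  % The query ?- p(a) becomes ?- p(a, true).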
arxiv-671590 | cs/0312027 | An Open Ended Tree | <|reference_start|>An Open Ended Tree: An open ended list is a well known data structure in Prolog programs. It is frequently used to represent a value changing over time, while this value is referred to from several places in the data structure of the application. A weak point in this technique is that the time complexity is linear in the number of updates to the value represented by the open ended list. In this programming pearl we present a variant of the open ended list, namely an open ended tree, with an update and access time complexity logarithmic in the number of updates to the value.<|reference_end|> | arxiv | @article{vandecasteele2003an,
title={An Open Ended Tree},
author={Henk Vandecasteele and Gerda Janssens},
journal={TPLP Vol 3(3) 2003 pp 377-385},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312027},
primaryClass={cs.PL}
} | vandecasteele2003an |
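As background for the entry above, a brief sketch of the open ended list technique that the open ended tree improves on; the predicate names are invented for illustration and do not come from the paper.

  % An open-ended list [V1, V2, ... | Tail] keeps its tail unbound; recording a
  % new version of the value binds the tail one step further.
  oel_add(L, V) :- var(L), !, L = [V|_].
  oel_add([_|T], V) :- oel_add(T, V).

  % The current value is the element just before the unbound tail.
  oel_last([V|T], V) :- var(T), !.
  oel_last([_|T], V) :- oel_last(T, V).

  % Both predicates walk the whole list, so update and access cost is linear in
  % the number of updates -- the weakness the open ended tree addresses with
  % logarithmic cost.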
arxiv-671591 | cs/0312028 | Minimal founded semantics for disjunctive logic programs and deductive databases | <|reference_start|>Minimal founded semantics for disjunctive logic programs and deductive databases: In this paper, we propose a variant of stable model semantics for disjunctive logic programming and deductive databases. The semantics, called minimal founded, generalizes stable model semantics for normal (i.e. non disjunctive) programs but differs from disjunctive stable model semantics (the extension of stable model semantics for disjunctive programs). Compared with disjunctive stable model semantics, minimal founded semantics seems to be more intuitive: it gives meaning to programs which are meaningless under stable model semantics, and it is no harder to compute. More specifically, minimal founded semantics differs from stable model semantics only for disjunctive programs having constraint rules or rules working as constraints. We study the expressive power of the semantics and show that for general disjunctive datalog programs it has the same power as disjunctive stable model semantics.<|reference_end|> | arxiv | @article{furfaro2003minimal,
title={Minimal founded semantics for disjunctive logic programs and deductive
databases},
author={Filippo Furfaro, Gianluigi Greco, Sergio Greco},
journal={Theory and Practice of Logic Programming, 4(1): 75-93 (2004)},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312028},
primaryClass={cs.LO cs.AI}
} | furfaro2003minimal |
arxiv-671592 | cs/0312029 | Strong Equivalence Made Easy: Nested Expressions and Weight Constraints | <|reference_start|>Strong Equivalence Made Easy: Nested Expressions and Weight Constraints: Logic programs P and Q are strongly equivalent if, given any program R, programs P union R and Q union R are equivalent (that is, have the same answer sets). Strong equivalence is convenient for the study of equivalent transformations of logic programs: one can prove that a local change is correct without considering the whole program. Lifschitz, Pearce and Valverde showed that Heyting's logic of here-and-there can be used to characterize strong equivalence for logic programs with nested expressions (which subsume the better-known extended disjunctive programs). This note considers a simpler, more direct characterization of strong equivalence for such programs, and shows that it can also be applied without modification to the weight constraint programs of Niemela and Simons. Thus, this characterization of strong equivalence is convenient for the study of equivalent transformations of logic programs written in the input languages of answer set programming systems dlv and smodels. The note concludes with a brief discussion of results that can be used to automate reasoning about strong equivalence, including a novel encoding that reduces the problem of deciding the strong equivalence of a pair of weight constraint programs to that of deciding the inconsistency of a weight constraint program.<|reference_end|> | arxiv | @article{turner2003strong,
title={Strong Equivalence Made Easy: Nested Expressions and Weight Constraints},
author={Hudson Turner},
journal={Theory and Practice of Logic Programming, vol 3 (4&5), pages
609-622, 2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312029},
primaryClass={cs.LO cs.AI}
} | turner2003strong |
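A minimal example of the equivalence / strong equivalence distinction discussed above; it is a standard textbook example and is not taken from the paper.

  % P1 and P2 are strongly equivalent: the added rule q :- q is a tautology,
  % so it changes nothing in any context R.
  %   P1:  p :- q.
  %   P2:  p :- q.    q :- q.

  % P3 and P4 are equivalent (each has the single answer set {p}) but not
  % strongly equivalent: adding the fact q yields answer set {q} for P3 and
  % answer set {p, q} for P4.
  %   P3:  p :- not q.
  %   P4:  p.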
arxiv-671593 | cs/0312030 | CSIEC (Computer Simulator in Educational Communication): An Intelligent Web-Based Teaching System for Foreign Language Learning | <|reference_start|>CSIEC (Computer Simulator in Educational Communication): An Intelligent Web-Based Teaching System for Foreign Language Learning: In this paper we present an innovative intelligent web-based computer-aided instruction system for foreign language learning: CSIEC (Computer Simulator in Educational Communication). This system can not only grammatically understand English sentences given by users via the Internet, but also speak with the users in a reasonable and individualized way. First, the related work in this research field is analyzed. Then we introduce the system goals and the system framework, i.e., the natural language understanding mechanism (NLML, NLOMJ and NLDB) and the communicational response (CR). Finally we give the syntactic and semantic content of this instruction system, i.e. some important notations of English grammar used in it and their relations with the NLOMJ.<|reference_end|> | arxiv | @article{jia2003csiec,
title={CSIEC (Computer Simulator in Educational Communication): An Intelligent
Web-Based Teaching System for Foreign Language Learning},
author={Jiyou Jia},
journal={arXiv preprint arXiv:cs/0312030},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312030},
primaryClass={cs.CY}
} | jia2003csiec |
arxiv-671594 | cs/0312031 | Distributed WWW Programming using (Ciao-)Prolog and the PiLLoW library | <|reference_start|>Distributed WWW Programming using (Ciao-)Prolog and the PiLLoW library: We discuss from a practical point of view a number of issues involved in writing distributed Internet and WWW applications using LP/CLP systems. We describe PiLLoW, a public-domain Internet and WWW programming library for LP/CLP systems that we have designed in order to simplify the process of writing such applications. PiLLoW provides facilities for accessing documents and code on the WWW; parsing, manipulating and generating HTML and XML structured documents and data; producing HTML forms; writing form handlers and CGI-scripts; and processing HTML/XML templates. An important contribution of PiLLoW is to model HTML/XML code (and, thus, the content of WWW pages) as terms. The PiLLoW library has been developed in the context of the Ciao Prolog system, but it has been adapted to a number of popular LP/CLP systems, supporting most of its functionality. We also describe the use of concurrency and a high-level model of client-server interaction, Ciao Prolog's active modules, in the context of WWW programming. We propose a solution for client-side downloading and execution of Prolog code, using generic browsers. Finally, we also provide an overview of related work on the topic.<|reference_end|> | arxiv | @article{cabeza2003distributed,
title={Distributed WWW Programming using (Ciao-)Prolog and the PiLLoW library},
author={Daniel Cabeza and Manuel V. Hermenegildo},
journal={Theory and Practice of Logic Programming, Vol 1(3), 2001, 251-282},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312031},
primaryClass={cs.DC cs.PL}
} | cabeza2003distributed |
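To illustrate the "HTML as terms" idea highlighted above: a page can be held as a Prolog term and manipulated with ordinary Prolog code before being rendered to text. The constructors below are schematic and are not necessarily the exact term language of the PiLLoW library.

  % A page as a term; Date is filled in by the program.
  page(Date,
       html([ head(title('My page')),
              body([ h1('Hello'),
                     p(['Generated on ', Date]) ]) ])).

  % Such a term can be inspected, transformed and composed like any other
  % Prolog data before a rendering predicate turns it into HTML text.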
arxiv-671595 | cs/0312032 | Learning in a Compiler for MINSAT Algorithms | <|reference_start|>Learning in a Compiler for MINSAT Algorithms: This paper describes learning in a compiler for algorithms solving classes of the logic minimization problem MINSAT, where the underlying propositional formula is in conjunctive normal form (CNF) and where costs are associated with the True/False values of the variables. Each class consists of all instances that may be derived from a given propositional formula and costs for True/False values by fixing or deleting variables, and by deleting clauses. The learning step begins once the compiler has constructed a solution algorithm for a given class. The step applies that algorithm to comparatively few instances of the class, analyses the performance of the algorithm on these instances, and modifies the underlying propositional formula, with the goal that the algorithm will perform much better on all instances of the class.<|reference_end|> | arxiv | @article{remshagen2003learning,
title={Learning in a Compiler for MINSAT Algorithms},
author={Anja Remshagen and Klaus Truemper},
journal={Theory and practice of Logic Programming, Vol 3(3), pp 271-286,
2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312032},
primaryClass={cs.LO}
} | remshagen2003learning |
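Restating the MINSAT problem described above as an optimization problem (the notation is introduced here, not taken from the paper): given a CNF formula \varphi over variables x_1,...,x_n and costs c_i^T, c_i^F for assigning True or False to x_i, find a satisfying truth assignment \sigma \models \varphi that minimizes \sum_{i: \sigma(x_i)=True} c_i^T + \sum_{i: \sigma(x_i)=False} c_i^F. An instance class in the sense of the paper is obtained from one such pair (\varphi, c) by fixing or deleting variables and deleting clauses.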
arxiv-671596 | cs/0312033 | Using sensors in the web crawling process | <|reference_start|>Using sensors in the web crawling process: This paper offers a short description of an Internet information field monitoring system, which places a special sensor module on the Web server side to detect changes in information resources and subsequently reindexes only the resources signalled by the corresponding sensor. Concise results of simulation research and an implementation attempt of the given "sensors" concept are provided.<|reference_end|> | arxiv | @article{zemskov2003using,
title={Using sensors in the web crawling process},
author={Ilya Zemskov},
journal={arXiv preprint arXiv:cs/0312033},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312033},
primaryClass={cs.IR cs.DL}
} | zemskov2003using |
arxiv-671597 | cs/0312034 | Sharing secret color images using cellular automata with memory | <|reference_start|>Sharing secret color images using cellular automata with memory: A {k,n}-threshold scheme based on two-dimensional memory cellular automata is proposed to share images in a secret way. This method allows an image to be encoded into n shared images so that only qualified subsets of k or more shares can recover the secret image, while any k-1 or fewer of them gain no information about the original image. The main characteristics of this new scheme are: each shared image has the same size as the original one, and the recovered image is exactly the same as the secret image; i.e., there is no loss of resolution.<|reference_end|> | arxiv | @article{alvarez2003sharing,
title={Sharing secret color images using cellular automata with memory},
author={Gonzalo Alvarez, Luis Hernandez, Angel Martin},
journal={arXiv preprint arXiv:cs/0312034},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312034},
primaryClass={cs.CR}
} | alvarez2003sharing |
arxiv-671598 | cs/0312035 | Analysis of Implementation Hierocrypt-3 algorithm (and its comparison to Camellia algorithm) using ALTERA devices | <|reference_start|>Analysis of Implementation Hierocrypt-3 algorithm (and its comparison to Camellia algorithm) using ALTERA devices: The algorithms HIEROCRYPT-3, CAMELLIA, ANUBIS, GRAND CRU, NOEKEON, NUSH, Q, RC6, SAFER++128, SC2000 and SHACAL were submitted as block cipher (high-level block cipher) candidates to the NESSIE (New European Schemes for Signatures, Integrity, and Encryption) project. The main purpose of this project was to put forward a portfolio of strong cryptographic primitives of various types. The NESSIE project was a three-year project divided into two phases; the first finished in June 2001. CAMELLIA, RC6, SAFER++128 and SHACAL were accepted for the second phase of the evaluation process. HIEROCRYPT-3 had key schedule problems, there were attacks for up to 3.5 rounds out of 6, and hardware implementations of this cipher were extremely slow [12]. HIEROCRYPT-3 was not selected for Phase II. CAMELLIA was selected as an algorithm suggested for a future standard. In this paper we present hardware implementations of these two algorithms with 128-bit blocks and 128-bit keys using ALTERA devices, and compare them.<|reference_end|> | arxiv | @article{rogawski2003analysis,
title={Analysis of Implementation Hierocrypt-3 algorithm (and its comparison to
Camellia algorithm) using ALTERA devices},
author={Marcin Rogawski},
journal={arXiv preprint arXiv:cs/0312035},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312035},
primaryClass={cs.CR cs.PF}
} | rogawski2003analysis |
arxiv-671599 | cs/0312036 | What Causes a System to Satisfy a Specification? | <|reference_start|>What Causes a System to Satisfy a Specification?: Even when a system is proven to be correct with respect to a specification, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. Coverage metrics attempt to check which parts of a system are actually relevant for the verification process to succeed. Recent work on coverage in model checking suggests several coverage metrics and algorithms for finding parts of the system that are not covered by the specification. The work has already proven to be effective in practice, detecting design errors that escape early verification efforts in industrial settings. In this paper, we relate a formal definition of causality given by Halpern and Pearl [2001] to coverage. We show that it gives significant insight into unresolved issues regarding the definition of coverage and leads to potentially useful extensions of coverage. In particular, we introduce the notion of responsibility, which assigns to components of a system a quantitative measure of their relevance to the satisfaction of the specification.<|reference_end|> | arxiv | @article{chockler2003what,
title={What Causes a System to Satisfy a Specification?},
author={Hana Chockler, Joseph Y. Halpern, and Orna Kupferman},
journal={arXiv preprint arXiv:cs/0312036},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312036},
primaryClass={cs.LO cs.AI}
} | chockler2003what |
arxiv-671600 | cs/0312037 | Characterizing and Reasoning about Probabilistic and Non-Probabilistic Expectation | <|reference_start|>Characterizing and Reasoning about Probabilistic and Non-Probabilistic Expectation: Expectation is a central notion in probability theory. The notion of expectation also makes sense for other notions of uncertainty. We introduce a propositional logic for reasoning about expectation, where the semantics depends on the underlying representation of uncertainty. We give sound and complete axiomatizations for the logic in the case that the underlying representation is (a) probability, (b) sets of probability measures, (c) belief functions, and (d) possibility measures. We show that this logic is more expressive than the corresponding logic for reasoning about likelihood in the case of sets of probability measures, but equi-expressive in the case of probability, belief, and possibility. Finally, we show that satisfiability for these logics is NP-complete, no harder than satisfiability for propositional logic.<|reference_end|> | arxiv | @article{halpern2003characterizing,
title={Characterizing and Reasoning about Probabilistic and Non-Probabilistic
Expectation},
author={Joseph Y. Halpern and Riccardo Pucella},
journal={arXiv preprint arXiv:cs/0312037},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312037},
primaryClass={cs.AI cs.LO}
} | halpern2003characterizing |
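For reference, the standard notions of expectation that the logic above reasons about, stated for a finite set of worlds W (the paper's own notation may differ): for a probability measure \mu and a real-valued random variable (gamble) X, E_\mu(X) = \sum_{w \in W} \mu(w) X(w); for a set \mathcal{P} of probability measures, the lower and upper expectations are \underline{E}_\mathcal{P}(X) = \inf_{\mu \in \mathcal{P}} E_\mu(X) and \overline{E}_\mathcal{P}(X) = \sup_{\mu \in \mathcal{P}} E_\mu(X). Analogous expectation notions exist for belief functions and possibility measures, which is what allows a single expectation logic to cover all of these representations of uncertainty.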