corpus_id
stringlengths 7
12
| paper_id
stringlengths 9
16
| title
stringlengths 1
261
| abstract
stringlengths 70
4.02k
| source
stringclasses 1
value | bibtex
stringlengths 208
20.9k
| citation_key
stringlengths 6
100
|
---|---|---|---|---|---|---|
arxiv-670601 | cs/0206029 | Computer-Generated Photorealistic Hair | <|reference_start|>Computer-Generated Photorealistic Hair: This paper presents an efficient method for generating and rendering photorealistic hair in two dimensional pictures. The method consists of three major steps. Simulating an artist drawing is used to design the rough hair shape. A convolution based filter is then used to generate photorealistic hair patches. A refine procedure is finally used to blend the boundaries of the patches with surrounding areas. This method can be used to create all types of photorealistic human hair (head hair, facial hair and body hair). It is also suitable for fur and grass generation. Applications of this method include: hairstyle designing/editing, damaged hair image restoration, human hair animation, virtual makeover of a human, and landscape creation.<|reference_end|> | arxiv | @article{lin2002computer-generated,
title={Computer-Generated Photorealistic Hair},
author={Alice J. Lin},
journal={arXiv preprint arXiv:cs/0206029},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206029},
primaryClass={cs.GR}
} | lin2002computer-generated |
arxiv-670602 | cs/0206030 | A Probabilistic Method for Analyzing Japanese Anaphora Integrating Zero Pronoun Detection and Resolution | <|reference_start|>A Probabilistic Method for Analyzing Japanese Anaphora Integrating Zero Pronoun Detection and Resolution: This paper proposes a method to analyze Japanese anaphora, in which zero pronouns (omitted obligatory cases) are used to refer to preceding entities (antecedents). Unlike the case of general coreference resolution, zero pronouns have to be detected prior to resolution because they are not expressed in discourse. Our method integrates two probability parameters to perform zero pronoun detection and resolution in a single framework. The first parameter quantifies the degree to which a given case is a zero pronoun. The second parameter quantifies the degree to which a given entity is the antecedent for a detected zero pronoun. To compute these parameters efficiently, we use corpora with/without annotations of anaphoric relations. We show the effectiveness of our method by way of experiments.<|reference_end|> | arxiv | @article{seki2002a,
title={A Probabilistic Method for Analyzing Japanese Anaphora Integrating Zero
Pronoun Detection and Resolution},
author={Kazuhiro Seki, Atsushi Fujii, and Tetsuya Ishikawa},
journal={Proceedings of the 19th International Conference on Computational
Linguistics (COLING 2002), pp.911-917, Aug. 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206030},
primaryClass={cs.CL}
} | seki2002a |
arxiv-670603 | cs/0206031 | A sufficient condition for global invertibility of Lipschitz mapping | <|reference_start|>A sufficient condition for global invertibility of Lipschitz mapping: We show that S.Vavasis' sufficient condition for global invertibility of a polynomial mapping can be easily generalized to the case of a general Lipschitz mapping. Keywords: Invertibility conditions, generalized Jacobian, nonsmooth analysis.<|reference_end|> | arxiv | @article{tarasov2002a,
title={A sufficient condition for global invertibility of Lipschitz mapping},
author={S. Tarasov},
journal={arXiv preprint arXiv:cs/0206031},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206031},
primaryClass={cs.NA}
} | tarasov2002a |
arxiv-670604 | cs/0206032 | A correct proof of the heuristic GCD algorithm | <|reference_start|>A correct proof of the heuristic GCD algorithm: In this note, we fill a gap in the proof of the heuristic GCD in the multivariate case made by Char, Geddes and Gonnet (JSC 1989) and give some additionnal information on this method.<|reference_end|> | arxiv | @article{parisse2002a,
title={A correct proof of the heuristic GCD algorithm},
author={Bernard Parisse},
journal={arXiv preprint arXiv:cs/0206032},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206032},
primaryClass={cs.SC}
} | parisse2002a |
arxiv-670605 | cs/0206033 | Algorithms for Media | <|reference_start|>Algorithms for Media: Falmagne recently introduced the concept of a medium, a combinatorial object encompassing hyperplane arrangements, topological orderings, acyclic orientations, and many other familiar structures. We find efficient solutions for several algorithmic problems on media: finding short reset sequences, shortest paths, testing whether a medium has a closed orientation, and listing the states of a medium given a black-box description.<|reference_end|> | arxiv | @article{eppstein2002algorithms,
title={Algorithms for Media},
author={David Eppstein, Jean-Claude Falmagne},
journal={arXiv preprint arXiv:cs/0206033},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206033},
primaryClass={cs.DS}
} | eppstein2002algorithms |
arxiv-670606 | cs/0206034 | Applying a Hybrid Query Translation Method to Japanese/English Cross-Language Patent Retrieval | <|reference_start|>Applying a Hybrid Query Translation Method to Japanese/English Cross-Language Patent Retrieval: This paper applies an existing query translation method to cross-language patent retrieval. In our method, multiple dictionaries are used to derive all possible translations for an input query, and collocational statistics are used to resolve translation ambiguity. We used Japanese/English parallel patent abstracts to perform comparative experiments, where our method outperformed a simple dictionary-based query translation method, and achieved 76% of monolingual retrieval in terms of average precision.<|reference_end|> | arxiv | @article{fukui2002applying,
title={Applying a Hybrid Query Translation Method to Japanese/English
Cross-Language Patent Retrieval},
author={Masatoshi Fukui, Shigeto Higuchi, Youichi Nakatani, Masao Tanaka,
Atsushi Fujii and Tetsuya Ishikawa},
journal={ACM SIGIR 2000 Workshop on Patent Retrieval, July, 2000},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206034},
primaryClass={cs.CL}
} | fukui2002applying |
arxiv-670607 | cs/0206035 | PRIME: A System for Multi-lingual Patent Retrieval | <|reference_start|>PRIME: A System for Multi-lingual Patent Retrieval: Given the growing number of patents filed in multiple countries, users are interested in retrieving patents across languages. We propose a multi-lingual patent retrieval system, which translates a user query into the target language, searches a multilingual database for patents relevant to the query, and improves the browsing efficiency by way of machine translation and clustering. Our system also extracts new translations from patent families consisting of comparable patents, to enhance the translation dictionary.<|reference_end|> | arxiv | @article{higuchi2002prime:,
title={PRIME: A System for Multi-lingual Patent Retrieval},
author={Shigeto Higuchi, Masatoshi Fukui, Atsushi Fujii and Tetsuya Ishikawa},
journal={Proceedings of MT Summit VIII, pp.163-167, Sep. 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206035},
primaryClass={cs.CL}
} | higuchi2002prime: |
arxiv-670608 | cs/0206036 | Language Modeling for Multi-Domain Speech-Driven Text Retrieval | <|reference_start|>Language Modeling for Multi-Domain Speech-Driven Text Retrieval: We report experimental results associated with speech-driven text retrieval, which facilitates retrieving information in multiple domains with spoken queries. Since users speak contents related to a target collection, we produce language models used for speech recognition based on the target collection, so as to improve both the recognition and retrieval accuracy. Experiments using existing test collections combined with dictated queries showed the effectiveness of our method.<|reference_end|> | arxiv | @article{itou2002language,
title={Language Modeling for Multi-Domain Speech-Driven Text Retrieval},
author={Katunobu Itou, Atsushi Fujii and Tetsuya Ishikawa},
journal={IEEE Automatic Speech Recognition and Understanding Workshop, Dec.
2001},
year={2002},
doi={10.1109/ASRU.2001.1034653},
archivePrefix={arXiv},
eprint={cs/0206036},
primaryClass={cs.CL}
} | itou2002language |
arxiv-670609 | cs/0206037 | Speech-Driven Text Retrieval: Using Target IR Collections for Statistical Language Model Adaptation in Speech Recognition | <|reference_start|>Speech-Driven Text Retrieval: Using Target IR Collections for Statistical Language Model Adaptation in Speech Recognition: Speech recognition has of late become a practical technology for real world applications. Aiming at speech-driven text retrieval, which facilitates retrieving information with spoken queries, we propose a method to integrate speech recognition and retrieval methods. Since users speak contents related to a target collection, we adapt statistical language models used for speech recognition based on the target collection, so as to improve both the recognition and retrieval accuracy. Experiments using existing test collections combined with dictated queries showed the effectiveness of our method.<|reference_end|> | arxiv | @article{fujii2002speech-driven,
title={Speech-Driven Text Retrieval: Using Target IR Collections for
Statistical Language Model Adaptation in Speech Recognition},
author={Atsushi Fujii, Katunobu Itou and Tetsuya Ishikawa},
journal={Anni R. Coden and Eric W. Brown and Savitha Srinivasan (Eds.),
Information Retrieval Techniques for Speech Applications (LNCS 2273),
pp.94-104, Springer, 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206037},
primaryClass={cs.CL}
} | fujii2002speech-driven |
arxiv-670610 | cs/0206038 | A Multilevel Approach to Topology-Aware Collective Operations in Computational Grids | <|reference_start|>A Multilevel Approach to Topology-Aware Collective Operations in Computational Grids: The efficient implementation of collective communiction operations has received much attention. Initial efforts produced "optimal" trees based on network communication models that assumed equal point-to-point latencies between any two processes. This assumption is violated in most practical settings, however, particularly in heterogeneous systems such as clusters of SMPs and wide-area "computational Grids," with the result that collective operations perform suboptimally. In response, more recent work has focused on creating topology-aware trees for collective operations that minimize communication across slower channels (e.g., a wide-area network). While these efforts have significant communication benefits, they all limit their view of the network to only two layers. We present a strategy based upon a multilayer view of the network. By creating multilevel topology-aware trees we take advantage of communication cost differences at every level in the network. We used this strategy to implement topology-aware versions of several MPI collective operations in MPICH-G2, the Globus Toolkit[tm]-enabled version of the popular MPICH implementation of the MPI standard. Using information about topology provided by MPICH-G2, we construct these multilevel topology-aware trees automatically during execution. We present results demonstrating the advantages of our multilevel approach by comparing it to the default (topology-unaware) implementation provided by MPICH and a topology-aware two-layer implementation.<|reference_end|> | arxiv | @article{karonis2002a,
title={A Multilevel Approach to Topology-Aware Collective Operations in
Computational Grids},
author={N. T. Karonis, B. de Supinski, I. Foster, W. Gropp, E. Lusk},
journal={arXiv preprint arXiv:cs/0206038},
year={2002},
number={Preprint ANL/MCS-P948-0402},
archivePrefix={arXiv},
eprint={cs/0206038},
primaryClass={cs.DC}
} | karonis2002a |
arxiv-670611 | cs/0206039 | Hidden Markov model segmentation of hydrological and enviromental time series | <|reference_start|>Hidden Markov model segmentation of hydrological and enviromental time series: Motivated by Hubert's segmentation procedure we discuss the application of hidden Markov models (HMM) to the segmentation of hydrological and enviromental time series. We use a HMM algorithm which segments time series of several hundred terms in a few seconds and is computationally feasible for even longer time series. The segmentation algorithm computes the Maximum Likelihood segmentation by use of an expectation / maximization iteration. We rigorously prove algorithm convergence and use numerical experiments, involving temperature and river discharge time series, to show that the algorithm usually converges to the globally optimal segmentation. The relation of the proposed algorithm to Hubert's segmentation procedure is also discussed.<|reference_end|> | arxiv | @article{kehagias2002hidden,
title={Hidden Markov model segmentation of hydrological and enviromental time
series},
author={Ath. Kehagias},
journal={arXiv preprint arXiv:cs/0206039},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206039},
primaryClass={cs.CE cs.NA math.NA nlin.CD physics.data-an}
} | kehagias2002hidden |
arxiv-670612 | cs/0206040 | MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface | <|reference_start|>MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface: Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.<|reference_end|> | arxiv | @article{karonis2002mpich-g2:,
title={MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface},
author={N. T. Karonis, B. Toonen, and I. Foster},
journal={arXiv preprint arXiv:cs/0206040},
year={2002},
number={Preprint ANL/MCS-P942-0402},
archivePrefix={arXiv},
eprint={cs/0206040},
primaryClass={cs.DC}
} | karonis2002mpich-g2: |
arxiv-670613 | cs/0206041 | Anticipatory Guidance of Plot | <|reference_start|>Anticipatory Guidance of Plot: An anticipatory system for guiding plot development in interactive narratives is described. The executable model is a finite automaton that provides the implemented system with a look-ahead. The identification of undesirable future states in the model is used to guide the player, in a transparent manner. In this way, too radical twists of the plot can be avoided. Since the player participates in the development of the plot, such guidance can have many forms, depending on the environment of the player, on the behavior of the other players, and on the means of player interaction. We present a design method for interactive narratives which produces designs suitable for the implementation of anticipatory mechanisms. Use of the method is illustrated by application to our interactive computer game Kaktus.<|reference_end|> | arxiv | @article{laaksolahti2002anticipatory,
title={Anticipatory Guidance of Plot},
author={Jarmo Laaksolahti, Magnus Boman},
journal={arXiv preprint arXiv:cs/0206041},
year={2002},
archivePrefix={arXiv},
eprint={cs/0206041},
primaryClass={cs.AI}
} | laaksolahti2002anticipatory |
arxiv-670614 | cs/0207001 | National Infrastructure Contingencies: Survey of Wireless Technology Support | <|reference_start|>National Infrastructure Contingencies: Survey of Wireless Technology Support: In modern society, the flow of information has become the lifeblood of commerce and social interaction. This movement of data supports most aspects of the United States economy in particular, as well as, serving as the vehicle upon which governmental agencies react to social conditions. In addition, it is understood that the continuance of efficient and reliable data communications during times of national or regional disaster remains a priority in the United States. The coordination of emergency response and area revitalization / rehabilitation efforts between local, state, and federal emergency response is increasingly necessary as agencies strive to work more seamlessly between the affected organizations. Additionally, international support is often made available to react to such adverse conditions as wildfire suppression scenarios and therefore require the efficient management of workforce and associated logistics support. It is through the examination of the issues related to un-tethered data transmission during infrastructure contingencies that responders may best tailor a unified approach to the rapid recovery after disasters occur.<|reference_end|> | arxiv | @article{fussell2002national,
title={National Infrastructure Contingencies: Survey of Wireless Technology
Support},
author={Ronald M. Fussell},
journal={arXiv preprint arXiv:cs/0207001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207001},
primaryClass={cs.DC cs.CE}
} | fussell2002national |
arxiv-670615 | cs/0207002 | Using eigenvectors of the bigram graph to infer morpheme identity | <|reference_start|>Using eigenvectors of the bigram graph to infer morpheme identity: This paper describes the results of some experiments exploring statistical methods to infer syntactic behavior of words and morphemes from a raw corpus in an unsupervised fashion. It shares certain points in common with Brown et al (1992) and work that has grown out of that: it employs statistical techniques to analyze syntactic behavior based on what words occur adjacent to a given word. However, we use an eigenvector decomposition of a nearest-neighbor graph to produce a two-dimensional rendering of the words of a corpus in which words of the same syntactic category tend to form neighborhoods. We exploit this technique for extending the value of automatic learning of morphology. In particular, we look at the suffixes derived from a corpus by unsupervised learning of morphology, and we ask which of these suffixes have a consistent syntactic function (e.g., in English, -tion is primarily a mark of nouns, but -s marks both noun plurals and 3rd person present on verbs), and we determine that this method works well for this task.<|reference_end|> | arxiv | @article{belkin2002using,
title={Using eigenvectors of the bigram graph to infer morpheme identity},
author={Mikhail Belkin and John Goldsmith},
journal={arXiv preprint arXiv:cs/0207002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207002},
primaryClass={cs.CL}
} | belkin2002using |
arxiv-670616 | cs/0207003 | Analysis of Titles and Readers For Title Generation Centered on the Readers | <|reference_start|>Analysis of Titles and Readers For Title Generation Centered on the Readers: The title of a document has two roles, to give a compact summary and to lead the reader to read the document. Conventional title generation focuses on finding key expressions from the author's wording in the document to give a compact summary and pays little attention to the reader's interest. To make the title play its second role properly, it is indispensable to clarify the content (``what to say'') and wording (``how to say'') of titles that are effective to attract the target reader's interest. In this article, we first identify typical content and wording of titles aimed at general readers in a comparative study between titles of technical papers and headlines rewritten for newspapers. Next, we describe the results of a questionnaire survey on the effects of the content and wording of titles on the reader's interest. The survey of general and knowledgeable readers shows both common and different tendencies in interest.<|reference_end|> | arxiv | @article{senda2002analysis,
title={Analysis of Titles and Readers For Title Generation Centered on the
Readers},
author={Yasuko Senda, Yasusi Sinohara (Central Research Institute of Electric
Power Industry, Japan)},
journal={COLING'2002(The 19TH International Conference on Computational
Linguistics)},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207003},
primaryClass={cs.CL}
} | senda2002analysis |
arxiv-670617 | cs/0207004 | Optimally cutting a surface into a disk | <|reference_start|>Optimally cutting a surface into a disk: We consider the problem of cutting a set of edges on a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length. We show that this problem is NP-hard, even for manifolds without boundary and for punctured spheres. We also describe an algorithm with running time n^{O(g+k)}, where n is the combinatorial complexity, g is the genus, and k is the number of boundary components of the input surface. Finally, we describe a greedy algorithm that outputs a O(log^2 g)-approximation of the minimum cut graph in O(g^2 n log n) time.<|reference_end|> | arxiv | @article{erickson2002optimally,
title={Optimally cutting a surface into a disk},
author={Jeff Erickson and Sariel Har-Peled},
journal={arXiv preprint arXiv:cs/0207004},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207004},
primaryClass={cs.CG cs.DS cs.GR}
} | erickson2002optimally |
arxiv-670618 | cs/0207005 | Efficient Deep Processing of Japanese | <|reference_start|>Efficient Deep Processing of Japanese: We present a broad coverage Japanese grammar written in the HPSG formalism with MRS semantics. The grammar is created for use in real world applications, such that robustness and performance issues play an important role. It is connected to a POS tagging and word segmentation tool. This grammar is being developed in a multilingual context, requiring MRS structures that are easily comparable across languages.<|reference_end|> | arxiv | @article{siegel2002efficient,
title={Efficient Deep Processing of Japanese},
author={Melanie Siegel and Emily M. Bender},
journal={arXiv preprint arXiv:cs/0207005},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207005},
primaryClass={cs.CL}
} | siegel2002efficient |
arxiv-670619 | cs/0207006 | Orthonormal RBF wavelet and ridgelet-like series and transforms for high-dimensional problems | <|reference_start|>Orthonormal RBF wavelet and ridgelet-like series and transforms for high-dimensional problems: This paper developed a systematic strategy establishing RBF on the wavelet analysis, which includes continuous and discrete RBF orthonormal wavelet transforms respectively in terms of singular fundamental solutions and nonsingular general solutions of differential operators. In particular, the harmonic Bessel RBF transforms were presented for high-dimensional data processing. It was also found that the kernel functions of convection-diffusion operator are feasible to construct some stable ridgelet-like RBF transforms. We presented time-space RBF transforms based on non-singular solution and fundamental solution of time-dependent differential operators. The present methodology was further extended to analysis of some known RBFs such as the MQ, Gaussian and pre-wavelet kernel RBFs.<|reference_end|> | arxiv | @article{chen2002orthonormal,
title={Orthonormal RBF wavelet and ridgelet-like series and transforms for
high-dimensional problems},
author={W. Chen},
journal={Int. J. Nonlinear Sci. & Numer. Simulation, 2(2), 155-160, 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207006},
primaryClass={cs.SC}
} | chen2002orthonormal |
arxiv-670620 | cs/0207007 | Evolutionary Circuit Design: Information Theory Perspective on Signal Propagation | <|reference_start|>Evolutionary Circuit Design: Information Theory Perspective on Signal Propagation: This paper presents case-study results on the application of information theoretic approach to gate-level evolutionary circuit design. We introduce information measures to provide better estimates of synthesis criteria of digital circuits. For example, the analysis of signal propagation during evolving gate-level synthesis can be improved by using information theoretic measures that will make it possible to find the most effective geometry and therefore predict the cost of the final design solution. The problem is considered from the information engine point of view. That is, the process of evolutionary gate-level circuit design is presented via such measures as entropy, logical work and information vitality. Some examples of geometry driven synthesis are provided to prove the above idea.<|reference_end|> | arxiv | @article{popel2002evolutionary,
title={Evolutionary Circuit Design: Information Theory Perspective on Signal
Propagation},
author={Denis V. Popel and Nawar Al-Hakeem},
journal={ISSPIT'2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207007},
primaryClass={cs.OH}
} | popel2002evolutionary |
arxiv-670621 | cs/0207008 | Agent Programming with Declarative Goals | <|reference_start|>Agent Programming with Declarative Goals: A long and lasting problem in agent research has been to close the gap between agent logics and agent programming frameworks. The main reason for this problem of establishing a link between agent logics and agent programming frameworks is identified and explained by the fact that agent programming frameworks have not incorporated the concept of a `declarative goal'. Instead, such frameworks have focused mainly on plans or `goals-to-do' instead of the end goals to be realised which are also called `goals-to-be'. In this paper, a new programming language called GOAL is introduced which incorporates such declarative goals. The notion of a `commitment strategy' - one of the main theoretical insights due to agent logics, which explains the relation between beliefs and goals - is used to construct a computational semantics for GOAL. Finally, a proof theory for proving properties of GOAL agents is introduced. Thus, we offer a complete theory of agent programming in the sense that our theory provides both for a programming framework and a programming logic for such agents. An example program is proven correct by using this programming logic.<|reference_end|> | arxiv | @article{de boer2002agent,
title={Agent Programming with Declarative Goals},
author={F.S. de Boer, K.V. Hindriks, W. van der Hoek and J.-J.Ch. Meyer},
journal={arXiv preprint arXiv:cs/0207008},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207008},
primaryClass={cs.AI cs.PL}
} | de boer2002agent |
arxiv-670622 | cs/0207009 | Computing Elementary Symmetric Polynomials with a Sublinear Number of Multiplications | <|reference_start|>Computing Elementary Symmetric Polynomials with a Sublinear Number of Multiplications: Elementary symmetric polynomials $S_n^k$ are used as a benchmark for the bounded-depth arithmetic circuit model of computation. In this work we prove that $S_n^k$ modulo composite numbers $m=p_1p_2$ can be computed with much fewer multiplications than over any field, if the coefficients of monomials $x_{i_1}x_{i_2}... x_{i_k}$ are allowed to be 1 either mod $p_1$ or mod $p_2$ but not necessarily both. More exactly, we prove that for any constant $k$ such a representation of $S_n^k$ can be computed modulo $p_1p_2$ using only $\exp(O(\sqrt{\log n}\log\log n))$ multiplications on the most restricted depth-3 arithmetic circuits, for $\min({p_1,p_2})>k!$. Moreover, the number of multiplications remain sublinear while $k=O(\log\log n).$ In contrast, the well-known Graham-Pollack bound yields an $n-1$ lower bound for the number of multiplications even for the exact computation (not the representation) of $S_n^2$. Our results generalize for other non-prime power composite moduli as well. The proof uses the famous BBR-polynomial of Barrington, Beigel and Rudich.<|reference_end|> | arxiv | @article{grolmusz2002computing,
title={Computing Elementary Symmetric Polynomials with a Sublinear Number of
Multiplications},
author={Vince Grolmusz},
journal={arXiv preprint arXiv:cs/0207009},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207009},
primaryClass={cs.CC cs.DM cs.DS}
} | grolmusz2002computing |
arxiv-670623 | cs/0207010 | Symmetric boundary knot method | <|reference_start|>Symmetric boundary knot method: The boundary knot method (BKM) is a recent boundary-type radial basis function (RBF) collocation scheme for general PDEs. Like the method of fundamental solution (MFS), the RBF is employed to approximate the inhomogeneous terms via the dual reciprocity principle. Unlike the MFS, the method uses a nonsingular general solution instead of a singular fundamental solution to evaluate the homogeneous solution so as to circumvent the controversial artificial boundary outside the physical domain. The BKM is meshfree, superconvergent, integration free, very easy to learn and program. The original BKM, however, loses symmetricity in the presense of mixed boundary. In this study, by analogy with Hermite RBF interpolation, we developed a symmetric BKM scheme. The accuracy and efficiency of the symmetric BKM are also numerically validated in some 2D and 3D Helmholtz and diffusion reaction problems under complicated geometries.<|reference_end|> | arxiv | @article{chen2002symmetric,
title={Symmetric boundary knot method},
author={W. Chen},
journal={Engng. Anal. Bound. Elem., 26(6), 489-494, 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207010},
primaryClass={cs.CE cs.CG}
} | chen2002symmetric |
arxiv-670624 | cs/0207011 | Improving Web Database Access Using Decision Diagrams | <|reference_start|>Improving Web Database Access Using Decision Diagrams: In some areas of management and commerce, especially in Electronic commerce (E-commerce), that are accelerated by advances in Web technologies, it is essential to support the decision making process using formal methods. Among the problems of E-commerce applications: reducing the time of data access so that huge databases can be searched quickly; decreasing the cost of database design ... etc. We present the application of Decision Diagrams design using Information Theory approach to improve database access speeds. We show that such utilization provides systematic and visual ways of applying Decision Making methods to simplify complex Web engineering problems.<|reference_end|> | arxiv | @article{popel2002improving,
title={Improving Web Database Access Using Decision Diagrams},
author={Denis V. Popel and Nawar Al-Hakeem},
journal={arXiv preprint arXiv:cs/0207011},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207011},
primaryClass={cs.LO cs.DB}
} | popel2002improving |
arxiv-670625 | cs/0207012 | Synthesis of Low-Power Digital Circuits Derived from Binary Decision Diagrams | <|reference_start|>Synthesis of Low-Power Digital Circuits Derived from Binary Decision Diagrams: This paper introduces a novel method for synthesizing digital circuits derived from Binary Decision Diagrams (BDDs) that can yield to reduction in power dissipation. The power reduction is achieved by decreasing the switching activity in a circuit while paying close attention to information measures as an optimization criterion. We first present the technique of efficient BDD-based computation of information measures which are used to guide the power optimization procedures. Using this technique, we have developed an algorithm of BDD reordering which leads to reducing the power consumption of the circuits derived from BDDs. Results produced by the synthesis on the ISCAS benchmark circuits are very encouraging.<|reference_end|> | arxiv | @article{popel2002synthesis,
title={Synthesis of Low-Power Digital Circuits Derived from Binary Decision
Diagrams},
author={Denis V. Popel},
journal={ECCTD 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207012},
primaryClass={cs.AR}
} | popel2002synthesis |
arxiv-670626 | cs/0207013 | A Compact Graph Model of Handwritten Images: Integration into Authentification and Recognition | <|reference_start|>A Compact Graph Model of Handwritten Images: Integration into Authentification and Recognition: A novel algorithm for creating a mathematical model of curved shapes is introduced. The core of the algorithm is based on building a graph representation of the contoured image, which occupies less storage space than produced by raster compression techniques. Different advanced applications of the mathematical model are discussed: recognition of handwritten characters and verification of handwritten text and signatures for authentification purposes. Reducing the storage requirements due to the efficient mathematical model results in faster retrieval and processing times. The experimental outcomes in compression of contoured images and recognition of handwritten numerals are given.<|reference_end|> | arxiv | @article{popel2002a,
title={A Compact Graph Model of Handwritten Images: Integration into
Authentification and Recognition},
author={Denis V. Popel},
journal={SSPR 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207013},
primaryClass={cs.HC cs.DS}
} | popel2002a |
arxiv-670627 | cs/0207014 | On the Information Engine of Circuit Design | <|reference_start|>On the Information Engine of Circuit Design: This paper addresses a new approach to find a spectrum of information measures for the process of digital circuit synthesis. We consider the problem from the information engine point of view. The circuit synthesis as a whole and different steps of the design process (an example of decision diagram is given) are presented via such measurements as entropy, logical work and information vitality. We also introduce new information measures to provide better estimates of synthesis criteria. We show that the basic properties of information engine, such as the conservation law of information flow and the equilibrium law of information can be formulated.<|reference_end|> | arxiv | @article{popel2002on,
title={On the Information Engine of Circuit Design},
author={Denis V. Popel and Nawar Al-Hakeem},
journal={MWSCAS 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207014},
primaryClass={cs.AR}
} | popel2002on |
arxiv-670628 | cs/0207015 | New advances in dual reciprocity and boundary-only RBF methods | <|reference_start|>New advances in dual reciprocity and boundary-only RBF methods: This paper made some significant advances in the dual reciprocity and boundary-only RBF techniques. The proposed boundary knot method (BKM) is different from the standard boundary element method in a number of important aspects. Namely, it is truly meshless, exponential convergence, integration-free (of course, no singular integration), boundary-only for general problems, and leads to symmetric matrix under certain conditions (able to be extended to general cases after further modified). The BKM also avoids the artificial boundary in the method of fundamental solution. An amazing finding is that the BKM can formulate linear modeling equations for nonlinear partial differential systems with linear boundary conditions. This merit makes it circumvent all perplexing issues in the iteration solution of nonlinear equations. On the other hand, by analogy with Green's second identity, this paper also presents a general solution RBF (GSR) methodology to construct efficient RBFs in the dual reciprocity and domain-type RBF collocation methods. The GSR approach first establishes an explicit relationship between the BEM and RBF itself on the ground of the weighted residual principle. This paper also discusses the RBF convergence and stability problems within the framework of integral equation theory.<|reference_end|> | arxiv | @article{chen2002new,
title={New advances in dual reciprocity and boundary-only RBF methods},
author={W. Chen, M. Tanaka},
journal={arXiv preprint arXiv:cs/0207015},
year={2002},
number={Proc. of BEM technique confer., Vol. 10, 17-22, Tokyo, Japan, 2000},
archivePrefix={arXiv},
eprint={cs/0207015},
primaryClass={cs.CE cs.CG}
} | chen2002new |
arxiv-670629 | cs/0207016 | Relationship between boundary integral equation and radial basis function | <|reference_start|>Relationship between boundary integral equation and radial basis function: This paper aims to survey our recent work relating to the radial basis function (RBF) from some new views of points. In the first part, we established the RBF on numerical integration analysis based on an intrinsic relationship between the Green's boundary integral representation and RBF. It is found that the kernel function of integral equation is important to create efficient RBF. The fundamental solution RBF (FS-RBF) was presented as a novel strategy constructing operator-dependent RBF. We proposed a conjecture formula featuring the dimension affect on error bound to show the independent-dimension merit of the RBF techniques. We also discussed wavelet RBF, localized RBF schemes, and the influence of node placement on the RBF solution accuracy. The centrosymmetric matrix structure of the RBF interpolation matrix under symmetric node placing is proved. The second part of this paper is concerned with the boundary knot method (BKM), a new boundary-only, meshless, spectral convergent, integration-free RBF collocation technique. The BKM was tested to the Helmholtz, Laplace, linear and nonlinear convection-diffusion problems. In particular, we introduced the response knot-dependent nonsingular general solution to calculate varying-parameter and nonlinear steady convection-diffusion problems very efficiently. By comparing with the multiple dual reciprocity method, we discussed the completeness issue of the BKM. Finally, the nonsingular solutions for some known differential operators were given in appendix. Also we expanded the RBF concepts by introducing time-space RBF for transient problems.<|reference_end|> | arxiv | @article{chen2002relationship,
title={Relationship between boundary integral equation and radial basis
function},
author={W. Chen, M. Tanaka},
journal={arXiv preprint arXiv:cs/0207016},
year={2002},
number={JASCOME 57th BEM Confer., 30 Sept. 2000},
archivePrefix={arXiv},
eprint={cs/0207016},
primaryClass={cs.CE cs.CG}
} | chen2002relationship |
arxiv-670630 | cs/0207017 | New Insights in Boundary-only and Domain-type RBF Methods | <|reference_start|>New Insights in Boundary-only and Domain-type RBF Methods: This paper has made some significant advances in the boundary-only and domain-type RBF techniques. The proposed boundary knot method (BKM) is different from the standard boundary element method in a number of important aspects. Namely, it is truly meshless, exponential convergence, integration-free (of course, no singular integration), boundary-only for general problems, and leads to symmetric matrix under certain conditions (able to be extended to general cases after further modified). The BKM also avoids the artificial boundary in the method of fundamental solution. An amazing finding is that the BKM can formulate linear modeling equations for nonlinear partial differential systems with linear boundary conditions. This merit makes it circumvent all perplexing issues in the iteration solution of nonlinear equations. On the other hand, by analogy with Green's second identity, we also presents a general solution RBF (GSR) methodology to construct efficient RBFs in the domain-type RBF collocation method and dual reciprocity method. The GSR approach first establishes an explicit relationship between the BEM and RBF itself on the ground of the potential theory. This paper also discusses some essential issues relating to the RBF computing, which include time-space RBFs, direct and indirect RBF schemes, finite RBF method, and the application of multipole and wavelet to the RBF solution of the PDEs.<|reference_end|> | arxiv | @article{chen2002new,
title={New Insights in Boundary-only and Domain-type RBF Methods},
author={W. Chen, M. Tanaka},
journal={J. Nonlinear Sci. & Numer. Simulation, 1(3), 145-151, 2000},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207017},
primaryClass={cs.CE cs.CG}
} | chen2002new |
arxiv-670631 | cs/0207018 | Definitions of distance function in radial basis function approach | <|reference_start|>Definitions of distance function in radial basis function approach: Very few studies involve how to construct the efficient RBFs by means of problem features. Recently the present author presented general solution RBF (GS-RBF) methodology to create operator-dependent RBFs successfully [1]. On the other hand, the normal radial basis function (RBF) is defined via Euclidean space distance function or the geodesic distance [2]. This purpose of this note is to redefine distance function in conjunction with problem features, which include problem-dependent and time-space distance function.<|reference_end|> | arxiv | @article{chen2002definitions,
title={Definitions of distance function in radial basis function approach},
author={W. Chen},
journal={arXiv preprint arXiv:cs/0207018},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207018},
primaryClass={cs.CE cs.CG}
} | chen2002definitions |
arxiv-670632 | cs/0207019 | Information Measures in Detecting and Recognizing Symmetries | <|reference_start|>Information Measures in Detecting and Recognizing Symmetries: This paper presents a method to detect and recognize symmetries in Boolean functions. The idea is to use information theoretic measures of Boolean functions to detect sub-space of possible symmetric variables. Coupled with the new techniques of efficient estimations of information measures on Binary Decision Diagrams (BDDs) we obtain promised results in symmetries detection for large-scale functions.<|reference_end|> | arxiv | @article{popel2002information,
title={Information Measures in Detecting and Recognizing Symmetries},
author={Denis V. Popel},
journal={MWSCAS 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207019},
primaryClass={cs.OH}
} | popel2002information |
arxiv-670633 | cs/0207020 | Towards Efficient Calculation of Information Measures for Reordering of Binary Decision Diagrams | <|reference_start|>Towards Efficient Calculation of Information Measures for Reordering of Binary Decision Diagrams: This paper introduces new technique for efficient calculation of different Shannon information measures which operates Binary Decision Diagrams (BDDs). We offer an algorithm of BDD reordering which demonstrates the improvement of the obtaining outcomes over the existing reordering approaches. The technique and the reordering algorithm have been implemented, and the results on circuits' benchmarks are analyzed. We point out that the results are quite promising, the algorithm is very fast, and it is easy to implement. Finally, we show that our approach to BDD reordering can yield to reduction in the power dissipation for the circuits derived from BDDs.<|reference_end|> | arxiv | @article{popel2002towards,
title={Towards Efficient Calculation of Information Measures for Reordering of
Binary Decision Diagrams},
author={Denis V. Popel},
journal={SCS 2001},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207020},
primaryClass={cs.OH}
} | popel2002towards |
arxiv-670634 | cs/0207021 | Abduction, ASP and Open Logic Programs | <|reference_start|>Abduction, ASP and Open Logic Programs: Open logic programs and open entailment have been recently proposed as an abstract framework for the verification of incomplete specifications based upon normal logic programs and the stable model semantics. There are obvious analogies between open predicates and abducible predicates. However, despite superficial similarities, there are features of open programs that have no immediate counterpart in the framework of abduction and viceversa. Similarly, open programs cannot be immediately simulated with answer set programming (ASP). In this paper we start a thorough investigation of the relationships between open inference, abduction and ASP. We shall prove that open programs generalize the other two frameworks. The generalized framework suggests interesting extensions of abduction under the generalized stable model semantics. In some cases, we will be able to reduce open inference to abduction and ASP, thereby estimating its computational complexity. At the same time, the aforementioned reduction opens the way to new applications of abduction and ASP.<|reference_end|> | arxiv | @article{bonatti2002abduction,,
title={Abduction, ASP and Open Logic Programs},
author={Piero A. Bonatti},
journal={arXiv preprint arXiv:cs/0207021},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207021},
primaryClass={cs.AI}
} | bonatti2002abduction, |
arxiv-670635 | cs/0207022 | What is a Joint Goal? Games with Beliefs and Defeasible Desires | <|reference_start|>What is a Joint Goal? Games with Beliefs and Defeasible Desires: In this paper we introduce a qualitative decision and game theory based on belief (B) and desire (D) rules. We show that a group of agents acts as if it is maximizing achieved joint goals.<|reference_end|> | arxiv | @article{dastani2002what,
title={What is a Joint Goal? Games with Beliefs and Defeasible Desires},
author={Mehdi Dastani and Leendert van der Torre},
journal={Proceedings of NMR02, Toulouse, 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207022},
primaryClass={cs.MA cs.GT}
} | dastani2002what |
arxiv-670636 | cs/0207023 | Domain-Dependent Knowledge in Answer Set Planning | <|reference_start|>Domain-Dependent Knowledge in Answer Set Planning: In this paper we consider three different kinds of domain-dependent control knowledge (temporal, procedural and HTN-based) that are useful in planning. Our approach is declarative and relies on the language of logic programming with answer set semantics (AnsProlog*). AnsProlog* is designed to plan without control knowledge. We show how temporal, procedural and HTN-based control knowledge can be incorporated into AnsProlog* by the modular addition of a small number of domain-dependent rules, without the need to modify the planner. We formally prove the correctness of our planner, both in the absence and presence of the control knowledge. Finally, we perform some initial experimentation that demonstrates the potential reduction in planning time that can be achieved when procedural domain knowledge is used to solve planning problems with large plan length.<|reference_end|> | arxiv | @article{son2002domain-dependent,
title={Domain-Dependent Knowledge in Answer Set Planning},
author={Tran Cao Son, Chitta Baral, Nam Tran, and Sheila McIlraith},
journal={arXiv preprint arXiv:cs/0207023},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207023},
primaryClass={cs.AI}
} | son2002domain-dependent |
arxiv-670637 | cs/0207024 | On Concise Encodings of Preferred Extensions | <|reference_start|>On Concise Encodings of Preferred Extensions: Much work on argument systems has focussed on preferred extensions which define the maximal collectively defensible subsets. Identification and enumeration of these subsets is (under the usual assumptions) computationally demanding. We consider approaches to deciding if a subset S is a preferred extension which query a representations encoding all such extensions, so that the computational effort is invested once only (for the initial enumeration) rather than for each separate query.<|reference_end|> | arxiv | @article{dunne2002on,
title={On Concise Encodings of Preferred Extensions},
author={Paul E. Dunne},
journal={arXiv preprint arXiv:cs/0207024},
year={2002},
number={Dept. of Comp. Sci., Univ. of Liverpool, Tech. Report ULCS-02-003},
archivePrefix={arXiv},
eprint={cs/0207024},
primaryClass={cs.AI cs.CC cs.DS}
} | dunne2002on |
arxiv-670638 | cs/0207025 | "Minimal defence": a refinement of the preferred semantics for argumentation frameworks | <|reference_start|>"Minimal defence": a refinement of the preferred semantics for argumentation frameworks: Dung's abstract framework for argumentation enables a study of the interactions between arguments based solely on an ``attack'' binary relation on the set of arguments. Various ways to solve conflicts between contradictory pieces of information have been proposed in the context of argumentation, nonmonotonic reasoning or logic programming, and can be captured by appropriate semantics within Dung's framework. A common feature of these semantics is that one can always maximize in some sense the set of acceptable arguments. We propose in this paper to extend Dung's framework in order to allow for the representation of what we call ``restricted'' arguments: these arguments should only be used if absolutely necessary, that is, in order to support other arguments that would otherwise be defeated. We modify Dung's preferred semantics accordingly: a set of arguments becomes acceptable only if it contains a minimum of restricted arguments, for a maximum of unrestricted arguments.<|reference_end|> | arxiv | @article{cayrol2002"minimal,
title={"Minimal defence": a refinement of the preferred semantics for
argumentation frameworks},
author={C. Cayrol, S. Doutre, M.-C. Lagasquie-Schiex, J. Mengin},
journal={Proceedings of the 9th International Workshop on Non-Monotonic
Reasoning, 2002, pp. 408-415},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207025},
primaryClass={cs.AI}
} | cayrol2002"minimal |
arxiv-670639 | cs/0207026 | Linear-Time Algorithms for Computing Maximum-Density Sequence Segments with Bioinformatics Applications | <|reference_start|>Linear-Time Algorithms for Computing Maximum-Density Sequence Segments with Bioinformatics Applications: We study an abstract optimization problem arising from biomolecular sequence analysis. For a sequence A of pairs (a_i,w_i) for i = 1,..,n and w_i>0, a segment A(i,j) is a consecutive subsequence of A starting with index i and ending with index j. The width of A(i,j) is w(i,j) = sum_{i <= k <= j} w_k, and the density is (sum_{i<= k <= j} a_k)/ w(i,j). The maximum-density segment problem takes A and two values L and U as input and asks for a segment of A with the largest possible density among those of width at least L and at most U. When U is unbounded, we provide a relatively simple, O(n)-time algorithm, improving upon the O(n \log L)-time algorithm by Lin, Jiang and Chao. When both L and U are specified, there are no previous nontrivial results. We solve the problem in O(n) time if w_i=1 for all i, and more generally in O(n+n\log(U-L+1)) time when w_i>=1 for all i.<|reference_end|> | arxiv | @article{goldwasser2002linear-time,
title={Linear-Time Algorithms for Computing Maximum-Density Sequence Segments
with Bioinformatics Applications},
author={Michael H. Goldwasser, Ming-Yang Kao, Hsueh-I Lu},
journal={Journal of Computer and System Sciences, 70(2):128-144, 2005},
year={2002},
doi={10.1016/j.jcss.2004.08.001},
archivePrefix={arXiv},
eprint={cs/0207026},
primaryClass={cs.DS cs.DM}
} | goldwasser2002linear-time |
arxiv-670640 | cs/0207027 | Permutation graphs, fast forward permutations, and sampling the cycle structure of a permutation | <|reference_start|>Permutation graphs, fast forward permutations, and sampling the cycle structure of a permutation: A permutation P on {1,..,N} is a_fast_forward_permutation_ if for each m the computational complexity of evaluating P^m(x)$ is small independently of m and x. Naor and Reingold constructed fast forward pseudorandom cycluses and involutions. By studying the evolution of permutation graphs, we prove that the number of queries needed to distinguish a random cyclus from a random permutation on {1,..,N} is Theta(N) if one does not use queries of the form P^m(x), but is only Theta(1) if one is allowed to make such queries. We construct fast forward permutations which are indistinguishable from random permutations even when queries of the form P^m(x) are allowed. This is done by introducing an efficient method to sample the cycle structure of a random permutation, which in turn solves an open problem of Naor and Reingold.<|reference_end|> | arxiv | @article{tsaban2002permutation,
title={Permutation graphs, fast forward permutations, and sampling the cycle
structure of a permutation},
author={Boaz Tsaban},
journal={Journal of Algorithms 47 (2003), 104--121},
year={2002},
doi={10.1016/S0196-6774(03)00017-8},
archivePrefix={arXiv},
eprint={cs/0207027},
primaryClass={cs.CR cs.CC math.CO math.PR}
} | tsaban2002permutation |
arxiv-670641 | cs/0207028 | Greedy Facility Location Algorithms Analyzed using Dual Fitting with Factor-Revealing LP | <|reference_start|>Greedy Facility Location Algorithms Analyzed using Dual Fitting with Factor-Revealing LP: In this paper, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(mlog m) and O(n^3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.<|reference_end|> | arxiv | @article{jain2002greedy,
title={Greedy Facility Location Algorithms Analyzed using Dual Fitting with
Factor-Revealing LP},
author={Kamal Jain and Mohammad Mahdian and Evangelos Markakis and Amin Saberi
and Vijay V. Vazirani},
journal={arXiv preprint arXiv:cs/0207028},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207028},
primaryClass={cs.DS cs.GT}
} | jain2002greedy |
arxiv-670642 | cs/0207029 | Two Representations for Iterative Non-prioritized Change | <|reference_start|>Two Representations for Iterative Non-prioritized Change: We address a general representation problem for belief change, and describe two interrelated representations for iterative non-prioritized change: a logical representation in terms of persistent epistemic states, and a constructive representation in terms of flocks of bases.<|reference_end|> | arxiv | @article{bochman2002two,
title={Two Representations for Iterative Non-prioritized Change},
author={Alexander Bochman},
journal={arXiv preprint arXiv:cs/0207029},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207029},
primaryClass={cs.AI}
} | bochman2002two |
arxiv-670643 | cs/0207030 | Collective Argumentation | <|reference_start|>Collective Argumentation: An extension of an abstract argumentation framework, called collective argumentation, is introduced in which the attack relation is defined directly among sets of arguments. The extension turns out to be suitable, in particular, for representing semantics of disjunctive logic programs. Two special kinds of collective argumentation are considered in which the opponents can share their arguments.<|reference_end|> | arxiv | @article{bochman2002collective,
title={Collective Argumentation},
author={Alexander Bochman},
journal={arXiv preprint arXiv:cs/0207030},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207030},
primaryClass={cs.AI}
} | bochman2002collective |
arxiv-670644 | cs/0207031 | Intuitions and the modelling of defeasible reasoning: some case studies | <|reference_start|>Intuitions and the modelling of defeasible reasoning: some case studies: The purpose of this paper is to address some criticisms recently raised by John Horty in two articles against the validity of two commonly accepted defeasible reasoning patterns, viz. reinstatement and floating conclusions. I shall argue that Horty's counterexamples, although they significantly raise our understanding of these reasoning patterns, do not show their invalidity. Some of them reflect patterns which, if made explicit in the formalisation, avoid the unwanted inference without having to give up the criticised inference principles. Other examples seem to involve hidden assumptions about the specific problem which, if made explicit, are nothing but extra information that defeat the defeasible inference. These considerations will be put in a wider perspective by reflecting on the nature of defeasible reasoning principles as principles of justified acceptance rather than `real' logical inference.<|reference_end|> | arxiv | @article{prakken2002intuitions,
title={Intuitions and the modelling of defeasible reasoning: some case studies},
author={Henry Prakken},
journal={arXiv preprint arXiv:cs/0207031},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207031},
primaryClass={cs.AI cs.LO}
} | prakken2002intuitions |
arxiv-670645 | cs/0207032 | Alternative Characterizations for Strong Equivalence of Logic Programs | <|reference_start|>Alternative Characterizations for Strong Equivalence of Logic Programs: In this work we present additional results related to the property of strong equivalence of logic programs. This property asserts that two programs share the same set of stable models, even under the addition of new rules. As shown in a recent work by Lifschitz, Pearce and Valverde, strong equivalence can be simply reduced to equivalence in the logic of Here-and-There (HT). In this paper we provide two alternatives respectively based on classical logic and 3-valued logic. The former is applicable to general rules, but not for nested expressions, whereas the latter is applicable for nested expressions but, when moving to an unrestricted syntax, it generally yields different results from HT.<|reference_end|> | arxiv | @article{cabalar2002alternative,
title={Alternative Characterizations for Strong Equivalence of Logic Programs},
author={Pedro Cabalar},
journal={arXiv preprint arXiv:cs/0207032},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207032},
primaryClass={cs.AI cs.LO}
} | cabalar2002alternative |
arxiv-670646 | cs/0207033 | Reducing the Computational Requirements of the Differential Quadrature Method | <|reference_start|>Reducing the Computational Requirements of the Differential Quadrature Method: This paper shows that the weighting coefficient matrices of the differential quadrature method (DQM) are centrosymmetric or skew-centrosymmetric if the grid spacings are symmetric irrespective of whether they are equal or unequal. A new skew centrosymmetric matrix is also discussed. The application of the properties of centrosymmetric and skew centrosymmetric matrix can reduce the computational effort of the DQM for calculations of the inverse, determinant, eigenvectors and eigenvalues by 75%. This computational advantage are also demonstrated via several numerical examples.<|reference_end|> | arxiv | @article{chen2002reducing,
title={Reducing the Computational Requirements of the Differential Quadrature
Method},
author={W Chen, Xinwei Wang, Yongxi Yu},
journal={Numerical Methods for Partial Differential Equations, 12, 565-577,
1996},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207033},
primaryClass={cs.CE cs.CG}
} | chen2002reducing |
arxiv-670647 | cs/0207034 | A Note on the DQ Analysis of Anisotropic Plates | <|reference_start|>A Note on the DQ Analysis of Anisotropic Plates: Recently, Bert, Wang and Striz [1, 2] applied the differential quadrature (DQ) and harmonic differential quadrature (HDQ) methods to analyze static and dynamic behaviors of anisotropic plates. Their studies showed that the methods were conceptually simple and computationally efficient in comparison to other numerical techniques. Based on some recent work by the present author [3, 4], the purpose of this note is to further simplify the formulation effort and improve computing efficiency in applying the DQ and HDQ methods for these cases.<|reference_end|> | arxiv | @article{chen2002a,
title={A Note on the DQ Analysis of Anisotropic Plates},
author={W Chen, Weixing He, Tingxiu Zhong},
journal={J. of Sound & Vibration, 204(1), 180-182, 1997},
year={2002},
doi={10.1006/jsvi.1996.0895},
archivePrefix={arXiv},
eprint={cs/0207034},
primaryClass={cs.SC}
} | chen2002a |
arxiv-670648 | cs/0207035 | A Lyapunov Formulation for Efficient Solution of the Poisson and Convection-Diffusion Equations by the Differential Quadrature Method | <|reference_start|>A Lyapunov Formulation for Efficient Solution of the Poisson and Convection-Diffusion Equations by the Differential Quadrature Method: Civan and Sliepcevich [1, 2] suggested that special matrix solver should be developed to further reduce the computing effort in applying the differential quadrature (DQ) method for the Poisson and convection-diffusion equations. Therefore, the purpose of the present communication is to introduce and apply the Lyapunov formulation which can be solved much more efficiently than the Gaussian elimination method. Civan and Sliepcevich [2] first presented DQ approximate formulas in polynomial form for partial derivatives in tow-dimensional variable domain. For simplifying formulation effort, Chen et al. [3] proposed the compact matrix form of these DQ approximate formulas. In this study, by using these matrix approximate formulas, the DQ formulations for the Poisson and convection-diffusion equations can be expressed as the Lyapunov algebraic matrix equation. The formulation effort is simplified, and a simple and explicit matrix formulation is obtained. A variety of fast algorithms in the solution of the Lyapunov equation [4-6] can be successfully applied in the DQ analysis of these two-dimensional problems, and, thus, the computing effort can be greatly reduced. Finally, we also point out that the present reduction technique can be easily extended to the three-dimensional cases.<|reference_end|> | arxiv | @article{chen2002a,
title={A Lyapunov Formulation for Efficient Solution of the Poisson and
Convection-Diffusion Equations by the Differential Quadrature Method},
author={W. Chen, Tingxiu Zhong},
journal={J. of Computational Physics, 139, 1-7, 1998},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207035},
primaryClass={cs.CE cs.CG}
} | chen2002a |
arxiv-670649 | cs/0207036 | System Description for a Scalable, Fault-Tolerant, Distributed Garbage Collector | <|reference_start|>System Description for a Scalable, Fault-Tolerant, Distributed Garbage Collector: We describe an efficient and fault-tolerant algorithm for distributed cyclic garbage collection. The algorithm imposes few requirements on the local machines and allows for flexibility in the choice of local collector and distributed acyclic garbage collector to use with it. We have emphasized reducing the number and size of network messages without sacrificing the promptness of collection throughout the algorithm. Our proposed collector is a variant of back tracing to avoid extensive synchronization between machines. We have added an explicit forward tracing stage to the standard back tracing stage and designed a tuned heuristic to reduce the total amount of work done by the collector. Of particular note is the development of fault-tolerant cooperation between traces and a heuristic that aggressively reduces the set of suspect objects.<|reference_end|> | arxiv | @article{allen2002system,
title={System Description for a Scalable, Fault-Tolerant, Distributed Garbage
Collector},
author={N. Allen and T. Terriberry},
journal={arXiv preprint arXiv:cs/0207036},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207036},
primaryClass={cs.DC}
} | allen2002system |
arxiv-670650 | cs/0207037 | Some logics of belief and disbelief | <|reference_start|>Some logics of belief and disbelief: The introduction of explicit notions of rejection, or disbelief, into logics for knowledge representation can be justified in a number of ways. Motivations range from the need for versions of negation weaker than classical negation, to the explicit recording of classic belief contraction operations in the area of belief change, and the additional levels of expressivity obtained from an extended version of belief change which includes disbelief contraction. In this paper we present four logics of disbelief which address some or all of these intuitions. Soundness and completeness results are supplied and the logics are compared with respect to applicability and utility.<|reference_end|> | arxiv | @article{chopra2002some,
title={Some logics of belief and disbelief},
author={Samir Chopra, Johannes Heidema, Thomas Meyer},
journal={arXiv preprint arXiv:cs/0207037},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207037},
primaryClass={cs.AI cs.LO}
} | chopra2002some |
arxiv-670651 | cs/0207038 | Iterated revision and the axiom of recovery: a unified treatment via epistemic states | <|reference_start|>Iterated revision and the axiom of recovery: a unified treatment via epistemic states: The axiom of recovery, while capturing a central intuition regarding belief change, has been the source of much controversy. We argue briefly against putative counterexamples to the axiom--while agreeing that some of their insight deserves to be preserved--and present additional recovery-like axioms in a framework that uses epistemic states, which encode preferences, as the object of revisions. This provides a framework in which iterated revision becomes possible and makes explicit the connection between iterated belief change and the axiom of recovery. We provide a representation theorem that connects the semantic conditions that we impose on iterated revision and the additional syntactical properties mentioned. We also show some interesting similarities between our framework and that of Darwiche-Pearl. In particular, we show that the intuitions underlying the controversial (C2) postulate are captured by the recovery axiom and our recovery-like postulates (the latter can be seen as weakenings of (C2).<|reference_end|> | arxiv | @article{chopra2002iterated,
title={Iterated revision and the axiom of recovery: a unified treatment via
epistemic states},
author={Samir Chopra, Aditya Ghose, Thomas Meyer},
journal={arXiv preprint arXiv:cs/0207038},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207038},
primaryClass={cs.AI cs.LO}
} | chopra2002iterated |
arxiv-670652 | cs/0207039 | Dual reciprocity BEM and dynamic programming filter for inverse elastodynamic problems | <|reference_start|>Dual reciprocity BEM and dynamic programming filter for inverse elastodynamic problems: This paper presents the first coupling application of the dual reciprocity BEM (DRBEM) and dynamic programming filter to inverse elastodynamic problem. The DRBEM is the only BEM method, which does not require domain discretization for general linear and nonlinear dynamic problems. Since the size of numerical discretization system has a great effect on the computing effort of recursive or iterative calculations of inverse analysis, the intrinsic boundary-only merit of the DRBEM causes a considerable computational saving. On the other hand, the strengths of the dynamic programming filter lie in its mathematical simplicity, easy to program and great flexibility in the type, number and locations of measurements and unknown inputs. The combination of these two techniques is therefore very attractive for the solution of practical inverse problems. In this study, the spatial and temporal partial derivatives of the governing equation are respectively discretized first by the DRBEM and the precise integration method, and then, by using dynamic programming with regularization, dynamic load is estimated based on noisy measurements of velocity and displacement at very few locations. Numerical experiments involved with the periodic and Heaviside impact load are conducted to demonstrate the applicability, efficiency and simplicity of this strategy. The affect of noise level, regularization parameter, and measurement types on the estimation is also investigated.<|reference_end|> | arxiv | @article{tanaka2002dual,
title={Dual reciprocity BEM and dynamic programming filter for inverse
elastodynamic problems},
author={Masataka Tanaka, W Chen},
journal={Transactions of the Japan Society for Computational Engineering
and Science, 2, 20000003, 2000},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207039},
primaryClass={cs.CE cs.CG}
} | tanaka2002dual |
arxiv-670653 | cs/0207040 | Well-Founded Argumentation Semantics for Extended Logic Programming | <|reference_start|>Well-Founded Argumentation Semantics for Extended Logic Programming: This paper defines an argumentation semantics for extended logic programming and shows its equivalence to the well-founded semantics with explicit negation. We set up a general framework in which we extensively compare this semantics to other argumentation semantics, including those of Dung, and Prakken and Sartor. We present a general dialectical proof theory for these argumentation semantics.<|reference_end|> | arxiv | @article{schweimeier2002well-founded,
title={Well-Founded Argumentation Semantics for Extended Logic Programming},
author={Ralf Schweimeier and Michael Schroeder},
journal={arXiv preprint arXiv:cs/0207040},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207040},
primaryClass={cs.LO cs.AI}
} | schweimeier2002well-founded |
arxiv-670654 | cs/0207041 | RBF-based meshless boundary knot method and boundary particle method | <|reference_start|>RBF-based meshless boundary knot method and boundary particle method: This paper is concerned with the two new boundary-type radial basis function collocation schemes, boundary knot method (BKM) and boundary particle method (BPM). The BKM is developed based on the dual reciprocity theorem, while the BPM employs the multiple reciprocity technique. Unlike the method of fundamental solution, the wto methods use the nonsingular general solutions instead of singular fundamental solution to circumvent the controversial artificial boundary outside physical domain. Compared with the boundary element method, both the BKM and BPM are meshfree, superconvergent, meshfree, integration free, symmetric, and mathematically simple collocation techniques for general PDEs. In particular, the BPM does not require any inner nodes for inhomogeneous problems. In this study, the accuracy and efficiency of the two methods are numerically demonstrated to some 2D, 3D Helmholtz and convection-diffusion problems under complicated geometries.<|reference_end|> | arxiv | @article{chen2002rbf-based,
title={RBF-based meshless boundary knot method and boundary particle method},
author={W. Chen},
journal={arXiv preprint arXiv:cs/0207041},
year={2002},
number={Proc. of the China Congress on Computational Mechanics's 2001, pp.
319-326, Guangzhou, China, Dec. 2001},
archivePrefix={arXiv},
eprint={cs/0207041},
primaryClass={cs.CE cs.CG}
} | chen2002rbf-based |
arxiv-670655 | cs/0207042 | Logic Programming with Ordered Disjunction | <|reference_start|>Logic Programming with Ordered Disjunction: Logic programs with ordered disjunction (LPODs) combine ideas underlying Qualitative Choice Logic (Brewka et al. KR 2002) and answer set programming. Logic programming under answer set semantics is extended with a new connective called ordered disjunction. The new connective allows us to represent alternative, ranked options for problem solutions in the heads of rules: A \times B intuitively means: if possible A, but if A is not possible then at least B. The semantics of logic programs with ordered disjunction is based on a preference relation on answer sets. LPODs are useful for applications in design and configuration and can serve as a basis for qualitative decision making.<|reference_end|> | arxiv | @article{brewka2002logic,
title={Logic Programming with Ordered Disjunction},
author={Gerhard Brewka},
journal={arXiv preprint arXiv:cs/0207042},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207042},
primaryClass={cs.AI}
} | brewka2002logic |
arxiv-670656 | cs/0207043 | A meshless, integration-free, and boundary-only RBF technique | <|reference_start|>A meshless, integration-free, and boundary-only RBF technique: Based on the radial basis function (RBF), non-singular general solution and dual reciprocity method (DRM), this paper presents an inherently meshless, integration-free, boundary-only RBF collocation techniques for numerical solution of various partial differential equation systems. The basic ideas behind this methodology are very mathematically simple. In this study, the RBFs are employed to approximate the inhomogeneous terms via the DRM, while non-singular general solution leads to a boundary-only RBF formulation for homogenous solution. The present scheme is named as the boundary knot method (BKM) to differentiate it from the other numerical techniques. In particular, due to the use of nonsingular general solutions rather than singular fundamental solutions, the BKM is different from the method of fundamental solution in that the former does no require the artificial boundary and results in the symmetric system equations under certain conditions. The efficiency and utility of this new technique are validated through a number of typical numerical examples. Completeness concern of the BKM due to the only use of non-singular part of complete fundamental solution is also discussed.<|reference_end|> | arxiv | @article{chen2002a,
title={A meshless, integration-free, and boundary-only RBF technique},
author={W. Chen, M. Tanaka},
journal={Computers and Mathematics with Applications, 43, 379-391, 2002},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207043},
primaryClass={cs.CE cs.CG}
} | chen2002a |
arxiv-670657 | cs/0207044 | Declarative program development in Prolog with GUPU | <|reference_start|>Declarative program development in Prolog with GUPU: We present GUPU, a side-effect free environment specialized for programming courses. It seamlessly guides and supports students during all phases of program development, covering specification, implementation, and program debugging. GUPU features several innovations in this area. The specification phase is supported by reference implementations augmented with diagnostic facilities. During implementation, immediate feedback from test cases and from visualization tools helps the programmer's program understanding. A set of slicing techniques narrows down programming errors. The whole process is guided by a marking system.<|reference_end|> | arxiv | @article{neumerkel2002declarative,
title={Declarative program development in Prolog with GUPU},
author={Ulrich Neumerkel, Stefan Kral},
journal={arXiv preprint arXiv:cs/0207044},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207044},
primaryClass={cs.SE}
} | neumerkel2002declarative |
arxiv-670658 | cs/0207045 | Compilation of Propositional Weighted Bases | <|reference_start|>Compilation of Propositional Weighted Bases: In this paper, we investigate the extent to which knowledge compilation can be used to improve inference from propositional weighted bases. We present a general notion of compilation of a weighted base that is parametrized by any equivalence--preserving compilation function. Both negative and positive results are presented. On the one hand, complexity results are identified, showing that the inference problem from a compiled weighted base is as difficult as in the general case, when the prime implicates, Horn cover or renamable Horn cover classes are targeted. On the other hand, we show that the inference problem becomes tractable whenever DNNF-compilations are used and clausal queries are considered. Moreover, we show that the set of all preferred models of a DNNF-compilation of a weighted base can be computed in time polynomial in the output size. Finally, we sketch how our results can be used in model-based diagnosis in order to compute the most probable diagnoses of a system.<|reference_end|> | arxiv | @article{darwiche2002compilation,
title={Compilation of Propositional Weighted Bases},
author={Adnan Darwiche and Pierre Marquis},
journal={arXiv preprint arXiv:cs/0207045},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207045},
primaryClass={cs.AI}
} | darwiche2002compilation |
arxiv-670659 | cs/0207046 | COINS: a constraint-based interactive solving system | <|reference_start|>COINS: a constraint-based interactive solving system: This paper describes the COINS (COnstraint-based INteractive Solving) system: a conflict-based constraint solver. It helps understanding inconsistencies, simulates constraint additions and/or retractions (without any propagation), determines if a given constraint belongs to a conflict and provides diagnosis tools (e.g. why variable v cannot take value val). COINS also uses user-friendly representation of conflicts and explanations.<|reference_end|> | arxiv | @article{ouis2002coins:,
title={COINS: a constraint-based interactive solving system},
author={Samir Ouis, Narendra Jussien, Patrice Boizumault},
journal={arXiv preprint arXiv:cs/0207046},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207046},
primaryClass={cs.SE}
} | ouis2002coins: |
arxiv-670660 | cs/0207047 | Tracing and Explaining Execution of CLP(FD) Programs | <|reference_start|>Tracing and Explaining Execution of CLP(FD) Programs: Previous work in the area of tracing CLP(FD) programs mainly focuses on providing information about control of execution and domain modification. In this paper, we present a trace structure that provides information about additional important aspects. We incorporate explanations in the trace structure, i.e. reasons for why certain solver actions occur. Furthermore, we come up with a format for describing the execution of the filtering algorithms of global constraints. Some new ideas about the design of the trace are also presented. For example, we have modeled our trace as a nested block structure in order to achieve a hierarchical view. Also, new ways about how to represent and identify different entities such as constraints and domain variables are presented.<|reference_end|> | arxiv | @article{agren2002tracing,
title={Tracing and Explaining Execution of CLP(FD) Programs},
author={Magnus Agren, Tamas Szeredi, Nicolas Beldiceanu, Mats Carlsson},
journal={arXiv preprint arXiv:cs/0207047},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207047},
primaryClass={cs.SE}
} | agren2002tracing |
arxiv-670661 | cs/0207048 | CLPGUI: a generic graphical user interface for constraint logic programming over finite domains | <|reference_start|>CLPGUI: a generic graphical user interface for constraint logic programming over finite domains: CLPGUI is a graphical user interface for visualizing and interacting with constraint logic programs over finite domains. In CLPGUI, the user can control the execution of a CLP program through several views of constraints, of finite domain variables and of the search tree. CLPGUI is intended to be used both for teaching purposes, and for debugging and improving complex programs of realworld scale. It is based on a client-server architecture for connecting the CLP process to a Java-based GUI process. Communication by message passing provides an open architecture which facilitates the reuse of graphical components and the porting to different constraint programming systems. Arbitrary constraints and goals can be posted incrementally from the GUI. We propose several dynamic 2D and 3D visualizations of the search tree and of the evolution of finite domain variables. We argue that the 3D representation of search trees proposed in this paper provides the most appropriate visualization of large search trees. We describe the current implementation of the annotations and of the interactive execution model in GNU-Prolog, and report some evaluation results.<|reference_end|> | arxiv | @article{fages2002clpgui:,
title={CLPGUI: a generic graphical user interface for constraint logic
programming over finite domains},
author={Francois Fages},
journal={arXiv preprint arXiv:cs/0207048},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207048},
primaryClass={cs.SE}
} | fages2002clpgui: |
arxiv-670662 | cs/0207049 | More Precise Yet Efficient Type Inference for Logic Programs | <|reference_start|>More Precise Yet Efficient Type Inference for Logic Programs: Type analyses of logic programs which aim at inferring the types of the program being analyzed are presented in a unified abstract interpretation-based framework. This covers most classical abstract interpretation-based type analyzers for logic programs, built on either top-down or bottom-up interpretation of the program. In this setting, we discuss the widening operator, arguably a crucial one. We present a new widening which is more precise than those previously proposed. Practical results with our analysis domain are also presented, showing that it also allows for efficient analysis.<|reference_end|> | arxiv | @article{vaucheret2002more,
title={More Precise Yet Efficient Type Inference for Logic Programs},
author={Claudio Vaucheret, Francisco Bueno},
journal={arXiv preprint arXiv:cs/0207049},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207049},
primaryClass={cs.SE}
} | vaucheret2002more |
arxiv-670663 | cs/0207050 | Value withdrawal explanations: a theoretical tool for programming environments | <|reference_start|>Value withdrawal explanations: a theoretical tool for programming environments: Constraint logic programming combines declarativity and efficiency thanks to constraint solvers implemented for specific domains. Value withdrawal explanations have been efficiently used in several constraints programming environments but there does not exist any formalization of them. This paper is an attempt to fill this lack. Furthermore, we hope that this theoretical tool could help to validate some programming environments. A value withdrawal explanation is a tree describing the withdrawal of a value during a domain reduction by local consistency notions and labeling. Domain reduction is formalized by a search tree using two kinds of operators: operators for local consistency notions and operators for labeling. These operators are defined by sets of rules. Proof trees are built with respect to these rules. For each removed value, there exists such a proof tree which is the withdrawal explanation of this value.<|reference_end|> | arxiv | @article{lesaint2002value,
title={Value withdrawal explanations: a theoretical tool for programming
environments},
author={Willy Lesaint},
journal={arXiv preprint arXiv:cs/0207050},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207050},
primaryClass={cs.SE}
} | lesaint2002value |
arxiv-670664 | cs/0207051 | Exporting Prolog source code | <|reference_start|>Exporting Prolog source code: In this paper we present a simple source code configuration tool. ExLibris operates on libraries and can be used to extract from local libraries all code relevant to a particular project. Our approach is not designed to address problems arising in code production lines, but rather, to support the needs of individual or small teams of researchers who wish to communicate their Prolog programs. In the process, we also wish to accommodate and encourage the writing of reusable code. Moreover, we support and propose ways of dealing with issues arising in the development of code that can be run on a variety of like-minded Prolog systems. With consideration to these aims we have made the following decisions: (i) support file-based source development, (ii) require minimal program transformation, (iii) target simplicity of usage, and (iv) introduce minimum number of new primitives.<|reference_end|> | arxiv | @article{angelopoulos2002exporting,
title={Exporting Prolog source code},
author={Nicos Angelopoulos},
journal={arXiv preprint arXiv:cs/0207051},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207051},
primaryClass={cs.SE}
} | angelopoulos2002exporting |
arxiv-670665 | cs/0207052 | Proceedings of the 12th International Workshop on Logic Programming Environments | <|reference_start|>Proceedings of the 12th International Workshop on Logic Programming Environments: The twelfth Workshop on Logic Programming Environments, WLPE 2002, is one in a series of international workshops held in the topic area. The workshops facilitate the exchange ideas and results among researchers and system developers on all aspects of environments for logic programming. Relevant topics for these workshops include user interfaces, human engineering, execution visualization, development tools, providing for new paradigms, and interfacing to language system tools and external systems. This twelfth workshop held in Copenhaguen. It follows the successful eleventh Workshop on Logic Programming Environments held in Cyprus in December, 2001. WLPE 2002 features ten presentations. The presentations involve, in some way, constraint logic programming, object-oriented programming and abstract interpretation. Topics areas addressed include tools for software development, execution visualization, software maintenance, instructional aids. This workshop was a post-conference workshop at ICLP 2002. Alexandre Tessier, Program Chair, WLPE 2002, June 2002.<|reference_end|> | arxiv | @article{tessier2002proceedings,
title={Proceedings of the 12th International Workshop on Logic Programming
Environments},
author={Alexandre Tessier},
journal={arXiv preprint arXiv:cs/0207052},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207052},
primaryClass={cs.SE}
} | tessier2002proceedings |
arxiv-670666 | cs/0207053 | An Architecture for Making Object-Oriented Systems Available from Prolog | <|reference_start|>An Architecture for Making Object-Oriented Systems Available from Prolog: It is next to impossible to develop real-life applications in just pure Prolog. With XPCE we realised a mechanism for integrating Prolog with an external object-oriented system that turns this OO system into a natural extension to Prolog. We describe the design and how it can be applied to other external OO systems.<|reference_end|> | arxiv | @article{wielemaker2002an,
title={An Architecture for Making Object-Oriented Systems Available from Prolog},
author={Jan Wielemaker, Anjo Anjewierden},
journal={arXiv preprint arXiv:cs/0207053},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207053},
primaryClass={cs.SE}
} | wielemaker2002an |
arxiv-670667 | cs/0207054 | Enhancing Usefulness of Declarative Programming Frameworks through Complete Integration | <|reference_start|>Enhancing Usefulness of Declarative Programming Frameworks through Complete Integration: The Gisela framework for declarative programming was developed with the specific aim of providing a tool that would be useful for knowledge representation and reasoning within real-world applications. To achieve this, a complete integration into an object-oriented application development environment was used. The framework and methodology developed provide two alternative application programming interfaces (APIs): Programming using objects or programming using a traditional equational declarative style. In addition to providing complete integration, Gisela also allows extensions and modifications due to the general computation model and well-defined APIs. We give a brief overview of the declarative model underlying Gisela and we present the methodology proposed for building applications together with some real examples.<|reference_end|> | arxiv | @article{falkman2002enhancing,
title={Enhancing Usefulness of Declarative Programming Frameworks through
Complete Integration},
author={Goran Falkman, Olof Torgersson},
journal={arXiv preprint arXiv:cs/0207054},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207054},
primaryClass={cs.SE}
} | falkman2002enhancing |
arxiv-670668 | cs/0207055 | The Rise and Fall of the Church-Turing Thesis | <|reference_start|>The Rise and Fall of the Church-Turing Thesis: The essay consists of three parts. In the first part, it is explained how theory of algorithms and computations evaluates the contemporary situation with computers and global networks. In the second part, it is demonstrated what new perspectives this theory opens through its new direction that is called theory of super-recursive algorithms. These algorithms have much higher computing power than conventional algorithmic schemes. In the third part, we explicate how realization of what this theory suggests might influence life of people in future. It is demonstrated that now the theory is far ahead computing practice and practice has to catch up with the theory. We conclude with a comparison of different approaches to the development of information technology.<|reference_end|> | arxiv | @article{burgin2002the,
title={The Rise and Fall of the Church-Turing Thesis},
author={Mark Burgin},
journal={arXiv preprint arXiv:cs/0207055},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207055},
primaryClass={cs.CC cs.AI}
} | burgin2002the |
arxiv-670669 | cs/0207056 | Modeling Complex Domains of Actions and Change | <|reference_start|>Modeling Complex Domains of Actions and Change: This paper studies the problem of modeling complex domains of actions and change within high-level action description languages. We investigate two main issues of concern: (a) can we represent complex domains that capture together different problems such as ramifications, non-determinism and concurrency of actions, at a high-level, close to the given natural ontology of the problem domain and (b) what features of such a representation can affect, and how, its computational behaviour. The paper describes the main problems faced in this representation task and presents the results of an empirical study, carried out through a series of controlled experiments, to analyze the computational performance of reasoning in these representations. The experiments compare different representations obtained, for example, by changing the basic ontology of the domain or by varying the degree of use of indirect effect laws through domain constraints. This study has helped to expose the main sources of computational difficulty in the reasoning and suggest some methodological guidelines for representing complex domains. Although our work has been carried out within one particular high-level description language, we believe that the results, especially those that relate to the problems of representation, are independent of the specific modeling language.<|reference_end|> | arxiv | @article{kakas2002modeling,
title={Modeling Complex Domains of Actions and Change},
author={Antonis Kakas and Loizos Michael},
journal={arXiv preprint arXiv:cs/0207056},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207056},
primaryClass={cs.AI}
} | kakas2002modeling |
arxiv-670670 | cs/0207057 | Physical Traces: Quantum vs Classical Information Processing | <|reference_start|>Physical Traces: Quantum vs Classical Information Processing: Within the Geometry of Interaction (GoI) paradigm, we present a setting that enables qualitative differences between classical and quantum processes to be explored. The key construction is the physical interpretation/realization of the traced monoidal categories of finite-dimensional vector spaces with tensor product as monoidal structure and of finite sets and relations with Cartesian product as monoidal structure, both of them providing a so-called wave-style GoI. The developments in this paper reveal that envisioning state update due to quantum measurement as a process provides a powerful tool for developing high-level approaches to quantum information processing.<|reference_end|> | arxiv | @article{abramsky2002physical,
title={Physical Traces: Quantum vs. Classical Information Processing},
author={Samson Abramsky and Bob Coecke},
journal={Electronic Notes in Theoretical Computer Science 69 (2003)},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207057},
primaryClass={cs.CG cs.LO math.CT quant-ph}
} | abramsky2002physical |
arxiv-670671 | cs/0207058 | Question Answering over Unstructured Data without Domain Restrictions | <|reference_start|>Question Answering over Unstructured Data without Domain Restrictions: Information needs are naturally represented as questions. Automatic Natural-Language Question Answering (NLQA) has only recently become a practical task on a larger scale and without domain constraints. This paper gives a brief introduction to the field, its history and the impact of systematic evaluation competitions. It is then demonstrated that an NLQA system for English can be built and evaluated in a very short time using off-the-shelf parsers and thesauri. The system is based on Robust Minimal Recursion Semantics (RMRS) and is portable with respect to the parser used as a frontend. It applies atomic term unification supported by question classification and WordNet lookup for semantic similarity matching of parsed question representation and free text.<|reference_end|> | arxiv | @article{leidner2002question,
title={Question Answering over Unstructured Data without Domain Restrictions},
author={Jochen L. Leidner},
journal={arXiv preprint arXiv:cs/0207058},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207058},
primaryClass={cs.CL cs.IR}
} | leidner2002question |
arxiv-670672 | cs/0207059 | Value Based Argumentation Frameworks | <|reference_start|>Value Based Argumentation Frameworks: This paper introduces the notion of value-based argumentation frameworks, an extension of the standard argumentation frameworks proposed by Dung, which are able toshow how rational decision is possible in cases where arguments derive their force from the social values their acceptance would promote.<|reference_end|> | arxiv | @article{bench-capon2002value,
title={Value Based Argumentation Frameworks},
author={T. Bench-Capon},
journal={arXiv preprint arXiv:cs/0207059},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207059},
primaryClass={cs.AI}
} | bench-capon2002value |
arxiv-670673 | cs/0207060 | Preferred well-founded semantics for logic programming by alternating fixpoints: Preliminary report | <|reference_start|>Preferred well-founded semantics for logic programming by alternating fixpoints: Preliminary report: We analyze the problem of defining well-founded semantics for ordered logic programs within a general framework based on alternating fixpoint theory. We start by showing that generalizations of existing answer set approaches to preference are too weak in the setting of well-founded semantics. We then specify some informal yet intuitive criteria and propose a semantical framework for preference handling that is more suitable for defining well-founded semantics for ordered logic programs. The suitability of the new approach is convinced by the fact that many attractive properties are satisfied by our semantics. In particular, our semantics is still correct with respect to various existing answer sets semantics while it successfully overcomes the weakness of their generalization to well-founded semantics. Finally, we indicate how an existing preferred well-founded semantics can be captured within our semantical framework.<|reference_end|> | arxiv | @article{schaub2002preferred,
title={Preferred well-founded semantics for logic programming by alternating
fixpoints: Preliminary report},
author={Torsten Schaub, Kewen Wang},
journal={arXiv preprint arXiv:cs/0207060},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207060},
primaryClass={cs.AI}
} | schaub2002preferred |
arxiv-670674 | cs/0207061 | Linear-Time Pointer-Machine Algorithms for Path-Evaluation Problems on Trees and Graphs | <|reference_start|>Linear-Time Pointer-Machine Algorithms for Path-Evaluation Problems on Trees and Graphs: We present algorithms that run in linear time on pointer machines for a collection of problems, each of which either directly or indirectly requires the evaluation of a function defined on paths in a tree. These problems previously had linear-time algorithms but only for random-access machines (RAMs); the best pointer-machine algorithms were super-linear by an inverse-Ackermann-function factor. Our algorithms are also simpler, in some cases substantially, than the previous linear-time RAM algorithms. Our improvements come primarily from three new ideas: a refined analysis of path compression that gives a linear bound if the compressions favor certain nodes, a pointer-based radix sort as a replacement for table-based methods, and a more careful partitioning of a tree into easily managed parts. Our algorithms compute nearest common ancestors off-line, verify and construct minimum spanning trees, do interval analysis on a flowgraph, find the dominators of a flowgraph, and build the component tree of a weighted tree.<|reference_end|> | arxiv | @article{buchsbaum2002linear-time,
title={Linear-Time Pointer-Machine Algorithms for Path-Evaluation Problems on
Trees and Graphs},
author={Adam L. Buchsbaum, Loukas Georgiadis, Haim Kaplan, Anne Rogers, Robert
E. Tarjan, and Jeffery R. Westbrook},
journal={arXiv preprint arXiv:cs/0207061},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207061},
primaryClass={cs.DS}
} | buchsbaum2002linear-time |
arxiv-670675 | cs/0207062 | Some addenda on distance function wavelets | <|reference_start|>Some addenda on distance function wavelets: This report will add some supplements to the recently finished report series on the distance function wavelets (DFW). First, we define the general distance in terms of the Riesz potential, and then, the distance function Abel wavelets are derived via the fractional integral and Laplacian. Second, the DFW Weyl transform is found to be a shifted Laplace potential DFW. The DFW Radon transform is also presented. Third, we present a conjecture on truncation error formula of the multiple reciprocity Laplace DFW series and discuss its error distributions in terms of node density distributions. Forth, we point out that the Hermite distance function interpolation can be used to replace overlapping in the domain decomposition in order to produce sparse matrix. Fifth, the shape parameter is explained as a virtual extra axis contribution in terms of the MQ-type Possion kernel. The report is concluded with some remarks on a range of other issues.<|reference_end|> | arxiv | @article{chen2002some,
title={Some addenda on distance function wavelets},
author={W. Chen},
journal={arXiv preprint arXiv:cs/0207062},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207062},
primaryClass={cs.NA cs.CE}
} | chen2002some |
arxiv-670676 | cs/0207063 | Parallel Delaunay Refinement: Algorithms and Analyses | <|reference_start|>Parallel Delaunay Refinement: Algorithms and Analyses: In this paper, we analyze the complexity of natural parallelizations of Delaunay refinement methods for mesh generation. The parallelizations employ a simple strategy: at each iteration, they choose a set of ``independent'' points to insert into the domain, and then update the Delaunay triangulation. We show that such a set of independent points can be constructed efficiently in parallel and that the number of iterations needed is $O(\log^2(L/s))$, where $L$ is the diameter of the domain, and $s$ is the smallest edge in the output mesh. In addition, we show that the insertion of each independent set of points can be realized sequentially by Ruppert's method in two dimensions and Shewchuk's in three dimensions. Therefore, our parallel Delaunay refinement methods provide the same element quality and mesh size guarantees as the sequential algorithms in both two and three dimensions. For quasi-uniform meshes, such as those produced by Chew's method, we show that the number of iterations can be reduced to $O(\log(L/s))$. To the best of our knowledge, these are the first provably polylog$(L/s)$ parallel time Delaunay meshing algorithms that generate well-shaped meshes of size optimal to within a constant.<|reference_end|> | arxiv | @article{spielman2002parallel,
title={Parallel Delaunay Refinement: Algorithms and Analyses},
author={Dan A. Spielman, Shang-hua Teng, and Alper Ungor},
journal={arXiv preprint arXiv:cs/0207063},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207063},
primaryClass={cs.CG}
} | spielman2002parallel |
arxiv-670677 | cs/0207064 | Interpolation Theorems for Nonmonotonic Reasoning Systems | <|reference_start|>Interpolation Theorems for Nonmonotonic Reasoning Systems: Craig's interpolation theorem (Craig 1957) is an important theorem known for propositional logic and first-order logic. It says that if a logical formula $\beta$ logically follows from a formula $\alpha$, then there is a formula $\gamma$, including only symbols that appear in both $\alpha,\beta$, such that $\beta$ logically follows from $\gamma$ and $\gamma$ logically follows from $\alpha$. Such theorems are important and useful for understanding those logics in which they hold as well as for speeding up reasoning with theories in those logics. In this paper we present interpolation theorems in this spirit for three nonmonotonic systems: circumscription, default logic and logic programs with the stable models semantics (a.k.a. answer set semantics). These results give us better understanding of those logics, especially in contrast to their nonmonotonic characteristics. They suggest that some \emph{monotonicity} principle holds despite the failure of classic monotonicity for these logics. Also, they sometimes allow us to use methods for the decomposition of reasoning for these systems, possibly increasing their applicability and tractability. Finally, they allow us to build structured representations that use those logics.<|reference_end|> | arxiv | @article{amir2002interpolation,
title={Interpolation Theorems for Nonmonotonic Reasoning Systems},
author={Eyal Amir},
journal={arXiv preprint arXiv:cs/0207064},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207064},
primaryClass={cs.AI cs.LO}
} | amir2002interpolation |
arxiv-670678 | cs/0207065 | Embedding Default Logic in Propositional Argumentation Systems | <|reference_start|>Embedding Default Logic in Propositional Argumentation Systems: In this paper we present a transformation of finite propositional default theories into so-called propositional argumentation systems. This transformation allows to characterize all notions of Reiter's default logic in the framework of argumentation systems. As a consequence, computing extensions, or determining wether a given formula belongs to one extension or all extensions can be answered without leaving the field of classical propositional logic. The transformation proposed is linear in the number of defaults.<|reference_end|> | arxiv | @article{berzati2002embedding,
title={Embedding Default Logic in Propositional Argumentation Systems},
author={Dritan Berzati, Bernhard Anrig and Juerg Kohlas},
journal={arXiv preprint arXiv:cs/0207065},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207065},
primaryClass={cs.AI}
} | berzati2002embedding |
arxiv-670679 | cs/0207066 | Polynomial Time Data Reduction for Dominating Set | <|reference_start|>Polynomial Time Data Reduction for Dominating Set: Dealing with the NP-complete Dominating Set problem on undirected graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achieved by two simple and easy to implement reduction rules. Moreover, having implemented our reduction rules, first experiments indicate the impressive practical potential of these rules. Thus, this work seems to open up a new and prospective way how to cope with one of the most important problems in graph theory and combinatorial optimization.<|reference_end|> | arxiv | @article{alber2002polynomial,
title={Polynomial Time Data Reduction for Dominating Set},
author={Jochen Alber (1), Michael R. Fellows (2), Rolf Niedermeier (1) ((1)
Universitaet Tuebingen Germany, (2) University of Newcastle Australia)},
journal={arXiv preprint arXiv:cs/0207066},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207066},
primaryClass={cs.DS}
} | alber2002polynomial |
arxiv-670680 | cs/0207067 | On the existence and multiplicity of extensions in dialectical argumentation | <|reference_start|>On the existence and multiplicity of extensions in dialectical argumentation: In the present paper, the existence and multiplicity problems of extensions are addressed. The focus is on extension of the stable type. The main result of the paper is an elegant characterization of the existence and multiplicity of extensions in terms of the notion of dialectical justification, a close cousin of the notion of admissibility. The characterization is given in the context of the particular logic for dialectical argumentation DEFLOG. The results are of direct relevance for several well-established models of defeasible reasoning (like default logic, logic programming and argumentation frameworks), since elsewhere dialectical argumentation has been shown to have close formal connections with these models.<|reference_end|> | arxiv | @article{verheij2002on,
title={On the existence and multiplicity of extensions in dialectical
argumentation},
author={Bart Verheij},
journal={Verheij, Bart (2002). On the existence and the multiplicity of
extensions in dialectical argumentation. Proceedings of the 9th International
Workshop on Non-Monotonic Reasoning (NMR'2002) (eds. S. Benferhat and E.
Giunchiglia), pp. 416-425. Toulouse},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207067},
primaryClass={cs.AI}
} | verheij2002on |
arxiv-670681 | cs/0207068 | Knuth-Bendix constraint solving is NP-complete | <|reference_start|>Knuth-Bendix constraint solving is NP-complete: We show the NP-completeness of the existential theory of term algebras with the Knuth-Bendix order by giving a nondeterministic polynomial-time algorithm for solving Knuth-Bendix ordering constraints.<|reference_end|> | arxiv | @article{korovin2002knuth-bendix,
title={Knuth-Bendix constraint solving is NP-complete},
author={Konstantin Korovin and Andrei Voronkov},
journal={arXiv preprint arXiv:cs/0207068},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207068},
primaryClass={cs.LO}
} | korovin2002knuth-bendix |
arxiv-670682 | cs/0207069 | Small Large-Scale Wireless Networks: Mobility-Assisted Resource Discovery | <|reference_start|>Small Large-Scale Wireless Networks: Mobility-Assisted Resource Discovery: In this study, the concept of small worlds is investigated in the context of large-scale wireless ad hoc and sensor networks. Wireless networks are spatial graphs that are usually much more clustered than random networks and have much higher path length characteristics. We observe that by adding only few random links, path length of wireless networks can be reduced drastically without affecting clustering. What is even more interesting is that such links need not be formed randomly but may be confined to a limited number of hops between the connected nodes. This has an important practical implication, as now we can introduce a distributed algorithm in large-scale wireless networks, based on what we call contacts, to improve the performance of resource discovery in such networks, without resorting to global flooding. We propose new contact-based protocols for adding logical short cuts in wireless networks efficiently. The new protocols take advantage of mobility in order to increase reachability of the search. We study the performance of our proposed contact-based architecture, and clarify the context in which large-scale wireless networks can be turned into small world networks.<|reference_end|> | arxiv | @article{helmy2002small,
title={Small Large-Scale Wireless Networks: Mobility-Assisted Resource
Discovery},
author={Ahmed Helmy},
journal={arXiv preprint arXiv:cs/0207069},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207069},
primaryClass={cs.NI}
} | helmy2002small |
arxiv-670683 | cs/0207070 | A continuation semantics of interrogatives that accounts for Baker's ambiguity | <|reference_start|>A continuation semantics of interrogatives that accounts for Baker's ambiguity: Wh-phrases in English can appear both raised and in-situ. However, only in-situ wh-phrases can take semantic scope beyond the immediately enclosing clause. I present a denotational semantics of interrogatives that naturally accounts for these two properties. It neither invokes movement or economy, nor posits lexical ambiguity between raised and in-situ occurrences of the same wh-phrase. My analysis is based on the concept of continuations. It uses a novel type system for higher-order continuations to handle wide-scope wh-phrases while remaining strictly compositional. This treatment sheds light on the combinatorics of interrogatives as well as other kinds of so-called A'-movement.<|reference_end|> | arxiv | @article{shan2002a,
title={A continuation semantics of interrogatives that accounts for Baker's
ambiguity},
author={Chung-chieh Shan (Harvard University)},
journal={Proceedings of SALT XII: Semantics and Linguistic Theory, ed.
Brendan Jackson, 246-265 (2002)},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207070},
primaryClass={cs.CL cs.PL}
} | shan2002a |
arxiv-670684 | cs/0207071 | A Polynomial Translation of Logic Programs with Nested Expressions into Disjunctive Logic Programs: Preliminary Report | <|reference_start|>A Polynomial Translation of Logic Programs with Nested Expressions into Disjunctive Logic Programs: Preliminary Report: Nested logic programs have recently been introduced in order to allow for arbitrarily nested formulas in the heads and the bodies of logic program rules under the answer sets semantics. Nested expressions can be formed using conjunction, disjunction, as well as the negation as failure operator in an unrestricted fashion. This provides a very flexible and compact framework for knowledge representation and reasoning. Previous results show that nested logic programs can be transformed into standard (unnested) disjunctive logic programs in an elementary way, applying the negation as failure operator to body literals only. This is of great practical relevance since it allows us to evaluate nested logic programs by means of off-the-shelf disjunctive logic programming systems, like DLV. However, it turns out that this straightforward transformation results in an exponential blow-up in the worst-case, despite the fact that complexity results indicate that there is a polynomial translation among both formalisms. In this paper, we take up this challenge and provide a polynomial translation of logic programs with nested expressions into disjunctive logic programs. Moreover, we show that this translation is modular and (strongly) faithful. We have implemented both the straightforward as well as our advanced transformation; the resulting compiler serves as a front-end to DLV and is publicly available on the Web.<|reference_end|> | arxiv | @article{pearce2002a,
title={A Polynomial Translation of Logic Programs with Nested Expressions into
Disjunctive Logic Programs: Preliminary Report},
author={David Pearce, Vladimir Sarsakov, Torsten Schaub, Hans Tompits, Stefan
Woltran},
journal={arXiv preprint arXiv:cs/0207071},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207071},
primaryClass={cs.AI cs.LO}
} | pearce2002a |
arxiv-670685 | cs/0207072 | Complexity of Nested Circumscription and Nested Abnormality Theories | <|reference_start|>Complexity of Nested Circumscription and Nested Abnormality Theories: The need for a circumscriptive formalism that allows for simple yet elegant modular problem representation has led Lifschitz (AIJ, 1995) to introduce nested abnormality theories (NATs) as a tool for modular knowledge representation, tailored for applying circumscription to minimize exceptional circumstances. Abstracting from this particular objective, we propose L_{CIRC}, which is an extension of generic propositional circumscription by allowing propositional combinations and nesting of circumscriptive theories. As shown, NATs are naturally embedded into this language, and are in fact of equal expressive capability. We then analyze the complexity of L_{CIRC} and NATs, and in particular the effect of nesting. The latter is found to be a source of complexity, which climbs the Polynomial Hierarchy as the nesting depth increases and reaches PSPACE-completeness in the general case. We also identify meaningful syntactic fragments of NATs which have lower complexity. In particular, we show that the generalization of Horn circumscription in the NAT framework remains CONP-complete, and that Horn NATs without fixed letters can be efficiently transformed into an equivalent Horn CNF, which implies polynomial solvability of principal reasoning tasks. Finally, we also study extensions of NATs and briefly address the complexity in the first-order case. Our results give insight into the ``cost'' of using L_{CIRC} (resp. NATs) as a host language for expressing other formalisms such as action theories, narratives, or spatial theories.<|reference_end|> | arxiv | @article{cadoli2002complexity,
title={Complexity of Nested Circumscription and Nested Abnormality Theories},
author={Marco Cadoli, Thomas Eiter, and Georg Gottlob},
journal={arXiv preprint arXiv:cs/0207072},
year={2002},
number={INFSYS RR-1843-02-10, Institut f. Informationssysteme, TU Vienna,
2002},
archivePrefix={arXiv},
eprint={cs/0207072},
primaryClass={cs.AI cs.CC cs.LO}
} | cadoli2002complexity |
arxiv-670686 | cs/0207073 | Reinforcing Reachable Routes | <|reference_start|>Reinforcing Reachable Routes: This paper studies the evaluation of routing algorithms from the perspective of reachability routing, where the goal is to determine all paths between a sender and a receiver. Reachability routing is becoming relevant with the changing dynamics of the Internet and the emergence of low-bandwidth wireless/ad-hoc networks. We make the case for reinforcement learning as the framework of choice to realize reachability routing, within the confines of the current Internet infrastructure. The setting of the reinforcement learning problem offers several advantages, including loop resolution, multi-path forwarding capability, cost-sensitive routing, and minimizing state overhead, while maintaining the incremental spirit of current backbone routing algorithms. We identify research issues in reinforcement learning applied to the reachability routing problem to achieve a fluid and robust backbone routing framework. The paper is targeted toward practitioners seeking to implement a reachability routing algorithm.<|reference_end|> | arxiv | @article{varadarajan2002reinforcing,
title={Reinforcing Reachable Routes},
author={Srinidhi Varadarajan and Naren Ramakrishnan},
journal={arXiv preprint arXiv:cs/0207073},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207073},
primaryClass={cs.NI cs.AI}
} | varadarajan2002reinforcing |
arxiv-670687 | cs/0207074 | Paraconsistency of Interactive Computation | <|reference_start|>Paraconsistency of Interactive Computation: The goal of computational logic is to allow us to model computation as well as to reason about it. We argue that a computational logic must be able to model interactive computation. We show that first-order logic cannot model interactive computation due to the incompleteness of interaction. We show that interactive computation is necessarily paraconsistent, able to model both a fact and its negation, due to the role of the world (environment) in determining the course of the computation. We conclude that paraconsistency is a necessary property for a logic that can model interactive computation.<|reference_end|> | arxiv | @article{goldin2002paraconsistency,
title={Paraconsistency of Interactive Computation},
author={Dina Goldin (U. of Connecticut), Peter Wegner (Brown U.)},
journal={arXiv preprint arXiv:cs/0207074},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207074},
primaryClass={cs.LO}
} | goldin2002paraconsistency |
arxiv-670688 | cs/0207075 | Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic Logic and Probabilistic Logic under Coherence | <|reference_start|>Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic Logic and Probabilistic Logic under Coherence: Recently, it has been shown that probabilistic entailment under coherence is weaker than model-theoretic probabilistic entailment. Moreover, probabilistic entailment under coherence is a generalization of default entailment in System P. In this paper, we continue this line of research by presenting probabilistic generalizations of more sophisticated notions of classical default entailment that lie between model-theoretic probabilistic entailment and probabilistic entailment under coherence. That is, the new formalisms properly generalize their counterparts in classical default reasoning, they are weaker than model-theoretic probabilistic entailment, and they are stronger than probabilistic entailment under coherence. The new formalisms are useful especially for handling probabilistic inconsistencies related to conditioning on zero events. They can also be applied for probabilistic belief revision. More generally, in the same spirit as a similar previous paper, this paper sheds light on exciting new formalisms for probabilistic reasoning beyond the well-known standard ones.<|reference_end|> | arxiv | @article{lukasiewicz2002nonmonotonic,
title={Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic
Logic and Probabilistic Logic under Coherence},
author={Thomas Lukasiewicz},
journal={arXiv preprint arXiv:cs/0207075},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207075},
primaryClass={cs.AI}
} | lukasiewicz2002nonmonotonic |
arxiv-670689 | cs/0207076 | Introducing Dynamic Behavior in Amalgamated Knowledge Bases | <|reference_start|>Introducing Dynamic Behavior in Amalgamated Knowledge Bases: The problem of integrating knowledge from multiple and heterogeneous sources is a fundamental issue in current information systems. In order to cope with this problem, the concept of mediator has been introduced as a software component providing intermediate services, linking data resources and application programs, and making transparent the heterogeneity of the underlying systems. In designing a mediator architecture, we believe that an important aspect is the definition of a formal framework by which one is able to model integration according to a declarative style. To this purpose, the use of a logical approach seems very promising. Another important aspect is the ability to model both static integration aspects, concerning query execution, and dynamic ones, concerning data updates and their propagation among the various data sources. Unfortunately, as far as we know, no formal proposals for logically modeling mediator architectures both from a static and dynamic point of view have already been developed. In this paper, we extend the framework for amalgamated knowledge bases, presented by Subrahmanian, to deal with dynamic aspects. The language we propose is based on the Active U-Datalog language, and extends it with annotated logic and amalgamation concepts. We model the sources of information and the mediator (also called supervisor) as Active U-Datalog deductive databases, thus modeling queries, transactions, and active rules, interpreted according to the PARK semantics. By using active rules, the system can efficiently perform update propagation among different databases. The result is a logical environment, integrating active and deductive rules, to perform queries and update propagation in an heterogeneous mediated framework.<|reference_end|> | arxiv | @article{bertino2002introducing,
title={Introducing Dynamic Behavior in Amalgamated Knowledge Bases},
author={Elisa Bertino, Barbara Catania, and Paolo Perlasca},
journal={arXiv preprint arXiv:cs/0207076},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207076},
primaryClass={cs.PL cs.DB cs.LO}
} | bertino2002introducing |
arxiv-670690 | cs/0207077 | Libra: An Economy driven Job Scheduling System for Clusters | <|reference_start|>Libra: An Economy driven Job Scheduling System for Clusters: Clusters of computers have emerged as mainstream parallel and distributed platforms for high-performance, high-throughput and high-availability computing. To enable effective resource management on clusters, numerous cluster managements systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance, but not on improving the value of utility delivered to the user and quality of services. This paper presents a new computational economy driven scheduling system called Libra, which has been designed to support allocation of resources based on the users? quality of service (QoS) requirements. It is intended to work as an add-on to the existing queuing and resource management system. The first version has been implemented as a plugin scheduler to the PBS (Portable Batch System) system. The scheduler offers market-based economy driven service for managing batch jobs on clusters by scheduling CPU time according to user utility as determined by their budget and deadline rather than system performance considerations. The Libra scheduler ensures that both these constraints are met within an O(n) run-time. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline and budget based proportional resource allocation strategy improves the utility of the system and user satisfaction as compared to system-centric scheduling strategies.<|reference_end|> | arxiv | @article{sherwani2002libra:,
title={Libra: An Economy driven Job Scheduling System for Clusters},
author={Jahanzeb Sherwani, Nosheen Ali, Nausheen Lotia, Zahra Hayat, and
Rajkumar Buyya},
journal={arXiv preprint arXiv:cs/0207077},
year={2002},
number={Technical Report, July 2002, Dept. of Computer Science and Software
Engineering, The University of Melbourne},
archivePrefix={arXiv},
eprint={cs/0207077},
primaryClass={cs.DC cs.DS}
} | sherwani2002libra: |
arxiv-670691 | cs/0207078 | Randomized Approximation Schemes for Cuts and Flows in Capacitated Graphs | <|reference_start|>Randomized Approximation Schemes for Cuts and Flows in Capacitated Graphs: We improve on random sampling techniques for approximately solving problems that involve cuts and flows in graphs. We give a near-linear-time construction that transforms any graph on n vertices into an O(n\log n)-edge graph on the same vertices whose cuts have approximately the same value as the original graph's. In this new graph, for example, we can run the O(m^{3/2})-time maximum flow algorithm of Goldberg and Rao to find an s--t minimum cut in O(n^{3/2}) time. This corresponds to a (1+epsilon)-times minimum s--t cut in the original graph. In a similar way, we can approximate a sparsest cut to within O(log n) in O(n^2) time using a previous O(mn)-time algorithm. A related approach leads to a randomized divide and conquer algorithm producing an approximately maximum flow in O(m sqrt{n}) time.<|reference_end|> | arxiv | @article{benczur2002randomized,
title={Randomized Approximation Schemes for Cuts and Flows in Capacitated
Graphs},
author={Andras Benczur and David R. Karger},
journal={arXiv preprint arXiv:cs/0207078},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207078},
primaryClass={cs.DS cs.DM}
} | benczur2002randomized |
arxiv-670692 | cs/0207079 | On non-abelian homomorphic public-key cryptosystems | <|reference_start|>On non-abelian homomorphic public-key cryptosystems: An important problem of modern cryptography concerns secret public-key computations in algebraic structures. We construct homomorphic cryptosystems being (secret) epimorphisms f:G --> H, where G, H are (publically known) groups and H is finite. A letter of a message to be encrypted is an element h element of H, while its encryption g element of G is such that f(g)=h. A homomorphic cryptosystem allows one to perform computations (operating in a group G) with encrypted information (without knowing the original message over H). In this paper certain homomorphic cryptosystems are constructed for the first time for non-abelian groups H (earlier, homomorphic cryptosystems were known only in the Abelian case). In fact, we present such a system for any solvable (fixed) group H.<|reference_end|> | arxiv | @article{grigoriev2002on,
title={On non-abelian homomorphic public-key cryptosystems},
author={D. Grigoriev and I. Ponomarenko},
journal={arXiv preprint arXiv:cs/0207079},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207079},
primaryClass={cs.CR}
} | grigoriev2002on |
arxiv-670693 | cs/0207080 | Public-key cryptography and invariant theory | <|reference_start|>Public-key cryptography and invariant theory: Public-key cryptosystems are suggested based on invariants of groups. We give also an overview of the known cryptosystems which involve groups.<|reference_end|> | arxiv | @article{grigoriev2002public-key,
title={Public-key cryptography and invariant theory},
author={D. Grigoriev},
journal={arXiv preprint arXiv:cs/0207080},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207080},
primaryClass={cs.CR}
} | grigoriev2002public-key |
arxiv-670694 | cs/0207081 | Moebius-Invariant Natural Neighbor Interpolation | <|reference_start|>Moebius-Invariant Natural Neighbor Interpolation: We propose an interpolation method that is invariant under Moebius transformations; that is, interpolation followed by transformation gives the same result as transformation followed by interpolation. The method uses natural (Delaunay) neighbors, but weights neighbors according to angles formed by Delaunay circles.<|reference_end|> | arxiv | @article{bern2002moebius-invariant,
title={Moebius-Invariant Natural Neighbor Interpolation},
author={Marshall Bern, David Eppstein},
journal={arXiv preprint arXiv:cs/0207081},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207081},
primaryClass={cs.CG}
} | bern2002moebius-invariant |
arxiv-670695 | cs/0207082 | Dynamic Generators of Topologically Embedded Graphs | <|reference_start|>Dynamic Generators of Topologically Embedded Graphs: We provide a data structure for maintaining an embedding of a graph on a surface (represented combinatorially by a permutation of edges around each vertex) and computing generators of the fundamental group of the surface, in amortized time O(log n + log g(log log g)^3) per update on a surface of genus g; we can also test orientability of the surface in the same time, and maintain the minimum and maximum spanning tree of the graph in time O(log n + log^4 g) per update. Our data structure allows edge insertion and deletion as well as the dual operations; these operations may implicitly change the genus of the embedding surface. We apply similar ideas to improve the constant factor in a separator theorem for low-genus graphs, and to find in linear time a tree-decomposition of low-genus low-diameter graphs.<|reference_end|> | arxiv | @article{eppstein2002dynamic,
title={Dynamic Generators of Topologically Embedded Graphs},
author={David Eppstein},
journal={arXiv preprint arXiv:cs/0207082},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207082},
primaryClass={cs.DS}
} | eppstein2002dynamic |
arxiv-670696 | cs/0207083 | Evaluating Defaults | <|reference_start|>Evaluating Defaults: We seek to find normative criteria of adequacy for nonmonotonic logic similar to the criterion of validity for deductive logic. Rather than stipulating that the conclusion of an inference be true in all models in which the premises are true, we require that the conclusion of a nonmonotonic inference be true in ``almost all'' models of a certain sort in which the premises are true. This ``certain sort'' specification picks out the models that are relevant to the inference, taking into account factors such as specificity and vagueness, and previous inferences. The frequencies characterizing the relevant models reflect known frequencies in our actual world. The criteria of adequacy for a default inference can be extended by thresholding to criteria of adequacy for an extension. We show that this avoids the implausibilities that might otherwise result from the chaining of default inferences. The model proportions, when construed in terms of frequencies, provide a verifiable grounding of default rules, and can become the basis for generating default rules from statistics.<|reference_end|> | arxiv | @article{kyburg2002evaluating,
title={Evaluating Defaults},
author={Henry E. Kyburg Jr. and Choh Man Teng},
journal={arXiv preprint arXiv:cs/0207083},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207083},
primaryClass={cs.AI}
} | kyburg2002evaluating |
arxiv-670697 | cs/0207084 | Paraconsistent Reasoning via Quantified Boolean Formulas,I: Axiomatising Signed Systems | <|reference_start|>Paraconsistent Reasoning via Quantified Boolean Formulas,I: Axiomatising Signed Systems: Signed systems were introduced as a general, syntax-independent framework for paraconsistent reasoning, that is, non-trivialised reasoning from inconsistent information. In this paper, we show how the family of corresponding paraconsistent consequence relations can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. First, it furnishes an axiomatic specification of paraconsistent reasoning within the framework of signed systems. Second, this axiomatisation allows us to identify upper bounds for the complexity of the different signed consequence relations. We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms of paraconsistent reasoning by appeal to the existing system QUIP.<|reference_end|> | arxiv | @article{besnard2002paraconsistent,
title={Paraconsistent Reasoning via Quantified Boolean Formulas,I: Axiomatising
Signed Systems},
author={Philippe Besnard (1), Torsten Schaub (1), Hans Tompits (2), Stefan
Woltran (2) ((1) Institut f"ur Informatik, Universit"at Potsdam, (2)
Institut f"ur Informationssysteme, Technische Universit"at Wien)},
journal={arXiv preprint arXiv:cs/0207084},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207084},
primaryClass={cs.LO cs.CC}
} | besnard2002paraconsistent |
arxiv-670698 | cs/0207085 | Repairing Inconsistent Databases: A Model-Theoretic Approach and Abductive Reasoning | <|reference_start|>Repairing Inconsistent Databases: A Model-Theoretic Approach and Abductive Reasoning: In this paper we consider two points of views to the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to `repair' a database. We do so by characterizing the possibilities to `recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted to the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.<|reference_end|> | arxiv | @article{arieli2002repairing,
title={Repairing Inconsistent Databases: A Model-Theoretic Approach and
Abductive Reasoning},
author={Ofer Arieli (1), Marc Denecker (2), Bert Van Nuffelen (2), Maurice
Bruynooghe (2) ((1) The Academic College of Tel-Aviv, Israel (2) The Catholic
University of Leuven, Belgium)},
journal={arXiv preprint arXiv:cs/0207085},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207085},
primaryClass={cs.LO cs.DB}
} | arieli2002repairing |
arxiv-670699 | cs/0207086 | A Model-Theoretic Semantics for Defeasible Logic | <|reference_start|>A Model-Theoretic Semantics for Defeasible Logic: Defeasible logic is an efficient logic for defeasible reasoning. It is defined through a proof theory and, until now, has had no model theory. In this paper a model-theoretic semantics is given for defeasible logic. The logic is sound and complete with respect to the semantics. We also briefly outline how this approach extends to a wide range of defeasible logics.<|reference_end|> | arxiv | @article{maher2002a,
title={A Model-Theoretic Semantics for Defeasible Logic},
author={Michael J. Maher (Loyola University, Chicago)},
journal={arXiv preprint arXiv:cs/0207086},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207086},
primaryClass={cs.LO}
} | maher2002a |
arxiv-670700 | cs/0207087 | Axiomatic Aspects of Default Inference | <|reference_start|>Axiomatic Aspects of Default Inference: This paper studies axioms for nonmonotonic consequences from a semantics-based point of view, focusing on a class of mathematical structures for reasoning about partial information without a predefined syntax/logic. This structure is called a default structure. We study axioms for the nonmonotonic consequence relation derived from extensions as in Reiter's default logic, using skeptical reasoning, but extensions are now used for the construction of possible worlds in a default information structure. In previous work we showed that skeptical reasoning arising from default-extensions obeys a well-behaved set of axioms including the axiom of cautious cut. We show here that, remarkably, the converse is also true: any consequence relation obeying this set of axioms can be represented as one constructed from skeptical reasoning. We provide representation theorems to relate axioms for nonmonotonic consequence relation and properties about extensions, and provide one-to-one correspondence between nonmonotonic systems which satisfies the law of cautious monotony and default structures with unique extensions. Our results give a theoretical justification for a set of basic rules governing the update of nonmonotonic knowledge bases, demonstrating the derivation of them from the more concrete and primitive construction of extensions. It is also striking to note that proofs of the representation theorems show that only shallow extensions are necessary, in the sense that the number of iterations needed to achieve an extension is at most three. All of these developments are made possible by taking a more liberal view of consistency: consistency is a user defined predicate, satisfying some basic properties.<|reference_end|> | arxiv | @article{zhang2002axiomatic,
title={Axiomatic Aspects of Default Inference},
author={Guo-Qiang Zhang (Case Western Reserve University)},
journal={arXiv preprint arXiv:cs/0207087},
year={2002},
archivePrefix={arXiv},
eprint={cs/0207087},
primaryClass={cs.LO}
} | zhang2002axiomatic |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.