corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-671601 | cs/0312038 | Responsibility and blame: a structural-model approach | <|reference_start|>Responsibility and blame: a structural-model approach: Causality is typically treated as an all-or-nothing concept; either A is a cause of B or it is not. We extend the definition of causality introduced by Halpern and Pearl [2001] to take into account the degree of responsibility of A for B. For example, if someone wins an election 11--0, then each person who votes for him is less responsible for the victory than if he had won 6--5. We then define a notion of degree of blame, which takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent.<|reference_end|> | arxiv | @article{chockler2003responsibility,
title={Responsibility and blame: a structural-model approach},
author={Hana Chockler and Joseph Y. Halpern},
journal={arXiv preprint arXiv:cs/0312038},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312038},
primaryClass={cs.AI cs.LO}
} | chockler2003responsibility |
arxiv-671602 | cs/0312039 | Uniform test of algorithmic randomness over a general space | <|reference_start|>Uniform test of algorithmic randomness over a general space: The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These restrictions seem artificial. Some progress has been made to extend the theory to arbitrary Bernoulli distributions (by Martin-Löf), and to arbitrary distributions (by Levin). We recall the main ideas and problems of Levin's theory, and report further progress in the same framework. - We allow non-compact spaces (like the space of continuous functions, underlying the Brownian motion). - The uniform test (deficiency of randomness) d_P(x) (depending both on the outcome x and the measure P) should be defined in a general and natural way. - We see which of the old results survive: existence of universal tests, conservation of randomness, expression of tests in terms of description complexity, existence of a universal measure, expression of mutual information as "deficiency of independence". - The negative of the new randomness test is shown to be a generalization of complexity in continuous spaces; we show that the addition theorem survives. The paper's main contribution is introducing an appropriate framework for studying these questions and related ones (like statistics for a general family of distributions).<|reference_end|> | arxiv | @article{gacs2003uniform,
title={Uniform test of algorithmic randomness over a general space},
author={Peter Gacs},
journal={Theoretical Computer Science 341 (2005) 91-137},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312039},
primaryClass={cs.CC}
} | gacs2003uniform |
arxiv-671603 | cs/0312040 | Diagnostic reasoning with A-Prolog | <|reference_start|>Diagnostic reasoning with A-Prolog: In this paper we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.<|reference_end|> | arxiv | @article{balduccini2003diagnostic,
title={Diagnostic reasoning with A-Prolog},
author={Marcello Balduccini and Michael Gelfond},
journal={TPLP Vol 3(4&5) (2003) 425-461},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312040},
primaryClass={cs.AI}
} | balduccini2003diagnostic |
arxiv-671604 | cs/0312041 | Greedy Algorithms in Datalog | <|reference_start|>Greedy Algorithms in Datalog: In the design of algorithms, the greedy paradigm provides a powerful tool for efficiently solving classical computational problems, within the framework of procedural languages. However, expressing these algorithms within the declarative framework of logic-based languages has proven a difficult research challenge. In this paper, we extend the framework of Datalog-like languages to obtain simple declarative formulations for such problems, and propose effective implementation techniques to ensure computational complexities comparable to those of procedural formulations. These advances are achieved through the use of the "choice" construct, extended with preference annotations to effect the selection of alternative stable models and nondeterministic fixpoints. We show that, with suitable storage structures, the differential fixpoint computation of our programs matches the complexity of procedural algorithms in classical search and optimization problems.<|reference_end|> | arxiv | @article{greco2003greedy,
title={Greedy Algorithms in Datalog},
author={Sergio Greco and Carlo Zaniolo},
journal={Theory and Practice of Logic Programming, 1(4): 381-407, 2001},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312041},
primaryClass={cs.DB cs.AI}
} | greco2003greedy |
arxiv-671605 | cs/0312042 | Declarative Semantics for Active Rules | <|reference_start|>Declarative Semantics for Active Rules: In this paper we analyze declarative deterministic and non-deterministic semantics for active rules. In particular we consider several (partial) stable model semantics, previously defined for deductive rules, such as well-founded, max deterministic, unique total stable model, total stable model, and maximal stable model semantics. The semantics of an active program AP is given by first rewriting it into a deductive program P, then computing a model M defining the declarative semantics of P and, finally, applying `consistent' updates contained in M to the source database. The framework we propose permits a natural integration of deductive and active rules and can also be applied to queries with function symbols or to queries over infinite databases.<|reference_end|> | arxiv | @article{flesca2003declarative,
title={Declarative Semantics for Active Rules},
author={Sergio Flesca and Sergio Greco},
journal={Theory and Practice of Logic Programming, 1(1): 43-69, 2001},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312042},
primaryClass={cs.DB}
} | flesca2003declarative |
arxiv-671606 | cs/0312043 | On A Theory of Probabilistic Deductive Databases | <|reference_start|>On A Theory of Probabilistic Deductive Databases: We propose a framework for modeling uncertainty where both belief and doubt can be given independent, first-class status. We adopt probability theory as the mathematical formalism for manipulating uncertainty. An agent can express the uncertainty in her knowledge about a piece of information in the form of a confidence level, consisting of a pair of intervals of probability, one for each of her belief and doubt. The space of confidence levels naturally leads to the notion of a trilattice, similar in spirit to Fitting's bilattices. Intuitively, the points in such a trilattice can be ordered according to truth, information, or precision. We develop a framework for probabilistic deductive databases by associating confidence levels with the facts and rules of a classical deductive database. While the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases, our choice of semantics is based on the truth-ordering, which we find to be closest to the classical framework for deductive databases. In addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory, we also propose a proof procedure and prove it sound and complete. We show that while classical Datalog query programs have a polynomial time data complexity, certain query programs in the probabilistic deductive database framework do not even terminate on some input databases. We identify a large natural class of query programs of practical interest in our framework, and show that programs in this class possess polynomial time data complexity, i.e., not only do they terminate on every input database, they are guaranteed to do so in a number of steps polynomial in the input database size.<|reference_end|> | arxiv | @article{lakshmanan2003on,
title={On A Theory of Probabilistic Deductive Databases},
author={Laks V. S. Lakshmanan and Fereidoon Sadri},
journal={arXiv preprint arXiv:cs/0312043},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312043},
primaryClass={cs.DB}
} | lakshmanan2003on |
arxiv-671607 | cs/0312044 | Clustering by compression | <|reference_start|>Clustering by compression: We present a new method for clustering based on compression. The method doesn't use subject-specific features or background knowledge, and works as follows: First, we determine a universal similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files (singly and in pairwise concatenation). Second, we apply a hierarchical clustering method. The NCD is universal in that it is not restricted to a specific application area, and works across application area boundaries. A theoretical precursor, the normalized information distance, co-developed by one of the authors, is provably optimal but uses the non-computable notion of Kolmogorov complexity. We propose precise notions of similarity metric, normal compressor, and show that the NCD based on a normal compressor is a similarity metric that approximates universality. To extract a hierarchy of clusters from the distance matrix, we determine a dendrogram (binary tree) by a new quartet method and a fast heuristic to implement it. The method is implemented and available as public software, and is robust under choice of different compressors. To substantiate our claims of universality and robustness, we report evidence of successful application in areas as diverse as genomics, virology, languages, literature, music, handwritten digits, astronomy, and combinations of objects from completely different domains, using statistical, dictionary, and block sorting compressors. In genomics we presented new evidence for major questions in Mammalian evolution, based on whole-mitochondrial genomic analysis: the Eutherian orders and the Marsupionta hypothesis against the Theria hypothesis.<|reference_end|> | arxiv | @article{cilibrasi2003clustering,
title={Clustering by compression},
author={Rudi Cilibrasi (CWI) and Paul Vitanyi (CWI and University of
Amsterdam)},
journal={arXiv preprint arXiv:cs/0312044},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312044},
primaryClass={cs.CV cond-mat.stat-mech cs.AI physics.data-an q-bio.GN q-bio.QM}
} | cilibrasi2003clustering |
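The NCD named in this abstract has a standard closed form, NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)), where C is the compressed length. A minimal sketch of that formula, with zlib standing in for the statistical, dictionary, and block-sorting compressors the paper evaluates:

```python
import zlib

def clen(data: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy check: a text is closer to a lightly mutated copy of itself than to unrelated text.
a = b"the quick brown fox jumps over the lazy dog " * 20
b = a.replace(b"fox", b"cat")
c = b"lorem ipsum dolor sit amet, consectetur adipiscing elit " * 20
print(ncd(a, b), ncd(a, c))  # the first distance should be clearly smaller
```

Feeding the resulting distance matrix to any hierarchical clustering routine reproduces the shape of the pipeline; the paper's quartet-tree heuristic is a separate component not sketched here.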
arxiv-671608 | cs/0312045 | Weight Constraints as Nested Expressions | <|reference_start|>Weight Constraints as Nested Expressions: We compare two recent extensions of the answer set (stable model) semantics of logic programs. One of them, due to Lifschitz, Tang and Turner, allows the bodies and heads of rules to contain nested expressions. The other, due to Niemela and Simons, uses weight constraints. We show that there is a simple, modular translation from the language of weight constraints into the language of nested expressions that preserves the program's answer sets. Nested expressions can be eliminated from the result of this translation in favor of additional atoms. The translation makes it possible to compute answer sets for some programs with weight constraints using satisfiability solvers, and to prove the strong equivalence of programs with weight constraints using the logic of here-and there.<|reference_end|> | arxiv | @article{ferraris2003weight,
title={Weight Constraints as Nested Expressions},
author={Paolo Ferraris and Vladimir Lifschitz},
journal={arXiv preprint arXiv:cs/0312045},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312045},
primaryClass={cs.AI}
} | ferraris2003weight |
arxiv-671609 | cs/0312046 | On the Abductive or Deductive Nature of Database Schema Validation and Update Processing Problems | <|reference_start|>On the Abductive or Deductive Nature of Database Schema Validation and Update Processing Problems: We show that database schema validation and update processing problems such as view updating, materialized view maintenance, integrity constraint checking, integrity constraint maintenance or condition monitoring can be classified as problems of either abductive or deductive nature, according to the reasoning paradigm that inherently suits them. This is done by performing abductive and deductive reasoning on the event rules [Oli91], a set of rules that define the difference between consecutive database states. In this way, we show that it is possible to provide methods able to deal with all these problems as a whole. We also show how some existing general deductive and abductive procedures may be used to reason on the event rules. In this way, we show that these procedures can deal with all database schema validation and update processing problems considered in this paper.<|reference_end|> | arxiv | @article{teniente2003on,
title={On the Abductive or Deductive Nature of Database Schema Validation and
Update Processing Problems},
author={Ernest Teniente and Toni Urpi},
journal={Theory and Practice of Logic Programming 3(3):287-327, may 2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312046},
primaryClass={cs.DB cs.LO}
} | teniente2003on |
arxiv-671610 | cs/0312047 | Mapping weblog communities | <|reference_start|>Mapping weblog communities: Websites of a particular class form increasingly complex networks, and new tools are needed to map and understand them. A way of visualizing this complex network is by mapping it. A map highlights which members of the community have similar interests, and reveals the underlying social network. In this paper, we will map a network of websites using Kohonen's self-organizing map (SOM), a neural-net like method generally used for clustering and visualization of complex data sets. The set of websites considered has been the Blogalia weblog hosting site (based at http://www.blogalia.com/), a thriving community of around 200 members, created in January 2002. In this paper we show how SOM discovers interesting community features, its relation with other community-discovering algorithms, and the way it highlights the set of communities formed over the network.<|reference_end|> | arxiv | @article{merelo-guervos2003mapping,
title={Mapping weblog communities},
author={Juan-J. Merelo-Guervos and Beatriz Prieto and Fatima Rateb and Fernando Tricas},
journal={arXiv preprint arXiv:cs/0312047},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312047},
primaryClass={cs.NE}
} | merelo-guervos2003mapping |
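A hedged sketch of the clustering machinery named above: a minimal Kohonen SOM training loop, with learning rate and neighborhood radius shrinking over time. The abstract does not say how Blogalia members were vectorized (links, terms), so the input matrix below is a placeholder:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: grid cells hold weight vectors that are
    pulled toward inputs, with a Gaussian neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    total, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / total
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            d = np.linalg.norm(weights - x, axis=2)          # distance to every cell
            bmu = np.unravel_index(np.argmin(d), d.shape)    # best-matching unit
            g = np.exp(-((coords - bmu) ** 2).sum(axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)     # pull neighborhood toward x
            step += 1
    return weights

# Placeholder data: 200 "members" described by 10-dimensional feature vectors.
members = np.random.default_rng(1).random((200, 10))
som = train_som(members)  # nearby grid cells end up representing similar members
```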
arxiv-671611 | cs/0312048 | Representation Dependence in Probabilistic Inference | <|reference_start|>Representation Dependence in Probabilistic Inference: Non-deductive reasoning systems are often {\em representation dependent}: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed this as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any representation-independent probabilistic inference procedure that ignores irrelevant information is essentially entailment, in a precise sense. Moreover, we show that representation independence is incompatible with even a weak default assumption of independence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation independence and other desiderata, and provide a construction of a family of inference procedures that provides such restricted representation independence, using relative entropy.<|reference_end|> | arxiv | @article{halpern2003representation,
title={Representation Dependence in Probabilistic Inference},
author={Joseph Y. Halpern and Daphne Koller},
journal={arXiv preprint arXiv:cs/0312048},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312048},
primaryClass={cs.AI cs.LO}
} | halpern2003representation |
arxiv-671612 | cs/0312049 | Using virtual processors for SPMD parallel programs | <|reference_start|>Using virtual processors for SPMD parallel programs: In this paper I describe some results on the use of virtual processor technology for parallelizing some SPMD computational programs. The tested technology is the INTEL Hyper Threading on real processors, and the programs are MATLAB scripts for floating-point computation. The conclusions of the work concern the utility and limits of the approach used. The main result is that using virtual processors is a good technique for improving parallel programs not only for memory-based computations, but in the case of massive disk-storage operations too.<|reference_end|> | arxiv | @article{argentini2003using,
title={Using virtual processors for SPMD parallel programs},
author={Gianluca Argentini},
journal={arXiv preprint arXiv:cs/0312049},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312049},
primaryClass={cs.DC}
} | argentini2003using |
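The shape of this experiment is easy to reproduce outside MATLAB. A minimal sketch (my construction, not the author's scripts): time a CPU-bound kernel while oversubscribing worker processes relative to the logical-processor count, which on a Hyper-Threading machine includes the "virtual" processors:

```python
import multiprocessing as mp
import os
import time

def burn(n: int) -> float:
    """CPU-bound kernel standing in for a floating-point MATLAB script."""
    s = 0.0
    for i in range(1, n):
        s += 1.0 / (i * i)
    return s

def timed_run(n_procs: int, work: int = 2_000_000) -> float:
    start = time.perf_counter()
    with mp.Pool(n_procs) as pool:
        pool.map(burn, [work] * n_procs)
    return time.perf_counter() - start

if __name__ == "__main__":
    logical = os.cpu_count()  # counts SMT "virtual" processors too
    for n in (max(1, logical // 2), logical, 2 * logical):
        print(f"{n:3d} concurrent processes: {timed_run(n):.2f} s")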
arxiv-671613 | cs/0312050 | A Flexible Pragmatics-driven Language Generator for Animated Agents | <|reference_start|>A Flexible Pragmatics-driven Language Generator for Animated Agents: This paper describes the NECA MNLG; a fully implemented Multimodal Natural Language Generation module. The MNLG is deployed as part of the NECA system which generates dialogues between animated agents. The generation module supports the seamless integration of full grammar rules, templates and canned text. The generator takes input which allows for the specification of syntactic, semantic and pragmatic constraints on the output.<|reference_end|> | arxiv | @article{piwek2003a,
title={A Flexible Pragmatics-driven Language Generator for Animated Agents},
author={Paul Piwek},
journal={Proceedings of the Research Note Sessions of the 10th Conference
of the European Chapter of the Association for Computational Linguistics
(EACL'03), 2003, pp. 151-154},
year={2003},
number={ITRI-03-05},
archivePrefix={arXiv},
eprint={cs/0312050},
primaryClass={cs.CL cs.MM}
} | piwek2003a |
arxiv-671614 | cs/0312051 | Towards Automated Generation of Scripted Dialogue: Some Time-Honoured Strategies | <|reference_start|>Towards Automated Generation of Scripted Dialogue: Some Time-Honoured Strategies: The main aim of this paper is to introduce automated generation of scripted dialogue as a worthwhile topic of investigation. In particular the fact that scripted dialogue involves two layers of communication, i.e., uni-directional communication between the author and the audience of a scripted dialogue and bi-directional pretended communication between the characters featuring in the dialogue, is argued to raise some interesting issues. Our hope is that the combined study of the two layers will forge links between research in text generation and dialogue processing. The paper presents a first attempt at creating such links by studying three types of strategies for the automated generation of scripted dialogue. The strategies are derived from examples of human-authored and naturally occurring dialogue.<|reference_end|> | arxiv | @article{piwek2003towards,
title={Towards Automated Generation of Scripted Dialogue: Some Time-Honoured
Strategies},
author={Paul Piwek and Kees van Deemter},
journal={Proceedings of EDILOG: 6th Workshop on the Semantics and
Pragmatics of Dialogue, 2002, pp. 141-148},
year={2003},
number={ITRI-02-11},
archivePrefix={arXiv},
eprint={cs/0312051},
primaryClass={cs.CL cs.AI}
} | piwek2003towards |
arxiv-671615 | cs/0312052 | Dialogue as Discourse: Controlling Global Properties of Scripted Dialogue | <|reference_start|>Dialogue as Discourse: Controlling Global Properties of Scripted Dialogue: This paper explains why scripted dialogue shares some crucial properties with discourse. In particular, when scripted dialogues are generated by a Natural Language Generation system, the generator can apply revision strategies that cannot normally be used when the dialogue results from an interaction between autonomous agents (i.e., when the dialogue is not scripted). The paper explains that the relevant revision operators are best applied at the level of a dialogue plan and discusses how the generator may decide when to apply a given revision operator.<|reference_end|> | arxiv | @article{piwek2003dialogue,
title={Dialogue as Discourse: Controlling Global Properties of Scripted
Dialogue},
author={Paul Piwek and Kees van Deemter},
journal={Proceedings of AAAI Spring Symposium on Natural Language
Generation in Spoken and Written Dialogue, Stanford, 2003},
year={2003},
number={ITRI-03-04},
archivePrefix={arXiv},
eprint={cs/0312052},
primaryClass={cs.CL cs.AI}
} | piwek2003dialogue |
arxiv-671616 | cs/0312053 | On the Expressibility of Stable Logic Programming | <|reference_start|>On the Expressibility of Stable Logic Programming: (We apologize for pidgin LaTeX) Schlipf \cite{sch91} proved that Stable Logic Programming (SLP) solves all $\mathit{NP}$ decision problems. We extend Schlipf's result to prove that SLP solves all search problems in the class $\mathit{NP}$. Moreover, we do this in a uniform way as defined in \cite{mt99}. Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program $P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$ with non-negative integer coefficients and any input $\sigma$ of size $n$ over a fixed alphabet $\Sigma$, there is an extensional database $\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ and the accepting computations of the machine $M$ that reach the final state in at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in polynomial time from $p$, $\sigma$ and the description of $M$ and the decoding of such accepting computations from its corresponding stable model of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear time. A similar statement holds for Default Logic with respect to $\Sigma_2^\mathrm{P}$-search problems\footnote{The proof of this result involves additional technical complications and will be a subject of another publication.}.<|reference_end|> | arxiv | @article{marek2003on,
title={On the Expressibility of Stable Logic Programming},
author={Victor W. Marek and Jeffrey B. Remmel},
journal={TCLP 3(2003), pp. 551-567},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312053},
primaryClass={cs.AI}
} | marek2003on |
arxiv-671617 | cs/0312054 | Partitioning schemes for quicksort and quickselect | <|reference_start|>Partitioning schemes for quicksort and quickselect: We introduce several modifications of the partitioning schemes used in Hoare's quicksort and quickselect algorithms, including ternary schemes which identify keys less or greater than the pivot. We give estimates for the numbers of swaps made by each scheme. Our computational experiments indicate that ternary schemes allow quickselect to identify all keys equal to the selected key at little additional cost.<|reference_end|> | arxiv | @article{kiwiel2003partitioning,
title={Partitioning schemes for quicksort and quickselect},
author={Krzysztof C. Kiwiel},
journal={arXiv preprint arXiv:cs/0312054},
year={2003},
number={PMMO-03-01},
archivePrefix={arXiv},
eprint={cs/0312054},
primaryClass={cs.DS}
} | kiwiel2003partitioning |
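As a concrete instance of a ternary scheme, here is the classic three-way ("Dutch national flag") partition; the abstract does not spell out Kiwiel's exact pointer scheme, so treat this as one representative variant. After the call, every key equal to the pivot sits in the middle block, which is what lets quickselect report all keys equal to the selected key at little extra cost:

```python
def ternary_partition(a, lo, hi, pivot):
    """Partition a[lo..hi] in place into  < pivot | == pivot | > pivot.
    Returns (lt, gt): a[lo:lt] < pivot, a[lt:gt+1] == pivot, a[gt+1:hi+1] > pivot."""
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    return lt, gt

a = [5, 3, 5, 1, 9, 5, 7, 5]
lt, gt = ternary_partition(a, 0, len(a) - 1, 5)
print(a, lt, gt)  # all the 5s now occupy positions lt..gt
```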
arxiv-671618 | cs/0312055 | Randomized selection with quintary partitions | <|reference_start|>Randomized selection with quintary partitions: We show that several versions of Floyd and Rivest's algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Our computational results confirm that Select may be the best algorithm in practice.<|reference_end|> | arxiv | @article{kiwiel2003randomized,
title={Randomized selection with quintary partitions},
author={Krzysztof C. Kiwiel},
journal={arXiv preprint arXiv:cs/0312055},
year={2003},
number={PMMO-03-02},
archivePrefix={arXiv},
eprint={cs/0312055},
primaryClass={cs.DS}
} | kiwiel2003randomized |
arxiv-671619 | cs/0312056 | The Geometric Thickness of Low Degree Graphs | <|reference_start|>The Geometric Thickness of Low Degree Graphs: We prove that the geometric thickness of graphs whose maximum degree is no more than four is two. All of our algorithms run in O(n) time, where n is the number of vertices in the graph. In our proofs, we present an embedding algorithm for graphs with maximum degree three that uses an n x n grid and a more complex algorithm for embedding a graph with maximum degree four. We also show a variation using orthogonal edges for maximum degree four graphs that also uses an n x n grid. The results have implications in graph theory, graph drawing, and VLSI design.<|reference_end|> | arxiv | @article{duncan2003the,
title={The Geometric Thickness of Low Degree Graphs},
author={Christian A. Duncan (University of Miami) and David Eppstein (University
of California, Irvine) and Stephen G. Kobourov (University of Arizona)},
journal={arXiv preprint arXiv:cs/0312056},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312056},
primaryClass={cs.CG cs.DM}
} | duncan2003the |
arxiv-671620 | cs/0312057 | Abduction in Well-Founded Semantics and Generalized Stable Models | <|reference_start|>Abduction in Well-Founded Semantics and Generalized Stable Models: Abductive logic programming offers a formalism to declaratively express and solve problems in areas such as diagnosis, planning, belief revision and hypothetical reasoning. Tabled logic programming offers a computational mechanism that provides a level of declarativity superior to that of Prolog, and which has supported successful applications in fields such as parsing, program analysis, and model checking. In this paper we show how to use tabled logic programming to evaluate queries to abductive frameworks with integrity constraints when these frameworks contain both default and explicit negation. The result is the ability to compute abduction over well-founded semantics with explicit negation and answer sets. Our approach consists of a transformation and an evaluation method. The transformation adjoins to each objective literal $O$ in a program, an objective literal $not(O)$ along with rules that ensure that $not(O)$ will be true if and only if $O$ is false. We call the resulting program a {\em dual} program. The evaluation method, \wfsmeth, then operates on the dual program. \wfsmeth{} is sound and complete for evaluating queries to abductive frameworks whose entailment method is based on either the well-founded semantics with explicit negation, or on answer sets. Further, \wfsmeth{} is asymptotically as efficient as any known method for either class of problems. In addition, when abduction is not desired, \wfsmeth{} operating on a dual program provides a novel tabling method for evaluating queries to ground extended programs whose complexity and termination properties are similar to those of the best tabling methods for the well-founded semantics. A publicly available meta-interpreter has been developed for \wfsmeth{} using the XSB system.<|reference_end|> | arxiv | @article{alferes2003abduction,
title={Abduction in Well-Founded Semantics and Generalized Stable Models},
author={José Júlio Alferes and Luís Moniz Pereira and Terrance Swift},
journal={arXiv preprint arXiv:cs/0312057},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312057},
primaryClass={cs.LO cs.AI}
} | alferes2003abduction |
arxiv-671621 | cs/0312058 | Acquiring Lexical Paraphrases from a Single Corpus | <|reference_start|>Acquiring Lexical Paraphrases from a Single Corpus: This paper studies the potential of identifying lexical paraphrases within a single corpus, focusing on the extraction of verb paraphrases. Most previous approaches detect individual paraphrase instances within a pair (or set) of comparable corpora, each of them containing roughly the same information, and rely on the substantial level of correspondence of such corpora. We present a novel method that successfully detects isolated paraphrase instances within a single corpus without relying on any a-priori structure and information. A comparison suggests that an instance-based approach may be combined with a vector based approach in order to assess better the paraphrase likelihood for many verb pairs.<|reference_end|> | arxiv | @article{glickman2003acquiring,
title={Acquiring Lexical Paraphrases from a Single Corpus},
author={Oren Glickman and Ido Dagan},
journal={arXiv preprint arXiv:cs/0312058},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312058},
primaryClass={cs.CL cs.AI cs.IR cs.LG}
} | glickman2003acquiring |
arxiv-671622 | cs/0312059 | Polyhierarchical Classifications Induced by Criteria Polyhierarchies, and Taxonomy Algebra | <|reference_start|>Polyhierarchical Classifications Induced by Criteria Polyhierarchies, and Taxonomy Algebra: A new approach to the construction of general persistent polyhierarchical classifications is proposed. It is based on implicit description of category polyhierarchy by a generating polyhierarchy of classification criteria. Similarly to existing approaches, the classification categories are defined by logical functions encoded by attributive expressions. However, the generating hierarchy explicitly predefines domains of criteria applicability, and the semantics of relations between categories is invariant to changes in the universe composition, extending variety of criteria, and increasing their cardinalities. The generating polyhierarchy is an independent, compact, portable, and re-usable information structure serving as a template classification. It can be associated with one or more particular sets of objects, included in more general classifications as a standard component, or used as a prototype for more comprehensive classifications. The approach dramatically simplifies development and unplanned modifications of persistent hierarchical classifications compared with tree, DAG, and faceted schemes. It can be efficiently implemented in common DBMS, while considerably reducing amount of computer resources required for storage, maintenance, and use of complex polyhierarchies.<|reference_end|> | arxiv | @article{babikov2003polyhierarchical,
title={Polyhierarchical Classifications Induced by Criteria Polyhierarchies,
and Taxonomy Algebra},
author={Pavel Babikov and Oleg Gontcharov and Maria Babikova},
journal={arXiv preprint arXiv:cs/0312059},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312059},
primaryClass={cs.AI cs.IR}
} | babikov2003polyhierarchical |
arxiv-671623 | cs/0312060 | Part-of-Speech Tagging with Minimal Lexicalization | <|reference_start|>Part-of-Speech Tagging with Minimal Lexicalization: We use a Dynamic Bayesian Network to represent compactly a variety of sublexical and contextual features relevant to Part-of-Speech (PoS) tagging. The outcome is a flexible tagger (LegoTag) with state-of-the-art performance (3.6% error on a benchmark corpus). We explore the effect of eliminating redundancy and radically reducing the size of feature vocabularies. We find that a small but linguistically motivated set of suffixes results in improved cross-corpora generalization. We also show that a minimal lexicon limited to function words is sufficient to ensure reasonable performance.<|reference_end|> | arxiv | @article{savova2003part-of-speech,
title={Part-of-Speech Tagging with Minimal Lexicalization},
author={Virginia Savova and Leonid Peshkin},
journal={arXiv preprint arXiv:cs/0312060},
year={2003},
archivePrefix={arXiv},
eprint={cs/0312060},
primaryClass={cs.CL cs.LG}
} | savova2003part-of-speech |
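To make "minimal lexicalization" concrete, here is a sketch of the kind of feature function such a tagger conditions on; the particular suffix list and function-word set below are illustrative toys, not the paper's inventories:

```python
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "that"}  # toy lexicon
SUFFIXES = ("ing", "ed", "ly", "tion", "s")  # a small, linguistically motivated set

def features(word: str, prev_tag: str) -> dict:
    """Sub-lexical and contextual evidence only: full word identity is kept
    solely for function words, so the feature vocabulary stays tiny."""
    w = word.lower()
    return {
        "prev_tag": prev_tag,
        "suffix": next((s for s in SUFFIXES if w.endswith(s)), "NONE"),
        "capitalized": word[:1].isupper(),
        "has_digit": any(ch.isdigit() for ch in word),
        "word": w if w in FUNCTION_WORDS else "<OOV>",
    }

print(features("running", "PRP"))   # suffix=ing, word=<OOV>
print(features("the", "VBZ"))       # function words keep their identity
```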
arxiv-671624 | cs/0401001 | Initial Experiences Re-Exporting Duplicate and Similarity Computation with an OAI-PMH aggregator | <|reference_start|>Initial Experiences Re-Exporting Duplicate and Similarity Computation with an OAI-PMH aggregator: The proliferation of the Open Archive Initiative Protocol for Metadata Harvesting (OAI-PMH) has resulted in the creation of a large number of service providers, all harvesting from either data providers or aggregators. If data were available regarding the similarity of metadata records, service providers could track redundant records across harvests from multiple sources as well as provide additional end-user services. Due to the large number of metadata formats and the diverse mapping strategies employed by data providers, similarity calculation requirements necessitate the use of information retrieval strategies. We describe an OAI-PMH aggregator implementation that uses the optional ``<about>'' container to re-export the results of similarity calculations. Metadata records (3751) were harvested from a NASA data provider and similarities for the records were computed. The results were useful for detecting duplicates, similarities and metadata errors.<|reference_end|> | arxiv | @article{harrison2004initial,
title={Initial Experiences Re-Exporting Duplicate and Similarity Computation
with an OAI-PMH aggregator},
author={Terry L. Harrison and Aravind Elango and Johan Bollen and Michael Nelson},
journal={arXiv preprint arXiv:cs/0401001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401001},
primaryClass={cs.DL cs.DS}
} | harrison2004initial |
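A sketch of the IR-style similarity step, assuming scikit-learn and a handful of hypothetical Dublin Core records flattened to text; the identifiers and the threshold are made up, and the paper's exact weighting scheme is not specified in the abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = {  # hypothetical harvested records: identifier -> title + description
    "oai:example:1001": "Supersonic wing flutter analysis for thin airfoils",
    "oai:example:1002": "Analysis of thin-airfoil wing flutter at supersonic speeds",
    "oai:example:2003": "Boundary layer transition measurements in a wind tunnel",
}

ids = list(records)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(records.values())
sim = cosine_similarity(tfidf)

# Pairs above an (illustrative) threshold would be re-exported in <about> containers.
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if sim[i, j] > 0.3:
            print(f"{ids[i]} ~ {ids[j]}  similarity={sim[i, j]:.2f}")
```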
arxiv-671625 | cs/0401002 | A Comparison of Cryptography Courses | <|reference_start|>A Comparison of Cryptography Courses: The author taught two courses on cryptography, one at Duke University aimed at non-mathematics majors and one at Rose-Hulman Institute of Technology aimed at mathematics and computer science majors. Both tried to incorporate technical and societal aspects of cryptography, with varying emphases. This paper will discuss the strengths and weaknesses of both courses and compare the differences in the author's approach.<|reference_end|> | arxiv | @article{holden2004a,
title={A Comparison of Cryptography Courses},
author={Joshua Holden},
journal={Cryptologia, 28 (2), 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401002},
primaryClass={cs.CR cs.CY}
} | holden2004a |
arxiv-671626 | cs/0401003 | Randomized selection with tripartitioning | <|reference_start|>Randomized selection with tripartitioning: We show that several versions of Floyd and Rivest's algorithm Select [Comm.\ ACM {\bf 18} (1975) 173] for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average, even when equal elements occur. This parallels our recent analysis of another variant due to Floyd and Rivest [Comm. ACM {\bf 18} (1975) 165--172]. Our computational results suggest that both variants perform well in practice, and may compete with other selection methods, such as Hoare's Find or quickselect with median-of-3 pivots.<|reference_end|> | arxiv | @article{kiwiel2004randomized,
title={Randomized selection with tripartitioning},
author={Krzysztof C. Kiwiel},
journal={arXiv preprint arXiv:cs/0401003},
year={2004},
number={PMMO-04-01},
archivePrefix={arXiv},
eprint={cs/0401003},
primaryClass={cs.DS}
} | kiwiel2004randomized |
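The baseline mentioned at the end of this abstract is easy to pin down. A sketch of quickselect with median-of-3 pivots and Hoare-style partitioning (a generic textbook version, not Kiwiel's instrumented code):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) of list a, modifying a in place."""
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        mid = (lo + hi) // 2
        # Median-of-3: sort a[lo], a[mid], a[hi]; the middle value becomes the pivot.
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]: a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi
        while i <= j:  # Hoare partition around the pivot value
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        if k <= j: hi = j        # answer lies in the left part
        elif k >= i: lo = i      # answer lies in the right part
        else: return a[k]        # k landed between the parts: equal to the pivot

data = [random.randrange(50) for _ in range(101)]
assert quickselect(data[:], 50) == sorted(data)[50]
```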
arxiv-671627 | cs/0401004 | Cyborg Systems as Platforms for Computer-Vision Algorithm-Development for Astrobiology | <|reference_start|>Cyborg Systems as Platforms for Computer-Vision Algorithm-Development for Astrobiology: Employing the allegorical imagery from the film "The Matrix", we motivate and discuss our `Cyborg Astrobiologist' research program. In this research program, we are using a wearable computer and video camcorder in order to test and train a computer-vision system to be a field-geologist and field-astrobiologist.<|reference_end|> | arxiv | @article{mcguire2004cyborg,
title={Cyborg Systems as Platforms for Computer-Vision Algorithm-Development
for Astrobiology},
author={Patrick C. McGuire and J.A. Rodriguez-Manfredi and E. Sebastian-Martinez and
J. Gomez-Elvira and E. Diaz-Martinez and J. Ormo and K. Neuffer and A. Giaquinta and
F. Camps-Martinez and A. Lepinette-Malvitte and J. Perez-Mercader and H. Ritter and
M. Oesker and J. Ontrup and J. Walter},
journal={Proc. of the III European Workshop on Exo-Astrobiology, "Mars: The
Search for Life", Madrid, November 2003 (ESA SP-545, March 2004) pp.141-144},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401004},
primaryClass={cs.CV astro-ph cs.AI}
} | mcguire2004cyborg |
arxiv-671628 | cs/0401005 | About Unitary Rating Score Constructing | <|reference_start|>About Unitary Rating Score Constructing: We propose pooling test points from different subjects, and from different aspects of the same subject, into a unitary rating score by applying a nonlinear transformation of indicator points in accordance with Zipf's distribution. We propose using the well-studied distribution of the Intelligence Quotient (IQ) as the reference distribution for the latent variable "progress in studies".<|reference_end|> | arxiv | @article{victor2004about,
title={About Unitary Rating Score Constructing},
author={Kromer Victor},
journal={arXiv preprint arXiv:cs/0401005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401005},
primaryClass={cs.LG}
} | victor2004about |
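The abstract leaves the exact Zipf-based transform unspecified; what can be sketched safely is the "reference distribution" idea itself: map a student's rank onto the IQ scale (mean 100, SD 15) by matching percentiles of the normal distribution. A minimal sketch under that assumption:

```python
from scipy.stats import norm

def iq_scaled_score(rank: int, n: int, mean: float = 100.0, sd: float = 15.0) -> float:
    """Map rank (1 = best of n) onto the IQ reference scale via the normal quantile.
    The midpoint correction (rank - 0.5) / n avoids infinite scores at the extremes."""
    percentile = 1.0 - (rank - 0.5) / n
    return mean + sd * norm.ppf(percentile)

# 20 students: the top rank maps well above 100, the median to ~100, the last well below.
print([round(iq_scaled_score(r, 20), 1) for r in (1, 10, 11, 20)])
```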
arxiv-671629 | cs/0401006 | Cluster computing performances using virtual processors and mathematical software | <|reference_start|>Cluster computing performances using virtual processors and mathematical software: In this paper I describe some results on the use of virtual processor technology for parallelizing some SPMD computational programs in a cluster environment. The tested technology is the INTEL Hyper Threading on real processors, and the programs are MATLAB 6.5 Release 13 scripts for floating-point computation. Using this technology, I found that a cluster can beneficially run twice as many concurrent processes as it has physical processors. The conclusions of the work concern the utility and limits of the approach used. The main result is that using virtual processors is a good technique for improving parallel programs not only for memory-based computations, but in the case of massive disk-storage operations too.<|reference_end|> | arxiv | @article{argentini2004cluster,
title={Cluster computing performances using virtual processors and mathematical
software},
author={Gianluca Argentini},
journal={arXiv preprint arXiv:cs/0401006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401006},
primaryClass={cs.DC cs.MS}
} | argentini2004cluster |
arxiv-671630 | cs/0401007 | Design of a Community-based Translation Center | <|reference_start|>Design of a Community-based Translation Center: Interfaces that support multi-lingual content can reach a broader community. We wish to extend the reach of CITIDEL, a digital library for computing education materials, to support multiple languages. By doing so, we hope that it will increase the number of users, and in turn the number of resources. This paper discusses three approaches to translation (automated translation, developer-based, and community-based), and a brief evaluation of these approaches. It proposes a design for an online community translation center where volunteers help translate interface components and educational materials available in CITIDEL.<|reference_end|> | arxiv | @article{mcdevitt2004design,
title={Design of a Community-based Translation Center},
author={K. McDevitt and M. A. Pérez-Quiñones and O. I. Padilla-Falto},
journal={arXiv preprint arXiv:cs/0401007},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401007},
primaryClass={cs.HC cs.DL}
} | mcdevitt2004design |
arxiv-671631 | cs/0401008 | Algorithm xxx: Modified Bessel functions of imaginary order and positive argument | <|reference_start|>Algorithm xxx: Modified Bessel functions of imaginary order and positive argument: Fortran 77 programs for the computation of modified Bessel functions of purely imaginary order are presented. The codes compute the functions $K_{ia}(x)$, $L_{ia}(x)$ and their derivatives for real $a$ and positive $x$; these functions are independent solutions of the differential equation $x^2 w'' +x w' +(a^2 -x^2)w=0$. The code also computes exponentially scaled functions. The range of computation is $(x,a)\in (0,1500]\times [-1500,1500]$ when scaled functions are considered and it is larger than $(0,500]\times [-400,400]$ for standard IEEE double precision arithmetic. The relative accuracy is better than $10^{-13}$ in the range $(0,200]\times [-200,200]$ and close to $10^{-12}$ in $(0,1500]\times [-1500,1500]$.<|reference_end|> | arxiv | @article{gil2004algorithm,
title={Algorithm xxx: Modified Bessel functions of imaginary order and positive
argument},
author={Amparo Gil and Javier Segura and Nico M. Temme},
journal={arXiv preprint arXiv:cs/0401008},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401008},
primaryClass={cs.MS cs.NA math.NA}
} | gil2004algorithm |
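Independent of the Fortran codes, the function is easy to cross-check in arbitrary precision: mpmath's besselk accepts a complex order, K_{ia}(x) is real for real a and x > 0, and the result can be verified directly against the differential equation quoted in the abstract. A sketch for validation only (not the paper's algorithm, which relies on series, Chebyshev expansions and asymptotics for speed):

```python
import mpmath as mp

def K_ia(a: float, x: float):
    """Modified Bessel function of purely imaginary order, K_{ia}(x)."""
    return mp.besselk(1j * a, mp.mpf(x)).real  # imaginary part is 0 up to roundoff

# Residual of  x^2 w'' + x w' + (a^2 - x^2) w = 0  at a sample point: should be ~0.
a, x = 1.5, 2.0
w = lambda t: mp.besselk(1j * a, t).real
residual = x**2 * mp.diff(w, x, 2) + x * mp.diff(w, x) + (a**2 - x**2) * w(x)
print(K_ia(a, x), residual)
```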
arxiv-671632 | cs/0401009 | Unifying Computing and Cognition: The SP Theory and its Applications | <|reference_start|>Unifying Computing and Cognition: The SP Theory and its Applications: This book develops the conjecture that all kinds of information processing in computers and in brains may usefully be understood as "information compression by multiple alignment, unification and search". This "SP theory", which has been under development since 1987, provides a unified view of such things as the workings of a universal Turing machine, the nature of 'knowledge', the interpretation and production of natural language, pattern recognition and best-match information retrieval, several kinds of probabilistic reasoning, planning and problem solving, unsupervised learning, and a range of concepts in mathematics and logic. The theory also provides a basis for the design of an 'SP' computer with several potential advantages compared with traditional digital computers.<|reference_end|> | arxiv | @article{wolff2004unifying,
title={Unifying Computing and Cognition: The SP Theory and its Applications},
author={J Gerard Wolff},
journal={arXiv preprint arXiv:cs/0401009},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401009},
primaryClass={cs.AI}
} | wolff2004unifying |
arxiv-671633 | cs/0401010 | On the Cost of Participating in a Peer-to-Peer Network | <|reference_start|>On the Cost of Participating in a Peer-to-Peer Network: In this paper, we model the cost incurred by each peer participating in a peer-to-peer network. Such a cost model allows to gauge potential disincentives for peers to collaborate, and provides a measure of the ``total cost'' of a network, which is a possible benchmark to distinguish between proposals. We characterize the cost imposed on a node as a function of the experienced load and the node connectivity, and show how our model applies to a few proposed routing geometries for distributed hash tables (DHTs). We further outline a number of open questions this research has raised.<|reference_end|> | arxiv | @article{christin2004on,
title={On the Cost of Participating in a Peer-to-Peer Network},
author={Nicolas Christin and John Chuang},
journal={arXiv preprint arXiv:cs/0401010},
year={2004},
number={p2pecon TR-2003-12-CC},
archivePrefix={arXiv},
eprint={cs/0401010},
primaryClass={cs.NI}
} | christin2004on |
arxiv-671634 | cs/0401011 | Heuristic average-case analysis of the backtrack resolution of random 3-Satisfiability instances | <|reference_start|>Heuristic average-case analysis of the backtrack resolution of random 3-Satisfiability instances: An analysis of the average-case complexity of solving random 3-Satisfiability (SAT) instances with backtrack algorithms is presented. We first interpret previous rigorous works in a unifying framework based on the statistical physics notions of dynamical trajectories, phase diagram and growth process. It is argued that, under the action of the Davis--Putnam--Loveland--Logemann (DPLL) algorithm, 3-SAT instances are turned into 2+p-SAT instances whose characteristic parameters (ratio alpha of clauses per variable, fraction p of 3-clauses) can be followed during the operation, and define resolution trajectories. Depending on the location of trajectories in the phase diagram of the 2+p-SAT model, easy (polynomial) or hard (exponential) resolutions are generated. Three regimes are identified, depending on the ratio alpha of the 3-SAT instance to be solved. Lower sat phase: for small ratios, DPLL almost surely finds a solution in a time growing linearly with the number N of variables. Upper sat phase: for intermediate ratios, instances are almost surely satisfiable but finding a solution requires exponential time (2 ^ (N omega) with omega>0) with high probability. Unsat phase: for large ratios, there is almost always no solution and proofs of refutation are exponential. An analysis of the growth of the search tree in both upper sat and unsat regimes is presented, and allows us to estimate omega as a function of alpha. This analysis is based on an exact relationship between the average size of the search tree and the powers of the evolution operator encoding the elementary steps of the search heuristic.<|reference_end|> | arxiv | @article{cocco2004heuristic,
title={Heuristic average-case analysis of the backtrack resolution of random
3-Satisfiability instances},
author={Simona Cocco and Remi Monasson},
journal={Theoretical Computer Science (2004) A 320, 345},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401011},
primaryClass={cs.DS cond-mat.stat-mech cs.CC}
} | cocco2004heuristic |
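The object under analysis here is concrete enough to sketch: a backtracking DPLL solver whose recursion tree is exactly the "search tree" whose average size the paper estimates. A minimal version with unit propagation follows; the branching rule is a placeholder, not the specific heuristics whose resolution trajectories the paper tracks:

```python
def dpll(clauses, assignment=frozenset()):
    """Minimal DPLL on a CNF given as a list of frozensets of nonzero ints
    (negative int = negated variable). Returns a satisfying set of literals, or None."""
    clauses = list(clauses)
    while True:  # unit propagation to a fixed point
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment = assignment | {lit}
        simplified = []
        for c in clauses:
            if lit in c:
                continue                  # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}            # literal falsified, shrink clause
                if not c:
                    return None           # empty clause: contradiction, backtrack
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment                 # no clauses left: satisfiable
    lit = next(iter(clauses[0]))          # placeholder branching rule
    for choice in (lit, -lit):            # each branch is one node of the search tree
        result = dpll(clauses + [frozenset([choice])], assignment)
        if result is not None:
            return result
    return None

# (x1 v ~x2 v x3) & (~x1 v x2) & (~x3)
print(dpll([frozenset({1, -2, 3}), frozenset({-1, 2}), frozenset({-3})]))
```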
arxiv-671635 | cs/0401012 | Algebraic Elimination of epsilon-transitions | <|reference_start|>Algebraic Elimination of epsilon-transitions: We present here algebraic formulas associating a k-automaton to a k-epsilon-automaton. The existence depends on the definition of the star of matrices and of elements in the semiring k. For this reason, we present the theorem which allows the transformation of k-epsilon-automata into k-automata. The two automata have the same behaviour.<|reference_end|> | arxiv | @article{duchamp2004algebraic,
title={Algebraic Elimination of epsilon-transitions},
author={Gerard Duchamp (LIFAR, LIPN) and Hatem Hadj Kacem (LIFAR) and Eric
Laugerotte (LIFAR)},
journal={arXiv preprint arXiv:cs/0401012},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401012},
primaryClass={cs.SC cs.DS}
} | duchamp2004algebraic |
arxiv-671636 | cs/0401013 | Verification of Process Rewrite Systems in normal form | <|reference_start|>Verification of Process Rewrite Systems in normal form: We consider the problem of model-checking for Process Rewrite Systems (PRSs) in normal form. In a PRS in normal form every rewrite rule either only deals with procedure calls and procedure termination, possibly with value return (this kind of rule captures Pushdown Processes), or only deals with dynamic activation of processes and synchronization (this kind of rule captures Petri Nets). The model-checking problem for PRSs and action-based linear temporal logic (ALTL) is undecidable. However, decidability of model-checking for PRSs and some interesting fragment of ALTL remains an open question. In this paper we state decidability results concerning generalized acceptance properties about infinite derivations (infinite term rewritings) in PRSs in normal form. As a consequence, we obtain decidability of the model-checking (restricted to infinite runs) for PRSs in normal form and a meaningful fragment of ALTL.<|reference_end|> | arxiv | @article{bozzelli2004verification,
title={Verification of Process Rewrite Systems in normal form},
author={Laura Bozzelli},
journal={arXiv preprint arXiv:cs/0401013},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401013},
primaryClass={cs.OH}
} | bozzelli2004verification |
arxiv-671637 | cs/0401014 | Nested Intervals with Farey Fractions | <|reference_start|>Nested Intervals with Farey Fractions: Relational Databases are universally conceived as an advance over their predecessors, the Network and Hierarchical models. Superior in every querying respect, they turned out to be surprisingly incomplete when modeling transitive dependencies. Almost every couple of months, a question about how to model a tree in a database surfaces on the comp.database.theory newsgroup. This article completes a series of articles exploring the Nested Intervals Model. Previous articles introduced tree encoding with Binary Rational Numbers. However, binary encoding grows exponentially, both in breadth and in depth. In this article, we'll leverage Farey fractions in order to overcome this problem. We'll also demonstrate that our implementation scales to a tree with 1M nodes.<|reference_end|> | arxiv | @article{tropashko2004nested,
title={Nested Intervals with Farey Fractions},
author={Vadim Tropashko},
journal={arXiv preprint arXiv:cs/0401014},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401014},
primaryClass={cs.DB}
} | tropashko2004nested |
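The key trick can be illustrated with mediants, the Stern-Brocot step behind Farey fractions: carve each node's interval into child subintervals whose endpoint numerators and denominators grow roughly linearly with depth and sibling position, rather than doubling as with binary rationals. The sketch below is an illustrative scheme in the same spirit, not necessarily the article's exact encoding:

```python
from fractions import Fraction

def mediant(a: Fraction, b: Fraction) -> Fraction:
    """The Farey/Stern-Brocot step: (a.num + b.num) / (a.den + b.den)."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

def node_interval(path):
    """Interval (lo, hi) for the node addressed by a list of 1-based child
    indices; the root owns (0/1, 1/1) and child k is carved out with k mediants."""
    lo, hi = Fraction(0), Fraction(1)
    for k in path:
        prev, cur = hi, mediant(lo, hi)
        for _ in range(k - 1):
            prev, cur = cur, mediant(lo, cur)
        lo, hi = cur, prev
    return lo, hi

def is_descendant(node, ancestor):
    """Containment of intervals encodes the transitive ancestor relation."""
    (nlo, nhi), (alo, ahi) = node, ancestor
    return alo < nlo and nhi <= ahi

second_child = node_interval([2])       # (1/3, 1/2)
grandchild = node_interval([2, 1])      # (2/5, 1/2): inside its parent
print(second_child, grandchild, is_descendant(grandchild, second_child))  # True
```

Note the denominators: the tenth child of the root gets endpoints 1/11 and 1/10 rather than anything near 1/2^10, which is the point of switching from binary rationals to Farey fractions.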
arxiv-671638 | cs/0401015 | Query Answering in Peer-to-Peer Data Exchange Systems | <|reference_start|>Query Answering in Peer-to-Peer Data Exchange Systems: The problem of answering queries posed to a peer who is a member of a peer-to-peer data exchange system is studied. The answers have to be consistent with respect to both the local semantic constraints and the data exchange constraints with other peers, and must also respect certain trust relationships between peers. A semantics for peer consistent answers under exchange constraints and trust relationships is introduced and some techniques for obtaining those answers are presented.<|reference_end|> | arxiv | @article{bertossi2004query,
title={Query Answering in Peer-to-Peer Data Exchange Systems},
author={Leopoldo Bertossi and Loreto Bravo},
journal={arXiv preprint arXiv:cs/0401015},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401015},
primaryClass={cs.DB cs.LO}
} | bertossi2004query |
arxiv-671639 | cs/0401016 | Generalized Strong Preservation by Abstract Interpretation | <|reference_start|>Generalized Strong Preservation by Abstract Interpretation: Standard abstract model checking relies on abstract Kripke structures which approximate concrete models by gluing together indistinguishable states, namely by a partition of the concrete state space. Strong preservation for a specification language L encodes the equivalence of concrete and abstract model checking of formulas in L. We show how abstract interpretation can be used to design abstract models that are more general than abstract Kripke structures. Accordingly, strong preservation is generalized to abstract interpretation-based models and precisely related to the concept of completeness in abstract interpretation. The problem of minimally refining an abstract model in order to make it strongly preserving for some language L can be formulated as a minimal domain refinement in abstract interpretation in order to get completeness w.r.t. the logical/temporal operators of L. It turns out that this refined strongly preserving abstract model always exists and can be characterized as a greatest fixed point. As a consequence, some well-known behavioural equivalences, like bisimulation, simulation and stuttering, and their corresponding partition refinement algorithms can be elegantly characterized in abstract interpretation as completeness properties and refinements.<|reference_end|> | arxiv | @article{ranzato2004generalized,
title={Generalized Strong Preservation by Abstract Interpretation},
author={Francesco Ranzato and Francesco Tapparo},
journal={arXiv preprint arXiv:cs/0401016},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401016},
primaryClass={cs.LO cs.PL}
} | ranzato2004generalized |
arxiv-671640 | cs/0401017 | Better Foreground Segmentation Through Graph Cuts | <|reference_start|>Better Foreground Segmentation Through Graph Cuts: For many tracking and surveillance applications, background subtraction provides an effective means of segmenting objects moving in front of a static background. Researchers have traditionally used combinations of morphological operations to remove the noise inherent in the background-subtracted result. Such techniques can effectively isolate foreground objects, but tend to lose fidelity around the borders of the segmentation, especially for noisy input. This paper explores the use of a minimum graph cut algorithm to segment the foreground, resulting in qualitatively and quantitatively cleaner segmentations. Experiments on both artificial and real data show that the graph-based method reduces the error around segmented foreground objects. A MATLAB code implementation is available at http://www.cs.smith.edu/~nhowe/research/code/#fgseg<|reference_end|> | arxiv | @article{howe2004better,
title={Better Foreground Segmentation Through Graph Cuts},
author={Nicholas R. Howe and Alexandra Deschamps},
journal={arXiv preprint arXiv:cs/0401017},
year={2004},
doi={10.1016/j.eswa.2010.09.137},
archivePrefix={arXiv},
eprint={cs/0401017},
primaryClass={cs.CV}
} | howe2004better |
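A compact way to see the idea: build an s-t graph over pixels where terminal edges encode the background-subtraction evidence and neighbor edges charge for label changes, then take the minimum cut. The sketch below uses networkx for clarity (real systems use dedicated max-flow solvers) and an energy I made up for illustration; it is not the paper's exact formulation:

```python
import numpy as np
import networkx as nx

def graphcut_foreground(diff, lam=30.0, thresh=50.0):
    """Foreground mask from per-pixel background differences `diff` via a
    minimum s-t cut: unary capacities follow the evidence, pairwise
    capacities (lam) penalize label changes between 4-neighbors."""
    h, w = diff.shape
    G = nx.Graph()
    for y in range(h):
        for x in range(w):
            p = (y, x)
            G.add_edge("s", p, capacity=float(max(diff[y, x] - thresh, 0.0)))
            G.add_edge(p, "t", capacity=float(max(thresh - diff[y, x], 0.0)))
            if x + 1 < w: G.add_edge(p, (y, x + 1), capacity=lam)
            if y + 1 < h: G.add_edge(p, (y + 1, x), capacity=lam)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    mask = np.zeros((h, w), dtype=bool)
    for p in source_side - {"s"}:
        mask[p] = True
    return mask

# A 3x3 "object" plus one isolated noisy pixel: the cut keeps the blob and
# drops the noise, which plain thresholding would keep.
frame = np.zeros((8, 8)); frame[2:5, 2:5] = 100.0; frame[7, 7] = 100.0
print(graphcut_foreground(frame).astype(int))
```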
arxiv-671641 | cs/0401018 | Factor Temporal Prognosis of Tick-Borne Encephalitis Foci Functioning on the South of Russian Far East | <|reference_start|>Factor Temporal Prognosis of Tick-Borne Encephalitis Foci Functioning on the South of Russian Far East: A method of temporal factor prognosis of TE (tick-borne encephalitis) infection has been developed. High precision of the prognosis results has been achieved for a number of geographical regions of Primorsky Krai. The method can be applied not only to epidemiological research but also to other fields.<|reference_end|> | arxiv | @article{bolotin2004factor,
title={Factor Temporal Prognosis of Tick-Borne Encephalitis Foci Functioning on
the South of Russian Far East},
author={E.I. Bolotin and G.Sh. Tsitsiashvili and I.V. Golycheva},
journal={arXiv preprint arXiv:cs/0401018},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401018},
primaryClass={cs.CV}
} | bolotin2004factor |
arxiv-671642 | cs/0401019 | Using biased coins as oracles | <|reference_start|>Using biased coins as oracles: While it is well known that a Turing machine equipped with the ability to flip a fair coin cannot compute more than a standard Turing machine, we show that this is not true for a biased coin. Indeed, any oracle set $X$ may be coded as a probability $p_{X}$ such that if a Turing machine is given a coin which lands heads with probability $p_{X}$ it can compute any function recursive in $X$ with arbitrarily high probability. We also show how the assumption of a non-recursive bias can be weakened by using a sequence of increasingly accurate recursive biases or by choosing the bias at random from a distribution with a non-recursive mean. We conclude by briefly mentioning some implications regarding the physical realisability of such methods.<|reference_end|> | arxiv | @article{ord2004using,
title={Using biased coins as oracles},
author={Toby Ord and Tien D. Kieu},
journal={arXiv preprint arXiv:cs/0401019},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401019},
primaryClass={cs.OH quant-ph}
} | ord2004using |
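The coding step admits a small simulation. A sketch under my own simplifying choices (base-4 digits restricted to {1, 2} so each digit of p_X survives estimation error, plus a Hoeffding bound for the number of flips); the paper's actual construction may differ in detail:

```python
import math, random

def p_of(bits):
    """Encode X as a probability: the n-th base-4 digit of p is 2 if n is in X,
    else 1. Keeping digits away from 0 and 3 leaves a third-of-a-digit margin,
    so a good enough estimate of p recovers every digit exactly."""
    p = sum((2 if b else 1) / 4 ** (n + 1) for n, b in enumerate(bits))
    return p + (1 / 3) * 4.0 ** -len(bits)  # infinite tail of digit 1 ("not in X")

def query(p, n, rng, delta=0.01):
    """Decide whether n is in X, erring with probability at most delta: flip the
    p-biased coin enough times (Hoeffding) to pin down the n-th base-4 digit."""
    tol = 4.0 ** -(n + 2)                       # a quarter of the digit's weight
    flips = math.ceil(math.log(2 / delta) / (2 * tol * tol))
    # (sample size grows as 16^n: fine for a demo, exponential in general)
    est = sum(rng.random() < p for _ in range(flips)) / flips
    return int(est * 4 ** (n + 1)) % 4 == 2

rng = random.Random(0)
bits = [True, False, True, True]                # X = {0, 2, 3}, a toy "oracle"
p = p_of(bits)
print([query(p, n, rng) for n in range(4)])     # recovers [True, False, True, True]
```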
arxiv-671643 | cs/0401020 | Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance | <|reference_start|>Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance: Neuromodulatory receptors in presynaptic position have the ability to suppress synaptic transmission for seconds to minutes when fully engaged. This effectively alters the synaptic strength of a connection. Much work on neuromodulation has rested on the assumption that these effects are uniform at every neuron. However, there is considerable evidence to suggest that presynaptic regulation may be in effect synapse-specific. This would define a second "weight modulation" matrix, which reflects presynaptic receptor efficacy at a given site. Here we explore functional consequences of this hypothesis. By analyzing and comparing the weight matrices of networks trained on different aspects of a task, we identify the potential for a low-complexity "modulation matrix", which allows the network to switch between differently trained subtasks while retaining general performance characteristics for the task. This means that a given network can adapt itself to different task demands by regulating its release of neuromodulators. Specifically, we suggest that (a) a network can provide optimized responses for related classification tasks without the need to train entirely separate networks and (b) a network can blend a "memory mode" which aims at reproducing memorized patterns and a "novelty mode" which aims to facilitate classification of new patterns. We relate this work to the known effects of neuromodulators on brain-state dependent processing.<|reference_end|> | arxiv | @article{scheler2004presynaptic,
title={Presynaptic modulation as fast synaptic switching: state-dependent
modulation of task performance},
author={Gabriele Scheler and Johann Schumann},
journal={Neural Networks, 2003. Proceedings of the International Joint
Conference on (Volume:1 ) 218 - 223},
year={2004},
doi={10.1109/IJCNN.2003.1223347},
archivePrefix={arXiv},
eprint={cs/0401020},
primaryClass={cs.NE q-bio.NC}
} | scheler2004presynaptic |
arxiv-671644 | cs/0401021 | A correct, precise and efficient integration of set-sharing, freeness and linearity for the analysis of finite and rational tree languages | <|reference_start|>A correct, precise and efficient integration of set-sharing, freeness and linearity for the analysis of finite and rational tree languages: It is well-known that freeness and linearity information positively interact with aliasing information, allowing both the precision and the efficiency of the sharing analysis of logic programs to be improved. In this paper we present a novel combination of set-sharing with freeness and linearity information, which is characterized by an improved abstract unification operator. We provide a new abstraction function and prove the correctness of the analysis for both the finite tree and the rational tree cases. Moreover, we show that the same notion of redundant information as identified in (Bagnara et al. 2002; Zaffanella et al. 2002) also applies to this abstract domain combination: this allows for the implementation of an abstract unification operator running in polynomial time and achieving the same precision on all the considered observable properties.<|reference_end|> | arxiv | @article{hill2004a,
title={A correct, precise and efficient integration of set-sharing, freeness
and linearity for the analysis of finite and rational tree languages},
author={Patricia M. Hill, Enea Zaffanella, Roberto Bagnara},
journal={arXiv preprint arXiv:cs/0401021},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401021},
primaryClass={cs.PL}
} | hill2004a |
arxiv-671645 | cs/0401022 | Enhanced sharing analysis techniques: a comprehensive evaluation | <|reference_start|>Enhanced sharing analysis techniques: a comprehensive evaluation: Sharing, an abstract domain developed by D. Jacobs and A. Langen for the analysis of logic programs, derives useful aliasing information. It is well-known that a commonly used core of techniques, such as the integration of Sharing with freeness and linearity information, can significantly improve the precision of the analysis. However, a number of other proposals for refined domain combinations have been circulating for years. One feature that is common to these proposals is that they do not seem to have undergone a thorough experimental evaluation even with respect to the expected precision gains. In this paper we experimentally evaluate: helping Sharing with the definitely ground variables found using Pos, the domain of positive Boolean formulas; the incorporation of explicit structural information; a full implementation of the reduced product of Sharing and Pos; the issue of reordering the bindings in the computation of the abstract mgu; an original proposal for the addition of a new mode recording the set of variables that are deemed to be ground or free; a refined way of using linearity to improve the analysis; the recovery of hidden information in the combination of Sharing with freeness information. Finally, we discuss the issue of whether tracking compoundness allows the computation of more sharing information.<|reference_end|> | arxiv | @article{bagnara2004enhanced,
title={Enhanced sharing analysis techniques: a comprehensive evaluation},
author={Roberto Bagnara, Enea Zaffanella, Patricia M. Hill},
journal={arXiv preprint arXiv:cs/0401022},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401022},
primaryClass={cs.PL}
} | bagnara2004enhanced |
arxiv-671646 | cs/0401023 | Surface Triangulation -- The Metric Approach | <|reference_start|>Surface Triangulation -- The Metric Approach: We embark on a program of studying the problem of better approximating surfaces by triangulations (triangular meshes) by considering the approximating triangulations as finite metric spaces and the target smooth surface as their Hausdorff-Gromov limit. This allows us to define the relevant elements, constants and invariants, such as principal directions and principal values, Gaussian and Mean curvature, etc., in a more natural way. By a "natural way" we mean intrinsic, discrete, metric definitions as opposed to approximations or paraphrases of the differentiable notions. In this way we hope to circumvent computational errors and, indeed, conceptual ones that are often inherent in the classical, "numerical" approach. In this first study we consider the problem of determining the Gaussian curvature of a polyhedral surface by using the {\em embedding curvature} in the sense of Wald (and Menger). We present two modalities of employing these definitions for the computation of Gaussian curvature.<|reference_end|> | arxiv | @article{saucan2004surface,
title={Surface Triangulation -- The Metric Approach},
author={Emil Saucan},
journal={arXiv preprint arXiv:cs/0401023},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401023},
primaryClass={cs.GR cs.CG math.MG}
} | saucan2004surface |
arxiv-671647 | cs/0401024 | A system for reflection in C++ | <|reference_start|>A system for reflection in C++: Object-oriented programming languages such as Java and Objective C have become popular for implementing agent-based and other object-based simulations since objects in those languages can {\em reflect} (i.e. make runtime queries of an object's structure). This allows, for example, a fairly trivial {\em serialisation} routine (conversion of an object into a binary representation that can be stored or passed over a network) to be written. However, C++ does not offer this ability, as type information is thrown away at compile time. Yet C++ is often a preferred development environment, whether for performance reasons or for its expressive features such as operator overloading. In this paper, we present the {\em Classdesc} system which brings many of the benefits of object reflection to C++.<|reference_end|> | arxiv | @article{madina2004a,
title={A system for reflection in C++},
author={Duraid Madina and Russell K. Standish},
journal={Proceedings AUUG 2001: Always on and Everywhere, 207. ISBN
0957753225},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401024},
primaryClass={cs.PL}
} | madina2004a |
arxiv-671648 | cs/0401025 | Running C++ models under the Swarm environment | <|reference_start|>Running C++ models under the Swarm environment: Objective-C is still the language of choice if users want to run their simulation efficiently under the Swarm environment since the Swarm environment itself was written in Objective-C. The language is fast, object-oriented and easy to learn. However, the language is less well known than, less expressive than, and lacks support for many important features of C++ (e.g. OpenMP for high performance computing applications). In this paper, we present a methodology and software tools that we have developed for auto-generating an Objective-C object template (and all the necessary interfacing functions) from a given C++ model, utilising Classdesc's object description technology, so that the C++ model can both be run and accessed under the Objective-C and C++ environments. We also present a methodology for modifying an existing Swarm application to make part of the model (e.g. the heatbug's step method) run under the C++ environment.<|reference_end|> | arxiv | @article{leow2004running,
title={Running C++ models under the Swarm environment},
author={Richard Leow and Russell K. Standish},
journal={Proceedings SwarmFest 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401025},
primaryClass={cs.MA}
} | leow2004running |
arxiv-671649 | cs/0401026 | EcoLab: Agent Based Modeling for C++ programmers | <|reference_start|>EcoLab: Agent Based Modeling for C++ programmers: \EcoLab{} is an agent based modeling system for C++ programmers, strongly influenced by the design of Swarm. This paper is just a brief outline of \EcoLab's features; more details can be found in other published articles, documentation and source code from the \EcoLab{} website.<|reference_end|> | arxiv | @article{standish2004ecolab:,
title={EcoLab: Agent Based Modeling for C++ programmers},
author={Russell K. Standish and Richard Leow},
journal={Proceedings SwarmFest 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401026},
primaryClass={cs.MA}
} | standish2004ecolab: |
arxiv-671650 | cs/0401027 | ClassdescMP: Easy MPI programming in C++ | <|reference_start|>ClassdescMP: Easy MPI programming in C++: ClassdescMP is a distributed memory parallel programming system for use with C++ and MPI. It uses the Classdesc reflection system to ease the task of building complicated messages to be sent between processes. It doesn't hide the underlying MPI API, so it is an augmentation of MPI capabilities. Users can still call standard MPI function calls if needed for performance reasons.<|reference_end|> | arxiv | @article{standish2004classdescmp:,
title={ClassdescMP: Easy MPI programming in C++},
author={Russell K. Standish and Duraid Madina},
journal={Computational Science, Sloot et al. (eds), Lecture Notes in
Computer Science 2660, Springer, 896. (2003)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401027},
primaryClass={cs.DC}
} | standish2004classdescmp: |
arxiv-671651 | cs/0401028 | Automated Resolution of Noisy Bibliographic References | <|reference_start|>Automated Resolution of Noisy Bibliographic References: We describe a system used by the NASA Astrophysics Data System to identify bibliographic references obtained from scanned article pages by OCR methods with records in a bibliographic database. We analyze the process generating the noisy references and conclude that the three-step procedure of correcting the OCR results, parsing the corrected string and matching it against the database provides unsatisfactory results. Instead, we propose a method that allows a controlled merging of correction, parsing and matching, inspired by dependency grammars. We also report on the effectiveness of various heuristics that we have employed to improve recall.<|reference_end|> | arxiv | @article{demleitner2004automated,
title={Automated Resolution of Noisy Bibliographic References},
author={Markus Demleitner, Michael Kurtz, Alberto Accomazzi, G\"unther
Eichhorn, Carolyn S. Grant, Steven S. Murray},
journal={arXiv preprint arXiv:cs/0401028},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401028},
primaryClass={cs.DL}
} | demleitner2004automated |
arxiv-671652 | cs/0401029 | Dynamic Linking of Smart Digital Objects Based on User Navigation Patterns | <|reference_start|>Dynamic Linking of Smart Digital Objects Based on User Navigation Patterns: We discuss a methodology to dynamically generate links among digital objects by means of an unsupervised learning mechanism which analyzes user link traversal patterns. We performed an experiment with a test bed of 150 complex data objects, referred to as buckets. Each bucket manages its own content, provides methods to interact with users and individually maintains a set of links to other buckets. We demonstrate that buckets were capable of dynamically adjusting their links to other buckets according to user link selections, thereby generating a meaningful network of bucket relations. Our results indicate that such adaptive networks of linked buckets approximate the collective link preferences of a community of users.<|reference_end|> | arxiv | @article{elango2004dynamic,
title={Dynamic Linking of Smart Digital Objects Based on User Navigation
Patterns},
author={Aravind Elango and Johan Bollen and Michael L. Nelson},
journal={arXiv preprint arXiv:cs/0401029},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401029},
primaryClass={cs.DL}
} | elango2004dynamic |
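The unsupervised link-adaptation loop can be pictured in a few lines of Python. The update rule and all names below are our assumptions, since the abstract does not specify the buckets' algorithm: traversed links are strengthened and each bucket's outgoing weights are renormalised so that rarely used links decay.

    def reinforce(weights, path, lr=0.1):
        # When a user traverses bucket a -> b, strengthen that link, then
        # renormalise a's outgoing weights so unused links slowly decay.
        for a, b in zip(path, path[1:]):
            w = weights.setdefault(a, {})
            w[b] = w.get(b, 0.0) + lr
            total = sum(w.values())
            for k in w:
                w[k] /= total
        return weights

    weights = {}
    reinforce(weights, ["bucket1", "bucket7", "bucket3"])
    print(weights["bucket1"])   # bucket7 now dominates bucket1's links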
arxiv-671653 | cs/0401030 | Pseudorandom number generation by $p$-adic ergodic transformations | <|reference_start|>Pseudorandom number generation by $p$-adic ergodic transformations: The paper studies counter-dependent pseudorandom generators; the latter are generators whose state transition function (and output function) is modified dynamically as the generator runs: for such a generator the recurrence sequence of states satisfies a congruence $x_{i+1}\equiv f_i(x_i)\pmod{2^n}$, while its output sequence is of the form $z_{i}=F_i(u_i)$. The paper introduces techniques and constructions that enable one to compose generators that output uniformly distributed sequences of a maximum period length and with high linear and 2-adic spans. The corresponding stream cipher is provably strong against a known plaintext attack (up to a plausible conjecture). Both the state transition function and the output function can be key-dependent, so the only information available to a cryptanalyst is that these functions belong to some (exponentially large) class. These functions are compositions of standard machine instructions (such as addition, multiplication, bitwise logical operations, etc.). The compositions need only satisfy rather loose conditions, so the corresponding generators are flexible enough and can easily be implemented as computer programs.<|reference_end|> | arxiv | @article{anashin2004pseudorandom,
title={Pseudorandom number generation by $p$-adic ergodic transformations},
author={Vladimir Anashin},
journal={"Applied Algebraic Dynamics", volume 49 of de Gruyter Expositions
in Mathematics, 2009, 269-304},
year={2004},
archivePrefix={arXiv},
eprint={cs/0401030},
primaryClass={cs.CR}
} | anashin2004pseudorandom |
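A minimal Python sketch of a counter-dependent generator of this shape follows. The particular f_i and F_i are our own toy choices built from standard machine instructions; the paper's constructions are chosen so that uniform distribution, maximum period and high linear/2-adic span are provable, which this toy does not guarantee.

    MASK = (1 << 32) - 1  # work modulo 2^32

    def f(i, x):
        # Toy counter-dependent state transition (add, xor, mul); the
        # constants are arbitrary illustrative choices.
        x = (x + (i | 1)) & MASK          # counter-dependent additive step
        x ^= x >> 7
        x = (x * 2891336453 + 1) & MASK   # odd multiplier, odd increment
        return x

    def F(i, x):
        # Toy output function; a real design is chosen per the paper's
        # conditions, not ad hoc like this.
        return ((x ^ (x >> 16)) + i) & 0xFFFF

    def generate(seed, n):
        x, out = seed & MASK, []
        for i in range(n):
            x = f(i, x)          # x_{i+1} = f_i(x_i) mod 2^32
            out.append(F(i, x))  # z_i = F_i(...)
        return out

    print(generate(12345, 5))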
arxiv-671654 | cs/0402001 | Mobile Re-Finding of Web Information Using a Voice Interface | <|reference_start|>Mobile Re-Finding of Web Information Using a Voice Interface: Mobile access to information is a considerable problem for many users, especially to information found on the Web. In this paper, we explore how a voice-controlled service, accessible by telephone, could support mobile users' needs for refinding specific information previously found on the Web. We outline challenges in creating such a service and describe architectural and user interface issues discovered in an exploratory prototype we built called WebContext. We also present the results of a study, motivated by our experience with WebContext, to explore what people remember about information that they are trying to refind and how they express information refinding requests in a collaborative conversation. As part of the study, we examine how end-user-created Web page annotations can be used to help support mobile information re-finding. We observed the use of URLs, page titles, and descriptions of page contents to help identify waypoints in the search process. Furthermore, we observed that the annotations were utilized extensively, indicating that context explicitly added by the user can play an important role in re-finding.<|reference_end|> | arxiv | @article{capra2004mobile,
title={Mobile Re-Finding of Web Information Using a Voice Interface},
author={Robert G. Capra, Manuel A. Perez-Quinones},
journal={arXiv preprint arXiv:cs/0402001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402001},
primaryClass={cs.HC cs.IR}
} | capra2004mobile |
arxiv-671655 | cs/0402002 | Deciding Disjunctive Linear Arithmetic with SAT | <|reference_start|>Deciding Disjunctive Linear Arithmetic with SAT: Disjunctive Linear Arithmetic (DLA) is a major decidable theory that is supported by almost all existing theorem provers. The theory consists of Boolean combinations of predicates of the form $\Sigma_{j=1}^{n}a_j\cdot x_j \le b$, where the coefficients $a_j$, the bound $b$ and the variables $x_1, \ldots, x_n$ are of type Real ($\mathbb{R}$). We show a reduction to propositional logic from disjunctive linear arithmetic based on Fourier-Motzkin elimination. While the complexity of this procedure is not better than competing techniques, it has practical advantages in solving verification problems. It also promotes the option of deciding a combination of theories by reducing them to this logic. Results from experiments show that this method has a strong advantage over existing techniques when there are many disjunctions in the formula.<|reference_end|> | arxiv | @article{strichman2004deciding,
title={Deciding Disjunctive Linear Arithmetic with SAT},
author={Ofer Strichman},
journal={arXiv preprint arXiv:cs/0402002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402002},
primaryClass={cs.LO}
} | strichman2004deciding |
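The conjunctive core of Fourier-Motzkin elimination, on which the reduction rests, can be sketched in a few lines of Python (exact arithmetic via Fraction; a constraint (a, b) encodes sum_j a[j]*x_j <= b). The Boolean structure of DLA and the actual encoding into SAT, which are the paper's contribution, are not shown.

    from fractions import Fraction

    def eliminate(constraints, j):
        # Split constraints by the sign of x_j's coefficient.
        pos, neg, rest = [], [], []
        for a, b in constraints:
            (pos if a[j] > 0 else neg if a[j] < 0 else rest).append((a, b))
        out = list(rest)
        # Pair every upper bound on x_j with every lower bound on x_j.
        for ap, bp in pos:
            for an, bn in neg:
                a = [ap[k] / ap[j] - an[k] / an[j] for k in range(len(ap))]
                out.append((a, bp / ap[j] - bn / an[j]))
        return out

    def satisfiable(constraints, nvars):
        # Eliminate all variables; what remains are ground facts 0 <= b.
        for j in range(nvars):
            constraints = eliminate(constraints, j)
        return all(b >= 0 for _, b in constraints)

    F = Fraction
    # x0 - x1 <= 0,  x1 <= 5,  -x0 <= -2   (i.e. 2 <= x0 <= x1 <= 5)
    cs = [([F(1), F(-1)], F(0)), ([F(0), F(1)], F(5)), ([F(-1), F(0)], F(-2))]
    print(satisfiable(cs, 2))  # True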
arxiv-671656 | cs/0402003 | Semantic Optimization of Preference Queries | <|reference_start|>Semantic Optimization of Preference Queries: The notion of preference is becoming more and more ubiquitous in present-day information systems. Preferences are primarily used to filter and personalize the information reaching the users of such systems. In database systems, preferences are usually captured as preference relations that are used to build preference queries. In our approach, preference queries are relational algebra or SQL queries that contain occurrences of the winnow operator ("find the most preferred tuples in a given relation"). We present here a number of semantic optimization techniques applicable to preference queries. The techniques make use of integrity constraints, and make it possible to remove redundant occurrences of the winnow operator and to apply a more efficient algorithm for the computation of winnow. We also study the propagation of integrity constraints in the result of the winnow. We have identified necessary and sufficient conditions for the applicability of our techniques, and formulated those conditions as constraint satisfiability problems.<|reference_end|> | arxiv | @article{chomicki2004semantic,
title={Semantic Optimization of Preference Queries},
author={Jan Chomicki},
journal={arXiv preprint arXiv:cs/0402003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402003},
primaryClass={cs.DB}
} | chomicki2004semantic |
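A direct, quadratic reading of the winnow operator in Python; the preference relation and the sample data below are illustrative only, and the paper's point is precisely to avoid such naive evaluation when integrity constraints permit.

    def winnow(pref, tuples):
        # Return the most preferred tuples: those not dominated by any other
        # tuple under pref(t1, t2), read as "t1 is preferred to t2".
        return [t for t in tuples
                if not any(pref(u, t) for u in tuples if u is not t)]

    # Illustrative relation: prefer cheaper cars of the same make.
    cars = [("vw", 9000), ("vw", 7000), ("bmw", 20000)]
    better = lambda u, t: u[0] == t[0] and u[1] < t[1]
    print(winnow(better, cars))   # [('vw', 7000), ('bmw', 20000)]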
arxiv-671657 | cs/0402004 | Baptista-type chaotic cryptosystems: Problems and countermeasures | <|reference_start|>Baptista-type chaotic cryptosystems: Problems and countermeasures: In 1998, M. S. Baptista proposed a chaotic cryptosystem, which has attracted much attention from the chaotic cryptography community: a number of modifications, as well as attacks, have been reported in recent years. In [Phys. Lett. A 307 (2003) 22], we suggested a method to enhance the security of Baptista-type cryptosystems, which can successfully resist all proposed attacks. However, the enhanced Baptista-type cryptosystem has a nontrivial defect: it produces errors in the decrypted data with a generally small but nonzero probability, and these errors propagate. In this Letter, we analyze this defect and discuss how to rectify it. In addition, we point out some newly found problems existing in all Baptista-type cryptosystems and propose corresponding countermeasures.<|reference_end|> | arxiv | @article{li2004baptista-type,
title={Baptista-type chaotic cryptosystems: Problems and countermeasures},
author={Shujun Li, Guanrong Chen, Kwok-Wo Wong, Xuanqin Mou and Yuanlong Cai},
journal={Physics Letters A, 332(5-6):368-375, 2004},
year={2004},
doi={10.1016/j.physleta.2004.09.028},
archivePrefix={arXiv},
eprint={cs/0402004},
primaryClass={cs.CR nlin.CD}
} | li2004baptista-type |
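For readers unfamiliar with Baptista-type ciphers, here is a toy Python sketch of the basic scheme: the logistic-map attractor region is split into sites, and each plaintext byte is encrypted as the number of iterations needed to land in that byte's site. The parameter values and interval choices are our assumptions; none of the security problems or countermeasures discussed in the Letter are reflected here.

    import math

    def baptista_encrypt(plaintext, x0=0.43, b=3.999, n_sites=256, n_min=250):
        lo, width = 0.2, 0.6          # region [0.2, 0.8] split into n_sites bins
        x, cipher = x0, []
        for byte in plaintext:
            n = 0
            while True:
                x = b * x * (1 - x)   # iterate the logistic map
                n += 1
                site = math.floor((x - lo) / width * n_sites)
                if n >= n_min and site == byte:
                    cipher.append(n)  # ciphertext = iteration count
                    break
        return cipher

    def baptista_decrypt(cipher, x0=0.43, b=3.999, n_sites=256):
        lo, width = 0.2, 0.6
        x, plain = x0, []
        for n in cipher:
            for _ in range(n):        # replay exactly n iterations
                x = b * x * (1 - x)
            plain.append(math.floor((x - lo) / width * n_sites))
        return plain

    msg = [72, 105]                   # "Hi" as byte values
    c = baptista_encrypt(msg)
    print(baptista_decrypt(c) == msg) # True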
arxiv-671658 | cs/0402005 | Improved randomized selection | <|reference_start|>Improved randomized selection: We show that several versions of Floyd and Rivest's improved algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+O(n^{1/2}\ln^{1/2}n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Encouraging computational results on large median-finding problems are reported.<|reference_end|> | arxiv | @article{kiwiel2004improved,
title={Improved randomized selection},
author={Krzysztof C. Kiwiel},
journal={arXiv preprint arXiv:cs/0402005},
year={2004},
number={PMMO-04-02},
archivePrefix={arXiv},
eprint={cs/0402005},
primaryClass={cs.DS}
} | kiwiel2004improved |
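A compact Python sketch of the sampling idea behind Select (the constants and the duplicate-handling guard are ours; Floyd and Rivest's tuned algorithm, and the variants analyzed in the paper, partition in place and choose parameters more carefully):

    import math, random

    def select(a, k):
        # Return the k-th smallest (0-based) element of list a.
        a = list(a)
        while True:
            n = len(a)
            if n <= 600:
                return sorted(a)[k]
            # Sample ~n^{2/3} elements; pick two pivots around rank k,
            # offset by O(sqrt(sample size)), to bracket the answer w.h.p.
            s = int(n ** (2 / 3))
            sample = sorted(random.sample(a, s))
            g = int(math.sqrt(s))
            lo = sample[max(0, k * s // n - g)]
            hi = sample[min(s - 1, k * s // n + g)]
            below = [x for x in a if x < lo]
            mid = [x for x in a if lo <= x <= hi]
            above = [x for x in a if x > hi]
            if k < len(below):
                a = below
            elif k < len(below) + len(mid):
                if lo == hi or len(mid) == n:   # guard: many duplicates
                    return sorted(mid)[k - len(below)]
                a, k = mid, k - len(below)
            else:
                a, k = above, k - len(below) - len(mid)

    data = [random.random() for _ in range(100000)]
    print(select(data, 50000) == sorted(data)[50000])  # True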
arxiv-671659 | cs/0402006 | MammoGrid: Large-Scale Distributed Mammogram Analysis | <|reference_start|>MammoGrid: Large-Scale Distributed Mammogram Analysis: Breast cancer as a medical condition and mammograms as images exhibit many dimensions of variability across the population. Similarly, the way diagnostic systems are used and maintained by clinicians varies between imaging centres and breast screening programmes, and so does the appearance of the mammograms generated. A distributed database that reflects the spread of pathologies across the population is an invaluable tool for the epidemiologist and the understanding of the variation in image acquisition protocols is essential to a radiologist in a screening programme. Exploiting emerging grid technology, the aim of the MammoGrid [1] project is to develop a Europe-wide database of mammograms that will be used to investigate a set of important healthcare applications and to explore the potential of the grid to support effective co-working between healthcare professionals.<|reference_end|> | arxiv | @article{amendolia2004mammogrid:,
title={MammoGrid: Large-Scale Distributed Mammogram Analysis},
author={S. Roberto Amendolia, Michael Brady, Richard McClatchey, Miguel
Mulet-Parada, Mohammed Odeh & Tony Solomonides},
journal={arXiv preprint arXiv:cs/0402006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402006},
primaryClass={cs.SE}
} | amendolia2004mammogrid: |
arxiv-671660 | cs/0402007 | An Integrated Approach for Extraction of Objects from XML and Transformation to Heterogeneous Object Oriented Databases | <|reference_start|>An Integrated Approach for Extraction of Objects from XML and Transformation to Heterogeneous Object Oriented Databases: The WISDOM project at CERN (the European Organization for Nuclear Research) uses XML for the replication of data between different data repositories in a heterogeneous operating system environment. For exchanging data from Web-resident databases, the data needs to be transformed into XML and back to the database format. Many different approaches are employed to do this transformation. This paper addresses issues that make this job more efficient and robust than existing approaches. It incorporates the World Wide Web Consortium (W3C) XML Schema specification in the database-XML relationship. Incorporation of the XML Schema exhibits significant improvements in XML content usage and reduces the limitations of DTD-based database XML services. Secondly, the paper explores the possibility of database-independent transformation of data between XML and different databases. It proposes a standard XML format that every serialized object should follow. This makes it possible to use objects of heterogeneous databases seamlessly using XML.<|reference_end|> | arxiv | @article{ahmad2004an,
title={An Integrated Approach for Extraction of Objects from XML and
Transformation to Heterogeneous Object Oriented Databases},
author={Uzair Ahmad, Mohammad Waseem Hassan, Arshad Ali, Richard McClatchey,
Ian Willers},
journal={arXiv preprint arXiv:cs/0402007},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402007},
primaryClass={cs.DB cs.SE}
} | ahmad2004an |
arxiv-671661 | cs/0402008 | A Use-Case Driven Approach in Requirements Engineering : The Mammogrid Project | <|reference_start|>A Use-Case Driven Approach in Requirements Engineering : The Mammogrid Project: We report on the application of the use-case modeling technique to identify and specify the user requirements of the MammoGrid project in an incremental and controlled iterative approach. Modeling has been carried out in close collaboration with clinicians and radiologists with no prior experience of use cases. The study reveals the advantages and limitations of applying this technique to requirements specification in the domains of breast cancer screening and mammography research, with implications for medical imaging more generally. In addition, this research has shown a return on investment in use-case modeling, reflected in shorter gaps between phases of the requirements engineering process. The qualitative result of this analysis leads us to propose that a use-case modeling approach may shorten the requirements engineering cycle for medical imaging.<|reference_end|> | arxiv | @article{odeh2004a,
title={A Use-Case Driven Approach in Requirements Engineering : The Mammogrid
Project},
author={Mohammed Odeh, Tamas Hauer, Richard McClatchey & Tony Solomonides},
journal={arXiv preprint arXiv:cs/0402008},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402008},
primaryClass={cs.DB cs.SE}
} | odeh2004a |
arxiv-671662 | cs/0402009 | Resolving Clinicians Queries Across a Grids Infrastructure | <|reference_start|>Resolving Clinicians Queries Across a Grids Infrastructure: The past decade has witnessed order of magnitude increases in computing power, data storage capacity and network speed, giving birth to applications which may handle large data volumes of increased complexity, distributed over the Internet. Grid computing promises to resolve many of the difficulties in facilitating medical image analysis to allow radiologists to collaborate without having to co-locate. The EU-funded MammoGrid project aims to investigate the feasibility of developing a Grid-enabled European database of mammograms and provide an information infrastructure which federates multiple mammogram databases. This will enable clinicians to develop new common, collaborative and co-operative approaches to the analysis of mammographic data. This paper focuses on one of the key requirements for large-scale distributed mammogram analysis: resolving queries across a grid-connected federation of images.<|reference_end|> | arxiv | @article{estrella2004resolving,
title={Resolving Clinicians Queries Across a Grids Infrastructure},
author={F Estrella, C del Frate, T Hauer, R McClatchey, M Odeh, D Rogulin, S R
Amendolia, D Schottlander, T Solomonides, R Warren},
journal={arXiv preprint arXiv:cs/0402009},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402009},
primaryClass={cs.DB cs.SE}
} | estrella2004resolving |
arxiv-671663 | cs/0402010 | Encapsulation for Practical Simplification Procedures | <|reference_start|>Encapsulation for Practical Simplification Procedures: ACL2 was used to prove properties of two simplification procedures. The procedures differ in complexity but solve the same programming problem that arises in the context of a resolution/paramodulation theorem proving system. Term rewriting is at the core of the two procedures, but details of the rewriting procedure itself are irrelevant. The ACL2 encapsulate construct was used to assert the existence of the rewriting function and to state some of its properties. Termination, irreducibility, and soundness properties were established for each procedure. The availability of the encapsulation mechanism in ACL2 is considered essential to rapid and efficient verification of this kind of algorithm.<|reference_end|> | arxiv | @article{matlin2004encapsulation,
title={Encapsulation for Practical Simplification Procedures},
author={Olga Shumsky Matlin and William McCune},
journal={arXiv preprint arXiv:cs/0402010},
year={2004},
number={Preprint ANL/MCS-P1039-0403},
archivePrefix={arXiv},
eprint={cs/0402010},
primaryClass={cs.LO}
} | matlin2004encapsulation |
arxiv-671664 | cs/0402011 | Accurately modeling the Internet topology | <|reference_start|>Accurately modeling the Internet topology: Based on measurements of Internet topology data, we found that there are two mechanisms which are necessary for the correct modeling of the Internet topology at the Autonomous Systems (AS) level: the Interactive Growth of new nodes and new internal links, and a nonlinear preferential attachment, where the preference probability is described by a positive-feedback mechanism. Based on the above mechanisms, we introduce the Positive-Feedback Preference (PFP) model which accurately reproduces many topological properties of the AS-level Internet, including: degree distribution, rich-club connectivity, the maximum degree, shortest path length, short cycles, disassortative mixing and betweenness centrality. The PFP model is a phenomenological model which provides a novel insight into the evolutionary dynamics of real complex networks.<|reference_end|> | arxiv | @article{zhou2004accurately,
title={Accurately modeling the Internet topology},
author={Shi Zhou and Raul J. Mondragon},
journal={Physical Review E, vol. 70, no. 066108, Dec. 2004.},
year={2004},
doi={10.1103/PhysRevE.70.066108},
archivePrefix={arXiv},
eprint={cs/0402011},
primaryClass={cs.NI}
} | zhou2004accurately |
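The abstract's two mechanisms can be mimicked in a short Python simulation. The preference weight below follows the positive-feedback form k^(1 + delta*log10 k); the value delta = 0.048 and the simplified interactive-growth step (one host plus one internal link per new node) are our assumptions, not a faithful reproduction of the paper's model.

    import math, random
    from collections import defaultdict

    def pfp_like(n_nodes, delta=0.048):
        deg = defaultdict(int)
        for u, v in [(0, 1), (1, 2), (0, 2)]:    # seed triangle
            deg[u] += 1; deg[v] += 1

        def pick(exclude=()):
            # Positive-feedback preference: P(i) ~ k_i^(1 + delta*log10 k_i).
            nodes = [v for v in deg if v not in exclude]
            w = [deg[v] ** (1 + delta * math.log10(deg[v])) for v in nodes]
            return random.choices(nodes, weights=w)[0]

        for new in range(3, n_nodes):
            host = pick()                         # new node joins a host
            deg[new] += 1; deg[host] += 1
            peer = pick(exclude=(host, new))      # simplified interactive
            deg[host] += 1; deg[peer] += 1        # growth: internal link
        return deg

    deg = pfp_like(10000)
    print(max(deg.values()))   # a few very rich nodes emerge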
arxiv-671665 | cs/0402012 | A Knowledge-Theoretic Analysis of Uniform Distributed Coordination and Failure Detectors | <|reference_start|>A Knowledge-Theoretic Analysis of Uniform Distributed Coordination and Failure Detectors: It is shown that, in a precise sense, if there is no bound on the number of faulty processes in a system with unreliable but fair communication, Uniform Distributed Coordination (UDC) can be attained if and only if a system has perfect failure detectors. This result is generalized to the case where there is a bound t on the number of faulty processes. It is shown that a certain type of generalized failure detector is necessary and sufficient for achieving UDC in a context with at most t faulty processes. Reasoning about processes' knowledge as to which other processes are faulty plays a key role in the analysis.<|reference_end|> | arxiv | @article{halpern2004a,
title={A Knowledge-Theoretic Analysis of Uniform Distributed Coordination and
Failure Detectors},
author={Joseph Y. Halpern and Aleta Ricciardi},
journal={arXiv preprint arXiv:cs/0402012},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402012},
primaryClass={cs.DC}
} | halpern2004a |
arxiv-671666 | cs/0402013 | Corollaries on the fixpoint completion: studying the stable semantics by means of the Clark completion | <|reference_start|>Corollaries on the fixpoint completion: studying the stable semantics by means of the Clark completion: The fixpoint completion fix(P) of a normal logic program P is a program transformation such that the stable models of P are exactly the models of the Clark completion of fix(P). This is well-known and was studied by Dung and Kanchanasut (1989). The correspondence, however, goes much further: The Gelfond-Lifschitz operator of P coincides with the immediate consequence operator of fix(P), as shown by Wendt (2002), and even carries over to standard operators used for characterizing the well-founded and the Kripke-Kleene semantics. We will apply this knowledge to the study of the stable semantics, and this will allow us to almost effortlessly derive new results concerning fixed-point and metric-based semantics, and neural-symbolic integration.<|reference_end|> | arxiv | @article{hitzler2004corollaries,
title={Corollaries on the fixpoint completion: studying the stable semantics by
means of the Clark completion},
author={Pascal Hitzler},
journal={arXiv preprint arXiv:cs/0402013},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402013},
primaryClass={cs.AI cs.LO}
} | hitzler2004corollaries |
arxiv-671667 | cs/0402014 | Self-Organising Networks for Classification: developing Applications to Science Analysis for Astroparticle Physics | <|reference_start|>Self-Organising Networks for Classification: developing Applications to Science Analysis for Astroparticle Physics: Physics analysis in astroparticle experiments requires the capability of recognizing new phenomena; in order to establish what is new, it is important to develop tools for automatic classification, able to compare the final result with data from different detectors. A typical example is the problem of Gamma Ray Burst detection, classification, and possible association to known sources: for this task, in the coming years, physicists will need tools to associate data from optical databases, from satellite experiments (EGRET, GLAST), and from Cherenkov telescopes (MAGIC, HESS, CANGAROO, VERITAS).<|reference_end|> | arxiv | @article{de angelis2004self-organising,
title={Self-Organising Networks for Classification: developing Applications to
Science Analysis for Astroparticle Physics},
author={A. De Angelis, P. Boinee, M. Frailis, E. Milotti},
journal={arXiv preprint arXiv:cs/0402014},
year={2004},
doi={10.1016/j.physa.2004.02.023},
archivePrefix={arXiv},
eprint={cs/0402014},
primaryClass={cs.NE astro-ph cs.AI}
} | de angelis2004self-organising |
arxiv-671668 | cs/0402015 | A Preliminary Study for the development of an Early Method for the Measurement in Function Points of a Software Product | <|reference_start|>A Preliminary Study for the development of an Early Method for the Measurement in Function Points of a Software Product: Function Points Analysis (FPA), due to A.J. Albrecht, is a method for determining the functional size of software products. The International Function Point Users Group (IFPUG) establishes FPA as a standard in software functional size measurement. The IFPUG [3] [4] method follows Albrecht's method and incorporates, in its successive versions, modifications to the rules and hints with the intention of improving it [7]. The documentation level required to apply the method is the functional specification, which corresponds to level I in Rudolph's classification [8]. This documentation is available only with difficulty to companies that develop software for third parties at the time they have to prepare the budget for the development. We therefore face the need to develop an early method [6] [9] for measuring the functional size of a software product, which we abbreviate as EFPM (Early Function Point Method). The documentation required to apply EFPM would be the User Requirements or some analogous documentation. This is part of a research effort now in progress at Oviedo University. In this article we show only the following results: from measurements of a set of projects using the IFPUG method v. 4.1, we obtain the linear correlation coefficients between the total number of Function Points for each project and the counts of ILFs, of ILFs+EIFs, and of EIs+EOs+EQs. Using these preliminary results we compute the regression functions. These results allow us to determine the factors to be considered in the development of EFPM and to estimate the function points.<|reference_end|> | arxiv | @article{monge2004a,
title={A Preliminary Study for the development of an Early Method for the
Measurement in Function Points of a Software Product},
author={Ramon Asensio Monge, Francisco Sanchis Marco, Fernando Torre Cervigon,
Victor Garcia Garcia, Gustavo Uria Paino},
journal={arXiv preprint arXiv:cs/0402015},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402015},
primaryClass={cs.SE}
} | monge2004a |
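The statistical step described here is ordinary least squares plus Pearson correlation; a minimal Python sketch follows (the project data are hypothetical placeholders, not the study's measurements):

    import math

    def fit_line(xs, ys):
        # Ordinary least squares y = a*x + c and Pearson correlation r.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        a = sxy / sxx
        c = my - a * mx
        r = sxy / math.sqrt(sxx * syy)
        return a, c, r

    # Hypothetical data: ILF counts vs. total function points per project.
    ilfs = [5, 8, 12, 20, 31]
    fps  = [120, 180, 260, 410, 640]
    print(fit_line(ilfs, fps))   # slope, intercept, correlation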
arxiv-671669 | cs/0402016 | Perspects in astrophysical databases | <|reference_start|>Perspects in astrophysical databases: Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large datasets. This calls for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large datasets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.<|reference_end|> | arxiv | @article{frailis2004perspects,
title={Perspects in astrophysical databases},
author={M. Frailis, A. De Angelis, V. Roberto},
journal={Physica A338 (2004) 54-59},
year={2004},
doi={10.1016/j.physa.2004.02.024},
archivePrefix={arXiv},
eprint={cs/0402016},
primaryClass={cs.DB astro-ph}
} | frailis2004perspects |
arxiv-671670 | cs/0402017 | Alchemi: A NET-based Grid Computing Framework and its Integration into Global Grids | <|reference_start|>Alchemi: A NET-based Grid Computing Framework and its Integration into Global Grids: Computational grids that couple geographically distributed resources are becoming the de-facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been primarily written for Unix-class operating systems, thus severely limiting the ability to effectively utilize the computing resources of the vast majority of desktop computers i.e. those running variants of the Microsoft Windows operating system. Addressing Windows-based grid computing is particularly important from the software industry's viewpoint where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for grid computing in this context. In this paper we present Alchemi, a .NET-based grid computing framework that provides the runtime machinery and programming environment required to construct desktop grids and develop grid applications. It allows flexible application composition by supporting an object-oriented grid application programming model in addition to a grid job model. Cross-platform support is provided via a web services interface and a flexible execution model supports dedicated and non-dedicated (voluntary) execution by grid nodes.<|reference_end|> | arxiv | @article{luther2004alchemi:,
title={Alchemi: A .NET-based Grid Computing Framework and its Integration into
Global Grids},
author={Akshay Luther, Rajkumar Buyya, Rajiv Ranjan, and Srikumar Venugopal},
journal={Technical Report, GRIDS-TR-2003-8, Grid Computing and Distributed
Systems Laboratory, University of Melbourne, Australia, December 2003},
year={2004},
number={GRIDS-TR-2003-8},
archivePrefix={arXiv},
eprint={cs/0402017},
primaryClass={cs.DC}
} | luther2004alchemi: |
arxiv-671671 | cs/0402018 | P2P Networks for Content Sharing | <|reference_start|>P2P Networks for Content Sharing: Peer-to-peer (P2P) technologies have been widely used for content sharing, popularly called "file-swapping" networks. This chapter gives a broad overview of content sharing P2P technologies. It starts with the fundamental concept of P2P computing, followed by an analysis of network topologies used in peer-to-peer systems. Next, three milestone peer-to-peer technologies (Napster, Gnutella, and FastTrack) are explored in detail, and the chapter concludes with a comparison table in the last section.<|reference_end|> | arxiv | @article{ding2004p2p,
title={P2P Networks for Content Sharing},
author={Choon Hoong Ding, Sarana Nutanong, and Rajkumar Buyya},
journal={Technical Report, GRIDS-TR-2003-7, Grid Computing and Distributed
Systems Laboratory, University of Melbourne, Australia, December 2003},
year={2004},
number={GRIDS-TR-2003-7},
archivePrefix={arXiv},
eprint={cs/0402018},
primaryClass={cs.DC}
} | ding2004p2p |
arxiv-671672 | cs/0402019 | The Munich Rent Advisor: A Success for Logic Programming on the Internet | <|reference_start|>The Munich Rent Advisor: A Success for Logic Programming on the Internet: Most cities in Germany regularly publish a booklet called the {\em Mietspiegel}. It basically contains a verbal description of an expert system. It allows the calculation of the estimated fair rent for a flat. By hand, one may need a weekend to do so. With our computerized version, the {\em Munich Rent Advisor}, the user just fills in a form in a few minutes and the rent is calculated immediately. We also extended the functionality and applicability of the {\em Mietspiegel} so that the user need not answer all questions on the form. The key to computing with partial information using high-level programming was to use constraint logic programming. We rely on the internet, and more specifically the World Wide Web, to provide this service to a broad user group. More than ten thousand people have used our service in the last three years. This article describes the experiences in implementing and using the {\em Munich Rent Advisor}. Our results suggest that logic programming with constraints can be an important ingredient in intelligent internet systems.<|reference_end|> | arxiv | @article{fruehwirth2004the,
title={The Munich Rent Advisor: A Success for Logic Programming on the Internet},
author={Thom Fruehwirth and Slim Abdennadher},
journal={arXiv preprint arXiv:cs/0402019},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402019},
primaryClass={cs.AI cs.DS}
} | fruehwirth2004the |
arxiv-671673 | cs/0402020 | Geometrical Complexity of Classification Problems | <|reference_start|>Geometrical Complexity of Classification Problems: Despite encouraging recent progress in ensemble approaches, classification methods seem to have reached a plateau in development. Further advances depend on a better understanding of geometrical and topological characteristics of point sets in high-dimensional spaces, the preservation of such characteristics under feature transformations and sampling processes, and their interaction with geometrical models used in classifiers. We discuss an attempt to measure such properties from data sets and relate them to classifier accuracies.<|reference_end|> | arxiv | @article{ho2004geometrical,
title={Geometrical Complexity of Classification Problems},
author={Tin Kam Ho},
journal={arXiv preprint arXiv:cs/0402020},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402020},
primaryClass={cs.CV}
} | ho2004geometrical |
arxiv-671674 | cs/0402021 | A Numerical Example on the Principles of Stochastic Discrimination | <|reference_start|>A Numerical Example on the Principles of Stochastic Discrimination: Studies on ensemble methods for classification suffer from the difficulty of modeling the complementary strengths of the components. Kleinberg's theory of stochastic discrimination (SD) addresses this rigorously via mathematical notions of enrichment, uniformity, and projectability of an ensemble. We explain these concepts via a very simple numerical example that captures the basic principles of the SD theory and method. We focus on a fundamental symmetry in point set covering that is the key observation leading to the foundation of the theory. We believe a better understanding of the SD method will lead to developments of better tools for analyzing other ensemble methods.<|reference_end|> | arxiv | @article{ho2004a,
title={A Numerical Example on the Principles of Stochastic Discrimination},
author={Tin Kam Ho},
journal={arXiv preprint arXiv:cs/0402021},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402021},
primaryClass={cs.CV cs.LG}
} | ho2004a |
arxiv-671675 | cs/0402022 | Automatically Generating Interfaces for Personalized Interaction with Digital Libraries | <|reference_start|>Automatically Generating Interfaces for Personalized Interaction with Digital Libraries: We present an approach to automatically generate interfaces supporting personalized interaction with digital libraries; these interfaces augment the user-DL dialog by empowering the user to (optionally) supply out-of-turn information during an interaction, flatten or restructure the dialog, and enquire about dialog options. Interfaces generated using this approach for CITIDEL are described.<|reference_end|> | arxiv | @article{perugini2004automatically,
title={Automatically Generating Interfaces for Personalized Interaction with
Digital Libraries},
author={Saverio Perugini, Naren Ramakrishnan, and Edward A. Fox},
journal={arXiv preprint arXiv:cs/0402022},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402022},
primaryClass={cs.DL cs.HC}
} | perugini2004automatically |
arxiv-671676 | cs/0402023 | A Service-Based Approach for Managing Mammography Data | <|reference_start|>A Service-Based Approach for Managing Mammography Data: Grid-based technologies are emerging as a potential open-source standards-based solution for managing and collaborating distributed resources. In view of these new computing solutions, the Mammogrid project is developing a service-based and Grid-aware application which manages a European-wide database of mammograms. Medical conditions such as breast cancer, and mammograms as images, are extremely complex with many dimensions of variability across the population. An effective solution for the management of disparate mammogram data sources is a federation of autonomous multi-centre sites which transcends national boundaries. The Mammogrid solution utilizes the Grid technologies to integrate geographically distributed data sets. The Mammogrid application will explore the potential of the Grid to support effective co-working among radiologists throughout the EU. This paper outlines the Mammogrid service-based approach in managing a federation of grid-connected mammography databases.<|reference_end|> | arxiv | @article{estrella2004a,
title={A Service-Based Approach for Managing Mammography Data},
author={Florida Estrella, Richard McClatchey, Dmitry Rogulina, Roberto
Amendolia, Tony Solomonides},
journal={arXiv preprint arXiv:cs/0402023},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402023},
primaryClass={cs.DB cs.SE}
} | estrella2004a |
arxiv-671677 | cs/0402024 | Pattern Reification as the Basis for Description-Driven Systems | <|reference_start|>Pattern Reification as the Basis for Description-Driven Systems: One of the main factors driving object-oriented software development for information systems is the requirement for systems to be tolerant to change. To address this issue in designing systems, this paper proposes a pattern-based, object-oriented, description-driven system (DDS) architecture as an extension to the standard UML four-layer meta-model. A DDS architecture is proposed in which aspects of both static and dynamic systems behavior can be captured via descriptive models and meta-models. The proposed architecture embodies four main elements - firstly, the adoption of a multi-layered meta-modeling architecture and reflective meta-level architecture, secondly the identification of four data modeling relationships that can be made explicit such that they can be modified dynamically, thirdly the identification of five design patterns which have emerged from practice and have proved essential in providing reusable building blocks for data management, and fourthly the encoding of the structural properties of the five design patterns by means of one fundamental pattern, the Graph pattern. A practical example of this philosophy, the CRISTAL project, is used to demonstrate the use of description-driven data objects to handle system evolution.<|reference_end|> | arxiv | @article{estrella2004pattern,
title={Pattern Reification as the Basis for Description-Driven Systems},
author={Florida Estrella, Zsolt Kovacs, Jean-Marie Le Goff, Richard
McClatchey, Tony Solomonides & Norbert Toth},
journal={Journal of Software and System Modeling Vol 2 No 2, pp 108-119
Springer-Verlag, ISSN: 1619-1366, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402024},
primaryClass={cs.DB cs.SE}
} | estrella2004pattern |
arxiv-671678 | cs/0402025 | A perspective on the Healthgrid initiative | <|reference_start|>A perspective on the Healthgrid initiative: This paper presents a perspective on the Healthgrid initiative which involves European projects deploying pioneering applications of grid technology in the health sector. In the last couple of years, several grid projects have been funded on health-related issues at national and European levels. A crucial issue is to maximize their cross-fertilization in the context of an environment where data of medical interest can be stored and made easily available to the different actors in healthcare: physicians, healthcare centres and administrations, and of course the citizens. The Healthgrid initiative, represented by the Healthgrid association (http://www.healthgrid.org), was initiated to bring the necessary long-term continuity and to reinforce and promote awareness of the possibilities and advantages linked to the deployment of GRID technologies in health. Technologies to address the specific requirements for medical applications are under development. Results from the DataGrid and other projects are given as examples of early applications.<|reference_end|> | arxiv | @article{breton2004a,
title={A perspective on the Healthgrid initiative},
author={V. Breton, A.E.Solomonides, R.H.McClatchey},
journal={arXiv preprint arXiv:cs/0402025},
year={2004},
doi={10.1109/CCGrid.2004.1336598},
archivePrefix={arXiv},
eprint={cs/0402025},
primaryClass={cs.DB cs.SE}
} | breton2004a |
arxiv-671679 | cs/0402026 | Redundancy and Robustness of the AS-level Internet topology and its models | <|reference_start|>Redundancy and Robustness of the AS-level Internet topology and its models: A comparison between the topological properties of the measured Internet topology, at the autonomous system level (AS graph), and the equivalent graphs generated by two different power law topology generators is presented. Only one of the synthetic generators reproduces the tier connectivity of the AS graph.<|reference_end|> | arxiv | @article{zhou2004redundancy,
title={Redundancy and Robustness of the AS-level Internet topology and its
models},
author={Shi Zhou and Raul J. Mondragon},
journal={IEE Electronic Letters, vol. 40, no. 2, pp. 151-15. on 22 January
2004},
year={2004},
doi={10.1049/el:20040078},
archivePrefix={arXiv},
eprint={cs/0402026},
primaryClass={cs.NI}
} | zhou2004redundancy |
arxiv-671680 | cs/0402027 | Efficient and Scalable Barrier over Quadrics and Myrinet with a New NIC-Based Collective Message Passing Protocol | <|reference_start|>Efficient and Scalable Barrier over Quadrics and Myrinet with a New NIC-Based Collective Message Passing Protocol: Modern interconnects often have programmable processors in the network interface that can be utilized to offload communication processing from the host CPU. In this paper, we explore different schemes to support collective operations at the network interface and propose a new collective protocol. With barrier as an initial case study, we have demonstrated that much of the communication processing can be greatly simplified with this collective protocol. Accordingly, we have designed and implemented efficient and scalable NIC-based barrier operations over two high performance interconnects, Quadrics and Myrinet. Our evaluation shows that, over a Quadrics cluster of 8 nodes with ELan3 Network, the NIC-based barrier operation achieves a barrier latency of only 5.60$\mu$s. This result is an improvement by a factor of 2.48 over the Elanlib tree-based barrier operation. Over a Myrinet cluster of 8 nodes with LANai-XP NIC cards, a barrier latency of 14.20$\mu$s is achieved. This is an improvement by a factor of 2.64 over the host-based barrier algorithm. Furthermore, an analytical model developed for the proposed scheme indicates that a NIC-based barrier operation on a 1024-node cluster can be performed with only 22.13$\mu$s latency over Quadrics and with 38.94$\mu$s latency over Myrinet. These results indicate the potential for developing high performance communication subsystems for next generation clusters.<|reference_end|> | arxiv | @article{yu2004efficient,
title={Efficient and Scalable Barrier over Quadrics and Myrinet with a New
NIC-Based Collective Message Passing Protocol},
author={Weikuan Yu, Darius Buntinas, Rich L. Graham, and Dhabaleswar K. Panda},
journal={arXiv preprint arXiv:cs/0402027},
year={2004},
number={Preprint ANL/MCS-P1121-0204},
archivePrefix={arXiv},
eprint={cs/0402027},
primaryClass={cs.DC cs.AR}
} | yu2004efficient |
arxiv-671681 | cs/0402028 | The lattice dimension of a graph | <|reference_start|>The lattice dimension of a graph: We describe a polynomial time algorithm for, given an undirected graph G, finding the minimum dimension d such that G may be isometrically embedded into the d-dimensional integer lattice Z^d.<|reference_end|> | arxiv | @article{eppstein2004the,
title={The lattice dimension of a graph},
author={David Eppstein},
journal={Eur. J. Combinatorics 26(6):585-592, 2005},
year={2004},
doi={10.1016/j.ejc.2004.05.001},
archivePrefix={arXiv},
eprint={cs/0402028},
primaryClass={cs.DS math.CO}
} | eppstein2004the |
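The object being minimized can be made concrete with a checker for isometric lattice embeddings: graph distance must equal L1 distance between the assigned lattice points for every pair of vertices. The Python sketch below verifies a given embedding; the paper's contribution, an algorithm that finds the minimum dimension d, is not reproduced here.

    from itertools import combinations
    from collections import deque

    def graph_distances(adj):
        # All-pairs shortest paths by BFS; adj maps node -> neighbor list.
        dist = {}
        for s in adj:
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        q.append(v)
            dist[s] = d
        return dist

    def is_isometric(adj, emb):
        # emb maps node -> lattice point in Z^d; check L1 == graph distance.
        dist = graph_distances(adj)
        return all(
            sum(abs(a - b) for a, b in zip(emb[u], emb[v])) == dist[u][v]
            for u, v in combinations(adj, 2))

    # A 4-cycle embeds isometrically in Z^2.
    c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    emb = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
    print(is_isometric(c4, emb))  # True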
arxiv-671682 | cs/0402029 | Mapping Topics and Topic Bursts in PNAS | <|reference_start|>Mapping Topics and Topic Bursts in PNAS: Scientific research is highly dynamic. New areas of science continually evolve; others gain or lose importance, merge or split. Due to the steady increase in the number of scientific publications it is hard to keep an overview of the structure and dynamic development of one's own field of science, much less all scientific domains. However, knowledge of hot topics, emergent research frontiers, or change of focus in certain areas is a critical component of resource allocation decisions in research labs, governmental institutions, and corporations. This paper demonstrates the utilization of Kleinberg's burst detection algorithm, co-word occurrence analysis, and graph layout techniques to generate maps that support the identification of major research topics and trends. The approach was applied to analyze and map the complete set of papers published in the Proceedings of the National Academy of Sciences (PNAS) in the years 1982-2001. Six domain experts examined and commented on the resulting maps in an attempt to reconstruct the evolution of major research areas covered by PNAS.<|reference_end|> | arxiv | @article{mane2004mapping,
title={Mapping Topics and Topic Bursts in PNAS},
author={Ketan Mane and Katy B\"orner},
journal={arXiv preprint arXiv:cs/0402029},
year={2004},
doi={10.1073/pnas.0307626100},
archivePrefix={arXiv},
eprint={cs/0402029},
primaryClass={cs.IR cs.HC}
} | mane2004mapping |
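Of the three ingredients, co-word occurrence analysis is the simplest to sketch: count how often pairs of terms co-occur in the same paper and use the counts as edge weights for layout. A minimal Python version follows (toy documents; Kleinberg's burst detection is a separate state-machine computation and is not shown):

    from collections import Counter
    from itertools import combinations

    def coword_matrix(docs):
        # Count, for every word pair, the number of documents in which
        # both words appear together.
        pair_counts = Counter()
        for words in docs:
            for a, b in combinations(sorted(set(words)), 2):
                pair_counts[(a, b)] += 1
        return pair_counts

    docs = [["gene", "expression", "cancer"],
            ["gene", "sequence"],
            ["cancer", "gene", "therapy"]]
    print(coword_matrix(docs).most_common(3))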
arxiv-671683 | cs/0402030 | Computational complexity and simulation of rare events of Ising spin glasses | <|reference_start|>Computational complexity and simulation of rare events of Ising spin glasses: We discuss the computational complexity of random 2D Ising spin glasses, which represent an interesting class of constraint satisfaction problems for black box optimization. Two extremal cases are considered: (1) the +/- J spin glass, and (2) the Gaussian spin glass. We also study a smooth transition between these two extremal cases. The computational complexity of all studied spin glass systems is found to be dominated by rare events of extremely hard spin glass samples. We show that the complexity of all studied spin glass systems is closely related to the Fréchet extremal value distribution. In a hybrid algorithm that combines the hierarchical Bayesian optimization algorithm (hBOA) with a deterministic bit-flip hill climber, the number of steps performed by both the global searcher (hBOA) and the local searcher follow Fréchet distributions. Nonetheless, unlike in methods based purely on local search, the parameters of these distributions confirm good scalability of hBOA with local search. We further argue that standard performance measures for optimization algorithms--such as the average number of evaluations until convergence--can be misleading. Finally, our results indicate that for highly multimodal constraint satisfaction problems, such as Ising spin glasses, recombination-based search can provide qualitatively better results than mutation-based search.<|reference_end|> | arxiv | @article{pelikan2004computational,
title={Computational complexity and simulation of rare events of Ising spin
glasses},
author={Martin Pelikan, Jiri Ocenasek, Simon Trebst, Matthias Troyer, and
Fabien Alet},
journal={arXiv preprint arXiv:cs/0402030},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402030},
primaryClass={cs.NE cs.AI}
} | pelikan2004computational |
arxiv-671684 | cs/0402031 | Parameter-less hierarchical BOA | <|reference_start|>Parameter-less hierarchical BOA: The parameter-less hierarchical Bayesian optimization algorithm (hBOA) enables the use of hBOA without the need for tuning parameters for solving each problem instance. There are three crucial parameters in hBOA: (1) the selection pressure, (2) the window size for restricted tournaments, and (3) the population size. Although both the selection pressure and the window size influence hBOA performance, performance should remain low-order polynomial with standard choices of these two parameters. However, there is no standard population size that would work for all problems of interest and the population size must thus be eliminated in a different way. To eliminate the population size, the parameter-less hBOA adopts the population-sizing technique of the parameter-less genetic algorithm. Based on the existing theory, the parameter-less hBOA should be able to solve nearly decomposable and hierarchical problems in a quadratic or subquadratic number of function evaluations without the need for setting any parameters whatsoever. A number of experiments are presented to verify the scalability of the parameter-less hBOA.<|reference_end|> | arxiv | @article{pelikan2004parameter-less,
title={Parameter-less hierarchical BOA},
author={Martin Pelikan and Tz-Kai Lin},
journal={arXiv preprint arXiv:cs/0402031},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402031},
primaryClass={cs.NE cs.AI}
} | pelikan2004parameter-less |
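
For readers unfamiliar with the population-sizing technique mentioned above, a deliberately simplified caricature follows: instead of interleaving populations of sizes N, 2N, 4N, ... as the parameter-less GA really does, it simply restarts with a doubled population until the problem is solved. The run_ga hook and the onemax demo are assumptions of the sketch, not code from hBOA.

    import random

    def parameterless_solve(run_ga, solved, base_size=10, max_size=1 << 20):
        """Doubling-restart caricature of parameter-less population sizing."""
        size = base_size
        while size <= max_size:
            best = run_ga(size)
            if solved(best):
                return best, size
            size *= 2            # undersized run failed: retry with twice the population
        raise RuntimeError("population size limit reached")

    def run_ga(pop_size, n=40, gens=60):
        """A crude selecto-recombinative GA on onemax (no mutation)."""
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=sum, reverse=True)
            elite = pop[: max(2, pop_size // 2)]     # truncation selection
            pop = [[random.choice(bits) for bits in zip(*random.sample(elite, 2))]
                   for _ in range(pop_size)]         # uniform crossover
            pop[0] = elite[0]                        # keep the best found so far
        return max(pop, key=sum)

    best, size = parameterless_solve(run_ga, solved=lambda b: sum(b) == 40)
    print(f"solved with population size {size}")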
arxiv-671685 | cs/0402032 | Fitness inheritance in the Bayesian optimization algorithm | <|reference_start|>Fitness inheritance in the Bayesian optimization algorithm: This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.<|reference_end|> | arxiv | @article{pelikan2004fitness,
title={Fitness inheritance in the Bayesian optimization algorithm},
author={Martin Pelikan and Kumara Sastry},
journal={arXiv preprint arXiv:cs/0402032},
year={2004},
number={IlliGAL Report No. 2004009},
archivePrefix={arXiv},
eprint={cs/0402032},
primaryClass={cs.NE cs.AI cs.LG}
} | pelikan2004fitness |
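
The evaluation-saving scheme above reduces to a few lines once a surrogate is fixed. In the sketch below, the inherited fitness is simply the mean of the two parents' fitnesses; this stands in for the Bayesian-network estimate actually used in BOA, and the 90% inheritance proportion is just an illustrative default.

    import random

    def evaluate_offspring(offspring, true_fitness, inherit_fraction=0.9):
        """offspring: list of (genome, parent_fitness_a, parent_fitness_b).

        Returns (fitness list, number of true evaluations actually spent).
        """
        results, true_evals = [], 0
        for genome, fa, fb in offspring:
            if random.random() < inherit_fraction:
                results.append((fa + fb) / 2.0)       # inherited (estimated) fitness
            else:
                results.append(true_fitness(genome))  # genuine, expensive evaluation
                true_evals += 1
        return results, true_evals

    # Toy usage on onemax: about 90% of the expensive evaluations are skipped.
    pop = [([random.randint(0, 1) for _ in range(20)], 10.0, 12.0)
           for _ in range(100)]
    fits, spent = evaluate_offspring(pop, true_fitness=sum)
    print(f"{spent} true evaluations for {len(pop)} offspring")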
arxiv-671686 | cs/0402033 | Recycling Computed Answers in Rewrite Systems for Abduction | <|reference_start|>Recycling Computed Answers in Rewrite Systems for Abduction: In rule-based systems, goal-oriented computations correspond naturally to the possible ways that an observation may be explained. In some applications, we need to compute explanations for a series of observations with the same domain. The question arises whether previously computed answers can be recycled. A yes answer could result in substantial savings of repeated computations. For systems based on classical logic, the answer is YES. For nonmonotonic systems, however, one tends to believe that the answer should be NO, since recycling is a form of adding information. In this paper, we show that computed answers can always be recycled, in a nontrivial way, for the class of rewrite procedures that we proposed earlier for logic programs with negation. We present some experimental results on an encoding of the logistics domain.<|reference_end|> | arxiv | @article{lin2004recycling,
title={Recycling Computed Answers in Rewrite Systems for Abduction},
author={Fangzhen Lin and Jia-Huai You},
journal={arXiv preprint arXiv:cs/0402033},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402033},
primaryClass={cs.AI}
} | lin2004recycling |
arxiv-671687 | cs/0402034 | Kolmogorov complexity and symmetric relational structures | <|reference_start|>Kolmogorov complexity and symmetric relational structures: We study partitions of Fra\"{\i}ss\'{e} limits of classes of finite relational structures where the partitions are encoded by infinite binary sequences which are random in the sense of Kolmogorov, Chaitin and Solomonoff. It is shown that partitioning a Fra\"{\i}ss\'{e} limit by a random sequence preserves the limit property of the object.<|reference_end|> | arxiv | @article{fouché2004kolmogorov,
title={Kolmogorov complexity and symmetric relational structures},
author={W.L. Fouch\'e and P.H. Potgieter},
journal={The Journal of Symbolic Logic, Volume 63, Number 3, September
1998, 1083-1094},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402034},
primaryClass={cs.CC cs.DM}
} | fouché2004kolmogorov |
arxiv-671688 | cs/0402035 | Memory As A Monadic Control Construct In Problem-Solving | <|reference_start|>Memory As A Monadic Control Construct In Problem-Solving: Recent advances in the study and design of programming languages have established a standard way of grounding the representation of computational systems in category theory. These formal results led to a better understanding of issues of control and side-effects in functional and imperative languages. This framework can be successfully applied to the investigation of the performance of Artificial Intelligence (AI) inference and cognitive systems. In this paper, we delineate a categorical formalisation of memory as a control structure driving performance in inference systems. Abstracting away control mechanisms from three widely used representations of memory in cognitive systems (scripts, production rules and clusters), we explain how categorical triples (monads) capture the interaction between learning and problem-solving.<|reference_end|> | arxiv | @article{chauvet2004memory,
title={Memory As A Monadic Control Construct In Problem-Solving},
author={Jean-Marie Chauvet},
journal={arXiv preprint arXiv:cs/0402035},
year={2004},
number={ND-2004-1},
archivePrefix={arXiv},
eprint={cs/0402035},
primaryClass={cs.AI}
} | chauvet2004memory |
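
Since the record stays abstract, one concrete reading helps: a State monad threading a memory through a computation, with unit and bind as the operations of the triple. This is a generic functional-programming illustration in Python, assumed for exposition; it is not code or notation from the paper.

    class State:
        """State monad: wraps a function from memory to (value, new memory)."""
        def __init__(self, run):
            self.run = run

        @staticmethod
        def unit(value):                    # the triple's unit (eta)
            return State(lambda mem: (value, mem))

        def bind(self, f):                  # sequencing, with memory threaded through
            def run(mem):
                value, mem2 = self.run(mem)
                return f(value).run(mem2)
            return State(run)

    # Primitive actions over a dict-shaped memory.
    def recall(key):
        return State(lambda mem: (mem.get(key), mem))

    def store(key, value):
        return State(lambda mem: (None, {**mem, key: value}))

    # A tiny problem-solving step: recall a partial plan, extend it, store it back.
    step = recall("plan").bind(
        lambda plan: store("plan", (plan or []) + ["apply-rule"]))

    value, memory = step.bind(lambda _: recall("plan")).run({})
    print(memory)    # {'plan': ['apply-rule']}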
arxiv-671689 | cs/0402036 | Towards a Model-Based Framework for Integrating Usability and Software Engineering Life Cycles | <|reference_start|>Towards a Model-Based Framework for Integrating Usability and Software Engineering Life Cycles: In this position paper we propose a process model that provides a development infrastructure in which the usability engineering and software engineering life cycles co-exist in complementary roles. We describe the motivation, hurdles, rationale, arguments, and implementation plan for the need, specification, and the usefulness of such a model.<|reference_end|> | arxiv | @article{pyla2004towards,
title={Towards a Model-Based Framework for Integrating Usability and Software
Engineering Life Cycles},
author={Pardha S. Pyla and Manuel A. Perez-Quinones and James D. Arthur and H.
Rex Hartson},
journal={arXiv preprint arXiv:cs/0402036},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402036},
primaryClass={cs.HC}
} | pyla2004towards |
arxiv-671690 | cs/0402037 | The pre-history of quantum computation | <|reference_start|>The pre-history of quantum computation: The main ideas behind developments in the theory and technology of quantum computation were formulated in the late 1970s and early 1980s by two physicists in the West and a mathematician in the former Soviet Union. It is not generally known in the West that the subject has roots in the Russian technical literature. The author hopes to present as impartial a synthesis as possible of the early history of thought on this subject. The role of reversible and irreversible computational processes is examined briefly as it relates to the origins of quantum computing and the so-called Information Paradox in physics.<|reference_end|> | arxiv | @article{potgieter2004the,
title={The pre-history of quantum computation},
author={P.H. Potgieter},
journal={Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie, Vol
23, Issue 1 / 2, Mar / Jun, 2-6 (2004)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402037},
primaryClass={cs.GL}
} | potgieter2004the |
arxiv-671691 | cs/0402038 | Towards a Mathematical Theory of the Delays of the Asynchronous Circuits | <|reference_start|>Towards a Mathematical Theory of the Delays of the Asynchronous Circuits: The inequations of the delays of the asynchronous circuits are written, by making use of pseudo-Boolean differential calculus. We consider these efforts to be a possible starting point in the semi-formalized reconstruction of the digital electrical engineering (which is a non-formalized theory).<|reference_end|> | arxiv | @article{vlad2004towards,
title={Towards a Mathematical Theory of the Delays of the Asynchronous Circuits},
author={Serban E. Vlad},
journal={Analele Universitatii din Oradea, Fascicola Matematica, TOM IX,
2002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402038},
primaryClass={cs.LO}
} | vlad2004towards |
arxiv-671692 | cs/0402039 | On the Inertia of the Asynchronous Circuits | <|reference_start|>On the Inertia of the Asynchronous Circuits: We present bounded delays, absolute inertia, and relative inertia.<|reference_end|> | arxiv | @article{vlad2004on,
title={On the Inertia of the Asynchronous Circuits},
author={Serban E. Vlad},
journal={CAIM 2003, Oradea, Romania, May 29-31, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402039},
primaryClass={cs.LO}
} | vlad2004on |
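
To make the notion of inertia concrete: below is a hedged sketch of an inertial-delay element, in which input pulses shorter than the delay never reach the output. Representing a signal as a list of (time, value) change events is an assumption of the sketch; the paper works with pseudo-Boolean functions of continuous time.

    def inertial_delay(events, d):
        """Apply an inertial delay d to a signal given as [(time, value), ...].

        Each input change schedules an output change d later; if the input
        changes again before the pending change fires, it is cancelled, so
        pulses shorter than d are filtered out.
        """
        out, pending = [], None                # pending = (fire_time, value)
        for t, v in events:
            if pending is not None and t < pending[0]:
                pending = None                 # input moved again too soon: cancel
            elif pending is not None:
                out.append(pending)            # pending change survived: commit it
                pending = None
            last = out[-1][1] if out else None
            if v != last:
                pending = (t + d, v)
        if pending is not None:
            out.append(pending)
        return out

    # The 1-unit pulse at t=10 is swallowed by a delay of 2; the step at t=20 is not.
    print(inertial_delay([(0, 0), (10, 1), (11, 0), (20, 1)], d=2))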
arxiv-671693 | cs/0402040 | Defining the Delays of the Asynchronous Circuits | <|reference_start|>Defining the Delays of the Asynchronous Circuits: We define the delays of a circuit, as well as the properties of determinism, order, time invariance, constancy, symmetry and the serial connection.<|reference_end|> | arxiv | @article{vlad2004defining,
title={Defining the Delays of the Asynchronous Circuits},
author={Serban E. Vlad},
journal={CAIM 2003, Oradea, Romania, May 29-31, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402040},
primaryClass={cs.LO}
} | vlad2004defining |
arxiv-671694 | cs/0402041 | Examples of Models of the Asynchronous Circuits | <|reference_start|>Examples of Models of the Asynchronous Circuits: We define the delays of a circuit, as well as the properties of determinism, order, time invariance, constancy, symmetry and the serial connection.<|reference_end|> | arxiv | @article{vlad2004examples,
title={Examples of Models of the Asynchronous Circuits},
author={Serban E. Vlad},
journal={the 10th Symposium of Mathematics and its Applications, Politehnica
University of Timisoara, Timisoara, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402041},
primaryClass={cs.LO}
} | vlad2004examples |
arxiv-671695 | cs/0402042 | Anonymity and Information Hiding in Multiagent Systems | <|reference_start|>Anonymity and Information Hiding in Multiagent Systems: We provide a framework for reasoning about information-hiding requirements in multiagent systems and for reasoning about anonymity in particular. Our framework employs the modal logic of knowledge within the context of the runs and systems framework, much in the spirit of our earlier work on secrecy [Halpern and O'Neill 2002]. We give several definitions of anonymity with respect to agents, actions, and observers in multiagent systems, and we relate our definitions of anonymity to other definitions of information hiding, such as secrecy. We also give probabilistic definitions of anonymity that are able to quantify an observer's uncertainty about the state of the system. Finally, we relate our definitions of anonymity to other formalizations of anonymity and information hiding, including definitions of anonymity in the process algebra CSP and definitions of information hiding using function views.<|reference_end|> | arxiv | @article{halpern2004anonymity,
title={Anonymity and Information Hiding in Multiagent Systems},
author={Joseph Y. Halpern and Kevin R. O'Neill},
journal={arXiv preprint arXiv:cs/0402042},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402042},
primaryClass={cs.CR cs.LO cs.MA}
} | halpern2004anonymity |
arxiv-671696 | cs/0402043 | The UPLNC Compiler: Design and Implementation | <|reference_start|>The UPLNC Compiler: Design and Implementation: The implementation of the compiler of the UPLNC language is presented with a full source code listing.<|reference_end|> | arxiv | @article{vitchev2004the,
title={The UPLNC Compiler: Design and Implementation},
author={Evgueniy Vitchev},
journal={arXiv preprint arXiv:cs/0402043},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402043},
primaryClass={cs.PL}
} | vitchev2004the |
arxiv-671697 | cs/0402044 | A General Framework for Bounds for Higher-Dimensional Orthogonal Packing Problems | <|reference_start|>A General Framework for Bounds for Higher-Dimensional Orthogonal Packing Problems: Higher-dimensional orthogonal packing problems have a wide range of practical applications, including packing, cutting, and scheduling. In the context of a branch-and-bound framework for solving these packing problems to optimality, it is of crucial importance to have good and easy bounds for an optimal solution. Previous efforts have produced a number of special classes of such bounds. Unfortunately, some of these bounds are somewhat complicated and hard to generalize. We present a new approach for obtaining classes of lower bounds for higher-dimensional packing problems; our bounds improve and simplify several well-known bounds from previous literature. In addition, our approach provides an easy framework for proving correctness of new bounds.<|reference_end|> | arxiv | @article{fekete2004a,
title={A General Framework for Bounds for Higher-Dimensional Orthogonal Packing
Problems},
author={Sandor P. Fekete and Joerg Schepers},
journal={arXiv preprint arXiv:cs/0402044},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402044},
primaryClass={cs.DS cs.CG}
} | fekete2004a |
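
The baseline that any such framework must beat is the continuous volume bound; a hedged Python sketch follows, together with a conservative "big item" bound for rectangles too large to share a bin in either dimension. This is not the paper's general framework, only the simplest members of the family it improves on, and the example instance is invented.

    import math

    def volume_lower_bound(items, W, H):
        """items: list of (w, h). Any packing must supply the total item area,
        so at least ceil(total_area / bin_area) bins of size W x H are needed."""
        area = sum(w * h for w, h in items)
        return math.ceil(area / (W * H))

    def big_item_bound(items, W, H):
        """Two rectangles wider than W/2 and taller than H/2 can never share a
        bin (they cannot be separated along either axis), so each needs its own."""
        return sum(1 for w, h in items if 2 * w > W and 2 * h > H)

    items = [(6, 6), (6, 6), (3, 3), (3, 3), (3, 3), (3, 3)]
    print(max(volume_lower_bound(items, 10, 10), big_item_bound(items, 10, 10)))
    # 2: the two 6x6 rectangles alone force two 10x10 bins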
arxiv-671698 | cs/0402045 | The Freeze-Tag Problem: How to Wake Up a Swarm of Robots | <|reference_start|>The Freeze-Tag Problem: How to Wake Up a Swarm of Robots: An optimization problem that naturally arises in the study of swarm robotics is the Freeze-Tag Problem (FTP) of how to awaken a set of ``asleep'' robots by having an awakened robot move to their locations. Once a robot is awake, it can assist in awakening other slumbering robots. The objective is to have all robots awake as early as possible. While the FTP bears some resemblance to problems from areas in combinatorial optimization such as routing, broadcasting, scheduling, and covering, its algorithmic characteristics are surprisingly different. We consider both scenarios on graphs and in geometric environments. In graphs, robots sleep at vertices and there is a length function on the edges. Awake robots travel along edges, with time depending on edge length. For most scenarios, we consider the offline version of the problem, in which each awake robot knows the position of all other robots. We prove that the problem is NP-hard, even for the special case of star graphs. We also establish hardness of approximation, showing that it is NP-hard to obtain an approximation factor better than 5/3, even for graphs of bounded degree. These lower bounds are complemented with several positive algorithmic results, including: (1) We show that the natural greedy strategy on star graphs has a tight worst-case performance of 7/3 and give a polynomial-time approximation scheme (PTAS) for star graphs. (2) We give a simple O(log D)-competitive online algorithm for graphs with maximum degree D and locally bounded edge weights. (3) We give a PTAS, running in nearly linear time, for geometrically embedded instances.<|reference_end|> | arxiv | @article{arkin2004the,
title={The Freeze-Tag Problem: How to Wake Up a Swarm of Robots},
author={Esther M. Arkin and Michael A. Bender and Sandor P. Fekete and Joseph
S. B. Mitchell and Martin Skutella},
journal={arXiv preprint arXiv:cs/0402045},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402045},
primaryClass={cs.DS}
} | arkin2004the |
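
The greedy strategy analyzed above is short to state in code: every robot that becomes available claims the nearest still-asleep leaf, travelling through the hub at cost d[i] + d[j]. The simulation below and its toy instance are assumptions for illustration; the paper's 7/3 bound concerns the worst case of exactly this kind of strategy on star graphs.

    import heapq

    def greedy_freeze_tag(d, start=0):
        """Greedy wake-up on a star graph; d[i] = distance from hub to leaf i.

        Robot `start` begins awake; each available robot claims the nearest
        asleep leaf. Returns the makespan (time the last robot wakes up).
        """
        asleep = set(range(len(d))) - {start}
        ready = [(0.0, start)]                     # (time available, current leaf)
        makespan = 0.0
        while asleep:
            t, i = heapq.heappop(ready)
            j = min(asleep, key=lambda k: d[k])    # nearest sleeping leaf
            asleep.remove(j)
            t2 = t + d[i] + d[j]                   # travel via the hub
            makespan = max(makespan, t2)
            heapq.heappush(ready, (t2, j))         # the awakener, now at leaf j
            heapq.heappush(ready, (t2, j))         # the newly awakened robot
        return makespan

    print(greedy_freeze_tag([0.0, 1.0, 1.0, 4.0, 4.0]))   # 8.0 on this instance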
arxiv-671699 | cs/0402046 | Spam filter analysis | <|reference_start|>Spam filter analysis: Unsolicited bulk email (a.k.a. spam) is a major problem on the Internet. To counter spam, several techniques, ranging from spam filters to mail protocol extensions like hashcash, have been proposed. In this paper we investigate the effectiveness of several spam filtering techniques and technologies. Our analysis was performed by simulating email traffic under different conditions. We show that genetic algorithm based spam filters perform best at server level and naive Bayesian filters are the most appropriate for filtering at user level.<|reference_end|> | arxiv | @article{garcia2004spam,
title={Spam filter analysis},
author={Flavio D. Garcia and Jaap-Henk Hoepman},
journal={arXiv preprint arXiv:cs/0402046},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402046},
primaryClass={cs.CR}
} | garcia2004spam |
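
At user level, the naive Bayesian filter mentioned above amounts to a log-odds score over word frequencies. A minimal sketch follows; the four-message corpus is invented, and a real filter needs far more data plus careful tokenization.

    import math
    from collections import Counter

    def train(messages):
        """messages: list of (text, is_spam). Returns word counts and class counts."""
        counts = {True: Counter(), False: Counter()}
        priors = Counter()
        for text, is_spam in messages:
            counts[is_spam].update(text.lower().split())
            priors[is_spam] += 1
        return counts, priors

    def spam_score(text, counts, priors, alpha=1.0):
        """Log-odds of spam under naive Bayes with Laplace smoothing (alpha)."""
        vocab = len(set(counts[True]) | set(counts[False]))
        totals = {c: sum(counts[c].values()) for c in (True, False)}
        score = math.log(priors[True] / priors[False])
        for word in text.lower().split():
            p_spam = (counts[True][word] + alpha) / (totals[True] + alpha * vocab)
            p_ham = (counts[False][word] + alpha) / (totals[False] + alpha * vocab)
            score += math.log(p_spam / p_ham)
        return score                     # > 0 means more spam-like than ham-like

    corpus = [("cheap pills buy now", True), ("meeting agenda attached", False),
              ("buy cheap watches now", True), ("lunch tomorrow", False)]
    counts, priors = train(corpus)
    print(spam_score("buy now", counts, priors))           # positive: spam-like
    print(spam_score("agenda for lunch", counts, priors))  # negative: ham-like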
arxiv-671700 | cs/0402047 | Parameter-less Optimization with the Extended Compact Genetic Algorithm and Iterated Local Search | <|reference_start|>Parameter-less Optimization with the Extended Compact Genetic Algorithm and Iterated Local Search: This paper presents a parameter-less optimization framework that uses the extended compact genetic algorithm (ECGA) and iterated local search (ILS), but is not restricted to these algorithms. The presented optimization algorithm (ILS+ECGA) comes as an extension of the parameter-less genetic algorithm (GA), where the parameters of a selecto-recombinative GA are eliminated. The approach that we propose is tested on several well known problems. In the absence of domain knowledge, it is shown that ILS+ECGA is a robust and easy-to-use optimization method.<|reference_end|> | arxiv | @article{lima2004parameter-less,
title={Parameter-less Optimization with the Extended Compact Genetic Algorithm
and Iterated Local Search},
author={Claudio F. Lima and Fernando G. Lobo},
journal={arXiv preprint arXiv:cs/0402047},
year={2004},
archivePrefix={arXiv},
eprint={cs/0402047},
primaryClass={cs.NE}
} | lima2004parameter-less |
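
The ILS half of such hybrids is compact enough to show: a bit-flip hill climber restarted from perturbed copies of the incumbent. The kick strength, acceptance rule, and onemax demo below are generic ILS defaults assumed for illustration, not the settings of the ILS+ECGA paper.

    import random

    def hill_climb(x, fitness):
        """First-improvement bit-flip hill climber; mutates x in place."""
        fx, improved = fitness(x), True
        while improved:
            improved = False
            for i in random.sample(range(len(x)), len(x)):
                x[i] ^= 1
                f2 = fitness(x)
                if f2 > fx:
                    fx, improved = f2, True
                else:
                    x[i] ^= 1            # revert the unhelpful flip
        return x, fx

    def iterated_local_search(n, fitness, iters=100, strength=3):
        """ILS: optimize locally, kick the incumbent, re-optimize, keep the better."""
        best, f_best = hill_climb([random.randint(0, 1) for _ in range(n)], fitness)
        for _ in range(iters):
            cand = best[:]
            for i in random.sample(range(n), strength):   # perturbation ("kick")
                cand[i] ^= 1
            cand, f_cand = hill_climb(cand, fitness)
            if f_cand >= f_best:
                best, f_best = cand, f_cand
        return best, f_best

    # Onemax demo (plain hill climbing already solves it; ILS earns its keep
    # on rugged landscapes).
    best, f = iterated_local_search(30, fitness=sum)
    print(f)    # 30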