Dataset columns (string-length / class statistics as reported by the dataset viewer):

corpus_id     : string, lengths 7-12
paper_id      : string, lengths 9-16
title         : string, lengths 1-261
abstract      : string, lengths 70-4.02k
source        : string, 1 distinct value
bibtex        : string, lengths 208-20.9k
citation_key  : string, lengths 6-100
arxiv-671401
cs/0309012
Exploration of RNA Editing and Design of Robust Genetic Algorithms
<|reference_start|>Exploration of RNA Editing and Design of Robust Genetic Algorithms: This paper presents our computational methodology using Genetic Algorithms (GA) for exploring the nature of RNA editing. These models are constructed using several genetic editing characteristics that are gleaned from the RNA editing system as observed in several organisms. We have expanded the traditional Genetic Algorithm with artificial editing mechanisms as proposed by (Rocha, 1997). The incorporation of editing mechanisms provides a means for artificial agents with genetic descriptions to gain greater phenotypic plasticity, which may be environmentally regulated. Our first implementations of these ideas have shed some light on the evolutionary implications of RNA editing. Based on this understanding, we demonstrate how to select proper RNA editors for designing more robust GAs, and the results show promising applications to real-world problems. We expect that the framework proposed will both facilitate determining the evolutionary role of RNA editing in biology, and advance the current state of research in Genetic Algorithms.<|reference_end|>
arxiv
@article{huang2003exploration, title={Exploration of RNA Editing and Design of Robust Genetic Algorithms}, author={C. Huang, L.M. Rocha}, journal={arXiv preprint arXiv:cs/0309012}, year={2003}, number={LAUR 03-4314}, archivePrefix={arXiv}, eprint={cs/0309012}, primaryClass={cs.NE cs.AI nlin.AO q-bio.GN} }
huang2003exploration
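A minimal sketch of the genotype-editing idea this abstract describes, assuming a bit-string GA: before fitness evaluation, "editor" patterns rewrite matching substrings of the genotype, so the evaluated phenotype can differ from the inherited genotype. The editor pairs, the OneMax fitness, and the operator details below are illustrative assumptions, not Rocha's actual mechanisms.

```python
import random

# Hypothetical (pattern, replacement) editors; purely illustrative.
EDITORS = [("101", "111"), ("000", "010")]

def edit(genotype: str) -> str:
    """Apply each editor left to right to produce the phenotype."""
    phenotype = genotype
    for pattern, replacement in EDITORS:
        phenotype = phenotype.replace(pattern, replacement)
    return phenotype

def fitness(genotype: str) -> int:
    return edit(genotype).count("1")  # OneMax scored on the *edited* string

def step(pop, mut=0.01):
    """One generation: fitness-proportional selection plus bit-flip mutation."""
    weights = [fitness(g) + 1e-9 for g in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    return ["".join(c if random.random() > mut else "10"[int(c)] for c in p)
            for p in parents]

pop = ["".join(random.choice("01") for _ in range(20)) for _ in range(30)]
for _ in range(50):
    pop = step(pop)
best = max(pop, key=fitness)
print(best, fitness(best))
```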
arxiv-671402
cs/0309013
Semi-metric Behavior in Document Networks and its Application to Recommendation Systems
<|reference_start|>Semi-metric Behavior in Document Networks and its Application to Recommendation Systems: Recommendation systems for different Document Networks (DN) such as the World Wide Web (WWW) and Digital Libraries, often use distance functions extracted from relationships among documents and keywords. For instance, documents in the WWW are related via a hyperlink network, while documents in bibliographic databases are related by citation and collaboration networks. Furthermore, documents are related to keyterms. The distance functions computed from these relations establish associative networks among items of the DN, referred to as Distance Graphs, which allow recommendation systems to identify relevant associations for individual users. However, modern recommendation systems need to integrate associative data from multiple sources such as different databases, web sites, and even other users. Thus, we are presented with a problem of combining evidence (about associations between items) from different sources characterized by distance functions. In this paper we describe our work on (1) inferring relevant associations from, as well as characterizing, semi-metric distance graphs and (2) combining evidence from different distance graphs in a recommendation system. Regarding (1), we present the idea of semi-metric distance graphs, and introduce ratios to measure semi-metric behavior. We compute these ratios for several DN such as digital libraries and web sites and show that they are useful to identify implicit associations. Regarding (2), we describe an algorithm to combine evidence from distance graphs that uses Evidence Sets, a set structure based on Interval Valued Fuzzy Sets and Dempster-Shafer Theory of Evidence. This algorithm has been developed for a recommendation system named TalkMine.<|reference_end|>
arxiv
@article{rocha2003semi-metric, title={Semi-metric Behavior in Document Networks and its Application to Recommendation Systems}, author={L.M. Rocha}, journal={In: Soft Computing Agents: A New Perspective for Dynamic Information Systems. V. Loia (Ed.) International Series Frontiers in Artificial Intelligence and Applications. IOS Press, pp. 137-163, 2002}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309013}, primaryClass={cs.IR cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.DL cs.HC cs.MA} }
rocha2003semi-metric
arxiv-671403
cs/0309014
Optimal Covering Tours with Turn Costs
<|reference_start|>Optimal Covering Tours with Turn Costs: We give the first algorithmic study of a class of ``covering tour'' problems related to the geometric Traveling Salesman Problem: Find a polygonal tour for a cutter so that it sweeps out a specified region (``pocket''), in order to minimize a cost that depends mainly on the number of turns. These problems arise naturally in manufacturing applications of computational geometry to automatic tool path generation and automatic inspection systems, as well as arc routing (``postman'') problems with turn penalties. We prove the NP-completeness of minimum-turn milling and give efficient approximation algorithms for several natural versions of the problem, including a polynomial-time approximation scheme based on a novel adaptation of the m-guillotine method.<|reference_end|>
arxiv
@article{arkin2003optimal, title={Optimal Covering Tours with Turn Costs}, author={Esther M. Arkin, Michael A. Bender, Erik D. Demaine, Sandor P. Fekete, Joseph S. B. Mitchell, and Saurabh Sethia}, journal={arXiv preprint arXiv:cs/0309014}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309014}, primaryClass={cs.DS cs.CG} }
arkin2003optimal
arxiv-671404
cs/0309015
Reliable and Efficient Inference of Bayesian Networks from Sparse Data by Statistical Learning Theory
<|reference_start|>Reliable and Efficient Inference of Bayesian Networks from Sparse Data by Statistical Learning Theory: To learn (statistical) dependencies among random variables requires exponentially large sample size in the number of observed random variables if any arbitrary joint probability distribution can occur. We consider the case that sparse data strongly suggest that the probabilities can be described by a simple Bayesian network, i.e., by a graph with small in-degree \Delta. Then this simple law will also explain further data with high confidence. This is shown by calculating bounds on the VC dimension of the set of those probability measures that correspond to simple graphs. This allows one to select networks by structural risk minimization and gives reliability bounds on the error of the estimated joint measure without (in contrast to a previous paper) any prior assumptions on the set of possible joint measures. The complexity for searching the optimal Bayesian networks of in-degree \Delta increases only polynomially in the number of random variables for constant \Delta and the optimal joint measure associated with a given graph can be found by convex optimization.<|reference_end|>
arxiv
@article{janzing2003reliable, title={Reliable and Efficient Inference of Bayesian Networks from Sparse Data by Statistical Learning Theory}, author={Dominik Janzing and Daniel Herrmann}, journal={arXiv preprint arXiv:cs/0309015}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309015}, primaryClass={cs.LG} }
janzing2003reliable
arxiv-671405
cs/0309016
Using Simulated Annealing to Calculate the Trembles of Trembling Hand Perfection
<|reference_start|>Using Simulated Annealing to Calculate the Trembles of Trembling Hand Perfection: Within the literature on non-cooperative game theory, there have been a number of attempts to propose algorithms which will compute Nash equilibria. Rather than derive a new algorithm, this paper shows that the family of algorithms known as Markov chain Monte Carlo (MCMC) can be used to calculate Nash equilibria. MCMC is a type of Monte Carlo simulation that relies on Markov chains to ensure its regularity conditions. MCMC has been widely used throughout the statistics and optimization literature, where variants of this algorithm are known as simulated annealing. This paper shows that there is an interesting connection between the trembles that underlie the functioning of this algorithm and the type of Nash refinement known as trembling hand perfection.<|reference_end|>
arxiv
@article{mcdonald2003using, title={Using Simulated Annealing to Calculate the Trembles of Trembling Hand Perfection}, author={Stuart McDonald and Liam Wagner}, journal={Proceedings of IEEE Congress on Evolutionary Computation 2003, vol.4, pp. 2482-2489}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309016}, primaryClass={cs.GT cs.CC cs.DS cs.LG cs.NE q-bio.PE} }
mcdonald2003using
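For readers unfamiliar with the family of algorithms the abstract above invokes, here is a generic simulated-annealing loop. It is a minimal sketch, not the paper's Nash-equilibrium procedure: the quadratic `energy` and the Gaussian proposal are placeholder choices, whereas equilibrium computation would instead minimise a measure of regret over the players' mixed strategies.

```python
import math, random

def anneal(energy, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Minimise `energy` by randomly perturbing x and cooling the temperature."""
    x, e = x0, energy(x0)
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0, 0.1)  # local "tremble" around the current point
        e_cand = energy(cand)
        # Metropolis acceptance: always accept improvements; accept worse
        # moves with probability exp(-delta / t), which shrinks as t cools.
        if e_cand <= e or random.random() < math.exp((e - e_cand) / t):
            x, e = cand, e_cand
        t *= cooling
    return x, e

x, e = anneal(lambda v: (v - 3.0) ** 2, x0=0.0)
print(f"minimum near x={x:.3f}, energy={e:.4f}")
```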
arxiv-671406
cs/0309017
Enumerating planar locally finite Cayley graphs
<|reference_start|>Enumerating planar locally finite Cayley graphs: We characterize the set of planar locally finite Cayley graphs, and give a finite representation of these graphs by a special kind of finite state automata called labeling schemes. As a result, we are able to enumerate and describe all planar locally finite Cayley graphs of a given degree. This analysis allows us to solve the decision problem of locally finite planarity for a word-problem-decidable presentation. Keywords: vertex-transitive, Cayley graph, planar graph, tiling, labeling scheme<|reference_end|>
arxiv
@article{renault2003enumerating, title={Enumerating planar locally finite Cayley graphs}, author={David Renault}, journal={arXiv preprint arXiv:cs/0309017}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309017}, primaryClass={cs.DM} }
renault2003enumerating
arxiv-671407
cs/0309018
Using Propagation for Solving Complex Arithmetic Constraints
<|reference_start|>Using Propagation for Solving Complex Arithmetic Constraints: Solving a system of nonlinear inequalities is an important problem for which conventional numerical analysis has no satisfactory method. With a box-consistency algorithm one can compute a cover for the solution set to arbitrarily close approximation. Because of difficulties in the use of propagation for complex arithmetic expressions, box consistency is computed with interval arithmetic. In this paper we present theorems that support a simple modification of propagation that allows complex arithmetic expressions to be handled efficiently. The version of box consistency that is obtained in this way is stronger than when interval arithmetic is used.<|reference_end|>
arxiv
@article{van emden2003using, title={Using Propagation for Solving Complex Arithmetic Constraints}, author={M.H. van Emden and B. Moa}, journal={arXiv preprint arXiv:cs/0309018}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309018}, primaryClass={math.NA cs.AR cs.CC cs.NA cs.PF cs.RO} }
van emden2003using
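The abstract above concerns propagation-based box consistency. A minimal illustration of the kind of primitive such propagation composes is the standard forward/backward interval-narrowing rule for a single constraint, sketched below for x + y = z. This is background, not the authors' modified propagation, and a real implementation would use outwardly rounded interval arithmetic rather than plain floats.

```python
def narrow_sum(x, y, z):
    """Given intervals as (lo, hi) tuples, contract each w.r.t. x + y = z."""
    zx = (x[0] + y[0], x[1] + y[1])                       # forward: z must lie in x + y
    z = (max(z[0], zx[0]), min(z[1], zx[1]))
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))  # backward: x must lie in z - y
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))  # backward: y must lie in z - x
    for lo, hi in (x, y, z):
        if lo > hi:
            raise ValueError("empty interval: constraint unsatisfiable")
    return x, y, z

print(narrow_sum(x=(0, 10), y=(2, 5), z=(8, 20)))
# x narrows to (3, 10): values below 3 cannot reach z >= 8 even with y = 5,
# and z narrows to (8, 15) since x + y can be at most 15.
```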
arxiv-671408
cs/0309019
Building a Test Collection for Speech-Driven Web Retrieval
<|reference_start|>Building a Test Collection for Speech-Driven Web Retrieval: This paper describes a test collection (benchmark data) for retrieval systems driven by spoken queries. This collection was produced in the subtask of the NTCIR-3 Web retrieval task, which was performed in a TREC-style evaluation workshop. The search topics and document collection for the Web retrieval task were used to produce spoken queries and language models for speech recognition, respectively. We used this collection to evaluate the performance of our retrieval system. Experimental results showed that (a) the use of target documents for language modeling and (b) enhancement of the vocabulary size in speech recognition were effective in improving the system performance.<|reference_end|>
arxiv
@article{fujii2003building, title={Building a Test Collection for Speech-Driven Web Retrieval}, author={Atsushi Fujii and Katunobu Itou}, journal={Proceedings of the 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pp.1153-1156, Sep. 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309019}, primaryClass={cs.CL} }
fujii2003building
arxiv-671409
cs/0309020
Threshold values of Random K-SAT from the cavity method
<|reference_start|>Threshold values of Random K-SAT from the cavity method: Using the cavity equations of \cite{mezard:parisi:zecchina:02,mezard:zecchina:02}, we derive the various threshold values for the number of clauses per variable of the random $K$-satisfiability problem, generalizing the previous results to $K \ge 4$. We also give an analytic solution of the equations, and some closed expressions for these thresholds, in an expansion around large $K$. The stability of the solution is also computed. For any $K$, the satisfiability threshold is found to be in the stable region of the solution, which adds further credit to the conjecture that this computation gives the exact satisfiability threshold.<|reference_end|>
arxiv
@article{mertens2003threshold, title={Threshold values of Random K-SAT from the cavity method}, author={Stephan Mertens, Marc Mezard and Riccardo Zecchina}, journal={arXiv preprint arXiv:cs/0309020}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309020}, primaryClass={cs.CC cond-mat.dis-nn cs.DM} }
mertens2003threshold
arxiv-671410
cs/0309021
A Cross-media Retrieval System for Lecture Videos
<|reference_start|>A Cross-media Retrieval System for Lecture Videos: We propose a cross-media lecture-on-demand system, in which users can selectively view specific segments of lecture videos by submitting text queries. Users can easily formulate queries by using the textbook associated with a target lecture, even if they cannot come up with effective keywords. Our system extracts the audio track from a target lecture video, generates a transcription by large vocabulary continuous speech recognition, and produces a text index. Experimental results showed that by adapting speech recognition to the topic of the lecture, the recognition accuracy increased and the retrieval accuracy was comparable with that obtained by human transcription.<|reference_end|>
arxiv
@article{fujii2003a, title={A Cross-media Retrieval System for Lecture Videos}, author={Atsushi Fujii, Katunobu Itou, Tomoyosi Akiba, and Tetsuya Ishikawa}, journal={Proceedings of the 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pp.1149-1152, Sep. 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309021}, primaryClass={cs.CL} }
fujii2003a
arxiv-671411
cs/0309022
Proposed Specification of a Distributed XML-Query Network
<|reference_start|>Proposed Specification of a Distributed XML-Query Network: W3C's XML-Query language offers a powerful instrument for information retrieval on XML repositories. This article describes an implementation of this retrieval in a real-world scenario. Distributed XML-Query processing reduces the load on every single attending node to an acceptable level. The network allows every participant to control their own computing load. Furthermore, XML repositories may stay with the rights holder, so every Data-Provider can decide whether or not to process critical queries. If Data-Providers keep redundant information, this distributed network improves the reliability of information, with duplicates removed.<|reference_end|>
arxiv
@article{thiemann2003proposed, title={Proposed Specification of a Distributed XML-Query Network}, author={Christian Thiemann, Michael Schlenker, Thomas Severiens}, journal={arXiv preprint arXiv:cs/0309022}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309022}, primaryClass={cs.DC cs.IR} }
thiemann2003proposed
arxiv-671412
cs/0309023
Efficient Algorithms for Citation Network Analysis
<|reference_start|>Efficient Algorithms for Citation Network Analysis: In the paper, very efficient algorithms, linear in the number of arcs, for determining Hummon and Doreian's arc weights SPLC and SPNP in a citation network are proposed, and some theoretical properties of these weights are presented. The nonacyclicity problem in citation networks is discussed. An approach to identifying an important small subnetwork on the basis of arc weights is proposed and illustrated on the citation networks of the SOM (self-organizing maps) literature and US patents.<|reference_end|>
arxiv
@article{batagelj2003efficient, title={Efficient Algorithms for Citation Network Analysis}, author={Vladimir Batagelj (University of Ljubljana, Slovenia)}, journal={inside the book: V. Batagelj, P. Doreian, A. Ferligoj, N. Kej\v{z}ar: Understanding Large Temporal Networks and Spatial Networks. Wiley, 2014. ISBN: 978-0-470-71452-2}, year={2003}, number={IMFM 897}, archivePrefix={arXiv}, eprint={cs/0309023}, primaryClass={cs.DL cs.DM cs.DS} }
batagelj2003efficient
arxiv-671413
cs/0309024
Results on the quantitative mu-calculus qMu
<|reference_start|>Results on the quantitative mu-calculus qMu: The mu-calculus is a powerful tool for specifying and verifying transition systems, including those with both demonic and angelic choice; its quantitative generalisation qMu extends that to probabilistic choice. We show that for a finite-state system the logical interpretation of qMu, via fixed-points in a domain of real-valued functions into [0,1], is equivalent to an operational interpretation given as a turn-based gambling game between two players. The logical interpretation provides direct access to axioms, laws and meta-theorems. The operational, game-based interpretation aids the intuition and continues in the more general context to provide a surprisingly practical specification tool. A corollary of our proofs is an extension of Everett's singly-nested games result in the finite turn-based case: we prove well-definedness of the minimax value, and existence of fixed memoryless strategies, for all qMu games/formulae, of arbitrary (including alternating) nesting structure.<|reference_end|>
arxiv
@article{mciver2003results, title={Results on the quantitative mu-calculus qMu}, author={Annabelle McIver and Carroll Morgan}, journal={arXiv preprint arXiv:cs/0309024}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309024}, primaryClass={cs.LO cs.GT} }
mciver2003results
arxiv-671414
cs/0309025
Evidential Force Aggregation
<|reference_start|>Evidential Force Aggregation: In this paper we develop an evidential force aggregation method intended for classification of evidential intelligence into recognized force structures. We assume that the intelligence has already been partitioned into clusters and use the classification method individually in each cluster. The classification is based on a measure of fitness between template and fused intelligence that makes it possible to handle intelligence reports with multiple nonspecific and uncertain propositions. With this measure we can aggregate on a level-by-level basis, starting from general intelligence to achieve a complete force structure with recognized units on all hierarchical levels.<|reference_end|>
arxiv
@article{schubert2003evidential, title={Evidential Force Aggregation}, author={Johan Schubert}, journal={in Proceedings of the Sixth International Conference on Information Fusion (FUSION 2003), pp. 1223-1229, Cairns, Australia, 8-11 July 2003, International Society of Information Fusion, 2003}, year={2003}, number={FOI-S-0960-SE}, archivePrefix={arXiv}, eprint={cs/0309025}, primaryClass={cs.AI} }
schubert2003evidential
arxiv-671415
cs/0309026
A thought experiment on Quantum Mechanics and Distributed Failure Detection
<|reference_start|>A thought experiment on Quantum Mechanics and Distributed Failure Detection: One of the biggest problems in current distributed systems is that presented by one machine attempting to determine the liveness of another in a timely manner. Unfortunately, the symptoms exhibited by a failed machine can also be the result of other causes, e.g., an overloaded machine or network which drops messages, making it impossible to detect a machine failure with certainty until that machine recovers. This is a well understood problem and one which has led to a large body of research into failure suspectors: since it is not possible to detect a failure, the best one can do is suspect a failure and program accordingly. However, one machine's suspicions may not be the same as another's; therefore, these algorithms spend considerable effort ensuring a consistent view among all available machines of who is suspected of having failed. This paper describes a thought experiment on how quantum mechanics may be used to provide a failure detector that is guaranteed to give both accurate and instantaneous information about the liveness of machines, no matter the distances involved.<|reference_end|>
arxiv
@article{little2003a, title={A thought experiment on Quantum Mechanics and Distributed Failure Detection}, author={Mark C. Little}, journal={arXiv preprint arXiv:cs/0309026}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309026}, primaryClass={cs.DC} }
little2003a
arxiv-671416
cs/0309027
Proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003)
<|reference_start|>Proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003): Over the past decades automated debugging has seen major achievements. However, as debugging is by necessity attached to particular programming paradigms, the results are scattered. To alleviate this problem, the Automated and Algorithmic Debugging workshop (AADEBUG for short) was organised in 1993 in Linköping (Sweden). As this workshop proved to be successful, subsequent workshops have been organised in 1995 (Saint-Malo, France), 1997 (again in Linköping, Sweden) and 2000 (Munich, Germany). In 2003, the workshop is organised in Ghent, Belgium, the proceedings of which you are reading right now.<|reference_end|>
arxiv
@article{ronsse2003proceedings, title={Proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003)}, author={Michiel Ronsse and Koen De Bosschere}, journal={arXiv preprint arXiv:cs/0309027}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309027}, primaryClass={cs.SE cs.PL} }
ronsse2003proceedings
arxiv-671417
cs/0309028
cTI: A constraint-based termination inference tool for ISO-Prolog
<|reference_start|>cTI: A constraint-based termination inference tool for ISO-Prolog: We present cTI, the first system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis and checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, for instance by means of user annotations. Moreover, the analysis must be redone every time the class of queries of interest is updated. Termination inference, in contrast, requires neither user annotations nor recomputation. In this approach, terminating classes for all predicates are inferred at once. We describe the architecture of cTI and report an extensive experimental evaluation of the system covering many classical examples from the logic programming termination literature and several Prolog programs of respectable size and complexity.<|reference_end|>
arxiv
@article{mesnard2003cti:, title={cTI: A constraint-based termination inference tool for ISO-Prolog}, author={Fred Mesnard and Roberto Bagnara}, journal={arXiv preprint arXiv:cs/0309028}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309028}, primaryClass={cs.PL} }
mesnard2003cti:
arxiv-671418
cs/0309029
Instrumenting self-modifying code
<|reference_start|>Instrumenting self-modifying code: Adding small code snippets at key points to existing code fragments is called instrumentation. It is an established technique to debug certain otherwise hard-to-solve faults, such as memory management issues and data races. Dynamic instrumentation can already be used to analyse code which is loaded or even generated at run time. With the advent of environments such as the Java Virtual Machine with optimizing Just-In-Time compilers, a new obstacle arises: self-modifying code. In order to instrument this kind of code correctly, one must be able to detect modifications and adapt the instrumentation code accordingly, preferably without incurring a high penalty speedwise. In this paper we propose an innovative technique that uses the hardware page protection mechanism of modern processors to detect such modifications. We also show how an instrumentor can adapt the instrumented version depending on the kind of modifications, and we present an experimental evaluation of these techniques.<|reference_end|>
arxiv
@article{maebe2003instrumenting, title={Instrumenting self-modifying code}, author={J. Maebe, K. De Bosschere}, journal={arXiv preprint arXiv:cs/0309029}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309029}, primaryClass={cs.SE} }
maebe2003instrumenting
arxiv-671419
cs/0309030
Model-Based Debugging using Multiple Abstract Models
<|reference_start|>Model-Based Debugging using Multiple Abstract Models: This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract-interpretation-based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults.<|reference_end|>
arxiv
@article{mayer2003model-based, title={Model-Based Debugging using Multiple Abstract Models}, author={Wolfgang Mayer and Markus Stumptner}, journal={arXiv preprint arXiv:cs/0309030}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309030}, primaryClass={cs.SE cs.AI} }
mayer2003model-based
arxiv-671420
cs/0309031
Timestamp Based Execution Control for C and Java Programs
<|reference_start|>Timestamp Based Execution Control for C and Java Programs: Many programmers have had to deal with an overwritten variable resulting, for example, from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.<|reference_end|>
arxiv
@article{maruyama2003timestamp, title={Timestamp Based Execution Control for C and Java Programs}, author={Kazutaka Maruyama, Minoru Terada}, journal={arXiv preprint arXiv:cs/0309031}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309031}, primaryClass={cs.SE} }
maruyama2003timestamp
arxiv-671421
cs/0309032
Towards declarative diagnosis of constraint programs over finite domains
<|reference_start|>Towards declarative diagnosis of constraint programs over finite domains: The paper proposes a theoretical approach to the debugging of constraint programs based on a notion of explanation tree. The proposed approach is an attempt to adapt algorithmic debugging to constraint programming. In this theoretical framework for domain reduction, explanations are proof trees explaining value removals. These proof trees are defined by inductive definitions which express the removals of values as consequences of other value removals. Explanations may be considered as the essence of constraint programming. They are a declarative view of the computation trace. The diagnosis consists in locating an error in an explanation rooted by a symptom.<|reference_end|>
arxiv
@article{ferrand2003towards, title={Towards declarative diagnosis of constraint programs over finite domains}, author={Gerard Ferrand, Willy Lesaint, Alexandre Tessier}, journal={arXiv preprint arXiv:cs/0309032}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309032}, primaryClass={cs.SE} }
ferrand2003towards
arxiv-671422
cs/0309033
Lower bounds for predecessor searching in the cell probe model
<|reference_start|>Lower bounds for predecessor searching in the cell probe model: We consider a fundamental problem in data structures, static predecessor searching: Given a subset S of size n from the universe [m], store S so that queries of the form "What is the predecessor of x in S?" can be answered efficiently. We study this problem in the cell probe model introduced by Yao. Recently, Beame and Fich obtained optimal bounds on the number of probes needed by any deterministic query scheme if the associated storage scheme uses only n^{O(1)} cells of word size (\log m)^{O(1)} bits. We give a new lower bound proof for this problem that matches the bounds of Beame and Fich. Our lower bound proof has the following advantages: it works for randomised query schemes too, while Beame and Fich's proof works for deterministic query schemes only. It also extends to `quantum address-only' query schemes that we define in this paper, and is simpler than Beame and Fich's proof. We prove our lower bound using the round elimination approach of Miltersen, Nisan, Safra and Wigderson. Using tools from information theory, we prove a strong round elimination lemma for communication complexity that enables us to obtain a tight lower bound for the predecessor problem. Our strong round elimination lemma also extends to quantum communication complexity. We also use our round elimination lemma to obtain a rounds versus communication tradeoff for the `greater-than' problem, improving on the tradeoff in Miltersen et al. We believe that our round elimination lemma is of independent interest and should have other applications.<|reference_end|>
arxiv
@article{sen2003lower, title={Lower bounds for predecessor searching in the cell probe model}, author={Pranab Sen and S. Venkatesh}, journal={arXiv preprint arXiv:cs/0309033}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309033}, primaryClass={cs.CC cs.DS quant-ph} }
sen2003lower
arxiv-671423
cs/0309034
Measuring Praise and Criticism: Inference of Semantic Orientation from Association
<|reference_start|>Measuring Praise and Criticism: Inference of Semantic Orientation from Association: The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., "honest", "intrepid") and negative semantic orientation indicates criticism (e.g., "disturbing", "superfluous"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This paper introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words.<|reference_end|>
arxiv
@article{turney2003measuring, title={Measuring Praise and Criticism: Inference of Semantic Orientation from Association}, author={Peter D. Turney (National Research Council of Canada), Michael L. Littman (Rutgers University)}, journal={ACM Transactions on Information Systems (TOIS), (2003), 21 (4), 315-346}, year={2003}, number={NRC-46516}, archivePrefix={arXiv}, eprint={cs/0309034}, primaryClass={cs.CL cs.IR cs.LG} }
turney2003measuring
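The SO-PMI idea in the abstract above reduces to a simple formula: a word's semantic orientation is its total association with a set of positive paradigm words minus its total association with negative ones. The sketch below uses the paper's fourteen paradigm words, but the tiny "corpus", the line-level co-occurrence counting, and the +1 smoothing are illustrative stand-ins for the paper's large-corpus estimation.

```python
import math

POSITIVE = ["good", "nice", "excellent", "positive", "fortunate", "correct", "superior"]
NEGATIVE = ["bad", "nasty", "poor", "negative", "unfortunate", "wrong", "inferior"]

corpus = [
    "the honest broker was correct and superior".split(),
    "a disturbing and superfluous scene felt wrong".split(),
    "honest reviews are good and fortunate".split(),
    "the superfluous detail was bad and poor".split(),
]

def count(*words):
    """Number of corpus lines containing all given words (+1 smoothing)."""
    return 1 + sum(all(w in line for w in words) for line in corpus)

def pmi(w1, w2):
    """Pointwise mutual information: log2 P(w1, w2) / (P(w1) P(w2))."""
    n = len(corpus)
    return math.log2(count(w1, w2) * n / (count(w1) * count(w2)))

def semantic_orientation(word):
    return (sum(pmi(word, p) for p in POSITIVE)
            - sum(pmi(word, n) for n in NEGATIVE))

print(f"SO(honest)      = {semantic_orientation('honest'):+.2f}")       # positive
print(f"SO(superfluous) = {semantic_orientation('superfluous'):+.2f}")  # negative
```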
arxiv-671424
cs/0309035
Combining Independent Modules to Solve Multiple-choice Synonym and Analogy Problems
<|reference_start|>Combining Independent Modules to Solve Multiple-choice Synonym and Analogy Problems: Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of successful, separately developed modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the well known mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems commonly used to assess human mastery of lexical semantics -- synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.<|reference_end|>
arxiv
@article{turney2003combining, title={Combining Independent Modules to Solve Multiple-choice Synonym and Analogy Problems}, author={Peter D. Turney, Michael L. Littman, Jeffrey Bigham, Victor Shnayder}, journal={Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03), (2003), Borovets, Bulgaria, 482-489}, year={2003}, number={NRC-46506}, archivePrefix={arXiv}, eprint={cs/0309035}, primaryClass={cs.CL cs.IR cs.LG} }
turney2003combining
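Two of the three merging rules the abstract above compares can be sketched directly: the mixture rule averages the modules' probability distributions, while the logarithmic rule combines them multiplicatively (a normalised geometric mean under the uniform weights assumed here). The paper's novel product rule has its own parameterisation, which is not reproduced in this sketch.

```python
def normalise(p):
    s = sum(p)
    return [v / s for v in p]

def mixture(dists):
    """Mixture rule: arithmetic mean of the distributions."""
    return normalise([sum(col) for col in zip(*dists)])

def logarithmic(dists, eps=1e-12):
    """Logarithmic rule: normalised geometric mean (uniform weights)."""
    prod = [1.0] * len(dists[0])
    for d in dists:
        prod = [a * max(b, eps) for a, b in zip(prod, d)]
    return normalise([v ** (1 / len(dists)) for v in prod])

modules = [[0.6, 0.2, 0.1, 0.1],   # module A's distribution over 4 answer choices
           [0.4, 0.4, 0.1, 0.1],   # module B
           [0.5, 0.1, 0.3, 0.1]]   # module C
print(mixture(modules))      # ensemble answer = argmax of the merged distribution
print(logarithmic(modules))  # multiplicative merging punishes disagreement harder
```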
arxiv-671425
cs/0309036
A Neural Network Assembly Memory Model Based on an Optimal Binary Signal Detection Theory
<|reference_start|>A Neural Network Assembly Memory Model Based on an Optimal Binary Signal Detection Theory: A ternary/binary data coding algorithm and conditions under which Hopfield networks implement optimal convolutional or Hamming decoding algorithms are described. Using the coding/decoding approach introduced (an optimal Binary Signal Detection Theory, BSDT), a Neural Network Assembly Memory Model (NNAMM) is built. The model provides optimal (the best) basic memory performance and demands the use of a new memory unit architecture with two-layer Hopfield network, N-channel time gate, auxiliary reference memory, and two nested feedback loops. NNAMM explicitly describes the dependence on time of a memory trace retrieval, gives the possibility of metamemory simulation, generalized knowledge representation, and a distinct description of conscious and unconscious mental processes. A model of the smallest inseparable part or an "atom" of consciousness is also defined. The NNAMM's neurobiological backgrounds and its applications to solving some interdisciplinary problems are briefly discussed. BSDT could implement the "best neural code" used in nervous tissues of animals and humans.<|reference_end|>
arxiv
@article{gopych2003a, title={A Neural Network Assembly Memory Model Based on an Optimal Binary Signal Detection Theory}, author={Petro M. Gopych}, journal={Problemy Programmirovaniya (Programming Problems, Kyiv, Ukraine), 2004, no. 2-3, pp. 473-479.}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309036}, primaryClass={cs.AI cs.IR cs.NE q-bio.NC q-bio.QM} }
gopych2003a
arxiv-671426
cs/0309037
Postmortem Object Type Identification
<|reference_start|>Postmortem Object Type Identification: This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems -- a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system -- the Solaris operating system kernel -- and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.<|reference_end|>
arxiv
@article{cantrill2003postmortem, title={Postmortem Object Type Identification}, author={Bryan M. Cantrill}, journal={arXiv preprint arXiv:cs/0309037}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309037}, primaryClass={cs.SE} }
cantrill2003postmortem
arxiv-671427
cs/0309038
A novel evolutionary formulation of the maximum independent set problem
<|reference_start|>A novel evolutionary formulation of the maximum independent set problem: We introduce a novel evolutionary formulation of the problem of finding a maximum independent set of a graph. The new formulation is based on the relationship that exists between a graph's independence number and its acyclic orientations. It views such orientations as individuals and evolves them with the aid of evolutionary operators that are very heavily based on the structure of the graph and its acyclic orientations. The resulting heuristic has been tested on some of the Second DIMACS Implementation Challenge benchmark graphs, and has been found to be competitive when compared to several of the other heuristics that have also been tested on those graphs.<|reference_end|>
arxiv
@article{barbosa2003a, title={A novel evolutionary formulation of the maximum independent set problem}, author={V. C. Barbosa, L. C. D. Campos}, journal={Journal of Combinatorial Optimization 8 (2004), 419-437}, year={2003}, doi={10.1007/s10878-004-4835-9}, number={ES-615/03}, archivePrefix={arXiv}, eprint={cs/0309038}, primaryClass={cs.NE} }
barbosa2003a
arxiv-671428
cs/0309039
Two novel evolutionary formulations of the graph coloring problem
<|reference_start|>Two novel evolutionary formulations of the graph coloring problem: We introduce two novel evolutionary formulations of the problem of coloring the nodes of a graph. The first formulation is based on the relationship that exists between a graph's chromatic number and its acyclic orientations. It views such orientations as individuals and evolves them with the aid of evolutionary operators that are very heavily based on the structure of the graph and its acyclic orientations. The second formulation, unlike the first one, does not tackle one graph at a time, but rather aims at evolving a `program' to color all graphs belonging to a class whose members all have the same number of nodes and other common attributes. The heuristics that result from these formulations have been tested on some of the Second DIMACS Implementation Challenge benchmark graphs, and have been found to be competitive when compared to the several other heuristics that have also been tested on those graphs.<|reference_end|>
arxiv
@article{barbosa2003two, title={Two novel evolutionary formulations of the graph coloring problem}, author={V. C. Barbosa, C. A. G. Assis, J. O. do Nascimento}, journal={Journal of Combinatorial Optimization 8 (2004), 41-63}, year={2003}, doi={10.1023/B:JOCO.0000021937.26468.b2}, number={ES-553/01}, archivePrefix={arXiv}, eprint={cs/0309039}, primaryClass={cs.NE} }
barbosa2003two
arxiv-671429
cs/0309040
A distributed algorithm to find k-dominating sets
<|reference_start|>A distributed algorithm to find k-dominating sets: We consider a connected undirected graph $G(n,m)$ with $n$ nodes and $m$ edges. A $k$-dominating set $D$ in $G$ is a set of nodes having the property that every node in $G$ is at most $k$ edges away from at least one node in $D$. Finding a $k$-dominating set of minimum size is NP-hard. We give a new synchronous distributed algorithm to find a $k$-dominating set in $G$ of size no greater than $\lfloor n/(k+1)\rfloor$. Our algorithm requires $O(k\log^*n)$ time and $O(m\log k+n\log k\log^*n)$ messages to run. It has the same time complexity as the best currently known algorithm, but improves on that algorithm's message complexity and is, in addition, conceptually simpler.<|reference_end|>
arxiv
@article{penso2003a, title={A distributed algorithm to find k-dominating sets}, author={L. D. Penso, V. C. Barbosa}, journal={Discrete Applied Mathematics 141 (2004), 243-253}, year={2003}, doi={10.1016/S0166-218X(03)00368-8}, number={ES-552/01}, archivePrefix={arXiv}, eprint={cs/0309040}, primaryClass={cs.DC} }
penso2003a
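The size bound quoted in the entry above has a simple existence argument that can be shown in code: BFS-layer the graph from any root, keep the smallest residue class of depths mod (k+1), and add the root to cover the shallow layers. This is a centralized illustration yielding a set of size at most floor(n/(k+1)) + 1, not the paper's distributed algorithm or its exact bound.

```python
from collections import deque

def k_dominating_set(adj, root, k):
    """Return a k-dominating set via BFS layering (centralized sketch)."""
    depth = {root: 0}
    queue = deque([root])
    while queue:                              # BFS to compute depths
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    classes = [[] for _ in range(k + 1)]      # partition nodes by depth mod (k+1)
    for v, d in depth.items():
        classes[d % (k + 1)].append(v)
    best = min(classes, key=len)              # smallest class has <= n/(k+1) nodes
    # Every node has an ancestor in the chosen class within k tree edges,
    # except nodes shallower than the chosen residue, which the root covers.
    return set(best) | {root}

# Path graph on 7 nodes, k = 2: every node ends up within 2 hops of the set.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 7] for i in range(7)}
print(k_dominating_set(adj, root=0, k=2))
```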
arxiv-671430
cs/0309041
Fast Verification of Convexity of Piecewise-linear Surfaces
<|reference_start|>Fast Verification of Convexity of Piecewise-linear Surfaces: We show that a realization of a closed connected PL-manifold of dimension n-1 in n-dimensional Euclidean space (n>2) is the boundary of a convex polyhedron (finite or infinite) if and only if the interior of each (n-3)-face has a point, which has a neighborhood lying on the boundary of an n-dimensional convex body. No initial assumptions about the topology or orientability of the input surface are made. The theorem is derived from a refinement and generalization of Van Heijenoort's theorem on locally convex manifolds to spherical spaces. Our convexity criterion for PL-manifolds implies an easy polynomial-time algorithm for checking convexity of a given PL-surface in n-dimensional Euclidean or spherical space, n>2. The algorithm is worst-case optimal with respect to both the number of operations and the algebraic degree. The algorithm works under significantly weaker assumptions and is easier to implement than convexity verification algorithms suggested by Mehlhorn et al. (1996-1999), and Devillers et al. (1998). A paradigm of approximate convexity is suggested, and a simplified algorithm of smaller degree and complexity is given for approximate floating-point convexity verification.<|reference_end|>
arxiv
@article{rybnikov2003fast, title={Fast Verification of Convexity of Piecewise-linear Surfaces}, author={Konstantin Rybnikov}, journal={arXiv preprint arXiv:cs/0309041}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309041}, primaryClass={cs.CG cs.CV} }
rybnikov2003fast
arxiv-671431
cs/0309042
On reducing the complexity of matrix clocks
<|reference_start|>On reducing the complexity of matrix clocks: Matrix clocks are a generalization of the notion of vector clocks that allows the local representation of causal precedence to reach into an asynchronous distributed computation's past with depth $x$, where $x\ge 1$ is an integer. Maintaining matrix clocks correctly in a system of $n$ nodes requires that every message be accompanied by $O(n^x)$ numbers, which reflects an exponential dependency of the complexity of matrix clocks upon the desired depth $x$. We introduce a novel type of matrix clock, one that requires only $nx$ numbers to be attached to each message while maintaining what for many applications may be the most significant portion of the information that the original matrix clock carries. In order to illustrate the new clock's applicability, we demonstrate its use in the monitoring of certain resource-sharing computations.<|reference_end|>
arxiv
@article{drummond2003on, title={On reducing the complexity of matrix clocks}, author={L. M. A. Drummond, V. C. Barbosa}, journal={Parallel Computing 29 (2003), 895-905}, year={2003}, doi={10.1016/S0167-8191(03)00066-8}, archivePrefix={arXiv}, eprint={cs/0309042}, primaryClass={cs.DC} }
drummond2003on
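As background for the entry above: the depth-1 end of the vector/matrix-clock spectrum is the ordinary vector clock, which attaches n numbers to each message. The sketch below shows that baseline only; the paper's contribution, a depth-x matrix clock carrying nx rather than O(n^x) numbers, is not reproduced here.

```python
class VectorClock:
    """Ordinary vector clock: the depth-1 case of the matrix-clock family."""

    def __init__(self, n: int, me: int):
        self.v = [0] * n   # v[i] = latest event count heard of from node i
        self.me = me

    def local_event(self):
        self.v[self.me] += 1

    def send(self):
        """Tick, then return the timestamp attached to the message (n numbers)."""
        self.local_event()
        return list(self.v)

    def receive(self, ts):
        """Merge component-wise maxima, then tick for the receive event."""
        self.v = [max(a, b) for a, b in zip(self.v, ts)]
        self.local_event()

a, b = VectorClock(2, 0), VectorClock(2, 1)
msg = a.send()        # a's clock becomes [1, 0]
b.receive(msg)        # b's clock becomes [1, 1]: a's send causally precedes
print(a.v, b.v)
```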
arxiv-671432
cs/0309043
Finding approximate palindromes in strings
<|reference_start|>Finding approximate palindromes in strings: We introduce a novel definition of approximate palindromes in strings, and provide an algorithm to find all maximal approximate palindromes in a string with up to $k$ errors. Our definition is based on the usual edit operations of approximate pattern matching, and the algorithm we give, for a string of size $n$ on a fixed alphabet, runs in $O(k^2 n)$ time. We also discuss two implementation-related improvements to the algorithm, and demonstrate their efficacy in practice by means of both experiments and an average-case analysis.<|reference_end|>
arxiv
@article{porto2003finding, title={Finding approximate palindromes in strings}, author={A. H. L. Porto, V. C. Barbosa}, journal={Pattern Recognition 35 (2002), 2581-2591}, year={2003}, doi={10.1016/S0031-3203(01)00179-0}, archivePrefix={arXiv}, eprint={cs/0309043}, primaryClass={cs.DS} }
porto2003finding
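For orientation on the entry above, here is a brute-force reference for one natural notion of approximate palindrome: the minimum number of edit operations (substitutions and deletions at the two ends) needed to turn a string into a palindrome. This O(n^2) dynamic program is only a baseline for checking small inputs; the paper's algorithm finds all maximal approximate palindromes in O(k^2 n) under its own definition, which this sketch does not claim to match.

```python
from functools import lru_cache

def palindrome_distance(s: str) -> int:
    """Minimum edits to make s a palindrome (interval DP)."""
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i >= j:
            return 0
        if s[i] == s[j]:
            return cost(i + 1, j - 1)
        return 1 + min(cost(i + 1, j - 1),  # substitute one end
                       cost(i + 1, j),      # delete s[i]
                       cost(i, j - 1))      # delete s[j]
    return cost(0, len(s) - 1)

def is_k_palindrome(s: str, k: int) -> bool:
    return palindrome_distance(s) <= k

print(palindrome_distance("racecar"))   # 0: already a palindrome
print(palindrome_distance("raceczar"))  # 1: delete the z
```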
arxiv-671433
cs/0309044
The combinatorics of resource sharing
<|reference_start|>The combinatorics of resource sharing: We discuss general models of resource-sharing computations, with emphasis on the combinatorial structures and concepts that underlie the various deadlock models that have been proposed, the design of algorithms and deadlock-handling policies, and concurrency issues. These structures are mostly graph-theoretic in nature, or partially ordered sets for the establishment of priorities among processes and acquisition orders on resources. We also discuss graph-coloring concepts as they relate to resource sharing.<|reference_end|>
arxiv
@article{barbosa2003the, title={The combinatorics of resource sharing}, author={V. C. Barbosa}, journal={arXiv preprint arXiv:cs/0309044}, year={2003}, doi={10.1007/978-1-4757-3609-0_2}, archivePrefix={arXiv}, eprint={cs/0309044}, primaryClass={cs.OS cs.DC} }
barbosa2003the
arxiv-671434
cs/0309045
A uniform approach to constraint-solving for lists, multisets, compact lists, and sets
<|reference_start|>A uniform approach to constraint-solving for lists, multisets, compact lists, and sets: Lists, multisets, and sets are well-known data structures whose usefulness is widely recognized in various areas of Computer Science. These data structures have been analyzed from an axiomatic point of view with a parametric approach in (*) where the relevant unification algorithms have been developed. In this paper we extend these results considering more general constraints including not only equality but also membership constraints as well as their negative counterparts. (*) A. Dovier, A. Policriti, and G. Rossi. A uniform axiomatic view of lists, multisets, and sets, and the relevant unification algorithms. Fundamenta Informaticae, 36(2/3):201--234, 1998.<|reference_end|>
arxiv
@article{dovier2003a, title={A uniform approach to constraint-solving for lists, multisets, compact lists, and sets}, author={Agostino Dovier, Carla Piazza and Gianfranco Rossi}, journal={arXiv preprint arXiv:cs/0309045}, year={2003}, number={"Quaderni del Dipartimento di Matematica", 235}, archivePrefix={arXiv}, eprint={cs/0309045}, primaryClass={cs.PL cs.LO cs.SC} }
dovier2003a
arxiv-671435
cs/0309046
The Liar and Related Paradoxes: Fuzzy Truth Value Assignment for Collections of Self-Referential Sentences
<|reference_start|>The Liar and Related Paradoxes: Fuzzy Truth Value Assignment for Collections of Self-Referential Sentences: We study self-referential sentences of the type related to the Liar paradox. In particular, we consider the problem of assigning consistent fuzzy truth values to collections of self-referential sentences. We show that the problem can be reduced to the solution of a system of nonlinear equations. Furthermore, we prove that, under mild conditions, such a system always has a solution (i.e. a consistent truth value assignment) and that, for a particular implementation of logical ``and'', ``or'' and ``negation'', the ``mid-point'' solution is always consistent. Next we turn to computational issues and present several truth-value assignment algorithms; we argue that these algorithms can be understood as generalized sequential reasoning. In an Appendix we present a large number of examples of self-referential collections (including the Liar and the Strengthened Liar), we formulate the corresponding truth value equations and solve them analytically and/or numerically.<|reference_end|>
arxiv
@article{vezerides2003the, title={The Liar and Related Paradoxes: Fuzzy Truth Value Assignment for Collections of Self-Referential Sentences}, author={K. Vezerides and Ath. Kehagias}, journal={arXiv preprint arXiv:cs/0309046}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309046}, primaryClass={cs.LO} }
vezerides2003the
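The simplest instance of the equations in the abstract above is the Liar itself, L = "L is false": with fuzzy negation 1 - x, consistency requires x = 1 - x, whose unique solution is the mid-point x = 1/2. The damped iteration below is a toy sequential-reasoning scheme (the undamped map x -> 1 - x merely oscillates); the paper's algorithms are more general.

```python
def solve_liar(x0=0.9, damping=0.5, steps=60):
    """Iterate a damped update toward the fixed point of x = 1 - x."""
    x = x0
    for _ in range(steps):
        # x_new = x + damping * ((1 - x) - x): move part-way toward 1 - x.
        x = (1 - damping) * x + damping * (1 - x)
    return x

print(solve_liar())  # converges to 0.5, the consistent truth value
```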
arxiv-671436
cs/0309047
Causes and Effects in Computer Programs
<|reference_start|>Causes and Effects in Computer Programs: Debugging is commonly understood as finding and fixing the cause of a problem. But what does ``cause'' mean? How can we find causes? How can we prove that a cause is a cause--or even ``the'' cause? This paper defines common terms in debugging, highlights the principal techniques, their capabilities and limitations.<|reference_end|>
arxiv
@article{zeller2003causes, title={Causes and Effects in Computer Programs}, author={Andreas Zeller}, journal={arXiv preprint arXiv:cs/0309047}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309047}, primaryClass={cs.SE} }
zeller2003causes
arxiv-671437
cs/0309048
Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements
<|reference_start|>Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements: We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Goedel's celebrated self-referential formulas (1931), such a problem solver rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal - no local maxima! - since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.<|reference_end|>
arxiv
@article{schmidhuber2003goedel, title={Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements}, author={Juergen Schmidhuber}, journal={Variants published in "Adaptive Agents and Multi-Agent Systems II", LNCS 3394, p. 1-23, Springer, 2005: ISBN 978-3-540-25260-3; as well as in Proc. ICANN 2005, LNCS 3697, p. 223-233, Springer, 2005 (plenary talk); as well as in "Artificial General Intelligence", Series: Cognitive Technologies, Springer, 2006: ISBN-13: 978-3-540-23733-4}, year={2003}, number={IDSIA-19-03}, archivePrefix={arXiv}, eprint={cs/0309048}, primaryClass={cs.LO cs.AI} }
schmidhuber2003goedel
arxiv-671438
cs/0309049
Control and Debugging of Distributed Programs Using Fiddle
<|reference_start|>Control and Debugging of Distributed Programs Using Fiddle: The main goal of Fiddle, a distributed debugging engine, is to provide a flexible platform for developing debugging tools. Fiddle provides a layered set of interfaces with a minimal set of debugging functionalities, for the inspection and control of distributed and multi-threaded applications. This paper illustrates how Fiddle is used to support integrated testing and debugging. The approach described is based on a tool, called Deipa, that interprets sequences of commands read from an input file, generated by an independent testing tool. Deipa acts as a Fiddle client, in order to enforce specific execution paths in a distributed PVM program. Other Fiddle clients may be used along with Deipa for the fine debugging at process level. Fiddle and Deipa functionalities and architectures are described, and a working example shows a step-by-step application of these tools.<|reference_end|>
arxiv
@article{lourenco2003control, title={Control and Debugging of Distributed Programs Using Fiddle}, author={Joao Lourenco, Jose C. Cunha, Vitor Moreira}, journal={arXiv preprint arXiv:cs/0309049}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309049}, primaryClass={cs.DC} }
lourenco2003control
arxiv-671439
cs/0309050
Computing Igusa's Local Zeta Functions of Univariate Polynomials, and Linear Feedback Shift Registers
<|reference_start|>Computing Igusa's Local Zeta Functions of Univariate Polynomials, and Linear Feedback Shift Registers: We give a polynomial time algorithm for computing the Igusa local zeta function $Z(s,f)$ attached to a polynomial $f(x)\in \mathbb{Z}[x]$, in one variable, with splitting field $\mathbb{Q}$, and a prime number $p$. We also propose a new class of Linear Feedback Shift Registers based on the computation of Igusa's local zeta function.<|reference_end|>
arxiv
@article{zuniga-galindo2003computing, title={Computing Igusa's Local Zeta Functions of Univariate Polynomials, and Linear Feedback Shift Registers}, author={W. A. Zuniga-Galindo}, journal={arXiv preprint arXiv:cs/0309050}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309050}, primaryClass={cs.SC cs.CR} }
zuniga-galindo2003computing
arxiv-671440
cs/0309051
New Lattice Based Cryptographic Constructions
<|reference_start|>New Lattice Based Cryptographic Constructions: We introduce the use of Fourier analysis on lattices as an integral part of a lattice based construction. The tools we develop provide an elegant description of certain Gaussian distributions around lattice points. Our results include two cryptographic constructions which are based on the worst-case hardness of the unique shortest vector problem. The main result is a new public key cryptosystem whose security guarantee is considerably stronger than previous results ($O(n^{1.5})$ instead of $O(n^7)$). This provides the first alternative to Ajtai and Dwork's original 1996 cryptosystem. Our second result is a family of collision resistant hash functions which, apart from improving the security in terms of the unique shortest vector problem, is also the first example of an analysis which is not based on Ajtai's iterative step. Surprisingly, both results are derived from one theorem which presents two indistinguishable distributions on the segment $[0,1)$. It seems that this theorem can have further applications and as an example we mention how it can be used to solve an open problem related to quantum computation.<|reference_end|>
arxiv
@article{regev2003new, title={New Lattice Based Cryptographic Constructions}, author={Oded Regev}, journal={arXiv preprint arXiv:cs/0309051}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309051}, primaryClass={cs.CR} }
regev2003new
arxiv-671441
cs/0309052
Minimal DFAs for Testing Divisibility
<|reference_start|>Minimal DFAs for Testing Divisibility: We present and prove a theorem answering the question "how many states does a minimal deterministic finite automaton (DFA) that recognizes the set of base-b numbers divisible by k have?"<|reference_end|>
arxiv
@article{alexeev2003minimal, title={Minimal DFAs for Testing Divisibility}, author={Boris Alexeev}, journal={J. Comput. System Sci. 69 (2004), no. 2, 235--243}, year={2003}, doi={10.1016/j.jcss.2004.02.001}, archivePrefix={arXiv}, eprint={cs/0309052}, primaryClass={cs.CC} }
alexeev2003minimal
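The question in the entry above starts from a standard construction worth making concrete: to test divisibility by k in base b, track the value of the digits read so far modulo k, sending state q on digit d to (q*b + d) mod k. This gives a k-state DFA that recognizes the language but is not always minimal; the paper's theorem counts the states of the minimal one, which this sketch does not compute.

```python
def make_divisibility_dfa(k: int, b: int):
    """Transition table of the remainder automaton; start/accept state is 0."""
    return {(q, d): (q * b + d) % k for q in range(k) for d in range(b)}

def accepts(delta, digits):
    q = 0
    for d in digits:
        q = delta[(q, d)]   # state q is the value of the prefix mod k
    return q == 0

dfa = make_divisibility_dfa(k=3, b=10)
print(accepts(dfa, [1, 2, 3]))  # 123 = 41 * 3 -> True
print(accepts(dfa, [1, 2, 4]))  # 124 is not divisible by 3 -> False
```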
arxiv-671442
cs/0309053
A Hierarchical Situation Calculus
<|reference_start|>A Hierarchical Situation Calculus: A situation calculus is presented that provides a solution to the frame problem for hierarchical situations, that is, situations that have a modular structure in which parts of the situation behave in a relatively independent manner. This situation calculus is given in a relational, functional, and modal logic form. Each form permits both a single level hierarchy or a multiple level hierarchy, giving six versions of the formalism in all, and a number of sub-versions of these. For multiple level hierarchies, it is possible to give equations between parts of the situation to impose additional structure on the problem. This approach is compared to others in the literature.<|reference_end|>
arxiv
@article{plaisted2003a, title={A Hierarchical Situation Calculus}, author={David A. Plaisted}, journal={arXiv preprint arXiv:cs/0309053}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309053}, primaryClass={cs.AI cs.LO} }
plaisted2003a
arxiv-671443
cs/0309054
Active Internet Traffic Filtering: Real-time Response to Denial of Service Attacks
<|reference_start|>Active Internet Traffic Filtering: Real-time Response to Denial of Service Attacks: Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no recourse but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth.<|reference_end|>
arxiv
@article{argyraki2003active, title={Active Internet Traffic Filtering: Real-time Response to Denial of Service Attacks}, author={Katerina J. Argyraki and David R. Cheriton}, journal={Updated versions in Proc. USENIX Annual Technical Conference, April 2005, and IEEE/ACM Transactions on Networking, 17(4):1284-1297, August 2009}, year={2003}, doi={10.1109/TNET.2008.2007431}, archivePrefix={arXiv}, eprint={cs/0309054}, primaryClass={cs.NI} }
argyraki2003active
arxiv-671444
cs/0309055
A mathematical framework for automated bug localization
<|reference_start|>A mathematical framework for automated bug localization: In this paper, we propose a mathematical framework for automated bug localization. This framework can be briefly summarized as follows. A program execution can be represented as a rooted acyclic directed graph. We define an execution snapshot by a cut-set on the graph. A program state can be regarded as a conjunction of labels on edges in a cut-set. Then we argue that a debugging task is a pruning process of the execution graph by using cut-sets. A pruning algorithm, i.e., a debugging task, is also presented.<|reference_end|>
arxiv
@article{ohta2003a, title={A mathematical framework for automated bug localization}, author={Tsuyoshi Ohta, Tadanori Mizuno (Shizuoka University)}, journal={arXiv preprint arXiv:cs/0309055}, year={2003}, archivePrefix={arXiv}, eprint={cs/0309055}, primaryClass={cs.SE} }
ohta2003a
arxiv-671445
cs/0310001
A Performance Analysis Tool for Nokia Mobile Phone Software
<|reference_start|>A Performance Analysis Tool for Nokia Mobile Phone Software: Performance problems are often observed in embedded software systems. The reasons for poor performance are frequently not obvious. Bottlenecks can occur in any of the software components along the execution path. Therefore it is important to instrument and monitor the different components contributing to the runtime behavior of an embedded software system. Performance analysis tools can help locate performance bottlenecks in embedded software systems by monitoring the software's execution and producing easily understandable performance data. We maintain and further develop a tool for analyzing the performance of Nokia mobile phone software. The user can select among four performance analysis reports to be generated: average processor load, processor utilization, task execution time statistics, and task execution timeline. Each of these reports provides important information about where execution time is being spent. The demo will show how the tool helps to identify performance bottlenecks in Nokia mobile phone software and better understand areas of poor performance.<|reference_end|>
arxiv
@article{metz2003a, title={A Performance Analysis Tool for Nokia Mobile Phone Software}, author={Edu Metz, Raimondas Lencevicius}, journal={arXiv preprint arXiv:cs/0310001}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310001}, primaryClass={cs.SE cs.PF} }
metz2003a
arxiv-671446
cs/0310002
The Graphics Card as a Streaming Computer
<|reference_start|>The Graphics Card as a Streaming Computer: Massive data sets have radically changed our understanding of how to design efficient algorithms; the streaming paradigm, whether in terms of the number of passes of an external-memory algorithm or the single pass and limited memory of a stream algorithm, appears to be the dominant method for coping with large data. A very different kind of massive computation has had the same effect at the level of the CPU. The most prominent example is that of the computations performed by a graphics card. The operations themselves are very simple, and require very little memory, but require the ability to perform many computations extremely fast and in parallel to whatever degree possible. What has resulted is a stream processor that is highly optimized for stream computations. An intriguing side effect of this is the growing use of a graphics card as a general purpose stream processing engine. In an ever-increasing array of applications, researchers are discovering that performing a computation on a graphics card is far faster than performing it on a CPU, and so are using a GPU as a stream co-processor.<|reference_end|>
arxiv
@article{venkatasubramanian2003the, title={The Graphics Card as a Streaming Computer}, author={Suresh Venkatasubramanian}, journal={In SIGMOD Workshop on Management and Processing of Massive Data (June 2003)}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310002}, primaryClass={cs.GR cs.AR} }
venkatasubramanian2003the
arxiv-671447
cs/0310003
The Wake Up and Report Problem is Time-Equivalent to the Firing Squad Synchronization Problem
<|reference_start|>The Wake Up and Report Problem is Time-Equivalent to the Firing Squad Synchronization Problem: We consider several problems relating to strongly-connected directed networks of identical finite-state processors that work synchronously in discrete time steps. The conceptually simplest of these is the Wake Up and Report Problem; this is the problem of having a unique "root" processor send a signal to all other processors in the network and then enter a special "done" state only when all other processors have received the signal. The most difficult of the problems we consider is the classic Firing Squad Synchronization Problem; this is the much-studied problem of achieving macro-synchronization in a network given micro-synchronization. We show, via a complex algorithmic application of the "snake" data structure first introduced in Even, Litman, and Winkler [ELW], that these two problems are asymptotically time-equivalent up to a constant factor. This result leads immediately to the inclusion of several other related problems into this new asymptotic time-class.<|reference_end|>
arxiv
@article{goldstein2003the, title={The Wake Up and Report Problem is Time-Equivalent to the Firing Squad Synchronization Problem}, author={Darin Goldstein and Nick Meyer}, journal={arXiv preprint arXiv:cs/0310003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310003}, primaryClass={cs.DC cs.DS} }
goldstein2003the
arxiv-671448
cs/0310004
Determination of the Topology of a Directed Network
<|reference_start|>Determination of the Topology of a Directed Network: We consider strongly-connected directed networks of identical synchronous, finite-state processors with in- and out-degree uniformly bounded by a network constant. Via a straightforward extension of Ostrovsky and Wilkerson's Backwards Communication Algorithm in [OW], we exhibit a protocol which solves the Global Topology Determination Problem, the problem of having the root processor map the global topology of a network of unknown size and topology, with running time O(ND) where N represents the number of processors and D represents the diameter of the network. A simple counting argument suffices to show that the Global Topology Determination Problem has time-complexity Omega(N log N), which makes the protocol presented asymptotically time-optimal for many large networks.<|reference_end|>
arxiv
@article{goldstein2003determination, title={Determination of the Topology of a Directed Network}, author={Darin Goldstein}, journal={arXiv preprint arXiv:cs/0310004}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310004}, primaryClass={cs.DC cs.DS} }
goldstein2003determination
arxiv-671449
cs/0310005
Using Artificial Intelligence for Model Selection
<|reference_start|>Using Artificial Intelligence for Model Selection: We apply the optimization algorithm Adaptive Simulated Annealing (ASA) to the problem of analyzing data on a large population and selecting the best model to predict that an individual with various traits will have a particular disease. We compare ASA with traditional forward and backward regression on computer simulated data. We find that the traditional methods of modeling are better for smaller data sets whereas a numerically stable ASA seems to perform better on larger and more complicated data sets.<|reference_end|>
arxiv
@article{goldstein2003using, title={Using Artificial Intelligence for Model Selection}, author={Darin Goldstein, William Murray, and Binh Yang}, journal={arXiv preprint arXiv:cs/0310005}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310005}, primaryClass={cs.AI q-bio.QM} }
goldstein2003using
arxiv-671450
cs/0310006
The Lowell Database Research Self Assessment
<|reference_start|>The Lowell Database Research Self Assessment: A group of senior database researchers gathers every few years to assess the state of database research and to point out problem areas that deserve additional focus. This report summarizes the discussion and conclusions of the sixth ad-hoc meeting held May 4-6, 2003 in Lowell, Mass. It observes that information management continues to be a critical component of most complex software systems. It recommends that database researchers increase focus on: integration of text, data, code, and streams; fusion of information from heterogeneous data sources; reasoning about uncertain data; unsupervised data mining for interesting correlations; information privacy; and self-adaptation and repair.<|reference_end|>
arxiv
@article{abiteboul2003the, title={The Lowell Database Research Self Assessment}, author={Serge Abiteboul, Rakesh Agrawal, Phil Bernstein, Mike Carey, Stefano Ceri, Bruce Croft, David DeWitt, Mike Franklin, Hector Garcia Molina, Dieter Gawlick, Jim Gray, Laura Haas, Alon Halevy, Joe Hellerstein, Yannis Ioannidis, Martin Kersten, Michael Pazzani, Mike Lesk, David Maier, Jeff Naughton, Hans Schek, Timos Sellis, Avi Silberschatz, Mike Stonebraker, Rick Snodgrass, Jeff Ullman, Gerhard Weikum, Jennifer Widom, and Stan Zdonik}, journal={arXiv preprint arXiv:cs/0310006}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310006}, primaryClass={cs.DB} }
abiteboul2003the
arxiv-671451
cs/0310007
Event-based Program Analysis with DeWiz
<|reference_start|>Event-based Program Analysis with DeWiz: Due to the increased complexity of parallel and distributed programs, debugging of them is considered to be the most difficult and time consuming part of the software lifecycle. Tool support is hence a crucial necessity to hide complexity from the user. However, most existing tools seem inadequate as soon as the program under consideration exploits more than a few processors over a long execution time. This problem is addressed by the novel debugging tool DeWiz (Debugging Wizard), whose focus lies on scalability. DeWiz has a modular, scalable architecture, and uses the event graph model as a representation of the investigated program. DeWiz provides a set of modules, which can be combined to generate, analyze, and visualize event graph data. Within this processing pipeline the toolset tries to extract useful information, which is presented to the user at an arbitrary level of abstraction. Additionally, DeWiz is a framework, which can be used to easily implement arbitrary user-defined modules.<|reference_end|>
arxiv
@article{schaubschlaeger2003event-based, title={Event-based Program Analysis with DeWiz}, author={Ch. Schaubschlaeger, D. Kranzlmueller, J. Volkert}, journal={arXiv preprint arXiv:cs/0310007}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310007}, primaryClass={cs.SE} }
schaubschlaeger2003event-based
arxiv-671452
cs/0310008
Poster on MPI application in Computational Fluid Dynamics
<|reference_start|>Poster on MPI application in Computational Fluid Dynamics: Poster-presentation of the paper "Message Passing Fluids: molecules as processes in parallel computational fluids" held at "EURO PVMMPI 2003" Congress; the paper is on the proceedings "Recent Advances in Parallel Virtual Machine and Message Passing Interface", 10th European PVM/MPI User's Group Meeting, LNCS 2840, Springer-Verlag, Dongarra-Laforenza-Orlando editors, pp. 550-554.<|reference_end|>
arxiv
@article{argentini2003poster, title={Poster on MPI application in Computational Fluid Dynamics}, author={Gianluca Argentini}, journal={arXiv preprint arXiv:cs/0310008}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310008}, primaryClass={cs.DC cs.GR} }
argentini2003poster
arxiv-671453
cs/0310009
On Interference of Signals and Generalization in Feedforward Neural Networks
<|reference_start|>On Interference of Signals and Generalization in Feedforward Neural Networks: This paper studies how the generalization ability of neurons can be affected by the mutual processing of different signals. This study is done on the basis of a feedforward artificial neural network. Such mutual processing of signals may model the patterns in a set generalized by a neural network, and in effect may improve generalization. The paper also discusses how the interference may cause a highly random generalization. Adaptive activation functions are discussed as a way of reducing that type of generalization. A test of a feedforward neural network is performed that shows the discussed random generalization.<|reference_end|>
arxiv
@article{rataj2003on, title={On Interference of Signals and Generalization in Feedforward Neural Networks}, author={Artur Rataj}, journal={arXiv preprint arXiv:cs/0310009}, year={2003}, number={IITiS-2002-08-1-1.04}, archivePrefix={arXiv}, eprint={cs/0310009}, primaryClass={cs.NE} }
rataj2003on
arxiv-671454
cs/0310010
Transient Diversity in Multi-Agent Systems
<|reference_start|>Transient Diversity in Multi-Agent Systems: Diversity is an important aspect of highly efficient multi-agent teams. We introduce the main factors that drive a multi-agent system in either direction along the diversity scale. A metric for diversity is described, and we speculate on the concept of transient diversity. Finally, an experiment on social entropy using a RoboCup simulated soccer team is presented.<|reference_end|>
arxiv
@article{lyback2003transient, title={Transient Diversity in Multi-Agent Systems}, author={David Lyback}, journal={arXiv preprint arXiv:cs/0310010}, year={2003}, number={M.Sc. thesis No. 99-x-097. Dept. of Computer and Systems Sciences, Royal Inst. of Technology, Sweden}, archivePrefix={arXiv}, eprint={cs/0310010}, primaryClass={cs.AI cs.MA} }
lyback2003transient
arxiv-671455
cs/0310011
Re-Finding Found Things: An Exploratory Study of How Users Re-Find Information
<|reference_start|>Re-Finding Found Things: An Exploratory Study of How Users Re-Find Information: The problem of how people find information is studied extensively; however, the problem of how people organize, re-use, and re-find information that they have found is not as well understood. Recently, several projects have conducted in-situ studies to explore how people re-find and re-use information. Here, we present results and observations from a controlled, laboratory study of re-finding information found on the web. Our study was conducted as a collaborative exercise with pairs of participants. One participant acted as a retriever, helping the other participant re-find information by telephone. This design allowed us to gain insight into the strategies that users employed to re-find information, and into how domain artifacts and contextual information were used to aid the re-finding process. We also introduced the ability for users to add their own explicit artifacts in the form of annotations on the web pages they viewed. We observe that re-finding often occurs as a two-stage, iterative process in which users first attempt to locate an information source (search), and once found, begin a process to find the specific information being sought (browse). Our findings are consistent with research on waypoints; orienteering approaches to re-finding; and navigation of electronic spaces. Furthermore, we observed that annotations were utilized extensively, indicating that context explicitly added by the user can play an important role in re-finding.<|reference_end|>
arxiv
@article{capra2003re-finding, title={Re-Finding Found Things: An Exploratory Study of How Users Re-Find Information}, author={Robert G. Capra, Manuel A. Perez-Quinones}, journal={arXiv preprint arXiv:cs/0310011}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310011}, primaryClass={cs.HC cs.IR} }
capra2003re-finding
arxiv-671456
cs/0310012
A Formal Comparison of Visual Web Wrapper Generators
<|reference_start|>A Formal Comparison of Visual Web Wrapper Generators: We study the core fragment of the Elog wrapping language used in the Lixto system (a visual wrapper generator) and formally compare Elog to other wrapping languages proposed in the literature.<|reference_end|>
arxiv
@article{gottlob2003a, title={A Formal Comparison of Visual Web Wrapper Generators}, author={Georg Gottlob and Christoph Koch}, journal={arXiv preprint arXiv:cs/0310012}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310012}, primaryClass={cs.DB} }
gottlob2003a
arxiv-671457
cs/0310013
WebTeach in practice: the entrance test to the Engineering faculty in Florence
<|reference_start|>WebTeach in practice: the entrance test to the Engineering faculty in Florence: We present the WebTeach project, comprising a web interface to a database for test management, a wiki site for the diffusion of teaching material and student forums, and a suite for the generation of multiple-choice mathematical quizzes with automatic elaboration of forms. This system has been massively tested on the entrance test to the Engineering Faculty of the University of Florence, Italy.<|reference_end|>
arxiv
@article{bagnoli2003webteach, title={WebTeach in practice: the entrance test to the Engineering faculty in Florence}, author={Franco Bagnoli, Fabio Franci, Francesco Mugelli, Andrea Sterbini}, journal={arXiv preprint arXiv:cs/0310013}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310013}, primaryClass={cs.HC cs.IR} }
bagnoli2003webteach
arxiv-671458
cs/0310014
Effective XML Representation for Spoken Language in Organisations
<|reference_start|>Effective XML Representation for Spoken Language in Organisations: Spoken language can be used to provide insights into organisational processes; unfortunately, the transcription and coding stages are very time consuming and expensive. The concept of partial transcription and coding is proposed, in which spoken language is indexed prior to any subsequent processing. The functional linguistic theory of texture is used to describe the effects of partial transcription on observational records. The standard used to encode transcript context and metadata is called CHAT, but a previous XML schema developed to implement it contains design assumptions that make it difficult to support, for example, partial transcription. This paper describes a more effective XML schema that overcomes many of these problems and is intended for use in applications that support the rapid development of spoken language deliverables.<|reference_end|>
arxiv
@article{clarke2003effective, title={Effective XML Representation for Spoken Language in Organisations}, author={Rodney J. Clarke, Philip C. Windridge and Dali Dong}, journal={arXiv preprint arXiv:cs/0310014}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310014}, primaryClass={cs.CL} }
clarke2003effective
arxiv-671459
cs/0310015
Debugging Tool for Localizing Faulty Processes in Message Passing Programs
<|reference_start|>Debugging Tool for Localizing Faulty Processes in Message Passing Programs: In message passing programs, once a process terminates with an unexpected error, the terminated process can propagate the error to the rest of the processes through communication dependencies, resulting in a program failure. Therefore, to locate faults, developers must identify the group of processes involved in the original error and the faulty processes that activate faults. This paper presents a novel debugging tool, named MPI-PreDebugger (MPI-PD), for localizing faulty processes in message passing programs. MPI-PD automatically distinguishes the original and the propagated errors by checking communication errors during program execution. If MPI-PD observes any communication errors, it backtraces communication dependencies and points out potential faulty processes in a timeline view. We also present three case studies in which MPI-PD played a key role in the debugging. From these studies, we believe that MPI-PD helps developers to locate faults and allows them to concentrate on correcting their programs.<|reference_end|>
arxiv
@article{okita2003debugging, title={Debugging Tool for Localizing Faulty Processes in Message Passing Programs}, author={Masao Okita, Fumihiko Ino and Kenichi Hagihara}, journal={arXiv preprint arXiv:cs/0310015}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310015}, primaryClass={cs.SE} }
okita2003debugging
arxiv-671460
cs/0310016
Debugging Backwards in Time
<|reference_start|>Debugging Backwards in Time: By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by going ``backwards in time,'' vastly simplifying the process of debugging. An implementation of this idea, the ``Omniscient Debugger,'' is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally, performance issues and implementation are discussed, along with possible optimizations. This paper makes three contributions of interest: the concept and technique of ``going backwards in time,'' the GUI which presents a global view of the program state and has a formal notion of ``navigation through time,'' and the integration with an event analyzer.<|reference_end|>
arxiv
@article{lewis2003debugging, title={Debugging Backwards in Time}, author={Bil Lewis}, journal={arXiv preprint arXiv:cs/0310016}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310016}, primaryClass={cs.SE} }
lewis2003debugging
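As a toy illustration of the recording idea only (not the Omniscient Debugger itself, which works on Java programs), Python's sys.settrace can capture every executed line together with a snapshot of the local variables, yielding a timeline that can be walked backwards after the run.

```python
import sys

# A toy illustration of the recording idea, not the Omniscient Debugger itself:
# capture every executed line plus a snapshot of the local variables, giving a
# timeline that can be inspected "backwards" after the run.

history = []

def recorder(frame, event, arg):
    if event == "line":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return recorder            # keep tracing inside this frame

def buggy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(recorder)
buggy(3)
sys.settrace(None)

# Step backwards in time through the recorded states.
for lineno, state in reversed(history):
    print(lineno, state)
```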
arxiv-671461
cs/0310017
Circle and sphere blending with conformal geometric algebra
<|reference_start|>Circle and sphere blending with conformal geometric algebra: Blending schemes based on circles provide smooth `fair' interpolations between series of points. Here we demonstrate a simple, robust set of algorithms for performing circle blends for a range of cases. An arbitrary level of G-continuity can be achieved by simple alterations to the underlying parameterisation. Our method exploits the computational framework provided by conformal geometric algebra. This employs a five-dimensional representation of points in space, in contrast to the four-dimensional representation typically used in projective geometry. The advantage of the conformal scheme is that straight lines and circles are treated in a single, unified framework. As a further illustration of the power of the conformal framework, the basic idea is extended to the case of sphere blending to interpolate over a surface.<|reference_end|>
arxiv
@article{doran2003circle, title={Circle and sphere blending with conformal geometric algebra}, author={Chris Doran}, journal={arXiv preprint arXiv:cs/0310017}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310017}, primaryClass={cs.CG cs.GR} }
doran2003circle
arxiv-671462
cs/0310018
The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages
<|reference_start|>The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages: This paper reports the findings of a study on the application of an on-line human-computer dialog system with natural language (chatbot) to the teaching of foreign languages. A keywords-based human-computer dialog system makes it possible for the user to chat with the computer in a natural language, i.e. in English or in German, to some extent. An experiment was therefore conducted in which this system worked online as a chat partner for users learning foreign languages. Dialogs between the users and the chatbot were collected. Findings indicate that the dialogs between the human and the computer are mostly very short, because the user finds that the responses from the computer are mostly repetitive and irrelevant to the topics and context, and that the program does not understand the language at all. From an analysis of the keyword- and pattern-matching mechanism used in this chatbot, it can be concluded that this kind of system cannot work as a teaching assistant program in foreign language learning.<|reference_end|>
arxiv
@article{jia2003the, title={The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages}, author={Jiyou Jia}, journal={arXiv preprint arXiv:cs/0310018}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310018}, primaryClass={cs.CY cs.CL} }
jia2003the
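To make the analyzed mechanism concrete, here is a minimal keyword-matching responder of the kind the study critiques; it is an illustrative toy, not the system used in the experiment. Its weakness is immediate: no context is kept, so replies repeat and ignore the topic.

```python
import random

# A minimal keyword-matching responder of the kind the study analyzes (an
# illustrative toy, not the actual system used in the experiment). All rules
# and canned answers below are invented for demonstration.

RULES = [
    (("hello", "hi"), ["Hello! How are you?"]),
    (("learn", "study"), ["Why do you want to learn that?"]),
    (("teacher",), ["Tell me more about your teacher."]),
]
DEFAULT = ["I see.", "Please go on.", "Interesting."]

def reply(utterance):
    words = utterance.lower().split()
    for keywords, answers in RULES:
        if any(k in words for k in keywords):
            return random.choice(answers)
    return random.choice(DEFAULT)

print(reply("I want to learn German"))   # keyword hit: 'learn'
print(reply("My flight was delayed"))    # no keyword: canned filler, off-topic
```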
arxiv-671463
cs/0310019
A hierarchical Algorithm to Solve the Shortest Path Problem in Valued Graphs
<|reference_start|>A hierarchical Algorithm to Solve the Shortest Path Problem in Valued Graphs: This paper details a new algorithm to solve the shortest path problem in valued graphs. Its complexity is $O(D \log v)$, where $D$ is the graph diameter and $v$ its number of vertices. This compares with the complexity of Dijkstra's algorithm, which is $O(e\log v)$, where $e$ is the number of edges of the graph. The new algorithm relies on a hierarchical representation of the graph using radix trees. The performance of this algorithm shows a major improvement over that of previously known algorithms.<|reference_end|>
arxiv
@article{koskas2003a, title={A hierarchical Algorithm to Solve the Shortest Path Problem in Valued Graphs}, author={Michel Koskas}, journal={arXiv preprint arXiv:cs/0310019}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310019}, primaryClass={cs.DS cs.DM} }
koskas2003a
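The radix-tree algorithm itself is not described in the abstract; for reference, this is the classical Dijkstra baseline, with the O(e log v) behaviour the new algorithm is compared against.

```python
import heapq

# The paper's radix-tree method is not detailed in the abstract; this is only
# the classical binary-heap Dijkstra baseline it is compared against.

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with non-negative edge weights w."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))   # {'a': 0, 'b': 2, 'c': 3}
```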
arxiv-671464
cs/0310020
Pure Prolog Execution in 21 Rules
<|reference_start|>Pure Prolog Execution in 21 Rules: A simple mathematical definition of the 4-port model for pure Prolog is given. The model combines the intuition of ports with a compact representation of execution state. Forward and backward derivation steps are possible. The model satisfies a modularity claim, making it suitable for formal reasoning.<|reference_end|>
arxiv
@article{kulas2003pure, title={Pure Prolog Execution in 21 Rules}, author={Marija Kulas}, journal={arXiv preprint arXiv:cs/0310020}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310020}, primaryClass={cs.PL cs.LO} }
kulas2003pure
arxiv-671465
cs/0310021
Fuzzy Relational Modeling of Cost and Affordability for Advanced Technology Manufacturing Environment
<|reference_start|>Fuzzy Relational Modeling of Cost and Affordability for Advanced Technology Manufacturing Environment: Relational representation of knowledge makes it possible to perform all the computations and decision making in a uniform relational way by means of special relational compositions called triangle and square products. In this paper some applications in manufacturing related to cost analysis are described. Testing fuzzy relational structures for various relational properties allows us to discover dependencies, hierarchies, similarities, and equivalences of the attributes characterizing technological processes and manufactured artifacts in their relationship to costs and performance. A brief overview of mathematical aspects of BK-relational products is given in Appendix 1 together with further references in the literature.<|reference_end|>
arxiv
@article{kohout2003fuzzy, title={Fuzzy Relational Modeling of Cost and Affordability for Advanced Technology Manufacturing Environment}, author={Ladislav J. Kohout, Eunjin Kim, Gary Zenz}, journal={arXiv preprint arXiv:cs/0310021}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310021}, primaryClass={cs.CE cs.AI math.OC} }
kohout2003fuzzy
arxiv-671466
cs/0310022
Smoothed Analysis of the Condition Numbers and Growth Factors of Matrices
<|reference_start|>Smoothed Analysis of the Condition Numbers and Growth Factors of Matrices: Let $\bar{A}$ be any matrix and let $A$ be a slight random perturbation of $\bar{A}$. We prove that it is unlikely that $A$ has large condition number. Using this result, we prove it is unlikely that $A$ has large growth factor under Gaussian elimination without pivoting. By combining these results, we bound the smoothed precision needed by Gaussian elimination without pivoting. Our results improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997).<|reference_end|>
arxiv
@article{sankar2003smoothed, title={Smoothed Analysis of the Condition Numbers and Growth Factors of Matrices}, author={Arvind Sankar, Daniel A. Spielman, Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0310022}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310022}, primaryClass={cs.NA cs.DS} }
sankar2003smoothed
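A quick numerical illustration of the phenomenon (not the paper's proof): Gaussian perturbation tends to tame the condition number of an ill-conditioned matrix. Matrix size and noise levels below are arbitrary choices.

```python
import numpy as np

# A numerical illustration, not the paper's proof: Gaussian perturbation tames
# the condition number of an ill-conditioned (here, exactly singular) matrix.

rng = np.random.default_rng(0)
n = 100
A0 = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank 1: singular
for sigma in [1e-6, 1e-3, 1e-1]:
    A = A0 + sigma * rng.standard_normal((n, n))
    print(f"sigma={sigma:.0e}  cond(A)={np.linalg.cond(A):.3e}")
```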
arxiv-671467
cs/0310023
Application of Kullback-Leibler Metric to Speech Recognition
<|reference_start|>Application of Kullback-Leibler Metric to Speech Recognition: The article discusses the application of the Kullback-Leibler divergence to the recognition of speech signals and suggests three algorithms implementing this divergence criterion: a correlation algorithm, a spectral algorithm, and a filter algorithm. The discussion covers an approach to the problem of speech variability and is illustrated with the results of experimental modeling of speech signals. The article gives a number of recommendations on the choice of appropriate model parameters and provides a comparison with some other methods of speech recognition.<|reference_end|>
arxiv
@article{bocharov2003application, title={Application of Kullback-Leibler Metric to Speech Recognition}, author={Igor Bocharov, Pavel Lukin}, journal={arXiv preprint arXiv:cs/0310023}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310023}, primaryClass={cs.AI} }
bocharov2003application
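A hedged sketch of the basic ingredient rather than the paper's full recognizers: the Kullback-Leibler divergence between two normalized (e.g. spectral) distributions, with the closer reference pattern scoring lower.

```python
import numpy as np

# The basic ingredient only, not the paper's recognizers: Kullback-Leibler
# divergence between two normalized (e.g. spectral) distributions. The eps
# term guards against zero bins.

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy "spectra": the reference pattern with the smaller divergence is selected.
spectrum = [0.1, 0.4, 0.3, 0.2]
print(kl_divergence(spectrum, [0.1, 0.5, 0.2, 0.2]))   # small: similar shape
print(kl_divergence(spectrum, [0.7, 0.1, 0.1, 0.1]))   # large: different shape
```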
arxiv-671468
cs/0310024
Availability Guarantee for Deterministic Replay Starting Points in Real-Time Systems
<|reference_start|>Availability Guarantee for Deterministic Replay Starting Points in Real-Time Systems: Cyclic debugging requires repeatable executions. As non-deterministic or real-time systems typically do not have the potential to provide this, special methods are required. One such method is replay, a process that requires monitoring of a running system and logging of the data produced by that monitoring. We shall discuss the process of preparing the replay, a part of the process that has not been very well described before.<|reference_end|>
arxiv
@article{huselius2003availability, title={Availability Guarantee for Deterministic Replay Starting Points in Real-Time Systems}, author={Joel Huselius, Henrik Thane and Daniel Sundmark}, journal={arXiv preprint arXiv:cs/0310024}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310024}, primaryClass={cs.SE} }
huselius2003availability
arxiv-671469
cs/0310025
A Monitoring Language for Run Time and Post-Mortem Behavior Analysis and Visualization
<|reference_start|>A Monitoring Language for Run Time and Post-Mortem Behavior Analysis and Visualization: UFO is a new implementation of FORMAN, a declarative monitoring language, in which rules are compiled into execution monitors that run on a virtual machine supported by the Alamo monitor architecture.<|reference_end|>
arxiv
@article{auguston2003a, title={A Monitoring Language for Run Time and Post-Mortem Behavior Analysis and Visualization}, author={Mikhail Auguston, Clinton Jeffery, Scott Underwood}, journal={arXiv preprint arXiv:cs/0310025}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310025}, primaryClass={cs.SE cs.PL} }
auguston2003a
arxiv-671470
cs/0310026
Generalized Systematic Debugging for Attribute Grammars
<|reference_start|>Generalized Systematic Debugging for Attribute Grammars: Attribute grammars (AGs) are known to be a useful formalism for semantic analysis and translation. However, debugging AGs is complex owing to inherent difficulties of AGs, such as recursive grammar structure and attribute dependency. In this paper, a new systematic method of debugging AGs is proposed. Our approach is, in principle, based on previously proposed algorithmic debugging of AGs, but is more general. This easily enables integration of various query-based systematic debugging methods, including the slice-based method. The proposed method has been implemented in Aki, a debugger for AG description. We evaluated our new approach experimentally using Aki, which demonstrates the usability of our debugging method.<|reference_end|>
arxiv
@article{sasaki2003generalized, title={Generalized Systematic Debugging for Attribute Grammars}, author={Akira Sasaki and Masataka Sassa}, journal={arXiv preprint arXiv:cs/0310026}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310026}, primaryClass={cs.SE} }
sasaki2003generalized
arxiv-671471
cs/0310027
On the continuous Fermat-Weber problem
<|reference_start|>On the continuous Fermat-Weber problem: We give the first exact algorithmic study of facility location problems that deal with finding a median for a continuum of demand points. In particular, we consider versions of the ``continuous k-median (Fermat-Weber) problem'' where the goal is to select one or more center points that minimize the average distance to a set of points in a demand region. In such problems, the average is computed as an integral over the relevant region, versus the usual discrete sum of distances. The resulting facility location problems are inherently geometric, requiring analysis techniques of computational geometry. We provide polynomial-time algorithms for various versions of the L1 1-median (Fermat-Weber) problem. We also consider the multiple-center version of the L1 k-median problem, which we prove is NP-hard for large k.<|reference_end|>
arxiv
@article{fekete2003on, title={On the continuous Fermat-Weber problem}, author={Sandor P. Fekete and Joseph S.B. Mitchell and Karin Beurer}, journal={arXiv preprint arXiv:cs/0310027}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310027}, primaryClass={cs.CG cs.DS} }
fekete2003on
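A sampling-based sketch, not the paper's exact polynomial-time algorithm: approximate the integral over the demand region by uniform samples; for L1 distances the sample 1-median is simply the coordinate-wise median.

```python
import numpy as np

# A sampling sketch of the continuous L1 1-median, not the paper's exact
# algorithm: approximate the integral over the demand region by uniform
# samples; for L1 distances the 1-median is the coordinate-wise median.

rng = np.random.default_rng(1)
# Demand region: the axis-aligned box [2,3] x [0,1] (an arbitrary example).
samples = rng.uniform([2, 0], [3, 1], size=(100000, 2))
median = np.median(samples, axis=0)   # minimizes average L1 distance to samples
print(median)                         # close to the region's center (2.5, 0.5)
```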
arxiv-671472
cs/0310028
Providing Diversity in K-Nearest Neighbor Query Results
<|reference_start|>Providing Diversity in K-Nearest Neighbor Query Results: Given a point query Q in multi-dimensional space, K-Nearest Neighbor (KNN) queries return the K closest answers in the database with respect to Q, according to a given distance metric. In this scenario, it is possible that a majority of the answers are very similar to one another, especially when the data has clusters. For a variety of applications, such homogeneous result sets may not add value to the user. In this paper, we consider the problem of providing diversity in the results of KNN queries, that is, to produce the closest result set such that each answer is sufficiently different from the rest. We first propose a user-tunable definition of diversity, and then present an algorithm, called MOTLEY, for producing a diverse result set as per this definition. Through a detailed experimental evaluation on real and synthetic data, we show that MOTLEY can produce diverse result sets by reading only a small fraction of the tuples in the database. Further, it imposes no additional overhead on the evaluation of traditional KNN queries, thereby providing a seamless interface between diversity and distance.<|reference_end|>
arxiv
@article{jain2003providing, title={Providing Diversity in K-Nearest Neighbor Query Results}, author={Anoop Jain, Parag Sarda, Jayant R. Haritsa}, journal={arXiv preprint arXiv:cs/0310028}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310028}, primaryClass={cs.DB} }
jain2003providing
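A hedged illustration of the diversity notion, not the MOTLEY algorithm itself: greedily take the nearest remaining answer that differs from every already chosen answer by at least a user-tunable threshold.

```python
import numpy as np

# An illustration of the diversity notion only, not the MOTLEY algorithm:
# greedily pick the nearest remaining answer that differs from every already
# chosen answer by at least min_div (the user-tunable diversity threshold).

def diverse_knn(data, query, k, min_div):
    order = np.argsort(np.linalg.norm(data - query, axis=1))  # nearest first
    chosen = []
    for i in order:
        if all(np.linalg.norm(data[i] - data[j]) >= min_div for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return data[chosen]

points = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [2.0, 2.0], [4.0, 0.0]])
# The two near-duplicates of the closest answer are skipped in favor of
# farther but mutually distinct points.
print(diverse_knn(points, np.array([0.0, 0.0]), k=3, min_div=1.0))
```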
arxiv-671473
cs/0310029
Optimizing Noncontiguous Accesses in MPI-IO
<|reference_start|>Optimizing Noncontiguous Accesses in MPI-IO: The I/O access patterns of many parallel applications consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many small, distinct I/O requests, however, the I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access noncontiguous data with a single I/O function call, unlike in Unix I/O. In this paper, we explain how critical this feature of MPI-IO is for high performance and how it enables implementations to perform optimizations. We first provide a classification of the different ways of expressing an application's I/O needs in MPI-IO--we classify them into four levels, called level~0 through level~3. We demonstrate that, for applications with noncontiguous access patterns, the I/O performance improves dramatically if users write their applications to make level-3 requests (noncontiguous, collective) rather than level-0 requests (Unix style). We then describe how our MPI-IO implementation, ROMIO, delivers high performance for noncontiguous requests. We explain in detail the two key optimizations ROMIO performs: data sieving for noncontiguous requests from one process and collective I/O for noncontiguous requests from multiple processes. We describe how we have implemented these optimizations portably on multiple machines and file systems, controlled their memory requirements, and also achieved high performance. We demonstrate the performance and portability with performance results for three applications--an astrophysics-application template (DIST3D), the NAS BTIO benchmark, and an unstructured code (UNSTRUC)--on five different parallel machines: HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, and SGI Origin2000.<|reference_end|>
arxiv
@article{thakur2003optimizing, title={Optimizing Noncontiguous Accesses in MPI-IO}, author={Rajeev Thakur, William Gropp, Ewing Lusk}, journal={Parallel Computing 28(1) (January 2002), pp. 83-105}, year={2003}, number={ANL/MCS-P913-1001}, archivePrefix={arXiv}, eprint={cs/0310029}, primaryClass={cs.DC} }
thakur2003optimizing
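A hedged mpi4py sketch of the level-0 versus level-3 distinction, assuming a shared binary file of doubles in which each rank owns every nprocs-th block of blocklen values; the file name and layout are illustrative, and the MPI-IO calls mirror the C interface described in the paper.

```python
from mpi4py import MPI
import numpy as np

# A hedged mpi4py sketch of the paper's level-0 vs level-3 distinction, under
# the assumption of a shared binary file of doubles ("data.bin", illustrative)
# in which each rank owns every nprocs-th block of blocklen values.

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
blocklen, nblocks = 64, 128
fh = MPI.File.Open(comm, "data.bin", MPI.MODE_RDONLY)

# Level 0 (Unix style): one small independent read per noncontiguous piece.
buf0 = np.empty(blocklen * nblocks, dtype="d")
for i in range(nblocks):
    offset = 8 * blocklen * (i * nprocs + rank)        # byte offset of this piece
    fh.Read_at(offset, buf0[i * blocklen:(i + 1) * blocklen])

# Level 3: describe the whole noncontiguous pattern with a derived datatype,
# set the file view, and issue a single collective read.
filetype = MPI.DOUBLE.Create_vector(nblocks, blocklen, blocklen * nprocs)
filetype.Commit()
fh.Set_view(8 * blocklen * rank, MPI.DOUBLE, filetype)
buf3 = np.empty(blocklen * nblocks, dtype="d")
fh.Read_all(buf3)                                      # collective, optimizable

filetype.Free()
fh.Close()
```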
arxiv-671474
cs/0310030
A Particular Bug Trap: Execution Replay Using Virtual Machines
<|reference_start|>A Particular Bug Trap: Execution Replay Using Virtual Machines: Execution-replay (ER) is well known in the literature but has been restricted to special system architectures for many years. Improved hardware resources and the maturity of virtual machine technology promise to make ER useful for a broader range of development projects. This paper describes an approach to create a practical, generic ER infrastructure for desktop PC systems using virtual machine technology. In the created VM environment, arbitrary application programs will run and be replayed unmodified; neither instrumentation nor recompilation is required.<|reference_end|>
arxiv
@article{oppitz2003a, title={A Particular Bug Trap: Execution Replay Using Virtual Machines}, author={Oliver Oppitz}, journal={arXiv preprint arXiv:cs/0310030}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310030}, primaryClass={cs.DC} }
oppitz2003a
arxiv-671475
cs/0310031
A weak definition of Delaunay triangulation
<|reference_start|>A weak definition of Delaunay triangulation: We show that the traditional criterion for a simplex to belong to the Delaunay triangulation of a point set is equivalent to a criterion which is a priori weaker. The argument is quite general; as well as the classical Euclidean case, it applies to hyperbolic and hemispherical geometries and to Edelsbrunner's weighted Delaunay triangulation. In spherical geometry, we establish a similar theorem under a genericity condition. The weak definition finds natural application in the problem of approximating a point-cloud data set with a simplicial complex.<|reference_end|>
arxiv
@article{de silva2003a, title={A weak definition of Delaunay triangulation}, author={Vin de Silva}, journal={arXiv preprint arXiv:cs/0310031}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310031}, primaryClass={cs.CG} }
de silva2003a
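For concreteness, the traditional planar criterion that the paper weakens can be stated as a predicate: a triangle belongs to the Delaunay triangulation iff its circumcircle contains no other input point. Below is the standard incircle determinant test in floating point, with no exactness guarantees in degenerate cases.

```python
import numpy as np

# The classical criterion that the paper weakens, as a concrete 2D predicate:
# a triangle (a, b, c) is Delaunay iff its circumcircle contains no other input
# point. Standard incircle determinant test; floating point, so no exactness
# guarantees in degenerate configurations.

def ccw(a, b, c):
    """Twice the signed area of triangle (a, b, c); > 0 if counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of ccw triangle (a,b,c)."""
    rows = []
    for p in (a, b, c):
        px, py = p[0] - d[0], p[1] - d[1]
        rows.append([px, py, px * px + py * py])
    return np.linalg.det(np.array(rows)) > 0

def is_delaunay_triangle(a, b, c, other_points):
    if ccw(a, b, c) < 0:          # orient counter-clockwise first
        b, c = c, b
    return not any(in_circumcircle(a, b, c, p) for p in other_points)

print(is_delaunay_triangle((0, 0), (1, 0), (0, 1), [(2, 2)]))   # True: outside
```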
arxiv-671476
cs/0310032
A combinatorial characterization of higher-dimensional orthogonal packing
<|reference_start|>A combinatorial characterization of higher-dimensional orthogonal packing: Higher-dimensional orthogonal packing problems have a wide range of practical applications, including packing, cutting, and scheduling. Previous efforts for exact algorithms have been unable to avoid structural problems that appear for instances in two- or higher-dimensional space. We present a new approach for modeling packings, using a graph-theoretical characterization of feasible packings. Our characterization allows us to deal with classes of packings that share a certain combinatorial structure, instead of having to consider one packing at a time. In addition, we can make use of elegant algorithmic properties of certain classes of graphs. This allows our characterization to be the basis for a successful branch-and-bound framework. This is the first in a series of papers describing new approaches to higher-dimensional packing.<|reference_end|>
arxiv
@article{fekete2003a, title={A combinatorial characterization of higher-dimensional orthogonal packing}, author={Sandor P. Fekete and Joerg Schepers}, journal={arXiv preprint arXiv:cs/0310032}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310032}, primaryClass={cs.DS cs.CG} }
fekete2003a
arxiv-671477
cs/0310033
A Hash of Hash Functions
<|reference_start|>A Hash of Hash Functions: In this paper, we present a general review of hash functions in a cryptographic sense. We place special emphasis on particular topics, such as the cipher block chaining message authentication code (CBC MAC) and its variants. This paper also broadens the information given in some well-known surveys by including more details on block-cipher-based hash functions and the security of different hash schemes.<|reference_end|>
arxiv
@article{ozsari2003a, title={A Hash of Hash Functions}, author={Turker Ozsari}, journal={arXiv preprint arXiv:cs/0310033}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310033}, primaryClass={cs.CR cs.CY} }
ozsari2003a
arxiv-671478
cs/0310034
Minimizing the stabbing number of matchings, trees, and triangulations
<|reference_start|>Minimizing the stabbing number of matchings, trees, and triangulations: The (axis-parallel) stabbing number of a given set of line segments is the maximum number of segments that can be intersected by any one (axis-parallel) line. This paper deals with finding perfect matchings, spanning trees, or triangulations of minimum stabbing number for a given set of points. The complexity of these problems has been a long-standing open question; in fact, it is one of the original 30 outstanding open problems in computational geometry on the list by Demaine, Mitchell, and O'Rourke. The answer we provide is negative for a number of minimum stabbing problems by showing them NP-hard by means of a general proof technique. It implies non-trivial lower bounds on the approximability. On the positive side we propose a cut-based integer programming formulation for minimizing the stabbing number of matchings and spanning trees. We obtain lower bounds (in polynomial time) from the corresponding linear programming relaxations, and show that an optimal fractional solution always contains an edge of at least constant weight. This result constitutes a crucial step towards a constant-factor approximation via an iterated rounding scheme. In computational experiments we demonstrate that our approach allows for actually solving problems with up to several hundred points optimally or near-optimally.<|reference_end|>
arxiv
@article{fekete2003minimizing, title={Minimizing the stabbing number of matchings, trees, and triangulations}, author={Sandor P. Fekete and Marco Luebbecke and Henk Meijer}, journal={arXiv preprint arXiv:cs/0310034}, year={2003}, doi={10.1007/s00454-008-9114-6}, archivePrefix={arXiv}, eprint={cs/0310034}, primaryClass={cs.CG cs.DS} }
fekete2003minimizing
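A straightforward O(n^2) evaluation of the quantity being minimized (not the paper's hardness proof or LP machinery): the axis-parallel stabbing number of a segment set, probing candidate lines through segment endpoint coordinates, where the maximum is attained.

```python
# A straightforward O(n^2) check of the quantity being minimized, not the
# paper's NP-hardness proof or LP rounding: the axis-parallel stabbing number
# of a set of segments. The maximum over all axis-parallel lines is attained
# at a line through some segment endpoint, so those are the only candidates.

def stabbing_number(segments):
    """segments: list of ((x1, y1), (x2, y2)) pairs."""
    best = 0
    for axis in (0, 1):                      # 0: vertical lines, 1: horizontal
        coords = {p[axis] for s in segments for p in s}
        for c in coords:
            hits = sum(min(s[0][axis], s[1][axis]) <= c <= max(s[0][axis], s[1][axis])
                       for s in segments)
            best = max(best, hits)
    return best

matching = [((0, 0), (2, 0)), ((1, 1), (1, 3)), ((0, 2), (3, 2))]
print(stabbing_number(matching))             # the vertical line x=1 hits all three
```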
arxiv-671479
cs/0310035
Supporting Exploratory Queries in Database Centric Web Applications
<|reference_start|>Supporting Exploratory Queries in Database Centric Web Applications: Users of database-centric Web applications, especially in the e-commerce domain, often resort to exploratory ``trial-and-error'' queries since the underlying data space is huge and unfamiliar, and there are several alternatives for search attributes in this space. For example, scouting for cheap airfares typically involves posing multiple queries, varying flight times, dates, and airport locations. Exploratory queries are problematic from the perspective of both the user and the server. For the database server, it results in a drastic reduction in effective throughput since much of the processing is duplicated in each successive query. For the client, it results in a marked increase in response times, especially when accessing the service through wireless channels. In this paper, we investigate the design of automated techniques to minimize the need for repetitive exploratory queries. Specifically, we present SAUNA, a server-side query relaxation algorithm that, given the user's initial range query and a desired cardinality for the answer set, produces a relaxed query that is expected to contain the required number of answers. The algorithm incorporates a range-query-specific distance metric that is weighted to produce relaxed queries of a desired shape (e.g. aspect ratio preserving), and utilizes multi-dimensional histograms for query size estimation. A detailed performance evaluation of SAUNA over a variety of multi-dimensional data sets indicates that its relaxed queries can significantly reduce the costs associated with exploratory query processing.<|reference_end|>
arxiv
@article{kadlag2003supporting, title={Supporting Exploratory Queries in Database Centric Web Applications}, author={Abhijit Kadlag, Amol Wanjari, Juliana Freire, Jayant R. Haritsa}, journal={arXiv preprint arXiv:cs/0310035}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310035}, primaryClass={cs.DB} }
kadlag2003supporting
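A hedged sketch of the relaxation idea, not SAUNA itself: grow the user's range symmetrically, preserving its aspect ratio, until the estimated answer count reaches the target cardinality K. The "estimator" here simply counts tuples, whereas SAUNA uses multi-dimensional histograms.

```python
import numpy as np

# A sketch of the relaxation idea, not SAUNA itself: grow the range query by
# the same factor on every axis (shape-preserving) until the estimated answer
# count reaches K. Here the "estimator" just counts tuples; SAUNA would use
# multi-dimensional histograms instead of touching the data.

def relax_query(data, low, high, K, step=0.1, max_rounds=100):
    low, high = np.asarray(low, float), np.asarray(high, float)
    center, half = (low + high) / 2, (high - low) / 2
    for _ in range(max_rounds):
        inside = np.all((data >= center - half) & (data <= center + half), axis=1)
        if inside.sum() >= K:
            return center - half, center + half
        half *= (1 + step)                  # same factor per axis
    return center - half, center + half

rng = np.random.default_rng(2)
tuples = rng.uniform(0, 10, size=(1000, 2))
print(relax_query(tuples, low=[4.9, 4.9], high=[5.1, 5.1], K=20))
```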
arxiv-671480
cs/0310036
Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time $O (m^{1.31})$
<|reference_start|>Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time $O (m^{1.31})$: We present a linear-system solver that, given an $n$-by-$n$ symmetric positive semi-definite, diagonally dominant matrix $A$ with $m$ non-zero entries and an $n$-vector $b$, produces a vector $\tilde{x}$ within relative distance $\epsilon$ of the solution to $A x = b$ in time $O (m^{1.31} \log (n \kappa_{f} (A)/\epsilon)^{O (1)})$, where $\kappa_{f} (A)$ is the ratio of the largest to smallest non-zero eigenvalue of $A$. In particular, $\log (\kappa_{f} (A)) = O (b \log n)$, where $b$ is the logarithm of the ratio of the largest to smallest non-zero entry of $A$. If the graph of $A$ has genus $m^{2\theta}$ or does not have a $K_{m^{\theta}}$ minor, then the exponent of $m$ can be improved to the minimum of $1 + 5 \theta$ and $(9/8) (1+\theta)$. The key contribution of our work is an extension of Vaidya's techniques for constructing and analyzing combinatorial preconditioners.<|reference_end|>
arxiv
@article{spielman2003solving, title={Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time $O (m^{1.31})$}, author={Daniel A. Spielman and Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0310036}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310036}, primaryClass={cs.DS cs.NA} }
spielman2003solving
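A hedged baseline rather than Vaidya-style combinatorial preconditioning: solving a small SDD system (a graph Laplacian plus the identity) with conjugate gradients and a simple Jacobi preconditioner via SciPy.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg, LinearOperator

# A baseline only, not Vaidya-style combinatorial preconditioning: solve a
# small symmetric diagonally-dominant system (graph Laplacian plus I, which is
# positive definite) with conjugate gradients and a Jacobi preconditioner.

n = 200
rng = np.random.default_rng(3)
# Random sparse non-negative weights, symmetrized; duplicates are summed.
W = csr_matrix((rng.random(3 * n), (rng.integers(0, n, 3 * n),
                                    rng.integers(0, n, 3 * n))), shape=(n, n))
W = (W + W.T) / 2
A = diags(np.asarray(W.sum(axis=1)).ravel()) - W + diags(np.ones(n))

b = rng.standard_normal(n)
M = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())  # Jacobi M ~ D^-1
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))     # info == 0 on convergence
```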
arxiv-671481
cs/0310037
Maximum dispersion and geometric maximum weight cliques
<|reference_start|>Maximum dispersion and geometric maximum weight cliques: We consider a facility location problem, where the objective is to ``disperse'' a number of facilities, i.e., select a given number k of locations from a discrete set of n candidates, such that the average distance between selected locations is maximized. In particular, we present algorithmic results for the case where vertices are represented by points in d-dimensional space, and edge weights correspond to rectilinear distances. Problems of this type have been considered before, with the best result being an approximation algorithm with performance ratio 2. For the case where k is fixed, we establish a linear-time algorithm that finds an optimal solution. For the case where k is part of the input, we present a polynomial-time approximation scheme.<|reference_end|>
arxiv
@article{fekete2003maximum, title={Maximum dispersion and geometric maximum weight cliques}, author={Sandor P. Fekete and Henk Meijer}, journal={Algorithmica, 38 (3) 2004, 501-511.}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310037}, primaryClass={cs.DS cs.CG} }
fekete2003maximum
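A brute-force reference implementation of the objective (exponential in k, unlike the paper's linear-time fixed-k algorithm): choose k of n points maximizing the average pairwise rectilinear distance.

```python
import itertools

# A brute-force reference for the objective only (exponential in k, unlike the
# paper's linear-time fixed-k algorithm): choose k of n points maximizing the
# average pairwise rectilinear (L1) distance.

def max_dispersion(points, k):
    def avg_l1(subset):
        pairs = list(itertools.combinations(subset, 2))
        return sum(abs(p[0] - q[0]) + abs(p[1] - q[1]) for p, q in pairs) / len(pairs)
    return max(itertools.combinations(points, k), key=avg_l1)

pts = [(0, 0), (1, 1), (4, 0), (0, 4), (2, 2)]
print(max_dispersion(pts, 3))   # a most spread-out triple wins
```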
arxiv-671482
cs/0310038
On Addressing Efficiency Concerns in Privacy Preserving Data Mining
<|reference_start|>On Addressing Efficiency Concerns in Privacy Preserving Data Mining: Data mining services require accurate input data for their results to be meaningful, but privacy concerns may influence users to provide spurious information. To encourage users to provide correct inputs, we recently proposed a data distortion scheme for association rule mining that simultaneously provides both privacy to the user and accuracy in the mining results. However, mining the distorted database can be orders of magnitude more time-consuming as compared to mining the original database. In this paper, we address this issue and demonstrate that by (a) generalizing the distortion process to perform symbol-specific distortion, (b) appropriately choosing the distortion parameters, and (c) applying a variety of optimizations in the reconstruction process, runtime efficiencies that are well within an order of magnitude of undistorted mining can be achieved.<|reference_end|>
arxiv
@article{agrawal2003on, title={On Addressing Efficiency Concerns in Privacy Preserving Data Mining}, author={Shipra Agrawal, Vijay Krishnan, Jayant Haritsa}, journal={arXiv preprint arXiv:cs/0310038}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310038}, primaryClass={cs.DB} }
agrawal2003on
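A hedged sketch of symbol-specific distortion for a single boolean item, not the paper's full association-rule reconstruction: keep a true 1 with probability p1, flip a true 0 to 1 with probability p0, and invert the linear relation to recover the true support in expectation.

```python
import numpy as np

# A sketch of symbol-specific distortion for one boolean item only, not the
# paper's full reconstruction machinery: a true 1 is kept with probability p1
# and a true 0 is flipped to 1 with probability p0; the true support is then
# recovered (in expectation) by inverting the distortion.

rng = np.random.default_rng(4)
n, true_support, p1, p0 = 100000, 0.30, 0.9, 0.2

truth = rng.random(n) < true_support
distorted = np.where(truth, rng.random(n) < p1, rng.random(n) < p0)

# E[observed] = s*p1 + (1 - s)*p0, so invert the linear relation:
observed = distorted.mean()
estimated_support = (observed - p0) / (p1 - p0)
print(f"observed={observed:.3f}  estimated true support={estimated_support:.3f}")
```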
arxiv-671483
cs/0310039
A Game Theoretic Framework for Incentives in P2P Systems
<|reference_start|>A Game Theoretic Framework for Incentives in P2P Systems: Peer-To-Peer (P2P) networks are self-organizing, distributed systems, with no centralized authority or infrastructure. Because of the voluntary participation, the availability of resources in a P2P system can be highly variable and unpredictable. In this paper, we use ideas from Game Theory to study the interaction of strategic and rational peers, and propose a differential service-based incentive scheme to improve the system's performance.<|reference_end|>
arxiv
@article{buragohain2003a, title={A Game Theoretic Framework for Incentives in P2P Systems}, author={Chiranjeeb Buragohain, Divyakant Agrawal, Subhash Suri}, journal={Proc. of the Third International Conference on P2P Computing (P2P2003), Linkoping Sweden, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310039}, primaryClass={cs.GT} }
buragohain2003a
arxiv-671484
cs/0310040
Automated Fault Localization Using Potential Invariants
<|reference_start|>Automated Fault Localization Using Potential Invariants: We present a general method for fault localization based on abstracting over program traces, and a tool that implements the method using Ernst's notion of potential invariants. Our experiments so far have been unsatisfactory, suggesting that further research is needed before invariants can be used to locate faults.<|reference_end|>
arxiv
@article{pytlik2003automated, title={Automated Fault Localization Using Potential Invariants}, author={Brock Pytlik, Manos Renieris, Shriram Krishnamurthi and Steven P. Reiss}, journal={arXiv preprint arXiv:cs/0310040}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310040}, primaryClass={cs.SE} }
pytlik2003automated
arxiv-671485
cs/0310041
A Dynamic Programming Algorithm for the Segmentation of Greek Texts
<|reference_start|>A Dynamic Programming Algorithm for the Segmentation of Greek Texts: In this paper we introduce a dynamic programming algorithm to perform linear text segmentation by global minimization of a segmentation cost function which consists of: (a) within-segment word similarity and (b) prior information about segment length. The evaluation of the segmentation accuracy of the algorithm on a text collection consisting of Greek texts showed that the algorithm achieves high segmentation accuracy and appears to be very innovative and promising.<|reference_end|>
arxiv
@article{fragkou2003a, title={A Dynamic Programming Algorithm for the Segmentation of Greek Texts}, author={Pavlina Fragkou}, journal={arXiv preprint arXiv:cs/0310041}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310041}, primaryClass={cs.CL cs.DL} }
fragkou2003a
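A hedged sketch of the DP scheme described in the abstract, with placeholder cost functions: boundaries are chosen to minimize within-segment dissimilarity plus a prior penalizing deviation from an expected segment length.

```python
import numpy as np

# A sketch of the DP scheme the abstract describes, with placeholder costs:
# boundaries minimize within-segment dissimilarity plus a quadratic prior on
# the deviation from an expected segment length.

def segment(sim, mean_len, weight):
    """sim[i][j]: dissimilarity cost of making words i..j one segment."""
    n = len(sim)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            cost = best[i] + sim[i][j - 1] + weight * (j - i - mean_len) ** 2
            if cost < best[j]:
                best[j], back[j] = cost, i
    bounds, j = [], n                      # recover boundaries via back-pointers
    while j > 0:
        bounds.append(j)
        j = back[j]
    return sorted(bounds)

# Toy dissimilarity matrix: cheap to keep each half of a 6-word text together.
sim = np.ones((6, 6))
sim[0:3, 0:3] = 0.1
sim[3:6, 3:6] = 0.1
print(segment(sim, mean_len=3, weight=0.05))   # expect a boundary at word 3
```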
arxiv-671486
cs/0310042
Rigorous design of tracers: an experiment for constraint logic programming
<|reference_start|>Rigorous design of tracers: an experiment for constraint logic programming: In order to design and implement tracers, one must decide what exactly to trace and how to produce this trace. On the one hand, trace designs are too often guided by implementation concerns and are not as useful as they should be. On the other hand, an interesting trace which cannot be produced efficiently, is not very useful either. In this article we propose a methodology which helps to efficiently produce accurate traces. Firstly, design a formal specification of the trace model. Secondly, derive a prototype tracer from this specification. Thirdly, analyze the produced traces. Fourthly, implement an efficient tracer. Lastly, compare the traces of the two tracers. At each step, problems can be found. In that case one has to iterate the process. We have successfully applied the proposed methodology to the design and implementation of a real tracer for constraint logic programming which is able to efficiently generate information required to build interesting graphical views of executions.<|reference_end|>
arxiv
@article{ducasse2003rigorous, title={Rigorous design of tracers: an experiment for constraint logic programming}, author={Mireille Ducasse, Ludovic Langevine, Pierre Deransart}, journal={arXiv preprint arXiv:cs/0310042}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310042}, primaryClass={cs.SE} }
ducasse2003rigorous
arxiv-671487
cs/0310043
Value-at-Risk and Expected Shortfall for Quadratic portfolio of securities with mixture of elliptic Distributed Risk Factors
<|reference_start|>Value-at-Risk and Expected Shortfall for Quadratic portfolio of securities with mixture of elliptic Distributed Risk Factors: Generally, in the financial literature, the notion of quadratic VaR is implicitly confused with the Delta-Gamma VaR, because most authors have dealt with portfolios that contain derivative instruments. In this paper, we propose to estimate the Value-at-Risk of a quadratic portfolio of securities (i.e. equities) without the Delta and Gamma greeks, when the joint log-returns follow a multivariate elliptic distribution. We reduce the estimation of the quadratic VaR of such a portfolio to the resolution of a one-dimensional integral equation. To illustrate our method, we give special attention to the mixture of normal distributions and the mixture of Student-t distributions. For a given VaR, when the joint risk factors follow an elliptic distribution, we show how to estimate the Expected Shortfall.<|reference_end|>
arxiv
@article{kamdem2003value-at-risk, title={Value-at-Risk and Expected Shortfall for Quadratic portfolio of securities with mixture of elliptic Distributed Risk Factors}, author={Jules Sadefo Kamdem}, journal={arXiv preprint arXiv:cs/0310043}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310043}, primaryClass={cs.CE math.CA} }
kamdem2003value-at-risk
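To make the quantities in the abstract above concrete, here is a hedged Monte Carlo sketch: the loss is a quadratic form in risk factors drawn from a two-component normal mixture, and VaR and Expected Shortfall are read off the empirical loss distribution. The paper itself reduces the problem to a one-dimensional integral equation; sampling is only an illustrative stand-in, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = 0.5 * np.eye(d)              # quadratic term of the portfolio (hypothetical)
b = np.full(d, 0.1)              # linear term (hypothetical)

def sample_factors(n, w=0.8):
    # two-component mixture of centered normals with different scales
    comp = rng.random(n) < w
    return np.where(comp[:, None],
                    rng.normal(0.0, 0.01, (n, d)),
                    rng.normal(0.0, 0.03, (n, d)))

x = sample_factors(500_000)
# loss is minus the quadratic portfolio value x'Ax + b'x
loss = -(np.einsum("ni,ij,nj->n", x, A, x) + x @ b)

alpha = 0.99
var = np.quantile(loss, alpha)          # Value-at-Risk at level alpha
es = loss[loss >= var].mean()           # Expected Shortfall beyond VaR
print(f"VaR_{alpha}: {var:.5f}   ES_{alpha}: {es:.5f}")
```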
arxiv-671488
cs/0310044
The Algebra of Utility Inference
<|reference_start|>The Algebra of Utility Inference: Richard Cox [1] set the axiomatic foundations of probable inference and the algebra of propositions. He showed that consistency within these axioms requires certain rules for updating belief. In this paper we use the analogy between probability and utility introduced in [2] to propose an axiomatic foundation for utility inference and the algebra of preferences. We show that consistency within these axioms requires certain rules for updating preference. We discuss a class of utility functions that stems from the axioms of utility inference and show that this class is the basic building block for any general multiattribute utility function. We use this class of utility functions together with the algebra of preferences to construct utility functions represented by logical operations on the attributes.<|reference_end|>
arxiv
@article{abbas2003the, title={The Algebra of Utility Inference}, author={Ali E. Abbas}, journal={arXiv preprint arXiv:cs/0310044}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310044}, primaryClass={cs.AI} }
abbas2003the
arxiv-671489
cs/0310045
An information theory for preferences
<|reference_start|>An information theory for preferences: Recent literature from the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity. These two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle to assign maximum entropy utility values. Maximum entropy utility interprets many of the common utility functions based on the preference information needed for their assignment, and helps assign utility values based on partial preference information. This paper reviews maximum entropy utility and introduces further results that stem from the duality between probability and utility.<|reference_end|>
arxiv
@article{abbas2003an, title={An information theory for preferences}, author={Ali E. Abbas}, journal={arXiv preprint arXiv:cs/0310045}, year={2003}, doi={10.1063/1.1751362}, archivePrefix={arXiv}, eprint={cs/0310045}, primaryClass={cs.AI} }
abbas2003an
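The probability-utility correspondence in the abstract above can be written out in a few lines. The following is a minimal restatement, assuming a normalized utility U over an interval [x_0, x^*]; the closing remark about the uniform density reflects the standard unconstrained maximum entropy solution and should be read as illustrative.

```latex
% A normalized utility U on [x_0, x^*] plays the role of a CDF; its
% derivative, the utility density u, is non-negative and integrates to one.
U(x_0) = 0, \qquad U(x^*) = 1, \qquad
u(x) = \frac{dU(x)}{dx} \ge 0, \qquad \int_{x_0}^{x^*} u(x)\, dx = 1.

% Maximum entropy utility: choose u to maximize differential entropy subject
% to whatever partial preference information is available; with no
% constraints the maximizer is the uniform density, i.e. a linear utility.
u^{\mathrm{ME}} = \arg\max_{u} \; -\int_{x_0}^{x^*} u(x) \ln u(x) \, dx .
```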
arxiv-671490
cs/0310046
Theory of One Tape Linear Time Turing Machines
<|reference_start|>Theory of One Tape Linear Time Turing Machines: A theory of one-tape (one-head) linear-time Turing machines is essentially different from its polynomial-time counterpart since these machines are closely related to finite state automata. This paper discusses structural-complexity issues of one-tape Turing machines of various types (deterministic, nondeterministic, reversible, alternating, probabilistic, counting, and quantum Turing machines) that halt in linear time, where the running time of a machine is defined as the length of any longest computation path. We explore structural properties of one-tape linear-time Turing machines and clarify how the machines' resources affect their computational patterns and power.<|reference_end|>
arxiv
@article{tadaki2003theory, title={Theory of One Tape Linear Time Turing Machines}, author={Kohtaro Tadaki, Tomoyuki Yamakami, and Jack C.H. Lin}, journal={(journal version) Theoretical Computer Science, Vol.411, pp.22-43, 2010}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310046}, primaryClass={cs.CC} }
tadaki2003theory
arxiv-671491
cs/0310047
Abductive Logic Programs with Penalization: Semantics, Complexity and Implementation
<|reference_start|>Abductive Logic Programs with Penalization: Semantics, Complexity and Implementation: Abduction, first proposed in the setting of classical logics, has been studied with growing interest in the logic programming area during the last years. In this paper we study abduction with penalization in the logic programming framework. This form of abductive reasoning, which has not been previously analyzed in logic programming, turns out to represent several relevant problems, including optimization problems, very naturally. We define a formal model for abduction with penalization over logic programs, which extends the abductive framework proposed by Kakas and Mancarella. We address knowledge representation issues, encoding a number of problems in our abductive framework. In particular, we consider some relevant problems, taken from different domains, ranging from optimization theory to diagnosis and planning; their encodings turn out to be simple and elegant in our formalism. We thoroughly analyze the computational complexity of the main problems arising in the context of abduction with penalization from logic programs. Finally, we implement a system supporting the proposed abductive framework on top of the DLV engine. To this end, we design a translation from abduction problems with penalties into logic programs with weak constraints. We prove that this approach is sound and complete.<|reference_end|>
arxiv
@article{perri2003abductive, title={Abductive Logic Programs with Penalization: Semantics, Complexity and Implementation}, author={Simona Perri, Francesco Scarcello, Nicola Leone}, journal={arXiv preprint arXiv:cs/0310047}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310047}, primaryClass={cs.AI} }
perri2003abductive
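A brute-force sketch can make the semantics above tangible: find the cheapest set of hypotheses whose addition to a definite program derives the observations. The toy rules, atoms and penalties below are invented for illustration, and the exponential subset search merely stands in for the paper's translation into logic programs with weak constraints on the DLV engine.

```python
from itertools import combinations

def consequences(facts, rules):
    # naive forward chaining over (head, body) definite rules
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in closed and all(b in closed for b in body):
                closed.add(head)
                changed = True
    return closed

def abduce(rules, hypotheses, penalties, observations):
    # exhaustive search for the minimum-penalty explanation
    best = (float("inf"), None)
    hyps = list(hypotheses)
    for r in range(len(hyps) + 1):
        for subset in combinations(hyps, r):
            cost = sum(penalties[h] for h in subset)
            if cost < best[0] and observations <= consequences(subset, rules):
                best = (cost, set(subset))
    return best

rules = [("wet_grass", ("rain",)), ("wet_grass", ("sprinkler",))]
penalties = {"rain": 2, "sprinkler": 1}
print(abduce(rules, penalties.keys(), penalties, {"wet_grass"}))
# -> (1, {'sprinkler'}): the cheaper of the two explanations wins
```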
arxiv-671492
cs/0310048
Managing Evolving Business Workflows through the Capture of Descriptive Information
<|reference_start|>Managing Evolving Business Workflows through the Capture of Descriptive Information: Business systems these days need to be agile to address the needs of a changing world. In particular, the discipline of Enterprise Application Integration requires business process management to be highly reconfigurable, with the ability to support dynamic workflows, inter-application integration and process reconfiguration. Basing EAI systems on a model-resident, or so-called description-driven, approach enables aspects of flexibility, distribution, system evolution and integration to be addressed in a domain-independent manner. Such a system, called CRISTAL, is described in this paper with particular emphasis on its application to EAI problem domains. A practical example of the CRISTAL technology in the domain of manufacturing systems, called Agilium, is described to demonstrate the principles of model-driven system evolution and integration. The approach is compared to other model-driven development approaches such as the Model-Driven Architecture of the OMG and so-called Adaptive Object Models.<|reference_end|>
arxiv
@article{gaspard2003managing, title={Managing Evolving Business Workflows through the Capture of Descriptive Information}, author={Sebastien Gaspard, Florida Estrella, Richard McClatchey and Regis Dindeleux}, journal={arXiv preprint arXiv:cs/0310048}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310048}, primaryClass={cs.SE cs.DB} }
gaspard2003managing
arxiv-671493
cs/0310049
An O(m) Algorithm for Cores Decomposition of Networks
<|reference_start|>An O(m) Algorithm for Cores Decomposition of Networks: The structure of large networks can be revealed by partitioning them into smaller parts, which are easier to handle. One such decomposition is based on $k$-cores, proposed in 1983 by Seidman. In this paper an efficient $O(m)$ algorithm for determining the cores decomposition of a given network is presented, where $m$ is the number of lines.<|reference_end|>
arxiv
@article{batagelj2003an, title={An O(m) Algorithm for Cores Decomposition of Networks}, author={V. Batagelj and M. Zaversnik}, journal={Advances in Data Analysis and Classification, 2011. Volume 5, Number 2, 129-145}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310049}, primaryClass={cs.DS cs.DM} }
batagelj2003an
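The bucket-based idea behind the $O(m)$ bound translates into a compact sketch: keep vertices ordered by current degree, and when a neighbour loses a degree, swap it to the front of its bin. The version below uses Python's built-in sort where the paper uses an $O(n)$ bin sort, so it is a faithful but not literally linear-time rendering; `adj` maps each vertex to its neighbour list.

```python
def core_numbers(adj):
    degree = {v: len(ns) for v, ns in adj.items()}
    order = sorted(degree, key=degree.get)   # the paper bin-sorts in O(n)
    pos = {v: i for i, v in enumerate(order)}
    max_deg = max(degree.values(), default=0)
    count = [0] * (max_deg + 1)
    for v in order:
        count[degree[v]] += 1
    bin_start, total = [0] * (max_deg + 1), 0
    for d in range(max_deg + 1):
        bin_start[d], total = total, total + count[d]
    core = dict(degree)
    for i in range(len(order)):
        v = order[i]                          # v is removed with core[v] final
        for u in adj[v]:
            if core[u] > core[v]:
                du, pu = core[u], pos[u]
                pw = bin_start[du]
                w = order[pw]
                order[pu], order[pw] = w, u   # swap u to the front of its bin
                pos[u], pos[w] = pw, pu
                bin_start[du] += 1            # shrink bin du; u drops one bin
                core[u] -= 1
    return core

# a triangle with a pendant vertex: the triangle is the 2-core
print(core_numbers({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
# -> {3: 1, 0: 2, 1: 2, 2: 2}
```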
arxiv-671494
cs/0310050
Feedforward Neural Networks with Diffused Nonlinear Weight Functions
<|reference_start|>Feedforward Neural Networks with Diffused Nonlinear Weight Functions: In this paper, feedforward neural networks are presented that have nonlinear weight functions based on look-up tables, which are smoothed in a regularization called diffusion. The idea behind this type of network is based on the hypothesis that a greater number of adaptive parameters per weight function might reduce the total number of weight functions needed to solve a given problem. Then, if the computational complexity of propagation through a single such weight function is kept low, the introduced neural networks might be relatively fast. A number of tests are performed, showing that the presented neural networks may indeed perform better in some cases than classic neural networks and a number of other learning machines.<|reference_end|>
arxiv
@article{rataj2003feedforward, title={Feedforward Neural Networks with Diffused Nonlinear Weight Functions}, author={Artur Rataj}, journal={arXiv preprint arXiv:cs/0310050}, year={2003}, number={IITiS-2003-02-23-1-1.06}, archivePrefix={arXiv}, eprint={cs/0310050}, primaryClass={cs.NE} }
rataj2003feedforward
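A single such weight function is easy to sketch: a small table indexed by the (clipped) input, read out with linear interpolation, plus one explicit diffusion step that pulls each entry toward its neighbours. Table size, input range and diffusion rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

class TableWeight:
    def __init__(self, bins=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.table = rng.normal(0.0, 0.1, bins)

    def __call__(self, x):
        # map x in [-1, 1] to a fractional table index, then interpolate
        t = (np.clip(x, -1.0, 1.0) + 1.0) / 2.0 * (len(self.table) - 1)
        lo = int(t)
        hi = min(lo + 1, len(self.table) - 1)
        frac = t - lo
        return (1 - frac) * self.table[lo] + frac * self.table[hi]

    def diffuse(self, rate=0.25):
        # one explicit diffusion step: a discrete Laplacian smooths the table
        padded = np.pad(self.table, 1, mode="edge")
        lap = padded[:-2] - 2 * self.table + padded[2:]
        self.table += rate * lap

w = TableWeight()
print(w(0.3))       # nonlinear weight applied to an input signal
w.diffuse()         # regularize between (or during) training steps
```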
arxiv-671495
cs/0310051
Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems
<|reference_start|>Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems: This paper has been divided into three papers: arXiv:0809.3232, arXiv:0808.4134, and arXiv:cs/0607105.<|reference_end|>
arxiv
@article{spielman2003nearly-linear, title={Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems}, author={Daniel A. Spielman and Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0310051}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310051}, primaryClass={cs.DS cs.NA} }
spielman2003nearly-linear
arxiv-671496
cs/0310052
On secret sharing for graphs
<|reference_start|>On secret sharing for graphs: In this paper we discuss how to share secrets that are graphs. So far, secret sharing schemes have been designed to work with numbers. As a first step, we propose conditions for "graph to number" conversion methods, so that the existing schemes can be used without weakening their properties. Next, we show how graph properties can be used to extend the capabilities of secret sharing schemes. This leads to a proposal for using such properties in number-based secret sharing.<|reference_end|>
arxiv
@article{kulesza2003on, title={On secret sharing for graphs}, author={Kamil Kulesza and Zbigniew Kotulski}, journal={arXiv preprint arXiv:cs/0310052}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310052}, primaryClass={cs.CR} }
kulesza2003on
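One plausible reading of the "graph to number" step is sketched below: the upper triangle of the adjacency matrix is packed into an integer, which any existing numeric scheme can then share. The n-out-of-n additive scheme modulo a prime and the bit ordering are illustrative assumptions, not the conversion conditions proposed in the paper.

```python
import secrets

def graph_to_number(n, edges):
    # read the upper triangle of the adjacency matrix as a binary integer
    bits, k = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in edges or (j, i) in edges:
                bits |= 1 << k
            k += 1
    return bits

def additive_shares(secret, parties, prime):
    # n-out-of-n additive sharing: all shares are needed to reconstruct
    shares = [secrets.randbelow(prime) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % prime)
    return shares

P = 2**127 - 1                       # a Mersenne prime, larger than any code here
secret = graph_to_number(4, {(0, 1), (1, 2), (2, 3)})
shares = additive_shares(secret, 3, P)
assert sum(shares) % P == secret     # the three shares jointly recover the graph
```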
arxiv-671497
cs/0310053
Secret Sharing for n-Colorable Graphs with Application to Public Key Cryptography
<|reference_start|>Secret Sharing for n-Colorable Graphs with Application to Public Key Cryptography: At the beginning, some results from the field of graph theory are presented. Next, we show how to share a secret that is a proper n-coloring of a graph with known structure. The graph is described and converted into a form where the colors assigned to the vertices form a number with entries from Zn. A secret sharing scheme (SSS) for the graph coloring is proposed. The proposed method is applied to the public-key cryptosystem called "Polly Cracker": in this case the graph structure is the public key, while a proper 3-coloring of the graph is the private key. We show how to share the private key. Sharing a particular n-coloring (color-to-vertex assignment) for a graph of known structure is presented next.<|reference_end|>
arxiv
@article{kulesza2003secret, title={Secret Sharing for n-Colorable Graphs with Application to Public Key Cryptography}, author={Kamil Kulesza and Zbigniew Kotulski}, journal={Proceedings of 5th NATO Regional Conference on Military Communication and Information Systems, Capturing New CIS Technologies, RCMIS 2003. 22-24 October 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310053}, primaryClass={cs.CR} }
kulesza2003secret
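The coloring-as-number idea suggests a similarly small sketch: treat the color-to-vertex assignment as a vector over Zn and share it coordinate-wise. The additive n-out-of-n variant below is an illustrative assumption standing in for the scheme actually proposed in the paper.

```python
import secrets

def share_coloring(coloring, parties, n):
    # each of the first parties-1 shares is a uniformly random Zn vector;
    # the last share is chosen so the coordinate-wise sum mod n is the secret
    shares = [[secrets.randbelow(n) for _ in coloring] for _ in range(parties - 1)]
    last = [(c - sum(col)) % n for c, col in zip(coloring, zip(*shares))]
    return shares + [last]

def reconstruct(shares, n):
    return [sum(col) % n for col in zip(*shares)]

coloring = [0, 1, 2, 0]      # a proper 3-coloring of some 4-vertex graph
shares = share_coloring(coloring, 3, 3)
assert reconstruct(shares, 3) == coloring
```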
arxiv-671498
cs/0310054
Kleene algebra with domain
<|reference_start|>Kleene algebra with domain: We propose Kleene algebra with domain (KAD), an extension of Kleene algebra with two equational axioms for a domain and a codomain operation, respectively. KAD considerably augments the expressiveness of Kleene algebra, in particular for the specification and analysis of state transition systems. We develop the basic calculus, discuss some related theories and present the most important models of KAD. We demonstrate applicability by two examples: First, an algebraic reconstruction of Noethericity and well-foundedness; second, an algebraic reconstruction of propositional Hoare logic.<|reference_end|>
arxiv
@article{desharnais2003kleene, title={Kleene algebra with domain}, author={J. Desharnais, B. Möller, G. Struth}, journal={arXiv preprint arXiv:cs/0310054}, year={2003}, archivePrefix={arXiv}, eprint={cs/0310054}, primaryClass={cs.LO} }
desharnais2003kleene
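The most important model of KAD, binary relations, makes the domain operation easy to experiment with: for a relation R, d(R) = {(a, a) : (a, b) in R for some b}. The script below spot-checks two characteristic domain laws on random relations; the choice of laws follows the KAD literature and should be read as illustrative rather than as the paper's exact axiom set.

```python
import itertools, random

U = range(4)                       # a small universe

def compose(r, s):
    # relational composition r ; s
    return {(a, c) for a, b in r for b2, c in s if b == b2}

def dom(r):
    # domain of a relation, as a sub-identity relation (a "test")
    return {(a, a) for a, _ in r}

random.seed(1)
for _ in range(1000):
    x = {p for p in itertools.product(U, U) if random.random() < 0.3}
    y = {p for p in itertools.product(U, U) if random.random() < 0.3}
    assert compose(dom(x), x) == x                            # d(x);x = x
    assert dom(compose(x, y)) == dom(compose(x, dom(y)))      # d(x;y) = d(x;d(y))
print("both domain laws hold on all sampled relations")
```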
arxiv-671499
cs/0310055
Mace4 Reference Manual and Guide
<|reference_start|>Mace4 Reference Manual and Guide: Mace4 is a program that searches for finite models of first-order formulas. For a given domain size, all instances of the formulas over the domain are constructed. The result is a set of ground clauses with equality. Then, a decision procedure based on ground equational rewriting is applied. If satisfiability is detected, one or more models are printed. Mace4 is a useful complement to first-order theorem provers, with the prover searching for proofs and Mace4 looking for countermodels, and it is useful for work on finite algebras. Mace4 performs better on equational problems than did our previous model-searching program Mace2.<|reference_end|>
arxiv
@article{mccune2003mace4, title={Mace4 Reference Manual and Guide}, author={William McCune}, journal={arXiv preprint arXiv:cs/0310055}, year={2003}, number={ANL/MCS-TM-264}, archivePrefix={arXiv}, eprint={cs/0310055}, primaryClass={cs.SC cs.MS} }
mccune2003mace4
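To illustrate the "construct all instances of the formulas over the domain" idea (and emphatically not Mace4's actual machinery, which is far cleverer), here is a toy brute-force model finder: it enumerates all binary operation tables over a 3-element domain and keeps those satisfying a single equational axiom, checking every ground instance.

```python
from itertools import product

N = 3
cells = list(product(range(N), repeat=2))   # all ground argument pairs

def satisfies(table):
    op = dict(zip(cells, table))
    # check every ground instance of the axiom x*(x*y) = y
    return all(op[(x, op[(x, y)])] == y for x, y in cells)

models = [t for t in product(range(N), repeat=len(cells)) if satisfies(t)]
print(f"{len(models)} models of x*(x*y)=y over a 3-element domain")
# e.g. x*y = (x - y) mod 3 is one such model
```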
arxiv-671500
cs/0310056
OTTER 3.3 Reference Manual
<|reference_start|>OTTER 33 Reference Manual: OTTER is a resolution-style theorem-proving program for first-order logic with equality. OTTER includes the inference rules binary resolution, hyperresolution, UR-resolution, and binary paramodulation. Some of its other abilities and features are conversion from first-order formulas to clauses, forward and back subsumption, factoring, weighting, answer literals, term ordering, forward and back demodulation, evaluable functions and predicates, Knuth-Bendix completion, and the hints strategy. OTTER is coded in ANSI C, is free, and is portable to many different kinds of computer.<|reference_end|>
arxiv
@article{mccune2003otter, title={OTTER 3.3 Reference Manual}, author={William McCune}, journal={arXiv preprint arXiv:cs/0310056}, year={2003}, number={ANL/MCS-TM-263}, archivePrefix={arXiv}, eprint={cs/0310056}, primaryClass={cs.SC cs.MS} }
mccune2003otter
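A toy propositional version of binary resolution, one of the inference rules listed above, fits in a few lines; OTTER itself works on first-order clauses with unification, subsumption, demodulation and the rest, so this is only a sketch of the rule's logic. Clauses are frozensets of (name, polarity) literals, saturated until the empty clause appears.

```python
def resolve(c1, c2):
    # all binary resolvents of two clauses on complementary literals
    out = []
    for name, pol in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

def refute(clauses):
    known = set(clauses)
    while True:
        new = set()
        for a in known:
            for b in known:
                for r in resolve(a, b):
                    if not r:
                        return True        # empty clause: unsatisfiable
                    if r not in known:
                        new.add(r)
        if not new:
            return False                   # saturated without refutation
        known |= new

# p, p -> q, and ~q are jointly unsatisfiable
cnf = [frozenset({("p", True)}),
       frozenset({("p", False), ("q", True)}),
       frozenset({("q", False)})]
print(refute(cnf))   # True
```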