corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-671901 | cs/0405085 | On the Expressive Power of First-Order Boolean Functions in PCF | <|reference_start|>On the Expressive Power of First-Order Boolean Functions in PCF: Recent results of Bucciarelli show that the semilattice of degrees of parallelism of first-order boolean functions in PCF has both infinite chains and infinite antichains. By considering a simple subclass of Sieber's sequentiality relations, we identify levels in the semilattice and derive inexpressibility results concerning functions on different levels. This allows us to further explore the structure of the semilattice of degrees of parallelism: we identify semilattices characterized by simple level properties, and show the existence of new infinite hierarchies which are in a certain sense natural with respect to the levels.<|reference_end|> | arxiv | @article{pucella2004on,
title={On the Expressive Power of First-Order Boolean Functions in PCF},
author={Riccardo Pucella, Prakash Panangaden},
journal={Theoretical Computer Science 266(1-2), pp. 543-567, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405085},
primaryClass={cs.PL}
} | pucella2004on |
arxiv-671902 | cs/0405086 | A New Dynamical Domain Decomposition Method for Parallel Molecular Dynamics Simulation on Grid | <|reference_start|>A New Dynamical Domain Decomposition Method for Parallel Molecular Dynamics Simulation on Grid: We develop a new Lagrangian material particle -- dynamical domain decomposition method (MPD^3) for large-scale parallel molecular dynamics (MD) simulation of nonstationary heterogeneous systems on a heterogeneous computing network. MPD^3 is based on Voronoi decomposition of the simulated matter. The map of Voronoi polygons is known as the Dirichlet tessellation and is used for grid generation in computational fluid dynamics. From the hydrodynamics point of view, a moving Voronoi polygon looks like a material particle (MP). MPs can exchange particles and information. To balance heterogeneous computing conditions, the MP centers should depend on timing data. We propose a simple and efficient iterative algorithm based on the definition of a timing-dependent balancing displacement of each MP center for the next simulation step. The MPD^3 program was tested in various computing environments and on various physical problems. We have demonstrated that MPD^3 is a highly adaptive decomposition algorithm for MD simulation. It was shown that a well-balanced decomposition can result from dynamical Voronoi polygon tessellation. One would expect that a similar approach can be successfully applied to other particle methods such as Monte Carlo, particle-in-cell, and smoothed-particle hydrodynamics.<|reference_end|> | arxiv | @article{zhakhovskii2004a,
title={A New Dynamical Domain Decomposition Method for Parallel Molecular
Dynamics Simulation on Grid},
author={Vasilii Zhakhovskii, Katsunobu Nishihara, Yuko Fukuda, and Shinji
Shimojo},
journal={arXiv preprint arXiv:cs/0405086},
year={2004},
number={Annual Progress Report 2003, Institute of Laser Engineering,Osaka
University (2004)},
archivePrefix={arXiv},
eprint={cs/0405086},
primaryClass={cs.DC}
} | zhakhovskii2004a |
arxiv-671903 | cs/0405087 | A Grid Information Infrastructure for Medical Image Analysis | <|reference_start|>A Grid Information Infrastructure for Medical Image Analysis: The storage and manipulation of digital images and the analysis of the information held in those images are essential requirements for next-generation medical information systems. The medical community has been exploring collaborative approaches for managing image data and exchanging knowledge, and Grid technology [1] is a promising approach to enabling distributed analysis across medical institutions and to developing new collaborative and cooperative approaches for image analysis without the necessity for clinicians to co-locate. The EU-funded MammoGrid project [2] is one example of this, and it aims to develop a Europe-wide database of mammograms to support effective co-working between healthcare professionals across the EU. The MammoGrid prototype comprises a high-quality clinician visualization workstation (for data acquisition and inspection), a DICOM-compliant interface to a set of medical services (annotation, security, image analysis, data storage and querying services) residing on a so-called Grid-box, and secure access to a network of other Grid-boxes connected through Grid middleware. One of the main deliverables of the project is a Grid-enabled infrastructure that manages federated mammogram databases across Europe. This paper outlines the MammoGrid Information Infrastructure (MII) for meta-data analysis and knowledge discovery in the medical imaging domain.<|reference_end|> | arxiv | @article{rogulin2004a,
title={A Grid Information Infrastructure for Medical Image Analysis},
  author={D Rogulin, F Estrella, T Hauer, R McClatchey and T Solomonides},
journal={arXiv preprint arXiv:cs/0405087},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405087},
primaryClass={cs.DB cs.DC}
} | rogulin2004a |
arxiv-671904 | cs/0405088 | High-Level Networking With Mobile Code And First Order AND-Continuations | <|reference_start|>High-Level Networking With Mobile Code And First Order AND-Continuations: We describe a scheme for moving living code between a set of distributed processes coordinated with unification based Linda operations, and its application to building a comprehensive Logic programming based Internet programming framework. Mobile threads are implemented by capturing first order continuations in a compact data structure sent over the network. Code is fetched lazily from its original base turned into a server as the continuation executes at the remote site. Our code migration techniques, in combination with a dynamic recompilation scheme, ensure that heavily used code moves up smoothly on a speed hierarchy while volatile dynamic code is kept in a quickly updatable form. Among the examples, we describe how to build programmable client and server components (Web servers, in particular) and mobile agents.<|reference_end|> | arxiv | @article{tarau2004high-level,
title={High-Level Networking With Mobile Code And First Order AND-Continuations},
author={Paul Tarau, Veronica Dahl},
journal={Theory and Practice of Logic Programming, vol. 1, no. 3, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405088},
primaryClass={cs.PL}
} | tarau2004high-level |
arxiv-671905 | cs/0405089 | Convex Hull of Planar H-Polyhedra | <|reference_start|>Convex Hull of Planar H-Polyhedra: Suppose $<A_i, \vec{c}_i>$ are planar (convex) H-polyhedra, that is, $A_i \in \mathbb{R}^{n_i \times 2}$ and $\vec{c}_i \in \mathbb{R}^{n_i}$. Let $P_i = \{\vec{x} \in \mathbb{R}^2 \mid A_i\vec{x} \leq \vec{c}_i \}$ and $n = n_1 + n_2$. We present an $O(n \log n)$ algorithm for calculating an H-polyhedron $<A, \vec{c}>$ with the smallest $P = \{\vec{x} \in \mathbb{R}^2 \mid A\vec{x} \leq \vec{c} \}$ such that $P_1 \cup P_2 \subseteq P$.<|reference_end|> | arxiv | @article{simon2004convex,
title={Convex Hull of Planar H-Polyhedra},
author={Axel Simon and Andy King},
journal={International Journal of Computer Mathematics, 81(4):259-271, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405089},
primaryClass={cs.CG}
} | simon2004convex |
arxiv-671906 | cs/0405090 | Propositional Defeasible Logic has Linear Complexity | <|reference_start|>Propositional Defeasible Logic has Linear Complexity: Defeasible logic is a rule-based nonmonotonic logic, with both strict and defeasible rules, and a priority relation on rules. We show that inference in the propositional form of the logic can be performed in linear time. This contrasts markedly with most other propositional nonmonotonic logics, in which inference is intractable.<|reference_end|> | arxiv | @article{maher2004propositional,
title={Propositional Defeasible Logic has Linear Complexity},
author={Michael J. Maher},
journal={Theory and Practice of Logic Programming, vol. 1, no. 6, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405090},
primaryClass={cs.AI}
} | maher2004propositional |
arxiv-671907 | cs/0405091 | CLAIRE: Combining Sets, Search And Rules To Better Express Algorithms | <|reference_start|>CLAIRE: Combining Sets, Search And Rules To Better Express Algorithms: This paper presents a programming language which includes paradigms that are usually associated with declarative languages, such as sets, rules and search, into an imperative (functional) language. Although these paradigms are separately well known and are available under various programming environments, the originality of the CLAIRE language comes from the tight integration, which yields interesting run-time performances, and from the richness of this combination, which yields new ways in which to express complex algorithmic patterns with few elegant lines. To achieve the opposite goals of a high abstraction level (conciseness and readability) and run-time performance (CLAIRE is used as a C++ preprocessor), we have developed two kinds of compiler: first, a pattern pre-processor handles iterations over both concrete and abstract sets (data types and program fragments), in a completely user-extensible manner; secondly, an inference compiler transforms a set of logical rules into a set of functions (demons that are used through procedural attachment).<|reference_end|> | arxiv | @article{caseau2004claire:,
title={CLAIRE: Combining Sets, Search And Rules To Better Express Algorithms},
author={Yves Caseau, Francois-Xavier Josset, Francois Laburthe},
journal={Theory and Practice of Logic Programming, vol. 2, no. 6, 2002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405091},
primaryClass={cs.PL}
} | caseau2004claire: |
arxiv-671908 | cs/0405092 | Learning Hybrid Algorithms for Vehicle Routing Problems | <|reference_start|>Learning Hybrid Algorithms for Vehicle Routing Problems: This paper presents a generic technique for improving hybrid algorithms through the discovery and tuning of meta-heuristics. The idea is to represent, with an algebra, a family of push/pull heuristics that are based upon inserting and removing tasks in a current solution. We then let a learning algorithm search for the best possible algebraic term, which represents a hybrid algorithm for a given set of problems and an optimization criterion. In a previous paper, we described this algebra in detail and provided a set of preliminary results demonstrating the utility of this approach, using vehicle routing with time windows (VRPTW) as a domain example. In this paper we expand upon our results, providing a more robust experimental framework and learning algorithms, and report on some new results using the standard Solomon benchmarks. In particular, we show that our learning algorithm is able to achieve results similar to those of the best published algorithms using only a fraction of the CPU time. We also show that the automatic tuning of the best hybrid combination of such techniques yields a better solution than hand tuning, with considerably less effort.<|reference_end|> | arxiv | @article{caseau2004learning,
title={Learning Hybrid Algorithms for Vehicle Routing Problems},
author={Yves Caseau, Glenn Silverstein, Francois Laburthe},
journal={Theory and Practice of Logic Programming, vol. 1, no. 6, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405092},
primaryClass={cs.PL}
} | caseau2004learning |
arxiv-671909 | cs/0405093 | Computerized Face Detection and Recognition | <|reference_start|>Computerized Face Detection and Recognition: This publication presents methods for face detection, analysis and recognition: a face pre-detection method based on fast normalized cross-correlation (fast correlation coefficient) between multiple templates; a method for detection of the exact face contour based on snakes and the Generalized Gradient Vector Flow field; a method for combining recognition algorithms based on Cumulative Match Characteristics in order to increase recognition speed and accuracy; and a face recognition method based on Principal Component Analysis of the Wavelet Packet Decomposition, which allows a PCA-based recognition method to be used with a large number of training images. For all the methods, experimental results and comparisons of speed and accuracy on large face databases are presented.<|reference_end|> | arxiv | @article{perlibakas2004computerized,
title={Computerized Face Detection and Recognition},
author={Vytautas Perlibakas},
journal={arXiv preprint arXiv:cs/0405093},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405093},
primaryClass={cs.CV}
} | perlibakas2004computerized |
arxiv-671910 | cs/0405094 | The Complexity of Maximum Matroid-Greedoid Intersection and Weighted Greedoid Maximization | <|reference_start|>The Complexity of Maximum Matroid-Greedoid Intersection and Weighted Greedoid Maximization: The maximum intersection problem for a matroid and a greedoid, given by polynomial-time oracles, is shown $NP$-hard by expressing the satisfiability of boolean formulas in 3-conjunctive normal form as such an intersection. The corresponding approximation problems are shown $NP$-hard for certain approximation performance bounds. Moreover, some natural parameterized variants of the problem are shown $W[P]$-hard. The results are in contrast with the maximum matroid-matroid intersection which is solvable in polynomial time by an old result of Edmonds. We also prove that it is $NP$-hard to approximate the weighted greedoid maximization within $2^{n^{O(1)}}$ where $n$ is the size of the domain of the greedoid. A preliminary version ``The Complexity of Maximum Matroid-Greedoid Intersection'' appeared in Proc. FCT 2001, LNCS 2138, pp. 535--539, Springer-Verlag 2001.<|reference_end|> | arxiv | @article{mielikäinen2004the,
title={The Complexity of Maximum Matroid-Greedoid Intersection and Weighted
Greedoid Maximization},
  author={Taneli Mielik\"ainen and Esko Ukkonen},
journal={arXiv preprint arXiv:cs/0405094},
year={2004},
number={Report C-2004-2, Department of Computer Science, University of
Helsinki},
archivePrefix={arXiv},
eprint={cs/0405094},
primaryClass={cs.DS}
} | mielikäinen2004the |
arxiv-671911 | cs/0405095 | Blind Detection and Compensation of Camera Lens Geometric Distortions | <|reference_start|>Blind Detection and Compensation of Camera Lens Geometric Distortions: This paper presents a blind detection and compensation technique for camera lens geometric distortions. The lens distortion introduces higher-order correlations in the frequency domain and in turn it can be detected using higher-order spectral analysis tools without assuming any specific calibration target. The existing blind lens distortion removal method only considered a single-coefficient radial distortion model. In this paper, two coefficients are considered to model approximately the geometric distortion. All the models considered have analytical closed-form inverse formulae.<|reference_end|> | arxiv | @article{ma2004blind,
title={Blind Detection and Compensation of Camera Lens Geometric Distortions},
author={Lili Ma and YangQuan Chen and Kevin L. Moore},
journal={SIAM Imaging Science, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405095},
primaryClass={cs.CV}
} | ma2004blind |
arxiv-671912 | cs/0405096 | Developing Intellectual Network Management Facilities by Means of Pattern Recognition Theory | <|reference_start|>Developing Intellectual Network Management Facilities by Means of Pattern Recognition Theory: In this paper, the question of using pattern recognition methods for network equipment state identification is considered.<|reference_end|> | arxiv | @article{chashkov2004developing,
title={Developing Intellectual Network Management Facilities by Means of
Pattern Recognition Theory},
author={Yuriy A. Chashkov},
journal={arXiv preprint arXiv:cs/0405096},
year={2004},
number={MIEM-02-06},
archivePrefix={arXiv},
eprint={cs/0405096},
primaryClass={cs.NI}
} | chashkov2004developing |
arxiv-671913 | cs/0405097 | A Coalgebraic Approach to Kleene Algebra with Tests | <|reference_start|>A Coalgebraic Approach to Kleene Algebra with Tests: Kleene algebra with tests is an extension of Kleene algebra, the algebra of regular expressions, which can be used to reason about programs. We develop a coalgebraic theory of Kleene algebra with tests, along the lines of the coalgebraic theory of regular expressions based on deterministic automata. Since the known automata-theoretic presentation of Kleene algebra with tests does not lend itself to a coalgebraic theory, we define a new interpretation of Kleene algebra with tests expressions and a corresponding automata-theoretic presentation. One outcome of the theory is a coinductive proof principle, that can be used to establish equivalence of our Kleene algebra with tests expressions.<|reference_end|> | arxiv | @article{chen2004a,
title={A Coalgebraic Approach to Kleene Algebra with Tests},
author={Hubie Chen, Riccardo Pucella},
journal={Theoretical Computer Science, 327 (1-2), 23-44 (2004)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405097},
primaryClass={cs.LO cs.PL}
} | chen2004a |
arxiv-671914 | cs/0405098 | A Logic for Reasoning about Evidence | <|reference_start|>A Logic for Reasoning about Evidence: We introduce a logic for reasoning about evidence that essentially views evidence as a function from prior beliefs (before making an observation) to posterior beliefs (after making the observation). We provide a sound and complete axiomatization for the logic, and consider the complexity of the decision problem. Although the reasoning in the logic is mainly propositional, we allow variables representing numbers and quantification over them. This expressive power seems necessary to capture important properties of evidence.<|reference_end|> | arxiv | @article{halpern2004a,
title={A Logic for Reasoning about Evidence},
author={Joseph Y. Halpern, Riccardo Pucella},
journal={Journal of Artificial Intelligence Research 26, pp. 1-34, 2006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405098},
primaryClass={cs.AI cs.LO}
} | halpern2004a |
arxiv-671915 | cs/0405099 | Web search engine based on DNS | <|reference_start|>Web search engine based on DNS: Currently, no web search engine can cover more than 60 percent of all the pages on the Internet. The update interval of most page databases is almost one month. This condition hasn't changed for many years. Coverage and recency problems have become the bottleneck of current web search engines. To solve these problems, a new system, a search engine based on DNS, is proposed in this paper. This system adopts a hierarchical distributed architecture like that of DNS, which is different from any current commercial search engine. In theory, this system can cover all the web pages on the Internet. Its update interval could even be one day. The original idea, detailed content and implementation of this system are all introduced in this paper.<|reference_end|> | arxiv | @article{liang2004web,
title={Web search engine based on DNS},
author={Wang Liang, Guo YiPing, Fang Ming},
journal={arXiv preprint arXiv:cs/0405099},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405099},
primaryClass={cs.NI cs.IR}
} | liang2004web |
arxiv-671916 | cs/0405100 | Typing constraint logic programs | <|reference_start|>Typing constraint logic programs: We present a prescriptive type system with parametric polymorphism and subtyping for constraint logic programs. The aim of this type system is to detect programming errors statically. It introduces a type discipline for constraint logic programs and modules, while maintaining the capabilities of performing the usual coercions between constraint domains, and of typing meta-programming predicates, thanks to the flexibility of subtyping. The property of subject reduction expresses the consistency of a prescriptive type system w.r.t. the execution model: if a program is "well-typed", then all derivations starting from a "well-typed" goal are again "well-typed". That property is proved w.r.t. the abstract execution model of constraint programming which proceeds by accumulation of constraints only, and w.r.t. an enriched execution model with type constraints for substitutions. We describe our implementation of the system for type checking and type inference. We report our experimental results on type checking ISO-Prolog, the (constraint) libraries of Sicstus Prolog and other Prolog programs.<|reference_end|> | arxiv | @article{fages2004typing,
title={Typing constraint logic programs},
author={Francois Fages, Emmanuel Coquery},
journal={Theory and Practice of Logic Programming, vol. 1, no. 6, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405100},
primaryClass={cs.PL}
} | fages2004typing |
arxiv-671917 | cs/0405101 | Worst-Case Groundness Analysis Using Definite Boolean Functions | <|reference_start|>Worst-Case Groundness Analysis Using Definite Boolean Functions: This note illustrates theoretical worst-case scenarios for groundness analyses obtained through abstract interpretation over the abstract domains of definite (Def) and positive (Pos) Boolean functions. For Def, an example is given for which any Def-based abstract interpretation for groundness analysis follows a chain which is exponential in the number of argument positions as well as in the number of clauses but sub-exponential in the size of the program. For Pos, we strengthen a previous result by illustrating an example for which any Pos-based abstract interpretation for groundness analysis follows a chain which is exponential in the size of the program. It remains an open problem to determine if the worst case for Def is really as bad as that for Pos.<|reference_end|> | arxiv | @article{genaim2004worst-case,
title={Worst-Case Groundness Analysis Using Definite Boolean Functions},
author={Samir Genaim, Michael Codish, Jacob M. Howe},
journal={Theory and Practice of Logic Programming, vol. 1, no. 5, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405101},
primaryClass={cs.PL}
} | genaim2004worst-case |
arxiv-671918 | cs/0405102 | A Proof Theoretic Approach to Failure in Functional Logic Programming | <|reference_start|>A Proof Theoretic Approach to Failure in Functional Logic Programming: How to extract negative information from programs is an important issue in logic programming. Here we address the problem for functional logic programs, from a proof-theoretic perspective. The starting point of our work is CRWL (Constructor based ReWriting Logic), a well established theoretical framework for functional logic programming, whose fundamental notion is that of non-strict non-deterministic function. We present a proof calculus, CRWLF, which is able to deduce negative information from CRWL-programs. In particular, CRWLF is able to prove finite failure of reduction within CRWL.<|reference_end|> | arxiv | @article{lopez-fraguas2004a,
title={A Proof Theoretic Approach to Failure in Functional Logic Programming},
author={Francisco Javier Lopez-Fraguas, Jaime Sanchez-Hernandez},
journal={Theory and Practice of Logic Programming, vol. 4, no. 1&2, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405102},
primaryClass={cs.PL}
} | lopez-fraguas2004a |
arxiv-671919 | cs/0405103 | On model checking data-independent systems with arrays without reset | <|reference_start|>On model checking data-independent systems with arrays without reset: A system is data-independent with respect to a data type X iff the operations it can perform on values of type X are restricted to just equality testing. The system may also store, input and output values of type X. We study model checking of systems which are data-independent with respect to two distinct type variables X and Y, and may in addition use arrays with indices from X and values from Y. Our main interest is the following parameterised model-checking problem: whether a given program satisfies a given temporal-logic formula for all non-empty finite instances of X and Y. Initially, we consider instead the abstraction where X and Y are infinite and where partial functions with finite domains are used to model arrays. Using a translation to data-independent systems without arrays, we show that the mu-calculus model-checking problem is decidable for these systems. From this result, we can deduce properties of all systems with finite instances of X and Y. We show that there is a procedure for the above parameterised model-checking problem of the universal fragment of the mu-calculus, such that it always terminates but may give false negatives. We also deduce that the parameterised model-checking problem of the universal disjunction-free fragment of the mu-calculus is decidable. Practical motivations for model checking data-independent systems with arrays include verification of memory and cache systems, where X is the type of memory addresses, and Y the type of storable values. As an example we verify a fault-tolerant memory interface over a set of unreliable memories.<|reference_end|> | arxiv | @article{lazic2004on,
title={On model checking data-independent systems with arrays without reset},
author={R.S. Lazic, T.C. Newcomb, A.W. Roscoe},
journal={Theory and Practice of Logic Programming, vol. 4, no. 5&6, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405103},
primaryClass={cs.LO}
} | lazic2004on |
arxiv-671920 | cs/0405104 | Knowledge Reduction and Discovery based on Demarcation Information | <|reference_start|>Knowledge Reduction and Discovery based on Demarcation Information: Knowledge reduction, which includes attribute reduction and value reduction, is an important topic in the rough set literature. It is also closely relevant to other fields, such as machine learning and data mining. In this paper, an algorithm called TWI-SQUEEZE is proposed. It can find a reduct, or an irreducible attribute subset, after two scans. Its soundness and computational complexity are given, which show that it is the fastest algorithm at present. A measure of variety is brought forward, of which algorithm TWI-SQUEEZE can be regarded as an application. The author also argues for the rightness of this measure as a measure of information, which can make it a unified measure for "differentiation", a concept that appears in the cognitive psychology literature. Value reduction is another important aspect of knowledge reduction. It is interesting that using the same algorithm we can execute a complete value reduction efficiently. The complete knowledge reduction, which results in an irreducible table, can therefore be accomplished after four scans of the table. The byproducts of the reduction are two classifiers of different styles. In this paper, various cases and models will be discussed to prove the efficiency and effectiveness of the algorithm. Some topics, such as how to integrate user preference to find a locally optimal attribute subset, will also be discussed.<|reference_end|> | arxiv | @article{he2004knowledge,
title={Knowledge Reduction and Discovery based on Demarcation Information},
author={Yuguo He},
journal={arXiv preprint arXiv:cs/0405104},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405104},
primaryClass={cs.LG cs.DB cs.IT math.IT}
} | he2004knowledge |
arxiv-671921 | cs/0405105 | Study of Pakistan Election System as Intelligent e-Election | <|reference_start|>Study of Pakistan Election System as Intelligent e-Election: The strength of the proposed election system lies in ensuring that it is transparent and impartial. Thus, while the electoral system may vary from country to country, it has to take into account the peculiarities of every society while at the same time incorporating remedies to problems prevailing in the system. The electoral process raised serious concerns regarding the independence of the Election Commission of Pakistan, the restrictions on political parties and their candidates, the misuse of state resources, some unbalanced coverage in the state media, deficiencies in the compilation of the voting register, and significant problems relating to the provision of ID cards. The holding of a general election does not in itself guarantee the restoration of democracy. The unjustified interference with electoral arrangements, as detailed above, irrespective of the alleged motivation, resulted in serious flaws being inflicted on the electoral process. Additionally, questions remain as to whether or not there will be a full transfer of power from a military to a civilian administration. This independent research study has the following modules: a login/subscription module, a candidate subscription module, a vote-casting module, an administration module, and an intelligent decision data analysis module.<|reference_end|> | arxiv | @article{nadeem2004study,
title={Study of Pakistan Election System as Intelligent e-Election},
author={Muhammad Nadeem, Javaid R. Laghari (Szabist, Karachi)},
journal={arXiv preprint arXiv:cs/0405105},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405105},
primaryClass={cs.CY}
} | nadeem2004study |
arxiv-671922 | cs/0405106 | Pruning Search Space in Defeasible Argumentation | <|reference_start|>Pruning Search Space in Defeasible Argumentation: Defeasible argumentation has experienced a considerable growth in AI in the last decade. Theoretical results have been combined with development of practical applications in AI & Law, Case-Based Reasoning and various knowledge-based systems. However, the dialectical process associated with inference is computationally expensive. This paper focuses on speeding up this inference process by pruning the involved search space. Our approach is twofold. On one hand, we identify distinguished literals for computing defeat. On the other hand, we restrict ourselves to a subset of all possible conflicting arguments by introducing dialectical constraints.<|reference_end|> | arxiv | @article{chesñevar2004pruning,
title={Pruning Search Space in Defeasible Argumentation},
  author={Carlos Iv\'an Ches\~nevar and Guillermo Ricardo Simari and Alejandro
  Javier Garc\'ia},
journal={Proc. of the Workshop on Advances and Trends in Search in
Artificial Intelligence, pp.40-47. International Conf. of the Chilean Society
in Computer Science, Santiago, Chile, 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405106},
primaryClass={cs.AI}
} | chesñevar2004pruning |
arxiv-671923 | cs/0405107 | A Framework for Combining Defeasible Argumentation with Labeled Deduction | <|reference_start|>A Framework for Combining Defeasible Argumentation with Labeled Deduction: In the last years, there has been an increasing demand of a variety of logical systems, prompted mostly by applications of logic in AI and other related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology to formalize such a kind of complex logical systems. Defeasible argumentation has proven to be a successful approach to formalizing commonsense reasoning, encompassing many other alternative formalisms for defeasible reasoning. Argument-based frameworks share some common notions (such as the concept of argument, defeater, etc.) along with a number of particular features which make it difficult to compare them with each other from a logical viewpoint. This paper introduces LDSar, a LDS for defeasible argumentation in which many important issues concerning defeasible argumentation are captured within a unified logical framework. We also discuss some logical properties and extensions that emerge from the proposed framework.<|reference_end|> | arxiv | @article{chesñevar2004a,
title={A Framework for Combining Defeasible Argumentation with Labeled
Deduction},
  author={Carlos Iv\'an Ches\~nevar and Guillermo Ricardo Simari},
journal={In "Computer Modeling of Scientific Reasoning" (C.Delrieux,
J.Legris, Eds.). Pp. 43-56, Ed. Ediuns, Argentina, 2003. ISBN 987-89281-89-6},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405107},
primaryClass={cs.AI cs.SC}
} | chesñevar2004a |
arxiv-671924 | cs/0405108 | "User Interfaces" and the Social Negotiation of Availability | <|reference_start|>"User Interfaces" and the Social Negotiation of Availability: In current presence or availability systems, the method of presenting a user's state often supposes an instantaneous notion of that state - for example, a visualization is rendered or an inference is made about the potential actions that might be consistent with a user's state. Drawing on observational research on the use of existing communication technology, we argue (as have others in the past) that determination of availability is often a joint process, and often one that takes the form of a negotiation (whether implicit or explicit). We briefly describe our current research on applying machine learning to infer degrees of conversational engagement from observed conversational behavior. Such inferences can be applied to facilitate the implicit negotiation of conversational engagement - in effect, helping users to weave together the act of contact with the act of determining availability.<|reference_end|> | arxiv | @article{aoki2004"user,
title={"User Interfaces" and the Social Negotiation of Availability},
author={Paul M. Aoki and Allison Woodruff},
journal={Workshop on Forecasting Presence and Availability, ACM SIGCHI
Conf. on Human Factors in Computing Systems (CHI 2004), Vienna, Austria, Apr.
2004.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405108},
primaryClass={cs.HC}
} | aoki2004"user |
arxiv-671925 | cs/0405109 | Conversation Analysis and the User Experience | <|reference_start|>Conversation Analysis and the User Experience: We provide two case studies in the application of ideas drawn from conversation analysis to the design of technologies that enhance the experience of human conversation. We first present a case study of the design of an electronic guidebook, focusing on how conversation analytic principles played a role in the design process. We then discuss how the guidebook project has inspired our continuing work in social, mobile audio spaces. In particular, we describe some as yet unrealized concepts for adaptive audio spaces.<|reference_end|> | arxiv | @article{woodruff2004conversation,
title={Conversation Analysis and the User Experience},
author={Allison Woodruff and Paul M. Aoki},
journal={Workshop on Exploring Experience Methods Across Disciplines, ACM
SIGCHI Conf. on Human Factors in Computing Systems (CHI 2004), Vienna,
Austria, Apr. 2004.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405109},
primaryClass={cs.HC}
} | woodruff2004conversation |
arxiv-671926 | cs/0405110 | An analysis of a bounded resource search puzzle | <|reference_start|>An analysis of a bounded resource search puzzle: Consider the commonly known puzzle: given $k$ glass balls, find an optimal algorithm to determine the lowest floor of a building of $n$ floors from which a thrown glass ball will break. This puzzle was originally posed in \cite{focs1980} and was later cited in the book \cite{algthc}. There are several internet sites that present this puzzle and its solution for the special case of $k=2$ balls. This is the first such analysis of the puzzle in its general form. Several variations of this puzzle have been studied with applications in Network Loading \cite{cgstctl}, which analyzes a case similar to a scenario where an adversary is changing the lowest floor with time. Although the algorithm specified in \cite{algthc} solves the problem, it is not an efficient algorithm. In this paper another algorithm for the same problem is analyzed. It is shown that if $m$ is the minimum number of attempts required then for $k \geq m$ we have $m = \log (n+1)$ and for $k < m$ we have $1 + \sum_{i=1}^{k}{{m-1}\choose{i}} < n \leq \sum_{i=1}^{k}{{m}\choose{i}}$<|reference_end|> | arxiv | @article{ananthraman2004an,
title={An analysis of a bounded resource search puzzle},
author={Gopal Ananthraman},
journal={arXiv preprint arXiv:cs/0405110},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405110},
primaryClass={cs.DS cs.DM}
} | ananthraman2004an |
arxiv-671927 | cs/0405111 | Attrition Defenses for a Peer-to-Peer Digital Preservation System | <|reference_start|>Attrition Defenses for a Peer-to-Peer Digital Preservation System: In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.<|reference_end|> | arxiv | @article{giuli2004attrition,
title={Attrition Defenses for a Peer-to-Peer Digital Preservation System},
author={T.J. Giuli, Petros Maniatis, Mary Baker, David S. H. Rosenthal, Mema
Roussopoulos},
journal={arXiv preprint arXiv:cs/0405111},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405111},
primaryClass={cs.CR}
} | giuli2004attrition |
arxiv-671928 | cs/0405112 | Really Straight Graph Drawings | <|reference_start|>Really Straight Graph Drawings: This paper has been withdrawn by the authors. It has been replaced by the papers: "Drawings of Planar Graphs with Few Slopes and Segments" (math/0606450) and "Graph Drawings with Few Slopes" (math/0606446).<|reference_end|> | arxiv | @article{dujmovic2004really,
title={Really Straight Graph Drawings},
author={Vida Dujmovic, Matthew Suderman, David R. Wood},
journal={arXiv preprint arXiv:cs/0405112},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405112},
primaryClass={cs.DM cs.CG}
} | dujmovic2004really |
arxiv-671929 | cs/0405113 | A proposal to design expert system for the calculations in the domain of QFT | <|reference_start|>A proposal to design expert system for the calculations in the domain of QFT: The main purposes of the paper are the following: 1) to show examples of calculations in the domain of QFT via ``derivative rules'' of an expert system; 2) to consider the advantages and disadvantages of that technology of calculation; 3) to reflect on how one would develop new physical theories, what knowledge would be useful in their investigation, and how this problem can be connected with designing an expert system.<|reference_end|> | arxiv | @article{severe2004a,
title={A proposal to design expert system for the calculations in the domain of
QFT},
author={Andrea Severe},
journal={arXiv preprint arXiv:cs/0405113},
year={2004},
archivePrefix={arXiv},
eprint={cs/0405113},
primaryClass={cs.AI}
} | severe2004a |
arxiv-671930 | cs/0406001 | Side-Information Coding with Turbo Codes and its Application to Quantum Key Distribution | <|reference_start|>Side-Information Coding with Turbo Codes and its Application to Quantum Key Distribution: Turbo coding is a powerful class of forward error correcting codes, which can achieve performances close to the Shannon limit. The turbo principle can be applied to the problem of side-information source coding, and we investigate here its application to the reconciliation problem occurring in a continuous-variable quantum key distribution protocol.<|reference_end|> | arxiv | @article{nguyen2004side-information,
title={Side-Information Coding with Turbo Codes and its Application to Quantum
Key Distribution},
author={Kim-Chi Nguyen, Gilles Van Assche and Nicolas J. Cerf},
journal={arXiv preprint arXiv:cs/0406001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406001},
primaryClass={cs.IT cs.CR math.IT quant-ph}
} | nguyen2004side-information |
arxiv-671931 | cs/0406002 | A novel approach to symbolic algebra | <|reference_start|>A novel approach to symbolic algebra: A prototype for an extensible interactive graphical term manipulation system is presented that combines pattern matching and nondeterministic evaluation to provide a convenient framework for doing tedious algebraic manipulations that so far had to be done manually in a semi-automatic fashion.<|reference_end|> | arxiv | @article{fischbacher2004a,
title={A novel approach to symbolic algebra},
author={Thomas Fischbacher},
journal={arXiv preprint arXiv:cs/0406002},
year={2004},
number={AEI-2004-043},
archivePrefix={arXiv},
eprint={cs/0406002},
primaryClass={cs.SC}
} | fischbacher2004a |
arxiv-671932 | cs/0406003 | Algorithms for weighted multi-tape automata | <|reference_start|>Algorithms for weighted multi-tape automata: This report defines various operations for weighted multi-tape automata (WMTAs) and describes algorithms that have been implemented for those operations in the WFSC toolkit. Some algorithms are new, others are known or similar to known algorithms. The latter will be recalled to make this report more complete and self-standing. We present a new approach to multi-tape intersection, meaning the intersection of a number of tapes of one WMTA with the same number of tapes of another WMTA. In our approach, multi-tape intersection is not considered as an atomic operation but rather as a sequence of more elementary ones, which facilitates its implementation. We show an example of multi-tape intersection, actually transducer intersection, that can be compiled with our approach but not with several other methods that we analysed. To show the practical relevance of our work, we include an example application: the preservation of intermediate results in transduction cascades.<|reference_end|> | arxiv | @article{kempe2004algorithms,
title={Algorithms for weighted multi-tape automata},
author={Andre Kempe (1), Franck Guingne (1,2), Florent Nicart (1,2) ((1) Xerox
Research Centre Europe, France, (2) Rouen University, France)},
journal={arXiv preprint arXiv:cs/0406003},
year={2004},
number={XRCE Research Report 2004/031},
archivePrefix={arXiv},
eprint={cs/0406003},
primaryClass={cs.CL cs.DS}
} | kempe2004algorithms |
arxiv-671933 | cs/0406004 | Application of Business Intelligence In Banks (Pakistan) | <|reference_start|>Application of Business Intelligence In Banks (Pakistan): The financial services industry is rapidly changing. Factors such as globalization, deregulation, mergers and acquisitions, competition from non-financial institutions, and technological innovation have forced companies to re-think their business. Many large companies have been using Business Intelligence (BI) computer software for some years to help them gain a competitive advantage. With the introduction of cheaper and more generalized products to the marketplace, BI is now within the reach of small and medium-sized companies. Business Intelligence is also known as knowledge management, management information systems (MIS), executive information systems (EIS) and on-line analytical processing (OLAP).<|reference_end|> | arxiv | @article{nadeem2004application,
title={Application of Business Intelligence In Banks (Pakistan)},
author={Muhammad Nadeem and Syed Ata Hussain Jaffri (Szabist)},
journal={arXiv preprint arXiv:cs/0406004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406004},
primaryClass={cs.DB}
} | nadeem2004application |
arxiv-671934 | cs/0406005 | Microreboot -- A Technique for Cheap Recovery | <|reference_start|>Microreboot -- A Technique for Cheap Recovery: A significant fraction of software failures in large-scale Internet systems are cured by rebooting, even when the exact failure causes are unknown. However, rebooting can be expensive, causing nontrivial service disruption or downtime even when clusters and failover are employed. In this work we separate process recovery from data recovery to enable microrebooting -- a fine-grain technique for surgically recovering faulty application components, without disturbing the rest of the application. We evaluate microrebooting in an Internet auction system running on an application server. Microreboots recover most of the same failures as full reboots, but do so an order of magnitude faster and result in an order of magnitude savings in lost work. This cheap form of recovery engenders a new approach to high availability: microreboots can be employed at the slightest hint of failure, prior to node failover in multi-node clusters, even when mistakes in failure detection are likely; failure and recovery can be masked from end users through transparent call-level retries; and systems can be rejuvenated by parts, without ever being shut down.<|reference_end|> | arxiv | @article{candea2004microreboot,
title={Microreboot -- A Technique for Cheap Recovery},
author={George Candea, Shinichi Kawamoto, Yuichi Fujiki, Greg Friedman,
Armando Fox},
journal={Proc. 6th Symposium on Operating Systems Design and Implementation
(OSDI), San Francisco, CA, Dec 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406005},
primaryClass={cs.OS cs.DC}
} | candea2004microreboot |
arxiv-671935 | cs/0406006 | Dichotomy Theorems for Alternation-Bounded Quantified Boolean Formulas | <|reference_start|>Dichotomy Theorems for Alternation-Bounded Quantified Boolean Formulas: In 1978, Schaefer proved his famous dichotomy theorem for generalized satisfiability problems. He defined an infinite number of propositional satisfiability problems, showed that all these problems are either in P or NP-complete, and gave a simple criterion to determine which of the two cases holds. This result is surprising in light of Ladner's theorem, which implies that there are an infinite number of complexity classes between P and NP-complete (under the assumption that P is not equal to NP). Schaefer also stated a dichotomy theorem for quantified generalized Boolean formulas, but this theorem was only recently proven by Creignou, Khanna, and Sudan, and independently by Dalmau: Determining truth of quantified Boolean formulas is either PSPACE-complete or in P. This paper looks at alternation-bounded quantified generalized Boolean formulas. In their unrestricted forms, these problems are the canonical problems complete for the levels of the polynomial hierarchy. In this paper, we prove dichotomy theorems for alternation-bounded quantified generalized Boolean formulas, by showing that these problems are either $\Sigma_i^p$-complete or in P, and we give a simple criterion to determine which of the two cases holds. This is the first result that obtains dichotomy for an infinite number of classes at once.<|reference_end|> | arxiv | @article{hemaspaandra2004dichotomy,
title={Dichotomy Theorems for Alternation-Bounded Quantified Boolean Formulas},
author={Edith Hemaspaandra},
journal={arXiv preprint arXiv:cs/0406006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406006},
primaryClass={cs.CC cs.LO}
} | hemaspaandra2004dichotomy |
arxiv-671936 | cs/0406007 | Parallel Mixed Bayesian Optimization Algorithm: A Scaleup Analysis | <|reference_start|>Parallel Mixed Bayesian Optimization Algorithm: A Scaleup Analysis: Estimation of Distribution Algorithms have been proposed as a new paradigm for evolutionary optimization. This paper focuses on the parallelization of Estimation of Distribution Algorithms. More specifically, the paper discusses how to predict performance of parallel Mixed Bayesian Optimization Algorithm (MBOA) that is based on parallel construction of Bayesian networks with decision trees. We determine the time complexity of parallel Mixed Bayesian Optimization Algorithm and compare this complexity with experimental results obtained by solving the spin glass optimization problem. The empirical results fit well the theoretical time complexity, so the scalability and efficiency of parallel Mixed Bayesian Optimization Algorithm for unknown instances of spin glass benchmarks can be predicted. Furthermore, we derive the guidelines that can be used to design effective parallel Estimation of Distribution Algorithms with the speedup proportional to the number of variables in the problem.<|reference_end|> | arxiv | @article{ocenasek2004parallel,
title={Parallel Mixed Bayesian Optimization Algorithm: A Scaleup Analysis},
author={Jiri Ocenasek, Martin Pelikan},
journal={arXiv preprint arXiv:cs/0406007},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406007},
primaryClass={cs.NE cs.DC}
} | ocenasek2004parallel |
arxiv-671937 | cs/0406008 | Image compression by rectangular wavelet transform | <|reference_start|>Image compression by rectangular wavelet transform: We study image compression by a separable wavelet basis $\big\{\psi(2^{k_1}x-i)\psi(2^{k_2}y-j),$ $\phi(x-i)\psi(2^{k_2}y-j),$ $\psi(2^{k_1}x-i)\phi(y-j),$ $\phi(x-i)\phi(y-j)\big\},$ where $k_1, k_2 \in \mathbb{Z}_+$; $i,j\in\mathbb{Z}$; and $\phi,\psi$ are elements of a standard biorthogonal wavelet basis in $L_2(\mathbb{R})$. Because $k_1\ne k_2$, the supports of the basis elements are rectangles, and the corresponding transform is known as the {\em rectangular wavelet transform}. We prove that if the one-dimensional wavelet basis has $M$ dual vanishing moments, then the rate of approximation by $N$ coefficients of the rectangular wavelet transform is $\mathcal{O}(N^{-M}\log^C N)$ for functions with a mixed derivative of order $M$ in each direction. The square wavelet transform yields the approximation rate $\mathcal{O}(N^{-M/2})$ for functions with all derivatives of total order $M$. Thus, the rectangular wavelet transform can outperform the square one if an image has a mixed derivative. We provide an experimental comparison of image compression which shows that the rectangular wavelet transform outperforms the square one.<|reference_end|> | arxiv | @article{zavadsky2004image,
title={Image compression by rectangular wavelet transform},
author={Vyacheslav Zavadsky},
journal={arXiv preprint arXiv:cs/0406008},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406008},
primaryClass={cs.CV}
} | zavadsky2004image |
arxiv-671938 | cs/0406009 | Implementation of Logical Functions in the Game of Life | <|reference_start|>Implementation of Logical Functions in the Game of Life: The Game of Life cellular automaton is a classical example of a massively parallel collision-based computing device. The automaton exhibits mobile patterns, gliders, and generators of the mobile patterns, glider guns, in its evolution. We show how to construct the basic logical operations, AND, OR, NOT in space-time configurations of the cellular automaton. Also decomposition of complicated Boolean functions is discussed. Advantages of our technique are demonstrated on an example of binary adder, realized via collision of glider streams.<|reference_end|> | arxiv | @article{rennard2004implementation,
title={Implementation of Logical Functions in the Game of Life},
author={J.-P. Rennard},
journal={Rennard, J.-P. (2002). Implementation of Logical Functions in the
Game of Life. In A. Adamatzky (Ed.), Collision-Based Computing (pp. 491-512).
London: Springer},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406009},
primaryClass={cs.CC}
} | rennard2004implementation |
arxiv-671939 | cs/0406010 | Another Proof of an Extension of a Curious Identity | <|reference_start|>Another Proof of an Extension of a Curious Identity: Based on Jensen formulae and Chebyshev polynomials of the second kind, another proof is presented for an extension of a curious binomial identity due to Z. W. Sun and K. J. Wu.<|reference_end|> | arxiv | @article{sun2004another,
title={Another Proof of an Extension of a Curious Identity},
author={Yidong Sun},
journal={arXiv preprint arXiv:cs/0406010},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406010},
primaryClass={cs.DM}
} | sun2004another |
arxiv-671940 | cs/0406011 | Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences | <|reference_start|>Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences: We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.<|reference_end|> | arxiv | @article{shalizi2004blind,
title={Blind Construction of Optimal Nonlinear Recursive Predictors for
Discrete Sequences},
author={Cosma Rohilla Shalizi and Kristina Lisa Shalizi},
journal={pp. 504--511 in Max Chickering and Joseph Halpern (eds.),
_Uncertainty in Artificial Intelligence: Proceedings of the Twentieth
Conference_ (2004)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406011},
primaryClass={cs.LG math.ST nlin.CD physics.data-an stat.TH}
} | shalizi2004blind |
arxiv-671941 | cs/0406012 | Secure Prolog-Based Mobile Code | <|reference_start|>Secure Prolog-Based Mobile Code: LogicWeb mobile code consists of Prolog-like rules embedded in Web pages, thereby adding logic programming behaviour to those pages. Since LogicWeb programs are downloaded from foreign hosts and executed locally, there is a need to protect the client from buggy or malicious code. A security model is crucial for making LogicWeb mobile code safe to execute. This paper presents such a model, which supports programs of varying trust levels by using different resource access policies. The implementation of the model derives from an extended operational semantics for the LogicWeb language, which provides a precise meaning of safety.<|reference_end|> | arxiv | @article{loke2004secure,
title={Secure Prolog-Based Mobile Code},
author={Seng Wai Loke, Andrew Davison},
journal={Theory and Practice of Logic Programming, vol. 1, no. 3, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406012},
primaryClass={cs.PL}
} | loke2004secure |
arxiv-671942 | cs/0406013 | Optimization of Bound Disjunctive Queries with Constraints | <|reference_start|>Optimization of Bound Disjunctive Queries with Constraints: "To Appear in Theory and Practice of Logic Programming (TPLP)" This paper presents a technique for the optimization of bound queries over disjunctive deductive databases with constraints. The proposed approach is an extension of the well-known Magic-Set technique and is well-suited for being integrated in current bottom-up (stable) model inference engines. More specifically, it is based on the exploitation of binding propagation techniques which reduce the size of the data relevant to answer the query and, consequently, reduces both the complexity of computing a single model and the number of models to be considered. The motivation of this work stems from the observation that traditional binding propagation optimization techniques for bottom-up model generator systems, simulating the goal driven evaluation of top-down engines, are only suitable for positive (disjunctive) queries, while hard problems are expressed using unstratified negation. The main contribution of the paper consists in the extension of a previous technique, defined for positive disjunctive queries, to queries containing both disjunctive heads and constraints (a simple and expressive form of unstratified negation). As the usual way of expressing declaratively hard problems is based on the guess-and-check technique, where the guess part is expressed by means of disjunctive rules and the check part is expressed by means of constraints, the technique proposed here is highly relevant for the optimization of queries expressing hard problems. The value of the technique has been proved by several experiments.<|reference_end|> | arxiv | @article{greco2004optimization,
title={Optimization of Bound Disjunctive Queries with Constraints},
author={G. Greco, S. Greco, I. Trubtsyna, E. Zumpano},
journal={arXiv preprint arXiv:cs/0406013},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406013},
primaryClass={cs.LO}
} | greco2004optimization |
arxiv-671943 | cs/0406014 | O(1) Reversible Tree Navigation Without Cycles | <|reference_start|>O(1) Reversible Tree Navigation Without Cycles: Imperative programmers often use cyclically linked trees in order to achieve O(1) navigation time to neighbours. Some logic programmers believe that cyclic terms are necessary to achieve the same in logic-based languages. An old but little-known technique provides O(1) time and space navigation without cyclic links, in the form of reversible predicates. A small modification provides O(1) amortised time and space editing.<|reference_end|> | arxiv | @article{o'keefe2004o(1),
title={O(1) Reversible Tree Navigation Without Cycles},
author={Richard A. O'Keefe},
journal={Theory and Practice of Logic Programming, vol. 1, no. 5, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406014},
primaryClass={cs.PL}
} | o'keefe2004o(1) |
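The constant-time reversible navigation described in the abstract above can be illustrated, outside Prolog, with a zipper: the path back to the root is kept as an explicit context, so every move is O(1) and invertible without cyclic links. This is a hedged sketch of the general technique, not O'Keefe's reversible predicates; all names below are invented for the example.

```python
# Minimal zipper over binary trees: each step down records how to rebuild
# the parent, so moving up or down is O(1), reversible, and needs no
# cyclic parent pointers. Illustrative sketch only.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def down_left(tree, context):
    # Context is a cons-style chain: (step, rest). Prepending is O(1).
    return tree.left, (("L", tree.value, tree.right), context)

def down_right(tree, context):
    return tree.right, (("R", tree.value, tree.left), context)

def up(tree, context):
    # Undo the last step in O(1) by rebuilding the parent node.
    (side, value, sibling), rest = context
    if side == "L":
        return Node(value, tree, sibling), rest
    return Node(value, sibling, tree), rest

t = Node(1, Node(2), Node(3))
focus, ctx = down_left(t, None)
assert focus.value == 2
focus, ctx = up(focus, ctx)
assert focus.value == 1 and ctx is None
```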
arxiv-671944 | cs/0406015 | Zipf's law and the creation of musical context | <|reference_start|>Zipf's law and the creation of musical context: This article discusses the extension of the notion of context from linguistics to the domain of music. In language, the statistical regularity known as Zipf's law - which concerns the frequency of usage of different words - has been quantitatively related to the process of text generation. This connection is established by Simon's model, on the basis of a few assumptions regarding the accompanying creation of context. Here, it is shown that the statistics of note usage in musical compositions are compatible with the predictions of Simon's model. This result, which gives objective support to the conceptual likeness of context in language and music, is obtained through automatic analysis of the digital versions of several compositions. As a by-product, a quantitative measure of context definiteness is introduced and used to compare tonal and atonal works.<|reference_end|> | arxiv | @article{zanette2004zipf's,
title={Zipf's law and the creation of musical context},
author={Damian H. Zanette},
journal={arXiv preprint arXiv:cs/0406015},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406015},
primaryClass={cs.CL cond-mat.stat-mech}
} | zanette2004zipf's |
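The rank-frequency statistic at the centre of the abstract above is straightforward to compute for any symbol sequence, words and notes alike. A hedged sketch follows; the least-squares slope on log-log axes is only a crude estimator of the Zipf exponent and is used here purely for illustration.

```python
# Sketch: rank-frequency statistics for a symbol sequence (words or notes).
# A Zipf-like sequence shows an approximately linear log(rank)-log(freq)
# relation; the fitted slope is a crude estimate of the Zipf exponent.
from collections import Counter
import math

def zipf_slope(tokens):
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # close to -1 for classic Zipf behaviour

print(zipf_slope("the quick brown fox jumps over the lazy dog the fox".split()))
```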
arxiv-671945 | cs/0406016 | Schema-based Scheduling of Event Processors and Buffer Minimization for Queries on Structured Data Streams | <|reference_start|>Schema-based Scheduling of Event Processors and Buffer Minimization for Queries on Structured Data Streams: We introduce an extension of the XQuery language, FluX, that supports event-based query processing and the conscious handling of main memory buffers. Purely event-based queries of this language can be executed on streaming XML data in a very direct way. We then develop an algorithm that allows to efficiently rewrite XQueries into the event-based FluX language. This algorithm uses order constraints from a DTD to schedule event handlers and to thus minimize the amount of buffering required for evaluating a query. We discuss the various technical aspects of query optimization and query evaluation within our framework. This is complemented with an experimental evaluation of our approach.<|reference_end|> | arxiv | @article{koch2004schema-based,
title={Schema-based Scheduling of Event Processors and Buffer Minimization for
Queries on Structured Data Streams},
author={Christoph Koch, Stefanie Scherzinger, Nicole Schweikardt, Bernhard
Stegmaier},
journal={arXiv preprint arXiv:cs/0406016},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406016},
primaryClass={cs.DB}
} | koch2004schema-based |
arxiv-671946 | cs/0406017 | Using Self-Organising Mappings to Learn the Structure of Data Manifolds | <|reference_start|>Using Self-Organising Mappings to Learn the Structure of Data Manifolds: In this paper it is shown how to map a data manifold into a simpler form by progressively discarding small degrees of freedom. This is the key to self-organising data fusion, where the raw data is embedded in a very high-dimensional space (e.g. the pixel values of one or more images), and the requirement is to isolate the important degrees of freedom which lie on a low-dimensional manifold. A useful advantage of the approach used in this paper is that the computations are arranged as a feed-forward processing chain, where all the details of the processing in each stage of the chain are learnt by self-organisation. This approach is demonstrated using hierarchically correlated data, which causes the processing chain to split the data into separate processing channels, and then to progressively merge these channels wherever they are correlated with each other. This is the key to self-organising data fusion.<|reference_end|> | arxiv | @article{luttrell2004using,
title={Using Self-Organising Mappings to Learn the Structure of Data Manifolds},
author={Stephen Luttrell},
journal={arXiv preprint arXiv:cs/0406017},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406017},
primaryClass={cs.NE cs.CV}
} | luttrell2004using |
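For readers unfamiliar with the underlying mechanism, a minimal one-dimensional Kohonen-style self-organising map is sketched below. It shows only the generic best-matching-unit update, not the feed-forward processing chain or the dimension-discarding scheme of the paper; all parameters are illustrative.

```python
# Minimal 1-D self-organising map (Kohonen-style), for illustration only --
# a generic SOM update rule, not the paper's specific feed-forward chain.
import random

def train_som(data, n_units=10, epochs=50, lr=0.3, radius=2):
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Best-matching unit by squared Euclidean distance.
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            # Pull the BMU and its lattice neighbours towards the sample.
            for i in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
                h = lr * (1 - abs(i - bmu) / (radius + 1))
                units[i] = [u + h * (v - u) for u, v in zip(units[i], x)]
    return units

data = [[random.random(), random.random()] for _ in range(200)]
print(train_som(data, n_units=5)[0])
```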
arxiv-671947 | cs/0406018 | Effects of wireless computing technology | <|reference_start|>Effects of wireless computing technology: Wireless technology can provide many benefits to computing including faster response to queries, reduced time spent on paperwork, increased online time for users, just-in-time and real-time control, and tighter communications between clients and hosts. Wireless computing is governed by two general forces: Technology, which provides a set of basic building blocks, and User Applications, which determine a set of operations that must be carried out efficiently on demand. This paper summarizes technological changes that are underway and describes their impact on wireless computing development and implementation. It also describes the applications that influence the development and implementation of wireless computing and shows what current systems offer.<|reference_end|> | arxiv | @article{eremin2004effects,
title={Effects of wireless computing technology},
author={A. A. Eremin},
journal={arXiv preprint arXiv:cs/0406018},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406018},
primaryClass={cs.OH}
} | eremin2004effects |
arxiv-671948 | cs/0406019 | Providing Service Guarantees in High-Speed Switching Systems with Feedback Output Queuing | <|reference_start|>Providing Service Guarantees in High-Speed Switching Systems with Feedback Output Queuing: We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops indiscriminate of flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.<|reference_end|> | arxiv | @article{firoiu2004providing,
title={Providing Service Guarantees in High-Speed Switching Systems with
Feedback Output Queuing},
author={Victor Firoiu, Xiaohui Zhang, Emre Gunduzhan, Nicolas Christin},
journal={arXiv preprint arXiv:cs/0406019},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406019},
primaryClass={cs.NI}
} | firoiu2004providing |
arxiv-671949 | cs/0406020 | Algorithms for Drawing Media | <|reference_start|>Algorithms for Drawing Media: We describe algorithms for drawing media, systems of states, tokens and actions that have state transition graphs in the form of partial cubes. Our algorithms are based on two principles: embedding the state transition graph in a low-dimensional integer lattice and projecting the lattice onto the plane, or drawing the medium as a planar graph with centrally symmetric faces.<|reference_end|> | arxiv | @article{eppstein2004algorithms,
title={Algorithms for Drawing Media},
author={David Eppstein},
journal={arXiv preprint arXiv:cs/0406020},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406020},
primaryClass={cs.DS cs.CG}
} | eppstein2004algorithms |
arxiv-671950 | cs/0406021 | A direct formulation for sparse PCA using semidefinite programming | <|reference_start|>A direct formulation for sparse PCA using semidefinite programming: We examine the problem of approximating, in the Frobenius-norm sense, a positive, semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the SDP arising in the direct sparse PCA method.<|reference_end|> | arxiv | @article{d'aspremont2004a,
title={A direct formulation for sparse PCA using semidefinite programming},
author={Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, Gert R.
G. Lanckriet},
journal={arXiv preprint arXiv:cs/0406021},
year={2004},
number={Working paper: UCB//CSD-04-1330},
archivePrefix={arXiv},
eprint={cs/0406021},
primaryClass={cs.CE}
} | d'aspremont2004a |
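The semidefinite relaxation described above can be sketched with a modern modelling tool: maximise Tr(Sigma X) over positive semidefinite matrices with unit trace, with an elementwise l1 penalty standing in for the cardinality bound. This assumes cvxpy with an SDP-capable solver such as SCS, a stand-in for, not a reproduction of, the paper's Nesterov smoothing scheme.

```python
# Hedged sketch of the SDP relaxation for sparse PCA: maximise Tr(Sigma X)
# subject to Tr(X) = 1 and X PSD, with an elementwise l1 penalty in place
# of the cardinality constraint. Requires cvxpy and an SDP solver (e.g. SCS).
import numpy as np
import cvxpy as cp

def sparse_pca_sdp(Sigma, rho=0.5):
    n = Sigma.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    objective = cp.Maximize(cp.trace(Sigma @ X) - rho * cp.sum(cp.abs(X)))
    cp.Problem(objective, constraints).solve()
    # The leading eigenvector of the optimal X approximates a sparse loading.
    w, v = np.linalg.eigh(X.value)
    return v[:, -1]

Sigma = np.cov(np.random.randn(50, 4), rowvar=False)
print(sparse_pca_sdp(Sigma))
```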
arxiv-671951 | cs/0406022 | Uncovering the epistemological and ontological assumptions of software designers | <|reference_start|>Uncovering the epistemological and ontological assumptions of software designers: The ontological and epistemological positions adopted by information systems design methods are incommensurable when pushed to their extremes. Information systems research has therefore tended to focus on the similarities between different positions, usually in search of a single, unifying position. However, by focusing on the similarities, the clarity of argument provided by any one philosophical position is necessarily diminished. Consequently, researchers often treat the philosophical foundations of design methods as being of only minor importance. In this paper, we have deliberately chosen to focus on the differences between various philosophical positions. From this focus, we believe we can offer a clearer understanding of the empirical behaviour of software as viewed from particular philosophical positions. Since the empirical evidence does not favour any single position, we conclude by arguing for the validity of ad hoc approaches to software design which we believe provides a stronger and more theoretically grounded approach to software design.<|reference_end|> | arxiv | @article{king2004uncovering,
title={Uncovering the epistemological and ontological assumptions of software
designers},
author={David King and Chris Kimble},
journal={Proceedings 9e colloque de l'AIM, Evry, France, May 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406022},
primaryClass={cs.SE cs.GL}
} | king2004uncovering |
arxiv-671952 | cs/0406023 | Notions of Equivalence in Software Design | <|reference_start|>Notions of Equivalence in Software Design: Design methods in information systems frequently create software descriptions using formal languages. Nonetheless, most software designers prefer to describe software using natural languages. This distinction is not simply a matter of convenience. Natural languages are not the same as formal languages; in particular, natural languages do not follow the notions of equivalence used by formal languages. In this paper, we show both the existence and coexistence of different notions of equivalence by extending the notion of oracles used in formal languages. This allows distinctions to be made between the trustworthy oracles assumed by formal languages and the untrustworthy oracles used by natural languages. By examining the notion of equivalence, we hope to encourage designers of software to rethink the place of ambiguity in software design.<|reference_end|> | arxiv | @article{king2004notions,
title={Notions of Equivalence in Software Design},
author={David King and Chris Kimble},
journal={Proceedings 9e colloque de l'AIM, Evry, France, May 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406023},
primaryClass={cs.SE}
} | king2004notions |
arxiv-671953 | cs/0406024 | Layout of Graphs with Bounded Tree-Width | <|reference_start|>Layout of Graphs with Bounded Tree-Width: A \emph{queue layout} of a graph consists of a total order of the vertices, and a partition of the edges into \emph{queues}, such that no two edges in the same queue are nested. The minimum number of queues in a queue layout of a graph is its \emph{queue-number}. A \emph{three-dimensional (straight-line grid) drawing} of a graph represents the vertices by points in $\mathbb{Z}^3$ and the edges by non-crossing line-segments. This paper contributes three main results: (1) It is proved that the minimum volume of a certain type of three-dimensional drawing of a graph $G$ is closely related to the queue-number of $G$. In particular, if $G$ is an $n$-vertex member of a proper minor-closed family of graphs (such as a planar graph), then $G$ has a $O(1)\times O(1)\times O(n)$ drawing if and only if $G$ has O(1) queue-number. (2) It is proved that queue-number is bounded by tree-width, thus resolving an open problem due to Ganley and Heath (2001), and disproving a conjecture of Pemmaraju (1992). This result provides renewed hope for the positive resolution of a number of open problems in the theory of queue layouts. (3) It is proved that graphs of bounded tree-width have three-dimensional drawings with O(n) volume. This is the most general family of graphs known to admit three-dimensional drawings with O(n) volume. The proofs depend upon our results regarding \emph{track layouts} and \emph{tree-partitions} of graphs, which may be of independent interest.<|reference_end|> | arxiv | @article{dujmovic2004layout,
title={Layout of Graphs with Bounded Tree-Width},
author={Vida Dujmovic, Pat Morin, David R. Wood},
journal={SIAM J. Computing 34.3:553-579, 2005},
year={2004},
doi={10.1137/S0097539702416141},
archivePrefix={arXiv},
eprint={cs/0406024},
primaryClass={cs.DM cs.CG}
} | dujmovic2004layout |
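The defining condition behind queue-number is simple to state operationally: under a fixed vertex order, two edges nest if one strictly encloses the other, and a set of edges forms a single queue exactly when no two of its edges nest. A small sketch of this check follows; it illustrates the definition only, not the paper's constructions.

```python
# Sketch of the defining condition for queue layouts: under a fixed vertex
# order, two edges nest if one strictly encloses the other. A set of edges
# forms a single queue iff no two of its edges nest.
def nests(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return (a < c and d < b) or (c < a and b < d)

def is_single_queue(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    es = [(pos[u], pos[v]) for u, v in edges]
    return not any(nests(es[i], es[j])
                   for i in range(len(es)) for j in range(i + 1, len(es)))

cycle = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(is_single_queue([0, 1, 2, 3], cycle))  # False: (0,3) encloses (1,2)
print(is_single_queue([0, 1, 3, 2], cycle))  # True: this order needs one queue
```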
arxiv-671954 | cs/0406025 | Directional Consistency for Continuous Numerical Constraints | <|reference_start|>Directional Consistency for Continuous Numerical Constraints: Bounds consistency is usually enforced on continuous constraints by first decomposing them into binary and ternary primitives. This decomposition has long been shown to drastically slow down the computation of solutions. To tackle this, Benhamou et al. have introduced an algorithm that avoids formally decomposing constraints. Its better efficiency compared to the former method has already been experimentally demonstrated. It is shown here that their algorithm implements a strategy to enforce on a continuous constraint a consistency akin to Directional Bounds Consistency as introduced by Dechter and Pearl for discrete problems. The algorithm is analyzed in this framework, and compared with algorithms that enforce bounds consistency. These theoretical results are eventually contrasted with new experimental results on standard benchmarks from the interval constraint community.<|reference_end|> | arxiv | @article{goualard2004directional,
title={Directional Consistency for Continuous Numerical Constraints},
author={Frederic Goualard and Laurent Granvilliers},
journal={arXiv preprint arXiv:cs/0406025},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406025},
primaryClass={cs.AI cs.MS}
} | goualard2004directional |
arxiv-671955 | cs/0406026 | Improving Prolog Programs: Refactoring for Prolog | <|reference_start|>Improving Prolog Programs: Refactoring for Prolog: Refactoring is an established technique from the OO-community to restructure code: it aims at improving software readability, maintainability and extensibility. Although refactoring is not tied to the OO-paradigm in particular, its ideas have not been applied to Logic Programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO-paradigm, while others have been specifically designed for Prolog. Also the discrepancy between intended and operational semantics in Prolog is addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed and the experience with applying ViPReSS to a large Prolog legacy system is reported. Our main conclusion is that refactoring is not only a viable technique in Prolog but also a rather desirable one.<|reference_end|> | arxiv | @article{schrijvers2004improving,
title={Improving Prolog Programs: Refactoring for Prolog},
author={Tom Schrijvers, Alexander Serebrenik},
journal={arXiv preprint arXiv:cs/0406026},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406026},
primaryClass={cs.SE cs.PL}
} | schrijvers2004improving |
arxiv-671956 | cs/0406027 | Fluctuation in Peer-to-Peer Networks: Mitigating Its Effect on DHT Performance | <|reference_start|>Fluctuation in Peer-to-Peer Networks: Mitigating Its Effect on DHT Performance: Due to the transient nature of peers, any Peer-to-Peer network is in peril of falling apart if peers do not receive routing table updates periodically. To this end, maintenance, which affects every peer, ensures connectedness and sustained data operation performance. However, a high rate of change in peer population usually incurs lots of network maintenance messages and can severely degrade overall performance. We discuss three methods to tackle and mitigate the effect of peer fluctuation on a tree-based distributed hash table.<|reference_end|> | arxiv | @article{fahrenholtz2004fluctuation,
title={Fluctuation in Peer-to-Peer Networks: Mitigating Its Effect on DHT
Performance},
author={Dietrich Fahrenholtz and Volker Turau},
journal={arXiv preprint arXiv:cs/0406027},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406027},
primaryClass={cs.NI cs.DC}
} | fahrenholtz2004fluctuation |
arxiv-671957 | cs/0406028 | Ramsey-type theorems for metric spaces with applications to online problems | <|reference_start|>Ramsey-type theorems for metric spaces with applications to online problems: A nearly logarithmic lower bound on the randomized competitive ratio for the metrical task systems problem is presented. This implies a similar lower bound for the extensively studied k-server problem. The proof is based on Ramsey-type theorems for metric spaces, that state that every metric space contains a large subspace which is approximately a hierarchically well-separated tree (and in particular an ultrametric). These Ramsey-type theorems may be of independent interest.<|reference_end|> | arxiv | @article{bartal2004ramsey-type,
title={Ramsey-type theorems for metric spaces with applications to online
problems},
author={Yair Bartal, Bela Bollobas, Manor Mendel},
journal={J. Comput. System Sci. 72(5):890-921, 2006},
year={2004},
doi={10.1016/j.jcss.2005.05.008},
archivePrefix={arXiv},
eprint={cs/0406028},
primaryClass={cs.DS}
} | bartal2004ramsey-type |
arxiv-671958 | cs/0406029 | Subset Queries in Relational Databases | <|reference_start|>Subset Queries in Relational Databases: In this paper, we motivated the need for relational database systems to support subset query processing. We defined new operators in relational algebra, and new constructs in SQL for expressing subset queries. We also illustrated the applicability of subset queries through different examples expressed using extended SQL statements and relational algebra expressions. Our aim is to show the utility of subset queries for next generation applications.<|reference_end|> | arxiv | @article{valluri2004subset,
title={Subset Queries in Relational Databases},
author={Satyanarayana R Valluri and Kamalakar Karlapalem},
journal={arXiv preprint arXiv:cs/0406029},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406029},
primaryClass={cs.DB}
} | valluri2004subset |
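The intended semantics of a subset query can be sketched in plain Python with set containment: return the entities whose associated set of items contains a required subset. The relation below is invented for the example and stands in for the proposed relational operators and SQL constructs.

```python
# Illustrative semantics of a subset query: return entities whose associated
# set contains a required subset. Plain Python stands in for the proposed
# relational operators; the schema below is invented for the example.
enrolled = {
    "alice": {"db", "os", "ai"},
    "bob": {"db", "ai"},
    "carol": {"os"},
}

def subset_query(relation, required):
    return [key for key, items in relation.items() if required <= items]

print(subset_query(enrolled, {"db", "ai"}))  # ['alice', 'bob']
```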
arxiv-671959 | cs/0406030 | Abstract Canonical Inference | <|reference_start|>Abstract Canonical Inference: An abstract framework of canonical inference is used to explore how different proof orderings induce different variants of saturation and completeness. Notions like completion, paramodulation, saturation, redundancy elimination, and rewrite-system reduction are connected to proof orderings. Fairness of deductive mechanisms is defined in terms of proof orderings, distinguishing between (ordinary) "fairness," which yields completeness, and "uniform fairness," which yields saturation.<|reference_end|> | arxiv | @article{bonacina2004abstract,
title={Abstract Canonical Inference},
author={Maria Paola Bonacina and Nachum Dershowitz},
journal={ACM Transactions on Computational Logic, 8(1):180-208, January
2007},
year={2004},
doi={10.1145/1182613.1182619},
number={RR 18/2004},
archivePrefix={arXiv},
eprint={cs/0406030},
primaryClass={cs.LO cs.SC}
} | bonacina2004abstract |
arxiv-671960 | cs/0406031 | A Public Reference Implementation of the RAP Anaphora Resolution Algorithm | <|reference_start|>A Public Reference Implementation of the RAP Anaphora Resolution Algorithm: This paper describes a standalone, publicly-available implementation of the Resolution of Anaphora Procedure (RAP) given by Lappin and Leass (1994). The RAP algorithm resolves third person pronouns, lexical anaphors, and identifies pleonastic pronouns. Our implementation, JavaRAP, fills a current need in anaphora resolution research by providing a reference implementation that can be benchmarked against current algorithms. The implementation uses the standard, publicly available Charniak (2000) parser as input, and generates a list of anaphora-antecedent pairs as output. Alternately, an in-place annotation or substitution of the anaphors with their antecedents can be produced. Evaluation on the MUC-6 co-reference task shows that JavaRAP has an accuracy of 57.9%, similar to the performance given previously in the literature (e.g., Preiss 2002).<|reference_end|> | arxiv | @article{qiu2004a,
title={A Public Reference Implementation of the RAP Anaphora Resolution
Algorithm},
author={Long Qiu, Min-Yen Kan, Tat-Seng Chua},
journal={arXiv preprint arXiv:cs/0406031},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406031},
primaryClass={cs.CL}
} | qiu2004a |
arxiv-671961 | cs/0406032 | A Dynamic Clustering-Based Markov Model for Web Usage Mining | <|reference_start|>A Dynamic Clustering-Based Markov Model for Web Usage Mining: Markov models have been widely utilized for modelling user web navigation behaviour. In this work we propose a dynamic clustering-based method to increase a Markov model's accuracy in representing a collection of user web navigation sessions. The method makes use of the state cloning concept to duplicate states in a way that separates in-links whose corresponding second-order probabilities diverge. In addition, the new method incorporates a clustering technique which determines an efficient way to assign in-links with similar second-order probabilities to the same clone. We report on experiments conducted with both real and random data and we provide a comparison with the N-gram Markov concept. The results show that the number of additional states induced by the dynamic clustering method can be controlled through a threshold parameter, and suggest that the method's performance is linear time in the size of the model.<|reference_end|> | arxiv | @article{borges2004a,
title={A Dynamic Clustering-Based Markov Model for Web Usage Mining},
author={José Borges (1), Mark Levene (2) ((1) School of Engineering,
University of Porto, Portugal, (2) Birkbeck, University of London, U.K.)},
journal={arXiv preprint arXiv:cs/0406032},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406032},
primaryClass={cs.IR cs.AI}
} | borges2004a |
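The statistic driving state cloning can be sketched directly: for each state, estimate the next-step distribution conditioned on each in-link, and flag states whose in-link distributions diverge. Total-variation distance and the threshold below are illustrative assumptions, not the paper's clustering procedure.

```python
# Sketch of the statistics behind state cloning: estimate, for each state s
# and each in-link p -> s, the conditional next-step distribution
# P(next | s, came from p). In-links whose distributions diverge (here, by
# total-variation distance above a threshold) make s a cloning candidate.
from collections import defaultdict
from itertools import combinations

def cloning_candidates(sessions, threshold=0.5):
    counts = defaultdict(lambda: defaultdict(int))  # (prev, state) -> next -> n
    for s in sessions:
        for p, state, nxt in zip(s, s[1:], s[2:]):
            counts[(p, state)][nxt] += 1
    by_state = defaultdict(list)
    for (p, state), nxts in counts.items():
        total = sum(nxts.values())
        by_state[state].append({k: v / total for k, v in nxts.items()})
    flagged = set()
    for state, dists in by_state.items():
        for d1, d2 in combinations(dists, 2):
            keys = set(d1) | set(d2)
            tv = 0.5 * sum(abs(d1.get(k, 0) - d2.get(k, 0)) for k in keys)
            if tv > threshold:
                flagged.add(state)
    return flagged

sessions = [["A", "C", "X"], ["A", "C", "X"], ["B", "C", "Y"], ["B", "C", "Y"]]
print(cloning_candidates(sessions))  # {'C'}
```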
arxiv-671962 | cs/0406033 | Randomized k-server algorithms for growth-rate bounded graphs | <|reference_start|>Randomized k-server algorithms for growth-rate bounded graphs: The paper referred to in the title is withdrawn.<|reference_end|> | arxiv | @article{mendel2004randomized,
title={Randomized k-server algorithms for growth-rate bounded graphs},
author={Manor Mendel},
journal={J. Algorithms, 55(2): 192-202, 2005},
year={2004},
doi={10.1016/j.jalgor.2004.06.002},
archivePrefix={arXiv},
eprint={cs/0406033},
primaryClass={cs.DS}
} | mendel2004randomized |
arxiv-671963 | cs/0406034 | Better algorithms for unfair metrical task systems and applications | <|reference_start|>Better algorithms for unfair metrical task systems and applications: Unfair metrical task systems are a generalization of online metrical task systems. In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain improved randomized online algorithms for metrical task systems on arbitrary metric spaces.<|reference_end|> | arxiv | @article{fiat2004better,
title={Better algorithms for unfair metrical task systems and applications},
author={Amos Fiat, Manor Mendel},
journal={SIAM Journal on Computing 32(6), pp. 1403-1422, 2003},
year={2004},
doi={10.1137/S0097539700376159},
archivePrefix={arXiv},
eprint={cs/0406034},
primaryClass={cs.DS}
} | fiat2004better |
arxiv-671964 | cs/0406035 | Optimal Free-Space Management and Routing-Conscious Dynamic Placement for Reconfigurable Devices | <|reference_start|>Optimal Free-Space Management and Routing-Conscious Dynamic Placement for Reconfigurable Devices: We describe algorithmic results for two crucial aspects of allocating resources on computational hardware devices with partial reconfigurability. By using methods from the field of computational geometry, we derive a method that allows correct maintenance of free and occupied space of a set of n rectangular modules in optimal time Theta(n log n); previous approaches needed a time of O(n^2) for correct results and O(n) for heuristic results. We also show that an optimal feasible communication-conscious placement (which minimizes the total weighted Manhattan distance between the new module and existing demand points) can be computed in Theta(n log n). Both resulting algorithms are practically easy to implement and show convincing experimental behavior.<|reference_end|> | arxiv | @article{ahmadinia2004optimal,
title={Optimal Free-Space Management and Routing-Conscious Dynamic Placement
for Reconfigurable Devices},
author={Ali Ahmadinia, Christophe Bobda, Sandor Fekete, Juergen Teich, Jan van
der Veen},
journal={arXiv preprint arXiv:cs/0406035},
year={2004},
doi={10.1109/TC.2007.1028},
archivePrefix={arXiv},
eprint={cs/0406035},
primaryClass={cs.DS cs.CG}
} | ahmadinia2004optimal |
arxiv-671965 | cs/0406036 | Online Companion Caching | <|reference_start|>Online Companion Caching: This paper is concerned with online caching algorithms for the (n,k)-companion cache, defined by Brehob et al. In this model the cache is composed of two components: a k-way set-associative cache and a companion fully-associative cache of size n. We show that the deterministic competitive ratio for this problem is (n+1)(k+1)-1, and the randomized competitive ratio is O(\log n \log k) and \Omega(\log n + \log k).<|reference_end|> | arxiv | @article{mendel2004online,
title={Online Companion Caching},
author={Manor Mendel, Steven S. Seiden},
journal={Theoret. Comput. Sci. 324(2-3): 183-200, 2004},
year={2004},
doi={10.1016/j.tcs.2004.05.015},
archivePrefix={arXiv},
eprint={cs/0406036},
primaryClass={cs.DS}
} | mendel2004online |
arxiv-671966 | cs/0406037 | Propositional Computability Logic II | <|reference_start|>Propositional Computability Logic II: Computability logic is a formal theory of computational tasks and resources. Its formulas represent interactive computational problems, logical operators stand for operations on computational problems, and validity of a formula is understood as being a scheme of problems that always have algorithmic solutions. A comprehensive online source on the subject is available at http://www.cis.upenn.edu/~giorgi/cl.html . The earlier article "Propositional computability logic I" proved soundness and completeness for the (in a sense) minimal nontrivial fragment CL1 of computability logic. The present paper extends that result to the significantly more expressive propositional system CL2. What makes CL2 more expressive than CL1 is the presence of two sorts of atoms in its language: elementary atoms, representing elementary computational problems (i.e. predicates), and general atoms, representing arbitrary computational problems. CL2 conservatively extends CL1, with the latter being nothing but the general-atom-free fragment of the former.<|reference_end|> | arxiv | @article{japaridze2004propositional,
title={Propositional Computability Logic II},
author={Giorgi Japaridze},
journal={ACM Transactions on Computational Logic 7 (2006), pp. 331-362},
year={2004},
doi={10.1145/1131313.1131319},
archivePrefix={arXiv},
eprint={cs/0406037},
primaryClass={cs.LO cs.GT math.LO}
} | japaridze2004propositional |
arxiv-671967 | cs/0406038 | A New Approach to Draw Detection by Move Repetition in Computer Chess Programming | <|reference_start|>A New Approach to Draw Detection by Move Repetition in Computer Chess Programming: We will try to tackle both the theoretical and practical aspects of a very important problem in chess programming as stated in the title of this article - the issue of draw detection by move repetition. The standard approach that has so far been employed in most chess programs is based on utilising positional matrices in original and compressed format as well as on the implementation of the so-called bitboard format. The new approach that we will be trying to introduce is based on using variant strings generated by the search algorithm (searcher) during the tree expansion in decision making. We hope to prove that this approach is more efficient than the standard treatment of the issue, especially in positions with few pieces (endgames). To illustrate what we have in mind a machine language routine that implements our theoretical assumptions is attached. The routine is part of the Axon chess program, developed by the authors. Axon, in its current incarnation, plays chess at master strength (ca. 2400-2450 Elo, based on both Axon vs computer programs and Axon vs human masters in over 3000 games altogether).<|reference_end|> | arxiv | @article{vuckovic2004a,
title={A New Approach to Draw Detection by Move Repetition in Computer Chess
Programming},
author={Vladan Vuckovic, Djordje Vidanovic},
journal={arXiv preprint arXiv:cs/0406038},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406038},
primaryClass={cs.AI}
} | vuckovic2004a |
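For contrast with the variant-string method, the conventional position-based approach the paper improves upon amounts to counting canonical position keys, sketched below. The keys are opaque placeholders; a real engine would derive them from, for example, Zobrist hashing.

```python
# The conventional approach the paper contrasts with, for reference: keep a
# canonical key (hash) of every position reached and declare a draw when
# some key occurs three times. Position keys here are opaque placeholders.
from collections import Counter

def is_draw_by_repetition(position_keys, limit=3):
    return any(n >= limit for n in Counter(position_keys).values())

history = ["p0", "p1", "p0", "p1", "p0"]
print(is_draw_by_repetition(history))  # True: 'p0' occurred three times
```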
arxiv-671968 | cs/0406039 | Long Nonbinary Codes Exceeding the Gilbert - Varshamov Bound for any Fixed Distance | <|reference_start|>Long Nonbinary Codes Exceeding the Gilbert - Varshamov Bound for any Fixed Distance: Let A(q,n,d) denote the maximum size of a q-ary code of length n and distance d. We study the minimum asymptotic redundancy \rho(q,n,d)=n-log_q A(q,n,d) as n grows while q and d are fixed. For any d and q<=d-1, long algebraic codes are designed that improve on the BCH codes and have the lowest asymptotic redundancy \rho(q,n,d) <= ((d-3)+1/(d-2)) log_q n known to date. Prior to this work, codes of fixed distance that asymptotically surpass BCH codes and the Gilbert-Varshamov bound were designed only for distances 4,5 and 6.<|reference_end|> | arxiv | @article{yekhanin2004long,
title={Long Nonbinary Codes Exceeding the Gilbert - Varshamov Bound for any
Fixed Distance},
author={Sergey Yekhanin, Ilya Dumer},
journal={arXiv preprint arXiv:cs/0406039},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406039},
primaryClass={cs.IT math.IT}
} | yekhanin2004long |
arxiv-671969 | cs/0406040 | Alchemistry of the P versus NP question | <|reference_start|>Alchemistry of the P versus NP question: Are P and NP provably inseparable? Take a look at some unorthodox, guiltily mentioned folklore and related unpublished results.<|reference_end|> | arxiv | @article{donat2004alchemistry,
title={Alchemistry of the P versus NP question},
author={Bonifac Donat},
journal={arXiv preprint arXiv:cs/0406040},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406040},
primaryClass={cs.CC cs.LO}
} | donat2004alchemistry |
arxiv-671970 | cs/0406041 | Non-Termination Inference of Logic Programs | <|reference_start|>Non-Termination Inference of Logic Programs: We present a static analysis technique for non-termination inference of logic programs. Our framework relies on an extension of the subsumption test, where some specific argument positions can be instantiated while others are generalized. We give syntactic criteria to statically identify such argument positions from the text of a program. Atomic left looping queries are generated bottom-up from selected subsets of the binary unfoldings of the program of interest. We propose a set of correct algorithms for automating the approach. Then, non-termination inference is tailored to attempt proofs of optimality of left termination conditions computed by a termination inference tool. An experimental evaluation is reported. When termination and non-termination analysis produce complementary results for a logic procedure, then with respect to the leftmost selection rule and the language used to describe sets of atomic queries, each analysis is optimal and together, they induce a characterization of the operational behavior of the logic procedure.<|reference_end|> | arxiv | @article{payet2004non-termination,
title={Non-Termination Inference of Logic Programs},
author={Etienne Payet and Fred Mesnard},
journal={arXiv preprint arXiv:cs/0406041},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406041},
primaryClass={cs.PL cs.LO}
} | payet2004non-termination |
arxiv-671971 | cs/0406042 | Business Process Measures | <|reference_start|>Business Process Measures: The paper proposes a new methodology for defining business process measures and their computation. The approach is based on metamodeling according to MOF. In particular, a metamodel providing precise definitions of typical process measures for a UML activity diagram-like notation is proposed, including precise definitions of how measures should be aggregated for composite process elements. The proposed approach allows values and measurements of data which are of interest to business to be defined in a natural way, without deep investigation into specific technical solutions. This provides new possibilities for business process measurement, decreasing the gap between technical solutions and asset management methodologies.<|reference_end|> | arxiv | @article{vitolins2004business,
title={Business Process Measures},
author={Valdis Vitolins},
journal={Vitolins Valdis, Business Process Measures. Computer Science and
Information Technologies, Databases and Information Systems Doctoral
Consortium, Scientific Papers University of Latvia Vol. 673, University of
Latvia, 2004, pp. 186.-197},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406042},
primaryClass={cs.CE cs.PF}
} | vitolins2004business |
arxiv-671972 | cs/0406043 | The Computational Complexity of Orientation Search Problems in Cryo-Electron Microscopy | <|reference_start|>The Computational Complexity of Orientation Search Problems in Cryo-Electron Microscopy: In this report we study the problem of determining three-dimensional orientations for noisy projections of randomly oriented identical particles. The problem is of central importance in the tomographic reconstruction of the density map of macromolecular complexes from electron microscope images and it has been studied intensively for more than 30 years. We analyze the computational complexity of the orientation problem and show that while several variants of the problem are $NP$-hard, inapproximable and fixed-parameter intractable, some restrictions are polynomial-time approximable within a constant factor or even solvable in logarithmic space. The orientation search problem is formalized as a constrained line arrangement problem that is of independent interest. The negative complexity results give a partial justification for the heuristic methods used in orientation search, and the positive complexity results on the orientation search have some positive implications also for the problem of finding functionally analogous genes. A preliminary version ``The Computational Complexity of Orientation Search in Cryo-Electron Microscopy'' appeared in Proc. ICCS 2004, LNCS 3036, pp. 231--238. Springer-Verlag 2004.<|reference_end|> | arxiv | @article{mielikäinen2004the,
title={The Computational Complexity of Orientation Search Problems in
Cryo-Electron Microscopy},
author={Taneli Mielik"ainen, Janne Ravantti, Esko Ukkonen},
journal={arXiv preprint arXiv:cs/0406043},
year={2004},
number={C-2004-3, Department of Computer Science, University of Helsinki},
archivePrefix={arXiv},
eprint={cs/0406043},
primaryClass={cs.DS cs.CG cs.CV}
} | mielikäinen2004the |
arxiv-671973 | cs/0406044 | On the Computational Complexity of the Forcing Chromatic Number | <|reference_start|>On the Computational Complexity of the Forcing Chromatic Number: We consider vertex colorings of graphs in which adjacent vertices have distinct colors. A graph is $s$-chromatic if it is colorable in $s$ colors and any coloring of it uses at least $s$ colors. The forcing chromatic number $F(G)$ of an $s$-chromatic graph $G$ is the smallest number of vertices which must be colored so that, with the restriction that $s$ colors are used, every remaining vertex has its color determined uniquely. We estimate the computational complexity of $F(G)$ relating it to the complexity class US introduced by Blass and Gurevich. We prove that recognizing if $F(G)\le 2$ is US-hard with respect to polynomial-time many-one reductions. Moreover, this problem is coNP-hard even under the promises that $F(G)\le 3$ and $G$ is 3-chromatic. On the other hand, recognizing if $F(G)\le k$, for each constant $k$, is reducible to a problem in US via disjunctive truth-table reduction. Similar results are obtained also for forcing variants of the clique and the domination numbers of a graph.<|reference_end|> | arxiv | @article{harary2004on,
title={On the Computational Complexity of the Forcing Chromatic Number},
author={Frank Harary, Wolfgang Slany, Oleg Verbitsky},
journal={arXiv preprint arXiv:cs/0406044},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406044},
primaryClass={cs.CC}
} | harary2004on |
arxiv-671974 | cs/0406045 | Online Searching with Turn Cost | <|reference_start|>Online Searching with Turn Cost: We consider the problem of searching for an object on a line at an unknown distance OPT from the original position of the searcher, in the presence of a cost of d for each time the searcher changes direction. This is a generalization of the well-studied linear-search problem. We describe a strategy that is guaranteed to find the object at a cost of at most 9*OPT + 2d, which has the optimal competitive ratio 9 with respect to OPT plus the minimum corresponding additive term. Our argument for upper and lower bound uses an infinite linear program, which we solve by experimental solution of an infinite series of approximating finite linear programs, estimating the limits, and solving the resulting recurrences. We feel that this technique is interesting in its own right and should help solve other searching problems. In particular, we consider the star search or cow-path problem with turn cost, where the hidden object is placed on one of m rays emanating from the original position of the searcher. For this problem we give a tight bound of (1 + 2(m^m)/((m-1)^(m-1))) OPT + m((m/(m-1))^(m-1) - 1) d. We also discuss the tradeoff between the corresponding coefficients, and briefly consider randomized strategies on the line.<|reference_end|> | arxiv | @article{demaine2004online,
title={Online Searching with Turn Cost},
author={Erik D. Demaine, Sandor P. Fekete, and Shmuel Gal},
journal={arXiv preprint arXiv:cs/0406045},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406045},
primaryClass={cs.DS}
} | demaine2004online |
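The stated bounds are easy to sanity-check numerically. The sketch below evaluates the tight m-ray bound from the abstract and confirms that m = 2 (the line) recovers the 9*OPT + 2d guarantee.

```python
# Worked check of the stated tight bound for m-ray (star) search with turn
# cost d: cost <= (1 + 2*m^m/(m-1)^(m-1)) * OPT + m*((m/(m-1))^(m-1) - 1) * d.
# For m = 2 rays (the line) this recovers the 9*OPT + 2d guarantee.
def star_search_bound(m, opt, d):
    mult = 1 + 2 * m**m / (m - 1) ** (m - 1)
    add = m * ((m / (m - 1)) ** (m - 1) - 1)
    return mult * opt + add * d

print(star_search_bound(2, 1, 1))  # 11.0 = 9*OPT + 2d with OPT = d = 1
```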
arxiv-671975 | cs/0406046 | Cheap Recovery: A Key to Self-Managing State | <|reference_start|>Cheap Recovery: A Key to Self-Managing State: Cluster hash tables (CHTs) are a key persistent-storage component of many large-scale Internet services due to their high performance and scalability. We show that a correctly-designed CHT can also be as easy to manage as a farm of stateless servers. Specifically, we trade away some consistency to obtain reboot-based recovery that is simple, maintains full data availability, and only has modest impact on performance. This simplifies management in two ways. First, it simplifies failure detection by lowering the cost of acting on false positives, allowing us to use simple but aggressive statistical techniques to quickly detect potential failures and node degradations; even when a false alarm is raised or when rebooting will not fix the problem, attempting recovery by rebooting is relatively non-intrusive to system availability and performance. Second, it allows us to re-cast online repartitioning as failure plus recovery, simplifying dynamic scaling and capacity planning. These properties make it possible for the system to be continuously self-adjusting, a key property of self-managing, autonomic systems.<|reference_end|> | arxiv | @article{huang2004cheap,
title={Cheap Recovery: A Key to Self-Managing State},
author={Andrew C. Huang and Armando Fox},
journal={arXiv preprint arXiv:cs/0406046},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406046},
primaryClass={cs.NI cs.DC}
} | huang2004cheap |
arxiv-671976 | cs/0406047 | Self-organizing neural networks in classification and image recognition | <|reference_start|>Self-organizing neural networks in classification and image recognition: Self-organizing neural networks are used for brick finding in the OPERA experiment. Self-organizing neural networks and wavelet analysis are used for recognition and extraction of car numbers from images.<|reference_end|> | arxiv | @article{ososkov2004self-organizing,
title={Self-organizing neural networks in classification and image recognition},
author={G.A. Ososkov, S.G. Dmitrievskiy, A.V. Stadnik},
journal={arXiv preprint arXiv:cs/0406047},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406047},
primaryClass={cs.CV cs.AI}
} | ososkov2004self-organizing |
arxiv-671977 | cs/0406048 | On Expanders Graphs: Parameters and Applications | <|reference_start|>On Expanders Graphs: Parameters and Applications: We give a new lower bound on the expansion coefficient of an edge-vertex graph of a $d$-regular graph. As a consequence, we obtain an improvement on the lower bound on relative minimum distance of the expander codes constructed by Sipser and Spielman. We also derive some improved results on the vertex expansion of graphs that help us in improving the parameters of the expander codes of Alon, Bruck, Naor, Naor, and Roth.<|reference_end|> | arxiv | @article{lal2004on,
title={On Expanders Graphs: Parameters and Applications},
author={H. L. Janwa, A. K. Lal},
journal={arXiv preprint arXiv:cs/0406048},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406048},
primaryClass={cs.IT math.IT}
} | lal2004on |
arxiv-671978 | cs/0406049 | A Fast, Vectorizable Algorithm for Producing Single-Precision Sine-Cosine Pairs | <|reference_start|>A Fast, Vectorizable Algorithm for Producing Single-Precision Sine-Cosine Pairs: This paper presents an algorithm for computing Sine-Cosine pairs to modest accuracy, but in a manner which contains no conditional tests or branching, making it highly amenable to vectorization. An exemplary implementation for PowerPC AltiVec processors is included, but the algorithm should be easily portable to other architectures, such as Intel SSE.<|reference_end|> | arxiv | @article{mendenhall2004a,
title={A Fast, Vectorizable Algorithm for Producing Single-Precision
Sine-Cosine Pairs},
author={Marcus H. Mendenhall},
journal={arXiv preprint arXiv:cs/0406049},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406049},
primaryClass={cs.MS}
} | mendenhall2004a |
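A generic branch-free sine-cosine pair in the same spirit can be sketched with numpy: range reduction by rounding (no conditionals), short polynomials on [-pi/4, pi/4], and quadrant selection by indexing. The Taylor coefficients and structure here are textbook choices, not the paper's tuned AltiVec implementation.

```python
# Generic branch-free sine/cosine-pair sketch: range reduction by rounding,
# truncated Taylor polynomials on [-pi/4, pi/4], quadrant selection by
# indexing. Coefficients are textbook terms, not the paper's tuned code.
import numpy as np

def sincos(x):
    x = np.asarray(x, dtype=np.float64)
    k = np.rint(x / (np.pi / 2)).astype(np.int64)
    r = x - k * (np.pi / 2)            # reduced argument in [-pi/4, pi/4]
    r2 = r * r
    s = r * (1 - r2 / 6 * (1 - r2 / 20 * (1 - r2 / 42)))
    c = 1 - r2 / 2 * (1 - r2 / 12 * (1 - r2 / 30))
    q = k & 3                          # quadrant, branch-free
    sins = np.choose(q, [s, c, -s, -c])
    coss = np.choose(q, [c, -s, -c, s])
    return sins, coss

xs = np.linspace(-10, 10, 5)
s, c = sincos(xs)
print(np.max(np.abs(s - np.sin(xs))), np.max(np.abs(c - np.cos(xs))))
```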
arxiv-671979 | cs/0406050 | Finite-Length Scaling for Iteratively Decoded LDPC Ensembles | <|reference_start|>Finite-Length Scaling for Iteratively Decoded LDPC Ensembles: In this paper we investigate the behavior of iteratively decoded low-density parity-check codes over the binary erasure channel in the so-called ``waterfall region''. We show that the performance curves in this region follow a very basic scaling law. We conjecture that essentially the same scaling behavior applies in a much more general setting and we provide some empirical evidence to support this conjecture. The scaling law, together with the error floor expressions developed previously, can be used for fast finite-length optimization.<|reference_end|> | arxiv | @article{amraoui2004finite-length,
title={Finite-Length Scaling for Iteratively Decoded LDPC Ensembles},
author={Abdelaziz Amraoui, Andrea Montanari, Tom Richardson, Ruediger Urbanke},
journal={arXiv preprint arXiv:cs/0406050},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406050},
primaryClass={cs.IT cond-mat.dis-nn cs.DM math.IT}
} | amraoui2004finite-length |
arxiv-671980 | cs/0406051 | Stable Outcomes for Two-Sided Contract Choice Problems | <|reference_start|>Stable Outcomes for Two-Sided Contract Choice Problems: We show that a simple generalization of the Deferred Acceptance Procedure with firms proposing, due to Gale and Shapley (1962), yields outcomes for a two-sided contract choice problem which necessarily belong to the core and are Weakly Pareto Optimal for firms. Under two additional assumptions, namely (a) given any two distinct workers, the set of yields achievable by a firm with the first worker is disjoint from the set of yields achievable by it with the second, and (b) the contract choice problem is pair-wise efficient, we prove that there is no stable outcome at which a firm can get more than what it gets at the unique outcome of our procedure.<|reference_end|> | arxiv | @article{lahiri2004stable,
title={Stable Outcomes for Two-Sided Contract Choice Problems},
author={Somdeb Lahiri},
journal={arXiv preprint arXiv:cs/0406051},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406051},
primaryClass={cs.GT}
} | lahiri2004stable |
arxiv-671981 | cs/0406052 | NoSEBrEaK - Attacking Honeynets | <|reference_start|>NoSEBrEaK - Attacking Honeynets: It is usually assumed that Honeynets are hard to detect and that attempts to detect or disable them can be unconditionally monitored. We scrutinize this assumption and demonstrate a method how a host in a honeynet can be completely controlled by an attacker without any substantial logging taking place.<|reference_end|> | arxiv | @article{dornseif2004nosebreak,
title={NoSEBrEaK - Attacking Honeynets},
author={Maximillian Dornseif, Thorsten Holz, Christian N. Klein},
journal={Proceedings from the fifth IEEE Systems, Man and Cybernetics
Information Assurance Workshop, Westpoint, 2004; Pages 123-129},
year={2004},
doi={10.1109/IAW.2004.1437807},
archivePrefix={arXiv},
eprint={cs/0406052},
primaryClass={cs.CR cs.CY}
} | dornseif2004nosebreak |
arxiv-671982 | cs/0406053 | Approximation Algorithms for Minimum PCR Primer Set Selection with Amplification Length and Uniqueness Constraints | <|reference_start|>Approximation Algorithms for Minimum PCR Primer Set Selection with Amplification Length and Uniqueness Constraints: A critical problem in the emerging high-throughput genotyping protocols is to minimize the number of polymerase chain reaction (PCR) primers required to amplify the single nucleotide polymorphism loci of interest. In this paper we study PCR primer set selection with amplification length and uniqueness constraints from both theoretical and practical perspectives. We give a greedy algorithm that achieves a logarithmic approximation factor for the problem of minimizing the number of primers subject to a given upper bound on the length of PCR amplification products. We also give, using randomized rounding, the first non-trivial approximation algorithm for a version of the problem that requires unique amplification of each amplification target. Empirical results on randomly generated testcases as well as testcases extracted from the National Center for Biotechnology Information's genomic databases show that our algorithms are highly scalable and produce better results compared to previous heuristics.<|reference_end|> | arxiv | @article{konwar2004approximation,
title={Approximation Algorithms for Minimum PCR Primer Set Selection with
Amplification Length and Uniqueness Constraints},
author={K. Konwar, I. Mandoiu, A. Russell, A. Shvartsman},
journal={arXiv preprint arXiv:cs/0406053},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406053},
primaryClass={cs.DS cs.DM q-bio.QM}
} | konwar2004approximation |
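The greedy step behind the logarithmic approximation factor reduces, in its simplest reading, to set cover: each candidate primer covers the targets it can amplify. The sketch below assumes the length and uniqueness constraints have already been folded into the coverage sets; the data is invented for the example.

```python
# Sketch of the greedy set-cover step behind primer selection: each candidate
# primer is reduced to the set of amplification targets it covers. Greedy
# selection achieves the logarithmic approximation factor cited above.
def greedy_primer_cover(targets, primer_coverage):
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(primer_coverage, key=lambda p: len(primer_coverage[p] & uncovered))
        gained = primer_coverage[best] & uncovered
        if not gained:
            raise ValueError("remaining targets cannot be covered")
        chosen.append(best)
        uncovered -= gained
    return chosen

coverage = {"p1": {1, 2}, "p2": {2, 3}, "p3": {3, 4}, "p4": {1, 4}}
print(greedy_primer_cover({1, 2, 3, 4}, coverage))  # e.g. ['p1', 'p3']
```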
arxiv-671983 | cs/0406054 | Building a linguistic corpus from bee dance data | <|reference_start|>Building a linguistic corpus from bee dance data: This paper discusses the problems and the possibility of collecting bee dance data in a linguistic \textit{corpus} and of using linguistic instruments such as Zipf's law and entropy statistics to decide the question of whether the dance carries information of any kind. We describe this against the historical background of attempts to analyse non-human communication systems.<|reference_end|> | arxiv | @article{paijmans2004building,
title={Building a linguistic corpus from bee dance data},
author={J.J. Paijmans},
journal={Proceedings of the first international congress of bioinformatics,
Havana (Cuba), 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406054},
primaryClass={cs.CL}
} | paijmans2004building |
arxiv-671984 | cs/0406055 | Web Services: A Process Algebra Approach | <|reference_start|>Web Services: A Process Algebra Approach: It is now well-admitted that formal methods are helpful for many issues raised in the Web service area. In this paper we present a framework for the design and verification of Web services (WSs) using process algebras and their tools. We define a two-way mapping between abstract specifications written using these calculi and executable Web services written in BPEL4WS. Several choices are available: design and correct errors in BPEL4WS, using process algebra verification tools, or design and correct in process algebra and automatically obtain the corresponding BPEL4WS code. The approaches can be combined. Process algebras are not useful only for temporal logic verification: we note the use of simulation/bisimulation both for verification and for the hierarchical refinement design method. It is worth noting that our approach allows the use of any process algebra depending on the needs of the user at different levels (expressiveness, existence of reasoning tools, user expertise).<|reference_end|> | arxiv | @article{ferrara2004web,
title={Web Services: A Process Algebra Approach},
author={Andrea Ferrara},
journal={arXiv preprint arXiv:cs/0406055},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406055},
primaryClass={cs.AI cs.DB}
} | ferrara2004web |
arxiv-671985 | cs/0406056 | P=NP | <|reference_start|>P=NP: We claim to resolve the P=?NP problem via a formal argument for P=NP.<|reference_end|> | arxiv | @article{bringsjord2004p=np,
title={P=NP},
author={Selmer Bringsjord and Joshua Taylor},
journal={arXiv preprint arXiv:cs/0406056},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406056},
primaryClass={cs.CC cs.AI}
} | bringsjord2004p=np |
arxiv-671986 | cs/0406057 | Modelling the costs and benefits of Honeynets | <|reference_start|>Modelling the costs and benefits of Honeynets: For many IT-security measures exact costs and benefits are not known. This makes it difficult to allocate resources optimally to different security measures. We present a model for costs and benefits of so called Honeynets. This can foster informed reasoning about the deployment of honeynet technology.<|reference_end|> | arxiv | @article{dornseif2004modelling,
title={Modelling the costs and benefits of Honeynets},
author={Maximillian Dornseif, Sascha May},
journal={arXiv preprint arXiv:cs/0406057},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406057},
primaryClass={cs.CR cs.CY}
} | dornseif2004modelling |
arxiv-671987 | cs/0406058 | Proofs of Zero Knowledge | <|reference_start|>Proofs of Zero Knowledge: We present a protocol for verification of ``no such entry'' replies from databases. We introduce a new cryptographic primitive as the underlying structure, the keyed hash tree, which is an extension of Merkle's hash tree. We compare our scheme to Buldas et al.'s Undeniable Attesters and Micali et al.'s Zero Knowledge Sets.<|reference_end|> | arxiv | @article{bauer2004proofs,
title={Proofs of Zero Knowledge},
author={Matthias Bauer},
journal={arXiv preprint arXiv:cs/0406058},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406058},
primaryClass={cs.CR cs.DB}
} | bauer2004proofs |
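As background for the keyed hash tree, the plain Merkle root it extends is sketched below. The keying extension itself is not reproduced; odd levels are handled by duplicating the last node, a common convention assumed here.

```python
# Background sketch: a plain Merkle hash tree root, the structure the keyed
# hash tree extends. (The keying extension itself is not reproduced here.)
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"entry-1", b"entry-2", b"entry-3"]).hex())
```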
arxiv-671988 | cs/0406059 | Ermittlung von Verwundbarkeiten mit elektronischen Koedern | <|reference_start|>Ermittlung von Verwundbarkeiten mit elektronischen Koedern: Electronic bait (honeypots) are network resources whose value consists of being attacked and compromised. These are often computers which do not have a task in the network, but are otherwise indistinguishable from regular computers. Such bait systems can be interconnected into bait networks (honeynets). These honeynets are equipped with special software, facilitating forensic analysis of incidents. Taking advantage of the wide variety of recorded data, it is possible to learn considerably more about the behaviour of attackers in networks than with traditional forensic methods. This article introduces the philosophy of bait networks and describes the setup and first experiences of such a network deployed at RWTH Aachen University.<|reference_end|> | arxiv | @article{dornseif2004ermittlung,
title={Ermittlung von Verwundbarkeiten mit elektronischen Koedern},
author={Maximillian Dornseif, Felix C. Gaertner, Thorsten Holz},
journal={arXiv preprint arXiv:cs/0406059},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406059},
primaryClass={cs.CR}
} | dornseif2004ermittlung |
arxiv-671989 | cs/0406060 | Well-Definedness and Semantic Type-Checking in the Nested Relational Calculus and XQuery | <|reference_start|>Well-Definedness and Semantic Type-Checking in the Nested Relational Calculus and XQuery: Two natural decision problems regarding the XML query language XQuery are well-definedness and semantic type-checking. We study these problems in the setting of a relational fragment of XQuery. We show that well-definedness and semantic type-checking are undecidable, even in the positive-existential case. Nevertheless, for a ``pure'' variant of XQuery, in which no identification is made between an item and the singleton containing that item, the problems become decidable. We also consider the analogous problems in the setting of the nested relational calculus.<|reference_end|> | arxiv | @article{bussche2004well-definedness,
title={Well-Definedness and Semantic Type-Checking in the Nested Relational
Calculus and XQuery},
author={Jan Van den Bussche, Dirk Van Gucht, Stijn Vansummeren},
journal={arXiv preprint arXiv:cs/0406060},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406060},
primaryClass={cs.DB cs.PL}
} | bussche2004well-definedness |
arxiv-671990 | cs/0406061 | The Complexity of Agreement | <|reference_start|>The Complexity of Agreement: A celebrated 1976 theorem of Aumann asserts that honest, rational Bayesian agents with common priors will never "agree to disagree": if their opinions about any topic are common knowledge, then those opinions must be equal. Economists have written numerous papers examining the assumptions behind this theorem. But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently? This paper answers both questions in the affirmative, thereby strengthening Aumann's original conclusion. We first show that, for two agents with a common prior to agree within epsilon about the expectation of a [0,1] variable with high probability over their prior, it suffices for them to exchange order 1/epsilon^2 bits. This bound is completely independent of the number of bits n of relevant knowledge that the agents have. We then extend the bound to three or more agents; and we give an example where the economists' "standard protocol" (which consists of repeatedly announcing one's current expectation) nearly saturates the bound, while a new "attenuated protocol" does better. Finally, we give a protocol that would cause two Bayesians to agree within epsilon after exchanging order 1/epsilon^2 messages, and that can be simulated by agents with limited computational resources. By this we mean that, after examining the agents' knowledge and a transcript of their conversation, no one would be able to distinguish the agents from perfect Bayesians. The time used by the simulation procedure is exponential in 1/epsilon^6 but not in n.<|reference_end|> | arxiv | @article{aaronson2004the,
title={The Complexity of Agreement},
author={Scott Aaronson},
journal={arXiv preprint arXiv:cs/0406061},
year={2004},
archivePrefix={arXiv},
eprint={cs/0406061},
primaryClass={cs.CC cs.GT}
} | aaronson2004the |
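The "standard protocol" this abstract analyzes can be simulated concretely in finite settings. The sketch below is a minimal illustration, assuming a finite state space with a common prior given as exact Fractions; the function names and data layout are illustrative choices, not the paper's interface. Agents alternately announce their current conditional expectation of X, and everyone refines the common-knowledge event to the states consistent with each announcement, in the style of Geanakoplos and Polemarchakis:

from fractions import Fraction  # exact arithmetic keeps the equality test reliable

def expectation(prior, X, event):
    total = sum(prior[w] for w in event)
    return sum(prior[w] * X[w] for w in event) / total

def standard_protocol(prior, X, part1, part2, omega, max_rounds=50):
    # prior: dict state -> Fraction; X: dict state -> Fraction in [0, 1];
    # part1, part2: each agent's information partition (lists of sets);
    # omega: the true state.
    E = set(prior)                            # current common-knowledge event
    parts = [part1, part2]
    history = []
    for t in range(max_rounds):
        part = parts[t % 2]
        cell = next(C for C in part if omega in C)
        e = expectation(prior, X, cell & E)   # the speaker's announcement
        history.append(e)
        # Everyone keeps only the states under which the speaker's own cell
        # would have produced this same announcement.
        E = {w for w in E
             if expectation(prior, X,
                            next(C for C in part if w in C) & E) == e}
        if len(history) >= 2 and history[-1] == history[-2]:
            break                             # the two announcements agree
    return history

# Example: four equally likely states, X the indicator of the odd states,
# agent 1 knows {0,1} vs {2,3} and agent 2 knows {0,3} vs {1,2}:
# p = {w: Fraction(1, 4) for w in range(4)}
# X = {w: Fraction(w % 2) for w in range(4)}
# standard_protocol(p, X, [{0, 1}, {2, 3}], [{0, 3}, {1, 2}], omega=0)

On such examples the announcement sequence converges to a common value after finitely many rounds; the paper's contribution is to bound the communication such conversations need, independently of the number of bits n of relevant knowledge.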
arxiv-671991 | cs/0407001 | Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies | <|reference_start|>Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies: A Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the outcome of academic research projects around the world, have been developed. This chapter focuses on four of these middleware technologies: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, since UNICORE itself did not provide this functionality. A comparison of these systems on the basis of their architecture, implementation model and several other features is included.<|reference_end|> | arxiv | @article{asadzadeh2004global,
title={Global Grids and Software Toolkits: A Study of Four Grid Middleware
Technologies},
author={Parvin Asadzadeh, Rajkumar Buyya, Chun Ling Kei, Deepa Nayar, and
Srikumar Venugopal},
journal={arXiv preprint arXiv:cs/0407001},
year={2004},
number={Technical Report, GRIDS-TR-2004-4, Grid Computing and Distributed
Systems Laboratory, University of Melbourne, Australia, July 1, 2004},
archivePrefix={arXiv},
eprint={cs/0407001},
primaryClass={cs.DC}
} | asadzadeh2004global |
arxiv-671992 | cs/0407002 | Annotating Predicate-Argument Structure for a Parallel Treebank | <|reference_start|>Annotating Predicate-Argument Structure for a Parallel Treebank: We report on a recently initiated project which aims at building a multi-layered parallel treebank of English and German. Particular attention is devoted to a dedicated predicate-argument layer which is used for aligning translationally equivalent sentences of the two languages. We describe both our conceptual decisions and aspects of their technical realisation. We discuss some selected problems and conclude with a few remarks on how this project relates to similar projects in the field.<|reference_end|> | arxiv | @article{cyrus2004annotating,
title={Annotating Predicate-Argument Structure for a Parallel Treebank},
author={Lea Cyrus, Hendrik Feddes, Frank Schumacher},
journal={Proceedings of the LREC 2004 Workshop on Building Lexical
Resources from Semantically Annotated Corpora, Lisbon, May 30, 2004, pp.
39-46},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407002},
primaryClass={cs.CL}
} | cyrus2004annotating |
arxiv-671993 | cs/0407003 | Insertion Sort is O(n log n) | <|reference_start|>Insertion Sort is O(n log n): Traditional Insertion Sort runs in O(n^2) time because each insertion takes O(n) time. When people run Insertion Sort in the physical world, they leave gaps between items to accelerate insertions. Gaps help in computers as well. This paper shows that Gapped Insertion Sort has insertion times of O(log n) with high probability, yielding a total running time of O(n log n) with high probability.<|reference_end|> | arxiv | @article{bender2004insertion,
title={Insertion Sort is O(n log n)},
author={Michael A. Bender, Martin Farach-Colton, Miguel Mosteiro},
journal={arXiv preprint arXiv:cs/0407003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407003},
primaryClass={cs.DS}
} | bender2004insertion |
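The gap idea is concrete enough to sketch. The following gapped insertion sort ("library sort") is a minimal illustration, not the paper's tuned algorithm: periodic re-spreading keeps the array at most half full, so most insertions land in a nearby gap instead of shifting O(n) items. The side lists keys and pos are a simplification for clarity; a full implementation binary-searches the gapped array directly.

import bisect

def library_sort(items):
    slots = [None] * 4        # the gapped array; None marks a free slot
    keys = []                 # occupied values, in sorted order
    pos = []                  # slot index of each occupied value
    for x in items:
        if 2 * len(keys) >= len(slots):      # half full: re-spread with gaps
            slots = [None] * (4 * len(keys) + 4)
            pos = [4 * i + 2 for i in range(len(keys))]
            for p, v in zip(pos, keys):
                slots[p] = v
        k = bisect.bisect_left(keys, x)      # rank of x among current keys
        left = pos[k - 1] if k > 0 else -1
        right = pos[k] if k < len(pos) else len(slots)
        if right - left > 1:                 # a gap separates the neighbors
            target = (left + right) // 2
            slots[target] = x
        else:                                # no gap: shift right to the next one
            j = right
            while j < len(slots) and slots[j] is not None:
                j += 1
            if j == len(slots):
                slots.append(None)
            while j > right:
                slots[j] = slots[j - 1]
                pos[pos.index(j - 1)] = j    # O(n) bookkeeping; fine for a sketch
                j -= 1
            slots[right] = x
            target = right
        keys.insert(k, x)
        pos.insert(k, target)
    return keys

Re-spreading to quarter density roughly doubles the number of elements between rebuilds, so the rebuild cost is amortized O(1) per insertion; the paper's probabilistic analysis is what bounds the shifting work by O(log n) with high probability.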
arxiv-671994 | cs/0407004 | Zero-error communication over networks | <|reference_start|>Zero-error communication over networks: Zero-error communication studies communication without any error. Once channels are defined without probabilities, results from Elias can be used to completely characterize which channels can simulate which other channels. We introduce the ambiguity of a channel, which completely characterizes whether a channel can, in principle, simulate any given other channel. In the second part we look at networks of players connected by channels, where some of the players may be corrupted. We show how the ambiguity of a virtual channel connecting two arbitrary players can be calculated. This means that we can specify exactly what kind of zero-error communication is possible between two players in any network of players connected by channels.<|reference_end|> | arxiv | @article{wullschleger2004zero-error,
title={Zero-error communication over networks},
author={J\"urg Wullschleger},
journal={arXiv preprint arXiv:cs/0407004},
year={2004},
doi={10.1109/ISIT.2004.1365072},
archivePrefix={arXiv},
eprint={cs/0407004},
primaryClass={cs.IT cs.CR math.IT}
} | wullschleger2004zero-error |
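For intuition about channels defined without probabilities, a channel can be modeled as a relation from inputs to possible outputs. The sketch below is a generic illustration of zero-error distinguishability over such channels; the paper's "ambiguity" measure is its own, finer notion and is not reproduced here. Two inputs that can produce a common output can never be separated with zero error:

from itertools import combinations

def confusable(channel, a, b):
    # channel: dict mapping each input to the set of outputs it may produce.
    return bool(channel[a] & channel[b])

def zero_error_inputs(channel):
    # Brute-force search for a largest set of pairwise non-confusable
    # inputs: exactly these can be sent over one channel use with zero error.
    for r in range(len(channel), 0, -1):
        for combo in combinations(channel, r):
            if all(not confusable(channel, a, b)
                   for a, b in combinations(combo, 2)):
                return set(combo)
    return set()

# For the pentagon ("noisy typewriter") channel
# {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 0}}
# the call returns a 2-element set such as {0, 2}.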
arxiv-671995 | cs/0407005 | Statistical Machine Translation by Generalized Parsing | <|reference_start|>Statistical Machine Translation by Generalized Parsing: Designers of statistical machine translation (SMT) systems have begun to employ tree-structured translation models. Systems involving tree-structured translation models tend to be complex. This article aims to reduce the conceptual complexity of such systems, in order to make them easier to design, implement, debug, use, study, understand, explain, modify, and improve. In service of this goal, the article extends the theory of semiring parsing to arrive at a novel abstract parsing algorithm with five functional parameters: a logic, a grammar, a semiring, a search strategy, and a termination condition. The article then shows that all the common algorithms that revolve around tree-structured translation models, including hierarchical alignment, inference for parameter estimation, translation, and structured evaluation, can be derived by generalizing two of these parameters -- the grammar and the logic. The article culminates with a recipe for using such generalized parsers to train, apply, and evaluate an SMT system that is driven by tree-structured translation models.<|reference_end|> | arxiv | @article{melamed2004statistical,
title={Statistical Machine Translation by Generalized Parsing},
author={I. Dan Melamed and Wei Wang},
journal={arXiv preprint arXiv:cs/0407005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407005},
primaryClass={cs.CL}
} | melamed2004statistical |
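Of the five functional parameters, the semiring is the easiest to demonstrate. Below is a minimal CKY recognizer over a user-supplied semiring, in the spirit of the semiring parsing the abstract extends; the data layout (binary rules as a dict, a lexicon of unary rewrites) is an illustrative choice, not the paper's interface.

from collections import defaultdict

def semiring_cky(words, lexicon, rules, plus, times, zero, start="S"):
    # chart[i, j, A] accumulates, in the given semiring, the values of all
    # derivations of A over words[i:j]. The Boolean semiring (or, and) gives
    # recognition, (max, *) gives Viterbi, and (+, *) gives inside scores.
    n = len(words)
    chart = defaultdict(lambda: zero)
    for i, w in enumerate(words):
        for sym, val in lexicon.get(w, []):
            chart[i, i + 1, sym] = plus(chart[i, i + 1, sym], val)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, b, c), val in rules.items():
                    l, r = chart[i, k, b], chart[k, j, c]
                    if l != zero and r != zero:
                        chart[i, j, lhs] = plus(
                            chart[i, j, lhs], times(val, times(l, r)))
    return chart[0, n, start]

# Viterbi example: semiring_cky("fish swim".split(),
#     {"fish": [("N", 0.6)], "swim": [("V", 0.9)]},
#     {("S", "N", "V"): 1.0},
#     plus=max, times=lambda a, b: a * b, zero=0.0)

Swapping the grammar and logic parameters in the same way is what lets one abstract algorithm cover hierarchical alignment, training, translation, and evaluation.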
arxiv-671996 | cs/0407006 | Predicate Abstraction with Indexed Predicates | <|reference_start|>Predicate Abstraction with Indexed Predicates: Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.<|reference_end|> | arxiv | @article{lahiri2004predicate,
title={Predicate Abstraction with Indexed Predicates},
author={Shuvendu K. Lahiri, Randal E. Bryant},
journal={arXiv preprint arXiv:cs/0407006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407006},
primaryClass={cs.LO}
} | lahiri2004predicate |
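As background, the sketch below shows plain (non-indexed) predicate abstraction on a finite toy system. It illustrates only the basic scheme: the paper's indexed predicates, which quantify universally over index variables to describe first-order state such as unbounded memories, are precisely what this simple version cannot express.

def predicate_abstraction(states, is_init, trans, preds):
    # Abstract state = tuple of predicate truth values. An abstract edge
    # exists iff some pair of concrete states with those valuations is
    # connected; here this is checked by enumeration, standing in for the
    # decision-procedure queries used on infinite-state systems.
    def alpha(s):
        return tuple(bool(p(s)) for p in preds)
    abs_init = {alpha(s) for s in states if is_init(s)}
    abs_edges = {(alpha(s), alpha(t))
                 for s in states for t in states if trans(s, t)}
    reach, frontier = set(abs_init), list(abs_init)
    while frontier:                      # abstract reachability fixpoint
        a = frontier.pop()
        for x, y in abs_edges:
            if x == a and y not in reach:
                reach.add(y)
                frontier.append(y)
    return reach   # overapproximates: any property holding of all these
                   # abstract states is an invariant of the concrete system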
arxiv-671997 | cs/0407007 | The semijoin algebra and the guarded fragment | <|reference_start|>The semijoin algebra and the guarded fragment: The semijoin algebra is the variant of the relational algebra obtained by replacing the join operator by the semijoin operator. We discuss some interesting connections between the semijoin algebra and the guarded fragment of first-order logic. We also provide an Ehrenfeucht-Fraisse game, characterizing the discerning power of the semijoin algebra. This game gives a method for showing that certain queries are not expressible in the semijoin algebra.<|reference_end|> | arxiv | @article{leinders2004the,
title={The semijoin algebra and the guarded fragment},
author={Dirk Leinders (1), Jerzy Tyszkiewicz (2), Jan Van den Bussche (1) ((1)
Limburgs Universitair Centrum, Diepenbeek, Belgium, (2) Institute of
Informatics, Warsaw University, Warsaw, Poland)},
journal={arXiv preprint arXiv:cs/0407007},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407007},
primaryClass={cs.DB cs.LO}
} | leinders2004the |
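The operator itself is easy to state in executable form. The sketch below uses dict-based relations as an illustrative encoding and shows the point that separates semijoin from join: the result keeps only R's schema, so intermediate results never grow beyond their inputs.

def semijoin(R, S, shared):
    # R semijoin S: the tuples of R that have at least one join partner in
    # S on the shared attributes. Unlike a full join, no attributes of S
    # appear in the output.
    witnesses = {tuple(s[a] for a in shared) for s in S}
    return [r for r in R if tuple(r[a] for a in shared) in witnesses]

# semijoin([{"id": 1, "x": "a"}, {"id": 2, "x": "b"}], [{"id": 1}], ["id"])
# returns [{"id": 1, "x": "a"}].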
arxiv-671998 | cs/0407008 | Autogenic Training With Natural Language Processing Modules: A Recent Tool For Certain Neuro Cognitive Studies | <|reference_start|>Autogenic Training With Natural Language Processing Modules: A Recent Tool For Certain Neuro Cognitive Studies: Learning to respond to voice-text input involves the subject's ability to understand the phonetic and text-based contents and to communicate based on his or her experience. The subject's neuro-cognitive facility has to support both of these domains for the learning process to be complete. In many cases, though the understanding is complete, the response is only partial. This is one reason why the information obtained from the subject needs to be supplemented with scalable techniques such as Natural Language Processing (NLP) for abstracting the content of the output. This paper explores the feasibility of using NLP modules interlaced with neural networks to perform the required task in autogenic training related to medical applications.<|reference_end|> | arxiv | @article{ravichandran2004autogenic,
title={Autogenic Training With Natural Language Processing Modules: A Recent
Tool For Certain Neuro Cognitive Studies},
author={S. Ravichandran and M.N. Karthik},
journal={arXiv preprint arXiv:cs/0407008},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407008},
primaryClass={cs.AI}
} | ravichandran2004autogenic |
arxiv-671999 | cs/0407009 | Search Using N-gram Technique Based Statistical Analysis for Knowledge Extraction in Case Based Reasoning Systems | <|reference_start|>Search Using N-gram Technique Based Statistical Analysis for Knowledge Extraction in Case Based Reasoning Systems: Searching techniques for Case Based Reasoning systems involve extensive methods of elimination. In this paper, we look at a new method of arriving at the right solution by performing a series of transformations upon the data. These involve N-gram based comparison of the input data against the case data, using morphemes and phonemes as the deciding parameters. A noise-removal function is applied in a similar way to eliminate possible errors. Error tracking and elimination are performed through a statistical analysis of the obtained data, in which the entire data set is analyzed as sub-categories of various etymological derivatives. A probability analysis for the closest match is then performed, which yields the final expression. This final expression is referred to the Case Base, and the output is produced by an Expert System based on the best possible match. The threshold for the match is customizable and can be set by the knowledge architect.<|reference_end|> | arxiv | @article{karthik2004search,
title={Search Using N-gram Technique Based Statistical Analysis for Knowledge
Extraction in Case Based Reasoning Systems},
author={M. N. Karthik and Moshe Davis},
journal={arXiv preprint arXiv:cs/0407009},
year={2004},
archivePrefix={arXiv},
eprint={cs/0407009},
primaryClass={cs.AI cs.IR}
} | karthik2004search |
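As a rough illustration of the retrieval step only (the morpheme/phoneme parameters and the expert-system stage of the paper are not modeled), character n-gram overlap with a customizable threshold can rank case-base entries:

def ngrams(text, n=3):
    text = f" {text.lower().strip()} "        # pad so edges form n-grams
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def best_case_match(query, case_base, n=3, threshold=0.4):
    # Dice coefficient over character n-grams; `threshold` plays the role
    # of the customizable cutoff set by the knowledge architect.
    q = ngrams(query, n)
    best, best_score = None, 0.0
    for case in case_base:
        c = ngrams(case, n)
        score = 2 * len(q & c) / (len(q) + len(c)) if (q or c) else 0.0
        if score > best_score:
            best, best_score = case, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

Because n-gram sets tolerate small spelling differences, this comparison is naturally robust to the kind of minor input errors that the abstract's noise-removal function targets.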
arxiv-672000 | cs/0407010 | Improved error bounds for the erasure/list scheme: the binary and spherical cases | <|reference_start|>Improved error bounds for the erasure/list scheme: the binary and spherical cases: We derive improved bounds on the error and erasure rate for spherical codes and for binary linear codes under Forney's erasure/list decoding scheme and prove some related results.<|reference_end|> | arxiv | @article{barg2004improved,
title={Improved error bounds for the erasure/list scheme: the binary and
spherical cases},
author={Alexander Barg},
journal={IEEE Transactions on Information Theory, vol. 50, no. 10, 2004, pp.
2503-2511},
year={2004},
doi={10.1109/TIT.2004.834753},
archivePrefix={arXiv},
eprint={cs/0407010},
primaryClass={cs.IT math.IT}
} | barg2004improved |
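For background, Forney's erasure/list decoding rule, the scheme whose error and erasure exponents the paper improves for binary linear and spherical codes, has the standard threshold form (textbook material, not the paper's new bound): decode the received word y to the codeword x maximizing P(y | x) provided

\[
  \frac{P(y \mid x)}{\sum_{x' \neq x} P(y \mid x')} \;\ge\; e^{nT},
\]

and declare an erasure otherwise (the base of the exponent varies by convention). Choosing T > 0 trades a lower undetected-error probability against more erasures, while T < 0 turns the rule into list decoding.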