corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-671101 | cs/0305031 | Clustering belief functions based on attracting and conflicting metalevel evidence | <|reference_start|>Clustering belief functions based on attracting and conflicting metalevel evidence: In this paper we develop a method for clustering belief functions based on attracting and conflicting metalevel evidence. Such clustering is done when the belief functions concern multiple events, and all belief functions are mixed up. The clustering process is used as the means for separating the belief functions into subsets that should be handled independently. While the conflicting metalevel evidence is generated internally from pairwise conflicts of all belief functions, the attracting metalevel evidence is assumed given by some external source.<|reference_end|> | arxiv | @article{schubert2003clustering,
title={Clustering belief functions based on attracting and conflicting
metalevel evidence},
author={Johan Schubert},
journal={in Proceedings of the Ninth International Conference on
Information Processing and Management of Uncertainty in Knowledge-based
Systems (IPMU'02), pp. 571-578, Annecy, France, 1-5 July 2002},
year={2003},
number={FOI-S-0524-SE},
archivePrefix={arXiv},
eprint={cs/0305031},
primaryClass={cs.AI cs.NE}
} | schubert2003clustering |
arxiv-671102 | cs/0305032 | Robust Report Level Cluster-to-Track Fusion | <|reference_start|>Robust Report Level Cluster-to-Track Fusion: In this paper we develop a method for report level tracking based on Dempster-Shafer clustering using Potts spin neural networks where clusters of incoming reports are gradually fused into existing tracks, one cluster for each track. Incoming reports are put into a cluster and continuous reclustering of older reports is made in order to obtain maximum association fit within the cluster and towards the track. Over time, the oldest reports of the cluster leave the cluster for the fixed track at the same rate as new incoming reports are put into it. Fusing reports to existing tracks in this fashion allows us to take account of both existing tracks and the probable future of each track, as represented by younger reports within the corresponding cluster. This gives us a robust report-to-track association. Compared to clustering of all available reports this approach is computationally faster and has a better report-to-track association than simple step-by-step association.<|reference_end|> | arxiv | @article{schubert2003robust,
title={Robust Report Level Cluster-to-Track Fusion},
author={Johan Schubert},
journal={in Proceedings of the Fifth International Conference on
Information Fusion (FUSION 2002), pp. 913-918, Annapolis, USA, 8-11 July
2002, International Society of Information Fusion, 2002},
year={2003},
number={FOI-S-0525-SE},
archivePrefix={arXiv},
eprint={cs/0305032},
primaryClass={cs.AI cs.NE}
} | schubert2003robust |
arxiv-671103 | cs/0305033 | Beslutst\"odssystemet Dezzy - en \"oversikt | <|reference_start|>Beslutst\"odssystemet Dezzy - en \"oversikt: Within the scope of the three-year ANTI-SUBMARINE WARFARE project of the National Defence Research Establishment, the INFORMATION SYSTEMS subproject has developed the demonstration prototype Dezzy for handling and analysis of intelligence reports concerning foreign underwater activities. ----- Inom ramen f\"or FOA:s tre{\aa}riga huvudprojekt UB{\AA}TSSKYDD har delprojekt INFORMATIONSSYSTEM utvecklat demonstrationsprototypen Dezzy till ett beslutsst\"odsystem f\"or hantering och analys av underr\"attelser om fr\"ammande undervattensverksamhet.<|reference_end|> | arxiv | @article{bergsten2003beslutst\"odssystemet,
title={Beslutst\"odssystemet Dezzy - en \"oversikt},
author={Ulla Bergsten, Johan Schubert, Per Svensson},
journal={in Dokumentation 7 juni av Seminarium och fackutst\"allning om
samband, sensorer och datorer f\"or ledningssystem till f\"orsvaret
(MILINF'89), pp. 07B2:19-31, Enk\"oping, June 1989, Telub AB, V\"axj\"o, 1989},
year={2003},
number={FOA Report B 20078-2.7},
archivePrefix={arXiv},
eprint={cs/0305033},
primaryClass={cs.AI cs.DB}
} | bergsten2003beslutst\"odssystemet |
arxiv-671104 | cs/0305034 | Cryptanalysis of HFE | <|reference_start|>Cryptanalysis of HFE: I transform the trapdoor problem of HFE into a linear algebra problem.<|reference_end|> | arxiv | @article{toli2003cryptanalysis,
title={Cryptanalysis of HFE},
author={Ilia Toli},
journal={arXiv preprint arXiv:cs/0305034},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305034},
primaryClass={cs.CR cs.SC}
} | toli2003cryptanalysis |
arxiv-671105 | cs/0305035 | A mathematical definition of "simplify" | <|reference_start|>A mathematical definition of "simplify": Even though every mathematician knows intuitively what it means to "simplify" a mathematical expression, there is still no universally accepted rigorous mathematical definition of "simplify". In this paper, we shall give a simple and plausible definition of "simplify" in terms of the computational complexity of integer functions. We shall also use this definition to show that there is no deterministic and exact algorithm which can compute the permanent of an $n \times n$ matrix in $o(2^n)$ time.<|reference_end|> | arxiv | @article{feinstein2003a,
title={A mathematical definition of "simplify"},
author={Craig Alan Feinstein},
journal={Progress in Physics, 2019 (vol. 15), issue 2, pp. 75-77},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305035},
primaryClass={cs.CC}
} | feinstein2003a |
arxiv-671106 | cs/0305036 | Using Dynamic Simulation in the Development of Construction Machinery | <|reference_start|>Using Dynamic Simulation in the Development of Construction Machinery: As in the car industry for quite some time, dynamic simulation of complete vehicles is being practiced more and more in the development of off-road machinery. However, specific questions arise due not only to company structure and size, but especially to the type of product. Tightly coupled, non-linear subsystems of different domains make prediction and optimisation of the complete system's dynamic behaviour a challenge. Furthermore, the demand for versatile machines leads to sometimes contradictory target requirements and can turn the design process into a hunt for the least painful compromise. This can be avoided by profound system knowledge, assisted by simulation-driven product development. This paper gives an overview of joint research into this issue by Volvo Wheel Loaders and Linkoping University on that matter, lists the results of a related literature review and introduces the term "operateability". Rather than giving detailed answers, the problem space for ongoing and future research is examined and possible solutions are sketched.<|reference_end|> | arxiv | @article{filla2003using,
title={Using Dynamic Simulation in the Development of Construction Machinery},
author={Reno Filla (1), Jan-Ove Palmberg (2) ((1) Volvo Wheel Loaders AB, (2)
Linkoping University)},
journal={The Eighth Scandinavian International Conference on Fluid Power
2003, vol. 1, pp. 651-667},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305036},
primaryClass={cs.CE}
} | filla2003using |
arxiv-671107 | cs/0305037 | Power Law Distributions in Class Relationships | <|reference_start|>Power Law Distributions in Class Relationships: Power law distributions have been found in many natural and social phenomena, and more recently in the source code and run-time characteristics of Object-Oriented (OO) systems. A power law implies that small values are extremely common, whereas large values are extremely rare. In this paper, we identify twelve new power laws relating to the static graph structures of Java programs. The graph structures analyzed represented different forms of OO coupling, namely, inheritance, aggregation, interface, parameter type and return type. Identification of these new laws provide the basis for predicting likely features of classes in future developments. The research in this paper ties together work in object-based coupling and World Wide Web structures.<|reference_end|> | arxiv | @article{wheeldon2003power,
title={Power Law Distributions in Class Relationships},
author={Richard Wheeldon and Steve Counsell},
journal={arXiv preprint arXiv:cs/0305037},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305037},
primaryClass={cs.SE}
} | wheeldon2003power |
arxiv-671108 | cs/0305038 | The Evolution of the Computerized Database | <|reference_start|>The Evolution of the Computerized Database: Databases, collections of related data, are as old as the written word. A database can be anything from a homemaker's metal recipe file to a sophisticated data warehouse. Yet today, when we think of a database we invariably think of computerized data and their DBMSs (database management systems). How did we go from organizing our data in a simple metal filing box or cabinet to storing our data in a sophisticated computerized database? How did the computerized database evolve? This paper defines what we mean by a database. It traces the evolution of the database, from its start as a non-computerized set of related data, to the, now standard, computerized RDBMS (relational database management system). Early computerized storage methods are reviewed including both the ISAM (Indexed Sequential Access Method) and VSAM (Virtual Storage Access Method) storage methods. Early database models are explored including the network and hierarchical database models. Eventually, the relational, object-relational and object-oriented databases models are discussed. An appendix of diagrams, including hierarchical occurrence tree, network schema, ER (entity relationship) and UML (unified modeling language) diagrams, is included to support the text. This paper concludes with an exploration of current and future trends in DBMS development. It discusses the factors affecting these trends. It delves into the relationship between DBMSs and the increasingly popular object-oriented development methodologies. Finally, it speculates on the future of the DBMS.<|reference_end|> | arxiv | @article{bercich2003the,
title={The Evolution of the Computerized Database},
author={Nancy Hartline Bercich},
journal={arXiv preprint arXiv:cs/0305038},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305038},
primaryClass={cs.DB}
} | bercich2003the |
arxiv-671109 | cs/0305039 | Periodicity and Unbordered Words: A Proof of the Extended Duval Conjecture | <|reference_start|>Periodicity and Unbordered Words: A Proof of the Extended Duval Conjecture: The relationship between the length of a word and the maximum length of its unbordered factors is investigated in this paper. Consider a finite word w of length n. We call a word bordered, if it has a proper prefix which is also a suffix of that word. Let f(w) denote the maximum length of all unbordered factors of w, and let p(w) denote the period of w. Clearly, f(w) < p(w)+1. We establish that f(w) = p(w), if w has an unbordered prefix of length f(w) and n > 2f(w)-2. This bound is tight and solves the stronger version of a 21 years old conjecture by Duval. It follows from this result that, in general, n > 3f(w)-3 implies f(w) = p(w) which gives an improved bound for the question asked by Ehrenfeucht and Silberger in 1979.<|reference_end|> | arxiv | @article{harju2003periodicity,
title={Periodicity and Unbordered Words: A Proof of the Extended Duval
Conjecture},
author={Tero Harju and Dirk Nowotka},
journal={arXiv preprint arXiv:cs/0305039},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305039},
primaryClass={cs.DM}
} | harju2003periodicity |
arxiv-671110 | cs/0305040 | Bounded LTL Model Checking with Stable Models | <|reference_start|>Bounded LTL Model Checking with Stable Models: In this paper bounded model checking of asynchronous concurrent systems is introduced as a promising application area for answer set programming. As the model of asynchronous systems a generalisation of communicating automata, 1-safe Petri nets, are used. It is shown how a 1-safe Petri net and a requirement on the behaviour of the net can be translated into a logic program such that the bounded model checking problem for the net can be solved by computing stable models of the corresponding program. The use of the stable model semantics leads to compact encodings of bounded reachability and deadlock detection tasks as well as the more general problem of bounded model checking of linear temporal logic. Correctness proofs of the devised translations are given, and some experimental results using the translation and the Smodels system are presented.<|reference_end|> | arxiv | @article{heljanko2003bounded,
title={Bounded LTL Model Checking with Stable Models},
  author={Keijo Heljanko and Ilkka Niemel\"a},
journal={arXiv preprint arXiv:cs/0305040},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305040},
primaryClass={cs.LO cs.AI}
} | heljanko2003bounded |
arxiv-671111 | cs/0305041 | Factorization of Language Models through Backing-Off Lattices | <|reference_start|>Factorization of Language Models through Backing-Off Lattices: Factorization of statistical language models is the task that we resolve the most discriminative model into factored models and determine a new model by combining them so as to provide better estimate. Most of previous works mainly focus on factorizing models of sequential events, each of which allows only one factorization manner. To enable parallel factorization, which allows a model event to be resolved in more than one ways at the same time, we propose a general framework, where we adopt a backing-off lattice to reflect parallel factorizations and to define the paths along which a model is resolved into factored models, we use a mixture model to combine parallel paths in the lattice, and generalize Katz's backing-off method to integrate all the mixture models got by traversing the entire lattice. Based on this framework, we formulate two types of model factorizations that are used in natural language modeling.<|reference_end|> | arxiv | @article{wang2003factorization,
title={Factorization of Language Models through Backing-Off Lattices},
author={Wei Wang},
journal={arXiv preprint arXiv:cs/0305041},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305041},
primaryClass={cs.CL}
} | wang2003factorization |
arxiv-671112 | cs/0305042 | Untraceable Email Cluster Bombs: On Agent-Based Distributed Denial of Service | <|reference_start|>Untraceable Email Cluster Bombs: On Agent-Based Distributed Denial of Service: We uncover a vulnerability that allows for an attacker to perform an email-based attack on selected victims, using only standard scripts and agents. What differentiates the attack we describe from other, already known forms of distributed denial of service (DDoS) attacks is that an attacker does not need to infiltrate the network in any manner -- as is normally required to launch a DDoS attack. Thus, we see this type of attack as a poor man's DDoS. Not only is the attack easy to mount, but it is also almost impossible to trace back to the perpetrator. Along with descriptions of our attack, we demonstrate its destructive potential with (limited and contained) experimental results. We illustrate the potential impact of our attack by describing how an attacker can disable an email account by flooding its inbox; block competition during on-line auctions; harm competitors with an on-line presence; disrupt phone service to a given victim; cheat in SMS-based games; disconnect mobile corporate leaders from their networks; and disrupt electronic elections. Finally, we propose a set of countermeasures that are light-weight, do not require modifications to the infrastructure, and can be deployed in a gradual manner.<|reference_end|> | arxiv | @article{jakobsson2003untraceable,
title={Untraceable Email Cluster Bombs: On Agent-Based Distributed Denial of
Service},
author={Markus Jakobsson, Filippo Menczer},
journal={arXiv preprint arXiv:cs/0305042},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305042},
primaryClass={cs.CY cs.NI}
} | jakobsson2003untraceable |
arxiv-671113 | cs/0305043 | Modeling of aerodynamic Space-to-Surface flight with optimal trajectory for targeting | <|reference_start|>Modeling of aerodynamic Space-to-Surface flight with optimal trajectory for targeting: Modeling has been created for a Space-to-Surface system defined for an optimal trajectory for targeting in terminal phase. The modeling includes models for simulation atmosphere, speed of sound, aerodynamic flight and navigation by an infrared system. The modeling simulation includes statistical analysis of the modeling results.<|reference_end|> | arxiv | @article{gornev2003modeling,
title={Modeling of aerodynamic Space-to-Surface flight with optimal trajectory
for targeting},
author={Serge Gornev},
journal={arXiv preprint arXiv:cs/0305043},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305043},
primaryClass={cs.OH}
} | gornev2003modeling |
arxiv-671114 | cs/0305044 | Updating beliefs with incomplete observations | <|reference_start|>Updating beliefs with incomplete observations: Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete. This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grunwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and expectations, as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. We apply the new approach to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule. In the special case of Bayesian networks constructed using expert knowledge, we provide an exact algorithm for classification based on our updating rule, which has linear-time complexity for a class of networks wider than polytrees. This result is then extended to the more general framework of credal networks, where computations are often much harder than with Bayesian nets. Using an example, we show that our rule appears to provide a solid basis for reliable updating with incomplete observations, when no strong assumptions about the incompleteness mechanism are justified.<|reference_end|> | arxiv | @article{de cooman2003updating,
title={Updating beliefs with incomplete observations},
author={Gert de Cooman and Marco Zaffalon},
journal={arXiv preprint arXiv:cs/0305044},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305044},
primaryClass={cs.AI}
} | de cooman2003updating |
arxiv-671115 | cs/0305045 | Semiclassical Quantum Computation Solutions to the Count to Infinity Problem: A Brief Discussion | <|reference_start|>Semiclassical Quantum Computation Solutions to the Count to Infinity Problem: A Brief Discussion: In this paper we briefly define distance vector routing algorithms, their advantages and possible drawbacks. On these possible drawbacks, currently widely used methods split horizon and poisoned reverse are defined and compared. The count to infinity problem is specified and it is classified to be a halting problem and a proposition stating that entangled states used in quantum computation can be used to handle this problem is examined. Several solutions to this problem by using entangled states are proposed and a very brief introduction to entangled states is presented.<|reference_end|> | arxiv | @article{gokden2003semiclassical,
title={Semiclassical Quantum Computation Solutions to the Count to Infinity
Problem: A Brief Discussion},
author={Burc Gokden},
journal={arXiv preprint arXiv:cs/0305045},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305045},
primaryClass={cs.NI}
} | gokden2003semiclassical |
arxiv-671116 | cs/0305046 | Applications of Intuitionistic Logic in Answer Set Programming | <|reference_start|>Applications of Intuitionistic Logic in Answer Set Programming: We present some applications of intermediate logics in the field of Answer Set Programming (ASP). A brief, but comprehensive introduction to the answer set semantics, intuitionistic and other intermediate logics is given. Some equivalence notions and their applications are discussed. Some results on intermediate logics are shown, and applied later to prove properties of answer sets. A characterization of answer sets for logic programs with nested expressions is provided in terms of intuitionistic provability, generalizing a recent result given by Pearce. It is known that the answer set semantics for logic programs with nested expressions may select non-minimal models. Minimal models can be very important in some applications, therefore we studied them; in particular we obtain a characterization, in terms of intuitionistic logic, of answer sets which are also minimal models. We show that the logic G3 characterizes the notion of strong equivalence between programs under the semantic induced by these models. Finally we discuss possible applications and consequences of our results. They clearly state interesting links between ASP and intermediate logics, which might bring research in these two areas together.<|reference_end|> | arxiv | @article{osorio2003applications,
title={Applications of Intuitionistic Logic in Answer Set Programming},
author={Mauricio Osorio, Juan Antonio Navarro and Jose Arrazola},
journal={arXiv preprint arXiv:cs/0305046},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305046},
primaryClass={cs.LO}
} | osorio2003applications |
arxiv-671117 | cs/0305047 | CASTOR status and evolution | <|reference_start|>CASTOR status and evolution: In January 1999, CERN began to develop CASTOR ("CERN Advanced STORage manager"). This Hierarchical Storage Manager targetted at HEP applications has been in full production at CERN since May 2001. It now contains more than two Petabyte of data in roughly 9 million files. In 2002, 350 Terabytes of data were stored for COMPASS at 45 MB/s and a Data Challenge was run for ALICE in preparation for the LHC startup in 2007 and sustained a data transfer to tape of 300 MB/s for one week (180 TB). The major functionality improvements were the support for files larger than 2 GB (in collaboration with IN2P3) and the development of Grid interfaces to CASTOR: GridFTP and SRM ("Storage Resource Manager"). An ongoing effort is taking place to copy the existing data from obsolete media like 9940 A to better cost effective offerings. CASTOR has also been deployed at several HEP sites with little effort. In 2003, we plan to continue working on Grid interfaces and to improve performance not only for Central Data Recording but also for Data Analysis applications where thousands of processes possibly access the same hot data. This could imply the selection of another filesystem or the use of replication (hardware or software).<|reference_end|> | arxiv | @article{baud2003castor,
title={CASTOR status and evolution},
author={Jean-Philippe Baud, Ben Couturier, Charles Curran, Jean-Damien Durand,
Emil Knezo, Stefano Occhetti, Olof Barring},
journal={ECONFC0303241:TUDT007,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305047},
primaryClass={cs.OH}
} | baud2003castor |
arxiv-671118 | cs/0305048 | 2D Electrophoresis Gel Image and Diagnosis of a Disease | <|reference_start|>2D Electrophoresis Gel Image and Diagnosis of a Disease: The process of diagnosing a disease from the 2D gel electrophoresis image is a challenging problem. This is due to technical difficulties of generating reproducible images with a normalized form and the effect of negative stain. In this paper, we will discuss a new concept of interpreting the 2D images and overcoming the aforementioned technical difficulties using mathematical transformation. The method makes use of 2D gel images of proteins in serums and we explain a way of representing the images into vectors in order to apply machine-learning methods, such as the support vector machine.<|reference_end|> | arxiv | @article{kim20032d,
title={2D Electrophoresis Gel Image and Diagnosis of a Disease},
author={Gene Kim and MyungHo Kim},
journal={arXiv preprint arXiv:cs/0305048},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305048},
primaryClass={cs.CC cs.CV q-bio.QM}
} | kim20032d |
arxiv-671119 | cs/0305049 | The Athena Data Dictionary and Description Language | <|reference_start|>The Athena Data Dictionary and Description Language: Athena is the ATLAS off-line software framework, based upon the GAUDI architecture from LHCb. As part of ATLAS' continuing efforts to enhance and customise the architecture to meet our needs, we have developed a data object description tool suite and service for Athena. The aim is to provide a set of tools to describe, manage, integrate and use the Event Data Model at a design level according to the concepts of the Athena framework (use of patterns, relationships, ...). Moreover, to ensure stability and reusability this must be fully independent from the implementation details. After an extensive investigation into the many options, we have developed a language grammar based upon a description language (IDL, ODL) to provide support for object integration in Athena. We have then developed a compiler front end based upon this language grammar, JavaCC, and a Java Reflection API-like interface. We have then used these tools to develop several compiler back ends which meet specific needs in ATLAS such as automatic generation of object converters, and data object scripting interfaces. We present here details of our work and experience to date on the Athena Definition Language and Athena Data Dictionary.<|reference_end|> | arxiv | @article{bazan2003the,
title={The Athena Data Dictionary and Description Language},
author={Alain Bazan, Thierry Bouedo, Philippe Ghez, Massimo Marino, Craig Tull},
journal={ECONFC0303241:MOJT010,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305049},
primaryClass={cs.SE}
} | bazan2003the |
arxiv-671120 | cs/0305050 | Towards automation of computing fabrics using tools from the fabric management workpackage of the EU DataGrid project | <|reference_start|>Towards automation of computing fabrics using tools from the fabric management workpackage of the EU DataGrid project: The EU DataGrid project workpackage 4 has as an objective to provide the necessary tools for automating the management of medium size to very large computing fabrics. At the end of the second project year subsystems for centralized configuration management (presented at LISA'02) and performance/exception monitoring have been delivered. This will soon be augmented with a subsystem for node installation and service configuration, which is based on existing widely used standards where available (e.g. rpm, kickstart, init.d scripts) and clean interfaces to OS dependent components (e.g. base installation and service management). The three subsystems together allow for centralized management of very large computer farms. Finally, a fault tolerance system is being developed for tying together the above subsystems to form a complete framework for automated enterprise computing management by 3Q03. All software developed is open source covered by the EU DataGrid project license agreements. This article describes the architecture behind the designed fabric management system and the status of the different developments. It also covers the experience with an existing tool for automated configuration and installation that have been adapted and used from the beginning to manage the EU DataGrid testbed, which is now used for LHC data challenges.<|reference_end|> | arxiv | @article{barring2003towards,
title={Towards automation of computing fabrics using tools from the fabric
management workpackage of the EU DataGrid project},
author={Olof Barring},
journal={ECONFC0303241:MODT004,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305050},
primaryClass={cs.DC}
} | barring2003towards |
arxiv-671121 | cs/0305051 | Sharp Bounds for Bandwidth of Clique Products | <|reference_start|>Sharp Bounds for Bandwidth of Clique Products: The bandwidth of a graph is the labeling of vertices with minimum maximum edge difference. For many graph families this is NP-complete. A classic result computes the bandwidth for the hypercube. We generalize this result to give sharp lower bounds for products of cliques. This problem turns out to be equivalent to one in communication over multiple channels in which channels can fail and the information sent over those channels is lost. The goal is to create an encoding that minimizes the difference between the received and the original information while having as little redundancy as possible. Berger-Wolf and Reingold [2] have considered the problem for the equal size cliques (or equal capacity channels). This paper presents a tight lower bound and an algorithm for constructing the labeling for the product of any number of arbitrary size cliques.<|reference_end|> | arxiv | @article{berger-wolf2003sharp,
title={Sharp Bounds for Bandwidth of Clique Products},
author={Tanya Y. Berger-Wolf and Mitchell A. Harris},
journal={arXiv preprint arXiv:cs/0305051},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305051},
primaryClass={cs.DM}
} | berger-wolf2003sharp |
arxiv-671122 | cs/0305052 | On the Existence and Convergence Computable Universal Priors | <|reference_start|>On the Existence and Convergence Computable Universal Priors: Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations to one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of his universal semimeasure M converges rapidly to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in case of unknown mu. We investigate the existence and convergence of computable universal (semi)measures for a hierarchy of computability classes: finitely computable, estimable, enumerable, and approximable. For instance, M is known to be enumerable, but not finitely computable, and to dominate all enumerable semimeasures. We define seven classes of (semi)measures based on these four computability concepts. Each class may or may not contain a (semi)measure which dominates all elements of another class. The analysis of these 49 cases can be reduced to four basic cases, two of them being new. The results hold for discrete and continuous semimeasures. We also investigate more closely the types of convergence, possibly implied by universality: in difference and in ratio, with probability 1, in mean sum, and for Martin-Loef random sequences. We introduce a generalized concept of randomness for individual sequences and use it to exhibit difficulties regarding these issues.<|reference_end|> | arxiv | @article{hutter2003on,
title={On the Existence and Convergence Computable Universal Priors},
author={Marcus Hutter},
journal={Proceedings of the 14th International Conference on Algorithmic
Learning Theory (ALT-2003) 298-312},
year={2003},
number={IDSIA-05-03},
archivePrefix={arXiv},
eprint={cs/0305052},
primaryClass={cs.LG cs.AI cs.CC math.ST stat.TH}
} | hutter2003on |
arxiv-671123 | cs/0305053 | Developing Open Data Models for Linguistic Field Data | <|reference_start|>Developing Open Data Models for Linguistic Field Data: The UQ Flint Archive houses the field notes and elicitation recordings made by Elwyn Flint in the 1950's and 1960's during extensive linguistic survey work across Queensland, Australia. The process of digitizing the contents of the UQ Flint Archive provides a number of interesting challenges in the context of EMELD. Firstly, all of the linguistic data is for languages which are either endangered or extinct, and as such forms a valuable ethnographic repository. Secondly, the physical format of the data is itself in danger of decline, and as such digitization is an important preservation task in the short to medium term. Thirdly, the adoption of open standards for the encoding and presentation of text and audio data for linguistic field data, whilst enabling preservation, represents a new field of research in itself where best practice has yet to be formalised. Fourthly, the provision of this linguistic data online as a new data source for future research introduces concerns of data portability and longevity. This paper will outline the origins of the data model, the content creation components, presentation forms based on the data model, data capture tools and media conversion components. It will also address some of the larger questions regarding the digitization and annotation of linguistic field work based on experience gained through work with the Flint Archive contents.<|reference_end|> | arxiv | @article{hughes2003developing,
title={Developing Open Data Models for Linguistic Field Data},
author={Baden Hughes},
journal={arXiv preprint arXiv:cs/0305053},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305053},
primaryClass={cs.DL cs.CL}
} | hughes2003developing |
arxiv-671124 | cs/0305054 | A Monitoring System for the BaBar INFN Computing Cluster | <|reference_start|>A Monitoring System for the BaBar INFN Computing Cluster: Monitoring large clusters is a challenging problem. It is necessary to observe a large quantity of devices with a reasonably short delay between consecutive observations. The set of monitored devices may include PCs, network switches, tape libraries and other equipments. The monitoring activity should not impact the performances of the system. In this paper we present PerfMC, a monitoring system for large clusters. PerfMC is driven by an XML configuration file, and uses the Simple Network Management Protocol (SNMP) for data collection. SNMP is a standard protocol implemented by many networked equipments, so the tool can be used to monitor a wide range of devices. System administrators can display informations on the status of each device by connecting to a WEB server embedded in PerfMC. The WEB server can produce graphs showing the value of different monitored quantities as a function of time; it can also produce arbitrary XML pages by applying XSL Transformations to an internal XML representation of the cluster's status. XSL Transformations may be used to produce HTML pages which can be displayed by ordinary WEB browsers. PerfMC aims at being relatively easy to configure and operate, and highly efficient. It is currently being used to monitor the Italian Reprocessing farm for the BaBar experiment, which is made of about 200 dual-CPU Linux machines.<|reference_end|> | arxiv | @article{marzolla2003a,
title={A Monitoring System for the BaBar INFN Computing Cluster},
author={M. Marzolla and V. Melloni},
journal={ECONFC0303241:MOET006,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305054},
primaryClass={cs.PF}
} | marzolla2003a |
arxiv-671125 | cs/0305055 | Goodness-of-fit of the Heston model | <|reference_start|>Goodness-of-fit of the Heston model: An analytical formula for the probability distribution of stock-market returns, derived from the Heston model assuming a mean-reverting stochastic volatility, was recently proposed by Dragulescu and Yakovenko in Quantitative Finance 2002. While replicating their results, we found two significant weaknesses in their method to pre-process the data, which cast a shadow over the effective goodness-of-fit of the model. We propose a new method, more truly capturing the market, and perform a Kolmogorov-Smirnov test and a Chi Square test on the resulting probability distribution. The results raise some significant questions for large time lags -- 40 to 250 days -- where the smoothness of the data does not require such a complex model; nevertheless, we also provide some statistical evidence in favour of the Heston model for small time lags -- 1 and 5 days -- compared with the traditional Gaussian model assuming constant volatility.<|reference_end|> | arxiv | @article{daniel2003goodness-of-fit,
title={Goodness-of-fit of the Heston model},
author={Gilles Daniel},
journal={arXiv preprint arXiv:cs/0305055},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305055},
primaryClass={cs.CE}
} | daniel2003goodness-of-fit |
arxiv-671126 | cs/0305056 | Configuration Database for BaBar On-line | <|reference_start|>Configuration Database for BaBar On-line: The configuration database is one of the vital systems in the BaBar on-line system. It provides services for the different parts of the data acquisition system and control system, which require run-time parameters. The original design and implementation of the configuration database played a significant role in the successful BaBar operations since the beginning of experiment. Recent additions to the design of the configuration database provide better means for the management of data and add new tools to simplify main configuration tasks. We describe the design of the configuration database, its implementation with the Objectivity/DB object-oriented database, and our experience collected during the years of operation.<|reference_end|> | arxiv | @article{bartoldus2003configuration,
title={Configuration Database for BaBar On-line},
author={R. Bartoldus, G. Dubois-Felsmann, Y. Kolomensky, A. Salnikov},
journal={arXiv preprint arXiv:cs/0305056},
year={2003},
number={SLAC-PUB-9831},
archivePrefix={arXiv},
eprint={cs/0305056},
primaryClass={cs.DB cs.IR}
} | bartoldus2003configuration |
arxiv-671127 | cs/0305057 | The Persint visualization program for the ATLAS experiment | <|reference_start|>The Persint visualization program for the ATLAS experiment: The Persint program is designed for the three-dimensional representation of objects and for the interfacing and access to a variety of independent applications, in a fully interactive way. Facilities are provided for the spatial navigation and the definition of the visualization properties, in order to interactively set the viewing and viewed points, and to obtain the desired perspective. In parallel, applications may be launched through the use of dedicated interfaces, such as the interactive reconstruction and display of physics events. Recent developments have focalized on the interfacing to the XML ATLAS General Detector Description AGDD, making it a widely used tool for XML developers. The graphics capabilities of this program were exploited in the context of the ATLAS 2002 Muon Testbeam where it was used as an online event display, integrated in the online software framework and participating in the commissioning and debug of the detector system.<|reference_end|> | arxiv | @article{pomarede2003the,
title={The Persint visualization program for the ATLAS experiment},
author={D. Pomarede, M. Virchaux},
journal={ECONFC0303241:MOLT009,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305057},
primaryClass={cs.GR}
} | pomarede2003the |
arxiv-671128 | cs/0305058 | ATLAS and CMS applications on the WorldGrid testbed | <|reference_start|>ATLAS and CMS applications on the WorldGrid testbed: WorldGrid is an intercontinental testbed spanning Europe and the US integrating architecturally different Grid implementations based on the Globus toolkit. It has been developed in the context of the DataTAG and iVDGL projects, and successfully demonstrated during the WorldGrid demos at IST2002 (Copenhagen) and SC2002 (Baltimore). Two HEP experiments, ATLAS and CMS, successfully exploited the WorldGrid testbed for executing jobs simulating the response of their detectors to physics events produced by real collisions expected at the LHC accelerator starting from 2007. This data intensive activity has been run for many years on local dedicated computing farms consisting of hundreds of nodes and Terabytes of disk and tape storage. Within the WorldGrid testbed, for the first time HEP simulation jobs were submitted and run indifferently on US and European resources, despite their underlying different Grid implementations, and produced data which could be retrieved and further analysed on the submitting machine, or simply stored on the remote resources and registered on a Replica Catalogue which made them available to the Grid for further processing. In this contribution we describe the job submission from Europe for both ATLAS and CMS applications, performed through the GENIUS portal operating on top of an EDG User Interface submitting to an EDG Resource Broker, pointing out the chosen interoperability solutions which made US and European resources equivalent from the applications point of view, the data management in the WorldGrid environment, and the CMS specific production tools which were interfaced to the GENIUS portal.<|reference_end|> | arxiv | @article{ciaschini2003atlas,
title={ATLAS and CMS applications on the WorldGrid testbed},
author={V. Ciaschini, F. Donno, A. Fanfani, F. Fanzago, V. Garbellotto, M.
Verlato, L. Vaccarossa},
journal={ECONF C0303241:TUCP004,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305058},
primaryClass={cs.DC}
} | ciaschini2003atlas |
arxiv-671129 | cs/0305059 | EU DataGRID testbed management and support at CERN | <|reference_start|>EU DataGRID testbed management and support at CERN: In this paper we report on the first two years of running the CERN testbed site for the EU DataGRID project. The site consists of about 120 dual-processor PCs distributed over several testbeds used for different purposes: software development, system integration, and application tests. Activities at the site included test productions of MonteCarlo data for LHC experiments, tutorials and demonstrations of GRID technologies, and support for individual users analysis. This paper focuses on node installation and configuration techniques, service management, user support in a gridified environment, and includes considerations on scalability and security issues and comparisons with "traditional" production systems, as seen from the administrator point of view.<|reference_end|> | arxiv | @article{leonardi2003eu,
title={EU DataGRID testbed management and support at CERN},
author={E. Leonardi and M.W. Schulz},
journal={ECONF C0303241:THCT007,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305059},
primaryClass={cs.DC}
} | leonardi2003eu |
arxiv-671130 | cs/0305060 | Performance comparison between iSCSI and other hardware and software solutions | <|reference_start|>Performance comparison between iSCSI and other hardware and software solutions: We report on our investigations on some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. It focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts : iSCSI and other technologies and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using linux software RAID and IDE cards and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.<|reference_end|> | arxiv | @article{gug2003performance,
title={Performance comparison between iSCSI and other hardware and software
solutions},
author={Mathias Gug},
journal={ECONFC0303241:TUDP001,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305060},
primaryClass={cs.PF}
} | gug2003performance |
arxiv-671131 | cs/0305061 | A Secure Infrastructure For System Console and Reset Access | <|reference_start|>A Secure Infrastructure For System Console and Reset Access: During the last years large farms have been built using commodity hardware. This hardware lacks components for remote and automated administration. Products that can be retrofitted to these systems are either costly or inherently insecure. We present a system based on serial ports and simple machine controlled relays. We report on experience gained by setting up a 50-machine test environment as well as current work in progress in the area.<|reference_end|> | arxiv | @article{horvath2003a,
title={A Secure Infrastructure For System Console and Reset Access},
author={Andras Horvath, Emanuele Leonardi, Markus Schulz},
journal={arXiv preprint arXiv:cs/0305061},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305061},
primaryClass={cs.DC}
} | horvath2003a |
arxiv-671132 | cs/0305062 | DIAMOnDS - DIstributed Agents for MObile & Dynamic Services | <|reference_start|>DIAMOnDS - DIstributed Agents for MObile & Dynamic Services: Distributed Services Architecture with support for mobile agents between services, offer significantly improved communication and computational flexibility. The uses of agents allow execution of complex operations that involve large amounts of data to be processed effectively using distributed resources. The prototype system Distributed Agents for Mobile and Dynamic Services (DIAMOnDS), allows a service to send agents on its behalf, to other services, to perform data manipulation and processing. Agents have been implemented as mobile services that are discovered using the Jini Lookup mechanism and used by other services for task management and communication. Agents provide proxies for interaction with other services as well as specific GUI to monitor and control the agent activity. Thus agents acting on behalf of one service cooperate with other services to carry out a job, providing inter-operation of loosely coupled services in a semi-autonomous way. Remote file system access functionality has been incorporated by the agent framework and allows services to dynamically share and browse the file system resources of hosts, running the services. Generic database access functionality has been implemented in the mobile agent framework that allows performing complex data mining and processing operations efficiently in distributed system. A basic data searching agent is also implemented that performs a query based search in a file system. The testing of the framework was carried out on WAN by moving Connectivity Test agents between AgentStations in CERN, Switzerland and NUST, Pakistan.<|reference_end|> | arxiv | @article{shafi2003diamonds,
title={DIAMOnDS - DIstributed Agents for MObile & Dynamic Services},
author={Aamir Shafi, Umer Farooq, Saad Kiani, Maria Riaz, Anjum Shehzad,
Arshad Ali, Iosif Legrand, Harvey Newman},
journal={ECONFC0303241:THAT003,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305062},
primaryClass={cs.DC}
} | shafi2003diamonds |
arxiv-671133 | cs/0305063 | McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing | <|reference_start|>McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing: McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job description languages or new application level tasks.<|reference_end|> | arxiv | @article{graham2003mcrunjob:,
title={McRunjob: A High Energy Physics Workflow Planner for Grid Production
Processing},
author={Gregory E. Graham (Fermilab) Dave Evans and Iain Bertram (Lancaster
University)},
journal={ECONFC0303241:TUCT007,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305063},
primaryClass={cs.DC}
} | graham2003mcrunjob: |
arxiv-671134 | cs/0305064 | The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ | <|reference_start|>The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ: The article analyzes a proposed network topology for the ATLAS DAQ DataFlow, and identifies the Ethernet features required for a proper operation of the network: MAC address table size, switch performance in terms of throughput and latency, the use of Flow Control, Virtual LANs and Quality of Service. We investigate these features on some Ethernet switches, and conclude on their usefulness for the ATLAS DataFlow network.<|reference_end|> | arxiv | @article{stancu2003the,
title={The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ},
author={Stefan Stancu, Bob Dobinson, Matei Ciobotaru, Krzysztof Korcyl and
Emil Knezo},
journal={ECONFC0303241:MOGT010,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305064},
primaryClass={cs.NI}
} | stancu2003the |
arxiv-671135 | cs/0305065 | A Generic Multi-node State Monitoring Subsystem | <|reference_start|>A Generic Multi-node State Monitoring Subsystem: The BaBar online data acquisition (DAQ) system includes approximately fifty Unix systems that collectively implement the level-three trigger. These systems all run the same code. Each of these systems has its own state, and this state is expected to change in response to changes in the overall DAQ system. A specialized subsystem has been developed to initiate processing on this collection of systems, and to monitor them both for error conditions and to ensure that they all follow the same state trajectory within a specifiable period of time. This subsystem receives start commands from the main DAQ run control system, and reports major coherent state changes, as well as error conditions, back to the run control system. This state monitoring subsystem has the novel feature that it does not know anything about the state machines that it is monitoring, and hence does not introduce any fundamentally new state machine into the overall system. This feature makes it trivially applicable to other multi-node subsystems. Indeed it has already found a second application beyond the level-three trigger, within the BaBar experiment.<|reference_end|> | arxiv | @article{hamilton2003a,
title={A Generic Multi-node State Monitoring Subsystem},
author={James A. Hamilton, Gregory P. Dubois-Felsmann and Rainer Bartoldus},
journal={arXiv preprint arXiv:cs/0305065},
year={2003},
number={SLAC-PUB-9909},
archivePrefix={arXiv},
eprint={cs/0305065},
primaryClass={cs.DC}
} | hamilton2003a |
arxiv-671136 | cs/0305066 | The CMS Integration Grid Testbed | <|reference_start|>The CMS Integration Grid Testbed: The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.<|reference_end|> | arxiv | @article{graham2003the,
title={The CMS Integration Grid Testbed},
author={Gregory E. Graham, M. Anzar Afaq, Shafqat Aziz, L.A.T. Bauerdick,
Michael Ernst, Joseph Kaiser, Natalia Ratnikova, Hans Wenzel, Yujun Wu, Erik
Aslakson, Julian Bunn, Saima Iqbal, Iosif Legrand, Harvey Newman, Suresh
Singh, Conrad Steenberg, James Branson, Ian Fisk, James Letts, Adam Arbree,
Paul Avery, Dimitri Bourilkov, Richard Cavanaugh, Jorge Rodriguez, Suchindra
Kategari, Peter Couvares, Alan DeSmet, Miron Livny, Alain Roy, Todd
Tannenbaum},
journal={eConfC0303241:MOCT010B,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0305066},
primaryClass={cs.DC}
} | graham2003the |
arxiv-671137 | cs/0306001 | Clarens Client and Server Applications | <|reference_start|>Clarens Client and Server Applications: Several applications have been implemented with access via the Clarens web service infrastructure, including virtual organization management, JetMET physics data analysis using relational databases, and Storage Resource Broker (SRB) access. This functionality is accessible transparently from Python scripts, the Root analysis framework and from Java applications and browser applets.<|reference_end|> | arxiv | @article{steenberg2003clarens,
title={Clarens Client and Server Applications},
author={Conrad D. Steenberg and Eric Aslakson, Julian J. Bunn, Harvey B.
Newman, Michael Thomas, Frank van Lingen},
journal={arXiv preprint arXiv:cs/0306001},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306001},
primaryClass={cs.DC}
} | steenberg2003clarens |
arxiv-671138 | cs/0306002 | The Clarens web services architecture | <|reference_start|>The Clarens web services architecture: Clarens is a uniquely flexible web services infrastructure providing a unified access protocol to a diverse set of functions useful to the HEP community. It uses the standard HTTP protocol combined with application layer, certificate based authentication to provide single sign-on to individuals, organizations and hosts, with fine-grained access control to services, files and virtual organization (VO) management. This contribution describes the server functionality, while client applications are described in a subsequent talk.<|reference_end|> | arxiv | @article{steenberg2003the,
title={The Clarens web services architecture},
author={Conrad D. Steenberg and Eric Aslakson, Julian J. Bunn, Harvey B.
Newman, Michael Thomas, Frank van Lingen},
journal={arXiv preprint arXiv:cs/0306002},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306002},
primaryClass={cs.DC}
} | steenberg2003the |
arxiv-671139 | cs/0306003 | R-GMA: First results after deployment | <|reference_start|>R-GMA: First results after deployment: We describe R-GMA (Relational Grid Monitoring Architecture) which is being developed within the European DataGrid Project as a Grid Information and Monitoring System. It is based on the GMA from GGF, which is a simple Consumer-Producer model. The special strength of this implementation comes from the power of the relational model. We offer a global view of the information as if each VO had one large relational database. We provide a number of different Producer types with different characteristics; for example some support streaming of information. We also provide combined Consumer/Producers, which are able to combine information and republish it. At the heart of the system is the mediator, which for any query is able to find and connect to the best Producers to do the job. We are able to invoke MDS info-provider scripts and publish the resulting information via R-GMA in addition to having some of our own sensors. APIs are available which allow the user to deploy monitoring and information services for any application that may be needed in the future. We have used it both for information about the grid (primarily to find what services are available at any one time) and for application monitoring. R-GMA has been deployed in Grid testbeds; we describe the results and experiences of this deployment.<|reference_end|> | arxiv | @article{byrom2003r-gma:,
title={R-GMA: First results after deployment},
author={Rob Byrom, Brian Coghlan, Andrew W Cooke, Roney Cordenonsi, Linda
Cornwall, Ari Datta, Abdeslem Djaoui, Laurence Field, Steve Fisher, Steve
Hicks, Stuart Kenny, James Magowan, Werner Nutt, David O'Callaghan, Manfred
Oevers, Norbert Podhorszki, John Ryan, Manish Soni, Paul Taylor, Antony J.
Wilson and Xiaomei Zhu},
journal={arXiv preprint arXiv:cs/0306003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306003},
primaryClass={cs.DC}
} | byrom2003r-gma: |
arxiv-671140 | cs/0306004 | Managing Dynamic User Communities in a Grid of Autonomous Resources | <|reference_start|>Managing Dynamic User Communities in a Grid of Autonomous Resources: One of the fundamental concepts in Grid computing is the creation of Virtual Organizations (VO's): a set of resource consumers and providers that join forces to solve a common problem. Typical examples of Virtual Organizations include collaborations formed around the Large Hadron Collider (LHC) experiments. To date, Grid computing has been applied on a relatively small scale, linking dozens of users to a dozen resources, and management of these VO's was a largely manual operation. With the advent of large collaborations, linking more than 10000 users with 1000 sites in 150 countries, a comprehensive, automated management system is required. It should be simple enough not to deter users, while at the same time ensuring local site autonomy. The VO Management Service (VOMS), developed by the EU DataGrid and DataTAG projects[1, 2], is a secured system for managing authorization for users and resources in virtual organizations. It extends the existing Grid Security Infrastructure[3] architecture with embedded VO affiliation assertions that can be independently verified by all VO members and resource providers. Within the EU DataGrid project, Grid services for job submission, file- and database access are being equipped with fine-grained authorization systems that take VO membership into account. These also give resource owners the ability to ensure site security and enforce local access policies. This paper will describe the EU DataGrid security architecture, the VO membership service and the local site enforcement mechanisms: Local Centre Authorization Service (LCAS), Local Credential Mapping Service (LCMAPS) and the Java Trust and Authorization Manager.<|reference_end|> | arxiv | @article{alfieri2003managing,
title={Managing Dynamic User Communities in a Grid of Autonomous Resources},
author={R. Alfieri, R. Cecchini, V. Ciaschini, L. dell'Agnello, A. Gianoli, F.
Spataro, F. Bonnassieux, P. Broadfoot, G. Lowe, L. Cornwall, J. Jensen, D.
Kelsey, A. Frohner, D.L. Groep, W. Som de Cerff, M. Steenbakkers, G.
Venekamp, D. Kouril, A. McNab, O. Mulmo, M. Silander, J. Hahkala, K.
Lhorentey},
journal={ECONF C0303241:TUBT005,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306004},
primaryClass={cs.DC}
} | alfieri2003managing |
arxiv-671141 | cs/0306005 | The Virtual Monte Carlo | <|reference_start|>The Virtual Monte Carlo: The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.<|reference_end|> | arxiv | @article{hrivnacova2003the,
title={The Virtual Monte Carlo},
author={I. Hrivnacova (1), D. Adamova (2), V. Berejnoi (3), R. Brun (3), F.
Carminati (3), A. Fasso (3), E. Futo (3), A. Gheata (3), I. Gonzalez
Caballero (4), A. Morsch (3) (for the ALICE Collaboration, (1) IPN, Orsay,
France, (2) NPI, ASCR, Rez, Czech Republic, (3) CERN, Geneva, Switzerland,
(4) IFCA, Santander, Spain)},
journal={ECONF C0303241:THJT006,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306005},
primaryClass={cs.SE}
} | hrivnacova2003the |
arxiv-671142 | cs/0306006 | Experience with the Open Source based implementation for ATLAS Conditions Data Management System | <|reference_start|>Experience with the Open Source based implementation for ATLAS Conditions Data Management System: Conditions Data in high energy physics experiments is frequently seen as all the data needed for reconstruction besides the event data itself. This includes all sorts of slowly evolving data like detector alignment, calibration and robustness, and data from the detector control system. Also, every Conditions Data Object is associated with a time interval of validity and a version. Besides that, quite often it is useful to tag collections of Conditions Data Objects altogether. These issues have already been investigated and a data model has been proposed and used for different implementations based on commercial DBMSs, both at CERN and for the BaBar experiment. The special case of the ATLAS complex trigger that requires online access to calibration and alignment data poses new challenges that have to be met using a flexible and customizable solution more in the line of Open Source components. Motivated by the ATLAS challenges we have developed an alternative implementation, based on an Open Source RDBMS. Several issues were investigated and will be described in this paper: -The best way to map the conditions data model into the relational database concept considering what are foreseen as the most frequent queries. -The clustering model best suited to address the scalability problem. -Extensive tests were performed and will be described. The very promising results from these tests are attracting the attention of the HEP community and driving further developments.<|reference_end|> | arxiv | @article{amorim2003experience,
title={Experience with the Open Source based implementation for ATLAS
Conditions Data Management System},
author={A.Amorim, J.Lima, C.Oliveira, L.Pedro, N.Barros},
journal={arXiv preprint arXiv:cs/0306006},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306006},
primaryClass={cs.DB}
} | amorim2003experience |
arxiv-671143 | cs/0306007 | The first deployment of workload management services on the EU DataGrid Testbed: feedback on design and implementation | <|reference_start|>The first deployment of workload management services on the EU DataGrid Testbed: feedback on design and implementation: Application users have now been experimenting for about a year with the standardized resource brokering services provided by the 'workload management' package of the EU DataGrid project (WP1). Understanding, shaping and pushing the limits of the system has provided valuable feedback on both its design and implementation. A digest of the lessons, and "better practices", that were learned, and that were applied towards the second major release of the software, is given.<|reference_end|> | arxiv | @article{avellino2003the,
title={The first deployment of workload management services on the EU DataGrid
Testbed: feedback on design and implementation},
author={G. Avellino, S. Beco, B. Cantalupo, F. Pacini, A. Terracina, A.
Maraschini, D. Colling, S. Monforte, M. Pappalardo, L. Salconi, F. Giacomini,
E. Ronchieri, D. Kouril, A. Krenek, L. Matyska, M. Mulac, J. Pospisil, M.
Ruda, Z. Salvet, J. Sitera, M. Vocu, M. Mezzadri, F. Prelz, A. Gianelle, R.
Peluso, M. Sgaravatto, S. Barale, A. Guarise, A. Werbrouck},
journal={arXiv preprint arXiv:cs/0306007},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306007},
primaryClass={cs.DC}
} | avellino2003the |
arxiv-671144 | cs/0306008 | The new BaBar Data Reconstruction Control System | <|reference_start|>The new BaBar Data Reconstruction Control System: The BaBar experiment is characterized by extremely high luminosity, a complex detector, and a huge data volume, with increasing requirements each year. To fulfill these requirements a new control system has been designed and developed for the offline data reconstruction system. The new control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is actively distributed, enforces the separation between different processing tiers by using different naming domains, and glues them together by dedicated brokers. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes this new control system, currently in use at SLAC and Padova on ~450 CPUs organized in 12 farms.<|reference_end|> | arxiv | @article{ceseracciu2003the,
title={The new BaBar Data Reconstruction Control System},
author={A. Ceseracciu, M. Piemontese, F. Safai Tehrani, P. Elmer, D. Johnson,
T. M. Pulliam},
journal={arXiv preprint arXiv:cs/0306008},
year={2003},
number={SLAC-PUB-9873},
archivePrefix={arXiv},
eprint={cs/0306008},
primaryClass={cs.DC}
} | ceseracciu2003the |
arxiv-671145 | cs/0306009 | Virtual Data in CMS Production | <|reference_start|>Virtual Data in CMS Production: Initial applications of the GriPhyN Chimera Virtual Data System have been performed within the context of CMS Production of Monte Carlo Simulated Data. The GriPhyN Chimera system consists of four primary components: 1) a Virtual Data Language, which is used to describe virtual data products, 2) a Virtual Data Catalog, which is used to store virtual data entries, 3) an Abstract Planner, which resolves all dependencies of a particular virtual data product and forms a location and existence independent plan, 4) a Concrete Planner, which maps an abstract, logical plan onto concrete, physical grid resources accounting for staging in/out files and publishing results to a replica location service. A CMS Workflow Planner, MCRunJob, is used to generate virtual data products using the Virtual Data Language. Subsequently, a prototype workflow manager, known as WorkRunner, is used to schedule the instantiation of virtual data products across a grid.<|reference_end|> | arxiv | @article{arbree2003virtual,
title={Virtual Data in CMS Production},
author={A. Arbree, P. Avery, D. Bourilkov, R. Cavanaugh, G. Graham, S.
Katageri, J. Rodriguez, J. Voeckler, M. Wilde},
journal={ECONFC0303241:TUAT011,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306009},
primaryClass={cs.DC hep-ex}
} | arbree2003virtual |
arxiv-671146 | cs/0306010 | On multiple connectedness of regions visible due to multiple diffuse reflections | <|reference_start|>On multiple connectedness of regions visible due to multiple diffuse reflections: It is known that the region $V(s)$ of a simple polygon $P$, directly visible (illuminable) from an internal point $s$, is simply connected. Aronov et al. \cite{addpp981} established that the region $V_1(s)$ of a simple polygon visible from an internal point $s$ due to at most one diffuse reflection on the boundary of the polygon $P$, is also simply connected. In this paper we establish that the region $V_2(s)$, visible from $s$ due to at most two diffuse reflections, may be multiply connected; we demonstrate the construction of an $n$-sided simple polygon with a point $s$ inside it so that the region of $P$ visible from $s$ after at most two diffuse reflections is multiply connected.<|reference_end|> | arxiv | @article{pal2003on,
title={On multiple connectedness of regions visible due to multiple diffuse
reflections},
author={Sudebkumar Prasant Pal and Dilip Sarkar},
journal={arXiv preprint arXiv:cs/0306010},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306010},
primaryClass={cs.CG cs.DM cs.GR}
} | pal2003on |
arxiv-671147 | cs/0306011 | Grid Data Management in Action: Experience in Running and Supporting Data Management Services in the EU DataGrid Project | <|reference_start|>Grid Data Management in Action: Experience in Running and Supporting Data Management Services in the EU DataGrid Project: In the first phase of the EU DataGrid (EDG) project, a Data Management System has been implemented and provided for deployment. The components of the current EDG Testbed are: a prototype of a Replica Manager Service built around the basic services provided by Globus, a centralised Replica Catalogue to store information about physical locations of files, and the Grid Data Mirroring Package (GDMP) that is widely used in various HEP collaborations in Europe and the US for data mirroring. During this year these services have been refined and made more robust so that they are fit to be used in a pre-production environment. Application users have been using this first release of the Data Management Services for more than a year. In the paper we present the components and their interaction, our implementation and experience as well as the feedback received from our user communities. We have resolved not only issues regarding integration with other EDG service components but also many of the interoperability issues with components of our partner projects in Europe and the U.S. The paper concludes with the basic lessons learned during this operation. These conclusions provide the motivation for the architecture of the next generation of Data Management Services that will be deployed in EDG during 2003.<|reference_end|> | arxiv | @article{stockinger2003grid,
title={Grid Data Management in Action: Experience in Running and Supporting
Data Management Services in the EU DataGrid Project},
author={Heinz Stockinger, Flavia Donno, Erwin Laure, Shahzad Muzaffar, Peter
Kunszt, Giuseppe Andronico, Paul Millar},
journal={ECONFC0303241:TUAT007,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306011},
primaryClass={cs.DC}
} | stockinger2003grid |
arxiv-671148 | cs/0306012 | GraXML - Modular Geometric Modeler | <|reference_start|>GraXML - Modular Geometric Modeler: Many entities managed by HEP Software Frameworks represent spatial (3-dimensional) real objects. Effective definition, manipulation and visualization of such objects is an indispensable functionality. GraXML is a modular Geometric Modeling toolkit capable of processing geometric data of various kinds (detector geometry, event geometry) from different sources and delivering them in ways suitable for further use. Geometric data are first modeled in one of the Generic Models. Those Models are then used to populate a powerful Geometric Model based on the Java3D technology. While Java3D was originally created just to provide visualization of 3D objects, its light weight and high functionality allow an effective reuse as a general geometric component. This is possible also thanks to a large overlap between graphical and general geometric functionality and the modular design of Java3D itself. Its graphical functionalities also allow a natural visualization of all manipulated elements. All these techniques have been developed primarily (or only) for the Java environment. It is, however, possible to interface them transparently to Frameworks built in other languages, like for example C++. The GraXML toolkit has been tested with data from several sources, as for example ATLAS and ALICE detector description and ATLAS event data. Prototypes for other sources, like the Geometry Description Markup Language (GDML), exist too, and an interface to any other source is easy to add.<|reference_end|> | arxiv | @article{hrivnac2003graxml,
title={GraXML - Modular Geometric Modeler},
author={Julius Hrivnac},
journal={ECONFC0303241:THJT009,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306012},
primaryClass={cs.GR}
} | hrivnac2003graxml |
arxiv-671149 | cs/0306013 | Transparent Persistence with Java Data Objects | <|reference_start|>Transparent Persistence with Java Data Objects: A flexible and performant Persistency Service is a necessary component of any HEP Software Framework. The building of a modular, non-intrusive and performant persistency component has been shown to be a very difficult task. In the past, it was very often necessary to sacrifice modularity to achieve acceptable performance. This resulted in a strong dependency of the overall Frameworks on their Persistency subsystems. Recent development in software technology has made it possible to build a Persistency Service which can be transparently used from other Frameworks. Such a Service doesn't force strong architectural constraints on the overall Framework Architecture, while satisfying high performance requirements. The Java Data Objects (JDO) standard has already been implemented for almost all major databases. It provides truly transparent persistency for any Java object (both internal and external). Objects in other languages can be handled via transparent proxies. Being only a thin layer on top of the underlying database, JDO doesn't introduce any significant performance degradation. Aspect-Oriented Programming (AOP) also makes it possible to treat persistency as an orthogonal Aspect of the Application Framework, without polluting it with persistence-specific concepts. All these techniques have been developed primarily (or only) for the Java environment. It is, however, possible to interface them transparently to Frameworks built in other languages, like for example C++. Fully functional prototypes of flexible and non-intrusive persistency modules have been built for several other packages, as for example FreeHEP AIDA and LCG Pool AttributeSet (package Indicium).<|reference_end|> | arxiv | @article{hrivnac2003transparent,
title={Transparent Persistence with Java Data Objects},
author={Julius Hrivnac},
journal={arXiv preprint arXiv:cs/0306013},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306013},
primaryClass={cs.DB}
} | hrivnac2003transparent |
arxiv-671150 | cs/0306014 | SCRAM: Software configuration and management for the LHC Computing Grid project | <|reference_start|>SCRAM: Software configuration and management for the LHC Computing Grid project: Recently SCRAM (Software Configuration And Management) has been adopted by the applications area of the LHC computing grid project as the baseline configuration management and build support infrastructure tool. SCRAM is a software engineering tool that supports the configuration management and management processes for software development. It resolves the issues of configuration definition, assembly break-down, build, project organization, run-time environment, installation, distribution, deployment, and source code distribution. It was designed with a focus on supporting a distributed, multi-project development work-model. We will describe the underlying technology, and the solutions SCRAM offers to the above software engineering processes, while taking a user's view of the system under configuration management.<|reference_end|> | arxiv | @article{wellisch2003scram:,
title={SCRAM: Software configuration and management for the LHC Computing Grid
project},
author={J.P. Wellisch, C. Williams, and S. Ashby (CERN, Geneve, Switzerland)},
journal={arXiv preprint arXiv:cs/0306014},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306014},
primaryClass={cs.OH}
} | wellisch2003scram: |
arxiv-671151 | cs/0306015 | Computing sharp and scalable bounds on errors in approximate zeros of univariate polynomials | <|reference_start|>Computing sharp and scalable bounds on errors in approximate zeros of univariate polynomials: There are several numerical methods for computing approximate zeros of a given univariate polynomial. In this paper, we develop a simple and novel method for determining sharp upper bounds on errors in approximate zeros of a given polynomial using Rouche's theorem from complex analysis. We compute the error bounds using non-linear optimization. Our bounds are scalable in the sense that we compute sharper error bounds for better approximations of zeros. We use high precision computations using the LEDA/real floating-point filter for computing our bounds robustly.<|reference_end|> | arxiv | @article{ramakrishna2003computing,
title={Computing sharp and scalable bounds on errors in approximate zeros of
univariate polynomials},
author={P. H. D. Ramakrishna, Sudebkumar Prasant Pal, Samir Bhalla, Hironmay
Basu and Sudhir Kumar Singh},
journal={arXiv preprint arXiv:cs/0306015},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306015},
primaryClass={cs.NA}
} | ramakrishna2003computing |
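The abstract above names Rouché's theorem as the source of the error bound; the following is a minimal sketch of how such a bound is typically obtained for a simple zero, with the comparison polynomial $p'(\tilde z)(z-\tilde z)$ chosen for illustration rather than taken from the paper. Minimising the radius $r$ subject to a condition of this shape is the kind of non-linear optimisation the abstract refers to.

```latex
\max_{|z-\tilde z| = r}\ \bigl|\,p(z) - p'(\tilde z)\,(z - \tilde z)\,\bigr| \;<\; |p'(\tilde z)|\, r
\quad\Longrightarrow\quad
p \text{ has exactly one zero } z^{*} \text{ with } |z^{*} - \tilde z| < r .
```

The implication is Rouché's theorem applied on the circle $|z-\tilde z|=r$: the linear comparison polynomial has exactly one zero (at $\tilde z$) inside the disk, so $p$ does too, and the error of the approximate zero $\tilde z$ is at most $r$.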
arxiv-671152 | cs/0306016 | Modelling Biochemical Operations on RNA Secondary Structures | <|reference_start|>Modelling Biochemical Operations on RNA Secondary Structures: In this paper we model several simple biochemical operations on RNA molecules that modify their secondary structure by means of a suitable variation of Gro\ss e-Rhode's Algebra Transformation Systems.<|reference_end|> | arxiv | @article{llabres2003modelling,
title={Modelling Biochemical Operations on RNA Secondary Structures},
author={Merce Llabres, Francesc Rossello},
journal={arXiv preprint arXiv:cs/0306016},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306016},
primaryClass={cs.CE q-bio}
} | llabres2003modelling |
arxiv-671153 | cs/0306017 | Minimum Model Semantics for Logic Programs with Negation-as-Failure | <|reference_start|>Minimum Model Semantics for Logic Programs with Negation-as-Failure: We give a purely model-theoretic characterization of the semantics of logic programs with negation-as-failure allowed in clause bodies. In our semantics the meaning of a program is, as in the classical case, the unique minimum model in a program-independent ordering. We use an expanded truth domain that has an uncountable linearly ordered set of truth values between False (the minimum element) and True (the maximum), with a Zero element in the middle. The truth values below Zero are ordered like the countable ordinals. The values above Zero have exactly the reverse order. Negation is interpreted as reflection about Zero followed by a step towards Zero; the only truth value that remains unaffected by negation is Zero. We show that every program has a unique minimum model M_P, and that this model can be constructed with a T_P iteration which proceeds through the countable ordinals. Furthermore, we demonstrate that M_P can also be obtained through a model intersection construction which generalizes the well-known model intersection theorem for classical logic programming. Finally, we show that by collapsing the true and false values of the infinite-valued model M_P to (the classical) True and False, we obtain a three-valued model identical to the well-founded one.<|reference_end|> | arxiv | @article{rondogiannis2003minimum,
title={Minimum Model Semantics for Logic Programs with Negation-as-Failure},
author={Panos Rondogiannis, William W. Wadge},
journal={ACM Trans. Comput. Log. 6(2): 441-467 (2005)},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306017},
primaryClass={cs.LO cs.AI cs.PL}
} | rondogiannis2003minimum |
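The truth domain described in words in the abstract can be written out as follows; this is a formalisation read off the prose above (ordinal-indexed values below and above Zero, negation as reflection plus one step towards Zero), not a quotation of the paper's exact notation.

```latex
F_0 < F_1 < \cdots < F_\alpha < \cdots \;<\; 0 \;<\; \cdots < T_\alpha < \cdots < T_1 < T_0,
\qquad
\neg F_\alpha = T_{\alpha+1}, \quad \neg T_\alpha = F_{\alpha+1}, \quad \neg 0 = 0 .
```

Here $F_0$ plays the role of False (the minimum), $T_0$ of True (the maximum), and $\alpha$ ranges over the countable ordinals.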
arxiv-671154 | cs/0306018 | A monitoring tool for a GRID operation center | <|reference_start|>A monitoring tool for a GRID operation center: WorldGRID is an intercontinental testbed spanning Europe and the US integrating architecturally different Grid implementations based on the Globus toolkit. The WorldGRID testbed has been successfully demonstrated during the WorldGRID demos at SuperComputing 2002 (Baltimore) and IST2002 (Copenhagen) where real HEP application jobs were transparently submitted from US and Europe using "native" mechanisms and run where resources were available, independently of their location. To monitor the behavior and performance of such a testbed and spot problems as soon as they arise, DataTAG has developed the EDT-Monitor tool based on the Nagios package that allows for Virtual Organization centric views of the Grid through dynamic geographical maps. The tool has been used to spot several problems during the WorldGRID operations, such as malfunctioning Resource Brokers or Information Servers, sites not correctly configured, job dispatching problems, etc. In this paper we give an overview of the package, its features and scalability solutions and we report on the experience acquired and the benefit that a GRID operation center would gain from such a tool.<|reference_end|> | arxiv | @article{andreozzi2003a,
title={A monitoring tool for a GRID operation center},
author={S. Andreozzi, S. Fantinel, D. Rebatto, L. Vaccarossa, G. Tortone},
journal={arXiv preprint arXiv:cs/0306018},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306018},
primaryClass={cs.DC}
} | andreozzi2003a |
arxiv-671155 | cs/0306019 | Relational databases for data management in PHENIX | <|reference_start|>Relational databases for data management in PHENIX: PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and archives roughly 100TB of experimental data per year. In addition, large volumes of simulated data are produced at multiple off-site computing centers. For any file catalog to play a central role in data management it has to face problems associated with the need for distributed access and updates. To be used effectively by the hundreds of PHENIX collaborators in 12 countries the catalog must satisfy the following requirements: 1) contain up-to-date data, 2) provide fast and reliable access to the data, 3) have write permissions for the sites that store portions of data. We present an analysis of several available Relational Database Management Systems (RDBMS) to support a catalog meeting the above requirements and discuss the PHENIX experience with building and using the distributed file catalog.<|reference_end|> | arxiv | @article{sourikova2003relational,
title={Relational databases for data management in PHENIX},
author={I.Sourikova, D.Morrison},
journal={arXiv preprint arXiv:cs/0306019},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306019},
primaryClass={cs.DB}
} | sourikova2003relational |
arxiv-671156 | cs/0306020 | On the Verge of One Petabyte - the Story Behind the BaBar Database System | <|reference_start|>On the Verge of One Petabyte - the Story Behind the BaBar Database System: The BaBar database has pioneered the use of a commercial ODBMS within the HEP community. The unique object-oriented architecture of Objectivity/DB has made it possible to manage over 700 terabytes of production data generated since May'99, making the BaBar database the world's largest known database. The ongoing development includes new features, addressing the ever-increasing luminosity of the detector as well as other changing physics requirements. Significant efforts are focused on reducing space requirements and operational costs. The paper discusses our experience with developing a large scale database system, emphasizing universal aspects which may be applied to any large scale system, independently of underlying technology used.<|reference_end|> | arxiv | @article{adesanya2003on,
title={On the Verge of One Petabyte - the Story Behind the BaBar Database
System},
author={Adeyemi Adesanya, Tofigh Azemoon, Jacek Becla, Andrew Hanushevsky,
Adil Hasan, Wilko Kroeger, Artem Trunov, Daniel Wang, Igor Gaponenko, Simon
Patton, David Quarrie},
journal={arXiv preprint arXiv:cs/0306020},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306020},
primaryClass={cs.DB}
} | adesanya2003on |
arxiv-671157 | cs/0306021 | Visualization for Periodic Population Movement between Distinct Localities | <|reference_start|>Visualization for Periodic Population Movement between Distinct Localities: We present a new visualization method to summarize and present periodic population movement between distinct locations, such as floors, buildings, cities, or the like. In the specific case of this paper, we have chosen to focus on student movement between college dormitories on the Columbia University campus. The visual information is presented to the information analyst in the form of an interactive geographical map, in which specific temporal periods as well as individual buildings can be singled out for detailed data exploration. The navigational interface has been designed to specifically meet a geographical setting.<|reference_end|> | arxiv | @article{haubold2003visualization,
title={Visualization for Periodic Population Movement between Distinct
Localities},
author={Alexander Haubold},
journal={arXiv preprint arXiv:cs/0306021},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306021},
primaryClass={cs.IR}
} | haubold2003visualization |
arxiv-671158 | cs/0306022 | Techniques for effective vocabulary selection | <|reference_start|>Techniques for effective vocabulary selection: The vocabulary of a continuous speech recognition (CSR) system is a significant factor in determining its performance. In this paper, we present three principled approaches to select the target vocabulary for a particular domain by trading off between the target out-of-vocabulary (OOV) rate and vocabulary size. We evaluate these approaches against an ad-hoc baseline strategy. Results are presented in the form of OOV rate graphs plotted against increasing vocabulary size for each technique.<|reference_end|> | arxiv | @article{venkataraman2003techniques,
title={Techniques for effective vocabulary selection},
author={Anand Venkataraman and Wen Wang},
journal={arXiv preprint arXiv:cs/0306022},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306022},
primaryClass={cs.CL cs.AI}
} | venkataraman2003techniques |
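A minimal sketch of the OOV-rate-versus-vocabulary-size trade-off the abstract measures, assuming a training token list and a held-out token list are available; the greedy most-frequent-first selection below is in the spirit of the ad-hoc baseline, not one of the paper's three principled techniques.

```python
from collections import Counter

def oov_rate(vocab, heldout_tokens):
    # Fraction of held-out tokens that fall outside the vocabulary.
    misses = sum(1 for tok in heldout_tokens if tok not in vocab)
    return misses / len(heldout_tokens)

def oov_curve(train_tokens, heldout_tokens, sizes):
    # (vocabulary size, OOV rate) pairs, filling the vocabulary with the
    # most frequent training words first -- a simple baseline strategy.
    ranked = [w for w, _ in Counter(train_tokens).most_common()]
    return [(n, oov_rate(set(ranked[:n]), heldout_tokens)) for n in sizes]

train = "the cat sat on the mat the cat ran".split()
dev = "the dog sat on the rug".split()
print(oov_curve(train, dev, sizes=[1, 2, 4, 6]))
```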
arxiv-671159 | cs/0306023 | The Redesigned BaBar Event Store: Believe the Hype | <|reference_start|>The Redesigned BaBar Event Store: Believe the Hype: As the BaBar experiment progresses, it produces new and unforeseen requirements and increasing demands on capacity and feature base. The current system is being utilized well beyond its original design specifications, and has scaled appropriately, maintaining data consistency and durability. The persistent event storage system has remained largely unchanged since the initial implementation, and thus includes many design features which have become performance bottlenecks. Programming interfaces were designed before sufficient usage information became available. Performance and efficiency were traded off for added flexibility to cope with future demands. With significant experience in managing actual production data under our belt, we are now in a position to recraft the system to better suit current needs. The Event Store redesign is intended to eliminate redundant features while adding new ones, increase overall performance, and contain the physical storage cost of the world's largest database.<|reference_end|> | arxiv | @article{adesanya2003the,
title={The Redesigned BaBar Event Store: Believe the Hype},
author={Adeyemi Adesanya, Jacek Becla, Daniel Wang},
journal={arXiv preprint arXiv:cs/0306023},
year={2003},
number={SLAC-PUB-9893},
archivePrefix={arXiv},
eprint={cs/0306023},
primaryClass={cs.DB cs.DS}
} | adesanya2003the |
arxiv-671160 | cs/0306024 | Monitoring Systems and Services | <|reference_start|>Monitoring Systems and Services: The DESY Computer Center is the home of O(1000) computers supplying a wide range of different services. Monitoring such a large installation is a challenge. After a long time running an SNMP-based commercial Network Management System, the evaluation of a new System was started. There are a lot of different commercial and freeware products on the market, but none of them fully satisfied all our requirements. After re-evaluating our original requirements we selected NAGIOS as our monitoring and alarming tool. After a successful test we have been in production since autumn 2002 and are extending the service to fully support distributed monitoring and alarming.<|reference_end|> | arxiv | @article{brokmann2003monitoring,
title={Monitoring Systems and Services},
author={Alwin Brokmann},
journal={ECONFC0303241:THET003,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306024},
primaryClass={cs.OH}
} | brokmann2003monitoring |
arxiv-671161 | cs/0306025 | Permutation Generation: Two New Permutation Algorithms | <|reference_start|>Permutation Generation: Two New Permutation Algorithms: Two completely new algorithms for generating permutations, the shift-cursor algorithm and the level algorithm, and their efficient implementations are presented in this paper. One implementation of the shift-cursor algorithm gives an optimal solution to the permutation generation problem, and one implementation of the level algorithm can be used to generate random permutations.<|reference_end|> | arxiv | @article{gao2003permutation,
title={Permutation Generation: Two New Permutation Algorithms},
author={Jie Gao, and Dianjun Wang},
journal={arXiv preprint arXiv:cs/0306025},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306025},
primaryClass={cs.DS cs.CC}
} | gao2003permutation |
arxiv-671162 | cs/0306026 | BdbServer++: A User Driven Data Location and Retrieval Tool | <|reference_start|>BdbServer++: A User Driven Data Location and Retrieval Tool: The adoption of Grid technology has the potential to greatly aid the BaBar experiment. BdbServer was originally designed to extract copies of data from the Objectivity/DB database at SLAC and IN2P3. With data now stored in multiple locations in a variety of data formats, we are enhancing this tool. This will enable users to extract selected deep copies of event collections and ship them to the requested site using the facilities offered by the existing Grid infrastructure. By building on the work done by various groups in BaBar, and the European DataGrid, we have successfully expanded the capabilities of the BdbServer software. This should provide a framework for future work in data distribution.<|reference_end|> | arxiv | @article{earl2003bdbserver++:,
title={BdbServer++: A User Driven Data Location and Retrieval Tool},
author={A.D.Earl, A.Hasan and D. Boutigany},
journal={arXiv preprint arXiv:cs/0306026},
year={2003},
number={SLAC-PUB-9925},
archivePrefix={arXiv},
eprint={cs/0306026},
primaryClass={cs.IR}
} | earl2003bdbserver++: |
arxiv-671163 | cs/0306027 | HEP Applications Evaluation of the EDG Testbed and Middleware | <|reference_start|>HEP Applications Evaluation of the EDG Testbed and Middleware: Workpackage 8 of the European Datagrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment-independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of Atlas and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid 2003.<|reference_end|> | arxiv | @article{augustin2003hep,
title={HEP Applications Evaluation of the EDG Testbed and Middleware},
author={I. Augustin, F. Carminati, J. Closier, E. van Herwijnen, J. J.
Blaising, D. Boutigny, C.Charlot, V.Garonne, A. Tsaregorodtsev, K. Bos, J.
Templon, P. Capiluppi, A. Fanfani, R. Barbera, G. Negri, L. Perini, S.
Resconi, M. Sitta, M. Reale, D. Vicinanza, S. Bagnasco, P. Cerello, A.
Sciaba, O. Smirnova, D.Colling, F. Harris, S. Burke},
journal={ECONF C0303241:THCT003,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306027},
primaryClass={cs.DC}
} | augustin2003hep |
arxiv-671164 | cs/0306028 | An Abstract Programming System | <|reference_start|>An Abstract Programming System: The system PL permits the translation of abstract proofs of program correctness into programs in a variety of programming languages. A programming language satisfying certain axioms may be the target of such a translation. The system PL also permits the construction and proof of correctness of programs in an abstract programming language, and permits the translation of these programs into correct programs in a variety of languages. The abstract programming language has an imperative style of programming with assignment statements and side-effects, to allow the efficient generation of code. The abstract programs may be written by humans and then translated, avoiding the need to write the same program repeatedly in different languages or even the same language. This system uses classical logic, is conceptually simple, and permits reasoning about nonterminating programs using Scott-Strachey style denotational semantics.<|reference_end|> | arxiv | @article{plaisted2003an,
title={An Abstract Programming System},
author={David A. Plaisted},
journal={arXiv preprint arXiv:cs/0306028},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306028},
primaryClass={cs.SE cs.LO}
} | plaisted2003an |
arxiv-671165 | cs/0306029 | A Software Data Transport Framework for Trigger Applications on Clusters | <|reference_start|>A Software Data Transport Framework for Trigger Applications on Clusters: In the future ALICE heavy ion experiment at CERN's Large Hadron Collider input data rates of up to 25 GB/s have to be handled by the High Level Trigger (HLT) system, which has to scale them down to at most 1.25 GB/s before being written to permanent storage. The HLT system that is being designed to cope with these data rates consists of a large PC cluster, up to the order of 1000 nodes, connected by a fast network. For the software that will run on these nodes a flexible data transport and distribution software framework has been developed. This framework consists of a set of separate components that can be connected via a common interface, allowing different configurations for the HLT to be constructed, which are even changeable at runtime. To ensure a fault-tolerant operation of the HLT, the framework includes a basic fail-over mechanism that will be further expanded in the future, utilizing the runtime reconnection feature of the framework's component interface. First performance tests show very promising results for the software, indicating that it can achieve an event rate for the data transport sufficiently high to satisfy ALICE's requirements.<|reference_end|> | arxiv | @article{steinbeck2003a,
title={A Software Data Transport Framework for Trigger Applications on Clusters},
author={Timm M. Steinbeck, Volker Lindenstruth, Heinz Tilsner (Kirchhoff
Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany, for the
ALICE Collaboration)},
journal={ECONF C0303241:TUGT003,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306029},
primaryClass={cs.DC}
} | steinbeck2003a |
arxiv-671166 | cs/0306030 | Grid-based access control for Unix environments, Filesystems and Web Sites | <|reference_start|>Grid-based access control for Unix environments, Filesystems and Web Sites: The EU DataGrid has deployed a grid testbed at approximately 20 sites across Europe, with several hundred registered users. This paper describes authorisation systems produced by GridPP and currently used on the EU DataGrid Testbed, including local Unix pool accounts and fine-grained access control with Access Control Lists and Grid-aware filesystems, fileservers and web development environments.<|reference_end|> | arxiv | @article{mcnab2003grid-based,
title={Grid-based access control for Unix environments, Filesystems and Web
Sites},
author={A. McNab},
journal={ECONFC0303241:TUBT008,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306030},
primaryClass={cs.DC}
} | mcnab2003grid-based |
arxiv-671167 | cs/0306031 | The FRED Event Display: an Extensible HepRep Client for GLAST | <|reference_start|>The FRED Event Display: an Extensible HepRep Client for GLAST: A new graphics client prototype for the HepRep protocol is presented. Based on modern toolkits and high-level languages (C++ and Ruby), Fred is an experiment to test the applicability of scripting facilities to the high energy physics event display domain. Its flexible structure, extensibility and the use of the HepRep protocol are key features for its use in the astroparticle experiment GLAST.<|reference_end|> | arxiv | @article{frailis2003the,
title={The FRED Event Display: an Extensible HepRep Client for GLAST},
author={Marco Frailis, Riccardo Giannitrapani},
journal={ECONFC0303241:MOLT010,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306031},
primaryClass={cs.GR}
} | frailis2003the |
arxiv-671168 | cs/0306032 | Length-Based Attacks for Certain Group Based Encryption Rewriting Systems | <|reference_start|>Length-Based Attacks for Certain Group Based Encryption Rewriting Systems: In this note, we describe a probabilistic attack on public key cryptosystems based on the word/conjugacy problems for finitely presented groups of the type proposed recently by Anshel, Anshel and Goldfeld. In such a scheme, one makes use of the property that in the given group the word problem has a polynomial time solution, while the conjugacy problem has no known polynomial solution. An example is the braid group from topology in which the word problem is solvable in polynomial time while the only known solutions to the conjugacy problem are exponential. The attack in this paper is based on having a canonical representative of each string relative to which a length function may be computed. Hence the term length attack. Such canonical representatives are known to exist for the braid group.<|reference_end|> | arxiv | @article{hughes2003length-based,
title={Length-Based Attacks for Certain Group Based Encryption Rewriting
Systems},
author={James Hughes and Allen Tannenbaum},
journal={J. Hughes, A. Tannenbaum, Length-Based Attacks for Certain Group
Based Encryption Rewriting Systems, Workshop SECI02 SEcurite de la
Communication sur Internet, September 2002, Tunis, Tunisia},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306032},
primaryClass={cs.CR}
} | hughes2003length-based |
arxiv-671169 | cs/0306033 | Multi-valued Connectives for Fuzzy Sets | <|reference_start|>Multi-valued Connectives for Fuzzy Sets: We present a procedure for the construction of multi-valued t-norms and t-conorms. Our procedure makes use of a pair of single-valued t-norms and the respective dual t-conorms and produces interval-valued t-norms and t-conorms. In this manner we combine desirable characteristics of different t-norms and t-conorms; if we use the t-norm min and t-conorm max, then the resulting structure is a superlattice, i.e. the multivalued analog of a lattice.<|reference_end|> | arxiv | @article{kehagias2003multi-valued,
title={Multi-valued Connectives for Fuzzy Sets},
author={Ath. Kehagias and K. Serafimidis},
journal={arXiv preprint arXiv:cs/0306033},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306033},
primaryClass={cs.OH}
} | kehagias2003multi-valued |
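As background for the construction sketched in the abstract, here is one common way to turn a pair of ordinary t-norms into an interval-valued one; whether this matches the authors' exact procedure is an assumption, and the function names are illustrative.

```python
def interval_tnorm(t_lower, t_upper):
    # Combine two t-norms with t_lower <= t_upper pointwise on [0,1]:
    # apply t_lower to the lower endpoints and t_upper to the upper ones,
    # so well-formed intervals map to well-formed intervals.
    def op(x, y):
        (a, b), (c, d) = x, y
        return (t_lower(a, c), t_upper(b, d))
    return op

t_prod = lambda a, b: a * b   # product t-norm, pointwise <= min on [0,1]
t_min = min                   # Goedel t-norm

op = interval_tnorm(t_prod, t_min)
print(op((0.4, 0.6), (0.5, 0.9)))   # (0.2, 0.6)
```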
arxiv-671170 | cs/0306034 | A ROOT/IO Based Software Framework for CMS | <|reference_start|>A ROOT/IO Based Software Framework for CMS: The implementation of persistency in the Compact Muon Solenoid (CMS) Software Framework uses the core I/O functionality of ROOT. We will discuss the current ROOT/IO implementation, its evolution from the prior Objectivity/DB implementation, and the plans and ongoing work for the conversion to "POOL", provided by the LHC Computing Grid (LCG) persistency project.<|reference_end|> | arxiv | @article{tanenbaum2003a,
title={A ROOT/IO Based Software Framework for CMS},
author={William Tanenbaum},
journal={ECONFC0303241:TUKT010,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306034},
primaryClass={cs.DB}
} | tanenbaum2003a |
arxiv-671171 | cs/0306035 | A Transformational Decision Procedure for Non-Clausal Propositional Formulas | <|reference_start|>A Transformational Decision Procedure for Non-Clausal Propositional Formulas: A decision procedure for detecting valid propositional formulas is presented. It is based on the Davis-Putnam method and deals with propositional formulas that are initially converted to negational normal form. This procedure splits variables but, in contrast to other decision procedures based on the Davis-Putnam method, it does not branch. Instead, this procedure iteratively makes validity-preserving transformations of fragments of the formula. The transformations involve only a minimal formula part containing occurrences of the selected variable. Selection of the best variable for splitting is crucial in this decision procedure - it may shorten the decision process dramatically. A variable whose splitting leads to a minimal size of the transformed formula is selected. Also, the decision procedure performs plenty of optimizations based on calculation of delta-sets. Some optimizations lead to removing fragments of the formula. Others detect variables for which a single truth value assignment is sufficient. The latest information about this research can be found at http://www.sakharov.net/valid.html<|reference_end|> | arxiv | @article{sakharov2003a,
title={A Transformational Decision Procedure for Non-Clausal Propositional
Formulas},
author={Alexander Sakharov},
journal={arXiv preprint arXiv:cs/0306035},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306035},
primaryClass={cs.LO cs.CC}
} | sakharov2003a |
arxiv-671172 | cs/0306036 | Sequence Prediction based on Monotone Complexity | <|reference_start|>Sequence Prediction based on Monotone Complexity: This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence behavior is unclear. In probabilistic environments, neither the posterior nor the losses converge, in general.<|reference_end|> | arxiv | @article{hutter2003sequence,
title={Sequence Prediction based on Monotone Complexity},
author={Marcus Hutter},
journal={Proceedings of the 16th Annual Conference on Learning Theory
(COLT-2003) 506-521},
year={2003},
number={IDSIA-09-03},
archivePrefix={arXiv},
eprint={cs/0306036},
primaryClass={cs.AI cs.IT cs.LG math.IT math.ST stat.TH}
} | hutter2003sequence |
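For readers less familiar with the notation, the quantities discussed in the abstract are the monotone complexity and the induced conditional predictor; these definitions are standard in the algorithmic-probability literature and are stated here as background rather than quoted from the paper.

```latex
Km(x) \;=\; -\log m(x),
\qquad
m(x_{t+1} \mid x_{1:t}) \;=\; \frac{m(x_{1:t}\, x_{t+1})}{m(x_{1:t})},
```

where $m$ is the monotone (semi)measure; the "posteriors" whose convergence is analysed above are these conditionals.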
arxiv-671173 | cs/0306037 | Flow-based analysis of Internet traffic | <|reference_start|>Flow-based analysis of Internet traffic: We propose flow-based analysis to estimate the quality of an Internet connection. Using results from queuing theory we compare two expressions for backbone traffic that have different scopes of applicability. A curve that shows the dependence of the utilization of a link on the number of active flows in it describes different states of the network. We propose a methodology for plotting such a curve using data received from a Cisco router by the NetFlow protocol, determining the working area and the overloading point of the network. Our test is an easy way to find the right moment for upgrading the backbone.<|reference_end|> | arxiv | @article{afanasiev2003flow-based,
title={Flow-based analysis of Internet traffic},
author={F. Afanasiev, A. Petrov, V.Grachev, A. Sukhov},
journal={published in Russian Edition of Network Computing, 5(98), 2003,
pp.92-95},
year={2003},
number={SamGAPS-03-14},
archivePrefix={arXiv},
eprint={cs/0306037},
primaryClass={cs.NI}
} | afanasiev2003flow-based |
arxiv-671174 | cs/0306038 | Quanta: a Language for Modeling and Manipulating Information Structures | <|reference_start|>Quanta: a Language for Modeling and Manipulating Information Structures: We present a theory for modeling the structure of information and a language (Quanta) expressing the theory. Unlike Shannon's information theory, which focuses on the amount of information in an information system, we focus on the structure of the information in the system. For example, we can model the information structure corresponding to an algorithm or a physical process such as the structure of a quantum interaction. After a brief discussion of the relation between an evolving state-system and an information structure, we develop an algebra of information pieces (infons) to represent the structure of systems where descriptions of complex systems are constructed from expressions involving descriptions of simpler information systems. We map the theory to the Von Neumann computing model of sequences/conditionals/repetitions, and to the class/object theory of object-oriented programming (OOP).<|reference_end|> | arxiv | @article{long2003quanta:,
title={Quanta: a Language for Modeling and Manipulating Information Structures},
author={Bruce Long},
journal={arXiv preprint arXiv:cs/0306038},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306038},
primaryClass={cs.LO cs.PL}
} | long2003quanta: |
arxiv-671175 | cs/0306039 | Bayesian Information Extraction Network | <|reference_start|>Bayesian Information Extraction Network: Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various aspects of language in one model. Many existing algorithms developed for learning and inference in DBNs are applicable to probabilistic language modeling. To demonstrate the potential of DBNs for natural language processing, we employ a DBN in an information extraction task. We show how to assemble a wealth of emerging linguistic instruments for shallow parsing, syntactic and semantic tagging, morphological decomposition, named entity recognition, etc., in order to incrementally build a robust information extraction system. Our method outperforms previously published results on an established benchmark domain.<|reference_end|> | arxiv | @article{peshkin2003bayesian,
title={Bayesian Information Extraction Network},
author={Leonid Peshkin and Avi Pfeffer},
journal={Intl. Joint Conference on Artificial Intelligence, 2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306039},
primaryClass={cs.CL cs.AI cs.IR}
} | peshkin2003bayesian |
arxiv-671176 | cs/0306040 | The Open Language Archives Community: An infrastructure for distributed archiving of language resources | <|reference_start|>The Open Language Archives Community: An infrastructure for distributed archiving of language resources: New ways of documenting and describing language via electronic media coupled with new ways of distributing the results via the World-Wide Web offer a degree of access to language resources that is unparalleled in history. At the same time, the proliferation of approaches to using these new technologies is causing serious problems relating to resource discovery and resource creation. This article describes the infrastructure that the Open Language Archives Community (OLAC) has built in order to address these problems. Its technical and usage infrastructures address problems of resource discovery by constructing a single virtual library of distributed resources. Its governance infrastructure addresses problems of resource creation by providing a mechanism through which the language-resource community can express its consensus on recommended best practices.<|reference_end|> | arxiv | @article{simons2003the,
title={The Open Language Archives Community: An infrastructure for distributed
archiving of language resources},
author={Gary Simons and Steven Bird},
journal={arXiv preprint arXiv:cs/0306040},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306040},
primaryClass={cs.CL cs.DL}
} | simons2003the |
arxiv-671177 | cs/0306041 | Monodic temporal resolution | <|reference_start|>Monodic temporal resolution: Until recently, First-Order Temporal Logic (FOTL) has been little understood. While it is well known that the full logic has no finite axiomatisation, a more detailed analysis of fragments of the logic was not previously available. However, a breakthrough by Hodkinson et al., identifying a finitely axiomatisable fragment, termed the monodic fragment, has led to improved understanding of FOTL. Yet, in order to utilise these theoretical advances, it is important to have appropriate proof techniques for the monodic fragment. In this paper, we modify and extend the clausal temporal resolution technique, originally developed for propositional temporal logics, to enable its use in such monodic fragments. We develop a specific normal form for formulae in FOTL, and provide a complete resolution calculus for formulae in this form. Not only is this clausal resolution technique useful as a practical proof technique for certain monodic classes, but the use of this approach provides us with increased understanding of the monodic fragment. In particular, we here show how several features of monodic FOTL are established as corollaries of the completeness result for the clausal temporal resolution method. These include definitions of new decidable monodic classes, simplification of existing monodic classes by reductions, and completeness of clausal temporal resolution in the case of monodic logics with expanding domains, a case with much significance in both theory and practice.<|reference_end|> | arxiv | @article{degtyarev2003monodic,
title={Monodic temporal resolution},
author={Anatoly Degtyarev, Michael Fisher, and Boris Konev},
journal={arXiv preprint arXiv:cs/0306041},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306041},
primaryClass={cs.LO}
} | degtyarev2003monodic |
arxiv-671178 | cs/0306042 | IGUANA Architecture, Framework and Toolkit for Interactive Graphics | <|reference_start|>IGUANA Architecture, Framework and Toolkit for Interactive Graphics: IGUANA is a generic interactive visualisation framework based on a C++ component model. It provides powerful user interface and visualisation primitives in a way that is not tied to any particular physics experiment or detector design. The article describes interactive visualisation tools built using IGUANA for the CMS and D0 experiments, as well as generic GEANT4 and GEANT3 applications. It covers features of the graphical user interfaces, 3D and 2D graphics, high-quality vector graphics output for print media, various textual, tabular and hierarchical data views, and integration with the application through control panels, a command line and different multi-threading models.<|reference_end|> | arxiv | @article{alverson2003iguana,
title={IGUANA Architecture, Framework and Toolkit for Interactive Graphics},
author={George Alverson, Giulio Eulisse, Shahzad Muzaffar, Ianna Osborne,
Lassi A. Tuura, Lucas Taylor},
journal={ECONFC0303241:MOLT008,2003},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306042},
primaryClass={cs.SE cs.GR}
} | alverson2003iguana |
arxiv-671179 | cs/0306043 | Skip Graphs | <|reference_start|>Skip Graphs: Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, constructing, inserting new nodes into, searching a skip graph, and detecting and repairing errors in the data structure introduced by node failures can be done using simple and straightforward algorithms.<|reference_end|> | arxiv | @article{aspnes2003skip,
title={Skip Graphs},
author={James Aspnes and Gauri Shah},
journal={arXiv preprint arXiv:cs/0306043},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306043},
primaryClass={cs.DS cs.DC}
} | aspnes2003skip |
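To make the skip-list ancestry of the structure concrete, here is a minimal single-machine sketch of the level-by-level search pattern; it is not the authors' distributed algorithm (which replaces the single tower of express lists with per-node membership vectors), and the node layout is an assumption for illustration.

```python
class Node:
    def __init__(self, key, levels):
        self.key = key
        # right[i] is the successor at level i (None at the end of that list).
        self.right = [None] * levels

def search(start, target):
    # Classic skip-list descent: move right while not overshooting the
    # target, then drop one level; skip graphs use the same walk over
    # doubly linked lists defined by membership-vector prefixes.
    node = start
    for level in reversed(range(len(start.right))):
        while node.right[level] is not None and node.right[level].key <= target:
            node = node.right[level]
    return node if node.key == target else None

# Tiny hand-built example: keys 1..4 with an express level over {1, 3}.
n1, n2, n3, n4 = (Node(k, 2) for k in (1, 2, 3, 4))
n1.right[0], n2.right[0], n3.right[0] = n2, n3, n4
n1.right[1] = n3
print(search(n1, 4).key)   # 4
```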
arxiv-671180 | cs/0306044 | Compositional competitiveness for distributed algorithms | <|reference_start|>Compositional competitiveness for distributed algorithms: We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.<|reference_end|> | arxiv | @article{aspnes2003compositional,
title={Compositional competitiveness for distributed algorithms},
author={James Aspnes and Orli Waarts},
journal={arXiv preprint arXiv:cs/0306044},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306044},
primaryClass={cs.DS cs.DC}
} | aspnes2003compositional |
arxiv-671181 | cs/0306045 | The WorldGrid transatlantic testbed: a successful example of Grid interoperability across EU and US domains | <|reference_start|>The WorldGrid transatlantic testbed: a successful example of Grid interoperability across EU and US domains: The European DataTAG project has taken a major step towards making the concept of a worldwide computing Grid a reality. In collaboration with the companion U.S. project iVDGL, DataTAG has realized an intercontinental testbed spanning Europe and the U.S. integrating architecturally different Grid implementations based on the Globus toolkit. The WorldGrid testbed has been successfully demonstrated at SuperComputing 2002 and IST2002 where real HEP application jobs were transparently submitted from U.S. and Europe using native mechanisms and run where resources were available, independently of their location. In this paper we describe the architecture of the WorldGrid testbed, the problems encountered and the solutions taken in realizing such a testbed. With our work we present an important step towards interoperability of Grid middleware developed and deployed in Europe and the U.S.. Some of the solutions developed in WorldGrid will be adopted by the LHC Computing Grid first service. To the best of our knowledge, this is the first large-scale testbed that combines middleware components and makes them work together.<|reference_end|> | arxiv | @article{donno2003the,
title={The WorldGrid transatlantic testbed: a successful example of Grid
interoperability across EU and U.S. domains},
author={Flavia Donno, Vincenzo Ciaschini, David Rebatto, Luca Vaccarossa,
Marco Verlato},
journal={arXiv preprint arXiv:cs/0306045},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306045},
primaryClass={cs.DC}
} | donno2003the |
arxiv-671182 | cs/0306046 | Compact Approximation of Lattice Functions with Applications to Large-Alphabet Text Search | <|reference_start|>Compact Approximation of Lattice Functions with Applications to Large-Alphabet Text Search: We propose a very simple randomised data structure that stores an approximation from above of a lattice-valued function. Computing the function value requires a constant number of steps, and the error probability can be balanced with space usage, much like in Bloom filters. The structure is particularly well suited for functions that are bottom on most of their domain. We then show how to use our methods to store in a compact way the bad-character shift function for variants of the Boyer-Moore text search algorithms. As a result, we obtain practical implementations of these algorithms that can be used with large alphabets, such as Unicode collation elements, with a small setup time. The ideas described in this paper have been implemented as free software under the GNU General Public License within the MG4J project (http://mg4j.dsi.unimi.it/).<|reference_end|> | arxiv | @article{boldi2003compact,
title={Compact Approximation of Lattice Functions with Applications to
Large-Alphabet Text Search},
author={Paolo Boldi and Sebastiano Vigna},
journal={arXiv preprint arXiv:cs/0306046},
year={2003},
number={292-03},
archivePrefix={arXiv},
eprint={cs/0306046},
primaryClass={cs.DS}
} | boldi2003compact |
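
A minimal sketch of how a Bloom-filter-like approximation *from above* of a lattice-valued function can work, in the spirit of the record above. The lattice choice (non-negative integers with max as join and min as meet, bottom = 0), the parameters and the class name are assumptions for illustration, not the authors' MG4J implementation.

```python
import random

class LatticeFilter:
    """Stores an approximation from above of a mostly-bottom lattice-valued function."""

    def __init__(self, m=1024, k=3, seed=0):
        self.m, self.k = m, k
        self.cells = [0] * m                            # every cell starts at bottom
        rnd = random.Random(seed)
        self.salts = [rnd.getrandbits(32) for _ in range(k)]

    def _positions(self, key):
        return [hash((salt, key)) % self.m for salt in self.salts]

    def store(self, key, value):
        """Record f(key) = value by joining it into k cells."""
        for p in self._positions(key):
            self.cells[p] = max(self.cells[p], value)   # join

    def lookup(self, key):
        """Meet of the k cells: never below f(key), equal to it with high probability."""
        return min(self.cells[p] for p in self._positions(key))

f = LatticeFilter()
f.store("a", 3)        # a few keys carry non-default values
f.store("b", 7)
print(f.lookup("a"))   # 3 (exact with high probability)
print(f.lookup("z"))   # usually 0 (bottom) for unstored keys, never below the true value
```

Because store() only ever raises cell values and lookup() takes the meet over the same cells, the answer can only err upwards, which is the approximation-from-above guarantee the abstract mentions.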
arxiv-671183 | cs/0306047 | Concrete uses of XML in software development and data analysis | <|reference_start|>Concrete uses of XML in software development and data analysis: XML is now becoming an industry standard for data description and exchange. Despite this there are still some questions about how or if this technology can be useful in High Energy Physics software development and data analysis. This paper aims to answer these questions by demonstrating how XML is used in the IceCube software development system, data handling and analysis. It does this by first surveying the concepts and tools that make up the XML technology. It then goes on to discuss concrete examples of how these concepts and tools are used to speed up software development in IceCube and what are the benefits of using XML in IceCube's data handling and analysis chain. The overall aim of this paper it to show that XML does have many benefits to bring High Energy Physics software development and data analysis.<|reference_end|> | arxiv | @article{patton2003concrete,
arxiv-671183 | cs/0306047 | Concrete uses of XML in software development and data analysis | <|reference_start|>Concrete uses of XML in software development and data analysis: XML is now becoming an industry standard for data description and exchange. Despite this there are still some questions about how or if this technology can be useful in High Energy Physics software development and data analysis. This paper aims to answer these questions by demonstrating how XML is used in the IceCube software development system, data handling and analysis. It does this by first surveying the concepts and tools that make up the XML technology. It then goes on to discuss concrete examples of how these concepts and tools are used to speed up software development in IceCube and what the benefits of using XML are in IceCube's data handling and analysis chain. The overall aim of this paper is to show that XML does have many benefits to bring to High Energy Physics software development and data analysis.<|reference_end|> | arxiv | @article{patton2003concrete,
title={Concrete uses of XML in software development and data analysis},
author={S. Patton},
journal={arXiv preprint arXiv:cs/0306047},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306047},
primaryClass={cs.SE}
} | patton2003concrete |
arxiv-671184 | cs/0306048 | Parallel netCDF: A Scientific High-Performance I/O Interface | <|reference_start|>Parallel netCDF: A Scientific High-Performance I/O Interface: Dataset storage, exchange, and access play a critical role in scientific applications. For such purposes netCDF serves as a portable and efficient file format and programming interface, which is popular in numerous scientific application domains. However, the original interface does not provide an efficient mechanism for parallel data storage and access. In this work, we present a new parallel interface for writing and reading netCDF datasets. This interface is derived with minimum changes from the serial netCDF interface but defines semantics for parallel access and is tailored for high performance. The underlying parallel I/O is achieved through MPI-IO, allowing for dramatic performance gains through the use of collective I/O optimizations. We compare the implementation strategies with HDF5 and analyze both. Our tests indicate programming convenience and significant I/O performance improvement with this parallel netCDF interface.<|reference_end|> | arxiv | @article{li2003parallel,
title={Parallel netCDF: A Scientific High-Performance I/O Interface},
author={Jianwei Li, Wei-keng Liao, Alok Choudhary, Robert Ross, Rajeev Thakur,
William Gropp, Rob Latham},
journal={arXiv preprint arXiv:cs/0306048},
year={2003},
number={Preprint ANL/MCS-P1048-0503},
archivePrefix={arXiv},
eprint={cs/0306048},
primaryClass={cs.DC}
} | li2003parallel |
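
The record above concerns the PnetCDF C interface; the sketch below is not that API but an illustration of the same idea (each MPI rank writing its own slice of one shared netCDF variable with collective I/O) using the netCDF4-python bindings with mpi4py. It assumes a parallel-enabled netCDF4 build; the file name and sizes are arbitrary.

```python
import numpy as np
from mpi4py import MPI
from netCDF4 import Dataset

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
n_per_rank = 1000

nc = Dataset("output.nc", "w", parallel=True, comm=comm, info=MPI.Info())
nc.createDimension("x", n_per_rank * nprocs)
var = nc.createVariable("data", "f8", ("x",))
var.set_collective(True)                      # use collective MPI-IO for the writes

start = rank * n_per_rank                     # each rank owns a contiguous slice
var[start:start + n_per_rank] = np.full(n_per_rank, rank, dtype="f8")

nc.close()
# run with e.g.:  mpiexec -n 4 python write_parallel.py
```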
arxiv-671185 | cs/0306049 | Hyperdense Coding Modulo 6 with Filter-Machines | <|reference_start|>Hyperdense Coding Modulo 6 with Filter-Machines: We show how one can encode $n$ bits with $n^{o(1)}$ ``wave-bits'' using still hypothetical filter-machines (here $o(1)$ denotes a positive quantity which goes to 0 as $n$ goes to infity). Our present result - in a completely different computational model - significantly improves on the quantum superdense-coding breakthrough of Bennet and Wiesner (1992) which encoded $n$ bits by $\lceil{n/2}\rceil$ quantum-bits. We also show that our earlier algorithm (Tech. Rep. TR03-001, ECCC, See ftp://ftp.eccc.uni-trier.de/pub/eccc/reports/2003/TR03-001/index.html) which used $n^{o(1)}$ muliplication for computing a representation of the dot-product of two $n$-bit sequences modulo 6, and, similarly, an algorithm for computing a representation of the multiplication of two $n\times n$ matrices with $n^{2+o(1)}$ multiplications can be turned to algorithms computing the exact dot-product or the exact matrix-product with the same number of multiplications with filter-machines. With classical computation, computing the dot-product needs $\Omega(n)$ multiplications and the best known algorithm for matrix multiplication (D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symbolic Comput., 9(3):251--280, 1990) uses $n^{2.376}$ multiplications.<|reference_end|> | arxiv | @article{grolmusz2003hyperdense,
arxiv-671185 | cs/0306049 | Hyperdense Coding Modulo 6 with Filter-Machines | <|reference_start|>Hyperdense Coding Modulo 6 with Filter-Machines: We show how one can encode $n$ bits with $n^{o(1)}$ ``wave-bits'' using still hypothetical filter-machines (here $o(1)$ denotes a positive quantity which goes to 0 as $n$ goes to infinity). Our present result - in a completely different computational model - significantly improves on the quantum superdense-coding breakthrough of Bennet and Wiesner (1992) which encoded $n$ bits by $\lceil{n/2}\rceil$ quantum-bits. We also show that our earlier algorithm (Tech. Rep. TR03-001, ECCC, See ftp://ftp.eccc.uni-trier.de/pub/eccc/reports/2003/TR03-001/index.html) which used $n^{o(1)}$ multiplications for computing a representation of the dot-product of two $n$-bit sequences modulo 6, and, similarly, an algorithm for computing a representation of the multiplication of two $n\times n$ matrices with $n^{2+o(1)}$ multiplications can be turned into algorithms computing the exact dot-product or the exact matrix-product with the same number of multiplications with filter-machines. With classical computation, computing the dot-product needs $\Omega(n)$ multiplications and the best known algorithm for matrix multiplication (D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symbolic Comput., 9(3):251--280, 1990) uses $n^{2.376}$ multiplications.<|reference_end|> | arxiv | @article{grolmusz2003hyperdense,
title={Hyperdense Coding Modulo 6 with Filter-Machines},
author={Vince Grolmusz},
journal={arXiv preprint arXiv:cs/0306049},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306049},
primaryClass={cs.CC cs.DB}
} | grolmusz2003hyperdense |
arxiv-671186 | cs/0306050 | Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition | <|reference_start|>Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition: We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.<|reference_end|> | arxiv | @article{sang2003introduction,
title={Introduction to the CoNLL-2003 Shared Task: Language-Independent Named
Entity Recognition},
author={Erik F. Tjong Kim Sang and Fien De Meulder},
journal={Proceedings of CoNLL-2003, Edmonton, Canada, 2003, pp. 142-147},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306050},
primaryClass={cs.CL}
} | sang2003introduction |
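
For readers unfamiliar with the shared task's data layout, the sketch below parses the four-column CoNLL-2003 format (word, part-of-speech tag, chunk tag, named-entity tag; one token per line, blank lines between sentences, -DOCSTART- lines between documents). The embedded sample and the reader are illustrative only; the task description remains the authoritative reference for the format and tag sets.

```python
SAMPLE = """\
U.N. NNP I-NP I-ORG
official NN I-NP O
Ekeus NNP I-NP I-PER
heads VBZ I-VP O
for IN I-PP O
Baghdad NNP I-NP I-LOC
. . O O
"""

def read_conll(lines):
    """Group the four-column lines into sentences of token dictionaries."""
    sentence, sentences = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("-DOCSTART-"):
            if sentence:
                sentences.append(sentence)
                sentence = []
            continue
        word, pos, chunk, ner = line.split()
        sentence.append({"word": word, "pos": pos, "chunk": chunk, "ner": ner})
    if sentence:
        sentences.append(sentence)
    return sentences

for tok in read_conll(SAMPLE.splitlines())[0]:
    if tok["ner"] != "O":
        print(tok["word"], tok["ner"])   # U.N. I-ORG / Ekeus I-PER / Baghdad I-LOC
```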
arxiv-671187 | cs/0306051 | A data Grid testbed environment in Gigabit WAN with HPSS | <|reference_start|>A data Grid testbed environment in Gigabit WAN with HPSS: For data analysis of large-scale experiments such as LHC Atlas and other Japanese high energy and nuclear physics projects, we have constructed a Grid test bed at ICEPP and KEK. These institutes are connected to national scientific gigabit network backbone called SuperSINET. In our test bed, we have installed NorduGrid middleware based on Globus, and connected 120TB HPSS at KEK as a large scale data store. Atlas simulation data at ICEPP has been transferred and accessed using SuperSINET. We have tested various performances and characteristics of HPSS through this high speed WAN. The measurement includes comparison between computing and storage resources are tightly coupled with low latency LAN and long distant WAN.<|reference_end|> | arxiv | @article{manabe2003a,
title={A data Grid testbed environment in Gigabit WAN with HPSS},
author={Atsushi Manabe, Kohki Ishikawa, Yoshihiko Itoh, Setsuya Kawabata,
Tetsuro Mashimo, Youhei Morita, Hiroshi Sakamoto, Takashi Sasaki, Hiroyuki
Sato, Junichi Tanaka, Ikuo Ueda, Yoshiyuki Watase, Satomi Yamamoto, Shigeo
Yashiro},
journal={arXiv preprint arXiv:cs/0306051},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306051},
primaryClass={cs.DC}
} | manabe2003a |
arxiv-671188 | cs/0306052 | ATLAS Data Challenge 1 | <|reference_start|>ATLAS Data Challenge 1: In 2002 the ATLAS experiment started a series of Data Challenges (DC) of which the goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 was run during summer 2002, and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single particle events were fully simulated. Over a period of about 40 calendar days 71000 CPU-days were used producing 30 Tbytes of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~ 4000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and the participating sites were certified. These productions were already partly performed by using different flavours of Grid middleware at ~ 20 sites.<|reference_end|> | arxiv | @article{poulard2003atlas,
title={ATLAS Data Challenge 1},
author={Gilbert Poulard},
journal={arXiv preprint arXiv:cs/0306052},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306052},
primaryClass={cs.DC}
} | poulard2003atlas |
arxiv-671189 | cs/0306053 | A Community Authorization Service for Group Collaboration | <|reference_start|>A Community Authorization Service for Group Collaboration: In "Grids" and "collaboratories," we find distributed communities of resource providers and resource consumers, within which often complex and dynamic policies govern who can use which resources for which purpose. We propose a new approach to the representation, maintenance, and enforcement of such policies that provides a scalable mechanism for specifying and enforcing these policies. Our approach allows resource providers to delegate some of the authority for maintaining fine-grained access control policies to communities, while still maintaining ultimate control over their resources. We also describe a prototype implementation of this approach and an application in a data management context.<|reference_end|> | arxiv | @article{pearlman2003a,
title={A Community Authorization Service for Group Collaboration},
author={Laura Pearlman, Von Welch, Ian Foster, Carl Kesselman, and Steven
Tuecke},
journal={arXiv preprint arXiv:cs/0306053},
year={2003},
number={Preprint ANL/MCS-P1042-0502},
archivePrefix={arXiv},
eprint={cs/0306053},
primaryClass={cs.DC cs.CR}
} | pearlman2003a |
arxiv-671190 | cs/0306054 | OVAL: the CMS Testing Robot | <|reference_start|>OVAL: the CMS Testing Robot: Oval is a testing tool which help developers to detect unexpected changes in the behavior of their software. It is able to automatically compile some test programs, to prepare on the fly the needed configuration files, to run the tests within a specified Unix environment, and finally to analyze the output and check expectations. Oval does not provide utility code to help writing the tests, therefore it is quite independant of the programming/scripting language of the software to be tested. It can be seen as a kind of robot which apply the tests and warn about any unexpected change in the output. Oval was developed by the LLR laboratory for the needs of the CMS experiment, and it is now recommended by the CERN LCG project.<|reference_end|> | arxiv | @article{chamont2003oval:,
arxiv-671190 | cs/0306054 | OVAL: the CMS Testing Robot | <|reference_start|>OVAL: the CMS Testing Robot: Oval is a testing tool which helps developers to detect unexpected changes in the behavior of their software. It is able to automatically compile some test programs, to prepare on the fly the needed configuration files, to run the tests within a specified Unix environment, and finally to analyze the output and check expectations. Oval does not provide utility code to help writing the tests, therefore it is quite independent of the programming/scripting language of the software to be tested. It can be seen as a kind of robot which applies the tests and warns about any unexpected change in the output. Oval was developed by the LLR laboratory for the needs of the CMS experiment, and it is now recommended by the CERN LCG project.<|reference_end|> | arxiv | @article{chamont2003oval:,
title={OVAL: the CMS Testing Robot},
author={D. Chamont and C. Charlot},
journal={arXiv preprint arXiv:cs/0306054},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306054},
primaryClass={cs.SE}
} | chamont2003oval: |
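
Not OVAL itself, whose internals are not given here, but a minimal Python sketch of the general "run the test, capture the output, check expectations" loop the abstract describes; the command and the regex-based expectation list are placeholders.

```python
import re
import subprocess

def run_test(command, expected_patterns):
    """Run one test program and return the expectations its output failed to match."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    output = result.stdout + result.stderr
    return [p for p in expected_patterns if not re.search(p, output, re.MULTILINE)]

# Illustrative use: the command and expectations are made up, not CMS code.
missing = run_test(
    "echo 'reconstructed 42 tracks'; echo 'chi2/ndf = 1.02'",
    [r"reconstructed \d+ tracks", r"chi2/ndf = 1\.0\d"],
)
print("OK" if not missing else f"unexpected change, unmatched: {missing}")
```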
arxiv-671191 | cs/0306055 | BlueOx: A Java Framework for Distributed Data Analysis | <|reference_start|>BlueOx: A Java Framework for Distributed Data Analysis: High energy physics experiments including those at the Tevatron and the upcoming LHC require analysis of large data sets which are best handled by distributed computation. We present the design and development of a distributed data analysis framework based on Java. Analysis jobs run through three phases: discovery of data sets available, brokering/assignment of data sets to analysis servers, and job execution. Each phase is represented by a set of abstract interfaces. These interfaces allow different techniques to be used without modification to the framework. For example, the communications interface has been implemented by both a packet protocol and a SOAP-based scheme. User authentication can be provided either through simple passwords or through a GSI certificates system. Data from CMS HCAL Testbeams, the L3 LEP experiment, and a hypothetical high-energy linear collider experiment have been interfaced with the framework.<|reference_end|> | arxiv | @article{mans2003blueox:,
title={BlueOx: A Java Framework for Distributed Data Analysis},
author={Jeremiah Mans and David Bengali},
journal={arXiv preprint arXiv:cs/0306055},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306055},
primaryClass={cs.DC}
} | mans2003blueox: |
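
A hypothetical sketch (in Python rather than the framework's Java) of the three-phase structure and pluggable interfaces described above; every class and method name is invented for illustration and is not BlueOx's actual API.

```python
from abc import ABC, abstractmethod

class Communicator(ABC):
    """Pluggable transport (the record mentions a packet protocol and a SOAP scheme)."""
    @abstractmethod
    def send(self, server, message): ...

class Authenticator(ABC):
    """Pluggable authentication (simple passwords or GSI certificates)."""
    @abstractmethod
    def credentials(self): ...

class PasswordAuth(Authenticator):
    def __init__(self, user, password):
        self._cred = {"user": user, "password": password}
    def credentials(self):
        return self._cred

class LoopbackCommunicator(Communicator):
    """Stand-in transport that answers discovery queries from a local table."""
    def __init__(self, catalog):
        self.catalog = catalog
    def send(self, server, message):
        if message["op"] == "list":
            return self.catalog
        print(f"submitting to {server}: {message['op']} on {message['dataset']}")

class AnalysisJob:
    def __init__(self, comm, auth):
        self.comm, self.auth = comm, auth
    def discover(self):
        """Phase 1: find out which data sets are available and where."""
        return self.comm.send("catalog", {"op": "list", "auth": self.auth.credentials()})
    def broker(self, datasets):
        """Phase 2: assign each data set to a server (trivial first-choice policy here)."""
        return {ds["name"]: ds["servers"][0] for ds in datasets}
    def execute(self, assignment, code):
        """Phase 3: ship the analysis code to each assigned server."""
        for dataset, server in assignment.items():
            self.comm.send(server, {"op": "run", "dataset": dataset, "code": code})

catalog = [{"name": "hcal_testbeam_2002", "servers": ["server_a", "server_b"]}]
job = AnalysisJob(LoopbackCommunicator(catalog), PasswordAuth("alice", "secret"))
job.execute(job.broker(job.discover()), code="fill_histograms()")
```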
arxiv-671192 | cs/0306056 | Twelve Ways to Build CMS Crossings from ROOT Files | <|reference_start|>Twelve Ways to Build CMS Crossings from ROOT Files: The simulation of CMS raw data requires the random selection of one hundred and fifty pileup events from a very large set of files, to be superimposed in memory to the signal event. The use of ROOT I/O for that purpose is quite unusual: the events are not read sequentially but pseudo-randomly, they are not processed one by one in memory but by bunches, and they do not contain orthodox ROOT objects but many foreign objects and templates. In this context, we have compared the performance of ROOT containers versus the STL vectors, and the use of trees versus a direct storage of containers. The strategy with best performances is by far the one using clones within trees, but it stays hard to tune and very dependant on the exact use-case. The use of STL vectors could bring more easily similar performances in a future ROOT release.<|reference_end|> | arxiv | @article{chamont2003twelve,
arxiv-671192 | cs/0306056 | Twelve Ways to Build CMS Crossings from ROOT Files | <|reference_start|>Twelve Ways to Build CMS Crossings from ROOT Files: The simulation of CMS raw data requires the random selection of one hundred and fifty pileup events from a very large set of files, to be superimposed in memory on the signal event. The use of ROOT I/O for that purpose is quite unusual: the events are not read sequentially but pseudo-randomly, they are not processed one by one in memory but by bunches, and they do not contain orthodox ROOT objects but many foreign objects and templates. In this context, we have compared the performance of ROOT containers versus the STL vectors, and the use of trees versus a direct storage of containers. The strategy with the best performance is by far the one using clones within trees, but it stays hard to tune and very dependent on the exact use-case. The use of STL vectors could more easily bring similar performance in a future ROOT release.<|reference_end|> | arxiv | @article{chamont2003twelve,
title={Twelve Ways to Build CMS Crossings from ROOT Files},
author={D. Chamont and C. Charlot},
journal={arXiv preprint arXiv:cs/0306056},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306056},
primaryClass={cs.DB}
} | chamont2003twelve |
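
A toy sketch of the access pattern the record above describes: picking on the order of 150 pileup events pseudo-randomly from a large store and superimposing them on a signal event. It uses plain Python lists rather than ROOT I/O, and the sort-for-locality touch is an assumption for illustration, not one of the twelve strategies the paper compares.

```python
import random

N_PILEUP = 150

def build_crossing(signal_event, pileup_store, rng=random):
    """Superimpose N_PILEUP randomly chosen minimum-bias events on one signal event."""
    picks = [rng.randrange(len(pileup_store)) for _ in range(N_PILEUP)]
    picks.sort()                       # sorted access keeps reads closer to sequential
    crossing = list(signal_event)
    for index in picks:
        crossing.extend(pileup_store[index])
    return crossing

# toy data: each "event" is just a list of hit identifiers
pileup_store = [[f"hit_{i}_{j}" for j in range(3)] for i in range(10_000)]
signal = ["sig_hit_0", "sig_hit_1"]
print(len(build_crossing(signal, pileup_store)))   # 2 + 150*3 = 452
```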
arxiv-671193 | cs/0306057 | IceCube's Development Environment | <|reference_start|>IceCube's Development Environment: When the IceCube experiment started serious software development it needed a development environment in which both its developers and clients could work and that would encourage and support a good software development process. Some of the key features that IceCube wanted in such a environment were: the separation of the configuration and build tools; inclusion of an issue tracking system; support for the Unified Change Model; support for unit testing; and support for continuous building. No single, affordable, off the shelf, environment offered all these features. However there are many open source tools that address subsets of these feature, therefore IceCube set about selecting those tools which it could use in developing its own environment and adding its own tools where no suitable tools were found. This paper outlines the tools that where chosen, what are their responsibilities in the development environment and how they fit together. The complete environment will be demonstrated with a walk through of single cycle of the development process.<|reference_end|> | arxiv | @article{patton2003icecube's,
arxiv-671193 | cs/0306057 | IceCube's Development Environment | <|reference_start|>IceCube's Development Environment: When the IceCube experiment started serious software development it needed a development environment in which both its developers and clients could work and that would encourage and support a good software development process. Some of the key features that IceCube wanted in such an environment were: the separation of the configuration and build tools; inclusion of an issue tracking system; support for the Unified Change Model; support for unit testing; and support for continuous building. No single, affordable, off-the-shelf environment offered all these features. However there are many open source tools that address subsets of these features, therefore IceCube set about selecting those tools which it could use in developing its own environment and adding its own tools where no suitable tools were found. This paper outlines the tools that were chosen, what their responsibilities are in the development environment and how they fit together. The complete environment will be demonstrated with a walk-through of a single cycle of the development process.<|reference_end|> | arxiv | @article{patton2003icecube's,
title={IceCube's Development Environment},
author={S. Patton, D. Glowacki},
journal={arXiv preprint arXiv:cs/0306057},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306057},
primaryClass={cs.SE}
} | patton2003icecube's |
arxiv-671194 | cs/0306058 | Installing, Running and Maintaining Large Linux Clusters at CERN | <|reference_start|>Installing, Running and Maintaining Large Linux Clusters at CERN: Having built up Linux clusters to more than 1000 nodes over the past five years, we already have practical experience confronting some of the LHC scale computing challenges: scalability, automation, hardware diversity, security, and rolling OS upgrades. This paper describes the tools and processes we have implemented, working in close collaboration with the EDG project [1], especially with the WP4 subtask, to improve the manageability of our clusters, in particular in the areas of system installation, configuration, and monitoring. In addition to the purely technical issues, providing shared interactive and batch services which can adapt to meet the diverse and changing requirements of our users is a significant challenge. We describe the developments and tuning that we have introduced on our LSF based systems to maximise both responsiveness to users and overall system utilisation. Finally, this paper will describe the problems we are facing in enlarging our heterogeneous Linux clusters, the progress we have made in dealing with the current issues and the steps we are taking to gridify the clusters<|reference_end|> | arxiv | @article{bahyl2003installing,,
title={Installing, Running and Maintaining Large Linux Clusters at CERN},
author={Vladimir Bahyl, Benjamin Chardi, Jan van Eldik, Ulrich Fuchs, Thorsten
Kleinwort, Martin Murth, Tim Smith},
journal={arXiv preprint arXiv:cs/0306058},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306058},
primaryClass={cs.DC}
} | bahyl2003installing, |
arxiv-671195 | cs/0306059 | The Use of HepRep in GLAST | <|reference_start|>The Use of HepRep in GLAST: HepRep is a generic, hierarchical format for description of graphics representables that can be augmented by physics information and relational properties. It was developed for high energy physics event display applications and is especially suited to client/server or component frameworks. The GLAST experiment, an international effort led by NASA for a gamma-ray telescope to launch in 2006, chose HepRep to provide a flexible, extensible and maintainable framework for their event display without tying their users to any one graphics application. To support HepRep in their GUADI infrastructure, GLAST developed a HepRep filler and builder architecture. The architecture hides the details of XML and CORBA in a set of base and helper classes allowing physics experts to focus on what data they want to represent. GLAST has two GAUDI services: HepRepSvc, which registers HepRep fillers in a global registry and allows the HepRep to be exported to XML, and CorbaSvc, which allows the HepRep to be published through a CORBA interface and which allows the client application to feed commands back to GAUDI (such as start next event, or run some GAUDI algorithm). GLAST's HepRep solution gives users a choice of client applications, WIRED (written in Java) or FRED (written in C++ and Ruby), and leaves them free to move to any future HepRep-compliant event display.<|reference_end|> | arxiv | @article{perl2003the,
title={The Use of HepRep in GLAST},
author={J. Perl, R. Giannitrapani, M. Frailis},
journal={arXiv preprint arXiv:cs/0306059},
year={2003},
number={SLAC-PUB-9908},
archivePrefix={arXiv},
eprint={cs/0306059},
primaryClass={cs.GR}
} | perl2003the |
arxiv-671196 | cs/0306060 | DIRAC - Distributed Infrastructure with Remote Agent Control | <|reference_start|>DIRAC - Distributed Infrastructure with Remote Agent Control: This paper describes DIRAC, the LHCb Monte Carlo production system. DIRAC has a client/server architecture based on: Compute elements distributed among the collaborating institutes; Databases for production management, bookkeeping (the metadata catalogue) and software configuration; Monitoring and cataloguing services for updating and accessing the databases. Locally installed software agents implemented in Python monitor the local batch queue, interrogate the production database for any outstanding production requests using the XML-RPC protocol and initiate the job submission. The agent checks and, if necessary, installs any required software automatically. After the job has processed the events, the agent transfers the output data and updates the metadata catalogue. DIRAC has been successfully installed at 18 collaborating institutes, including the DataGRID, and has been used in recent Physics Data Challenges. In the near to medium term future we must use a mixed environment with different types of grid middleware or no middleware. We describe how this flexibility has been achieved and how ubiquitously available grid middleware would improve DIRAC.<|reference_end|> | arxiv | @article{brook2003dirac,
title={DIRAC - Distributed Infrastructure with Remote Agent Control},
author={N.Brook, A.Bogdanchikov, A.Buckley, J.Closier, U.Egede, M.Frank,
D.Galli, M.Gandelman, V.Garonne, C.Gaspar, R.Graciani Diaz, K.Harrison, E.van
Herwijnen, A.Khan, S.Klous, I.Korolko, G.Kuznetsov, F.Loverre, U.Marconi,
J.P.Palacios, G.N.Patrick, A.Pickford, S.Ponce, V.Romanovski, J.J.Saborido,
M.Schmelling, A.Soroko, A.Tsaregorodtsev, V.Vagnoni, A.Washbrook},
journal={arXiv preprint arXiv:cs/0306060},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306060},
primaryClass={cs.DC}
} | brook2003dirac |
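
A minimal sketch of the kind of pull-mode agent loop the DIRAC record describes (poll the production database over XML-RPC, fetch work, submit to the local batch system, report back). The endpoint URL, method names and batch command are hypothetical, not DIRAC's actual interfaces, and the loop obviously needs a real server to do anything.

```python
import subprocess
import time
import xmlrpc.client

PRODUCTION_DB = "https://example.org/dirac/production"   # hypothetical endpoint

def agent_loop(site_name, poll_interval=300):
    server = xmlrpc.client.ServerProxy(PRODUCTION_DB)
    while True:
        for request in server.get_outstanding_requests(site_name):   # hypothetical method
            job_script = server.get_job_script(request["id"])        # hypothetical method
            with open("job.sh", "w") as f:
                f.write(job_script)
            subprocess.run(["bsub", "job.sh"])          # e.g. an LSF queue; site-specific
            server.mark_submitted(request["id"], site_name)          # hypothetical method
        time.sleep(poll_interval)

if __name__ == "__main__":
    agent_loop("example-site")
```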
arxiv-671197 | cs/0306061 | Operational Aspects of Dealing with the Large BaBar Data Set | <|reference_start|>Operational Aspects of Dealing with the Large BaBar Data Set: To date, the BaBar experiment has stored over 0.7PB of data in an Objectivity/DB database. Approximately half this data-set comprises simulated data of which more than 70% has been produced at more than 20 collaborating institutes outside of SLAC. The operational aspects of managing such a large data set and providing access to the physicists in a timely manner is a challenging and complex problem. We describe the operational aspects of managing such a large distributed data-set as well as importing and exporting data from geographically spread BaBar collaborators. We also describe problems common to dealing with such large datasets.<|reference_end|> | arxiv | @article{azemoon2003operational,
title={Operational Aspects of Dealing with the Large BaBar Data Set},
author={Tofigh Azemoon, Adil Hasan, Wilko Kroeger, Artem Trunov},
journal={arXiv preprint arXiv:cs/0306061},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306061},
primaryClass={cs.DB cs.DC}
} | azemoon2003operational |
arxiv-671198 | cs/0306062 | Learning to Order Facts for Discourse Planning in Natural Language Generation | <|reference_start|>Learning to Order Facts for Discourse Planning in Natural Language Generation: This paper presents a machine learning approach to discourse planning in natural language generation. More specifically, we address the problem of learning the most natural ordering of facts in discourse plans for a specific domain. We discuss our methodology and how it was instantiated using two different machine learning algorithms. A quantitative evaluation performed in the domain of museum exhibit descriptions indicates that our approach performs significantly better than manually constructed ordering rules. Being retrainable, the resulting planners can be ported easily to other similar domains, without requiring language technology expertise.<|reference_end|> | arxiv | @article{dimitromanolaki2003learning,
title={Learning to Order Facts for Discourse Planning in Natural Language
Generation},
author={Aggeliki Dimitromanolaki and Ion Androutsopoulos},
journal={Proceedings of EACL 2003 Workshop on Natural Language Generation},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306062},
primaryClass={cs.CL}
} | dimitromanolaki2003learning |
arxiv-671199 | cs/0306063 | A Model for Grid User Management | <|reference_start|>A Model for Grid User Management: Registration and management of users in a large scale Grid computing environment presents new challenges that are not well addressed by existing protocols. Within a single Virtual Organization (VO), thousands of users will potentially need access to hundreds of computing sites, and the traditional model where users register for local accounts at each site will present significant scaling problems. However, computing sites must maintain control over access to the site and site policies generally require individual local accounts for every user. We present here a model that allows users to register once with a VO and yet still provides all of the computing sites the information they require with the required level of trust. We have developed tools to allow sites to automate the management of local accounts and the mappings between Grid identities and local accounts.<|reference_end|> | arxiv | @article{baker2003a,
title={A Model for Grid User Management},
author={Richard Baker, Dantong Yu, Tomasz Wlodek},
journal={arXiv preprint arXiv:cs/0306063},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306063},
primaryClass={cs.DC}
} | baker2003a |
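
One concrete way the automation described above can surface at a site is the regeneration of a Globus-style grid-mapfile (one line per user: quoted certificate DN, then the local account) from VO membership data. The sketch below assumes a hard-coded member list and a trivial one-to-one account policy; a real site would pull the list from the VO service and apply its own policy.

```python
VO_MEMBERS = [
    ("/DC=org/DC=example/OU=People/CN=Alice Analyst", "atlas001"),
    ("/DC=org/DC=example/OU=People/CN=Bob Builder",   "atlas002"),
]

def write_gridmap(members, path="/etc/grid-security/grid-mapfile"):
    """Write a grid-mapfile mapping each Grid identity to its local account."""
    with open(path, "w") as f:
        for dn, local_account in members:
            f.write(f'"{dn}" {local_account}\n')

write_gridmap(VO_MEMBERS, path="grid-mapfile")   # write locally for the example
print(open("grid-mapfile").read())
```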
arxiv-671200 | cs/0306064 | Exploiting peer group concept for adaptive and highly available services | <|reference_start|>Exploiting peer group concept for adaptive and highly available services: This paper presents a prototype for redundant, highly available and fault tolerant peer to peer framework for data management. Peer to peer computing is gaining importance due to its flexible organization, lack of central authority, distribution of functionality to participating nodes and ability to utilize unused computational resources. Emergence of GRID computing has provided much needed infrastructure and administrative domain for peer to peer computing. The components of this framework exploit peer group concept to scope service and information search, arrange services and information in a coherent manner, provide selective redundancy and ensure availability in face of failure and high load conditions. A prototype system has been implemented using JXTA peer to peer technology and XML is used for service description and interfaces, allowing peers to communicate with services implemented in various platforms including web services and JINI services. It utilizes code mobility to achieve role interchange among services and ensure dynamic group membership. Security is ensured by using Public Key Infrastructure (PKI) to implement group level security policies for membership and service access.<|reference_end|> | arxiv | @article{jan2003exploiting,
title={Exploiting peer group concept for adaptive and highly available services},
author={Muhammad Asif Jan (Centre for European Nuclear Research (CERN)
Switzerland) Fahd Ali Zahid, Mohammad Moazam Fraz (Foundation University,
Islamabad, Pakistan) Arshad Ali (National University of Science and
Technology, Pakistan)},
journal={arXiv preprint arXiv:cs/0306064},
year={2003},
archivePrefix={arXiv},
eprint={cs/0306064},
primaryClass={cs.DC}
} | jan2003exploiting |