corpus_id: string (7–12 chars)
paper_id: string (9–16 chars)
title: string (1–261 chars)
abstract: string (70–4.02k chars)
source: string (1 distinct value)
bibtex: string (208–20.9k chars)
citation_key: string (6–100 chars)
arxiv-671201
cs/0306065
POOL File Catalog, Collection and Metadata Components
<|reference_start|>POOL File Catalog, Collection and Metadata Components: The POOL project is the common persistency framework for the LHC experiments to store petabytes of experiment data and metadata in a distributed and grid-enabled way. POOL is a hybrid event store consisting of a data streaming layer and a relational layer. This paper describes the design of the file catalog, collection and metadata components which are not part of the data streaming layer of POOL and outlines how POOL aims to provide transparent and efficient data access for a wide range of environments and use cases - ranging from a large production site down to a single disconnected laptop. The file catalog is the central POOL component translating logical data references to physical data files in a grid environment. POOL collections with their associated metadata provide an abstract way of accessing experiment data via their logical grouping into sets of related data objects.<|reference_end|>
arxiv
@article{cioffi2003pool, title={POOL File Catalog, Collection and Metadata Components}, author={C.Cioffi, S.Eckmann, M.Girone, J.Hrivnac, D.Malon, H.Schmuecker, A.Vaniachine, J.Wojcieszuk, Z.Xie}, journal={arXiv preprint arXiv:cs/0306065}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306065}, primaryClass={cs.DB} }
cioffi2003pool
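
The central file-catalog idea in the abstract above, resolving a logical data reference to one of several physical replicas, can be illustrated with a minimal sketch. This is an illustration only; the class and method names below are hypothetical and do not reflect the real POOL interfaces.

```python
# Hypothetical sketch of a logical-to-physical file catalog (not the POOL API).
import uuid

class FileCatalog:
    def __init__(self):
        self._lfn_to_guid = {}    # logical file name -> unique file identifier
        self._guid_to_pfns = {}   # file identifier -> list of physical replicas

    def register_file(self, lfn, pfn):
        guid = str(uuid.uuid4())
        self._lfn_to_guid[lfn] = guid
        self._guid_to_pfns[guid] = [pfn]
        return guid

    def add_replica(self, lfn, pfn):
        self._guid_to_pfns[self._lfn_to_guid[lfn]].append(pfn)

    def lookup(self, lfn):
        """Return all physical replicas of a logical file."""
        return list(self._guid_to_pfns[self._lfn_to_guid[lfn]])

catalog = FileCatalog()
catalog.register_file("lfn:/atlas/run1/events.root", "gsiftp://site-a/data/events.root")
catalog.add_replica("lfn:/atlas/run1/events.root", "file:///scratch/events.root")
print(catalog.lookup("lfn:/atlas/run1/events.root"))
```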
arxiv-671202
cs/0306066
The COMPASS Event Store in 2002
<|reference_start|>The COMPASS Event Store in 2002: COMPASS, the fixed-target experiment at CERN studying the structure of the nucleon and spectroscopy, collected over 260 TB during the summer 2002 run. All these data, together with reconstructed event information, were stored from the beginning in a database infrastructure based on Objectivity/DB and on the hierarchical storage manager CASTOR. The experience gained in using the database is reviewed and the evolution of the system outlined.<|reference_end|>
arxiv
@article{duic2003the, title={The COMPASS Event Store in 2002}, author={Venicio Duic, Massimo Lamanna}, journal={arXiv preprint arXiv:cs/0306066}, year={2003}, doi={10.1109/TNS.2004.832645}, archivePrefix={arXiv}, eprint={cs/0306066}, primaryClass={cs.DB} }
duic2003the
arxiv-671203
cs/0306067
The AliEn system, status and perspectives
<|reference_start|>The AliEn system, status and perspectives: AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components, uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards.<|reference_end|>
arxiv
@article{buncic2003the, title={The AliEn system, status and perspectives}, author={P. Buncic, P. Saiz, A. J. Peters}, journal={arXiv preprint arXiv:cs/0306067}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306067}, primaryClass={cs.DC} }
buncic2003the
arxiv-671204
cs/0306068
AliEn Resource Brokers
<|reference_start|>AliEn Resource Brokers: AliEn (ALICE Environment) is a lightweight GRID framework developed by the Alice Collaboration. When the experiment starts running, it will collect data at a rate of approximately 2 PB per year, producing O(10^9) files per year. All these files, including all simulated events generated during the preparation phase of the experiment, must be accounted for and reliably tracked in the GRID environment. The backbone of AliEn is a distributed file catalogue, which associates a universal logical file name with physical file names for each dataset and provides transparent access to datasets independently of physical location. The file replication and transport is carried out under the control of the File Transport Broker. In addition, the file catalogue maintains information about every job running in the system. The jobs are distributed by the Job Resource Broker that is implemented using a simplified pull (as opposed to traditional push) architecture. This paper describes the Job and File Transport Resource Brokers and shows that a similar architecture can be applied to solve both problems.<|reference_end|>
arxiv
@article{saiz2003alien, title={AliEn Resource Brokers}, author={Pablo Saiz, Predrag Buncic, Andreas J. Peters}, journal={arXiv preprint arXiv:cs/0306068}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306068}, primaryClass={cs.DC} }
saiz2003alien
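
The simplified pull architecture mentioned in the abstract above can be sketched as follows: site agents ask a central task queue for work matching their local resources, instead of a broker pushing jobs to sites. All names and the matching rule are assumptions made for illustration, not the real AliEn interfaces.

```python
# Hypothetical sketch of a pull-model job broker (not the AliEn API).
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    requirements: dict            # e.g. {"platform": "linux", "min_disk_gb": 10}

@dataclass
class TaskQueue:
    waiting: list = field(default_factory=list)

    def submit(self, job):
        self.waiting.append(job)

    def pull(self, site_resources):
        """Called by a site agent: hand out the first job this site can run."""
        for job in self.waiting:
            ok = all(site_resources.get(k) == v
                     or (isinstance(v, (int, float)) and site_resources.get(k, 0) >= v)
                     for k, v in job.requirements.items())
            if ok:
                self.waiting.remove(job)
                return job
        return None               # nothing suitable; the agent retries later

broker = TaskQueue()
broker.submit(Job(1, {"platform": "linux", "min_disk_gb": 10}))
print(broker.pull({"platform": "linux", "min_disk_gb": 50}))   # hands out Job 1
```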
arxiv-671205
cs/0306069
Distributed Offline Data Reconstruction in BaBar
<|reference_start|>Distributed Offline Data Reconstruction in BaBar: The BaBar experiment at SLAC is in its fourth year of running. The data processing system has been continuously evolving to meet the challenges of higher luminosity running and the increasing bulk of data to re-process each year. To meet these goals a two-pass processing architecture has been adopted, where 'rolling calibrations' are quickly calculated on a small fraction of the events in the first pass and the bulk data reconstruction done in the second. This allows for quick detector feedback in the first pass and allows for the parallelization of the second pass over two or more separate farms. This two-pass system allows also for distribution of processing farms off-site. The first such site has been setup at INFN Padova. The challenges met here were many. The software was ported to a full Linux-based, commodity hardware system. The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network link. A system for quality control and export of the processed data back to SLAC was developed. Between SLAC and Padova we are currently running three pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs each. The pass-two farms can process between 2 and 4 million events per day. Details about the implementation and performance of the system will be presented.<|reference_end|>
arxiv
@article{pulliam2003distributed, title={Distributed Offline Data Reconstruction in BaBar}, author={Teela Pulliam, Peter Elmer, Alvise Dorigo}, journal={arXiv preprint arXiv:cs/0306069}, year={2003}, number={SLAC-PUB-9903}, archivePrefix={arXiv}, eprint={cs/0306069}, primaryClass={cs.DC} }
pulliam2003distributed
arxiv-671206
cs/0306070
Fine-Grained Authorization for Job and Resource Management Using Akenti and the Globus Toolkit
<|reference_start|>Fine-Grained Authorization for Job and Resource Management Using Akenti and the Globus Toolkit: As the Grid paradigm is adopted as a standard way of sharing remote resources across organizational domains, the need for fine-grained access control to these resources increases. This paper presents an authorization solution for job submission and control, developed as part of the National Fusion Collaboratory, that uses the Globus Toolkit 2 and the Akenti authorization service in order to perform fine-grained authorization of job and resource management requests in a Grid environment. At job startup, it allows the system to evaluate a user's Resource Specification Language request against authorization policies on resource usage (determining how many CPUs or how much memory a user can use on a given resource, or which executables the user can run). Furthermore, based on authorization policies, it allows other virtual organization members to manage the user's job.<|reference_end|>
arxiv
@article{thompson2003fine-grained, title={Fine-Grained Authorization for Job and Resource Management Using Akenti and the Globus Toolkit}, author={M. Thompson, A. Essiari, K. Keahey, V. Welch, S.Lang, B. Liu}, journal={ECONF C0303241:TUBT006,2003}, year={2003}, number={LBNL-52976}, archivePrefix={arXiv}, eprint={cs/0306070}, primaryClass={cs.DC cs.CR} }
thompson2003fine-grained
arxiv-671207
cs/0306071
AliEnFS - a Linux File System for the AliEn Grid Services
<|reference_start|>AliEnFS - a Linux File System for the AliEn Grid Services: Among the services offered by the AliEn (ALICE Environment http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. $alienfs$ (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual File System Switch) to communicate via a generalised file system interface to the AliEn file system daemon. The AliEn framework is used for authentication, catalogue browsing, file registration and read/write transfer operations. A C++ API implements the generic file system operations. The goal of AliEnFS is to allow users easy interactive access to a worldwide distributed virtual file system using familiar shell commands (e.g. cp, ls, rm, ...). The paper discusses general aspects of Grid File Systems, the AliEn implementation and present and future developments for the AliEn Grid File System.<|reference_end|>
arxiv
@article{peters2003alienfs, title={AliEnFS - a Linux File System for the AliEn Grid Services}, author={Andreas J. Peters, P. Saiz, P. Buncic}, journal={arXiv preprint arXiv:cs/0306071}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306071}, primaryClass={cs.DC} }
peters2003alienfs
arxiv-671208
cs/0306072
The EU DataGrid Workload Management System: towards the second major release
<|reference_start|>The EU DataGrid Workload Management System: towards the second major release: In the first phase of the European DataGrid project, the 'workload management' package (WP1) implemented a working prototype, providing users with an environment that allows them to define and submit jobs to the Grid, and that is able to find and use the ``best'' resources for these jobs. Application users have now been working with this first release of the workload management system for about a year. The experience acquired, the feedback received from the users and the need to plug in new components implementing new functionality triggered an update of the existing architecture. A description of this revised and complemented workload management system is given.<|reference_end|>
arxiv
@article{avellino2003the, title={The EU DataGrid Workload Management System: towards the second major release}, author={G. Avellino, S. Barale, S. Beco, B. Cantalupo, D. Colling, F. Giacomini, A. Gianelle, A. Guarise, A. Krenek, D. Kouril, A. Maraschini, L. Matyska, M. Mezzadri, S. Monforte, M. Mulac, F. Pacini, M. Pappalardo, R. Peluso, J. Pospisil, F. Prelz, E. Ronchieri, M. Ruda, L. Salconi, Z. Salvet, M. Sgaravatto, J. Sitera, A. Terracina, M. Vocu, A. Werbrouck}, journal={arXiv preprint arXiv:cs/0306072}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306072}, primaryClass={cs.DC} }
avellino2003the
arxiv-671209
cs/0306073
GridMonitor: Integration of Large Scale Facility Fabric Monitoring with Meta Data Service in Grid Environment
<|reference_start|>GridMonitor: Integration of Large Scale Facility Fabric Monitoring with Meta Data Service in Grid Environment: Grid computing consists of the coordinated use of large sets of diverse, geographically distributed resources for high performance computation. Effective monitoring of these computing resources is extremely important to allow their efficient use on the Grid. The large number of heterogeneous computing entities available in Grids makes the task challenging. In this work, we describe a Grid monitoring system, called GridMonitor, that captures and makes available the most important information from a large computing facility. The Grid monitoring system consists of four tiers: local monitoring, archiving, publishing and harnessing. This architecture was applied to a large-scale Linux farm and network infrastructure. It can be used by many higher-level Grid services, including scheduling services and resource brokering.<|reference_end|>
arxiv
@article{baker2003gridmonitor:, title={GridMonitor: Integration of Large Scale Facility Fabric Monitoring with Meta Data Service in Grid Environment}, author={Rich Baker, Dantong Yu, Jason Smith, Anthony Chan, Kaushik De, Patrick McGuigan}, journal={arXiv preprint arXiv:cs/0306073}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306073}, primaryClass={cs.DC cs.PF} }
baker2003gridmonitor:
arxiv-671210
cs/0306074
Understanding and Coping with Hardware and Software Failures in a Very Large Trigger Farm
<|reference_start|>Understanding and Coping with Hardware and Software Failures in a Very Large Trigger Farm: When thousands of processors are involved in performing event filtering on a trigger farm, there is likely to be a large number of failures within the software and hardware systems. BTeV, a proton/antiproton collider experiment at Fermi National Accelerator Laboratory, has designed a trigger, which includes several thousand processors. If fault conditions are not given proper treatment, it is conceivable that this trigger system will experience failures at a high enough rate to have a negative impact on its effectiveness. The RTES (Real Time Embedded Systems) collaboration is a group of physicists, engineers, and computer scientists working to address the problem of reliability in large-scale clusters with real-time constraints such as this. Resulting infrastructure must be highly scalable, verifiable, extensible by users, and dynamically changeable.<|reference_end|>
arxiv
@article{kowalkowski2003understanding, title={Understanding and Coping with Hardware and Software Failures in a Very Large Trigger Farm}, author={Jim Kowalkowski (Fermilab)}, journal={ECONFC0303241:THGT001,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306074}, primaryClass={cs.DC} }
kowalkowski2003understanding
arxiv-671211
cs/0306075
Data Management for Physics Analysis in Phenix (BNL, RHIC)
<|reference_start|>Data Management for Physics Analysis in Phenix (BNL, RHIC): Every year the PHENIX collaboration deals with an increasing volume of data (now about 1/4 PB/year). The more data there are, the more questions arise about how to process them most efficiently. In the recent past many developments in HEP computing were dedicated to the production environment. Now we need more tools to help obtain physics results from the analysis of distributed simulated and experimental data. Developments in Grid architectures have given many examples of how distributed computing facilities can be organized to meet physics analysis needs. We feel that our main task in this area is to try to use already developed systems or system components in the PHENIX environment. We concentrate here on the following problems: a file/replica catalog which keeps the names of our files, data movement over the WAN, and job submission in a multi-cluster environment. PHENIX is a running experiment, and this fact narrows our ability to test new software on the collaboration's computer facilities. We are experimenting with system prototypes at the State University of New York at Stony Brook (SUNYSB), where we run a midrange computing cluster for physics analysis. The talk discusses our experience with Grid software and the results achieved.<|reference_end|>
arxiv
@article{jacak2003data, title={Data Management for Physics Analysis in Phenix (BNL, RHIC)}, author={Barbara Jacak, Roy Lacey, Dave Morrison, Irina Sourikova, Andrey Shevel, Qiu Zhiping}, journal={arXiv preprint arXiv:cs/0306075}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306075}, primaryClass={cs.DC} }
jacak2003data
arxiv-671212
cs/0306076
FAYE: A Java Implement of the Frame/Stream/Stop Analysis Model
<|reference_start|>FAYE: A Java Implement of the Frame/Stream/Stop Analysis Model: FAYE, the Frame AnalYsis Executable, is a Java-based implementation of the Frame/Stream/Stop model for analyzing data. Unlike traditional Event-based analysis models, the Frame/Stream/Stop model has no preference as to which part of any data is to be analyzed, and an Event gets the same treatment as a change in the high voltage. This means that FAYE is a suitable analysis framework for many different types of data analysis, such as detector trends, or as a visualization core. During the design of FAYE the emphasis has been on clearly delineating each of the executable's responsibilities and on keeping their implementations as completely independent as possible. This leads to the larger part of FAYE being a generic core which is experiment independent, and a smaller section that customizes this core to an experiment's own data structures. This customization can even be done in C++, using JNI, while the executable's control remains in Java. This paper reviews the Frame/Stream/Stop model and then looks at how FAYE has approached its implementation, with an emphasis on which responsibilities are handled by the generic core, and which parts an experiment must provide as part of the customization portion of the executable.<|reference_end|>
arxiv
@article{patton2003faye:, title={FAYE: A Java Implement of the Frame/Stream/Stop Analysis Model}, author={S. Patton}, journal={arXiv preprint arXiv:cs/0306076}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306076}, primaryClass={cs.SE} }
patton2003faye:
arxiv-671213
cs/0306077
The TESLA Requirements Database
<|reference_start|>The TESLA Requirements Database: In preparation for the planned linear collider TESLA, DESY is designing the required buildings and facilities. The accelerator and infrastructure components have to be allocated to buildings, and their required areas for installation, operation and maintenance have to be determined. Interdisciplinary working groups specify the project from different viewpoints and need to develop a common vision as a precondition for an optimal solution. A commercial requirements database is used as a collaborative tool, enabling concurrent requirements specification by independent working groups. The requirements database ensures long term storage and availability of the emerging knowledge, and it offers a central platform for communication which is available to all project members. It has been operating successfully since summer 2002 and has since become an important tool for the design team.<|reference_end|>
arxiv
@article{hagge2003the, title={The TESLA Requirements Database}, author={Lars Hagge, Jens Kreutzkamp, Kathrin Lappe}, journal={arXiv preprint arXiv:cs/0306077}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306077}, primaryClass={cs.DB} }
hagge2003the
arxiv-671214
cs/0306078
ROOT Status and Future Developments
<|reference_start|>ROOT Status and Future Developments: In this talk we will review the major additions and improvements made to the ROOT system in the last 18 months and present our plans for future developments. The additions and improvements range from modifications to the I/O sub-system to allow users to save and restore objects of classes that have not been instrumented by special ROOT macros, to the addition of a geometry package designed for building, browsing, tracking and visualizing detector geometries. Other improvements include enhancements to the quick analysis sub-system (TTree::Draw()), the addition of classes that allow inter-file object references (TRef, TRefArray), better support for templated and STL classes, amelioration of the Automatic Script Compiler and the incorporation of new fitting and mathematical tools. Efforts have also been made to increase the modularity of the ROOT system with the introduction of more abstract interfaces and the development of a plug-in manager. In the near future, we intend to continue the development of PROOF and its interfacing with GRID environments. We plan on providing an interface between Geant3, Geant4 and Fluka and the new geometry package. The ROOT GUI classes will finally be available on Windows and we plan to release a GUI inspector and builder. In the last year, ROOT has drawn the endorsement of additional experiments and institutions. It is now officially supported by CERN and used as a key I/O component by the LCG project.<|reference_end|>
arxiv
@article{rademakers2003root, title={ROOT Status and Future Developments}, author={Fons Rademakers, Masaharu Goto, Philippe Canal, Rene Brun}, journal={ECONFC0303241:MOJT001,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306078}, primaryClass={cs.SE} }
rademakers2003root
arxiv-671215
cs/0306079
Integrated Information Management for TESLA
<|reference_start|>Integrated Information Management for TESLA: Next-generation projects in High Energy Physics will reach again a new dimension of complexity. Information management has to ensure an efficient and economic information flow within the collaborations, offering world-wide up-to-date information access to the collaborators as one condition for successful projects. DESY introduces several information systems in preparation for the planned linear collider TESLA: a Requirements Management System (RMS) is in production for the TESLA planning group, a Product Data Management System (PDMS) is in production since the beginning of 2002 and is supporting the cavity preparation and the general engineering of accelerator components. A pilot Asset Management System (AMS) is in production for supporting the management and maintenance of the technical infrastructure, and a Facility Management System (FMS) with a Geographic Information System (GIS) is currently being introduced to support civil engineering. Efforts have been started to integrate the systems with the goal that users can retrieve information through a single point of access. The paper gives an introduction to information management and the activities at DESY.<|reference_end|>
arxiv
@article{buerger2003integrated, title={Integrated Information Management for TESLA}, author={Jochen Buerger, Lars Hagge, Jens Kreutzkamp, Kathrin Lappe, Andrea Robben}, journal={arXiv preprint arXiv:cs/0306079}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306079}, primaryClass={cs.DB} }
buerger2003integrated
arxiv-671216
cs/0306080
BOA: Framework for Automated Builds
<|reference_start|>BOA: Framework for Automated Builds: Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows one to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions.<|reference_end|>
arxiv
@article{ratnikova2003boa:, title={BOA: Framework for Automated Builds}, author={N. Ratnikova}, journal={ECONFC0303241:TUJT005,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306080}, primaryClass={cs.SE} }
ratnikova2003boa:
arxiv-671217
cs/0306081
An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS
<|reference_start|>An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS: In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a full functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book. In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we'll also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions. We describe the technology used in OBK development and how we arrived at the present level explaining the previous experience with various DBMS technologies. The extensive performance evaluations that have been performed and the usage in the production environment of the ATLAS test beams are also analysed.<|reference_end|>
arxiv
@article{barczyc2003an, title={An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS}, author={M. Barczyc, D. Burckhart-Chromek, M. Caprini, J. Da Silva Conceicao, M. Dobson, J. Flammer, R. Jones, A. Kazarov, S. Kolos, D. Liko, L. Mapelli, and I.Soloviev (CERN), R. Hart NIKHEF (Amsterdam, Netherlands), A. Amorim, D. Klose, J. Lima, L. Lucio, and L. Pedro (CFNUL/FCUL, Portugal), H. Wolters (UCP Figueira da Foz, Portugal), E. Badescu NIPNE (Bucharest, Romania), I. Alexandrov, V. Kotov, and M. Mineev JINR (Dubna, Russian Federation), Yu. Ryabov PNPI (Gatchina, Russian Federation)}, journal={arXiv preprint arXiv:cs/0306081}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306081}, primaryClass={cs.DB} }
barczyc2003an
arxiv-671218
cs/0306082
The Community Authorization Service: Status and Future
<|reference_start|>The Community Authorization Service: Status and Future: Virtual organizations (VOs) are communities of resource providers and users distributed over multiple policy domains. These VOs often wish to define and enforce consistent policies in addition to the policies of their underlying domains. This is challenging, not only because of the problems in distributing the policy to the domains, but also because of the fact that those domains may each have different capabilities for enforcing the policy. The Community Authorization Service (CAS) solves this problem by allowing resource providers to delegate some policy authority to the VO while maintaining ultimate control over their resources. In this paper we describe CAS and our past and current implementations of CAS, and we discuss our plans for CAS-related research.<|reference_end|>
arxiv
@article{pearlman2003the, title={The Community Authorization Service: Status and Future}, author={L. Pearlman, V. Welch, I. Foster, C. Kesselman, S. Tuecke}, journal={arXiv preprint arXiv:cs/0306082}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306082}, primaryClass={cs.SE} }
pearlman2003the
arxiv-671219
cs/0306083
The Athena Startup Kit
<|reference_start|>The Athena Startup Kit: The Athena Startup Kit (ASK) is an interactive front-end to the Atlas software framework (ATHENA). Written in python, a very effective "glue" language, it is built on top of the, in principle unrelated, code repository, build, configuration, debug, binding, and analysis tools. ASK automates many error-prone tasks that are otherwise left to the end-user, thereby pre-empting a whole category of potential problems. Through the existing tools, which ASK will set up for the user if and as needed, it locates available resources, maintains job coherency, manages the run-time environment, allows for interactivity and debugging, and provides standalone execution scripts. An end-user who wants to run her own analysis algorithms within the standard environment can let ASK generate the appropriate skeleton package, the needed dependencies and run-time, as well as a default job options script. For new and casual users, ASK comes with a graphical user interface; for advanced users, ASK has a scriptable command line interface. Both are built on top of the same set of libraries. ASK does not need to be, and isn't, experiment neutral. Thus it has built-in workarounds for known gotchas that would otherwise be a major time-sink for each and every new user. ASK minimizes the overhead for those physicists in Atlas who just want to write and run their analysis code.<|reference_end|>
arxiv
@article{lavrijsen2003the, title={The Athena Startup Kit}, author={W.T.L.P. Lavrijsen}, journal={arXiv preprint arXiv:cs/0306083}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306083}, primaryClass={cs.SE} }
lavrijsen2003the
arxiv-671220
cs/0306084
BaBar Web job submission with Globus authentication and AFS access
<|reference_start|>BaBar Web job submission with Globus authentication and AFS access: We present two versions of a grid job submission system produced for the BaBar experiment. Both use Globus job submission to process data spread across various sites, producing output which can be combined for analysis. The problems encountered with authorisation and authentication, data location, job submission, and the input and output sandboxes are described, as are the solutions. The total system is still some way short of the aims of enterprises such as the EDG, but represents a significant step along the way.<|reference_end|>
arxiv
@article{barlow2003babar, title={BaBar Web job submission with Globus authentication and AFS access}, author={R.J.Barlow, A.Forti, A.McNab, S.Salih, D.Smith, T.Adye}, journal={Nucl. Instr & Meth. A479 p1 2002; International J. Supercomputer Applications, 15(3), 2001; Proceedings of Computing in High Energy and Nuclear Physics (CHEP 2001) 9/3/2001-9/7/2001}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306084}, primaryClass={cs.DC} }
barlow2003babar
arxiv-671221
cs/0306085
GANGA: a user-Grid interface for Atlas and LHCb
<|reference_start|>GANGA: a user-Grid interface for Atlas and LHCb: The Gaudi/Athena and Grid Alliance (GANGA) is a front-end for the configuration, submission, monitoring, bookkeeping, output collection, and reporting of computing jobs run on a local batch system or on the grid. In particular, GANGA handles jobs that use applications written for the Gaudi software framework shared by the Atlas and LHCb experiments. GANGA exploits the commonality of Gaudi-based computing jobs, while insulating against grid-, batch- and framework-specific technicalities, to maximize end-user productivity in defining, configuring, and executing jobs. Designed for a python-based component architecture, GANGA has a modular underpinning and is therefore well placed for contributing to, and benefiting from, work in related projects. Its functionality is accessible both from a scriptable command-line interface, for expert users and automated tasks, and through a graphical interface, which simplifies the interaction with GANGA for beginning and casual users. This paper presents the GANGA design and implementation, the development of the underlying software bus architecture, and the functionality of the first public GANGA release.<|reference_end|>
arxiv
@article{harrison2003ganga:, title={GANGA: a user-Grid interface for Atlas and LHCb}, author={K. Harrison, W.T.L.P. Lavrijsen, P. Mato, A. Soroko, C.L. Tan, C.E. Tull, N. Brook, R.W.L. Jones}, journal={arXiv preprint arXiv:cs/0306085}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306085}, primaryClass={cs.SE} }
harrison2003ganga:
arxiv-671222
cs/0306086
GMA Instrumentation of the Athena Framework using NetLogger
<|reference_start|>GMA Instrumentation of the Athena Framework using NetLogger: Grid applications are, by their nature, wide-area distributed applications. This WAN aspect of Grid applications makes the use of conventional monitoring and instrumentation tools (such as top, gprof, LSF Monitor, etc) impractical for verification that the application is running correctly and efficiently. To be effective, monitoring data must be "end-to-end", meaning that all components between the Grid application endpoints must be monitored. Instrumented applications can generate a large amount of monitoring data, so typically the instrumentation is off by default. For jobs running on a Grid, there needs to be a general mechanism to remotely activate the instrumentation in running jobs. The NetLogger Toolkit Activation Service provides this mechanism. To demonstrate this, we have instrumented the ATLAS Athena Framework with NetLogger to generate monitoring events. We then use a GMA-based activation service to control NetLogger's trigger mechanism. The NetLogger trigger mechanism allows one to easily start, stop, or change the logging level of a running program by modifying a trigger file. We present here details of the design of the NetLogger implementation of the GMA-based activation service and the instrumentation service for Athena. We also describe how this activation service allows us to non-intrusively collect and visualize the ATLAS Athena Framework monitoring data.<|reference_end|>
arxiv
@article{tull2003gma, title={GMA Instrumentation of the Athena Framework using NetLogger}, author={Craig E. Tull, Dan Gunter, Wim Lavrijsen, David Quarrie, Brian Tierney}, journal={arXiv preprint arXiv:cs/0306086}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306086}, primaryClass={cs.DC cs.IR} }
tull2003gma
arxiv-671223
cs/0306087
OO Model of the STAR offline production "Event Display" and its implementation based on Qt-ROOT
<|reference_start|>OO Model of the STAR offline production "Event Display" and its implementation based on Qt-ROOT: The paper presents the "Event Display" package for STAR offline production as a special visualization tool for debugging the reconstruction code. This can be achieved if the author of an algorithm or piece of code can build his/her own custom Event Display from the base software blocks alone, re-using some well-designed, easy-to-learn, user-friendly patterns. For the STAR offline production Event Display, ROOT with a Qt lower-level interface was chosen as the base tool.<|reference_end|>
arxiv
@article{fine2003oo, title={OO Model of the STAR offline production "Event Display" and its implementation based on Qt-ROOT}, author={Valeri Fine, Jerome Lauret, Victor Perevoztchikov}, journal={arXiv preprint arXiv:cs/0306087}, year={2003}, number={MOLT011}, archivePrefix={arXiv}, eprint={cs/0306087}, primaryClass={cs.HC cs.GR} }
fine2003oo
arxiv-671224
cs/0306088
Using CAS to Manage Role-Based VO Sub-Groups
<|reference_start|>Using CAS to Manage Role-Based VO Sub-Groups: LHC-era HENP experiments will generate unprecedented volumes of data and require commensurately large compute resources. These resources are larger than can be marshalled at any one site within the community. Production reconstruction, analysis, and simulation will need to take maximum advantage of these distributed computing and storage resources using the new capabilities offered by the Grid computing paradigm. Since large-scale, coordinated Grid computing involves user access across many Regional Centers and national and funding boundaries, one of the most crucial aspects of Grid computing is that of user authentication and authorization. While projects such as the DOE Grids CA have gone a long way to solving the problem of distributed authentication, the authorization problem is still largely open. We have developed and tested a prototype VO-Role management system using the Community Authorization Service (CAS) from the Globus project. CAS allows for a flexible definition of resources. In this prototype we define a role as a resource within the CAS database and assign individuals in the VO access to that resource to indicate their ability to assert the role. The access of an individual to this VO-Role resource is then an annotation of the user's CAS proxy certificate. This annotation is then used by the local resource managers to authorize access to local compute and storage resources at a granularity which is based on neither VOs nor individuals. We report here on the configuration details for the CAS database and the Globus Gatekeeper and on how this general approach could be formalized and extended to meet the clear needs of LHC experiments using the Grid.<|reference_end|>
arxiv
@article{tull2003using, title={Using CAS to Manage Role-Based VO Sub-Groups}, author={Craig E. Tull, Shane Canon, Steve Chan, Doug Olson, Laura Pearlman, Von Welch}, journal={arXiv preprint arXiv:cs/0306088}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306088}, primaryClass={cs.CR cs.DC} }
tull2003using
arxiv-671225
cs/0306089
The StoreGate: a Data Model for the Atlas Software Architecture
<|reference_start|>The StoreGate: a Data Model for the Atlas Software Architecture: The Atlas collaboration at CERN has adopted the Gaudi software architecture which belongs to the blackboard family: data objects produced by knowledge sources (e.g. reconstruction modules) are posted to a common in-memory data base from where other modules can access them and produce new data objects. The StoreGate has been designed, based on the Atlas requirements and the experience of other HENP systems such as Babar, CDF, CLEO, D0 and LHCB, to identify in a simple and efficient fashion (collections of) data objects based on their type and/or the modules which posted them to the Transient Data Store (the blackboard). The developer also has the freedom to use her preferred key class to uniquely identify a data object according to any other criterion. Besides this core functionality, the StoreGate provides the developers with a powerful interface to handle in a coherent fashion persistable references, object lifetimes, memory management and access control policy for the data objects in the Store. It also provides a Handle/Proxy mechanism to define and hide the cache fault mechanism: upon request, a missing Data Object can be transparently created and added to the Transient Store presumably retrieving it from a persistent data-base, or even reconstructing it on demand.<|reference_end|>
arxiv
@article{calafiura2003the, title={The StoreGate: a Data Model for the Atlas Software Architecture}, author={P. Calafiura, C.G. Leggett, D.R. Quarrie, H. Ma, S. Rajagopalan}, journal={arXiv preprint arXiv:cs/0306089}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306089}, primaryClass={cs.SE} }
calafiura2003the
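
A rough sketch of the (type, key) blackboard with on-demand proxies described in the abstract above. The class and method names here are hypothetical and are not the actual StoreGate API; this only illustrates the data-model idea of type/key lookup plus a proxy that resolves a "cache fault" by loading the object on demand.

```python
# Hypothetical sketch of a (type, key)-keyed transient store with lazy proxies.
class Proxy:
    def __init__(self, loader):
        self._loader = loader     # callable that retrieves or builds the object
        self._obj = None

    def get(self):
        if self._obj is None:     # "cache fault": create the object on demand
            self._obj = self._loader()
        return self._obj

class TransientStore:
    def __init__(self):
        self._store = {}          # (type name, key) -> Proxy

    def record(self, obj, key):
        self._store[(type(obj).__name__, key)] = Proxy(lambda o=obj: o)

    def record_on_demand(self, type_name, key, loader):
        self._store[(type_name, key)] = Proxy(loader)

    def retrieve(self, type_name, key):
        return self._store[(type_name, key)].get()

store = TransientStore()
store.record_on_demand("TrackCollection", "MyTracks",
                       loader=lambda: ["track1", "track2"])  # e.g. read from disk
print(store.retrieve("TrackCollection", "MyTracks"))
```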
arxiv-671226
cs/0306090
Worldwide Fast File Replication on Grid Datafarm
<|reference_start|>Worldwide Fast File Replication on Grid Datafarm: The Grid Datafarm architecture is designed for global petascale data-intensive computing. It provides a global parallel filesystem with online petascale storage, scalable I/O bandwidth, and scalable parallel processing, and it can exploit local I/O in a grid of clusters with tens of thousands of nodes. One of its features is that it manages file replicas in filesystem metadata for fault tolerance and load balancing. This paper discusses and evaluates several techniques to support long-distance fast file replication. The Grid Datafarm manages a ranked group of files as a Gfarm file, each file, called a Gfarm file fragment, being stored on a filesystem node, or replicated on several filesystem nodes. Each Gfarm file fragment is replicated independently and in parallel using rate-controlled HighSpeed TCP with network striping. On a US-Japan testbed with a 10,000 km distance, we achieve 419 Mbps using 2 nodes on each side, and 741 Mbps using 4 nodes, out of 893 Mbps with two transpacific networks.<|reference_end|>
arxiv
@article{tatebe2003worldwide, title={Worldwide Fast File Replication on Grid Datafarm}, author={Osamu Tatebe, Satoshi Sekiguchi, Youhei Morita, Satoshi Matsuoka, and Noriyuki Soda}, journal={arXiv preprint arXiv:cs/0306090}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306090}, primaryClass={cs.PF cs.NI} }
tatebe2003worldwide
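
The fragment-level parallel replication described in the abstract above can be sketched as below. The transfer itself is only a placeholder (a local copy; the real system uses rate-controlled HighSpeed TCP with network striping), and all function and path names are illustrative assumptions.

```python
# Hypothetical sketch: replicate each fragment of a logical file independently
# and in parallel (not the actual Gfarm implementation).
from concurrent.futures import ThreadPoolExecutor
import os
import shutil

def replicate_fragment(fragment_path, destination_node):
    # Placeholder for a real wide-area transfer; here it is just a local copy.
    dest_dir = os.path.join("/tmp/replicas", destination_node)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(fragment_path))
    shutil.copy(fragment_path, dest)
    return dest

def replicate_gfarm_file(fragments, destination_nodes):
    """Replicate every fragment of one logical file independently, in parallel."""
    with ThreadPoolExecutor(max_workers=len(fragments)) as pool:
        futures = [pool.submit(replicate_fragment, frag, node)
                   for frag, node in zip(fragments, destination_nodes)]
        return [f.result() for f in futures]
```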
arxiv-671227
cs/0306091
Universal Sequential Decisions in Unknown Environments
<|reference_start|>Universal Sequential Decisions in Unknown Environments: We give a brief introduction to the AIXI model, which unifies and overcomes the limitations of sequential decision theory and universal Solomonoff induction. While the former theory is suited for active agents in known environments, the latter is suited for passive prediction of unknown environments.<|reference_end|>
arxiv
@article{hutter2003universal, title={Universal Sequential Decisions in Unknown Environments}, author={Marcus Hutter}, journal={Proc. 5th European Workshop on Reinforcement Learning (EWRL-2001) 25-26}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306091}, primaryClass={cs.AI cs.CC cs.LG} }
hutter2003universal
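
For reference, the action-selection rule of the AIXI model summarized in the abstract above is usually written as follows in Hutter's papers (notation assumed here: $a_i$ actions, $o_i r_i$ observation-reward pairs, $U$ a universal Turing machine, $\ell(q)$ the length of program $q$, $m$ the horizon); this equation is supplied from that literature, not from the abstract itself.

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```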
arxiv-671228
cs/0306092
Building A High Performance Parallel File System Using Grid Datafarm and ROOT I/O
<|reference_start|>Building A High Performance Parallel File System Using Grid Datafarm and ROOT I/O: The sheer amount of petabyte-scale data foreseen in the LHC experiments requires careful consideration of the persistency design and the system design in world-wide distributed computing. The event parallelism of HENP data analysis enables us to take maximum advantage of high performance cluster computing and networking when we keep the parallelism in the data processing, data management, and data transfer phases. The modular architecture of FADS/Goofy, a versatile detector simulation framework for Geant4, enables an easy choice of plug-in facilities for persistency technologies such as Objectivity/DB and ROOT I/O. The framework is designed to work naturally with the parallel file system of Grid Datafarm (Gfarm). FADS/Goofy is proven to generate 10^6 Geant4-simulated Atlas Mockup events using a 512 CPU PC cluster. The data in ROOT I/O files is replicated using the Gfarm file system. The histogram information is collected from the distributed ROOT files. During the data replication it has been demonstrated to achieve more than 2.3 Gbps data transfer rate between the PC clusters over seven participating PC clusters in the United States and in Japan.<|reference_end|>
arxiv
@article{morita2003building, title={Building A High Performance Parallel File System Using Grid Datafarm and ROOT I/O}, author={Y. Morita (1), H. Sato (1), Y. Watase (1), O. Tatebe (2), S. Sekiguchi (2), S. Matsuoka (3), N. Soda (4), A. Dell'Acqua (5)}, journal={arXiv preprint arXiv:cs/0306092}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306092}, primaryClass={cs.DC} }
morita2003building
arxiv-671229
cs/0306093
Grid-Brick Event Processing Framework in GEPS
<|reference_start|>Grid-Brick Event Processing Framework in GEPS: Experiments like ATLAS at LHC involve a scale of computing and data management that greatly exceeds the capability of existing systems, making it necessary to resort to Grid-based Parallel Event Processing Systems (GEPS). Traditional Grid systems concentrate the data in central data servers which have to be accessed by many nodes each time an analysis or processing job starts. These systems require very powerful central data servers and make little use of the distributed disk space that is available in commodity computers. The Grid-Brick system, which is described in this paper, follows a different approach. The data storage is split among all grid nodes, with each one holding a piece of the whole information. Users submit queries and the system distributes the tasks across all the nodes and retrieves the results, merging them together in the Job Submit Server. The main advantage of using this system is the huge scalability it provides, while its biggest disadvantage appears in the case of failure of one of the nodes. A workaround for this problem involves data replication or backup.<|reference_end|>
arxiv
@article{amorim2003grid-brick, title={Grid-Brick Event Processing Framework in GEPS}, author={Antonio Amorim, Luis Pedro (Faculdade de Ciencias, University of Lisbon, Portugal) Han Fei, Nuno Almeida, Paulo Trezentos (ADETTI, Edificio ISCTE, University of Lisbon, Portugal) Jaime E. Villate (Department of Physics, School of Engineering, University of Porto)}, journal={arXiv preprint arXiv:cs/0306093}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306093}, primaryClass={cs.DC} }
amorim2003grid-brick
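
The query processing described in the abstract above is essentially a scatter/gather pattern: every node holds a slice of the data, the query is sent to all nodes, and the partial results are merged at the submit server. The sketch below only illustrates that pattern; the names are hypothetical and not the GEPS interfaces.

```python
# Hypothetical scatter/gather sketch (not the Grid-Brick implementation).
from concurrent.futures import ThreadPoolExecutor

class GridBrickNode:
    def __init__(self, events):
        self.events = events                      # this node's slice of the data

    def run_query(self, predicate):
        return [e for e in self.events if predicate(e)]

def submit_query(nodes, predicate):
    """Scatter the query to all nodes, then gather and merge the partial results."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        partials = pool.map(lambda n: n.run_query(predicate), nodes)
    merged = []
    for part in partials:
        merged.extend(part)
    return merged

nodes = [GridBrickNode(range(0, 100)), GridBrickNode(range(100, 200))]
print(len(submit_query(nodes, lambda e: e % 7 == 0)))
```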
arxiv-671230
cs/0306094
BaBar - A Community Web Site in an Organizational Setting
<|reference_start|>BaBar - A Community Web Site in an Organizational Setting: The BABAR Web site was established in 1993 at the Stanford Linear Accelerator Center (SLAC) to support the BABAR experiment, to report its results, and to facilitate communication among its scientific and engineering collaborators, currently numbering about 600 individuals from 75 collaborating institutions in 10 countries. The BABAR Web site is, therefore, a community Web site. At the same time it is hosted at SLAC and funded by agencies that demand adherence to policies decided under different priorities. Additionally, the BABAR Web administrators deal with the problems that arise during the course of managing users, content, policies, standards, and changing technologies. Desired solutions to some of these problems may be incompatible with the overall administration of the SLAC Web sites and/or the SLAC policies and concerns. There are thus different perspectives of the same Web site and differing expectations in segments of the SLAC population which act as constraints and challenges in any review or re-engineering activities. Web Engineering, which post-dates the BABAR Web, has aimed to provide a comprehensive understanding of all aspects of Web development. This paper reports on the first part of a recent review of application of Web Engineering methods to the BABAR Web site, which has led to explicit user and information models of the BABAR community and how SLAC and the BABAR community relate and react to each other. The paper identifies the issues of a community Web site in a hierarchical, semi-governmental sector and formulates a strategy for periodic reviews of BABAR and similar sites.<|reference_end|>
arxiv
@article{cowan2003babar, title={BaBar - A Community Web Site in an Organizational Setting}, author={Ray Cowan, Yogesh Deshpande, and Bebo White}, journal={arXiv preprint arXiv:cs/0306094}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306094}, primaryClass={cs.IR} }
cowan2003babar
arxiv-671231
cs/0306095
The MammoGrid Project Grids Architecture
<|reference_start|>The MammoGrid Project Grids Architecture: The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid clinicians will be able to harness the use of massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distributed execution of mammograms analysis software, for the development of Grid-aware algorithms and for the sharing of resources between multiple collaborating medical centres. All this is delivered via a novel software and hardware information infrastructure that, in addition guarantees the integrity and security of the medical data. The MammoGrid implementation is based on AliEn, a Grid framework developed by the ALICE Collaboration. AliEn provides a virtual file catalogue that allows transparent access to distributed data-sets and provides top to bottom implementation of a lightweight Grid applicable to cases when handling of a large number of files is required. This paper details the architecture that will be implemented by the MammoGrid project.<|reference_end|>
arxiv
@article{mcclatchey2003the, title={The MammoGrid Project Grids Architecture}, author={Richard McClatchey, Predrag Buncic, David Manset, Tamas Hauer, Florida Estrella, Pablo Saiz, Dmitri Rogulin}, journal={arXiv preprint arXiv:cs/0306095}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306095}, primaryClass={cs.DC cs.DB} }
mcclatchey2003the
arxiv-671232
cs/0306096
MonALISA : A Distributed Monitoring Service Architecture
<|reference_start|>MonALISA : A Distributed Monitoring Service Architecture: The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) system provides a distributed monitoring service. MonALISA is based on a scalable Dynamic Distributed Services Architecture which is designed to meet the needs of physics collaborations for monitoring global Grid systems, and is implemented using JINI/JAVA and WSDL/SOAP technologies. The scalability of the system derives from the use of multithreaded Station Servers to host a variety of loosely coupled self-describing dynamic services, the ability of each service to register itself and then to be discovered and used by any other services, or clients that require such information, and the ability of all services and clients subscribing to a set of events (state changes) in the system to be notified automatically. The framework integrates several existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance. It has built-in SNMP support and network-performance monitoring algorithms that enable it to monitor end-to-end network performance as well as the performance and state of site facilities in a Grid. MonALISA is currently running around the clock on the US CMS test Grid as well as an increasing number of other sites. It is also being used to monitor the performance and optimize the interconnections among the reflectors in the VRVS system.<|reference_end|>
arxiv
@article{newman2003monalisa, title={MonALISA : A Distributed Monitoring Service Architecture}, author={H.B. Newman, I.C.Legrand, P. Galvez, R. Voicu, C. Cirstoiu}, journal={arXiv preprint arXiv:cs/0306096}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306096}, primaryClass={cs.DC} }
newman2003monalisa
arxiv-671233
cs/0306097
A family of metrics on contact structures based on edge ideals
<|reference_start|>A family of metrics on contact structures based on edge ideals: The measurement of the similarity of RNA secondary structures, and in general of contact structures, of a fixed length has several specific applications. For instance, it is used in the analysis of the ensemble of suboptimal secondary structures generated by a given algorithm on a given RNA sequence, and in the comparison of the secondary structures predicted by different algorithms on a given RNA molecule. It is also a useful tool in the quantitative study of sequence-structure maps. A way to measure this similarity is by means of metrics. In this paper we introduce a new class of metrics $d_{m}$, $m\geq 3$, on the set of all contact structures of a fixed length, based on their representation by means of edge ideals in a polynomial ring. These metrics can be expressed in terms of Hilbert functions of monomial ideals, which allows the use of several public domain computer algebra systems to compute them. We study some abstract properties of these metrics, and we obtain explicit descriptions of them for $m=3,4$ on arbitrary contact structures and for $m=5,6$ on RNA secondary structures.<|reference_end|>
arxiv
@article{llabrés2003a, title={A family of metrics on contact structures based on edge ideals}, author={Mercè Llabrés, Francesc Rosselló}, journal={arXiv preprint arXiv:cs/0306097}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306097}, primaryClass={cs.DM cs.CE q-bio} }
llabrés2003a
arxiv-671234
cs/0306098
Making refactoring decisions in large-scale Java systems: an empirical stance
<|reference_start|>Making refactoring decisions in large-scale Java systems: an empirical stance: Decisions on which classes to refactor are fraught with difficulty. The problem of identifying candidate classes becomes acute when confronted with large systems comprising hundreds or thousands of classes. In this paper, we describe a metric by which key classes, and hence candidates for refactoring, can be identified. Measures quantifying the usage of two forms of coupling, inheritance and aggregation, together with two other class features (number of methods and attributes) were extracted from the source code of three large Java systems. Our research shows that metrics from other research domains can be adapted to the software engineering process. Substantial differences were found between each of the systems in terms of the key classes identified and hence opportunities for refactoring those classes varied between those systems.<|reference_end|>
arxiv
@article{wheeldon2003making, title={Making refactoring decisions in large-scale Java systems: an empirical stance}, author={Richard Wheeldon and Steve Counsell}, journal={arXiv preprint arXiv:cs/0306098}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306098}, primaryClass={cs.SE} }
wheeldon2003making
arxiv-671235
cs/0306099
An Improved k-Nearest Neighbor Algorithm for Text Categorization
<|reference_start|>An Improved k-Nearest Neighbor Algorithm for Text Categorization: k is the most important parameter in a text categorization system based on the k-Nearest Neighbor algorithm (kNN). In the classification process, the k documents in the training set nearest to the test one are determined first. Then, the prediction can be made according to the category distribution among these k nearest neighbors. Generally speaking, the class distribution in the training set is uneven. Some classes may have more samples than others. Therefore, the system performance is very sensitive to the choice of the parameter k, and it is very likely that a fixed k value will result in a bias towards large categories. To deal with these problems, we propose an improved kNN algorithm, which uses different numbers of nearest neighbors for different categories, rather than a fixed number across all categories. More samples (nearest neighbors) will be used for deciding whether a test document should be classified to a category which has more samples in the training set. Preliminary experiments on Chinese text categorization show that our method is less sensitive to the parameter k than the traditional one, and it can properly classify documents belonging to smaller classes with a large k. The method is promising for cases where estimating the parameter k via cross-validation is not allowed.<|reference_end|>
arxiv
@article{li2003an, title={An Improved k-Nearest Neighbor Algorithm for Text Categorization}, author={Baoli Li, Shiwen Yu, and Qin Lu}, journal={arXiv preprint arXiv:cs/0306099}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306099}, primaryClass={cs.CL} }
li2003an
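
The idea described in the abstract above, using a per-category neighborhood size instead of one global k, can be sketched as follows. The specific weighting (k proportional to the category's share of the training set) is an assumption for illustration; the paper's own formula may differ.

```python
# Hypothetical sketch of kNN with a per-category k (not the paper's exact method).
from collections import Counter
import math

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def improved_knn(test_doc, train_docs, train_labels, base_k=30):
    # Similarities to all training documents, most similar first.
    sims = sorted(((cosine(test_doc, d), y) for d, y in zip(train_docs, train_labels)),
                  reverse=True)
    counts = Counter(train_labels)
    n = len(train_labels)
    scores = {}
    for label, c in counts.items():
        # Per-category k: categories with more training samples get more neighbors.
        k_c = max(1, round(base_k * c / n * len(counts)))
        top = sims[:k_c]
        scores[label] = sum(s for s, y in top if y == label)
    return max(scores, key=scores.get)

train_docs = [{"football": 1.0}, {"goal": 1.0, "football": 0.5}, {"election": 1.0}]
train_labels = ["sports", "sports", "politics"]
print(improved_knn({"football": 1.0, "goal": 0.2}, train_docs, train_labels, base_k=2))
```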
arxiv-671236
cs/0306100
Site Authorization Service (SAZ)
<|reference_start|>Site Authorization Service (SAZ): In this paper we present a methodology to provide an additional level of centralized control for the grid resources. This centralized control is applied to the site-wide distribution of various grids, thus providing an upper hand in maintenance.<|reference_end|>
arxiv
@article{sehkri2003site, title={Site Authorization Service (SAZ)}, author={Vijay Sehkri, Igor Mandrichenko, Dane Skow}, journal={ECONFC0303241:TUBT007,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306100}, primaryClass={cs.DC} }
sehkri2003site
arxiv-671237
cs/0306101
The DataFlow System of the ATLAS Trigger and DAQ
<|reference_start|>The DataFlow System of the ATLAS Trigger and DAQ: The baseline design and implementation of the DataFlow system, to be documented in the ATLAS DAQ/HLT Technical Design Report in summer 2003, will be presented. Emphasis will be placed on the system performance and scalability, based on the results from prototyping studies which have maximised the use of commercially available hardware.<|reference_end|>
arxiv
@article{lehmann2003the, title={The DataFlow System of the ATLAS Trigger and DAQ}, author={G. Lehmann, et al}, journal={ECONF C0303241:MOGT009,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306101}, primaryClass={cs.SE} }
lehmann2003the
arxiv-671238
cs/0306102
Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production
<|reference_start|>Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production: For efficiency of the large production tasks distributed worldwide, it is essential to provide shared production management tools comprised of integratable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc) the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameters settings that must be provided before the data transformation invocation. To provide for local-remote transparency during DC1 production, the VDC database server delivered in a controlled way both the validated production parameters and the templated production recipes for thousands of the event generation and detector simulation jobs around the world, simplifying the production management solutions.<|reference_end|>
arxiv
@article{vaniachine2003prototyping, title={Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production}, author={A. Vaniachine, D. Malon (1), P. Nevski (2), K. De (3) ((1) Argonne National Laboratory, (2) Brookhaven National Laboratory, (3) University of Texas at Arlington)}, journal={arXiv preprint arXiv:cs/0306102}, year={2003}, number={ANL-HEP-CP-03-049}, archivePrefix={arXiv}, eprint={cs/0306102}, primaryClass={cs.DC cs.DB} }
vaniachine2003prototyping
arxiv-671239
cs/0306103
Primary Numbers Database for ATLAS Detector Description Parameters
<|reference_start|>Primary Numbers Database for ATLAS Detector Description Parameters: We present the design and the status of the database for detector description parameters in the ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, a preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of the MySQL database product for the implementation provides additional benefits: the Primary Numbers database can be used on a developer's laptop when disconnected (using the MySQL embedded server technology), with data being updated when the laptop is connected (using MySQL database replication).<|reference_end|>
arxiv
@article{vaniachine2003primary, title={Primary Numbers Database for ATLAS Detector Description Parameters}, author={A. Vaniachine, S. Eckmann, D. Malon (1), P. Nevski, T. Wenaus (2) ((1) Argonne National Laboratory, (2) Brookhaven National Laboratory)}, journal={arXiv preprint arXiv:cs/0306103}, year={2003}, number={ANL-HEP-CP-03-050}, archivePrefix={arXiv}, eprint={cs/0306103}, primaryClass={cs.DB cs.HC} }
vaniachine2003primary
arxiv-671240
cs/0306104
Efficient pebbling for list traversal synopses
<|reference_start|>Efficient pebbling for list traversal synopses: We show how to support efficient back traversal in a unidirectional list, using small memory and with essentially no slowdown in forward steps. Using $O(\log n)$ memory for a list of size $n$, the $i$'th back-step from the farthest point reached so far takes $O(\log i)$ time in the worst case, while the overhead per forward step is at most $\epsilon$ for arbitrary small constant $\epsilon>0$. An arbitrary sequence of forward and back steps is allowed. A full trade-off between memory usage and time per back-step is presented: $k$ vs. $kn^{1/k}$ and vice versa. Our algorithms are based on a novel pebbling technique which moves pebbles on a virtual binary, or $t$-ary, tree that can only be traversed in a pre-order fashion. The compact data structures used by the pebbling algorithms, called list traversal synopses, extend to general directed graphs, and have other interesting applications, including memory efficient hash-chain implementation. Perhaps the most surprising application is in showing that for any program, arbitrary rollback steps can be efficiently supported with small overhead in memory, and marginal overhead in its ordinary execution. More concretely: Let $P$ be a program that runs for at most $T$ steps, using memory of size $M$. Then, at the cost of recording the input used by the program, and increasing the memory by a factor of $O(\log T)$ to $O(M \log T)$, the program $P$ can be extended to support an arbitrary sequence of forward execution and rollback steps: the $i$'th rollback step takes $O(\log i)$ time in the worst case, while forward steps take O(1) time in the worst case, and $1+\epsilon$ amortized time per step.<|reference_end|>
arxiv
@article{matias2003efficient, title={Efficient pebbling for list traversal synopses}, author={Yossi Matias and Ely Porat}, journal={arXiv preprint arXiv:cs/0306104}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306104}, primaryClass={cs.DS} }
matias2003efficient
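The pebbling synopsis above is considerably more sophisticated than what fits here; the sketch below shows only the naive checkpoint-and-replay baseline it improves on, which keeps one pointer every `block` nodes and therefore pays O(n/block) memory and O(block) work per back-step, rather than the paper's O(log n) memory and O(log i) back-steps. The Node type is a stand-in for any singly linked structure.

```python
# Naive back-traversal baseline for a singly linked list: store a checkpoint
# pointer every `block` nodes and replay forward from the nearest checkpoint.
# This is NOT the paper's pebbling scheme, only the obvious strawman.
class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value, self.next = value, next

class BackTraverser:
    def __init__(self, head, block=32):
        self.block = block
        self.checkpoints = [head]        # nodes at positions 0, block, 2*block, ...
        self.pos, self.node = 0, head

    def forward(self):
        self.node = self.node.next
        self.pos += 1
        if self.pos % self.block == 0:
            self.checkpoints.append(self.node)

    def back(self):
        assert self.pos > 0
        target = self.pos - 1
        node = self.checkpoints[target // self.block]
        pos = (target // self.block) * self.block
        while pos < target:              # replay forward from the checkpoint
            node, pos = node.next, pos + 1
        del self.checkpoints[target // self.block + 1:]   # drop stale checkpoints
        self.node, self.pos = node, target

# Tiny check: walk a 100-node list all the way forward, then all the way back.
head = None
for v in reversed(range(100)):
    head = Node(v, head)
t = BackTraverser(head, block=8)
for _ in range(99):
    t.forward()
for _ in range(99):
    t.back()
assert t.node.value == 0
```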
arxiv-671241
cs/0306105
Design, implementation and deployment of the Saclay muon reconstruction algorithms (Muonbox/y) in the Athena software framework of the ATLAS experiment
<|reference_start|>Design, implementation and deployment of the Saclay muon reconstruction algorithms (Muonbox/y) in the Athena software framework of the ATLAS experiment: This paper gives an overview of a reconstruction algorithm for muon events in the ATLAS experiment at CERN. After a short introduction to the ATLAS Muon Spectrometer, we will describe the procedure performed by the algorithms Muonbox and Muonboy (the latest version) in order to carry out the reconstruction task correctly. These algorithms have been developed in Fortran and work within the official C++ framework Athena, as well as in stand-alone mode. A description of the interaction between Muonboy and Athena will be given, together with the reconstruction performance (efficiency and momentum resolution) obtained with Monte Carlo data.<|reference_end|>
arxiv
@article{formica2003design, title={Design, implementation and deployment of the Saclay muon reconstruction algorithms (Muonbox/y) in the Athena software framework of the ATLAS experiment}, author={Andrea Formica}, journal={arXiv preprint arXiv:cs/0306105}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306105}, primaryClass={cs.CE} }
formica2003design
arxiv-671242
cs/0306106
Lexicographic probability, conditional probability, and nonstandard probability
<|reference_start|>Lexicographic probability, conditional probability, and nonstandard probability: The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are shown to be more general than LPS's.<|reference_end|>
arxiv
@article{halpern2003lexicographic, title={Lexicographic probability, conditional probability, and nonstandard probability}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0306106}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306106}, primaryClass={cs.GT cs.AI} }
halpern2003lexicographic
arxiv-671243
cs/0306107
On the Relationship between Strand Spaces and Multi-Agent Systems
<|reference_start|>On the Relationship between Strand Spaces and Multi-Agent Systems: Strand spaces are a popular framework for the analysis of security protocols. Strand spaces have some similarities to a formalism used successfully to model protocols for distributed systems, namely multi-agent systems. We explore the exact relationship between these two frameworks here. It turns out that a key difference is the handling of agents, which are unspecified in strand spaces and explicit in multi-agent systems. We provide a family of translations from strand spaces to multi-agent systems parameterized by the choice of agents in the strand space. We also show that not every multi-agent system of interest can be expressed as a strand space. This reveals a lack of expressiveness in the strand-space framework that can be characterized by our translation. To highlight this lack of expressiveness, we show one simple way in which strand spaces can be extended to model more systems.<|reference_end|>
arxiv
@article{halpern2003on, title={On the Relationship between Strand Spaces and Multi-Agent Systems}, author={Joseph Y. Halpern and Riccardo Pucella}, journal={ACM Transactions on Information and System Security 6:1, 2003, pp. 43--70}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306107}, primaryClass={cs.CR} }
halpern2003on
arxiv-671244
cs/0306108
Web Engineering
<|reference_start|>Web Engineering: Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application development? and d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialisation at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.<|reference_end|>
arxiv
@article{deshpande2003web, title={Web Engineering}, author={Yogesh Deshpande, San Murugesan, Athula Ginige, Steve Hansen, Daniel Schwabe, Martin Gaedke, and Bebo White}, journal={J.Web Engineering 1 (2002) 003-017}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306108}, primaryClass={cs.SE} }
deshpande2003web
arxiv-671245
cs/0306109
Distributed Heterogeneous Relational Data Warehouse In A Grid Environment
<|reference_start|>Distributed Heterogeneous Relational Data Warehouse In A Grid Environment: This paper examines how a "Distributed Heterogeneous Relational Data Warehouse" can be integrated into a Grid environment that will provide physicists with efficient access to large and small object collections drawn from databases at multiple sites. This paper investigates the requirements of Grid-enabling such a warehouse, and explores how these requirements may be met by extensions to existing Grid middleware. We present initial results obtained with a working prototype warehouse of this kind using both SQLServer and Oracle9i, where a Grid-enabled web-services interface makes it easier for web applications to access the distributed contents of the databases securely. Based on the success of the prototype, we propose a framework for using the heterogeneous relational data warehouse through the web-services interface to create a single "Virtual Database System" for users. The ability to transparently access data in this way, as shown in the prototype, is likely to be a very powerful facility for HENP and other Grid users wishing to collate and analyze information distributed over the Grid.<|reference_end|>
arxiv
@article{iqbal2003distributed, title={Distributed Heterogeneous Relational Data Warehouse In A Grid Environment}, author={Saima Iqbal, Julian J. Bunn, Harvey B. Newman}, journal={arXiv preprint arXiv:cs/0306109}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306109}, primaryClass={cs.DC cs.DB} }
iqbal2003distributed
arxiv-671246
cs/0306110
Run Control and Monitor System for the CMS Experiment
<|reference_start|>Run Control and Monitor System for the CMS Experiment: The Run Control and Monitor System (RCMS) of the CMS experiment is the set of hardware and software components responsible for controlling and monitoring the experiment during data-taking. It provides users with a "virtual counting room", enabling them to operate the experiment and to monitor detector status and data quality from any point in the world. This paper describes the architecture of the RCMS with particular emphasis on its scalability through a distributed collection of nodes arranged in a tree-based hierarchy. The current implementation of the architecture in a prototype RCMS used in test beam setups, detector validations and DAQ demonstrators is documented. A discussion of the key technologies used, including Web Services, and the results of tests performed with a 128-node system are presented.<|reference_end|>
arxiv
@article{bellato2003run, title={Run Control and Monitor System for the CMS Experiment}, author={M. Bellato, L. Berti, V. Brigljevic, G. Bruno, E. Cano, S. Cittolin, A. Csilling, S. Erhan, D. Gigi, F. Glege, R. Gomez-Reino, M. Gulmini, J. Gutleber, C. Jacobs, M. Kozlovszky, H. Larsen, I. Magrans, G. Maron, F. Meijers, E. Meschi, S. Murray, A. Oh, L. Orsini, L. Pollet, A. Racz, G. Rorato, D. Samyn, P. Scharff-Hansen, C. Schwick, P. Sphicas, N. Toniolo, S. Ventura, L. Zangrando}, journal={arXiv preprint arXiv:cs/0306110}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306110}, primaryClass={cs.DC} }
bellato2003run
arxiv-671247
cs/0306111
Sharing a conceptual model of grid resources and services
<|reference_start|>Sharing a conceptual model of grid resources and services: Grid technologies aim at enabling coordinated resource-sharing and problem-solving capabilities over local and wide area networks, spanning locations, organizations, machine architectures and software boundaries. The heterogeneity of the resources involved and the need for interoperability among different grid middlewares require the sharing of a common information model. Abstractions of different flavors of resources and services and conceptual schemas of domain-specific entities require a collaborative effort in order to enable coherent cooperation among information services. With this paper, we present the result of our experience in grid resources and services modelling carried out within the Grid Laboratory Uniform Environment (GLUE) effort, a collaboration of US and EU High Energy Physics projects working towards grid interoperability. The first implementation-neutral agreements on services such as batch computing and the storage manager, and on resources such as the cluster, sub-cluster and host hierarchy and the storage library, are presented. Design guidelines and operational results are presented together with open issues and future evolutions.<|reference_end|>
arxiv
@article{andreozzi2003sharing, title={Sharing a conceptual model of grid resources and services}, author={Sergio Andreozzi, Massimo Sgaravatto, Cristina Vistoli}, journal={arXiv preprint arXiv:cs/0306111}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306111}, primaryClass={cs.DC} }
andreozzi2003sharing
arxiv-671248
cs/0306112
Adapting SAM for CDF
<|reference_start|>Adapting SAM for CDF: The CDF and D0 experiments probe the high-energy frontier and as they do so have accumulated hundreds of Terabytes of data on the way to petabytes of data over the next two years. The experiments have made a commitment to use the developing Grid based on the SAM system to handle these data. The D0 SAM has been extended for use in CDF as common patterns of design emerged to meet the similar requirements of these experiments. The process by which the merger was achieved is explained with particular emphasis on lessons learned concerning the database design patterns plus realization of the use cases.<|reference_end|>
arxiv
@article{bonham2003adapting, title={Adapting SAM for CDF}, author={D. Bonham, G. Garzoglio, R. Herber, J. Kowalkowski, D. Litvintsev, L. Lueking, M. Paterno, D. Petravick, L. Piccoli, R. Pordes, N. Stanfield, I. Terekhov, J. Trumbo, J. Tseng, S. Veseli, M. Votava, V. White, T. Huffman, S. Stonjek, K. Waltkins, P. Crosby, D. Waters, R. St.Denis}, journal={ECONF C0303241:TUAT004,2003}, year={2003}, number={TUAT004}, archivePrefix={arXiv}, eprint={cs/0306112}, primaryClass={cs.DC} }
bonham2003adapting
arxiv-671249
cs/0306113
Symbolic Parametric Analysis of Embedded Systems with BDD-like Data-Structures
<|reference_start|>Symbolic Parametric Analysis of Embedded Systems with BDD-like Data-Structures: We use dense variable-ordering to define HRD (Hybrid-Restriction Diagram), a new BDD-like data-structure for the representation and manipulation of state-spaces of linear hybrid automata. We present and discuss various manipulation algorithms for HRD, including the basic set-oriented operations, weakest precondition calculation, and normalization. We implemented the ideas and experimented to see their performance. Finally, we have also developed a pruning technique for state-space exploration based on parameter valuation space characterization. The technique showed good promise in our experiment.<|reference_end|>
arxiv
@article{wang2003symbolic, title={Symbolic Parametric Analysis of Embedded Systems with BDD-like Data-Structures}, author={Farn Wang}, journal={arXiv preprint arXiv:cs/0306113}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306113}, primaryClass={cs.DS cs.LO} }
wang2003symbolic
arxiv-671250
cs/0306114
D0 Data Handling Operational Experience
<|reference_start|>D0 Data Handling Operational Experience: We report on the production experience of the D0 experiment at the Fermilab Tevatron, using the SAM data handling system with a variety of computing hardware configurations, batch systems, and mass storage strategies. We have stored more than 300 TB of data in the Fermilab Enstore mass storage system. We deliver data through this system at an average rate of more than 2 TB/day to analysis programs, with a substantial multiplication factor in the consumed data through intelligent cache management. We handle more than 1.7 Million files in this system and provide data delivery to user jobs at Fermilab on four types of systems: a reconstruction farm, a large SMP system, a Linux batch cluster, and a Linux desktop cluster. In addition, we import simulation data generated at 6 sites worldwide, and deliver data to jobs at many more sites. We describe the scope of the data handling deployment worldwide, the operational experience with this system, and the feedback of that experience.<|reference_end|>
arxiv
@article{baranovski2003d0, title={D0 Data Handling Operational Experience}, author={A. Baranovski, C. Brock, D. Bonham, L. Carpenter, L. Lueking, W. Merritt, C. Moore, I. Terekhov, J. Trumbo, S. Veseli, J. Weigand, S. White, K. Yip}, journal={ECONF C0303241:MOKT002,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306114}, primaryClass={cs.DC cs.AI} }
baranovski2003d0
arxiv-671251
cs/0306115
D0 Regional Analysis Center Concepts
<|reference_start|>D0 Regional Analysis Center Concepts: The D0 experiment is facing many exciting challenges providing a computing environment for its worldwide collaboration. Transparent access to data for processing and analysis has been enabled through deployment of its SAM system to collaborating sites and additional functionality will be provided soon with SAMGrid components. In order to maximize access to global storage, computational and intellectual resources, and to enable the system to scale to the large demands soon to be realized, several strategic sites have been identified as Regional Analysis Centers (RAC's). These sites play an expanded role within the system. The philosophy and function of these centers is discussed and details of their composition and operation are outlined. The plan for future additional centers is also addressed.<|reference_end|>
arxiv
@article{lueking2003d0, title={D0 Regional Analysis Center Concepts}, author={L. Lueking, representing the D0 Remote Analysis Task Force}, journal={ECONF C0303241:TUAT003,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306115}, primaryClass={cs.DC} }
lueking2003d0
arxiv-671252
cs/0306116
Global Platform for Rich Media Conferencing and Collaboration
<|reference_start|>Global Platform for Rich Media Conferencing and Collaboration: The Virtual Rooms Videoconferencing Service (VRVS) provides a worldwide videoconferencing service and collaborative environment to the research and education communities. This system provides a low-cost, bandwidth-efficient, extensible means for videoconferencing and remote collaboration over networks within the High Energy and Nuclear Physics communities (HENP). VRVS has become a standard part of the toolset used daily by a large sector of HENP, and it is used increasingly for other DoE/NSF-supported programs. Current features include multi-protocol, multi-OS support for all significant video-enabled clients, including H.323, Mbone, QuickTime, MPEG2, Java Media Framework, and other clients. The current architecture makes VRVS a distributed, highly functional, and efficient software-only system for multipoint audio, video and web conferencing and collaboration over global IP networks. VRVS has developed the VRVS-AG Reflector and a specialized Web interface that enables end users to connect to any Access Grid (AG) session, in any of the AG "virtual venues", from anywhere worldwide. The VRVS system has now been running for the last five and a half years, offering the HENP community a working and reliable tool for collaboration within groups and among physicists dispersed worldwide. The goal of this ongoing effort is to develop the next generation of collaborative systems running over next-generation networks. The new development areas integrate emerging standards, address all security aspects, and will extend the range of video technologies supported by VRVS to cover the latest high-end quality standards. We focus the discussion on the new capabilities provided by the latest version, V3.0, and on its future evolution.<|reference_end|>
arxiv
@article{newman2003global, title={Global Platform for Rich Media Conferencing and Collaboration}, author={Harvey B. Newman, Philippe Galvez, Gregory Denis, David Collados, Kun Wei, David Adamczyk}, journal={arXiv preprint arXiv:cs/0306116}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306116}, primaryClass={cs.MM cs.NI} }
newman2003global
arxiv-671253
cs/0306117
Deciding regular grammar logics with converse through first-order logic
<|reference_start|>Deciding regular grammar logics with converse through first-order logic: We provide a simple translation of the satisfiability problem for regular grammar logics with converse into GF2, which is the intersection of the guarded fragment and the 2-variable fragment of first-order logic. This translation is theoretically interesting because it translates modal logics with certain frame conditions into first-order logic, without explicitly expressing the frame conditions. A consequence of the translation is that the general satisfiability problem for regular grammar logics with converse is in EXPTIME. This extends a previous result of the first author for grammar logics without converse. Using the same method, we show how some other modal logics can be naturally translated into GF2, including nominal tense logics and intuitionistic logic. In our view, the results in this paper show that the natural first-order fragment corresponding to regular grammar logics is simply GF2 without extra machinery such as fixed point-operators.<|reference_end|>
arxiv
@article{demri2003deciding, title={Deciding regular grammar logics with converse through first-order logic}, author={Stephane Demri and Hans de Nivelle}, journal={arXiv preprint arXiv:cs/0306117}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306117}, primaryClass={cs.LO} }
demri2003deciding
arxiv-671254
cs/0306118
On coalgebra based on classes
<|reference_start|>On coalgebra based on classes: Every endofunctor of the category of classes is proved to be set-based in the sense of Aczel and Mendler, therefore, it has a final coalgebra. Other basic properties of these endofunctors are proved, e.g. the existence of a free completely iterative theory.<|reference_end|>
arxiv
@article{adamek2003on, title={On coalgebra based on classes}, author={J.Adamek, S. Milius, J. Velebil}, journal={arXiv preprint arXiv:cs/0306118}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306118}, primaryClass={cs.LO} }
adamek2003on
arxiv-671255
cs/0306119
A Method for Solving Distributed Service Allocation Problems
<|reference_start|>A Method for Solving Distributed Service Allocation Problems: We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that phase transition on the expected quality of the solution exists as the amount of communication between agents increases.<|reference_end|>
arxiv
@article{vidal2003a, title={A Method for Solving Distributed Service Allocation Problems}, author={Jose M Vidal}, journal={arXiv preprint arXiv:cs/0306119}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306119}, primaryClass={cs.MA} }
vidal2003a
arxiv-671256
cs/0306120
Reinforcement Learning with Linear Function Approximation and LQ control Converges
<|reference_start|>Reinforcement Learning with Linear Function Approximation and LQ control Converges: Reinforcement learning is commonly used with function approximation. However, very few positive results are known about the convergence of function-approximation-based RL control algorithms. In this paper we show that TD(0) and Sarsa(0) with linear function approximation are convergent for a simple class of problems, where the system is linear and the costs are quadratic (the LQ control problem). Furthermore, we show that for systems with Gaussian noise and not completely observable states (the LQG problem), these RL algorithms are still convergent if they are combined with Kalman filtering.<|reference_end|>
arxiv
@article{szita2003reinforcement, title={Reinforcement Learning with Linear Function Approximation and LQ control Converges}, author={Istvan Szita and Andras Lorincz}, journal={arXiv preprint arXiv:cs/0306120}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306120}, primaryClass={cs.LG cs.AI} }
szita2003reinforcement
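The setting in the abstract above (linear dynamics, quadratic cost, linear value-function approximation) is easy to instantiate in one dimension. The constants below are arbitrary illustration values, and the single feature phi(x) = x^2 is chosen because the value of a fixed linear policy in an LQ problem is quadratic; this is a sketch of the setting, not the paper's convergence analysis.

```python
# TD(0) with a linear function approximator on a 1-D LQ problem:
#   x' = a*x + b*u + noise,  u = -K*x (fixed policy),  cost = x^2 + u^2.
# The value function of the fixed policy is quadratic, so V(x) ~ w * x^2
# is linear in the single weight w with feature phi(x) = x^2.
import numpy as np

rng = np.random.default_rng(0)
a, b, K, gamma, alpha = 0.9, 0.5, 0.4, 0.95, 0.01   # illustrative constants
w, x = 0.0, 1.0
for _ in range(50_000):
    u = -K * x
    cost = x**2 + u**2
    x_next = a * x + b * u + 0.1 * rng.standard_normal()
    td_error = cost + gamma * w * x_next**2 - w * x**2
    w += alpha * td_error * x**2                    # gradient step on phi(x) = x^2
    x = x_next
print("learned quadratic value coefficient w ~", w)
```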
arxiv-671257
cs/0306121
Reachability problems for communicating finite state machines
<|reference_start|>Reachability problems for communicating finite state machines: The paper deals with the verification of reachability properties in a commonly used state transition model of communication protocols, which consists of finite state machines connected by potentially unbounded FIFO channels. Although simple reachability problems are undecidable for general protocols with unbounded channels, they are decidable for the protocols with the recognizable channel property. The decidability question is open for the protocols with the rational channel property.<|reference_end|>
arxiv
@article{pachl2003reachability, title={Reachability problems for communicating finite state machines}, author={Jan Pachl}, journal={arXiv preprint arXiv:cs/0306121}, year={2003}, number={CS-82-12}, archivePrefix={arXiv}, eprint={cs/0306121}, primaryClass={cs.LO cs.NI} }
pachl2003reachability
arxiv-671258
cs/0306122
The Best Trail Algorithm for Assisted Navigation of Web Sites
<|reference_start|>The Best Trail Algorithm for Assisted Navigation of Web Sites: We present an algorithm called the Best Trail Algorithm, which helps solve the hypertext navigation problem by automating the construction of memex-like trails through the corpus. The algorithm performs a probabilistic best-first expansion of a set of navigation trees to find relevant and compact trails. We describe the implementation of the algorithm, scoring methods for trails, filtering algorithms and a new metric called \emph{potential gain} which measures the potential of a page for future navigation opportunities.<|reference_end|>
arxiv
@article{wheeldon2003the, title={The Best Trail Algorithm for Assisted Navigation of Web Sites}, author={Richard Wheeldon and Mark Levene}, journal={arXiv preprint arXiv:cs/0306122}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306122}, primaryClass={cs.DS cs.IR} }
wheeldon2003the
arxiv-671259
cs/0306123
Heuristic to reduce the complexity of complete bipartite graphs to accelerate the search for maximum weighted matchings with small error
<|reference_start|>Heuristic to reduce the complexity of complete bipartite graphs to accelerate the search for maximum weighted matchings with small error: A maximum weighted matching for bipartite graphs $G=(A \cup B,E)$ can be found by using the algorithm of Edmonds and Karp with a Fibonacci Heap and a modified Dijkstra in $O(nm + n^2 \log{n})$ time, where n is the number of nodes and m the number of edges. For the case that $|A|=|B|$ the number of edges is $n^2$ and therefore the complexity is $O(n^3)$. In this paper we present a simple heuristic method to reduce the number of edges of complete bipartite graphs $G=(A \cup B,E)$ with $|A|=|B|$ such that $m = n\log{n}$, and therefore the complexity becomes $O(n^2 \log{n})$. The weights of all edges in G must be uniformly distributed in [0,1].<|reference_end|>
arxiv
@article{etzold2003heuristic, title={Heuristic to reduce the complexity of complete bipartite graphs to accelerate the search for maximum weighted matchings with small error}, author={Daniel Etzold}, journal={arXiv preprint arXiv:cs/0306123}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306123}, primaryClass={cs.DS} }
etzold2003heuristic
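The abstract does not spell out the heuristic itself, so the following is only one plausible way to thin a complete bipartite graph, given as an n x n weight matrix, down to O(n log n) edges: keep each vertex's heaviest ceil(log2 n) incident edges on both sides. It illustrates the kind of reduction meant, not the paper's method or its error analysis.

```python
# Hypothetical edge-thinning of a complete bipartite graph (weights in [0,1]):
# keep, for every left and right vertex, its ceil(log2 n) heaviest edges,
# leaving m = O(n log n) candidate edges for the matching algorithm.
import math
import numpy as np

def thin_bipartite(W):
    n = W.shape[0]
    k = max(1, math.ceil(math.log2(n)))
    keep = np.zeros_like(W, dtype=bool)
    for i in range(n):                               # heaviest edges per left vertex
        keep[i, np.argsort(W[i])[-k:]] = True
    for j in range(n):                               # heaviest edges per right vertex
        keep[np.argsort(W[:, j])[-k:], j] = True
    return keep                                      # boolean mask of surviving edges

mask = thin_bipartite(np.random.default_rng(1).random((128, 128)))
print(mask.sum(), "of", mask.size, "edges kept")
```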
arxiv-671260
cs/0306124
Updating Probabilities
<|reference_start|>Updating Probabilities: As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a ``naive space'', which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (``coarsening at random'') in the statistical literature characterizes when ``naive'' conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.<|reference_end|>
arxiv
@article{grunwald2003updating, title={Updating Probabilities}, author={Peter D. Grunwald and Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0306124}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306124}, primaryClass={cs.AI} }
grunwald2003updating
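The Monty Hall puzzle cited in the abstract above is easy to simulate: conditioning in the naive space suggests the two unopened doors are equally likely, whereas simulating the host's protocol shows that switching wins about two thirds of the time. This is only a numerical illustration of why the protocol matters, not of the CAR characterization itself.

```python
# Quick simulation of the Monty Hall puzzle: the naive space ("the car is
# behind door 1 or door 2, so it's 50/50") ignores the host's protocol,
# while simulating the protocol shows switching wins about 2/3 of the time.
import random

def trial():
    car = random.randrange(3)
    pick = 0                                  # contestant always picks door 0
    # Host opens a door that is neither the pick nor the car (the protocol).
    opened = random.choice([d for d in range(3) if d != pick and d != car])
    switch_to = next(d for d in range(3) if d not in (pick, opened))
    return switch_to == car

wins = sum(trial() for _ in range(100_000))
print("switching wins with frequency ~", wins / 100_000)   # about 2/3
```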
arxiv-671261
cs/0306125
Predicting Response-Function Results of Electrical/Mechanical Systems Through Artificial Neural Network
<|reference_start|>Predicting Response-Function Results of Electrical/Mechanical Systems Through Artificial Neural Network: In the present paper a new application of Artificial Neural Networks (ANN) is developed: predicting the response-function results of electrical/mechanical systems through an ANN. This method is especially useful for complex systems whose response function cannot be found because of the complexity of the system. The proposed approach shows how, even without knowing the response function, the response-function results can be predicted by applying an ANN to the system. The steps used are: (i) depending on the system, the ANN architecture and the input and output parameters are decided; (ii) training and test data are generated from simplified circuits, and through tactic-superposition of these for complex circuits; (iii) the ANN is trained with the training data over many cycles; and (iv) the test data are used to predict the response-function results. It is found that the proposed method for response prediction works satisfactorily, so it could be used especially for complex systems that other methods are unable to tackle. In this paper the application of the ANN is demonstrated in particular for electrical-circuit systems, but it can be applied to other systems too.<|reference_end|>
arxiv
@article{gupta2003predicting, title={Predicting Response-Function Results of Electrical/Mechanical Systems Through Artificial Neural Network}, author={R. C. Gupta, Ankur Agarwal, Ruchi Gupta, Sanjay Gupta}, journal={arXiv preprint arXiv:cs/0306125}, year={2003}, number={IET-MED-2003-2}, archivePrefix={arXiv}, eprint={cs/0306125}, primaryClass={cs.NE} }
gupta2003predicting
arxiv-671262
cs/0306126
Bayesian Treatment of Incomplete Discrete Data applied to Mutual Information and Feature Selection
<|reference_start|>Bayesian Treatment of Incomplete Discrete Data applied to Mutual Information and Feature Selection: Given the joint chances of a pair of random variables one can compute quantities of interest, like the mutual information. The Bayesian treatment of unknown chances involves computing, from a second order prior distribution and the data likelihood, a posterior distribution of the chances. A common treatment of incomplete data is to assume ignorability and determine the chances by the expectation maximization (EM) algorithm. The two different methods above are well established but typically separated. This paper joins the two approaches in the case of Dirichlet priors, and derives efficient approximations for the mean, mode and the (co)variance of the chances and the mutual information. Furthermore, we prove the unimodality of the posterior distribution, whence the important property of convergence of EM to the global maximum in the chosen framework. These results are applied to the problem of selecting features for incremental learning and naive Bayes classification. A fast filter based on the distribution of mutual information is shown to outperform the traditional filter based on empirical mutual information on a number of incomplete real data sets.<|reference_end|>
arxiv
@article{hutter2003bayesian, title={Bayesian Treatment of Incomplete Discrete Data applied to Mutual Information and Feature Selection}, author={Marcus Hutter and Marco Zaffalon}, journal={Proceedings of the 26th German Conference on Artificial Intelligence (KI-2003) 396-406}, year={2003}, number={IDSIA-15-03}, archivePrefix={arXiv}, eprint={cs/0306126}, primaryClass={cs.LG cs.AI math.PR} }
hutter2003bayesian
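For contrast with the Bayesian filter studied in the abstract above, the traditional baseline it is compared against ranks features by empirical mutual information with the class. A minimal version of that baseline, for complete discrete data only, is sketched below; the Dirichlet-prior distribution of MI and the treatment of missing values are not reproduced.

```python
# Baseline feature filter using *empirical* mutual information between each
# discrete feature and the class label; the paper's Bayesian treatment of the
# MI distribution under Dirichlet priors and missing data is not reproduced.
from collections import Counter
import numpy as np

def empirical_mi(x, y):
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts c, px[a], py[b].
        mi += (c / n) * np.log(c * n / (px[a] * py[b]))
    return mi

def top_features(X, y, k):
    # X: list of feature columns (discrete values); y: class labels.
    scores = [empirical_mi(col, y) for col in X]
    return sorted(range(len(X)), key=lambda i: scores[i], reverse=True)[:k]
```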
arxiv-671263
cs/0306127
Development of a Java Package for Matrix Programming
<|reference_start|>Development of a Java Package for Matrix Programming: We have assembled a Java package, known as MatrixPak, of four classes for the purpose of numerical matrix computation. The classes are matrix, matrix_operations, StrToMatrix, and MatrixToStr, all of which inherit from the java.lang.Object class. Class matrix defines a matrix as a two-dimensional array of float types, and contains the following mathematical methods: transpose, adjoint, determinant, inverse, minor and cofactor. Class matrix_operations contains the following mathematical methods: matrix addition, matrix subtraction, matrix multiplication, and matrix exponential. Class StrToMatrix contains methods necessary to parse a string representation (for example, [[2 3 4]-[5 6 7]]) of a matrix into a matrix definition, whereas class MatrixToStr does the reverse.<|reference_end|>
arxiv
@article{lim2003development, title={Development of a Java Package for Matrix Programming}, author={Ngee-Peng Lim, Maurice HT Ling, Shawn YC Lim, Ji-Hee Choi, Henry BK Teo}, journal={arXiv preprint arXiv:cs/0306127}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306127}, primaryClass={cs.MS} }
lim2003development
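The most concrete detail in the abstract above is the string representation handled by StrToMatrix and MatrixToStr, e.g. [[2 3 4]-[5 6 7]]. The round trip is sketched below in Python rather than Java, purely to illustrate the format as described; the package's own parsing code may differ.

```python
# Sketch of the MatrixPak string format as described in the abstract
# ("[[2 3 4]-[5 6 7]]"), mirroring what StrToMatrix / MatrixToStr are said
# to do; this is an illustration of the format, not the package's code.
def str_to_matrix(s):
    rows = s.strip()[1:-1].split("-")                # "[2 3 4]", "[5 6 7]"
    return [[float(v) for v in r.strip("[]").split()] for r in rows]

def matrix_to_str(m):
    return "[" + "-".join("[" + " ".join(str(v) for v in row) + "]" for row in m) + "]"

assert str_to_matrix("[[2 3 4]-[5 6 7]]") == [[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]]
```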
arxiv-671264
cs/0306128
On the suitability of the 2 x 2 games for studying reciprocal cooperation and kin selection
<|reference_start|>On the suitability of the 2 x 2 games for studying reciprocal cooperation and kin selection: The 2 x 2 games, in particular the Prisoner's Dilemma, have been extensively used in studies into reciprocal cooperation and, to a lesser extent, kin selection. This paper examines the suitability of the 2 x 2 games for modelling the evolution of cooperation through reciprocation and kin selection. This examination is not restricted to the Prisoner's Dilemma, but includes the other non-trivial symmetric 2 x 2 games. We show that the popularity of the Prisoner's Dilemma for modelling social and biotic interaction is justified by its superiority according to these criteria. Indeed, the Prisoner's Dilemma is unique in providing the simplest support for reciprocal cooperation, and additive kin-selected altruism. However, care is still required in choosing the particular Prisoner's Dilemma payoff matrix to use. This paper reviews the impact of non-linear payoffs for the application of Hamilton's rule to typical altruistic interactions, and derives new results for cases in which the roles of potential altruist and beneficiary are separated. In doing so we find the same equilibrium condition holds in continuous games between relatives, and in discrete games with roles.<|reference_end|>
arxiv
@article{marshall2003on, title={On the suitability of the 2 x 2 games for studying reciprocal cooperation and kin selection}, author={James A. R. Marshall (University of Bristol)}, journal={arXiv preprint arXiv:cs/0306128}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306128}, primaryClass={cs.GT} }
marshall2003on
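As a quick companion to the abstract above, the standard ordering conditions that single out the Prisoner's Dilemma among symmetric 2 x 2 games can be checked mechanically. The extra condition 2R > T + S for repeated play is the textbook one; the paper's criteria for reciprocal cooperation and kin selection are finer-grained than this check.

```python
# Convention for a symmetric 2x2 game: R = mutual cooperation, P = mutual
# defection, T = temptation (defect against a cooperator), S = sucker's payoff.
# The classic Prisoner's Dilemma ordering is T > R > P > S, often with the
# extra condition 2R > T + S for repeated play.
def is_prisoners_dilemma(T, R, P, S, repeated=False):
    ordering = T > R > P > S
    return ordering and (2 * R > T + S if repeated else True)

print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))                 # True
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0, repeated=True))  # True (6 > 5)
```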
arxiv-671265
cs/0306129
Security for Grid Services
<|reference_start|>Security for Grid Services: Grid computing is concerned with the sharing and coordinated use of diverse resources in distributed "virtual organizations." The dynamic and multi-institutional nature of these environments introduces challenging security issues that demand new technical approaches. In particular, one must deal with diverse local mechanisms, support dynamic creation of services, and enable dynamic creation of trust domains. We describe how these issues are addressed in two generations of the Globus Toolkit. First, we review the Globus Toolkit version 2 (GT2) approach; then, we describe new approaches developed to support the Globus Toolkit version 3 (GT3) implementation of the Open Grid Services Architecture, an initiative that is recasting Grid concepts within a service oriented framework based on Web services. GT3's security implementation uses Web services security mechanisms for credential exchange and other purposes, and introduces a tight least-privilege model that avoids the need for any privileged network service.<|reference_end|>
arxiv
@article{welch2003security, title={Security for Grid Services}, author={Von Welch, Frank Siebenlist, Ian Foster, John Bresnahan, Karl Czajkowski, Jarek Gawor, Carl Kesselman, Sam Meder, Laura Pearlman, Steven Tuecke}, journal={arXiv preprint arXiv:cs/0306129}, year={2003}, number={Preprint ANL/MCS-P1024-0203}, archivePrefix={arXiv}, eprint={cs/0306129}, primaryClass={cs.CR cs.DC} }
welch2003security
arxiv-671266
cs/0306130
Anusaaraka: Machine Translation in Stages
<|reference_start|>Anusaaraka: Machine Translation in Stages: Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. This explains the difficulty of building a machine translation system. Since the machine is at present not capable of automatically interpreting a general text with sufficient accuracy, let alone re-expressing it for a given audience, it fails to perform as FGH-MT. (The major difficulty that the machine faces in interpreting a given text is the lack of general world knowledge or common-sense knowledge.)<|reference_end|>
arxiv
@article{bharati2003anusaaraka:, title={Anusaaraka: Machine Translation in Stages}, author={Akshar Bharati, Vineet Chaitanya, Amba P. Kulkarni, Rajeev Sangal}, journal={Vivek, A Quarterly in Artificial Intelligence, 10, 3, July 1997, pp. 22-25}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306130}, primaryClass={cs.CL cs.AI} }
bharati2003anusaaraka:
arxiv-671267
cs/0306131
Complexity of Cycle Length Modularity Problems in Graphs
<|reference_start|>Complexity of Cycle Length Modularity Problems in Graphs: The even cycle problem for both undirected and directed graphs has been the topic of intense research in the last decade. In this paper, we study the computational complexity of \emph{cycle length modularity problems}. Roughly speaking, in a cycle length modularity problem, given an input (undirected or directed) graph, one has to determine whether the graph has a cycle $C$ of a specific length (or one of several different lengths), modulo a fixed integer. We denote the two families (one for undirected graphs and one for directed graphs) of problems by $(S,m)\hbox{-}{\rm UC}$ and $(S,m)\hbox{-}{\rm DC}$, where $m \in \mathcal{N}$ and $S \subseteq \{0,1, ..., m-1\}$. $(S,m)\hbox{-}{\rm UC}$ (respectively, $(S,m)\hbox{-}{\rm DC}$) is defined as follows: Given an undirected (respectively, directed) graph $G$, is there a cycle in $G$ whose length, modulo $m$, is a member of $S$? In this paper, we fully classify (i.e., as either polynomial-time solvable or as ${\rm NP}$-complete) each problem $(S,m)\hbox{-}{\rm UC}$ such that $0 \in S$ and each problem $(S,m)\hbox{-}{\rm DC}$ such that $0 \notin S$. We also give a sufficient condition on $S$ and $m$ for the following problem to be polynomial-time computable: $(S,m)\hbox{-}{\rm UC}$ such that $0 \notin S$.<|reference_end|>
arxiv
@article{hemaspaandra2003complexity, title={Complexity of Cycle Length Modularity Problems in Graphs}, author={Edith Hemaspaandra, Holger Spakowski, and Mayur Thakur}, journal={arXiv preprint arXiv:cs/0306131}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306131}, primaryClass={cs.CC} }
hemaspaandra2003complexity
arxiv-671268
cs/0306132
Classical and Nonextensive Information Theory
<|reference_start|>Classical and Nonextensive Information Theory: In this work we firstly review some results in Classical Information Theory. Next, we try to generalize these results by using the Tsallis entropy. We present a preliminary result and discuss our aims in this field.<|reference_end|>
arxiv
@article{giraldi2003classical, title={Classical and Nonextensive Information Theory}, author={Gilson Antonio Giraldi (National Laboratory for Scientific Computing)}, journal={arXiv preprint arXiv:cs/0306132}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306132}, primaryClass={cs.GL} }
giraldi2003classical
arxiv-671269
cs/0306133
GRAPPA: Grid Access Portal for Physics Applications
<|reference_start|>GRAPPA: Grid Access Portal for Physics Applications: Grappa is a Grid portal effort designed to provide physicists with convenient access to Grid tools and services. The ATLAS analysis and control framework, Athena, was used as the target application. Grappa provides basic Grid functionality such as resource configuration, credential testing, job submission, job monitoring, results monitoring, and preliminary integration with the ATLAS replica catalog system, MAGDA. Grappa uses Jython to combine the ease of scripting with the power of Java-based toolkits. This provides a powerful framework for accessing diverse Grid resources with uniform interfaces. The initial prototype system was based on the XCAT Science Portal developed at the Indiana University Extreme Computing Lab and was demonstrated by running Monte Carlo production on the U.S. ATLAS test-bed. The portal also communicated with a European resource broker on WorldGrid as part of the joint iVDGL-DataTAG interoperability project for the IST2002 and SC2002 demonstrations. The current prototype replaces the XCAT Science Portal with an xbooks jetspeed portlet for managing user scripts.<|reference_end|>
arxiv
@article{engh2003grappa:, title={GRAPPA: Grid Access Portal for Physics Applications}, author={D. Engh, S. Smallen, J. Gieraltowski, L. Fang, R. Gardner, D. Gannon, R. Bramley}, journal={ECONFC0303241:TUCT006,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306133}, primaryClass={cs.DC} }
engh2003grappa:
arxiv-671270
cs/0306134
The Complexity of Boolean Constraint Isomorphism
<|reference_start|>The Complexity of Boolean Constraint Isomorphism: In 1978, Schaefer proved his famous dichotomy theorem for generalized satisfiability problems. He defined an infinite number of propositional satisfiability problems (nowadays usually called Boolean constraint satisfaction problems) and showed that all these satisfiability problems are either in P or NP-complete. In recent years, similar results have been obtained for quite a few other problems for Boolean constraints. Almost all of these problems are variations of the satisfiability problem. In this paper, we address a problem that is not a variation of satisfiability, namely, the isomorphism problem for Boolean constraints. Previous work by Böhler et al. showed that the isomorphism problem is either coNP-hard or reducible to the graph isomorphism problem (a problem that is in NP, but not known to be NP-hard), thus distinguishing a hard case and an easier case. However, they did not classify which cases are truly easy, i.e., in P. This paper accomplishes exactly that. It shows that Boolean constraint isomorphism is coNP-hard (and GI-hard), or equivalent to graph isomorphism, or in P, and it gives simple criteria to determine which case holds.<|reference_end|>
arxiv
@article{böhler2003the, title={The Complexity of Boolean Constraint Isomorphism}, author={Elmar Böhler, Edith Hemaspaandra, Steffen Reith, Heribert Vollmer}, journal={arXiv preprint arXiv:cs/0306134}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306134}, primaryClass={cs.CC cs.LO} }
böhler2003the
arxiv-671271
cs/0306135
Pruning Isomorphic Structural Sub-problems in Configuration
<|reference_start|>Pruning Isomorphic Structural Sub-problems in Configuration: Configuring consists in simulating the realization of a complex product from a catalog of component parts, using known relations between types, and picking values for object attributes. This highly combinatorial problem in the field of constraint programming has been addressed with a variety of approaches since the founding system R1 (McDermott82). An inherent difficulty in solving configuration problems is the existence of many isomorphisms among interpretations. We describe a formalism-independent approach to improving the detection of isomorphisms by configurators, which does not require adapting the problem model. To achieve this, we exploit the properties of a characteristic subset of configuration problems, called the structural sub-problem, whose canonical solutions can be produced or tested at limited cost. In this paper we present an algorithm for testing the canonicity of configurations, which can be added as a symmetry-breaking constraint to any configurator. The cost and efficiency of this canonicity test are given.<|reference_end|>
arxiv
@article{grandcolas2003pruning, title={Pruning Isomorphic Structural Sub-problems in Configuration}, author={Stephane Grandcolas and Laurent Henocque and Nicolas Prcovic}, journal={arXiv preprint arXiv:cs/0306135}, year={2003}, number={LSIS-2003-004}, archivePrefix={arXiv}, eprint={cs/0306135}, primaryClass={cs.AI} }
grandcolas2003pruning
arxiv-671272
cs/0306136
Distributive Computability
<|reference_start|>Distributive Computability: This thesis presents a series of theoretical results and practical realisations about the theory of computation in distributive categories. Distributive categories have been proposed as a foundational tool for Computer Science in the last years, starting from the papers of R.F.C. Walters. We shall focus on two major topics: distributive computability, i.e., a generalized theory of computability based on distributive categories, and the Imp(G) language, which is a language based on the syntax of distributive categories. The link between the former and the latter is that the functions computed by Imp(G) programs are exactly the distributively computable functions.<|reference_end|>
arxiv
@article{vigna2003distributive, title={Distributive Computability}, author={Sebastiano Vigna}, journal={arXiv preprint arXiv:cs/0306136}, year={2003}, archivePrefix={arXiv}, eprint={cs/0306136}, primaryClass={cs.LO cs.PL} }
vigna2003distributive
arxiv-671273
cs/0307001
Serving Database Information Using a Flexible Server in a Three Tier Architecture
<|reference_start|>Serving Database Information Using a Flexible Server in a Three Tier Architecture: The D0 experiment at Fermilab relies on a central Oracle database for storing all detector calibration information. Access to this data is needed by hundreds of physics applications distributed worldwide. In order to meet the demands of these applications from scarce resources, we have created a distributed system that isolates the user applications from the database facilities. This system, known as the Database Application Network (DAN) operates as the middle tier in a three tier architecture. A DAN server employs a hierarchical caching scheme and database connection management facility that limits access to the database resource. The modular design allows for caching strategies and database access components to be determined by runtime configuration. To solve scalability problems, a proxy database component allows for DAN servers to be arranged in a hierarchy. Also included is an event based monitoring system that is currently being used to collect statistics for performance analysis and problem diagnosis. DAN servers are currently implemented as a Python multithreaded program using CORBA for network communications and interface specification. The requirement details, design, and implementation of DAN are discussed along with operational experience and future plans.<|reference_end|>
arxiv
@article{greenlee2003serving, title={Serving Database Information Using a Flexible Server in a Three Tier Architecture}, author={Herbert Greenlee, Robert Illingworth, Jim Kowalkowski, Anil Kumar, Lee Lueking, Taka Yasuda, Margherita Vittone, Stephen White}, journal={ECONFC0303241:THKT003,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307001}, primaryClass={cs.DC cs.DB} }
greenlee2003serving
arxiv-671274
cs/0307002
AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents
<|reference_start|>AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents: A satisfactory multiagent learning algorithm should, {\em at a minimum}, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action repeated games--assuming that the opponent's (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in {\em all} repeated (finite) games. It requires only that the other players' actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that {\em eventually become} stationary. The basic idea behind AWESOME ({\em Adapt When Everybody is Stationary, Otherwise Move to Equilibrium}) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing other multiagent learning algorithms also.<|reference_end|>
arxiv
@article{conitzer2003awesome:, title={AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307002}, primaryClass={cs.GT cs.LG cs.MA} }
conitzer2003awesome:
arxiv-671275
cs/0307003
How many candidates are needed to make elections hard to manipulate?
<|reference_start|>How many candidates are needed to make elections hard to manipulate?: In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard computationally. The complexity of manipulating realistic elections where the number of candidates is a small constant was recently studied (Conitzer 2002), but the emphasis was on the question of whether or not a protocol becomes hard to manipulate for some constant number of candidates. That work, in many cases, left open the question: How many candidates are needed to make elections hard to manipulate? This is a crucial question when comparing the relative manipulability of different voting protocols. In this paper we answer that question for the voting protocols of the earlier study: plurality, Borda, STV, Copeland, maximin, regular cup, and randomized cup. We also answer that question for two voting protocols for which no results on the complexity of manipulation have been derived before: veto and plurality with runoff. It turns out that the voting protocols under study become hard to manipulate at 3 candidates, 4 candidates, 7 candidates, or never.<|reference_end|>
arxiv
@article{conitzer2003how, title={How many candidates are needed to make elections hard to manipulate?}, author={Vincent Conitzer, Jerome Lang, Tuomas Sandholm}, journal={In Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge (TARK-03), pp. 201-214, Bloomington, Indiana, USA, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307003}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2003how
arxiv-671276
cs/0307004
State complexes for metamorphic robots
<|reference_start|>State complexes for metamorphic robots: A metamorphic robotic system is an aggregate of homogeneous robot units which can individually and selectively locomote in such a way as to change the global shape of the system. We introduce a mathematical framework for defining and analyzing general metamorphic robots. This formal structure, combined with ideas from geometric group theory, leads to a natural extension of a configuration space for metamorphic robots -- the state complex -- which is especially adapted to parallelization. We present an algorithm for optimizing reconfiguration sequences with respect to elapsed time. A universal geometric property of state complexes -- non-positive curvature -- is the key to proving convergence to the globally time-optimal solution.<|reference_end|>
arxiv
@article{abrams2003state, title={State complexes for metamorphic robots}, author={A. Abrams and R. Ghrist}, journal={arXiv preprint arXiv:cs/0307004}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307004}, primaryClass={cs.RO cs.CG} }
abrams2003state
arxiv-671277
cs/0307005
Optimal Adaptive Algorithms for Finding the Nearest and Farthest Point on a Parametric Black-Box Curve
<|reference_start|>Optimal Adaptive Algorithms for Finding the Nearest and Farthest Point on a Parametric Black-Box Curve: We consider a general model for representing and manipulating parametric curves, in which a curve is specified by a black box mapping a parameter value between 0 and 1 to a point in Euclidean d-space. In this model, we consider the nearest-point-on-curve and farthest-point-on-curve problems: given a curve C and a point p, find a point on C nearest to p or farthest from p. In the general black-box model, no algorithm can solve these problems. Assuming a known bound on the speed of the curve (a Lipschitz condition), the answer can be estimated up to an additive error of epsilon using O(1/epsilon) samples, and this bound is tight in the worst case. However, many instances can be solved with substantially fewer samples, and we give algorithms that adapt to the inherent difficulty of the particular instance, up to a logarithmic factor. More precisely, if OPT(C,p,epsilon) is the minimum number of samples of C that every correct algorithm must perform to achieve tolerance epsilon, then our algorithm performs O(OPT(C,p,epsilon) log (epsilon^(-1)/OPT(C,p,epsilon))) samples. Furthermore, any algorithm requires Omega(k log (epsilon^(-1)/k)) samples for some instance C' with OPT(C',p,epsilon) = k; except that, for the nearest-point-on-curve problem when the distance between C and p is less than epsilon, OPT is 1 but the upper and lower bounds on the number of samples are both Theta(1/epsilon). When bounds on relative error are desired, we give algorithms that perform O(OPT log (2+(1+epsilon^(-1)) m^(-1)/OPT)) samples (where m is the exact minimum or maximum distance from p to C) and prove that Omega(OPT log (1/epsilon)) samples are necessary on some problem instances.<|reference_end|>
arxiv
@article{baran2003optimal, title={Optimal Adaptive Algorithms for Finding the Nearest and Farthest Point on a Parametric Black-Box Curve}, author={Ilya Baran and Erik D. Demaine}, journal={arXiv preprint arXiv:cs/0307005}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307005}, primaryClass={cs.CG cs.DS} }
baran2003optimal
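A note on the record above: its abstract mentions a simple non-adaptive baseline in which, given a Lipschitz bound on the curve's speed, O(1/epsilon) uniform samples estimate the nearest distance to within an additive epsilon. The Python sketch below illustrates only that baseline, not the paper's adaptive algorithm; the example curve, query point, and Lipschitz constant are arbitrary choices made for illustration.

```python
# Illustrative sketch of the non-adaptive baseline only (not the adaptive
# algorithm of the paper above).  The curve, query point, and constants are
# example choices.
import math

def curve(t):
    """Example black-box curve C: [0,1] -> R^2 (here, a unit circle)."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

LIPSCHITZ = 2 * math.pi  # upper bound on the curve's speed |C'(t)|

def nearest_distance_estimate(p, eps):
    """Distance from p to the curve, correct to an additive error <= eps.

    Samples spaced h = 2*eps/LIPSCHITZ in parameter space guarantee that
    every curve point lies within eps of some sampled point, so the sampled
    minimum exceeds the true minimum distance by at most eps.
    """
    n_samples = int(math.ceil(LIPSCHITZ / (2 * eps))) + 1
    best = float("inf")
    for i in range(n_samples):
        t = i / (n_samples - 1)
        x, y = curve(t)
        best = min(best, math.hypot(x - p[0], y - p[1]))
    return best

print(nearest_distance_estimate((2.0, 0.0), 1e-3))  # true distance is 1.0
```

Halving eps doubles the number of samples here, which is the Theta(1/epsilon) worst-case cost that the paper's adaptive algorithms improve on for easy instances.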
arxiv-671278
cs/0307006
BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games
<|reference_start|>BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games: We present BL-WoLF, a framework for learnability in repeated zero-sum games where the cost of learning is measured by the losses the learning agent accrues (rather than the number of rounds). The game is adversarially chosen from some family that the learner knows. The opponent knows the game and the learner's learning strategy. The learner tries to either not accrue losses, or to quickly learn about the game so as to avoid future losses (this is consistent with the Win or Learn Fast (WoLF) principle; BL stands for ``bounded loss''). Our framework allows for both probabilistic and approximate learning. The resultant notion of {\em BL-WoLF}-learnability can be applied to any class of games, and allows us to measure the inherent disadvantage to a player that does not know which game in the class it is in. We present {\em guaranteed BL-WoLF-learnability} results for families of games with deterministic payoffs and families of games with stochastic payoffs. We demonstrate that these families are {\em guaranteed approximately BL-WoLF-learnable} with lower cost. We then demonstrate families of games (both stochastic and deterministic) that are not guaranteed BL-WoLF-learnable. We show that those families, nevertheless, are {\em BL-WoLF-learnable}. To prove these results, we use a key lemma which we derive.<|reference_end|>
arxiv
@article{conitzer2003bl-wolf:, title={BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307006}, primaryClass={cs.GT cs.LG cs.MA} }
conitzer2003bl-wolf:
arxiv-671279
cs/0307007
Management of Grid Jobs and Information within SAMGrid
<|reference_start|>Management of Grid Jobs and Information within SAMGrid: We describe some of the key aspects of the SAMGrid system, used by the D0 and CDF experiments at Fermilab. Building on the sustained success of the data handling part of SAMGrid, we have developed new services for job management and information handling. Our job management is rooted in Condor-G and uses enhancements that are of general applicability for HEP grids. Our information system is based on a uniform framework for configuration management built on XML data representation and processing.<|reference_end|>
arxiv
@article{baranovski2003management, title={Management of Grid Jobs and Information within SAMGrid}, author={A. Baranovski, G. Garzoglio, A. Kreymer, L. Lueking, S. Stonjek, I. Terekhov, F. Wuerthwein, A. Roy, P. Mhashikar, V. Murthi, T. Tannenbaum, R. Walker, F. Ratnikov, T. Rockwell}, journal={ECONF C0303241:TUAT002,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307007}, primaryClass={cs.DC} }
baranovski2003management
arxiv-671280
cs/0307008
Eprints and the Open Archives Initiative
<|reference_start|>Eprints and the Open Archives Initiative: The Open Archives Initiative (OAI) was created as a practical way to promote interoperability between eprint repositories. Although the scope of the OAI has been broadened, eprint repositories still represent a significant fraction of OAI data providers. In this article I present a brief survey of OAI eprint repositories, and of services using metadata harvested from eprint repositories using the OAI protocol for metadata harvesting (OAI-PMH). I then discuss several situations where metadata harvesting may be used to further improve the utility of eprint archives as a component of the scholarly communication infrastructure.<|reference_end|>
arxiv
@article{warner2003eprints, title={Eprints and the Open Archives Initiative}, author={Simeon Warner}, journal={Library Hi Tech, Volume 21, Number 2, 151-158 (2003)}, year={2003}, doi={10.1108/07378830310479794}, archivePrefix={arXiv}, eprint={cs/0307008}, primaryClass={cs.DL} }
warner2003eprints
arxiv-671281
cs/0307009
Finding the "truncated" polynomial that is closest to a function
<|reference_start|>Finding the "truncated" polynomial that is closest to a function: When implementing regular enough functions (e.g., elementary or special functions) on a computing system, we frequently use polynomial approximations. In most cases, the polynomial that best approximates (for a given distance and in a given interval) a function has coefficients that are not exactly representable with a finite number of bits. And yet, the polynomial approximations that are actually implemented do have coefficients that are represented with a finite - and sometimes small - number of bits: this is due to the finiteness of the floating-point representations (for software implementations), and to the need to have small, hence fast and/or inexpensive, multipliers (for hardware implementations). We then have to consider polynomial approximations for which the degree-$i$ coefficient has at most $m_i$ fractional bits (in other words, it is a rational number with denominator $2^{m_i}$). We provide a general method for finding the best polynomial approximation under this constraint. Then, we suggest refinements that can be used to accelerate our method.<|reference_end|>
arxiv
@article{brisebarre2003finding, title={Finding the "truncated" polynomial that is closest to a function}, author={Nicolas Brisebarre and Jean-Michel Muller}, journal={arXiv preprint arXiv:cs/0307009}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307009}, primaryClass={cs.MS} }
brisebarre2003finding
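The constraint studied in the record above, that the degree-i coefficient must be a multiple of 2^(-m_i), can be made concrete with a small brute-force experiment. The Python sketch below is not the paper's method: it merely compares naive coefficient rounding against an exhaustive search over neighbouring representable coefficients, and the target function (exp on [0,1]), the degree, and the bit budgets m_i are arbitrary example choices.

```python
# Illustrative sketch only: not the algorithm from the paper above.  It shows
# the constraint being studied -- each degree-i coefficient must be a multiple
# of 2**(-m_i) -- by comparing naive rounding with a small neighbourhood search.
import itertools
import math

F = math.exp                 # function to approximate (example choice)
A, B = 0.0, 1.0              # approximation interval
M = (8, 8, 6)                # m_i: fractional bits allowed for coefficient i
GRID = [A + (B - A) * k / 512 for k in range(513)]

# Real-coefficient starting polynomial (truncated Taylor series of exp here;
# a true minimax polynomial would be a better starting point).
real_coeffs = [1.0, 1.0, 0.5]

def eval_poly(coeffs, x):
    """Horner evaluation of sum_i coeffs[i] * x**i."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def max_err(coeffs):
    return max(abs(eval_poly(coeffs, x) - F(x)) for x in GRID)

# Naive approach: round each coefficient to the nearest multiple of 2**-m_i.
naive = [round(c * 2**m) / 2**m for c, m in zip(real_coeffs, M)]

# Small exhaustive search: try the representable values adjacent to each
# rounded coefficient and keep the combination with the smallest max error.
best, best_err = naive, max_err(naive)
steps = [2**-m for m in M]
for deltas in itertools.product((-1, 0, 1), repeat=len(naive)):
    cand = [c + d * s for c, d, s in zip(naive, deltas, steps)]
    err = max_err(cand)
    if err < best_err:
        best, best_err = cand, err

print("naive rounding error :", max_err(naive))
print("neighbourhood search :", best_err, "coeffs:", best)
```

In this toy instance the neighbourhood search already improves slightly on naive rounding; the paper's contribution is finding the truly best constrained polynomial in a principled way.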
arxiv-671282
cs/0307010
Probabilistic Reasoning as Information Compression by Multiple Alignment, Unification and Search: An Introduction and Overview
<|reference_start|>Probabilistic Reasoning as Information Compression by Multiple Alignment, Unification and Search: An Introduction and Overview: This article introduces the idea that probabilistic reasoning (PR) may be understood as "information compression by multiple alignment, unification and search" (ICMAUS). In this context, multiple alignment has a meaning which is similar to but distinct from its meaning in bio-informatics, while unification means a simple merging of matching patterns, a meaning which is related to but simpler than the meaning of that term in logic. A software model, SP61, has been developed for the discovery and formation of 'good' multiple alignments, evaluated in terms of information compression. The model is described in outline. Using examples from the SP61 model, this article describes in outline how the ICMAUS framework can model various kinds of PR including: PR in best-match pattern recognition and information retrieval; one-step 'deductive' and 'abductive' PR; inheritance of attributes in a class hierarchy; chains of reasoning (probabilistic decision networks and decision trees, and PR with 'rules'); geometric analogy problems; nonmonotonic reasoning and reasoning with default values; modelling the function of a Bayesian network.<|reference_end|>
arxiv
@article{wolff2003probabilistic, title={Probabilistic Reasoning as Information Compression by Multiple Alignment, Unification and Search: An Introduction and Overview}, author={J Gerard Wolff}, journal={Journal of Universal Computer Science 5 (7), 418--462, 1999}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307010}, primaryClass={cs.AI} }
wolff2003probabilistic
arxiv-671283
cs/0307011
Supporting Out-of-turn Interactions in a Multimodal Web Interface
<|reference_start|>Supporting Out-of-turn Interactions in a Multimodal Web Interface: Multimodal interfaces are becoming increasingly important with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. This article investigates systems support for web browsing in a multimodal interface. Specifically, we outline the design and implementation of a software framework that integrates hyperlink and speech modes of interaction. Instead of viewing speech as merely an alternative interaction medium, the framework uses it to support out-of-turn interaction, providing a flexibility of information access not possible with hyperlinks alone. This approach enables the creation of websites that adapt to the needs of users, yet permits the designer fine-grained control over what interactions to support. Design methodology, implementation details, and two case studies are presented.<|reference_end|>
arxiv
@article{shenoy2003supporting, title={Supporting Out-of-turn Interactions in a Multimodal Web Interface}, author={Atul Shenoy, Naren Ramakrishnan, Manuel A. Perez-Quinones, and Srinidhi Varadarajan}, journal={arXiv preprint arXiv:cs/0307011}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307011}, primaryClass={cs.IR cs.HC} }
shenoy2003supporting
arxiv-671284
cs/0307012
Observation-based Cooperation Enforcement in Ad Hoc Networks
<|reference_start|>Observation-based Cooperation Enforcement in Ad Hoc Networks: Ad hoc networks rely on the cooperation of the nodes participating in the network to forward packets for each other. A node may decide not to cooperate to save its resources while still using the network to relay its traffic. If too many nodes exhibit this behavior, network performance degrades and cooperating nodes may find themselves unfairly loaded. Most previous efforts to counter this behavior have relied on further cooperation between nodes to exchange reputation information about other nodes. If a node observes another node not participating correctly, it reports this observation to other nodes who then take action to avoid being affected and potentially punish the bad node by refusing to forward its traffic. Unfortunately, such second-hand reputation information is subject to false accusations and requires maintaining trust relationships with other nodes. The objective of OCEAN is to avoid this trust-management machinery and see how far we can get simply by using direct first-hand observations of other nodes' behavior. We find that, in many scenarios, OCEAN can do as well as, or even better than, schemes requiring second-hand reputation exchanges. This encouraging result could possibly help obviate solutions requiring trust-management for some contexts.<|reference_end|>
arxiv
@article{bansal2003observation-based, title={Observation-based Cooperation Enforcement in Ad Hoc Networks}, author={Sorav Bansal, Mary Baker}, journal={arXiv preprint arXiv:cs/0307012}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307012}, primaryClass={cs.NI} }
bansal2003observation-based
arxiv-671285
cs/0307013
'Computing' as Information Compression by Multiple Alignment, Unification and Search
<|reference_start|>'Computing' as Information Compression by Multiple Alignment, Unification and Search: This paper argues that the operations of a 'Universal Turing Machine' (UTM) and equivalent mechanisms such as the 'Post Canonical System' (PCS) - which are widely accepted as definitions of the concept of `computing' - may be interpreted as *information compression by multiple alignment, unification and search* (ICMAUS). The motivation for this interpretation is that it suggests ways in which the UTM/PCS model may be augmented in a proposed new computing system designed to exploit the ICMAUS principles as fully as possible. The provision of a relatively sophisticated search mechanism in the proposed 'SP' system appears to open the door to the integration and simplification of a range of functions including unsupervised inductive learning, best-match pattern recognition and information retrieval, probabilistic reasoning, planning and problem solving, and others. Detailed consideration of how the ICMAUS principles may be applied to these functions is outside the scope of this article but relevant sources are cited in this article.<|reference_end|>
arxiv
@article{wolff2003'computing', title={'Computing' as Information Compression by Multiple Alignment, Unification and Search}, author={J Gerard Wolff}, journal={Journal of Universal Computer Science 5(11), 776--815, 1999}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307013}, primaryClass={cs.AI cs.CC} }
wolff2003'computing'
arxiv-671286
cs/0307014
Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search
<|reference_start|>Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search: This article introduces the idea that "information compression by multiple alignment, unification and search" (ICMAUS) provides a framework within which natural language syntax may be represented in a simple format and the parsing and production of natural language may be performed in a transparent manner. The ICMAUS concepts are embodied in a software model, SP61. The organisation and operation of the model are described and a simple example is presented showing how the model can achieve parsing of natural language. Notwithstanding the apparent paradox of 'decompression by compression', the ICMAUS framework, without any modification, can produce a sentence by decoding a compressed code for the sentence. This is illustrated with output from the SP61 model. The article includes four other examples - one of the parsing of a sentence in French and three from the domain of English auxiliary verbs. These examples show how the ICMAUS framework and the SP61 model can accommodate 'context sensitive' features of syntax in a relatively simple and direct manner.<|reference_end|>
arxiv
@article{wolff2003syntax, title={Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search}, author={J Gerard Wolff}, journal={Journal of Universal Computer Science 6(8), 781--829, 2000}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307014}, primaryClass={cs.AI cs.CL} }
wolff2003syntax
arxiv-671287
cs/0307015
Architecture of an Open-Sourced, Extensible Data Warehouse Builder: InterBase 6 Data Warehouse Builder (IB-DWB)
<|reference_start|>Architecture of an Open-Sourced, Extensible Data Warehouse Builder: InterBase 6 Data Warehouse Builder (IB-DWB): We report the development of an open-source data warehouse builder, the InterBase Data Warehouse Builder (IB-DWB), based on the Borland InterBase 6 Open Edition Database Server. InterBase 6 is used for its low maintenance and small footprint. IB-DWB is designed modularly and consists of five main components, the Data Plug Platform, Discoverer Platform, Multi-Dimensional Cube Builder, and Query Supporter, bound together by a Kernel. It is also an extensible system, a capability provided by the Data Plug Platform and the Discoverer Platform. Currently, extensions are only possible via dynamic link libraries (DLLs). The Multi-Dimensional Cube Builder provides a basic means of data aggregation. The architectural philosophy of IB-DWB centers on providing an extensible base platform whose functionality is supported by expansion modules. IB-DWB is currently hosted on sourceforge.net (Project Unix Name: ib-dwb) and licensed under the GNU General Public License, Version 2.<|reference_end|>
arxiv
@article{ling2003architecture, title={Architecture of an Open-Sourced, Extensible Data Warehouse Builder: InterBase 6 Data Warehouse Builder (IB-DWB)}, author={Maurice HT Ling, Chi Wai So}, journal={Proceedings of the First Australian Undergraduate Students' Computing Conference, pp. 40-45, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307015}, primaryClass={cs.DB} }
ling2003architecture
arxiv-671288
cs/0307016
Complexity of Determining Nonemptiness of the Core
<|reference_start|>Complexity of Determining Nonemptiness of the Core: Coalition formation is a key problem in automated negotiation among self-interested agents, and other multiagent applications. A coalition of agents can sometimes accomplish things that the individual agents cannot, or can do things more efficiently. However, motivating the agents to abide to a solution requires careful analysis: only some of the solutions are stable in the sense that no group of agents is motivated to break off and form a new coalition. This constraint has been studied extensively in cooperative game theory. However, the computational questions around this constraint have received less attention. When it comes to coalition formation among software agents (that represent real-world parties), these questions become increasingly explicit. In this paper we define a concise general representation for games in characteristic form that relies on superadditivity, and show that it allows for efficient checking of whether a given outcome is in the core. We then show that determining whether the core is nonempty is $\mathcal{NP}$-complete both with and without transferable utility. We demonstrate that what makes the problem hard in both cases is determining the collaborative possibilities (the set of outcomes possible for the grand coalition), by showing that if these are given, the problem becomes tractable in both cases. However, we then demonstrate that for a hybrid version of the problem, where utility transfer is possible only within the grand coalition, the problem remains $\mathcal{NP}$-complete even when the collaborative possibilities are given.<|reference_end|>
arxiv
@article{conitzer2003complexity, title={Complexity of Determining Nonemptiness of the Core}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307016}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2003complexity
arxiv-671289
cs/0307017
Definition and Complexity of Some Basic Metareasoning Problems
<|reference_start|>Definition and Complexity of Some Basic Metareasoning Problems: In most real-world settings, due to limited time or other resources, an agent cannot perform all potentially useful deliberation and information gathering actions. This leads to the metareasoning problem of selecting such actions. Decision-theoretic methods for metareasoning have been studied in AI, but there are few theoretical results on the complexity of metareasoning. We derive hardness results for three settings which most real metareasoning systems would have to encompass as special cases. In the first, the agent has to decide how to allocate its deliberation time across anytime algorithms running on different problem instances. We show this to be $\mathcal{NP}$-complete. In the second, the agent has to (dynamically) allocate its deliberation or information gathering resources across multiple actions that it has to choose among. We show this to be $\mathcal{NP}$-hard even when evaluating each individual action is extremely simple. In the third, the agent has to (dynamically) choose a limited number of deliberation or information gathering actions to disambiguate the state of the world. We show that this is $\mathcal{NP}$-hard under a natural restriction, and $\mathcal{PSPACE}$-hard in general.<|reference_end|>
arxiv
@article{conitzer2003definition, title={Definition and Complexity of Some Basic Metareasoning Problems}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307017}, primaryClass={cs.AI cs.CC} }
conitzer2003definition
arxiv-671290
cs/0307018
Universal Voting Protocol Tweaks to Make Manipulation Hard
<|reference_start|>Universal Voting Protocol Tweaks to Make Manipulation Hard: Voting is a general method for preference aggregation in multiagent settings, but seminal results have shown that all (nondictatorial) voting protocols are manipulable. One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard computationally. A number of recent papers study the complexity of manipulating existing protocols. This paper is the first work to take the next step of designing new protocols that are especially hard to manipulate. Rather than designing these new protocols from scratch, we instead show how to tweak existing protocols to make manipulation hard, while leaving much of the original nature of the protocol intact. The tweak studied consists of adding one elimination preround to the election. Surprisingly, this extremely simple and universal tweak makes typical protocols hard to manipulate! The protocols become NP-hard, #P-hard, or PSPACE-hard to manipulate, depending on whether the schedule of the preround is determined before the votes are collected, after the votes are collected, or the scheduling and the vote collecting are interleaved, respectively. We prove general sufficient conditions on the protocols for this tweak to introduce the hardness, and show that the most common voting protocols satisfy those conditions. These are the first results in voting settings where manipulation is in a higher complexity class than NP (presuming PSPACE $\neq$ NP).<|reference_end|>
arxiv
@article{conitzer2003universal, title={Universal Voting Protocol Tweaks to Make Manipulation Hard}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307018}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2003universal
arxiv-671291
cs/0307019
Lattice QCD Production on Commodity Clusters at Fermilab
<|reference_start|>Lattice QCD Production on Commodity Clusters at Fermilab: We describe the construction and results to date of Fermilab's three Myrinet-networked lattice QCD production clusters (an 80-node dual Pentium III cluster, a 48-node dual Xeon cluster, and a 128-node dual Xeon cluster). We examine a number of aspects of performance of the MILC lattice QCD code running on these clusters.<|reference_end|>
arxiv
@article{holmgren2003lattice, title={Lattice QCD Production on Commodity Clusters at Fermilab}, author={D. Holmgren, A. Singh, P. Mackenzie, J. Simone}, journal={ECONF C0303241:TUIT004,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307019}, primaryClass={cs.DC} }
holmgren2003lattice
arxiv-671292
cs/0307020
Defying Dimensions Mod 6
<|reference_start|>Defying Dimensions Mod 6: We show that a certain representation of the matrix product can be computed with $n^{o(1)}$ multiplications. We also show that similar representations of matrices can be compressed enormously.<|reference_end|>
arxiv
@article{grolmusz2003defying, title={Defying Dimensions Mod 6}, author={Vince Grolmusz}, journal={arXiv preprint arXiv:cs/0307020}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307020}, primaryClass={cs.CC} }
grolmusz2003defying
arxiv-671293
cs/0307021
Tools and Techniques for Managing Clusters for SciDAC Lattice QCD at Fermilab
<|reference_start|>Tools and Techniques for Managing Clusters for SciDAC Lattice QCD at Fermilab: Fermilab operates several clusters for lattice gauge computing. Minimal manpower is available to manage these clusters. We have written a number of tools and developed techniques to cope with this task. We describe our tools, which use the IPMI facilities of our systems for hardware management tasks such as remote power control, remote system resets, and health monitoring. We discuss our techniques involving network booting for installation and upgrades of the operating system on these computers, and for reloading BIOS and other firmware. Finally, we discuss our tools for parallel command processing and their use in monitoring and administering the PBS batch queue system used on our clusters.<|reference_end|>
arxiv
@article{singh2003tools, title={Tools and Techniques for Managing Clusters for SciDAC Lattice QCD at Fermilab}, author={A. Singh, D. Holmgren, R. Rechenmacher, S. Epsteyn}, journal={ECONF C0303241:TUIT005,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307021}, primaryClass={cs.DC} }
singh2003tools
arxiv-671294
cs/0307022
Transformations of Logic Programs with Goals as Arguments
<|reference_start|>Transformations of Logic Programs with Goals as Arguments: We consider a simple extension of logic programming where variables may range over goals and goals may be arguments of predicates. In this language we can write logic programs which use goals as data. We give practical evidence that, by exploiting this capability when transforming programs, we can improve program efficiency. We propose a set of program transformation rules which extend the familiar unfolding and folding rules and allow us to manipulate clauses with goals which occur as arguments of predicates. In order to prove the correctness of these transformation rules, we formally define the operational semantics of our extended logic programming language. This semantics is a simple variant of LD-resolution. When suitable conditions are satisfied this semantics agrees with LD-resolution and, thus, the programs written in our extended language can be run by ordinary Prolog systems. Our transformation rules are shown to preserve the operational semantics and termination.<|reference_end|>
arxiv
@article{pettorossi2003transformations, title={Transformations of Logic Programs with Goals as Arguments}, author={Alberto Pettorossi and Maurizio Proietti}, journal={arXiv preprint arXiv:cs/0307022}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307022}, primaryClass={cs.PL cs.LO} }
pettorossi2003transformations
arxiv-671295
cs/0307023
Testing Bipartiteness of Geometric Intersection Graphs
<|reference_start|>Testing Bipartiteness of Geometric Intersection Graphs: We show how to test the bipartiteness of an intersection graph of n line segments or simple polygons in the plane, or of balls in R^d, in time O(n log n). More generally we find subquadratic algorithms for connectivity and bipartiteness testing of intersection graphs of a broad class of geometric objects. For unit balls in R^d, connectivity testing has equivalent randomized complexity to construction of Euclidean minimum spanning trees, and hence is unlikely to be solved as efficiently as bipartiteness testing. For line segments or planar disks, testing k-colorability of intersection graphs for k>2 is NP-complete.<|reference_end|>
arxiv
@article{eppstein2003testing, title={Testing Bipartiteness of Geometric Intersection Graphs}, author={David Eppstein}, journal={ACM Trans. Algorithms 5(2):15, 2009}, year={2003}, doi={10.1145/1497290.1497291}, archivePrefix={arXiv}, eprint={cs/0307023}, primaryClass={cs.CG} }
eppstein2003testing
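For the record above, a concrete but naive reading of the problem is sketched below in Python: build the intersection graph explicitly with O(n^2) pair tests, which is exactly the step the paper's subquadratic algorithms avoid, and then test bipartiteness by BFS 2-colouring. The disk coordinates are made-up example values.

```python
# Illustrative sketch only, not the subquadratic algorithm from the paper
# above: brute-force construction of a disk intersection graph followed by a
# standard BFS 2-colouring bipartiteness test.
from collections import deque

disks = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (3.0, 0.0, 1.0), (1.5, 2.5, 1.0)]

def intersects(d1, d2):
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

# Brute-force adjacency lists of the intersection graph (O(n^2) pair tests).
n = len(disks)
adj = [[j for j in range(n) if j != i and intersects(disks[i], disks[j])]
       for i in range(n)]

def is_bipartite(adj):
    colour = [None] * len(adj)
    for s in range(len(adj)):
        if colour[s] is not None:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] is None:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

print("bipartite:", is_bipartite(adj))
```

The point of the paper is that, for disks and several other classes of geometric objects, this quadratic graph-construction step can be bypassed entirely.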
arxiv-671296
cs/0307024
Architecture of monitoring elements for the network element modeling in a Grid infrastructure
<|reference_start|>Architecture of monitoring elements for the network element modeling in a Grid infrastructure: Several tools exist that collect host-to-host connectivity measurements. To improve the usability of such measurements, they should be mapped into a framework consisting of complex subsystems and the infrastructure that connects them. We introduce one such framework and analyze the architectural implications on the network structure. In our framework, a complex subsystem consists of several computing facilities and the infrastructure that connects them: we call it a "monitoring domain". The task of measuring the connectivity between "monitoring domains" is considered distinct from the activity of "storage" and "computing" elements. Therefore we introduce a new element in our topology: we call it a "theodolite" element, since its function is similar to that of a transponder. Using these basic concepts, we analyze the architectural implications on the network structure: in a nutshell, if we want "theodolites" to serve as a reference, then the contribution to the relevant network metrics due to the "monitoring domain" infrastructure must be negligible with respect to the contributions of the inter-domain infrastructure. In addition, all "theodolites" of a "monitoring domain" must give an image of the inter-domain infrastructure that is consistent with the one experienced by network applications. We conclude by giving a running SQL example of how information about "monitoring domains" and "theodolites" could be organized, and we outline the application of such a framework in the GLUE schema activity for the network element.<|reference_end|>
arxiv
@article{ciuffoletti2003architecture, title={Architecture of monitoring elements for the network element modeling in a Grid infrastructure}, author={A. Ciuffoletti, T. Ferrari, A. Ghiselli, C. Vistoli}, journal={arXiv preprint arXiv:cs/0307024}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307024}, primaryClass={cs.NI} }
ciuffoletti2003architecture
arxiv-671297
cs/0307025
Information Compression by Multiple Alignment, Unification and Search as a Unifying Principle in Computing and Cognition
<|reference_start|>Information Compression by Multiple Alignment, Unification and Search as a Unifying Principle in Computing and Cognition: This article presents an overview of the idea that "information compression by multiple alignment, unification and search" (ICMAUS) may serve as a unifying principle in computing (including mathematics and logic) and in such aspects of human cognition as the analysis and production of natural language, fuzzy pattern recognition and best-match information retrieval, concept hierarchies with inheritance of attributes, probabilistic reasoning, and unsupervised inductive learning. The ICMAUS concepts are described together with an outline of the SP61 software model in which the ICMAUS concepts are currently realised. A range of examples is presented, illustrated with output from the SP61 model.<|reference_end|>
arxiv
@article{wolff2003information, title={Information Compression by Multiple Alignment, Unification and Search as a Unifying Principle in Computing and Cognition}, author={J Gerard Wolff}, journal={Artificial Intelligence Review 19(3), 193-230, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307025}, primaryClass={cs.AI} }
wolff2003information
arxiv-671298
cs/0307026
Improving the Security and Performance of the BaBar Detector Controls System
<|reference_start|>Improving the Security and Performance of the BaBar Detector Controls System: It starts out innocently enough - users want to monitor Online data and so run their own copies of the detector control GUIs in their offices and at home. But over time, the number of processes making requests for values to display on GUIs, webpages and stripcharts can grow, and affect the performance of an Input/Output Controller (IOC) such that it is unable to respond to requests critical to data-taking. At worst, an IOC can hang, its CPU having been allocated 100% to responding to network requests. For the BaBar Online Detector Control System, we were able to eliminate this problem and make great gains in security by moving all of the IOCs to a non-routed, virtual LAN and by enlisting a workstation with two network interface cards to act as the interface between the virtual LAN and the public BaBar network. On the interface machine, we run the Experimental Physics Industrial Control System (EPICS) Channel Access (CA) gateway software (originating from the Advanced Photon Source). This software accepts as inputs all the channels that are loaded into the EPICS databases on all the IOCs. It polls them to update its copy of the values. It answers requests from applications by sending them the currently cached value. We adopted the requirement that data-taking would be independent of the gateway, so that, in the event of a gateway failure, data-taking would be uninterrupted. In this way, we avoided introducing any new risk elements to data-taking. Security rules already in use on the IOCs were propagated to the gateway's own security rules, and the security of the IOCs themselves was improved by removing them from the public BaBar network.<|reference_end|>
arxiv
@article{kotturi2003improving, title={Improving the Security and Performance of the BaBar Detector Controls System}, author={Karen D. Kotturi}, journal={arXiv preprint arXiv:cs/0307026}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307026}, primaryClass={cs.NI} }
kotturi2003improving
arxiv-671299
cs/0307027
Deterministic Sampling and Range Counting in Geometric Data Streams
<|reference_start|>Deterministic Sampling and Range Counting in Geometric Data Streams: We present memory-efficient deterministic algorithms for constructing epsilon-nets and epsilon-approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide guaranteed bounds on their approximation factors. We show how our deterministic samples can be used to answer approximate online iceberg geometric queries on data streams. We use these techniques to approximate several robust statistics of geometric data streams, including Tukey depth, simplicial depth, regression depth, the Thiel-Sen estimator, and the least median of squares. Our algorithms use only a polylogarithmic amount of memory, provided the desired approximation factors are inverse-polylogarithmic. We also include a lower bound for non-iceberg geometric queries.<|reference_end|>
arxiv
@article{bagchi2003deterministic, title={Deterministic Sampling and Range Counting in Geometric Data Streams}, author={Amitabha Bagchi, Amitabh Chaudhary, David Eppstein and Michael T. Goodrich}, journal={ACM Trans. Algorithms 3(2):A16, 2007}, year={2003}, doi={10.1145/1240233.1240239}, archivePrefix={arXiv}, eprint={cs/0307027}, primaryClass={cs.CG} }
bagchi2003deterministic
arxiv-671300
cs/0307028
Issues in Communication Game
<|reference_start|>Issues in Communication Game: As an interaction between autonomous agents, communication can be analyzed in game-theoretic terms. The meaning game is proposed to formalize the core of intended communication, in which the sender sends a message and the receiver attempts to infer the meaning intended by the sender. Basic issues involved in the game of natural language communication are discussed, such as salience, grammaticality, common sense, and common belief, together with some demonstration of the feasibility of a game-theoretic account of language.<|reference_end|>
arxiv
@article{hasida2003issues, title={Issues in Communication Game}, author={Koiti Hasida}, journal={arXiv preprint arXiv:cs/0307028}, year={2003}, archivePrefix={arXiv}, eprint={cs/0307028}, primaryClass={cs.CL} }
hasida2003issues