corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-672301 | cs/0411042 | An Empirical Analysis of Internet Protocol Version 6 (IPv6) | <|reference_start|>An Empirical Analysis of Internet Protocol Version 6 (IPv6): Although the current Internet Protocol known as IPv4 has served its purpose for over 20 years, its days are numbered. With IPv6 reaching a mature enough level, there is a need to evaluate the performance benefits or drawbacks that the new IPv6 protocol will have in comparison to the well established IPv4 protocol. Theoretically, the overhead between the two different protocols should be directly proportional to the difference in the packet's header size, however according to our findings, the empirical performance difference between IPv4 and IPv6, especially when the transition mechanisms are taken into consideration, is much larger than anticipated. We first examine the performance of each protocol independently. We then examined two transition mechanisms which perform the encapsulation at various points in the network: host-to-host and router-to-router (tunneling). Our experiments were conducted using two dual stack (IPv4/IPv6) routers using end nodes running both Windows 2000 and Solaris 8.0 in order to compare two different IPv6 implementations side by side. Our tests were written in C++ and utilized metrics such as latency, throughput, CPU utilization, socket creation time, socket connection time, web server simulation, and a video client/server application for TCP/UDP in IPv4/IPv6 under both Windows 2000 and Solaris 8.0. Our empirical evaluation proved that IPv6 is not yet a mature enough technology and that it is still years away from having consistent and good enough implementations, as the performance of IPv6 in many cases proved to be significantly worse than IPv4.<|reference_end|> | arxiv | @article{raicu2004an,
title={An Empirical Analysis of Internet Protocol Version 6 (IPv6)},
author={Ioan Raicu},
journal={arXiv preprint arXiv:cs/0411042},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411042},
primaryClass={cs.NI cs.PF}
} | raicu2004an |
arxiv-672302 | cs/0411043 | e3D: An Energy-Efficient Routing Algorithm for Wireless Sensor Networks | <|reference_start|>e3D: An Energy-Efficient Routing Algorithm for Wireless Sensor Networks: One of the limitations of wireless sensor nodes is their inherent limited energy resource. Besides maximizing the lifetime of the sensor node, it is preferable to distribute the energy dissipated throughout the wireless sensor network in order to minimize maintenance and maximize overall system performance. Any communication protocol that involves synchronization of peer nodes incurs some overhead for setting up the communication. We introduce a new algorithm, e3D (energy-efficient Distributed Dynamic Diffusion routing algorithm), and compare it to two other algorithms, namely directed, and random clustering communication. We take into account the setup costs and analyze the energy-efficiency and the useful lifetime of the system. In order to better understand the characteristics of each algorithm and how well e3D really performs, we also compare e3D with its optimum counterpart and an optimum clustering algorithm. The benefit of introducing these ideal algorithms is to show the upper bound on performance at the cost of an astronomical prohibitive synchronization costs. We compare the algorithms in terms of system lifetime, power dissipation distribution, cost of synchronization, and simplicity of the algorithm. Our simulation results show that e3D performs comparable to its optimal counterpart while having significantly less overhead.<|reference_end|> | arxiv | @article{raicu2004e3d:,
title={e3D: An Energy-Efficient Routing Algorithm for Wireless Sensor Networks},
author={Ioan Raicu and Loren Schwiebert and Scott Fowler and Sandeep K.S. Gupta},
journal={arXiv preprint arXiv:cs/0411043},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411043},
primaryClass={cs.NI}
} | raicu2004e3d: |
arxiv-672303 | cs/0411044 | Routing Algorithms for Wireless Sensor Networks | <|reference_start|>Routing Algorithms for Wireless Sensor Networks: Our contribution in this paper is e3D, a diffusion based routing protocol that prolongs the system lifetime, evenly distributes the power dissipation throughout the network, and incurs minimal overhead for synchronizing communication. We compare e3D with other algorithms in terms of system lifetime, power dissipation distribution, cost of synchronization, and simplicity of the algorithm.<|reference_end|> | arxiv | @article{raicu2004routing,
title={Routing Algorithms for Wireless Sensor Networks},
author={Ioan Raicu},
journal={arXiv preprint arXiv:cs/0411044},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411044},
primaryClass={cs.NI}
} | raicu2004routing |
arxiv-672304 | cs/0411045 | Usage Policy-based CPU Sharing in VOs | <|reference_start|>Usage Policy-based CPU Sharing in VOs: Resource sharing within Grid collaborations usually implies specific sharing mechanisms at participating sites. Challenging policy issues can arise within virtual organizations (VOs) that integrate participants and resources spanning multiple physical institutions. Resource owners may wish to grant to one or more VOs the right to use certain resources subject to local policy and service level agreements, and each VO may then wish to use those resources subject to VO policy. Thus, we must address the question of what usage policies (UPs) should be considered for resource sharing in VOs. As a first step in addressing this question, we develop and evaluate different UP scenarios within a specialized context that mimics scientific Grids within which the resources to be shared are computers. We also present a UP architecture and define roles and functions for scheduling resources in such grid environments while satisfying resource owner policies.<|reference_end|> | arxiv | @article{dumitrescu2004usage,
title={Usage Policy-based CPU Sharing in VOs},
author={Catalin Dumitrescu and Ian Foster},
journal={arXiv preprint arXiv:cs/0411045},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411045},
primaryClass={cs.DC}
} | dumitrescu2004usage |
arxiv-672305 | cs/0411046 | Balanced Overlay Networks (BON): Decentralized Load Balancing via Self-Organized Random Networks | <|reference_start|>Balanced Overlay Networks (BON): Decentralized Load Balancing via Self-Organized Random Networks: We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdos-Renyi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known ball-bin randomized load balancing framework.<|reference_end|> | arxiv | @article{bridgewater2004balanced,
title={Balanced Overlay Networks (BON): Decentralized Load Balancing via
Self-Organized Random Networks},
author={Jesse S. A. Bridgewater and P. Oscar Boykin and Vwani P. Roychowdhury},
journal={arXiv preprint arXiv:cs/0411046},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411046},
primaryClass={cs.DC}
} | bridgewater2004balanced |
arxiv-672306 | cs/0411047 | Numerical Solutions of 2-D Steady Incompressible Driven Cavity Flow at High Reynolds Numbers | <|reference_start|>Numerical Solutions of 2-D Steady Incompressible Driven Cavity Flow at High Reynolds Numbers: Numerical calculations of the 2-D steady incompressible driven cavity flow are presented. The Navier-Stokes equations in streamfunction and vorticity formulation are solved numerically using a fine uniform grid mesh of 601x601. The steady driven cavity solutions are computed for Re<21,000 with a maximum absolute residuals of the governing equations that were less than 10-10. A new quaternary vortex at the bottom left corner and a new tertiary vortex at the top left corner of the cavity are observed in the flow field as the Reynolds number increases. Detailed results are presented and comparisons are made with benchmark solutions found in the literature.<|reference_end|> | arxiv | @article{erturk2004numerical,
title={Numerical Solutions of 2-D Steady Incompressible Driven Cavity Flow at
High Reynolds Numbers},
author={E. Erturk and T.C. Corke and C. Gokcol},
journal={International Journal for Numerical Methods in Fluids 2005, Vol
48, pp 747-774},
year={2004},
doi={10.1002/fld.953},
archivePrefix={arXiv},
eprint={cs/0411047},
primaryClass={cs.NA math.NA physics.comp-ph physics.flu-dyn}
} | erturk2004numerical |
arxiv-672307 | cs/0411048 | Discussions on Driven Cavity Flow | <|reference_start|>Discussions on Driven Cavity Flow: The widely studied benchmark problem, 2-D driven cavity flow problem is discussed in details in terms of physical and mathematical and also numerical aspects. A very brief literature survey on studies on the driven cavity flow is given. Based on the several numerical and experimental studies, the fact of the matter is, above moderate Reynolds numbers physically the flow in a driven cavity is not two-dimensional. However there exist numerical solutions for 2-D driven cavity flow at high Reynolds numbers.<|reference_end|> | arxiv | @article{erturk2004discussions,
title={Discussions on Driven Cavity Flow},
author={E. Erturk},
journal={International Journal for Numerical Methods in Fluids 2009, Vol
60, pp 275-294},
year={2004},
doi={10.1002/fld.1887},
archivePrefix={arXiv},
eprint={cs/0411048},
primaryClass={cs.NA physics.comp-ph physics.flu-dyn}
} | erturk2004discussions |
arxiv-672308 | cs/0411049 | Fourth Order Compact Formulation of Navier-Stokes Equations and Driven Cavity Flow at High Reynolds Numbers | <|reference_start|>Fourth Order Compact Formulation of Navier-Stokes Equations and Driven Cavity Flow at High Reynolds Numbers: A new fourth order compact formulation for the steady 2-D incompressible Navier-Stokes equations is presented. The formulation is in the same form of the Navier-Stokes equations such that any numerical method that solve the Navier-Stokes equations can also be applied to this fourth order compact formulation. In particular in this work the formulation is solved with an efficient numerical method that requires the solution of tridiagonal systems using a fine grid mesh of 601x601. Using this formulation, the steady 2-D incompressible flow in a driven cavity is solved up to Reynolds number of 20,000 with fourth order spatial accuracy. Detailed solutions are presented.<|reference_end|> | arxiv | @article{erturk2004fourth,
title={Fourth Order Compact Formulation of Navier-Stokes Equations and Driven
Cavity Flow at High Reynolds Numbers},
author={E. Erturk and C. Gokcol},
journal={International Journal for Numerical Methods in Fluids 2006, Vol
50, pp 421-436},
year={2004},
doi={10.1002/fld.1061},
archivePrefix={arXiv},
eprint={cs/0411049},
primaryClass={cs.NA math.NA physics.comp-ph physics.flu-dyn}
} | erturk2004fourth |
arxiv-672309 | cs/0411050 | Utilizing Reconfigurable Hardware Processors via Grid Services | <|reference_start|>Utilizing Reconfigurable Hardware Processors via Grid Services: Computational grids typically consist of nodes utilizing ordinary processors such as the Intel Pentium. Field Programmable Gate Arrays (FPGAs) are able to perform certain compute-intensive tasks very well due to their inherent parallel architecture, often resulting in orders of magnitude speedups. This paper explores how FPGAs can be transparently exposed for remote use via grid services, by integrating the Proteus Software Platform with the Globus Toolkit 3.0.<|reference_end|> | arxiv | @article{nathan2004utilizing,
title={Utilizing Reconfigurable Hardware Processors via Grid Services},
author={Darran Nathan and Ralf Clemens},
journal={arXiv preprint arXiv:cs/0411050},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411050},
primaryClass={cs.DC cs.AR}
} | nathan2004utilizing |
arxiv-672310 | cs/0411051 | Describing and Simulating Internet Routes | <|reference_start|>Describing and Simulating Internet Routes: This paper introduces relevant statistics for the description of routes in the internet, seen as a graph at the interface level. Based on the observed properties, we propose and evaluate methods for generating artificial routes suitable for simulation purposes. The work in this paper is based upon a study of over seven million route traces produced by CAIDA's skitter infrastructure.<|reference_end|> | arxiv | @article{leguay2004describing,
title={Describing and Simulating Internet Routes},
author={Jeremie Leguay and Matthieu Latapy and Timur Friedman and Kave
Salamatian},
journal={arXiv preprint arXiv:cs/0411051},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411051},
primaryClass={cs.NI}
} | leguay2004describing |
arxiv-672311 | cs/0411052 | Spontaneous Dynamics of Asymmetric Random Recurrent Spiking Neural Networks | <|reference_start|>Spontaneous Dynamics of Asymmetric Random Recurrent Spiking Neural Networks: We study in this paper the effect of an unique initial stimulation on random recurrent networks of leaky integrate and fire neurons. Indeed given a stochastic connectivity this so-called spontaneous mode exhibits various non trivial dynamics. This study brings forward a mathematical formalism that allows us to examine the variability of the afterward dynamics according to the parameters of the weight distribution. Provided independence hypothesis (e.g. in the case of very large networks) we are able to compute the average number of neurons that fire at a given time -- the spiking activity. In accordance with numerical simulations, we prove that this spiking activity reaches a steady-state, we characterize this steady-state and explore the transients.<|reference_end|> | arxiv | @article{soula2004spontaneous,
title={Spontaneous Dynamics of Asymmetric Random Recurrent Spiking Neural
Networks},
author={H. Soula and G. Beslon and O. Mazet},
journal={arXiv preprint arXiv:cs/0411052},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411052},
primaryClass={cs.NE math.PR}
} | soula2004spontaneous |
arxiv-672312 | cs/0411053 | Vers un environnement multi personnalites pour la configuration et le deploiement d'applications a base de composants logiciels | <|reference_start|>Vers un environnement multi personnalites pour la configuration et le deploiement d'applications a base de composants logiciels: The multiplication of architecture description languages, component models and platforms implies a serious dilemma for component based software architects. On the one hand, they have to choose a language to describe concrete configurations which will be automatically deployed on execution platforms. On the other hand, they wish to capitalize their software architectures independently of any description languages or platforms. To solve this problem, we propose a multi personalities environment for the configuration and the deployment of component based applications. This environment is composed of a core capturing a canonical model of configuration and deployment, and a set of personalities tailored to languages and platforms. This paper details the architecture of such an environment and describes the personalities for the CORBA and Fractal component models.<|reference_end|> | arxiv | @article{flissi2004vers,
title={Vers un environnement multi personnalites pour la configuration et le
deploiement d'applications a base de composants logiciels},
author={Areski Flissi (JACQUARD Ur-F Lifl) and Philippe Merle (JACQUARD Ur-F
Lifl)},
journal={DECOR04 (2004) 3-14},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411053},
primaryClass={cs.NI}
} | flissi2004vers |
arxiv-672313 | cs/0411054 | J2EE Deployment: The JOnAS Case Study | <|reference_start|>J2EE Deployment: The JOnAS Case Study: La specification J2EE (Java 2 platform Enterprise Edition) definit une architecture de serveur d'application Java. Jusqu'a J2EE 1.3, seuls les aspects de deploiement concernant le developpeur d'applications etaient adresses. Avec J2EE 1.4, les interfaces et les etapes de deploiement ont ete plus precisement specifiees dans la specification "J2EE Deployment". JOnAS (Java Open Application Server) est une plate-forme J2EE developpee au sein du consortium ObjectWeb. Les aspects deploiement sont en cours de developpement. Cet article decrit les concepts lies au deploiement dans J2EE, ainsi que les problematiques levees lors de leur mise en oeuvre pour JOnAS. Il n'a pas pour but de presenter un travail abouti, mais illustre le deploiement par un cas concret et ebauche une liste de besoins non encore satisfaits dans le domaine. ----- The J2EE (Java 2 platform Enterprise Edition) specification defines an architecture for Java Application Servers. Until J2EE 1.3, the deployment aspect was addressed from the developer point of view only. Since J2EE 1.4, deployment APIs and steps have been more precisely specified within the "J2EE Deployment Specification". JOnAS (Java Open Application Server) is a J2EE platform implementation by ObjectWeb. The deployment aspects are under development. This article describes the J2EE Deployment concepts, and the issues raised when implementing deployment features within JOnAS. It does not provide a complete solution, but illustrates deployment through a concrete example and initiates a list of non fulfilled requirements.<|reference_end|> | arxiv | @article{exertier2004j2ee,
title={J2EE Deployment: The JOnAS Case Study},
author={Francois Exertier},
journal={DECOR04 (2004) 27-36},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411054},
primaryClass={cs.NI}
} | exertier2004j2ee |
arxiv-672314 | cs/0411055 | SDS : Une infrastructure d'installation de logiciels libres pour des organisations multi-sites | <|reference_start|>SDS : Une infrastructure d'installation de logiciels libres pour des organisations multi-sites: Les developpements logiciels sur les systemes UNIX font de plus en plus appel aux logiciels libres. Nous proposons une solution de deploiement et de controle de ces logiciels libres au sein d'une grande organisation. Nous nous attachons particulierement a resoudre les problemes lies au deploiement multi-sites ainsi qu'a la gestion de configuration de ces deploiements. L'originalite de notre approche repose sur sa capacite a etre mise en oeuvre et controlee par les utilisateurs plutot que par les administrateurs, sans necessiter d'expertise particuliere, et par les possibilites de deploiement dans des environnements heterogenes. ----- Free and open source software is more and more used for software developments on UNIX systems. We are proposing a solution to control the deployment of free software in the context of a large corporation, focusing on multi-site deployment and configuration management. The originality of our approach rests on its ability to be implemented and controlled by users rather than administrators, without requiring any particular expertise, and on its facility to be deployed in heterogeneous environments.<|reference_end|> | arxiv | @article{charles2004sds,
title={SDS : Une infrastructure d'installation de logiciels libres pour des
organisations multi-sites},
author={Laurent Charles and Manuel Vacelet and Mohamed Chaari and Miguel Santana},
journal={DECOR04 (2004) 37-48},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411055},
primaryClass={cs.NI}
} | charles2004sds |
arxiv-672315 | cs/0411056 | Adaptation dynamique de services | <|reference_start|>Adaptation dynamique de services: This paper proposes a software architecture for dynamical service adaptation. The services are constituted by reusable software components. The adaptation's goal is to optimize the service function of their execution context. For a first step, the context will take into account just the user needs but other elements will be added. A particular feature in our proposition is the profiles that are used not only to describe the context's elements but also the components itself. An Adapter analyzes the compatibility between all these profiles and detects the points where the profiles are not compatibles. The same Adapter search and apply the possible adaptation solutions: component customization, insertion, extraction or replacement.<|reference_end|> | arxiv | @article{cremene2004adaptation,
title={Adaptation dynamique de services},
author={Marcel Cremene and Michel Riveill and Christian Martel and Calin Loghin
and Costin Miron},
journal={DECOR04 (2004) 53-64},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411056},
primaryClass={cs.NI}
} | cremene2004adaptation |
arxiv-672316 | cs/0411057 | Runtime Reconfiguration of J2EE Applications | <|reference_start|>Runtime Reconfiguration of J2EE Applications: Runtime reconfiguration considered as "applying required changes to a running system" plays an important role for providing high availability not only of safety- and mission-critical systems, but also for commercial web-applications offering professional services. Hereby, the main concerns are maintaining the consistency of the running system during reconfiguration and minimizing its down-time caused by the reconfiguration. This paper focuses on the platform independent subsystem that realises deployment and redeployment of J2EE modules based on the new J2EE Deployment API as a part of the implementation of our proposed system architecture enabling runtime reconfiguration of component-based systems. Our "controlled runtime redeployment" comprises an extension of hot deployment and dynamic reloading, complemented by allowing for structural change<|reference_end|> | arxiv | @article{matevska-meyer2004runtime,
title={Runtime Reconfiguration of J2EE Applications},
author={Jasminka Matevska-Meyer and Sascha Olliges and Wilhelm Hasselbring},
journal={DECOR04 (2004) 77-84},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411057},
primaryClass={cs.NI}
} | matevska-meyer2004runtime |
arxiv-672317 | cs/0411058 | Deployment in dynamic environments | <|reference_start|>Deployment in dynamic environments: Information and communication technologies are moving towards a new stage where applications will be dynamically deployed, uninstalled, updated and (re)configured. Several approaches have been followed with the goal of creating a fully automated and context-aware deployment system. Ideally, this system should be capable of handling the dynamics of this new situation, without losing sight of other factors, such as performance, security, availability or scalability. We will take some of the technologies that follow the principles of Service Oriented Architectures, SOA, as a paradigm of dynamic environments. SOA promote the breaking down of applications into sets of loosely coupled elements, called services. Services can be dynamically bound, deployed, reconfigured, uninstalled and updated. First of all, we will try to offer a broad view on the specific deployment issues that arise in these environments. Later on, we will present our approach to the problem. One of the essential points that has to be tackled to develop an automated deployment engine will be to have enough information to carry out tasks without human intervention. In the article we will focus on the format and contents of deployment descriptors. Additionally, we will go into the details of the deployment framework for OSGi enabled gateways that has been developed by our research group. Finally we will give some concluding remarks and some ideas for future work<|reference_end|> | arxiv | @article{ruiz2004deployment,
title={Deployment in dynamic environments},
author={Jose L. Ruiz and Juan C. Duenas and Fernando Usero and Cristina Diaz},
journal={DECOR04 (2004) 85-98},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411058},
primaryClass={cs.NI}
} | ruiz2004deployment |
arxiv-672318 | cs/0411059 | OpenCCM : une infrastructure a composants pour le deploiement d'applications a base de composants CORBA | <|reference_start|>OpenCCM : une infrastructure a composants pour le deploiement d'applications a base de composants CORBA: Deployment of software components for building distributed applications consists of the coordination of a set of basic tasks like uploading component binaries to the execution sites, loading them in memory, instantiating components, interconnecting their ports, setting their business and technical attributes. The automation of the deployment process then requires the presence of a software infrastructure distributed itself on the different execution sites. This paper presents the characteristics of such an infrastructure for the deployment of CORBA component-based applications. This latter is designed and implemented in the context of our OpenCCM platform, an open source implementation of the CORBA Component Model. The main characteristic lays on the fact that this infrastructure is itself designed as a set of CORBA component assemblies. This allows its dynamic assembly during its deployment over the execution sites<|reference_end|> | arxiv | @article{briclet2004openccm,
title={OpenCCM : une infrastructure a composants pour le deploiement
d'applications a base de composants CORBA},
author={Frederic Briclet (JACQUARD Ur-F Lifl) and Christophe Contreras
(JACQUARD Ur-F Lifl) and Philippe Merle (JACQUARD Ur-F Lifl)},
journal={DECOR04 (2004) 101-112},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411059},
primaryClass={cs.NI}
} | briclet2004openccm |
arxiv-672319 | cs/0411060 | Gestion du deploiement de composants sur reseau P2P | <|reference_start|>Gestion du deploiement de composants sur reseau P2P: The deployment of component-based applications relies on a centralized directory to store the components. This paper describes an approach to distribute software components to be deployed on a set of peers of a peer to peer network in order to exploit some associated characteristics (load balancing, fault-tolerance, self-organisation). The proposed architecture is situated in the context of OSGI application deployment management. The software components (bundles) are distributed among a set of nodes participating in the execution of services. When a node wants to install a component which is not deployed locally, the component is looked for and installed using a p2p network. ----- Le deploiement d'applications a composants repose sur une approche d'annuaire centralise de stockage des composants. Cet article decrit une approche pour distribuer les composants logiciels a deployer sur un ensemble de noeuds d'un reseau pair-a-pair afin de pouvoir exploiter certaines caracteristiques associees (equilibrage de charge, tolerance de panne, auto-organisation). L'architecture proposee entre dans le cadre de la gestion du deploiement d'applications sur le modele OSGi. Les composants logiciels (ou bundles) sont repartis a travers un ensemble de noeuds participant a l'execution de services. Lorsqu'un noeud veut installer un composant et si celui-ci n'est pas encore deploye localement, il est recherche et installe en utilisant un reseau p2p<|reference_end|> | arxiv | @article{frenot2004gestion,
title={Gestion du deploiement de composants sur reseau P2P},
author={Stephane Frenot (ARES UR-RA)},
journal={DECOR04 (2004) 113-124},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411060},
primaryClass={cs.NI}
} | frenot2004gestion |
arxiv-672320 | cs/0411061 | Un meta-modele pour l'automatisation du deploiement d'applications logicielles | <|reference_start|>Un meta-modele pour l'automatisation du deploiement d'applications logicielles: Le deploiement est maintenant considere comme une activite a part entiere du cycle de vie du logiciel. Les grandes entreprises souhaitent pouvoir automatiser cette etape tout en prenant en compte les caracteristiques de chaque machine cible. Pour repondre a ces besoins, nous avons defini un environnement de deploiement : ORYA (Open enviRonment to deploY Applications). Cet environnement utilise un meta-modele de deploiement, decrit dans ce papier. Notre approche utilise aussi les technologies des federations et des procedes, fournissant un environnement flexible et extensible pour l'utilisateur. ----- The deployment is now a full activity of the software lifecycle. Large enterprises want to automate this step, taking into account characteristics of each target machine. To satisfy these needs, we have defined an environment for the deployment phase: ORYA (Open enviRonment to deploY Applications). This environment uses a deployment metamodel, described in this paper. Our approach is based also on federation and process federations, providing a flexible and extensible environment to the user<|reference_end|> | arxiv | @article{merle2004un,
title={Un meta-modele pour l'automatisation du deploiement d'applications
logicielles},
author={Noelle Merle (LSR - IMAG)},
journal={DECOR04 (2004) 125-132},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411061},
primaryClass={cs.NI}
} | merle2004un |
arxiv-672321 | cs/0411062 | FROGi : Deploiement de composants Fractal sur OSGi | <|reference_start|>FROGi : Deploiement de composants Fractal sur OSGi: Cet article presente FROGi, une proposition visant a introduire le modele a composants Fractal a l'interieur de la plateforme de services OSGi. La motivation derriere ce travail est double. D'un cote, FROGi offre aux developpeurs de services OSGi un modele a composants extensibles qui facilite le developpement des bundles ; ces derniers restent toutefois compatibles avec les bundles "patrimoniaux". D'un autre cote, FROGi beneficie de l'infrastructure de deploiement que represente OSGi et qui facilite la realisation du conditionnement et du deploiement de composants Fractal. Dans FROGi, une application Fractal est conditionnee sous la forme d'un ou plusieurs bundles et elle peut etre deployee de facon partielle et les activites de deploiement peuvent avoir lieu de facon continue. -- This paper presents FROGi, a proposal to introduce the Fractal component model into the OSGi services platform. There are two motivations for this work. The first one is to offer a flexible component model to the OSGi developers to simplify bundle development. Bundles developed with FROGi are nevertheless compatible with standard bundles. The second motivation is to leverage OSGi's deployment capabilities to package and deploy Fractal components. In FROGi, a Fractal application is packaged and delivered as a set of OSGi bundles; such an application supports partial deployment and additionally, deployment activities can occur continuously.<|reference_end|> | arxiv | @article{donsez2004frogi,
title={FROGi : Deploiement de composants Fractal sur OSGi},
author={Didier Donsez (LSR - IMAG) and Humberto Cervantes (LSR - IMAG) and
Mikael Desertot (LSR - IMAG)},
journal={DECOR04 (2004) 147-158},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411062},
primaryClass={cs.NI}
} | donsez2004frogi |
arxiv-672322 | cs/0411063 | From Tensor Equations to Numerical Code -- Computer Algebra Tools for Numerical Relativity | <|reference_start|>From Tensor Equations to Numerical Code -- Computer Algebra Tools for Numerical Relativity: In this paper we present our recent work in developing a computer-algebra tool for systems of partial differential equations (PDEs), termed "Kranc". Our work is motivated by the problem of finding solutions of the Einstein equations through numerical simulations. Kranc consists of Mathematica based computer-algebra packages, that facilitate the task of dealing with symbolic tensorial calculations and realize the conversion of systems of partial differential evolution equations into parallelized C or Fortran code.<|reference_end|> | arxiv | @article{lechner2004from,
title={From Tensor Equations to Numerical Code -- Computer Algebra Tools for
Numerical Relativity},
author={Christiane Lechner, Dana Alic, Sascha Husa},
journal={arXiv preprint arXiv:cs/0411063},
year={2004},
number={AEI-2004-108},
archivePrefix={arXiv},
eprint={cs/0411063},
primaryClass={cs.SC}
} | lechner2004from |
arxiv-672323 | cs/0411064 | Lower-Stretch Spanning Trees | <|reference_start|>Lower-Stretch Spanning Trees: We prove that every weighted graph contains a spanning tree subgraph of average stretch O((log n log log n)^2). Moreover, we show how to construct such a tree in time O(m log^2 n).<|reference_end|> | arxiv | @article{elkin2004lower-stretch,
title={Lower-Stretch Spanning Trees},
author={Michael Elkin, Yuval Emek, Daniel A. Spielman and Shang-Hua Teng},
journal={arXiv preprint arXiv:cs/0411064},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411064},
primaryClass={cs.DS cs.DM}
} | elkin2004lower-stretch |
arxiv-672324 | cs/0411065 | An Evaluated Certification Services System for the German National Root CA - Legally Binding and Trustworthy Transactions in E-Business and E-Government | <|reference_start|>An Evaluated Certification Services System for the German National Root CA - Legally Binding and Trustworthy Transactions in E-Business and E-Government: National Root CAs enable legally binding E-Business and E-Government transactions. This is a report on the development, the evaluation and the certification of the new certification services system for the German National Root CA. We illustrate why a new certification services system was necessary, and which requirements the new system had to meet. Then we derive the tasks to be done from the mentioned requirements. After that we introduce the initial situation at the beginning of the project. We report on the process itself and talk about some unfamiliar situations, special approaches and remarkable experiences. Finally we present the finished IT system and its impact on E-Business and E-Government.<|reference_end|> | arxiv | @article{wiesmaier2004an,
title={An Evaluated Certification Services System for the German National Root
CA - Legally Binding and Trustworthy Transactions in E-Business and
E-Government},
author={A. Wiesmaier, M. Lippert, E. Karatsiolis, G. Raptis, J. Buchmann},
journal={Proceedings of "The 2005 International Conference on E-Business,
Enterprise Information Systems, E-Government, and Outsourcing (EEE'05)"; June
2005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411065},
primaryClass={cs.CR}
} | wiesmaier2004an |
arxiv-672325 | cs/0411066 | Using LDAP Directories for Management of PKI Processes | <|reference_start|>Using LDAP Directories for Management of PKI Processes: We present a framework for extending the functionality of LDAP servers from their typical use as a public directory in public key infrastructures. In this framework the LDAP servers are used for administrating infrastructure processes. One application of this framework is a method for providing proof-of-possession, especially in the case of encryption keys. Another one is the secure delivery of software personal security environments.<|reference_end|> | arxiv | @article{karatsiolis2004using,
title={Using LDAP Directories for Management of PKI Processes},
author={V. Karatsiolis, M. Lippert and A. Wiesmaier},
journal={In Proceedings of Public Key Infrastructure: First European PKI
Workshop: Research and Applications, EuroPKI 2004, volume 3093 of Lecture
Notes in Computer Science, pages 126-134, June 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411066},
primaryClass={cs.CR}
} | karatsiolis2004using |
arxiv-672326 | cs/0411067 | Towards a Flexible Intra-Trustcenter Management Protocol | <|reference_start|>Towards a Flexible Intra-Trustcenter Management Protocol: This paper proposes the Intra Trustcenter Protocol (ITP), a flexible and secure management protocol for communication between arbitrary trustcenter components. Unlike other existing protocols (like PKCS#7, CMP or XKMS) ITP focuses on the communication within a trustcenter. It is powerful enough for transferring complex messages which are machine and human readable and easy to understand. In addition it includes an extension mechanism to be prepared for future developments.<|reference_end|> | arxiv | @article{karatsiolis2004towards,
title={Towards a Flexible Intra-Trustcenter Management Protocol},
author={V. Karatsiolis, M. Lippert, A. Wiesmaier, A. Pitaev, M. Ruppert, J.
Buchmann},
journal={arXiv preprint arXiv:cs/0411067},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411067},
primaryClass={cs.CR}
} | karatsiolis2004towards |
arxiv-672327 | cs/0411068 | Planning for Directory Services in Public Key Infrastructures | <|reference_start|>Planning for Directory Services in Public Key Infrastructures: In this paper we provide a guide for public key infrastructure designers and administrators when planning for directory services. We concentrate on the LDAP directories and how they can be used to successfully publish PKI information. We analyse their available mechanisms and propose a best practice guide for use in PKI. We then take a look into the German Signature Act and Ordinance and discuss their part as far as directories are concerned. Finally, we translate those provisions into LDAP directory practices.<|reference_end|> | arxiv | @article{karatsiolis2004planning,
title={Planning for Directory Services in Public Key Infrastructures},
author={V. Karatsiolis, M. Lippert, A. Wiesmaier},
journal={Proceedings of "Sicherheit 2005"; April 2005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411068},
primaryClass={cs.CR}
} | karatsiolis2004planning |
arxiv-672328 | cs/0411069 | CDN: Content Distribution Network | <|reference_start|>CDN: Content Distribution Network: The Internet evolves and operates largely without central coordination, the lack of which was and is critically important to the rapid growth and evolution of the Internet. However, the lack of management in turn makes it very difficult to guarantee proper performance and to deal systematically with performance problems. Meanwhile, the available network bandwidth and server capacity continue to be overwhelmed by the skyrocketing Internet utilization and the accelerating growth of bandwidth intensive content. As a result, Internet service quality perceived by customers is largely unpredictable and unsatisfactory. A Content Distribution Network (CDN) is an effective approach to improve Internet service quality. A CDN replicates content from the place of origin to replica servers scattered over the Internet and serves a request from a replica server close to where the request originates. In this paper, we first give an overview of CDN. We then present the critical issues involved in designing and implementing an effective CDN and survey the approaches proposed in the literature to address these problems. An example of a CDN is described to show how a real commercial CDN operates. After this, we present a scheme that provides fast service location for peer-to-peer systems, a special type of CDN with no infrastructure support. We conclude with a brief projection about CDN.<|reference_end|> | arxiv | @article{peng2004cdn:,
title={CDN: Content Distribution Network},
author={Gang Peng},
journal={arXiv preprint arXiv:cs/0411069},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411069},
primaryClass={cs.NI cs.IR}
} | peng2004cdn: |
arxiv-672329 | cs/0411070 | Data Path Processing in Fast Programmable Routers | <|reference_start|>Data Path Processing in Fast Programmable Routers: Internet is growing at a fast pace. The link speeds are surging toward 40 Gbps with the emergence of faster link technologies. New applications are coming up which require intelligent processing at the intermediate routers. Switches and routers are becoming the bottlenecks in fast communication. On one hand faster links deliver more packets every second and on the other hand intelligent processing consumes more CPU cycles at the router. The conflicting goals of providing faster but computationally expensive processing call for new approaches in designing routers. This survey takes a look at the core functionalities, like packet classification, buffer memory management, switch scheduling and output link scheduling performed by a router in its data path processing and discusses the algorithms that aim to reduce the performance bound for these operations. An important requirement for the routers is to provide Quality of Service guarantees. We propose an algorithm to guarantee QoS in Input Queued Routers. The hardware solution to speed up router operation was Application Specific Integrated Circuits (ASICs). But the inherent inflexibility of the method is a demerit as network standards and application requirements are constantly evolving, which seek a faster turnaround time to keep up with the changes. The promise of Network Processors (NP) is the flexibility of general-purpose processors together with the speed of ASICs. We will study the architectural choices for the design of Network Processors and focus on some of the commercially available NPs. There is a plethora of NP vendors in the market. The discussion on the NP benchmarks sets the normalizing platform to evaluate these NPs.<|reference_end|> | arxiv | @article{de2004data,
title={Data Path Processing in Fast Programmable Routers},
author={Pradipta De},
journal={arXiv preprint arXiv:cs/0411070},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411070},
primaryClass={cs.NI}
} | de2004data |
arxiv-672330 | cs/0411071 | Comparing Multi-Target Trackers on Different Force Unit Levels | <|reference_start|>Comparing Multi-Target Trackers on Different Force Unit Levels: Consider the problem of tracking a set of moving targets. Apart from the tracking result, it is often important to know where the tracking fails, either to steer sensors to that part of the state-space, or to inform a human operator about the status and quality of the obtained information. An intuitive quality measure is the correlation between two tracking results based on uncorrelated observations. In the case of Bayesian trackers such a correlation measure could be the Kullback-Leibler difference. We focus on a scenario with a large number of military units moving in some terrain. The units are observed by several types of sensors and "meta-sensors" with force aggregation capabilities. The sensors register units of different size. Two separate multi-target probability hypothesis density (PHD) particle filters are used to track some type of units (e.g., companies) and their sub-units (e.g., platoons), respectively, based on observations of units of those sizes. Each observation is used in one filter only. Although the state-space may well be the same in both filters, the posterior PHD distributions are not directly comparable -- one unit might correspond to three or four spatially distributed sub-units. Therefore, we introduce a mapping function between distributions for different unit size, based on doctrine knowledge of unit configuration. The mapped distributions can now be compared -- locally or globally -- using some measure, which gives the correlation between two PHD distributions in a bounded volume of the state-space. To locate areas where the tracking fails, a discretized quality map of the state-space can be generated by applying the measure locally to different parts of the space.<|reference_end|> | arxiv | @article{sidenbladh2004comparing,
title={Comparing Multi-Target Trackers on Different Force Unit Levels},
author={Hedvig Sidenbladh, Pontus Svenson, Johan Schubert},
journal={Proc SPIE Vol 5429, p 306-314 (2004)},
year={2004},
doi={10.1117/12.542024},
archivePrefix={arXiv},
eprint={cs/0411071},
primaryClass={cs.AI}
} | sidenbladh2004comparing |
arxiv-672331 | cs/0411072 | Extremal optimization for sensor report pre-processing | <|reference_start|>Extremal optimization for sensor report pre-processing: We describe the recently introduced extremal optimization algorithm and apply it to target detection and association problems arising in pre-processing for multi-target tracking. Here we consider the problem of pre-processing for multiple target tracking when the number of sensor reports received is very large and arrives in large bursts. In this case, it is sometimes necessary to pre-process reports before sending them to tracking modules in the fusion system. The pre-processing step associates reports to known tracks (or initializes new tracks for reports on objects that have not been seen before). It could also be used as a pre-process step before clustering, e.g., in order to test how many clusters to use. The pre-processing is done by solving an approximate version of the original problem. In this approximation, not all pair-wise conflicts are calculated. The approximation relies on knowing how many such pair-wise conflicts that are necessary to compute. To determine this, results on phase-transitions occurring when coloring (or clustering) large random instances of a particular graph ensemble are used.<|reference_end|> | arxiv | @article{svenson2004extremal,
title={Extremal optimization for sensor report pre-processing},
author={Pontus Svenson},
journal={Proc SPIE Vol 5429, p 162-171 (2004)},
year={2004},
doi={10.1117/12.542027},
archivePrefix={arXiv},
eprint={cs/0411072},
primaryClass={cs.AI}
} | svenson2004extremal |
arxiv-672332 | cs/0411073 | Geographic Routing with Limited Information in Sensor Networks | <|reference_start|>Geographic Routing with Limited Information in Sensor Networks: Geographic routing with greedy relaying strategies have been widely studied as a routing scheme in sensor networks. These schemes assume that the nodes have perfect information about the location of the destination. When the distance between the source and destination is normalized to unity, the asymptotic routing delays in these schemes are $\Theta(\frac{1}{M(n)}),$ where M(n) is the maximum distance traveled in a single hop (transmission range of a radio). In this paper, we consider routing scenarios where nodes have location errors (imprecise GPS), or where only coarse geographic information about the destination is available, and only a fraction of the nodes have routing information. We show that even with such imprecise or limited destination-location information, the routing delays are $\Theta(\frac{1}{M(n)})$. We also consider the throughput-capacity of networks with progressive routing strategies that take packets closer to the destination in every step, but not necessarily along a straight-line. We show that the throughput-capacity with progressive routing is order-wise the same as the maximum achievable throughput-capacity.<|reference_end|> | arxiv | @article{subramanian2004geographic,
title={Geographic Routing with Limited Information in Sensor Networks},
author={Sundar Subramanian, Sanjay Shakkottai},
journal={arXiv preprint arXiv:cs/0411073},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411073},
primaryClass={cs.IT math.IT}
} | subramanian2004geographic |
arxiv-672333 | cs/0411074 | Building Chinese Lexicons from Scratch by Unsupervised Short Document Self-Segmentation | <|reference_start|>Building Chinese Lexicons from Scratch by Unsupervised Short Document Self-Segmentation: Chinese text segmentation is a well-known and difficult problem. On one side, there is not a simple notion of "word" in the Chinese language, making it really hard to implement rule-based systems to segment written texts, thus lexicons and statistical information are usually employed to achieve such a task. On the other side, any piece of Chinese text usually includes segments present neither in the lexicons nor in the training data. Even worse, such unseen sequences can be segmented into a number of totally unrelated words making later processing phases difficult. For instance, using a lexicon-based system the sequence 巴罗佐(Baluozuo, Barroso, current president-designate of the European Commission) can be segmented into 巴(ba, to hope, to wish) and 罗佐(luozuo, an undefined word) changing completely the meaning of the sentence. A new and extremely simple algorithm specially suited to work over short Chinese documents is introduced. This new algorithm performs text "self-segmentation", producing results comparable to those achieved by native speakers without using either lexicons or any statistical information beyond that obtained from the input text. Furthermore, it is really robust for finding new "words", especially proper nouns, and it is well suited to build lexicons from scratch. Some preliminary results are provided in addition to examples of its employment.<|reference_end|> | arxiv | @article{gayo-avello2004building,
title={Building Chinese Lexicons from Scratch by Unsupervised Short Document
Self-Segmentation},
author={Daniel Gayo-Avello},
journal={arXiv preprint arXiv:cs/0411074},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411074},
primaryClass={cs.CL cs.IR}
} | gayo-avello2004building |
arxiv-672334 | cs/0411075 | A Self-Reconfigurable Computing Platform Hardware Architecture | <|reference_start|>A Self-Reconfigurable Computing Platform Hardware Architecture: Field Programmable Gate Arrays (FPGAs) have recently been increasingly used for highly-parallel processing of compute intensive tasks. This paper introduces an FPGA hardware platform architecture that is PC-based, allows for fast reconfiguration over the PCI bus, and retains a simple physical hardware design. The design considerations are first discussed, then the resulting system architecture designed is illustrated. Finally, experimental results on the FPGA resources utilized for this design are presented.<|reference_end|> | arxiv | @article{weisensee2004a,
title={A Self-Reconfigurable Computing Platform Hardware Architecture},
author={Andreas Weisensee, Darran Nathan},
journal={arXiv preprint arXiv:cs/0411075},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411075},
primaryClass={cs.AR cs.DC}
} | weisensee2004a |
arxiv-672335 | cs/0411076 | Lower bounds on the Deterministic and Quantum Communication Complexity of Hamming Distance | <|reference_start|>Lower bounds on the Deterministic and Quantum Communication Complexity of Hamming Distance: Alice and Bob want to know if two strings of length n are almost equal. That is, do they differ on \textit{at most} a bits? Let 0\leq a\leq n-1. We show that any deterministic protocol, as well as any error-free quantum protocol (C* version), for this problem requires at least n-2 bits of communication. We show the same bounds for the problem of determining if two strings differ in exactly a bits. We also prove a lower bound of n/2-1 for error-free Q* quantum protocols. Our results are obtained by lower-bounding the ranks of the appropriate matrices.<|reference_end|> | arxiv | @article{ambainis2004lower,
title={Lower bounds on the Deterministic and Quantum Communication Complexity
of Hamming Distance},
  author={Andris Ambainis, William Gasarch, Aravind Srinivasan, Andrey Utis},
journal={arXiv preprint arXiv:cs/0411076},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411076},
primaryClass={cs.CC quant-ph}
} | ambainis2004lower |
arxiv-672336 | cs/0411077 | Transparent Format Migration of Preserved Web Content | <|reference_start|>Transparent Format Migration of Preserved Web Content: The LOCKSS digital preservation system collects content by crawling the web and preserves it in the format supplied by the publisher. Eventually, browsers will no longer understand that format. A process called format migration converts it to a newer format that the browsers do understand. The LOCKSS program has designed and tested an initial implementation of format migration for Web content that is transparent to readers, building on the content negotiation capabilities of HTTP.<|reference_end|> | arxiv | @article{rosenthal2004transparent,
title={Transparent Format Migration of Preserved Web Content},
author={David S. H. Rosenthal, Thomas Lipkis, Thomas Robertson, Seth Morabito},
journal={arXiv preprint arXiv:cs/0411077},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411077},
primaryClass={cs.DL}
} | rosenthal2004transparent |
arxiv-672337 | cs/0411078 | Notes On The Design Of An Internet Adversary | <|reference_start|>Notes On The Design Of An Internet Adversary: The design of the defenses Internet systems can deploy against attack, especially adaptive and resilient defenses, must start from a realistic model of the threat. This requires an assessment of the capabilities of the adversary. The design typically evolves through a process of simulating both the system and the adversary. This requires the design and implementation of a simulated adversary based on the capability assessment. Consensus on the capabilities of a suitable adversary is not evident. Part of the recent redesign of the protocol used by peers in the LOCKSS digital preservation system included a conservative assessment of the adversary's capabilities. We present our assessment and the implications we drew from it as a step towards a reusable adversary specification.<|reference_end|> | arxiv | @article{rosenthal2004notes,
title={Notes On The Design Of An Internet Adversary},
author={David S. H. Rosenthal, Petros Maniatis, Mema Roussopoulos, T.J. Giuli,
Mary Baker},
journal={Second Annual Adaptive and Resilient Computing Security Workshop,
Santa Fe, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411078},
primaryClass={cs.DL}
} | rosenthal2004notes |
arxiv-672338 | cs/0411079 | Supporting Bandwidth Guarantee and Mobility for Real-Time Applications on Wireless LANs | <|reference_start|>Supporting Bandwidth Guarantee and Mobility for Real-Time Applications on Wireless LANs: The proliferation of IEEE 802.11-based wireless LANs opens up avenues for creation of several tetherless and mobility oriented services. Most of these services, like voice over WLAN, media streaming etc., generate delay and bandwidth sensitive traffic. These traffic flows require undisrupted network connectivity with some QoS guarantees. Unfortunately, there is no adequate support built into these wireless LANs towards QoS provisioning. Further, the network layer handoff latency incurred by mobile nodes in these wireless LANs is too high for real-time applications to function properly. In this paper, we describe a QoS mechanism, called Rether, to effectively support bandwidth guarantee on wireless LANs. Rether is designed to support the current wireless LAN technologies like 802.11b and 802.11a with a specific capability of being tailored for QoS oriented technology like 802.11e. We also describe a low-latency handoff mechanism which expedites network level handoff to provide real-time applications with an added advantage of seamless mobility.<|reference_end|> | arxiv | @article{sharma2004supporting,
title={Supporting Bandwidth Guarantee and Mobility for Real-Time Applications
on Wireless LANs},
author={Srikant Sharma and Kartik Gopalan and Ningning Zhu and Gang Peng and
Pradipta De and Tzi-cker Chiueh},
journal={arXiv preprint arXiv:cs/0411079},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411079},
primaryClass={cs.NI}
} | sharma2004supporting |
arxiv-672339 | cs/0411080 | Modeling the input history of programs for improved instruction-memory performance | <|reference_start|>Modeling the input history of programs for improved instruction-memory performance: When a program is loaded into memory for execution, the relative position of its basic blocks is crucial, since loading basic blocks that are unlikely to be executed first places them high in the instruction-memory hierarchy only to be dislodged as the execution goes on. In this paper we study the use of Bayesian networks as models of the input history of a program. The main point is the creation of a probabilistic model that persists as the program is run on different inputs and at each new input refines its own parameters in order to reflect the program's input history more accurately. As the model is thus tuned, it causes basic blocks to be reordered so that, upon arrival of the next input for execution, loading the basic blocks into memory automatically takes into account the input history of the program. We report on extensive experiments, whose results demonstrate the efficacy of the overall approach in progressively lowering the execution times of a program on identical inputs placed randomly in a sequence of varied inputs. We provide results on selected SPEC CINT2000 programs and also evaluate our approach as compared to the gcc level-3 optimization and to Pettis-Hansen reordering.<|reference_end|> | arxiv | @article{assis2004modeling,
title={Modeling the input history of programs for improved instruction-memory
performance},
author={C. A. G. Assis, E. S. T. Fernandes, V. C. Barbosa},
journal={Computer Journal 49 (2006), 744-761},
year={2004},
doi={10.1093/comjnl/bxl044},
number={ES-662/04},
archivePrefix={arXiv},
eprint={cs/0411080},
primaryClass={cs.OS}
} | assis2004modeling |
arxiv-672340 | cs/0411081 | Reconfigurations dynamiques de services dans un intergiciel a composants CORBA CCM | <|reference_start|>Reconfigurations dynamiques de services dans un intergiciel a composants CORBA CCM: Today, component oriented middlewares are used to easily design, develop and deploy distributed applications, by ensuring the heterogeneity, interoperability, and reuse of the software modules, and the separation between the business code encapsulated in the components and the system code managed by the containers. Several standards answer this definition, such as CCM (CORBA Component Model), EJB (Enterprise Java Beans) and .Net. However, these standards offer a limited and fixed number of system services, removing any possibility to add system services or to reconfigure the middleware dynamically. Our work proposes mechanisms to add and to adapt system services dynamically, based on a reconfiguration language which is dynamically adaptable to the needs of the reconfiguration, and on a dynamic reconfiguration tool; a prototype was implemented for the OpenCCM platform, an implementation of the CCM specification. This work was partially financed by the European project IST-COACH (2001-34445).<|reference_end|> | arxiv | @article{hachichi2004reconfigurations,
title={Reconfigurations dynamiques de services dans un intergiciel a composants
CORBA CCM},
author={Assia Hachichi (REGAL Ur-R Lip6), Cyril Martin (REGAL Ur-R Lip6), Gael
Thomas (REGAL Ur-R Lip6), Simon Patarin (REGAL Ur-R Lip6), Bertil Folliot
(REGAL Ur-R Lip6)},
journal={DECOR04 (2004) 159-170},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411081},
primaryClass={cs.NI}
} | hachichi2004reconfigurations |
arxiv-672341 | cs/0411082 | Support pour la reconfiguration d'implantation dans les applications a composants Java | <|reference_start|>Support pour la reconfiguration d'implantation dans les applications a composants Java: Nowadays, numerous component models are used for various purposes: to build applications, middleware or even operating systems. Those models commonly support structure reconfiguration, that is, modification of an application's architecture at runtime. On the other hand, very few allow implementation reconfiguration, that is, runtime modification of the code of the components building the application. In this article we present the work we performed on JULIA, a Java-based implementation of the FRACTAL component model, in order for it to support implementation reconfigurations. We show how we overcame the limitations of Java's class loading mechanism to allow runtime modifications of components' implementations and interfaces. We also describe the integration of our solution with the JULIA ADL.<|reference_end|> | arxiv | @article{kornas2004support,
title={Support pour la reconfiguration d'implantation dans les applications a
composants Java},
author={Jakub Kornas (SARDES Ur-Ra Imag), Matthieu Leclercq (SARDES Ur-Ra
Imag), Vivien Quema (SARDES Ur-Ra Imag), Jean-Bernard Stefani (SARDES Ur-Ra
Imag)},
journal={DECOR04 (2004) 171-184},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411082},
primaryClass={cs.NI}
} | kornas2004support |
arxiv-672342 | cs/0411083 | Contractualisation des ressources pour le deploiement des composants logiciels | <|reference_start|>Contractualisation des ressources pour le deploiement des composants logiciels: Software deployment can turn into a baffling problem when the components being deployed exhibit non-functional requirements. If the platform on which such components are deployed cannot satisfy their non-functional requirements, then they may in turn fail to perform satisfactorily. In this paper, we present a contract-based approach to take a specific category of non-functional properties specified by components into account, that is those that pertain to the resources that are necessary for their execution.<|reference_end|> | arxiv | @article{sommer2004contractualisation,
title={Contractualisation des ressources pour le deploiement des composants
logiciels},
author={Nicolas Le Sommer (VALORIA)},
journal={DECOR04 (2004) 211-222},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411083},
primaryClass={cs.NI}
} | sommer2004contractualisation |
arxiv-672343 | cs/0411084 | Gestion transactionnelle de la reprise sur erreurs dans le deploiement | <|reference_start|>Gestion transactionnelle de la reprise sur erreurs dans le deploiement: With the development of networks and the Internet, the problems of automated deployment on a broad scale have become increasingly crucial. Software deployment is a complex process covering several activities going from the configuration to the retirement of a software product. During the execution of a deployment process, exceptions can be encountered which put the site in an incoherent state. To solve them, we propose an approach based on transactional concepts which describes the actions to be undertaken when an exceptional situation is encountered during the deployment process. The approach guarantees the site's consistency by preserving part of the work already carried out by the process. This article presents our approach and an experiment conducted in an academic deployment system.<|reference_end|> | arxiv | @article{marin2004gestion,
title={Gestion transactionnelle de la reprise sur erreurs dans le deploiement},
author={Cristina Marin (LSR - IMAG), Noureddine Belkhatir (LSR - IMAG), Didier
Donsez (LSR - IMAG)},
journal={DECOR04 (2004) 199-210},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411084},
primaryClass={cs.NI}
} | marin2004gestion |
arxiv-672344 | cs/0411085 | Deploiement d'ordonnanceurs de processus specifiques dans un systeme d'exploitation generaliste | <|reference_start|>Deploiement d'ordonnanceurs de processus specifiques dans un systeme d'exploitation generaliste: Bossa is a framework to develop new process schedulers in commodity operating systems. Although Bossa enables fine-grained management of the processor through new scheduling policies, deploying an application with its own scheduler raises some problems. In this paper we study the problems caused when deploying an application and its scheduler and, to address these, we propose to establish Quality of Service contracts and mechanisms to reconfigure the scheduler hierarchy.<|reference_end|> | arxiv | @article{duchesne2004deploiement,
title={Deploiement d'ordonnanceurs de processus specifiques dans un systeme
d'exploitation generaliste},
author={Herve Duchesne (OBASCO Irisa) and Christophe Augier (OBASCO Irisa) and
Richard Urunuela (OBASCO Irisa)},
journal={DECOR04 (2004) 193-198},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411085},
primaryClass={cs.NI}
} | duchesne2004deploiement |
arxiv-672345 | cs/0411086 | A Software Architecture for Automatic Deployment of CORBA Components Using Grid Technologies | <|reference_start|>A Software Architecture for Automatic Deployment of CORBA Components Using Grid Technologies: Software components turn out to be a convenient model to build complex applications for scientific computing and to run them on a computational grid. However, deploying complex, component-based applications in a grid environment is particularly arduous. To prevent the user from directly dealing with a large number of execution hosts and their heterogeneity within a grid, the application deployment phase must be as automatic as possible. This paper describes an architecture for automatic deployment of component-based applications on computational grids. In the context of the CORBA Component Model (CCM), this paper details all the steps to achieve an automatic deployment of components as well as the entities involved: a grid access middleware and its grid information service (like OGSI), a component deployment model, as specified by CCM, an enriched application description and a deployment planner in order to select resources and map components onto computers.<|reference_end|> | arxiv | @article{lacour2004a,
title={A Software Architecture for Automatic Deployment of CORBA Components
Using Grid Technologies},
author={Sebastien Lacour (PARIS Irisa) and Christian Perez (PARIS Irisa) and
Thierry Priol (PARIS Irisa)},
journal={DECOR04 (2004) 187-192},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411086},
primaryClass={cs.NI}
} | lacour2004a |
arxiv-672346 | cs/0411087 | Pandora : une plate-forme efficace pour la construction d'applications autonomes | <|reference_start|>Pandora : une plate-forme efficace pour la construction d'applications autonomes: Autonomic computing has been proposed recently as a way to address the difficult management of applications whose complexity is constantly increasing. Autonomous applications will have to be especially flexible and be able to monitor themselves permanently. This work presents a framework, Pandora, which eases the construction of applications that satisfy this double goal. Pandora relies on an original application programming pattern - based on stackable layers and message passing - to obtain a minimalist model and architecture that make it possible to control the overhead imposed by the full reflexivity of the framework. In addition, a prototype of the framework has been implemented in C++. A detailed performance study, together with examples of use, complements this presentation.<|reference_end|> | arxiv | @article{patarin2004pandora,
title={Pandora : une plate-forme efficace pour la construction d'applications
autonomes},
author={Simon Patarin and Mesaac Makpangou (REGAL UR-R LIP6)},
journal={DECOR04 (2004) 15-26},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411087},
primaryClass={cs.NI}
} | patarin2004pandora |
arxiv-672347 | cs/0411088 | Des correctifs de securite a la mise a jour | <|reference_start|>Des correctifs de securite a la mise a jour: The ever-growing complexity of software suggests that it will never be bug-free and therefore secure. Software companies regularly publish updates. But maybe because of lack of time or care, or maybe because stopping an application is annoying, such updates are rarely if ever deployed on users' machines. We propose an integrated tool allowing system administrators to deploy critical security updates on the fly on applications running remotely, without end-user intervention. Our approach is based on an aspect weaving system, Arachne, that dynamically rewrites binary code. Hence applications keep running while they are updated. Our second tool, Minerve, integrates Arachne within the standard updating process: Minerve takes a patch produced by diff and eventually builds a dynamic patch that can later be woven to update the application on the fly. In addition, Minerve allows consulting patches translated into a dedicated language and hence eases auditing tasks.<|reference_end|> | arxiv | @article{loriant2004des,
title={Des correctifs de securite a la mise a jour},
author={Nicolas Loriant (OBASCO IRISA) and Marc Segura Devillechaise (OBASCO
IRISA) and Jean-Marc Menaud (OBASCO IRISA)},
journal={DECOR04 (2004) 65-76},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411088},
primaryClass={cs.NI}
} | loriant2004des |
arxiv-672348 | cs/0411089 | Gestion Dynamique des Services Techniques pour Modele a Composants | <|reference_start|>Gestion Dynamique des Services Techniques pour Modele a Composants: Since new applications are intended for more and more heterogeneous environments, it is necessary to propose development solutions that best meet the adaptation needs of new services. Component-based programming partially answers this aim, allowing easy replacement of software blocks in order to provide the most suitable version of a component. Nevertheless, most industrial implementations of component-based models do not make it possible to provide components with the most suitable technical services (naming, trading, security, transaction, etc.). In this paper, we suggest defining the technical services themselves as components. We detail our proposal, basing it on the Fractal component model of Objectweb. We then provide solutions for using these new component-based technical services and propose a set of management components that make it possible to administer the resulting components in a dynamic and autonomous way. Finally, we present a prototype of the proposed solution.<|reference_end|> | arxiv | @article{herault2004gestion,
title={Gestion Dynamique des Services Techniques pour Modele a Composants},
author={Colombe Herault (LAMIH) and Sylvain Lecomte (LAMIH)},
journal={DECOR04 (2004) 135-146},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411089},
primaryClass={cs.NI}
} | herault2004gestion |
arxiv-672349 | cs/0411090 | Local heuristics and the emergence of spanning subgraphs in complex networks | <|reference_start|>Local heuristics and the emergence of spanning subgraphs in complex networks: We study the use of local heuristics to determine spanning subgraphs for use in the dissemination of information in complex networks. We introduce two different heuristics and analyze their behavior in giving rise to spanning subgraphs that perform well in terms of allowing every node of the network to be reached, of requiring relatively few messages and small node bandwidth for information dissemination, and also of stretching paths with respect to the underlying network only modestly. We contribute a detailed mathematical analysis of one of the heuristics and provide extensive simulation results on random graphs for both of them. These results indicate that, within certain limits, spanning subgraphs are indeed expected to emerge that perform well in respect to all requirements. We also discuss the spanning subgraphs' inherent resilience to failures and adaptability to topological changes.<|reference_end|> | arxiv | @article{stauffer2004local,
title={Local heuristics and the emergence of spanning subgraphs in complex
networks},
author={A. O. Stauffer and V. C. Barbosa},
journal={Theoretical Computer Science 355 (2006), 80-95},
year={2004},
doi={10.1016/j.tcs.2005.12.007},
number={ES-663/04},
archivePrefix={arXiv},
eprint={cs/0411090},
primaryClass={cs.NI}
} | stauffer2004local |
arxiv-672350 | cs/0411091 | Principles for Digital Preservation | <|reference_start|>Principles for Digital Preservation: The immense investments in creating and disseminating digitally represented information have not been accompanied by commensurate effort to ensure the longevity of information of permanent interest. Asserted difficulties with long-term digital preservation prove to be largely an underestimation of what technology can provide. We show how to clarify prominent misunderstandings and sketch a 'Trustworthy Digital Object (TDO)' method that solves all the published technical challenges.<|reference_end|> | arxiv | @article{gladney2004principles,
title={Principles for Digital Preservation},
author={H.M. Gladney},
journal={arXiv preprint arXiv:cs/0411091},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411091},
primaryClass={cs.DL}
} | gladney2004principles |
arxiv-672351 | cs/0411092 | Trustworthy 100-Year Digital Objects: Durable Encoding for When It's Too Late to Ask | <|reference_start|>Trustworthy 100-Year Digital Objects: Durable Encoding for When It's Too Late to Ask: How can an author store digital information so that it will be reliably useful, even years later when he is no longer available to answer questions? Methods that might work are not good enough; what is preserved today should be reliably useful whenever someone wants it. Prior proposals fail because they confound saved data with irrelevant details of today's information technology--details that are difficult to define, extract, and save completely and accurately. We use a virtual machine to represent and eventually to render any data whatsoever. We focus on a case of intermediate difficulty--an executable procedure--and identify a variant for every other data type. This solution might be more elaborate than needed to render some text, image, audio, or video data. Simple data can be preserved as representations using well-known standards. We sketch practical methods for files ranging from simple structures to those containing computer programs, treating simple cases here and deferring complex cases for future work. Enough of the complete solution is known to enable practical aggressive preservation programs today.<|reference_end|> | arxiv | @article{gladney2004trustworthy,
title={Trustworthy 100-Year Digital Objects: Durable Encoding for When It's Too
Late to Ask},
author={H.M. Gladney and R.A. Lorie},
journal={arXiv preprint arXiv:cs/0411092},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411092},
primaryClass={cs.DL}
} | gladney2004trustworthy |
arxiv-672352 | cs/0411093 | Forbidden Subgraphs in Connected Graphs | <|reference_start|>Forbidden Subgraphs in Connected Graphs: Given a set $\xi=\{H_1,H_2,...\}$ of connected non-acyclic graphs, a $\xi$-free graph is one which does not contain any member of $\xi$ as a copy. Define the excess of a graph as the difference between its number of edges and its number of vertices. Let ${\gr{W}}_{k,\xi}$ be the exponential generating function (EGF for short) of connected $\xi$-free graphs of excess equal to $k$ ($k \geq 1$). For each fixed $\xi$, a fundamental differential recurrence satisfied by the EGFs ${\gr{W}}_{k,\xi}$ is derived. We give methods for solving this nonlinear recurrence for the first few values of $k$ by means of graph surgery. We also show that for any finite collection $\xi$ of non-acyclic graphs, the EGFs ${\gr{W}}_{k,\xi}$ are always rational functions of the generating function, $T$, of Cayley's rooted (non-planar) labelled trees. From this, we prove that almost all connected graphs with $n$ nodes and $n+k$ edges are $\xi$-free, whenever $k=o(n^{1/3})$ and $|\xi| < \infty$, by means of Wright's inequalities and the saddle point method. Limiting distributions are derived for sparse connected $\xi$-free components that are present when a random graph on $n$ nodes has approximately $\frac{n}{2}$ edges. In particular, the probability distribution that it consists of trees, unicyclic components, $...$, $(q+1)$-cyclic components, all $\xi$-free, is derived. Similar results are also obtained for multigraphs, which are graphs where self-loops and multiple edges are allowed.<|reference_end|> | arxiv | @article{ravelomanana2004forbidden,
title={Forbidden Subgraphs in Connected Graphs},
author={Vlady Ravelomanana (LIPN) and Loys Thimonier (LARIA)},
journal={arXiv preprint arXiv:cs/0411093},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411093},
primaryClass={cs.DS cs.DM math.CO}
} | ravelomanana2004forbidden |
arxiv-672353 | cs/0411094 | On the existence of truly autonomic computing systems and the link with quantum computing | <|reference_start|>On the existence of truly autonomic computing systems and the link with quantum computing: A theoretical model of truly autonomic computing systems (ACS), with infinitely many constraints, is proposed. An argument similar to Turing's for the unsolvability of the halting problem, which is permitted in classical logic, shows that such systems cannot exist. Turing's argument fails in the recently proposed non-Aristotelian finitary logic (NAFL), which permits the existence of ACS. NAFL also justifies quantum superposition and entanglement, which are essential ingredients of quantum algorithms, and resolves the Einstein-Podolsky-Rosen (EPR) paradox in favour of quantum mechanics and non-locality. NAFL requires that the autonomic manager (AM) must be conceptually and architecturally distinct from the managed element, in order for the ACS to exist as a non-self-referential entity. Such a scenario is possible if the AM uses quantum algorithms and is protected from all problems by (unbreakable) quantum encryption, while the managed element remains classical. NAFL supports such a link between autonomic and quantum computing, with the AM existing as a metamathematical entity. NAFL also allows quantum algorithms to access truly random elements and thereby supports non-standard models of quantum (hyper-) computation that permit infinite parallelism.<|reference_end|> | arxiv | @article{srinivasan2004on,
title={On the existence of truly autonomic computing systems and the link with
quantum computing},
author={Radhakrishnan Srinivasan and H. P. Raghunandan},
journal={arXiv preprint arXiv:cs/0411094},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411094},
primaryClass={cs.LO math.LO quant-ph}
} | srinivasan2004on |
arxiv-672354 | cs/0411095 | Embeddings into the Pancake Interconnection Network | <|reference_start|>Embeddings into the Pancake Interconnection Network: Owing to its nice properties, the pancake is one of the Cayley graphs that were proposed as alternatives to the hypercube for interconnecting processors in parallel computers. In this paper, we present embeddings of rings, grids and hypercubes into the pancake with constant dilation and congestion. We also extend the results to similar efficient embeddings into the star graph.<|reference_end|> | arxiv | @article{lavault2004embeddings,
title={Embeddings into the Pancake Interconnection Network},
author={Christian Lavault (LIPN)},
journal={Parallel Processing Letters 12, 3-4 (2002) 297-310},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411095},
primaryClass={cs.DC cs.DM cs.DS}
} | lavault2004embeddings |
arxiv-672355 | cs/0411096 | Inter-Package Dependency Networks in Open-Source Software | <|reference_start|>Inter-Package Dependency Networks in Open-Source Software: This research analyzes complex networks in open-source software at the inter-package level, where package dependencies often span across projects and between development groups. We review complex networks identified at ``lower'' levels of abstraction, and then formulate a description of interacting software components at the package level, a relatively ``high'' level of abstraction. By mining open-source software repositories from two sources, we empirically show that the coupling of modules at this granularity creates a small-world and scale-free network in both instances.<|reference_end|> | arxiv | @article{labelle2004inter-package,
title={Inter-Package Dependency Networks in Open-Source Software},
author={Nathan LaBelle and Eugene Wallingford},
journal={arXiv preprint arXiv:cs/0411096},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411096},
primaryClass={cs.SE}
} | labelle2004inter-package |
arxiv-672356 | cs/0411097 | Deterministic Bayesian Logic | <|reference_start|>Deterministic Bayesian Logic: In this paper a conditional logic is defined and studied. This conditional logic, Deterministic Bayesian Logic, is constructed as a deterministic counterpart to the (probabilistic) Bayesian conditional. The logic is unrestricted, so that any logical operation is allowed. This logic is shown to be non-trivial and is not reduced to classical propositions. The Bayesian conditional of DBL implies a definition of logical independence. Interesting results are derived about the interactions between logical independence and proofs. A model is constructed for the logic. Completeness results are proved. It is shown that any unconditioned probability can be extended to the whole logic DBL. The Bayesian conditional is then recovered from the probabilistic DBL. Finally, it is shown why DBL is compliant with Lewis' triviality.<|reference_end|> | arxiv | @article{dambreville2004deterministic,
title={Deterministic Bayesian Logic},
author={Frederic Dambreville (DGA/Cta/DT/Gip)},
journal={arXiv preprint arXiv:cs/0411097},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411097},
primaryClass={cs.LO math.LO math.PR}
} | dambreville2004deterministic |
arxiv-672357 | cs/0411098 | On the High-SNR Capacity of Non-Coherent Networks | <|reference_start|>On the High-SNR Capacity of Non-Coherent Networks: We obtain the first term in the high signal-to-noise ratio (SNR) expansion of the capacity of fading networks where the transmitters and receivers--while fully cognizant of the fading \emph{law}--have no access to the fading \emph{realization}. This term is an integer multiple of $\log \log \textnormal{SNR}$ with the coefficient having a simple combinatorial characterization.<|reference_end|> | arxiv | @article{lapidoth2004on,
title={On the High-SNR Capacity of Non-Coherent Networks},
author={Amos Lapidoth},
journal={arXiv preprint arXiv:cs/0411098},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411098},
primaryClass={cs.IT math.IT}
} | lapidoth2004on |
arxiv-672358 | cs/0411099 | A Note on the PAC Bayesian Theorem | <|reference_start|>A Note on the PAC Bayesian Theorem: We prove general exponential moment inequalities for averages of [0,1]-valued iid random variables and use them to tighten the PAC Bayesian Theorem. The logarithmic dependence on the sample count in the numerator of the PAC Bayesian bound is halved.<|reference_end|> | arxiv | @article{maurer2004a,
title={A Note on the PAC Bayesian Theorem},
author={Andreas Maurer},
journal={arXiv preprint arXiv:cs/0411099},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411099},
primaryClass={cs.LG cs.AI}
} | maurer2004a |
arxiv-672359 | cs/0411100 | A Decidable Probability Logic for Timed Probabilistic Systems | <|reference_start|>A Decidable Probability Logic for Timed Probabilistic Systems: In this paper we extend the predicate logic introduced in [Beauquier et al. 2002] in order to deal with Semi-Markov Processes. We prove that, with respect to qualitative probabilistic properties, model checking is decidable for this logic applied to Semi-Markov Processes. Furthermore, we apply our logic to Probabilistic Timed Automata, considering classical and urgent semantics, and also considering predicates on clocks. We prove that results on Semi-Markov Processes also hold for Probabilistic Timed Automata under both semantics considered. Moreover, we prove that results for Markov Processes shown in [Beauquier et al. 2002] extend to Probabilistic Timed Automata when urgent semantics is considered.<|reference_end|> | arxiv | @article{lanotte2004a,
title={A Decidable Probability Logic for Timed Probabilistic Systems},
author={Ruggero Lanotte (Dipartimento di Scienze della Cultura, Politiche e
dell'Informazione) and Daniele Beauquier (LACL, Dept. of Informatics)},
journal={arXiv preprint arXiv:cs/0411100},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411100},
primaryClass={cs.LO}
} | lanotte2004a |
arxiv-672360 | cs/0412001 | EURYDICE : A platform for unified access to documents | <|reference_start|>EURYDICE : A platform for unified access to documents: In this paper we present Eurydice, a platform dedicated to providing a unified gateway to documents. Its basic document-collecting functionalities have been designed based on long experience in managing scientific documentation for large and demanding academic communities such as IMAG and INRIA. Besides the basic problem of accessing documents - which was of course the original and main motivation of the project - a great effort has been dedicated to the development of management functionalities which could help institutions control and analyse the current use of the documentation, and finally set a better ground for a documentation policy. Finally a great emphasis - and corresponding technical investment - has been put on the protection of property and reproduction rights, both on the side of the users' institution and on the publishers' side.<|reference_end|> | arxiv | @article{rouveyrol2004eurydice,
title={EURYDICE : A platform for unified access to documents},
author={Serge Rouveyrol (IMAG) and Yves Chiaramella (IMAG) and Francesca
Leinardi (IMAG) and Joanna Janik (IMAG) and Bruno Marmol (INRIA-RA) and
Carole Silvy (INRIA-RA) and Catherine Allauzun (INRIA-RA)},
journal={arXiv preprint arXiv:cs/0412001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412001},
primaryClass={cs.DL}
} | rouveyrol2004eurydice |
arxiv-672361 | cs/0412002 | Ranking Pages by Topology and Popularity within Web Sites | <|reference_start|>Ranking Pages by Topology and Popularity within Web Sites: We compare two link analysis ranking methods of web pages in a site. The first, called Site Rank, is an adaptation of PageRank to the granularity of a web site and the second, called Popularity Rank, is based on the frequencies of user clicks on the outlinks in a page that are captured by navigation sessions of users through the web site. We ran experiments on artificially created web sites of different sizes and on two real data sets, employing the relative entropy to compare the distributions of the two ranking methods. For the real data sets we also employ a nonparametric measure, called Spearman's footrule, which we use to compare the top-ten web pages ranked by the two methods. Our main result is that the distributions of the Popularity Rank and Site Rank are surprisingly close to each other, implying that the topology of a web site is very instrumental in guiding users through the site. Thus, in practice, the Site Rank provides a reasonable first order approximation of the aggregate behaviour of users within a web site given by the Popularity Rank.<|reference_end|> | arxiv | @article{borges2004ranking,
title={Ranking Pages by Topology and Popularity within Web Sites},
author={Jose Borges and Mark Levene},
journal={arXiv preprint arXiv:cs/0412002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412002},
primaryClass={cs.AI cs.IR}
} | borges2004ranking |
arxiv-672362 | cs/0412003 | Mining Heterogeneous Multivariate Time-Series for Learning Meaningful Patterns: Application to Home Health Telecare | <|reference_start|>Mining Heterogeneous Multivariate Time-Series for Learning Meaningful Patterns: Application to Home Health Telecare: In recent years, time-series mining has become a challenging issue for researchers. An important application lies in most monitoring purposes, which require analyzing large sets of time-series for learning usual patterns. Any deviation from this learned profile is then considered as an unexpected situation. Moreover, complex applications may involve the temporal study of several heterogeneous parameters. In this paper, we propose a method for mining heterogeneous multivariate time-series for learning meaningful patterns. The proposed approach allows for mixed time-series -- containing both pattern and non-pattern data -- as well as for imprecise matches, outliers, stretching, and global translation of pattern instances in time. We present the early results of our approach in the context of monitoring the health status of a person at home. The purpose is to build a behavioral profile of a person by analyzing the time variations of several quantitative or qualitative parameters recorded through a set of sensors installed in the home.<|reference_end|> | arxiv | @article{duchene2004mining,
title={Mining Heterogeneous Multivariate Time-Series for Learning Meaningful
Patterns: Application to Home Health Telecare},
author={Florence Duchene (TIMC - IMAG) and Catherine Garbay (TIMC - IMAG) and
Vincent Rialle (TIMC - IMAG, SIIM)},
journal={arXiv preprint arXiv:cs/0412003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412003},
primaryClass={cs.LG}
} | duchene2004mining |
arxiv-672363 | cs/0412004 | Finding Approximate Palindromes in Strings Quickly and Simply | <|reference_start|>Finding Approximate Palindromes in Strings Quickly and Simply: Described are two algorithms to find long approximate palindromes in a string, for example a DNA sequence. A simple algorithm requires $O(n)$ space and almost always runs in $O(k \cdot n)$ time, where $n$ is the length of the string and $k$ is the number of ``errors'' allowed in the palindrome. Its worst-case time complexity is $O(n^2)$, but this does not occur with real biological sequences. A more complex algorithm guarantees $O(k \cdot n)$ worst-case time complexity.<|reference_end|> | arxiv | @article{allison2004finding,
title={Finding Approximate Palindromes in Strings Quickly and Simply},
author={L. Allison},
journal={arXiv preprint arXiv:cs/0412004},
year={2004},
number={2004/162},
archivePrefix={arXiv},
eprint={cs/0412004},
primaryClass={cs.DS}
} | allison2004finding |
arxiv-672364 | cs/0412005 | Jordan Normal and Rational Normal Form Algorithms | <|reference_start|>Jordan Normal and Rational Normal Form Algorithms: In this paper, we present a deterministic Jordan normal form algorithm based on the Faddeev formula: \[(\lambda \cdot I-A) \cdot B(\lambda)=P(\lambda) \cdot I\] where $B(\lambda)$ is $(\lambda \cdot I-A)$'s comatrix and $P(\lambda)$ is $A$'s characteristic polynomial. This rational Jordan normal form algorithm differs from usual algorithms since it is not based on the Frobenius/Smith normal form but rather on the idea, already noted by Gantmacher, that the non-zero column vectors of $B(\lambda_0)$ are eigenvectors of $A$ associated with $\lambda_0$ for any root $\lambda_0$ of the characteristic polynomial. The complexity of the algorithm is $O(n^4)$ field operations if we know the factorization of the characteristic polynomial (or $O(n^5 \ln(n))$ operations for a matrix of integers of fixed size). This algorithm has been implemented using the Maple and Giac/Xcas computer algebra systems.<|reference_end|> | arxiv | @article{parisse2004jordan,
title={Jordan Normal and Rational Normal Form Algorithms},
author={Bernard Parisse (IF) and Morgane Vaughan (IF)},
journal={arXiv preprint arXiv:cs/0412005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412005},
primaryClass={cs.SC}
} | parisse2004jordan |
arxiv-672365 | cs/0412006 | The Accelerated Euclidean Algorithm | <|reference_start|>The Accelerated Euclidean Algorithm: We present a new GCD algorithm for two integers or polynomials. The algorithm is iterative and its time complexity is still $O(n \log^2 n \log \log n)$ for $n$-bit inputs.<|reference_end|> | arxiv | @article{sedjelmaci2004the,
title={The Accelerated Euclidean Algorithm},
author={Sidi Mohamed Sedjelmaci (LIPN)},
journal={Proceedings of the EACA, (2004) 283-287},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412006},
primaryClass={cs.DS}
} | sedjelmaci2004the |
arxiv-672366 | cs/0412007 | Exploring networks with traceroute-like probes: theory and simulations | <|reference_start|>Exploring networks with traceroute-like probes: theory and simulations: Mapping the Internet generally consists in sampling the network from a limited set of sources by using traceroute-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. In this paper we explore these biases and provide a statistical analysis of their origin. We derive an analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with broad distributions of connectivity. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in network models with different topologies. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. Moreover, we characterize the level of redundancy and completeness of the exploration process as a function of the topological properties of the network. Finally, we study numerically how the fraction of vertices and edges discovered in the sampled graph depends on the particular deployments of probing sources.
The results might hint at steps toward more efficient mapping strategies.<|reference_end|> | arxiv | @article{dall'asta2004exploring,
title={Exploring networks with traceroute-like probes: theory and simulations},
author={Luca Dall'Asta and Ignacio Alvarez-Hamelin and Alain Barrat and
Alexei Vazquez and Alessandro Vespignani},
journal={Theoretical Computer Science 355 (2006) 6},
year={2004},
doi={10.1016/j.tcs.2005.12.009},
archivePrefix={arXiv},
eprint={cs/0412007},
primaryClass={cs.NI cond-mat.other}
} | dall'asta2004exploring |
arxiv-672367 | cs/0412008 | Measured descent: A new embedding method for finite metrics | <|reference_start|>Measured descent: A new embedding method for finite metrics: We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Frechet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any $n$-point metric space $(X,d)$ embeds in Hilbert space with distortion $O(\sqrt{\alpha_X \log n})$, where $\alpha_X$ is a geometric estimate on the decomposability of $X$. As an immediate corollary, we obtain an $O(\sqrt{(\log \lambda_X) \log n})$ distortion embedding, where $\lambda_X$ is the doubling constant of $X$. Since $\lambda_X \le n$, this result recovers Bourgain's theorem, but when the metric $X$ is, in a sense, ``low-dimensional,'' improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of $(k, O(\log n))$ volume-respecting embeddings for all $1 \leq k \leq n$, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted $n$-point planar graph embeds in $\ell_\infty^{O(\log n)}$ with $O(1)$ distortion. The $O(\log n)$ bound on the dimension is optimal, and improves upon the previously known bound of $O((\log n)^2)$.<|reference_end|> | arxiv | @article{krauthgamer2004measured,
title={Measured descent: A new embedding method for finite metrics},
author={Robert Krauthgamer and James R. Lee and Manor Mendel and Assaf Naor},
journal={Geom. Funct. Anal. 15(4):839-858, 2005},
year={2004},
doi={10.1007/s00039-005-0527-6},
archivePrefix={arXiv},
eprint={cs/0412008},
primaryClass={cs.DS math.MG}
} | krauthgamer2004measured |
arxiv-672368 | cs/0412009 | A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming | <|reference_start|>A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming: In this paper, we show a way to exploit sparsity in the problem data in a primal-dual potential reduction method for solving a class of semidefinite programs. When the problem data is sparse, the dual variable is also sparse, but the primal one is not. To avoid working with the dense primal variable, we apply Fukuda et al.'s theory of partial matrix completion and work with partial matrices instead. The other place in the algorithm where sparsity should be exploited is in the computation of the search direction, where the gradient and the Hessian-matrix product of the primal and dual barrier functions must be computed in every iteration. By using an idea from automatic differentiation in backward mode, both the gradient and the Hessian-matrix product can be computed in time proportional to the time needed to compute the barrier functions of sparse variables itself. Moreover, the high space complexity that is normally associated with the use of automatic differentiation in backward mode can be avoided in this case. In addition, we suggest a technique to efficiently compute the determinant of the positive definite matrix completion that is required to compute primal search directions. The method of obtaining one of the primal search directions that minimizes the number of the evaluations of the determinant of the positive definite completion is also proposed. We then implement the algorithm and test it on the problem of finding the maximum cut of a graph.<|reference_end|> | arxiv | @article{srijuntongsiri2004a,
title={A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential
Reduction Method for Semidefinite Programming},
  author={Gun Srijuntongsiri and Stephen A. Vavasis},
journal={arXiv preprint arXiv:cs/0412009},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412009},
primaryClass={cs.NA}
} | srijuntongsiri2004a |
arxiv-672369 | cs/0412010 | Toward a Human-Centered Uml for Risk Analysis | <|reference_start|>Toward a Human-Centered Uml for Risk Analysis: Safety is now a major concern in many complex systems such as medical robots. A way to control the complexity of such systems is to manage risk. The first and most important step of this activity is risk analysis. During risk analysis, two main studies concerning human factors must be integrated: task analysis and human error analysis. This multidisciplinary analysis often leads to work being shared among several stakeholders who use their own languages and techniques. This often produces consistency errors and understanding difficulties between them. Hence, this paper proposes to carry out risk analysis using the common expression language UML (Unified Modeling Language) and to handle human-factors concepts for task analysis and human error analysis based on the features of this language. The approach is applied to the development of a medical robot for tele-echography.<|reference_end|> | arxiv | @article{guiochet2004toward,
title={Toward a Human-Centered Uml for Risk Analysis},
  author={Jeremie Guiochet (LAAS) and Gilles Motet and Claude Baron and Guy Boy},
journal={Proc. of the 18th IFIP World Computer Congress (WCC), Human Error,
Safety and Systems Development (HESSD04) (2004) 177-191},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412010},
primaryClass={cs.OH}
} | guiochet2004toward |
arxiv-672370 | cs/0412011 | Successful E-Business Systems - Paypal | <|reference_start|>Successful E-Business Systems - Paypal: PayPal is an account-based system that allows anyone with an email address to send and receive online payments. This service is easy for customers to use. Members can instantaneously send money to anyone. Recipients are informed by email that they have received a payment. PayPal is also available to people in 38 countries. This paper starts with an introduction to the company and its services. Information about the company's history and current situation is covered. Some interesting technical issues are then discussed. The paper ends with an analysis of the company and several recommendations for the future.<|reference_end|> | arxiv | @article{avaliani2004successful,
title={Successful E-Business Systems - Paypal},
author={Archil Avaliani},
journal={arXiv preprint arXiv:cs/0412011},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412011},
primaryClass={cs.OH}
} | avaliani2004successful |
arxiv-672371 | cs/0412012 | Jartege: a Tool for Random Generation of Unit Tests for Java Classes | <|reference_start|>Jartege: a Tool for Random Generation of Unit Tests for Java Classes: This report presents Jartege, a tool which allows random generation of unit tests for Java classes specified in JML. JML (Java Modeling Language) is a specification language for Java which allows one to write invariants for classes, and pre- and postconditions for operations. As in the JML-JUnit tool, we use JML specifications on the one hand to eliminate irrelevant test cases, and on the other hand as a test oracle. Jartege randomly generates test cases, which consist of a sequence of constructor and method calls for the classes under test. The random aspect of the tool can be parameterized by associating weights to classes and operations, and by controlling the number of instances which are created for each class under test. The practical use of Jartege is illustrated by a small case study.<|reference_end|> | arxiv | @article{oriat2004jartege:,
title={Jartege: a Tool for Random Generation of Unit Tests for Java Classes},
author={Catherine Oriat (LSR - IMAG)},
journal={arXiv preprint arXiv:cs/0412012},
year={2004},
number={RR-1069-I},
archivePrefix={arXiv},
eprint={cs/0412012},
primaryClass={cs.PL}
} | oriat2004jartege: |
arxiv-672372 | cs/0412013 | Signals for Cellular Automata in dimension 2 or higher | <|reference_start|>Signals for Cellular Automata in dimension 2 or higher: We investigate how increasing the dimension of the array can help to draw signals on cellular automata. We show the existence of a gap of constructible signals in any dimension. We exhibit two cellular automata in dimension 2 to show that increasing the dimension allows one to reduce the number of states required for some constructions.<|reference_end|> | arxiv | @article{dubacq2004signals,
title={Signals for Cellular Automata in dimension 2 or higher},
  author={Jean-Christophe Dubacq (LIPN, GREYC) and Veronique Terrier (GREYC)},
journal={LATIN 2002: Theoretical Informatics (2002) 451-464},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412013},
primaryClass={cs.CC cs.DC cs.DM math.CO}
} | dubacq2004signals |
arxiv-672373 | cs/0412014 | Randomized Initialization of a Wireless Multihop Network | <|reference_start|>Randomized Initialization of a Wireless Multihop Network: Address autoconfiguration is an important mechanism required to set the IP address of a node automatically in a wireless network. Address autoconfiguration, also known as initialization or naming, consists in giving a unique identifier ranging from 1 to $n$ to each node in a set of $n$ indistinguishable nodes. We consider a wireless network where $n$ nodes (processors) are randomly thrown in a square $X$, uniformly and independently. We assume that the network is synchronous and that two nodes are able to communicate if they are within distance at most $r$ of each other ($r$ is the transmitting/receiving range). The model of this paper concerns nodes without the collision detection ability: if two or more neighbors of a processor $u$ transmit concurrently, then $u$ receives none of the messages. We also suppose that nodes know neither the topology of the network nor the number of nodes in the network. Moreover, they start indistinguishable, anonymous and unnamed. Under this extremal scenario, we design and analyze a fully distributed protocol to achieve the initialization task for a wireless multihop network of $n$ nodes uniformly scattered in a square $X$. We show how the transmitting range of the deployed stations can affect typical characteristics such as the degrees and the diameter of the network. By allowing the nodes to transmit at a range $r= \sqrt{\frac{(1+\ell) \ln{n} \SIZE}{\pi n}}$ (slightly greater than the one required to have a connected network), we show how to design a randomized protocol running in expected time $O(n^{3/2} \log^2{n})$ in order to assign a unique number ranging from 1 to $n$ to each of the $n$ participating nodes.<|reference_end|> | arxiv | @article{ravelomanana2004randomized,
title={Randomized Initialization of a Wireless Multihop Network},
author={Vlady Ravelomanana (LIPN)},
journal={arXiv preprint arXiv:cs/0412014},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412014},
primaryClass={cs.DC cs.DM}
} | ravelomanana2004randomized |
arxiv-672374 | cs/0412015 | A Tutorial on the Expectation-Maximization Algorithm Including Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free Grammars | <|reference_start|>A Tutorial on the Expectation-Maximization Algorithm Including Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free Grammars: The paper gives a brief review of the expectation-maximization algorithm (Dempster 1977) in the comprehensible framework of discrete mathematics. In Section 2, two prominent estimation methods, the relative-frequency estimation and the maximum-likelihood estimation are presented. Section 3 is dedicated to the expectation-maximization algorithm and a simpler variant, the generalized expectation-maximization algorithm. In Section 4, two loaded dice are rolled. A more interesting example is presented in Section 5: The estimation of probabilistic context-free grammars.<|reference_end|> | arxiv | @article{prescher2004a,
title={A Tutorial on the Expectation-Maximization Algorithm Including
Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free
Grammars},
author={Detlef Prescher},
journal={arXiv preprint arXiv:cs/0412015},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412015},
primaryClass={cs.CL}
} | prescher2004a |
arxiv-672375 | cs/0412016 | Inside-Outside Estimation Meets Dynamic EM | <|reference_start|>Inside-Outside Estimation Meets Dynamic EM: We briefly review the inside-outside and EM algorithm for probabilistic context-free grammars. As a result, we formally prove that inside-outside estimation is a dynamic-programming variant of EM. This is interesting in its own right, but even more when considered in a theoretical context since the well-known convergence behavior of inside-outside estimation has been confirmed by many experiments but apparently has never been formally proved. However, being a version of EM, inside-outside estimation also inherits the good convergence behavior of EM. Therefore, the as yet imperfect line of argumentation can be transformed into a coherent proof.<|reference_end|> | arxiv | @article{prescher2004inside-outside,
title={Inside-Outside Estimation Meets Dynamic EM},
author={Detlef Prescher},
journal={Proceedings of IWPT 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412016},
primaryClass={cs.CL}
} | prescher2004inside-outside |
arxiv-672376 | cs/0412017 | An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol | <|reference_start|>An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol: Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.<|reference_end|> | arxiv | @article{baset2004an,
title={An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol},
author={Salman A. Baset and Henning Schulzrinne},
journal={arXiv preprint arXiv:cs/0412017},
year={2004},
number={CUCS-039-04},
archivePrefix={arXiv},
eprint={cs/0412017},
primaryClass={cs.NI cs.MM}
} | baset2004an |
arxiv-672377 | cs/0412018 | Modeling Complex Higher Order Patterns | <|reference_start|>Modeling Complex Higher Order Patterns: The goal of this paper is to show that generalizing the notion of frequent patterns can be useful in extending association analysis to more complex higher order patterns. To that end, we describe a general framework for modeling a complex pattern based on evaluating the interestingness of its sub-patterns. A key goal of any framework is to allow people to more easily express, explore, and communicate ideas, and hence, we illustrate how our framework can be used to describe a variety of commonly used patterns, such as frequent patterns, frequent closed patterns, indirect association patterns, hub patterns and authority patterns. To further illustrate the usefulness of the framework, we also present two new kinds of patterns derived from the framework, clique patterns and bi-clique patterns, and illustrate their practical use.<|reference_end|> | arxiv | @article{he2004modeling,
title={Modeling Complex Higher Order Patterns},
  author={Zengyou He and Xiaofei Xu and Shengchun Deng},
journal={arXiv preprint arXiv:cs/0412018},
year={2004},
number={Tr-04-11},
archivePrefix={arXiv},
eprint={cs/0412018},
primaryClass={cs.DB cs.AI}
} | he2004modeling |
arxiv-672378 | cs/0412019 | A Link Clustering Based Approach for Clustering Categorical Data | <|reference_start|>A Link Clustering Based Approach for Clustering Categorical Data: Categorical data clustering (CDC) and link clustering (LC) have been considered as separate research and application areas. The main focus of this paper is to investigate the commonalities between these two problems and the uses of these commonalities for the creation of new clustering algorithms for categorical data based on cross-fertilization between the two disjoint research fields. More precisely, we formally transform the CDC problem into an LC problem, and apply the LC approach to clustering categorical data. Experimental results on real datasets show that the LC-based clustering method is competitive with existing CDC algorithms with respect to clustering accuracy.<|reference_end|> | arxiv | @article{he2004a,
title={A Link Clustering Based Approach for Clustering Categorical Data},
  author={Zengyou He and Xiaofei Xu and Shengchun Deng},
journal={A poster paper in Proc. of WAIM 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412019},
primaryClass={cs.DL cs.AI}
} | he2004a |
arxiv-672379 | cs/0412020 | Towards Reliable Network Wide Broadcast in Mobile Ad Hoc Networks | <|reference_start|>Towards Reliable Network Wide Broadcast in Mobile Ad Hoc Networks: Network-Wide Broadcast (NWB) is a common operation in Mobile Ad hoc Networks (MANETs) used by routing protocols to discover routes and in group communication operations. NWB is commonly performed via flooding, which has been shown to be expensive in dense MANETs because of its high redundancy. Several efforts have targeted reducing the redundancy of floods. In this work, we target another problem that can substantially impact the success of NWBs: since MAC level broadcasts are unreliable, it is possible for critical rebroadcasts to be lost, leading to a significant drop in the node coverage. This is especially true under heavy load and in sparse topologies. We show that the techniques that target reducing the overhead of flooding, reduce its inherent redundancy and harm its reliability. In addition, we show that static approaches are more vulnerable to this problem. We then present a selective rebroadcast approach to improve the robustness of NWBs. We show that our approach leads to considerable improvement in NWB coverage relative to a recently proposed solution to this problem, with a small increase in overhead. The proposed approaches do not require proactive neighbor discovery and are therefore resilient to mobility. Finally, the solution can be added to virtually all NWB approaches to improve their reliability.<|reference_end|> | arxiv | @article{rogers2004towards,
title={Towards Reliable Network Wide Broadcast in Mobile Ad Hoc Networks},
author={Paul Rogers and Nael Abu-Ghazaleh},
journal={arXiv preprint arXiv:cs/0412020},
year={2004},
number={Tech Report: tr-cs-02-04},
archivePrefix={arXiv},
eprint={cs/0412020},
primaryClass={cs.NI}
} | rogers2004towards |
arxiv-672380 | cs/0412021 | Finite Domain Bounds Consistency Revisited | <|reference_start|>Finite Domain Bounds Consistency Revisited: A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency.<|reference_end|> | arxiv | @article{choi2004finite,
title={Finite Domain Bounds Consistency Revisited},
  author={Chiu Wo Choi and Warwick Harvey and Jimmy Ho-Man Lee and Peter J. Stuckey},
journal={arXiv preprint arXiv:cs/0412021},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412021},
primaryClass={cs.AI cs.LO}
} | choi2004finite |
arxiv-672381 | cs/0412022 | Zeno machines and hypercomputation | <|reference_start|>Zeno machines and hypercomputation: This paper reviews the Church-Turing Thesis (or rather, theses) with reference to their origin and application and considers some models of "hypercomputation", concentrating on perhaps the most straight-forward option: Zeno machines (Turing machines with accelerating clock). The halting problem is briefly discussed in a general context and the suggestion that it is an inevitable companion of any reasonable computational model is emphasised. It is hinted that claims to have "broken the Turing barrier" could be toned down and that the important and well-founded role of Turing computability in the mathematical sciences stands unchallenged.<|reference_end|> | arxiv | @article{potgieter2004zeno,
title={Zeno machines and hypercomputation},
author={Petrus H. Potgieter},
journal={arXiv preprint arXiv:cs/0412022},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412022},
primaryClass={cs.CC}
} | potgieter2004zeno |
arxiv-672382 | cs/0412023 | Multidimensional data classification with artificial neural networks | <|reference_start|>Multidimensional data classification with artificial neural networks: Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust in multi-dimensional data classification. In this article we study the separation of gammas from hadrons for the MAGIC Experiment. Two neural networks have been used for the classification task. One is the Multi-Layer Perceptron, based on supervised learning, and the other is the Self-Organising Map (SOM), which is based on an unsupervised learning technique. The results are presented, and possible ways of combining these networks are proposed to yield better and faster classification results.<|reference_end|> | arxiv | @article{boinee2004multidimensional,
title={Multidimensional data classification with artificial neural networks},
  author={P. Boinee and F. Barbarino and A. De Angelis},
journal={arXiv preprint arXiv:cs/0412023},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412023},
primaryClass={cs.NE cs.AI}
} | boinee2004multidimensional |
arxiv-672383 | cs/0412024 | Human-Level Performance on Word Analogy Questions by Latent Relational Analysis | <|reference_start|>Human-Level Performance on Word Analogy Questions by Latent Relational Analysis: This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. Recently the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in Latent Semantic Analysis), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. 
On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus.<|reference_end|> | arxiv | @article{turney2004human-level,
title={Human-Level Performance on Word Analogy Questions by Latent Relational
Analysis},
author={Peter D. Turney (National Research Council of Canada)},
journal={arXiv preprint arXiv:cs/0412024},
year={2004},
number={NRC-47422},
archivePrefix={arXiv},
eprint={cs/0412024},
primaryClass={cs.CL cs.IR cs.LG}
} | turney2004human-level |
arxiv-672384 | cs/0412025 | Minimum Dilation Stars | <|reference_start|>Minimum Dilation Stars: The dilation of a Euclidean graph is defined as the ratio of distance in the graph divided by distance in R^d. In this paper we consider the problem of positioning the root of a star such that the dilation of the resulting star is minimal. We present a deterministic O(n log n)-time algorithm for evaluating the dilation of a given star; a randomized O(n log n) expected-time algorithm for finding an optimal center in R^d; and for the case d=2, a randomized O(n 2^(alpha(n)) log^2 n) expected-time algorithm for finding an optimal center among the input points.<|reference_end|> | arxiv | @article{eppstein2004minimum,
title={Minimum Dilation Stars},
author={David Eppstein and Kevin A. Wortman},
journal={Comp. Geom. Theory and Appl. 37(1):27-37, 2007},
year={2004},
doi={10.1016/j.comgeo.2006.05.007},
archivePrefix={arXiv},
eprint={cs/0412025},
primaryClass={cs.CG}
} | eppstein2004minimum |
arxiv-672385 | cs/0412026 | Removing Propagation Redundant Constraints in Redundant Modeling | <|reference_start|>Removing Propagation Redundant Constraints in Redundant Modeling: A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with various degrees of constraint propagation for pruning the search space. One common technique to improve the execution efficiency is to add redundant constraints, which are constraints logically implied by others in the problem model. However, some redundant constraints are propagation redundant and hence do not contribute additional propagation information to the constraint solver. Redundant constraints arise naturally in the process of redundant modeling where two models of the same problem are connected and combined through channeling constraints. In this paper, we give general theorems for proving propagation redundancy of one constraint with respect to channeling constraints and constraints in the other model. We illustrate, on problems from CSPlib (http://www.csplib.org/), how detecting and removing propagation redundant constraints in redundant modeling can significantly speed up constraint solving.<|reference_end|> | arxiv | @article{choi2004removing,
title={Removing Propagation Redundant Constraints in Redundant Modeling},
  author={Chiu Wo Choi and Jimmy Ho-Man Lee and Peter J. Stuckey},
journal={arXiv preprint arXiv:cs/0412026},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412026},
primaryClass={cs.LO cs.AI}
} | choi2004removing |
arxiv-672386 | cs/0412027 | Correlated dynamics in human printing behavior | <|reference_start|>Correlated dynamics in human printing behavior: Arrival times of requests to print in a student laboratory were analyzed. Inter-arrival times between subsequent requests follow a universal scaling law relating time intervals and the size of the request, indicating a scale invariant dynamics with respect to the size. The cumulative distribution of file sizes is well-described by a modified power law often seen in non-equilibrium critical systems. For each user, waiting times between their individual requests show long range dependence and are broadly distributed from seconds to weeks. All results are incompatible with Poisson models, and may provide evidence of critical dynamics associated with voluntary thought processes in the brain.<|reference_end|> | arxiv | @article{harder2004correlated,
title={Correlated dynamics in human printing behavior},
author={Uli Harder and Maya Paczuski},
journal={arXiv preprint arXiv:cs/0412027},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412027},
primaryClass={cs.PF cond-mat.other}
} | harder2004correlated |
arxiv-672387 | cs/0412028 | A feasible algorithm for typing in Elementary Affine Logic | <|reference_start|>A feasible algorithm for typing in Elementary Affine Logic: We give a new type inference algorithm for typing lambda-terms in Elementary Affine Logic (EAL), which is motivated by applications to complexity and optimal reduction. Following previous references on this topic, the variant of EAL type system we consider (denoted EAL*) is a variant without sharing and without polymorphism. Our algorithm improves over the ones already known in that it offers a better complexity bound: if a simple type derivation for the term t is given our algorithm performs EAL* type inference in polynomial time.<|reference_end|> | arxiv | @article{baillot2004a,
title={A feasible algorithm for typing in Elementary Affine Logic},
author={Patrick Baillot and Kazushige Terui},
journal={arXiv preprint arXiv:cs/0412028},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412028},
primaryClass={cs.LO}
} | baillot2004a |
arxiv-672388 | cs/0412029 | The modular technology of development of the CAD expansions: profiles of outside networks of water supply and water drain | <|reference_start|>The modular technology of development of the CAD expansions: profiles of outside networks of water supply and water drain: The modular technology of development of the problem-oriented CAD expansions is applied to the task of designing profiles of outside networks of water supply and water drain, with realization in the program system TechnoCAD GlassX. The unity of structure of these profiles is revealed, and the system model of the drawings of network profiles is developed, including the structured parametric representation (properties of objects and their interdependence, general settings and default settings) and operations with it, which efficiently automate designing.<|reference_end|> | arxiv | @article{migunov2004the,
title={The modular technology of development of the CAD expansions: profiles of
outside networks of water supply and water drain},
  author={Vladimir V. Migunov and Rustem R. Kafiyatullov and Ilsur T. Safin},
journal={arXiv preprint arXiv:cs/0412029},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412029},
primaryClass={cs.CE cs.DS}
} | migunov2004the |
arxiv-672389 | cs/0412030 | The modular technology of development of the CAD expansions: protection of the buildings from the lightning | <|reference_start|>The modular technology of development of the CAD expansions: protection of the buildings from the lightning: The modular technology of development of the problem-oriented CAD expansions is applied to a task of designing of protection of the buildings from the lightning with realization in program system TechnoCAD GlassX. The system model of the drawings of lightning protection is developed including the structured parametric representation (properties of objects and their interdependence, general settings and default settings) and operations with it, which efficiently automate designing<|reference_end|> | arxiv | @article{migunov2004the,
title={The modular technology of development of the CAD expansions: protection
of the buildings from the lightning},
author={Vladimir V. Migunov, Rustem R. Kafiyatullov, Ilsur T. Safin},
journal={arXiv preprint arXiv:cs/0412030},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412030},
primaryClass={cs.CE cs.DS}
} | migunov2004the |
arxiv-672390 | cs/0412031 | The Features of the Complex CAD system of Reconstruction of the Industrial Plants | <|reference_start|>The Features of the Complex CAD system of Reconstruction of the Industrial Plants: The features of designing the reconstruction of an operating plant by its design department are considered: the results of the work are drawings conforming to the national standards; a large number of small projects for different operating objects; a variety of drawing types in one project; a large paper archive. The models and methods of developing a complex CAD system with a friendly uniform design environment, with configurable operation profiles, with reuse of common parts of a project, and with a series of problem-oriented subsystems are described using the CAD system TechnoCAD GlassX as an example.<|reference_end|> | arxiv | @article{migunov2004the,
title={The Features of the Complex CAD system of Reconstruction of the
Industrial Plants},
author={Vladimir V. Migunov},
journal={arXiv preprint arXiv:cs/0412031},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412031},
primaryClass={cs.CE}
} | migunov2004the |
arxiv-672391 | cs/0412032 | The methods of support of the requirements of the Russian standards at development of a CAD of industrial objects | <|reference_start|>The methods of support of the requirements of the Russian standards at development of a CAD of industrial objects: The methods of supporting the requirements of the Russian standards in a CAD of industrial objects are explained, as implemented in the CAD system TechnoCAD GlassX with its own graphics core and its own data storage structures. It is shown that binding the storage structures and program code of a CAD to the requirements of the standards makes it possible not only to fulfil these requirements in project documentation, but also to increase the compactness of drawing storage both on disk and in RAM.<|reference_end|> | arxiv | @article{migunov2004the,
title={The methods of support of the requirements of the Russian standards at
development of a CAD of industrial objects},
author={Vladimir V. Migunov},
journal={arXiv preprint arXiv:cs/0412032},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412032},
primaryClass={cs.CE cs.DS}
} | migunov2004the |
arxiv-672392 | cs/0412033 | The modelling of the build constructions in a CAD of the renovation of the enterprises by means of units in the drawings | <|reference_start|>The modelling of the build constructions in a CAD of the renovation of the enterprises by means of units in the drawings: A parametric model of building structures and the features of design operations are described for making drawings, which are a component common to the different parts of enterprise renovation projects. The key to deep design automation is the use of so-called units in the drawings, which join a visible graphic part with invisible parameters. The model has been validated during the design of several hundred drawings<|reference_end|> | arxiv | @article{migunov2004the,
title={The modelling of the build constructions in a CAD of the renovation of
the enterprises by means of units in the drawings},
author={Vladimir V. Migunov},
journal={arXiv preprint arXiv:cs/0412033},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412033},
primaryClass={cs.CE}
} | migunov2004the |
arxiv-672393 | cs/0412034 | The informatization of design works at industry firm during its renovation | <|reference_start|>The informatization of design works at industry firm during its renovation: A characterization of the design work at an industrial firm during its renovation, and of the general directions of its informatization, is given. The implementation of a CAD system is selected as the key direction, and the requirements for a complex CAD system are stated. The methods of developing such a CAD system are described, and the connection of this development with the integration of the information space of the firm's design department is characterized. This review is based on the experience of developing and implementing TechnoCAD GlassX, a complex CAD system for firm renovation<|reference_end|> | arxiv | @article{migunov2004the,
title={The informatization of design works at industry firm during its
renovation},
author={Vladimir V. Migunov},
journal={arXiv preprint arXiv:cs/0412034},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412034},
primaryClass={cs.CE}
} | migunov2004the |
arxiv-672394 | cs/0412035 | Deployment of a Grid-based Medical Imaging Application | <|reference_start|>Deployment of a Grid-based Medical Imaging Application: The MammoGrid project has deployed its Service-Oriented Architecture (SOA)-based Grid application in a real environment comprising actual participating hospitals. The resultant setup is currently being exploited to conduct rigorous in-house tests in the first phase before handing over the setup to the actual clinicians to get their feedback. This paper elaborates the deployment details and the experiences acquired during this phase of the project. Finally the strategy regarding migration to an upcoming middleware from EGEE project will be described. This paper concludes by highlighting some of the potential areas of future work.<|reference_end|> | arxiv | @article{amendolia2004deployment,
title={Deployment of a Grid-based Medical Imaging Application},
author={S R Amendolia, F Estrella, C del Frate, J Galvez, W Hassan, T Hauer, D
Manset, R McClatchey, M Odeh, D Rogulin, T Solomonides, R Warren},
journal={arXiv preprint arXiv:cs/0412035},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412035},
primaryClass={cs.DC cs.DB}
} | amendolia2004deployment |
arxiv-672395 | cs/0412036 | Reverse Engineering Ontology to Conceptual Data Models | <|reference_start|>Reverse Engineering Ontology to Conceptual Data Models: Ontologies facilitate the integration of heterogeneous data sources by resolving semantic heterogeneity between them. This research aims to study the possibility of generating a domain conceptual model from a given ontology with the vision to grow this generated conceptual data model into a global conceptual model integrating a number of existing data and information sources. Based on ontologically derived semantics of the BWW model, rules are identified that map elements of the ontology language (DAML+OIL) to domain conceptual model elements. This mapping is demonstrated using TAMBIS ontology. A significant corollary of this study is that it is possible to generate a domain conceptual model from a given ontology subject to validation that needs to be performed by the domain specialist before evolving this model into a global conceptual model.<|reference_end|> | arxiv | @article{el-ghalayini2004reverse,
title={Reverse Engineering Ontology to Conceptual Data Models},
author={Haya El-Ghalayini, Mohammed Odeh, Richard McClatchey and Tony
Solomonides},
journal={arXiv preprint arXiv:cs/0412036},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412036},
primaryClass={cs.DC cs.DB}
} | el-ghalayini2004reverse |
arxiv-672396 | cs/0412037 | A Statistical Framework for Efficient Monitoring of End-to-End Network Properties | <|reference_start|>A Statistical Framework for Efficient Monitoring of End-to-End Network Properties: Network service providers and customers are often concerned with aggregate performance measures that span multiple network paths. Unfortunately, forming such network-wide measures can be difficult, due to the issues of scale involved. In particular, the number of paths grows too rapidly with the number of endpoints to make exhaustive measurement practical. As a result, there is interest in the feasibility of methods that dramatically reduce the number of paths measured in such situations while maintaining acceptable accuracy. In previous work we proposed a statistical framework to efficiently address this problem, in the context of additive metrics such as delay and loss rate, for which the per-path metric is a sum of (possibly transformed) per-link measures. The key to our method lies in the observation and exploitation of significant redundancy in network paths (sharing of common links). In this paper we make three contributions: (1) we generalize the framework to make it more immediately applicable to network measurements encountered in practice; (2) we demonstrate that the observed path redundancy upon which our method is based is robust to variation in key network conditions and characteristics, including link failures; and (3) we show how the framework may be applied to address three practical problems of interest to network providers and customers, using data from an operating network. 
In particular, we show how appropriate selection of small sets of path measurements can be used to accurately estimate network-wide averages of path delays, to reliably detect network anomalies, and to effectively make a choice between alternative sub-networks, as a customer choosing between two providers or two ingress points into a provider network.<|reference_end|> | arxiv | @article{chua2004a,
title={A Statistical Framework for Efficient Monitoring of End-to-End Network
Properties},
author={David B. Chua, Eric D. Kolaczyk and Mark Crovella},
journal={arXiv preprint arXiv:cs/0412037},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412037},
primaryClass={cs.NI math.ST stat.TH}
} | chua2004a |
arxiv-672397 | cs/0412038 | Tycoon: an Implementation of a Distributed, Market-based Resource Allocation System | <|reference_start|>Tycoon: an Implementation of a Distributed, Market-based Resource Allocation System: Distributed clusters like the Grid and PlanetLab enable the same statistical multiplexing efficiency gains for computing as the Internet provides for networking. One major challenge is allocating resources in an economically efficient and low-latency way. A common solution is proportional share, where users each get resources in proportion to their pre-defined weight. However, this does not allow users to differentiate the value of their jobs. This leads to economic inefficiency. In contrast, systems that require reservations impose a high latency (typically minutes to hours) to acquire resources. We present Tycoon, a market based distributed resource allocation system based on proportional share. The key advantages of Tycoon are that it allows users to differentiate the value of their jobs, its resource acquisition latency is limited only by communication delays, and it imposes no manual bidding overhead on users. We present experimental results using a prototype implementation of our design.<|reference_end|> | arxiv | @article{lai2004tycoon:,
title={Tycoon: an Implementation of a Distributed, Market-based Resource
Allocation System},
author={Kevin Lai, Lars Rasmusson, Eytan Adar, Stephen Sorkin, Li Zhang,
Bernardo A. Huberman},
journal={arXiv preprint arXiv:cs/0412038},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412038},
primaryClass={cs.DC cs.OS}
} | lai2004tycoon: |
arxiv-672398 | cs/0412039 | Security in Carrier Class Server Applications for All-IP Networks | <|reference_start|>Security in Carrier Class Server Applications for All-IP Networks: A revolution is taking place in telecommunication networks. New services are appearing on platforms such as third generation cellular phones (3G) and broadband Internet access. This motivates the transition from mostly switched to all-IP networks. The replacement of the traditional shallow and well-defined interface to telephony networks brings accrued flexibility, but also makes the network accordingly difficult to properly secure. This paper surveys the implications of this transition on security issues in telecom applications. It does not give an exhaustive list of security tools or security protocols. Its goal is rather to initiate the reader to the security issues brought to carrier class servers by this revolution.<|reference_end|> | arxiv | @article{chatel2004security,
title={Security in Carrier Class Server Applications for All-IP Networks},
author={Marc Chatel, Michel Dagenais, Charles Levert, Makan Pourzandi},
journal={arXiv preprint arXiv:cs/0412039},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412039},
primaryClass={cs.NI}
} | chatel2004security |
arxiv-672399 | cs/0412040 | Data-stationary Architecture to Execute Quantum Algorithms Classically | <|reference_start|>Data-stationary Architecture to Execute Quantum Algorithms Classically: This paper presents a data stationary architecture in which each word has an attached address field. Address fields massively update in parallel to record data interchanges. Words do not move until memory is read for post processing. A sea of such cells can test large-scale quantum algorithms, although other programming is possible.<|reference_end|> | arxiv | @article{burger2004data-stationary,
title={Data-stationary Architecture to Execute Quantum Algorithms Classically},
author={J. R. Burger},
journal={arXiv preprint arXiv:cs/0412040},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412040},
primaryClass={cs.AR}
} | burger2004data-stationary |
arxiv-672400 | cs/0412041 | An Efficient and Flexible Engine for Computing Fixed Points | <|reference_start|>An Efficient and Flexible Engine for Computing Fixed Points: An efficient and flexible engine for computing fixed points is critical for many practical applications. In this paper, we firstly present a goal-directed fixed point computation strategy in the logic programming paradigm. The strategy adopts a tabled resolution (or memorized resolution) to mimic the efficient semi-naive bottom-up computation. Its main idea is to dynamically identify and record those clauses that will lead to recursive variant calls, and then repetitively apply those alternatives incrementally until the fixed point is reached. Secondly, there are many situations in which a fixed point contains a large number or even infinite number of solutions. In these cases, a fixed point computation engine may not be efficient enough or feasible at all. We present a mode-declaration scheme which provides the capabilities to reduce a fixed point from a big solution set to a preferred small one, or from an infeasible infinite set to a finite one. The mode declaration scheme can be characterized as a meta-level operation over the original fixed point. We show the correctness of the mode declaration scheme. Thirdly, the mode-declaration scheme provides a new declarative method for dynamic programming, which is typically used for solving optimization problems. There is no need to define the value of an optimal solution recursively, instead, defining a general solution suffices. The optimal value as well as its corresponding concrete solution can be derived implicitly and automatically using a mode-directed fixed point computation engine. Finally, this fixed point computation engine has been successfully implemented in a commercial Prolog system. 
Experimental results are shown to indicate that the mode declaration improves both time and space performances in solving dynamic programming problems.<|reference_end|> | arxiv | @article{guo2004an,
title={An Efficient and Flexible Engine for Computing Fixed Points},
author={Hai-Feng Guo and Gopal Gupta},
journal={arXiv preprint arXiv:cs/0412041},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412041},
primaryClass={cs.PL cs.AI cs.LO}
} | guo2004an |