corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-672401 | cs/0412042 | The approximability of three-valued MAX CSP | <|reference_start|>The approximability of three-valued MAX CSP: In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3.<|reference_end|> | arxiv | @article{jonsson2004the,
title={The approximability of three-valued MAX CSP},
author={Peter Jonsson and Mikael Klasson and Andrei Krokhin},
journal={arXiv preprint arXiv:cs/0412042},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412042},
primaryClass={cs.CC}
} | jonsson2004the |
arxiv-672402 | cs/0412043 | Widening Operators for Weakly-Relational Numeric Abstractions (Extended Abstract) | <|reference_start|>Widening Operators for Weakly-Relational Numeric Abstractions (Extended Abstract): We discuss the divergence problems recently identified in some extrapolation operators for weakly-relational numeric domains. We identify the cause of the divergences and point out that resorting to more concrete, syntactic domains can be avoided by researching suitable algorithms for the elimination of redundant constraints in the chosen representation.<|reference_end|> | arxiv | @article{bagnara2004widening,
title={Widening Operators for Weakly-Relational Numeric Abstractions (Extended
Abstract)},
author={Roberto Bagnara and Patricia M. Hill and Elena Mazzi and Enea Zaffanella},
journal={arXiv preprint arXiv:cs/0412043},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412043},
primaryClass={cs.PL}
} | bagnara2004widening |
arxiv-672403 | cs/0412044 | TulaFale: A Security Tool for Web Services | <|reference_start|>TulaFale: A Security Tool for Web Services: Web services security specifications are typically expressed as a mixture of XML schemas, example messages, and narrative explanations. We propose a new specification language for writing complementary machine-checkable descriptions of SOAP-based security protocols and their properties. Our TulaFale language is based on the pi calculus (for writing collections of SOAP processors running in parallel), plus XML syntax (to express SOAP messaging), logical predicates (to construct and filter SOAP messages), and correspondence assertions (to specify authentication goals of protocols). Our implementation compiles TulaFale into the applied pi calculus, and then runs Blanchet's resolution-based protocol verifier. Hence, we can automatically verify authentication properties of SOAP protocols.<|reference_end|> | arxiv | @article{bhargavan2004tulafale:,
title={TulaFale: A Security Tool for Web Services},
author={Karthikeyan Bhargavan and Cedric Fournet and Andrew D. Gordon and
Riccardo Pucella},
journal={arXiv preprint arXiv:cs/0412044},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412044},
primaryClass={cs.CR}
} | bhargavan2004tulafale: |
arxiv-672404 | cs/0412045 | Validating a Web Service Security Abstraction by Typing | <|reference_start|>Validating a Web Service Security Abstraction by Typing: An XML web service is, to a first approximation, an RPC service in which requests and responses are encoded in XML as SOAP envelopes, and transported over HTTP. We consider the problem of authenticating requests and responses at the SOAP-level, rather than relying on transport-level security. We propose a security abstraction, inspired by earlier work on secure RPC, in which the methods exported by a web service are annotated with one of three security levels: none, authenticated, or both authenticated and encrypted. We model our abstraction as an object calculus with primitives for defining and calling web services. We describe the semantics of our object calculus by translating to a lower-level language with primitives for message passing and cryptography. To validate our semantics, we embed correspondence assertions that specify the correct authentication of requests and responses. By appeal to the type theory for cryptographic protocols of Gordon and Jeffrey's Cryptyc, we verify the correspondence assertions simply by typing. Finally, we describe an implementation of our semantics via custom SOAP headers.<|reference_end|> | arxiv | @article{gordon2004validating,
title={Validating a Web Service Security Abstraction by Typing},
author={Andrew D. Gordon and Riccardo Pucella},
journal={Formal Aspects of Computing 17 (3), pp. 277-318, 2005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412045},
primaryClass={cs.CR}
} | gordon2004validating |
arxiv-672405 | cs/0412046 | Quasiconvex Programming | <|reference_start|>Quasiconvex Programming: We define quasiconvex programming, a form of generalized linear programming in which one seeks the point minimizing the pointwise maximum of a collection of quasiconvex functions. We survey algorithms for solving quasiconvex programs either numerically or via generalizations of the dual simplex method from linear programming, and describe varied applications of this geometric optimization technique in meshing, scientific computation, information visualization, automated algorithm analysis, and robust statistics.<|reference_end|> | arxiv | @article{eppstein2004quasiconvex,
title={Quasiconvex Programming},
author={David Eppstein},
journal={arXiv preprint arXiv:cs/0412046},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412046},
primaryClass={cs.CG}
} | eppstein2004quasiconvex |
arxiv-672406 | cs/0412047 | A Social Network for Societal-Scale Decision-Making Systems | <|reference_start|>A Social Network for Societal-Scale Decision-Making Systems: In societal-scale decision-making systems the collective is faced with the problem of ensuring that the derived group decision is in accord with the collective's intention. In modern systems, political institutions have instantiated representative forms of decision-making to ensure that every individual in the society has a participatory voice in the decision-making behavior of the whole--even if only indirectly through representation. An agent-based simulation demonstrates that in modern representative systems, as the ratio of representatives increases, there exists an exponential decrease in the ability for the group to behave in accord with the desires of the whole. To remedy this issue, this paper provides a novel representative power structure for decision-making that utilizes a social network and power distribution algorithm to maintain the collective's perspective over varying degrees of participation and/or ratios of representation. This work shows promise for the future development of policy-making systems that are supported by the computer and network infrastructure of our society.<|reference_end|> | arxiv | @article{rodriguez2004a,
title={A Social Network for Societal-Scale Decision-Making Systems},
author={Marko Rodriguez and Daniel Steinbock},
journal={North American Association for Computational Social and
Organizational Science Conference Proceedings 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412047},
primaryClass={cs.CY cs.DS cs.HC}
} | rodriguez2004a |
arxiv-672407 | cs/0412048 | On computing fixed points for generalized sandpiles | <|reference_start|>On computing fixed points for generalized sandpiles: We prove fixed points results for sandpiles starting with arbitrary initial conditions. We give an effective algorithm for computing such fixed points, and we refine it in the particular case of SPM.<|reference_end|> | arxiv | @article{formenti2004on,
title={On computing fixed points for generalized sandpiles},
author={Enrico Formenti (I3S) and Benoit Masson (I3S)},
journal={arXiv preprint arXiv:cs/0412048},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412048},
primaryClass={cs.CC}
} | formenti2004on |
arxiv-672408 | cs/0412049 | Neural Networks in Mobile Robot Motion | <|reference_start|>Neural Networks in Mobile Robot Motion: This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.<|reference_end|> | arxiv | @article{janglova2004neural,
title={Neural Networks in Mobile Robot Motion},
author={Danica Janglova},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 1, March 2004, pp.15-22},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412049},
primaryClass={cs.RO cs.AI}
} | janglova2004neural |
arxiv-672409 | cs/0412050 | Gyroscopically Stabilized Robot: Balance and Tracking | <|reference_start|>Gyroscopically Stabilized Robot: Balance and Tracking: The single wheel, gyroscopically stabilized robot - Gyrover, is a dynamically stable but statically unstable, underactuated system. In this paper, based on the dynamic model of the robot, we investigate two classes of nonholonomic constraints associated with the system. Then, based on the backstepping technique, we propose a control law for balance control of Gyrover. Next, through transforming the system's states from Cartesian coordinates to polar coordinates, control laws for point-to-point control and line tracking in Cartesian space are provided.<|reference_end|> | arxiv | @article{ou2004gyroscopically,
title={Gyroscopically Stabilized Robot: Balance and Tracking},
author={Yongsheng Ou and Yangsheng Xu},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 1, March 2004, pp.23-32},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412050},
primaryClass={cs.RO}
} | ou2004gyroscopically |
arxiv-672410 | cs/0412051 | Dynamic replanning in uncertain environments for a sewer inspection robot | <|reference_start|>Dynamic replanning in uncertain environments for a sewer inspection robot: The sewer inspection robot MAKRO is an autonomous multi-segment robot with worm-like shape driven by wheels. It is currently under development in the project MAKRO-PLUS. The robot has to navigate autonomously within sewer systems. Its first tasks will be to take water probes, analyze them onboard, and measure positions of manholes and pipes to detect polluted-loaded sewage and to improve current maps of sewer systems. One of the challenging problems is the controller software, which should enable the robot to navigate in the sewer system and perform the inspection tasks autonomously, not inflicting any self-damage. This paper focuses on the route planning and replanning aspect of the robot. The robot's software has four different levels, of which the planning system is the highest level, and the remaining three are controller levels each with a different degree of abstraction. The planner coordinates the sequence of actions that are to be successively executed by the robot.<|reference_end|> | arxiv | @article{adria2004dynamic,
title={Dynamic replanning in uncertain environments for a sewer inspection
robot},
author={Oliver Adria and Hermann Streich and Joachim Hertzberg},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 1, March 2004, pp.33-38},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412051},
primaryClass={cs.RO}
} | adria2004dynamic |
arxiv-672411 | cs/0412052 | WebotsTM: Professional Mobile Robot Simulation | <|reference_start|>WebotsTM: Professional Mobile Robot Simulation: Cyberbotics Ltd. develops WebotsTM, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. WebotsTM lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. WebotsTM has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.<|reference_end|> | arxiv | @article{michel2004webotstm:,
title={WebotsTM: Professional Mobile Robot Simulation},
author={Olivier Michel},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 1, March 2004, pp.39-42},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412052},
primaryClass={cs.RO}
} | michel2004webotstm: |
arxiv-672412 | cs/0412053 | Dynamic simulation of task constrained of a rigid-flexible manipulator | <|reference_start|>Dynamic simulation of task constrained of a rigid-flexible manipulator: A rigid-flexible manipulator may be assigned tasks in a moving environment where the winds or vibrations affect the position and/or orientation of surface of operation. Consequently, losses of the contact and perhaps degradation of the performance may occur as references are changed. When the environment is moving, knowledge of the angle α between the contact surface and the horizontal is required at every instant. In this paper, different profiles for the time varying angle α are proposed to investigate the effect of this change into the contact force and the joint torques of a rigid-flexible manipulator. The coefficients of the equation of the proposed rotating surface are changing with time to determine the new X and Y coordinates of the moving surface as the surface rotates.<|reference_end|> | arxiv | @article{ata2004dynamic,
title={Dynamic simulation of task constrained of a rigid-flexible manipulator},
author={Atef A. Ata and Habib Johar},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 2, June 2004, pp.61-66},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412053},
primaryClass={cs.RO}
} | ata2004dynamic |
arxiv-672413 | cs/0412054 | Assembly and Disassembly Planning by using Fuzzy Logic & Genetic Algorithms | <|reference_start|>Assembly and Disassembly Planning by using Fuzzy Logic & Genetic Algorithms: The authors propose the implementation of hybrid Fuzzy Logic-Genetic Algorithm (FL-GA) methodology to plan the automatic assembly and disassembly sequence of products. The GA-Fuzzy Logic approach is implemented onto two levels. The first level of hybridization consists of the development of a Fuzzy controller for the parameters of an assembly or disassembly planner based on GAs. This controller acts on mutation probability and crossover rate in order to adapt their values dynamically while the algorithm runs. The second level consists of the identification of theoptimal assembly or disassembly sequence by a Fuzzy function, in order to obtain a closer control of the technological knowledge of the assembly/disassembly process. Two case studies were analyzed in order to test the efficiency of the Fuzzy-GA methodologies.<|reference_end|> | arxiv | @article{galantucci2004assembly,
title={Assembly and Disassembly Planning by using Fuzzy Logic \& Genetic
Algorithms},
author={L. M. Galantucci and G. Percoco and R. Spina},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 2, June 2004, pp.67-74},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412054},
primaryClass={cs.RO}
} | galantucci2004assembly |
arxiv-672414 | cs/0412055 | Robotic Applications in Cardiac Surgery | <|reference_start|>Robotic Applications in Cardiac Surgery: Traditionally, cardiac surgery has been performed through a median sternotomy, which allows the surgeon generous access to the heart and surrounding great vessels. As a paradigm shift in the size and location of incisions occurs in cardiac surgery, new methods have been developed to allow the surgeon the same amount of dexterity and accessibility to the heart in confined spaces and in a less invasive manner. Initially, long instruments without pivot points were used, however, more recent robotic telemanipulation systems have been applied that allow for improved dexterity, enabling the surgeon to perform cardiac surgery from a distance not previously possible. In this rapidly evolving field, we review the recent history and clinical results of using robotics in cardiac surgery.<|reference_end|> | arxiv | @article{kypson2004robotic,
title={Robotic Applications in Cardiac Surgery},
author={Alan P. Kypson and W. Randolph Chitwood Jr},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 2, June 2004, pp.87-92},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412055},
primaryClass={cs.RO}
} | kypson2004robotic |
arxiv-672415 | cs/0412056 | One-Chip Solution to Intelligent Robot Control: Implementing Hexapod Subsumption Architecture Using a Contemporary Microprocessor | <|reference_start|>One-Chip Solution to Intelligent Robot Control: Implementing Hexapod Subsumption Architecture Using a Contemporary Microprocessor: This paper introduces a six-legged autonomous robot managed by a single controller and a software core modeled on subsumption architecture. We begin by discussing the features and capabilities of IsoPod, a new processor for robotics which has enabled a streamlined implementation of our project. We argue that this processor offers a unique set of hardware and software features, making it a practical development platform for robotics in general and for subsumption-based control architectures in particular. Next, we summarize original ideas on subsumption architecture implementation for a six-legged robot, as presented by its inventor Rodney Brooks in 1980s. A comparison is then made to a more recent example of a hexapod control architecture based on subsumption. The merits of both systems are analyzed and a new subsumption architecture layout is formulated as a response. We conclude with some remarks regarding the development of this project as a hint at new potentials for intelligent robot design, opened by a recent development in embedded controller market.<|reference_end|> | arxiv | @article{pashenkov2004one-chip,
title={One-Chip Solution to Intelligent Robot Control: Implementing Hexapod
Subsumption Architecture Using a Contemporary Microprocessor},
author={Nikita Pashenkov and Ryuichi Iwamasa},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 2, June 2004, pp. 93-98},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412056},
primaryClass={cs.RO}
} | pashenkov2004one-chip |
arxiv-672416 | cs/0412057 | How to achieve various gait patterns from single nominal | <|reference_start|>How to achieve various gait patterns from single nominal: In this paper is presented an approach to achieving on-line modification of nominal biped gait without recomputing entire dynamics when steady motion is performed. Straight, dynamically balanced walk was used as a nominal gait, and applied modifications were speed-up and slow-down walk and turning left and right. It is shown that the disturbances caused by these modifications jeopardize dynamic stability, but they can be simply compensated to enable walk continuation.<|reference_end|> | arxiv | @article{vukobratovic2004how,
title={How to achieve various gait patterns from single nominal},
author={Miomir Vukobratovic and Dejan Andric and Branislav Borovac},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 2, June 2004, pp. 99-108},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412057},
primaryClass={cs.RO}
} | vukobratovic2004how |
arxiv-672417 | cs/0412058 | Clustering Categorical Data Streams | <|reference_start|>Clustering Categorical Data Streams: The data stream model has been defined for new classes of applications involving massive data being generated at a fast pace. Web click stream analysis and detection of network intrusions are two examples. Cluster analysis on data streams becomes more difficult, because the data objects in a data stream must be accessed in order and can be read only once or few times with limited resources. Recently, a few clustering algorithms have been developed for analyzing numeric data streams. However, to our knowledge to date, no algorithm exists for clustering categorical data streams. In this paper, we propose an efficient clustering algorithm for analyzing categorical data streams. It has been proved that the proposed algorithm uses small memory footprints. We provide empirical analysis on the performance of the algorithm in clustering both synthetic and real data streams<|reference_end|> | arxiv | @article{he2004clustering,
title={Clustering Categorical Data Streams},
author={Zengyou He and Xiaofei Xu and Shengchun Deng and Joshua Zhexue Huang},
journal={arXiv preprint arXiv:cs/0412058},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412058},
primaryClass={cs.DB cs.AI}
} | he2004clustering |
arxiv-672418 | cs/0412059 | Vector Symbolic Architectures answer Jackendoff's challenges for cognitive neuroscience | <|reference_start|>Vector Symbolic Architectures answer Jackendoff's challenges for cognitive neuroscience: Jackendoff (2002) posed four challenges that linguistic combinatoriality and rules of language present to theories of brain function. The essence of these problems is the question of how to neurally instantiate the rapid construction and transformation of the compositional structures that are typically taken to be the domain of symbolic processing. He contended that typical connectionist approaches fail to meet these challenges and that the dialogue between linguistic theory and cognitive neuroscience will be relatively unproductive until the importance of these problems is widely recognised and the challenges answered by some technical innovation in connectionist modelling. This paper claims that a little-known family of connectionist models (Vector Symbolic Architectures) are able to meet Jackendoff's challenges.<|reference_end|> | arxiv | @article{gayler2004vector,
title={Vector Symbolic Architectures answer Jackendoff's challenges for
cognitive neuroscience},
author={Ross W. Gayler},
journal={arXiv preprint arXiv:cs/0412059},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412059},
primaryClass={cs.NE cs.AI}
} | gayler2004vector |
arxiv-672419 | cs/0412060 | Monotonicity Results for Coherent MIMO Rician Channels | <|reference_start|>Monotonicity Results for Coherent MIMO Rician Channels: The dependence of the Gaussian input information rate on the line-of-sight (LOS) matrix in multiple-input multiple-output coherent Rician fading channels is explored. It is proved that the outage probability and the mutual information induced by a multivariate circularly symmetric Gaussian input with any covariance matrix are monotonic in the LOS matrix D, or more precisely, monotonic in D'D in the sense of the Loewner partial order. Conversely, it is also demonstrated that this ordering on the LOS matrices is a necessary condition for the uniform monotonicity over all input covariance matrices. This result is subsequently applied to prove the monotonicity of the isotropic Gaussian input information rate and channel capacity in the singular values of the LOS matrix. Extensions to multiple-access channels are also discussed.<|reference_end|> | arxiv | @article{hoesli2004monotonicity,
title={Monotonicity Results for Coherent MIMO Rician Channels},
author={Daniel Hoesli and Young-Han Kim and Amos Lapidoth},
journal={arXiv preprint arXiv:cs/0412060},
year={2004},
doi={10.1109/TIT.2005.858968},
archivePrefix={arXiv},
eprint={cs/0412060},
primaryClass={cs.IT math.IT}
} | hoesli2004monotonicity |
arxiv-672420 | cs/0412061 | Free quasi-symmetric functions, product actions and quantum field theory of partitions | <|reference_start|>Free quasi-symmetric functions, product actions and quantum field theory of partitions: We examine two associative products over the ring of symmetric functions related to the intransitive and Cartesian products of permutation groups. As an application, we give an enumeration of some Feynman type diagrams arising in Bender's QFT of partitions. We end by exploring possibilities to construct noncommutative analogues.<|reference_end|> | arxiv | @article{duchamp2004free,
title={Free quasi-symmetric functions, product actions and quantum field theory
of partitions},
author={Gerard Henry Edmond Duchamp (LIPN) and Jean-Gabriel Luque (IGM) and
Karol A. Penson (LPTL) and Christophe Tollu (LIPN)},
journal={arXiv preprint arXiv:cs/0412061},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412061},
primaryClass={cs.SC math.CO quant-ph}
} | duchamp2004free |
arxiv-672421 | cs/0412062 | Isomorphic Implication | <|reference_start|>Isomorphic Implication: We study the isomorphic implication problem for Boolean constraints. We show that this is a natural analog of the subgraph isomorphism problem. We prove that, depending on the set of constraints, this problem is in P, NP-complete, or NP-hard, coNP-hard, and in parallel access to NP. We show how to extend the NP-hardness and coNP-hardness to hardness for parallel access to NP for some cases, and conjecture that this can be done in all cases.<|reference_end|> | arxiv | @article{bauland2004isomorphic,
title={Isomorphic Implication},
author={Michael Bauland and Edith Hemaspaandra},
journal={arXiv preprint arXiv:cs/0412062},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412062},
primaryClass={cs.CC}
} | bauland2004isomorphic |
arxiv-672422 | cs/0412063 | Labelled transition systems as a Stone space | <|reference_start|>Labelled transition systems as a Stone space: A fully abstract and universal domain model for modal transition systems and refinement is shown to be a maximal-points space model for the bisimulation quotient of labelled transition systems over a finite set of events. In this domain model we prove that this quotient is a Stone space whose compact, zero-dimensional, and ultra-metrizable Hausdorff topology measures the degree of bisimilarity such that image-finite labelled transition systems are dense. Using this compactness we show that the set of labelled transition systems that refine a modal transition system, its ''set of implementations'', is compact and derive a compactness theorem for Hennessy-Milner logic on such implementation sets. These results extend to systems that also have partially specified state propositions, unify existing denotational, operational, and metric semantics on partial processes, render robust consistency measures for modal transition systems, and yield an abstract interpretation of compact sets of labelled transition systems as Scott-closed sets of modal transition systems.<|reference_end|> | arxiv | @article{huth2004labelled,
title={Labelled transition systems as a Stone space},
author={Michael Huth},
journal={Logical Methods in Computer Science, Volume 1, Issue 1 (January
26, 2005) lmcs:2271},
year={2004},
doi={10.2168/LMCS-1(1:1)2005},
archivePrefix={arXiv},
eprint={cs/0412063},
primaryClass={cs.LO}
} | huth2004labelled |
arxiv-672423 | cs/0412064 | Collective Intelligence Quantified for Computer-Mediated Group Problem Solving | <|reference_start|>Collective Intelligence Quantified for Computer-Mediated Group Problem Solving: Collective Intelligence (CI) is the ability of a group to exhibit greater intelligence than its individual members. Expressed by the common saying that "two minds are better than one," CI has been a topic of interest for social psychology and the information sciences. Computer mediation adds a new element in the form of distributed networks and group support systems. These facilitate highly organized group activities that were all but impossible before computer mediation. This paper presents experimental findings on group problem solving where a distributed software system automatically integrates input from many humans. In order to quantify Collective Intelligence, we compare the performance of groups to individuals when solving a mathematically formalized problem. This study shows that groups can outperform individuals on difficult but not easy problems, though groups are slower to produce solutions. The subjects are 57 university students. The task is the 8-Puzzle sliding tile game.<|reference_end|> | arxiv | @article{steinbock2004collective,
title={Collective Intelligence Quantified for Computer-Mediated Group Problem
Solving},
author={Dan Steinbock and Craig Kaplan and Marko Rodriguez and Juana Diaz and
Newton Der and Suzanne Garcia},
journal={arXiv preprint arXiv:cs/0412064},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412064},
primaryClass={cs.CY cs.HC cs.OH}
} | steinbock2004collective |
arxiv-672424 | cs/0412065 | A Framework for Creating Natural Language User Interfaces for Action-Based Applications | <|reference_start|>A Framework for Creating Natural Language User Interfaces for Action-Based Applications: In this paper we present a framework for creating natural language interfaces to action-based applications. Our framework uses a number of reusable application-independent components, in order to reduce the effort of creating a natural language interface for a given application. Using a type-logical grammar, we first translate natural language sentences into expressions in an extended higher-order logic. These expressions can be seen as executable specifications corresponding to the original sentences. The executable specifications are then interpreted by invoking appropriate procedures provided by the application for which a natural language interface is being created.<|reference_end|> | arxiv | @article{chong2004a,
title={A Framework for Creating Natural Language User Interfaces for
Action-Based Applications},
author={Stephen Chong and Riccardo Pucella},
journal={arXiv preprint arXiv:cs/0412065},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412065},
primaryClass={cs.CL cs.HC}
} | chong2004a |
arxiv-672425 | cs/0412066 | From Feature Extraction to Classification: A multidisciplinary Approach applied to Portuguese Granites | <|reference_start|>From Feature Extraction to Classification: A multidisciplinary Approach applied to Portuguese Granites: The purpose of this paper is to present a complete methodology based on a multidisciplinary approach, that goes from the extraction of features till the classification of a set of different portuguese granites. The set of tools to extract the features that characterise polished surfaces of the granites is mainly based on mathematical morphology. The classification methodology is based on a genetic algorithm capable of search the input feature space used by the nearest neighbour rule classifier. Results show that is adequate to perform feature reduction and simultaneous improve the recognition rate. Moreover, the present methodology represents a robust strategy to understand the proper nature of the images treated, and their discriminant features. KEYWORDS: Portuguese grey granites, feature extraction, mathematical morphology, feature reduction, genetic algorithms, nearest neighbour rule classifiers (k-NNR).<|reference_end|> | arxiv | @article{ramos2004from,
title={From Feature Extraction to Classification: A multidisciplinary Approach
applied to Portuguese Granites},
author={Vitorino Ramos, Pedro Pina, Fernando Muge},
journal={SCIA 99, 11th Scandinavian Conf. on Image Analysis, ISBN
87-88306-42-9, Vol.2, pp. 817-824, Kangerlussuaq, Greenland, 7-11, June 1999},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412066},
primaryClass={cs.AI cs.CV}
} | ramos2004from |
arxiv-672426 | cs/0412067 | Complete Characterization of the Equivalent MIMO Channel for Quasi-Orthogonal Space-Time Codes | <|reference_start|>Complete Characterization of the Equivalent MIMO Channel for Quasi-Orthogonal Space-Time Codes: Recently, a quasi-orthogonal space-time block code (QSTBC) capable of achieving a significant fraction of the outage mutual information of a multiple-input-multiple-output (MIMO) wireless communication system for the case of four transmit and one receive antennas was proposed. We generalize these results to $n_T=2^n$ transmit and an arbitrary number of receive antennas $n_R$. Furthermore, we completely characterize the structure of the equivalent channel for the general case and show that for all $n_T=2^n$ and $n_R$ the eigenvectors of the equivalent channel are fixed and independent of the channel realization. Furthermore, the eigenvalues of the equivalent channel are independent identically distributed random variables, each following a noncentral chi-square distribution with $4n_R$ degrees of freedom. Based on these important insights into the structure of the QSTBC, we derive an analytical lower bound for the fraction of outage probability achieved with QSTBC and show that this bound is tight for low signal-to-noise ratio (SNR) values and also for an increasing number of receive antennas. We also present an upper bound, which is tight for high SNR values, and derive analytical expressions for the case of four transmit antennas. Finally, by utilizing the special structure of the QSTBC, we propose a new transmit strategy, which decouples the signals transmitted from different antennas in order to detect the symbols separately with a linear ML-detector rather than by joint detection, an advantage up to now known only for orthogonal space-time block codes (OSTBC).<|reference_end|> | arxiv | @article{sezgin2004complete,
title={Complete Characterization of the Equivalent MIMO Channel for
Quasi-Orthogonal Space-Time Codes},
author={A. Sezgin and T.J. Oechtering},
journal={IEEE Transactions on Information Theory, vol. 54(7), pp.
3315-3327, July, 2008},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412067},
primaryClass={cs.IT math.IT}
} | sezgin2004complete |
arxiv-672427 | cs/0412068 | ANTIDS: Self-Organized Ant-based Clustering Model for Intrusion Detection System | <|reference_start|>ANTIDS: Self-Organized Ant-based Clustering Model for Intrusion Detection System: Security of computers and the networks that connect them is increasingly becoming of great significance. Computer security is defined as the protection of computing systems against threats to confidentiality, integrity, and availability. There are two types of intruders: external intruders, who are unauthorized users of the machines they attack, and internal intruders, who have permission to access the system with some restrictions. Since it is increasingly improbable that a system administrator can recognize and manually intervene to stop an attack, there is a growing recognition that ID systems have much to gain from following the basic principles governing the behavior of complex natural systems, namely with regard to self-organization, allowing for a truly distributed and collective perception of these phenomena. With that aim in mind, the present work presents a self-organized ant colony based intrusion detection system (ANTIDS) to detect intrusions in a network infrastructure. The performance is compared among conventional soft computing paradigms like Decision Trees, Support Vector Machines and Linear Genetic Programming to model fast, online and efficient intrusion detection systems.<|reference_end|> | arxiv | @article{ramos2004antids:,
title={ANTIDS: Self-Organized Ant-based Clustering Model for Intrusion
Detection System},
author={Vitorino Ramos, Ajith Abraham},
journal={arXiv preprint arXiv:cs/0412068},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412068},
primaryClass={cs.CR cs.AI}
} | ramos2004antids: |
arxiv-672428 | cs/0412069 | Swarming around Shellfish Larvae | <|reference_start|>Swarming around Shellfish Larvae: The collection of wild larvae seed as a source of raw material is a major sub-industry of shellfish aquaculture. To predict when, where and in what quantities wild seed will be available, it is necessary to track the appearance and growth of planktonic larvae. One of the most difficult groups to identify, particularly at the species level, are the Bivalvia. This difficulty arises from the fact that fundamentally all bivalve larvae have a similar shape and colour. Identification based on gross morphological appearance is limited by the time-consuming nature of the microscopic examination and by the limited availability of expertise in this field. Molecular and immunological methods are also being studied. We describe the application of computational pattern recognition methods to the automated identification and size analysis of scallop larvae. For identification, the shape features used are binary invariant moments; that is, the features are invariant to shift (position within the image), scale (induced either by growth or differential image magnification) and rotation. Images of a sample of scallop and non-scallop larvae covering a range of maturities have been analysed. In order to achieve automatic identification, as well as to allow the system to receive new unknown samples at any moment, a self-organized and unsupervised ant-like clustering algorithm based on Swarm Intelligence is proposed, followed by a simple k-NNR nearest neighbour classification on the final map. Results achieve a full recognition rate of 100% under several situations (k = 1 or 3).<|reference_end|> | arxiv | @article{ramos2004swarming,
title={Swarming around Shellfish Larvae},
author={Vitorino Ramos, Jonathan Campbell, John Slater, John Gillespie, Ivan
F. Bendezu, Fionn Murtagh},
journal={arXiv preprint arXiv:cs/0412069},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412069},
primaryClass={cs.AI cs.CV}
} | ramos2004swarming |
arxiv-672429 | cs/0412070 | Less is More - Genetic Optimisation of Nearest Neighbour Classifiers | <|reference_start|>Less is More - Genetic Optimisation of Nearest Neighbour Classifiers: The present paper deals with the optimisation of Nearest Neighbour rule Classifiers via Genetic Algorithms. The methodology consists of implementing a Genetic Algorithm capable of searching the input feature space used by the NNR classifier. Results show that it is adequate to perform feature reduction and simultaneously improve the Recognition Rate. Some practical examples prove that it is possible to recognise Portuguese granites with 100% accuracy, using only 3 morphological features (from an original set of 117 features), which is well suited for real-time applications. Moreover, the present method represents a robust strategy to understand the proper nature of the images treated, and their discriminant features. KEYWORDS: Feature Reduction, Genetic Algorithms, Nearest Neighbour Rule Classifiers (k-NNR).<|reference_end|> | arxiv | @article{ramos2004less,
title={Less is More - Genetic Optimisation of Nearest Neighbour Classifiers},
author={Vitorino Ramos, Fernando Muge},
journal={Proc. RecPad 98 - 10th Portuguese Conference on Pattern
Recognition, F.Muge, C.Pinto and M.Piedade Eds., ISBN 972-97711-0-3, pp.
293-301, Lisbon, March 1998},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412070},
primaryClass={cs.AI cs.CV}
} | ramos2004less |
arxiv-672430 | cs/0412071 | Web Usage Mining Using Artificial Ant Colony Clustering and Genetic Programming | <|reference_start|>Web Usage Mining Using Artificial Ant Colony Clustering and Genetic Programming: The rapid growth of e-commerce has made both the business community and customers face a new situation. Due to intense competition on the one hand and the customer's option to choose from several alternatives on the other, the business community has realized the necessity of intelligent marketing strategies and relationship management. Web usage mining attempts to discover useful knowledge from the secondary data obtained from the interactions of the users with the Web. Web usage mining has become very critical for effective Web site management, creating adaptive Web sites, business and support services, personalization, network traffic flow analysis and so on. The study of ant colonies' behavior and their self-organizing capabilities is of interest to knowledge retrieval/management and decision support systems sciences, because it provides models of distributed adaptive organization, which are useful to solve difficult optimization, classification, and distributed control problems, among others. In this paper, we propose an ant clustering algorithm to discover Web usage patterns (data clusters) and a linear genetic programming approach to analyze the visitor trends. Empirical results clearly show that ant colony clustering performs well when compared to a self-organizing map (for clustering Web usage patterns), even though the performance accuracy is not as good when compared to the evolutionary-fuzzy clustering (i-miner) approach. KEYWORDS: Web Usage Mining, Swarm Intelligence, Ant Systems, Stigmergy, Data-Mining, Linear Genetic Programming.<|reference_end|> | arxiv | @article{abraham2004web,
title={Web Usage Mining Using Artificial Ant Colony Clustering and Genetic
Programming},
author={Ajith Abraham, Vitorino Ramos},
journal={CEC 03 - Congress on Evolutionary Computation, IEEE Press, ISBN
0780378040, pp.1384-1391, Canberra, Australia, 8-12 Dec. 2003},
year={2004},
doi={10.1109/CEC.2003.1299832},
archivePrefix={arXiv},
eprint={cs/0412071},
primaryClass={cs.AI cs.NE}
} | abraham2004web |
arxiv-672431 | cs/0412072 | Swarms on Continuous Data | <|reference_start|>Swarms on Continuous Data: While it is extremely important, many Exploratory Data Analysis (EDA) systems lack the ability to perform classification and visualization on a continuous basis or to self-organize new data items into the older ones (even more so into new labels if necessary), which can be crucial in KDD - Knowledge Discovery, Retrieval and Data Mining Systems (interactive and online forms of Web Applications are just one example). This disadvantage is also present in more recent approaches using Self-Organizing Maps. In the present work, exploiting past successes of recently proposed Stigmergic Ant Systems, a robust online classifier is presented, which produces class decisions on a continuous data stream, allowing for continuous mappings. Results show that increasingly better results are achieved, as demonstrated by other authors in different areas. KEYWORDS: Swarm Intelligence, Ant Systems, Stigmergy, Data-Mining, Exploratory Data Analysis, Image Retrieval, Continuous Classification.<|reference_end|> | arxiv | @article{ramos2004swarms,
title={Swarms on Continuous Data},
author={Vitorino Ramos, Ajith Abraham},
journal={CEC 03 - Congress on Evolutionary Computation, IEEE Press, ISBN
0780378040, pp.1370-1375, Canberra, Australia, 8-12 Dec. 2003},
year={2004},
doi={10.1109/CEC.2003.1299828},
archivePrefix={arXiv},
eprint={cs/0412072},
primaryClass={cs.AI cs.NE}
} | ramos2004swarms |
arxiv-672432 | cs/0412073 | Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective Memory, Perception and Cooperative Distributed Creativity | <|reference_start|>Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective Memory, Perception and Cooperative Distributed Creativity: Past experiences under the designation of "Swarm Paintings" conducted in 2001 not only confirmed the possibility of realizing an artificial art (thus non-human), but also introduced into the process the questioning of creative migration, specifically from the computer monitor to the canvas via a robotic arm. In more recent self-organization based research we seek to develop and deepen the initial ideas by using a swarm of autonomous robots (ARTsBOT project 2002-03) that "live" without being merely simple executors of order streams coming from an external computer, but instead actually co-evolve within the canvas space, acting (that is, laying ink) according to simple inner threshold stimulus-response functions, reacting simultaneously to the chromatic stimulus present in the canvas environment left by the passage of their team-mates, as well as to the distributed feedback, affecting their future collective behaviour. In parallel, and with respect to certain types of collective systems, we seek to confirm, in a physically embedded way, that the emergence of order (even as a concept) seems to be found at a lower level of complexity, based on a simple and basic interchange of information and on the local dynamics of parts, which, by self-organizing mechanisms, tend to form a lived whole, innovative and adapting, allowing for emergent open-ended creative and distributed production.
KEYWORDS: ArtSBots Project, Swarm Intelligence, Stigmergy, UnManned Art, Symbiotic Art, Swarm Paintings, Robot Paintings, Non-Human Art, Painting Emergence and Cooperation, Art and Complexity, ArtBots: The Robot Talent Show.<|reference_end|> | arxiv | @article{ramos2004self-organizing,
title={Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective
Memory, Perception and Cooperative Distributed Creativity},
author={Vitorino Ramos},
journal={in First Art and Science Symposium, Models to Know Reality, J.
Rekalde, R. Ibanez and A. Simo (Eds.), pp. 59-60, Facultad de Bellas Artes
EHU/UPV, Universidad del Pais Vasco, 11-12 Dec., Bilbao, Spain, 2003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412073},
primaryClass={cs.MM cs.AI}
} | ramos2004self-organizing |
arxiv-672433 | cs/0412074 | Threats of Human Error in a High-Performance Storage System: Problem Statement and Case Study | <|reference_start|>Threats of Human Error in a High-Performance Storage System: Problem Statement and Case Study: System administration is a difficult, often tedious, job requiring many skilled laborers. The data that is protected by system administrators is often valued at or above the value of the institution maintaining that data. A number of ethnographic studies have confirmed the skill of these operators, and the difficulty of providing adequate tools. In an effort to minimize the maintenance costs, an increasing portion of system administration is subject to automation - particularly simple, routine tasks such as data backup. While such tools reduce the risk of errors from carelessness, the same tools may result in reduced skill and system familiarity in experienced workers. Care should be taken to ensure that operators maintain system awareness without placing the operator in a passive, monitoring role.<|reference_end|> | arxiv | @article{haubert2004threats,
title={Threats of Human Error in a High-Performance Storage System: Problem
Statement and Case Study},
author={Elizabeth Haubert},
journal={arXiv preprint arXiv:cs/0412074},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412074},
primaryClass={cs.HC cs.OS}
} | haubert2004threats |
arxiv-672434 | cs/0412075 | Self-Organized Stigmergic Document Maps: Environment as a Mechanism for Context Learning | <|reference_start|>Self-Organized Stigmergic Document Maps: Environment as a Mechanism for Context Learning: Social insect societies, and more specifically ant colonies, are distributed systems that, in spite of the simplicity of their individuals, present a highly structured social organization. As a result of this organization, ant colonies can accomplish complex tasks that in some cases exceed the individual capabilities of a single ant. The study of the behavior of ant colonies and of their self-organizing capabilities is of interest to knowledge retrieval/management and decision support systems sciences, because it provides models of distributed adaptive organization which are useful to solve difficult optimization, classification, and distributed control problems, among others. In the present work we overview some models derived from the observation of real ants, emphasizing the role played by stigmergy as a distributed communication paradigm, and we present a novel strategy to tackle unsupervised clustering as well as data retrieval problems. The present ant clustering system (ACLUSTER) avoids not only short-term memory-based strategies, but also the use of several artificial ant types (using different speeds), present in some recent approaches. Moreover, to our knowledge, this is also the first application of ant systems to textual document clustering. KEYWORDS: Swarm Intelligence, Ant Systems, Unsupervised Clustering, Data Retrieval, Data Mining, Distributed Computing, Document Maps, Textual Document Clustering.<|reference_end|> | arxiv | @article{ramos2004self-organized,
title={Self-Organized Stigmergic Document Maps: Environment as a Mechanism for
Context Learning},
author={Vitorino Ramos, Juan J. Merelo},
journal={in AEB 2002, 1st Spanish Conference on Evolutionary and
Bio-Inspired Algorithms, E. Alba, F. Herrera, J.J. Merelo et al. (Eds.), pp.
284-293, Centro Univ. de Merida, Merida, Spain, 6-8 Feb. 2002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412075},
primaryClass={cs.AI cs.DC}
} | ramos2004self-organized |
arxiv-672435 | cs/0412076 | Clustering Techniques for Marbles Classification | <|reference_start|>Clustering Techniques for Marbles Classification: Automatic marble classification based on visual appearance is an important industrial issue. However, there is no definitive solution to the problem, mainly due to the presence of a high number of randomly distributed colours and their subjective evaluation by the human expert. In this paper we present a study of segmentation techniques, evaluate their overall performance using a training set and standard quality measures, and finally apply different clustering techniques to automatically classify the marbles. KEYWORDS: Segmentation, Clustering, Quadtrees, Learning Vector Quantization (LVQ), Simulated Annealing (SA).<|reference_end|> | arxiv | @article{caldas-pinto2004clustering,
title={Clustering Techniques for Marbles Classification},
author={J.R. Caldas-Pinto, Pedro Pina, Vitorino Ramos, Mario Ramalho},
journal={RecPad 2002 -12th Portuguese Conference on Pattern Recognition,
ISBN 972-789-067-9, Aveiro, Portugal, June 27-28, 2002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412076},
primaryClass={cs.AI cs.CV}
} | caldas-pinto2004clustering |
arxiv-672436 | cs/0412077 | On the Implicit and on the Artificial - Morphogenesis and Emergent Aesthetics in Autonomous Collective Systems | <|reference_start|>On the Implicit and on the Artificial - Morphogenesis and Emergent Aesthetics in Autonomous Collective Systems: Imagine a "machine" where there is no pre-commitment to any particular representational scheme: the desired behaviour is distributed and roughly specified simultaneously among many parts, but there is minimal specification of the mechanism required to generate that behaviour, i.e. the global behaviour evolves from the many relations of multiple simple behaviours. A machine that lives to and from/with Synergy. An artificial super-organism that avoids specific constraints and emerges within multiple low-level implicit bio-inspired mechanisms. KEYWORDS: Complex Science, ArtSBots Project, Swarm Intelligence, Stigmergy, UnManned Art, Symbiotic Art, Swarm Paintings, Robot Paintings, Non-Human Art, Painting Emergence and Cooperation, Art and Complexity, ArtBots: The Robot Talent Show.<|reference_end|> | arxiv | @article{ramos2004on,
title={On the Implicit and on the Artificial - Morphogenesis and Emergent
Aesthetics in Autonomous Collective Systems},
author={Vitorino Ramos},
journal={Chapter 2 in ARCHITOPIA Book, Art, Architecture and Science, J.L.
Maubant et al. (Eds.), pp. 25-57, INSTITUT D'ART CONTEMPORAIN (France), ISBN
: 2905985631, Feb. 2002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412077},
primaryClass={cs.AI cs.MM}
} | ramos2004on |
arxiv-672437 | cs/0412078 | The vertex-transitive TLF-planar graphs | <|reference_start|>The vertex-transitive TLF-planar graphs: We consider the class of the topologically locally finite (in short TLF) planar vertex-transitive graphs, a class containing in particular all the one-ended planar Cayley graphs and the normal transitive tilings. We characterize these graphs with a finite local representation and a special kind of finite state automaton named labeling scheme. As a result, we are able to enumerate and describe all TLF-planar vertex-transitive graphs of any given degree. Also, we are able to decide whether any TLF-planar transitive graph is Cayley or not.<|reference_end|> | arxiv | @article{renault2004the,
title={The vertex-transitive TLF-planar graphs},
author={D. Renault},
journal={arXiv preprint arXiv:cs/0412078},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412078},
primaryClass={cs.DM}
} | renault2004the |
arxiv-672438 | cs/0412079 | The MC2 Project [Machines of Collective Conscience]: A possible walk, up to Life-like Complexity and Behaviour, from bottom, basic and simple bio-inspired heuristics - a walk, up into the morphogenesis of information | <|reference_start|>The MC2 Project [Machines of Collective Conscience]: A possible walk, up to Life-like Complexity and Behaviour, from bottom, basic and simple bio-inspired heuristics - a walk, up into the morphogenesis of information: Synergy (from the Greek word synergos), broadly defined, refers to combined or co-operative effects produced by two or more elements (parts or individuals). The definition is often associated with the holistic conviction that "the whole is greater than the sum of its parts" (Aristotle, in Metaphysics), or that the whole cannot exceed the sum of the energies invested in each of its parts (e.g. the first law of thermodynamics), even if it is more accurate to say that the functional effects produced by wholes are different from what the parts can produce alone. Synergy is a ubiquitous phenomenon in nature and human societies alike. One well-known example is provided by the emergence of self-organization in social insects, via direct or indirect interactions. The latter type is more subtle and is defined as stigmergy, to explain task coordination and regulation in the context of nest reconstruction in termites. An example could be provided by two individuals who interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. In other words, stigmergy could be defined as a particular case of environmental or spatial synergy. The system is purely holistic, and its properties are intrinsically emergent and autocatalytic.
In the present work we present a "machine" where there is no pre-commitment to any particular representational scheme: the desired behaviour is distributed and roughly specified simultaneously among many parts, but there is minimal specification of the mechanism required to generate that behaviour, i.e. the global behaviour evolves from the many relations of multiple simple behaviours.<|reference_end|> | arxiv | @article{ramos2004the,
title={The MC2 Project [Machines of Collective Conscience]: A possible walk, up
to Life-like Complexity and Behaviour, from bottom, basic and simple
bio-inspired heuristics - a walk, up into the morphogenesis of information},
author={Vitorino Ramos},
journal={at UTOPIA Biennial Art Exposition CATALOGUE, Cascais, Portugal,
July 12-22, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412079},
primaryClass={cs.AI cs.MM}
} | ramos2004the |
arxiv-672439 | cs/0412080 | The Biological Concept of Neoteny in Evolutionary Colour Image Segmentation - Simple Experiments in Simple Non-Memetic Genetic Algorithms | <|reference_start|>The Biological Concept of Neoteny in Evolutionary Colour Image Segmentation - Simple Experiments in Simple Non-Memetic Genetic Algorithms: Neoteny, also spelled Paedomorphosis, can be defined in biological terms as the retention by an organism of juvenile or even larval traits into later life. In some species, all morphological development is retarded; the organism is juvenilized but sexually mature. Such shifts of reproductive capability would appear to have adaptive significance to organisms that exhibit them. In terms of evolutionary theory, the process of paedomorphosis suggests that larval stages and developmental phases of existing organisms may give rise, under certain circumstances, to wholly new organisms. Although the present work does not pretend to model or simulate the biological details of such a concept in any way, these ideas were incorporated via a rather simple abstract computational strategy, in order to allow (if possible) for faster convergence in simple non-memetic Genetic Algorithms, i.e. without using local improvement procedures (e.g. via Baldwinian or Lamarckian learning). As a case study, the Genetic Algorithm was used for colour image segmentation purposes, using K-means unsupervised clustering methods, namely for guiding the evolutionary algorithm in its search for the optimal or sub-optimal data partition. Average results suggest that the use of neotenic strategies, employing juvenile genotypes in the later generations, together with linear-dynamic mutation rates instead of constant ones, can increase fitness values by 58% compared to classical Genetic Algorithms, independently of the starting population characteristics on the search space.
KEYWORDS: Genetic Algorithms, Artificial Neoteny, Dynamic Mutation Rates, Faster Convergence, Colour Image Segmentation, Classification, Clustering.<|reference_end|> | arxiv | @article{ramos2004the,
title={The Biological Concept of Neoteny in Evolutionary Colour Image
Segmentation - Simple Experiments in Simple Non-Memetic Genetic Algorithms},
author={Vitorino Ramos},
journal={in Applications of Evolutionary Computation, (Eds.), EuroGP /
EvoIASP 2001 - 3rd Eur. Works. on Evol. Comp. in Image Analysis and Signal
Processing, Lake Como, Milan, Italy, Lecture Notes in Computer Science, Vol.
2037, pp. 364-378, Springer-Verlag, Berlin-Heidelberg, April 18-20, 2001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412080},
primaryClass={cs.AI cs.NE}
} | ramos2004the |
arxiv-672440 | cs/0412081 | Artificial Neoteny in Evolutionary Image Segmentation | <|reference_start|>Artificial Neoteny in Evolutionary Image Segmentation: Neoteny, also spelled Paedomorphosis, can be defined in biological terms as the retention by an organism of juvenile or even larval traits into later life. In some species, all morphological development is retarded; the organism is juvenilized but sexually mature. Such shifts of reproductive capability would appear to have adaptive significance to organisms that exhibit them. In terms of evolutionary theory, the process of paedomorphosis suggests that larval stages and developmental phases of existing organisms may give rise, under certain circumstances, to wholly new organisms. Although the present work does not pretend to model or simulate the biological details of such a concept in any way, these ideas were incorporated via a rather simple abstract computational strategy, in order to allow (if possible) for faster convergence in simple non-memetic Genetic Algorithms, i.e. without using local improvement procedures (e.g. via Baldwinian or Lamarckian learning). As a case study, the Genetic Algorithm was used for colour image segmentation purposes, using K-means unsupervised clustering methods, namely for guiding the evolutionary algorithm in its search for the optimal or sub-optimal data partition. Average results suggest that the use of neotenic strategies, employing juvenile genotypes in the later generations, together with linear-dynamic mutation rates instead of constant ones, can increase fitness values by 58% compared to classical Genetic Algorithms, independently of the starting population characteristics on the search space. KEYWORDS: Genetic Algorithms, Artificial Neoteny, Dynamic Mutation Rates, Faster Convergence, Colour Image Segmentation, Classification, Clustering.<|reference_end|> | arxiv | @article{ramos2004artificial,
title={Artificial Neoteny in Evolutionary Image Segmentation},
author={Vitorino Ramos},
journal={SIARP 2000 - 5th IberoAmerican Symp. on Pattern Rec., F. Muge,
Moises P. and R. Caldas Pinto (Eds.), ISBN 972-97711-1-1, pp. 69-78, Lisbon,
Portugal, 11-13 Sep. 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412081},
primaryClass={cs.AI cs.NE}
} | ramos2004artificial |
arxiv-672441 | cs/0412083 | Line and Word Matching in Old Documents | <|reference_start|>Line and Word Matching in Old Documents: This paper is concerned with the problem of establishing an index based on word matching. It is assumed that the book was digitised as well as possible and that some pre-processing techniques were already applied, such as line orientation correction and some noise removal. However, two main factors make it impossible to apply ordinary optical character recognition (OCR) techniques: the presence of antique fonts and the degraded state of many characters due to unrecoverable degradation over time. In this paper we give a short introduction to word segmentation, which involves finding the lines that characterise a word. Afterwards, we discuss different approaches to word matching and how they can be combined to obtain an ordered list of candidate words for the matching. This discussion is illustrated by examples.<|reference_end|> | arxiv | @article{marcolino2004line,
title={Line and Word Matching in Old Documents},
author={A. Marcolino, Vitorino Ramos, Mario Ramalho, J.R. Caldas Pinto},
journal={SIARP 2000 - 5th IberoAmerican Symp. on Pattern Rec., F. Muge,
Moises P. and R. Caldas Pinto (Eds.), ISBN 972-97711-1-1, pp. 123-135,
Lisbon, Portugal, 11-13 Sep. 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412083},
primaryClass={cs.AI cs.CV}
} | marcolino2004line |
arxiv-672442 | cs/0412084 | Map Segmentation by Colour Cube Genetic K-Mean Clustering | <|reference_start|>Map Segmentation by Colour Cube Genetic K-Mean Clustering: Segmentation of a colour image composed of different kinds of texture regions can be a hard problem, namely computing exact texture fields and deciding the optimum number of segmentation areas when the image contains similar and/or non-stationary texture fields. In this work, a method is described for evolving adaptive procedures for these problems. In many real-world applications data clustering constitutes a fundamental issue whenever behavioural or feature domains can be mapped into topological domains. We formulate the segmentation problem upon such images as an optimisation problem and adopt the evolutionary strategy of Genetic Algorithms for the clustering of small regions in colour feature space. The present approach embeds k-Means unsupervised clustering methods into Genetic Algorithms, namely for guiding the latter Evolutionary Algorithm in its search for the optimal or sub-optimal data partition, a task that, as we know, requires a non-trivial search because of its NP-complete nature. To solve this task, the appropriate genetic coding is also discussed, since this is a key aspect of the implementation. Our purpose is to demonstrate the efficiency of Genetic Algorithms for automatic and unsupervised texture segmentation. Some examples on Colour Maps are presented and overall results discussed. KEYWORDS: Genetic Algorithms, Artificial Neoteny, Dynamic Mutation Rates, Faster Convergence, Colour Image Segmentation, Classification, Clustering.<|reference_end|> | arxiv | @article{ramos2004map,
title={Map Segmentation by Colour Cube Genetic K-Mean Clustering},
author={Vitorino Ramos, Fernando Muge},
journal={ECDL 2000 - 4th Eur. Conf. on Research and Advanced Technology for
Digital Libraries, J. Borbinha and T. Baker (Eds.), ISBN 3-540-41023-6, LNCS
series, Vol. 1923, pp. 319-323, Springer-Verlag, Heidelberg, Lisbon,
Portugal, 18-20 Sep. 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412084},
primaryClass={cs.AI cs.NE}
} | ramos2004map |
arxiv-672443 | cs/0412085 | A class of one-dimensional MDS convolutional codes | <|reference_start|>A class of one-dimensional MDS convolutional codes: A class of one-dimensional convolutional codes will be presented. They are all MDS codes, i. e., have the largest distance among all one-dimensional codes of the same length n and overall constraint length delta. Furthermore, their extended row distances are computed, and they increase with slope n-delta. In certain cases of the algebraic parameters, we will also derive parity check matrices of Vandermonde type for these codes. Finally, cyclicity in the convolutional sense will be discussed for our class of codes. It will turn out that they are cyclic if and only if the field element used in the generator matrix has order n. This can be regarded as a generalization of the block code case.<|reference_end|> | arxiv | @article{gluesing-luerssen2004a,
title={A class of one-dimensional MDS convolutional codes},
author={Heide Gluesing-Luerssen, Barbara Langfeld},
journal={arXiv preprint arXiv:cs/0412085},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412085},
primaryClass={cs.IT math.IT math.RA}
} | gluesing-luerssen2004a |
arxiv-672444 | cs/0412086 | Artificial Ant Colonies in Digital Image Habitats - A Mass Behaviour Effect Study on Pattern Recognition | <|reference_start|>Artificial Ant Colonies in Digital Image Habitats - A Mass Behaviour Effect Study on Pattern Recognition: Some recent studies have pointed that, the self-organization of neurons into brain-like structures, and the self-organization of ants into a swarm are similar in many respects. If possible to implement, these features could lead to important developments in pattern recognition systems, where perceptive capabilities can emerge and evolve from the interaction of many simple local rules. The principle of the method is inspired by the work of Chialvo and Millonas who developed the first numerical simulation in which swarm cognitive map formation could be explained. From this point, an extended model is presented in order to deal with digital image habitats, in which artificial ants could be able to react to the environment and perceive it. Evolution of pheromone fields point that artificial ant colonies could react and adapt appropriately to any type of digital habitat. KEYWORDS: Swarm Intelligence, Self-Organization, Stigmergy, Artificial Ant Systems, Pattern Recognition and Perception, Image Segmentation, Gestalt Perception Theory, Distributed Computation.<|reference_end|> | arxiv | @article{ramos2004artificial,
title={Artificial Ant Colonies in Digital Image Habitats - A Mass Behaviour
Effect Study on Pattern Recognition},
author={Vitorino Ramos, Filipe Almeida},
journal={Proc. of ANTS 2000 - 2nd Int. Works. on Ant Algorithms (From Ant
Colonies to Artificial Ants), Marco Dorigo, Martin Middendorf, Thomas Stuzle
(Eds.), pp. 113, Brussels, Belgium, 7-9 Sep. 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412086},
primaryClass={cs.AI cs.CV}
} | ramos2004artificial |
arxiv-672445 | cs/0412087 | Image Colour Segmentation by Genetic Algorithms | <|reference_start|>Image Colour Segmentation by Genetic Algorithms: Segmentation of a colour image composed of different kinds of texture regions can be a hard problem, namely to compute for an exact texture fields and a decision of the optimum number of segmentation areas in an image when it contains similar and/or unstationary texture fields. In this work, a method is described for evolving adaptive procedures for these problems. In many real world applications data clustering constitutes a fundamental issue whenever behavioural or feature domains can be mapped into topological domains. We formulate the segmentation problem upon such images as an optimisation problem and adopt evolutionary strategy of Genetic Algorithms for the clustering of small regions in colour feature space. The present approach uses k-Means unsupervised clustering methods into Genetic Algorithms, namely for guiding this last Evolutionary Algorithm in his search for finding the optimal or sub-optimal data partition, task that as we know, requires a non-trivial search because of its intrinsic NP-complete nature. To solve this task, the appropriate genetic coding is also discussed, since this is a key aspect in the implementation. Our purpose is to demonstrate the efficiency of Genetic Algorithms to automatic and unsupervised texture segmentation. Some examples in Colour Maps, Ornamental Stones and in Human Skin Mark segmentation are presented and overall results discussed. KEYWORDS: Genetic Algorithms, Colour Image Segmentation, Classification, Clustering.<|reference_end|> | arxiv | @article{ramos2004image,
title={Image Colour Segmentation by Genetic Algorithms},
author={Vitorino Ramos, Fernando Muge},
journal={RecPad 2000 - 11th Portuguese Conf. on Pattern Recognition, in
Aurelio C. Campilho and A.M. Mendonca (Eds.), ISBN 972-96883-2-5, pp.
125-129, Porto, Portugal, May 11-12, 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412087},
primaryClass={cs.AI cs.CV}
} | ramos2004image |
arxiv-672446 | cs/0412088 | On Image Filtering, Noise and Morphological Size Intensity Diagrams | <|reference_start|>On Image Filtering, Noise and Morphological Size Intensity Diagrams: In the absence of a pure noise-free image it is hard to define what noise is, in any original noisy image, and as a consequence also where it is, and in what amount. In fact, the definition of noise depends largely on our own aim in the whole image analysis process, and (perhaps more important) in our self-perception of noise. For instance, when we perceive noise as disconnected and small it is normal to use MM-ASF filters to treat it. There is two evidences of this. First, in many instances there is no ideal and pure noise-free image to compare our filtering process (nothing but our self-perception of its pure image); second, and related with this first point, MM transformations that we chose are only based on our self - and perhaps - fuzzy notion. The present proposal combines the results of two MM filtering transformations (FT1, FT2) and makes use of some measures and quantitative relations on their Size/Intensity Diagrams to find the most appropriate noise removal process. Results can also be used for finding the most appropriate stop criteria, and the right sequence of MM operators combination on Alternating Sequential Filters (ASF), if these measures are applied, for instance, on a Genetic Algorithm's target function.<|reference_end|> | arxiv | @article{ramos2004on,
title={On Image Filtering, Noise and Morphological Size Intensity Diagrams},
author={Vitorino Ramos, Fernando Muge},
journal={RecPad 2000 - 11th Portuguese Conf. on Pattern Recognition, in
Aurelio C. Campilho and A.M. Mendonca (Eds.), ISBN 972-96883-2-5, pp.
483-491, Porto, Portugal, May 11-12, 2000},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412088},
primaryClass={cs.CV cs.AI}
} | ramos2004on |
arxiv-672447 | cs/0412089 | Evolving Categories: Consistent Framework for Representation of Data and Algorithms | <|reference_start|>Evolving Categories: Consistent Framework for Representation of Data and Algorithms: A concept of "evolving categories" is suggested to build a simple, scalable, mathematically consistent framework for representing in uniform way both data and algorithms. A state machine for executing algorithms becomes clear, rich and powerful semantics, based on category theory, and still allows easy implementation. Moreover, it gives an original insight into the nature and semantics of algorithms.<|reference_end|> | arxiv | @article{yanenko2004evolving,
title={Evolving Categories: Consistent Framework for Representation of Data and
Algorithms},
author={Evgeny Yanenko},
journal={arXiv preprint arXiv:cs/0412089},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412089},
primaryClass={cs.DS}
} | yanenko2004evolving |
arxiv-672448 | cs/0412090 | Real Time Models of the Asynchronous Circuits: The Delay Theory | <|reference_start|>Real Time Models of the Asynchronous Circuits: The Delay Theory: The chapter from the book introduces the delay theory, whose purpose is the modeling of the asynchronous circuits from digital electrical engineering with ordinary and differential pseudo-boolean equations.<|reference_end|> | arxiv | @article{vlad2004real,
title={Real Time Models of the Asynchronous Circuits: The Delay Theory},
author={Serban E. Vlad},
journal={in New Developments in Computer Science Research, Editor Susan
Shannon, Nova Science Publishers, Inc., New York, 2005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412090},
primaryClass={cs.GL}
} | vlad2004real |
arxiv-672449 | cs/0412091 | The Combination of Paradoxical, Uncertain, and Imprecise Sources of Information based on DSmT and Neutro-Fuzzy Inference | <|reference_start|>The Combination of Paradoxical, Uncertain, and Imprecise Sources of Information based on DSmT and Neutro-Fuzzy Inference: The management and combination of uncertain, imprecise, fuzzy and even paradoxical or high conflicting sources of information has always been, and still remains today, of primal importance for the development of reliable modern information systems involving artificial reasoning. In this chapter, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the literature, developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation here rather on the foundations of DSmT, and on the two important new rules of combination, than on browsing specific applications of DSmT available in literature. Several simple examples are given throughout the presentation to show the efficiency and the generality of this new approach. The last part of this chapter concerns the presentation of the neutrosophic logic, the neutro-fuzzy inference and its connection with DSmT. Fuzzy logic and neutrosophic logic are useful tools in decision making after fusioning the information using the DSm hybrid rule of combination of masses.<|reference_end|> | arxiv | @article{smarandache2004the,
title={The Combination of Paradoxical, Uncertain, and Imprecise Sources of
Information based on DSmT and Neutro-Fuzzy Inference},
author={Florentin Smarandache, Jean Dezert},
journal={A version of this paper published in Proceedings of 10th
International Conference on Fuzzy Theory and Technology (FT&T 2005), Salt
Lake City, Utah, USA, July 21-26, 2005.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412091},
primaryClass={cs.AI}
} | smarandache2004the |
arxiv-672450 | cs/0412092 | Mass Storage Management and the Grid | <|reference_start|>Mass Storage Management and the Grid: The University of Edinburgh has a significant interest in mass storage systems as it is one of the core groups tasked with the roll out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was undertaken in association with the eDikt group at the National eScience Centre, the Universities of Bristol and Glasgow, Rutherford Appleton Laboratory and the San Diego Supercomputing Center.<|reference_end|> | arxiv | @article{earl2004mass,
title={Mass Storage Management and the Grid},
author={A. Earl and P. Clark},
journal={arXiv preprint arXiv:cs/0412092},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412092},
primaryClass={cs.DC cs.SE}
} | earl2004mass |
arxiv-672451 | cs/0412093 | ScotGrid: A Prototype Tier 2 Centre | <|reference_start|>ScotGrid: A Prototype Tier 2 Centre: ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new projects both from particle physics and new user communities and explain our methodology for doing this.<|reference_end|> | arxiv | @article{earl2004scotgrid:,
title={ScotGrid: A Prototype Tier 2 Centre},
author={A. Earl, P. Clark, S. Thorn},
journal={arXiv preprint arXiv:cs/0412093},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412093},
primaryClass={cs.AR cs.DC}
} | earl2004scotgrid: |
arxiv-672452 | cs/0412094 | Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the Average Flow Time | <|reference_start|>Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the Average Flow Time: We study the problem of preemptive scheduling of n equal-length jobs with given release times on m identical parallel machines. The objective is to minimize the average flow time. Recently, Brucker and Kravchenko proved that the optimal schedule can be computed in polynomial time by solving a linear program with O(n^3) variables and constraints, followed by some substantial post-processing (where n is the number of jobs.) In this note we describe a simple linear program with only O(mn) variables and constraints. Our linear program produces directly the optimal schedule and does not require any post-processing.<|reference_end|> | arxiv | @article{baptiste2004preemptive,
title={Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the
Average Flow Time},
author={Philippe Baptiste, Marek Chrobak, Christoph Durr, Francis Sourd},
journal={arXiv preprint arXiv:cs/0412094},
year={2004},
number={This paper is now part of the report cs.DS/0605078.},
archivePrefix={arXiv},
eprint={cs/0412094},
primaryClass={cs.DS}
} | baptiste2004preemptive |
arxiv-672453 | cs/0412095 | Partitioning Regular Polygons into Circular Pieces II: Nonconvex Partitions | <|reference_start|>Partitioning Regular Polygons into Circular Pieces II: Nonconvex Partitions: We explore optimal circular nonconvex partitions of regular k-gons. The circularity of a polygon is measured by its aspect ratio: the ratio of the radii of the smallest circumscribing circle to the largest inscribed disk. An optimal circular partition minimizes the maximum ratio over all pieces in the partition. We show that the equilateral triangle has an optimal 4-piece nonconvex partition, the square an optimal 13-piece nonconvex partition, and the pentagon has an optimal nonconvex partition with more than 20 thousand pieces. For hexagons and beyond, we provide a general algorithm that approaches optimality, but does not achieve it.<|reference_end|> | arxiv | @article{damian2004partitioning,
title={Partitioning Regular Polygons into Circular Pieces II: Nonconvex
Partitions},
author={Mirela Damian and Joseph O'Rourke},
journal={arXiv preprint arXiv:cs/0412095},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412095},
primaryClass={cs.CG cs.DM}
} | damian2004partitioning |
arxiv-672454 | cs/0412096 | Complexity of Self-Assembled Shapes | <|reference_start|>Complexity of Self-Assembled Shapes: The connection between self-assembly and computation suggests that a shape can be considered the output of a self-assembly ``program,'' a set of tiles that fit together to create a shape. It seems plausible that the size of the smallest self-assembly program that builds a shape and the shape's descriptional (Kolmogorov) complexity should be related. We show that when using a notion of a shape that is independent of scale, this is indeed so: in the Tile Assembly Model, the minimal number of distinct tile types necessary to self-assemble a shape, at some scale, can be bounded both above and below in terms of the shape's Kolmogorov complexity. As part of the proof of the main result, we sketch a general method for converting a program outputting a shape as a list of locations into a set of tile types that self-assembles into a scaled up version of that shape. Our result implies, somewhat counter-intuitively, that self-assembly of a scaled-up version of a shape often requires fewer tile types. Furthermore, the independence of scale in self-assembly theory appears to play the same crucial role as the independence of running time in the theory of computability. This leads to an elegant formulation of languages of shapes generated by self-assembly. Considering functions from integers to shapes, we show that the running-time complexity, with respect to Turing machines, is polynomially equivalent to the scale complexity of the same function implemented via self-assembly by a finite set of tile types. Our results also hold for shapes defined by Wang tiling -- where there is no sense of a self-assembly process -- except that here time complexity must be measured with respect to non-deterministic Turing machines.<|reference_end|> | arxiv | @article{soloveichik2004complexity,
title={Complexity of Self-Assembled Shapes},
author={David Soloveichik and Erik Winfree},
journal={SIAM Journal on Computing 36 (6) 1544-1569, 2007},
year={2004},
doi={10.1137/S0097539704446712},
archivePrefix={arXiv},
eprint={cs/0412096},
primaryClass={cs.CC}
} | soloveichik2004complexity |
arxiv-672455 | cs/0412097 | The Computational Power of Benenson Automata | <|reference_start|>The Computational Power of Benenson Automata: The development of autonomous molecular computers capable of making independent decisions in vivo regarding local drug administration may revolutionize medical science. Recently Benenson et al. (2004) have envisioned one form such a ``smart drug'' may take by implementing an in vitro scheme, in which a long DNA state molecule is cut repeatedly by a restriction enzyme in a manner dependent upon the presence of particular short DNA ``rule molecules.'' To analyze the potential of their scheme in terms of the kinds of computations it can perform, we study an abstraction assuming that a certain class of restriction enzymes is available and reactions occur without error. We also discuss how our molecular algorithms could perform with known restriction enzymes. By exhibiting a way to simulate arbitrary circuits, we show that these ``Benenson automata'' are capable of computing arbitrary Boolean functions. Further, we show that they are able to compute efficiently exactly those functions computable by log-depth circuits. Computationally, we formalize a new variant of limited width branching programs with a molecular implementation.<|reference_end|> | arxiv | @article{soloveichik2004the,
title={The Computational Power of Benenson Automata},
author={David Soloveichik and Erik Winfree},
journal={Theoretical Computer Science 344(2-3): 279-297, 2005},
year={2004},
doi={10.1016/j.tcs.2005.07.027},
archivePrefix={arXiv},
eprint={cs/0412097},
primaryClass={cs.CC}
} | soloveichik2004the |
arxiv-672456 | cs/0412098 | The Google Similarity Distance | <|reference_start|>The Google Similarity Distance: Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers the equivalent of `society' is `database,' and the equivalent of `use' is `way to search the database.' We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts we use the world-wide-web as database, and Google as search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the world-wide-web using Google page counts. The world-wide-web is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies, and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert crafted WordNet categories.<|reference_end|> | arxiv | @article{cilibrasi2004the,
title={The Google Similarity Distance},
author={Rudi Cilibrasi (CWI), Paul M. B. Vitanyi (CWI, University of
Amsterdam)},
journal={R.L. Cilibrasi, P.M.B. Vitanyi, The Google Similarity Distance,
IEEE Trans. Knowledge and Data Engineering, 19:3(2007), 370-383},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412098},
primaryClass={cs.CL cs.AI cs.DB cs.IR cs.LG}
} | cilibrasi2004the |
arxiv-672457 | cs/0412099 | An unbreakable cryptosystem | <|reference_start|>An unbreakable cryptosystem: The remarkably long-standing problem of cryptography is to generate completely secure key. It is widely believed that the task cannot be achieved within classical cryptography. However, there is no proof in support of this belief. We present an incredibly simple classical cryptosystem which can generate completely secure key.<|reference_end|> | arxiv | @article{mitra2004an,
title={An unbreakable cryptosystem},
author={Arindam Mitra},
journal={arXiv preprint arXiv:cs/0412099},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412099},
primaryClass={cs.CR}
} | mitra2004an |
arxiv-672458 | cs/0412100 | Formal Test Purposes and The Validity of Test Cases | <|reference_start|>Formal Test Purposes and The Validity of Test Cases: We give a formalization of the notion of test purpose based on (suitably restricted) Message Sequence Charts. We define the validity of test cases with respect to such a formal test purpose and provide a simple decision procedure for validity.<|reference_end|> | arxiv | @article{deussen2004formal,
title={Formal Test Purposes and The Validity of Test Cases},
author={Peter H. Deussen and Stephan Tobies},
journal={arXiv preprint arXiv:cs/0412100},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412100},
primaryClass={cs.DS}
} | deussen2004formal |
arxiv-672459 | cs/0412101 | The Inverse Method Implements the Automata Approach for Modal Satisfiability | <|reference_start|>The Inverse Method Implements the Automata Approach for Modal Satisfiability: Tableaux-based decision procedures for satisfiability of modal and description logics behave quite well in practice, but it is sometimes hard to obtain exact worst-case complexity results using these approaches, especially for EXPTIME-complete logics. In contrast, automata-based approaches often yield algorithms for which optimal worst-case complexity can easily be proved. However, the algorithms obtained this way are usually not only worst-case, but also best-case exponential: they first construct an automaton that is always exponential in the size of the input, and then apply the (polynomial) emptiness test to this large automaton. To overcome this problem, one must try to construct the automaton "on-the-fly" while performing the emptiness test. In this paper we will show that Voronkov's inverse method for the modal logic K can be seen as an on-the-fly realization of the emptiness test done by the automata approach for K. The benefits of this result are two-fold. First, it shows that Voronkov's implementation of the inverse method, which behaves quite well in practice, is an optimized on-the-fly implementation of the automata-based satisfiability procedure for K. Second, it can be used to give a simpler proof of the fact that Voronkov's optimizations do not destroy completeness of the procedure. We will also show that the inverse method can easily be extended to handle global axioms, and that the correspondence to the automata approach still holds in this setting. In particular, the inverse method yields an EXPTIME-algorithm for satisfiability in K w.r.t. global axioms.<|reference_end|> | arxiv | @article{baader2004the,
title={The Inverse Method Implements the Automata Approach for Modal
Satisfiability},
author={Franz Baader and Stephan Tobies},
journal={arXiv preprint arXiv:cs/0412101},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412101},
primaryClass={cs.LO}
} | baader2004the |
arxiv-672460 | cs/0412102 | Quantum Interactive Proofs with Competing Provers | <|reference_start|>Quantum Interactive Proofs with Competing Provers: This paper studies quantum refereed games, which are quantum interactive proof systems with two competing provers: one that tries to convince the verifier to accept and the other that tries to convince the verifier to reject. We prove that every language having an ordinary quantum interactive proof system also has a quantum refereed game in which the verifier exchanges just one round of messages with each prover. A key part of our proof is the fact that there exists a single quantum measurement that reliably distinguishes between mixed states chosen arbitrarily from disjoint convex sets having large minimal trace distance from one another. We also show how to reduce the probability of error for some classes of quantum refereed games.<|reference_end|> | arxiv | @article{gutoski2004quantum,
title={Quantum Interactive Proofs with Competing Provers},
author={Gus Gutoski and John Watrous},
journal={Proceedings of STACS 2005, LNCS vol. 3404, pages 605-616},
year={2004},
doi={10.1007/978-3-540-31856-9_50},
archivePrefix={arXiv},
eprint={cs/0412102},
primaryClass={cs.CC quant-ph}
} | gutoski2004quantum |
arxiv-672461 | cs/0412103 | Chosen-Plaintext Cryptanalysis of a Clipped-Neural-Network-Based Chaotic Cipher | <|reference_start|>Chosen-Plaintext Cryptanalysis of a Clipped-Neural-Network-Based Chaotic Cipher: In ISNN'04, a novel symmetric cipher was proposed, by combining a chaotic signal and a clipped neural network (CNN) for encryption. The present paper analyzes the security of this chaotic cipher against chosen-plaintext attacks, and points out that this cipher can be broken by a chosen-plaintext attack. Experimental analyses are given to support the feasibility of the proposed attack.<|reference_end|> | arxiv | @article{li2004chosen-plaintext,
title={Chosen-Plaintext Cryptanalysis of a Clipped-Neural-Network-Based Chaotic
Cipher},
author={Chengqing Li, Shujun Li, Dan Zhang and Guanrong Chen},
journal={Lecture Notes in Computer Science, vol. 3497, pp. 630-636, 2005},
year={2004},
doi={10.1007/11427445_103},
archivePrefix={arXiv},
eprint={cs/0412103},
primaryClass={cs.CR cs.NE nlin.CD}
} | li2004chosen-plaintext |
arxiv-672462 | cs/0412104 | Negotiating over Bundles and Prices Using Aggregate Knowledge | <|reference_start|>Negotiating over Bundles and Prices Using Aggregate Knowledge: Combining two or more items and selling them as one good, a practice called bundling, can be a very effective strategy for reducing the costs of producing, marketing, and selling goods. In this paper, we consider a form of multi-issue negotiation where a shop negotiates both the contents and the price of bundles of goods with his customers. We present some key insights about, as well as a technique for, locating mutually beneficial alternatives to the bundle currently under negotiation. The essence of our approach lies in combining historical sales data, condensed into aggregate knowledge, with current data about the ongoing negotiation process, to exploit these insights. In particular, when negotiating a given bundle of goods with a customer, the shop analyzes the sequence of the customer's offers to determine the progress in the negotiation process. In addition, it uses aggregate knowledge concerning customers' valuations of goods in general. We show how the shop can use these two sources of data to locate promising alternatives to the current bundle. When the current negotiation's progress slows down, the shop may suggest the most promising of those alternatives and, depending on the customer's response, continue negotiating about the alternative bundle, or propose another alternative. Extensive computer simulation experiments show that our approach increases the speed with which deals are reached, as well as the number and quality of the deals reached, as compared to a benchmark. In addition, we show that the performance of our system is robust to a variety of changes in the negotiation strategies employed by the customers.<|reference_end|> | arxiv | @article{somefun2004negotiating,
title={Negotiating over Bundles and Prices Using Aggregate Knowledge},
author={Koye Somefun (1), Tomas Klos (1), Han La Poutr\'e (1 and 2) ((1)
Center for Mathematics and Computer Science (CWI), Amsterdam, The
Netherlands, (2) Eindhoven University of Technology, Eindhoven, The
Netherlands)},
journal={arXiv preprint arXiv:cs/0412104},
year={2004},
number={SEN-E0405},
archivePrefix={arXiv},
eprint={cs/0412104},
primaryClass={cs.MA cs.GT}
} | somefun2004negotiating |
arxiv-672463 | cs/0412105 | On the existence of stable models of non-stratified logic programs | <|reference_start|>On the existence of stable models of non-stratified logic programs: This paper introduces a fundamental result, which is relevant for Answer Set programming, and planning. For the first time since the definition of the stable model semantics, the class of logic programs for which a stable model exists is given a syntactic characterization. This condition may have a practical importance both for defining new algorithms for checking consistency and computing answer sets, and for improving the existing systems. The approach of this paper is to introduce a new canonical form (to which any logic program can be reduced), to focus the attention on cyclic dependencies. The technical result is then given in terms of programs in canonical form (canonical programs), without loss of generality. The result is based on identifying the cycles contained in the program, showing that stable models of the overall program are composed of stable models of suitable sub-programs, corresponding to the cycles, and on defining the Cycle Graph. Each vertex of this graph corresponds to one cycle, and each edge corresponds to one handle, which is a literal containing an atom that, occurring in both cycles, actually determines a connection between them. In fact, the truth value of the handle in the cycle where it appears as the head of a rule, influences the truth value of the atoms of the cycle(s) where it occurs in the body. We can therefore introduce the concept of a handle path, connecting different cycles. If for every odd cycle we can find a handle path with certain properties, then the existence of a stable model is guaranteed.<|reference_end|> | arxiv | @article{costantini2004on,
title={On the existence of stable models of non-stratified logic programs},
author={Stefania Costantini},
journal={arXiv preprint arXiv:cs/0412105},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412105},
primaryClass={cs.AI cs.LO}
} | costantini2004on |
arxiv-672464 | cs/0412106 | Online Learning of Aggregate Knowledge about Non-linear Preferences Applied to Negotiating Prices and Bundles | <|reference_start|>Online Learning of Aggregate Knowledge about Non-linear Preferences Applied to Negotiating Prices and Bundles: In this paper, we consider a form of multi-issue negotiation where a shop negotiates both the contents and the price of bundles of goods with his customers. We present some key insights about, as well as a procedure for, locating mutually beneficial alternatives to the bundle currently under negotiation. The essence of our approach lies in combining aggregate (anonymous) knowledge of customer preferences with current data about the ongoing negotiation process. The developed procedure either works with already obtained aggregate knowledge or, in the absence of such knowledge, learns the relevant information online. We conduct computer experiments with simulated customers that have _nonlinear_ preferences. We show how, for various types of customers, with distinct negotiation heuristics, our procedure (with and without the necessary aggregate knowledge) increases the speed with which deals are reached, as well as the number and the Pareto efficiency of the deals reached compared to a benchmark.<|reference_end|> | arxiv | @article{somefun2004online,
title={Online Learning of Aggregate Knowledge about Non-linear Preferences
Applied to Negotiating Prices and Bundles},
author={Koye Somefun (1), Tomas Klos (1), Han La Poutr\'e (1 and 2) ((1)
Center for Mathematics and Computer Science (CWI), Amsterdam, The
Netherlands, (2) Eindhoven University of Technology, Eindhoven, The
Netherlands)},
journal={arXiv preprint arXiv:cs/0412106},
year={2004},
number={SEN-E0415},
archivePrefix={arXiv},
eprint={cs/0412106},
primaryClass={cs.MA cs.GT cs.LG}
} | somefun2004online |
arxiv-672465 | cs/0412107 | A Monte Carlo algorithm for efficient large matrix inversion | <|reference_start|>A Monte Carlo algorithm for efficient large matrix inversion: This paper introduces a new Monte Carlo algorithm to invert large matrices. It is based on simultaneous coupled draws from two random vectors whose covariance is the required inverse. It can be considered a generalization of a previously reported algorithm for hermitian matrix inversion based on only one draw. The use of two draws allows the inversion of non-hermitian matrices. Both the conditions for convergence and the rate of convergence are similar to those of the Gauss-Seidel algorithm. Results on two examples are presented: a real non-symmetric matrix related to quantitative genetics and a complex non-hermitian matrix relevant for physicists. Compared with other Monte Carlo algorithms, it achieves a large reduction in processing time, showing eight times faster processing in the examples studied.<|reference_end|> | arxiv | @article{garcia-cortes2004a,
title={A Monte Carlo algorithm for efficient large matrix inversion},
author={L. A. Garcia-Cortes and C. Cabrillo},
journal={arXiv preprint arXiv:cs/0412107},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412107},
primaryClass={cs.DS cs.NA hep-lat}
} | garcia-cortes2004a |
arxiv-672466 | cs/0412108 | Mutual Information and Minimum Mean-square Error in Gaussian Channels | <|reference_start|>Mutual Information and Minimum Mean-square Error in Gaussian Channels: This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: For any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is chosen uniformly distributed between 0 and SNR.<|reference_end|> | arxiv | @article{guo2004mutual,
title={Mutual Information and Minimum Mean-square Error in Gaussian Channels},
author={Dongning Guo and Shlomo Shamai (Shitz) and Sergio Verdu},
journal={arXiv preprint arXiv:cs/0412108},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412108},
primaryClass={cs.IT math.IT}
} | guo2004mutual |
arxiv-672467 | cs/0412109 | Global minimization of a quadratic functional: neural network approach | <|reference_start|>Global minimization of a quadratic functional: neural network approach: The problem of finding the global minimum of a multiextremal functional is discussed. One frequently encounters such a functional in various applications. We propose a procedure which depends polynomially on the dimensionality of the problem. In our approach we use the eigenvalues and eigenvectors of the connection matrix.<|reference_end|> | arxiv | @article{litinskii2004global,
title={Global minimization of a quadratic functional: neural network approach},
author={L. B. Litinskii and B. M. Magomedov},
journal={arXiv preprint arXiv:cs/0412109},
year={2004},
number={IONT-04-16},
archivePrefix={arXiv},
eprint={cs/0412109},
primaryClass={cs.NE cs.DM}
} | litinskii2004global |
arxiv-672468 | cs/0412110 | Q-valued neural network as a system of fast identification and pattern recognition | <|reference_start|>Q-valued neural network as a system of fast identification and pattern recognition: An effective neural network algorithm of the perceptron type is proposed. The algorithm allows us to identify a strongly distorted input vector reliably. It is shown that its reliability and processing speed are orders of magnitude higher than those of fully connected neural networks. The processing speed of our algorithm exceeds that of the stack fast-access retrieval algorithm modified to work when there is noise in the input channel.<|reference_end|> | arxiv | @article{alieva2004q-valued,
title={Q-valued neural network as a system of fast identification and pattern
recognition},
author={D. I. Alieva and B. V. Kryzhanovsky and V. M. Kryzhanovsky and A. B. Fonarev},
journal={arXiv preprint arXiv:cs/0412110},
year={2004},
number={IONT-04-17},
archivePrefix={arXiv},
eprint={cs/0412110},
primaryClass={cs.NE cs.CV}
} | alieva2004q-valued |
arxiv-672469 | cs/0412111 | On the asymptotic accuracy of the union bound | <|reference_start|>On the asymptotic accuracy of the union bound: A new lower bound on the error probability of maximum likelihood decoding of a binary code on a binary symmetric channel was proved in Barg and McGregor (2004, cs.IT/0407011). It was observed in that paper that this bound leads to a new region of code rates in which the random coding exponent is asymptotically tight, giving a new region in which the reliability of the BSC is known exactly. The present paper explains the relation of these results to the union bound on the error probability.<|reference_end|> | arxiv | @article{barg2004on,
title={On the asymptotic accuracy of the union bound},
author={Alexander Barg},
journal={arXiv preprint arXiv:cs/0412111},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412111},
primaryClass={cs.IT math.IT}
} | barg2004on |
arxiv-672470 | cs/0412112 | Source Coding With Encoder Side Information | <|reference_start|>Source Coding With Encoder Side Information: We introduce the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. We show that such distortion side information is not only useful at the encoder, but that under certain conditions, knowing it at only the encoder is as good as knowing it at both encoder and decoder, and knowing it at only the decoder is useless. Thus distortion side information is a natural complement to the signal side information studied by Wyner and Ziv, which depends on the source but does not involve the distortion measure. Furthermore, when both types of side information are present, we characterize the penalty for deviating from the configuration of encoder-only distortion side information and decoder-only signal side information, which in many cases is as good as full side information knowledge.<|reference_end|> | arxiv | @article{martinian2004source,
title={Source Coding With Encoder Side Information},
author={Emin Martinian and Gregory W. Wornell and Ram Zamir},
journal={arXiv preprint arXiv:cs/0412112},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412112},
primaryClass={cs.IT math.IT}
} | martinian2004source |
arxiv-672471 | cs/0412113 | Source-Channel Diversity for Parallel Channels | <|reference_start|>Source-Channel Diversity for Parallel Channels: We consider transmitting a source across a pair of independent, non-ergodic channels with random states (e.g., slow fading channels) so as to minimize the average distortion. The general problem is unsolved. Hence, we focus on comparing two commonly used source and channel encoding systems which correspond to exploiting diversity either at the physical layer through parallel channel coding or at the application layer through multiple description source coding. For on-off channel models, source coding diversity offers better performance. For channels with a continuous range of reception quality, we show the reverse is true. Specifically, we introduce a new figure of merit called the distortion exponent which measures how fast the average distortion decays with SNR. For continuous-state models such as additive white Gaussian noise channels with multiplicative Rayleigh fading, optimal channel coding diversity at the physical layer is more efficient than source coding diversity at the application layer in that the former achieves a better distortion exponent. Finally, we consider a third decoding architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of multiple description systems for on-off channels. Thus, the multiple description system with joint decoding achieves the best performance, from among the three architectures considered, on both continuous-state and on-off channels.<|reference_end|> | arxiv | @article{laneman2004source-channel,
title={Source-Channel Diversity for Parallel Channels},
author={J. Nicholas Laneman and Emin Martinian and Gregory W. Wornell and
John G. Apostolopoulos},
journal={arXiv preprint arXiv:cs/0412113},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412113},
primaryClass={cs.IT math.IT}
} | laneman2004source-channel |
arxiv-672472 | cs/0412114 | State of the Art, Evaluation and Recommendations regarding "Document Processing and Visualization Techniques" | <|reference_start|>State of the Art, Evaluation and Recommendations regarding "Document Processing and Visualization Techniques": Several Networks of Excellence have been set up in the framework of the European FP5 research program. Among these Networks of Excellence, the NEMIS project focuses on the field of Text Mining. Within this field, document processing and visualization was identified as one of the key topics and the WG1 working group was created in the NEMIS project, to carry out a detailed survey of techniques associated with the text mining process and to identify the relevant research topics in related research areas. In this document we present the results of this comprehensive survey. The report includes a description of the current state-of-the-art and practice, a roadmap for follow-up research in the identified areas, and recommendations for anticipated technological development in the domain of text mining.<|reference_end|> | arxiv | @article{rajman2004state,
title={State of the Art, Evaluation and Recommendations regarding "Document
Processing and Visualization Techniques"},
author={Martin Rajman and Martin Vesely and Pierre Andrews},
journal={arXiv preprint arXiv:cs/0412114},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412114},
primaryClass={cs.CL}
} | rajman2004state |
arxiv-672473 | cs/0412115 | Reductions in Distributed Computing Part I: Consensus and Atomic Commitment Tasks | <|reference_start|>Reductions in Distributed Computing Part I: Consensus and Atomic Commitment Tasks: We introduce several notions of reduction in distributed computing, and investigate reduction properties of two fundamental agreement tasks, namely Consensus and Atomic Commitment. We first propose the notion of reduction "à la Karp", an analog for distributed computing of the classical Karp reduction. We then define a weaker reduction which is the analog of Cook reduction. These two reductions are called K-reduction and C-reduction, respectively. We also introduce the notion of C*-reduction which has no counterpart in classical (namely, non distributed) systems, and which naturally arises when dealing with symmetric tasks. We establish various reducibility and irreducibility theorems with respect to these three reductions. Our main result is an incomparability statement for Consensus and Atomic Commitment tasks: we show that they are incomparable with respect to the C-reduction, except when the resiliency degree is 1, in which case Atomic Commitment is strictly harder than Consensus. A side consequence of these results is that our notion of C-reduction is strictly weaker than that of K-reduction, even for unsolvable tasks.<|reference_end|> | arxiv | @article{charron-bost2004reductions,
title={Reductions in Distributed Computing Part I: Consensus and Atomic
Commitment Tasks},
author={Bernadette Charron-Bost},
journal={arXiv preprint arXiv:cs/0412115},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412115},
primaryClass={cs.DC}
} | charron-bost2004reductions |
arxiv-672474 | cs/0412116 | Reductions in Distributed Computing Part II: k-Threshold Agreement Tasks | <|reference_start|>Reductions in Distributed Computing Part II: k-Threshold Agreement Tasks: We extend the results of Part I by considering a new class of agreement tasks, the so-called k-Threshold Agreement tasks (previously introduced by Charron-Bost and Le Fessant). These tasks naturally interpolate between Atomic Commitment and Consensus. Moreover, they constitute a valuable tool to derive irreducibility results between Consensus tasks only. In particular, they allow us to show that (A) for a fixed set of processes, the higher the resiliency degree is, the harder the Consensus task is, and (B) for a fixed resiliency degree, the smaller the set of processes is, the harder the Consensus task is. The proofs of these results lead us to consider new oracle-based reductions, involving a weaker variant of the C-reduction introduced in Part I. We also discuss the relationship between our results and previous ones relating f-resiliency and wait-freedom.<|reference_end|> | arxiv | @article{charron-bost2004reductions,
title={Reductions in Distributed Computing Part II: k-Threshold Agreement Tasks},
author={Bernadette Charron-Bost},
journal={arXiv preprint arXiv:cs/0412116},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412116},
primaryClass={cs.DC}
} | charron-bost2004reductions |
arxiv-672475 | cs/0412117 | Thematic Annotation: extracting concepts out of documents | <|reference_start|>Thematic Annotation: extracting concepts out of documents: Contrary to standard approaches to topic annotation, the technique used in this work does not centrally rely on some sort of -- possibly statistical -- keyword extraction. In fact, the proposed annotation algorithm uses a large scale semantic database -- the EDR Electronic Dictionary -- that provides a concept hierarchy based on hyponym and hypernym relations. This concept hierarchy is used to generate a synthetic representation of the document by aggregating the words present in topically homogeneous document segments into a set of concepts best preserving the document's content. This new extraction technique uses an unexplored approach to topic selection. Instead of using semantic similarity measures based on a semantic resource, the latter is processed to extract the part of the conceptual hierarchy relevant to the document content. Then this conceptual hierarchy is searched to extract the most relevant set of concepts to represent the topics discussed in the document. Notice that this algorithm is able to extract generic concepts that are not directly present in the document.<|reference_end|> | arxiv | @article{andrews2004thematic,
title={Thematic Annotation: extracting concepts out of documents},
author={Pierre Andrews and Martin Rajman},
journal={arXiv preprint arXiv:cs/0412117},
year={2004},
number={IC/2004/68},
archivePrefix={arXiv},
eprint={cs/0412117},
primaryClass={cs.CL}
} | andrews2004thematic |
arxiv-672476 | cs/0412118 | Power Aware Routing for Sensor Databases | <|reference_start|>Power Aware Routing for Sensor Databases: Wireless sensor networks offer the potential to span and monitor large geographical areas inexpensively. Sensor network databases like TinyDB are the dominant architectures to extract and manage data in such networks. Since sensors have significant power constraints (battery life), and high communication costs, design of energy efficient communication algorithms is of great importance. The data flow in a sensor database is very different from data flow in an ordinary network and poses novel challenges in designing efficient routing algorithms. In this work we explore the problem of energy efficient routing for various different types of database queries and show that in general, this problem is NP-complete. We give a constant factor approximation algorithm for one class of query, and for other queries give heuristic algorithms. We evaluate the efficiency of the proposed algorithms by simulation and demonstrate their near optimal performance for various network sizes.<|reference_end|> | arxiv | @article{buragohain2004power,
title={Power Aware Routing for Sensor Databases},
author={Chiranjeeb Buragohain and Divyakant Agrawal and Subhash Suri},
journal={Proceedings of IEEE INFOCOM 2005, March 13-17, 2005 Miami},
year={2004},
doi={10.1109/INFCOM.2005.1498455},
archivePrefix={arXiv},
eprint={cs/0412118},
primaryClass={cs.NI cs.DC}
} | buragohain2004power |
arxiv-672477 | cs/0412119 | CDTP Chain Distributed Transfer Protocol | <|reference_start|>CDTP Chain Distributed Transfer Protocol: The rapid growth of the internet in general and of bandwidth capacity at internet clients in particular poses increasing computation and bandwidth demands on internet servers. Internet access technologies like ADSL [DSL], Cable Modem and Wireless modem allow internet clients to access the internet with orders of magnitude more bandwidth than using traditional modems. We present CDTP, a distributed transfer protocol that allows clients to cooperate and thereby remove the strain from the internet server, thus achieving much better performance than traditional transfer protocols (e.g. FTP [FTP]). The CDTP server and client tools are also presented, together with experimental results. Finally, a bandwidth measurement technique is presented. CDTP tools use this technique to differentiate between slow and fast clients.<|reference_end|> | arxiv | @article{vagner2004cdtp,
title={CDTP Chain Distributed Transfer Protocol},
author={Shmuel Vagner},
journal={arXiv preprint arXiv:cs/0412119},
year={2004},
number={1937340},
archivePrefix={arXiv},
eprint={cs/0412119},
primaryClass={cs.NI cs.AR}
} | vagner2004cdtp |
arxiv-672478 | cs/0412120 | An estimate of accuracy for interpolant numerical solutions of a PDE problem | <|reference_start|>An estimate of accuracy for interpolant numerical solutions of a PDE problem: In this paper we present an estimate of accuracy for a piecewise polynomial approximation of a classical numerical solution to a nonlinear differential problem. We suppose the numerical solution U is computed using a grid with a small linear step and interval time Tu, while the polynomial approximation V is an interpolation of the values of a numerical solution on a less fine grid and interval time Tv << Tu. The estimate shows that the interpolant solution V can be, under suitable hypotheses, a good approximation, and in general its computational cost is much lower than the cost of the fine numerical solution. We present two possible applications, to the linear case and the periodic case.<|reference_end|> | arxiv | @article{argentini2004an,
title={An estimate of accuracy for interpolant numerical solutions of a PDE
problem},
author={Gianluca Argentini},
journal={APPLIED AND INDUSTRIAL MATHEMATICS IN ITALY, Series on Advances in
Mathematics for Applied Sciences - Vol. 69, World Scientific Company, 2005,
56 - 64.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412120},
primaryClass={cs.NA math-ph math.MP}
} | argentini2004an |
arxiv-672479 | cs/0412121 | A Distributed Economics-based Infrastructure for Utility Computing | <|reference_start|>A Distributed Economics-based Infrastructure for Utility Computing: Existing attempts at utility computing revolve around two approaches. The first consists of proprietary solutions involving renting time on dedicated utility computing machines. The second requires the use of heavy, monolithic applications that are difficult to deploy, maintain, and use. We propose a distributed, community-oriented approach to utility computing. Our approach provides an infrastructure built on Web Services in which modular components are combined to create a seemingly simple, yet powerful system. The community-oriented nature generates an economic environment which results in fair transactions between consumers and providers of computing cycles while simultaneously encouraging improvements in the infrastructure of the computational grid itself.<|reference_end|> | arxiv | @article{treaster2004a,
title={A Distributed Economics-based Infrastructure for Utility Computing},
author={Michael Treaster and Nadir Kiyanclar and Gregory A. Koenig and William Yurcik},
journal={arXiv preprint arXiv:cs/0412121},
year={2004},
archivePrefix={arXiv},
eprint={cs/0412121},
primaryClass={cs.DC}
} | treaster2004a |
arxiv-672480 | cs/0501001 | A Survey of Distributed Intrusion Detection Approaches | <|reference_start|>A Survey of Distributed Intrusion Detection Approaches: Distributed intrusion detection systems detect attacks on computer systems by analyzing data aggregated from distributed sources. The distributed nature of the data sources allows patterns in the data to be seen that might not be detectable if each of the sources were examined individually. This paper describes the various approaches that have been developed to share and analyze data in such systems, and discusses some issues that must be addressed before fully decentralized distributed intrusion detection systems can be made viable.<|reference_end|> | arxiv | @article{treaster2005a,
title={A Survey of Distributed Intrusion Detection Approaches},
author={Michael Treaster},
journal={arXiv preprint arXiv:cs/0501001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501001},
primaryClass={cs.CR}
} | treaster2005a |
arxiv-672481 | cs/0501002 | A Survey of Fault-Tolerance and Fault-Recovery Techniques in Parallel Systems | <|reference_start|>A Survey of Fault-Tolerance and Fault-Recovery Techniques in Parallel Systems: Supercomputing systems today often come in the form of large numbers of commodity systems linked together into a computing cluster. These systems, like any distributed system, can have large numbers of independent hardware components cooperating or collaborating on a computation. Unfortunately, any of this vast number of components can fail at any time, resulting in potentially erroneous output. In order to improve the robustness of supercomputing applications in the presence of failures, many techniques have been developed to provide resilience to these kinds of system faults. This survey provides an overview of these various fault-tolerance techniques.<|reference_end|> | arxiv | @article{treaster2005a,
title={A Survey of Fault-Tolerance and Fault-Recovery Techniques in Parallel
Systems},
author={Michael Treaster},
journal={arXiv preprint arXiv:cs/0501002},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501002},
primaryClass={cs.DC}
} | treaster2005a |
arxiv-672482 | cs/0501003 | Implementation of Motzkin-Burger algorithm in Maple | <|reference_start|>Implementation of Motzkin-Burger algorithm in Maple: The subject of this paper is an implementation of the well-known Motzkin-Burger algorithm, which solves the problem of finding the full set of solutions of a system of linear homogeneous inequalities. There exist a number of implementations of this algorithm, but there was none in Maple, to the best of the author's knowledge.<|reference_end|> | arxiv | @article{burovsky2005implementation,
title={Implementation of Motzkin-Burger algorithm in Maple},
author={P.A. Burovsky},
journal={arXiv preprint arXiv:cs/0501003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501003},
primaryClass={cs.CG cs.CC cs.SC}
} | burovsky2005implementation |
arxiv-672483 | cs/0501004 | Advances towards a General-Purpose Societal-Scale Human-Collective Problem-Solving Engine | <|reference_start|>Advances towards a General-Purpose Societal-Scale Human-Collective Problem-Solving Engine: Human collective intelligence has proved itself as an important factor in a society's ability to accomplish large-scale behavioral feats. As societies have grown in population size, individuals have seen a decrease in their ability to actively participate in the problem-solving processes of the group. Representative decision-making structures have been used as a modern solution to society's inadequate information-processing infrastructure. With computer and network technologies being further embedded within the fabric of society, the implementation of a general-purpose societal-scale human-collective problem-solving engine is envisioned as a means of furthering the collective-intelligence potential of society. This paper provides both a novel framework for creating collective intelligence systems and a method for implementing a representative and expertise system based on social-network theory.<|reference_end|> | arxiv | @article{rodriguez2005advances,
title={Advances towards a General-Purpose Societal-Scale Human-Collective
Problem-Solving Engine},
author={Marko Rodriguez},
journal={Proceedings of the International Conference on Systems, Man and
Cybernetics, IEEE SMC, The Hague, Netherlands, volume 1, pages 206-211, ISSN:
1062-922X, 2004},
year={2005},
doi={10.1109/ICSMC.2004.1398298},
archivePrefix={arXiv},
eprint={cs/0501004},
primaryClass={cs.CY cs.HC}
} | rodriguez2005advances |
arxiv-672484 | cs/0501005 | Portfolio selection using neural networks | <|reference_start|>Portfolio selection using neural networks: In this paper we apply a heuristic method based on artificial neural networks in order to trace out the efficient frontier associated to the portfolio selection problem. We consider a generalization of the standard Markowitz mean-variance model which includes cardinality and bounding constraints. These constraints ensure the investment in a given number of different assets and limit the amount of capital to be invested in each asset. We present some experimental results obtained with the neural network heuristic and we compare them to those obtained with three previous heuristic methods.<|reference_end|> | arxiv | @article{fernandez2005portfolio,
title={Portfolio selection using neural networks},
author={Alberto Fernandez and Sergio Gomez},
journal={Computers & Operations Research 34 (2007) 1177-1191},
year={2005},
doi={10.1016/j.cor.2005.06.017},
number={DEIM-RR-04-004},
archivePrefix={arXiv},
eprint={cs/0501005},
primaryClass={cs.NE}
} | fernandez2005portfolio |
arxiv-672485 | cs/0501006 | Formal Languages and Algorithms for Similarity based Retrieval from Sequence Databases | <|reference_start|>Formal Languages and Algorithms for Similarity based Retrieval from Sequence Databases: The paper considers various formalisms based on Automata, Temporal Logic and Regular Expressions for specifying queries over sequences. Unlike traditional binary semantics, the paper presents a similarity based semantics for these formalisms. More specifically, a distance measure in the range [0,1] is associated with a (sequence, query) pair, denoting how closely the sequence satisfies the query. These measures are defined using a spectrum of normed vector distance measures. Various distance measures based on the syntax and the traditional semantics of the query are presented. Efficient algorithms for computing these distance measures are presented. These algorithms can be employed for the retrieval of sequences from a database that closely satisfy a given query.<|reference_end|> | arxiv | @article{sistla2005formal,
title={Formal Languages and Algorithms for Similarity based Retrieval from
Sequence Databases},
author={A. Prasad Sistla},
journal={arXiv preprint arXiv:cs/0501006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501006},
primaryClass={cs.LO cs.DB}
} | sistla2005formal |
arxiv-672486 | cs/0501007 | A Time-Optimal Delaunay Refinement Algorithm in Two Dimensions | <|reference_start|>A Time-Optimal Delaunay Refinement Algorithm in Two Dimensions: We propose a new refinement algorithm to generate size-optimal quality-guaranteed Delaunay triangulations in the plane. The algorithm takes $O(n \log n + m)$ time, where $n$ is the input size and $m$ is the output size. This is the first time-optimal Delaunay refinement algorithm.<|reference_end|> | arxiv | @article{har-peled2005a,
title={A Time-Optimal Delaunay Refinement Algorithm in Two Dimensions},
author={Sariel Har-Peled and Alper Ungor},
journal={arXiv preprint arXiv:cs/0501007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501007},
primaryClass={cs.CG}
} | har-peled2005a |
arxiv-672487 | cs/0501008 | Multipartite Secret Correlations and Bound Information | <|reference_start|>Multipartite Secret Correlations and Bound Information: We consider the problem of secret key extraction when $n$ honest parties and an eavesdropper share correlated information. We present a family of probability distributions and give the full characterization of its distillation properties. This formalism allows us to design a rich variety of cryptographic scenarios. In particular, we provide examples of multipartite probability distributions containing non-distillable secret correlations, also known as bound information.<|reference_end|> | arxiv | @article{masanes2005multipartite,
title={Multipartite Secret Correlations and Bound Information},
author={Lluis Masanes and Antonio Acin},
journal={IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4686-4694 (2006)},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501008},
primaryClass={cs.CR cs.IT math.IT quant-ph}
} | masanes2005multipartite |
arxiv-672488 | cs/0501009 | On The Liniar Time Complexity of Finite Languages | <|reference_start|>On The Liniar Time Complexity of Finite Languages: The present paper presents and proves a proposition concerning the time complexity of finite languages. It is shown herein that for any finite language (a language for which the set of words composing it is finite) there is a Turing machine that computes the language in such a way that for any input of length k the machine stops in, at most, k + 1 steps.<|reference_end|> | arxiv | @article{moscu2005on,
title={On The Liniar Time Complexity of Finite Languages},
author={Mircea Alexandru Popescu Moscu},
journal={arXiv preprint arXiv:cs/0501009},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501009},
primaryClass={cs.CC}
} | moscu2005on |
arxiv-672489 | cs/0501010 | A Cryptographic Study of Some Digital Signature Schemes | <|reference_start|>A Cryptographic Study of Some Digital Signature Schemes: In this thesis, we propose some directed signature schemes and discuss their applications in different situations. We also discuss the security aspects arising during the design of the proposed directed digital signature schemes. The security of most digital signature schemes widely used in practice is based on two difficult problems, viz. the problem of factoring integers (the RSA scheme) and the problem of finding discrete logarithms over finite fields (the ElGamal scheme). The proposed work in this thesis is divided into seven chapters.<|reference_end|> | arxiv | @article{kumar2005a,
title={A Cryptographic Study of Some Digital Signature Schemes},
author={Manoj Kumar},
journal={arXiv preprint arXiv:cs/0501010},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501010},
primaryClass={cs.CR}
} | kumar2005a |
arxiv-672490 | cs/0501011 | A simple algorithm for decoding Reed-Solomon codes and its relation to the Welch-Berlekamp algorithm | <|reference_start|>A simple algorithm for decoding Reed-Solomon codes and its relation to the Welch-Berlekamp algorithm: A simple and natural Gao algorithm for decoding algebraic codes is described. Its relation to the Welch-Berlekamp and Euclidean algorithms is given.<|reference_end|> | arxiv | @article{fedorenko2005a,
title={A simple algorithm for decoding Reed-Solomon codes and its relation to
the Welch-Berlekamp algorithm},
author={Sergei Fedorenko},
journal={IEEE Transactions on Information Theory, vol. IT-51, no. 3, pp.
1196-1198, 2005.},
year={2005},
doi={10.1109/TIT.2004.842738},
archivePrefix={arXiv},
eprint={cs/0501011},
primaryClass={cs.IT math.IT}
} | fedorenko2005a |
arxiv-672491 | cs/0501012 | Fedora: An Architecture for Complex Objects and their Relationships | <|reference_start|>Fedora: An Architecture for Complex Objects and their Relationships: The Fedora architecture is an extensible framework for the storage, management, and dissemination of complex objects and the relationships among them. Fedora accommodates the aggregation of local and distributed content into digital objects and the association of services with objects. This allows an object to have several accessible representations, some of them dynamically produced. The architecture includes a generic RDF-based relationship model that represents relationships among objects and their components. Queries against these relationships are supported by an RDF triple store. The architecture is implemented as a web service, with all aspects of the complex object architecture and related management functions exposed through REST and SOAP interfaces. The implementation is available as open-source software, providing the foundation for a variety of end-user applications for digital libraries, archives, institutional repositories, and learning object systems.<|reference_end|> | arxiv | @article{lagoze2005fedora:,
title={Fedora: An Architecture for Complex Objects and their Relationships},
author={Carl Lagoze, Sandy Payette, Edwin Shin, Chris Wilper},
journal={arXiv preprint arXiv:cs/0501012},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501012},
primaryClass={cs.DL cs.MM}
} | lagoze2005fedora: |
arxiv-672492 | cs/0501013 | On the security of the Yen-Guo's domino signal encryption algorithm (DSEA) | <|reference_start|>On the security of the Yen-Guo's domino signal encryption algorithm (DSEA): Recently, a new domino signal encryption algorithm (DSEA) was proposed for digital signal transmission, especially for digital images and videos. This paper analyzes the security of DSEA, and points out the following weaknesses: 1) its security against the brute-force attack was overestimated; 2) it is not sufficiently secure against ciphertext-only attacks, and only one ciphertext is enough to get some information about the plaintext and to break the value of a sub-key; 3) it is insecure against known/chosen-plaintext attacks, in the sense that the secret key can be recovered from a number of continuous bytes of only one known/chosen plaintext and the corresponding ciphertext. Experimental results are given to show the performance of the proposed attacks, and some countermeasures are discussed to improve DSEA.<|reference_end|> | arxiv | @article{li2005on,
title={On the security of the Yen-Guo's domino signal encryption algorithm
(DSEA)},
author={Chengqing Li, Shujun Li, Der-Chyuan Lou and Dan Zhang},
journal={Journal of Systems and Software, vol. 79, no. 2, pp. 253-258, 2006},
year={2005},
doi={10.1016/j.jss.2005.04.021},
archivePrefix={arXiv},
eprint={cs/0501013},
primaryClass={cs.CR cs.MM nlin.CD}
} | li2005on |
arxiv-672493 | cs/0501014 | On the Design of Perceptual MPEG-Video Encryption Algorithms | <|reference_start|>On the Design of Perceptual MPEG-Video Encryption Algorithms: In this paper, some existing perceptual encryption algorithms of MPEG videos are reviewed and some problems, especially security defects of two recently proposed MPEG-video perceptual encryption schemes, are pointed out. Then, a simpler and more effective design is suggested, which selectively encrypts fixed-length codewords (FLC) in MPEG-video bitstreams under the control of three perceptibility factors. The proposed design is actually an encryption configuration that can work with any stream cipher or block cipher. Compared with the previously-proposed schemes, the new design provides more useful features, such as strict size-preservation, on-the-fly encryption and multiple perceptibility, which make it possible to support more applications with different requirements. In addition, four different measures are suggested to provide better security against known/chosen-plaintext attacks.<|reference_end|> | arxiv | @article{li2005on,
title={On the Design of Perceptual MPEG-Video Encryption Algorithms},
author={Shujun Li, Guanrong Chen, Albert Cheung, Bharat Bhargava and Kwok-Tung
Lo},
journal={IEEE Transactions on Circuits and Systems for Video Technology,
vol. 17, no. 2, pp. 214-223, 2007},
year={2005},
doi={10.1109/TCSVT.2006.888840},
archivePrefix={arXiv},
eprint={cs/0501014},
primaryClass={cs.MM cs.CR}
} | li2005on |
arxiv-672494 | cs/0501015 | Application of Generating Functions and Partial Differential Equations in Coding Theory | <|reference_start|>Application of Generating Functions and Partial Differential Equations in Coding Theory: In this work we consider formal power series and partial differential equations, and their relationship with Coding Theory. We obtain the nature of the solutions of the partial differential equations for the Cycle Poisson case. The coefficients for this case have been simulated, and their strong tendency of growth is shown. In the light of Complex Analysis, the Hadamard multiplication theorem is presented as a new approach to divide the power sums relating to the error probability into parts, each of which can be analyzed separately.<|reference_end|> | arxiv | @article{bradonjic2005application,
title={Application of Generating Functions and Partial Differential Equations
in Coding Theory},
author={Milan Bradonjic},
journal={arXiv preprint arXiv:cs/0501015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501015},
primaryClass={cs.IT math.IT}
} | bradonjic2005application |
arxiv-672495 | cs/0501016 | On the weight distribution of convolutional codes | <|reference_start|>On the weight distribution of convolutional codes: Detailed information about the weight distribution of a convolutional code is given by the adjacency matrix of the state diagram associated with a controller canonical form of the code. We will show that this matrix is an invariant of the code. Moreover, it will be proven that codes with the same adjacency matrix have the same dimension and the same Forney indices and finally that for one-dimensional binary convolutional codes the adjacency matrix determines the code uniquely up to monomial equivalence.<|reference_end|> | arxiv | @article{gluesing-luerssen2005on,
title={On the weight distribution of convolutional codes},
author={Heide Gluesing-Luerssen},
journal={arXiv preprint arXiv:cs/0501016},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501016},
primaryClass={cs.IT math.IT math.OC}
} | gluesing-luerssen2005on |
arxiv-672496 | cs/0501017 | Public Key Cryptography based on Semigroup Actions | <|reference_start|>Public Key Cryptography based on Semigroup Actions: A generalization of the original Diffie-Hellman key exchange in $(\mathbb{Z}/p\mathbb{Z})^*$ found a new depth when Miller and Koblitz suggested that such a protocol could be used with the group of points on an elliptic curve. In this paper, we propose a further vast generalization where abelian semigroups act on finite sets. We define a Diffie-Hellman key exchange in this setting and we illustrate how to build interesting semigroup actions using finite (simple) semirings. The practicality of the proposed extensions relies on the orbit sizes of the semigroup actions; at this point it is an open question how to compute the sizes of these orbits in general, and whether a square-root attack exists in general. In Section 2 a concrete practical semigroup action built from simple semirings is presented. It will require further research to analyse this system.<|reference_end|> | arxiv | @article{maze2005public,
title={Public Key Cryptography based on Semigroup Actions},
author={G. Maze, C. Monico, J. Rosenthal},
journal={arXiv preprint arXiv:cs/0501017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501017},
primaryClass={cs.CR cs.IT math.IT}
} | maze2005public |
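
The exchange described in the abstract above can be sketched generically: an abelian semigroup acts on a finite set, each party applies a secret semigroup element to a public point, and commutativity of the action gives both parties the same key. The sketch below instantiates the action with the classical special case mentioned in the abstract (integers acting on (Z/pZ)* by exponentiation); the tiny prime and exponents are illustrative toy parameters, not secure choices, and the function names are mine, not the paper's.

```python
# Generic Diffie-Hellman over a semigroup action: an abelian semigroup acts
# on a set, and a * (b * x) == b * (a * x) gives both parties the same key.
# Instantiated here with the classical case: natural numbers under
# multiplication acting on (Z/pZ)* via exponentiation. Toy parameters only.

def make_exp_action(p):
    """Action of the multiplicative semigroup (N, *) on (Z/pZ)*."""
    return lambda a, x: pow(x, a, p)

def diffie_hellman(act, base, secret_a, secret_b):
    pub_a = act(secret_a, base)        # Alice publishes a * x
    pub_b = act(secret_b, base)        # Bob publishes b * x
    key_a = act(secret_a, pub_b)       # Alice computes a * (b * x)
    key_b = act(secret_b, pub_a)       # Bob computes b * (a * x)
    assert key_a == key_b              # the action of an abelian semigroup commutes
    return key_a

act = make_exp_action(p=101)           # toy prime, not a secure parameter
shared = diffie_hellman(act, base=2, secret_a=17, secret_b=29)
```

Any other commutative action (e.g., one built from a finite simple semiring, as the paper proposes) can be dropped in for `act` without changing the protocol skeleton.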
arxiv-672497 | cs/0501018 | Combining Independent Modules in Lexical Multiple-Choice Problems | <|reference_start|>Combining Independent Modules in Lexical Multiple-Choice Problems: Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of multiple modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the familiar mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems used to assess human mastery of lexical semantics -- synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.<|reference_end|> | arxiv | @article{turney2005combining,
title={Combining Independent Modules in Lexical Multiple-Choice Problems},
author={Peter D. Turney, Michael L. Littman, Jeffrey Bigham, Victor Shnayder},
journal={Recent Advances in Natural Language Processing III: Selected
Papers from RANLP 2003, Eds: N. Nicolov, K. Botcheva, G. Angelova, and R.
Mitkov, (2004), Current Issues in Linguistic Theory (CILT), 260, John
Benjamins, 101-110},
year={2005},
number={NRC-47434},
archivePrefix={arXiv},
eprint={cs/0501018},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2005combining |
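
The three merging rules compared in the abstract above — mixture, logarithmic, and product — admit short formulations when each module outputs a probability distribution over the same candidate answers. The sketch below uses simplified textbook forms of the rules, not necessarily the paper's exact definitions; the weights and toy distributions are invented for illustration.

```python
# Three rules for merging probability distributions from independent modules:
#   mixture:     weighted arithmetic mean
#   logarithmic: weighted geometric mean, renormalized
#   product:     product of (weighted, smoothed) scores, renormalized
# Simplified formulations; weights and distributions below are illustrative.

import math

def normalize(scores):
    total = sum(scores)
    return [s / total for s in scores]

def mixture_rule(dists, weights):
    return [sum(w * d[i] for w, d in zip(weights, dists))
            for i in range(len(dists[0]))]

def logarithmic_rule(dists, weights):
    return normalize([math.prod(d[i] ** w for w, d in zip(weights, dists))
                      for i in range(len(dists[0]))])

def product_rule(dists, weights, eps=1e-9):
    # eps smooths zero probabilities before multiplying
    return normalize([math.prod(w * d[i] + eps for w, d in zip(weights, dists))
                      for i in range(len(dists[0]))])

module_a = [0.7, 0.2, 0.1]     # toy distributions over 3 candidate answers
module_b = [0.5, 0.4, 0.1]
weights = [0.6, 0.4]           # weights sum to 1, so the mixture is a distribution

for rule in (mixture_rule, logarithmic_rule, product_rule):
    merged = rule([module_a, module_b], weights)
    assert abs(sum(merged) - 1.0) < 1e-9
```

With these toy inputs all three rules agree on the top-ranked answer; the paper's point is that on real synonym and analogy questions the ensembles differ slightly in accuracy but all beat their component modules.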
arxiv-672498 | cs/0501019 | Clustering SPIRES with EqRank | <|reference_start|>Clustering SPIRES with EqRank: SPIRES is the largest database of scientific papers in the subject field of high energy and nuclear physics. It contains information on the citation graph of more than half a million of papers (vertexes of the citation graph). We outline the EqRank algorithm designed to cluster vertexes of directed graphs, and present the results of EqRank application to the SPIRES citation graph. The hierarchical clustering of SPIRES yielded by EqRank is used to set up a web service, which is also outlined.<|reference_end|> | arxiv | @article{pivovarov2005clustering,
title={Clustering SPIRES with EqRank},
author={G. B. Pivovarov and S. E. Trunov},
journal={arXiv preprint arXiv:cs/0501019},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501019},
primaryClass={cs.DL cs.IR}
} | pivovarov2005clustering |
arxiv-672499 | cs/0501020 | Enhancing Histograms by Tree-Like Bucket Indices | <|reference_start|>Enhancing Histograms by Tree-Like Bucket Indices: Histograms are used to summarize the contents of relations into a number of buckets for the estimation of query result sizes. Several techniques (e.g., MaxDiff and V-Optimal) have been proposed in the past for determining bucket boundaries which provide accurate estimations. However, while search strategies for optimal bucket boundaries are rather sophisticated, not much attention has been paid to estimating queries inside buckets, and all of the above techniques adopt naive methods for such estimation. This paper focuses on the problem of improving the estimation inside a bucket once its boundaries have been fixed. The proposed technique is based on the addition, to each bucket, of 32 bits of additional information (organized into a 4-level tree index), storing approximate cumulative frequencies at 7 internal intervals of the bucket. Both theoretical analysis and experimental results show that, among a number of alternative ways to organize the additional information, the 4-level tree index provides the best frequency estimation inside a bucket. The index is later added to two well-known histograms, MaxDiff and V-Optimal, obtaining the non-obvious result that, despite the spatial cost of the 4-level tree index (which reduces the number of allowed buckets once the storage space has been fixed), the original methods are strongly improved in terms of accuracy.<|reference_end|> | arxiv | @article{buccafurri2005enhancing,
title={Enhancing Histograms by Tree-Like Bucket Indices},
author={Francesco Buccafurri, Gianluca Lax, Domenico Sacca', Luigi Pontieri
and Domenico Rosaci},
journal={arXiv preprint arXiv:cs/0501020},
year={2005},
archivePrefix={arXiv},
eprint={cs/0501020},
primaryClass={cs.DS}
} | buccafurri2005enhancing |
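
The core idea described in the abstract above — keep cumulative frequencies at 7 internal points of a bucket and use them to answer range estimates inside it — can be sketched without the 32-bit tree encoding, which the summary does not specify. The sketch below stores exact cumulative sums at 7 equi-spaced split points and interpolates linearly within a sub-interval; the data, the equi-spaced split scheme, and the function names are illustrative assumptions, not the paper's 4LT layout.

```python
# Sketch of the bucket-index idea: store cumulative frequencies at 7
# equi-spaced internal points of a bucket (8 sub-intervals), then estimate
# a prefix count by taking the nearest stored cumulative value and
# interpolating linearly inside the sub-interval. Encoding details omitted.

def build_index(frequencies):
    """Cumulative sums at the 7 internal boundaries of 8 equal sub-intervals."""
    n = len(frequencies)
    cum, total, index = 0, sum(frequencies), []
    boundaries = [round(i * n / 8) for i in range(1, 8)]
    pos = 0
    for b in boundaries:
        cum += sum(frequencies[pos:b])
        pos = b
        index.append(cum)
    return index, total

def cumulative_estimate(index, total, n, x):
    """Estimated count of items in positions [0, x) of the bucket."""
    anchors = [0] + index + [total]
    positions = [round(i * n / 8) for i in range(9)]
    for i in range(8):
        lo, hi = positions[i], positions[i + 1]
        if lo <= x <= hi:
            frac = 0 if hi == lo else (x - lo) / (hi - lo)
            return anchors[i] + frac * (anchors[i + 1] - anchors[i])
    return float(total)

freqs = [5, 1, 0, 8, 2, 2, 7, 1, 0, 3, 4, 4, 6, 0, 2, 5]  # one 16-value bucket
index, total = build_index(freqs)
est = cumulative_estimate(index, total, len(freqs), 6)      # estimate for [0, 6)
```

A range query inside the bucket is then the difference of two such prefix estimates; the estimate is exact at the 7 stored boundaries and approximate in between, which is precisely where the paper's 4-level tree layout improves on naive uniform interpolation.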
arxiv-672500 | cs/0501021 | Large-scale lattice Boltzmann simulations of complex fluids: advances through the advent of computational grids | <|reference_start|>Large-scale lattice Boltzmann simulations of complex fluids: advances through the advent of computational grids: During the last two years the RealityGrid project has allowed us to be one of the few scientific groups involved in the development of computational grids. Since smoothly working production grids are not yet available, we have been able to substantially influence the direction of software development and grid deployment within the project. In this paper we review our results from large scale three-dimensional lattice Boltzmann simulations performed over the last two years. We describe how the proactive use of computational steering and advanced job migration and visualization techniques enabled us to do our scientific work more efficiently. The projects reported on in this paper are studies of complex fluid flows under shear or in porous media, as well as large-scale parameter searches, and studies of the self-organisation of liquid cubic mesophases. Movies are available at http://www.ica1.uni-stuttgart.de/~jens/pub/05/05-PhilTransReview.html<|reference_end|> | arxiv | @article{harting2005large-scale,
title={Large-scale lattice Boltzmann simulations of complex fluids: advances
through the advent of computational grids},
author={J. Harting, J. Chin, M. Venturoli, P.V. Coveney},
journal={Phil. Trans. R. Soc. London Series A 363 1895-1915 (2005)},
year={2005},
doi={10.1098/rsta.2005.1618},
archivePrefix={arXiv},
eprint={cs/0501021},
primaryClass={cs.DC cond-mat.other cond-mat.soft physics.flu-dyn}
} | harting2005large-scale |