corpus_id: string (7-12 chars)
paper_id: string (9-16 chars)
title: string (1-261 chars)
abstract: string (70-4.02k chars)
source: string (1 distinct value)
bibtex: string (208-20.9k chars)
citation_key: string (6-100 chars)
arxiv-671001
cs/0303010
Techniques and Applications of Computation Slicing
<|reference_start|>Techniques and Applications of Computation Slicing: Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial grade software. Many distributed systems, especially those employed in safety-critical environments, should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system, and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem which requires finding a consistent cut of a given computation that satisfies a given global predicate, if it exists. Detecting a predicate in a computation is, however, an NP-complete problem. To ameliorate the associated combinatorial explosion problem, we introduce the notion of computation slice. Formally, the slice of a computation with respect to a predicate is a (sub)computation with the least number of consistent cuts that contains all consistent cuts of the computation satisfying the predicate. To detect a predicate, rather than searching the state-space of the computation, it is much more efficient to search the state-space of the slice. We prove that the slice exists and is uniquely defined for all predicates. We present efficient slicing algorithms for several useful classes of predicates. We develop efficient heuristic algorithms for computing an approximate slice for predicates for which computing the slice is otherwise provably intractable. Our experimental results show that slicing can lead to an exponential improvement over existing techniques for predicate detection in terms of time and space.<|reference_end|>
arxiv
@article{mittal2003techniques, title={Techniques and Applications of Computation Slicing}, author={Neeraj Mittal and Vijay K. Garg}, journal={arXiv preprint arXiv:cs/0303010}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303010}, primaryClass={cs.DC cs.SE} }
mittal2003techniques
arxiv-671002
cs/0303011
Lock-free dynamic hash tables with open addressing
<|reference_start|>Lock-free dynamic hash tables with open addressing: We present an efficient lock-free algorithm for parallel accessible hash tables with open addressing, which promises more robust performance and reliability than conventional lock-based implementations. ``Lock-free'' means that it is guaranteed that always at least one process completes its operation within a bounded number of steps. For a single processor architecture our solution is as efficient as sequential hash tables. On a multiprocessor architecture this is also the case when all processors have comparable speeds. The algorithm allows processors that have widely different speeds or come to a halt. It can easily be implemented using C-like languages and requires on average only constant time for insertion, deletion or accessing of elements. The algorithm allows the hash tables to grow and shrink when needed. Lock-free algorithms are hard to design correctly, even when apparently straightforward. Ensuring the correctness of the design at the earliest possible stage is a major challenge in any responsible system development. In view of the complexity of the algorithm, we turned to the interactive theorem prover PVS for mechanical support. We employ standard deductive verification techniques to prove around 200 invariance properties of our algorithm, and describe how this is achieved with the theorem prover PVS.<|reference_end|>
arxiv
@article{gao2003lock-free, title={Lock-free dynamic hash tables with open addressing}, author={Hui Gao, Jan Friso Groote, Wim H. Hesselink}, journal={Distributed Computing 17 (2005) 21-42}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303011}, primaryClass={cs.DC cs.DS} }
gao2003lock-free
arxiv-671003
cs/0303012
The measurements, parameters and construction of Web proxy cache
<|reference_start|>The measurements, parameters and construction of Web proxy cache: The aim of this paper is an experimental study of cache systems in order to optimize proxy cache systems and to modernize construction principles. Our investigations lead to the criteria for the optimal use of storage capacity and allow the description of the basic effects of the ratio between construction parts, steady-state performance, optimal size, etc. We emphasize that the results obtained and the plan of the experiment follow from the theoretical model. Special consideration is given to the modification of the key formulas proposed by Wolman et al.<|reference_end|>
arxiv
@article{dolgikh2003the, title={The measurements, parameters and construction of Web proxy cache}, author={Dmitry Dolgikh, Andrei Sukhov}, journal={arXiv preprint arXiv:cs/0303012}, year={2003}, number={SSAU-02-13}, archivePrefix={arXiv}, eprint={cs/0303012}, primaryClass={cs.NI} }
dolgikh2003the
arxiv-671004
cs/0303013
Extending the code generation capabilities of the Together CASE tool to support Data Definition languages
<|reference_start|>Extending the code generation capabilities of the Together CASE tool to support Data Definition languages: Together is the recommended software development tool in the Atlas collaboration. The programmatic API, which provides the capability to use and augment Together's internal functionality, comprises three major components: IDE, RWI and SCI. IDE is a read-only interface used to generate custom outputs based on the information contained in a Together model. RWI allows information to be both extracted from and written to a Together model. SCI is the Source Code Interface; as the name implies, it allows working at the level of the source code. Together is extended by writing modules (Java classes) that make extensive use of the relevant API. We exploited Together's extensibility to add support for the Atlas Dictionary Language (ADL), an extended subset of OMG IDL. The implemented module (ADLModule) makes Together support ADL keywords, enables options, and generates ADL object descriptions directly from UML class diagrams. The module thoroughly accesses a Together reverse-engineered C++ project and/or design-only class diagrams, and it is general enough to allow for additional HEP-specific tailoring of the Together tool.<|reference_end|>
arxiv
@article{marino2003extending, title={Extending the code generation capabilities of the Together CASE tool to support Data Definition languages}, author={Massimo Marino}, journal={ECONFC0303241:TUJP004,2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303013}, primaryClass={cs.SE} }
marino2003extending
arxiv-671005
cs/0303014
Theoretical study of cache systems
<|reference_start|>Theoretical study of cache systems: The aim of this paper is a theoretical study of a cache system in order to optimize proxy cache systems and to modernize construction principles, including prefetching schemes. Two types of correlations, a Zipf-like distribution and normalizing conditions, play the role of the fundamental laws. A corresponding system of equations allows us to describe the basic effects such as the ratio between construction parts, steady-state performance, optimal size, long-term prefetching, etc. A modification of the fundamental laws leads to the description of new effects of document renewal in the global network. An Internet traffic caching system based on a Zipf-like distribution (ZBS) is introduced. An additional module in the cache construction provides effective prefetching by lifetime.<|reference_end|>
arxiv
@article{dolgikh2003theoretical, title={Theoretical study of cache systems}, author={Dmitry Dolgikh, Andrei Sukhov}, journal={arXiv preprint arXiv:cs/0303014}, year={2003}, number={SSAU-02-112}, archivePrefix={arXiv}, eprint={cs/0303014}, primaryClass={cs.NI} }
dolgikh2003theoretical
arxiv-671006
cs/0303015
Statistical efficiency of curve fitting algorithms
<|reference_start|>Statistical efficiency of curve fitting algorithms: We study the problem of fitting parametrized curves to noisy data. Under certain assumptions (known as Cartesian and radial functional models), we derive asymptotic expressions for the bias and the covariance matrix of the parameter estimates. We also extend Kanatani's version of the Cramer-Rao lower bound, which he proved for unbiased estimates only, to more general estimates that include many popular algorithms (most notably, the orthogonal least squares and algebraic fits). We then show that the gradient-weighted algebraic fit is statistically efficient and describe all other statistically efficient algebraic fits.<|reference_end|>
arxiv
@article{chernov2003statistical, title={Statistical efficiency of curve fitting algorithms}, author={N. Chernov and C. Lesort}, journal={arXiv preprint arXiv:cs/0303015}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303015}, primaryClass={cs.CV} }
chernov2003statistical
arxiv-671007
cs/0303016
Fast Parallel I/O on Cluster Computers
<|reference_start|>Fast Parallel I/O on Cluster Computers: Today's cluster computers suffer from slow I/O, which slows down I/O-intensive applications. We show that fast disk I/O can be achieved by operating a parallel file system over fast networks such as Myrinet or Gigabit Ethernet. In this paper, we demonstrate how the ParaStation3 communication system helps speed-up the performance of parallel I/O on clusters using the open source parallel virtual file system (PVFS) as testbed and production system. We will describe the set-up of PVFS on the Alpha-Linux-Cluster-Engine (ALiCE) located at Wuppertal University, Germany. Benchmarks on ALiCE achieve write-performances of up to 1 GB/s from a 32-processor compute-partition to a 32-processor PVFS I/O-partition, outperforming known benchmark results for PVFS on the same network by more than a factor of 2. Read-performance from buffer-cache reaches up to 2.2 GB/s. Our benchmarks are giant, I/O-intensive eigenmode problems from lattice quantum chromodynamics, demonstrating stability and performance of PVFS over Parastation in large-scale production runs.<|reference_end|>
arxiv
@article{duessel2003fast, title={Fast Parallel I/O on Cluster Computers}, author={Thomas Duessel, Norbert Eicker, Florin Isaila, Thomas Lippert, Thomas Moschny, Hartmut Neff, Klaus Schilling, Walter Tichy}, journal={arXiv preprint arXiv:cs/0303016}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303016}, primaryClass={cs.DC cs.AR hep-lat} }
duessel2003fast
arxiv-671008
cs/0303017
A Neural Network Assembly Memory Model with Maximum-Likelihood Recall and Recognition Properties
<|reference_start|>A Neural Network Assembly Memory Model with Maximum-Likelihood Recall and Recognition Properties: It has been shown that a neural network model recently proposed to describe basic memory performance is based on a ternary/binary coding/decoding algorithm, which leads to a new neural network assembly memory model (NNAMM) providing maximum-likelihood recall/recognition properties and implying a new memory unit architecture with a Hopfield two-layer network, an N-channel time gate, an auxiliary reference memory, and two nested feedback loops. For the data coding used, conditions are found under which a version of the Hopfield network implements a maximum-likelihood convolutional decoding algorithm and, simultaneously, a linear statistical classifier of arbitrary binary vectors with respect to the Hamming distance between the vector analyzed and a given reference vector. Beyond basic memory performance, the model explicitly describes the time dependence of memory trace retrieval and allows for one-trial learning, metamemory simulation, generalized knowledge representation, and a distinct description of conscious and unconscious mental processes. It has been shown that an assembly memory unit may be viewed as a model of the smallest inseparable part, or 'atom', of consciousness. Some nontraditional neurobiological backgrounds (dynamic spatiotemporal synchrony, properties of time-dependent and error-detector neurons, early precise spike firing, etc.) and the model's application to interdisciplinary problems from different scientific fields are discussed.<|reference_end|>
arxiv
@article{gopych2003a, title={A Neural Network Assembly Memory Model with Maximum-Likelihood Recall and Recognition Properties}, author={Petro M. Gopych}, journal={arXiv preprint arXiv:cs/0303017}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303017}, primaryClass={cs.AI cs.IR cs.NE q-bio.NC q-bio.QM} }
gopych2003a
arxiv-671009
cs/0303018
Multi-target particle filtering for the probability hypothesis density
<|reference_start|>Multi-target particle filtering for the probability hypothesis density: When tracking a large number of targets, it is often computationally expensive to represent the full joint distribution over target states. In cases where the targets move independently, each target can instead be tracked with a separate filter. However, this leads to a model-data association problem. Another approach to solve the problem with computational complexity is to track only the first moment of the joint distribution, the probability hypothesis density (PHD). The integral of this distribution over any area S is the expected number of targets within S. Since no record of object identity is kept, the model-data association problem is avoided. The contribution of this paper is a particle filter implementation of the PHD filter mentioned above. This PHD particle filter is applied to tracking of multiple vehicles in terrain, a non-linear tracking problem. Experiments show that the filter can track a changing number of vehicles robustly, achieving near-real-time performance.<|reference_end|>
arxiv
@article{sidenbladh2003multi-target, title={Multi-target particle filtering for the probability hypothesis density}, author={Hedvig Sidenbladh}, journal={arXiv preprint arXiv:cs/0303018}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303018}, primaryClass={cs.AI} }
sidenbladh2003multi-target
arxiv-671010
cs/0303019
An Effective Decision Procedure for Linear Arithmetic with Integer and Real Variables
<|reference_start|>An Effective Decision Procedure for Linear Arithmetic with Integer and Real Variables: This paper considers finite-automata based algorithms for handling linear arithmetic with both real and integer variables. Previous work has shown that this theory can be dealt with by using finite automata on infinite words, but this involves some difficult and delicate to implement algorithms. The contribution of this paper is to show, using topological arguments, that only a restricted class of automata on infinite words are necessary for handling real and integer linear arithmetic. This allows the use of substantially simpler algorithms, which have been successfully implemented.<|reference_end|>
arxiv
@article{boigelot2003an, title={An Effective Decision Procedure for Linear Arithmetic with Integer and Real Variables}, author={Bernard Boigelot, Sebastien Jodogne and Pierre Wolper}, journal={arXiv preprint arXiv:cs/0303019}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303019}, primaryClass={cs.LO} }
boigelot2003an
arxiv-671011
cs/0303020
Complex Systems
<|reference_start|>Complex Systems: The study of Complex Systems is considered by many to be a new scientific field, and is distinguished by being a discipline that has applications within many separate areas of scientific study. The study of Neural Networks, Traffic Patterns, Artificial Intelligence, Social Systems, and many other scientific areas can all be considered to fall within the realm of Complex Systems, and can be studied from this new perspective. The advent of more capable computer systems has allowed these systems to be simulated and modeled with far greater ease, and new understanding of computer modeling approaches has allowed the fledgling science to be studied as never before. The preliminary focus of this paper will be to provide a general overview of the science of Complex Systems, including terminology, definitions, history, and examples. I will attempt to look at some of the most important trends in different areas of research, and give a general overview of research methods that have been used in parallel with computer modeling. Also, I will further define the areas of the science that concern themselves with computer modeling and simulation, and I will attempt to make it clear why the science only came into its own when the proper modeling and simulation tools were finally available. In addition, although there seems to be general agreement between different authors and institutes regarding the generalities of the study, there are some differences in terminology and methodology. I have attempted in this paper to bring as many elements together as possible, as far as the scope of the subject is concerned, without losing focus by studying Complex System techniques that are bound to one particular area of scientific study, unless that area is that of computer modeling.<|reference_end|>
arxiv
@article{smith2003complex, title={Complex Systems}, author={Jeffrey B. Smith}, journal={arXiv preprint arXiv:cs/0303020}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303020}, primaryClass={cs.NI} }
smith2003complex
arxiv-671012
cs/0303021
A Development Calculus for Specifications
<|reference_start|>A Development Calculus for Specifications: A first-order inference system, called the R-calculus, is defined to develop specifications. It is used to eliminate the laws which are not consistent with the user's requirements. The R-calculus consists of structural rules, an axiom, a cut rule, and rules for the logical connectives. Some examples are given to demonstrate the usage of the R-calculus. The reachability and completeness properties of the R-calculus are formally defined and proved.<|reference_end|>
arxiv
@article{li2003a, title={A Development Calculus for Specifications}, author={Wei Li}, journal={arXiv preprint arXiv:cs/0303021}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303021}, primaryClass={cs.LO cs.PL} }
li2003a
arxiv-671013
cs/0303022
Probabilistic behavior of hash tables
<|reference_start|>Probabilistic behavior of hash tables: We extend a result of Goldreich and Ron about estimating the collision probability of a hash function. Their estimate has a polynomial tail. We prove that when the load factor is greater than a certain constant, the estimator has a gaussian tail. As an application we find an estimate of an upper bound for the average search time in hashing with chaining, for a particular user (we allow the overall key distribution to be different from the key distribution of a particular user). The estimator has a gaussian tail.<|reference_end|>
arxiv
@article{hong2003probabilistic, title={Probabilistic behavior of hash tables}, author={Dawei Hong, Jean-Camille Birget, Shushuang Man}, journal={arXiv preprint arXiv:cs/0303022}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303022}, primaryClass={cs.DS cs.DB} }
hong2003probabilistic
arxiv-671014
cs/0303023
Conferences with Internet Web-Casting as Binding Events in a Global Brain: Example Data From Complexity Digest
<|reference_start|>Conferences with Internet Web-Casting as Binding Events in a Global Brain: Example Data From Complexity Digest: The likeness of the Internet to the human brain has led to the metaphor of the world-wide computer network as a `Global Brain'. We consider conferences as 'binding events' in the Global Brain that can lead to metacognitive structures on a global scale. One of the critical factors for that phenomenon to happen (as in the biological brain) is the time-scale characteristic of the information exchange. In an electronic newsletter, the Complexity Digest (ComDig), we include webcasts of audio (mp3) and video (asf) files from international conferences in the weekly ComDig issues. Here we present the time variation of the weekly rate of accesses to the conference files. From those empirical data it appears that the characteristic time-scales related to accessing web-casting files are of the order of a few weeks. This is at least an order of magnitude shorter than the characteristic time-scales of peer-reviewed publications and conference proceedings. We predict that this observation will have profound implications for the nature of future conference proceedings, presumably in electronic form.<|reference_end|>
arxiv
@article{das2003conferences, title={Conferences with Internet Web-Casting as Binding Events in a Global Brain: Example Data From Complexity Digest}, author={A. Das, G. Mayer-Kress, C. Gershenson, P. Das}, journal={arXiv preprint arXiv:cs/0303023}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303023}, primaryClass={cs.NI cs.AI} }
das2003conferences
arxiv-671015
cs/0303024
Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging
<|reference_start|>Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging: We discuss design techniques for catadioptric sensors that realize given projections. In general, these problems do not have solutions, but approximate solutions may often be found that are visually acceptable. There are several methods to approach this problem, but here we focus on what we call the ``vector field approach''. An application is given where a true panoramic mirror is derived, i.e. a mirror that yields a cylindrical projection to the viewer without any digital unwarping.<|reference_end|>
arxiv
@article{hicks2003differential, title={Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging}, author={R. Andrew Hicks}, journal={arXiv preprint arXiv:cs/0303024}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303024}, primaryClass={cs.CV cs.RO} }
hicks2003differential
arxiv-671016
cs/0303025
Algorithmic Clustering of Music
<|reference_start|>Algorithmic Clustering of Music: We present a fully automatic method for music classification, based only on compression of strings that represent the music pieces. The method uses no background knowledge about music whatsoever: it is completely general and can, without change, be used in different areas like linguistic classification and genomics. It is based on an ideal theory of the information content in individual objects (Kolmogorov complexity), information distance, and a universal similarity metric. Experiments show that the method distinguishes reasonably well between various musical genres and can even cluster pieces by composer.<|reference_end|>
arxiv
@article{cilibrasi2003algorithmic, title={Algorithmic Clustering of Music}, author={Rudi Cilibrasi (CWI), Paul Vitanyi (CWI and University of Amsterdam), Ronald de Wolf (CWI)}, journal={arXiv preprint arXiv:cs/0303025}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303025}, primaryClass={cs.SD cs.LG physics.data-an} }
cilibrasi2003algorithmic
arxiv-671017
cs/0303026
Preserving Peer Replicas By Rate-Limited Sampled Voting in LOCKSS
<|reference_start|>Preserving Peer Replicas By Rate-Limited Sampled Voting in LOCKSS: The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in "opinion polls." Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected.<|reference_end|>
arxiv
@article{maniatis2003preserving, title={Preserving Peer Replicas By Rate-Limited Sampled Voting in LOCKSS}, author={Petros Maniatis, Mema Roussopoulos, TJ Giuli, David S. H. Rosenthal, Mary Baker, Yanto Muliadi}, journal={arXiv preprint arXiv:cs/0303026}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303026}, primaryClass={cs.DC cs.DL} }
maniatis2003preserving
arxiv-671018
cs/0303027
Numerical Coverage Estimation for the Symbolic Simulation of Real-Time Systems
<|reference_start|>Numerical Coverage Estimation for the Symbolic Simulation of Real-Time Systems: Three numerical coverage metrics for the symbolic simulation of dense-time systems and their estimation methods are presented. Special techniques to derive numerical estimations of dense-time state-spaces have also been developed. Properties of the metrics are also discussed with respect to four criteria. Implementation and experiments are then reported.<|reference_end|>
arxiv
@article{wang2003numerical, title={Numerical Coverage Estimation for the Symbolic Simulation of Real-Time Systems}, author={Farn Wang, Geng-Dian Hwang and Fang Yu}, journal={arXiv preprint arXiv:cs/0303027}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303027}, primaryClass={cs.SE cs.SC} }
wang2003numerical
arxiv-671019
cs/0303028
The missing links in the BGP-based AS connectivity maps
<|reference_start|>The missing links in the BGP-based AS connectivity maps: A number of recent studies of the Internet topology at the autonomous systems level (AS graph) are based on the BGP-based AS connectivity maps (original maps). The so-called extended maps use additional data sources and contain more complete pictures of the AS graph. In this paper, we compare an original map, an extended map and a synthetic map generated by the Barabasi-Albert model. We examine the recently reported rich-club phenomenon, alternative routing paths and attack tolerance. We point out that the majority of the missing links of the original maps are the connecting links between rich nodes (nodes with large numbers of links) of the extended maps. We show that the missing links are relevant because links between rich nodes can be crucial for the network structure.<|reference_end|>
arxiv
@article{zhou2003the, title={The missing links in the BGP-based AS connectivity maps}, author={Shi Zhou and Raul J. Mondragon}, journal={arXiv preprint arXiv:cs/0303028}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303028}, primaryClass={cs.NI} }
zhou2003the
arxiv-671020
cs/0303029
Towards Modelling The Internet Topology - The Interactive Growth Model
<|reference_start|>Towards Modelling The Internet Topology - The Interactive Growth Model: The Internet topology at the Autonomous Systems level (AS graph) has a power-law degree distribution and a tier structure. In this paper, we introduce the Interactive Growth (IG) model, based on the joint growth of new nodes and new links. This simple and dynamic model compares favourably with other Internet power-law topology generators because it not only closely resembles the degree distribution of the AS graph, but also accurately matches the hierarchical structure, which is measured by the recently reported rich-club phenomenon.<|reference_end|>
arxiv
@article{zhou2003towards, title={Towards Modelling The Internet Topology - The Interactive Growth Model}, author={Shi Zhou and Raul J. Mondragon}, journal={Published in Proc. of the 18th International Teletraffic Congress (Elsevier's Teletraffic Science and Engineering series, vol.5a, p.121) 2003 http://www.elsevier.com/wps/find/bookdescription.cws_home/680828/description}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303029}, primaryClass={cs.NI} }
zhou2003towards
arxiv-671021
cs/0303030
Analyzing and modelling the AS-level Internet topology
<|reference_start|>Analyzing and modelling the AS-level Internet topology: Recently we introduced the rich-club phenomenon as a quantitative metric to characterize the tier structure of the Autonomous Systems level Internet topology (AS graph), and we proposed the Interactive Growth (IG) model, which closely matches the degree distribution and hierarchical structure of the AS graph and compares favourably with other available Internet power-law topology generators. Our research was based on the widely used BGP AS graph obtained from the Oregon BGP routing tables. Researchers argue that the Traceroute AS graph, extracted from the traceroute data collected by CAIDA's active probing tool, Skitter, is more complete and reliable. To be prudent, in this paper we analyze and compare the topological structures of the Traceroute AS graph and the BGP AS graph. We also compare them with two synthetic Internet topologies generated by the IG model and the well-known Barabasi-Albert (BA) model. The results show that both AS graphs exhibit the rich-club phenomenon and have similar tier structures, which are closely matched by the IG model; however, the BA model does not show the rich-club phenomenon at all.<|reference_end|>
arxiv
@article{zhou2003analyzing, title={Analyzing and modelling the AS-level Internet topology}, author={Shi Zhou, Raul J. Mondragon}, journal={arXiv preprint arXiv:cs/0303030}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303030}, primaryClass={cs.NI} }
zhou2003analyzing
arxiv-671022
cs/0303031
A Bird's eye view of Matrix Distributed Processing
<|reference_start|>A Bird's eye view of Matrix Distributed Processing: We present Matrix Distributed Processing, a C++ library for fast development of efficient parallel algorithms. MDP is based on MPI and consists of a collection of C++ classes and functions such as lattice, site and field. Once an algorithm is written using these components the algorithm is automatically parallel and no explicit call to communication functions is required. MDP is particularly suitable for implementing parallel solvers for multi-dimensional differential equations and mesh-like problems.<|reference_end|>
arxiv
@article{di pierro2003a, title={A Bird's eye view of Matrix Distributed Processing}, author={Massimo Di Pierro}, journal={Proceedings of the ICCSA 2003 Conference}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303031}, primaryClass={cs.DC cs.CE cs.DM cs.MS hep-lat physics.comp-ph} }
di pierro2003a
arxiv-671023
cs/0303032
Recent Results on No-Free-Lunch Theorems for Optimization
<|reference_start|>Recent Results on No-Free-Lunch Theorems for Optimization: The sharpened No-Free-Lunch-theorem (NFL-theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.) and each target function in F is equally likely. In this paper, we first summarize some consequences of this theorem, which have been proven recently: The average number of evaluations needed to find a desirable (e.g., optimal) solution can be calculated; the number of subsets c.u.p. can be neglected compared to the overall number of possible subsets; and problem classes relevant in practice are not likely to be c.u.p. Second, as the main result, the NFL-theorem is extended. Necessary and sufficient conditions for NFL-results to hold are given for arbitrary, non-uniform distributions of target functions. This yields the most general NFL-theorem for optimization presented so far.<|reference_end|>
arxiv
@article{igel2003recent, title={Recent Results on No-Free-Lunch Theorems for Optimization}, author={Christian Igel and Marc Toussaint}, journal={arXiv preprint arXiv:cs/0303032}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303032}, primaryClass={cs.NE math.OC nlin.AO} }
igel2003recent
arxiv-671024
cs/0303033
A Digital Preservation Appliance Based on OpenBSD
<|reference_start|>A Digital Preservation Appliance Based on OpenBSD: The LOCKSS program has developed and deployed in a world-wide test a system for preserving access to academic journals published on the Web. The fundamental problem for any digital preservation system is that it must be affordable for the long term. To reduce the cost of ownership, the LOCKSS system uses generic PC hardware, open source software, and peer-to-peer technology. It is packaged as a ``network appliance'', a single-function box that can be connected to the Internet, configured and left alone to do its job with minimal monitoring or administration. The first version of this system was based on a Linux boot floppy. After three years of testing it was replaced by a second version, based on OpenBSD and booting from CD-ROM. We focus in this paper on the design, implementation and deployment of a network appliance based on an open source operating system. We provide an overview of the LOCKSS application and describe the experience of deploying and supporting its first version. We list the requirements we took from this to drive the design of the second version, describe how we satisfied them in the OpenBSD environment, and report on the initial<|reference_end|>
arxiv
@article{rosenthal2003a, title={A Digital Preservation Appliance Based on OpenBSD}, author={David S. H. Rosenthal}, journal={Proceedings of BSDcon, 2003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0303033}, primaryClass={cs.DC cs.DL} }
rosenthal2003a
arxiv-671025
cs/0304001
Design Guidelines for Landmarks to Support Navigation in Virtual Environments
<|reference_start|>Design Guidelines for Landmarks to Support Navigation in Virtual Environments: Unfamiliar, large-scale virtual environments are difficult to navigate. This paper presents design guidelines to ease navigation in such virtual environments. The guidelines presented here focus on the design and placement of landmarks in virtual environments. Moreover, the guidelines are based primarily on the extensive empirical literature on navigation in the real world. A rationale for this approach is provided by the similarities between navigational behavior in real and virtual environments.<|reference_end|>
arxiv
@article{vinson2003design, title={Design Guidelines for Landmarks to Support Navigation in Virtual Environments}, author={Norman G. Vinson}, journal={Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit, p.278-285, May 15-20, 1999, Pittsburgh, Pennsylvania, United States}, year={2003}, number={NRC 43578}, archivePrefix={arXiv}, eprint={cs/0304001}, primaryClass={cs.HC} }
vinson2003design
arxiv-671026
cs/0304002
The Mad Hatter's Cocktail Party: A Social Mobile Audio Space Supporting Multiple Simultaneous Conversations
<|reference_start|>The Mad Hatter's Cocktail Party: A Social Mobile Audio Space Supporting Multiple Simultaneous Conversations: This paper presents a mobile audio space intended for use by gelled social groups. In face-to-face interactions in such social groups, conversational floors change frequently, e.g., two participants split off to form a new conversational floor, a participant moves from one conversational floor to another, etc. To date, audio spaces have provided little support for such dynamic regroupings of participants, either requiring that the participants explicitly specify with whom they wish to talk or simply presenting all participants as though they are in a single floor. By contrast, the audio space described here monitors participant behavior to identify conversational floors as they emerge. The system dynamically modifies the audio delivered to each participant to enhance the salience of the participants with whom they are currently conversing. We report a user study of the system, focusing on conversation analytic results.<|reference_end|>
arxiv
@article{aoki2003the, title={The Mad Hatter's Cocktail Party: A Social Mobile Audio Space Supporting Multiple Simultaneous Conversations}, author={Paul M. Aoki, Matthew Romaine, Margaret H. Szymanski, James D. Thornton, Daniel Wilson, Allison Woodruff}, journal={Proc. ACM SIGCHI Conf. on Human Factors in Computing Systems, Ft. Lauderdale, FL, Apr. 2003, 425-432. ACM Press.}, year={2003}, doi={10.1145/642611.642686}, archivePrefix={arXiv}, eprint={cs/0304002}, primaryClass={cs.HC cs.SD} }
aoki2003the
arxiv-671027
cs/0304003
TCTL Inevitability Analysis of Dense-time Systems
<|reference_start|>TCTL Inevitability Analysis of Dense-time Systems: Inevitability properties in branching temporal logics are of the syntax forall eventually \phi, where \phi is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.<|reference_end|>
arxiv
@article{wang2003tctl, title={TCTL Inevitability Analysis of Dense-time Systems}, author={Farn Wang, Geng-Dian Hwang and Fang Yu}, journal={arXiv preprint arXiv:cs/0304003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304003}, primaryClass={cs.SC} }
wang2003tctl
arxiv-671028
cs/0304004
Quasi-Optimal Arithmetic for Quaternion Polynomials
<|reference_start|>Quasi-Optimal Arithmetic for Quaternion Polynomials: Fast algorithms for arithmetic on real or complex polynomials are well-known and have proven to be not only asymptotically efficient but also very practical. Based on the Fast Fourier Transform (FFT), they can, for instance, multiply two polynomials of degree up to N or multi-evaluate one at N points simultaneously in quasi-linear time O(N polylog N). An extension to (and in fact the mere definition of) polynomials over the skew-field H of quaternions is promising but still missing. The present work proposes three such definitions, which coincide in the commutative case but turn out to differ for H, each one satisfying some desirable properties while lacking others. For each notion we devise algorithms for the corresponding arithmetic; these are quasi-optimal in that their running times match lower complexity bounds up to polylogarithmic factors.<|reference_end|>
arxiv
@article{ziegler2003quasi-optimal, title={Quasi-Optimal Arithmetic for Quaternion Polynomials}, author={Martin Ziegler}, journal={pp.705-715 in Proc.14th ISAAC (2003), Springer LNCS 2906}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304004}, primaryClass={cs.SC} }
ziegler2003quasi-optimal
arxiv-671029
cs/0304005
Quantum Computation and Lattice Problems
<|reference_start|>Quantum Computation and Lattice Problems: We present the first explicit connection between quantum computation and lattice problems. Namely, we show a solution to the Unique Shortest Vector Problem (SVP) under the assumption that there exists an algorithm that solves the hidden subgroup problem on the dihedral group by coset sampling. Moreover, we solve the hidden subgroup problem on the dihedral group by using an average case subset sum routine. By combining the two results, we get a quantum reduction from $\Theta(n^{2.5})$-unique-SVP to the average case subset sum problem.<|reference_end|>
arxiv
@article{regev2003quantum, title={Quantum Computation and Lattice Problems}, author={Oded Regev}, journal={arXiv preprint arXiv:cs/0304005}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304005}, primaryClass={cs.DS} }
regev2003quantum
arxiv-671030
cs/0304006
Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment
<|reference_start|>Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment: We address the text-to-text generation problem of sentence-level paraphrasing -- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.<|reference_end|>
arxiv
@article{barzilay2003learning, title={Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment}, author={Regina Barzilay and Lillian Lee}, journal={arXiv preprint arXiv:cs/0304006}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304006}, primaryClass={cs.CL} }
barzilay2003learning
arxiv-671031
cs/0304007
A Method for Clustering Web Attacks Using Edit Distance
<|reference_start|>A Method for Clustering Web Attacks Using Edit Distance: Cluster analysis often serves as the initial step in the process of data classification. In this paper, the problem of clustering input data of different lengths is considered. The edit distance, defined as the minimum number of elementary edit operations needed to transform one vector into another, is used. A heuristic for clustering unequal-length vectors, analogous to the well-known k-means algorithm, is described and analyzed. This heuristic determines cluster centroids by expanding shorter vectors to the lengths of the longest ones in each cluster in a specific way. It is shown that the time and space complexities of the heuristic are linear in the number of input vectors. Experimental results on real data originating from a system for classification of Web attacks are given.<|reference_end|>
arxiv
@article{petrovic2003a, title={A Method for Clustering Web Attacks Using Edit Distance}, author={Slobodan Petrovic, Gonzalo Alvarez}, journal={arXiv preprint arXiv:cs/0304007}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304007}, primaryClass={cs.IR cs.AI cs.CR} }
petrovic2003a
arxiv-671032
cs/0304008
A Physics-Free Introduction to the Quantum Computation Model
<|reference_start|>A Physics-Free Introduction to the Quantum Computation Model: This article defines and proves basic properties of the standard quantum circuit model of computation. The model is developed abstractly in close analogy with (classical) deterministic and probabilistic circuits, without recourse to any physical concepts or principles. It is intended as a primer for theoretical computer scientists who do not know--and perhaps do not care to know--any physics.<|reference_end|>
arxiv
@article{fenner2003a, title={A Physics-Free Introduction to the Quantum Computation Model}, author={Stephen A. Fenner}, journal={Bulletin of the European Association for Theoretical Computer Science, 79(Feb 2003), 69-85}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304008}, primaryClass={cs.CC quant-ph} }
fenner2003a
arxiv-671033
cs/0304009
Stochastic Volatility in a Quantitative Model of Stock Market Returns
<|reference_start|>Stochastic Volatility in a Quantitative Model of Stock Market Returns: Standard quantitative models of the stock market predict a log-normal distribution for stock returns (Bachelier 1900, Osborne 1959), but it is recognised (Fama 1965) that empirical data, in comparison with a Gaussian, exhibit leptokurtosis (it has more probability mass in its tails and centre) and fat tails (probabilities of extreme events are underestimated). Different attempts to explain this departure from normality have coexisted. In particular, since one of the strong assumptions of the Gaussian model concerns the volatility, considered finite and constant, the new models were built on a non finite (Mandelbrot 1963) or non constant (Cox, Ingersoll and Ross 1985) volatility. We investigate in this thesis a very recent model (Dragulescu et al. 2002) based on a Brownian motion process for the returns, and a stochastic mean-reverting process for the volatility. In this model, the forward Kolmogorov equation that governs the time evolution of returns is solved analytically. We test this new theory against different stock indexes (Dow Jones Industrial Average, Standard and Poor's and Footsie), over different periods (from 20 to 105 years). Our aim is to compare this model with the classical Gaussian and with a simple Neural Network, used as a benchmark. We perform the usual statistical tests on the kurtosis and tails of the expected distributions, paying particular attention to the outliers. As claimed by the authors, the new model outperforms the Gaussian for any time lag, but is artificially too complex for medium and low frequencies, where the Gaussian is preferable. Moreover this model is still rejected for high frequencies, at a 0.05 level of significance, due to the kurtosis, incorrectly handled.<|reference_end|>
arxiv
@article{daniel2003stochastic, title={Stochastic Volatility in a Quantitative Model of Stock Market Returns}, author={Gilles Daniel}, journal={arXiv preprint arXiv:cs/0304009}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304009}, primaryClass={cs.CE} }
daniel2003stochastic
arxiv-671034
cs/0304010
Efficient linear feedback shift registers with maximal period
<|reference_start|>Efficient linear feedback shift registers with maximal period: We introduce and analyze an efficient family of linear feedback shift registers (LFSR's) with maximal period. This family is word-oriented and is suitable for implementation in software, thus provides a solution to a recent challenge posed in FSE '94. The classical theory of LFSR's is extended to provide efficient algorithms for generation of irreducible and primitive LFSR's of this new type.<|reference_end|>
arxiv
@article{tsaban2003efficient, title={Efficient linear feedback shift registers with maximal period}, author={Boaz Tsaban and Uzi Vishne}, journal={Finite Fields and their Applications 8 (2002), 256--267}, year={2003}, doi={10.1006/ffta.2001.0339}, archivePrefix={arXiv}, eprint={cs/0304010}, primaryClass={cs.CR math.NT} }
tsaban2003efficient
arxiv-671035
cs/0304011
Embedded Reflection Mapping
<|reference_start|>Embedded Reflection Mapping: Environment maps are used to simulate reflections off curved objects. We present a technique to reflect a user, or a group of users, in a real environment onto a virtual object in a virtual reality application, using live video feeds from a set of cameras in real time. Our setup can be used in a variety of environments, ranging from outdoor to indoor scenes.<|reference_end|>
arxiv
@article{anderson2003embedded, title={Embedded Reflection Mapping}, author={Paul Anderson and Goncalo Carvalho}, journal={arXiv preprint arXiv:cs/0304011}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304011}, primaryClass={cs.GR} }
anderson2003embedded
arxiv-671036
cs/0304012
Individual Communication Complexity
<|reference_start|>Individual Communication Complexity: We initiate the theory of communication complexity of individual inputs held by the agents, rather than worst-case or average-case. We consider total, partial, and partially correct protocols, one-way versus two-way, with and without help bits. The results are expressed in terms of Kolmogorov complexity.<|reference_end|>
arxiv
@article{buhrman2003individual, title={Individual Communication Complexity}, author={Harry Buhrman (CWI and University of Amsterdam), Hartmut Klauck (IAS, Princeton), Nikolai Vereshchagin (Moscow University), Paul Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0304012}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304012}, primaryClass={cs.CC cs.DC} }
buhrman2003individual
arxiv-671037
cs/0304013
Hidden Polynomial(s) Cryptosystems
<|reference_start|>Hidden Polynomial(s) Cryptosystems: We propose public-key cryptosystems with public key a system of polynomial equations, algebraic or differential, and private key a single polynomial or a small-size ideal. We set up probabilistic encryption, signature, and signcryption protocols.<|reference_end|>
arxiv
@article{toli2003hidden, title={Hidden Polynomial(s) Cryptosystems}, author={Ilia Toli}, journal={arXiv preprint arXiv:cs/0304013}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304013}, primaryClass={cs.CR cs.SC} }
toli2003hidden
arxiv-671038
cs/0304014
Commitment Capacity of Discrete Memoryless Channels
<|reference_start|>Commitment Capacity of Discrete Memoryless Channels: In extension of the bit commitment task and following work initiated by Crepeau and Kilian, we introduce and solve the problem of characterising the optimal rate at which a discrete memoryless channel can be used for bit commitment. It turns out that the answer is very intuitive: it is the maximum equivocation of the channel (after removing trivial redundancy), even when unlimited noiseless bidirectional side communication is allowed. By a well-known reduction, this result provides a lower bound on the channel's capacity for implementing coin tossing, which we conjecture to be an equality. The method of proving this relates the problem to Wyner's wire--tap channel in an amusing way. We also discuss extensions to quantum channels.<|reference_end|>
arxiv
@article{winter2003commitment, title={Commitment Capacity of Discrete Memoryless Channels}, author={Andreas Winter, Anderson C. A. Nascimento, Hideki Imai}, journal={Proc. 9th Cirencester Crypto and Coding Conf., LNCS 2989, pp 35-51, Springer, Berlin 2003.}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304014}, primaryClass={cs.CR quant-ph} }
winter2003commitment
arxiv-671039
cs/0304015
A Performance Study of Monitoring and Information Services for Distributed Systems
<|reference_start|>A Performance Study of Monitoring and Information Services for Distributed Systems: Monitoring and information services form a key component of a distributed system, or Grid. A quantitative study of such services can aid in understanding the performance limitations, advise in the deployment of the systems, and help evaluate future development work. To this end, we study the performance of three monitoring and information services for distributed systems: the Globus Toolkit's Monitoring and Discovery Service (MDS), the European Data Grid Relational Grid Monitoring Architecture (R-GMA), and Hawkeye, part of the Condor project. We perform experiments to test their scalability with respect to number of users, number of resources, and amount of data collected. Our study shows that each approach has different behaviors, often due to their different design goals. In the four sets of experiments we conducted to evaluate the performance of the service components under different circumstances, we found a strong advantage to caching or prefetching the data, as well as the need to have primary components at well connected sites due to high load seen by all systems.<|reference_end|>
arxiv
@article{zhang2003a, title={A Performance Study of Monitoring and Information Services for Distributed Systems}, author={Xuehai Zhang, Jeffrey Freschl, and Jennifer M. Schopf}, journal={arXiv preprint arXiv:cs/0304015}, year={2003}, number={Preprint ANL/MCS-P1040-0403}, archivePrefix={arXiv}, eprint={cs/0304015}, primaryClass={cs.PF} }
zhang2003a
arxiv-671040
cs/0304016
Symmetric and anti-symmetric quantum functions
<|reference_start|>Symmetric and anti-symmetric quantum functions: This paper introduces and analyzes symmetric and anti-symmetric quantum binary functions. Generally, such functions uniquely convert a given computational basis state into a different basis state, but with either a plus or a minus sign. Such functions may serve along with a constant function (in a Deutsch-Jozsa type of algorithm) to provide 2**n deterministic qubit combinations (for n qubits) instead of just one.<|reference_end|>
arxiv
@article{burger2003symmetric, title={Symmetric and anti-symmetric quantum functions}, author={J. R. Burger}, journal={arXiv preprint arXiv:cs/0304016}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304016}, primaryClass={cs.OH quant-ph} }
burger2003symmetric
arxiv-671041
cs/0304017
Ground Canonicity
<|reference_start|>Ground Canonicity: We explore how different proof orderings induce different notions of saturation. We relate completion, paramodulation, saturation, redundancy elimination, and rewrite system reduction to proof orderings.<|reference_end|>
arxiv
@article{dershowitz2003ground, title={Ground Canonicity}, author={Nachum Dershowitz}, journal={arXiv preprint arXiv:cs/0304017}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304017}, primaryClass={cs.LO} }
dershowitz2003ground
arxiv-671042
cs/0304018
Quasiconvex Analysis of Backtracking Algorithms
<|reference_start|>Quasiconvex Analysis of Backtracking Algorithms: We consider a class of multivariate recurrences frequently arising in the worst case analysis of Davis-Putnam-style exponential time backtracking algorithms for NP-hard problems. We describe a technique for proving asymptotic upper bounds on these recurrences, by using a suitable weight function to reduce the problem to that of solving univariate linear recurrences; show how to use quasiconvex programming to determine the weight function yielding the smallest upper bound; and prove that the resulting upper bounds are within a polynomial factor of the true asymptotics of the recurrence. We develop and implement a multiple-gradient descent algorithm for the resulting quasiconvex programs, using a real-number arithmetic package for guaranteed accuracy of the computed worst case time bounds.<|reference_end|>
arxiv
@article{eppstein2003quasiconvex, title={Quasiconvex Analysis of Backtracking Algorithms}, author={David Eppstein}, journal={arXiv preprint arXiv:cs/0304018}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304018}, primaryClass={cs.DS cs.CG math.CO} }
eppstein2003quasiconvex
arxiv-671043
cs/0304019
Blind Normalization of Speech From Different Channels
<|reference_start|>Blind Normalization of Speech From Different Channels: We show how to construct a channel-independent representation of speech that has propagated through a noisy reverberant channel. This is done by blindly rescaling the cepstral time series by a non-linear function, with the form of this scale function being determined by previously encountered cepstra from that channel. The rescaled form of the time series is an invariant property of it in the following sense: it is unaffected if the time series is transformed by any time-independent invertible distortion. Because a linear channel with stationary noise and impulse response transforms cepstra in this way, the new technique can be used to remove the channel dependence of a cepstral time series. In experiments, the method achieved greater channel-independence than cepstral mean normalization, and it was comparable to the combination of cepstral mean normalization and spectral subtraction, despite the fact that no measurements of channel noise or reverberations were required (unlike spectral subtraction).<|reference_end|>
arxiv
@article{levin2003blind, title={Blind Normalization of Speech From Different Channels}, author={David N. Levin}, journal={arXiv preprint arXiv:cs/0304019}, year={2003}, doi={10.1121/1.1755235}, archivePrefix={arXiv}, eprint={cs/0304019}, primaryClass={cs.CL} }
levin2003blind
arxiv-671044
cs/0304020
A direct sum theorem in communication complexity via message compression
<|reference_start|>A direct sum theorem in communication complexity via message compression: We prove lower bounds for the direct sum problem for two-party bounded error randomised multiple-round communication protocols. Our proofs use the notion of information cost of a protocol, as defined by Chakrabarti, Shi, Wirth and Yao and refined further by Bar-Yossef, Jayram, Kumar and Sivakumar. Our main technical result is a `compression' theorem saying that, for any probability distribution $\mu$ over the inputs, a $k$-round private coin bounded error protocol for a function $f$ with information cost $c$ can be converted into a $k$-round deterministic protocol for $f$ with bounded distributional error and communication cost $O(kc)$. We prove this result using a substate theorem about relative entropy and a rejection sampling argument. Our direct sum result follows from this `compression' result via elementary information theoretic arguments. We also consider the direct sum problem in quantum communication. Using a probabilistic argument, we show that messages cannot be compressed in this manner even if they carry small information. Hence, new techniques may be necessary to tackle the direct sum problem in quantum communication.<|reference_end|>
arxiv
@article{jain2003a, title={A direct sum theorem in communication complexity via message compression}, author={Rahul Jain, Jaikumar Radhakrishnan, Pranab Sen}, journal={arXiv preprint arXiv:cs/0304020}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304020}, primaryClass={cs.CC} }
jain2003a
arxiv-671045
cs/0304021
Model Checking for a Class of Weighted Automata
<|reference_start|>Model Checking for a Class of Weighted Automata: A large number of different model checking approaches have been proposed during the last decade. The different approaches are applicable to different model types including untimed, timed, probabilistic and stochastic models. This paper presents a new framework for model checking techniques which includes some of the known approaches, but enlarges the class of models for which model checking can be applied to the general class of weighted automata. The approach allows an easy adaptation of model checking to models which have not yet been considered for this purpose. Examples of such new model types to which model checking can now be applied are max/plus or min/plus automata, which are well-established models for describing different forms of dynamic systems and optimization problems. In this context, model checking can be used to verify temporal or quantitative properties of a system. The paper first briefly presents our class of weighted automata as a very general model type. Then Valued Computational Tree Logic (CTL$) is introduced as a natural extension of the well-known branching time logic CTL. Afterwards, algorithms to check a weighted automaton against a CTL$ formula are presented. As a last result, a bisimulation is presented for weighted automata and for CTL$.<|reference_end|>
arxiv
@article{buchholz2003model, title={Model Checking for a Class of Weighted Automata}, author={Peter Buchholz and Peter Kemper}, journal={arXiv preprint arXiv:cs/0304021}, year={2003}, number={University of Dortmund, Department of Computer Science Technical report No. 779}, archivePrefix={arXiv}, eprint={cs/0304021}, primaryClass={cs.LO} }
buchholz2003model
arxiv-671046
cs/0304022
Self-Replicating Machines in Continuous Space with Virtual Physics
<|reference_start|>Self-Replicating Machines in Continuous Space with Virtual Physics: JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and spring-like attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary "seed" pattern is put in a "soup" of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.<|reference_end|>
arxiv
@article{smith2003self-replicating, title={Self-Replicating Machines in Continuous Space with Virtual Physics}, author={Arnold Smith (National Research Council of Canada), Peter Turney (National Research Council of Canada), Robert Ewaschuk (University of Waterloo)}, journal={Artificial Life, (2003), 9, 21-40}, year={2003}, doi={10.1162/106454603321489509}, number={NRC-44969}, archivePrefix={arXiv}, eprint={cs/0304022}, primaryClass={cs.NE cs.CE q-bio.PE} }
smith2003self-replicating
arxiv-671047
cs/0304023
Partitioning Regular Polygons into Circular Pieces I: Convex Partitions
<|reference_start|>Partitioning Regular Polygons into Circular Pieces I: Convex Partitions: We explore an instance of the question of partitioning a polygon into pieces, each of which is as ``circular'' as possible, in the sense of having an aspect ratio close to 1. The aspect ratio of a polygon is the ratio of the diameter of the smallest circumscribing circle to that of the largest inscribed disk. The problem is rich even for partitioning regular polygons into convex pieces, the focus of this paper. We show that the optimal (most circular) partition for an equilateral triangle has an infinite number of pieces, with the lower bound approachable to any accuracy desired by a particular finite partition. For pentagons and all regular k-gons, k > 5, the unpartitioned polygon is already optimal. The square presents an interesting intermediate case. Here the one-piece partition is not optimal, but neither is the trivial lower bound approachable. We narrow the optimal ratio to an aspect-ratio gap of 0.01082 with several somewhat intricate partitions.<|reference_end|>
arxiv
@article{damian2003partitioning, title={Partitioning Regular Polygons into Circular Pieces I: Convex Partitions}, author={Mirela Damian and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0304023}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304023}, primaryClass={cs.CG} }
damian2003partitioning
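The aspect ratio of the unpartitioned regular k-gon itself is easy to tabulate: with circumradius R the inradius is R cos(pi/k), so the ratio is 1/cos(pi/k). A two-line check of these values (purely illustrative):

```python
import math

# Aspect ratio (circumradius / inradius) of a regular k-gon: 1 / cos(pi / k).
for k in range(3, 11):
    print(k, round(1.0 / math.cos(math.pi / k), 4))
# 3 -> 2.0, 4 -> 1.4142, 5 -> 1.2361, 6 -> 1.1547, ... decreasing toward 1 as k grows
```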
arxiv-671048
cs/0304024
Glottochronologic Retrognostic of Language System
<|reference_start|>Glottochronologic Retrognostic of Language System: A glottochronologic retrognostic of a language system is proposed.<|reference_end|>
arxiv
@article{victor2003glottochronologic, title={Glottochronologic Retrognostic of Language System}, author={Kromer Victor}, journal={arXiv preprint arXiv:cs/0304024}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304024}, primaryClass={cs.CL} }
victor2003glottochronologic
arxiv-671049
cs/0304025
Computational Geometry Column 44
<|reference_start|>Computational Geometry Column 44: The open problem of whether or not every pair of equal-area polygons has a hinged dissection is discussed.<|reference_end|>
arxiv
@article{o'rourke2003computational, title={Computational Geometry Column 44}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0304025}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304025}, primaryClass={cs.CG} }
o'rourke2003computational
arxiv-671050
cs/0304026
A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover
<|reference_start|>A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover: Given a $k$-uniform hyper-graph, the E$k$-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyper-edge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that E$k$-Vertex-Cover is NP-hard to approximate within factor $(k-1-\epsilon)$ for any $k \geq 3$ and any $\epsilon>0$. The result is essentially tight as this problem can be easily approximated within factor $k$. Our construction makes use of the biased Long-Code and is analyzed using combinatorial properties of $s$-wise $t$-intersecting families of subsets.<|reference_end|>
arxiv
@article{dinur2003a, title={A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover}, author={Irit Dinur, Venkatesan Guruswami, Subhash Khot, Oded Regev}, journal={arXiv preprint arXiv:cs/0304026}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304026}, primaryClass={cs.CC} }
dinur2003a
arxiv-671051
cs/0304027
"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics, and Natural Language Processing circa 2001
<|reference_start|>"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics, and Natural Language Processing circa 2001: A brief, general-audience overview of the history of natural language processing, focusing on data-driven approaches.Topics include "Ambiguity and language analysis", "Firth things first", "A 'C' change", and "The empiricists strike back".<|reference_end|>
arxiv
@article{lee2003"i'm, title={"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics, and Natural Language Processing circa 2001}, author={Lillian Lee}, journal={In "Computer Science: Reflections on the Field, Reflections from the Field" (report of the National Academies' Study on the Fundamentals of Computer Science), pp. 111--118, 2004}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304027}, primaryClass={cs.CL} }
lee2003"i'm
arxiv-671052
cs/0304028
Grid-Enabling Natural Language Engineering By Stealth
<|reference_start|>Grid-Enabling Natural Language Engineering By Stealth: We describe a proposal for an extensible, component-based software architecture for natural language engineering applications. Our model leverages existing linguistic resource description and discovery mechanisms based on extended Dublin Core metadata. In addition, the application design is flexible, allowing disparate components to be combined to suit the overall application functionality. An application specification language provides abstraction from the programming environment and allows ease of interface with computational grids via a broker.<|reference_end|>
arxiv
@article{hughes2003grid-enabling, title={Grid-Enabling Natural Language Engineering By Stealth}, author={Baden Hughes and Steven Bird}, journal={arXiv preprint arXiv:cs/0304028}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304028}, primaryClass={cs.DC cs.CL} }
hughes2003grid-enabling
arxiv-671053
cs/0304029
An XML based Document Suite
<|reference_start|>An XML based Document Suite: We report about the current state of development of a document suite and its applications. This collection of tools for the flexible and robust processing of documents in German is based on the use of XML as unifying formalism for encoding input and output data as well as process information. It is organized in modules with limited responsibilities that can easily be combined into pipelines to solve complex tasks. Strong emphasis is laid on a number of techniques to deal with lexical and conceptual gaps that are typical when starting a new application.<|reference_end|>
arxiv
@article{roesner2003an, title={An XML based Document Suite}, author={Dietmar Roesner, Manuela Kunze}, journal={Proceedings of COLING 2002; p. 1278-1282}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304029}, primaryClass={cs.CL} }
roesner2003an
arxiv-671054
cs/0304030
Small Spans in Scaled Dimension
<|reference_start|>Small Spans in Scaled Dimension: Juedes and Lutz (1995) proved a small span theorem for polynomial-time many-one reductions in exponential time. This result says that for language A decidable in exponential time, either the class of languages reducible to A (the lower span) or the class of problems to which A can be reduced (the upper span) is small in the sense of resource-bounded measure and, in particular, that the degree of A is small. Small span theorems have been proven for increasingly stronger polynomial-time reductions, and a small span theorem for polynomial-time Turing reductions would imply BPP != EXP. In contrast to the progress in resource-bounded measure, Ambos-Spies, Merkle, Reimann, and Stephan (2001) showed that there is no small span theorem for the resource-bounded dimension of Lutz (2000), even for polynomial-time many-one reductions. Resource-bounded scaled dimension, recently introduced by Hitchcock, Lutz, and Mayordomo (2003), provides rescalings of resource-bounded dimension. We use scaled dimension to further understand the contrast between measure and dimension regarding polynomial-time spans and degrees. We strengthen prior results by showing that the small span theorem holds for polynomial-time many-one reductions in the -3rd-order scaled dimension, but fails to hold in the -2nd-order scaled dimension. Our results also hold in exponential space. As an application, we show that determining the -2nd- or -1st-order scaled dimension in ESPACE of the many-one complete languages for E would yield a proof of P = BPP or P != PSPACE. On the other hand, it is shown unconditionally that the complete languages for E have -3rd-order scaled dimension 0 in ESPACE and -2nd- and -1st-order scaled dimension 1 in E.<|reference_end|>
arxiv
@article{hitchcock2003small, title={Small Spans in Scaled Dimension}, author={John M. Hitchcock}, journal={arXiv preprint arXiv:cs/0304030}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304030}, primaryClass={cs.CC} }
hitchcock2003small
arxiv-671055
cs/0304031
Transforming the Structure of Network Interconnection and Transport
<|reference_start|>Transforming the Structure of Network Interconnection and Transport: Vibrant development of a network-based economy requires separating investment in highly location specific local access technology from the development of standardized, geography-independent, wide-area network services. Thus far interconnection arrangements and associated regulations have been too closely tied to the idiosyncratic geographic structure of individual operators' networks. A key industry challenge is to foster the development of a wide area lattice of common geographic points of interconnection. Sound regulatory and anti-trust policy can help address this industry need.<|reference_end|>
arxiv
@article{galbi2003transforming, title={Transforming the Structure of Network Interconnection and Transport}, author={Douglas A. Galbi}, journal={CommLaw Conspectus, v. 8, n. 2 (Summer 2000) pp. 203-18}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304031}, primaryClass={cs.CY} }
galbi2003transforming
arxiv-671056
cs/0304032
Growth in the "New Economy": US Bandwidth Use and Pricing Across the 1990s
<|reference_start|>Growth in the "New Economy": US Bandwidth Use and Pricing Across the 1990s: An acceleration in the growth of communications bandwidth in use and a rapid reduction in bandwidth prices have not accompanied the U.S. economy's strong performance in the second half of the 1990s. Overall U.S. bandwidth in use has grown robustly throughout the 1990s, but growth has not significantly accelerated in the second half of 1990s. Average prices for U.S. bandwidth in use have fallen little in nominal terms in the second half of the 1990s. Policy makers and policy analysts should recognize that institutional change, rather than more competitors of established types, appears to be key to dramatic improvements in bandwidth growth and prices. Such a development could provide a significant additional impetus to aggregate growth and productivity.<|reference_end|>
arxiv
@article{galbi2003growth, title={Growth in the "New Economy": U.S. Bandwidth Use and Pricing Across the 1990s}, author={Douglas A. Galbi}, journal={Telecommunications Policy 25 (2001) 139-154}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304032}, primaryClass={cs.CY} }
galbi2003growth
arxiv-671057
cs/0304033
A New Account of Personalization and Effective Communication
<|reference_start|>A New Account of Personalization and Effective Communication: To contribute to the understanding of information economies of daily life, this paper explores the given names of a large number of persons over the past millennium. Analysts have long both condemned and praised mass media as a source of common culture, national unity, or shared symbolic experiences. Names, however, indicate a large decline in shared symbolic experience over the past two centuries, a decline that the growth of mass media does not appear to have affected significantly. Study of names also shows that action and personal relationships, along with time horizon, are central aspects of effective communication across a large population. The observed preference for personalization over the past two centuries and the importance of action and personal relationships to effective communication are aspects of information economies that are likely to have continuing significance for industry developments, economic statistics, and public policy.<|reference_end|>
arxiv
@article{galbi2003a, title={A New Account of Personalization and Effective Communication}, author={Douglas A. Galbi}, journal={arXiv preprint arXiv:cs/0304033}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304033}, primaryClass={cs.CY} }
galbi2003a
arxiv-671058
cs/0304034
Revolutionary Ideas for Radio Regulation
<|reference_start|>Revolutionary Ideas for Radio Regulation: Radio technology seems destined to become part of the standard micro-processor input/output system. But unlike for memory or display systems, for radio systems government regulation matters a lot. Much discussion of radio regulation has focused on narrow spectrum management and interference issues. Reflecting on historical experience and centuries of conversation about fundamental political choices, persons working with radio technology should also ponder three questions. First, what is a good separation and balance of powers in radio regulation? Second, how should radio regulation be geographically configured? Third, how should radio regulation understand and respect personal freedom and equality? Working out answers to these questions involves a general process of shaping good government. This process will be hugely important for radio regulation, technology, and applications.<|reference_end|>
arxiv
@article{galbi2003revolutionary, title={Revolutionary Ideas for Radio Regulation}, author={Douglas A. Galbi}, journal={arXiv preprint arXiv:cs/0304034}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304034}, primaryClass={cs.CY} }
galbi2003revolutionary
arxiv-671059
cs/0304035
Exploiting Sublanguage and Domain Characteristics in a Bootstrapping Approach to Lexicon and Ontology Creation
<|reference_start|>Exploiting Sublanguage and Domain Characteristics in a Bootstrapping Approach to Lexicon and Ontology Creation: It is very costly to build up lexical resources and domain ontologies. Especially when confronted with a new application domain lexical gaps and a poor coverage of domain concepts are a problem for the successful exploitation of natural language document analysis systems that need and exploit such knowledge sources. In this paper we report about ongoing experiments with `bootstrapping techniques' for lexicon and ontology creation.<|reference_end|>
arxiv
@article{roesner2003exploiting, title={Exploiting Sublanguage and Domain Characteristics in a Bootstrapping Approach to Lexicon and Ontology Creation}, author={Dietmar Roesner, Manuela Kunze}, journal={Workshop-Proceedings of the OntoLex 2002 - Ontologies and Lexical Knowledge Bases at the LREC 2002, p. 68-73}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304035}, primaryClass={cs.CL} }
roesner2003exploiting
arxiv-671060
cs/0304036
An Approach for Resource Sharing in Multilingual NLP
<|reference_start|>An Approach for Resource Sharing in Multilingual NLP: In this paper we describe an approach for the analysis of documents in German and English with a shared pool of resources. For the analysis of German documents we use a document suite, which supports the user in tasks like information retrieval and information extraction. The core of the document suite is based on our tool XDOC. Now we want to exploit these methods for the analysis of English documents as well. For this aim we need a multilingual presentation format for the resources. These resources must be transformed into a unified format, in which we can add information about the linguistic characteristics of the language, depending on the analyzed documents. In this paper we describe our approach for such an exchange model for multilingual resources based on XML.<|reference_end|>
arxiv
@article{kunze2003an, title={An Approach for Resource Sharing in Multilingual NLP}, author={Manuela Kunze, Chun Xiao}, journal={STAIRS 2002 - STarting Artificial Intelligence Researchers Symposium at the ECAI 2002. Lyon, France. ISBN 158603 259 3. IOS Press Amsterdam, p. 123-124}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304036}, primaryClass={cs.CL} }
kunze2003an
arxiv-671061
cs/0304037
Using Regression Techniques to Predict Large Data Transfers
<|reference_start|>Using Regression Techniques to Predict Large Data Transfers: The recent proliferation of Data Grids and the increasingly common practice of using resources as distributed data stores provide a convenient environment for communities of researchers to share, replicate, and manage access to copies of large datasets. This has led to the question of which replica can be accessed most efficiently. In such environments, fetching data from one of the several replica locations requires accurate predictions of end-to-end transfer times. The answer to this question can depend on many factors, including physical characteristics of the resources and the load behavior on the CPUs, networks, and storage devices that are part of the end-to-end data path linking possible sources and sinks. Our approach combines end-to-end application throughput observations with network and disk load variations and captures whole-system performance and variations in load patterns. Our predictions characterize the effect of load variations of several shared devices (network and disk) on file transfer times. We develop a suite of univariate and multivariate predictors that can use multiple data sources to improve the accuracy of the predictions as well as address Data Grid variations (availability of data and sporadic nature of transfers). We ran a large set of data transfer experiments using GridFTP and observed performance predictions within 15% error for our testbed sites, which is quite promising for a pragmatic system.<|reference_end|>
arxiv
@article{vazhkudai2003using, title={Using Regression Techniques to Predict Large Data Transfers}, author={Sudharshan Vazhkudai and Jennifer M. Schopf}, journal={arXiv preprint arXiv:cs/0304037}, year={2003}, number={Preprint ANL/MCS-P1033-0303}, archivePrefix={arXiv}, eprint={cs/0304037}, primaryClass={cs.DC} }
vazhkudai2003using
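As a toy illustration of the multivariate idea (the feature names, data, and linear model form are assumptions for the sketch, not the paper's actual predictor suite): fit observed transfer times against concurrent network and disk load by ordinary least squares, then predict the next transfer.

```python
import numpy as np

def fit_transfer_model(network_load, disk_load, observed_seconds):
    """Least-squares fit of: time ~ b0 + b1 * network_load + b2 * disk_load."""
    X = np.column_stack([np.ones_like(network_load), network_load, disk_load])
    coeffs, *_ = np.linalg.lstsq(X, observed_seconds, rcond=None)
    return coeffs

def predict_transfer(coeffs, network_load, disk_load):
    return coeffs[0] + coeffs[1] * network_load + coeffs[2] * disk_load

# Invented past observations for transfers of one fixed file size.
net  = np.array([0.2, 0.5, 0.7, 0.9, 0.3, 0.6])          # fraction of the link in use
disk = np.array([0.1, 0.4, 0.2, 0.8, 0.5, 0.3])          # fraction of disk bandwidth in use
secs = np.array([42.0, 71.0, 80.0, 130.0, 58.0, 75.0])   # measured transfer times

model = fit_transfer_model(net, disk, secs)
print(predict_transfer(model, network_load=0.4, disk_load=0.3))
```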
arxiv-671062
cs/0304038
How NP got a new definition: a survey of probabilistically checkable proofs
<|reference_start|>How NP got a new definition: a survey of probabilistically checkable proofs: We survey a collective achievement of a group of researchers: the PCP Theorems. They give new definitions of the class NP, and imply that computing approximate solutions to many NP-hard problems is itself NP-hard. Techniques developed to prove them have had many other consequences.<|reference_end|>
arxiv
@article{arora2003how, title={How NP got a new definition: a survey of probabilistically checkable proofs}, author={Sanjeev Arora}, journal={Proceedings of the ICM, Beijing 2002, vol. 3, 637--648}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304038}, primaryClass={cs.CC} }
arora2003how
arxiv-671063
cs/0304039
Approximation thresholds for combinatorial optimization problems
<|reference_start|>Approximation thresholds for combinatorial optimization problems: An NP-hard combinatorial optimization problem $\Pi$ is said to have an `approximation threshold' if there is some $t$ such that the optimal value of $\Pi$ can be approximated in polynomial time within a ratio of $t$, and it is NP-hard to approximate it within a ratio better than $t$. We survey some of the known approximation threshold results, and discuss the pattern that emerges from the known results.<|reference_end|>
arxiv
@article{feige2003approximation, title={Approximation thresholds for combinatorial optimization problems}, author={Uriel Feige}, journal={Proceedings of the ICM, Beijing 2002, vol. 3, 649--658}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304039}, primaryClass={cs.CC} }
feige2003approximation
arxiv-671064
cs/0304040
Hardness as randomness: a survey of universal derandomization
<|reference_start|>Hardness as randomness: a survey of universal derandomization: We survey recent developments in the study of probabilistic complexity classes. While the evidence seems to support the conjecture that probabilism can be deterministically simulated with relatively low overhead, i.e., that $P=BPP$, it also indicates that this may be a difficult question to resolve. In fact, proving that probabilistic algorithms have non-trivial deterministic simulations is basically equivalent to proving circuit lower bounds, either in the algebraic or Boolean models.<|reference_end|>
arxiv
@article{impagliazzo2003hardness, title={Hardness as randomness: a survey of universal derandomization}, author={Russell Impagliazzo}, journal={Proceedings of the ICM, Beijing 2002, vol. 3, 659--672}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304040}, primaryClass={cs.CC} }
impagliazzo2003hardness
arxiv-671065
cs/0304041
$P \ne NP$, propositional proof complexity, and resolution lower bounds for the weak pigeonhole principle
<|reference_start|>$P \ne NP$, propositional proof complexity, and resolution lower bounds for the weak pigeonhole principle: Recent results established exponential lower bounds for the length of any Resolution proof for the weak pigeonhole principle. More formally, it was proved that any Resolution proof for the weak pigeonhole principle, with $n$ holes and any number of pigeons, is of length $\Omega(2^{n^{\epsilon}})$ (for a constant $\epsilon = 1/3$). One corollary is that certain propositional formulations of the statement $P \ne NP$ do not have short Resolution proofs. After a short introduction to the problem of $P \ne NP$ and to the research area of propositional proof complexity, I will discuss the above-mentioned lower bounds for the weak pigeonhole principle and the connections to the hardness of proving $P \ne NP$.<|reference_end|>
arxiv
@article{raz2003$p, title={$P \ne NP$, propositional proof complexity, and resolution lower bounds for the weak pigeonhole principle}, author={Ran Raz}, journal={Proceedings of the ICM, Beijing 2002, vol. 3, 685--696}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304041}, primaryClass={cs.CC} }
raz2003$p
arxiv-671066
cs/0304042
On probabilistic analog automata
<|reference_start|>On probabilistic analog automata: We consider probabilistic automata on a general state space and study their computational power. The model is based on the concept of language recognition by probabilistic automata due to Rabin and models of analog computation in a noisy environment suggested by Maass and Orponen, and Maass and Sontag. Our main result is a generalization of Rabin's reduction theorem that implies that under very mild conditions, the computational power of the automaton is limited to regular languages.<|reference_end|>
arxiv
@article{ben-hur2003on, title={On probabilistic analog automata}, author={A. Ben-Hur, A. Roitershtein, H. Siegelmann}, journal={arXiv preprint arXiv:cs/0304042}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304042}, primaryClass={cs.OH} }
ben-hur2003on
arxiv-671067
cs/0304043
gTybalt - a free computer algebra system
<|reference_start|>gTybalt - a free computer algebra system: This article documents the free computer algebra system "gTybalt". The program is built on top of other packages, among others GiNaC, TeXmacs and Root. It offers the possibility of interactive symbolic calculations within the C++ programming language. Mathematical formulae are visualized using TeX fonts.<|reference_end|>
arxiv
@article{weinzierl2003gtybalt, title={gTybalt - a free computer algebra system}, author={Stefan Weinzierl}, journal={Comput.Phys.Commun.156:180-198,2004}, year={2003}, doi={10.1016/S0010-4655(03)00468-5}, archivePrefix={arXiv}, eprint={cs/0304043}, primaryClass={cs.SC hep-ph} }
weinzierl2003gtybalt
arxiv-671068
cs/0304044
Hardness of approximating the weight enumerator of a binary linear code
<|reference_start|>Hardness of approximating the weight enumerator of a binary linear code: We consider the problem of evaluating the weight enumerator of a binary linear code. We show that exact evaluation is hard for the polynomial hierarchy. More exactly, if WE is an oracle answering the evaluation problem, then P^WE=P^GapP. We also consider the approximate evaluation of the weight enumerator. In the case of approximation with additive accuracy $2^{\alpha n}$, where $\alpha$ is a constant, the problem is hard in the above sense. We also prove that approximate evaluation at the single point $e^{\pi i/4}$ is hard for $0<\alpha<\alpha_0\approx 0.88$.<|reference_end|>
arxiv
@article{vyalyi2003hardness, title={Hardness of approximating the weight enumerator of a binary linear code}, author={M.N.Vyalyi}, journal={arXiv preprint arXiv:cs/0304044}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304044}, primaryClass={cs.CC} }
vyalyi2003hardness
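To make the object concrete: the weight enumerator of a binary linear [n, k] code counts codewords by Hamming weight. The hardness results above concern evaluating it for large codes; for a small code a brute-force enumeration over the 2^k codewords suffices. A sketch using a generator matrix of the [7,4] Hamming code:

```python
from itertools import product

def weight_distribution(G):
    """G: generator rows over GF(2). Returns A, where A[w] = #codewords of Hamming weight w."""
    n, k = len(G[0]), len(G)
    counts = [0] * (n + 1)
    for message in product([0, 1], repeat=k):
        codeword = [0] * n
        for bit, row in zip(message, G):
            if bit:
                codeword = [c ^ r for c, r in zip(codeword, row)]
        counts[sum(codeword)] += 1
    return counts

# A generator matrix of the [7,4] Hamming code.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(weight_distribution(G))   # [1, 0, 0, 7, 7, 0, 0, 1]
# The weight enumerator W(x, y) is then the sum over w of A[w] * x**(n - w) * y**w.
```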
arxiv-671069
cs/0304045
On a composition of digraphs
<|reference_start|>On a composition of digraphs: Many "good" topologies for interconnection networks are based on line digraphs of regular digraphs. These digraphs support unitary matrices. We propose the property "being the digraph of a unitary matrix" as additional criterion for the design of new interconnection networks. We define a composition of digraphs, which we call diagonal union. Diagonal union can be used to construct digraphs of unitary matrices. We remark that digraphs obtained via diagonal union are state split graphs, as defined in symbolic dynamics. Finally, we list some potential directions for future research.<|reference_end|>
arxiv
@article{severini2003on, title={On a composition of digraphs}, author={Simone Severini (U. Bristol)}, journal={arXiv preprint arXiv:cs/0304045}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304045}, primaryClass={cs.DM cs.AR cs.NI} }
severini2003on
arxiv-671070
cs/0304046
Distributed States Temporal Logic
<|reference_start|>Distributed States Temporal Logic: We introduce a temporal logic to reason about global applications in an asynchronous setting. First, we define the Distributed States Logic (DSL), a modal logic for localities that embeds the local theories of each component into a theory of the distributed states of the system. We provide the logic with a sound and complete axiomatization. The contribution is that it is possible to reason about properties that involve several components, even in the absence of a global clock. Then, we define the Distributed States Temporal Logic (DSTL) by introducing temporal operators à la Unity. We support our proposal by working out a pair of examples: a simple secure communication system, and an algorithm for distributed leader election. The motivation for this work is that the existing logics for distributed systems do not have the right expressive power to reason about the system's behaviour when the communication is based on asynchronous message passing. On the other hand, asynchronous communication is the most widely used abstraction when modelling global applications.<|reference_end|>
arxiv
@article{montangero2003distributed, title={Distributed States Temporal Logic}, author={Carlo Montangero and Laura Semini (Dipartimento di Informatica, Universita' di Pisa, Italy)}, journal={arXiv preprint arXiv:cs/0304046}, year={2003}, archivePrefix={arXiv}, eprint={cs/0304046}, primaryClass={cs.LO} }
montangero2003distributed
arxiv-671071
cs/0305001
A Framework for Searching AND/OR Graphs with Cycles
<|reference_start|>A Framework for Searching AND/OR Graphs with Cycles: Search in cyclic AND/OR graphs was traditionally known to be an unsolved problem. In the recent past several important studies have been reported in this domain. In this paper, we have taken a fresh look at the problem. First, a new and comprehensive theoretical framework for cyclic AND/OR graphs has been presented, which was found missing in the recent literature. Based on this framework, two best-first search algorithms, S1 and S2, have been developed. S1 does uninformed search and is a simple modification of the Bottom-up algorithm by Martelli and Montanari. S2 performs a heuristically guided search and replicates the modification in Bottom-up's successors, namely HS and AO*. Both S1 and S2 solve the problem of searching AND/OR graphs in presence of cycles. We then present a detailed analysis for the correctness and complexity results of S1 and S2, using the proposed framework. We have observed through experiments that S1 and S2 output correct results in all cases.<|reference_end|>
arxiv
@article{mahanti2003a, title={A Framework for Searching AND/OR Graphs with Cycles}, author={Ambuj Mahanti, Supriyo Ghose and Samir K. Sadhukhan}, journal={arXiv preprint arXiv:cs/0305001}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305001}, primaryClass={cs.AI} }
mahanti2003a
arxiv-671072
cs/0305002
Hybrid Rounding Techniques for Knapsack Problems
<|reference_start|>Hybrid Rounding Techniques for Knapsack Problems: We address the classical knapsack problem and a variant in which an upper bound is imposed on the number of items that can be selected. We show that appropriate combinations of rounding techniques yield novel and powerful ways of rounding. As an application of these techniques, we present a linear-storage Polynomial Time Approximation Scheme (PTAS) and a Fully Polynomial Time Approximation Scheme (FPTAS) that compute an approximate solution, of any fixed accuracy, in linear time. This linear complexity bound gives a substantial improvement of the best previously known polynomial bounds.<|reference_end|>
arxiv
@article{mastrolilli2003hybrid, title={Hybrid Rounding Techniques for Knapsack Problems}, author={Monaldo Mastrolilli and Marcus Hutter}, journal={Discrete Applied Mathematics, 154:4 (2006) 640-649}, year={2003}, doi={10.1016/j.dam.2005.08.004}, number={IDSIA-03-02}, archivePrefix={arXiv}, eprint={cs/0305002}, primaryClass={cs.CC cs.DM cs.DS} }
mastrolilli2003hybrid
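For orientation, the textbook profit-rounding that such hybrid schemes build on (this is the standard FPTAS sketch, not the paper's linear-time construction): scale profits down by K = eps * p_max / n, run the exact profit-indexed dynamic program on the rounded profits, and the selected set is within a factor (1 - eps) of optimal.

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Return a set of item indices with total profit >= (1 - eps) * OPT and weight <= capacity."""
    n = len(profits)
    K = eps * max(profits) / n          # rounding granularity
    scaled = [int(p // K) for p in profits]

    # Exact 0/1-knapsack DP over scaled profit: minimum weight achieving each profit value.
    top = sum(scaled)
    INF = float("inf")
    min_weight = [0.0] + [INF] * top
    choice = [set() for _ in range(top + 1)]
    for i in range(n):
        for p in range(top, scaled[i] - 1, -1):
            cand = min_weight[p - scaled[i]] + weights[i]
            if cand < min_weight[p]:
                min_weight[p] = cand
                choice[p] = choice[p - scaled[i]] | {i}
    best = max(p for p in range(top + 1) if min_weight[p] <= capacity)
    return choice[best]

profits = [60, 100, 120]
weights = [10, 20, 30]
chosen = knapsack_fptas(profits, weights, capacity=50, eps=0.1)
print(sorted(chosen), sum(profits[i] for i in chosen))   # [1, 2] 220
```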
arxiv-671073
cs/0305003
The Ubiquitous Interactor - Device Independent Access to Mobile Services
<|reference_start|>The Ubiquitous Interactor - Device Independent Access to Mobile Services: The Ubiquitous Interactor (UBI) addresses the problems of design and development that arise around services that need to be accessed from many different devices. In UBI, the same service can present itself with different user interfaces on different devices. This is done by separating interaction between users and services from presentation. The interaction is kept the same for all devices, and different presentation information is provided for different devices. This way, tailored user interfaces for many different devices can be created without multiplying development and maintenance work. In this paper we describe the system design of UBI, the system implementation, and two services implemented for the system: a calendar service and a stockbroker service.<|reference_end|>
arxiv
@article{nylander2003the, title={The Ubiquitous Interactor - Device Independent Access to Mobile Services}, author={Stina Nylander, Markus Bylund, Annika Waern}, journal={arXiv preprint arXiv:cs/0305003}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305003}, primaryClass={cs.HC} }
nylander2003the
arxiv-671074
cs/0305004
Approximate Grammar for Information Extraction
<|reference_start|>Approximate Grammar for Information Extraction: In this paper, we present the concept of Approximate grammar and how it can be used to extract information from a document. As the structure of informational strings cannot be defined well in a document, we cannot use the conventional grammar rules to represent the information. Hence, the need arises to design an approximate grammar that can be used effectively to accomplish the task of Information extraction. Approximate grammars are a novel step in this direction. The rules of an approximate grammar can be given by a user or the machine can learn the rules from an annotated document. We have performed our experiments in both the above areas and the results have been impressive.<|reference_end|>
arxiv
@article{sriram2003approximate, title={Approximate Grammar for Information Extraction}, author={V.Sriram, B. Ravi Sekar Reddy and R. Sangal}, journal={Conference on Universal Knowledge and Language, Goa'2002}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305004}, primaryClass={cs.CL cs.AI} }
sriram2003approximate
arxiv-671075
cs/0305005
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
<|reference_start|>An In-Place Sorting with O(n log n) Comparisons and O(n) Moves: We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374-93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.<|reference_end|>
arxiv
@article{franceschini2003an, title={An In-Place Sorting with O(n log n) Comparisons and O(n) Moves}, author={Gianni Franceschini and Viliam Geffert}, journal={arXiv preprint arXiv:cs/0305005}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305005}, primaryClass={cs.DS cs.CC} }
franceschini2003an
arxiv-671076
cs/0305006
On the Ramsey Numbers for Bipartite Multigraphs
<|reference_start|>On the Ramsey Numbers for Bipartite Multigraphs: A coloring of a complete bipartite graph is shuffle-preserved if it is the case that assigning a color $c$ to edges $(u, v)$ and $(u', v')$ enforces the same color assignment for edges $(u, v')$ and $(u',v)$. (In words, the induced subgraph with respect to color $c$ is complete.) In this paper, we investigate a variant of the Ramsey problem for the class of complete bipartite multigraphs. (By a multigraph we mean a graph in which multiple edges, but no loops, are allowed.) Unlike the conventional $m$-coloring scheme in Ramsey theory, which imposes a constraint (i.e., $m$) on the total number of colors allowed in a graph, we introduce a relaxed version called $m$-local coloring which only requires that, for every vertex $v$, the number of colors associated with $v$'s incident edges is bounded by $m$. Note that the number of colors found in a graph under $m$-local coloring may exceed $m$. We prove that given any $n \times n$ complete bipartite multigraph $G$, every shuffle-preserved $m$-local coloring displays a monochromatic copy of $K_{p,p}$ provided that $2(p-1)(m-1) < n$. Moreover, the above bound is tight when (i) $m=2$, or (ii) $n=2^k$ and $m=3\cdot 2^{k-2}$ for every integer $k\geq 2$. As for the lower bound of $p$, we show that the existence of a monochromatic $K_{p,p}$ is not guaranteed if $p> \lceil \frac{n}{m} \rceil$. Finally, we give a generalization for $k$-partite graphs and a method applicable to general graphs. Many conclusions found for $m$-local coloring carry over to similar results for $m$-coloring.<|reference_end|>
arxiv
@article{chen2003on, title={On the Ramsey Numbers for Bipartite Multigraphs}, author={Ming-Yang Chen, Hsueh-I. Lu, and Hsu-Chun Yen}, journal={arXiv preprint arXiv:cs/0305006}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305006}, primaryClass={cs.DM} }
chen2003on
arxiv-671077
cs/0305007
Computing only minimal answers in disjunctive deductive databases
<|reference_start|>Computing only minimal answers in disjunctive deductive databases: A method is presented for computing minimal answers in disjunctive deductive databases under the disjunctive stable model semantics. Such answers are constructed by repeatedly extending partial answers. Our method is complete (in that every minimal answer can be computed) and does not admit redundancy (in the sense that every partial answer generated can be extended to a minimal answer), whence no non-minimal answer is generated. For stratified databases, the method does not (necessarily) require the computation of models of the database in their entirety. Compilation is proposed as a tool by which problems relating to computational efficiency and the non-existence of disjunctive stable models can be overcome. The extension of our method to other semantics is also considered.<|reference_end|>
arxiv
@article{johnson2003computing, title={Computing only minimal answers in disjunctive deductive databases}, author={C. A. Johnson}, journal={arXiv preprint arXiv:cs/0305007}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305007}, primaryClass={cs.LO} }
johnson2003computing
arxiv-671078
cs/0305008
A Representation of Changes of Images and its Application for Developmental Biolology
<|reference_start|>A Representation of Changes of Images and its Application for Developmental Biolology: In this paper, we consider a series of events observed at spaced time intervals and present a method for representing the series. To explain the idea, we deal with a set of gene expression data, as could be obtained in developmental biology, and sketch the procedures with comments in some detail. By representation we mean choosing a proper function that fits the observed data of a series well, and turning its characteristics into numbers that extract the intrinsic properties of the fluctuating data. With the help of machine learning techniques, this method gives a classification of developmental biological data, as well as of any data varying over a certain period, and the classification can be applied to the diagnosis of a disease.<|reference_end|>
arxiv
@article{kim2003a, title={A Representation of Changes of Images and its Application for Developmental Biolology}, author={Gene Kim and MyungHo Kim}, journal={arXiv preprint arXiv:cs/0305008}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305008}, primaryClass={cs.CC cs.MS q-bio} }
kim2003a
arxiv-671079
cs/0305009
The Threshold for Random k-SAT is 2^k ln2 - O(k)
<|reference_start|>The Threshold for Random k-SAT is 2^k ln2 - O(k): Let F be a random k-SAT formula on n variables, formed by selecting uniformly and independently m = rn clauses out of all possible k-clauses. It is well-known that if r>2^k ln 2, then the formula F is unsatisfiable with probability that tends to 1 as n tends to infinity. We prove that there exists a sequence t_k = O(k) such that if r < 2^k ln 2 - t_k, then the formula F is satisfiable with probability that tends to 1 as n tends to infinity. Our technique yields an explicit lower bound for the random k-SAT threshold for every k. For k>3 this improves upon all previously known lower bounds. For example, when k=10 our lower bound is 704.94 while the upper bound is 708.94.<|reference_end|>
arxiv
@article{achlioptas2003the, title={The Threshold for Random k-SAT is 2^k ln2 - O(k)}, author={Dimitris Achlioptas, Yuval Peres}, journal={arXiv preprint arXiv:cs/0305009}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305009}, primaryClass={cs.CC cond-mat.stat-mech cs.DM math.PR} }
achlioptas2003the
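A small empirical illustration of the threshold behaviour (brute-force satisfiability checking, so only tiny n are feasible; the parameters below are arbitrary): draw m = r*n uniformly random k-clauses and estimate the fraction of satisfiable formulas on either side of the conjectured threshold.

```python
import random
from itertools import product

def random_k_sat(n, m, k, rng):
    """m clauses; each clause picks k distinct variables and negates each with probability 1/2."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)] for _ in range(m)]

def satisfiable(n, formula):
    """Brute force over all 2**n assignments; literal (v, neg) is true iff assignment[v] != neg."""
    return any(
        all(any(assignment[v] != neg for v, neg in clause) for clause in formula)
        for assignment in product([False, True], repeat=n)
    )

def sat_fraction(n, r, k, trials=30, seed=1):
    rng = random.Random(seed)
    m = round(r * n)
    return sum(satisfiable(n, random_k_sat(n, m, k, rng)) for _ in range(trials)) / trials

# For k = 3 the conjectured threshold is near r ~ 4.27; 2**3 * ln 2 ~ 5.55 is the upper bound.
for r in (3.0, 4.0, 4.5, 5.5):
    print(r, sat_fraction(n=10, r=r, k=3))
```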
arxiv-671080
cs/0305010
The NPC Framework for Building Information Dissemination Networks
<|reference_start|>The NPC Framework for Building Information Dissemination Networks: Numerous systems for dissemination, retrieval, and archiving of documents have been developed in the past. Those systems often focus on one of these aspects and are hard to extend and combine. Typically, the transmission protocols, query and filtering languages are fixed as well as the interfaces to other systems. We rather envisage the seamless establishment of networks among the providers, repositories and consumers of information, supporting information retrieval and dissemination while being highly interoperable and extensible. We propose a framework with a single event-based mechanism that unifies document storage, retrieval, and dissemination. This framework offers complete openness with respect to document and metadata formats, transmission protocols, and filtering mechanisms. It specifies a high-level building kit, by which arbitrary processors for document streams can be incorporated to support the retrieval, transformation, aggregation and disaggregation of documents. Using the same kit, interfaces for different transmission protocols can be added easily to enable the communication with various information sources and information consumers.<|reference_end|>
arxiv
@article{faulstich2003the, title={The NPC Framework for Building Information Dissemination Networks}, author={Lukas C. Faulstich}, journal={arXiv preprint arXiv:cs/0305010}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305010}, primaryClass={cs.DL cs.NI} }
faulstich2003the
arxiv-671081
cs/0305011
Optimizing Optimal Reduction: A Type Inference Algorithm for Elementary Affine Logic
<|reference_start|>Optimizing Optimal Reduction: A Type Inference Algorithm for Elementary Affine Logic: We present a type inference algorithm for lambda-terms in Elementary Affine Logic using linear constraints. We prove that the algorithm is correct and complete.<|reference_end|>
arxiv
@article{coppola2003optimizing, title={Optimizing Optimal Reduction: A Type Inference Algorithm for Elementary Affine Logic}, author={Paolo Coppola and Simone Martini}, journal={ACM Transactions on Computational Logic, vol 7 (2006) pp. 219 - 260.}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305011}, primaryClass={cs.LO} }
coppola2003optimizing
arxiv-671082
cs/0305012
Time-scales, Meaning, and Availability of Information in a Global Brain
<|reference_start|>Time-scales, Meaning, and Availability of Information in a Global Brain: We note the importance of time-scales, meaning, and availability of information for the emergence of novel information meta-structures at a global scale. We discuss previous work in this area and develop future perspectives. We focus on the transmission of scientific articles and the integration of traditional conferences with their virtual extensions on the Internet, their time-scales, and availability. We mention the Semantic Web as an effort for integrating meaningful information.<|reference_end|>
arxiv
@article{gershenson2003time-scales,, title={Time-scales, Meaning, and Availability of Information in a Global Brain}, author={Carlos Gershenson, Gottfried Mayer-Kress, Atin Das, Pritha Das, Matus Marko}, journal={arXiv preprint arXiv:cs/0305012}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305012}, primaryClass={cs.AI cs.CY cs.NI} }
gershenson2003time-scales,
arxiv-671083
cs/0305013
On Nonspecific Evidence
<|reference_start|>On Nonspecific Evidence: When simultaneously reasoning with evidences about several different events it is necessary to separate the evidence according to event. These events should then be handled independently. However, when propositions of evidences are weakly specified in the sense that it may not be certain to which event they are referring, this may not be directly possible. In this paper a criterion for partitioning evidences into subsets representing events is established. This criterion, derived from the conflict within each subset, involves minimising a criterion function for the overall conflict of the partition. An algorithm based on characteristics of the criterion function and an iterative optimisation among partitionings of evidences is proposed.<|reference_end|>
arxiv
@article{schubert2003on, title={On Nonspecific Evidence}, author={Johan Schubert}, journal={International Journal of Intelligent Systems 8(6) (1993) 711-725}, year={2003}, number={FOA Report B 20112-2.7}, archivePrefix={arXiv}, eprint={cs/0305013}, primaryClass={cs.AI cs.NE} }
schubert2003on
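The conflict underlying the metaconflict criterion is the mass that Dempster's rule assigns to the empty intersection before normalisation. A minimal sketch of the rule for mass functions represented as dicts over frozensets (it illustrates the underlying operation only, not the partition search itself; the example masses are invented):

```python
def combine(m1, m2):
    """Dempster's rule for two basic probability assignments.

    m1, m2: dicts mapping frozenset (focal element) -> mass, masses summing to 1.
    Returns (combined bpa, conflict), where the conflict is the total mass that
    falls on the empty intersection before normalisation.
    """
    raw, conflict = {}, 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b
    combined = {s: v / (1.0 - conflict) for s, v in raw.items()}   # assumes conflict < 1
    return combined, conflict

# Two invented evidences over the frame {x, y, z}.
m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y", "z"}): 0.4}
m2 = {frozenset({"y"}): 0.7, frozenset({"x", "y", "z"}): 0.3}
m12, k = combine(m1, m2)
print(k)     # 0.42 -- a high conflict hints that the evidences may refer to different events
print(m12)
```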
arxiv-671084
cs/0305014
Dempster's Rule for Evidence Ordered in a Complete Directed Acyclic Graph
<|reference_start|>Dempster's Rule for Evidence Ordered in a Complete Directed Acyclic Graph: For the case of evidence ordered in a complete directed acyclic graph, this paper presents a new algorithm with lower computational complexity for Dempster's rule than that of step-by-step application of Dempster's rule. In this problem, every original pair of evidences has a corresponding evidence against the simultaneous belief in both propositions. In this case, it is uncertain whether the propositions of any two evidences are in logical conflict. The original evidences are associated with the vertices and the additional evidences are associated with the edges. The original evidences are ordered, i.e., for every pair of evidences it is determinable which of the two evidences is the earlier one. We are interested in finding the most probable completely specified path through the graph, where transitions are possible only from lower- to higher-ranked vertices. The path is here a representation for a sequence of states, for instance a sequence of snapshots of a physical object's track. A completely specified path means that the path includes no other vertices than those stated in the path representation, as opposed to an incompletely specified path that may also include other vertices than those stated. In a hierarchical network of all subsets of the frame, i.e., of all incompletely specified paths, the original and additional evidences support subsets that are not disjoint; thus it is not possible to prune the network to a tree. Instead of propagating belief, the new algorithm reasons about the logical conditions of a completely specified path through the graph. The new algorithm is $O(|\Theta| \log |\Theta|)$, compared to $O(|\Theta|^{\log |\Theta|})$ for the classic brute-force algorithm.<|reference_end|>
arxiv
@article{bergsten2003dempster's, title={Dempster's Rule for Evidence Ordered in a Complete Directed Acyclic Graph}, author={Ulla Bergsten, Johan Schubert}, journal={International Journal of Approximate Reasoning 9(1) (1993) 37-73}, year={2003}, number={FOA Report B 20114-2.7}, archivePrefix={arXiv}, eprint={cs/0305014}, primaryClass={cs.AI cs.DM} }
bergsten2003dempster's
arxiv-671085
cs/0305015
Finding a Posterior Domain Probability Distribution by Specifying Nonspecific Evidence
<|reference_start|>Finding a Posterior Domain Probability Distribution by Specifying Nonspecific Evidence: This article is an extension of the results of two earlier articles. In [J. Schubert, On nonspecific evidence, Int. J. Intell. Syst. 8 (1993) 711-725] we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition into subsets a set of several pieces of evidence with propositions that are weakly specified in the sense that it may be uncertain to which event a proposition is referring. In a second article [J. Schubert, Specifying nonspecific evidence, in Cluster-based specification techniques in Dempster-Shafer theory for an evidential intelligence analysis of multiple target tracks, Ph.D. Thesis, TRITA-NA-9410, Royal Institute of Technology, Stockholm, 1994, ISBN 91-7170-801-4] we not only found the most plausible subset for each piece of evidence, we also found the plausibility for every subset that this piece of evidence belongs to the subset. In this article we aim to find a posterior probability distribution regarding the number of subsets. We use the idea that each piece of evidence in a subset supports the existence of that subset to the degree that this piece of evidence supports anything at all. From this we can derive a bpa that is concerned with the question of how many subsets we have. That bpa can then be combined with a given prior domain probability distribution in order to obtain the sought-after posterior domain distribution.<|reference_end|>
arxiv
@article{schubert2003finding, title={Finding a Posterior Domain Probability Distribution by Specifying Nonspecific Evidence}, author={Johan Schubert}, journal={International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 3(2) (1995) 163-185}, year={2003}, number={FOA-B-95-00077-3.4-SE}, archivePrefix={arXiv}, eprint={cs/0305015}, primaryClass={cs.AI cs.NE} }
schubert2003finding
arxiv-671086
cs/0305016
The one-round Voronoi game replayed
<|reference_start|>The one-round Voronoi game replayed: We consider the one-round Voronoi game, where player one (``White'', called ``Wilma'') places a set of $n$ points in a rectangular area of aspect ratio $r \le 1$, followed by the second player (``Black'', called ``Barney''), who places the same number of points. Each player wins the fraction of the board closest to one of his points, and the goal is to win more than half of the total area. This problem has been studied by Cheong et al., who showed that for large enough $n$ and $r=1$, Barney has a strategy that guarantees a fraction of $1/2+a$, for some small fixed $a$. We resolve a number of open problems raised by that paper. In particular, we give a precise characterization of the outcome of the game for optimal play: We show that Barney has a winning strategy for $n>2$ and $r>\sqrt{2}/n$, and for $n=2$ and $r>\sqrt{3}/2$. Wilma wins in all remaining cases, i.e., for $n\ge 3$ and $r\le\sqrt{2}/n$, for $n=2$ and $r\le\sqrt{3}/2$, and for $n=1$. We also discuss complexity aspects of the game on more general boards, by proving that for a polygon with holes, it is NP-hard to maximize the area Barney can win against a given set of points by Wilma.<|reference_end|>
arxiv
@article{fekete2003the, title={The one-round Voronoi game replayed}, author={Sandor P. Fekete and Henk Meijer}, journal={arXiv preprint arXiv:cs/0305016}, year={2003}, archivePrefix={arXiv}, eprint={cs/0305016}, primaryClass={cs.CG cs.GT} }
fekete2003the
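A quick way to check who wins a particular placement is to sample the board and count which player owns each sample point. A Monte Carlo sketch (unrelated to the paper's exact analysis; the board size and points are arbitrary):

```python
import random

def black_fraction(white_pts, black_pts, width=1.0, height=1.0, samples=200_000, seed=0):
    """Estimate the fraction of the width-by-height board closer to a black point than to any white one."""
    rng = random.Random(seed)
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    wins = 0
    for _ in range(samples):
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if min(d2(p, b) for b in black_pts) < min(d2(p, w) for w in white_pts):
            wins += 1
    return wins / samples

# n = 1 on the unit square: Wilma takes the centre, and Barney cannot beat one half.
white = [(0.5, 0.5)]
black = [(0.5001, 0.5)]
print(black_fraction(white, black))   # just under 0.5
```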
arxiv-671087
cs/0305017
Cluster-based Specification Techniques in Dempster-Shafer Theory
<|reference_start|>Cluster-based Specification Techniques in Dempster-Shafer Theory: When reasoning with uncertainty there are many situations where evidences are not only uncertain but their propositions may also be weakly specified in the sense that it may not be certain to which event a proposition is referring. It is then crucial not to combine such evidences in the mistaken belief that they are referring to the same event. This situation would become manageable if the evidences could be clustered into subsets representing events that should be handled separately. In an earlier article we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition a set of evidences into subsets, each subset representing a separate event. In this article we will not only find the most plausible subset for each piece of evidence, we will also find the plausibility for every subset that the evidence belongs to the subset. Also, when the number of subsets is uncertain we aim to find a posterior probability distribution regarding the number of subsets.<|reference_end|>
arxiv
@article{schubert2003cluster-based, title={Cluster-based Specification Techniques in Dempster-Shafer Theory}, author={Johan Schubert}, journal={in Symbolic and Quantitative Approaches to Reasoning and Uncertainty, C. Froidevaux and J. Kohlas (Eds.), Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty (ECSQARU'95), pp. 395-404, Universite' de Fribourg, Switzerland, 3-5 July 1995, Springer-Verlag (LNAI 946), Berlin, 1995}, year={2003}, number={FOA-B-95-00078-3.4-SE}, archivePrefix={arXiv}, eprint={cs/0305017}, primaryClass={cs.AI cs.NE} }
schubert2003cluster-based
arxiv-671088
cs/0305018
Cluster-based Specification Techniques in Dempster-Shafer Theory for an Evidential Intelligence Analysis of MultipleTarget Tracks (Thesis Abstract)
<|reference_start|>Cluster-based Specification Techniques in Dempster-Shafer Theory for an Evidential Intelligence Analysis of MultipleTarget Tracks (Thesis Abstract): In Intelligence Analysis it is of vital importance to manage uncertainty. Intelligence data is almost always uncertain and incomplete, making it necessary to reason and take decisions under uncertainty. One way to manage the uncertainty in Intelligence Analysis is Dempster-Shafer Theory. This thesis contains five results regarding multiple target tracks and intelligence specification.<|reference_end|>
arxiv
@article{schubert2003cluster-based, title={Cluster-based Specification Techniques in Dempster-Shafer Theory for an Evidential Intelligence Analysis of MultipleTarget Tracks (Thesis Abstract)}, author={Johan Schubert}, journal={AI Communications 8(2) (1995) 107-110}, year={2003}, number={FOA-B-95-00079-3.4-SE}, archivePrefix={arXiv}, eprint={cs/0305018}, primaryClass={cs.AI cs.NE} }
schubert2003cluster-based
arxiv-671089
cs/0305019
On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory
<|reference_start|>On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory: Thomas M. Strat has developed a decision-theoretic apparatus for Dempster-Shafer theory (Decision analysis using belief functions, Intern. J. Approx. Reason. 4(5/6), 391-417, 1990). In this apparatus, expected utility intervals are constructed for different choices. The choice with the highest expected utility is preferable to others. However, to find the preferred choice when the expected utility interval of one choice is included in that of another, it is necessary to interpolate a discerning point in the intervals. This is done by the parameter rho, defined as the probability that the ambiguity about the utility of every nonsingleton focal element will turn out as favorable as possible. If there are several different decision makers, we might sometimes be more interested in having the highest expected utility among the decision makers rather than only trying to maximize our own expected utility regardless of choices made by other decision makers. The preference of each choice is then determined by the probability of yielding the highest expected utility. This probability is equal to the maximal interval length of rho under which an alternative is preferred. We must here take into account not only the choices already made by other decision makers but also the rational choices we can assume to be made by later decision makers. In Strat's apparatus, an assumption, unwarranted by the evidence at hand, has to be made about the value of rho. We demonstrate that no such assumption is necessary. It is sufficient to assume a uniform probability distribution for rho to be able to discern the most preferable choice. We discuss when this approach is justifiable.<|reference_end|>
arxiv
@article{schubert2003on, title={On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory}, author={Johan Schubert}, journal={International Journal of Approximate Reasoning 13(3), 185-200, 1995}, year={2003}, number={FOA-B-95-00097-3.4-SE}, archivePrefix={arXiv}, eprint={cs/0305019}, primaryClass={cs.AI} }
schubert2003on
arxiv-671090
cs/0305020
Specifying nonspecific evidence
<|reference_start|>Specifying nonspecific evidence: In an earlier article [J. Schubert, On nonspecific evidence, Int. J. Intell. Syst. 8(6), 711-725 (1993)] we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition into subsets a set of several pieces of evidence with propositions that are weakly specified in the sense that it may be uncertain to which event a proposition is referring. Each subset in the partitioning is representing a separate event. The metaconflict function was derived as the plausibility that the partitioning is correct when viewing the conflict in Dempster's rule within each subset as a newly constructed piece of metalevel evidence with a proposition giving support against the entire partitioning. In this article we extend the results of the previous article. We will not only find the most plausible subset for each piece of evidence as was done in the earlier article. In addition we will specify each piece of nonspecific evidence, in the sense that we find to which events the proposition might be referring, by finding the plausibility for every subset that this piece of evidence belongs to the subset. In doing this we will automatically receive an indication that some evidence might be false. We will then develop a new methodology to exploit these newly specified pieces of evidence in a subsequent reasoning process. This will include methods to discount evidence based on their degree of falsity and on their degree of credibility due to a partial specification of affiliation, as well as a refined method to infer the event of each subset.<|reference_end|>
arxiv
@article{schubert2003specifying, title={Specifying nonspecific evidence}, author={Johan Schubert}, journal={International Journal of Intelligent Systems 11(8), 525-563, 1996}, year={2003}, number={FOA-B-96-00174-3.4-SE}, archivePrefix={arXiv}, eprint={cs/0305020}, primaryClass={cs.AI cs.NE} }
schubert2003specifying
arxiv-671091
cs/0305021
Creating Prototypes for Fast Classification in Dempster-Shafer Clustering
<|reference_start|>Creating Prototypes for Fast Classification in Dempster-Shafer Clustering: We develop a classification method for incoming pieces of evidence in Dempster-Shafer theory. This methodology is based on previous work with clustering and specification of originally nonspecific evidence. This methodology is here put in order for fast classification of future incoming pieces of evidence by comparing them with prototypes representing the clusters, instead of making a full clustering of all evidence. This method has a computational complexity of O(M * N) for each new piece of evidence, where M is the maximum number of subsets and N is the number of prototypes chosen for each subset. That is, a computational complexity independent of the total number of previously arrived pieces of evidence. The parameters M and N are typically fixed and domain dependent in any application.<|reference_end|>
arxiv
@article{schubert2003creating, title={Creating Prototypes for Fast Classification in Dempster-Shafer Clustering}, author={Johan Schubert}, journal={in Qualitative and Quantitative Practical Reasoning, Proceedings of the First International Joint Conference on Qualitative and Quantitative Practical Reasoning (ECSQARU-FAPR'97), pp. 525-535, Bad Honnef, Germany, 9-12 June 1997, Springer-Verlag (LNAI 1244), Berlin, 1997}, year={2003}, number={FOA-B-97-00244-505-SE}, archivePrefix={arXiv}, eprint={cs/0305021}, primaryClass={cs.AI cs.NE} }
schubert2003creating
arxiv-671092
cs/0305022
Applying Data Mining and Machine Learning Techniques to Submarine Intelligence Analysis
<|reference_start|>Applying Data Mining and Machine Learning Techniques to Submarine Intelligence Analysis: We describe how specialized database technology and data analysis methods were applied by the Swedish defense to help deal with the violation of Swedish marine territory by foreign submarine intruders during the Eighties and early Nineties. Among several approaches tried, some yielded interesting information, although most of the key questions remain unanswered. We conclude with a survey of belief-function- and genetic-algorithm-based methods which were proposed to support interpretation of intelligence reports and prediction of future submarine positions, respectively.<|reference_end|>
arxiv
@article{bergsten2003applying, title={Applying Data Mining and Machine Learning Techniques to Submarine Intelligence Analysis}, author={Ulla Bergsten and Johan Schubert and Per Svensson}, journal={in Proceedings of the Third International Conference on Knowledge Discovery and Data Mining (KDD'97), pp. 127-130, Newport Beach, USA, 14-17 August 1997, The AAAI Press, Menlo Park, 1997}, year={2003}, number={FOA-B-97-00263-505-SE}, archivePrefix={arXiv}, eprint={cs/0305022}, primaryClass={cs.AI cs.DB cs.NE} }
bergsten2003applying
arxiv-671093
cs/0305023
Fast Dempster-Shafer clustering using a neural network structure
<|reference_start|>Fast Dempster-Shafer clustering using a neural network structure: In this paper we study a problem within Dempster-Shafer theory where 2**n - 1 pieces of evidence are clustered by a neural structure into n clusters. The clustering is done by minimizing a metaconflict function. Previously we developed a method based on iterative optimization. However, for large scale problems we need a method with lower computational complexity. The neural structure was found to be effective and much faster than iterative optimization for larger problems. While the growth in metaconflict was faster for the neural structure compared with iterative optimization in medium sized problems, the metaconflict per cluster and evidence was moderate. The neural structure was able to find a global minimum over ten runs for problem sizes up to six clusters.<|reference_end|>
arxiv
@article{schubert2003fast, title={Fast Dempster-Shafer clustering using a neural network structure}, author={Johan Schubert}, journal={in Proceedings of the Seventh International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU'98), pp. 1438-1445, Université de La Sorbonne, Paris, France, 6-10 July 1998, Editions EDK, Paris, 1998}, year={2003}, number={FOA-B-98-00355-505-SE}, archivePrefix={arXiv}, eprint={cs/0305023}, primaryClass={cs.AI cs.NE} }
schubert2003fast
arxiv-671094
cs/0305024
A neural network and iterative optimization hybrid for Dempster-Shafer clustering
<|reference_start|>A neural network and iterative optimization hybrid for Dempster-Shafer clustering: In this paper we extend an earlier result within Dempster-Shafer theory ["Fast Dempster-Shafer Clustering Using a Neural Network Structure," in Proc. Seventh Int. Conf. Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 98)] where a large number of pieces of evidence are clustered into subsets by a neural network structure. The clustering is done by minimizing a metaconflict function. Previously we developed a method based on iterative optimization. While the neural method had a much lower computation time than iterative optimization, its average clustering performance was not as good. Here, we develop a hybrid of the two methods. We let the neural structure do the initial clustering in order to achieve a high computational performance. Its solution is fed as the initial state to the iterative optimization in order to improve the clustering performance.<|reference_end|>
arxiv
@article{schubert2003a, title={A neural network and iterative optimization hybrid for Dempster-Shafer clustering}, author={Johan Schubert}, journal={in Proceedings of EuroFusion98 International Conference on Data Fusion (EF'98), M. Bedworth, J. O'Brien (Eds.), pp. 29-36, Great Malvern, UK, 6-7 October 1998}, year={2003}, number={FOA-B-98-00383-505-SE}, archivePrefix={arXiv}, eprint={cs/0305024}, primaryClass={cs.AI cs.NE} }
schubert2003a
arxiv-671095
cs/0305025
Simultaneous Dempster-Shafer clustering and gradual determination of number of clusters using a neural network structure
<|reference_start|>Simultaneous Dempster-Shafer clustering and gradual determination of number of clusters using a neural network structure: In this paper we extend an earlier result within Dempster-Shafer theory ["Fast Dempster-Shafer Clustering Using a Neural Network Structure," in Proc. Seventh Int. Conf. Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'98)] where several pieces of evidence were clustered into a fixed number of clusters using a neural structure. This was done by minimizing a metaconflict function. We now develop a method for simultaneous clustering and determination of number of clusters during iteration in the neural structure. We let the output signals of neurons represent the degree to which a piece of evidence belongs to a corresponding cluster. From these we derive a probability distribution regarding the number of clusters, which gradually during the iteration is transformed into a determination of number of clusters. This gradual determination is fed back into the neural structure at each iteration to influence the clustering process.<|reference_end|>
arxiv
@article{schubert2003simultaneous, title={Simultaneous Dempster-Shafer clustering and gradual determination of number of clusters using a neural network structure}, author={Johan Schubert}, journal={in Proceedings of the 1999 Information, Decision and Control Conference (IDC'99), pp. 401-406, Adelaide, Australia, 8-10 February 1999, IEEE, Piscataway, 1999}, year={2003}, doi={10.1109/IDC.1999.754191}, number={FOA-B-99-00431-505-SE}, archivePrefix={arXiv}, eprint={cs/0305025}, primaryClass={cs.AI cs.NE} }
schubert2003simultaneous
arxiv-671096
cs/0305026
Fast Dempster-Shafer clustering using a neural network structure
<|reference_start|>Fast Dempster-Shafer clustering using a neural network structure: In this article we study a problem within Dempster-Shafer theory where 2**n - 1 pieces of evidence are clustered by a neural structure into n clusters. The clustering is done by minimizing a metaconflict function. Previously we developed a method based on iterative optimization. However, for large scale problems we need a method with lower computational complexity. The neural structure was found to be effective and much faster than iterative optimization for larger problems. While the growth in metaconflict was faster for the neural structure compared with iterative optimization in medium sized problems, the metaconflict per cluster and evidence was moderate. The neural structure was able to find a global minimum over ten runs for problem sizes up to six clusters.<|reference_end|>
arxiv
@article{schubert2003fast, title={Fast Dempster-Shafer clustering using a neural network structure}, author={Johan Schubert}, journal={in Information, Uncertainty and Fusion, B. Bouchon-Meunier, R.R. Yager, L.A. Zadeh (Eds.), pp. 419-430, Kluwer Academic Publishers, Boston, 1999}, year={2003}, number={FOA-B-99-00504-505-SE}, archivePrefix={arXiv}, eprint={cs/0305026}, primaryClass={cs.AI cs.NE} }
schubert2003fast
arxiv-671097
cs/0305027
Managing Inconsistent Intelligence
<|reference_start|>Managing Inconsistent Intelligence: In this paper we demonstrate that it is possible to manage intelligence in constant time as a pre-process to information fusion through a series of processes dealing with issues such as clustering reports, ranking reports with respect to importance, extraction of prototypes from clusters and immediate classification of newly arriving intelligence reports. These methods are used when intelligence reports arrive which concern different events that should be handled independently, when it is not known a priori to which event each intelligence report is related. We use clustering that runs as a back-end process to partition the intelligence into subsets representing the events, and in parallel, a fast classification that runs as a front-end process in order to put the newly arriving intelligence into its correct information fusion process.<|reference_end|>
arxiv
@article{schubert2003managing, title={Managing Inconsistent Intelligence}, author={Johan Schubert}, journal={in Proceedings of the Third International Conference on Information Fusion (FUSION 2000), pp. TuB4/10-16, Paris, France, 10-13 July 2000, International Society of Information Fusion, Sunnyvale, 2000}, year={2003}, number={FOA-B-00-00619-505-SE}, archivePrefix={arXiv}, eprint={cs/0305027}, primaryClass={cs.AI cs.NE} }
schubert2003managing
arxiv-671098
cs/0305028
Dempster-Shafer clustering using Potts spin mean field theory
<|reference_start|>Dempster-Shafer clustering using Potts spin mean field theory: In this article we investigate a problem within Dempster-Shafer theory where 2**q - 1 pieces of evidence are clustered into q clusters by minimizing a metaconflict function, or equivalently, by minimizing the sum of weight of conflict over all clusters. Previously one of us developed a method based on a Hopfield and Tank model. However, for very large problems we need a method with lower computational complexity. We demonstrate that the weight of conflict of evidence can, as an approximation, be linearized and mapped to an antiferromagnetic Potts Spin model. This facilitates efficient numerical solution, even for large problem sizes. Optimal or nearly optimal solutions are found for Dempster-Shafer clustering benchmark tests with a time complexity of approximately O(N**2 log**2 N). Furthermore, an isomorphism between the antiferromagnetic Potts spin model and a graph optimization problem is shown. The graph model has dynamic variables living on the links, which have a priori probabilities that are directly related to the pairwise conflict between pieces of evidence. Hence, the relations between three different models are shown.<|reference_end|>
arxiv
@article{bengtsson2003dempster-shafer, title={Dempster-Shafer clustering using Potts spin mean field theory}, author={Mats Bengtsson and Johan Schubert}, journal={Soft Computing 5(3) (2001) 215-228}, year={2003}, number={FOI-S-0027-SE}, archivePrefix={arXiv}, eprint={cs/0305028}, primaryClass={cs.AI cs.NE} }
bengtsson2003dempster-shafer
arxiv-671099
cs/0305029
Conflict-based Force Aggregation
<|reference_start|>Conflict-based Force Aggregation: In this paper we present an application where we put together two methods for clustering and classification into a force aggregation method. Both methods are based on conflicts between elements. These methods work with different types of elements (intelligence reports, vehicles, military units) on different hierarchical levels using specific conflict assessment methods on each level. We use Dempster-Shafer theory for conflict calculation between elements, Dempster-Shafer clustering for clustering these elements, and templates for classification. The result of these processes is a complete force aggregation on all levels handled.<|reference_end|>
arxiv
@article{cantwell2003conflict-based, title={Conflict-based Force Aggregation}, author={John Cantwell and Johan Schubert and Johan Walter}, journal={in CD Proceedings of the Sixth International Command and Control Research and Technology Symposium, Track 7, Paper 031, pp. 1-15, Annapolis, USA, 19-21 June 2001, US Department of Defence CCRP, Washington, DC, 2001}, year={2003}, number={FOI-S-0040-SE}, archivePrefix={arXiv}, eprint={cs/0305029}, primaryClass={cs.AI cs.NE} }
cantwell2003conflict-based
arxiv-671100
cs/0305030
Reliable Force Aggregation Using a Refined Evidence Specification from Dempster-Shafer Clustering
<|reference_start|>Reliable Force Aggregation Using a Refined Evidence Specification from Dempster-Shafer Clustering: In this paper we develop methods for selection of templates and use these templates to recluster an already performed Dempster-Shafer clustering, taking into account intelligence-to-template fit during the reclustering phase. By this process the risk of erroneous force aggregation based on some misplaced pieces of evidence from the first clustering process is greatly reduced. Finally, a more reliable force aggregation is performed using the result of the second clustering. These steps are taken in order to maintain most of the excellent computational performance of Dempster-Shafer clustering, while at the same time improving on the clustering result by including some higher relations among intelligence reports described by the templates. The new improved algorithm has a computational complexity of O(n**3 log**2 n) compared to O(n**2 log**2 n) of standard Dempster-Shafer clustering using Potts spin mean field theory.<|reference_end|>
arxiv
@article{schubert2003reliable, title={Reliable Force Aggregation Using a Refined Evidence Specification from Dempster-Shafer Clustering}, author={Johan Schubert}, journal={in Proceedings of the Fourth Annual Conference on Information Fusion (FUSION 2001), pp. TuB3/15-22, Montreal, Canada, 7-10 August 2001, International Society of Information Fusion, Sunnyvale, 2001}, year={2003}, number={FOI-S-0050-SE}, archivePrefix={arXiv}, eprint={cs/0305030}, primaryClass={cs.AI cs.NE} }
schubert2003reliable