corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-672901 | cs/0505028 | A linear memory algorithm for Baum-Welch training | <|reference_start|>A linear memory algorithm for Baum-Welch training: Background: Baum-Welch training is an expectation-maximisation algorithm for training the emission and transition probabilities of hidden Markov models in a fully automated way. Methods and results: We introduce a linear space algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(L M T_max (T + E)) time for one Baum-Welch iteration, where T_max is the maximum number of states that any state is connected to. The most memory efficient algorithm until now was the checkpointing algorithm with O(log(L) M) memory and O(log(L) L M T_max) time requirement. Our novel algorithm thus renders the memory requirement completely independent of the length of the training sequences. More generally, for an n-hidden Markov model and n input sequences of length L, the memory requirement of O(log(L) L^(n-1) M) is reduced to O(L^(n-1) M) memory while the running time is changed from O(log(L) L^n M T_max + L^n (T + E)) to O(L^n M T_max (T + E)). Conclusions: For the large class of hidden Markov models used for example in gene prediction, whose number of states does not scale with the length of the input sequence, our novel algorithm can thus be both faster and more memory-efficient than any of the existing algorithms.<|reference_end|> | arxiv | @article{miklos2005a,
title={A linear memory algorithm for Baum-Welch training},
author={Istvan Miklos, Irmtraud M. Meyer},
journal={BMC Bioinformatics (2005) 6:231},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505028},
primaryClass={cs.LG cs.DS q-bio.QM}
} | miklos2005a |
arxiv-672902 | cs/0505029 | Automated Improvement for Component Reuse | <|reference_start|>Automated Improvement for Component Reuse: Software component reuse is the key to significant gains in productivity. However, the major problem is the difficulty of identifying and developing potentially reusable components. This paper concentrates on our approach to the development of reusable software components. A prototype tool has been developed, known as the Reuse Assessor and Improver System (RAIS), which can interactively identify, analyse, assess, and modify abstractions, attributes and architectures that support reuse. Practical and objective reuse guidelines are used to represent reuse knowledge and to perform domain analysis. It takes existing components, provides systematic reuse assessment based on reuse advice and analysis, and produces components that are improved for reuse. Our work on guidelines has been extended to a large-scale industrial application.<|reference_end|> | arxiv | @article{ramachandran2005automated,
title={Automated Improvement for Component Reuse},
author={Muthu Ramachandran},
journal={INFOCOMP Journal of Computer Science, 4(1), 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505029},
primaryClass={cs.SE}
} | ramachandran2005automated |
arxiv-672903 | cs/0505030 | Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix | <|reference_start|>Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix: We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n x n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n,d)=softO(n^omega d) operations, with omega the exponent of matrix multiplication over K, then the algorithm uses softO(MM(n,d)) operations in K. The softO notation indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module<|reference_end|> | arxiv | @article{storjohann2005computing,
title={Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix},
author={Arne Storjohann (UWO), Gilles Villard (LIP)},
journal={Proceedings of the 2005 International Symposium on Symbolic and
Algebraic Computation (2005) 309-316},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505030},
primaryClass={cs.SC cs.CC}
} | storjohann2005computing |
arxiv-672904 | cs/0505031 | Estudo e Implementacao de Algoritmos de Roteamento sobre Grafos em um Sistema de Informacoes Geograficas | <|reference_start|>Estudo e Implementacao de Algoritmos de Roteamento sobre Grafos em um Sistema de Informacoes Geograficas: This article presents an implementation of graphical software with various operations research algorithms, such as minimum path, minimum tree, the Chinese postman problem, and the travelling salesman problem.<|reference_end|> | arxiv | @article{sampaio2005estudo,
title={Estudo e Implementacao de Algoritmos de Roteamento sobre Grafos em um
Sistema de Informacoes Geograficas},
author={Rudini M. Sampaio, Horacio H. Yanasse},
journal={INFOCOMP Journal of Computer Science, 3(1), 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505031},
primaryClass={cs.MS cs.DS}
} | sampaio2005estudo |
arxiv-672905 | cs/0505032 | Broadcast Channels with Cooperating Decoders | <|reference_start|>Broadcast Channels with Cooperating Decoders: We consider the problem of communicating over the general discrete memoryless broadcast channel (BC) with partially cooperating receivers. In our setup, receivers are able to exchange messages over noiseless conference links of finite capacities, prior to decoding the messages sent from the transmitter. In this paper we formulate the general problem of broadcast with cooperation. We first find the capacity region for the case where the BC is physically degraded. Then, we give achievability results for the general broadcast channel, for both the two independent messages case and the single common message case.<|reference_end|> | arxiv | @article{dabora2005broadcast,
title={Broadcast Channels with Cooperating Decoders},
author={Ron Dabora, Sergio D. Servetto (Cornell University)},
journal={IEEE Trans. Inform. Theory, 52(12):5438-5454, 2006.},
year={2005},
doi={10.1109/TIT.2006.885478},
archivePrefix={arXiv},
eprint={cs/0505032},
primaryClass={cs.IT math.IT}
} | dabora2005broadcast |
arxiv-672906 | cs/0505033 | Parametric Verification of a Group Membership Algorithm | <|reference_start|>Parametric Verification of a Group Membership Algorithm: We address the problem of verifying clique avoidance in the TTP protocol. TTP allows several stations embedded in a car to communicate. It has many mechanisms to ensure robustness to faults. In particular, it has an algorithm that allows a station to recognize itself as faulty and leave the communication. This algorithm must satisfy the crucial 'non-clique' property: it is impossible to have two or more disjoint groups of stations communicating exclusively with stations in their own group. In this paper, we propose an automatic verification method for an arbitrary number of stations $N$ and a given number of faults $k$. We give an abstraction that allows the algorithm to be modeled by means of unbounded (parametric) counter automata. We have checked the non-clique property on this model in the case of one fault, using the ALV tool as well as the LASH tool.<|reference_end|> | arxiv | @article{bouajjani2005parametric,
title={Parametric Verification of a Group Membership Algorithm},
author={Ahmed Bouajjani, Agathe Merceron},
journal={arXiv preprint arXiv:cs/0505033},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505033},
primaryClass={cs.LO}
} | bouajjani2005parametric |
arxiv-672907 | cs/0505034 | Essential Incompleteness of Arithmetic Verified by Coq | <|reference_start|>Essential Incompleteness of Arithmetic Verified by Coq: A constructive proof of the Goedel-Rosser incompleteness theorem has been completed using the Coq proof assistant. Some theory of classical first-order logic over an arbitrary language is formalized. A development of primitive recursive functions is given, and all primitive recursive functions are proved to be representable in a weak axiom system. Formulas and proofs are encoded as natural numbers, and functions operating on these codes are proved to be primitive recursive. The weak axiom system is proved to be essentially incomplete. In particular, Peano arithmetic is proved to be consistent in Coq's type theory and therefore is incomplete.<|reference_end|> | arxiv | @article{o'connor2005essential,
title={Essential Incompleteness of Arithmetic Verified by Coq},
author={Russell O'Connor},
journal={Russell O'Connor, Essential Incompleteness of Arithmetic Verified
by Coq, Lecture Notes in Computer Science, Volume 3603, Aug 2005, Pages 245 -
260},
year={2005},
doi={10.1007/11541868_16},
archivePrefix={arXiv},
eprint={cs/0505034},
primaryClass={cs.LO}
} | o'connor2005essential |
arxiv-672908 | cs/0505035 | Beyond Hypertree Width: Decomposition Methods Without Decompositions | <|reference_start|>Beyond Hypertree Width: Decomposition Methods Without Decompositions: The general intractability of the constraint satisfaction problem has motivated the study of restrictions on this problem that permit polynomial-time solvability. One major line of work has focused on structural restrictions, which arise from restricting the interaction among constraint scopes. In this paper, we engage in a mathematical investigation of generalized hypertree width, a structural measure that had until recently eluded study. We obtain a number of computational results, including a simple proof of the tractability of CSP instances having bounded generalized hypertree width.<|reference_end|> | arxiv | @article{chen2005beyond,
title={Beyond Hypertree Width: Decomposition Methods Without Decompositions},
author={Hubie Chen and Victor Dalmau},
journal={arXiv preprint arXiv:cs/0505035},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505035},
primaryClass={cs.CC cs.AI}
} | chen2005beyond |
arxiv-672909 | cs/0505036 | Minimal Eulerian trail in a labeled digraph | <|reference_start|>Minimal Eulerian trail in a labeled digraph: Let $G$ be an Eulerian directed graph with an arc-labeling such that arcs going out from the same vertex have different labels. In this work, we present an algorithm to construct the Eulerian trail starting at an arbitrary vertex $v$ that has the minimum lexicographical label among the labels of all Eulerian trails starting at this vertex. We also show an application of this algorithm to construct the minimal de Bruijn sequence of a language.<|reference_end|> | arxiv | @article{matamala2005minimal,
title={Minimal Eulerian trail in a labeled digraph},
author={Martin Matamala and Eduardo Moreno},
journal={arXiv preprint arXiv:cs/0505036},
year={2005},
number={CMM-B-04/08-108},
archivePrefix={arXiv},
eprint={cs/0505036},
primaryClass={cs.DM}
} | matamala2005minimal |
arxiv-672910 | cs/0505037 | General Recursion via Coinductive Types | <|reference_start|>General Recursion via Coinductive Types: A fertile field of research in theoretical computer science investigates the representation of general recursive functions in intensional type theories. Among the most successful approaches are: the use of wellfounded relations, implementation of operational semantics, formalization of domain theory, and inductive definition of domain predicates. Here, a different solution is proposed: exploiting coinductive types to model infinite computations. To every type A we associate a type of partial elements Partial(A), coinductively generated by two constructors: the first, return(a) just returns an element a:A; the second, step(x), adds a computation step to a recursive element x:Partial(A). We show how this simple device is sufficient to formalize all recursive functions between two given types. It allows the definition of fixed points of finitary, that is, continuous, operators. We will compare this approach to different ones from the literature. Finally, we mention that the formalization, with appropriate structural maps, defines a strong monad.<|reference_end|> | arxiv | @article{capretta2005general,
title={General Recursion via Coinductive Types},
author={Venanzio Capretta},
journal={Logical Methods in Computer Science, Volume 1, Issue 2 (July 13,
2005) lmcs:2265},
year={2005},
doi={10.2168/LMCS-1(2:1)2005},
archivePrefix={arXiv},
eprint={cs/0505037},
primaryClass={cs.LO}
} | capretta2005general |
arxiv-672911 | cs/0505038 | Efficient Management of Short-Lived Data | <|reference_start|>Efficient Management of Short-Lived Data: Motivated by the increasing prominence of loosely-coupled systems, such as mobile and sensor networks, which are characterised by intermittent connectivity and volatile data, we study the tagging of data with so-called expiration times. More specifically, when data are inserted into a database, they may be tagged with time values indicating when they expire, i.e., when they are regarded as stale or invalid and thus are no longer considered part of the database. In a number of applications, expiration times are known and can be assigned at insertion time. We present data structures and algorithms for online management of data tagged with expiration times. The algorithms are based on fully functional, persistent treaps, which are a combination of binary search trees with respect to a primary attribute and heaps with respect to a secondary attribute. The primary attribute implements primary keys, and the secondary attribute stores expiration times in a minimum heap, thus keeping a priority queue of tuples to expire. A detailed and comprehensive experimental study demonstrates the well-behavedness and scalability of the approach as well as its efficiency with respect to a number of competitors.<|reference_end|> | arxiv | @article{schmidt2005efficient,
title={Efficient Management of Short-Lived Data},
author={Albrecht Schmidt, Christian S. Jensen},
journal={arXiv preprint arXiv:cs/0505038},
year={2005},
number={TimeCenter, TR-82},
archivePrefix={arXiv},
eprint={cs/0505038},
primaryClass={cs.DB}
} | schmidt2005efficient |
arxiv-672912 | cs/0505039 | Methods for comparing rankings of search engine results | <|reference_start|>Methods for comparing rankings of search engine results: In this paper we present a number of measures that compare rankings of search engine results. We apply these measures to five queries that were monitored daily for two periods of about 21 days each. Rankings of the different search engines (Google, Yahoo and Teoma for text searches and Google, Yahoo and Picsearch for image searches) are compared on a daily basis, in addition to longitudinal comparisons of the same engine for the same query over time. The results and rankings of the two periods are compared as well.<|reference_end|> | arxiv | @article{bar-ilan2005methods,
title={Methods for comparing rankings of search engine results},
author={Judit Bar-Ilan, Mazlita Mat-Hassan, Mark Levene},
journal={arXiv preprint arXiv:cs/0505039},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505039},
primaryClass={cs.IR}
} | bar-ilan2005methods |
arxiv-672913 | cs/0505040 | Asynchronous pseudo-systems | <|reference_start|>Asynchronous pseudo-systems: The paper introduces the concept of the asynchronous pseudo-system. Its purpose is to correct, generalize, and continue the study of asynchronous systems (the models of asynchronous circuits) that was started in [1], [2].<|reference_end|> | arxiv | @article{vlad2005asynchronous,
title={Asynchronous pseudo-systems},
author={Serban E. Vlad},
journal={Analele Universitatii Oradea, Fasc. Matematica, Tom XI, 2004,
133-174},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505040},
primaryClass={cs.OH}
} | vlad2005asynchronous |
arxiv-672914 | cs/0505041 | Relational reasoning in the region connection calculus | <|reference_start|>Relational reasoning in the region connection calculus: This paper is mainly concerned with the relation-algebraic aspects of the well-known Region Connection Calculus (RCC). We show that the contact relation algebra (CRA) of a certain RCC model is not atomic complete and hence infinite. So in general an extensional composition table for the RCC cannot be obtained by simply refining the RCC8 relations. After having shown that each RCC model is a consistent model of the RCC11 CT, we give an exhaustive investigation of extensional interpretations of the RCC11 CT. More importantly, we show that the complemented closed disk algebra is a representation for the relation algebra determined by the RCC11 table. The domain of this algebra contains two classes of regions, the closed disks and the closures of their complements in the real plane.<|reference_end|> | arxiv | @article{li2005relational,
title={Relational reasoning in the region connection calculus},
author={Yongming Li, Sanjiang Li and Mingsheng Ying},
journal={arXiv preprint arXiv:cs/0505041},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505041},
primaryClass={cs.AI cs.LO}
} | li2005relational |
arxiv-672915 | cs/0505042 | Iterative MILP Methods for Vehicle Control Problems | <|reference_start|>Iterative MILP Methods for Vehicle Control Problems: Mixed integer linear programming (MILP) is a powerful tool for planning and control problems because of its modeling capability and the availability of good solvers. However, for large models, MILP methods suffer computationally. In this paper, we present iterative MILP algorithms that address this issue. We consider trajectory generation problems with obstacle avoidance requirements and minimum time trajectory generation problems. The algorithms use fewer binary variables than standard MILP methods and require less computational effort.<|reference_end|> | arxiv | @article{earl2005iterative,
title={Iterative MILP Methods for Vehicle Control Problems},
author={Matthew Earl and Raffaello D'Andrea},
journal={M. G. Earl and R. D'Andrea, "Iterative MILP Methods for Vehicle
Control Problems," IEEE Transactions on Robotics, Volume 21, Issue 6, pages
1158-1167, Dec. 2005.},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505042},
primaryClass={cs.RO}
} | earl2005iterative |
arxiv-672916 | cs/0505043 | Estimacao Temporal da Deformacao entre Objectos utilizando uma Metodologia Fisica | <|reference_start|>Estimacao Temporal da Deformacao entre Objectos utilizando uma Metodologia Fisica: In this paper, a methodology is presented to estimate the deformation between two objects, taking their physical properties into account. This methodology can be used, for example, in Computational Vision or Computer Graphics applications, and consists in physically modeling the objects by means of the Finite Element Method, establishing correspondences between some of their data points using Modal Matching, and finally determining the displacement field, that is, the intermediate shapes, by solving the Lagrange dynamic equilibrium equation. In many of the possible applications of the presented methodology, it is necessary to quantify the deformation involved, as well as to estimate only the non-rigid component of the global deformation. The solutions adopted to satisfy these aims are also presented.<|reference_end|> | arxiv | @article{tavares2005estimacao,
title={Estimacao Temporal da Deformacao entre Objectos utilizando uma
Metodologia Fisica},
author={Joao Manuel R. S. Tavares, Raquel R. Pinho},
journal={INFOCOMP Journal of Computer Science, 4(1), 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505043},
primaryClass={cs.GR cs.CG}
} | tavares2005estimacao |
arxiv-672917 | cs/0505044 | Separating a Real-Life Nonlinear Image Mixture | <|reference_start|>Separating a Real-Life Nonlinear Image Mixture: When acquiring an image of a paper document, the image printed on the back page sometimes shows through. The mixture of the front- and back-page images thus obtained is markedly nonlinear, and thus constitutes a good real-life test case for nonlinear blind source separation. This paper addresses a difficult version of this problem, corresponding to the use of "onion skin" paper, which results in a relatively strong nonlinearity of the mixture, which becomes close to singular in the lighter regions of the images. The separation is achieved through the MISEP technique, which is an extension of the well known INFOMAX method. The separation results are assessed with objective quality measures. They show an improvement over the results obtained with linear separation, but leave room for further improvement.<|reference_end|> | arxiv | @article{almeida2005separating,
title={Separating a Real-Life Nonlinear Image Mixture},
author={Luis B. Almeida},
journal={arXiv preprint arXiv:cs/0505044},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505044},
primaryClass={cs.NE cs.AI cs.IT math.IT}
} | almeida2005separating |
arxiv-672918 | cs/0505045 | A T Step Ahead Optimal Target Detection Algorithm for a Multi Sensor Surveillance System | <|reference_start|>A T Step Ahead Optimal Target Detection Algorithm for a Multi Sensor Surveillance System: This paper presents a methodology for optimal target detection in a multi sensor surveillance system. The system consists of mobile sensors that guard a rectangular surveillance zone crisscrossed by moving targets. Targets percolate through the surveillance zone in a Poisson fashion with uniform velocities. Under these statistics, this paper computes a motion strategy for a sensor that maximizes target detections for the next T time steps. A coordination mechanism between sensors ensures that the overlap between sensor areas is reduced. This coordination mechanism is interleaved with the motion strategy computation to reduce detections of the same target by more than one sensor. To avoid an exhaustive search in the joint space of all sensors, the coordination mechanism constrains the search by assigning priorities to the sensors. A comparison of this methodology with other multi target tracking schemes verifies its efficacy in maximizing detections. A tabulation of these comparisons is reported in the results section of the paper.<|reference_end|> | arxiv | @article{krishna2005a,
title={A T Step Ahead Optimal Target Detection Algorithm for a Multi Sensor
Surveillance System},
author={K Madhava Krishna, Henry Hexmoor and Shravan Sogani},
journal={arXiv preprint arXiv:cs/0505045},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505045},
primaryClass={cs.MA cs.RO}
} | krishna2005a |
arxiv-672919 | cs/0505046 | Optimum Signal Linear Detector in the Discrete Wavelet Transform-Domain | <|reference_start|>Optimum Signal Linear Detector in the Discrete Wavelet Transform-Domain: The problem of known signal detection in Additive White Gaussian Noise is considered. In this paper a new detection algorithm based on Discrete Wavelet Transform pre-processing and threshold comparison is introduced. Current approaches described in [7] use the maximum value obtained in the wavelet domain for decision. Here, we use all available information in the wavelet domain with excellent results. Detector performance is presented in Probability of detection curves for a fixed probability of false alarm.<|reference_end|> | arxiv | @article{melgar2005optimum,
title={Optimum Signal Linear Detector in the Discrete Wavelet Transform-Domain},
author={Ignacio Melgar, Jaime Gomez, Juan Seijas},
journal={WSEAS Transactions on Communications, ISSN 1109-2742, issue 3, vol
2, p253-258, July-2003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505046},
primaryClass={cs.IT cs.IR math.IT}
} | melgar2005optimum |
arxiv-672920 | cs/0505047 | A Simple Proof of the Fáry-Wagner Theorem | <|reference_start|>A Simple Proof of the Fáry-Wagner Theorem: We give a simple proof of the following fundamental result independently due to Fáry (1948) and Wagner (1936): Every plane graph has a drawing in which every edge is straight.<|reference_end|> | arxiv | @article{wood2005a,
title={A Simple Proof of the F{\'a}ry-Wagner Theorem},
author={David R. Wood},
journal={arXiv preprint arXiv:cs/0505047},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505047},
primaryClass={cs.CG}
} | wood2005a |
arxiv-672921 | cs/0505048 | Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes | <|reference_start|>Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes: We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information theoretic lower bound for the number of tests in a combinatorial group testing regimen.<|reference_end|> | arxiv | @article{eppstein2005improved,
title={Improved Combinatorial Group Testing Algorithms for Real-World Problem
Sizes},
author={David Eppstein, Michael T. Goodrich, and Daniel S. Hirschberg},
journal={SIAM J. Computing 36(5):1360-1375, 2007},
year={2005},
doi={10.1137/050631847},
archivePrefix={arXiv},
eprint={cs/0505048},
primaryClass={cs.DS}
} | eppstein2005improved |
arxiv-672922 | cs/0505049 | Fading-Resilient Super-Orthogonal Space-Time Signal Sets: Can Good Constellations Survive in Fading? | <|reference_start|>Fading-Resilient Super-Orthogonal Space-Time Signal Sets: Can Good Constellations Survive in Fading?: In this correspondence, first-tier indirect (direct) discernible constellation expansions are defined for generalized orthogonal designs. The expanded signal constellation, leading to so-called super-orthogonal codes, allows the achievement of coding gains in addition to diversity gains enabled by orthogonal designs. Conditions that allow the shape of an expanded multidimensional constellation to be preserved at the channel output, on an instantaneous basis, are derived. It is further shown that, for such constellations, the channel alters neither the relative distances nor the angles between signal points in the expanded signal constellation.<|reference_end|> | arxiv | @article{ionescu2005fading-resilient,
title={Fading-Resilient Super-Orthogonal Space-Time Signal Sets: Can Good
Constellations Survive in Fading?},
author={Dumitru Mihai Ionescu and Zhiyuan Yan},
journal={IEEE Transactions on Information Theory, Vol. 53, No. 9, SEPTEMBER
2007, pp. 3219-3225},
year={2005},
doi={10.1109/TIT.2007.903148},
archivePrefix={arXiv},
eprint={cs/0505049},
primaryClass={cs.IT math.IT}
} | ionescu2005fading-resilient |
arxiv-672923 | cs/0505050 | The QDF file format: an electronic system to describe ancient andean khipus | <|reference_start|>The QDF file format: an electronic system to describe ancient andean khipus: With the goal of providing researchers of the ancient Andean khipus with a tool to share and process the current corpus of these ancient information devices electronically, I present in this paper a proposal for a Quipu Description Format (QDF), an XML-based file format designed to describe such documents in a systematic and computer-standard way.<|reference_end|> | arxiv | @article{izquierdo2005the,
title={The QDF file format: an electronic system to describe ancient andean
khipus},
author={Manuel Arturo Izquierdo},
journal={arXiv preprint arXiv:cs/0505050},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505050},
primaryClass={cs.CY}
} | izquierdo2005the |
arxiv-672924 | cs/0505051 | Sub-Optimum Signal Linear Detector Using Wavelets and Support Vector Machines | <|reference_start|>Sub-Optimum Signal Linear Detector Using Wavelets and Support Vector Machines: The problem of known signal detection in Additive White Gaussian Noise is considered. In previous work, a new detection scheme was introduced by the authors, and it was demonstrated that optimum performance cannot be reached in a real implementation. In this paper we analyse Support Vector Machines (SVM) as an alternative, evaluating the results in terms of Probability of detection curves for a fixed Probability of false alarm.<|reference_end|> | arxiv | @article{gomez2005sub-optimum,
title={Sub-Optimum Signal Linear Detector Using Wavelets and Support Vector
Machines},
author={Jaime Gomez, Ignacio Melgar, Juan Seijas, Diego Andina},
journal={WSEAS Transactions on Communications, ISSN 1109-2742, issue 4, vol
2, p426-431, October-2003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505051},
primaryClass={cs.IR cs.NE}
} | gomez2005sub-optimum |
arxiv-672925 | cs/0505052 | Upgrading Pulse Detection with Time Shift Properties Using Wavelets and Support Vector Machines | <|reference_start|>Upgrading Pulse Detection with Time Shift Properties Using Wavelets and Support Vector Machines: Current approaches in pulse detection use domain transformations so as to concentrate frequency related information that can be distinguishable from noise. In real cases we do not know when the pulse will begin, so we need a time search process in which time windows are scheduled and analysed. Each window can contain the pulsed signal (either complete or incomplete) and / or noise. In this paper a simple search process will be introduced, allowing the algorithm to process more information, upgrading the capabilities in terms of probability of detection (Pd) and probability of false alarm (Pfa).<|reference_end|> | arxiv | @article{gomez2005upgrading,
title={Upgrading Pulse Detection with Time Shift Properties Using Wavelets and
Support Vector Machines},
author={Jaime Gomez, Ignacio Melgar, Juan Seijas},
journal={Proceedings of the World Automation Congress (WAC-04), Sevilla,
Spain, June-2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505052},
primaryClass={cs.IR cs.NE}
} | gomez2005upgrading |
arxiv-672926 | cs/0505053 | Wavelet Time Shift Properties Integration with Support Vector Machines | <|reference_start|>Wavelet Time Shift Properties Integration with Support Vector Machines: This paper presents a short evaluation of the integration of information derived from wavelet non-linear-time-invariant (non-LTI) projection properties using Support Vector Machines (SVM). These properties may give additional information to a classifier trying to detect known patterns hidden by noise. In the experiments we present a simple electromagnetic pulsed signal recognition scheme, where some improvement is achieved with respect to previous work. SVMs are used as a tool for information integration, exploiting some unique properties not easily found in neural networks.<|reference_end|> | arxiv | @article{gomez2005wavelet,
title={Wavelet Time Shift Properties Integration with Support Vector Machines},
author={Jaime Gomez, Ignacio Melgar, Juan Seijas},
journal={LNAI-3131 Modeling Decisions for Artificial Intelligence, ISSN
0302-9743, p49-59, Barcelona, Spain, August-2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505053},
primaryClass={cs.IR cs.NE}
} | gomez2005wavelet |
arxiv-672927 | cs/0505054 | The Partition Weight Enumerator of MDS Codes and its Applications | <|reference_start|>The Partition Weight Enumerator of MDS Codes and its Applications: A closed form formula of the partition weight enumerator of maximum distance separable (MDS) codes is derived for an arbitrary number of partitions. Using this result, some properties of MDS codes are discussed. The results are extended for the average binary image of MDS codes in finite fields of characteristic two. As an application, we study the multiuser error probability of Reed Solomon codes.<|reference_end|> | arxiv | @article{el-khamy2005the,
title={The Partition Weight Enumerator of MDS Codes and its Applications},
author={Mostafa El-Khamy and Robert J. McEliece},
journal={arXiv preprint arXiv:cs/0505054},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505054},
primaryClass={cs.IT math.IT}
} | el-khamy2005the |
arxiv-672928 | cs/0505055 | A Verifiable Partial Key Escrow, Based on McCurley Encryption Scheme | <|reference_start|>A Verifiable Partial Key Escrow, Based on McCurley Encryption Scheme: In this paper, we first propose two new concepts concerning the notion of key escrow encryption schemes: provable partiality and independency. Roughly speaking, we say that a scheme has provable partiality if the existence of a polynomial time algorithm for recovering the secret from the escrowed information implies a polynomial time algorithm that can solve a well-known intractable problem. In addition, we say that a scheme is independent if the secret key and the escrowed information are independent. Finally, we propose a new verifiable partial key escrow, which satisfies both of the above criteria. The new scheme uses the McCurley encryption scheme as its underlying scheme.<|reference_end|> | arxiv | @article{azimian2005a,
title={A Verifiable Partial Key Escrow, Based on McCurley Encryption Scheme},
author={Kooshiar Azimian, Javad Mohajeri, Mahmoud Salmasizadeh, Siamak Fayyaz},
journal={arXiv preprint arXiv:cs/0505055},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505055},
primaryClass={cs.CR cs.CC}
} | azimian2005a |
arxiv-672929 | cs/0505056 | Text Compression and Superfast Searching | <|reference_start|>Text Compression and Superfast Searching: In this paper, a new compression scheme for text is presented. The scheme is efficient in giving high compression ratios and enables superfast searching within the compressed text. Typical compression ratios of 70-80% and reductions in search time of 80-85% are the features of this scheme. Until now, a trade-off between high compression ratios and searchability within compressed text has been observed. In this paper, we show that the greater the compression, the faster the search. This finds applicability in many places where data is present as natural language text.<|reference_end|> | arxiv | @article{khurana2005text,
title={Text Compression and Superfast Searching},
author={Udayan Khurana (1), Anirudh Koul (1) ((1) Thapar Institute of
Engineering and Technology)},
journal={arXiv preprint arXiv:cs/0505056},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505056},
primaryClass={cs.IR cs.IT math.IT}
} | khurana2005text |
arxiv-672930 | cs/0505057 | Improved Bounds on the Parity-Check Density and Achievable Rates of Binary Linear Block Codes with Applications to LDPC Codes | <|reference_start|>Improved Bounds on the Parity-Check Density and Achievable Rates of Binary Linear Block Codes with Applications to LDPC Codes: We derive bounds on the asymptotic density of parity-check matrices and the achievable rates of binary linear block codes transmitted over memoryless binary-input output-symmetric (MBIOS) channels. The lower bounds on the density of arbitrary parity-check matrices are expressed in terms of the gap between the rate of these codes for which reliable communication is achievable and the channel capacity, and the bounds are valid for every sequence of binary linear block codes. These bounds address the question, previously considered by Sason and Urbanke, of how sparse the parity-check matrices of binary linear block codes can be as a function of the gap to capacity. Similarly to a previously reported bound by Sason and Urbanke, the new lower bounds on the parity-check density scale like the log of the inverse of the gap to capacity, but their tightness is improved (except for a binary symmetric/erasure channel, where they coincide with the previous bound). The new upper bounds on the achievable rates of binary linear block codes tighten previously reported bounds by Burshtein et al., and therefore make it possible to obtain tighter upper bounds on the thresholds of sequences of binary linear block codes under ML decoding. The bounds are applied to low-density parity-check (LDPC) codes, and the improvement in their tightness is exemplified numerically. The upper bounds on the achievable rates make it possible to assess the inherent loss in performance of various iterative decoding algorithms as compared to optimal ML decoding. The lower bounds on the asymptotic parity-check density are helpful in assessing the inherent tradeoff between the asymptotic performance of LDPC codes and their decoding complexity (per iteration) under message-passing decoding.<|reference_end|> | arxiv | @article{wiechman2005improved,
title={Improved Bounds on the Parity-Check Density and Achievable Rates of
Binary Linear Block Codes with Applications to LDPC Codes},
author={Gil Wiechman and Igal Sason},
journal={arXiv preprint arXiv:cs/0505057},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505057},
primaryClass={cs.IT math.IT}
} | wiechman2005improved |
arxiv-672931 | cs/0505058 | The Cyborg Astrobiologist: Scouting Red Beds for Uncommon Features with Geological Significance | <|reference_start|>The Cyborg Astrobiologist: Scouting Red Beds for Uncommon Features with Geological Significance: The 'Cyborg Astrobiologist' (CA) has undergone a second geological field trial, at a red sandstone site in northern Guadalajara, Spain, near Riba de Santiuste. The Cyborg Astrobiologist is a wearable computer and video camera system that has demonstrated a capability to find uncommon interest points in geological imagery in real-time in the field. The first (of three) geological structures that we studied was an outcrop of nearly homogeneous sandstone, which exhibits oxidized-iron impurities in red and an absence of these iron impurities in white. The white areas in these "red beds" have turned white because the iron has been removed by chemical reduction, perhaps by a biological agent. The computer vision system found in one instance several (iron-free) white spots to be uncommon and therefore interesting, as well as several small and dark nodules. The second geological structure contained white, textured mineral deposits on the surface of the sandstone, which were found by the CA to be interesting. The third geological structure was a 50 cm thick paleosol layer, with fossilized root structures of some plants, which were found by the CA to be interesting. A quasi-blind comparison of the Cyborg Astrobiologist's interest points for these images with the interest points determined afterwards by a human geologist shows that the Cyborg Astrobiologist concurred with the human geologist 68% of the time (true positive rate), with a 32% false positive rate and a 32% false negative rate. (abstract has been abridged).<|reference_end|> | arxiv | @article{mcguire2005the,
title={The Cyborg Astrobiologist: Scouting Red Beds for Uncommon Features with
Geological Significance},
author={Patrick C. McGuire, Enrique Diaz-Martinez, Jens Ormo, Javier
Gomez-Elvira, Jose A. Rodriguez-Manfredi, Eduardo Sebastian-Martinez, Helge
Ritter, Robert Haschke, Markus Oesker, Joerg Ontrup},
journal={Int.J.Astrobiol.4:101-113,2005},
year={2005},
doi={10.1017/S1473550405002533},
archivePrefix={arXiv},
eprint={cs/0505058},
primaryClass={cs.CV astro-ph cs.AI cs.CE cs.HC cs.RO cs.SE physics.ins-det q-bio.NC}
} | mcguire2005the |
arxiv-672932 | cs/0505059 | Consistent query answers on numerical databases under aggregate constraints | <|reference_start|>Consistent query answers on numerical databases under aggregate constraints: The problem of extracting consistent information from relational databases violating integrity constraints on numerical data is addressed. In particular, aggregate constraints defined as linear inequalities on aggregate-sum queries on input data are considered. The notion of repair as consistent set of updates at attribute-value level is exploited, and the characterization of several complexity issues related to repairing data and computing consistent query answers is provided.<|reference_end|> | arxiv | @article{flesca2005consistent,
title={Consistent query answers on numerical databases under aggregate
constraints},
author={Sergio Flesca, Filippo Furfaro, Francesco Parisi},
journal={arXiv preprint arXiv:cs/0505059},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505059},
primaryClass={cs.DB}
} | flesca2005consistent |
arxiv-672933 | cs/0505060 | A Unified Subspace Outlier Ensemble Framework for Outlier Detection in High Dimensional Spaces | <|reference_start|>A Unified Subspace Outlier Ensemble Framework for Outlier Detection in High Dimensional Spaces: The task of outlier detection is to find small groups of data objects that are exceptional when compared with the rest of the large amount of data. Detection of such outliers is important for many applications such as fraud detection and customer migration. Most such applications are high dimensional domains in which the data may contain hundreds of dimensions. However, the outlier detection problem itself is not well defined and none of the existing definitions are widely accepted, especially in high dimensional space. In this paper, our first contribution is to propose a unified framework for outlier detection in high dimensional spaces from an ensemble-learning viewpoint. In our new framework, the outlying-ness of each data object is measured by fusing outlier factors in different subspaces using a combination function. Accordingly, we show that all existing research on outlier detection can be regarded as special cases in the unified framework with respect to the set of subspaces considered and the type of combination function used. In addition, to demonstrate the usefulness of the ensemble-learning based outlier detection framework, we developed a very simple and fast algorithm, namely SOE1 (Subspace Outlier Ensemble using 1-dimensional Subspaces), in which only subspaces with one dimension are used for mining outliers from large categorical datasets. The SOE1 algorithm needs only two scans over the dataset and hence is very appealing in real data mining applications. Experimental results on real datasets and large synthetic datasets show that: (1) SOE1 has comparable performance to state-of-the-art outlier detection algorithms in identifying true outliers and (2) SOE1 can be an order of magnitude faster than one of the fastest outlier detection algorithms known so far.<|reference_end|> | arxiv | @article{he2005a,
title={A Unified Subspace Outlier Ensemble Framework for Outlier Detection in
High Dimensional Spaces},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0505060},
year={2005},
number={TR-04-08},
archivePrefix={arXiv},
eprint={cs/0505060},
primaryClass={cs.DB cs.AI}
} | he2005a |
arxiv-672934 | cs/0505061 | EAH: A New Encoder based on Adaptive Variable-length Codes | <|reference_start|>EAH: A New Encoder based on Adaptive Variable-length Codes: Adaptive variable-length codes associate a variable-length codeword to the symbol being encoded depending on the previous symbols in the input string. This class of codes has been recently presented in [Dragos Trinca, arXiv:cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive variable-length codes of order one and Huffman's algorithm, have been recently presented in [Dragos Trinca, ITCC 2004]. In this paper, we extend the work done so far by the following contributions: first, we propose an improved generalization of these algorithms, called EAHn. Second, we compute the entropy bounds for EAHn, using the well-known bounds for Huffman's algorithm. Third, we discuss implementation details and give reports of experimental results obtained on some well-known corpora. Finally, we describe a parallel version of EAHn using the PRAM model of computation.<|reference_end|> | arxiv | @article{trinca2005eah:,
title={EAH: A New Encoder based on Adaptive Variable-length Codes},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0505061},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505061},
primaryClass={cs.DS}
} | trinca2005eah: |
arxiv-672935 | cs/0505062 | Gossip Codes for Fingerprinting: Construction, Erasure Analysis and Pirate Tracing | <|reference_start|>Gossip Codes for Fingerprinting: Construction, Erasure Analysis and Pirate Tracing: This work presents two new construction techniques for q-ary Gossip codes from t-designs and traceability schemes. These Gossip codes achieve the shortest code length specified in terms of code parameters and can withstand erasures in digital fingerprinting applications. This work presents the construction of embedded Gossip codes for extending an existing Gossip code into a bigger code. It discusses the construction of concatenated codes and the realisation of the erasure model through concatenated codes.<|reference_end|> | arxiv | @article{veerubhotla2005gossip,
title={Gossip Codes for Fingerprinting: Construction, Erasure Analysis and
Pirate Tracing},
author={Ravi S. Veerubhotla, Ashutosh Saxena, V.P. Gulati, A.K. Pujari},
journal={Journal of Universal Computer Science, vol. 11, no. 1 (2005),
122-149},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505062},
primaryClass={cs.CR}
} | veerubhotla2005gossip |
arxiv-672936 | cs/0505063 | Approximate reasoning for real-time probabilistic processes | <|reference_start|>Approximate reasoning for real-time probabilistic processes: We develop a pseudo-metric analogue of bisimulation for generalized semi-Markov processes. The kernel of this pseudo-metric corresponds to bisimulation; thus we have extended bisimulation for continuous-time probabilistic processes to a much broader class of distributions than exponential distributions. This pseudo-metric gives a useful handle on approximate reasoning in the presence of numerical information -- such as probabilities and time -- in the model. We give a fixed point characterization of the pseudo-metric. This makes available coinductive reasoning principles for reasoning about distances. We demonstrate that our approach is insensitive to potentially ad hoc articulations of distance by showing that it is intrinsic to an underlying uniformity. We provide a logical characterization of this uniformity using a real-valued modal logic. We show that several quantitative properties of interest are continuous with respect to the pseudo-metric. Thus, if two processes are metrically close, then observable quantitative properties of interest are indeed close.<|reference_end|> | arxiv | @article{gupta2005approximate,
title={Approximate reasoning for real-time probabilistic processes},
author={Vineet Gupta and Radha Jagadeesan and Prakash Panangaden},
journal={Logical Methods in Computer Science, Volume 2, Issue 1 (March 7,
2006) lmcs:2258},
year={2005},
doi={10.2168/LMCS-2(1:4)2006},
archivePrefix={arXiv},
eprint={cs/0505063},
primaryClass={cs.LO}
} | gupta2005approximate |
arxiv-672937 | cs/0505064 | Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks | <|reference_start|>Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks: A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal clues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.<|reference_end|> | arxiv | @article{mcguire2005multi-modal,
title={Multi-Modal Human-Machine Communication for Instructing Robot Grasping
Tasks},
author={P.C. McGuire, J. Fritsch, J. J. Steil, F. Roethling, G. A. Fink, S.
Wachsmuth, G. Sagerer, H. Ritter},
journal={Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Lausanne, Switzerland, IEEE
publications, pp. 1082-1089 (2002)},
year={2005},
doi={10.1109/IRDS.2002.1043875},
archivePrefix={arXiv},
eprint={cs/0505064},
primaryClass={cs.HC cs.AI cs.CV cs.LG cs.RO}
} | mcguire2005multi-modal |
arxiv-672938 | cs/0505065 | A dissipative particle swarm optimization | <|reference_start|>A dissipative particle swarm optimization: A dissipative particle swarm optimization is developed according to the self-organization of dissipative structures. Negative entropy is introduced to construct an open dissipative system that is far from equilibrium, so as to drive the irreversible evolution process toward better fitness. Testing on two multimodal functions indicates that it improves performance effectively.<|reference_end|> | arxiv | @article{xie2005a,
title={A dissipative particle swarm optimization},
author={Xiao-Feng Xie, Wen-Jun Zhang, Zhi-Lian Yang},
journal={arXiv preprint arXiv:cs/0505065},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505065},
primaryClass={cs.NE}
} | xie2005a |
arxiv-672939 | cs/0505066 | Decision Sort and its Parallel Implementation | <|reference_start|>Decision Sort and its Parallel Implementation: In this paper, a sorting technique is presented that takes as input a data set whose primary key domain is known to the sorting algorithm, and works with a time efficiency of O(n+k), where k is the size of the primary key domain. It is shown that the algorithm has applicability over a wide range of data sets. Later, a parallel formulation of the same is proposed and its effectiveness is argued. Though this algorithm is applicable over a wide range of general data sets, it finds special application (much superior to others) in sorting information that arrives in parts and in cases where the input data is huge in size.<|reference_end|> | arxiv | @article{khuarana2005decision,
title={Decision Sort and its Parallel Implementation},
author={Udayan Khuarana},
journal={arXiv preprint arXiv:cs/0505066},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505066},
primaryClass={cs.DS}
} | khuarana2005decision |
arxiv-672940 | cs/0505067 | Optimizing semiconductor devices by self-organizing particle swarm | <|reference_start|>Optimizing semiconductor devices by self-organizing particle swarm: A self-organizing particle swarm is presented. It works in a dissipative state by employing a small inertia weight, which, according to experimental analysis on a simplified model, provides fast convergence. Then, by recognizing and replacing inactive particles according to the process deviation information of device parameters, fluctuation is introduced so as to drive the irreversible evolution process toward better fitness. Testing on benchmark functions and an application example for device optimization with a designed fitness function indicates that it improves performance effectively.<|reference_end|> | arxiv | @article{xie2005optimizing,
title={Optimizing semiconductor devices by self-organizing particle swarm},
author={Xiao-Feng Xie, Wen-Jun Zhang, De-Chun Bi},
journal={arXiv preprint arXiv:cs/0505067},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505067},
primaryClass={cs.NE}
} | xie2005optimizing |
arxiv-672941 | cs/0505068 | Handling equality constraints by adaptive relaxing rule for swarm algorithms | <|reference_start|>Handling equality constraints by adaptive relaxing rule for swarm algorithms: An adaptive constraint relaxing rule for swarm algorithms to handle problems with equality constraints is presented. The feasible space of such problems may be similar to the ridge function class, which is hard for swarm algorithms to search. To enter the solution space more easily, a relaxed quasi-feasible space is introduced and shrunk adaptively. The experimental results on benchmark functions are compared with the performance of other algorithms, which shows its efficiency.<|reference_end|> | arxiv | @article{xie2005handling,
title={Handling equality constraints by adaptive relaxing rule for swarm
algorithms},
author={Xiao-Feng Xie, Wen-Jun Zhang, De-Chun Bi},
journal={arXiv preprint arXiv:cs/0505068},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505068},
primaryClass={cs.NE}
} | xie2005handling |
arxiv-672942 | cs/0505069 | Handling boundary constraints for numerical optimization by particle swarm flying in periodic search space | <|reference_start|>Handling boundary constraints for numerical optimization by particle swarm flying in periodic search space: The periodic mode is analyzed together with two conventional boundary handling modes for particle swarm. By providing an infinite space that comprises periodic copies of the original search space, it avoids the possible disorganization of the particle swarm that is induced by undesired mutations at the boundary. The results on benchmark functions show that particle swarm with the periodic mode is capable of improving the search performance significantly, compared with that of conventional modes and other algorithms.<|reference_end|> | arxiv | @article{zhang2005handling,
title={Handling boundary constraints for numerical optimization by particle
swarm flying in periodic search space},
author={Wen-Jun Zhang, Xiao-Feng Xie, De-Chun Bi},
journal={arXiv preprint arXiv:cs/0505069},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505069},
primaryClass={cs.NE}
} | zhang2005handling |
arxiv-672943 | cs/0505070 | SWAF: Swarm Algorithm Framework for Numerical Optimization | <|reference_start|>SWAF: Swarm Algorithm Framework for Numerical Optimization: A swarm algorithm framework (SWAF), realized by agent-based modeling, is presented to solve numerical optimization problems. Each agent is a bare-bones cognitive architecture, which learns knowledge by appropriately deploying a set of simple rules in fast and frugal heuristics. Two essential categories of rules, the generate-and-test and the problem-formulation rules, are implemented, and macro rules formed both by simple combination and by subsymbolic deployment of multiple rules are also studied. Experimental results on benchmark problems are presented, and a performance comparison between SWAF and other existing algorithms indicates that it is efficient.<|reference_end|> | arxiv | @article{xie2005swaf:,
title={SWAF: Swarm Algorithm Framework for Numerical Optimization},
author={Xiao-Feng Xie, Wen-Jun Zhang},
journal={arXiv preprint arXiv:cs/0505070},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505070},
primaryClass={cs.NE}
} | xie2005swaf: |
arxiv-672944 | cs/0505071 | Summarization Techniques for Pattern Collections in Data Mining | <|reference_start|>Summarization Techniques for Pattern Collections in Data Mining: Discovering patterns from data is an important task in data mining. There exist techniques to find large collections of many kinds of patterns from data very efficiently. A collection of patterns can be regarded as a summary of the data. A major difficulty with patterns is that pattern collections summarizing the data well are often very large. In this dissertation we describe methods for summarizing pattern collections in order to make them also more understandable. More specifically, we focus on the following themes: 1) Quality value simplifications. 2) Pattern orderings. 3) Pattern chains and antichains. 4) Change profiles. 5) Inverse pattern discovery.<|reference_end|> | arxiv | @article{mielikäinen2005summarization,
title={Summarization Techniques for Pattern Collections in Data Mining},
author={Taneli Mielik\"ainen},
journal={arXiv preprint arXiv:cs/0505071},
year={2005},
number={A-2005-1, Department of Computer Science, University of Helsinki},
archivePrefix={arXiv},
eprint={cs/0505071},
primaryClass={cs.DB cs.AI cs.DS}
} | mielikäinen2005summarization |
arxiv-672945 | cs/0505072 | Steganographic Codes -- a New Problem of Coding Theory | <|reference_start|>Steganographic Codes -- a New Problem of Coding Theory: To study how to design steganographic algorithms more efficiently, a new coding problem -- steganographic codes (abbreviated stego-codes) -- is presented in this paper. The stego-codes are defined over the field with $q(q\ge2)$ elements. First, a method of constructing linear stego-codes is proposed by using the direct sum of vector subspaces. The problem of linear stego-codes is then converted to an algebraic problem by introducing the concept of the $t$th dimension of a vector space, and some bounds on the length of stego-codes are obtained, from which the maximum length embeddable (MLE) code is brought up. It is shown that there is a corresponding relation between MLE codes and perfect error-correcting codes. Furthermore, the classification of all MLE codes and a lower bound on the number of binary MLE codes are obtained based on the corresponding results on perfect codes. Finally, hiding redundancy is defined to evaluate the performance of stego-codes.<|reference_end|> | arxiv | @article{zhang2005steganographic,
title={Steganographic Codes -- a New Problem of Coding Theory},
author={Weiming Zhang and Shiqu Li},
journal={arXiv preprint arXiv:cs/0505072},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505072},
primaryClass={cs.CR}
} | zhang2005steganographic |
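The link between MLE codes and perfect error-correcting codes has a classic concrete instance: matrix embedding with the binary [7,4] Hamming code, which hides 3 message bits in 7 cover bits while changing at most one bit. The sketch below is our own textbook-style illustration of that connection, not the construction of the paper; all function names are ours.

```python
import numpy as np

# Parity-check matrix of the binary [7,4] Hamming code (a perfect code);
# column j (1-indexed) is the binary expansion of j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def embed(cover, message):
    """Hide 3 message bits in 7 cover bits, flipping at most one bit."""
    s = (H @ cover + message) % 2      # syndrome that still must be realized
    idx = 4 * s[0] + 2 * s[1] + s[2]   # 0 means no change is needed
    stego = cover.copy()
    if idx:
        stego[idx - 1] ^= 1            # flip the bit whose H-column equals s
    return stego

def extract(stego):
    """The embedded bits are the syndrome of the stego block."""
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
message = np.array([1, 1, 0])
assert (extract(embed(cover, message)) == message).all()
```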
arxiv-672946 | cs/0505073 | Reasoning about transfinite sequences | <|reference_start|>Reasoning about transfinite sequences: We introduce a family of temporal logics to specify the behavior of systems with Zeno behaviors. We extend linear-time temporal logic LTL to allow models admitting Zeno sequences of actions, with quantitative temporal operators indexed by ordinals replacing the standard next-time and until future-time operators. Our aim is to control such systems by designing controllers that safely work on $\omega$-sequences but interact synchronously with the system in order to restrict their behaviors. We show that the satisfiability problem for the logics working on $\omega^k$-sequences is EXPSPACE-complete when the integers are represented in binary, and PSPACE-complete with a unary representation. To do so, we substantially extend standard results about LTL by introducing a new class of succinct ordinal automata that can encode the interaction between the different quantitative temporal operators.<|reference_end|> | arxiv | @article{demri2005reasoning,
title={Reasoning about transfinite sequences},
author={St\'ephane Demri and David Nowak},
journal={International Journal of Foundations of Computer Science,
18(1):87-112, February 2007},
year={2005},
doi={10.1142/S0129054107004589},
archivePrefix={arXiv},
eprint={cs/0505073},
primaryClass={cs.LO cs.CC}
} | demri2005reasoning |
arxiv-672947 | cs/0505074 | Instance-Independent View Serializability for Semistructured Databases | <|reference_start|>Instance-Independent View Serializability for Semistructured Databases: Semistructured databases require tailor-made concurrency control mechanisms since traditional solutions for the relational model have been shown to be inadequate. Such mechanisms need to take full advantage of the hierarchical structure of semistructured data, for instance allowing concurrent updates of subtrees of, or even individual elements in, XML documents. We present an approach for concurrency control which is document-independent in the sense that two schedules of semistructured transactions are considered equivalent if they are equivalent on all possible documents. We prove that it is decidable in polynomial time whether two given schedules in this framework are equivalent. This also solves view serializability for semistructured schedules in time polynomial in the size of the schedule and exponential in the number of transactions.<|reference_end|> | arxiv | @article{dekeyser2005instance-independent,
title={Instance-Independent View Serializability for Semistructured Databases},
author={Stijn Dekeyser, Jan Hidders, Jan Paredaens, Roel Vercammen},
journal={arXiv preprint arXiv:cs/0505074},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505074},
primaryClass={cs.DB}
} | dekeyser2005instance-independent |
arxiv-672948 | cs/0505075 | On Searching a Table Consistent with Division Poset | <|reference_start|>On Searching a Table Consistent with Division Poset: Suppose $P_n=\{1,2,...,n\}$ is a partially ordered set with the partial order defined by divisibility, that is, for any two distinct elements $i,j\in P_n$ satisfying $i$ divides $j$, $i<_{P_n} j$. A table $A_n=\{a_i|i=1,2,...,n\}$ of distinct real numbers is said to be \emph{consistent} with $P_n$, provided for any two distinct elements $i,j\in \{1,2,...,n\}$ satisfying $i$ divides $j$, $a_i< a_j$. Given a real number $x$, we want to determine whether $x\in A_n$, by comparing $x$ with as few entries of $A_n$ as possible. In this paper we investigate the complexity $\tau(n)$, measured by the number of comparisons, of the above search problem. We present a $\frac{55n}{72}+O(\ln^2 n)$ search algorithm for $A_n$ and prove a lower bound $({3/4}+{17/2160})n+O(1)$ on $\tau(n)$ by using an adversary argument.<|reference_end|> | arxiv | @article{cheng2005on,
title={On Searching a Table Consistent with Division Poset},
author={Yongxi Cheng, Xi Chen, Yiqun Lisa Yin},
journal={arXiv preprint arXiv:cs/0505075},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505075},
primaryClass={cs.DM cs.DS}
} | cheng2005on |
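To make the search model concrete, here is a small sketch of divisibility-based pruning: comparing $x$ with $a_i$ eliminates every multiple of $i$ when $x < a_i$ (their entries are even larger) and every divisor of $i$ when $x > a_i$. This is our own naive illustration; the paper's algorithm chooses pivots far more carefully to approach the stated bounds.

```python
def search(a, x):
    """Search for x in a table a[1..n] consistent with the division poset;
    a[0] is unused padding. Returns (index or None, comparisons used)."""
    n = len(a) - 1
    alive = set(range(1, n + 1))
    comparisons = 0
    while alive:
        i = min(alive)   # naive pivot choice
        comparisons += 1
        if x == a[i]:
            return i, comparisons
        if x < a[i]:
            alive = {j for j in alive if j % i != 0}   # drop i and its multiples
        else:
            alive = {j for j in alive if i % j != 0}   # drop i and its divisors
    return None, comparisons

# The identity table a[i] = i is consistent with divisibility.
a = [None] + list(range(1, 17))
print(search(a, 11))
```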
arxiv-672949 | cs/0505076 | On the Solution of Graph Isomorphism by Dynamical Algorithms | <|reference_start|>On the Solution of Graph Isomorphism by Dynamical Algorithms: In recent years, several polynomial algorithms of a dynamical nature have been proposed to address the graph isomorphism problem. In this paper we propose a generalization of an approach presented in cond-mat/0209112 and find that this dynamical algorithm is covered by a combinatorial approach. It is possible to infer that polynomial dynamical algorithms addressing graph isomorphism are covered by suitable polynomial combinatorial approaches and thus suffer from the same weaknesses as the latter.<|reference_end|> | arxiv | @article{golovkins2005on,
title={On the Solution of Graph Isomorphism by Dynamical Algorithms},
author={Marats Golovkins},
journal={arXiv preprint arXiv:cs/0505076},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505076},
primaryClass={cs.CC}
} | golovkins2005on |
arxiv-672950 | cs/0505077 | Efficient Approximation of Convex Recolorings | <|reference_start|>Efficient Approximation of Convex Recolorings: A coloring of a tree is convex if the vertices that pertain to any color induce a connected subtree; a partial coloring (which assigns colors to some of the vertices) is convex if it can be completed to a convex (total) coloring. Convex colorings of trees arise in areas such as phylogenetics and linguistics; e.g., a perfect phylogenetic tree is one in which the states of each character induce a convex coloring of the tree. Research on perfect phylogeny is usually focused on finding a tree so that few predetermined partial colorings of its vertices are convex. When a coloring of a tree is not convex, it is desirable to know "how far" it is from a convex one. In [19], a natural measure for this distance, called the recoloring distance, was defined: the minimal number of color changes at the vertices needed to make the coloring convex. This can be viewed as minimizing the number of "exceptional vertices" w.r.t. a closest convex coloring. The problem was proved to be NP-hard even for colored strings. In this paper we continue the work of [19], and present a 2-approximation algorithm for convex recoloring of strings whose running time is O(cn), where c is the number of colors and n is the size of the input, and an O(cn^2)-time 3-approximation algorithm for convex recoloring of trees.<|reference_end|> | arxiv | @article{moran2005efficient,
title={Efficient Approximation of Convex Recolorings},
author={Shlomo Moran and Sagi Snir},
journal={arXiv preprint arXiv:cs/0505077},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505077},
primaryClass={cs.DS}
} | moran2005efficient |
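On strings (paths), convexity itself is easy to check: a coloring is convex exactly when every color occupies one contiguous block. The sketch below is our illustration of that check only; the paper's contribution is approximating the recoloring distance, which is NP-hard to compute exactly.

```python
def is_convex_string(colors):
    """True iff every color in the sequence forms a single contiguous run."""
    closed = set()            # colors whose run has already ended
    prev = None
    for c in colors:
        if c != prev:
            if c in closed:
                return False  # the color re-opens a second block
            if prev is not None:
                closed.add(prev)
            prev = c
    return True

assert is_convex_string("aaabbbcc")
assert not is_convex_string("aabbaa")   # 'a' occupies two blocks
```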
arxiv-672951 | cs/0505078 | On the Parity-Check Density and Achievable Rates of LDPC Codes | <|reference_start|>On the Parity-Check Density and Achievable Rates of LDPC Codes: The paper introduces new bounds on the asymptotic density of parity-check matrices and the achievable rates under ML decoding of binary linear block codes transmitted over memoryless binary-input output-symmetric channels. The lower bounds on the parity-check density are expressed in terms of the gap between the channel capacity and the rate of the codes for which reliable communication is achievable, and are valid for every sequence of binary linear block codes. The bounds address the question, previously considered by Sason and Urbanke, of how sparse parity-check matrices of binary linear block codes can be as a function of the gap to capacity. The new upper bounds on the achievable rates of binary linear block codes tighten previously reported bounds by Burshtein et al., and therefore make it possible to obtain tighter upper bounds on the thresholds of sequences of binary linear block codes under ML decoding. The bounds are applied to low-density parity-check (LDPC) codes, and the improvement in their tightness is exemplified numerically.<|reference_end|> | arxiv | @article{wiechman2005on,
title={On the Parity-Check Density and Achievable Rates of LDPC Codes},
author={Gil Wiechman and Igal Sason},
journal={arXiv preprint arXiv:cs/0505078},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505078},
primaryClass={cs.IT math.IT}
} | wiechman2005on |
arxiv-672952 | cs/0505079 | Application of Kolmogorov complexity and universal codes to identity testing and nonparametric testing of serial independence for time series | <|reference_start|>Application of Kolmogorov complexity and universal codes to identity testing and nonparametric testing of serial independence for time series: We show that Kolmogorov complexity and its estimators, such as universal codes (or data compression methods), can be applied to hypothesis testing within the framework of classical mathematical statistics. Methods for identity testing and nonparametric testing of serial independence for time series are suggested.<|reference_end|> | arxiv | @article{ryabko2005application,
title={Application of Kolmogorov complexity and universal codes to identity
testing and nonparametric testing of serial independence for time series},
author={Boris Ryabko and Jaakko Astola and Alex Gammerman},
journal={arXiv preprint arXiv:cs/0505079},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505079},
primaryClass={cs.CC}
} | ryabko2005application |
arxiv-672953 | cs/0505080 | Dominance Based Crossover Operator for Evolutionary Multi-objective Algorithms | <|reference_start|>Dominance Based Crossover Operator for Evolutionary Multi-objective Algorithms: In spite of the recent rapid growth of the Evolutionary Multi-objective Optimization (EMO) research field, there have been few attempts to adapt general variation operators to the particular context of the quest for the Pareto-optimal set. The only exceptions are some mating restrictions that take into account the distance between potential mates - but contradictory conclusions have been reported. This paper introduces a particular mating restriction for Evolutionary Multi-objective Algorithms, based on the Pareto dominance relation: the partner of a non-dominated individual will be preferably chosen among the individuals of the population that it dominates. Coupled with the BLX crossover operator, two different ways of generating offspring are proposed. This recombination scheme is validated within the well-known NSGA-II framework on three bi-objective benchmark problems and one real-world bi-objective constrained optimization problem. An acceleration of the progress of the population toward the Pareto set is observed on all problems.<|reference_end|> | arxiv | @article{roudenko2005dominance,
title={Dominance Based Crossover Operator for Evolutionary Multi-objective
Algorithms},
author={Olga Roudenko (INRIA Futurs), Marc Schoenauer (INRIA Futurs)},
journal={In Parallel Problem Solving from Nature 2004 [OAI
oai:hal.inria.fr:inria-00000095_v1] -
http://inria.ccsd.cnrs.fr/inria-00000095},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505080},
primaryClass={cs.AI cs.NA}
} | roudenko2005dominance |
arxiv-672954 | cs/0505081 | An ontological approach to the construction of problem-solving models | <|reference_start|>An ontological approach to the construction of problem-solving models: Our ongoing work aims at defining an ontology-centered approach for building expertise models for the CommonKADS methodology. This approach (which we have named "OntoKADS") is founded on a core problem-solving ontology which distinguishes between two conceptualization levels: at an object level, a set of concepts enables us to define classes of problem-solving situations, and at a meta level, a set of meta-concepts represents modeling primitives. In this article, our presentation of OntoKADS will focus on the core ontology and, in particular, on roles - the primitive situated at the interface between domain knowledge and reasoning, and whose ontological status is still much debated. We first propose a coherent, global, ontological framework which enables us to account for this primitive. We then show how this novel characterization of the primitive allows the definition of new rules for the construction of expertise models.<|reference_end|> | arxiv | @article{bruaux2005an,
title={An ontological approach to the construction of problem-solving models},
author={Sabine Bruaux (LaRIA), Gilles Kassel (LaRIA), Gilles Morel (LaRIA)},
journal={arXiv preprint arXiv:cs/0505081},
year={2005},
number={LRR 2005-03},
archivePrefix={arXiv},
eprint={cs/0505081},
primaryClass={cs.AI}
} | bruaux2005an |
arxiv-672955 | cs/0505082 | Fast generators for the Diffie-Hellman key agreement protocol and malicious standards | <|reference_start|>Fast generators for the Diffie-Hellman key agreement protocol and malicious standards: The Diffie-Hellman key agreement protocol is based on taking large powers of a generator of a prime-order cyclic group. Some generators allow faster exponentiation. We show that to a large extent, using the fast generators is as secure as using a randomly chosen generator. On the other hand, we show that if there is some case in which fast generators are less secure, then this could be used by a malicious authority to generate a standard for the Diffie-Hellman key agreement protocol which has a hidden trapdoor.<|reference_end|> | arxiv | @article{tsaban2005fast,
title={Fast generators for the Diffie-Hellman key agreement protocol and
malicious standards},
author={Boaz Tsaban},
journal={Information Processing Letters 99 (2006), 145--148},
year={2005},
doi={10.1016/j.ipl.2005.11.025},
archivePrefix={arXiv},
eprint={cs/0505082},
primaryClass={cs.CR cs.CC}
} | tsaban2005fast |
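The speed advantage of a fast generator comes from square-and-multiply exponentiation: when $g$ is small (say $g = 2$), each multiplication by $g$ is essentially a shift. A toy sketch of the key agreement itself is below; the parameters are illustrative textbook values, and real deployments use large standardized primes.

```python
import secrets

def dh_exchange(p, g):
    """Toy Diffie-Hellman key agreement in the group Z_p^* with generator g."""
    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice sends A to Bob
    B = pow(g, b, p)                   # Bob sends B to Alice
    k_alice, k_bob = pow(B, a, p), pow(A, b, p)
    assert k_alice == k_bob            # both derive g^(ab) mod p
    return k_alice

print(dh_exchange(23, 5))   # insecure toy sizes, for illustration only
```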
arxiv-672956 | cs/0505083 | Defensive forecasting | <|reference_start|>Defensive forecasting: We consider how to make probability forecasts of binary labels. Our main mathematical result is that for any continuous gambling strategy used for detecting disagreement between the forecasts and the actual labels, there exists a forecasting strategy whose forecasts are ideal as far as this gambling strategy is concerned. A forecasting strategy obtained in this way from a gambling strategy demonstrating a strong law of large numbers is simplified and studied empirically.<|reference_end|> | arxiv | @article{vovk2005defensive,
title={Defensive forecasting},
author={Vladimir Vovk, Akimichi Takemura, Glenn Shafer},
journal={Proceedings of the Tenth International Workshop on Artificial
Intelligence and Statistics, 2005, pages 365--372.},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505083},
primaryClass={cs.LG cs.AI}
} | vovk2005defensive |
arxiv-672957 | cs/0505084 | An explicit formula for the number of tunnels in digital objects | <|reference_start|>An explicit formula for the number of tunnels in digital objects: An important concept in digital geometry for computer imagery is that of a tunnel. In this paper we obtain a formula for the number of tunnels as a function of the number of the object's vertices, pixels, holes, connected components, and 2x2 grid squares. It can be used to test a digital object, in particular a digital curve, for tunnel-freedom.<|reference_end|> | arxiv | @article{brimkov2005an,
title={An explicit formula for the number of tunnels in digital objects},
author={Valentin Brimkov, Angelo Maimone, Giorgio Nordo},
journal={arXiv preprint arXiv:cs/0505084},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505084},
primaryClass={cs.DM cs.CG cs.CV}
} | brimkov2005an |
arxiv-672958 | cs/0505085 | Improving PARMA Trailing | <|reference_start|>Improving PARMA Trailing: Taylor introduced a variable binding scheme for logic variables in his PARMA system, which uses cycles of bindings rather than the linear chains of bindings used in the standard WAM representation. Both the HAL and dProlog languages make use of the PARMA representation in their Herbrand constraint solvers. Unfortunately, PARMA's trailing scheme is considerably more expensive in both time and space. The aim of this paper is to present several techniques that lower the cost. First, we introduce a trailing analysis for HAL using the classic PARMA trailing scheme that detects and eliminates unnecessary trailings. The analysis, whose accuracy comes from HAL's determinism and mode declarations, has been integrated in the HAL compiler and is shown to produce space improvements as well as speed improvements. Second, we explain how to modify the classic PARMA trailing scheme to halve its trailing cost. This technique is illustrated and evaluated both in the context of dProlog and HAL. Finally, we explain the modifications needed by the trailing analysis in order to be combined with our modified PARMA trailing scheme. Empirical evidence shows that the combination is more effective than any of the techniques when used in isolation. To appear in Theory and Practice of Logic Programming.<|reference_end|> | arxiv | @article{schrijvers2005improving,
title={Improving PARMA Trailing},
author={Tom Schrijvers, Maria Garcia de la Banda, Bart Demoen, Peter J.
Stuckey},
journal={arXiv preprint arXiv:cs/0505085},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505085},
primaryClass={cs.PL cs.PF}
} | schrijvers2005improving |
arxiv-672959 | cs/0505086 | On the Ancestral Compatibility of Two Phylogenetic Trees with Nested Taxa | <|reference_start|>On the Ancestral Compatibility of Two Phylogenetic Trees with Nested Taxa: Compatibility of phylogenetic trees is the most important concept underlying widely-used methods for assessing the agreement of different phylogenetic trees with overlapping taxa and combining them into common supertrees to reveal the tree of life. The notion of ancestral compatibility of phylogenetic trees with nested taxa was introduced by Semple et al in 2004. In this paper we analyze in detail the meaning of this compatibility from the points of view of the local structure of the trees, of the existence of embeddings into a common supertree, and of the joint properties of their cluster representations. Our analysis leads to a very simple polynomial-time algorithm for testing this compatibility, which we have implemented and is freely available for download from the BioPerl collection of Perl modules for computational biology.<|reference_end|> | arxiv | @article{llabres2005on,
title={On the Ancestral Compatibility of Two Phylogenetic Trees with Nested
Taxa},
author={Merce Llabres, Jairo Rocha, Francesc Rossello, Gabriel Valiente},
journal={arXiv preprint arXiv:cs/0505086},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505086},
primaryClass={cs.DM q-bio.OT}
} | llabres2005on |
arxiv-672960 | cs/0505087 | Feasible Proofs of Matrix Properties with Csanky's Algorithm | <|reference_start|>Feasible Proofs of Matrix Properties with Csanky's Algorithm: We show that Csanky's fast parallel algorithm for computing the characteristic polynomial of a matrix can be formalized in the logical theory LAP, and can be proved correct in LAP from the principle of linear independence. LAP is a natural theory for reasoning about linear algebra introduced by Cook and Soltys. Further, we show that several principles of matrix algebra, such as linear independence or the Cayley-Hamilton Theorem, can be shown equivalent in the logical theory QLA. Applying the separation between the complexity classes AC^0[2] and DET(GF(2)), where the former is strictly contained in the latter, we show that these principles are in fact not provable in QLA. In a nutshell, we show that linear independence is ``all there is'' to elementary linear algebra (from a proof complexity point of view), and furthermore, linear independence cannot be proved trivially (again, from a proof complexity point of view).<|reference_end|> | arxiv | @article{soltys2005feasible,
title={Feasible Proofs of Matrix Properties with Csanky's Algorithm},
author={Michael Soltys},
journal={arXiv preprint arXiv:cs/0505087},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505087},
primaryClass={cs.LO}
} | soltys2005feasible |
arxiv-672961 | cs/0505088 | 6-cycle double covers of cubic graphs | <|reference_start|>6-cycle double covers of cubic graphs: A cycle double cover (CDC) of an undirected graph is a collection of the graph's cycles such that every edge of the graph belongs to exactly two cycles. We describe a constructive method for generating all the cubic graphs that have a 6-CDC (a CDC in which every cycle has length 6). As an application of the method, we prove that all such graphs have a Hamiltonian cycle. A sense of direction is an edge labeling on graphs that follows a globally consistent scheme and is known to considerably reduce the complexity of several distributed problems. In [9], a particular instance of sense of direction, called a chordal sense of direction (CSD), is studied and the class of k-regular graphs that admit a CSD with exactly k labels (a minimal CSD) is analyzed. We now show that nearly all the cubic graphs in this class have a 6-CDC, the only exception being K4.<|reference_end|> | arxiv | @article{leao20056-cycle,
title={6-cycle double covers of cubic graphs},
author={Rodrigo S. C. Leao, Valmir C. Barbosa},
journal={arXiv preprint arXiv:cs/0505088},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505088},
primaryClass={cs.DM}
} | leao20056-cycle |
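A cycle double cover is easy to verify straight from its definition: every edge must lie on exactly two of the listed cycles. Below is our own small checker, together with the classical fact that the three Hamiltonian cycles of K4 form a CDC.

```python
from collections import Counter

def is_cycle_double_cover(edges, cycles):
    """edges: iterable of frozenset({u, v}); cycles: vertex sequences.
    True iff every edge appears in exactly two cycles and no cycle
    uses a non-edge."""
    count = Counter()
    for cyc in cycles:
        for u, v in zip(cyc, cyc[1:] + cyc[:1]):   # consecutive pairs, wrapping
            count[frozenset((u, v))] += 1
    edges = set(edges)
    return set(count) == edges and all(count[e] == 2 for e in edges)

K4 = {frozenset(e) for e in [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]}
cdc = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 1, 3, 2]]   # three Hamiltonian cycles
assert is_cycle_double_cover(K4, cdc)
```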
arxiv-672962 | cs/0506001 | SafeMPI - Extending MPI for Byzantine Error Detection on Parallel Clusters | <|reference_start|>SafeMPI - Extending MPI for Byzantine Error Detection on Parallel Clusters: Modern high-performance computing relies heavily on the use of commodity processors arranged together in clusters. These clusters consist of individual nodes (typically off-the-shelf single or dual processor machines) connected together with a high speed interconnect. Using cluster computation has many benefits, but also carries the liability of being failure prone due to the sheer number of components involved. Many effective solutions have been proposed to aid failure recovery in clusters, their one significant downside being the failure models they support. Most of the work in the area has focused on detecting and correcting fail-stop errors. We propose a system that will also detect more general error models, such as Byzantine errors, thus allowing existing failure recovery methods to handle them correctly.<|reference_end|> | arxiv | @article{mogilevsky2005safempi,
title={SafeMPI - Extending MPI for Byzantine Error Detection on Parallel
Clusters},
author={Dmitry Mogilevsky, Sean Keller},
journal={arXiv preprint arXiv:cs/0506001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506001},
primaryClass={cs.DC}
} | mogilevsky2005safempi |
arxiv-672963 | cs/0506002 | HepToX: Heterogeneous Peer to Peer XML Databases | <|reference_start|>HepToX: Heterogeneous Peer to Peer XML Databases: We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer to peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery, that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of the Emulab, a large scale P2P network emulation testbed.<|reference_end|> | arxiv | @article{bonifati2005heptox:,
title={HepToX: Heterogeneous Peer to Peer XML Databases},
author={Angela Bonifati (Icar CNR, Italy), Elaine Qing Chang (UBC, Canada),
Terence Ho (UBC, Canada), and Laks V.S. Lakshmanan (UBC, Canada)},
journal={arXiv preprint arXiv:cs/0506002},
year={2005},
number={UBC TR-2005-15},
archivePrefix={arXiv},
eprint={cs/0506002},
primaryClass={cs.DB}
} | bonifati2005heptox: |
arxiv-672964 | cs/0506003 | Authentication and routing in simple Quantum Key Distribution networks | <|reference_start|>Authentication and routing in simple Quantum Key Distribution networks: We consider various issues which arise as soon as one tries to practically implement simple networks of quantum relays for QKD. In particular we discuss authentication and routing, which are essential ingredients of any QKD network. This paper aims to address some gaps between the quantum and networking aspects of QKD networks, usually reserved for specialists in physics and computer science, respectively.<|reference_end|> | arxiv | @article{pasquinucci2005authentication,
title={Authentication and routing in simple Quantum Key Distribution networks},
author={Andrea Pasquinucci},
journal={arXiv preprint arXiv:cs/0506003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506003},
primaryClass={cs.NI cs.CR quant-ph}
} | pasquinucci2005authentication |
arxiv-672965 | cs/0506004 | Non-asymptotic calibration and resolution | <|reference_start|>Non-asymptotic calibration and resolution: We analyze a new algorithm for probability forecasting of binary observations on the basis of the available data, without making any assumptions about the way the observations are generated. The algorithm is shown to be well calibrated and to have good resolution for long enough sequences of observations and for a suitable choice of its parameter, a kernel on the Cartesian product of the forecast space $[0,1]$ and the data space. Our main results are non-asymptotic: we establish explicit inequalities, shown to be tight, for the performance of the algorithm.<|reference_end|> | arxiv | @article{vovk2005non-asymptotic,
title={Non-asymptotic calibration and resolution},
author={Vladimir Vovk},
journal={arXiv preprint arXiv:cs/0506004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506004},
primaryClass={cs.LG}
} | vovk2005non-asymptotic |
arxiv-672966 | cs/0506005 | Programming Finite-Domain Constraint Propagators in Action Rules | <|reference_start|>Programming Finite-Domain Constraint Propagators in Action Rules: In this paper, we propose a new language, called AR ({\it Action Rules}), and describe how various propagators for finite-domain constraints can be implemented in it. An action rule specifies a pattern for agents, an action that the agents can carry out, and an event pattern for events that can activate the agents. AR combines the goal-oriented execution model of logic programming with the event-driven execution model. This hybrid execution model facilitates programming constraint propagators. A propagator for a constraint is an agent that maintains the consistency of the constraint and is activated by the updates of the domain variables in the constraint. AR has a much stronger descriptive power than {\it indexicals}, the language widely used in the current finite-domain constraint systems, and is flexible for implementing not only interval-consistency but also arc-consistency algorithms. As examples, we present a weak arc-consistency propagator for the {\tt all\_distinct} constraint and a hybrid algorithm for n-ary linear equality constraints. B-Prolog has been extended to accommodate action rules. Benchmarking shows that B-Prolog as a CLP(FD) system significantly outperforms other CLP(FD) systems.<|reference_end|> | arxiv | @article{zhou2005programming,
title={Programming Finite-Domain Constraint Propagators in Action Rules},
author={Neng-Fa Zhou},
journal={TPLP Vol 5(4&5) 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506005},
primaryClass={cs.PL}
} | zhou2005programming |
arxiv-672967 | cs/0506006 | A batch scheduler with high level components | <|reference_start|>A batch scheduler with high level components: In this article we present the design choices and the evaluation of a batch scheduler for large clusters, named OAR. This batch scheduler is based upon an original design that emphasizes low software complexity by using high level tools. The global architecture is built upon the scripting language Perl and the relational database engine MySQL. The goal of the project OAR is to prove that it is possible today to build a complex system for resource management using such tools without sacrificing efficiency and scalability. Currently, our system offers most of the important features implemented by other batch schedulers, such as priority scheduling (by queues), reservations, backfilling and some global computing support. Despite the use of high level tools, our experiments show that our system's performance is close to that of other systems. Furthermore, OAR is currently used for the management of 700 nodes (a metropolitan GRID) and has shown good efficiency and robustness.<|reference_end|> | arxiv | @article{capit2005a,
title={A batch scheduler with high level components},
author={Nicolas Capit (ID - Imag, Inria Rh\^one-Alpes / Id-Imag), Georges Da
Costa (ID - Imag, Inria Rh\^one-Alpes / Id-Imag), Yiannis Georgiou (ID -
Imag, Inria Rh\^one-Alpes / Id-Imag), Guillaume Huard (ID - Imag, Inria
Rh\^one-Alpes / Id-Imag), Cyrille Martin (ID - Imag), Gr\'egory Mouni\'e (ID
- Imag, Inria Rh\^one-Alpes / Id-Imag), Pierre Neyron (ID - Imag), Olivier
Richard (ID - Imag, Inria Rh\^one-Alpes / Id-Imag)},
journal={Cluster computing and Grid 2005 (CCGrid05), Royaume-Uni (2005)},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506006},
primaryClass={cs.DC}
} | capit2005a |
arxiv-672968 | cs/0506007 | Defensive forecasting for linear protocols | <|reference_start|>Defensive forecasting for linear protocols: We consider a general class of forecasting protocols, called "linear protocols", and discuss several important special cases, including multi-class forecasting. Forecasting is formalized as a game between three players: Reality, whose role is to generate observations; Forecaster, whose goal is to predict the observations; and Skeptic, who tries to make money on any lack of agreement between Forecaster's predictions and the actual observations. Our main mathematical result is that for any continuous strategy for Skeptic in a linear protocol there exists a strategy for Forecaster that does not allow Skeptic's capital to grow. This result is a meta-theorem that allows one to transform any continuous law of probability in a linear protocol into a forecasting strategy whose predictions are guaranteed to satisfy this law. We apply this meta-theorem to a weak law of large numbers in Hilbert spaces to obtain a version of the K29 prediction algorithm for linear protocols and show that this version also satisfies the attractive properties of proper calibration and resolution under a suitable choice of its kernel parameter, with no assumptions about the way the data is generated.<|reference_end|> | arxiv | @article{vovk2005defensive,
title={Defensive forecasting for linear protocols},
author={Vladimir Vovk, Ilia Nouretdinov, Akimichi Takemura, Glenn Shafer},
journal={arXiv preprint arXiv:cs/0506007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506007},
primaryClass={cs.LG}
} | vovk2005defensive |
arxiv-672969 | cs/0506008 | Bounds on the Automata Size for Presburger Arithmetic | <|reference_start|>Bounds on the Automata Size for Presburger Arithmetic: Automata provide a decision procedure for Presburger arithmetic. However, until now only crude lower and upper bounds were known on the sizes of the automata produced by this approach. In this paper, we prove an upper bound on the number of states of the minimal deterministic automaton for a Presburger arithmetic formula. This bound depends on the length of the formula and the quantifiers occurring in the formula. The upper bound is established by comparing the automata for Presburger arithmetic formulas with the formulas produced by a quantifier elimination method. We also show that our bound is tight, even for nondeterministic automata. Moreover, we provide optimal automata constructions for linear equations and inequations.<|reference_end|> | arxiv | @article{klaedtke2005bounds,
title={Bounds on the Automata Size for Presburger Arithmetic},
author={Felix Klaedtke},
journal={arXiv preprint arXiv:cs/0506008},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506008},
primaryClass={cs.LO}
} | klaedtke2005bounds |
arxiv-672970 | cs/0506009 | Approximate MAP Decoding on Tail-Biting Trellises | <|reference_start|>Approximate MAP Decoding on Tail-Biting Trellises: We propose two approximate algorithms for MAP decoding on tail-biting trellises. The algorithms work on a subset of nodes of the tail-biting trellis, judiciously selected. We report the results of simulations on an AWGN channel using the approximate algorithms on tail-biting trellises for the $(24,12)$ Extended Golay Code and a rate 1/2 convolutional code with memory 6.<|reference_end|> | arxiv | @article{madhu2005approximate,
title={Approximate MAP Decoding on Tail-Biting Trellises},
author={A. S. Madhu, Priti Shankar},
journal={arXiv preprint arXiv:cs/0506009},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506009},
primaryClass={cs.IT math.IT}
} | madhu2005approximate |
arxiv-672971 | cs/0506010 | The OAI Data-Provider Registration and Validation Service | <|reference_start|>The OAI Data-Provider Registration and Validation Service: I present a summary of recent use of the Open Archives Initiative (OAI) registration and validation services for data-providers. The registration service has seen a steady stream of registrations since its launch in 2002, and there are now over 220 registered repositories. I examine the validation logs to produce a breakdown of reasons why repositories fail validation. This breakdown highlights some common problems and will be used to guide work to improve the validation service.<|reference_end|> | arxiv | @article{warner2005the,
title={The OAI Data-Provider Registration and Validation Service},
author={Simeon Warner (Cornell University)},
journal={arXiv preprint arXiv:cs/0506010},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506010},
primaryClass={cs.DL}
} | warner2005the |
arxiv-672972 | cs/0506011 | On the dimensions of certain LDPC codes based on q-regular bipartite graphs | <|reference_start|>On the dimensions of certain LDPC codes based on q-regular bipartite graphs: An explicit construction of a family of binary LDPC codes called LU(3,q), where q is a power of a prime, was recently given. A conjecture was made for the dimensions of these codes when q is odd. The conjecture is proved in this note. The proof involves the geometry of a 4-dimensional symplectic vector space and the action of the symplectic group and its subgroups.<|reference_end|> | arxiv | @article{sin2005on,
title={On the dimensions of certain LDPC codes based on q-regular bipartite
graphs},
author={Peter Sin and Qing Xiang},
journal={IEEE Trans. Information Theory, 52 (8), (2006), 3735-3737},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506011},
primaryClass={cs.IT cs.DM math.IT}
} | sin2005on |
arxiv-672973 | cs/0506012 | A Non-Cooperative Power Control Game in Delay-Constrained Multiple-Access Networks | <|reference_start|>A Non-Cooperative Power Control Game in Delay-Constrained Multiple-Access Networks: A game-theoretic approach for studying power control in multiple-access networks with transmission delay constraints is proposed. A non-cooperative power control game is considered in which each user seeks to choose a transmit power that maximizes its own utility while satisfying the user's delay requirements. The utility function measures the number of reliable bits transmitted per joule of energy and the user's delay constraint is modeled as an upper bound on the delay outage probability. The Nash equilibrium for the proposed game is derived, and its existence and uniqueness are proved. Using a large-system analysis, explicit expressions for the utilities achieved at equilibrium are obtained for the matched filter, decorrelating and minimum mean square error multiuser detectors. The effects of delay constraints on the users' utilities (in bits/Joule) and network capacity (i.e., the maximum number of users that can be supported) are quantified.<|reference_end|> | arxiv | @article{meshkati2005a,
title={A Non-Cooperative Power Control Game in Delay-Constrained
Multiple-Access Networks},
author={Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz},
journal={arXiv preprint arXiv:cs/0506012},
year={2005},
doi={10.1109/ISIT.2005.1523426},
archivePrefix={arXiv},
eprint={cs/0506012},
primaryClass={cs.IT math.IT}
} | meshkati2005a |
arxiv-672974 | cs/0506013 | On the existence and characterization of the maxent distribution under general moment inequality constraints | <|reference_start|>On the existence and characterization of the maxent distribution under general moment inequality constraints: A broad set of sufficient conditions that guarantees the existence of the maximum entropy (maxent) distribution consistent with specified bounds on certain generalized moments is derived. Most results in the literature are either focused on the minimum cross-entropy distribution or apply only to distributions with a bounded-volume support or address only equality constraints. The results of this work hold for general moment inequality constraints for probability distributions with possibly unbounded support, and the technical conditions are explicitly on the underlying generalized moment functions. An analytical characterization of the maxent distribution is also derived using results from the theory of constrained optimization in infinite-dimensional normed linear spaces. Several auxiliary results of independent interest pertaining to certain properties of convex coercive functions are also presented.<|reference_end|> | arxiv | @article{ishwar2005on,
title={On the existence and characterization of the maxent distribution under
general moment inequality constraints},
author={Prakash Ishwar and Pierre Moulin},
journal={arXiv preprint arXiv:cs/0506013},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506013},
primaryClass={cs.IT math.IT}
} | ishwar2005on |
arxiv-672975 | cs/0506014 | The Equivalence Problem for Deterministic MSO Tree Transducers is Decidable | <|reference_start|>The Equivalence Problem for Deterministic MSO Tree Transducers is Decidable: It is decidable for deterministic MSO definable graph-to-string or graph-to-tree transducers whether they are equivalent on a context-free set of graphs.<|reference_end|> | arxiv | @article{engelfriet2005the,
title={The Equivalence Problem for Deterministic MSO Tree Transducers is
Decidable},
author={Joost Engelfriet and Sebastian Maneth},
journal={arXiv preprint arXiv:cs/0506014},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506014},
primaryClass={cs.LO}
} | engelfriet2005the |
arxiv-672976 | cs/0506015 | Cryptanalysis of Key Issuing Protocols in ID-based Cryptosystems | <|reference_start|>Cryptanalysis of Key Issuing Protocols in ID-based Cryptosystems: To remove the key escrow problem and avoid the need for a secure channel in ID-based cryptosystems, Lee et al. proposed a secure key issuing protocol. However, we show that it suffers from impersonation, insider attacks and incompetency of the key privacy authorities. We also cryptanalyze Sui et al.'s separable and anonymous key issuing protocol.<|reference_end|> | arxiv | @article{gangishetti2005cryptanalysis,
title={Cryptanalysis of Key Issuing Protocols in ID-based Cryptosystems},
author={Raju Gangishetti, M. Choudary Gorantla, Manik Lal Das, Ashutosh Saxena},
journal={arXiv preprint arXiv:cs/0506015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506015},
primaryClass={cs.CR}
} | gangishetti2005cryptanalysis |
arxiv-672977 | cs/0506016 | Compressing Probability Distributions | <|reference_start|>Compressing Probability Distributions: We show how to store good approximations of probability distributions in small space.<|reference_end|> | arxiv | @article{gagie2005compressing,
title={Compressing Probability Distributions},
author={Travis Gagie},
doi={10.1016/j.ipl.2005.10.006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506016},
primaryClass={cs.IT math.IT}
} | gagie2005compressing |
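The underlying idea, trading a little accuracy for much less space, can be sketched by dyadic quantization: store one small integer per symbol and renormalize. This is our own rough illustration of the general idea; the paper's scheme and its approximation guarantees are sharper.

```python
def quantize(p, bits=8):
    """Approximate distribution p using one (bits)-bit integer per symbol."""
    scale = 1 << bits
    q = [max(1, round(pi * scale)) for pi in p]   # keep every symbol's support
    total = sum(q)
    return [qi / total for qi in q]

p = [0.5, 0.30000001, 0.19999999]
print(quantize(p, bits=4))   # stored as three 4-bit integers
```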
arxiv-672978 | cs/0506017 | Treillis de concepts et ontologies pour l'interrogation d'un annuaire de sources de donn\'ees biologiques (BioRegistry) | <|reference_start|>Treillis de concepts et ontologies pour l'interrogation d'un annuaire de sources de donn\'ees biologiques (BioRegistry): Bioinformatic data sources available on the web are multiple and heterogeneous. The lack of documentation and the difficulty of interaction with these data sources require users to be competent in both informatics and biology for an optimal use of source contents, which remain rather under-exploited. In this paper we present an approach based on formal concept analysis to classify and search relevant bioinformatic data sources for a given query. It consists in building the concept lattice from the binary relation between bioinformatic data sources and their associated metadata. The concept built from a given query is then merged into the concept lattice. The result is given by the extraction of the set of sources belonging to the extents of the query concept subsumers in the resulting concept lattice. The ranking of the sources is given by the concept specificity order in the concept lattice. An improvement of the approach consists in automatic query refinement thanks to domain ontologies. Two forms of refinement are possible: by generalisation and by specialisation.<|reference_end|> | arxiv | @article{messai2005treillis,
title={Treillis de concepts et ontologies pour l'interrogation d'un annuaire de
sources de donn\'{e}es biologiques (BioRegistry)},
author={Nizar Messai (INRIA Lorraine - LORIA), Marie-Dominique Devignes (INRIA
Lorraine - LORIA), Malika Sma\"il-Tabbone (INRIA Lorraine - LORIA), Amedeo
Napoli (INRIA Lorraine - LORIA)},
journal={arXiv preprint arXiv:cs/0506017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506017},
primaryClass={cs.DB cs.IR}
} | messai2005treillis |
arxiv-672979 | cs/0506018 | On the Achievable Diversity-Multiplexing Tradeoffs in Half-Duplex Cooperative Channels | <|reference_start|>On the Achievable Diversity-Multiplexing Tradeoffs in Half-Duplex Cooperative Channels: In this paper, we propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (up-link) channels. For the relay channel, we investigate two classes of cooperation schemes; namely, Amplify and Forward (AF) protocols and Decode and Forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with N-1 relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding/encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0 < r < 1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the cooperative multiple-access channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the sub-optimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.<|reference_end|> | arxiv | @article{azarian2005on,
title={On the Achievable Diversity-Multiplexing Tradeoffs in Half-Duplex
Cooperative Channels},
author={Kambiz Azarian, Hesham El Gamal and Philip Schniter},
journal={arXiv preprint arXiv:cs/0506018},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506018},
primaryClass={cs.IT math.IT}
} | azarian2005on |
arxiv-672980 | cs/0506019 | An Efficient Approximation Algorithm for Point Pattern Matching Under Noise | <|reference_start|>An Efficient Approximation Algorithm for Point Pattern Matching Under Noise: Point pattern matching problems are of fundamental importance in various areas including computer vision and structural bioinformatics. In this paper, we study one of the more general problems, known as LCP (largest common point set problem): Let $P$ and $Q$ be two point sets in $\mathbb{R}^3$, and let $\epsilon \geq 0$ be a tolerance parameter; the problem is to find a rigid motion $\mu$ that maximizes the cardinality of a subset $I$ of $Q$, such that the Hausdorff distance $\mathrm{dist}(P,\mu(I)) \leq \epsilon$. We denote the size of the optimal solution to the above problem by $\mathrm{LCP}(P,Q)$. The problem is called exact-LCP for $\epsilon=0$, and tolerant-LCP when $\epsilon>0$ and the minimum interpoint distance is greater than $2\epsilon$. A $\beta$-distance-approximation algorithm for tolerant-LCP finds a subset $I \subseteq Q$ such that $|I| \geq \mathrm{LCP}(P,Q)$ and $\mathrm{dist}(P,\mu(I)) \leq \beta \epsilon$ for some $\beta \geq 1$. This paper has three main contributions. (1) We introduce a new algorithm, called DA, which gives the fastest known deterministic 4-distance-approximation algorithm for tolerant-LCP. (2) For the exact-LCP, when the matched set is required to be large, we give a simple sampling strategy that improves the running times of all known deterministic algorithms, yielding the fastest known deterministic algorithm for this problem. (3) We use expander graphs to speed up the DA algorithm for tolerant-LCP when the size of the matched set is required to be large, at the expense of approximation in the matched set size. Our algorithms also work when the transformation $\mu$ is allowed to be a scaling transformation.<|reference_end|> | arxiv | @article{choi2005an,
title={An Efficient Approximation Algorithm for Point Pattern Matching Under
Noise},
author={Vicky Choi, Navin Goyal},
journal={arXiv preprint arXiv:cs/0506019},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506019},
primaryClass={cs.CV cs.CG}
} | choi2005an |
arxiv-672981 | cs/0506020 | On the Throughput-Delay Tradeoff in Cellular Multicast | <|reference_start|>On the Throughput-Delay Tradeoff in Cellular Multicast: In this paper, we adopt a cross-layer design approach for analyzing the throughput-delay tradeoff of the multicast channel in a single cell system. To illustrate the main ideas, we start with the single group case, i.e., pure multicast, where a common information stream is requested by all the users. We consider three classes of scheduling algorithms with progressively increasing complexity. The first class strives for minimum complexity by resorting to a static scheduling strategy along with memoryless decoding. Our analysis for this class of scheduling algorithms reveals the existence of a static scheduling policy that achieves the optimal scaling law of the throughput at the expense of a delay that increases exponentially with the number of users. The second scheduling policy resorts to a higher complexity incremental redundancy encoding/decoding strategy to achieve a superior throughput-delay tradeoff. The third, and most complex, scheduling strategy benefits from the cooperation between the different users to minimize the delay while achieving the optimal scaling law of the throughput. In particular, the proposed cooperative multicast strategy is shown to simultaneously achieve the optimal scaling laws of both throughput and delay. Then, we generalize our scheduling algorithms to exploit the multi-group diversity available when different information streams are requested by different subsets of the user population. Finally, we discuss the potential gains of equipping the base station with multiple transmit antennas and present simulation results that validate our theoretical claims.<|reference_end|> | arxiv | @article{gopala2005on,
title={On the Throughput-Delay Tradeoff in Cellular Multicast},
author={Praveen Kumar Gopala and Hesham El Gamal},
journal={arXiv preprint arXiv:cs/0506020},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506020},
primaryClass={cs.IT math.IT}
} | gopala2005on |
arxiv-672982 | cs/0506021 | Analysis of Relationship between Strategic and Aggregate Energy Minimization in Delay-Constrained Wireless Networks | <|reference_start|>Analysis of Relationship between Strategic and Aggregate Energy Minimization in Delay-Constrained Wireless Networks: We formulate two versions of the power control problem for wireless networks with latency constraints arising from duty cycle allocations. In the first version, strategic power optimization, wireless nodes are modeled as rational agents in a power game, who strategically adjust their powers to minimize their own energy. In the other version, joint power optimization, wireless nodes jointly minimize the aggregate energy expenditure. Our analysis of these models yields insights into the different energy outcomes of strategic versus joint power optimization. We derive analytical solutions for power allocation under both models and study how they are affected by data loads and channel quality. We derive simple necessary conditions for the existence of Nash equilibria in the power game and also provide numerical examples of optimal power allocation under both models. Finally, we show that joint optimization can (sometimes) be Pareto-optimal and dominate strategic optimization, i.e., the energy expenditure of all nodes is lower than if they were using strategic optimization.<|reference_end|> | arxiv | @article{kannan2005analysis,
title={Analysis of Relationship between Strategic and Aggregate Energy
Minimization in Delay-Constrained Wireless Networks},
author={Rajgopal Kannan and Shuangqing Wei},
journal={arXiv preprint arXiv:cs/0506021},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506021},
primaryClass={cs.NI cs.GT}
} | kannan2005analysis |
arxiv-672983 | cs/0506022 | Asymptotics of Discrete MDL for Online Prediction | <|reference_start|>Asymptotics of Discrete MDL for Online Prediction: Minimum Description Length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning non-i.i.d. processes by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e. observations come in one by one, and the predictor is allowed to update his state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely a static and a dynamic one. (A third variant, hybrid MDL, will turn out inferior.) We will prove that under the only assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are however exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, namely sequence prediction, pattern classification, regression, and universal induction in the sense of Algorithmic Information Theory among others.<|reference_end|> | arxiv | @article{poland2005asymptotics,
title={Asymptotics of Discrete MDL for Online Prediction},
author={Jan Poland and Marcus Hutter},
journal={IEEE Transactions on Information Theory, 51:11 (2005) 3780-3795},
year={2005},
doi={10.1109/TIT.2005.856956},
number={IDSIA-13-05},
archivePrefix={arXiv},
eprint={cs/0506022},
primaryClass={cs.IT cs.LG math.IT math.ST stat.TH}
} | poland2005asymptotics |
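Two-part MDL in this setting can be sketched for a class of Bernoulli models: pick the model minimizing its own code length plus the code length it assigns to the data seen so far, then predict with it. The sketch below shows only this selection step and is our own simplification; the paper's static, dynamic and hybrid variants differ precisely in how such re-selection interacts with prediction.

```python
from math import log2

def mdl_select(history, models):
    """models: list of (theta, code_length_in_bits) for Bernoulli(theta).
    Returns the theta minimizing the two-part code length."""
    def data_bits(theta):
        bits = 0.0
        for x in history:
            p = theta if x == 1 else 1.0 - theta
            if p <= 0.0:
                return float('inf')   # model assigns the data probability 0
            bits -= log2(p)
        return bits
    return min(models, key=lambda m: m[1] + data_bits(m[0]))[0]

# A uniform code over theta in {0.1, ..., 0.9}: log2(9) bits per model.
models = [(k / 10, log2(9)) for k in range(1, 10)]
print(mdl_select([1, 1, 0, 1, 1, 1], models))   # picks theta = 0.8
```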
arxiv-672984 | cs/0506023 | Sparse Covariance Selection via Robust Maximum Likelihood Estimation | <|reference_start|>Sparse Covariance Selection via Robust Maximum Likelihood Estimation: We address a problem of covariance selection, where we seek a trade-off between a high likelihood against the number of non-zero elements in the inverse covariance matrix. We solve a maximum likelihood problem with a penalty term given by the sum of absolute values of the elements of the inverse covariance matrix, and allow for imposing bounds on the condition number of the solution. The problem is directly amenable to now standard interior-point algorithms for convex optimization, but remains challenging due to its size. We first give some results on the theoretical computational complexity of the problem, by showing that a recent methodology for non-smooth convex optimization due to Nesterov can be applied to this problem, to greatly improve on the complexity estimate given by interior-point algorithms. We then examine two practical algorithms aimed at solving large-scale, noisy (hence dense) instances: one is based on a block-coordinate descent approach, where columns and rows are updated sequentially, another applies a dual version of Nesterov's method.<|reference_end|> | arxiv | @article{banerjee2005sparse,
title={Sparse Covariance Selection via Robust Maximum Likelihood Estimation},
author={Onureena Banerjee, Alexandre d'Aspremont, Laurent El Ghaoui},
journal={arXiv preprint arXiv:cs/0506023},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506023},
primaryClass={cs.CE cs.AI}
} | banerjee2005sparse |
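The program being solved is the maximization over positive-definite K of log det K - tr(SK) - rho * ||K||_1, where S is the sample covariance, rho the sparsity penalty, and ||K||_1 the sum of absolute values of the entries of K. Evaluating that objective is straightforward; the paper's contribution is solving the maximization at scale. A sketch, with an illustrative toy S:

```python
import numpy as np

def objective(K, S, rho):
    """Penalized log-likelihood of an inverse-covariance estimate K."""
    sign, logdet = np.linalg.slogdet(K)
    if sign <= 0:
        return -np.inf   # det(K) > 0 is necessary for positive definiteness
    return logdet - np.trace(S @ K) - rho * np.abs(K).sum()

S = np.array([[1.0, 0.3],
              [0.3, 1.0]])          # toy sample covariance
print(objective(np.linalg.inv(S), S, rho=0.1))
```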
arxiv-672985 | cs/0506024 | The Hyper-Cortex of Human Collective-Intelligence Systems | <|reference_start|>The Hyper-Cortex of Human Collective-Intelligence Systems: Individual-intelligence research, from a neurological perspective, discusses the hierarchical layers of the cortex as a structure that performs conceptual abstraction and specification. This theory has been used to explain how motor-cortex regions responsible for different behavioral modalities such as writing and speaking can be utilized to express the same general concept represented higher in the cortical hierarchy. For example, the concept of a dog, represented across a region of high-level cortical neurons, can either be written or spoken about depending on the individual's context. The higher-layer cortical areas project down the hierarchy, sending abstract information to specific regions of the motor cortex for contextual implementation. In this paper, this idea is expanded to incorporate collective-intelligence within a hyper-cortical construct. This hyper-cortex is a multi-layered network used to represent abstract collective concepts. These ideas play an important role in understanding how collective-intelligence systems can be engineered to handle problem abstraction and solution specification. Finally, a collection of common problems in the scientific community is solved using an artificial hyper-cortex generated from digital-library metadata.<|reference_end|> | arxiv | @article{rodriguez2005the,
title={The Hyper-Cortex of Human Collective-Intelligence Systems},
author={Marko A. Rodriguez},
journal={arXiv preprint arXiv:cs/0506024},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506024},
primaryClass={cs.CY cs.AI cs.DL cs.NE}
} | rodriguez2005the |
arxiv-672986 | cs/0506025 | Dynamic Asymmetric Communication | <|reference_start|>Dynamic Asymmetric Communication: We show how any dynamic instantaneous compression algorithm can be converted to an asymmetric communication protocol, with which a server with high bandwidth can help clients with low bandwidth send it messages. Unlike previous authors, we do not assume the server knows the messages' distribution, and our protocols are the first to use only one round of communication for each message.<|reference_end|> | arxiv | @article{gagie2005dynamic,
title={Dynamic Asymmetric Communication},
author={Travis Gagie},
journal={arXiv preprint arXiv:cs/0506025},
year={2005},
doi={10.1109/DCC.2006.29},
archivePrefix={arXiv},
eprint={cs/0506025},
primaryClass={cs.IT math.IT}
} | gagie2005dynamic |
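A toy Python sketch of the one-round shape such a protocol can take: the high-bandwidth server pushes its current code down to the client, and the client answers with a single short codeword. The message universe, the frequency-sorted code, and all names are invented for illustration; the paper's construction works for any dynamic instantaneous compression algorithm.

    from collections import Counter

    counts = Counter()
    universe = ["status", "alert", "heartbeat", "log"]  # hypothetical messages

    def current_code(counts):
        # shorter binary strings for currently more frequent messages
        order = sorted(universe, key=lambda m: (-counts[m], m))
        return {m: format(i, "b") for i, m in enumerate(order, start=1)}

    for msg in ["heartbeat", "heartbeat", "alert", "heartbeat"]:
        code = current_code(counts)  # downlink: cheap for the server
        word = code[msg]             # uplink: the client's only transmission
        decoded = {v: k for k, v in code.items()}[word]
        counts[decoded] += 1         # both sides update the same model
        print(msg, "->", word)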
arxiv-672987 | cs/0506026 | Database Reformulation with Integrity Constraints (extended abstract) | <|reference_start|>Database Reformulation with Integrity Constraints (extended abstract): In this paper we study the problem of reducing the evaluation costs of queries on finite databases in presence of integrity constraints, by designing and materializing views. Given a database schema, a set of queries defined on the schema, a set of integrity constraints, and a storage limit, to find a solution to this problem means to find a set of views that satisfies the storage limit, provides equivalent rewritings of the queries under the constraints (this requirement is weaker than equivalence in the absence of constraints), and reduces the total costs of evaluating the queries. This problem, database reformulation, is important for many applications, including data warehousing and query optimization. We give complexity results and algorithms for database reformulation in presence of constraints, for conjunctive queries, views, and rewritings and for several types of constraints, including functional and inclusion dependencies. To obtain better complexity results, we introduce an unchase technique, which reduces the problem of query equivalence under constraints to equivalence in the absence of constraints without increasing query size.<|reference_end|> | arxiv | @article{chirkova2005database,
title={Database Reformulation with Integrity Constraints (extended abstract)},
author={Rada Chirkova and Michael R. Genesereth},
journal={arXiv preprint arXiv:cs/0506026},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506026},
primaryClass={cs.DB}
} | chirkova2005database |
arxiv-672988 | cs/0506027 | Sorting a Low-Entropy Sequence | <|reference_start|>Sorting a Low-Entropy Sequence: We give the first sorting algorithm with bounds in terms of higher-order entropies: let $S$ be a sequence of length $m$ containing $n$ distinct elements and let $H_\ell(S)$ be the $\ell$th-order empirical entropy of $S$, with $n^{\ell + 1} \log n \in O(m)$; our algorithm sorts $S$ using $(H_\ell(S) + O(1))\,m$ comparisons.<|reference_end|> | arxiv | @article{gagie2005sorting,
title={Sorting a Low-Entropy Sequence},
author={Travis Gagie},
journal={arXiv preprint arXiv:cs/0506027},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506027},
primaryClass={cs.DS}
} | gagie2005sorting |
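The quantity in the comparison bound is the standard $\ell$th-order empirical entropy. A small Python sketch of its textbook definition (this is the definition only, not code from the paper):

    import math
    from collections import Counter, defaultdict

    def h0(seq):
        # zeroth-order empirical entropy, in bits per symbol
        m = len(seq)
        if m == 0:
            return 0.0
        return sum(c / m * math.log2(m / c) for c in Counter(seq).values())

    def h_ell(seq, ell):
        # ell-th order: average H0 of the symbols grouped by their
        # length-ell preceding context
        m = len(seq)
        followers = defaultdict(list)
        for i in range(ell, m):
            followers[tuple(seq[i - ell:i])].append(seq[i])
        return sum(len(f) * h0(f) for f in followers.values()) / m

    s = list("abracadabraabracadabra")
    print(h0(s), h_ell(s, 1), h_ell(s, 2))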
arxiv-672989 | cs/0506028 | Neyman-Pearson Detection of Gauss-Markov Signals in Noise: Closed-Form Error Exponent and Properties | <|reference_start|>Neyman-Pearson Detection of Gauss-Markov Signals in Noise: Closed-Form Error Exponent and Properties: The performance of Neyman-Pearson detection of correlated stochastic signals using noisy observations is investigated via the error exponent for the miss probability with a fixed level. Using the state-space structure of the signal and observation model, a closed-form expression for the error exponent is derived, and the connection between the asymptotic behavior of the optimal detector and that of the Kalman filter is established. The properties of the error exponent are investigated for the scalar case. It is shown that the error exponent has distinct characteristics with respect to correlation strength: for signal-to-noise ratio (SNR) > 1 the error exponent decreases monotonically as the correlation becomes stronger, whereas for SNR < 1 there is an optimal correlation that maximizes the error exponent for a given SNR.<|reference_end|> | arxiv | @article{sung2005neyman-pearson,
title={Neyman-Pearson Detection of Gauss-Markov Signals in Noise: Closed-Form
Error Exponent and Properties},
author={Youngchul Sung, Lang Tong and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0506028},
year={2005},
doi={10.1109/TIT.2006.871599},
archivePrefix={arXiv},
eprint={cs/0506028},
primaryClass={cs.IT math.IT}
} | sung2005neyman-pearson |
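The Kalman-filter connection rests on the model's state-space recursion; for the scalar case, here is a minimal sketch of the Riccati iteration whose fixed point governs the filter's asymptotics (the parameters a, q, r are made-up, not values from the paper):

    # scalar Gauss-Markov signal in noise:
    #   x_{t+1} = a x_t + w_t,  w_t ~ N(0, q)
    #   y_t     = x_t + v_t,    v_t ~ N(0, r)
    a, q, r = 0.8, 1.0, 0.5  # hypothetical model parameters
    p = q                    # one-step prediction-error variance
    for _ in range(200):     # Riccati recursion to (near) steady state
        k = p / (p + r)                # Kalman gain
        p = a * a * p * (1.0 - k) + q  # next prediction-error variance
    print("steady-state prediction variance:", p)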
arxiv-672990 | cs/0506029 | A Unified Framework for Tree Search Decoding : Rediscovering the Sequential Decoder | <|reference_start|>A Unified Framework for Tree Search Decoding : Rediscovering the Sequential Decoder: We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopted. In our framework, the CLPS algorithm decomposes into the preprocessing and tree search stages. The role of the preprocessing stage is to expose the tree structure in a form matched to the search stage. We argue that the minimum mean square error decision feedback (MMSE-DFE) frontend is instrumental for solving the joint detection and decoding problem in a single search stage. It is further shown that MMSE-DFE filtering allows for using lattice reduction methods to reduce complexity, at the expense of a marginal performance loss, and solving under-determined linear systems. For the search stage, we present a generic method, based on the branch and bound (BB) algorithm, and show that it encompasses all existing sphere decoders as special cases. The proposed generic algorithm further allows for an interesting classification of tree search decoders, sheds more light on the structural properties of all known sphere decoders, and inspires the design of more efficient decoders. In particular, an efficient decoding algorithm that resembles the well known Fano sequential decoder is identified. The excellent performance-complexity tradeoff achieved by the proposed MMSE-Fano decoder is established via simulation results and analytical arguments in several MIMO and ISI scenarios.<|reference_end|> | arxiv | @article{murugan2005a,
title={A Unified Framework for Tree Search Decoding : Rediscovering the
Sequential Decoder},
author={Arul D. Murugan (1), Hesham El Gamal (1), Mohamed Oussama Damen (2)
and Giuseppe Caire (3) ((1) The Ohio State University, (2) University of
Waterloo, (3) Institut Eurecom)},
journal={arXiv preprint arXiv:cs/0506029},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506029},
primaryClass={cs.IT math.IT}
} | murugan2005a |
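A toy depth-first sphere decoder in Python, showing the branch-and-bound pruning that the generic framework specializes. The fixed enumeration window `span`, the example basis, and the received point are all illustrative assumptions; practical decoders enumerate candidates adaptively instead.

    import numpy as np

    def sphere_decode(R, y, span=3):
        # minimize ||y - R s||^2 over integer vectors s, with R upper
        # triangular; prune any branch whose partial distance already
        # exceeds the best complete solution found so far
        n = R.shape[0]
        best_d2, best_s = np.inf, None
        s = np.zeros(n, dtype=int)

        def search(level, d2):
            nonlocal best_d2, best_s
            if level < 0:
                best_d2, best_s = d2, s.copy()
                return
            b = y[level] - R[level, level + 1:] @ s[level + 1:]
            center = b / R[level, level]
            candidates = range(int(round(center)) - span,
                               int(round(center)) + span + 1)
            for c in sorted(candidates, key=lambda v: abs(v - center)):
                inc = (b - R[level, level] * c) ** 2
                if d2 + inc >= best_d2:
                    break  # candidates are sorted by distance, so prune
                s[level] = c
                search(level - 1, d2 + inc)

        search(n - 1, 0.0)
        return best_s, best_d2

    H = np.array([[2.0, 0.3], [0.1, 1.5]])  # toy lattice basis
    Q, R = np.linalg.qr(H)
    y = np.array([1.7, -0.4])
    print(sphere_decode(R, Q.T @ y))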
arxiv-672991 | cs/0506030 | Preferential and Preferential-discriminative Consequence relations | <|reference_start|>Preferential and Preferential-discriminative Consequence relations: The present paper investigates consequence relations that are both non-monotonic and paraconsistent. More precisely, we put the focus on preferential consequence relations, i.e. those relations that can be defined by a binary preference relation on states labelled by valuations. We work with a general notion of valuation that covers e.g. the classical valuations as well as certain kinds of many-valued valuations. In the many-valued cases, preferential consequence relations are paraconsistent (in addition to being non-monotonic), i.e. they are capable of drawing reasonable conclusions from premises that contain contradictions. The first purpose of this paper is to provide, in our general framework, syntactic characterizations of several families of preferential relations. The second and main purpose is to provide, again in our general framework, characterizations of several families of preferential discriminative consequence relations. They are defined exactly as the plain version, but any conclusion such that its negation is also a conclusion is rejected (these relations bring something new essentially in the many-valued cases).<|reference_end|> | arxiv | @article{ben-naim2005preferential,
title={Preferential and Preferential-discriminative Consequence relations},
author={Jonathan Ben-Naim (LIF)},
journal={Journal of Logic and Computation 15 (2005) number 3, pp. 263-294},
year={2005},
doi={10.1093/logcom/exi013},
archivePrefix={arXiv},
eprint={cs/0506030},
primaryClass={cs.AI cs.LO}
} | ben-naim2005preferential |
arxiv-672992 | cs/0506031 | A Constrained Object Model for Configuration Based Workflow Composition | <|reference_start|>A Constrained Object Model for Configuration Based Workflow Composition: Automatic or assisted workflow composition is a field of intense research for applications to the world wide web or to business process modeling. Workflow composition is traditionally addressed in various ways, generally via theorem proving techniques. Recent research observed that building a composite workflow bears strong relationships with finite model search, and that some workflow languages can be defined as constrained object metamodels. This led to considering the viability of applying configuration techniques to this problem, which was proven feasible. Constraint-based configuration expects a constrained object model as input. The purpose of this document is to formally specify the constrained object model involved in ongoing experiments and research using the Z specification language.<|reference_end|> | arxiv | @article{albert2005a,
title={A Constrained Object Model for Configuration Based Workflow Composition},
author={Patrick Albert, Laurent Henocque, Mathias Kleiner},
journal={arXiv preprint arXiv:cs/0506031},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506031},
primaryClass={cs.AI}
} | albert2005a |
arxiv-672993 | cs/0506032 | Framework for Hopfield Network based Adaptive routing - A design level approach for adaptive routing phenomena with Artificial Neural Network | <|reference_start|>Framework for Hopfield Network based Adaptive routing - A design level approach for adaptive routing phenomena with Artificial Neural Network: Routing, as a basic phenomenon, has for years offered technologists ample scope to analyse, discuss, and arrive at optimal solutions. Routing is analysed based on many factors; a few key constraints that decide these factors are the communication medium, time dependency, and the nature of the information source. Parametric routing, with some form of adaptation to the underlying network environment, has become the requirement of the day. Satellite constellations, particularly LEO satellite constellations, have become an operational reality, providing unbroken voice/data communication around the world. Routing in these constellations has to be treated in a non-conventional way, taking their network geometry into consideration. One efficient method of optimization is to put neural networks to use. A few artificial neural network models are well suited to the adaptive control mechanism by the nature of their network arrangement; one such efficient model is the Hopfield network model. This paper is an attempt to design a framework for Hopfield-network-based adaptive routing in satellite constellations.<|reference_end|> | arxiv | @article{shankar2005framework,
title={Framework for Hopfield Network based Adaptive routing - A design level
approach for adaptive routing phenomena with Artificial Neural Network},
author={R. Shankar},
journal={arXiv preprint arXiv:cs/0506032},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506032},
primaryClass={cs.NE}
} | shankar2005framework |
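A minimal discrete Hopfield update loop in Python. In a routing formulation the weights and biases would encode path-validity constraints and link costs; the random placeholders below only demonstrate the energy-descent mechanics.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2.0       # symmetric weights
    np.fill_diagonal(W, 0.0)  # no self-coupling
    b = rng.normal(size=n)
    s = rng.choice(np.array([-1, 1]), size=n)

    def energy(state):
        # asynchronous sign updates never increase this energy
        return -0.5 * state @ W @ state - b @ state

    for _ in range(100):      # asynchronous, one unit at a time
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1
    print("final energy:", energy(s))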
arxiv-672994 | cs/0506033 | An Event-driven Operator Model for Dynamic Simulation of Construction Machinery | <|reference_start|>An Event-driven Operator Model for Dynamic Simulation of Construction Machinery: Prediction and optimisation of a wheel loader's dynamic behaviour is a challenge due to tightly coupled, non-linear subsystems of different technical domains. Furthermore, a simulation regarding performance, efficiency, and operability cannot be limited to the machine itself, but has to include operator, environment, and work task. This paper presents some results of our approach to an event-driven simulation model of a human operator. Describing the task and the operator model independently of the machine's technical parameters, gives the possibility to change whole sub-system characteristics without compromising the relevance and validity of the simulation.<|reference_end|> | arxiv | @article{filla2005an,
title={An Event-driven Operator Model for Dynamic Simulation of Construction
Machinery},
author={Reno Filla (Volvo Wheel Loaders AB)},
journal={arXiv preprint arXiv:cs/0506033},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506033},
primaryClass={cs.CE}
} | filla2005an |
arxiv-672995 | cs/0506034 | A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing | <|reference_start|>A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing: Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology. This would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also helps to provide an easy way for new practitioners to understand this complex area of research.<|reference_end|> | arxiv | @article{venugopal2005a,
title={A Taxonomy of Data Grids for Distributed Data Sharing, Management and
Processing},
author={Srikumar Venugopal, Rajkumar Buyya and Kotagiri Ramamohanarao},
journal={arXiv preprint arXiv:cs/0506034},
year={2005},
number={GRIDS-TR-2005-3},
archivePrefix={arXiv},
eprint={cs/0506034},
primaryClass={cs.DC cs.CE}
} | venugopal2005a |
arxiv-672996 | cs/0506035 | Fast Recompilation of Object Oriented Modules | <|reference_start|>Fast Recompilation of Object Oriented Modules: Once a program file is modified, the recompilation time should be minimized, without sacrificing execution speed or high level object oriented features. The recompilation time is often a problem for the large graphical interactive distributed applications tackled by modern OO languages. A compilation server and fast code generator were developed and integrated with the SRC Modula-3 compiler and Linux ELF dynamic linker. The resulting compilation and recompilation speedups are impressive. The impact of different language features, processor speed, and application size are discussed.<|reference_end|> | arxiv | @article{collin2005fast,
title={Fast Recompilation of Object Oriented Modules},
author={Jerome Collin (Computer Engineering, Ecole Polytechnique de Montreal),
Michel Dagenais (Computer Engineering, Ecole Polytechnique de Montreal)},
journal={arXiv preprint arXiv:cs/0506035},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506035},
primaryClass={cs.PL}
} | collin2005fast |
arxiv-672997 | cs/0506036 | Non prefix-free codes for constrained sequences | <|reference_start|>Non prefix-free codes for constrained sequences: In this paper we consider the use of variable-length non prefix-free codes for coding constrained sequences of symbols. We assume a Markov source where some state transitions are impossible, i.e. the stochastic matrix associated with the Markov chain has some null entries. We show that the classic Kraft inequality is not, in general, a necessary condition for unique decodability under the above hypothesis, and we propose a relaxed necessary inequality condition. This allows, in some cases, the use of non prefix-free codes that can give very good performance, both in terms of compression and computational efficiency. Some considerations are made on the relation between the proposed approach and other existing coding paradigms.<|reference_end|> | arxiv | @article{dalai2005non,
title={Non prefix-free codes for constrained sequences},
author={Marco Dalai and Riccardo Leonardi},
journal={arXiv preprint arXiv:cs/0506036},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506036},
primaryClass={cs.IT math.IT}
} | dalai2005non |
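For reference, here is the classical Kraft sum that the paper relaxes, with made-up codeword lengths. A length multiset whose sum exceeds 1 admits no uniquely decodable code for unconstrained sequences, which is exactly the condition the constrained setting can escape.

    def kraft_sum(lengths, arity=2):
        # sum of arity**(-l); <= 1 is McMillan's necessary condition for
        # unique decodability over unconstrained sequences
        return sum(arity ** -l for l in lengths)

    print(kraft_sum([1, 2, 1]))  # 1.25 > 1: ruled out without constraints
    print(kraft_sum([1, 2, 2]))  # 1.0: achievable, e.g. {0, 10, 11}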
arxiv-672998 | cs/0506037 | Tradeoff Between Source and Channel Coding for Erasure Channels | <|reference_start|>Tradeoff Between Source and Channel Coding for Erasure Channels: In this paper, we investigate the optimal tradeoff between source and channel coding for channels with bit or packet erasure. Upper and lower bounds on the optimal channel coding rate are computed to achieve minimal end-to-end distortion. The bounds are calculated based on a combination of the sphere-packing, straight-line, and expurgated error exponents, together with high-rate vector quantization theory. By modeling a packet erasure channel in terms of an equivalent bit erasure channel, we obtain bounds on the packet size for a specified limit on the distortion.<|reference_end|> | arxiv | @article{kizhakkemadam2005tradeoff,
title={Tradeoff Between Source and Channel Coding for Erasure Channels},
author={Sriram N. Kizhakkemadam, Panos Papamichalis, Mandyam Srinath, Dinesh
Rajan},
journal={arXiv preprint arXiv:cs/0506037},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506037},
primaryClass={cs.IT math.IT}
} | kizhakkemadam2005tradeoff |
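The basic quantity behind such rate tradeoffs is the erasure-channel capacity: a binary erasure channel with erasure probability $\epsilon$ has

    C_{\mathrm{BEC}} = 1 - \epsilon \quad \text{bits per channel use},

and a channel that erases whole $k$-bit packets at rate $p$ erases each bit at that same rate $p$, which is one natural sense in which the two models can be made equivalent (the paper's exact modeling may differ).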
arxiv-672999 | cs/0506038 | A Game Theoretic Economics Framework for Understanding the Information Security Outsourcing Market | <|reference_start|>A Game Theoretic Economics Framework for Understanding the Information Security Outsourcing Market: In the information security outsourcing market, an important reason that firms do not want to let outside firms (usually called MSSPs, Managed Security Service Providers) take care of their security needs is that they worry about the quality of service MSSPs provide, because they cannot monitor the MSSPs' effort. Since an MSSP's action is unobservable to buyers, the MSSP can lower its cost, and thereby earn a higher profit, by working less hard than the contract requires. In the asymmetric-information literature, this possible secret shirking behavior is termed the moral hazard problem. This paper develops a game-theoretic economic framework to show that, under information asymmetry, an optimal contract can be designed so that MSSPs stick to their promised effort level. We also show that the optimal contract should be performance-based, i.e., payment to the MSSP should be based on the performance of the MSSP's security service period by period. For comparison, we show that if the moral hazard problem does not exist, the optimal contract does not depend on the MSSP's performance: a contract that specifies a constant payment to the MSSP is optimal. Finally, we show that under both the perfect-information and imperfect-information scenarios, the higher the transaction cost, the lower the payment to MSSPs.<|reference_end|> | arxiv | @article{ding2005a,
title={A Game Theoretic Economics Framework for Understanding the Information
Security Outsourcing Market},
author={Wen Ding and William Yurcik},
journal={arXiv preprint arXiv:cs/0506038},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506038},
primaryClass={cs.GT}
} | ding2005a |
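A toy numeric illustration in Python of the moral-hazard logic described above. All probabilities, costs, and payments are invented, not taken from the paper's model.

    # the MSSP privately chooses effort; breaches are rarer under high effort
    p_breach = {"high": 0.1, "low": 0.4}
    cost = {"high": 5.0, "low": 1.0}

    def mssp_profit(effort, w_good, w_bad):
        # expected payment (w_good if no breach, w_bad on breach) minus cost
        p = p_breach[effort]
        return (1 - p) * w_good + p * w_bad - cost[effort]

    contracts = {"flat": (10.0, 10.0), "performance-based": (15.0, 0.0)}
    for name, (wg, wb) in contracts.items():
        profits = {e: round(mssp_profit(e, wg, wb), 2)
                   for e in ("high", "low")}
        print(name, profits)
    # flat pay makes shirking optimal (9.0 > 5.0); tying pay to outcomes
    # makes high effort optimal (8.5 > 8.0)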
arxiv-673000 | cs/0506039 | Antenna array geometry and coding performance | <|reference_start|>Antenna array geometry and coding performance: This paper provides details about space-time coding experiments in realistic, urban, frequency-flat channels; the experiments specifically examine the impact on performance of the number of receive antennas and of the design criteria used for code selection. The performance characteristics of the coded modulations are also examined in the presence of finite-size array geometries. This paper gives some insight into which of the theories are most useful in realistic deployments.<|reference_end|> | arxiv | @article{zhu2005antenna,
title={Antenna array geometry and coding performance},
author={Weijun Zhu, Heechoon Lee, Daniel Liu and Michael P. Fitz},
journal={arXiv preprint arXiv:cs/0506039},
year={2005},
archivePrefix={arXiv},
eprint={cs/0506039},
primaryClass={cs.IT math.IT}
} | zhu2005antenna |