Dataset fields (string length ranges per field):

  corpus_id      stringlengths   7 to 12
  paper_id       stringlengths   9 to 16
  title          stringlengths   1 to 261
  abstract       stringlengths   70 to 4.02k
  source         stringclasses   1 value
  bibtex         stringlengths   208 to 20.9k
  citation_key   stringlengths   6 to 100
arxiv-3001
0803.1217
Hsiao-Code Check Matrices and Recursively Balanced Matrices
<|reference_start|>Hsiao-Code Check Matrices and Recursively Balanced Matrices: The key step in generating the well-known Hsiao code is to construct a {0,1} check matrix in which each column contains the same odd number of 1's and the numbers of 1's in any two rows differ by at most one. We also require that no two columns of the matrix are identical. The author solved this problem in 1986 by introducing a type of recursively balanced matrices. However, since that paper was published in Chinese, the solution to this important problem remained unknown to international researchers in coding theory. In this note, we focus on how to practically generate the check matrix of Hsiao codes. We have modified the original algorithm to be more efficient and effective, and we have corrected an error in the algorithm analysis presented in the earlier paper. The result shows that the algorithm attains the optimum in the average case whenever a divide-and-conquer technique must be involved.<|reference_end|>
arxiv
@article{chen2008hsiao-code, title={Hsiao-Code Check Matrices and Recursively Balanced Matrices}, author={Li Chen}, journal={arXiv preprint arXiv:0803.1217}, year={2008}, archivePrefix={arXiv}, eprint={0803.1217}, primaryClass={cs.DM} }
chen2008hsiao-code
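The constraints quoted in this abstract (equal odd column weight, row weights differing by at most one, distinct columns) can be checked mechanically. A minimal Python sketch of such a checker, written for illustration and not the paper's construction algorithm:

```python
import numpy as np

def is_hsiao_check_matrix(H):
    """Verify the Hsiao-code constraints stated in the abstract:
    every column has the same odd weight, row weights differ by
    at most one, and no two columns are identical."""
    H = np.asarray(H)
    col_w = H.sum(axis=0)
    row_w = H.sum(axis=1)
    odd_equal_cols = len(set(col_w)) == 1 and col_w[0] % 2 == 1
    balanced_rows = row_w.max() - row_w.min() <= 1
    distinct_cols = len({tuple(c) for c in H.T}) == H.shape[1]
    return odd_equal_cols and balanced_rows and distinct_cols

# Toy 4x4 example: every column has weight 3 (odd), all rows have weight 3.
H = [[1, 1, 1, 0],
     [1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]
print(is_hsiao_check_matrix(H))  # True
```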
arxiv-3002
0803.1220
22-Step Collisions for SHA-2
<|reference_start|>22-Step Collisions for SHA-2: In this note, we provide the first 22-step collisions for SHA-256 and SHA-512. The detailed technique for generating these collisions will be provided in the next revision of this note.<|reference_end|>
arxiv
@article{sanadhya200822-step, title={22-Step Collisions for SHA-2}, author={Somitra Kumar Sanadhya and Palash Sarkar}, journal={arXiv preprint arXiv:0803.1220}, year={2008}, archivePrefix={arXiv}, eprint={0803.1220}, primaryClass={cs.CR} }
sanadhya200822-step
arxiv-3003
0803.1221
Non-Singular Assembly-mode Changing Motions for 3-RPR Parallel Manipulators
<|reference_start|>Non-Singular Assembly-mode Changing Motions for 3-RPR Parallel Manipulators: When moving from one arbitrary location to another, a parallel manipulator may change its assembly mode without crossing a singularity. Because a non-singular change of assembly mode cannot be easily detected, the actual assembly mode during motion is difficult to track. This paper proposes a global explanatory approach to help better understand non-singular assembly-mode changing motions of 3-RPR planar parallel manipulators. The approach consists of fixing one of the actuated joints and analyzing the configuration space as a surface in a 3-dimensional space. Such a global description makes it possible to display all possible non-singular assembly-mode changing trajectories.<|reference_end|>
arxiv
@article{zein2008non-singular, title={Non-Singular Assembly-mode Changing Motions for 3-RPR Parallel Manipulators}, author={Mazen Zein (IRCCyN), Philippe Wenger (IRCCyN), Damien Chablat (IRCCyN)}, journal={Mechanism and Machine Theory 23, 4 (2008) 480-490}, year={2008}, archivePrefix={arXiv}, eprint={0803.1221}, primaryClass={cs.RO physics.class-ph} }
zein2008non-singular
arxiv-3004
0803.1227
Linear programming bounds for unitary space time codes
<|reference_start|>Linear programming bounds for unitary space time codes: The linear programming method is applied to the space $U_n(\mathbb{C})$ of unitary matrices in order to obtain bounds for codes relative to the diversity sum and the diversity product. Theoretical and numerical results improving previously known bounds are derived.<|reference_end|>
arxiv
@article{creignou2008linear, title={Linear programming bounds for unitary space time codes}, author={Jean Creignou (IMB), Hervé Diet (IMB)}, journal={arXiv preprint arXiv:0803.1227}, year={2008}, archivePrefix={arXiv}, eprint={0803.1227}, primaryClass={cs.IT math.IT} }
creignou2008linear
arxiv-3005
0803.1245
The shortest game of Chinese Checkers and related problems
<|reference_start|>The shortest game of Chinese Checkers and related problems: In 1979, David Fabian found a complete game of two-person Chinese Checkers in 30 moves (15 by each player) [Martin Gardner, Penrose Tiles to Trapdoor Ciphers, MAA, 1997]. This solution requires that the two players cooperate to generate a win as quickly as possible for one of them. We show, using computational search techniques, that no shorter game is possible. We also consider a solitaire version of Chinese Checkers where one player attempts to move her pieces across the board in as few moves as possible. In 1971, Octave Levenspiel found a solution in 27 moves [Ibid.]; we demonstrate that no shorter solution exists. To show optimality, we employ a variant of A* search, as well as bidirectional search.<|reference_end|>
arxiv
@article{bell2008the, title={The shortest game of Chinese Checkers and related problems}, author={George I. Bell}, journal={INTEGERS: Electronic Journal of Combinatorial Number Theory 9 (2009) #G01}, year={2008}, archivePrefix={arXiv}, eprint={0803.1245}, primaryClass={math.CO cs.DM cs.DS} }
bell2008the
arxiv-3006
0803.1296
On the Topology of the Restricted Delaunay Triangulation and Witness Complex in Higher Dimensions
<|reference_start|>On the Topology of the Restricted Delaunay Triangulation and Witness Complex in Higher Dimensions: It is a well-known fact that, under mild sampling conditions, the restricted Delaunay triangulation provides good topological approximations of 1- and 2-manifolds. We show that this is not the case for higher-dimensional manifolds, even under stronger sampling conditions. Specifically, it is not true that, for any compact closed submanifold M of R^n, and any sufficiently dense uniform sampling L of M, the Delaunay triangulation of L restricted to M is homeomorphic to M, or even homotopy equivalent to it. Besides, it is not true either that, for any sufficiently dense set W of witnesses, the witness complex of L relative to M contains or is contained in the restricted Delaunay triangulation of L.<|reference_end|>
arxiv
@article{oudot2008on, title={On the Topology of the Restricted Delaunay Triangulation and Witness Complex in Higher Dimensions}, author={Steve Y. Oudot}, journal={arXiv preprint arXiv:0803.1296}, year={2008}, archivePrefix={arXiv}, eprint={0803.1296}, primaryClass={cs.CG} }
oudot2008on
arxiv-3007
0803.1321
Treewidth computation and extremal combinatorics
<|reference_start|>Treewidth computation and extremal combinatorics: For a given graph G and integers b,f >= 0, let S be a subset of vertices of G of size b+1 such that the subgraph of G induced by S is connected and S can be separated from other vertices of G by removing f vertices. We prove that every graph on n vertices contains at most $n\binom{b+f}{b}$ such vertex subsets. This result from extremal combinatorics appears to be very useful in the design of several enumeration and exact algorithms. In particular, we use it to provide algorithms that for a given n-vertex graph G - compute the treewidth of G in time O(1.7549^n) using exponential space, and in time O(2.6151^n) using polynomial space; - decide in time $O((\frac{2n+k+1}{3})^{k+1} \cdot k n^6)$ if the treewidth of G is at most k; - list all minimal separators of G in time O(1.6181^n) and all potential maximal cliques of G in time O(1.7549^n). This significantly improves previous algorithms for these problems.<|reference_end|>
arxiv
@article{fomin2008treewidth, title={Treewidth computation and extremal combinatorics}, author={Fedor V. Fomin and Yngve Villanger}, journal={arXiv preprint arXiv:0803.1321}, year={2008}, archivePrefix={arXiv}, eprint={0803.1321}, primaryClass={cs.DS} }
fomin2008treewidth
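To make the extremal bound concrete, a one-line evaluation of $n\binom{b+f}{b}$, the stated maximum number of such vertex subsets (the numbers below are illustrative):

```python
from math import comb

def max_subsets(n, b, f):
    # Bound from the abstract: an n-vertex graph has at most n * C(b+f, b)
    # connected subsets S of size b+1 separable by removing f vertices.
    return n * comb(b + f, b)

print(max_subsets(n=20, b=3, f=2))  # 20 * C(5, 3) = 200
```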
arxiv-3008
0803.1323
A Game Theoretic Framework for Decentralized Power Allocation in IDMA Systems
<|reference_start|>A Game Theoretic Framework for Decentralized Power Allocation in IDMA Systems: In this contribution we present a decentralized power allocation algorithm for the uplink interleave division multiple access (IDMA) channel. Within the proposed optimal strategy for power allocation, each user aims at selfishly maximizing its own utility function. An iterative chip by chip (CBC) decoder at the receiver and a rational selfish behavior of all the users according to a classical game-theoretical framework are the underlying assumptions of this work. This approach leads to a power allocation based on a channel inversion policy where the optimal power level is set locally at each terminal based on the knowledge of its own channel realization, the noise level at the receiver and the number of active users in the network.<|reference_end|>
arxiv
@article{perlaza2008a, title={A Game Theoretic Framework for Decentralized Power Allocation in IDMA Systems}, author={Samir Medina Perlaza, Laura Cottatellucci, Merouane Debbah}, journal={arXiv preprint arXiv:0803.1323}, year={2008}, doi={10.1109/PIMRC.2008.4699708}, archivePrefix={arXiv}, eprint={0803.1323}, primaryClass={cs.IT cs.GT math.IT} }
perlaza2008a
arxiv-3009
0803.1360
On the need for a global academic internet platform
<|reference_start|>On the need for a global academic internet platform: The article collects arguments for the necessity of a global academic internet platform, which is organized as a kind of "global scientific parliament". With such a constitution educational and research institutions will have direct means for communicating scientific results, as well as a platform for representing academia and scientific life in the public.<|reference_end|>
arxiv
@article{kutz2008on, title={On the need for a global academic internet platform}, author={Nadja Kutz}, journal={arXiv preprint arXiv:0803.1360}, year={2008}, archivePrefix={arXiv}, eprint={0803.1360}, primaryClass={cs.CY} }
kutz2008on
arxiv-3010
0803.1393
On inversion formulas and Fibonomial coefficients
<|reference_start|>On inversion formulas and Fibonomial coefficients: A research problem for undergraduates and graduates is posed as a cap for the preceding regular discrete mathematics exercises. [Here cap is not necessarily CAP=Competitive Access Provider, though nevertheless ...] The object of final interest in the cap problem, i.e. the array of fibonomial coefficients and the issue of its combinatorial meaning, is to be found in A. K. Kwaśniewski's source papers. Cap problem number seven - still open for students - has been placed on the Mathemagics page of the first author [http://ii.uwb.edu.pl/akk/dydaktyka/dyskr/dyskretna.htm]. The indicatory references point at a part of the vast domain of the foundations of computer science in the arXiv affiliation noted as CO.cs.DM. The presentation has been verified in a tutor system of communication with a couple of intelligent students. The result is top secret. Temporarily. [Contact: Wikipedia; Theory of cognitive development].<|reference_end|>
arxiv
@article{kwaśniewski2008on, title={On inversion formulas and Fibonomial coefficients}, author={A. Krzysztof Kwaśniewski, Ewa Krot-Sieniawska}, journal={Proc. Jangjeon Math. Soc. volume 11 (1), 2008 (June), 65-68}, year={2008}, archivePrefix={arXiv}, eprint={0803.1393}, primaryClass={math.CO cs.DM math.GM} }
kwaśniewski2008on
arxiv-3011
0803.1416
New formulas for Stirling-like numbers and Dobinski-like formulas
<|reference_start|>New formulas for Stirling-like numbers and Dobinski-like formulas: Extensions of the Stirling numbers of the second kind and Dobinski-like formulas are proposed in a series of exercises for graduates. Some of these new formulas, recently discovered by me, are to be found in the source paper [1]. These extensions naturally encompass the well-known $q$-extensions. The indicatory references point at a part of the vast domain of the foundations of computer science in the arXiv affiliation.<|reference_end|>
arxiv
@article{kwasniewski2008new, title={New formulas for Stirling-like numbers and Dobinski-like formulas}, author={A. K. Kwasniewski}, journal={Proc. Jangjeon Math. Soc. Vol. 11 No 2, (2008),137-144}, year={2008}, archivePrefix={arXiv}, eprint={0803.1416}, primaryClass={math.CO cs.DM} }
kwasniewski2008new
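For orientation, the classical Dobinski formula that these exercises extend expresses the Bell numbers as $B_n = e^{-1}\sum_{k \ge 0} k^n/k!$. A quick numerical sketch of the classical case (my illustration, not the paper's $q$-extension):

```python
from math import exp, factorial

def bell_dobinski(n, terms=60):
    # Classical Dobinski formula: B_n = (1/e) * sum_{k>=0} k^n / k!,
    # truncated after `terms` summands (the tail decays factorially).
    return round(sum(k**n / factorial(k) for k in range(terms)) / exp(1))

print([bell_dobinski(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```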
arxiv-3012
0803.1443
Lexical growth, entropy and the benefits of networking
<|reference_start|>Lexical growth, entropy and the benefits of networking: If each node of an idealized network has an equal capacity to efficiently exchange benefits, then the network's capacity to use energy is scaled by the average amount of energy required to connect any two of its nodes. The scaling factor equals \textit{e}, and the network's entropy is $\ln(n)$. Networking emerges in consequence of nodes minimizing the ratio of their energy use to the benefits obtained for such use, and their connectability. Networking leads to nested hierarchical clustering, which multiplies a network's capacity to use its energy to benefit its nodes. Network entropy multiplies a node's capacity. For a real network in which the nodes have the capacity to exchange benefits, network entropy may be estimated as $C \log_L(n)$, where the base of the log is the path length $L$, and $C$ is the clustering coefficient. Since $n$, $L$ and $C$ can be calculated for real networks, network entropy for real networks can be calculated and can reveal aspects of emergence and also of economic, biological, conceptual and other networks, such as the relationship between rates of lexical growth and divergence, and the economic benefit of adding customers to a commercial communications network. \textit{Entropy dating} can help estimate the age of network processes, such as the growth of hierarchical society and of language.<|reference_end|>
arxiv
@article{shour2008lexical, title={Lexical growth, entropy and the benefits of networking}, author={Robert Shour}, journal={arXiv preprint arXiv:0803.1443}, year={2008}, archivePrefix={arXiv}, eprint={0803.1443}, primaryClass={cs.IT math.IT q-bio.QM} }
shour2008lexical
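The closing estimate above, network entropy $\approx C \log_L(n)$, is directly computable from measurable network statistics. A minimal sketch with placeholder values (the numbers are illustrative, not data from the paper):

```python
from math import log

def network_entropy(n, L, C):
    # Abstract's estimate for a real network: C * log_L(n), where the
    # average path length L is the log base and C is the clustering
    # coefficient.
    return C * log(n) / log(L)

# Hypothetical small-world network with 10,000 nodes.
print(round(network_entropy(n=10_000, L=4.0, C=0.3), 2))  # 1.99
```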
arxiv-3013
0803.1445
Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information
<|reference_start|>Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information: We consider the problem of transmission of several distributed sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The source and/or the channel could have continuous alphabets (thus Gaussian sources and Gaussian MACs are special cases). Various previous results are obtained as special cases. We also provide several good joint source-channel coding schemes for a discrete/continuous source and discrete/continuous alphabet channel. Channels with feedback and fading are also considered. Keywords: Multiple access channel, side information, lossy joint source-channel coding, channels with feedback, fading channels.<|reference_end|>
arxiv
@article{rajesh2008distributed, title={Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information}, author={R. Rajesh and Vinod Sharma}, journal={arXiv preprint arXiv:0803.1445}, year={2008}, number={TR-PME-2008-01}, archivePrefix={arXiv}, eprint={0803.1445}, primaryClass={cs.IT math.IT} }
rajesh2008distributed
arxiv-3014
0803.1454
Tight Bounds on the Capacity of Binary Input random CDMA Systems
<|reference_start|>Tight Bounds on the Capacity of Binary Input random CDMA Systems: We consider multiple access communication on a binary input additive white Gaussian noise channel using randomly spread code division. For a general class of symmetric distributions for spreading coefficients, in the limit of a large number of users, we prove an upper bound on the capacity, which matches a formula that Tanaka obtained by using the replica method. We also show concentration of various relevant quantities including mutual information, capacity and free energy. The mathematical methods are quite general and allow us to discuss extensions to other multiuser scenarios.<|reference_end|>
arxiv
@article{korada2008tight, title={Tight Bounds on the Capacity of Binary Input random CDMA Systems}, author={Satish Babu Korada, Nicolas Macris}, journal={arXiv preprint arXiv:0803.1454}, year={2008}, archivePrefix={arXiv}, eprint={0803.1454}, primaryClass={cs.IT math.IT} }
korada2008tight
arxiv-3015
0803.1457
Hybrid Reasoning and the Future of Iconic Representations
<|reference_start|>Hybrid Reasoning and the Future of Iconic Representations: We give a brief overview of the main characteristics of diagrammatic reasoning, analyze a case of human reasoning in a mastermind game, and explain why hybrid representation systems (HRS) are particularly attractive and promising for Artificial General Intelligence and Computer Science in general.<|reference_end|>
arxiv
@article{recanati2008hybrid, title={Hybrid Reasoning and the Future of Iconic Representations}, author={Catherine Recanati (LIPN)}, journal={Dans Artificial General Intelligence 2008 - The First AGI Conference, Memphis, Tennessee : \'Etats-Unis d'Am\'erique (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0803.1457}, primaryClass={cs.AI cs.LO} }
recanati2008hybrid
arxiv-3016
0803.1500
NCore: Architecture and Implementation of a Flexible, Collaborative Digital Library
<|reference_start|>NCore: Architecture and Implementation of a Flexible, Collaborative Digital Library: NCore is an open source architecture and software platform for creating flexible, collaborative digital libraries. NCore was developed by the National Science Digital Library (NSDL) project, and it serves as the central technical infrastructure for NSDL. NCore consists of a central Fedora-based digital repository, a specific data model, an API, and a set of backend services and frontend tools that create a new model for collaborative, contributory digital libraries. This paper describes NCore, presents and analyzes its architecture, tools and services; and reports on the experience of NSDL in building and operating a major digital library on it over the past year and the experience of the Digital Library for Earth Systems Education in porting their existing digital library and tools to the NCore platform.<|reference_end|>
arxiv
@article{krafft2008ncore:, title={NCore: Architecture and Implementation of a Flexible, Collaborative Digital Library}, author={Dean B. Krafft, Aaron Birkland, Ellen J. Cramer}, journal={arXiv preprint arXiv:0803.1500}, year={2008}, archivePrefix={arXiv}, eprint={0803.1500}, primaryClass={cs.DL} }
krafft2008ncore:
arxiv-3017
0803.1511
The Capacity Region of the Degraded Finite-State Broadcast Channel
<|reference_start|>The Capacity Region of the Degraded Finite-State Broadcast Channel: We consider the discrete, time-varying broadcast channel with memory under the assumption that the channel states belong to a set of finite cardinality. We first define the physically degraded finite-state broadcast channel for which we derive the capacity region. We then define the stochastically degraded finite-state broadcast channel and derive the capacity region for this scenario as well. In both scenarios we consider the non-indecomposable finite-state channel as well as the indecomposable one.<|reference_end|>
arxiv
@article{dabora2008the, title={The Capacity Region of the Degraded Finite-State Broadcast Channel}, author={Ron Dabora and Andrea Goldsmith}, journal={arXiv preprint arXiv:0803.1511}, year={2008}, archivePrefix={arXiv}, eprint={0803.1511}, primaryClass={cs.IT math.IT} }
dabora2008the
arxiv-3018
0803.1520
Integrity-Enhancing Replica Coordination for Byzantine Fault Tolerant Systems
<|reference_start|>Integrity-Enhancing Replica Coordination for Byzantine Fault Tolerant Systems: Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective, and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strengths and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.<|reference_end|>
arxiv
@article{zhao2008integrity-enhancing, title={Integrity-Enhancing Replica Coordination for Byzantine Fault Tolerant Systems}, author={Wenbing Zhao}, journal={arXiv preprint arXiv:0803.1520}, year={2008}, archivePrefix={arXiv}, eprint={0803.1520}, primaryClass={cs.DC} }
zhao2008integrity-enhancing
arxiv-3019
0803.1521
Proactive Service Migration for Long-Running Byzantine Fault Tolerant Systems
<|reference_start|>Proactive Service Migration for Long-Running Byzantine Fault Tolerant Systems: In this paper, we describe a novel proactive recovery scheme based on service migration for long-running Byzantine fault tolerant systems. Proactive recovery is an essential method for ensuring long-term reliability of fault tolerant systems that are under continuous threats from malicious adversaries. The primary benefit of our proactive recovery scheme is a reduced vulnerability window. This is achieved by removing the time-consuming reboot step from the critical path of proactive recovery. Our migration-based proactive recovery is coordinated among the replicas; therefore, it can automatically adjust to different system loads and avoid the problem of excessive concurrent proactive recoveries that may occur in previous work with fixed watchdog timeouts. Moreover, the fast proactive recovery also significantly improves the system availability in the presence of faults.<|reference_end|>
arxiv
@article{zhao2008proactive, title={Proactive Service Migration for Long-Running Byzantine Fault Tolerant Systems}, author={Wenbing Zhao}, journal={arXiv preprint arXiv:0803.1521}, year={2008}, archivePrefix={arXiv}, eprint={0803.1521}, primaryClass={cs.DC} }
zhao2008proactive
arxiv-3020
0803.1555
Privacy Preserving ID3 over Horizontally, Vertically and Grid Partitioned Data
<|reference_start|>Privacy Preserving ID3 over Horizontally, Vertically and Grid Partitioned Data: We consider privacy preserving decision tree induction via ID3 in the case where the training data is horizontally or vertically distributed. Furthermore, we consider the same problem in the case where the data is both horizontally and vertically distributed, a situation we refer to as grid partitioned data. We give an algorithm for privacy preserving ID3 over horizontally partitioned data involving more than two parties. For grid partitioned data, we discuss two different evaluation methods for privacy preserving ID3, namely, first merging horizontally and then developing vertically, or first merging vertically and then developing horizontally. Besides introducing privacy preserving data mining over grid-partitioned data, the main contribution of this paper is that we show, by means of a complexity analysis, that the former evaluation method is the more efficient.<|reference_end|>
arxiv
@article{kuijpers2008privacy, title={Privacy Preserving ID3 over Horizontally, Vertically and Grid Partitioned Data}, author={Bart Kuijpers, Vanessa Lemmens, Bart Moelans and Karl Tuyls}, journal={arXiv preprint arXiv:0803.1555}, year={2008}, archivePrefix={arXiv}, eprint={0803.1555}, primaryClass={cs.DB cs.LG} }
kuijpers2008privacy
arxiv-3021
0803.1568
Dempster-Shafer for Anomaly Detection
<|reference_start|>Dempster-Shafer for Anomaly Detection: In this paper, we implement an anomaly detection system using the Dempster-Shafer method. Using two standard benchmark problems, we show that by combining multiple signals it is possible to achieve better results than by using a single signal. We further show, by applying this approach to a real-world email dataset, that the algorithm works for email worm detection. Dempster-Shafer can be a promising method for anomaly detection problems with multiple features (data sources), and two or more classes.<|reference_end|>
arxiv
@article{chen2008dempster-shafer, title={Dempster-Shafer for Anomaly Detection}, author={Qi Chen and Uwe Aickelin}, journal={Proceedings of the International Conference on Data Mining (DMIN 2006), pp 232-238, Las Vegas, USA 2006}, year={2008}, archivePrefix={arXiv}, eprint={0803.1568}, primaryClass={cs.NE cs.AI cs.CR} }
chen2008dempster-shafer
arxiv-3022
0803.1575
A Quantifier Elimination Algorithm for Linear Real Arithmetic
<|reference_start|>A Quantifier Elimination Algorithm for Linear Real Arithmetic: We propose a new quantifier elimination algorithm for the theory of linear real arithmetic. This algorithm uses satisfiability modulo this theory as a subroutine, a problem for which several implementations are available. The quantifier elimination algorithm presented in the paper is compared, on examples arising from program analysis problems, to several other implementations, none of which can solve some of the examples that our algorithm solves easily.<|reference_end|>
arxiv
@article{monniaux2008a, title={A Quantifier Elimination Algorithm for Linear Real Arithmetic}, author={David Monniaux (VERIMAG - Imag)}, journal={arXiv preprint arXiv:0803.1575}, year={2008}, archivePrefix={arXiv}, eprint={0803.1575}, primaryClass={cs.LO} }
monniaux2008a
arxiv-3023
0803.1576
Simulation Optimization of the Crossdock Door Assignment Problem
<|reference_start|>Simulation Optimization of the Crossdock Door Assignment Problem: The purpose of this report is to present the Crossdock Door Assignment Problem, which involves assigning destinations to outbound dock doors of crossdock centres such that the travel distance by material handling equipment is minimized. We propose a two-fold solution: simulation, and optimization of the simulation model (simulation optimization). The novel aspect of our solution approach is that we intend to use simulation to derive a more realistic objective function and use Memetic algorithms to find an optimal solution. The main advantage of using Memetic algorithms is that they combine a local search with Genetic Algorithms. The Crossdock Door Assignment Problem is a new application domain for Memetic Algorithms, and it is yet unknown how they will perform.<|reference_end|>
arxiv
@article{aickelin2008simulation, title={Simulation Optimization of the Crossdock Door Assignment Problem}, author={Uwe Aickelin and Adrian Adewunmi}, journal={UK Operational Research Society Simulation Workshop 2006 (SW 2006), Leamington Spa, UK 2006}, year={2008}, archivePrefix={arXiv}, eprint={0803.1576}, primaryClass={cs.NE cs.CE} }
aickelin2008simulation
arxiv-3024
0803.1586
Spatio-activity based object detection
<|reference_start|>Spatio-activity based object detection: We present the SAMMI lightweight object detection method which has a high level of accuracy and robustness, and which is able to operate in an environment with a large number of cameras. Background modeling is based on DCT coefficients provided by cameras. Foreground detection uses similarity in temporal characteristics of adjacent blocks of pixels, which is a computationally inexpensive way to make use of object coherence. Scene model updating uses the approximated median method for improved performance. Evaluation at pixel level and application level shows that SAMMI object detection performs better and faster than the conventional Mixture of Gaussians method.<|reference_end|>
arxiv
@article{springett2008spatio-activity, title={Spatio-activity based object detection}, author={Jarrad Springett, Jeroen Vendrig}, journal={arXiv preprint arXiv:0803.1586}, year={2008}, archivePrefix={arXiv}, eprint={0803.1586}, primaryClass={cs.CV} }
springett2008spatio-activity
arxiv-3025
0803.1596
Using Intelligent Agents to understand organisational behaviour
<|reference_start|>Using Intelligent Agents to understand organisational behaviour: This paper introduces two ongoing research projects which seek to apply computer modelling techniques in order to simulate human behaviour within organisations. Previous research in other disciplines has suggested that complex social behaviours are governed by relatively simple rules which, when identified, can be used to accurately model such processes using computer technology. The broad objective of our research is to develop a similar capability within organisational psychology.<|reference_end|>
arxiv
@article{celia2008using, title={Using Intelligent Agents to understand organisational behaviour}, author={Helen Celia, Christopher Clegg, Mark Robinson, Peer-Olaf Siebers, Uwe Aickelin and Christine Sprigg}, journal={Proceedings of the British Psychology Society Annual Conference, Occupational Psychology Division (BPS 2007), Bristol, UK 2007}, year={2008}, archivePrefix={arXiv}, eprint={0803.1596}, primaryClass={cs.NE cs.MA} }
celia2008using
arxiv-3026
0803.1598
A Multi-Agent Simulation of Retail Management Practices
<|reference_start|>A Multi-Agent Simulation of Retail Management Practices: We apply Agent-Based Modeling and Simulation (ABMS) to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents do offer potential for developing organizational capabilities in the future. Our multi-disciplinary research team has worked with a UK department store to collect data and capture perceptions about operations from actors within departments. Based on this case study work, we have built a simulator that we present in this paper. We then use the simulator to gather empirical evidence regarding two specific management practices: empowerment and employee development.<|reference_end|>
arxiv
@article{siebers2008a, title={A Multi-Agent Simulation of Retail Management Practices}, author={Peer-Olaf Siebers, Uwe Aickelin, Helen Celia and Christopher Clegg}, journal={Proceedings of the Summer Computer Simulation Conference (SCSC 2007), pp 959-966, San Diego, USA 2007}, year={2008}, archivePrefix={arXiv}, eprint={0803.1598}, primaryClass={cs.NE} }
siebers2008a
arxiv-3027
0803.1600
Understanding Retail Productivity by Simulating Management Practise
<|reference_start|>Understanding Retail Productivity by Simulating Management Practise: Intelligent agents offer a new and exciting way of understanding the world of work. In this paper we apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities in the future. Our research so far has led us to conduct case study work with a top ten UK retailer, collecting data in four departments in two stores. Based on our case study data we have built and tested a first version of a department store simulator. In this paper we will report on the current development of our simulator which includes new features concerning more realistic data on the pattern of footfall during the day and the week, a more differentiated view of customers, and the evolution of customers over time. This allows us to investigate more complex scenarios and to analyze the impact of various management practices.<|reference_end|>
arxiv
@article{siebers2008understanding, title={Understanding Retail Productivity by Simulating Management Practise}, author={Peer-Olaf Siebers, Uwe Aickelin, Helen Celia and Christopher Clegg}, journal={Proceedings of the EUROSIM Congress on Modelling and Simulation (EUROSIM 2007), pp 1-12, Ljubljana, Slovenia 2007}, year={2008}, archivePrefix={arXiv}, eprint={0803.1600}, primaryClass={cs.NE} }
siebers2008understanding
arxiv-3028
0803.1604
Using Intelligent Agents to Understand Management Practices and Retail Productivity
<|reference_start|>Using Intelligent Agents to Understand Management Practices and Retail Productivity: Intelligent agents offer a new and exciting way of understanding the world of work. In this paper we apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities in the future. The project is still at an early stage. So far we have conducted a case study in a UK department store to collect data and capture impressions about operations and actors within departments. Furthermore, based on our case study we have built and tested our first version of a retail branch simulator which we will present in this paper.<|reference_end|>
arxiv
@article{siebers2008using, title={Using Intelligent Agents to Understand Management Practices and Retail Productivity}, author={Peer-Olaf Siebers, Uwe Aickelin, Helen Celia and Christopher Clegg}, journal={Proceedings of the Winter Simulation Conference (WSC 2007), pp 2212-2220, Washington, USA 2007}, year={2008}, archivePrefix={arXiv}, eprint={0803.1604}, primaryClass={cs.NE cs.CE cs.MA} }
siebers2008using
arxiv-3029
0803.1610
A Queueing System for Modeling a File Sharing Principle
<|reference_start|>A Queueing System for Modeling a File Sharing Principle: We investigate in this paper the performance of a simple file sharing principle. For this purpose, we consider a system composed of N peers becoming active at exponential random times; the system is initiated with only one server offering the desired file, and the other peers, after becoming active, try to download it. Once the file has been downloaded by a peer, this peer immediately becomes a server. To investigate the transient behavior of this file sharing system, we study the instant when the system shifts from a congested state, where all available servers are saturated by incoming demands, to a state where a growing number of servers are idle. In spite of its apparent simplicity, this queueing model (with a random number of servers) turns out to be quite difficult to analyze. A formulation in terms of an urn and ball model is proposed and the corresponding scaling results are derived. These asymptotic results are then compared against simulations.<|reference_end|>
arxiv
@article{simatos2008a, title={A Queueing System for Modeling a File Sharing Principle}, author={Florian Simatos, Philippe Robert, Fabrice Guillemin (FT R&D)}, journal={Dans ACM Sigmetrics (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0803.1610}, primaryClass={cs.NI} }
simatos2008a
arxiv-3030
0803.1621
An Agent-Based Simulation of In-Store Customer Experiences
<|reference_start|>An Agent-Based Simulation of In-Store Customer Experiences: Agent-based modelling and simulation offers a new and exciting way of understanding the world of work. In this paper we describe the development of an agent-based simulation model, designed to help to understand the relationship between human resource management practices and retail productivity. We report on the current development of our simulation model which includes new features concerning the evolution of customers over time. To test some of these features we have conducted a series of experiments dealing with customer pool sizes, standard and noise reduction modes, and the spread of word of mouth. Our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents offer potential for fostering sustainable organisational capabilities in the future.<|reference_end|>
arxiv
@article{siebers2008an, title={An Agent-Based Simulation of In-Store Customer Experiences}, author={Peer-Olaf Siebers, Uwe Aickelin, Helen Celia and Christopher Clegg}, journal={Operational Research Society 4th Simulation Workshop (SW08), in print, pp, Worcestershire, UK 2008}, year={2008}, archivePrefix={arXiv}, eprint={0803.1621}, primaryClass={cs.NE cs.CE cs.MA} }
siebers2008an
arxiv-3031
0803.1626
Genetic-Algorithm Seeding Of Idiotypic Networks For Mobile-Robot Navigation
<|reference_start|>Genetic-Algorithm Seeding Of Idiotypic Networks For Mobile-Robot Navigation: Robot-control designers have begun to exploit the properties of the human immune system in order to produce dynamic systems that can adapt to complex, varying, real-world tasks. Jerne's idiotypic-network theory has proved the most popular artificial-immune-system (AIS) method for incorporation into behaviour-based robotics, since idiotypic selection produces highly adaptive responses. However, previous efforts have mostly focused on evolving the network connections and have often worked with a single, pre-engineered set of behaviours, limiting variability. This paper describes a method for encoding behaviours as a variable set of attributes, and shows that when the encoding is used with a genetic algorithm (GA), multiple sets of diverse behaviours can develop naturally and rapidly, providing much greater scope for flexible behaviour-selection. The algorithm is tested extensively with a simulated e-puck robot that navigates around a maze by tracking colour. Results show that highly successful behaviour sets can be generated within about 25 minutes, and that much greater diversity can be obtained when multiple autonomous populations are used, rather than a single one.<|reference_end|>
arxiv
@article{whitbrook2008genetic-algorithm, title={Genetic-Algorithm Seeding Of Idiotypic Networks For Mobile-Robot Navigation}, author={Amanda Whitbrook, Uwe Aickelin and Jonathan Garibaldi}, journal={Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO 2008), in print, pp, Funchal, Portugal, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0803.1626}, primaryClass={cs.NE cs.RO} }
whitbrook2008genetic-algorithm
arxiv-3032
0803.1672
Self-Assembly of Discrete Self-Similar Fractals
<|reference_start|>Self-Assembly of Discrete Self-Similar Fractals: In this paper, we search for \textit{absolute} limitations of the Tile Assembly Model (TAM), along with techniques to work around such limitations. Specifically, we investigate the self-assembly of fractal shapes in the TAM. We prove that no self-similar fractal fully weakly self-assembles at temperature 1, and that certain kinds of self-similar fractals do not strictly self-assemble at any temperature. Additionally, we extend the fiber construction from Lathrop et al. (2007) to show that any self-similar fractal belonging to a particular class of "nice" self-similar fractals has a fibered version that strictly self-assembles in the TAM.<|reference_end|>
arxiv
@article{patitz2008self-assembly, title={Self-Assembly of Discrete Self-Similar Fractals}, author={Matthew J. Patitz, and Scott M. Summers}, journal={arXiv preprint arXiv:0803.1672}, year={2008}, archivePrefix={arXiv}, eprint={0803.1672}, primaryClass={cs.CC cs.DM} }
patitz2008self-assembly
arxiv-3033
0803.1695
Use of self-correlation metrics for evaluation of information properties of binary strings
<|reference_start|>Use of self-correlation metrics for evaluation of information properties of binary strings: It is demonstrated that appropriately chosen computable metrics based on self-correlation properties provide a degree of determinism sufficient to segregate binary strings by level of information content.<|reference_end|>
arxiv
@article{viznyuk2008use, title={Use of self-correlation metrics for evaluation of information properties of binary strings}, author={S.Viznyuk}, journal={arXiv preprint arXiv:0803.1695}, year={2008}, archivePrefix={arXiv}, eprint={0803.1695}, primaryClass={cs.IT math.IT} }
viznyuk2008use
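The abstract above does not name a specific metric; as one illustration of a computable self-correlation statistic that separates structured from unstructured binary strings, here is a small sketch (my construction, under the assumption that lag-agreement correlation is representative of the metrics meant):

```python
import random

def self_correlation(bits, lag):
    # Fraction of positions where the string agrees with itself shifted
    # by `lag`; values far from 0.5 indicate deterministic structure.
    n = len(bits) - lag
    return sum(bits[i] == bits[i + lag] for i in range(n)) / n

random.seed(0)
periodic = [0, 1] * 64                               # highly structured string
noise = [random.randint(0, 1) for _ in range(128)]   # low-structure string

for lag in (1, 2, 3):
    print(lag, self_correlation(periodic, lag),
          round(self_correlation(noise, lag), 2))
# The periodic string swings between 0.0 and 1.0 with the lag, while the
# random string stays near 0.5: a computable separation by structure.
```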
arxiv-3034
0803.1716
Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science
<|reference_start|>Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science: This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR--a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially if citations in conference proceedings are sought and that h scores should be manually calculated instead of relying on system calculations.<|reference_end|>
arxiv
@article{meho2008citation, title={Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science}, author={Lokman I. Meho and Yvonne Rogers}, journal={arXiv preprint arXiv:0803.1716}, year={2008}, archivePrefix={arXiv}, eprint={0803.1716}, primaryClass={cs.HC cs.IR} }
meho2008citation
arxiv-3035
0803.1723
Estimation of available bandwidth and measurement infrastructure for Russian segment of Internet
<|reference_start|>Estimation of available bandwidth and measurement infrastructure for Russian segment of Internet: In this paper a method for estimating the available bandwidth is proposed which does not require advanced utilities. Our method is based on measuring the network delay $D$ for packets of different sizes $W$. The simple expression for the available bandwidth $B_{av} = (W_2-W_1)/(D_2-D_1)$ is substantiated. For experimental testing, a measurement infrastructure for the Russian segment of the Internet was installed in the framework of RFBR grant 06-07-89074.<|reference_end|>
arxiv
@article{platonov2008estimation, title={Estimation of available bandwidth and measurement infrastructure for Russian segment of Internet}, author={A. P. Platonov, D. I. Sidelnikov, M. V. Strizhov, A. M. Sukhov}, journal={arXiv preprint arXiv:0803.1723}, year={2008}, archivePrefix={arXiv}, eprint={0803.1723}, primaryClass={cs.NI cs.PF} }
platonov2008estimation
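The estimator in this abstract needs only two probe sizes and their measured delays. A direct transcription in Python (the probe values are hypothetical):

```python
def available_bandwidth(w1, d1, w2, d2):
    # B_av = (W2 - W1) / (D2 - D1), with sizes in bits and delays in seconds.
    return (w2 - w1) / (d2 - d1)

# Hypothetical probes: a 100-byte and a 1500-byte packet.
w1, d1 = 100 * 8, 0.0102    # 800 bits,   delay 10.2 ms
w2, d2 = 1500 * 8, 0.0214   # 12000 bits, delay 21.4 ms
print(round(available_bandwidth(w1, d1, w2, d2) / 1e6, 2), "Mbit/s")  # 1.0
```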
arxiv-3036
0803.1728
Investigating a Hybrid Metaheuristic For Job Shop Rescheduling
<|reference_start|>Investigating a Hybrid Metaheuristic For Job Shop Rescheduling: Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our target and measure how well we cover these specific scenarios.<|reference_end|>
arxiv
@article{abdullah2008investigating, title={Investigating a Hybrid Metaheuristic For Job Shop Rescheduling}, author={Salwani Abdullah, Uwe Aickelin, Edmund Burke, Aniza Din and Rong Qu}, journal={Proceedings of the 3rd Australian Conference on Artificial Life (ACAL07), Lecture Notes in Computer Science 4828, pp 357-368, Gold Coast, Australia 2007}, year={2008}, doi={10.1007/978-3-540-76931-6_31}, archivePrefix={arXiv}, eprint={0803.1728}, primaryClass={cs.NE cs.CE} }
abdullah2008investigating
arxiv-3037
0803.1733
Degrees of Freedom of the MIMO Interference Channel with Cooperation and Cognition
<|reference_start|>Degrees of Freedom of the MIMO Interference Channel with Cooperation and Cognition: In this paper, we explore the benefits, in the sense of total (sum rate) degrees of freedom (DOF), of cooperation and cognitive message sharing for a two-user multiple-input-multiple-output (MIMO) Gaussian interference channel with $M_1$, $M_2$ antennas at transmitters and $N_1$, $N_2$ antennas at receivers. For the case of cooperation (including cooperation at transmitters only, at receivers only, and at transmitters as well as receivers), the DOF is $\min \{M_1+M_2, N_1+N_2, \max(M_1, N_2), \max(M_2, N_1)\}$, which is the same as the DOF of the channel without cooperation. For the case of cognitive message sharing, the DOF is $\min \{M_1+M_2, N_1+N_2, (1-1_{T2})((1-1_{R2}) \max(M_1, N_2) + 1_{R2} (M_1+N_2)), (1-1_{T1})((1-1_{R1}) \max(M_2, N_1) + 1_{R1} (M_2+N_1)) \}$ where $1_{Ti} = 1$ $(0)$ when transmitter $i$ is (is not) a cognitive transmitter and $1_{Ri}$ is defined in the same fashion. Our results show that while both techniques may increase the sum rate capacity of the MIMO interference channel, only cognitive message sharing can increase the DOF. We also find that it may be more beneficial for a user to have a cognitive transmitter than to have a cognitive receiver.<|reference_end|>
arxiv
@article{huang2008degrees, title={Degrees of Freedom of the MIMO Interference Channel with Cooperation and Cognition}, author={Chiachi Huang and Syed A. Jafar}, journal={arXiv preprint arXiv:0803.1733}, year={2008}, archivePrefix={arXiv}, eprint={0803.1733}, primaryClass={cs.IT math.IT} }
huang2008degrees
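The cooperative-case expression above is a four-way minimum over the antenna counts and is easy to evaluate. A sketch (the antenna counts below are arbitrary examples):

```python
def dof_with_cooperation(M1, M2, N1, N2):
    # Sum DOF of the two-user MIMO interference channel with cooperation,
    # as stated in the abstract (equal to the DOF without cooperation):
    # min{ M1+M2, N1+N2, max(M1, N2), max(M2, N1) }
    return min(M1 + M2, N1 + N2, max(M1, N2), max(M2, N1))

print(dof_with_cooperation(M1=2, M2=2, N1=2, N2=2))  # 2
print(dof_with_cooperation(M1=3, M2=1, N1=1, N2=3))  # 1, limited by max(M2, N1)
```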
arxiv-3038
0803.1748
A Computational Framework for the Near Elimination of Spreadsheet Risk
<|reference_start|>A Computational Framework for the Near Elimination of Spreadsheet Risk: We present Risk Integrated's Enterprise Spreadsheet Platform (ESP), a technical approach to the near-elimination of spreadsheet risk in the enterprise computing environment, whilst maintaining the full flexibility of spreadsheets for modelling complex financial structures and processes. In its Basic Mode of use, the system comprises a secure and robust centralised spreadsheet management framework. In Advanced Mode, the system can be viewed as a robust computational framework whereby users can "submit jobs" to the spreadsheet, and retrieve the results from the computations, but with no direct access to the underlying spreadsheet. An example application, Monte Carlo simulation, is presented to highlight the benefits of this approach with regard to mitigating spreadsheet risk in complex, mission-critical, financial calculations.<|reference_end|>
arxiv
@article{jafry2008a, title={A Computational Framework for the Near Elimination of Spreadsheet Risk}, author={Yusuf Jafry, Fredrika Sidoroff, Roger Chi}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2006 85-93 ISBN:1-905617-08-9}, year={2008}, archivePrefix={arXiv}, eprint={0803.1748}, primaryClass={cs.SE cs.CY} }
jafry2008a
arxiv-3039
0803.1751
TellTable Spreadsheet Audit: from Technical Possibility to Operating Prototype
<|reference_start|>TellTable Spreadsheet Audit: from Technical Possibility to Operating Prototype: At the 2003 EuSpRIG meeting, we presented a framework and software infrastructure to generate and analyse an audit trail for a spreadsheet file. This report describes the results of a pilot implementation of this software (now called TellTable; see www.telltable.com), along with developments in the server infrastructure and availability, extensions to other "Office Suite" files, integration of the audit tool into the server interface, and related developments, licensing and reports. We continue to seek collaborators and partners in what is primarily an open-source project with some shared-source components.<|reference_end|>
arxiv
@article{nash2008telltable, title={TellTable Spreadsheet Audit: from Technical Possibility to Operating Prototype}, author={John Nash, Andy Adler, Neil Smith}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 45-55 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0803.1751}, primaryClass={cs.SE} }
nash2008telltable
arxiv-3040
0803.1754
A Novel Approach to Formulae Production and Overconfidence Measurement to Reduce Risk in Spreadsheet Modelling
<|reference_start|>A Novel Approach to Formulae Production and Overconfidence Measurement to Reduce Risk in Spreadsheet Modelling: Research on formulae production in spreadsheets has established the practice as high risk, yet it is unrecognised as such by industry. There are numerous software applications that are designed to audit formulae and find errors. However these all operate post creation, designed to catch errors before the spreadsheet is deployed. As a general conclusion from the EuSpRIG 2003 conference, it was decided that the time has come to attempt novel solutions based on an understanding of human factors. Hence in this paper we examine one such possibility, namely a novel example-driven modelling approach. We discuss a controlled experiment that compares example-driven modelling against traditional approaches over several progressively more difficult tests. The results are very interesting and certainly point to the value of further investigation of the example-driven approach. Lastly we propose a method for statistically analysing the problem of overconfidence in spreadsheet modellers.<|reference_end|>
arxiv
@article{thorne2008a, title={A Novel Approach to Formulae Production and Overconfidence Measurement to Reduce Risk in Spreadsheet Modelling}, author={Simon Thorne, David Ball, Zoe Lawson}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 71-83 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0803.1754}, primaryClass={cs.HC cs.CY} }
thorne2008a
arxiv-3041
0803.1764
WiFly: experimenting with Wireless Sensor Networks and Virtual coordinates
<|reference_start|>WiFly: experimenting with Wireless Sensor Networks and Virtual coordinates: Experimentation is important when designing communication protocols for Wireless Sensor Networks. Lower layers have a major impact on upper-layer performance, and the complexity of the phenomena cannot be entirely captured by analysis or simulation. In this report, we go through the complete process, from designing an energy-efficient self-organizing communication architecture (MAC, routing and application layers) to real-life experimentation roll-outs. The presented communication architecture includes a MAC protocol which avoids building and maintaining neighborhood tables, and a geographically-inspired routing protocol over virtual coordinates. The application consists of a mobile sink interrogating a wireless sensor network based on the requests issued by a disconnected base station. After the design process of this architecture, we verify by simulation that it functions correctly, and we perform a temporal verification. This study is needed to calculate the maximum speed at which the mobile sink can travel. We detail the implementation and the results of the off-site experimentation (energy consumption at the PHY layer, collision probability at the MAC layer, and routing). Finally, we report on the real-world deployment, where we mounted the mobile sink node on a radio-controlled airplane.<|reference_end|>
arxiv
@article{watteyne2008wifly:, title={WiFly: experimenting with Wireless Sensor Networks and Virtual coordinates}, author={Thomas Watteyne (INRIA Rhône-Alpes, FT R&D), Dominique Barthel (FT R&D), Mischa Dohler (CTTC), Isabelle Augé-Blum (INRIA Rhône-Alpes)}, journal={arXiv preprint arXiv:0803.1764}, year={2008}, archivePrefix={arXiv}, eprint={0803.1764}, primaryClass={cs.NI} }
watteyne2008wifly:
arxiv-3042
0803.1807
Minimum-Delay Decoding of Turbo-Codes for Upper-Layer FEC
<|reference_start|>Minimum-Delay Decoding of Turbo-Codes for Upper-Layer FEC: In this paper we investigate the decoding of parallel turbo codes over the binary erasure channel suited for upper-layer error correction. The proposed algorithm performs on-the-fly decoding, i.e. it starts decoding as soon as the first symbols are received. This algorithm compares with the iterative decoding of codes defined on graphs, in that it propagates in the trellises of the turbo code by removing transitions in the same way edges are removed in a bipartite graph under message-passing decoding. Performance comparison with LDPC codes for different coding rates is shown.<|reference_end|>
arxiv
@article{kraidy2008minimum-delay, title={Minimum-Delay Decoding of Turbo-Codes for Upper-Layer FEC}, author={Ghassan M. Kraidy and Valentin Savin}, journal={arXiv preprint arXiv:0803.1807}, year={2008}, archivePrefix={arXiv}, eprint={0803.1807}, primaryClass={cs.IT math.IT} }
kraidy2008minimum-delay
arxiv-3043
0803.1830
On Winning Conditions of High Borel Complexity in Pushdown Games
<|reference_start|>On Winning Conditions of High Borel Complexity in Pushdown Games: Some decidable winning conditions of arbitrarily high finite Borel complexity for games on finite graphs or on pushdown graphs have been recently presented by O. Serre in [Games with Winning Conditions of High Borel Complexity, in the Proceedings of the International Conference ICALP 2004, LNCS, Volume 3142, p. 1150-1162]. In this paper we answer several questions which were raised by Serre in the above cited paper. We first show that, for every positive integer n, the class C_n(A), which arises in the definition of decidable winning conditions, is included in the class of non-ambiguous context free omega languages, and that it is closed neither under union nor under intersection. We also prove that there exist pushdown games, equipped with such decidable winning conditions, where the winning sets are not deterministic context free languages, giving examples of winning sets which are non-deterministic non-ambiguous context free languages, inherently ambiguous context free languages, or even non context free languages.<|reference_end|>
arxiv
@article{finkel2008on, title={On Winning Conditions of High Borel Complexity in Pushdown Games}, author={Olivier Finkel (ELM)}, journal={Fundamenta Informaticae 66 (3) (2005) 277-298}, year={2008}, archivePrefix={arXiv}, eprint={0803.1830}, primaryClass={cs.LO cs.GT math.LO} }
finkel2008on
arxiv-3044
0803.1841
On the Topological Complexity of Infinitary Rational Relations
<|reference_start|>On the Topological Complexity of Infinitary Rational Relations: We prove in this paper that there exist infinitary rational relations which are analytic but not Borel, giving an answer to a question of Simonnet [Automates et Théorie Descriptive, Ph.D. Thesis, Université Paris 7, March 1992].<|reference_end|>
arxiv
@article{finkel2008on, title={On the Topological Complexity of Infinitary Rational Relations}, author={Olivier Finkel (ELM)}, journal={RAIRO-Theoretical Informatics and Applications 37 (2) (2003) 105-113}, year={2008}, archivePrefix={arXiv}, eprint={0803.1841}, primaryClass={cs.LO cs.CC math.LO} }
finkel2008on
arxiv-3045
0803.1842
Closure Properties of Locally Finite Omega Languages
<|reference_start|>Closure Properties of Locally Finite Omega Languages: Locally finite omega languages were introduced by Ressayre in [Journal of Symbolic Logic, Volume 53, No. 4, p.1009-1026]. They generalize omega languages accepted by finite automata or defined by monadic second order sentences. We study here closure properties of the family LOC_omega of locally finite omega languages. In particular we show that the class LOC_omega is neither closed under intersection nor under complementation, giving an answer to a question of Ressayre.<|reference_end|>
arxiv
@article{finkel2008closure, title={Closure Properties of Locally Finite Omega Languages}, author={Olivier Finkel (ELM)}, journal={Theoretical Computer Science 322 (1) (2004) 69-84}, year={2008}, archivePrefix={arXiv}, eprint={0803.1842}, primaryClass={cs.LO math.LO} }
finkel2008closure
arxiv-3046
0803.1862
Exploring Human Factors in Spreadsheet Development
<|reference_start|>Exploring Human Factors in Spreadsheet Development: In this paper we consider human factors and their impact on spreadsheet development in strategic decision-making. This paper brings together research from many disciplines, both directly related to spreadsheets and from a broader spectrum ranging from psychology to industrial processing. We investigate how human factors affect a simplified development cycle and what the potential consequences are.<|reference_end|>
arxiv
@article{thorne2008exploring, title={Exploring Human Factors in Spreadsheet Development}, author={Simon Thorne, David Ball}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 161-172 ISBN:1-902724-16-X}, year={2008}, archivePrefix={arXiv}, eprint={0803.1862}, primaryClass={cs.HC cs.CY} }
thorne2008exploring
arxiv-3047
0803.1866
Risk Management for Complex Calculations: EuSpRIG Best Practices in Hybrid Applications
<|reference_start|>Risk Management for Complex Calculations: EuSpRIG Best Practices in Hybrid Applications: As the need for advanced, interactive mathematical models has increased, user/programmers are increasingly choosing the MatLab scripting language over spreadsheets. However, applications developed in these tools have high error risk, and no best practices exist. We recommend that advanced, highly mathematical applications incorporate these tools with spreadsheets into hybrid applications, where developers can apply EuSpRIG best practices. Development of hybrid applications can reduce the potential for errors, shorten development time, and enable higher level operations. We believe that hybrid applications are the future and over the course of this paper, we apply and extend spreadsheet best practices to reduce or prevent risks in hybrid Excel/MatLab applications.<|reference_end|>
arxiv
@article{cernauskas2008risk, title={Risk Management for Complex Calculations: EuSpRIG Best Practices in Hybrid Applications}, author={Deborah Cernauskas, Andrew Kumiega, Ben VanVliet}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 25-36 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0803.1866}, primaryClass={cs.HC} }
cernauskas2008risk
arxiv-3048
0803.1875
Breaking Out of the Cell: On The Benefits of a New Spreadsheet User-Interaction Paradigm
<|reference_start|>Breaking Out of the Cell: On The Benefits of a New Spreadsheet User-Interaction Paradigm: Contemporary spreadsheets are plagued by a profusion of errors, auditing difficulties, lack of uniform development methodologies, and barriers to easy comprehension of the underlying business models they represent. This paper presents a case that most of these difficulties stem from the fact that the standard spreadsheet user-interaction paradigm - the 'cell-matrix' approach - is appropriate for spreadsheet data presentation but has significant drawbacks with respect to spreadsheet creation, maintenance and comprehension when workbooks pass a minimal threshold of complexity. An alternative paradigm for the automated generation of spreadsheets directly from plain-language business model descriptions is presented along with its potential benefits. Sunsight Modeller (TM), a working software system implementing the suggested paradigm, is briefly described.<|reference_end|>
arxiv
@article{hellman2008breaking, title={Breaking Out of the Cell: On The Benefits of a New Spreadsheet User-Interaction Paradigm}, author={Ziv Hellman}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 113-124 ISBN:1-902724-16-X}, year={2008}, archivePrefix={arXiv}, eprint={0803.1875}, primaryClass={cs.HC} }
hellman2008breaking
arxiv-3049
0803.1908
On the Throughput Allocation for Proportional Fairness in Multirate IEEE 802.11 DCF under General Load Conditions
<|reference_start|>On the Throughput Allocation for Proportional Fairness in Multirate IEEE 802.11 DCF under General Load Conditions: This paper presents a modified proportional fairness (PF) criterion suitable for mitigating the \textit{rate anomaly} problem of multirate IEEE 802.11 Wireless LANs employing the mandatory Distributed Coordination Function (DCF) option. Compared to the widely adopted assumption of a saturated network, the proposed criterion can be applied to general networks whereby the contending stations are characterized by specific packet arrival rates, $\lambda_s$, and transmission rates $R_d^{s}$. The throughput allocation resulting from the proposed algorithm is able to greatly increase the aggregate throughput of the DCF while ensuring fairness levels among the stations of the same order as the ones available with the classical PF criterion. Put simply, each station is allocated a throughput that depends on a suitable normalization of its packet rate, which, to some extent, measures the frequency with which the station tries to gain access to the channel. Simulation results are presented for some sample scenarios, confirming the effectiveness of the proposed criterion.<|reference_end|>
arxiv
@article{daneshgaran2008on, title={On the Throughput Allocation for Proportional Fairness in Multirate IEEE 802.11 DCF under General Load Conditions}, author={F. Daneshgaran, M. Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0803.1908}, year={2008}, archivePrefix={arXiv}, eprint={0803.1908}, primaryClass={cs.NI} }
daneshgaran2008on
arxiv-3050
0803.1926
Improved evolutionary generation of XSLT stylesheets
<|reference_start|>Improved evolutionary generation of XSLT stylesheets: This paper introduces a procedure based on genetic programming to evolve XSLT programs (usually called stylesheets or logicsheets). XSLT is a general purpose, document-oriented functional language, generally used to transform XML documents (or, in general, to solve any problem that can be coded as an XML document). The proposed solution uses a tree representation for the stylesheets as well as diverse specific operators in order to obtain, in the studied cases and in reasonable time, an XSLT stylesheet that performs the transformation. Several types of representation have been compared, resulting in different performance and degrees of success.<|reference_end|>
arxiv
@article{garcia-sanchez2008improved, title={Improved evolutionary generation of XSLT stylesheets}, author={Pablo Garcia-Sanchez, J. L. J. Laredo, J. P. Sevilla, Pedro Castillo, J. J. Merelo}, journal={arXiv preprint arXiv:0803.1926}, year={2008}, archivePrefix={arXiv}, eprint={0803.1926}, primaryClass={cs.NE cs.AI} }
garcia-sanchez2008improved
arxiv-3051
0803.1944
Early Experiences in Traffic Engineering Exploiting Path Diversity: A Practical Approach
<|reference_start|>Early Experiences in Traffic Engineering Exploiting Path Diversity: A Practical Approach: Recent literature has proved that stable dynamic routing algorithms have a solid theoretical foundation that makes them suitable for implementation in a real protocol and for use in practice in many different operational network contexts. Such algorithms inherit many of the properties of congestion controllers implementing one of the possible combinations of AQM/ECN schemes at nodes and flow control at sources. In this paper we propose a linear program formulation of the multi-commodity flow problem with congestion control, under max-min fairness, comprising demands with or without exogenous peak rates. Our evaluations of the gain from using path diversity in scenarios such as intra-domain traffic engineering and wireless mesh networks encourage real implementations, especially in the presence of hot-spot demands and non-uniform traffic matrices. We propose a flow aware perspective of the subject by using a natural multi-path extension to current congestion controllers and show its performance with respect to current proposals. Since flow aware architectures exploiting path diversity are feasible, scalable, robust and nearly optimal in the presence of flows with exogenous peak rates, we claim that our solution, rethought in the context of realistic traffic assumptions, performs as well as an optimal approach, with all the additional benefits of the flow aware paradigm.<|reference_end|>
arxiv
@article{muscariello2008early, title={Early Experiences in Traffic Engineering Exploiting Path Diversity: A Practical Approach}, author={Luca Muscariello (FT R&D), Diego Perino (FT R&D, INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0803.1944}, year={2008}, archivePrefix={arXiv}, eprint={0803.1944}, primaryClass={cs.NI} }
muscariello2008early
arxiv-3052
0803.1945
Resampling and requantization of band-limited Gaussian stochastic signals with flat power spectrum
<|reference_start|>Resampling and requantization of band-limited Gaussian stochastic signals with flat power spectrum: A theoretical analysis, aimed at characterizing the degradation induced by the resampling and requantization processes applied to band-limited Gaussian signals with flat power spectrum, available through their digitized samples, is presented. The analysis provides an efficient algorithm for computing the complete joint bivariate discrete probability distribution associated to the true quantized version of the Gaussian signal and to the quantity estimated after resampling and requantization of the input digitized sequence. The use of Fourier transform techniques allows deriving approximate analytical expressions for the quantities of interest, as well as implementing their efficient computation. Numerical experiments are found to be in good agreement with the theoretical results, and confirm the validity of the whole approach.<|reference_end|>
arxiv
@article{lanucara2008resampling, title={Resampling and requantization of band-limited Gaussian stochastic signals with flat power spectrum}, author={Marco Lanucara and Riccardo Borghi}, journal={arXiv preprint arXiv:0803.1945}, year={2008}, archivePrefix={arXiv}, eprint={0803.1945}, primaryClass={cs.IT math.IT} }
lanucara2008resampling
arxiv-3053
0803.1975
Compressed Modular Matrix Multiplication
<|reference_start|>Compressed Modular Matrix Multiplication: We propose to store several integers modulo a small prime into a single machine word. Modular addition is performed by addition and possibly subtraction of a word containing several times the modulus. Modular multiplication is not directly accessible, but a modular dot product can be performed by an integer multiplication by the reverse integer. Modular multiplication by a word containing a single residue is also possible. Therefore matrix multiplication can be performed on such compressed storage. We give bounds on the sizes of primes and matrices for which such a compression is possible. We also give the details of the required compressed arithmetic routines.<|reference_end|>
arxiv
@article{dumas2008compressed, title={Compressed Modular Matrix Multiplication}, author={Jean-Guillaume Dumas (LJK), Laurent Fousse (LJK), Bruno Salvy (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0803.1975}, year={2008}, archivePrefix={arXiv}, eprint={0803.1975}, primaryClass={cs.SC} }
dumas2008compressed
arxiv-3054
0803.1985
An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction
<|reference_start|>An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction: This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing systems called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce the variance of our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.<|reference_end|>
arxiv
@article{adewunmi2008an, title={An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction}, author={Adrian Adewunmi, Uwe Aickelin and Mike Byrne}, journal={Operational Research Society 4th Simulation Workshop (SW08), in print, Worcestershire, UK, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0803.1985}, primaryClass={cs.NE cs.CE} }
adewunmi2008an
arxiv-3055
0803.1992
Achievable Rates and Optimal Resource Allocation for Imperfectly-Known Fading Relay Channels
<|reference_start|>Achievable Rates and Optimal Resource Allocation for Imperfectly-Known Fading Relay Channels: In this paper, achievable rates and optimal resource allocation strategies for imperfectly-known fading relay channels are studied. It is assumed that communication starts with the network training phase in which the receivers estimate the fading coefficients of their respective channels. In the data transmission phase, amplify-and-forward and decode-and-forward relaying schemes with different degrees of cooperation are considered, and the corresponding achievable rate expressions are obtained. Three resource allocation problems are addressed: 1) power allocation between data and training symbols; 2) time/bandwidth allocation to the relay; 3) power allocation between the source and relay in the presence of total power constraints. The achievable rate expressions are employed to identify the optimal resource allocation strategies. Finally, energy efficiency is investigated by studying the bit energy requirements in the low-SNR regime.<|reference_end|>
arxiv
@article{zhang2008achievable, title={Achievable Rates and Optimal Resource Allocation for Imperfectly-Known Fading Relay Channels}, author={Junwei Zhang, Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:0803.1992}, year={2008}, number={UNL-123}, archivePrefix={arXiv}, eprint={0803.1992}, primaryClass={cs.IT math.IT} }
zhang2008achievable
arxiv-3056
0803.1993
Improved Squeaky Wheel Optimisation for Driver Scheduling
<|reference_start|>Improved Squeaky Wheel Optimisation for Driver Scheduling: This paper presents a technique called Improved Squeaky Wheel Optimisation for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of a current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically by using the fitness measure, and the Mutation step follows to further discard a small number of components at random. After the above steps, an input solution becomes partial and thus the resulting partial solution needs to be repaired. The repair is carried out by using the Prioritization step to first produce priorities that determine an order by which the following Construction step then schedules the remaining components. Therefore, the optimisation in the ISWO is achieved by solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.<|reference_end|>
arxiv
@article{aickelin2008improved, title={Improved Squeaky Wheel Optimisation for Driver Scheduling}, author={Uwe Aickelin, Edmund Burke and Jingpeng Li}, journal={Proceedings of the 9th International Conference on Parallel Problem Solving from Nature (PPSN IX), Lecture Notes in Computer Science 4193, pp 182-191, Reykjavik, Iceland, 2006}, year={2008}, doi={10.1007/11844297_19}, archivePrefix={arXiv}, eprint={0803.1993}, primaryClass={cs.NE cs.CE} }
aickelin2008improved
arxiv-3057
0803.1994
The Application of Bayesian Optimization and Classifier Systems in Nurse Scheduling
<|reference_start|>The Application of Bayesian Optimization and Classifier Systems in Nurse Scheduling: Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work of using genetic algorithms whose learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this target, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.<|reference_end|>
arxiv
@article{li2008the, title={The Application of Bayesian Optimization and Classifier Systems in Nurse Scheduling}, author={Jingpeng Li and Uwe Aickelin}, journal={Proceedings of the 8th International Conference on Parallel Problem Solving from Nature (PPSN VIII), Lecture Notes in Computer Science 3242, pp 581-590, Birmingham, UK 2004}, year={2008}, doi={10.1007/b100601}, archivePrefix={arXiv}, eprint={0803.1994}, primaryClass={cs.NE cs.CE} }
li2008the
arxiv-3058
0803.1997
Danger Theory: The Link between AIS and IDS?
<|reference_start|>Danger Theory: The Link between AIS and IDS?: We present ideas about creating a next-generation Intrusion Detection System based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats in conjunction with ever larger IT systems urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems. The Human Immune System can detect and defend against harmful and previously unseen invaders, so why can we not build a similar Intrusion Detection System for our computers?<|reference_end|>
arxiv
@article{aickelin2008danger, title={Danger Theory: The Link between AIS and IDS?}, author={Uwe Aickelin, Peter Bentley, Steve Cayzer, Kim Jungwon and Julie McLeod}, journal={Proceedings of the 2nd International Conference on Artificial Immune Systems (ICARIS 2003), Lecture Notes in Computer Science 2787, pp 147-155, doi: 10.1007/b12020, Edinburgh, UK 2003}, year={2008}, doi={10.1007/b12020}, archivePrefix={arXiv}, eprint={0803.1997}, primaryClass={cs.NE cs.AI cs.CR} }
aickelin2008danger
arxiv-3059
0803.2027
Excelsior: Bringing the Benefits of Modularisation to Excel
<|reference_start|>Excelsior: Bringing the Benefits of Modularisation to Excel: Excel lacks features for modular design. Had it such features, as do most programming languages, they would save time, avoid unneeded programming, make mistakes less likely, make code-control easier, help organisations adopt a uniform house style, and open business opportunities in buying and selling spreadsheet modules. I present Excelsior, a system for bringing these benefits to Excel.<|reference_end|>
arxiv
@article{paine2008excelsior:, title={Excelsior: Bringing the Benefits of Modularisation to Excel}, author={Jocelyn Paine}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 173-184 ISBN:1-902724-16-X}, year={2008}, archivePrefix={arXiv}, eprint={0803.2027}, primaryClass={cs.SE} }
paine2008excelsior:
arxiv-3060
0803.2092
An Ant-Based Model for Multiple Sequence Alignment
<|reference_start|>An Ant-Based Model for Multiple Sequence Alignment: Multiple sequence alignment is a key process in today's biology, and finding a relevant alignment of several sequences is much more challenging than just optimizing some improbable evaluation functions. Our approach for addressing multiple sequence alignment focuses on the building of structures in a new graph model: the factor graph model. This model relies on a block-based formulation of the original problem, a formulation that seems to be one of the most suitable ways of capturing evolutionary aspects of alignment. The structures are implicitly built by a colony of ants laying down pheromones in the factor graphs, according to relations between blocks belonging to the different sequences.<|reference_end|>
arxiv
@article{guinand2008an, title={An Ant-Based Model for Multiple Sequence Alignment}, author={Frédéric Guinand (LITIS), Yoann Pigné (LITIS)}, journal={Large-Scale Scientific Computing, 6th International Conference, LSSC 2007, Sozopol, Bulgaria (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0803.2092}, primaryClass={q-bio.QM cs.AI} }
guinand2008an
arxiv-3061
0803.2093
GraphStream: A Tool for bridging the gap between Complex Systems and Dynamic Graphs
<|reference_start|>GraphStream: A Tool for bridging the gap between Complex Systems and Dynamic Graphs: The notion of complex systems is common to many domains, from Biology to Economics, Computer Science, Physics, etc. Often, these systems are made of sets of entities moving in an evolving environment. One of their major characteristics is the emergence of some global properties stemming from local interactions between the entities themselves and between the entities and the environment. The structure of these systems as sets of interacting entities leads researchers to model them as graphs. However, understanding them most often requires considering the dynamics of their evolution. It is indeed not relevant to study some properties out of any temporal consideration. Thus, dynamic graphs seem to be a very suitable model for investigating the emergence and the conservation of some properties. GraphStream is a Java-based library whose main purpose is to help researchers and developers in their daily tasks of dynamic problem modeling and of classical graph management: creation, processing, display, etc. It may also be used, and is indeed already used, for teaching purposes. GraphStream relies on an event-based engine allowing several event sources. Events may be included in the core of the application, read from a file or received from an event handler.<|reference_end|>
arxiv
@article{pigné2008graphstream:, title={GraphStream: A Tool for bridging the gap between Complex Systems and Dynamic Graphs}, author={Yoann Pigné (LITIS), Antoine Dutot (LITIS), Frédéric Guinand (LITIS), Damien Olivier (LITIS)}, journal={Emergent Properties in Natural and Artificial Complex Systems. Satellite Conference within the 4th European Conference on Complex Systems (ECCS'2007), Dresden, Germany (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0803.2093}, primaryClass={cs.MS} }
pigné2008graphstream:
arxiv-3062
0803.2123
Groups from Cyclic Infrastructures and Pohlig-Hellman in Certain Infrastructures
<|reference_start|>Groups from Cyclic Infrastructures and Pohlig-Hellman in Certain Infrastructures: In discrete logarithm based cryptography, a method by Pohlig and Hellman allows solving the discrete logarithm problem efficiently if the group order is known and has no large prime factors. The consequence is that such groups are avoided. In the past, there have been proposals for cryptography based on cyclic infrastructures. We will show that the Pohlig-Hellman method can be adapted to certain cyclic infrastructures, which similarly implies that certain infrastructures should not be used for cryptography. This generalizes a result by M\"uller, Vanstone and Zuccherato for infrastructures obtained from hyperelliptic function fields. We recall the Pohlig-Hellman method, define the concept of a cyclic infrastructure and briefly describe how to obtain such infrastructures from certain function fields of unit rank one. Then, we describe how to obtain cyclic groups from discrete cyclic infrastructures and how to apply the Pohlig-Hellman method to compute absolute distances, which is in general a computationally hard problem for cyclic infrastructures. Moreover, we give an algorithm which allows one to test whether an infrastructure satisfies certain requirements needed for applying the Pohlig-Hellman method, and discuss whether the Pohlig-Hellman method is applicable in infrastructures obtained from number fields. Finally, we discuss how this influences cryptography based on cyclic infrastructures.<|reference_end|>
arxiv
@article{fontein2008groups, title={Groups from Cyclic Infrastructures and Pohlig-Hellman in Certain Infrastructures}, author={Felix Fontein (University of Zurich)}, journal={Advances in Mathematics of Communications, 2 (3), 2008}, year={2008}, doi={10.3934/amc.2008.2.293}, archivePrefix={arXiv}, eprint={0803.2123}, primaryClass={cs.CR} }
fontein2008groups
arxiv-3063
0803.2129
Comparison of the Discriminatory Processor Sharing Policies
<|reference_start|>Comparison of the Discriminatory Processor Sharing Policies: The Discriminatory Processor Sharing policy introduced by Kleinrock is of great interest in many application areas, including telecommunications, web applications and TCP flow modelling. Under the DPS policy the job priority is controlled by a vector of weights. By varying the vector of weights it is possible to modify the service rates of the jobs and optimize system characteristics. In the present paper we present results concerning the comparison of two DPS policies with different weight vectors. We show the monotonicity of the expected sojourn time of the system as a function of the weight vector under a certain condition on the system. Namely, the system has to consist of classes whose means are quite different from each other. Classes with similar means can be grouped together and considered as one class, so the given restriction can be overcome.<|reference_end|>
arxiv
@article{osipova2008comparison, title={Comparison of the Discriminatory Processor Sharing Policies}, author={Natalia Osipova (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0803.2129}, year={2008}, number={RR-6475}, archivePrefix={arXiv}, eprint={0803.2129}, primaryClass={cs.NI} }
osipova2008comparison
arxiv-3064
0803.2135
On $(P_5,\bar{P_5})$-sparse graphs and other families
<|reference_start|>On $(P_5,\bar{P_5})$-sparse graphs and other families: We extend the notion of $P_4$-sparse graphs previously introduced by {\scshape Ho\`ang} by considering $\mathcal{F}$-sparse graphs, where $\mathcal{F}$ denotes a finite set of graphs on $p$ vertices. Thus we obtain some results on $(P_5,\bar{P_5})$-sparse graphs already known for $(P_5,\bar{P_5})$-free graphs. Finally we completely describe the structure of $(P_5,\bar{P_5}, bull)$-sparse graphs; it follows that those graphs have bounded clique-width.<|reference_end|>
arxiv
@article{fouquet2008on, title={On $(P_5,\bar{P_5})$-sparse graphs and other families}, author={Jean-Luc Fouquet (LIFO), Jean-Marie Vanherpe (LIFO)}, journal={arXiv preprint arXiv:0803.2135}, year={2008}, archivePrefix={arXiv}, eprint={0803.2135}, primaryClass={cs.DM} }
fouquet2008on
arxiv-3065
0803.2174
Local Approximation Schemes for Topology Control
<|reference_start|>Local Approximation Schemes for Topology Control: This paper presents a distributed algorithm on wireless ad-hoc networks that runs in a polylogarithmic number of rounds in the size of the network and constructs a linear size, lightweight, (1+\epsilon)-spanner for any given \epsilon > 0. A wireless network is modeled by a d-dimensional \alpha-quasi unit ball graph (\alpha-UBG), which is a higher dimensional generalization of the standard unit disk graph (UDG) model. The d-dimensional \alpha-UBG model goes beyond the unrealistic ``flat world'' assumption of UDGs and also takes into account transmission errors, fading signal strength, and physical obstructions. The main result in the paper is this: for any fixed \epsilon > 0, 0 < \alpha \le 1, and d \ge 2, there is a distributed algorithm running in O(\log n \log^* n) communication rounds on an n-node, d-dimensional \alpha-UBG G that computes a (1+\epsilon)-spanner G' of G with maximum degree \Delta(G') = O(1) and total weight w(G') = O(w(MST(G))). This result is motivated by the topology control problem in wireless ad-hoc networks and improves on existing topology control algorithms along several dimensions. The technical contributions of the paper include a new, sequential, greedy algorithm with relaxed edge ordering and lazy updating, and clustering techniques for filtering out unnecessary edges.<|reference_end|>
arxiv
@article{damian2008local, title={Local Approximation Schemes for Topology Control}, author={Mirela Damian, Saurav Pandit and Sriram Pemmaraju}, journal={Proceedings of the 25th ACM Symposium on Principles of Distributed Computing, pages 208-218, July 2006}, year={2008}, archivePrefix={arXiv}, eprint={0803.2174}, primaryClass={cs.DS cs.CC} }
damian2008local
arxiv-3066
0803.2212
Conditioning Probabilistic Databases
<|reference_start|>Conditioning Probabilistic Databases: Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techniques from constraint satisfaction, such as the Davis-Putnam algorithm. We complement this with a thorough experimental evaluation of the algorithms proposed. Our experiments show that our exact algorithms scale well to realistic database sizes and can in some scenarios compete with the most efficient previous approximation algorithms.<|reference_end|>
arxiv
@article{koch2008conditioning, title={Conditioning Probabilistic Databases}, author={Christoph Koch and Dan Olteanu}, journal={arXiv preprint arXiv:0803.2212}, year={2008}, archivePrefix={arXiv}, eprint={0803.2212}, primaryClass={cs.DB cs.AI} }
koch2008conditioning
arxiv-3067
0803.2219
Lightweight Target Tracking Using Passive Traces in Sensor Networks
<|reference_start|>Lightweight Target Tracking Using Passive Traces in Sensor Networks: We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.<|reference_end|>
arxiv
@article{marculescu2008lighweight, title={Lightweight Target Tracking Using Passive Traces in Sensor Networks}, author={Andrei Marculescu and Sotiris Nikoletseas and Olivier Powell and Jose Rolim}, journal={arXiv preprint arXiv:0803.2219}, year={2008}, archivePrefix={arXiv}, eprint={0803.2219}, primaryClass={cs.DC} }
marculescu2008lighweight
arxiv-3068
0803.2220
The Anatomy of Mitos Web Search Engine
<|reference_start|>The Anatomy of Mitos Web Search Engine: Engineering a Web search engine offering effective and efficient information retrieval is a challenging task. This document presents our experiences from designing and developing a Web search engine offering a wide spectrum of functionalities and we report some interesting experimental results. A rather peculiar design choice of the engine is that its index is based on a DBMS, while some of the distinctive functionalities that are offered include advanced Greek language stemming, real time result clustering, and advanced link analysis techniques (also for spam page detection).<|reference_end|>
arxiv
@article{papadakos2008the, title={The Anatomy of Mitos Web Search Engine}, author={Panagiotis Papadakos, Giorgos Vasiliadis, Yannis Theoharis, Nikos Armenatzoglou, Stella Kopidaki, Yannis Marketakis, Manos Daskalakis, Kostas Karamaroudis, Giorgos Linardakis, Giannis Makrydakis, Vangelis Papathanasiou, Lefteris Sardis, Petros Tsialiamanis, Georgia Troullinou, Kostas Vandikas, Dimitris Velegrakis and Yannis Tzitzikas}, journal={arXiv preprint arXiv:0803.2220}, year={2008}, archivePrefix={arXiv}, eprint={0803.2220}, primaryClass={cs.IR} }
papadakos2008the
arxiv-3069
0803.2257
High-Resolution Radar via Compressed Sensing
<|reference_start|>High-Resolution Radar via Compressed Sensing: A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N by N grid. Assuming the number of targets K is small (i.e., K much less than N^2), then we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.<|reference_end|>
arxiv
@article{herman2008high-resolution, title={High-Resolution Radar via Compressed Sensing}, author={Matthew A. Herman and Thomas Strohmer}, journal={arXiv preprint arXiv:0803.2257}, year={2008}, doi={10.1109/TSP.2009.2014277}, archivePrefix={arXiv}, eprint={0803.2257}, primaryClass={math.NA cs.IT math.IT} }
herman2008high-resolution
arxiv-3070
0803.2262
Constant-Rank Codes and Their Connection to Constant-Dimension Codes
<|reference_start|>Constant-Rank Codes and Their Connection to Constant-Dimension Codes: Constant-dimension codes have recently received attention due to their significance to error control in noncoherent random linear network coding. What the maximal cardinality of any constant-dimension code with finite dimension and minimum distance is and how to construct the optimal constant-dimension code (or codes) that achieves the maximal cardinality both remain open research problems. In this paper, we introduce a new approach to solving these two problems. We first establish a connection between constant-rank codes and constant-dimension codes. Via this connection, we show that optimal constant-dimension codes correspond to optimal constant-rank codes over matrices with sufficiently many rows. As such, the two aforementioned problems are equivalent to determining the maximum cardinality of constant-rank codes and to constructing optimal constant-rank codes, respectively. To this end, we then derive bounds on the maximum cardinality of a constant-rank code with a given minimum rank distance, propose explicit constructions of optimal or asymptotically optimal constant-rank codes, and establish asymptotic bounds on the maximum rate of a constant-rank code.<|reference_end|>
arxiv
@article{gadouleau2008constant-rank, title={Constant-Rank Codes and Their Connection to Constant-Dimension Codes}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:0803.2262}, year={2008}, archivePrefix={arXiv}, eprint={0803.2262}, primaryClass={cs.IT math.IT} }
gadouleau2008constant-rank
arxiv-3071
0803.2285
A Practical Attack on the MIFARE Classic
<|reference_start|>A Practical Attack on the MIFARE Classic: The MIFARE Classic is the most widely used contactless smart card in the market. Its design and implementation details are kept secret by its manufacturer. This paper studies the architecture of the card and the communication protocol between card and reader. Then it gives a practical, low-cost, attack that recovers secret information from the memory of the card. Due to a weakness in the pseudo-random generator, we are able to recover the keystream generated by the CRYPTO1 stream cipher. We exploit the malleability of the stream cipher to read all memory blocks of the first sector of the card. Moreover, we are able to read any sector of the memory of the card, provided that we know one memory block within this sector. Finally, and perhaps more damaging, the same holds for modifying memory blocks.<|reference_end|>
arxiv
@article{gans2008a, title={A Practical Attack on the MIFARE Classic}, author={Gerhard de Koning Gans, Jaap-Henk Hoepman, and Flavio D. Garcia}, journal={arXiv preprint arXiv:0803.2285}, year={2008}, archivePrefix={arXiv}, eprint={0803.2285}, primaryClass={cs.CR} }
gans2008a
arxiv-3072
0803.2305
The Abella Interactive Theorem Prover (System Description)
<|reference_start|>The Abella Interactive Theorem Prover (System Description): Abella is an interactive system for reasoning about aspects of object languages that have been formally presented through recursive rules based on syntactic structure. Abella utilizes a two-level logic approach to specification and reasoning. One level is defined by a specification logic which supports a transparent encoding of structural semantics rules and also enables their execution. The second level, called the reasoning logic, embeds the specification logic and allows the development of proofs of properties about specifications. An important characteristic of both logics is that they exploit the lambda tree syntax approach to treating binding in object languages. Amongst other things, Abella has been used to prove normalizability properties of the lambda calculus, cut admissibility for a sequent calculus and type uniqueness and subject reduction properties. This paper discusses the logical foundations of Abella, outlines the style of theorem proving that it supports and finally describes some of its recent applications.<|reference_end|>
arxiv
@article{gacek2008the, title={The Abella Interactive Theorem Prover (System Description)}, author={Andrew Gacek}, journal={arXiv preprint arXiv:0803.2305}, year={2008}, archivePrefix={arXiv}, eprint={0803.2305}, primaryClass={cs.LO cs.PL} }
gacek2008the
arxiv-3073
0803.2306
Tableau-based decision procedures for logics of strategic ability in multi-agent systems
<|reference_start|>Tableau-based decision procedures for logics of strategic ability in multi-agent systems: We develop an incremental tableau-based decision procedure for the Alternating-time temporal logic ATL and some of its variants. While running within the theoretically established complexity upper bound, we claim that our tableau is practically more efficient in the average case than other decision procedures for ATL known so far. Besides, the ease of its adaptation to variants of ATL demonstrates the flexibility of the proposed procedure.<|reference_end|>
arxiv
@article{goranko2008tableau-based, title={Tableau-based decision procedures for logics of strategic ability in multi-agent systems}, author={Valentin Goranko and Dmitry Shkatov}, journal={arXiv preprint arXiv:0803.2306}, year={2008}, archivePrefix={arXiv}, eprint={0803.2306}, primaryClass={cs.LO cs.AI cs.MA} }
goranko2008tableau-based
arxiv-3074
0803.2314
Problem Solving and Complex Systems
<|reference_start|>Problem Solving and Complex Systems: The observation and modeling of natural Complex Systems (CSs) like the human nervous system, evolution or the weather allows the definition of special abilities and of models that can be reused to solve other problems. For instance, Genetic Algorithms and Ant Colony Optimization are inspired by natural CSs to solve optimization problems. This paper proposes the use of ant-based systems to solve various problems with a non-assessing approach. This means that solutions to a problem are not evaluated; they appear as structures resulting from the activity of the system. Problems are modeled with graphs and such structures are observed directly on these graphs. Problems of Multiple Sequence Alignment and Natural Language Processing are addressed with this approach.<|reference_end|>
arxiv
@article{guinand2008problem, title={Problem Solving and Complex Systems}, author={Frédéric Guinand (LITIS), Yoann Pigné (LITIS)}, journal={Emergent Properties in Natural and Artificial Dynamical Systems, Springer Verlag (Ed.) (2006) 53-86}, year={2008}, archivePrefix={arXiv}, eprint={0803.2314}, primaryClass={cs.NE} }
guinand2008problem
arxiv-3075
0803.2315
Science mapping with asymmetrical paradigmatic proximity
<|reference_start|>Science mapping with asymmetrical paradigmatic proximity: We propose a series of methods to represent the evolution of a field of science at different levels: namely the micro, meso and macro levels. We use a previously introduced asymmetric measure of paradigmatic proximity between terms that enables us to extract structure from a large publications database. We apply our set of methods to a case study from the complex systems community through the mapping of more than 400 complex systems science concepts indexed from a database as large as several million journal papers. We first summarize the main properties of our asymmetric proximity measure. Then we show how salient paradigmatic fields can be embedded into a 2-dimensional visualization in which the terms are plotted according to their relative specificity and generality index. This meso-level helps us produce macroscopic maps of the studied field of science featuring the aforementioned paradigmatic fields.<|reference_end|>
arxiv
@article{cointet2008science, title={Science mapping with asymmetrical paradigmatic proximity}, author={Jean-Philippe Cointet (CREA, TSV), David Chavalarias (CREA)}, journal={Networks and Heterogeneous Media 3, 2 (2008) 267 - 276}, year={2008}, archivePrefix={arXiv}, eprint={0803.2315}, primaryClass={cs.OH} }
cointet2008science
arxiv-3076
0803.2316
On the CNOT-cost of TOFFOLI gates
<|reference_start|>On the CNOT-cost of TOFFOLI gates: The three-input TOFFOLI gate is the workhorse of circuit synthesis for classical logic operations on quantum data, e.g., reversible arithmetic circuits. In physical implementations, however, TOFFOLI gates are decomposed into six CNOT gates and several one-qubit gates. Though this decomposition has been known for at least 10 years, we provide here the first demonstration of its CNOT-optimality. We study three-qubit circuits which contain less than six CNOT gates and implement a block-diagonal operator, then show that they implicitly describe the cosine-sine decomposition of a related operator. Leveraging the canonicity of such decompositions to limit one-qubit gates appearing in respective circuits, we prove that the n-qubit analogue of the TOFFOLI requires at least 2n CNOT gates. Additionally, our results offer a complete classification of three-qubit diagonal operators by their CNOT-cost, which holds even if ancilla qubits are available.<|reference_end|>
arxiv
@article{shende2008on, title={On the CNOT-cost of TOFFOLI gates}, author={Vivek V. Shende and Igor L. Markov}, journal={Quant.Inf.Comp. 9(5-6):461-486 (2009)}, year={2008}, archivePrefix={arXiv}, eprint={0803.2316}, primaryClass={quant-ph cs.ET} }
shende2008on
arxiv-3077
0803.2317
Lissom, a Source Level Proof Carrying Code Platform
<|reference_start|>Lissom, a Source Level Proof Carrying Code Platform: This paper introduces a proposal for a Proof Carrying Code (PCC) architecture called Lissom. Started as a challenge for final-year Computing students, Lissom was conceived as a means to prove to a sceptical community, and in particular to students, that formal verification tools can be put into practice in a realistic environment and used to solve complex, concrete problems. The attractiveness of the problems that PCC addresses has already led students to take an interest in this project.<|reference_end|>
arxiv
@article{gomes2008lissom,, title={Lissom, a Source Level Proof Carrying Code Platform}, author={Joao Gomes and Daniel Martins and Simao Melo de Sousa and Jorge Sousa Pinto}, journal={arXiv preprint arXiv:0803.2317}, year={2008}, archivePrefix={arXiv}, eprint={0803.2317}, primaryClass={cs.LO cs.SE} }
gomes2008lissom,
arxiv-3078
0803.2319
Two Algorithms for Solving A General Backward Pentadiagonal Linear Systems
<|reference_start|>Two Algorithms for Solving A General Backward Pentadiagonal Linear Systems: In this paper we present efficient computational and symbolic algorithms for solving backward pentadiagonal linear systems. The implementation of the algorithms using Computer Algebra Systems (CAS) such as MAPLE, MACSYMA, MATHEMATICA, and MATLAB is straightforward. Examples are given in order to illustrate the algorithms. The symbolic algorithm is competitive with other methods for solving backward pentadiagonal linear systems.<|reference_end|>
arxiv
@article{karawia2008two, title={Two Algorithms for Solving A General Backward Pentadiagonal Linear Systems}, author={A. A. Karawia}, journal={arXiv preprint arXiv:0803.2319}, year={2008}, archivePrefix={arXiv}, eprint={0803.2319}, primaryClass={cs.SC cs.NA} }
karawia2008two
arxiv-3079
0803.2337
Data Fusion Trees for Detection: Does Architecture Matter?
<|reference_start|>Data Fusion Trees for Detection: Does Architecture Matter?: We consider the problem of decentralized detection in a network consisting of a large number of nodes arranged as a tree of bounded height, under the assumption of conditionally independent, identically distributed observations. We characterize the optimal error exponent under a Neyman-Pearson formulation. We show that the Type II error probability decays exponentially fast with the number of nodes, and the optimal error exponent is often the same as that corresponding to a parallel configuration. We provide sufficient, as well as necessary, conditions for this to happen. For those networks satisfying the sufficient conditions, we propose a simple strategy that nearly achieves the optimal error exponent, and in which all non-leaf nodes need only send 1-bit messages.<|reference_end|>
arxiv
@article{tay2008data, title={Data Fusion Trees for Detection: Does Architecture Matter?}, author={Wee Peng Tay, John Tsitsiklis, and Moe Win}, journal={arXiv preprint arXiv:0803.2337}, year={2008}, archivePrefix={arXiv}, eprint={0803.2337}, primaryClass={cs.IT math.IT} }
tay2008data
arxiv-3080
0803.2363
lambda-Connectedness Determination for Image Segmentation
<|reference_start|>lambda-Connectedness Determination for Image Segmentation: Image segmentation separates an image into distinct homogeneous regions belonging to different objects. It is an essential step in image analysis and computer vision. This paper compares some segmentation technologies and attempts to find an automated way to better determine the parameters for image segmentation, especially the connectivity value of $\lambda$ in $\lambda$-connected segmentation. Based on the theory of the maximum entropy method and Otsu's minimum variance method, we propose: (1) maximum entropy connectedness determination, a method that uses maximum entropy to determine the best $\lambda$ value in $\lambda$-connected segmentation, and (2) minimum variance connectedness determination, a method that uses the principle of minimum variance to determine the $\lambda$ value. Applying these optimization techniques to real images, the experimental results show great promise for the new methods. In the end, we extend the above method to a more general case in order to compare it with the famous Mumford-Shah method, which uses variational principles and geometric measure theory.<|reference_end|>
arxiv
@article{chen2008lambda-connectedness, title={lambda-Connectedness Determination for Image Segmentation}, author={Li Chen}, journal={arXiv preprint arXiv:0803.2363}, year={2008}, archivePrefix={arXiv}, eprint={0803.2363}, primaryClass={cs.CV cs.DM} }
chen2008lambda-connectedness
arxiv-3081
0803.2365
SAFIUS - A secure and accountable filesystem over untrusted storage
<|reference_start|>SAFIUS - A secure and accountable filesystem over untrusted storage: We describe SAFIUS, a secure accountable file system that resides over an untrusted storage. SAFIUS provides strong security guarantees like confidentiality, integrity, prevention from rollback attacks, and accountability. SAFIUS also enables read/write sharing of data and provides the standard UNIX-like interface for applications. To achieve accountability with good performance, it uses asynchronous signatures; to reduce the space required for storing these signatures, a novel signature pruning mechanism is used. SAFIUS has been implemented on a GNU/Linux based system modifying OpenGFS. Preliminary performance studies show that SAFIUS has a tolerable overhead for providing secure storage: while it has an overhead of about 50% of OpenGFS in data intensive workloads (due to the overhead of performing encryption/decryption in software), it is comparable (or better in some cases) to OpenGFS in metadata intensive workloads.<|reference_end|>
arxiv
@article{sriram2008safius, title={SAFIUS - A secure and accountable filesystem over untrusted storage}, author={V Sriram, Ganesh Narayan, K Gopinath}, journal={Fourth International IEEE Security in Storage Workshop, 2007 - SISW '07. Publication Date: 27-27 Sept. 2007 On page(s): 34-45}, year={2008}, doi={10.1109/SISW.2007.7}, archivePrefix={arXiv}, eprint={0803.2365}, primaryClass={cs.OS cs.CR cs.DC cs.NI cs.PF} }
sriram2008safius
arxiv-3082
0803.2386
Conformal Computing: Algebraically connecting the hardware/software boundary using a uniform approach to high-performance computation for software and hardware applications
<|reference_start|>Conformal Computing: Algebraically connecting the hardware/software boundary using a uniform approach to high-performance computation for software and hardware applications: We present a systematic, algebraically based, design methodology for efficient implementation of computer programs optimized over multiple levels of the processor/memory and network hierarchy. Using a common formalism to describe the problem and the partitioning of data over processors and memory levels allows one to mathematically prove the efficiency and correctness of a given algorithm as measured in terms of a set of metrics (such as processor/network speeds, etc.). The approach allows the average programmer to achieve high-level optimizations similar to those used by compiler writers (e.g. the notion of "tiling"). The approach presented in this monograph makes use of A Mathematics of Arrays (MoA, Mullin 1988) and an indexing calculus (i.e. the psi-calculus) to enable the programmer to develop algorithms using high-level compiler-like optimizations through the ability to algebraically compose and reduce sequences of array operations. Extensive discussion and benchmark results are presented for the Fast Fourier Transform and other important algorithms.<|reference_end|>
arxiv
@article{mullin2008conformal, title={Conformal Computing: Algebraically connecting the hardware/software boundary using a uniform approach to high-performance computation for software and hardware applications}, author={Lenore R. Mullin and James E. Raynolds}, journal={arXiv preprint arXiv:0803.2386}, year={2008}, archivePrefix={arXiv}, eprint={0803.2386}, primaryClass={cs.MS} }
mullin2008conformal
arxiv-3083
0803.2392
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
<|reference_start|>CoSaMP: Iterative signal recovery from incomplete and inaccurate samples: Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For many cases of interest, the running time is just O(N*log^2(N)), where N is the length of the signal.<|reference_end|>
arxiv
@article{needell2008cosamp:, title={CoSaMP: Iterative signal recovery from incomplete and inaccurate samples}, author={D. Needell and J. A. Tropp}, journal={Appl. Comput. Harmon. Anal., Vol. 26, pp. 301-321, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0803.2392}, primaryClass={math.NA cs.IT math.IT} }
needell2008cosamp:
arxiv-3084
0803.2427
A General Rate Duality of the MIMO Multiple Access Channel and the MIMO Broadcast Channel
<|reference_start|>A General Rate Duality of the MIMO Multiple Access Channel and the MIMO Broadcast Channel: We present a general rate duality between the multiple access channel (MAC) and the broadcast channel (BC) which is applicable to systems with and without nonlinear interference cancellation. In contrast to the state-of-the-art rate duality with interference subtraction from Vishwanath et al., the proposed duality is filter-based instead of covariance-based and exploits the arising unitary degree of freedom to decorrelate every point-to-point link. Therefore, it allows for noncooperative stream-wise decoding which reduces complexity and latency. Moreover, the conversion from one domain to the other does not exhibit any dependencies during its computation, making it accessible to a parallel implementation instead of a serial one. We additionally derive a rate duality for systems with multi-antenna terminals when linear filtering without interference (pre-)subtraction is applied and the different streams of a single user are not treated as self-interference. Both dualities are based on a framework already applied to a mean-square-error duality between the MAC and the BC. Thanks to this novel rate duality, any rate-based optimization with linear filtering in the BC can now be handled in the dual MAC, where the arising expressions lead to more efficient algorithmic solutions than in the BC due to the alignment of the channel and precoder indices.<|reference_end|>
arxiv
@article{hunger2008a, title={A General Rate Duality of the MIMO Multiple Access Channel and the MIMO Broadcast Channel}, author={Raphael Hunger, Michael Joham}, journal={arXiv preprint arXiv:0803.2427}, year={2008}, doi={10.1109/GLOCOM.2008.ECP.178}, archivePrefix={arXiv}, eprint={0803.2427}, primaryClass={cs.IT math.IT} }
hunger2008a
arxiv-3085
0803.2443
Discrete stochastic processes, replicator and Fokker-Planck equations of coevolutionary dynamics in finite and infinite populations
<|reference_start|>Discrete stochastic processes, replicator and Fokker-Planck equations of coevolutionary dynamics in finite and infinite populations: Finite-size fluctuations in coevolutionary dynamics arise in models of biological as well as of social and economic systems. This brief tutorial review surveys a systematic approach starting from a stochastic process discrete both in time and state. The limit $N\to \infty$ of an infinite population can be considered explicitly, generally leading to a replicator-type equation in zero order, and to a Fokker-Planck-type equation in first order in $1/\sqrt{N}$. Consequences and relations to some previous approaches are outlined.<|reference_end|>
arxiv
@article{claussen2008discrete, title={Discrete stochastic processes, replicator and Fokker-Planck equations of coevolutionary dynamics in finite and infinite populations}, author={Jens Christian Claussen}, journal={Banach Center Publications 80, 17-31 (2008)}, year={2008}, doi={10.4064/bc80-0-1}, archivePrefix={arXiv}, eprint={0803.2443}, primaryClass={q-bio.PE cond-mat.stat-mech cs.SI math.PR math.ST physics.bio-ph physics.soc-ph stat.TH} }
claussen2008discrete
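For reference, the limiting equations mentioned in the abstract above take the following standard forms. This is a generic one-dimensional illustration with payoff functions $f_i$, drift $a(x)$, and diffusion $b(x)$ as placeholder symbols, not the paper's specific derivation:

```latex
% Zero order in 1/sqrt(N): replicator-type dynamics for strategy frequencies x_i
\dot{x}_i \;=\; x_i\Big(f_i(\mathbf{x}) - \sum_j x_j f_j(\mathbf{x})\Big)

% First order in 1/sqrt(N): Fokker-Planck-type equation for the density P(x,t)
\partial_t P(x,t) \;=\; -\,\partial_x\big[a(x)\,P(x,t)\big]
                   \;+\; \frac{1}{2N}\,\partial_x^2\big[b(x)\,P(x,t)\big]
```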
arxiv-3086
0803.2447
Trajectory Networks and Their Topological Changes Induced by Geographical Infiltration
<|reference_start|>Trajectory Networks and Their Topological Changes Induced by Geographical Infiltration: In this article we investigate the topological changes undergone by trajectory networks as a consequence of progressive geographical infiltration. Trajectory networks, a type of knitted network, are obtained by establishing paths between geographically distributed nodes while following an associated vector field. For instance, the nodes could correspond to neurons along the cortical surface and the vector field could correspond to the gradient of neurotrophic factors, or the nodes could represent towns while the vector fields would be given by economical and/or geographical gradients. Therefore trajectory networks are natural models of a large number of geographical structures. The geographical infiltrations correspond to the addition of new local connections between nearby existing nodes. As such, these infiltrations could be related to several real-world processes such as contaminations, diseases, attacks, parasites, etc. The way in which progressive geographical infiltrations affect trajectory networks is investigated in terms of the degree, clustering coefficient, size of the largest component and the lengths of the existing chains measured along the infiltrations. It is shown that the maximum infiltration distance plays a critical role in the intensity of the induced topological changes. For large enough values of this parameter, the chains intrinsic to the trajectory networks undergo a collapse which is shown not to be related to the percolation of the network also implied by the infiltrations.<|reference_end|>
arxiv
@article{costa2008trajectory, title={Trajectory Networks and Their Topological Changes Induced by Geographical Infiltration}, author={Luciano da Fontoura Costa}, journal={arXiv preprint arXiv:0803.2447}, year={2008}, archivePrefix={arXiv}, eprint={0803.2447}, primaryClass={cs.DM cs.CG} }
costa2008trajectory
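A small sketch of the geographical infiltration process described in the record above: new local edges are added between all node pairs within a maximum infiltration distance, and the topological measures are tracked as that distance grows. It assumes a networkx graph with numpy coordinates per node; `infiltrate` and `d_max` are illustrative names, and the trajectory-construction step itself is omitted:

```python
import itertools
import networkx as nx
import numpy as np

def infiltrate(G, pos, d_max):
    """Add infiltration edges between all node pairs closer than d_max.

    G     : networkx graph whose edges are the original trajectory chains
    pos   : dict mapping node -> 2D numpy coordinate
    d_max : maximum infiltration distance
    """
    H = G.copy()
    for u, v in itertools.combinations(H.nodes, 2):
        if np.linalg.norm(pos[u] - pos[v]) <= d_max:
            H.add_edge(u, v)
    return H

# Track topology along increasing infiltration distances:
# G0, coords = ...  # some geographically embedded trajectory network
# for d in np.linspace(0.0, 1.0, 11):
#     H = infiltrate(G0, coords, d)
#     giant = max(nx.connected_components(H), key=len)
#     print(d, len(giant), nx.average_clustering(H))
```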
arxiv-3087
0803.2460
Upper Bound on Error Exponent of Regular LDPC Codes Transmitted over the BEC
<|reference_start|>Upper Bound on Error Exponent of Regular LDPC Codes Transmitted over the BEC: The error performance of the ensemble of typical LDPC codes transmitted over the binary erasure channel (BEC) is analyzed. In the past, lower bounds on the error exponents were derived. In this paper a probabilistic upper bound on this error exponent is derived. This bound holds with some confidence level.<|reference_end|>
arxiv
@article{goldenberg2008upper, title={Upper Bound on Error Exponent of Regular LDPC Codes Transmitted over the BEC}, author={Idan Goldenberg and David Burshtein}, journal={arXiv preprint arXiv:0803.2460}, year={2008}, archivePrefix={arXiv}, eprint={0803.2460}, primaryClass={cs.IT math.IT} }
goldenberg2008upper
arxiv-3088
0803.2495
Adversarial Scheduling Analysis of Game Theoretic Models of Norm Diffusion
<|reference_start|>Adversarial Scheduling Analysis of Game Theoretic Models of Norm Diffusion: In (Istrate, Marathe, Ravi SODA 2001) we advocated the investigation of robustness of results in the theory of learning in games under adversarial scheduling models. We provide evidence that such an analysis is feasible and can lead to nontrivial results by investigating, in an adversarial scheduling setting, Peyton Young's model of diffusion of norms. In particular, our main result incorporates adversarial scheduling into Peyton Young's model.<|reference_end|>
arxiv
@article{istrate2008adversarial, title={Adversarial Scheduling Analysis of Game Theoretic Models of Norm Diffusion}, author={Gabriel Istrate, Madhav V. Marathe and S.S.Ravi}, journal={arXiv preprint arXiv:0803.2495}, year={2008}, archivePrefix={arXiv}, eprint={0803.2495}, primaryClass={cs.GT cs.DM math.CO math.PR} }
istrate2008adversarial
arxiv-3089
0803.2527
Controlling the Information Flow in Spreadsheets
<|reference_start|>Controlling the Information Flow in Spreadsheets: There is no denying that spreadsheets have become critical for all operational processes including financial reporting, budgeting, forecasting, and analysis. Microsoft Excel has essentially become a scratch pad and a data browser that can quickly be put to use for information gathering and decision-making. However, there is little control in how data comes into Excel, and how it gets updated. The information supply chain feeding into Excel remains ad hoc and without any centralized IT control. This paper discusses some of the pitfalls of the data collection and maintenance process in Excel. It then suggests service-oriented architecture (SOA) based information gathering and control techniques to ameliorate the pitfalls of this scratch pad while improving the integrity of data, boosting the productivity of the business users, and building controls to satisfy the requirements of Section 404 of the Sarbanes-Oxley Act.<|reference_end|>
arxiv
@article{samar2008controlling, title={Controlling the Information Flow in Spreadsheets}, author={Vipin Samar, Sangeeta Patni}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 125-134 ISBN:1-902724-16-X}, year={2008}, archivePrefix={arXiv}, eprint={0803.2527}, primaryClass={cs.HC} }
samar2008controlling
arxiv-3090
0803.2559
Logical Queries over Views: Decidability and Expressiveness
<|reference_start|>Logical Queries over Views: Decidability and Expressiveness: We study the problem of deciding satisfiability of first order logic queries over views, our aim being to delimit the boundary between the decidable and the undecidable fragments of this language. Views currently occupy a central place in database research, due to their role in applications such as information integration and data warehousing. Our main result is the identification of a decidable class of first order queries over unary conjunctive views that generalises the decidability of the classical class of first order sentences over unary relations, known as the Löwenheim class. We then demonstrate how various extensions of this class lead to undecidability and also provide some expressivity results. Besides its theoretical interest, our new decidable class is potentially interesting for use in applications such as deciding implication of complex dependencies, analysis of a restricted class of active database rules, and ontology reasoning.<|reference_end|>
arxiv
@article{bailey2008logical, title={Logical Queries over Views: Decidability and Expressiveness}, author={James Bailey and Guozhu Dong and Anthony Widjaja To}, journal={arXiv preprint arXiv:0803.2559}, year={2008}, archivePrefix={arXiv}, eprint={0803.2559}, primaryClass={cs.LO cs.DB} }
bailey2008logical
arxiv-3091
0803.2570
Unequal Error Protection: An Information Theoretic Perspective
<|reference_start|>Unequal Error Protection: An Information Theoretic Perspective: An information theoretic framework for unequal error protection is developed in terms of the exponential error bounds. The fundamental difference between the bit-wise and message-wise unequal error protection (UEP) is demonstrated for fixed-length block codes on DMCs without feedback. The effect of feedback is investigated via variable-length block codes. It is shown that feedback results in a significant improvement in both bit-wise and message-wise UEP (except the single-message case for missed detection). The distinction between false-alarm and missed-detection formalizations for message-wise UEP is also considered. All results presented are at rates close to capacity.<|reference_end|>
arxiv
@article{borade2008unequal, title={Unequal Error Protection: An Information Theoretic Perspective}, author={Shashi Borade, Baris Nakiboglu, Lizhong Zheng}, journal={IEEE Transactions on Information Theory, 55(12):5511-5539, Dec 2009}, year={2008}, doi={10.1109/TIT.2009.2032819 10.1109/ISIT.2008.4595385}, archivePrefix={arXiv}, eprint={0803.2570}, primaryClass={cs.IT cs.DM math.CO math.IT} }
borade2008unequal
arxiv-3092
0803.2615
Rapport de recherche sur le probl\`eme du plus court chemin contraint
<|reference_start|>Rapport de recherche sur le probl\`eme du plus court chemin contraint: This article provides an overview of the performance and the theoretical complexity of approximate and exact methods for various versions of the shortest path problem. The proposed study aims to improve the resolution of a more general covering problem within a column generation scheme in which the shortest path problem is the sub-problem.<|reference_end|>
arxiv
@article{laval2008rapport, title={Rapport de recherche sur le probl\`eme du plus court chemin contraint}, author={Olivier Laval (LIPN), Sophie Toulouse (LIPN), Anass Nagih (LITA)}, journal={arXiv preprint arXiv:0803.2615}, year={2008}, archivePrefix={arXiv}, eprint={0803.2615}, primaryClass={cs.DS} }
laval2008rapport
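As a concrete instance of the exact methods the report above surveys, here is a sketch of a label-setting algorithm for the resource-constrained shortest path, the typical column-generation subproblem. It assumes nonnegative arc costs and a single resource; the graph encoding and names are illustrative:

```python
import heapq

def constrained_shortest_path(adj, source, target, budget):
    """Label-setting sketch for the resource-constrained shortest path.

    adj    : dict node -> list of (next_node, cost, resource) arcs
    budget : maximum total resource allowed on a path
    Returns the minimum cost of a feasible path, or None if none exists.
    """
    labels = {source: [(0, 0)]}   # non-dominated (cost, resource) pairs per node
    heap = [(0, 0, source)]       # expand labels in nondecreasing cost order
    while heap:
        cost, res, node = heapq.heappop(heap)
        if node == target:
            return cost           # with nonnegative costs, the first target label is optimal
        for nxt, c, r in adj.get(node, []):
            nc, nr = cost + c, res + r
            if nr > budget:
                continue          # violates the resource constraint
            # Dominance check: discard if a stored label is at least as good on both criteria.
            if any(lc <= nc and lr <= nr for lc, lr in labels.get(nxt, [])):
                continue
            labels.setdefault(nxt, []).append((nc, nr))
            heapq.heappush(heap, (nc, nr, nxt))
    return None
```

The dominance rule is what keeps the label lists small in practice; without it the method degenerates to enumerating all feasible paths.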
arxiv-3093
0803.2616
Combinatorial realization of the Thom-Smale complex via discrete Morse theory
<|reference_start|>Combinatorial realization of the Thom-Smale complex via discrete Morse theory: In the case of smooth manifolds, we use Forman's discrete Morse theory to realize combinatorially any Thom-Smale complex coming from a smooth Morse function by a couple triangulation-discrete Morse function. As an application, we prove that any Euler structure on a smooth oriented closed 3-manifold has a particular realization by a complete matching on the Hasse diagram of a triangulation of the manifold.<|reference_end|>
arxiv
@article{gallais2008combinatorial, title={Combinatorial realization of the Thom-Smale complex via discrete Morse theory}, author={Etienne Gallais (LMAM, LMJL)}, journal={arXiv preprint arXiv:0803.2616}, year={2008}, archivePrefix={arXiv}, eprint={0803.2616}, primaryClass={math.GT cs.DM math.CO} }
gallais2008combinatorial
arxiv-3094
0803.2639
Maximal Orders in the Design of Dense Space-Time Lattice Codes
<|reference_start|>Maximal Orders in the Design of Dense Space-Time Lattice Codes: We construct explicit rate-one, full-diversity, geometrically dense matrix lattices with large, non-vanishing determinants (NVD) for four transmit antenna multiple-input single-output (MISO) space-time (ST) applications. The constructions are based on the theory of rings of algebraic integers and related subrings of the Hamiltonian quaternions and can be extended to a larger number of Tx antennas. The usage of ideals guarantees a non-vanishing determinant larger than one and an easy way to present the exact proofs for the minimum determinants. The idea of finding denser sublattices within a given division algebra is then generalized to a multiple-input multiple-output (MIMO) case with an arbitrary number of Tx antennas by using the theory of cyclic division algebras (CDA) and maximal orders. It is also shown that the explicit constructions in this paper all have a simple decoding method based on sphere decoding. Related to the decoding complexity, the notion of sensitivity is introduced, and experimental evidence indicating a connection between sensitivity, decoding complexity and performance is provided. Simulations in a quasi-static Rayleigh fading channel show that our dense quaternionic constructions outperform both the earlier rectangular lattices and the rotated ABBA lattice as well as the DAST lattice. We also show that our quaternionic lattice is better than the DAST lattice in terms of the diversity-multiplexing gain tradeoff.<|reference_end|>
arxiv
@article{hollanti2008maximal, title={Maximal Orders in the Design of Dense Space-Time Lattice Codes}, author={Camilla Hollanti, Jyrki Lahtonen, Hsiao-feng Francis Lu}, journal={IEEE Trans. Inf. Theory, vol. 54(10), Oct. 2008, pp. 4493-4510}, year={2008}, doi={10.1109/TIT.2008.928998}, archivePrefix={arXiv}, eprint={0803.2639}, primaryClass={cs.IT cs.DM math.IT math.RA} }
hollanti2008maximal
arxiv-3095
0803.2675
Digital Ecosystems: Self-Organisation of Evolving Agent Populations
<|reference_start|>Digital Ecosystems: Self-Organisation of Evolving Agent Populations: A primary motivation for our research in Digital Ecosystems is the desire to exploit the self-organising properties of biological ecosystems. Ecosystems are thought to be robust, scalable architectures that can automatically solve complex, dynamic problems. Self-organisation is perhaps one of the most desirable features in the systems that we engineer, and it is important for us to be able to measure self-organising behaviour. We investigate the self-organising aspects of Digital Ecosystems, created through the application of evolutionary computing to Multi-Agent Systems (MASs), aiming to determine a macroscopic variable to characterise the self-organisation of the evolving agent populations within. We study a measure of self-organisation called Physical Complexity, based on statistical physics, automata theory, and information theory, which quantifies the information in an organism's genome relative to randomness by calculating the entropy in a population. We investigate an extension to include populations of variable length, and then build upon this to construct an efficiency measure to investigate clustering within evolving agent populations. Overall, an insight has been achieved into where and how self-organisation occurs in our Digital Ecosystem, and how it can be quantified.<|reference_end|>
arxiv
@article{briscoe2008digital, title={Digital Ecosystems: Self-Organisation of Evolving Agent Populations}, author={Gerard Briscoe and Philippe De Wilde}, journal={arXiv preprint arXiv:0803.2675}, year={2008}, archivePrefix={arXiv}, eprint={0803.2675}, primaryClass={cs.NE cs.CC} }
briscoe2008digital
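One standard way to compute such a population-entropy measure is Adami's Physical Complexity for fixed-length genomes: sequence length minus the summed per-site Shannon entropy across the population. The sketch below shows that baseline only; the variable-length extension the abstract investigates is not reproduced here:

```python
import math
from collections import Counter

def physical_complexity(population):
    """Entropy-based complexity sketch for fixed-length genomes.

    population : list of equal-length strings over a finite alphabet.
    Returns sequence length minus the summed per-site Shannon entropy,
    i.e. information in the population relative to per-site randomness.
    """
    n = len(population)
    length = len(population[0])
    total_entropy = 0.0
    for site in range(length):
        counts = Counter(genome[site] for genome in population)
        # Base-2 entropy here; for an alphabet of size k, log base k is the
        # natural choice so that each site contributes at most 1.
        total_entropy += -sum((c / n) * math.log2(c / n) for c in counts.values())
    return length - total_entropy

# A converged population has near-zero per-site entropy, so complexity ~ length:
# print(physical_complexity(["ACGT", "ACGT", "ACGA"]))
```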
arxiv-3096
0803.2695
KohonAnts: A Self-Organizing Ant Algorithm for Clustering and Pattern Classification
<|reference_start|>KohonAnts: A Self-Organizing Ant Algorithm for Clustering and Pattern Classification: In this paper we introduce a new ant-based method that takes advantage of the cooperative self-organization of Ant Colony Systems to create a naturally inspired clustering and pattern recognition method. The approach considers each data item as an ant, which moves inside a grid changing the cells it goes through, in a fashion similar to Kohonen's Self-Organizing Maps. The resulting algorithm is conceptually simpler, requires fewer free parameters than other ant-based clustering algorithms, and, after some parameter tuning, yields very good results on some benchmark problems.<|reference_end|>
arxiv
@article{fernandes2008kohonants:, title={KohonAnts: A Self-Organizing Ant Algorithm for Clustering and Pattern Classification}, author={C. Fernandes, A.M. Mora, J.J. Merelo, V. Ramos, J.L.J. Laredo}, journal={arXiv preprint arXiv:0803.2695}, year={2008}, archivePrefix={arXiv}, eprint={0803.2695}, primaryClass={cs.NE cs.CV} }
fernandes2008kohonants:
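A rough sketch of the idea described in the record above: each data item acts as an ant on a grid of prototype vectors, moving toward similar cells and updating them as in a Self-Organizing Map. This is one plausible reading of the abstract, not the authors' exact movement and update rules; all parameter names are illustrative:

```python
import numpy as np

def kohonants(data, grid_size=10, epochs=50, lr=0.5, radius=2, seed=0):
    """Sketch: ants (data vectors) wander a toroidal grid of cell vectors,
    each moving to the most similar neighbouring cell and updating cells
    around it SOM-style."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    grid = rng.random((grid_size, grid_size, dim))          # cell vectors
    ants = rng.integers(0, grid_size, size=(len(data), 2))  # ant positions

    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)                   # decaying learning rate
        for k, x in enumerate(data):
            i, j = ants[k]
            # Candidate moves: the 3x3 neighbourhood around the ant.
            best, best_d = (i, j), np.inf
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = (i + di) % grid_size, (j + dj) % grid_size
                    d = np.linalg.norm(grid[ni, nj] - x)
                    if d < best_d:
                        best, best_d = (ni, nj), d
            ants[k] = best
            # SOM-style update of the cells around the new position.
            bi, bj = best
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = (bi + di) % grid_size, (bj + dj) % grid_size
                    h = np.exp(-(di**2 + dj**2) / (2 * radius**2))
                    grid[ni, nj] += alpha * h * (x - grid[ni, nj])
    return ants, grid  # ants that settle in the same grid region form clusters
```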
arxiv-3097
0803.2717
Distributed authentication for randomly compromised networks
<|reference_start|>Distributed authentication for randomly compromised networks: We introduce a simple, practical approach with probabilistic information-theoretic security to solve one of quantum key distribution's major security weaknesses: the requirement of an authenticated classical channel to prevent man-in-the-middle attacks. Our scheme employs classical secret sharing and partially trusted intermediaries to provide arbitrarily high confidence in the security of the protocol. Although certain failures elude detection, we discuss preemptive strategies to reduce the probability of failure to an arbitrarily small level; the probability of such failures is exponentially suppressed with increases in connectivity (i.e., connections per node).<|reference_end|>
arxiv
@article{beals2008distributed, title={Distributed authentication for randomly compromised networks}, author={Travis R. Beals, Kevin P. Hynes, Barry C. Sanders}, journal={New Journal of Physics 11 (2009) 085005}, year={2008}, doi={10.1088/1367-2630/11/8/085005}, archivePrefix={arXiv}, eprint={0803.2717}, primaryClass={quant-ph cs.CR} }
beals2008distributed
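The classical secret sharing the abstract relies on is typically instantiated with Shamir's scheme. Below is a self-contained sketch over a prime field with threshold `t` of `n` shares; the protocol's partially trusted intermediaries and detection strategies are not modeled here:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for 126-bit secrets

def split(secret, n, t):
    """Shamir (t, n) secret sharing: any t shares reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# shares = split(123456789, n=5, t=3)
# assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```

Fewer than t shares reveal nothing about the secret in the information-theoretic sense, which is what lets the scheme tolerate a random subset of compromised intermediaries.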
arxiv-3098
0803.2812
Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging
<|reference_start|>Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging: This paper presents a method for linear high dynamic range imaging using solid-state photosensors with a Bayer colour filter array. Using information from neighbouring pixels, it is possible to reconstruct linear wide-dynamic-range images from oversaturated images. The Bayer colour filter array is treated as an array of neutral filters under quasimonochromatic light. If the camera's response function to the desired light source is known, then one can calculate correction coefficients to reconstruct oversaturated images. The reconstructed images are linearized in order to provide linear high dynamic range images for optical-digital imaging systems. The calibration procedure for obtaining the camera's response function to the desired light source is described. Experimental results of the reconstruction of images from oversaturated images are presented for red, green, and blue quasimonochromatic light sources. A quantitative analysis of the accuracy of the reconstructed images is provided.<|reference_end|>
arxiv
@article{konnik2008using, title={Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging}, author={Mikhail V. Konnik}, journal={arXiv preprint arXiv:0803.2812}, year={2008}, archivePrefix={arXiv}, eprint={0803.2812}, primaryClass={cs.CV} }
konnik2008using
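A sketch of the neighbour-based reconstruction idea: under quasimonochromatic light the Bayer channels act as neutral filters with calibrated attenuation ratios, so a saturated pixel can be estimated from unsaturated neighbours of another channel scaled by a correction coefficient. The coefficient table, parameter names, and averaging rule below are assumptions for illustration; the paper's calibration procedure supplies the actual values:

```python
import numpy as np

def reconstruct_saturated(raw, channel_of, coeff, sat_level=4095):
    """Estimate saturated Bayer pixels from unsaturated neighbours.

    raw        : 2D raw Bayer image (linear sensor counts)
    channel_of : function (i, j) -> 'R' | 'G' | 'B' for the Bayer mosaic
    coeff      : dict like {('G', 'R'): k} meaning G ~= k * R under this source
    """
    out = raw.astype(float)
    h, w = raw.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if raw[i, j] < sat_level:
                continue                      # pixel is fine, keep it
            ch = channel_of(i, j)
            estimates = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                nch = channel_of(ni, nj)
                if raw[ni, nj] < sat_level and (ch, nch) in coeff:
                    estimates.append(coeff[(ch, nch)] * raw[ni, nj])
            if estimates:
                out[i, j] = np.mean(estimates)  # value may exceed sat_level,
                                                # extending the linear range
    return out
```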
arxiv-3099
0803.2824
Combined Intra- and Inter-domain Traffic Engineering using Hot-Potato Aware Link Weights Optimization
<|reference_start|>Combined Intra- and Inter-domain Traffic Engineering using Hot-Potato Aware Link Weights Optimization: A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.<|reference_end|>
arxiv
@article{balon2008combined, title={Combined Intra- and Inter-domain Traffic Engineering using Hot-Potato Aware Link Weights Optimization}, author={Simon Balon and Guy Leduc}, journal={arXiv preprint arXiv:0803.2824}, year={2008}, archivePrefix={arXiv}, eprint={0803.2824}, primaryClass={cs.NI} }
balon2008combined
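To make the optimization loop concrete, here is a generic local-search sketch over link weights of the kind classical link-weight optimizers use: route a demand matrix on shortest paths and score a convex per-link cost. It assumes an undirected networkx graph with a `cap` attribute per edge, ignores ECMP splitting, and is not the paper's BGP-aware heuristic, which additionally operates on the topology extended with virtual nodes and links:

```python
import random
import networkx as nx

def network_cost(G, demands, weights):
    """Route each demand on one shortest path under 'weights' and sum a
    convex per-link cost of utilisation (a stand-in objective function)."""
    nx.set_edge_attributes(G, weights, "w")
    load = {e: 0.0 for e in G.edges}
    for (s, t), vol in demands.items():
        path = nx.shortest_path(G, s, t, weight="w")
        for e in zip(path, path[1:]):
            load[e if e in load else (e[1], e[0])] += vol
    return sum((load[e] / G.edges[e]["cap"]) ** 2 for e in load)

def local_search(G, demands, max_weight=20, iters=1000, seed=0):
    """Naive single-weight-change local search over integer link weights."""
    rng = random.Random(seed)
    weights = {e: rng.randint(1, max_weight) for e in G.edges}
    best = network_cost(G, demands, weights)
    for _ in range(iters):
        e = rng.choice(list(weights))
        old = weights[e]
        weights[e] = rng.randint(1, max_weight)
        cost = network_cost(G, demands, weights)
        if cost < best:
            best = cost
        else:
            weights[e] = old  # revert the non-improving move
    return weights, best
```

The point the abstract makes is that the demand matrix itself depends on the weights once hot-potato routing is considered, which is exactly what this naive inner loop fails to capture unless the topology is extended as the authors propose.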
arxiv-3100
0803.2827
Impact of CSI on Distributed Space-Time Coding in Wireless Relay Networks
<|reference_start|>Impact of CSI on Distributed Space-Time Coding in Wireless Relay Networks: We consider a two-hop wireless network where a transmitter communicates with a receiver via $M$ relays with an amplify-and-forward (AF) protocol. Recent works have shown that sophisticated linear processing at the relays, such as beamforming and distributed space-time coding (DSTC), can improve the AF performance. However, the relative utility of these strategies depends on the available channel state information at the transmitter (CSIT), which in turn depends on system parameters such as the speed of the underlying fading channel and that of the training and feedback procedures. Moreover, it is of practical interest to have a single transmit scheme that handles different CSIT scenarios. This motivates us to consider a unified approach based on DSTC that potentially provides diversity gain with statistical CSIT and exploits additional side information if available. Under individual power constraints at the relays, we optimize the amplifier power allocation such that the pairwise error probability conditioned on the available CSIT is minimized. Under perfect CSIT we propose an on-off gradient algorithm that efficiently finds a set of relays to switch on. Under partial and statistical CSIT, we propose a simple waterfilling algorithm that yields a non-trivial solution between maximum power allocation and a generalized STC that equalizes the averaged amplified noise over all relays. Moreover, we derive closed-form solutions for M=2 and in certain asymptotic regimes that enable an easy interpretation of the proposed algorithms. It is found that an appropriate amplifier power allocation is mandatory for DSTC to offer sufficient diversity and power gain in a general network topology.<|reference_end|>
arxiv
@article{kobayashi2008impact, title={Impact of CSI on Distributed Space-Time Coding in Wireless Relay Networks}, author={Mari Kobayashi and Xavier Mestre}, journal={arXiv preprint arXiv:0803.2827}, year={2008}, archivePrefix={arXiv}, eprint={0803.2827}, primaryClass={cs.IT math.IT} }
kobayashi2008impact
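For illustration of the algorithmic pattern only: the textbook waterfilling solution, computed by bisection on the water level. The paper's relay allocation optimizes a different objective under individual power constraints; this generic version just shows what a "simple waterfilling algorithm" computes:

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """Classical waterfilling: maximise sum(log(1 + p_i * g_i)) subject to
    sum(p_i) = total_power and p_i >= 0, via bisection on the water level mu.
    """
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / g.min()    # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)        # power poured above each "floor" 1/g_i
        if p.sum() > total_power:
            hi = mu                              # water level too high
        else:
            lo = mu                              # water level too low
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# Strong channels get more power; very weak ones may get none:
# print(waterfilling([2.0, 1.0, 0.1], total_power=1.0))
```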