corpus_id: string (7-12 chars)
paper_id: string (9-16 chars)
title: string (1-261 chars)
abstract: string (70-4.02k chars)
source: string (1 class)
bibtex: string (208-20.9k chars)
citation_key: string (6-100 chars)
arxiv-6301
0902.0966
Beam Selection Gain Versus Antenna Selection Gain
We consider beam selection using a fixed beamforming network (FBN) at a base station with $M$ array antennas. In our setting, a Butler matrix is deployed at the RF stage to form $M$ beams, and the best beam is then selected for transmission. We prove key properties of the noncentral chi-square distribution and of the beam selection gain, verifying that beam selection is superior to antenna selection in Rician channels with any $K$-factor. Furthermore, we find asymptotically tight stochastic bounds on the beam selection gain, which yield approximate closed-form expressions for the expected selection gain and the ergodic capacity. With beam selection, the ergodic capacity has order of growth $\mathnormal{\Theta}(\log(M))$ regardless of user location, in contrast to $\mathnormal{\Theta}(\log(\log(M)))$ for antenna selection.
arxiv
@article{bai2009beam, title={Beam Selection Gain Versus Antenna Selection Gain}, author={Dongwoon Bai and Saeed S. Ghassemzadeh and Robert R. Miller and Vahid Tarokh}, journal={arXiv preprint arXiv:0902.0966}, year={2009}, archivePrefix={arXiv}, eprint={0902.0966}, primaryClass={cs.IT math.IT} }
bai2009beam
arxiv-6302
0902.1033
New Confidence Measures for Statistical Machine Translation
A confidence measure estimates the reliability of a hypothesis produced by a machine translation system. Confidence estimation can be seen as a testing process: we want to decide whether the most probable sequence of words produced by the machine translation system is correct or not. In the following we describe several original word-level confidence measures for machine translation, based on mutual information, an n-gram language model, and lexical-features language models. We evaluate how well they perform individually or together, and show that a combination of confidence measures based on mutual information yields a classification error rate as low as 25.1% with an F-measure of 0.708.
arxiv
@article{raybaud2009new, title={New Confidence Measures for Statistical Machine Translation}, author={Sylvain Raybaud (INRIA Lorraine - LORIA) and Caroline Lavecchia (INRIA Lorraine - LORIA) and David Langlois (INRIA Lorraine - LORIA) and Kamel Sma\"ili (INRIA Lorraine - LORIA)}, journal={International Conference On Agents and Artificial Intelligence - ICAART 09 (2009)}, year={2009}, archivePrefix={arXiv}, eprint={0902.1033}, primaryClass={cs.CL} }
raybaud2009new
arxiv-6303
0902.1035
Towards a Statistical Methodology to Evaluate Program Speedups and their Optimisation Techniques
For decades, the community of program optimisation and analysis, code performance evaluation, parallelisation and optimising compilation has published hundreds of research and engineering articles in major conferences and journals. These articles study efficient algorithms, strategies and techniques for accelerating program execution times, or for optimising other performance metrics (MIPS, code size, energy/power, MFLOPS, etc.). Many speedups are published, yet nobody is able to reproduce them exactly. The non-reproducibility of our research results is a dark spot in the field, and we cannot call ourselves {\it computer scientists} if we do not follow a rigorous experimental methodology. This article is a first effort towards a correct statistical protocol for measuring and analysing speedups. As we will see, some common mistakes appear in published articles, explaining part of the non-reproducibility of the results. The present article is not sufficient on its own to deliver a complete experimental methodology; further effort is needed from the community to agree on a common protocol for future experiments. In any case, our community should pay attention to the reproducibility of its results in the future.
arxiv
@article{touati2009towards, title={Towards a Statistical Methodology to Evaluate Program Speedups and their Optimisation Techniques}, author={Sid Touati (PRISM)}, journal={arXiv preprint arXiv:0902.1035}, year={2009}, archivePrefix={arXiv}, eprint={0902.1035}, primaryClass={cs.PF} }
touati2009towards
arxiv-6304
0902.1037
Optimal design and optimal control of structures undergoing finite rotations and elastic deformations
In this work we deal with the optimal design and optimal control of structures undergoing large rotations. In other words, we show how to find the corresponding initial configuration and the corresponding set of multiple load parameters in order to recover a desired deformed configuration, or some desirable features of the deformed configuration, as specified more precisely by the objective or cost function. The model problem chosen to illustrate the proposed optimal design and optimal control methodologies is that of a geometrically exact beam. First, we present a non-standard formulation of the optimal design and optimal control problems, relying on the method of Lagrange multipliers to make the mechanics state variables independent of either design or control variables and thus provide the most general basis for developing the best possible solution procedure. Two different solution procedures are then explored, one based on diffuse approximation of the response function and a gradient method, and the other on a genetic algorithm. A number of numerical examples are given in order to illustrate both the advantages and potential drawbacks of each of the presented procedures.
arxiv
@article{ibrahimbegovic2009optimal, title={Optimal design and optimal control of structures undergoing finite rotations and elastic deformations}, author={A. Ibrahimbegovic and C. Knopf-Lenoir and A. Kucerova and P. Villon}, journal={International Journal for Numerical Methods in Engineering, 61 (14), 2428-2460, 2004}, year={2009}, doi={10.1002/nme.1150}, archivePrefix={arXiv}, eprint={0902.1037}, primaryClass={cs.NE cs.CE} }
ibrahimbegovic2009optimal
arxiv-6305
0902.1038
Compressed Representations of Permutations, and Applications
We explore various techniques to compress a permutation $\pi$ over $n$ integers, taking advantage of ordered subsequences in $\pi$, while supporting its application $\pi(i)$ and the application of its inverse $\pi^{-1}(i)$ in small time. Our compression schemes yield several interesting byproducts, in many cases matching, improving or extending the best existing results on applications such as the encoding of a permutation in order to support iterated applications $\pi^k(i)$ of it, of integer functions, and of inverted lists and suffix arrays.
arxiv
@article{barbay2009compressed, title={Compressed Representations of Permutations, and Applications}, author={J\'er\'emy Barbay (DCC) and Gonzalo Navarro (DCC)}, journal={STACS 2009 (2009) 111-122}, year={2009}, archivePrefix={arXiv}, eprint={0902.1038}, primaryClass={cs.DS} }
barbay2009compressed
arxiv-6306
0902.1040
Fast solving of Weighted Pairing Least-Squares systems
This paper presents a generalization of the "weighted least-squares" (WLS), named "weighted pairing least-squares" (WPLS), which uses a rectangular weight matrix and is suitable for data alignment problems. Two fast solving methods, suitable for solving full rank systems as well as rank deficient systems, are studied. Computational experiments clearly show that the best method, in terms of speed, accuracy, and numerical stability, is based on a special {1, 2, 3}-inverse, whose computation reduces to a very simple generalization of the usual "Cholesky factorization-backward substitution" method for solving linear systems.
arxiv
@article{courrieu2009fast, title={Fast solving of Weighted Pairing Least-Squares systems}, author={Pierre Courrieu (LPC)}, journal={Journal of Computational and Applied Mathematics 231, 1 (2009) 39-48}, year={2009}, doi={10.1016/j.cam.2009.01.016}, archivePrefix={arXiv}, eprint={0902.1040}, primaryClass={cs.MS cs.NE} }
courrieu2009fast
arxiv-6307
0902.1041
Kolmogorov Complexity and Solovay Functions
Solovay proved that there exists a computable upper bound f of the prefix-free Kolmogorov complexity function K such that f(x) = K(x) for infinitely many x. In this paper, we consider the class of computable functions f such that K(x) <= f(x) + O(1) for all x and f(x) <= K(x) + O(1) for infinitely many x, which we call Solovay functions. We show that Solovay functions present interesting connections with randomness notions such as Martin-L\"of randomness and K-triviality.
arxiv
@article{bienvenu2009kolmogorov, title={Kolmogorov Complexity and Solovay Functions}, author={Laurent Bienvenu and Rod Downey}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 147-158}, year={2009}, archivePrefix={arXiv}, eprint={0902.1041}, primaryClass={cs.CC cs.IT math.IT math.LO} }
bienvenu2009kolmogorov
arxiv-6308
0902.1042
Weak MSO with the Unbounding Quantifier
A new class of languages of infinite words is introduced, called the max-regular languages, extending the class of $\omega$-regular languages. The class has two equivalent descriptions: in terms of automata (a type of deterministic counter automaton), and in terms of logic (weak monadic second-order logic with the unbounding quantifier). Effective translations between the logic and automata are given.
arxiv
@article{bojanczyk2009weak, title={Weak MSO with the Unbounding Quantifier}, author={Mikolaj Bojanczyk}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 159-170}, year={2009}, archivePrefix={arXiv}, eprint={0902.1042}, primaryClass={cs.FL cs.LO} }
bojanczyk2009weak
arxiv-6309
0902.1043
Polynomial-Time Approximation Schemes for Subset-Connectivity Problems in Bounded-Genus Graphs
We present the first polynomial-time approximation schemes (PTASes) for the following subset-connectivity problems in edge-weighted graphs of bounded genus: Steiner tree, low-connectivity survivable-network design, and subset TSP. The schemes run in O(n log n) time for graphs embedded on both orientable and non-orientable surfaces. This work generalizes the PTAS frameworks of Borradaile, Klein, and Mathieu from planar graphs to bounded-genus graphs: any future problems shown to admit the required structure theorem for planar graphs will similarly extend to bounded-genus graphs.
arxiv
@article{borradaile2009polynomial-time, title={Polynomial-Time Approximation Schemes for Subset-Connectivity Problems in Bounded-Genus Graphs}, author={Glencora Borradaile and Erik D. Demaine (MIT) and Siamak Tazari}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 171-182}, year={2009}, archivePrefix={arXiv}, eprint={0902.1043}, primaryClass={cs.DM cs.DS} }
borradaile2009polynomial-time
arxiv-6310
0902.1045
On finding a particular class of combinatorial identities
In this paper, a class of combinatorial identities is proved. The method used is based on the following rule: counting the elements of a given set in two ways and equating the results. This rule is known as "counting in two ways". The principle of inclusion and exclusion is used to obtain a class of (0,1)-matrices.
arxiv
@article{iordjev2009on, title={On finding a particular class of combinatorial identities}, author={Krassimir Yankov Iordjev and Dimiter Stoichkov Kovachev}, journal={arXiv preprint arXiv:0902.1045}, year={2009}, archivePrefix={arXiv}, eprint={0902.1045}, primaryClass={cs.DM} }
iordjev2009on
arxiv-6311
0902.1047
A Polynomial Kernel For Multicut In Trees
The MULTICUT IN TREES problem consists in deciding, given a tree, a set of requests (i.e. paths in the tree) and an integer k, whether there exists a set of k edges cutting all the requests. This problem was shown to be FPT by Guo and Niedermeier, who also provided an exponential kernel and asked whether the problem has a polynomial kernel. This question was also raised by Fellows. We show that MULTICUT IN TREES has a polynomial kernel.
arxiv
@article{bousquet2009a, title={A Polynomial Kernel For Multicut In Trees}, author={Nicolas Bousquet (ENS Cachan) and Jean Daligault (LIRMM) and Stephan Thomasse (LIRMM) and Anders Yeo}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 183-194}, year={2009}, archivePrefix={arXiv}, eprint={0902.1047}, primaryClass={cs.DM} }
bousquet2009a
arxiv-6312
0902.1048
On the Average Complexity of Moore's State Minimization Algorithm
We prove that, for any arbitrary finite alphabet and for the uniform distribution over deterministic and accessible automata with n states, the average complexity of Moore's state minimization algorithm is in O(n log n). Moreover, this bound is tight in the case of unary automata.
arxiv
@article{bassino2009on, title={On the Average Complexity of Moore's State Minimization Algorithm}, author={Fr\'ed\'erique Bassino (LIPN) and Julien David (IGM) and Cyril Nicaud (IGM)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 123-134}, year={2009}, archivePrefix={arXiv}, eprint={0902.1048}, primaryClass={cs.DS cs.CC} }
bassino2009on
arxiv-6313
0902.1080
A Model for Managing Collections of Patterns
Data mining algorithms are now able to deal efficiently with huge amounts of data. Various kinds of patterns may be discovered and may have a great impact on the general development of knowledge. In many domains, end users may want to have their data mined by data mining tools in order to extract patterns that could impact their business. Nevertheless, those users are often overwhelmed by the large quantity of patterns extracted in such a situation. Moreover, privacy or commercial issues may prevent the users from mining the data by themselves. Thus, the users may not be able to perform many experiments integrating various constraints in order to focus on the specific patterns they would like to extract. Post-processing of patterns may be an answer to that drawback. In this paper we therefore present a framework that allows end users to manage collections of patterns. We propose to use an efficient data structure on which algebraic operators may be applied in order to retrieve or access patterns in pattern bases.
arxiv
@article{jeudy2009a, title={A Model for Managing Collections of Patterns}, author={Baptiste Jeudy (LAHC) and Christine Largeron (LAHC) and Fran\c{c}ois Jacquenet (LAHC)}, journal={ACM Symposium on Applied Computing, Seoul, Republic of Korea (2007)}, year={2009}, archivePrefix={arXiv}, eprint={0902.1080}, primaryClass={cs.AI} }
jeudy2009a
arxiv-6314
0902.1104
How happy is your web browsing? A model to quantify satisfaction of an Internet user, searching for desired information
We feel happy when web-browsing operations provide us with the necessary information; otherwise, we feel bitter. How can this happiness (or bitterness) be measured? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction on top of his/her continuous frustration at not finding the required information. We find that the cumulative satisfaction profile of a web-searching individual can be modelled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a 'memoryless' Poisson flow. The evolution of satisfaction over the entire time interval of the user's browsing is modelled with auto-correlation analysis. A utilitarian marker, whose magnitude being greater than unity describes happy web-searching operations, and an empirical limit connecting the user's satisfaction with his frustration level are proposed as well. The presence of pertinent information on the very first page of a website, and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.), are found to be the two key aspects dominating a web browser's psychology. The proposed model employs different combinations of decay parameter, searching time and number of helpful websites. The obtained results match the results from three real-life case studies.
arxiv
@article{banerji2009how, title={How happy is your web browsing? A model to quantify satisfaction of an Internet user, searching for desired information}, author={Anirban Banerji and Aniket Magarkar}, journal={Physica A: Statistical Mechanics and its Applications, 391: 4215-4224, 2012}, year={2009}, doi={10.1016/j.physa.2012.02.002}, archivePrefix={arXiv}, eprint={0902.1104}, primaryClass={cs.HC} }
banerji2009how
arxiv-6315
0902.1169
Node Weighted Scheduling
This paper proposes a new class of online policies for scheduling in input-buffered crossbar switches. Our policies are throughput optimal for a large class of arrival processes which satisfy the strong law of large numbers. Given an initial configuration and no further arrivals, our policies drain all packets in the system in the minimal amount of time (providing an online alternative to the batch approach based on Birkhoff-von Neumann decompositions). We show that it is possible for policies in our class to be throughput optimal even if they are not constrained to be maximal in every time slot. Most algorithms for switch scheduling take an edge-based approach; in contrast, we focus on scheduling (a large enough set of) the most congested ports. This alternative approach allows for lower-complexity algorithms, and also requires a non-standard technique to prove throughput optimality. One algorithm in our class, Maximum Vertex-weighted Matching (MVM), has worst-case complexity similar to Max-size Matching, and in simulations shows slightly better delay performance than Max-(edge)weighted Matching (MWM).
arxiv
@article{gupta2009node, title={Node Weighted Scheduling}, author={Gagan Raj Gupta and Sujay Sanghavi and Ness B. Shroff}, journal={arXiv preprint arXiv:0902.1169}, year={2009}, archivePrefix={arXiv}, eprint={0902.1169}, primaryClass={cs.NI cs.PF} }
gupta2009node
arxiv-6316
0902.1179
The Complexity of Datalog on Linear Orders
We study the program complexity of datalog on both finite and infinite linear orders. Our main result states that on all linear orders with at least two elements, the nonemptiness problem for datalog is EXPTIME-complete. While containment of the nonemptiness problem in EXPTIME is known for finite linear orders and actually for arbitrary finite structures, it is not obvious for infinite linear orders. It sharply contrasts the situation on other infinite structures; for example, the datalog nonemptiness problem on an infinite successor structure is undecidable. We extend our upper bound results to infinite linear orders with constants. As an application, we show that the datalog nonemptiness problem on Allen's interval algebra is EXPTIME-complete.
arxiv
@article{grohe2009the, title={The Complexity of Datalog on Linear Orders}, author={Martin Grohe and Goetz Schwandtner}, journal={Logical Methods in Computer Science, Volume 5, Issue 1 (February 27, 2009) lmcs:811}, year={2009}, doi={10.2168/LMCS-5(1:4)2009}, archivePrefix={arXiv}, eprint={0902.1179}, primaryClass={cs.LO cs.CC cs.DB} }
grohe2009the
arxiv-6317
0902.1182
Directed paths on a tree: coloring, multicut and kernel
In the present paper, we study algorithmic questions for the arc-intersection graph of directed paths on a tree. Such graphs are known to be perfect (proved by Monma and Wei in 1986). We present faster algorithms than all previously known algorithms for solving the minimum coloring and the minimum clique cover problems. They both run in $O(np)$ time, where $n$ is the number of vertices of the tree and $p$ the number of paths. Another result is a polynomial algorithm computing a kernel in the intersection graph, when its edges are oriented in a clique-acyclic way. Indeed, such a kernel exists for any perfect graph by a theorem of Boros and Gurvich. Such kernel-computing algorithms are known only for a few classes of perfect graphs.
arxiv
@article{degevigney2009directed, title={Directed paths on a tree: coloring, multicut and kernel}, author={Olivier Durand de G\'evigney and Fr\'ed\'eric Meunier and Christian Popa and Julien Reygner and Ayrin Romero}, journal={arXiv preprint arXiv:0902.1182}, year={2009}, archivePrefix={arXiv}, eprint={0902.1182}, primaryClass={cs.DM} }
degevigney2009directed
arxiv-6318
0902.1220
Opportunistic Communications in Fading Multiaccess Relay Channels
The problem of optimal resource allocation is studied for ergodic fading orthogonal multiaccess relay channels (MARCs) in which the users (sources) communicate with a destination with the aid of a half-duplex relay that transmits on a channel orthogonal to that used by the transmitting sources. Under the assumption that the instantaneous fading state information is available at all nodes, the maximum sum-rate and the optimal user and relay power allocations (policies) are developed for a decode-and-forward (DF) relay. With the observation that a DF relay results in two multiaccess channels, one at the relay and the other at the destination, a single known lemma on the sum-rate of two intersecting polymatroids is used to determine the DF sum-rate and the optimal user and relay policies. The lemma also enables a broad topological classification of fading MARCs into one of three types. The first type is the set of partially clustered MARCs where a user is clustered either with the relay or with the destination such that the users waterfill on their bottleneck links to the distant receiver. The second type is the set of clustered MARCs where all users are either proximal to the relay or to the destination such that opportunistic multiuser scheduling to one of the receivers is optimal. The third type consists of arbitrarily clustered MARCs which are a combination of the first two types, and for this type it is shown that the optimal policies are opportunistic non-waterfilling solutions. The analysis is extended to develop the rate region of a K-user orthogonal half-duplex MARC. Finally, cutset outer bounds are used to show that DF achieves the capacity region for a class of clustered orthogonal half-duplex MARCs.
arxiv
@article{sankar2009opportunistic, title={Opportunistic Communications in Fading Multiaccess Relay Channels}, author={Lalitha Sankar and Yingbin Liang and Narayan Mandayam and H. Vincent Poor}, journal={arXiv preprint arXiv:0902.1220}, year={2009}, archivePrefix={arXiv}, eprint={0902.1220}, primaryClass={cs.IT math.IT} }
sankar2009opportunistic
arxiv-6319
0902.1227
Discovering general partial orders in event streams
Frequent episode discovery is a popular framework for pattern discovery in event streams. An episode is a partially ordered set of nodes with each node associated with an event type. Efficient (and separate) algorithms exist for episode discovery when the associated partial order is total (serial episode) and trivial (parallel episode). In this paper, we propose efficient algorithms for discovering frequent episodes with general partial orders. These algorithms can be easily specialized to discover serial or parallel episodes. Also, the algorithms are flexible enough to be specialized for mining in the space of certain interesting subclasses of partial orders. We point out that there is an inherent combinatorial explosion in frequent partial order mining and, most importantly, that frequency alone is not a sufficient measure of interestingness. We propose a new interestingness measure for general partial order episodes and a discovery method based on this measure, for filtering out uninteresting partial orders. Simulations demonstrate the effectiveness of our algorithms.
arxiv
@article{achar2009discovering, title={Discovering general partial orders in event streams}, author={Avinash Achar and Srivatsan Laxman and Raajay Viswanathan and P. S. Sastry}, journal={arXiv preprint arXiv:0902.1227}, year={2009}, archivePrefix={arXiv}, eprint={0902.1227}, primaryClass={cs.AI cs.LG} }
achar2009discovering
arxiv-6320
0902.1232
On Why and What of Randomness
This paper has several objectives. First, it separates randomness from lawlessness and shows why even genuine randomness does not imply lawlessness. Second, it separates the question "why should I call a phenomenon random?" (answered in part one) from the patent question "what is a random sequence?", for which the answer lies in Kolmogorov complexity (explained in part two). While answering the first question, the note argues that there are four motivating factors for calling a phenomenon random: ontic, epistemic, pseudo and telescopic, the first two depicting genuine randomness and the last two false randomness. Third, ontic and epistemic randomness are distinguished from ontic and epistemic probability. Fourth, it encourages students to be applied statisticians and advises against becoming armchair theorists, which is, interestingly, achieved by a straight application of telescopic randomness. Overall, it tells the teacher not to jump to probability without first explaining randomness properly, and similarly advises students to read (and understand) randomness closely before taking on probability.
arxiv
@article{chakraborty2009on, title={On Why and What of Randomness}, author={Soubhik Chakraborty}, journal={arXiv preprint arXiv:0902.1232}, year={2009}, archivePrefix={arXiv}, eprint={0902.1232}, primaryClass={cs.OH} }
chakraborty2009on
arxiv-6321
0902.1253
On Local Symmetries And Universality In Cellular Automata
Cellular automata (CA) are dynamical systems defined by a finite local rule, but they are studied for their global dynamics. They can exhibit a wide range of complex behaviours, and a celebrated result is the existence of (intrinsically) universal CA, that is, CA able to fully simulate any other CA. In this paper, we show that the asymptotic density of universal cellular automata is 1 in several families of CA defined by local symmetries. We extend results previously established for captive cellular automata in two significant ways. First, our results apply to well-known families of CA (e.g. the family of outer-totalistic CA containing the Game of Life) and, second, we obtain such density results with both increasing number of states and increasing neighbourhood. Moreover, thanks to universality-preserving encodings, we show that the universality problem remains undecidable in some of those families.
arxiv
@article{boyer2009on, title={On Local Symmetries And Universality In Cellular Automata}, author={Laurent Boyer and Guillaume Theyssier (LM-Savoie)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 195-206}, year={2009}, archivePrefix={arXiv}, eprint={0902.1253}, primaryClass={cs.DM math.DS} }
boyer2009on
arxiv-6322
0902.1254
Almost-Uniform Sampling of Points on High-Dimensional Algebraic Varieties
We consider the problem of uniform sampling of points on an algebraic variety. Specifically, we develop a randomized algorithm that, given a small set of multivariate polynomials over a sufficiently large finite field, produces a common zero of the polynomials almost uniformly at random. The statistical distance between the output distribution of the algorithm and the uniform distribution on the set of common zeros is polynomially small in the field size, and the running time of the algorithm is polynomial in the description of the polynomials and their degrees provided that the number of the polynomials is a constant.
arxiv
@article{cheraghchi2009almost-uniform, title={Almost-Uniform Sampling of Points on High-Dimensional Algebraic Varieties}, author={Mahdi Cheraghchi (EPFL) and Amin Shokrollahi (EPFL)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 277-288}, year={2009}, archivePrefix={arXiv}, eprint={0902.1254}, primaryClass={cs.DS cs.CC} }
cheraghchi2009almost-uniform
arxiv-6323
0902.1255
Hardness and Algorithms for Rainbow Connectivity
An edge-colored graph G is rainbow connected if any two vertices are connected by a path whose edges have distinct colors. The rainbow connectivity of a connected graph G, denoted rc(G), is the smallest number of colors that are needed in order to make G rainbow connected. In addition to being a natural combinatorial problem, the rainbow connectivity problem is motivated by applications in cellular networks. In this paper we give the first proof that computing rc(G) is NP-Hard. In fact, we prove that it is already NP-Complete to decide if rc(G) = 2, and also that it is NP-Complete to decide whether a given edge-colored (with an unbounded number of colors) graph is rainbow connected. On the positive side, we prove that for every $\epsilon$ > 0, a connected graph with minimum degree at least $\epsilon n$ has bounded rainbow connectivity, where the bound depends only on $\epsilon$, and the corresponding coloring can be constructed in polynomial time. Additional non-trivial upper bounds, as well as open problems and conjectures, are also presented.
arxiv
@article{chakraborty2009hardness, title={Hardness and Algorithms for Rainbow Connectivity}, author={Sourav Chakraborty and Eldar Fischer and Arie Matsliah and Raphael Yuster}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 243-254}, year={2009}, archivePrefix={arXiv}, eprint={0902.1255}, primaryClass={cs.CC cs.DM} }
chakraborty2009hardness
arxiv-6324
0902.1256
Enumerating Homomorphisms
The homomorphism problem for relational structures is an abstract way of formulating constraint satisfaction problems (CSP) and various problems in database theory. The decision version of the homomorphism problem has received a lot of attention in the literature; in particular, the way the graph-theoretical structure of the variables and constraints influences the complexity of the problem is intensively studied. Here we study the problem of enumerating all the solutions with polynomial delay from a similar point of view. It turns out that the enumeration problem behaves very differently from the decision version. We give evidence that it is unlikely that a characterization result similar to the decision version can be obtained. Nevertheless, we show nontrivial cases where enumeration can be done with polynomial delay.
arxiv
@article{bulatov2009enumerating, title={Enumerating Homomorphisms}, author={Andrei A. Bulatov, Victor Dalmau, Martin Grohe, Daniel Marx}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 231-242}, year={2009}, archivePrefix={arXiv}, eprint={0902.1256}, primaryClass={cs.CC cs.LO} }
bulatov2009enumerating
arxiv-6325
0902.1257
Compilation of extended recursion in call-by-value functional languages
<|reference_start|>Compilation of extended recursion in call-by-value functional languages: This paper formalizes and proves correct a compilation scheme for mutually-recursive definitions in call-by-value functional languages. This scheme supports a wider range of recursive definitions than previous methods. We formalize our technique as a translation scheme to a lambda-calculus featuring in-place update of memory blocks, and prove the translation to be correct.<|reference_end|>
arxiv
@article{hirschowitz2009compilation, title={Compilation of extended recursion in call-by-value functional languages}, author={Tom Hirschowitz (LM-Savoie), Xavier Leroy (INRIA Rocquencourt), J. B. Wells}, journal={Higher-Order and Symbolic Computation 22, 1 (2009) 3-66}, year={2009}, doi={10.1007/s10990-009-9042-z}, archivePrefix={arXiv}, eprint={0902.1257}, primaryClass={cs.PL} }
hirschowitz2009compilation
arxiv-6326
0902.1258
Extraction de concepts sous contraintes dans des donn\'ees d'expression de g\`enes
<|reference_start|>Extraction de concepts sous contraintes dans des donn\'ees d'expression de g\`enes: In this paper, we propose a technique to extract constrained formal concepts.<|reference_end|>
arxiv
@article{jeudy2009extraction, title={Extraction de concepts sous contraintes dans des donn\'ees d'expression de g\`enes}, author={Baptiste Jeudy (LAHC), Fran\c{c}ois Rioult (GREYC)}, journal={Conf\'erence sur l'apprentissage automatique, Nice : France (2005)}, year={2009}, archivePrefix={arXiv}, eprint={0902.1258}, primaryClass={cs.LG} }
jeudy2009extraction
arxiv-6327
0902.1259
Database Transposition for Constrained (Closed) Pattern Mining
<|reference_start|>Database Transposition for Constrained (Closed) Pattern Mining: Recently, different works proposed a new way to mine patterns in databases with pathological size. For example, experiments in genome biology usually provide databases with thousands of attributes (genes) but only tens of objects (experiments). In this case, mining the "transposed" database runs through a smaller search space, and the Galois connection allows to infer the closed patterns of the original database. We focus here on constrained pattern mining for those unusual databases and give a theoretical framework for database and constraint transposition. We discuss the properties of constraint transposition and look into classical constraints. We then address the problem of generating the closed patterns of the original database satisfying the constraint, starting from those mined in the "transposed" database. Finally, we show how to generate all the patterns satisfying the constraint from the closed ones.<|reference_end|>
arxiv
@article{jeudy2009database, title={Database Transposition for Constrained (Closed) Pattern Mining}, author={Baptiste Jeudy (LAHC, EURISE), Fran\c{c}ois Rioult (GREYC)}, journal={Knowledge Discovery in Inductive Databases, Third International Workshop, KDID 2004, Pisa, Italy, Septembre 2004, Revised Selected and Invited Papers, Bart Goethals, Arno Siebes (Ed.) (2004) 89-107}, year={2009}, archivePrefix={arXiv}, eprint={0902.1259}, primaryClass={cs.LG} }
jeudy2009database
arxiv-6328
0902.1260
Nonclairvoyant Speed Scaling for Flow and Energy
<|reference_start|>Nonclairvoyant Speed Scaling for Flow and Energy: We study online nonclairvoyant speed scaling to minimize total flow time plus energy. We first consider the traditional model where the power function is $P(s) = s^\alpha$. We give a nonclairvoyant algorithm that is shown to be $O(\alpha^3)$-competitive. We then show an $\Omega(\alpha^{1/3-\epsilon})$ lower bound on the competitive ratio of any nonclairvoyant algorithm. We also show that there are power functions for which no nonclairvoyant algorithm can be $O(1)$-competitive.<|reference_end|>
arxiv
@article{chan2009nonclairvoyant, title={Nonclairvoyant Speed Scaling for Flow and Energy}, author={Ho-Leung Chan, Jeff Edmonds, Tak-Wah Lam, Lap-Kei Lee, Alberto Marchetti-Spaccamela, Kirk Pruhs}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 255-264}, year={2009}, archivePrefix={arXiv}, eprint={0902.1260}, primaryClass={cs.DS} }
chan2009nonclairvoyant
arxiv-6329
0902.1261
An Approximation Algorithm for l\infty-Fitting Robinson Structures to Distances
<|reference_start|>An Approximation Algorithm for l\infty-Fitting Robinson Structures to Distances: In this paper, we present a factor 16 approximation algorithm for the following NP-hard distance fitting problem: given a finite set X and a distance d on X, find a Robinsonian distance $d_R$ on X minimizing the $l_\infty$-error $\|d - d_R\|_\infty = \max_{x,y \in X} |d(x, y) - d_R(x, y)|$. A distance $d_R$ on a finite set X is Robinsonian if its matrix can be symmetrically permuted so that its elements do not decrease when moving away from the main diagonal along any row or column. Robinsonian distances generalize ultrametrics, line distances and occur in the seriation problems and in classification.<|reference_end|>
arxiv
@article{chepoi2009an, title={An Approximation Algorithm for l\infty-Fitting Robinson Structures to Distances}, author={Victor Chepoi (LIF), M. Seston (LIF)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 265-276}, year={2009}, archivePrefix={arXiv}, eprint={0902.1261}, primaryClass={cs.DS cs.CC} }
chepoi2009an
arxiv-6330
0902.1267
A Note on the Diagonalization of the Discrete Fourier Transform
<|reference_start|>A Note on the Diagonalization of the Discrete Fourier Transform: Following the approach developed by S. Gurevich and R. Hadani, an analytical formula of the canonical basis of the DFT is given for the case $N=p$ where $p$ is a prime number and $p\equiv 1$ (mod 4).<|reference_end|>
arxiv
@article{wang2009a, title={A Note on the Diagonalization of the Discrete Fourier Transform}, author={Zilong Wang, Guang Gong}, journal={arXiv preprint arXiv:0902.1267}, year={2009}, doi={10.1016/j.acha.2009.05.003}, archivePrefix={arXiv}, eprint={0902.1267}, primaryClass={cs.IT cs.DM math.IT math.RT} }
wang2009a
arxiv-6331
0902.1275
Delay Performance Optimization for Multiuser Diversity Systems with Bursty-Traffic and Heterogeneous Wireless Links
<|reference_start|>Delay Performance Optimization for Multiuser Diversity Systems with Bursty-Traffic and Heterogeneous Wireless Links: This paper presents a cross-layer approach for optimizing the delay performance of a multiuser diversity system with heterogeneous block-fading channels and a delay-sensitive bursty-traffic. We consider the downlink of a time-slotted multiuser system employing opportunistic scheduling with fair performance at the medium access (MAC) layer and adaptive modulation and coding (AMC) with power control at the physical layer. Assuming individual user buffers which temporarily store the arrival traffic of users at the MAC layer, we first present a large deviations based statistical model to evaluate the delay-bound violation of packets in the user buffers. Aiming at minimizing the delay probability of the individual users, we then optimize the AMC and power control module subject to a target packet-error rate constraint. In the case of a quantized feedback channel, we also present a constant-power AMC based opportunistic scheduling scheme. Numerical and simulation results are provided to evaluate the delay performance of the proposed adaptation schemes in a multiuser setup.<|reference_end|>
arxiv
@article{harsini2009delay, title={Delay Performance Optimization for Multiuser Diversity Systems with Bursty-Traffic and Heterogeneous Wireless Links}, author={Jalil Seifali Harsini, Farshad Lahouti}, journal={arXiv preprint arXiv:0902.1275}, year={2009}, archivePrefix={arXiv}, eprint={0902.1275}, primaryClass={cs.IT math.IT} }
harsini2009delay
arxiv-6332
0902.1278
Fountain Codes Based Distributed Storage Algorithms for Large-scale Wireless Sensor Networks
<|reference_start|>Fountain Codes Based Distributed Storage Algorithms for Large-scale Wireless Sensor Networks: We consider large-scale sensor networks with n nodes, out of which k are in possession, (e.g., have sensed or collected in some other way) k information packets. In the scenarios in which network nodes are vulnerable because of, for example, limited energy or a hostile environment, it is desirable to disseminate the acquired information throughout the network so that each of the n nodes stores one (possibly coded) packet and the original k source packets can be recovered later in a computationally simple way from any (1 + \epsilon)k nodes for some small \epsilon > 0. We developed two distributed algorithms for solving this problem based on simple random walks and Fountain codes. Unlike all previously developed schemes, our solution is truly distributed, that is, nodes do not know n, k or connectivity in the network, except in their own neighborhoods, and they do not maintain any routing tables. In the first algorithm, all the sensors have the knowledge of n and k. In the second algorithm, each sensor estimates these parameters through the random walk dissemination. We present analysis of the communication/transmission and encoding/decoding complexity of these two algorithms, and provide extensive simulation results as well<|reference_end|>
arxiv
@article{aly2009fountain, title={Fountain Codes Based Distributed Storage Algorithms for Large-scale Wireless Sensor Networks}, author={Salah A. Aly, Zhenning Kong, Emina Soljanin}, journal={Proc. IEEE/ACM IPSN 2008, pp 171-182}, year={2009}, doi={10.1109/IPSN.2008.64}, archivePrefix={arXiv}, eprint={0902.1278}, primaryClass={cs.IT cs.DS cs.NI math.IT} }
aly2009fountain
arxiv-6333
0902.1284
Multi-Label Prediction via Compressed Sensing
<|reference_start|>Multi-Label Prediction via Compressed Sensing: We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting.<|reference_end|>
arxiv
@article{hsu2009multi-label, title={Multi-Label Prediction via Compressed Sensing}, author={Daniel Hsu, Sham M. Kakade, John Langford, Tong Zhang}, journal={arXiv preprint arXiv:0902.1284}, year={2009}, archivePrefix={arXiv}, eprint={0902.1284}, primaryClass={cs.LG} }
hsu2009multi-label
arxiv-6334
0902.1299
Perfect Quantum Network Communication Protocol Based on Classical Network Coding
<|reference_start|>Perfect Quantum Network Communication Protocol Based on Classical Network Coding: This paper considers a problem of quantum communication between parties that are connected through a network of quantum channels. The model in this paper assumes that there is no prior entanglement shared among any of the parties, but that classical communication is free. The task is to perfectly transfer an unknown quantum state from a source subsystem to a target subsystem, where both source and target are formed by ordered sets of some of the nodes. It is proved that a lower bound of the rate at which this quantum communication task is possible is given by the classical min-cut max-flow theorem of network coding, where the capacities in question are the quantum capacities of the edges of the network.<|reference_end|>
arxiv
@article{kobayashi2009perfect, title={Perfect Quantum Network Communication Protocol Based on Classical Network Coding}, author={Hirotada Kobayashi, Francois Le Gall, Harumichi Nishimura, Martin Roetteler}, journal={Proceedings 2010 IEEE International Symposium on Information Theory (ISIT 2010), pp. 2686-2690}, year={2009}, doi={10.1109/ISIT.2010.5513644}, archivePrefix={arXiv}, eprint={0902.1299}, primaryClass={quant-ph cs.IT math.IT} }
kobayashi2009perfect
arxiv-6335
0902.1351
On the minimum distance graph of an extended Preparata code
<|reference_start|>On the minimum distance graph of an extended Preparata code: The minimum distance graph of an extended Preparata code P(m) has vertices corresponding to codewords and edges corresponding to pairs of codewords that are distance 6 apart. The clique structure of this graph is investigated and it is established that the minimum distance graphs of two extended Preparata codes are isomorphic if and only if the codes are equivalent.<|reference_end|>
arxiv
@article{fernández-córdoba2009on, title={On the minimum distance graph of an extended Preparata code}, author={C. Fern\'andez-C\'ordoba and K. T. Phelps}, journal={arXiv preprint arXiv:0902.1351}, year={2009}, archivePrefix={arXiv}, eprint={0902.1351}, primaryClass={cs.IT cs.DM math.IT} }
fernández-córdoba2009on
arxiv-6336
0902.1364
A Note on Contractible Edges in Chordal Graphs
<|reference_start|>A Note on Contractible Edges in Chordal Graphs: Contraction of an edge merges its end points into a new vertex which is adjacent to each neighbor of the end points of the edge. An edge in a $k$-connected graph is {\em contractible} if its contraction does not result in a graph of lower connectivity. We characterize contractible edges in chordal graphs using properties of tree decompositions with respect to minimal vertex separators.<|reference_end|>
arxiv
@article{narayanaswamy2009a, title={A Note on Contractible Edges in Chordal Graphs}, author={N.S.Narayanaswamy, N.Sadagopan and Apoorve Dubey}, journal={arXiv preprint arXiv:0902.1364}, year={2009}, archivePrefix={arXiv}, eprint={0902.1364}, primaryClass={cs.DM} }
narayanaswamy2009a
arxiv-6337
0902.1378
On the Additive Constant of the k-server Work Function Algorithm
<|reference_start|>On the Additive Constant of the k-server Work Function Algorithm: We consider the Work Function Algorithm for the k-server problem. We show that if the Work Function Algorithm is c-competitive, then it is also strictly (2c)-competitive. As a consequence of [Koutsoupias and Papadimitriou, JACM 1995] this also shows that the Work Function Algorithm is strictly (4k-2)-competitive.<|reference_end|>
arxiv
@article{emek2009on, title={On the Additive Constant of the k-server Work Function Algorithm}, author={Yuval Emek, Pierre Fraigniaud, Amos Korman, Adi Rosen}, journal={arXiv preprint arXiv:0902.1378}, year={2009}, archivePrefix={arXiv}, eprint={0902.1378}, primaryClass={cs.DS} }
emek2009on
arxiv-6338
0902.1394
Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems
<|reference_start|>Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems: This paper addresses the following foundational question: what is the maximum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? As shown in this paper, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer with respect to the case of systems where the streamed content is distributed through one or more flows (sub-streams). To circumvent the complexity emerging when directly dealing with delay, we express performance in term of a convenient metric, called "stream diffusion metric". We show that it is directly related to the end-to-end minimum delay achievable in a P2P streaming network. In a homogeneous scenario, we derive a performance bound for such metric, and we show how this bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. In this bound, k-step Fibonacci sequences do emerge, and appear to set the fundamental laws that characterize the optimal operation of chunk-based systems.<|reference_end|>
arxiv
@article{bianchi2009fundamental, title={Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems}, author={Giuseppe Bianchi, Nicola Blefari Melazzi, Lorenzo Bracciale, Francesca Lo Piccolo, Stefano Salsano}, journal={Proceedings of 21st International Teletraffic Congress (ITC 21), 2009}, year={2009}, archivePrefix={arXiv}, eprint={0902.1394}, primaryClass={cs.PF cs.MM} }
bianchi2009fundamental
arxiv-6339
0902.1400
The Price of Anarchy in Cooperative Network Creation Games
<|reference_start|>The Price of Anarchy in Cooperative Network Creation Games: In general, the games are played on a host graph, where each node is a selfish independent agent (player) and each edge has a fixed link creation cost \alpha. Together the agents create a network (a subgraph of the host graph) while selfishly minimizing the link creation costs plus the sum of the distances to all other players (usage cost). In this paper, we pursue two important facets of the network creation game. First, we study extensively a natural version of the game, called the cooperative model, where nodes can collaborate and share the cost of creating any edge in the host graph. We prove the first nontrivial bounds in this model, establishing that the price of anarchy is polylogarithmic in n for all values of \alpha in complete host graphs. This bound is the first result of this type for any version of the network creation game; most previous general upper bounds are polynomial in n. Interestingly, we also show that equilibrium graphs have polylogarithmic diameter for the most natural range of \alpha (at most n polylg n). Second, we study the impact of the natural assumption that the host graph is a general graph, not necessarily complete. This model is a simple example of nonuniform creation costs among the edges (effectively allowing weights of \alpha and \infty). We prove the first assemblage of upper and lower bounds for this context, establishing nontrivial tight bounds for many ranges of \alpha, for both the unilateral and cooperative versions of network creation. In particular, we establish polynomial lower bounds for both versions and many ranges of \alpha, even for this simple nonuniform cost model, which sharply contrasts the conjectured constant bounds for these games in complete (uniform) graphs.<|reference_end|>
arxiv
@article{demaine2009the, title={The Price of Anarchy in Cooperative Network Creation Games}, author={Erik D. Demaine (MIT), Mohammadtaghi Hajiaghayi (MIT), Hamid Mahini, Morteza Zadimoghaddam}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 301-312}, year={2009}, archivePrefix={arXiv}, eprint={0902.1400}, primaryClass={cs.GT} }
demaine2009the
arxiv-6340
0902.1475
Personalised and Dynamic Trust in Social Networks
<|reference_start|>Personalised and Dynamic Trust in Social Networks: We propose a novel trust metric for social networks which is suitable for application in recommender systems. It is personalised and dynamic and allows to compute the indirect trust between two agents which are not neighbours based on the direct trust between agents that are neighbours. In analogy to some personalised versions of PageRank, this metric makes use of the concept of feedback centrality and overcomes some of the limitations of other trust metrics. In particular, it does not neglect cycles and other patterns characterising social networks, as some other algorithms do. In order to apply the metric to recommender systems, we propose a way to make trust dynamic over time. We show by means of analytical approximations and computer simulations that the metric has the desired properties. Finally, we carry out an empirical validation on a dataset crawled from an Internet community and compare the performance of a recommender system using our metric to one using collaborative filtering.<|reference_end|>
arxiv
@article{walter2009personalised, title={Personalised and Dynamic Trust in Social Networks}, author={Frank E. Walter, Stefano Battiston, Frank Schweitzer}, journal={arXiv preprint arXiv:0902.1475}, year={2009}, archivePrefix={arXiv}, eprint={0902.1475}, primaryClass={cs.CY cs.IR physics.soc-ph} }
walter2009personalised
arxiv-6341
0902.1505
On the Bures Volume of Separable Quantum States
<|reference_start|>On the Bures Volume of Separable Quantum States: We obtain two sided estimates for the Bures volume of an arbitrary subset of the set of $N\times N$ density matrices, in terms of the Hilbert-Schmidt volume of that subset. For general subsets, our results are essentially optimal (for large $N$). As applications, we derive in particular nontrivial lower and upper bounds for the Bures volume of sets of separable states and for sets of states with positive partial transpose. PACS numbers: 02.40.Ft, 03.65.Db, 03.65.Ud, 03.67.Mn<|reference_end|>
arxiv
@article{ye2009on, title={On the Bures Volume of Separable Quantum States}, author={Deping Ye}, journal={JOURNAL OF MATHEMATICAL PHYSICS 50, 083502 (2009)}, year={2009}, doi={10.1063/1.3187216}, archivePrefix={arXiv}, eprint={0902.1505}, primaryClass={quant-ph cs.IT math.FA math.IT math.MG} }
ye2009on
arxiv-6342
0902.1587
Forward analysis for WSTS, Part I: Completions
<|reference_start|>Forward analysis for WSTS, Part I: Completions: Well-structured transition systems provide the right foundation to compute a finite basis of the set of predecessors of the upward closure of a state. The dual problem, to compute a finite representation of the set of successors of the downward closure of a state, is harder: Until now, the theoretical framework for manipulating downward-closed sets was missing. We answer this problem, using insights from domain theory (dcpos and ideal completions), from topology (sobrifications), and shed new light on the notion of adequate domains of limits.<|reference_end|>
arxiv
@article{finkel2009forward, title={Forward analysis for WSTS, Part I: Completions}, author={Alain Finkel (LSV), Jean Goubault-Larrecq (LSV)}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 433-444}, year={2009}, archivePrefix={arXiv}, eprint={0902.1587}, primaryClass={cs.LO} }
finkel2009forward
arxiv-6343
0902.1591
Correlated Sources over Broadcast Channels
<|reference_start|>Correlated Sources over Broadcast Channels: The problem of reliable transmission of correlated sources over the broadcast channel, originally studied by Han and Costa, is revisited. An alternative characterization of their sufficient condition for reliable transmission is given, which includes results of Marton for channel coding over broadcast channels and of Gray and Wyner for distributed source coding. A ``minimalistic'' coding scheme is presented, which is based on joint typicality encoding and decoding, without requiring the use of Cover's superposition coding, random hashing, and common part between two sources. The analysis of the coding scheme is also conceptually simple and relies on a new multivariate covering lemma and an application of the Fourier--Motzkin elimination procedure.<|reference_end|>
arxiv
@article{minero2009correlated, title={Correlated Sources over Broadcast Channels}, author={Paolo Minero and Young-Han Kim}, journal={arXiv preprint arXiv:0902.1591}, year={2009}, archivePrefix={arXiv}, eprint={0902.1591}, primaryClass={cs.IT math.IT} }
minero2009correlated
arxiv-6344
0902.1602
An Order on Sets of Tilings Corresponding to an Order on Languages
<|reference_start|>An Order on Sets of Tilings Corresponding to an Order on Languages: Traditionally a tiling is defined with a finite number of finite forbidden patterns. We can generalize this notion considering any set of patterns. Generalized tilings defined in this way can be studied with a dynamical point of view, leading to the notion of subshift. In this article we establish a correspondence between an order on subshifts based on dynamical transformations on them and an order on languages of forbidden patterns based on computability properties.<|reference_end|>
arxiv
@article{aubrun2009an, title={An Order on Sets of Tilings Corresponding to an Order on Languages}, author={Nathalie Aubrun (IGM), Mathieu Sablik (LATP)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 99-110}, year={2009}, archivePrefix={arXiv}, eprint={0902.1602}, primaryClass={cs.DM} }
aubrun2009an
arxiv-6345
0902.1604
A Comparison of Techniques for Sampling Web Pages
<|reference_start|>A Comparison of Techniques for Sampling Web Pages: As the World Wide Web is growing rapidly, it is getting increasingly challenging to gather representative information about it. Instead of crawling the web exhaustively one has to resort to other techniques like sampling to determine the properties of the web. A uniform random sample of the web would be useful to determine the percentage of web pages in a specific language, on a topic or in a top level domain. Unfortunately, no approach has been shown to sample the web pages in an unbiased way. Three promising web sampling algorithms are based on random walks. They each have been evaluated individually, but making a comparison on different data sets is not possible. We directly compare these algorithms in this paper. We performed three random walks on the web under the same conditions and analyzed their outcomes in detail. We discuss the strengths and the weaknesses of each algorithm and propose improvements based on experimental results.<|reference_end|>
arxiv
@article{baykan2009a, title={A Comparison of Techniques for Sampling Web Pages}, author={Eda Baykan (EPFL), Monika Henzinger (EPFL), Stefan F. Keller, Sebastian De Castelberg, Markus Kinzler}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 13-30}, year={2009}, archivePrefix={arXiv}, eprint={0902.1604}, primaryClass={cs.DS} }
baykan2009a
arxiv-6346
0902.1605
Lower Bounds for Multi-Pass Processing of Multiple Data Streams
<|reference_start|>Lower Bounds for Multi-Pass Processing of Multiple Data Streams: This paper gives a brief overview of computation models for data stream processing, and it introduces a new model for multi-pass processing of multiple streams, the so-called mp2s-automata. Two algorithms for solving the set disjointness problem with these automata are presented. The main technical contribution of this paper is the proof of a lower bound on the size of memory and the number of heads that are required for solving the set disjointness problem with mp2s-automata.<|reference_end|>
arxiv
@article{schweikardt2009lower, title={Lower Bounds for Multi-Pass Processing of Multiple Data Streams}, author={Nicole Schweikardt}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 51-62}, year={2009}, archivePrefix={arXiv}, eprint={0902.1605}, primaryClass={cs.DS} }
schweikardt2009lower
arxiv-6347
0902.1609
Asymptotically Optimal Lower Bounds on the NIH-Multi-Party Information
<|reference_start|>Asymptotically Optimal Lower Bounds on the NIH-Multi-Party Information: Here we prove an asymptotically optimal lower bound on the information complexity of the k-party disjointness function with the unique intersection promise, an important special case of the well known disjointness problem, and the $AND_k$-function in the number in the hand model. Our $\Omega(n/k)$ bound for disjointness improves on an earlier $\Omega(n/(k \log k))$ bound by Chakrabarti et al. (2003), who obtained an asymptotically tight lower bound for one-way protocols, but failed to do so for the general case. Our result eliminates both the gap between the upper and the lower bound for unrestricted protocols and the gap between the lower bounds for one-way protocols and unrestricted protocols.<|reference_end|>
arxiv
@article{gronemeier2009asymptotically, title={Asymptotically Optimal Lower Bounds on the NIH-Multi-Party Information}, author={Andr\'e Gronemeier}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 505-516}, year={2009}, archivePrefix={arXiv}, eprint={0902.1609}, primaryClass={cs.CC} }
gronemeier2009asymptotically
arxiv-6348
0902.1610
Package upgrades in FOSS distributions: details and challenges
<|reference_start|>Package upgrades in FOSS distributions: details and challenges: The upgrade problems faced by Free and Open Source Software distributions have characteristics not easily found elsewhere. We describe the structure of packages and their role in the upgrade process. We show that state of the art package managers have shortcomings inhibiting their ability to cope with frequent upgrade failures. We survey current countermeasures to such failures, argue that they are not satisfactory, and sketch alternative solutions.<|reference_end|>
arxiv
@article{di cosmo2009package, title={Package upgrades in FOSS distributions: details and challenges}, author={Roberto Di Cosmo (PPS), Stefano Zacchiroli (PPS), Paulo Trezentos}, journal={International Workshop On Hot Topics In Software Upgrades Proceedings of the 1st International Workshop on Hot Topics in Software Upgrades, Nashville, Tennessee : \'Etats-Unis d'Am\'erique (2008)}, year={2009}, doi={10.1145/1490283.1490292}, archivePrefix={arXiv}, eprint={0902.1610}, primaryClass={cs.SE cs.OS} }
di cosmo2009package
arxiv-6349
0902.1612
A baby steps/giant steps Monte Carlo algorithm for computing roadmaps in smooth compact real hypersurfaces
<|reference_start|>A baby steps/giant steps Monte Carlo algorithm for computing roadmaps in smooth compact real hypersurfaces: We consider the problem of constructing roadmaps of real algebraic sets. The problem was introduced by Canny to answer connectivity questions and solve motion planning problems. Given $s$ polynomial equations with rational coefficients, of degree $D$ in $n$ variables, Canny's algorithm has a Monte Carlo cost of $s^n\log(s) D^{O(n^2)}$ operations in $\mathbb{Q}$; a deterministic version runs in time $s^n \log(s) D^{O(n^4)}$. The next improvement was due to Basu, Pollack and Roy, with an algorithm of deterministic cost $s^{d+1} D^{O(n^2)}$ for the more general problem of computing roadmaps of semi-algebraic sets ($d \le n$ is the dimension of an associated object). We give a Monte Carlo algorithm of complexity $(nD)^{O(n^{1.5})}$ for the problem of computing a roadmap of a compact hypersurface $V$ of degree $D$ in $n$ variables; we also have to assume that $V$ has a finite number of singular points. Even under these extra assumptions, no previous algorithm featured a cost better than $D^{O(n^2)}$.<|reference_end|>
arxiv
@article{din2009a, title={A baby steps/giant steps Monte Carlo algorithm for computing roadmaps in smooth compact real hypersurfaces}, author={Mohab Safey El Din (LIP6, INRIA Rocquencourt), \'Eric Schost}, journal={arXiv preprint arXiv:0902.1612}, year={2009}, number={RR-6832}, archivePrefix={arXiv}, eprint={0902.1612}, primaryClass={cs.SC} }
din2009a
arxiv-6350
0902.1617
Perfect Matchings in \~O(n^{1.5}) Time in Regular Bipartite Graphs
<|reference_start|>Perfect Matchings in \~O(n^{1.5}) Time in Regular Bipartite Graphs: We consider the well-studied problem of finding a perfect matching in $d$-regular bipartite graphs with $2n$ vertices and $m = nd$ edges. While the best-known algorithm for general bipartite graphs (due to Hopcroft and Karp) takes $O(m \sqrt{n})$ time, in regular bipartite graphs, a perfect matching is known to be computable in $O(m)$ time. Very recently, the $O(m)$ bound was improved to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$ expected time, an expression that is bounded by $\tilde{O}(n^{1.75})$. In this paper, we further improve this result by giving an $O(\min\{m, \frac{n^2\ln^3 n}{d}\})$ expected time algorithm for finding a perfect matching in regular bipartite graphs; as a function of $n$ alone, the algorithm takes expected time $O((n\ln n)^{1.5})$. To obtain this result, we design and analyze a two-stage sampling scheme that reduces the problem of finding a perfect matching in a regular bipartite graph to the same problem on a subsampled bipartite graph with $O(n\ln n)$ edges that has a perfect matching with high probability. The matching is then recovered using the Hopcroft-Karp algorithm. While the standard analysis of Hopcroft-Karp gives us an $\tilde{O}(n^{1.5})$ running time, we present a tighter analysis for our special case that results in the stronger $\tilde{O}(\min\{m, \frac{n^2}{d} \})$ time mentioned earlier. Our proof of correctness of this sampling scheme uses a new correspondence theorem between cuts and Hall's theorem ``witnesses'' for a perfect matching in a bipartite graph that we prove. We believe this theorem may be of independent interest; as another example application, we show that a perfect matching in the support of an $n \times n$ doubly stochastic matrix with $m$ non-zero entries can be found in expected time $\tilde{O}(m + n^{1.5})$.<|reference_end|>
arxiv
@article{goel2009perfect, title={Perfect Matchings in \~O(n^{1.5}) Time in Regular Bipartite Graphs}, author={Ashish Goel and Michael Kapralov and Sanjeev Khanna}, journal={arXiv preprint arXiv:0902.1617}, year={2009}, archivePrefix={arXiv}, eprint={0902.1617}, primaryClass={cs.DS cs.DM} }
goel2009perfect
arxiv-6351
0902.1629
Improvements of real coded genetic algorithms based on differential operators preventing premature convergence
<|reference_start|>Improvements of real coded genetic algorithms based on differential operators preventing premature convergence: This paper presents several types of evolutionary algorithms (EAs) used for global optimization on real domains. The interest has been focused on multimodal problems, where the difficulty of premature convergence usually occurs. First, the standard genetic algorithm (SGA) using binary encoding of real values and its unsatisfactory behavior with multimodal problems is briefly reviewed, together with some improvements for fighting premature convergence. Two types of real encoded methods based on differential operators are examined in detail: the differential evolution (DE), a very modern and effective method first published by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm SADE proposed by the authors. In addition, an improvement of the SADE method, called CERAF technology, enabling the population of solutions to escape from local extremes, is examined. All methods are tested on an identical set of objective functions and a systematic comparison based on a reliable methodology is presented. It is confirmed that real coded methods generally exhibit better behavior on real domains than the binary algorithms, even when extended by several improvements. Furthermore, the positive influence of the differential operators due to their possibility of self-adaptation is demonstrated. From the reliability point of view, it seems that the real encoded differential algorithm, improved by the technology described in this paper, is a universal and reliable method capable of solving all proposed test problems.<|reference_end|>
arxiv
@article{hrstka2009improvements, title={Improvements of real coded genetic algorithms based on differential operators preventing premature convergence}, author={O. Hrstka and A. Kucerova}, journal={Advances in Engineering Software, 35 (3-4), 237-246, 2004}, year={2009}, doi={10.1016/S0965-9978(03)00113-3}, archivePrefix={arXiv}, eprint={0902.1629}, primaryClass={cs.NE cs.AI} }
hrstka2009improvements
arxiv-6352
0902.1634
A bound on the size of linear codes
<|reference_start|>A bound on the size of linear codes: We present a bound on the size of linear codes. This bound is independent of other known bounds, e.g. the Griesmer bound.<|reference_end|>
arxiv
@article{guerrini2009a, title={A bound on the size of linear codes}, author={Eleonora Guerrini, Massimiliano Sala}, journal={arXiv preprint arXiv:0902.1634}, year={2009}, archivePrefix={arXiv}, eprint={0902.1634}, primaryClass={cs.IT math.IT} }
guerrini2009a
arxiv-6353
0902.1647
A competitive comparison of different types of evolutionary algorithms
<|reference_start|>A competitive comparison of different types of evolutionary algorithms: This paper presents comparison of several stochastic optimization algorithms developed by authors in their previous works for the solution of some problems arising in Civil Engineering. The introduced optimization methods are: the integer augmented simulated annealing (IASA), the real-coded augmented simulated annealing (RASA), the differential evolution (DE) in its original fashion developed by R. Storn and K. Price and simplified real-coded differential genetic algorithm (SADE). Each of these methods was developed for some specific optimization problem; namely the Chebychev trial polynomial problem, the so called type 0 function and two engineering problems - the reinforced concrete beam layout and the periodic unit cell problem respectively. Detailed and extensive numerical tests were performed to examine the stability and efficiency of proposed algorithms. The results of our experiments suggest that the performance and robustness of RASA, IASA and SADE methods are comparable, while the DE algorithm performs slightly worse. This fact together with a small number of internal parameters promotes the SADE method as the most robust for practical use.<|reference_end|>
arxiv
@article{hrstka2009a, title={A competitive comparison of different types of evolutionary algorithms}, author={O. Hrstka, A. Kucerova, M. Leps and J. Zeman}, journal={Computers & Structures, 81 (18-19), 1979-1990, 2003}, year={2009}, doi={10.1016/S0045-7949(03)00217-7}, archivePrefix={arXiv}, eprint={0902.1647}, primaryClass={cs.NE cs.AI} }
hrstka2009a
arxiv-6354
0902.1661
Even Faster Exact Bandwidth
<|reference_start|>Even Faster Exact Bandwidth: We deal with exact algorithms for Bandwidth, a long studied NP-hard problem. For a long time nothing better than the trivial O*(n!) exhaustive search was known. In 2000, Feige and Kilian came up with a O*(10^n)-time algorithm. Recently we presented an algorithm that runs in O*(5^n) time and O*(2^n) space. In this paper we present a major modification to our algorithm which makes it run in O(4.83^n) time with the cost of O*(4^n) space complexity. This modification allowed us to perform Measure & Conquer analysis for the time complexity, which was not used for such types of problems before.<|reference_end|>
arxiv
@article{cygan2009even, title={Even Faster Exact Bandwidth}, author={Marek Cygan and Marcin Pilipczuk}, journal={arXiv preprint arXiv:0902.1661}, year={2009}, archivePrefix={arXiv}, eprint={0902.1661}, primaryClass={cs.CC cs.DS} }
cygan2009even
arxiv-6355
0902.1665
Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field
<|reference_start|>Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures Part II: identification from tests under heterogeneous stress field: In Part I of this paper we have presented a simple model capable of describing the localized failure of a massive structure. In this part, we discuss the identification of the model parameters from two kinds of experiments: a uniaxial tensile test and a three-point bending test. The former is used only for illustration of material parameter response dependence, and we focus mostly upon the latter, discussing the inverse optimization problem for which the specimen is subjected to a heterogeneous stress field.<|reference_end|>
arxiv
@article{kucerova2009novel, title={Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field}, author={A. Kucerova, D. Brancherie, A. Ibrahimbegovic, J. Zeman and Z. Bittnar}, journal={Engineering Computations, 26(1/2), 128-144, 2009}, year={2009}, doi={10.1108/02644400910924834}, archivePrefix={arXiv}, eprint={0902.1665}, primaryClass={cs.NE cs.CE} }
kucerova2009novel
arxiv-6356
0902.1690
Back analysis of microplane model parameters using soft computing methods
<|reference_start|>Back analysis of microplane model parameters using soft computing methods: A new procedure based on layered feed-forward neural networks for the microplane material model parameters identification is proposed in the present paper. Novelties are usage of the Latin Hypercube Sampling method for the generation of training sets, a systematic employment of stochastic sensitivity analysis and a genetic algorithm-based training of a neural network by an evolutionary algorithm. Advantages and disadvantages of this approach together with possible extensions are thoroughly discussed and analyzed.<|reference_end|>
arxiv
@article{kucerova2009back, title={Back analysis of microplane model parameters using soft computing methods}, author={A. Kucerova, M. Leps and J. Zeman}, journal={CAMES: Computer Assisted Mechanics and Engineering Sciences, 14 (2), 219-242, 2007}, year={2009}, archivePrefix={arXiv}, eprint={0902.1690}, primaryClass={cs.NE cs.AI} }
kucerova2009back
arxiv-6357
0902.1693
Fast Evaluation of Interlace Polynomials on Graphs of Bounded Treewidth
<|reference_start|>Fast Evaluation of Interlace Polynomials on Graphs of Bounded Treewidth: We consider the multivariate interlace polynomial introduced by Courcelle (2008), which generalizes several interlace polynomials defined by Arratia, Bollobas, and Sorkin (2004) and by Aigner and van der Holst (2004). We present an algorithm to evaluate the multivariate interlace polynomial of a graph with n vertices given a tree decomposition of the graph of width k. The best previously known result (Courcelle 2008) employs a general logical framework and leads to an algorithm with running time f(k)*n, where f(k) is doubly exponential in k. Analyzing the GF(2)-rank of adjacency matrices in the context of tree decompositions, we give a faster and more direct algorithm. Our algorithm uses 2^{3k^2+O(k)}*n arithmetic operations and can be efficiently implemented in parallel.<|reference_end|>
arxiv
@article{bläser2009fast, title={Fast Evaluation of Interlace Polynomials on Graphs of Bounded Treewidth}, author={Markus Bl\"aser, Christian Hoffmann}, journal={Algorithmica, 61(1):3-35, 2011}, year={2009}, doi={10.1007/s00453-010-9439-4}, archivePrefix={arXiv}, eprint={0902.1693}, primaryClass={cs.DS} }
bläser2009fast
arxiv-6358
0902.1700
Linear Time Split Decomposition Revisited
<|reference_start|>Linear Time Split Decomposition Revisited: Given a family F of subsets of a ground set V, its orthogonal is defined to be the family of subsets that do not overlap any element of F. Using this tool we revisit the problem of designing a simple linear time algorithm for undirected graph split (also known as 1-join) decomposition.<|reference_end|>
arxiv
@article{charbit2009linear, title={Linear Time Split Decomposition Revisited}, author={Pierre Charbit, Fabien de Montgolfier, Mathieu Raffinot}, journal={arXiv preprint arXiv:0902.1700}, year={2009}, archivePrefix={arXiv}, eprint={0902.1700}, primaryClass={cs.DM cs.DS} }
charbit2009linear
arxiv-6359
0902.1734
A New Achievable Rate for the Gaussian Parallel Relay Channel
<|reference_start|>A New Achievable Rate for the Gaussian Parallel Relay Channel: Schein and Gallager introduced the Gaussian parallel relay channel in 2000. They proposed the Amplify-and-Forward (AF) and the Decode-and-Forward (DF) strategies for this channel. For a long time, the best known achievable rate for this channel was based on the AF and DF with time sharing (AF-DF). Recently, a Rematch-and-Forward (RF) scheme for the scenario in which different amounts of bandwidth can be assigned to the first and second hops were proposed. In this paper, we propose a \emph{Combined Amplify-and-Decode Forward (CADF)} scheme for the Gaussian parallel relay channel. We prove that the CADF scheme always gives a better achievable rate compared to the RF scheme, when there is a bandwidth mismatch between the first hop and the second hop. Furthermore, for the equal bandwidth case (Schein's setup), we show that the time sharing between the CADF and the DF schemes (CADF-DF) leads to a better achievable rate compared to the time sharing between the RF and the DF schemes (RF-DF) as well as the AF-DF.<|reference_end|>
arxiv
@article{saeed2009a, title={A New Achievable Rate for the Gaussian Parallel Relay Channel}, author={Seyed Saeed Changiz Rezaei, Shahab Oveis Gharan, Amir K. Khandani}, journal={arXiv preprint arXiv:0902.1734}, year={2009}, archivePrefix={arXiv}, eprint={0902.1734}, primaryClass={cs.IT math.IT} }
saeed2009a
arxiv-6360
0902.1735
Cover Time and Broadcast Time
<|reference_start|>Cover Time and Broadcast Time: We introduce a new technique for bounding the cover time of random walks by relating it to the runtime of randomized broadcast. In particular, we strongly confirm for dense graphs the intuition of Chandra et al. \cite{CRRST97} that "the cover time of the graph is an appropriate metric for the performance of certain kinds of randomized broadcast algorithms". In more detail, our results are as follows: For any graph $G=(V,E)$ of size $n$ and minimum degree $\delta$, we have $\mathcal{R}(G)= \Oh(\frac{|E|}{\delta} \cdot \log n)$, where $\mathcal{R}(G)$ denotes the quotient of the cover time and broadcast time. This bound is tight for binary trees and tight up to logarithmic factors for many graphs including hypercubes, expanders and lollipop graphs. For any $\delta$-regular (or almost $\delta$-regular) graph $G$ it holds that $\mathcal{R}(G) = \Omega(\frac{\delta^2}{n} \cdot \frac{1}{\log n})$. Together with our upper bound on $\mathcal{R}(G)$, this lower bound strongly confirms the intuition of Chandra et al. for graphs with minimum degree $\Theta(n)$, since then the cover time equals the broadcast time multiplied by $n$ (neglecting logarithmic factors). Conversely, for any $\delta$ we construct almost $\delta$-regular graphs that satisfy $\mathcal{R}(G) = \Oh(\max \{\sqrt{n},\delta \} \cdot \log^2 n)$. Since any regular expander satisfies $\mathcal{R}(G) = \Theta(n)$, the strong relationship given above does not hold if $\delta$ is polynomially smaller than $n$. Our bounds also demonstrate that the relationship between cover time and broadcast time is much stronger than the known relationships between any of them and the mixing time (or the closely related spectral gap).<|reference_end|>
arxiv
@article{elsässer2009cover, title={Cover Time and Broadcast Time}, author={Robert Els\"asser, Thomas Sauerwald}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 373-384}, year={2009}, archivePrefix={arXiv}, eprint={0902.1735}, primaryClass={cs.DS math.PR math.ST stat.TH} }
elsässer2009cover
arxiv-6361
0902.1736
On the Statistical Characterization of Flows in Internet Traffic with Application to Sampling
<|reference_start|>On the Statistical Characterization of Flows in Internet Traffic with Application to Sampling: A new method of estimating some statistical characteristics of TCP flows in the Internet is developed in this paper. For this purpose, a new set of random variables (referred to as observables) is defined. When dealing with sampled traffic, these observables can easily be computed from sampled data. By adopting a convenient mouse/elephant dichotomy also dependent on traffic, it is shown how these variables give a reliable statistical representation of the number of packets transmitted by large flows during successive time intervals with an appropriate duration. A mathematical framework is developed to estimate the accuracy of the method. As an application, it is shown how one can estimate the number of large TCP flows when only sampled traffic is available. The algorithm proposed is tested against experimental data collected from different types of IP networks.<|reference_end|>
arxiv
@article{chabchoub2009on, title={On the Statistical Characterization of Flows in Internet Traffic with Application to Sampling}, author={Yousra Chabchoub (INRIA), Christine Fricker (INRIA), Fabrice Guillemin (FT R&D), Philippe Robert (INRIA)}, journal={arXiv preprint arXiv:0902.1736}, year={2009}, archivePrefix={arXiv}, eprint={0902.1736}, primaryClass={cs.NI} }
chabchoub2009on
arxiv-6362
0902.1737
Optimal cache-aware suffix selection
<|reference_start|>Optimal cache-aware suffix selection: Given string $S[1..N]$ and integer $k$, the {\em suffix selection} problem is to determine the $k$th lexicographically smallest amongst the suffixes $S[i... N]$, $1 \leq i \leq N$. We study the suffix selection problem in the cache-aware model that captures two-level memory inherent in computing systems, for a \emph{cache} of limited size $M$ and block size $B$. The complexity of interest is the number of block transfers. We present an optimal suffix selection algorithm in the cache-aware model, requiring $\Thetah{N/B}$ block transfers, for any string $S$ over an unbounded alphabet (where characters can only be compared), under the common tall-cache assumption (i.e. $M=\Omegah{B^{1+\epsilon}}$, where $\epsilon<1$). Our algorithm beats the bottleneck bound for permuting an input array to the desired output array, which holds for nearly any nontrivial problem in hierarchical memory models.<|reference_end|>
arxiv
@article{franceschini2009optimal, title={Optimal cache-aware suffix selection}, author={Gianni Franceschini, Roberto Grossi, S. Muthukrishnan}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 457-468}, year={2009}, archivePrefix={arXiv}, eprint={0902.1737}, primaryClass={cs.DS cs.AR} }
franceschini2009optimal
arxiv-6363
0902.1786
On the Dynamics of the Error Floor Behavior in (Regular) LDPC Codes
<|reference_start|>On the Dynamics of the Error Floor Behavior in (Regular) LDPC Codes: It is shown that dominant trapping sets of regular LDPC codes, so called absorption sets, undergo a two-phased dynamic behavior in the iterative message-passing decoding algorithm. Using a linear dynamic model for the iteration behavior of these sets, it is shown that they undergo an initial geometric growth phase which stabilizes in a final bit-flipping behavior where the algorithm reaches a fixed point. This analysis is shown to lead to very accurate numerical calculations of the error floor bit error rates down to error rates that are inaccessible by simulation. The topology of the dominant absorption sets of an example code, the IEEE 802.3an (2048,1723) regular LDPC code, are identified and tabulated using topological relationships in combination with search algorithms.<|reference_end|>
arxiv
@article{schlegel2009on, title={On the Dynamics of the Error Floor Behavior in (Regular) LDPC Codes}, author={Christian Schlegel and Shuai Zhang}, journal={arXiv preprint arXiv:0902.1786}, year={2009}, archivePrefix={arXiv}, eprint={0902.1786}, primaryClass={cs.IT math.IT} }
schlegel2009on
arxiv-6364
0902.1790
Counting Distinctions: On the Conceptual Foundations of Shannon's Information Theory
<|reference_start|>Counting Distinctions: On the Conceptual Foundations of Shannon's Information Theory: Categorical logic has shown that modern logic is essentially the logic of subsets (or "subobjects"). Partitions are dual to subsets so there is a dual logic of partitions where a "distinction" [an ordered pair of distinct elements (u,u') from the universe U ] is dual to an "element". An element being in a subset is analogous to a partition p on U making a distinction, i.e., if u and u' were in different blocks of p. Subset logic leads to finite probability theory by taking the (Laplacian) probability as the normalized size of each subset-event of a finite universe. The analogous step in the logic of partitions is to assign to a partition the number of distinctions made by a partition normalized by the total number of ordered pairs |UxU| from the finite universe. That yields a notion of "logical entropy" for partitions and a "logical information theory." The logical theory directly counts the (normalized) number of distinctions in a partition while Shannon's theory gives the average number of binary partitions needed to make those same distinctions. Thus the logical theory is seen as providing a conceptual underpinning for Shannon's theory based on the logical notion of "distinctions." (forthcoming in Synthese)<|reference_end|>
arxiv
@article{ellerman2009counting, title={Counting Distinctions: On the Conceptual Foundations of Shannon's Information Theory}, author={David Ellerman}, journal={arXiv preprint arXiv:0902.1790}, year={2009}, archivePrefix={arXiv}, eprint={0902.1790}, primaryClass={cs.IT cs.LO math.IT math.LO} }
ellerman2009counting
arxiv-6365
0902.1792
Correlation Robust Stochastic Optimization
<|reference_start|>Correlation Robust Stochastic Optimization: We consider a robust model proposed by Scarf, 1958, for stochastic optimization when only the marginal probabilities of (binary) random variables are given, and the correlation between the random variables is unknown. In the robust model, the objective is to minimize expected cost against worst possible joint distribution with those marginals. We introduce the concept of correlation gap to compare this model to the stochastic optimization model that ignores correlations and minimizes expected cost under independent Bernoulli distribution. We identify a class of functions, using concepts of summable cost sharing schemes from game theory, for which the correlation gap is well-bounded and the robust model can be approximated closely by the independent distribution model. As a result, we derive efficient approximation factors for many popular cost functions, like submodular functions, facility location, and Steiner tree. As a byproduct, our analysis also yields some new results in the areas of social welfare maximization and existence of Walrasian equilibria, which may be of independent interest.<|reference_end|>
arxiv
@article{agrawal2009correlation, title={Correlation Robust Stochastic Optimization}, author={Shipra Agrawal, Yichuan Ding, Amin Saberi, Yinyu Ye}, journal={arXiv preprint arXiv:0902.1792}, year={2009}, archivePrefix={arXiv}, eprint={0902.1792}, primaryClass={cs.DS} }
agrawal2009correlation
arxiv-6366
0902.1809
Matrix Graph Grammars with Application Conditions
<|reference_start|>Matrix Graph Grammars with Application Conditions: In the Matrix approach to graph transformation we represent simple digraphs and rules with Boolean matrices and vectors, and the rewriting is expressed using Boolean operators only. In previous works, we developed analysis techniques enabling the study of the applicability of rule sequences, their independence, state reachability and the minimal graph able to fire a sequence. In the present paper we improve our framework in two ways. First, we make explicit (in the form of a Boolean matrix) some negative implicit information in rules. This matrix (called nihilation matrix) contains the elements that, if present, forbid the application of the rule (i.e. potential dangling edges, or newly added edges, which cannot be already present in the simple digraph). Second, we introduce a novel notion of application condition, which combines graph diagrams together with monadic second order logic. This allows for more flexibility and expressivity than previous approaches, as well as more concise conditions in certain cases. We demonstrate that these application conditions can be embedded into rules (i.e. in the left hand side and the nihilation matrix), and show that the applicability of a rule with arbitrary application conditions is equivalent to the applicability of a sequence of plain rules without application conditions. Therefore, the analysis of the former is equivalent to the analysis of the latter, showing that in our framework no additional results are needed for the study of application conditions. Moreover, all analysis techniques of [21, 22] for the study of sequences can be applied to application conditions.<|reference_end|>
arxiv
@article{velasco2009matrix, title={Matrix Graph Grammars with Application Conditions}, author={Pedro Pablo Perez Velasco, Juan de Lara Jaramillo}, journal={arXiv preprint arXiv:0902.1809}, year={2009}, archivePrefix={arXiv}, eprint={0902.1809}, primaryClass={cs.DM} }
velasco2009matrix
arxiv-6367
0902.1834
Optimal Probabilistic Ring Exploration by Asynchronous Oblivious Robots
<|reference_start|>Optimal Probabilistic Ring Exploration by Asynchronous Oblivious Robots: We consider a team of $k$ identical, oblivious, asynchronous mobile robots that are able to sense (\emph{i.e.}, view) their environment, yet are unable to communicate, and evolve on a constrained path. Previous results in this weak scenario show that initial symmetry yields high lower bounds when problems are to be solved by \emph{deterministic} robots. In this paper, we initiate research on probabilistic bounds and solutions in this context, and focus on the \emph{exploration} problem of anonymous unoriented rings of any size. It is known that $\Theta(\log n)$ robots are necessary and sufficient to solve the problem with $k$ deterministic robots, provided that $k$ and $n$ are coprime. By contrast, we show that \emph{four} identical probabilistic robots are necessary and sufficient to solve the same problem, also removing the coprime constraint. Our positive results are constructive.<|reference_end|>
arxiv
@article{devismes2009optimal, title={Optimal Probabilistic Ring Exploration by Asynchronous Oblivious Robots}, author={St\'ephane Devismes (VERIMAG - IMAG), Franck Petit (LIP, INRIA Rh\^one-Alpes / LIP Laboratoire de l'Informatique du Parall\'elisme), S\'ebastien Tixeuil (LIP6)}, journal={arXiv preprint arXiv:0902.1834}, year={2009}, number={RR-6838}, archivePrefix={arXiv}, eprint={0902.1834}, primaryClass={cs.DS cs.CC cs.DC cs.RO} }
devismes2009optimal
arxiv-6368
0902.1835
Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP
<|reference_start|>Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP: It has been observed in many places that constant-factor approximable problems often admit polynomial or even linear problem kernels for their decision versions, e.g., Vertex Cover, Feedback Vertex Set, and Triangle Packing. While there exist examples like Bin Packing, which does not admit any kernel unless P = NP, there apparently is a strong relation between these two polynomial-time techniques. We add to this picture by showing that the natural decision versions of all problems in two prominent classes of constant-factor approximable problems, namely MIN F^+\Pi_1 and MAX NP, admit polynomial problem kernels. Problems in MAX SNP, a subclass of MAX NP, are shown to admit kernels with a linear base set, e.g., the set of vertices of a graph. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F^+\Pi_1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. JCSS 2009).<|reference_end|>
arxiv
@article{kratsch2009polynomial, title={Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP}, author={Stefan Kratsch}, journal={arXiv preprint arXiv:0902.1835}, year={2009}, archivePrefix={arXiv}, eprint={0902.1835}, primaryClass={cs.CC} }
kratsch2009polynomial
arxiv-6369
0902.1853
A Unified Approach to Sparse Signal Processing
<|reference_start|>A Unified Approach to Sparse Signal Processing: A unified view of sparse signal processing is presented in tutorial form by bringing together various fields. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common benefits of significant reduction in sampling rate and processing manipulations are revealed. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding in finite/real Galois fields is then related to sampling with similar reconstruction algorithms. The methods of Prony, Pisarenko, and MUSIC are next discussed for sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter and Error Locator Polynomials in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method. Such spectral estimation methods is then related to multi-source location and DOA estimation in array processing. The notions of sparse array beamforming and sparse sensor networks are also introduced. Sparsity in unobservable source signals is also shown to facilitate source separation in SCA; the algorithms developed in this area are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate OFDM channels.<|reference_end|>
arxiv
@article{marvasti2009a, title={A Unified Approach to Sparse Signal Processing}, author={F. Marvasti, A. Amini, F. Haddadi, M. Soltanolkotabi, B. H. Khalaj, A. Aldroubi, S. Holm, S. Sanei and J. Chambers}, journal={arXiv preprint arXiv:0902.1853}, year={2009}, archivePrefix={arXiv}, eprint={0902.1853}, primaryClass={cs.IT math.IT} }
marvasti2009a
arxiv-6370
0902.1866
A Superpolynomial Lower Bound on the Size of Uniform Non-constant-depth Threshold Circuits for the Permanent
<|reference_start|>A Superpolynomial Lower Bound on the Size of Uniform Non-constant-depth Threshold Circuits for the Permanent: We show that the permanent cannot be computed by DLOGTIME-uniform threshold or arithmetic circuits of depth o(log log n) and polynomial size.<|reference_end|>
arxiv
@article{koiran2009a, title={A Superpolynomial Lower Bound on the Size of Uniform Non-constant-depth Threshold Circuits for the Permanent}, author={Pascal Koiran (LIP), Sylvain Perifel (LIAFA)}, journal={arXiv preprint arXiv:0902.1866}, year={2009}, archivePrefix={arXiv}, eprint={0902.1866}, primaryClass={cs.CC} }
koiran2009a
arxiv-6371
0902.1868
Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time
<|reference_start|>Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time: The described multicoloring problem has direct applications in the context of wireless ad hoc and sensor networks. In order to coordinate the access to the shared wireless medium, the nodes of such a network need to employ some medium access control (MAC) protocol. Typical MAC protocols control the access to the shared channel by time (TDMA), frequency (FDMA), or code division multiple access (CDMA) schemes. Many channel access schemes assign a fixed set of time slots, frequencies, or (orthogonal) codes to the nodes of a network such that nodes that interfere with each other receive disjoint sets of time slots, frequencies, or code sets. Finding a valid assignment of time slots, frequencies, or codes hence directly corresponds to computing a multicoloring of a graph $G$. The scarcity of bandwidth, energy, and computing resources in ad hoc and sensor networks, as well as the often highly dynamic nature of these networks require that the multicoloring can be computed based on as little and as local information as possible.<|reference_end|>
arxiv
@article{kuhn2009local, title={Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time}, author={Fabian Kuhn (CSAIL)}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 613-624}, year={2009}, archivePrefix={arXiv}, eprint={0902.1868}, primaryClass={cs.DM cs.DS} }
kuhn2009local
arxiv-6372
0902.1871
Abstraction and Refinement in Static Model-Checking
<|reference_start|>Abstraction and Refinement in Static Model-Checking: Abstract interpretation is a general methodology for building static analyses of programs. It was introduced by P. and R. Cousot in \cite{cc}. In this paper, we present an application of a generic abstract interpretation to the domain of model-checking. Dynamic checkers are usually easier to use, because the concepts are established and widely known, but they are usually limited to systems whose state space is finite. Moreover, certain faults cannot be detected dynamically, even by keeping track of the history of the state space. Indeed, the classical problem of finding the right test cases is far from trivial and further limits the abilities of dynamic checkers. Static checkers have the advantage that they work on a more abstract level than dynamic checkers and can verify system properties for all inputs. The problem is that it is hard to guarantee that a violation of a modeled property corresponds to a fault in the concrete system. We propose an approach in which we generate counter-examples dynamically using abstract interpretation techniques.<|reference_end|>
arxiv
@article{musumbu2009abstraction, title={Abstraction and Refinement in Static Model-Checking}, author={Kaninda Musumbu (LaBRI)}, journal={IEEE-Computer Society International Conference on Computer Science and Information Technology, ICCSIT-2008 (2008) 107 - 112}, year={2009}, archivePrefix={arXiv}, eprint={0902.1871}, primaryClass={cs.DS cs.SC} }
musumbu2009abstraction
arxiv-6373
0902.1884
A Proof of Concept for Optimizing Task Parallelism by Locality Queues
<|reference_start|>A Proof of Concept for Optimizing Task Parallelism by Locality Queues: Task parallelism as employed by the OpenMP task construct, although ideal for tackling irregular problems or typical producer/consumer schemes, bears some potential for performance bottlenecks if locality of data access is important, which is typically the case for memory-bound code on ccNUMA systems. We present a programming technique which ameliorates adverse effects of dynamic task distribution by sorting tasks into locality queues, each of which is preferably processed by threads that belong to the same locality domain. Dynamic scheduling is fully preserved inside each domain, and is preferred over possible load imbalance even if non-local access is required. The effectiveness of the approach is demonstrated using a blocked six-point stencil solver as a toy model.<|reference_end|>
arxiv
@article{wittmann2009a, title={A Proof of Concept for Optimizing Task Parallelism by Locality Queues}, author={Markus Wittmann and Georg Hager}, journal={arXiv preprint arXiv:0902.1884}, year={2009}, archivePrefix={arXiv}, eprint={0902.1884}, primaryClass={cs.PF cs.DC} }
wittmann2009a
arxiv-6374
0902.1891
NNRU, a noncommutative analogue of NTRU
<|reference_start|>NNRU, a noncommutative analogue of NTRU: The NTRU public key cryptosystem is a well-studied lattice-based cryptosystem, along with the Ajtai-Dwork and GGH systems. Underlying NTRU is a hard mathematical problem of finding short vectors in a certain lattice. (Shamir 1997) presented a lattice-based attack by which he could find the original secret key or an alternate key. Shamir concluded that if one designs a variant of NTRU where the calculations involved during encryption and decryption are non-commutative, then the system will be secure against the lattice-based attack. This paper presents a new cryptosystem with the above property, and we have proved that it is completely secure against the lattice-based attack. It operates in the non-commutative ring $M = M_k(\mathbb{Z})[X]/(X^n - I_{k \times k})$, where $M$ is a matrix ring of $k \times k$ matrices of polynomials in $R = \mathbb{Z}[X]/(X^n-1)$. Moreover, we have obtained a speed improvement by a factor of $O(k^{1.624})$ over NTRU for the same number of bits of information.<|reference_end|>
arxiv
@article{vats2009nnru, title={NNRU, a noncommutative analogue of NTRU}, author={Nitin Vats}, journal={arXiv preprint arXiv:0902.1891}, year={2009}, archivePrefix={arXiv}, eprint={0902.1891}, primaryClass={cs.CR} }
vats2009nnru
arxiv-6375
0902.1911
Topological Centrality and Its Applications
<|reference_start|>Topological Centrality and Its Applications: Recent development of network structure analysis shows that it plays an important role in characterizing complex systems in many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC) reflecting the topological positions of nodes and edges in general networks, and proposes an approach to calculating the topological centrality. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.<|reference_end|>
arxiv
@article{zhuge2009topological, title={Topological Centrality and Its Applications}, author={Hai Zhuge and Junsheng Zhang}, journal={arXiv preprint arXiv:0902.1911}, year={2009}, number={KGRC-2009-02}, archivePrefix={arXiv}, eprint={0902.1911}, primaryClass={cs.IR cs.AI} }
zhuge2009topological
arxiv-6376
0902.1942
On the Classification of Type II Codes of Length 24
<|reference_start|>On the Classification of Type II Codes of Length 24: We give a new, purely coding-theoretic proof of Koch's criterion on the tetrad systems of Type II codes of length 24 using the theory of harmonic weight enumerators. This approach is inspired by Venkov's approach to the classification of the root systems of Type II lattices in R^{24}, and gives a new instance of the analogy between lattices and codes.<|reference_end|>
arxiv
@article{elkies2009on, title={On the Classification of Type II Codes of Length 24}, author={Noam D. Elkies, Scott D. Kominers}, journal={SIAM Journal on Discrete Mathematics 23(4), (2010), 2173-2177}, year={2009}, archivePrefix={arXiv}, eprint={0902.1942}, primaryClass={math.NT cs.DM cs.IT math.CO math.IT} }
elkies2009on
arxiv-6377
0902.1947
Cooperative Spectrum Sensing based on the Limiting Eigenvalue Ratio Distribution in Wishart Matrices
<|reference_start|>Cooperative Spectrum Sensing based on the Limiting Eigenvalue Ratio Distribution in Wishart Matrices: Recent advances in random matrix theory have spurred the adoption of eigenvalue-based detection techniques for cooperative spectrum sensing in cognitive radio. Most of such techniques use the ratio between the largest and the smallest eigenvalues of the received signal covariance matrix to infer the presence or absence of the primary signal. The results derived so far in this field are based on asymptotical assumptions, due to the difficulties in characterizing the exact distribution of the eigenvalues ratio. By exploiting a recent result on the limiting distribution of the smallest eigenvalue in complex Wishart matrices, in this paper we derive an expression for the limiting eigenvalue ratio distribution, which turns out to be much more accurate than the previous approximations also in the non-asymptotical region. This result is then straightforwardly applied to calculate the decision threshold as a function of a target probability of false alarm. Numerical simulations show that the proposed detection rule provides a substantial performance improvement compared to the other eigenvalue-based algorithms.<|reference_end|>
arxiv
@article{penna2009cooperative, title={Cooperative Spectrum Sensing based on the Limiting Eigenvalue Ratio Distribution in Wishart Matrices}, author={Federico Penna, Roberto Garello, Maurizio A. Spirito}, journal={Communications Letters, IEEE, vol.13, no.7, pp.507-509, July 2009}, year={2009}, doi={10.1109/LCOMM.2009.090425}, archivePrefix={arXiv}, eprint={0902.1947}, primaryClass={cs.IT math.IT} }
penna2009cooperative
arxiv-6378
0902.1996
Convergence and Tradeoff of Utility-Optimal CSMA
<|reference_start|>Convergence and Tradeoff of Utility-Optimal CSMA: It has been recently suggested that in wireless networks, CSMA-based distributed MAC algorithms could achieve optimal utility without any message passing. We present the first proof of convergence of such adaptive CSMA algorithms towards an arbitrarily tight approximation of the utility-optimal schedule. We also briefly discuss the tradeoff between optimality at equilibrium and short-term fairness practically achieved by such algorithms.<|reference_end|>
arxiv
@article{liu2009convergence, title={Convergence and Tradeoff of Utility-Optimal CSMA}, author={Jiaping Liu, Yung Yi, Alexandre Proutiere, Mung Chiang and H. Vincent Poor}, journal={arXiv preprint arXiv:0902.1996}, year={2009}, doi={10.4108/ICST.BROADNETS2009.7401}, archivePrefix={arXiv}, eprint={0902.1996}, primaryClass={cs.IT math.IT} }
liu2009convergence
arxiv-6379
0902.2036
Modified Papoulis-Gerchberg algorithm for sparse signal recovery
<|reference_start|>Modified Papoulis-Gerchberg algorithm for sparse signal recovery: Motivated by the well-known Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for recovery of sparse signals from few observations is proposed. The sequence of iterates turns out to be similar to that of the thresholded Landweber iterations, although not the same. The performance of the proposed algorithm is experimentally evaluated and compared to other state-of-the-art methods.<|reference_end|>
arxiv
@article{kayvanrad2009modified, title={Modified Papoulis-Gerchberg algorithm for sparse signal recovery}, author={M.H. Kayvanrad, D. Zonoobi, A.A. Kassim}, journal={arXiv preprint arXiv:0902.2036}, year={2009}, archivePrefix={arXiv}, eprint={0902.2036}, primaryClass={cs.IT math.IT} }
kayvanrad2009modified
arxiv-6380
0902.2072
Strong Completeness of Coalgebraic Modal Logics
<|reference_start|>Strong Completeness of Coalgebraic Modal Logics: Canonical models are of central importance in modal logic, in particular as they witness strong completeness and hence compactness. While the canonical model construction is well understood for Kripke semantics, non-normal modal logics often present subtle difficulties - up to the point that canonical models may fail to exist, as is the case e.g. in most probabilistic logics. Here, we present a generic canonical model construction in the semantic framework of coalgebraic modal logic, which pinpoints coherence conditions between syntax and semantics of modal logics that guarantee strong completeness. We apply this method to reconstruct canonical model theorems that are either known or folklore, and moreover instantiate our method to obtain new strong completeness results. In particular, we prove strong completeness of graded modal logic with finite multiplicities, and of the modal logic of exact probabilities.<|reference_end|>
arxiv
@article{schröder2009strong, title={Strong Completeness of Coalgebraic Modal Logics}, author={Lutz Schr\"oder, Dirk Pattinson}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 433-444}, year={2009}, archivePrefix={arXiv}, eprint={0902.2072}, primaryClass={cs.LO} }
schröder2009strong
arxiv-6381
0902.2073
Polynomial Size Analysis of First-Order Shapely Functions
<|reference_start|>Polynomial Size Analysis of First-Order Shapely Functions: We present a size-aware type system for first-order shapely function definitions. Here, a function definition is called shapely when the size of the result is determined exactly by a polynomial in the sizes of the arguments. Examples of shapely function definitions may be implementations of matrix multiplication and the Cartesian product of two lists. The type system is proved to be sound w.r.t. the operational semantics of the language. The type checking problem is shown to be undecidable in general. We define a natural syntactic restriction such that the type checking becomes decidable, even though size polynomials are not necessarily linear or monotonic. Furthermore, we have shown that the type-inference problem is at least semi-decidable (under this restriction). We have implemented a procedure that combines run-time testing and type-checking to automatically obtain size dependencies. It terminates on total typable function definitions.<|reference_end|>
arxiv
@article{shkaravska2009polynomial, title={Polynomial Size Analysis of First-Order Shapely Functions}, author={Olha Shkaravska, Marko van Eekelen and Ron van Kesteren}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (May 25, 2009) lmcs:1148}, year={2009}, doi={10.2168/LMCS-5(2:10)2009}, archivePrefix={arXiv}, eprint={0902.2073}, primaryClass={cs.LO cs.CC} }
shkaravska2009polynomial
arxiv-6382
0902.2081
Languages recognized by nondeterministic quantum finite automata
<|reference_start|>Languages recognized by nondeterministic quantum finite automata: The nondeterministic quantum finite automaton (NQFA) is the only known case where a one-way quantum finite automaton (QFA) model has been shown to be strictly superior in terms of language recognition power to its probabilistic counterpart. We give a characterization of the class of languages recognized by NQFA's, demonstrating that it is equal to the class of exclusive stochastic languages. We also characterize the class of languages that are recognized necessarily by two-sided error by QFA's. It is shown that these classes remain the same when the QFA's used in their definitions are replaced by several different model variants that have appeared in the literature. We prove several closure properties of the related classes. The ramifications of these results about classical and quantum sublogarithmic space complexity classes are examined.<|reference_end|>
arxiv
@article{yakaryilmaz2009languages, title={Languages recognized by nondeterministic quantum finite automata}, author={Abuzer Yakaryilmaz and A. C. Cem Say}, journal={Quantum Information & Computation, Volume 10 Issue 9, September 2010, Pages 747-770}, year={2009}, archivePrefix={arXiv}, eprint={0902.2081}, primaryClass={cs.CC} }
yakaryilmaz2009languages
arxiv-6383
0902.2104
Tableau-based decision procedure for full coalitional multiagent temporal-epistemic logic of linear time
<|reference_start|>Tableau-based decision procedure for full coalitional multiagent temporal-epistemic logic of linear time: We develop a tableau-based decision procedure for the full coalitional multiagent temporal-epistemic logic of linear time CMATEL(CD+LT). It extends LTL with operators of common and distributed knowledge for all coalitions of agents. The tableau procedure runs in exponential time, matching the lower bound obtained by Halpern and Vardi for a fragment of our logic, thus providing a complexity-optimal decision procedure for CMATEL(CD+LT).<|reference_end|>
arxiv
@article{goranko2009tableau-based, title={Tableau-based decision procedure for full coalitional multiagent temporal-epistemic logic of linear time}, author={Valentin Goranko, Dmitry Shkatov}, journal={8th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009), Budapest, Hungary, May 10-15, 2009, Volume 2. IFAAMAS 2009, ISBN 978-0-9817381-7-8}, year={2009}, archivePrefix={arXiv}, eprint={0902.2104}, primaryClass={cs.LO cs.MA} }
goranko2009tableau-based
arxiv-6384
0902.2108
Qualitative Concurrent Stochastic Games with Imperfect Information
<|reference_start|>Qualitative Concurrent Stochastic Games with Imperfect Information: We study a model of games that combines concurrency, imperfect information and stochastic aspects. These are finite-state games in which, at each round, the two players choose, simultaneously and independently, an action. Then a successor state is chosen according to some fixed probability distribution depending on the previous state and on the pair of actions chosen by the players. Imperfect information is modeled as follows: both players have an equivalence relation over states and, instead of observing the exact state, they only know to which equivalence class it belongs. Therefore, if two partial plays are indistinguishable by some player, he should behave the same in both of them. We consider reachability (does the play eventually visit a final state?) and B\"uchi objective (does the play visit infinitely often a final state?). Our main contribution is to prove that the following problem is complete for 2-ExpTime: decide whether the first player has a strategy that ensures her to almost-surely win against any possible strategy of her opponent. We also characterise those strategies needed by the first player to almost-surely win.<|reference_end|>
arxiv
@article{gripon2009qualitative, title={Qualitative Concurrent Stochastic Games with Imperfect Information}, author={Vincent Gripon (LIAFA), Olivier Serre (LIAFA)}, journal={arXiv preprint arXiv:0902.2108}, year={2009}, archivePrefix={arXiv}, eprint={0902.2108}, primaryClass={cs.FL cs.GT cs.LO} }
gripon2009qualitative
arxiv-6385
0902.2125
Tableau-based procedure for deciding satisfiability in the full coalitional multiagent epistemic logic
<|reference_start|>Tableau-based procedure for deciding satisfiability in the full coalitional multiagent epistemic logic: We study the multiagent epistemic logic CMAELCD with operators for common and distributed knowledge for all coalitions of agents. We introduce Hintikka structures for this logic and prove that satisfiability in such structures is equivalent to satisfiability in standard models. Using this result, we design an incremental tableau based decision procedure for testing satisfiability in CMAELCD.<|reference_end|>
arxiv
@article{goranko2009tableau-based, title={Tableau-based procedure for deciding satisfiability in the full coalitional multiagent epistemic logic}, author={Valentin Goranko, Dmitry Shkatov}, journal={arXiv preprint arXiv:0902.2125}, year={2009}, archivePrefix={arXiv}, eprint={0902.2125}, primaryClass={cs.LO cs.MA} }
goranko2009tableau-based
arxiv-6386
0902.2137
A formally verified compiler back-end
<|reference_start|>A formally verified compiler back-end: This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.<|reference_end|>
arxiv
@article{leroy2009a, title={A formally verified compiler back-end}, author={Xavier Leroy (INRIA Rocquencourt)}, journal={Journal of Automated Reasoning 43, 4 (2009) 363-446}, year={2009}, doi={10.1007/s10817-009-9155-4}, archivePrefix={arXiv}, eprint={0902.2137}, primaryClass={cs.LO cs.PL} }
leroy2009a
arxiv-6387
0902.2140
Ambiguity and Communication
<|reference_start|>Ambiguity and Communication: The ambiguity of a nondeterministic finite automaton (NFA) N for input size n is the maximal number of accepting computations of N for an input of size n. For all $k, r \in \mathbb{N}$ we construct languages $L_{r,k}$ which can be recognized by NFA's with size $k \cdot \mathrm{poly}(r)$ and ambiguity $O(n^k)$, but $L_{r,k}$ has only NFA's with exponential size, if ambiguity $o(n^k)$ is required. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long standing open problem (Ravikumar and Ibarra, 1989, Leung, 1998).<|reference_end|>
arxiv
@article{hromkovic2009ambiguity, title={Ambiguity and Communication}, author={Juraj Hromkovic, Georg Schnitger}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 553-564}, year={2009}, archivePrefix={arXiv}, eprint={0902.2140}, primaryClass={cs.FL cs.CC} }
hromkovic2009ambiguity
arxiv-6388
0902.2141
Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence
<|reference_start|>Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence: An infinite binary sequence has randomness rate at least $\sigma$ if, for almost every $n$, the Kolmogorov complexity of its prefix of length $n$ is at least $\sigma n$. It is known that for every rational $\sigma \in (0,1)$, on one hand, there exists sequences with randomness rate $\sigma$ that can not be effectively transformed into a sequence with randomness rate higher than $\sigma$ and, on the other hand, any two independent sequences with randomness rate $\sigma$ can be transformed into a sequence with randomness rate higher than $\sigma$. We show that the latter result holds even if the two input sequences have linear dependency (which, informally speaking, means that all prefixes of length $n$ of the two sequences have in common a constant fraction of their information). The similar problem is studied for finite strings. It is shown that from any two strings with sufficiently large Kolmogorov complexity and sufficiently small dependence, one can effectively construct a string that is random even conditioned by any one of the input strings.<|reference_end|>
arxiv
@article{zimand2009extracting, title={Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence}, author={Marius Zimand}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 433-444}, year={2009}, archivePrefix={arXiv}, eprint={0902.2141}, primaryClass={cs.CC cs.IT math.IT} }
zimand2009extracting
arxiv-6389
0902.2146
A Stronger LP Bound for Formula Size Lower Bounds via Clique Constraints
<|reference_start|>A Stronger LP Bound for Formula Size Lower Bounds via Clique Constraints: We introduce a new technique proving formula size lower bounds based on the linear programming bound originally introduced by Karchmer, Kushilevitz and Nisan [11] and the theory of stable set polytope. We apply it to majority functions and prove their formula size lower bounds improved from the classical result of Khrapchenko [13]. Moreover, we introduce a notion of unbalanced recursive ternary majority functions motivated by a decomposition theory of monotone self-dual functions and give integrally matching upper and lower bounds of their formula size. We also show monotone formula size lower bounds of balanced recursive ternary majority functions improved from the quantum adversary bound of Laplante, Lee and Szegedy [15].<|reference_end|>
arxiv
@article{ueno2009a, title={A Stronger LP Bound for Formula Size Lower Bounds via Clique Constraints}, author={Kenya Ueno}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 433-444}, year={2009}, archivePrefix={arXiv}, eprint={0902.2146}, primaryClass={cs.CC} }
ueno2009a
arxiv-6390
0902.2149
A Generalization of Nemhauser and Trotter's Local Optimization Theorem
<|reference_start|>A Generalization of Nemhauser and Trotter's Local Optimization Theorem: The Nemhauser-Trotter local optimization theorem applies to the NP-hard Vertex Cover problem and has applications in approximation as well as parameterized algorithmics. We present a framework that generalizes Nemhauser and Trotter's result to vertex deletion and graph packing problems, introducing novel algorithmic strategies based on purely combinatorial arguments (not referring to linear programming as the Nemhauser-Trotter result originally did). We exhibit our framework using a generalization of Vertex Cover, called Bounded-Degree Deletion, that has promise to become an important tool in the analysis of gene and other biological networks. For some fixed d \geq 0, Bounded-Degree Deletion asks to delete as few vertices as possible from a graph in order to transform it into a graph with maximum vertex degree at most d. Vertex Cover is the special case of d = 0. Our generalization of the Nemhauser-Trotter theorem implies that Bounded-Degree Deletion has a problem kernel with a linear number of vertices for every constant d. We also outline an application of our extremal combinatorial approach to the problem of packing stars with a bounded number of leaves. Finally, charting the border between (parameterized) tractability and intractability for Bounded-Degree Deletion, we provide a W[2]-hardness result for Bounded-Degree Deletion in case of unbounded d-values.<|reference_end|>
arxiv
@article{fellows2009a, title={A Generalization of Nemhauser and Trotter's Local Optimization Theorem}, author={Michael R. Fellows, Jiong Guo, Hannes Moser, Rolf Niedermeier}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 409-420}, year={2009}, archivePrefix={arXiv}, eprint={0902.2149}, primaryClass={cs.CC cs.DM cs.DS} }
fellows2009a
arxiv-6391
0902.2150
Computing Graph Roots Without Short Cycles
<|reference_start|>Computing Graph Roots Without Short Cycles: Graph G is the square of graph H if two vertices x, y have an edge in G if and only if x, y are of distance at most two in H. Given H it is easy to compute its square $H^2$, however Motwani and Sudan proved that it is NP-complete to determine if a given graph G is the square of some graph H (of girth 3). In this paper we consider the characterization and recognition problems of graphs that are squares of graphs of small girth, i.e. to determine if $G = H^2$ for some graph H of small girth. The main results are the following. - There is a graph theoretical characterization for graphs that are squares of some graph of girth at least 7. A corollary is that if a graph G has a square root H of girth at least 7 then H is unique up to isomorphism. - There is a polynomial time algorithm to recognize if $G = H^2$ for some graph H of girth at least 6. - It is NP-complete to recognize if $G = H^2$ for some graph H of girth 4. These results almost provide a dichotomy theorem for the complexity of the recognition problem in terms of the girth of the square roots. The algorithmic and graph theoretical results generalize previous results on tree square roots, and provide polynomial time algorithms to compute a graph square root of small girth if it exists. Some open questions and conjectures will also be discussed.<|reference_end|>
arxiv
@article{farzad2009computing, title={Computing Graph Roots Without Short Cycles}, author={Babak Farzad, Lap Chi Lau, Van Bang Le, Nguyen Ngoc Tuy}, journal={26th International Symposium on Theoretical Aspects of Computer Science STACS 2009 (2009) 397-408}, year={2009}, archivePrefix={arXiv}, eprint={0902.2150}, primaryClass={cs.DM cs.DS} }
farzad2009computing
arxiv-6392
0902.2152
B\"uchi complementation made tight
<|reference_start|>B\"uchi complementation made tight: The precise complexity of complementing B\"uchi automata is an intriguing and long standing problem. While optimal complementation techniques for finite automata are simple - it suffices to determinize them using a simple subset construction and to dualize the acceptance condition of the resulting automaton - B\"uchi complementation is more involved. Indeed, the construction of an EXPTIME complementation procedure took a quarter of a century from the introduction of B\"uchi automata in the early 60s, and stepwise narrowing the gap between the upper and lower bound to a simple exponent (of $(6e)^n$ for B\"uchi automata with n states) took four decades. While the distance between the known upper ($O((0.96n)^n)$) and lower ($(0.76n)^n$) bound on the required number of states has meanwhile been significantly reduced, an exponential factor remains between them. Also, the upper bound on the size of the complement automaton is not linear in the bound of its state space. These gaps are unsatisfactory from a theoretical point of view, but also because B\"uchi complementation is a useful tool in formal verification, in particular for the language containment problem. This paper proposes a B\"uchi complementation algorithm whose complexity meets, modulo a quadratic ($O(n^2)$) factor, the known lower bound for B\"uchi complementation. It thus improves over previous constructions by an exponential factor and concludes the quest for optimal B\"uchi complementation algorithms.<|reference_end|>
arxiv
@article{schewe2009buchi, title={B\"uchi complementation made tight}, author={Sven Schewe}, journal={26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009 (2009) 433-444}, year={2009}, archivePrefix={arXiv}, eprint={0902.2152}, primaryClass={cs.FL cs.CC} }
schewe2009buchi
arxiv-6393
0902.2166
Spanning Trees of Bounded Degree Graphs
<|reference_start|>Spanning Trees of Bounded Degree Graphs: We consider lower bounds on the number of spanning trees of connected graphs with degree bounded by $d$. The question is of interest because such bounds may improve the analysis of the improvement produced by memorisation in the runtime of exponential algorithms. The value of interest is the constant $\beta_d$ such that all connected graphs with degree bounded by $d$ have at least $\beta_d^\mu$ spanning trees where $\mu$ is the cyclomatic number or excess of the graph, namely $m-n+1$. We conjecture that $\beta_d$ is achieved by the complete graph $K_{d+1}$ but we have not proved this for any $d$ greater than 3. We give weaker lower bounds on $\beta_d$ for $d\le 11$.<|reference_end|>
arxiv
@article{robson2009spanning, title={Spanning Trees of Bounded Degree Graphs}, author={John Michael Robson (LaBRI)}, journal={arXiv preprint arXiv:0902.2166}, year={2009}, archivePrefix={arXiv}, eprint={0902.2166}, primaryClass={cs.DM cs.CC} }
robson2009spanning
arxiv-6394
0902.2183
A principal component analysis of 39 scientific impact measures
<|reference_start|>A principal component analysis of 39 scientific impact measures: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.<|reference_end|>
arxiv
@article{bollen2009a, title={A principal component analysis of 39 scientific impact measures}, author={Johan Bollen, Herbert Van de Sompel, Aric Hagberg, Ryan Chute}, journal={PLoS ONE 4(6): e6022, 2009}, year={2009}, doi={10.1371/journal.pone.0006022}, archivePrefix={arXiv}, eprint={0902.2183}, primaryClass={cs.DL cs.CY} }
bollen2009a
arxiv-6395
0902.2186
A List of Household Objects for Robotic Retrieval Prioritized by People with ALS (Version 092008)
<|reference_start|>A List of Household Objects for Robotic Retrieval Prioritized by People with ALS (Version 092008): This technical report is designed to serve as a citable reference for the original prioritized object list that the Healthcare Robotics Lab at Georgia Tech released on its website in September of 2008. It is also expected to serve as the primary citable reference for the research associated with this list until the publication of a detailed, peer-reviewed paper. The original prioritized list of object classes resulted from a needs assessment involving 8 motor-impaired patients with amyotrophic lateral sclerosis (ALS) and targeted, in-person interviews of 15 motor-impaired ALS patients. All of these participants were drawn from the Emory ALS Center. The prioritized object list consists of 43 object classes ranked by how important the participants considered each class to be for retrieval by an assistive robot. We intend for this list to be used by researchers to inform the design and benchmarking of robotic systems, especially research related to autonomous mobile manipulation.<|reference_end|>
arxiv
@article{choi2009a, title={A List of Household Objects for Robotic Retrieval Prioritized by People with ALS (Version 092008)}, author={Young Sang Choi, Travis Deyle, Charles C. Kemp}, journal={arXiv preprint arXiv:0902.2186}, year={2009}, archivePrefix={arXiv}, eprint={0902.2186}, primaryClass={cs.RO cs.HC} }
choi2009a
arxiv-6396
0902.2187
A Standalone Markerless 3D Tracker for Handheld Augmented Reality
<|reference_start|>A Standalone Markerless 3D Tracker for Handheld Augmented Reality: This paper presents an implementation of a markerless tracking technique targeted to the Windows Mobile Pocket PC platform. The primary aim of this work is to allow the development of standalone augmented reality applications for handheld devices based on natural feature tracking. In order to achieve this goal, a subset of two computer vision libraries was ported to the Pocket PC platform. They were also adapted to use fixed point math, with the purpose of improving the overall performance of the routines. The port of these libraries opens up the possibility of having other computer vision tasks being executed on mobile platforms. A model based tracking approach that relies on edge information was adopted. Since it does not require a high processing power, it is suitable for constrained devices such as handhelds. The OpenGL ES graphics library was used to perform computer vision tasks, taking advantage of existing graphics hardware acceleration. An augmented reality application was created using the implemented technique and evaluations were done regarding tracking performance and accuracy.<|reference_end|>
arxiv
@article{lima2009a, title={A Standalone Markerless 3D Tracker for Handheld Augmented Reality}, author={Joao Paulo Lima, Veronica Teichrieb, Judith Kelner}, journal={arXiv preprint arXiv:0902.2187}, year={2009}, archivePrefix={arXiv}, eprint={0902.2187}, primaryClass={cs.CV cs.GR cs.MM} }
lima2009a
arxiv-6397
0902.2206
Feature Hashing for Large Scale Multitask Learning
<|reference_start|>Feature Hashing for Large Scale Multitask Learning: Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.<|reference_end|>
arxiv
@article{weinberger2009feature, title={Feature Hashing for Large Scale Multitask Learning}, author={Kilian Weinberger, Anirban Dasgupta, Josh Attenberg, John Langford, Alex Smola}, journal={arXiv preprint arXiv:0902.2206}, year={2009}, archivePrefix={arXiv}, eprint={0902.2206}, primaryClass={cs.AI} }
weinberger2009feature
arxiv-6398
0902.2209
Online Scheduling of Bounded Length Jobs to Maximize Throughput
<|reference_start|>Online Scheduling of Bounded Length Jobs to Maximize Throughput: We consider an online scheduling problem, motivated by the issues present at the joints of networks using ATM and TCP/IP. Namely, IP packets have to be broken down to small ATM cells and sent out before their deadlines, but cells corresponding to different packets can be interwoven. More formally, we consider the online scheduling problem with preemptions, where each job j is revealed at release time r_j, has processing time p_j, deadline d_j and weight w_j. A preempted job can be resumed at any time. The goal is to maximize the total weight of all jobs completed on time. Our main results are as follows: we prove that if all jobs have processing time exactly k, the deterministic competitive ratio is between 2.598 and 5, and when the processing times are at most k, the deterministic competitive ratio is Theta(k/log k).<|reference_end|>
arxiv
@article{durr2009online, title={Online Scheduling of Bounded Length Jobs to Maximize Throughput}, author={Christoph Durr, Lukasz Jez and Nguyen Kim Thang}, journal={arXiv preprint arXiv:0902.2209}, year={2009}, archivePrefix={arXiv}, eprint={0902.2209}, primaryClass={cs.DS} }
durr2009online
arxiv-6399
0902.2230
BagPack: A general framework to represent semantic relations
<|reference_start|>BagPack: A general framework to represent semantic relations: We introduce a way to represent word pairs instantiating arbitrary semantic relations that keeps track of the contexts in which the words in the pair occur both together and independently. The resulting features are of sufficient generality to allow us, with the help of a standard supervised machine learning algorithm, to tackle a variety of unrelated semantic tasks with good results and almost no task-specific tailoring.<|reference_end|>
arxiv
@article{herdağdelen2009bagpack:, title={BagPack: A general framework to represent semantic relations}, author={Ama\c{c} Herda\u{g}delen and Marco Baroni}, journal={arXiv preprint arXiv:0902.2230}, year={2009}, archivePrefix={arXiv}, eprint={0902.2230}, primaryClass={cs.CL cs.IR} }
herdağdelen2009bagpack:
arxiv-6400
0902.2235
On Isometries for Convolutional Codes
<|reference_start|>On Isometries for Convolutional Codes: In this paper we will discuss isometries and strong isometries for convolutional codes. Isometries are weight-preserving module isomorphisms whereas strong isometries are, in addition, degree-preserving. Special cases of these maps are certain types of monomial transformations. We will show a form of MacWilliams Equivalence Theorem, that is, each isometry between convolutional codes is given by a monomial transformation. Examples show that strong isometries cannot be characterized this way, but special attention paid to the weight adjacency matrices allows for further descriptions. Various distance parameters appearing in the literature on convolutional codes will be discussed as well.<|reference_end|>
arxiv
@article{gluesing-luerssen2009on, title={On Isometries for Convolutional Codes}, author={Heide Gluesing-Luerssen}, journal={arXiv preprint arXiv:0902.2235}, year={2009}, archivePrefix={arXiv}, eprint={0902.2235}, primaryClass={cs.IT math.IT} }
gluesing-luerssen2009on