Dataset columns (observed value lengths):
corpus_id: string, 7 to 12 characters
paper_id: string, 9 to 16 characters
title: string, 1 to 261 characters
abstract: string, 70 to 4.02k characters
source: string, 1 distinct value
bibtex: string, 208 to 20.9k characters
citation_key: string, 6 to 100 characters
arxiv-1301
0710.0360
Interpolation in Valiant's theory
<|reference_start|>Interpolation in Valiant's theory: We investigate the following question: if a polynomial can be evaluated at rational points by a polynomial-time boolean algorithm, does it have a polynomial-size arithmetic circuit? We argue that this question is certainly difficult. Answering it negatively would indeed imply that the constant-free versions of the algebraic complexity classes VP and VNP defined by Valiant are different. Answering this question positively would imply a transfer theorem from boolean to algebraic complexity. Our proof method relies on Lagrange interpolation and on recent results connecting the (boolean) counting hierarchy to algebraic complexity classes. As a byproduct we obtain two additional results: (i) The constant-free, degree-unbounded version of Valiant's hypothesis that VP and VNP differ implies the degree-bounded version. This result was previously known to hold for fields of positive characteristic only. (ii) If exponential sums of easy to compute polynomials can be computed efficiently, then the same is true of exponential products. We point out an application of this result to the P=NP problem in the Blum-Shub-Smale model of computation over the field of complex numbers.<|reference_end|>
arxiv
@article{koiran2007interpolation, title={Interpolation in Valiant's theory}, author={Pascal Koiran (LIP), Sylvain Perifel (LIP)}, journal={arXiv preprint arXiv:0710.0360}, year={2007}, archivePrefix={arXiv}, eprint={0710.0360}, primaryClass={cs.CC} }
koiran2007interpolation
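The proof method in the abstract above relies on Lagrange interpolation. As a minimal illustration of that ingredient only (not the paper's construction; the function name and the sample points are invented for the example), here is exact Lagrange interpolation over the rationals:

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < n passing through the
    given n points, using exact rational arithmetic (no rounding error)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# p(t) = t^2 + 1 sampled at t = 0, 1, 2; the interpolant recovers p(5) = 26 exactly.
samples = [(0, 1), (1, 2), (2, 5)]
print(lagrange_interpolate(samples, 5))  # 26
```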
arxiv-1302
0710.0386
Comparing Maintenance Strategies for Overlays
<|reference_start|>Comparing Maintenance Strategies for Overlays: In this paper, we present an analytical tool for understanding the performance of structured overlay networks under churn based on the master-equation approach of physics. We motivate and derive an equation for the average number of hops taken by lookups during churn, for the Chord network. We analyse this equation in detail to understand the behaviour with and without churn. We then use this understanding to predict how lookups will scale for a varying peer population as well as for varying sizes of the routing tables. We then consider a change in the maintenance algorithm of the overlay, from periodic stabilisation to a reactive one which corrects fingers only when a change is detected. We generalise our earlier analysis to understand how the reactive strategy compares with the periodic one.<|reference_end|>
arxiv
@article{krishnamurthy2007comparing, title={Comparing Maintenance Strategies for Overlays}, author={Supriya Krishnamurthy, Sameh El-Ansary, Erik Aurell and Seif Haridi}, journal={arXiv preprint arXiv:0710.0386}, year={2007}, number={Tech. Report TR-2007-01, Swedish Institute of Computer Science}, archivePrefix={arXiv}, eprint={0710.0386}, primaryClass={cs.NI cond-mat.stat-mech cs.DC} }
krishnamurthy2007comparing
arxiv-1303
0710.0410
The Theory of Unified Relativity for a Biovielectroluminescence Phenomenon via Fly's Visual and Imaging System
<|reference_start|>The Theory of Unified Relativity for a Biovielectroluminescence Phenomenon via Fly's Visual and Imaging System: The elucidation upon fly's neuronal patterns as a link to computer graphics and memory cards I/O's, is investigated for the phenomenon by propounding a unified theory of Einstein's two known relativities. It is conclusive that flies could contribute a certain amount of neuromatrices indicating an imagery function of a visual-computational system into computer graphics and storage systems. The visual system involves the time aspect, whereas flies possess faster pulses compared to humans' visual ability due to the E-field state on an active fly's eye surface. This behaviour can be tested on a dissected fly specimen at its ommatidia. Electro-optical contacts and electrodes are wired through the flesh forming organic emitter layer to stimulate light emission, thereby to a computer circuit. The next step is applying a threshold voltage with secondary voltages to the circuit denoting an array of essential electrodes for bit switch. As a result, circuit's dormant pulses versus active pulses at the specimen's area are recorded. The outcome matrix possesses a construction of RGB and time radicals expressing the time problem in consumption, allocating time into computational algorithms, enhancing the technology far beyond. The obtained formulation generates consumed distance cons(x), denoting circuital travel between data source/sink for pixel data and bendable wavelengths. Once 'image logic' is in place, incorporating this point of graphical acceleration permits one to enhance graphics and optimize immensely central processing, data transmissions between memory and computer visual system. The phenomenon can be mainly used in 360-deg. display/viewing, 3D scanning techniques, military and medicine, a robust and cheap substitution for e.g. pre-motion pattern analysis, real-time rendering and LCDs.<|reference_end|>
arxiv
@article{alipour2007the, title={The Theory of Unified Relativity for a Biovielectroluminescence Phenomenon via Fly's Visual and Imaging System}, author={Philip B. Alipour}, journal={arXiv preprint arXiv:0710.0410}, year={2007}, archivePrefix={arXiv}, eprint={0710.0410}, primaryClass={cs.CE cs.CV} }
alipour2007the
arxiv-1304
0710.0431
New Counting Codes for Distributed Video Coding
<|reference_start|>New Counting Codes for Distributed Video Coding: This paper introduces a new counting code. Its design was motivated by distributed video coding where, for decoding, error correction methods are applied to improve predictions. Those error corrections sometimes fail, which results in decoded values worse than the initial prediction. Our code exploits the fact that bit errors are relatively unlikely events: more than a few bit errors in a decoded pixel value are rare. With a carefully designed counting code combined with a prediction, those bit errors can be corrected and sometimes the original pixel value recovered. The error correction performance improves significantly. Our new code not only maximizes the Hamming distance between adjacent (or "near 1") codewords but also between nearby (for example "near 2") codewords. This is why our code is significantly different from the well-known maximal counting sequences, which have maximal average Hamming distance. Fortunately, the new counting code can be derived from Gray Codes for every code word length (i.e. bit depth).<|reference_end|>
arxiv
@article{lakus-becker2007new, title={New Counting Codes for Distributed Video Coding}, author={Axel Lakus-Becker and Ka-Ming Leung}, journal={arXiv preprint arXiv:0710.0431}, year={2007}, archivePrefix={arXiv}, eprint={0710.0431}, primaryClass={cs.IT math.IT} }
lakus-becker2007new
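The abstract above says the new code can be derived from Gray codes. As a point of reference only (a sketch of the standard binary-reflected Gray code, not the paper's counting code), the snippet below prints the Hamming distances between adjacent codewords and between codewords two positions apart:

```python
def gray(i):
    """Binary-reflected Gray code of the integer i."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of bit positions in which two integers differ."""
    return bin(a ^ b).count("1")

bits = 4
codewords = [gray(i) for i in range(2 ** bits)]
# Adjacent ("near 1") codewords of a Gray code differ in exactly one bit.
print([hamming(codewords[i], codewords[i + 1]) for i in range(len(codewords) - 1)])
# Distances between codewords two positions apart ("near 2").
print([hamming(codewords[i], codewords[i + 2]) for i in range(len(codewords) - 2)])
```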
arxiv-1305
0710.0485
Prediction with expert advice for the Brier game
<|reference_start|>Prediction with expert advice for the Brier game: We show that the Brier game of prediction is mixable and find the optimal learning rate and substitution function for it. The resulting prediction algorithm is applied to predict results of football and tennis matches. The theoretical performance guarantee turns out to be rather tight on these data sets, especially in the case of the more extensive tennis data.<|reference_end|>
arxiv
@article{vovk2007prediction, title={Prediction with expert advice for the Brier game}, author={Vladimir Vovk and Fedor Zhdanov}, journal={Journal of Machine Learning Research 10 (2009), 2413 - 2440}, year={2007}, archivePrefix={arXiv}, eprint={0710.0485}, primaryClass={cs.LG} }
vovk2007prediction
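For readers unfamiliar with the loss named in the abstract above, here is a minimal sketch of the Brier (quadratic) loss for a single prediction; the aggregating-algorithm machinery, learning rate and substitution function studied in the paper are not reproduced, and the forecast numbers are invented for the example:

```python
def brier_loss(prediction, outcome):
    """Quadratic (Brier) loss between a probability vector over the possible
    outcomes and the realised outcome, given as an index into that vector."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(prediction))

# A football match with outcomes (home win, draw, away win).
forecast = [0.55, 0.25, 0.20]
print(brier_loss(forecast, 0))  # home win occurred: loss is about 0.305
```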
arxiv-1306
0710.0510
Q-adic Transform revisited
<|reference_start|>Q-adic Transform revisited: We present an algorithm to perform a simultaneous modular reduction of several residues. This algorithm is applied to fast modular polynomial multiplication. The idea is to convert the $X$-adic representation of modular polynomials, with $X$ an indeterminate, to a $q$-adic representation where $q$ is an integer larger than the field characteristic. With some control on the different sizes involved, it is then possible to perform some of the $q$-adic arithmetic directly with machine integers or floating points. Depending also on the number of numerical operations performed, one can then convert back to the $q$-adic or $X$-adic representation and eventually mod out high residues. In this note we present a new version of both conversions: more tabulations and a way to reduce the number of divisions involved in the process are presented. The polynomial multiplication is then applied to arithmetic in small finite field extensions.<|reference_end|>
arxiv
@article{dumas2007q-adic, title={Q-adic Transform revisited}, author={Jean-Guillaume Dumas (LJK)}, journal={arXiv preprint arXiv:0710.0510}, year={2007}, archivePrefix={arXiv}, eprint={0710.0510}, primaryClass={cs.SC} }
dumas2007q-adic
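A minimal sketch of the basic X-adic to q-adic conversion described above (essentially Kronecker substitution): coefficients of polynomials over Z_p are packed into a single integer in base q, the integers are multiplied, and the digits are unpacked and reduced mod p. The function names and the choice of q are illustrative assumptions; the paper's actual contributions (extra tabulation and fewer divisions in the conversions) are not reproduced here:

```python
def poly_to_q_adic(coeffs, q):
    """Pack coefficients c0 + c1*X + c2*X^2 + ... into the integer sum(ci * q**i)."""
    value = 0
    for c in reversed(coeffs):
        value = value * q + c
    return value

def q_adic_to_poly(value, q, length):
    """Unpack an integer back into `length` base-q digits (the coefficients)."""
    coeffs = []
    for _ in range(length):
        value, digit = divmod(value, q)  # one division per recovered coefficient
        coeffs.append(digit)
    return coeffs

def polymul_mod_p(a, b, p):
    """Multiply two polynomials over Z_p through a q-adic packing of their coefficients."""
    n = len(a) + len(b) - 1
    # Every coefficient of the product is below min(len(a), len(b)) * (p-1)^2,
    # so this q prevents any digit from overflowing into its neighbour.
    q = min(len(a), len(b)) * (p - 1) ** 2 + 1
    product = poly_to_q_adic(a, q) * poly_to_q_adic(b, q)
    return [c % p for c in q_adic_to_poly(product, q, n)]

# (1 + 2X)(3 + X + 4X^2) = 3 + 7X + 6X^2 + 8X^3, reduced mod 5.
print(polymul_mod_p([1, 2], [3, 1, 4], 5))  # [3, 2, 1, 3]
```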
arxiv-1307
0710.0528
On the interaction between sharing and linearity
<|reference_start|>On the interaction between sharing and linearity: In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin-w which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin-w makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin-w and we lift it to two well-known abstractions of ShLin-w. Namely, to the classical Sharing X Lin abstract domain and to the more precise ShLin-2 abstract domain by Andy King. In the case of single binding substitutions, we obtain optimal abstract unification algorithms for such domains. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|>
arxiv
@article{amato2007on, title={On the interaction between sharing and linearity}, author={Gianluca Amato and Francesca Scozzari}, journal={Theory and Practice of Logic Programming, volume 10, issue 01, pp. 49-112, 2010}, year={2007}, doi={10.1017/S1471068409990160}, archivePrefix={arXiv}, eprint={0710.0528}, primaryClass={cs.PL cs.LO} }
amato2007on
arxiv-1308
0710.0531
The Problem of Localization in Networks of Randomly Deployed Nodes: Asymptotic and Finite Analysis, and Thresholds
<|reference_start|>The Problem of Localization in Networks of Randomly Deployed Nodes: Asymptotic and Finite Analysis, and Thresholds: We derive the probability that a randomly chosen NL-node over $S$ gets localized as a function of a variety of parameters. Then, we derive the probability that the whole network of NL-nodes over $S$ gets localized. In connection with the asymptotic thresholds, we show the presence of asymptotic thresholds on the network localization probability in two different scenarios. The first refers to dense networks, which arise when the domain $S$ is bounded and the densities of the two kinds of nodes tend to grow unboundedly. The second kind of threshold manifests itself when the considered domain increases but the number of nodes grows in such a way that the L-node density remains constant throughout the investigated domain. In this scenario, what matters is the minimum value of the maximum transmission range averaged over the fading process, denoted as $d_{max}$, above which the network of NL-nodes almost surely gets asymptotically localized.<|reference_end|>
arxiv
@article{daneshgaran2007the, title={The Problem of Localization in Networks of Randomly Deployed Nodes: Asymptotic and Finite Analysis, and Thresholds}, author={Fred Daneshgaran, Massimiliano Laddomada, Marina Mondin}, journal={arXiv preprint arXiv:0710.0531}, year={2007}, archivePrefix={arXiv}, eprint={0710.0531}, primaryClass={cs.DM cs.IT cs.NI math.IT} }
daneshgaran2007the
arxiv-1309
0710.0539
A Novel Solution to the ATT48 Benchmark Problem
<|reference_start|>A Novel Solution to the ATT48 Benchmark Problem: A solution to the benchmark ATT48 Traveling Salesman Problem (from the TSPLIB95 library) results from isolating the set of vertices into ten open-ended zones with nine lengthwise boundaries. In each zone, a minimum-length Hamiltonian Path (HP) is found for each combination of boundary vertices, leading to an approximation for the minimum-length Hamiltonian Cycle (HC). Determination of the optimal HPs for subsequent zones has the effect of automatically filtering out non-optimal HPs from earlier zones. Although the optimal HC for ATT48 involves only two crossing edges between all zones (with one exception), adding inter-zone edges can accommodate more complex problems.<|reference_end|>
arxiv
@article{ruffa2007a, title={A Novel Solution to the ATT48 Benchmark Problem}, author={Anthony A. Ruffa}, journal={arXiv preprint arXiv:0710.0539}, year={2007}, archivePrefix={arXiv}, eprint={0710.0539}, primaryClass={cs.DS cs.CC} }
ruffa2007a
arxiv-1310
0710.0550
Community Detection in Complex Networks by Dynamical Simplex Evolution
<|reference_start|>Community Detection in Complex Networks by Dynamical Simplex Evolution: We benchmark the dynamical simplex evolution (DSE) method with several of the currently available algorithms to detect communities in complex networks by comparing the fraction of correctly identified nodes for different levels of ``fuzziness'' of random networks composed of well defined communities. The potential benefits of the DSE method to detect hierarchical sub structures in complex networks are discussed.<|reference_end|>
arxiv
@article{gudkov2007community, title={Community Detection in Complex Networks by Dynamical Simplex Evolution}, author={V. Gudkov and V. Montealegre}, journal={arXiv preprint arXiv:0710.0550}, year={2007}, doi={10.1103/PhysRevE.78.016113}, archivePrefix={arXiv}, eprint={0710.0550}, primaryClass={cond-mat.dis-nn cs.NI physics.soc-ph} }
gudkov2007community
arxiv-1311
0710.0556
A Game Theoretic Approach to Quantum Information
<|reference_start|>A Game Theoretic Approach to Quantum Information: This work is an application of game theory to quantum information. In a state estimation, we are given observations distributed according to an unknown distribution $P_{\theta}$ (associated with award $Q$), which Nature chooses at random from the set $\{P_{\theta}: \theta \in \Theta \}$ according to a known prior distribution $\mu$ on $\Theta$; we produce an estimate $M$ for the unknown distribution $P_{\theta}$ and, in the end, suffer a relative entropy cost $\mathcal{R}(P;M)$ measuring the quality of this estimate, so the whole utility is taken as $P \cdot Q -\mathcal{R}(P; M)$. In an introduction to strategic games, a sufficient condition for the minimax theorem is obtained; the estimation problem is explored in the framework of game theory, and from the viewpoint of the convex conjugate we reach a new approach to quantum relative entropy, and correspondingly to quantum mutual entropy and quantum channel capacity, which is more general in the sense that it does not require Radon-Nikodym (RN) derivatives. Also the monotonicity of quantum relative entropy and the additivity of quantum channel capacity are investigated.<|reference_end|>
arxiv
@article{dai2007a, title={A Game Theoretic Approach to Quantum Information}, author={Xianhua Dai and V. P. Belavkin}, journal={arXiv preprint arXiv:0710.0556}, year={2007}, archivePrefix={arXiv}, eprint={0710.0556}, primaryClass={quant-ph cs.GT cs.IT math.IT} }
dai2007a
arxiv-1312
0710.0564
TP Decoding
<|reference_start|>TP Decoding: `Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It has been recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the `tree uniqueness threshold.' It can be regarded as a clever method for pruning the belief propagation computation tree, in such a way as to exactly account for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning allows one to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method.<|reference_end|>
arxiv
@article{lu2007tp, title={TP Decoding}, author={Yi Lu, Cyril Measson and Andrea Montanari}, journal={See also: 45th Annual Allerton Conference on Communication, Control, and Computing, Monticello, USA, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0710.0564}, primaryClass={cs.IT math.IT} }
lu2007tp
arxiv-1313
0710.0658
Detailed Network Measurements Using Sparse Graph Counters: The Theory
<|reference_start|>Detailed Network Measurements Using Sparse Graph Counters: The Theory: Measuring network flow sizes is important for tasks like accounting/billing, network forensics and security. Per-flow accounting is considered hard because it requires that many counters be updated at a very high speed; however, the large fast memories needed for storing the counters are prohibitively expensive. Therefore, current approaches aim to obtain approximate flow counts; that is, to detect large elephant flows and then measure their sizes. Recently the authors and their collaborators have developed [1] a novel method for per-flow traffic measurement that is fast, highly memory efficient and accurate. At the core of this method is a novel counter architecture called "counter braids.'' In this paper, we analyze the performance of the counter braid architecture under a Maximum Likelihood (ML) flow size estimation algorithm and show that it is optimal; that is, the number of bits needed to store the size of a flow matches the entropy lower bound. While the ML algorithm is optimal, it is too complex to implement. In [1] we have developed an easy-to-implement and efficient message passing algorithm for estimating flow sizes.<|reference_end|>
arxiv
@article{lu2007detailed, title={Detailed Network Measurements Using Sparse Graph Counters: The Theory}, author={Yi Lu, Andrea Montanari and Balaji Prabhakar}, journal={arXiv preprint arXiv:0710.0658}, year={2007}, archivePrefix={arXiv}, eprint={0710.0658}, primaryClass={cs.NI cs.IT math.IT} }
lu2007detailed
arxiv-1314
0710.0672
Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions
<|reference_start|>Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions: The field of algorithmic self-assembly is concerned with the design and analysis of self-assembly systems from a computational perspective, that is, from the perspective of mathematical problems whose study may give insight into the natural processes through which elementary objects self-assemble into more complex ones. One of the main problems of algorithmic self-assembly is the minimum tile set problem (MTSP), which asks for a collection of types of elementary objects (called tiles) to be found for the self-assembly of an object having a pre-established shape. Such a collection is to be as concise as possible, thus minimizing supply diversity, while satisfying a set of stringent constraints having to do with the termination and other properties of the self-assembly process from its tile types. We present a study of what we think is the first practical approach to MTSP. Our study starts with the introduction of an evolutionary heuristic to tackle MTSP and includes results from extensive experimentation with the heuristic on the self-assembly of simple objects in two and three dimensions. The heuristic we introduce combines classic elements from the field of evolutionary computation with a problem-specific variant of Pareto dominance into a multi-objective approach to MTSP.<|reference_end|>
arxiv
@article{vieira2007optimization, title={Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions}, author={Fabio R. J. Vieira, Valmir C. Barbosa}, journal={Natural Computing 10 (2011), 551-581}, year={2007}, doi={10.1007/s11047-010-9209-x}, archivePrefix={arXiv}, eprint={0710.0672}, primaryClass={cs.NE} }
vieira2007optimization
arxiv-1315
0710.0736
Colour image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution
<|reference_start|>Colour image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution: We propose a new method for the numerical solution of a PDE-driven model for colour image segmentation and give numerical examples of the results. The method combines the vector-valued Allen-Cahn phase field equation with initial data fitting terms. This method is known to be closely related to the Mumford-Shah problem and the level set segmentation by Chan and Vese. Our numerical solution is performed using a multigrid splitting of a finite element space, thereby producing an efficient and robust method for the segmentation of large images.<|reference_end|>
arxiv
@article{kay2007colour, title={Colour image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution}, author={David A Kay (Oxford University Computational Laboratory), Alessandro Tomasi (University of Sussex)}, journal={IEEE Trans. Im. Proc. 18.10 (2009)}, year={2007}, doi={10.1109/TIP.2009.2026678}, archivePrefix={arXiv}, eprint={0710.0736}, primaryClass={cs.CV cs.NA} }
kay2007colour
arxiv-1316
0710.0748
A Fast Heuristic Algorithm Based on Verification and Elimination Methods for Maximum Clique Problem
<|reference_start|>A Fast Heuristic Algorithm Based on Verification and Elimination Methods for Maximum Clique Problem: A clique in an undirected graph G = (V, E) is a subset V' ⊆ V of vertices, each pair of which is connected by an edge in E. The clique problem is the optimization problem of finding a clique of maximum size in a graph. The clique problem is NP-Complete. We have succeeded in developing a fast algorithm for the maximum clique problem by employing the method of verification and elimination. For a graph of size N there are 2^N subgraphs, which may be cliques, and hence verifying all of them will take a long time. The idea is to eliminate a large number of subgraphs that cannot be cliques and to verify only the remaining subgraphs. This heuristic algorithm runs in polynomial time and executes successfully for several examples when applied to random graphs and DIMACS benchmark graphs.<|reference_end|>
arxiv
@article{p2007a, title={A Fast Heuristic Algorithm Based on Verification and Elimination Methods for Maximum Clique Problem}, author={Murali Krishna P, Sabu .M Thampi}, journal={arXiv preprint arXiv:0710.0748}, year={2007}, archivePrefix={arXiv}, eprint={0710.0748}, primaryClass={cs.DM cs.CC} }
p2007a
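A minimal sketch of the "verification" half of the approach described above: checking that a candidate vertex set really is a clique, paired with a simple greedy growth heuristic as a baseline. This is not the paper's elimination method; the graph and function names are illustrative assumptions:

```python
from itertools import combinations

def is_clique(vertices, edges):
    """Verify that every pair of candidate vertices is joined by an edge."""
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vertices, 2))

def greedy_clique(n, edges):
    """Grow a clique by keeping each vertex that is adjacent to all members so far."""
    clique = []
    for v in range(n):
        if all((v, u) in edges or (u, v) in edges for u in clique):
            clique.append(v)
    return clique

edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)}
candidate = greedy_clique(5, edges)
print(candidate, is_clique(candidate, edges))  # [0, 1, 2, 3] True
```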
arxiv-1317
0710.0789
Wireless Local Area Networks with Multiple-Packet Reception Capability
<|reference_start|>Wireless Local Area Networks with Multiple-Packet Reception Capability: Thanks to its simplicity and cost efficiency, wireless local area network (WLAN) enjoys unique advantages in providing high-speed and low-cost wireless services in hot spots and indoor environments. Traditional WLAN medium-access-control (MAC) protocols assume that only one station can transmit at a time: simultaneous transmissions of more than one station causes the destruction of all packets involved. By exploiting recent advances in PHY-layer multiuser detection (MUD) techniques, it is possible for a receiver to receive multiple packets simultaneously. This paper argues that such multipacket reception (MPR) capability can greatly enhance the capacity of future WLANs. In addition, it provides the MAC-layer and PHY-layer designs needed to achieve the improved capacity. First, to demonstrate MUD/MPR as a powerful capacity-enhancement technique, we prove a "super-linearity" result, which states that the system throughput per unit cost increases as the MPR capability increases. Second, we show that the commonly deployed binary exponential backoff (BEB) algorithm in today's WLAN MAC may not be optimal in an MPR system, and that the optimal backoff factor increases with the MPR capability: the number of packets that can be received simultaneously. Third, based on the above insights, we design a joint MAC-PHY layer protocol for an IEEE 802.11-like WLAN that incorporates advanced PHY-layer blind detection and MUD techniques to implement MPR<|reference_end|>
arxiv
@article{zhang2007wireless, title={Wireless Local Area Networks with Multiple-Packet Reception Capability}, author={Ying Jun Zhang, Peng Xuan Zheng, Soung Chang Liew}, journal={arXiv preprint arXiv:0710.0789}, year={2007}, archivePrefix={arXiv}, eprint={0710.0789}, primaryClass={cs.PF cs.NI} }
zhang2007wireless
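A toy calculation (not the paper's MAC/PHY analysis) that conveys why multipacket reception helps: in a slotted random-access model where each of n stations transmits with probability p and a slot delivers k packets whenever k <= M stations transmit, the expected per-slot throughput follows from the binomial distribution. The model and parameter values below are illustrative assumptions:

```python
from math import comb

def mpr_throughput(n, p, M):
    """Expected packets delivered per slot when up to M simultaneous
    transmissions can all be decoded (toy slotted random-access model)."""
    return sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(1, M + 1))

n = 20
for M in (1, 2, 4):
    # Sweep the transmission probability and keep the best throughput.
    best, best_p = max((mpr_throughput(n, p / 100, M), p / 100) for p in range(1, 100))
    print(f"M={M}: best throughput {best:.3f} packets/slot at p={best_p:.2f}")
```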
arxiv-1318
0710.0805
On the Satisfiability Threshold and Clustering of Solutions of Random 3-SAT Formulas
<|reference_start|>On the Satisfiability Threshold and Clustering of Solutions of Random 3-SAT Formulas: We study the structure of satisfying assignments of a random 3-SAT formula. In particular, we show that a random formula of density 4.453 or higher almost surely has no non-trivial "core" assignments. Core assignments are certain partial assignments that can be extended to satisfying assignments, and have been studied recently in connection with the Survey Propagation heuristic for random SAT. Their existence implies the presence of clusters of solutions, and they have been shown to exist with high probability below the satisfiability threshold for k-SAT with k>8, by Achlioptas and Ricci-Tersenghi, STOC 2006. Our result implies that either this does not hold for 3-SAT or the threshold density for satisfiability in 3-SAT lies below 4.453. The main technical tool that we use is a novel simple application of the first moment method.<|reference_end|>
arxiv
@article{maneva2007on, title={On the Satisfiability Threshold and Clustering of Solutions of Random 3-SAT Formulas}, author={Elitza Maneva and Alistair Sinclair}, journal={arXiv preprint arXiv:0710.0805}, year={2007}, archivePrefix={arXiv}, eprint={0710.0805}, primaryClass={cs.CC} }
maneva2007on
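To make the objects in the abstract above concrete, here is a small sketch (illustrative only, and obviously far from the paper's asymptotic first-moment analysis) that generates a random 3-SAT formula at a chosen clause density and brute-forces its satisfying assignments for a small number of variables:

```python
import random
from itertools import product

def random_3sat(n_vars, density, rng=random):
    """Random 3-SAT formula with round(density * n_vars) clauses.
    A literal is a pair (variable index, required truth value)."""
    formula = []
    for _ in range(round(density * n_vars)):
        variables = rng.sample(range(n_vars), 3)
        formula.append([(v, rng.choice([True, False])) for v in variables])
    return formula

def satisfying_assignments(n_vars, formula):
    """Brute-force enumeration; only sensible for small n_vars."""
    return [assignment for assignment in product([False, True], repeat=n_vars)
            if all(any(assignment[v] == sign for v, sign in clause) for clause in formula)]

random.seed(0)
formula = random_3sat(12, 4.453)   # 12 variables at density 4.453, i.e. 53 clauses
print(len(satisfying_assignments(12, formula)), "satisfying assignments out of", 2 ** 12)
```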
arxiv-1319
0710.0811
Band Unfoldings and Prismatoids: A Counterexample
<|reference_start|>Band Unfoldings and Prismatoids: A Counterexample: This note shows that the hope expressed in [ADL+07]--that the new algorithm for edge-unfolding any polyhedral band without overlap might lead to an algorithm for unfolding any prismatoid without overlap--cannot be realized. A prismatoid is constructed whose sides constitute a nested polyhedral band, with the property that every placement of the prismatoid top face overlaps with the band unfolding.<|reference_end|>
arxiv
@article{o'rourke2007band, title={Band Unfoldings and Prismatoids: A Counterexample}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:0710.0811}, year={2007}, number={Smith Computer Science 086}, archivePrefix={arXiv}, eprint={0710.0811}, primaryClass={cs.CG} }
o'rourke2007band
arxiv-1320
0710.0824
Two algorithms in search of a type system
<|reference_start|>Two algorithms in search of a type system: The authors' ATR programming formalism is a version of call-by-value PCF under a complexity-theoretically motivated type system. ATR programs run in type-2 polynomial-time and all standard type-2 basic feasible functionals are ATR-definable (ATR types are confined to levels 0, 1, and 2). A limitation of the original version of ATR is that the only directly expressible recursions are tail-recursions. Here we extend ATR so that a broad range of affine recursions are directly expressible. In particular, the revised ATR can fairly naturally express the classic insertion- and selection-sort algorithms, thus overcoming a sticking point of most prior implicit-complexity-based formalisms. The paper's main work is in refining the original time-complexity semantics for ATR to show that these new recursion schemes do not lead out of the realm of feasibility.<|reference_end|>
arxiv
@article{danner2007two, title={Two algorithms in search of a type system}, author={Norman Danner and James S. Royer}, journal={arXiv preprint arXiv:0710.0824}, year={2007}, archivePrefix={arXiv}, eprint={0710.0824}, primaryClass={cs.LO cs.PL} }
danner2007two
arxiv-1321
0710.0842
Syst\`emes interactifs sensibles aux \'emotions : architecture logicielle
<|reference_start|>Syst\`emes interactifs sensibles aux \'emotions : architecture logicielle: We propose a software architecture for interactive systems which allows integrating the user's emotion. Emotion can be involved in interaction at several levels. In our application case - ballet dance - emotion is explicitly manipulated by the interactive system to produce emotion-wise output. Our architecture model to develop emotion-wise applications is based on the PAC-Amodeus model. We add a branch to this model, divided into three components: data capture, analysis and cue extraction, and finally interpretation of those cues. We show the different data flows between this architecture's components depending on the entry point of the emotion branch within the system. We then illustrate our model by describing our application case: capturing a ballet dancer's movement to extract the emotions he expresses and using these emotions to generate graphical content that is displayed on stage.<|reference_end|>
arxiv
@article{clay2007syst\`emes, title={Syst\`emes interactifs sensibles aux \'emotions : architecture logicielle}, author={Alexis Clay (LIPSI)}, journal={arXiv preprint arXiv:0710.0842}, year={2007}, archivePrefix={arXiv}, eprint={0710.0842}, primaryClass={cs.HC} }
clay2007syst\`emes
arxiv-1322
0710.0847
Emotion capture based on body postures and movements
<|reference_start|>Emotion capture based on body postures and movements: In this paper we present a preliminary study for designing interactive systems that are sensitive to human emotions based on body movements. To do so, we first review the literature on the various approaches for defining and characterizing human emotions. After justifying the adopted characterization space for emotions, we then focus on the movement characteristics that must be captured by the system to be able to recognize human emotions.<|reference_end|>
arxiv
@article{clay2007emotion, title={Emotion capture based on body postures and movements}, author={Alexis Clay (LIPSI), Nadine Couture (LIPSI), Laurence Nigay (CLIPS - IMAG)}, journal={Proceedings of the International Conference on Computing and e-systems 2007 (TIGERA'07), Hammamet : Tunisie (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0710.0847}, primaryClass={cs.HC} }
clay2007emotion
arxiv-1323
0710.0859
Assistance orale \`a la recherche visuelle - \'etude exp\'erimentale de l'apport d'indications spatiales \`a la d\'etection de cibles
<|reference_start|>Assistance orale \`a la recherche visuelle - \'etude exp\'erimentale de l'apport d'indications spatiales \`a la d\'etection de cibles: This paper describes an experimental study that aims at assessing the actual contribution of voice system messages to visual search efficiency and comfort. Messages which include spatial information on the target location are meant to support search for familiar targets in collections of photographs (30 per display). 24 participants carried out 240 visual search tasks in two conditions differing from each other in initial target presentation only. The isolated target was presented either simultaneously with an oral message (multimodal presentation, MP), or without any message (visual presentation, VP). Averaged target selection times were thrice longer and errors almost twice more frequent in the VP condition than in the MP condition. In addition, the contribution of spatial messages to visual search rapidity and accuracy was influenced by display layout and task difficulty. Most results are statistically significant. Besides, subjective judgments indicate that oral messages were well accepted.<|reference_end|>
arxiv
@article{kieffer2007assistance, title={Assistance orale \`a la recherche visuelle - \'etude exp\'erimentale de l'apport d'indications spatiales \`a la d\'etection de cibles}, author={Suzanne Kieffer (INRIA Rocquencourt / INRIA Lorraine - LORIA), No\"elle Carbonell (INRIA Rocquencourt / INRIA Lorraine - LORIA)}, journal={Revue d'Interaction Homme-Machine 7, 1 (2006) 30 p}, year={2007}, archivePrefix={arXiv}, eprint={0710.0859}, primaryClass={cs.HC} }
kieffer2007assistance
arxiv-1324
0710.0865
Secrecy Capacity of the Wiretap Channel with Noisy Feedback
<|reference_start|>Secrecy Capacity of the Wiretap Channel with Noisy Feedback: In this work, the role of noisy feedback in enhancing the secrecy capacity of the wiretap channel is investigated. A model is considered in which the feed-forward and feedback signals share the same noisy channel. More specifically, a discrete memoryless modulo-additive channel with a full-duplex destination node is considered first, and it is shown that a judicious use of feedback increases the perfect secrecy capacity to the capacity of the source-destination channel in the absence of the wiretapper. In the achievability scheme, the feedback signal corresponds to a private key, known only to the destination. Then a half-duplex system is considered, for which a novel feedback technique that always achieves a positive perfect secrecy rate (even when the source-wiretapper channel is less noisy than the source-destination channel) is proposed. These results hinge on the modulo-additive property of the channel, which is exploited by the destination to perform encryption over the channel without revealing its key to the source.<|reference_end|>
arxiv
@article{lai2007secrecy, title={Secrecy Capacity of the Wiretap Channel with Noisy Feedback}, author={Lifeng Lai, Hesham El Gamal and H. Vincent Poor}, journal={arXiv preprint arXiv:0710.0865}, year={2007}, archivePrefix={arXiv}, eprint={0710.0865}, primaryClass={cs.IT cs.CR math.IT} }
lai2007secrecy
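A toy numerical illustration of the idea sketched in the abstract above, not the paper's coding scheme: over a modulo-additive channel, the destination can inject a private random key that it alone can later remove, so the eavesdropper only ever sees key-masked symbols (a one-time-pad effect). The real model has channel noise and shares one noisy channel between feed-forward and feedback, which this sketch ignores; the alphabet size and message are invented:

```python
import random

q = 16                                   # modulo-additive alphabet size (assumed)
rng = random.Random(1)
message = [rng.randrange(q) for _ in range(8)]

# The destination generates a private key and adds it onto the channel,
# so the source's symbols and the key superpose modulo q.
key = [rng.randrange(q) for _ in message]
observed = [(m + k) % q for m, k in zip(message, key)]   # what the eavesdropper sees

# Only the destination knows the key, so only it can strip the key off.
decoded = [(y - k) % q for y, k in zip(observed, key)]

print(decoded == message)   # True
print(observed)             # uniformly distributed, independent of the message
```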
arxiv-1325
0710.0871
Spreadsheets in Clinical Medicine
<|reference_start|>Spreadsheets in Clinical Medicine: There is overwhelming evidence that the continued and widespread use of untested spreadsheets in business gives rise to regular, significant and unexpected financial losses. Whilst this is worrying, it is perhaps a relatively minor concern compared with the risks arising from the use of poorly constructed and/or untested spreadsheets in medicine, a practice that is already occurring. This article is intended as a warning that the use of poorly constructed and/or untested spreadsheets in clinical medicine cannot be tolerated. It supports this warning by reporting on potentially serious weaknesses found while testing a limited number of publicly available clinical spreadsheets.<|reference_end|>
arxiv
@article{croll2007spreadsheets, title={Spreadsheets in Clinical Medicine}, author={Grenville J. Croll, Raymond J. Butler}, journal={Proc. European Spreadsheet Risks Int. Grp. 2006 7-16}, year={2007}, archivePrefix={arXiv}, eprint={0710.0871}, primaryClass={cs.CY} }
croll2007spreadsheets
arxiv-1326
0710.0900
A New Achievability Scheme for the Relay Channel
<|reference_start|>A New Achievability Scheme for the Relay Channel: In this paper, we propose a new coding scheme for the general relay channel. This coding scheme is in the form of a block Markov code. The transmitter uses a superposition Markov code. The relay compresses the received signal and maps the compressed version of the received signal into a codeword conditioned on the codeword of the previous block. The receiver performs joint decoding after it has received all of the B blocks. We show that this coding scheme can be viewed as a generalization of the well-known Compress-And-Forward (CAF) scheme proposed by Cover and El Gamal. Our coding scheme provides options for preserving the correlation between the channel inputs of the transmitter and the relay, which is not possible in the CAF scheme. Thus, our proposed scheme may potentially yield a larger achievable rate than the CAF scheme.<|reference_end|>
arxiv
@article{kang2007a, title={A New Achievability Scheme for the Relay Channel}, author={Wei Kang and Sennur Ulukus}, journal={arXiv preprint arXiv:0710.0900}, year={2007}, archivePrefix={arXiv}, eprint={0710.0900}, primaryClass={cs.IT math.IT} }
kang2007a
arxiv-1327
0710.0903
Control and Monitoring System for Modular Wireless Robot
<|reference_start|>Control and Monitoring System for Modular Wireless Robot: We introduce our concept of a modular wireless robot consisting of three main modules: a main unit, a data acquisition module and a data processing module. We have developed a generic prototype with an integrated control and monitoring system to enhance its flexibility, and to enable simple operation through a web-based interface accessible wirelessly. In the present paper, we focus on the microcontroller-based hardware that enables data acquisition and remote mechanical control.<|reference_end|>
arxiv
@article{firmansyah2007control, title={Control and Monitoring System for Modular Wireless Robot}, author={I. Firmansyah, B. Hermanto and L.T. Handoko}, journal={arXiv preprint arXiv:0710.0903}, year={2007}, archivePrefix={arXiv}, eprint={0710.0903}, primaryClass={cs.RO} }
firmansyah2007control
arxiv-1328
0710.0925
Degeneracy of Angular Voronoi Diagram
<|reference_start|>Degeneracy of Angular Voronoi Diagram: The angular Voronoi diagram was introduced by Asano et al. as fundamental research for mesh generation. In an angular Voronoi diagram, the edges are curves of degree three. From the viewpoint of computational robustness, we need to treat these curves carefully, because they might have singularities. We enumerate all the possible types of curves that appear as an edge of an angular Voronoi diagram, which tells us what kinds of degeneracy are possible and shows the necessity of considering singularities for computational robustness.<|reference_end|>
arxiv
@article{muta2007degeneracy, title={Degeneracy of Angular Voronoi Diagram}, author={Hidetoshi Muta and Kimikazu Kato}, journal={arXiv preprint arXiv:0710.0925}, year={2007}, archivePrefix={arXiv}, eprint={0710.0925}, primaryClass={cs.CG} }
muta2007degeneracy
arxiv-1329
0710.0937
Multichannel algorithm based on generalized positional numeration system
<|reference_start|>Multichannel algorithm based on generalized positional numeration system: This report is devoted to an introduction to a multichannel algorithm based on generalized positional numeration systems (GPN). Internal, external and mixed accounts are introduced. The concept of the GPN and its classification as a decomposition of an integer into a sum of integers are discussed. A realization of the multichannel algorithm on the basis of the GPN is introduced. In particular, some properties of the Fibonacci multichannel algorithm are discussed.<|reference_end|>
arxiv
@article{lavrenov2007multichannel, title={Multichannel algorithm based on generalized positional numeration system}, author={Alexandre Lavrenov}, journal={arXiv preprint arXiv:0710.0937}, year={2007}, archivePrefix={arXiv}, eprint={0710.0937}, primaryClass={cs.IT math.IT} }
lavrenov2007multichannel
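Since the abstract above singles out a Fibonacci instance, here is a minimal sketch of one familiar generalized positional numeration: the greedy (Zeckendorf) decomposition of an integer over the Fibonacci base. This only illustrates what a GPN digit expansion looks like; it is not the multichannel algorithm itself, and the function name is invented:

```python
def zeckendorf(n):
    """Greedy decomposition of n over the Fibonacci base: digits are 0/1 and
    no two consecutive Fibonacci numbers are used (Zeckendorf representation)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    base = fibs[:-1]                     # Fibonacci positional base 1, 2, 3, 5, 8, ...
    digits, remainder = [], n
    for f in reversed(base):
        if f <= remainder:
            digits.append(1)
            remainder -= f
        else:
            digits.append(0)
    return digits, base

digits, base = zeckendorf(100)
print(digits)                                                # 100 = 89 + 8 + 3
print(sum(d * f for d, f in zip(digits, reversed(base))))    # 100
```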
arxiv-1330
0710.1001
Connectivity of Random 1-Dimensional Networks
<|reference_start|>Connectivity of Random 1-Dimensional Networks: An important problem in wireless sensor networks is to find the minimal number of randomly deployed sensors making a network connected with a given probability. In practice sensors are often deployed one by one along a trajectory of a vehicle, so it is natural to assume that arbitrary probability density functions of distances between successive sensors in a segment are given. The paper computes the probability of connectivity and coverage of 1-dimensional networks and gives estimates for a minimal number of sensors for important distributions.<|reference_end|>
arxiv
@article{kurlin2007connectivity, title={Connectivity of Random 1-Dimensional Networks}, author={V. Kurlin, L. Mihaylova}, journal={arXiv preprint arXiv:0710.1001}, year={2007}, archivePrefix={arXiv}, eprint={0710.1001}, primaryClass={cs.IT cs.DS math.IT stat.AP} }
kurlin2007connectivity
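A quick Monte Carlo sketch of the quantity analysed above, for intuition only (the paper's results are analytical): with sensors deployed one by one so that successive gaps are i.i.d. with a given density, the 1-dimensional network is connected exactly when every gap is at most the communication range. The exponential gap distribution, range and sensor count below are assumptions made for the example:

```python
import math
import random

def connectivity_probability(n_sensors, comm_range, gap_sampler, trials=100_000, rng=random):
    """Fraction of trials in which all n_sensors - 1 successive gaps are <= comm_range."""
    connected = sum(
        all(gap_sampler(rng) <= comm_range for _ in range(n_sensors - 1))
        for _ in range(trials)
    )
    return connected / trials

# Exponentially distributed inter-sensor distances with mean 1, range r = 3.
estimate = connectivity_probability(20, 3.0, lambda rng: rng.expovariate(1.0))
exact = (1 - math.exp(-3.0)) ** 19        # P(gap <= r) ** (n - 1) for i.i.d. gaps
print(round(estimate, 3), round(exact, 3))
```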
arxiv-1331
0710.1007
Two representation theorems of three-valued structures by means of binary relations
<|reference_start|>Two representation theorems of three-valued structures by means of binary relations: The results here presented are a continuation of the algebraic research line which attempts to find properties of multiple-valued systems based on a poset of two agents. The aim of this paper is to exhibit two relationships between some three-valued structures and binary relations. The established connections are so narrow that two representation theorems are obtained.<|reference_end|>
arxiv
@article{iturrioz2007two, title={Two representation theorems of three-valued structures by means of binary relations}, author={Luisa Iturrioz}, journal={arXiv preprint arXiv:0710.1007}, year={2007}, archivePrefix={arXiv}, eprint={0710.1007}, primaryClass={cs.DM} }
iturrioz2007two
arxiv-1332
0710.1149
Z2Z4-linear codes: generator matrices and duality
<|reference_start|>Z2Z4-linear codes: generator matrices and duality: A code ${\cal C}$ is $\Z_2\Z_4$-additive if the set of coordinates can be partitioned into two subsets $X$ and $Y$ such that the punctured code of ${\cal C}$ by deleting the coordinates outside $X$ (respectively, $Y$) is a binary linear code (respectively, a quaternary linear code). In this paper $\Z_2\Z_4$-additive codes are studied. Their corresponding binary images, via the Gray map, are $\Z_2\Z_4$-linear codes, which seem to be a very distinguished class of binary group codes. As for binary and quaternary linear codes, for these codes the fundamental parameters are found and standard forms for generator and parity check matrices are given. For this, the appropriate inner product is deduced and the concept of duality for $\Z_2\Z_4$-additive codes is defined. Moreover, the parameters of the dual codes are computed. Finally, some conditions for self-duality of $\Z_2\Z_4$-additive codes are given.<|reference_end|>
arxiv
@article{borges2007z2z4-linear, title={Z2Z4-linear codes: generator matrices and duality}, author={J. Borges, C. Fernandez, J. Pujol, J. Rifa, M. Villanueva}, journal={arXiv preprint arXiv:0710.1149}, year={2007}, archivePrefix={arXiv}, eprint={0710.1149}, primaryClass={cs.IT cs.DM math.CO math.IT} }
borges2007z2z4-linear
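For concreteness, a minimal sketch of the Gray map step mentioned above: the binary image of a Z2Z4 vector keeps the binary (X) coordinates unchanged and replaces each quaternary (Y) coordinate by its image under the usual Gray map 0->00, 1->01, 2->11, 3->10. The example vector is invented; the codes, duality and generator-matrix standard forms from the paper are not reproduced:

```python
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # usual Gray map on Z4

def gray_image(z2_part, z4_part):
    """Binary image of a Z2Z4 vector: Z2 coordinates are copied as they are,
    each Z4 coordinate is replaced by its two-bit Gray image."""
    image = list(z2_part)
    for symbol in z4_part:
        image.extend(GRAY[symbol % 4])
    return tuple(image)

# X-part (binary) of length 2 and Y-part (quaternary) of length 3.
print(gray_image((1, 0), (0, 3, 2)))   # (1, 0, 0, 0, 1, 0, 1, 1)
```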
arxiv-1333
0710.1153
Verification of Ptime Reducibility for system F Terms: Type Inference in Dual Light Affine Logic
<|reference_start|>Verification of Ptime Reducibility for system F Terms: Type Inference in Dual Light Affine Logic: In a previous work Baillot and Terui introduced Dual light affine logic (DLAL) as a variant of Light linear logic suitable for guaranteeing complexity properties on lambda calculus terms: all typable terms can be evaluated in polynomial time by beta reduction and all Ptime functions can be represented. In the present work we address the problem of typing lambda-terms in second-order DLAL. For that we give a procedure which, starting with a term typed in system F, determines whether it is typable in DLAL and outputs a concrete typing if there exists any. We show that our procedure can be run in time polynomial in the size of the original Church typed system F term.<|reference_end|>
arxiv
@article{atassi2007verification, title={Verification of Ptime Reducibility for system F Terms: Type Inference in Dual Light Affine Logic}, author={Vincent Atassi, Patrick Baillot, Kazushige Terui}, journal={Logical Methods in Computer Science, Volume 3, Issue 4 (November 15, 2007) lmcs:1234}, year={2007}, doi={10.2168/LMCS-3(4:10)2007}, archivePrefix={arXiv}, eprint={0710.1153}, primaryClass={cs.LO cs.CC} }
atassi2007verification
arxiv-1334
0710.1182
Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels
<|reference_start|>Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels: We solve the problem of designing powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Unfortunately, optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding. To overcome this limitation, we then introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block-lengths. This family competes with multiplexed parallel turbo codes suitable for nonergodic channels and recently reported in the literature.<|reference_end|>
arxiv
@article{boutros2007low-density, title={Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels}, author={Joseph J. Boutros, Albert Guillen i Fabregas, Ezio Biglieri and Gilles Zemor}, journal={arXiv preprint arXiv:0710.1182}, year={2007}, doi={10.1109/TIT.2010.2053890}, archivePrefix={arXiv}, eprint={0710.1182}, primaryClass={cs.IT math.IT} }
boutros2007low-density
arxiv-1335
0710.1190
Power Efficient Scheduling under Delay Constraints over Multi-user Wireless Channels
<|reference_start|>Power Efficient Scheduling under Delay Constraints over Multi-user Wireless Channels: In this paper, we consider the problem of power efficient uplink scheduling in a Time Division Multiple Access (TDMA) system over a fading wireless channel. The objective is to minimize the power expenditure of each user subject to satisfying individual user delay constraints. We make the practical assumption that the system statistics are unknown, i.e., the probability distributions of the user arrivals and channel states are unknown. The problem has the structure of a Constrained Markov Decision Problem (CMDP). Determining an optimal policy for the CMDP faces the problems of state space explosion and unknown system statistics. To tackle the problem of state space explosion, we suggest determining the transmission rate of a particular user in each slot based on its channel condition and buffer occupancy only. The rate allocation algorithm for a particular user is a learning algorithm that learns about the buffer occupancy and channel states of that user during system execution and thus addresses the issue of unknown system statistics. Once the rate of each user is determined, the proposed algorithm schedules the user with the best rate. Our simulations within an IEEE 802.16 system demonstrate that the algorithm is indeed able to satisfy the user specified delay constraints. We compare the performance of our algorithm with the well known M-LWDF algorithm. Moreover, we demonstrate that the power expended by the users under our algorithm is quite low.<|reference_end|>
arxiv
@article{salodkar2007power, title={Power Efficient Scheduling under Delay Constraints over Multi-user Wireless Channels}, author={Nitin Salodkar, Abhay Karandikar and Vivek S. Borkar}, journal={arXiv preprint arXiv:0710.1190}, year={2007}, archivePrefix={arXiv}, eprint={0710.1190}, primaryClass={cs.NI cs.MA} }
salodkar2007power
arxiv-1336
0710.1203
Semantic distillation: a method for clustering objects by their contextual specificity
<|reference_start|>Semantic distillation: a method for clustering objects by their contextual specificity: Techniques for data-mining, latent semantic analysis, contextual search of databases, etc. have long ago been developed by computer scientists working on information retrieval (IR). Experimental scientists, from all disciplines, having to analyse large collections of raw experimental data (astronomical, physical, biological, etc.) have developed powerful methods for their statistical analysis and for clustering, categorising, and classifying objects. Finally, physicists have developed a theory of quantum measurement, unifying the logical, algebraic, and probabilistic aspects of queries into a single formalism. The purpose of this paper is twofold: first to show that when formulated at an abstract level, problems from IR, from statistical data analysis, and from physical measurement theories are very similar and hence can profitably be cross-fertilised, and, secondly, to propose a novel method of fuzzy hierarchical clustering, termed \textit{semantic distillation} -- strongly inspired from the theory of quantum measurement --, we developed to analyse raw data coming from various types of experiments on DNA arrays. We illustrate the method by analysing DNA arrays experiments and clustering the genes of the array according to their specificity.<|reference_end|>
arxiv
@article{sierocinski2007semantic, title={Semantic distillation: a method for clustering objects by their contextual specificity}, author={Thomas Sierocinski (IRMAR), Anthony Le B\'echec, Nathalie Th\'eret, Dimitri Petritis (IRMAR)}, journal={arXiv preprint arXiv:0710.1203}, year={2007}, number={2007-58}, archivePrefix={arXiv}, eprint={0710.1203}, primaryClass={math.PR cs.DB math.ST q-bio.QM stat.ML stat.TH} }
sierocinski2007semantic
arxiv-1337
0710.1208
Diagrammatic Inference
<|reference_start|>Diagrammatic Inference: Diagrammatic logics were introduced in 2002, with emphasis on the notions of specifications and models. In this paper we improve the description of the inference process, which is seen as a Yoneda functor on a bicategory of fractions. A diagrammatic logic is defined from a morphism of limit sketches (called a propagator) which gives rise to an adjunction, which in turn determines a bicategory of fractions. The propagator, the adjunction and the bicategory provide respectively the syntax, the models and the inference process for the logic. Then diagrammatic logics and their morphisms are applied to the semantics of side effects in computer languages.<|reference_end|>
arxiv
@article{duval2007diagrammatic, title={Diagrammatic Inference}, author={Dominique Duval (LJK)}, journal={arXiv preprint arXiv:0710.1208}, year={2007}, archivePrefix={arXiv}, eprint={0710.1208}, primaryClass={cs.LO math.CT} }
duval2007diagrammatic
arxiv-1338
0710.1254
A Group Theoretic Model for Information
<|reference_start|>A Group Theoretic Model for Information: In this paper we formalize the notions of information elements and information lattices, first proposed by Shannon. Exploiting this formalization, we identify a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, we demonstrate isomorphisms between information lattices and subgroup lattices. Quantitatively, we establish a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices. This approximation extends the approximation for joint entropies carried out previously by Chan and Yeung. As a consequence of our approximation result, we show that any continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples we find surprisingly that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant--formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established in this paper admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective.<|reference_end|>
arxiv
@article{li2007a, title={A Group Theoretic Model for Information}, author={Hua Li and Edwin K.P. Chong}, journal={arXiv preprint arXiv:0710.1254}, year={2007}, archivePrefix={arXiv}, eprint={0710.1254}, primaryClass={cs.IT math.IT} }
li2007a
arxiv-1339
0710.1275
On Convergence Properties of Shannon Entropy
<|reference_start|>On Convergence Properties of Shannon Entropy: Convergence properties of Shannon Entropy are studied. In the differential setting, it is shown that weak convergence of probability measures, or convergence in distribution, is not enough for convergence of the associated differential entropies. A general result for the desired differential entropy convergence is provided, taking into account both compactly and uncompactly supported densities. Convergence of differential entropy is also characterized in terms of the Kullback-Leibler discriminant for densities with fairly general supports, and it is shown that convergence in variation of probability measures guarantees such convergence under an appropriate boundedness condition on the densities involved. Results for the discrete setting are also provided, allowing for infinitely supported probability measures, by taking advantage of the equivalence between weak convergence and convergence in variation in this setting.<|reference_end|>
arxiv
@article{piera2007on, title={On Convergence Properties of Shannon Entropy}, author={Francisco J. Piera, Patricio Parada}, journal={arXiv preprint arXiv:0710.1275}, year={2007}, doi={10.1134/S003294600902001X}, archivePrefix={arXiv}, eprint={0710.1275}, primaryClass={cs.IT math.IT} }
piera2007on
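In the discrete setting, a standard textbook-style counterexample (not taken from the paper) shows why weak convergence alone does not give entropy convergence: let P_n put mass 1 - 1/n at the origin and spread the remaining 1/n uniformly over 2^n further points. Then P_n converges weakly to a point mass of entropy 0, while H(P_n) tends to 1 bit. A quick numerical check, using the closed form available because the tail atoms are all equal:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits of a finite probability vector."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Closed form for P_n:  H(P_n) = -(1 - 1/n) log2(1 - 1/n) + (log2 n)/n + 1  ->  1 bit.
for n in (2, 10, 100, 1000):
    p0 = 1 - 1 / n
    print(n, round(-p0 * math.log2(p0) + math.log2(n) / n + 1.0, 4))

# Sanity check of the closed form against direct enumeration for n = 10.
n = 10
print(round(entropy_bits([1 - 1 / n] + [1 / (n * 2 ** n)] * 2 ** n), 4))
```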
arxiv-1340
0710.1280
On the Relationship between Mutual Information and Minimum Mean-Square Errors in Stochastic Dynamical Systems
<|reference_start|>On the Relationship between Mutual Information and Minimum Mean-Square Errors in Stochastic Dynamical Systems: We consider a general stochastic input-output dynamical system with output evolving in time as the solution to a functional-coefficient It\^{o} stochastic differential equation excited by an input process. This general class of stochastic systems encompasses not only the classical communication channel models, but also a wide variety of engineering systems appearing through a whole range of applications. For this general setting we find analogues of known relationships linking input-output mutual information and minimum mean causal and non-causal square errors, previously established in the context of additive Gaussian noise communication channels. Relationships are not only established in terms of time-averaged quantities, but their time-instantaneous, dynamical counterparts are also presented. The problem of appropriately introducing in this general framework a signal-to-noise ratio notion expressed through a signal-to-noise ratio parameter is also taken into account, identifying conditions for a proper and meaningful interpretation.<|reference_end|>
arxiv
@article{piera2007on, title={On the Relationship between Mutual Information and Minimum Mean-Square Errors in Stochastic Dynamical Systems}, author={Francisco J. Piera, Patricio Parada}, journal={arXiv preprint arXiv:0710.1280}, year={2007}, archivePrefix={arXiv}, eprint={0710.1280}, primaryClass={cs.IT math.IT} }
piera2007on
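For intuition only, the classical additive-Gaussian special case alluded to in the abstract above can be checked numerically: for Y = sqrt(snr) X + N with X and N independent standard Gaussians, I(snr) = (1/2) ln(1 + snr) and mmse(snr) = 1/(1 + snr), and the derivative of the mutual information with respect to snr equals half the MMSE. The paper's contribution is the extension of such identities to general stochastic dynamical systems, which this sketch does not attempt:

```python
from math import log

def mutual_information(snr):
    """I(X; sqrt(snr) X + N) in nats, for standard Gaussian X and N."""
    return 0.5 * log(1.0 + snr)

def mmse(snr):
    """Minimum mean-square error of estimating X from sqrt(snr) X + N."""
    return 1.0 / (1.0 + snr)

# Finite-difference check of dI/dsnr = mmse(snr) / 2 at a few SNR values.
eps = 1e-6
for snr in (0.1, 1.0, 10.0):
    derivative = (mutual_information(snr + eps) - mutual_information(snr - eps)) / (2 * eps)
    print(snr, round(derivative, 6), round(mmse(snr) / 2, 6))
```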
arxiv-1341
0710.1325
The MIMOME Channel
<|reference_start|>The MIMOME Channel: The MIMOME channel is a Gaussian wiretap channel in which the sender, receiver, and eavesdropper all have multiple antennas. We characterize the secrecy capacity as the saddle-value of a minimax problem. Among other implications, our result establishes that a Gaussian distribution maximizes the secrecy capacity characterization of Csisz{\'a}r and K{\"o}rner when applied to the MIMOME channel. We also determine a necessary and sufficient condition for the secrecy capacity to be zero. Large antenna array analysis of this condition reveals several useful insights into the conditions under which secure communication is possible.<|reference_end|>
arxiv
@article{khisti2007the, title={The MIMOME Channel}, author={Ashish Khisti and Gregory Wornell}, journal={arXiv preprint arXiv:0710.1325}, year={2007}, archivePrefix={arXiv}, eprint={0710.1325}, primaryClass={cs.IT math.IT} }
khisti2007the
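As a small companion to the abstract above, the sketch below evaluates the secrecy rate achieved by a fixed Gaussian input covariance on a Gaussian MIMO wiretap channel, [log det(I + H_m Q H_m^H) - log det(I + H_e Q H_e^H)]^+ with unit-variance noise; the paper characterizes the secrecy capacity itself as the saddle value of a minimax problem over such quantities, which is not attempted here. The random channels, antenna counts and the equal-power Q are assumptions made for the example:

```python
import numpy as np

def gaussian_secrecy_rate(H_main, H_eve, Q):
    """Secrecy rate (nats per channel use) of a Gaussian input with covariance Q
    over a Gaussian wiretap channel with unit-variance noise at both receivers."""
    def log_det_term(H):
        n_rx = H.shape[0]
        _, logdet = np.linalg.slogdet(np.eye(n_rx) + H @ Q @ H.conj().T)
        return logdet
    return max(0.0, log_det_term(H_main) - log_det_term(H_eve))

rng = np.random.default_rng(0)
H_main = rng.standard_normal((2, 4))    # intended receiver: 2 antennas, sender: 4
H_eve = rng.standard_normal((3, 4))     # eavesdropper: 3 antennas
Q = np.eye(4) / 4                       # total power 1, spread equally
print(gaussian_secrecy_rate(H_main, H_eve, Q))
```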
arxiv-1342
0710.1336
Multi-User Diversity vs Accurate Channel Feedback for MIMO Broadcast Channels
<|reference_start|>Multi-User Diversity vs Accurate Channel Feedback for MIMO Broadcast Channels: A multiple transmit antenna, single receive antenna (per receiver) downlink channel with limited channel feedback is considered. Given a constraint on the total system-wide channel feedback, the following question is considered: is it preferable to get low-rate feedback from a large number of receivers or to receive high-rate/high-quality feedback from a smaller number of (randomly selected) receivers? Acquiring feedback from many users allows multi-user diversity to be exploited, while high-rate feedback allows for very precise selection of beamforming directions. It is shown that systems in which a limited number of users feedback high-rate channel information significantly outperform low-rate/many user systems. While capacity increases only double logarithmically with the number of users, the marginal benefit of channel feedback is very significant up to the point where the CSI is essentially perfect.<|reference_end|>
arxiv
@article{ravindran2007multi-user, title={Multi-User Diversity vs. Accurate Channel Feedback for MIMO Broadcast Channels}, author={Niranjay Ravindran, Nihar Jindal}, journal={arXiv preprint arXiv:0710.1336}, year={2007}, archivePrefix={arXiv}, eprint={0710.1336}, primaryClass={cs.IT math.IT} }
ravindran2007multi-user
arxiv-1343
0710.1383
Log-concavity property of the error probability with application to local bounds for wireless communications
<|reference_start|>Log-concavity property of the error probability with application to local bounds for wireless communications: A clear understanding of the behavior of the error probability (EP) as a function of signal-to-noise ratio (SNR) and other system parameters is fundamental for assessing the design of digital wireless communication systems. We propose an analytical framework based on the log-concavity property of the EP, which we prove for a wide family of multidimensional modulation formats in the presence of Gaussian disturbances and fading. Based on this property, we construct a class of local bounds for the EP that improve known generic bounds in a given region of the SNR and are invertible, as well as easily tractable for further analysis. This concept is motivated by the fact that communication systems often operate with performance in a certain region of interest (ROI) and, thus, it may be advantageous to have tighter bounds within this region instead of generic bounds valid for all SNRs. We present a possible application of these local bounds, but their relevance goes beyond the example made in this paper.<|reference_end|>
arxiv
@article{conti2007log-concavity, title={Log-concavity property of the error probability with application to local bounds for wireless communications}, author={Andrea Conti, Dmitry Panchenko, Sergiy Sidenko, Velio Tralli}, journal={IEEE Trans. Inform. Theory, 2009, vol. 55, no. 6, 2766-2775.}, year={2007}, doi={10.1109/TIT.2009.2018273}, archivePrefix={arXiv}, eprint={0710.1383}, primaryClass={cs.IT math.IT} }
conti2007log-concavity
arxiv-1344
0710.1385
Cognitive Medium Access: Exploration, Exploitation and Competition
<|reference_start|>Cognitive Medium Access: Exploration, Exploitation and Competition: This paper establishes the equivalence between cognitive medium access and the competitive multi-armed bandit problem. First, the scenario in which a single cognitive user wishes to opportunistically exploit the availability of empty frequency bands in the spectrum with multiple bands is considered. In this scenario, the availability probability of each channel is unknown to the cognitive user a priori. Hence efficient medium access strategies must strike a balance between exploring the availability of other free channels and exploiting the opportunities identified thus far. By adopting a Bayesian approach for this classical bandit problem, the optimal medium access strategy is derived and its underlying recursive structure is illustrated via examples. To avoid the prohibitive computational complexity of the optimal strategy, a low complexity asymptotically optimal strategy is developed. The proposed strategy does not require any prior statistical knowledge about the traffic pattern on the different channels. Next, the multi-cognitive user scenario is considered and low complexity medium access protocols, which strike the optimal balance between exploration and exploitation in such competitive environments, are developed. Finally, this formalism is extended to the case in which each cognitive user is capable of sensing and using multiple channels simultaneously.<|reference_end|>
arxiv
@article{lai2007cognitive, title={Cognitive Medium Access: Exploration, Exploitation and Competition}, author={Lifeng Lai, Hesham El Gamal, Hai Jiang and H. Vincent Poor}, journal={arXiv preprint arXiv:0710.1385}, year={2007}, archivePrefix={arXiv}, eprint={0710.1385}, primaryClass={cs.IT cs.NI math.IT} }
lai2007cognitive
arxiv-1345
0710.1404
Performance Comparison of Persistence Frameworks
<|reference_start|>Performance Comparison of Persistence Frameworks: One of the essential and most complex components in the software development process is the database. The complexity increases when the "orientation" of the interacting components differs. A persistence framework moves the program data in its most natural form to and from a permanent data store, the database. Thus a persistence framework manages the database and the mapping between the database and the objects. This paper compares the performance of two persistence frameworks, Hibernate and iBatis's SQLMaps, using a banking database. The performance of both tools in single- and multi-user environments is evaluated.<|reference_end|>
arxiv
@article{thampi2007performance, title={Performance Comparison of Persistence Frameworks}, author={Sabu M. Thampi, Ashwin a K}, journal={arXiv preprint arXiv:0710.1404}, year={2007}, archivePrefix={arXiv}, eprint={0710.1404}, primaryClass={cs.DB cs.IR} }
thampi2007performance
arxiv-1346
0710.1418
Non-Archimedean Ergodic Theory and Pseudorandom Generators
<|reference_start|>Non-Archimedean Ergodic Theory and Pseudorandom Generators: The paper develops techniques in order to construct computer programs, pseudorandom number generators (PRNG), that produce uniformly distributed sequences. The paper exploits an approach that treats standard processor instructions (arithmetic and bitwise logical ones) as continuous functions on the space of 2-adic integers. Within this approach, a PRNG is considered as a dynamical system and is studied by means of the non-Archimedean ergodic theory.<|reference_end|>
arxiv
@article{anashin2007non-archimedean, title={Non-Archimedean Ergodic Theory and Pseudorandom Generators}, author={Vladimir Anashin}, journal={The Computer Journal, 53(4):370--392, 2010}, year={2007}, doi={10.1093/comjnl/bxm101}, archivePrefix={arXiv}, eprint={0710.1418}, primaryClass={math.DS cs.IT math.IT} }
anashin2007non-archimedean
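A minimal Python sketch of the kind of 2-adically continuous, single-cycle generator studied in the record above; the update x -> x + (x*x OR 5) mod 2^32 is the classical Klimov-Shamir T-function, shown only as an illustration of the class of maps, not as the paper's own construction, and the word size and output truncation are arbitrary choices.

# Illustrative single-cycle T-function PRNG built from arithmetic and bitwise OR;
# the specific map and parameters are assumptions, not taken from the paper.
WORD = 32
MASK = (1 << WORD) - 1

def step(x):
    # x -> x + (x*x | 5) mod 2^WORD: a classical invertible, single-cycle T-function.
    return (x + ((x * x) | 5)) & MASK

def prng(seed, count):
    x = seed & MASK
    out = []
    for _ in range(count):
        x = step(x)
        out.append(x >> (WORD // 2))  # emit the upper half-word only
    return out

print(prng(2007, 5))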
arxiv-1347
0710.1435
Faster Least Squares Approximation
<|reference_start|>Faster Least Squares Approximation: Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets $n$ be the number of constraints and $d$ be the number of variables, with $n \gg d$. Then, existing exact methods find a solution vector in $O(nd^2)$ time. We present two randomized algorithms that provide very accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard Transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if $n$ is sufficiently larger than $d$, the approximate solution can be computed in $O(nd \log d)$ time.<|reference_end|>
arxiv
@article{drineas2007faster, title={Faster Least Squares Approximation}, author={Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamas Sarlos}, journal={arXiv preprint arXiv:0710.1435}, year={2007}, archivePrefix={arXiv}, eprint={0710.1435}, primaryClass={cs.DS} }
drineas2007faster
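To illustrate the sketch-and-solve idea behind the record above, here is a hedged Python example; it substitutes a plain Gaussian sketch for the paper's randomized Hadamard preprocessing with uniform sampling or sparse projection, so it shows the flavour of the approach rather than its O(nd log d) running time, and all dimensions are illustrative.

import numpy as np

# Sketch-and-solve least squares: solve on a randomly compressed system.
# The Gaussian sketch below is a simplified stand-in for the Hadamard-based
# preprocessing of the paper; sizes are toy values.
rng = np.random.default_rng(0)
n, d, m = 5000, 20, 500                      # m rows in the sketched problem, m >> d

A = rng.normal(size=(n, d))
b = rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)     # sketching matrix
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

print(np.linalg.norm(A @ x_exact - b))       # optimal residual
print(np.linalg.norm(A @ x_sketch - b))      # close to optimal with high probability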
arxiv-1348
0710.1436
Polish grid infrastructure for science and research
<|reference_start|>Polish grid infrastructure for science and research: Structure, functionality, parameters and organization of the computing Grid in Poland are described, mainly from the perspective of the high-energy particle physics community, currently its largest consumer and developer. It represents a distributed Tier-2 in the worldwide Grid infrastructure. It also provides services and resources for data-intensive applications in other sciences.<|reference_end|>
arxiv
@article{gokieli2007polish, title={Polish grid infrastructure for science and research}, author={Ryszard Gokieli, Krzysztof Nawrocki, Adam Padee, Dorota Stojda, Karol Wawrzyniak, Wojciech Wislicki}, journal={2007, ISBN 1-4244-0813-X}, year={2007}, doi={10.1109/EURCON.2007.4400477}, archivePrefix={arXiv}, eprint={0710.1436}, primaryClass={cs.DC hep-ex} }
gokieli2007polish
arxiv-1349
0710.1455
Superrecursive Features of Interactive Computation
<|reference_start|>Superrecursive Features of Interactive Computation: Functioning and interaction of distributed devices and concurrent algorithms are analyzed in the context of the theory of algorithms. Our main concern here is how and under what conditions algorithmic interactive devices can be more powerful than the recursive models of computation, such as Turing machines. Realization of such a higher computing power makes these systems superrecursive. We find here five sources for superrecursiveness in interaction. In addition, we prove that when all of these sources are excluded, the algorithmic interactive system in question is able to perform only recursive computations. These results provide computer scientists with necessary and sufficient conditions for achieving superrecursiveness by algorithmic interactive devices.<|reference_end|>
arxiv
@article{burgin2007superrecursive, title={Superrecursive Features of Interactive Computation}, author={Mark Burgin}, journal={arXiv preprint arXiv:0710.1455}, year={2007}, archivePrefix={arXiv}, eprint={0710.1455}, primaryClass={cs.DC cs.PF} }
burgin2007superrecursive
arxiv-1350
0710.1462
Minimization of entropy functionals
<|reference_start|>Minimization of entropy functionals: Entropy functionals (i.e. convex integral functionals) and extensions of these functionals are minimized on convex sets. This paper is aimed at reducing as much as possible the assumptions on the constraint set. Dual equalities and characterizations of the minimizers are obtained with weak constraint qualifications.<|reference_end|>
arxiv
@article{léonard2007minimization, title={Minimization of entropy functionals}, author={Christian Léonard (MODAL'x, Cmap)}, journal={arXiv preprint arXiv:0710.1462}, year={2007}, doi={10.1016/j.jmaa.2008.04.048}, archivePrefix={arXiv}, eprint={0710.1462}, primaryClass={math.OC cs.IT math.IT math.PR} }
léonard2007minimization
arxiv-1351
0710.1467
Weight Distributions of Hamming Codes
<|reference_start|>Weight Distributions of Hamming Codes: We derive a recursive formula determining the weight distribution of the [n=(q^m-1)/(q-1), n-m, 3] Hamming code H(m,q), when (m, q-1)=1. Here q is a prime power. The proof is based on Moisio's idea of using the Pless power moment identity together with exponential sum techniques.<|reference_end|>
arxiv
@article{kim2007weight, title={Weight Distributions of Hamming Codes}, author={Dae San Kim}, journal={arXiv preprint arXiv:0710.1467}, year={2007}, archivePrefix={arXiv}, eprint={0710.1467}, primaryClass={cs.IT math.IT math.NT} }
kim2007weight
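As a small sanity check of the object in the record above (brute-force enumeration, not the paper's recursive formula, and for the binary case q=2 only), the following Python snippet tallies the weight distribution of the [7,4,3] Hamming code from one standard generator matrix.

import itertools
from collections import Counter

# Brute-force weight distribution of the binary [7,4,3] Hamming code H(3,2).
# The systematic generator matrix below is one standard choice (an assumption).
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

dist = Counter()
for msg in itertools.product([0, 1], repeat=4):
    codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    dist[sum(codeword)] += 1

print(sorted(dist.items()))   # [(0, 1), (3, 7), (4, 7), (7, 1)]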
arxiv-1352
0710.1469
Weight Distributions of Hamming Codes (II)
<|reference_start|>Weight Distributions of Hamming Codes (II): In a previous paper, we derived a recursive formula determining the weight distributions of the [n=(q^m-1)/(q-1)] Hamming code H(m,q), when (m,q-1)=1. Here q is a prime power. We note here that the formula actually holds for any positive integer m and any prime power q, without the restriction (m, q-1)=1.<|reference_end|>
arxiv
@article{kim2007weight, title={Weight Distributions of Hamming Codes (II)}, author={Dae San Kim}, journal={arXiv preprint arXiv:0710.1469}, year={2007}, archivePrefix={arXiv}, eprint={0710.1469}, primaryClass={cs.IT math.IT math.NT} }
kim2007weight
arxiv-1353
0710.1481
What's in a Name?
<|reference_start|>What's in a Name?: This paper describes experiments on identifying the language of a single name in isolation or in a document written in a different language. A new corpus has been compiled and made available, matching names against languages. This corpus is used in a series of experiments measuring the performance of general language models and names-only language models on the language identification task. Conclusions are drawn from the comparison between using general language models and names-only language models and between identifying the language of isolated names and the language of very short document fragments. Future research directions are outlined.<|reference_end|>
arxiv
@article{konstantopoulos2007what's, title={What's in a Name?}, author={Stasinos Konstantopoulos}, journal={arXiv preprint arXiv:0710.1481}, year={2007}, archivePrefix={arXiv}, eprint={0710.1481}, primaryClass={cs.CL cs.AI} }
konstantopoulos2007what's
arxiv-1354
0710.1482
Heap Reference Analysis for Functional Programs
<|reference_start|>Heap Reference Analysis for Functional Programs: Current garbage collectors leave a lot of garbage uncollected because they conservatively approximate liveness by reachability from program variables. In this paper, we describe a sequence of static analyses that takes as input a program written in a first-order, eager functional programming language, and finds at each program point the references to objects that are guaranteed not to be used in the future. Such references are made null by a transformation pass. If this makes the object unreachable, it can be collected by the garbage collector. This causes more garbage to be collected, resulting in fewer collections. Additionally, for those garbage collectors which scavenge live objects, it makes each collection faster. The interesting aspects of our method are both in the identification of the analyses required to solve the problem and the way they are carried out. We identify three different analyses -- liveness, sharing and accessibility. In liveness and sharing analyses, the function definitions are analyzed independently of the calling context. This is achieved by using a variable to represent the unknown context of the function being analyzed and setting up constraints expressing the effect of the function with respect to the variable. The solution of the constraints is a summary of the function that is parameterized with respect to a calling context and is used to analyze function calls. As a result we achieve context sensitivity at call sites without analyzing the function multiple times.<|reference_end|>
arxiv
@article{karkare2007heap, title={Heap Reference Analysis for Functional Programs}, author={Amey Karkare, Amitabha Sanyal, Uday Khedker}, journal={arXiv preprint arXiv:0710.1482}, year={2007}, archivePrefix={arXiv}, eprint={0710.1482}, primaryClass={cs.PL cs.SE} }
karkare2007heap
arxiv-1355
0710.1484
The structure and modeling results of the parallel spatial switching system
<|reference_start|>The structure and modeling results of the parallel spatial switching system: Problems of designing a parallel switching system that provides spatial switching of packets arriving at random times are discussed. Results of modeling the switching system as a queueing (mass-service) system are presented.<|reference_end|>
arxiv
@article{kutuzov2007the, title={The structure and modeling results of the parallel spatial switching system}, author={Denis Kutuzov}, journal={IEEE International Siberian Conference on Control and Communications (SIBCON-2007). Proceedings. Tomsk, April 20-21, 2007. (pp. 86-88). IEEE Catalog Number: 07EX1367}, year={2007}, doi={10.1109/SIBCON.2007.371303}, archivePrefix={arXiv}, eprint={0710.1484}, primaryClass={cs.NI cs.DC} }
kutuzov2007the
arxiv-1356
0710.1499
Approximating max-min linear programs with local algorithms
<|reference_start|>Approximating max-min linear programs with local algorithms: A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v a_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, $a_{iv} \ge 0$, and the support sets $V_i = \{v : a_{iv} > 0 \}$, $V_k = \{v : c_{kv}>0 \}$, $I_v = \{i : a_{iv} > 0 \}$ and $K_v = \{k : c_{kv} > 0 \}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $\mathcal{H}$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $\mathcal{H}$.<|reference_end|>
arxiv
@article{floréen2007approximating, title={Approximating max-min linear programs with local algorithms}, author={Patrik Floréen, Petteri Kaski, Topi Musto, Jukka Suomela}, journal={arXiv preprint arXiv:0710.1499}, year={2007}, doi={10.1109/IPDPS.2008.4536235}, archivePrefix={arXiv}, eprint={0710.1499}, primaryClass={cs.DC} }
floréen2007approximating
arxiv-1357
0710.1511
Demographic growth and the distribution of language sizes
<|reference_start|>Demographic growth and the distribution of language sizes: It is argued that the present log-normal distribution of language sizes is, to a large extent, a consequence of demographic dynamics within the population of speakers of each language. A two-parameter stochastic multiplicative process is proposed as a model for the population dynamics of individual languages, and applied over a period spanning the last ten centuries. The model disregards language birth and death. A straightforward fitting of the two parameters, which statistically characterize the population growth rate, predicts a distribution of language sizes in excellent agreement with empirical data. Numerical simulations, and the study of the size distribution within language families, validate the assumptions at the basis of the model.<|reference_end|>
arxiv
@article{zanette2007demographic, title={Demographic growth and the distribution of language sizes}, author={Damian H. Zanette}, journal={arXiv preprint arXiv:0710.1511}, year={2007}, doi={10.1142/S0129183108012042}, archivePrefix={arXiv}, eprint={0710.1511}, primaryClass={physics.data-an cs.CL physics.soc-ph} }
zanette2007demographic
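The following Python simulation illustrates the kind of two-parameter stochastic multiplicative process invoked in the record above; the mean growth rate, its spread, the number of languages, and the time horizon are invented values, not the parameters fitted in the paper.

import numpy as np

# Multiplicative growth of language sizes: N(t+1) = N(t) * exp(mu + sigma * xi),
# with xi standard normal; parameters are illustrative only.
rng = np.random.default_rng(1)
n_languages, centuries = 5000, 10
mu, sigma = 0.02, 0.15

sizes = np.full(n_languages, 1e4)
for _ in range(centuries):
    sizes *= np.exp(mu + sigma * rng.normal(size=n_languages))

log_sizes = np.log10(sizes)
print(log_sizes.mean(), log_sizes.std())   # sizes come out approximately log-normal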
arxiv-1358
0710.1522
Distributed spatial multiplexing with 1-bit feedback
<|reference_start|>Distributed spatial multiplexing with 1-bit feedback: We analyze a slow-fading interference network with MN non-cooperating single-antenna sources and M non-cooperating single-antenna destinations. In particular, we assume that the sources are divided into M mutually exclusive groups of N sources each, every group is dedicated to transmit a common message to a unique destination, all transmissions occur concurrently and in the same frequency band and a dedicated 1-bit broadcast feedback channel from each destination to its corresponding group of sources exists. We provide a feedback-based iterative distributed (multi-user) beamforming algorithm, which "learns" the channels between each group of sources and its assigned destination. This algorithm is a straightforward generalization, to the multi-user case, of the feedback-based iterative distributed beamforming algorithm proposed recently by Mudumbai et al., in IEEE Trans. Inf. Th. (submitted) for networks with a single group of sources and a single destination. Putting the algorithm into a Markov chain context, we provide a simple convergence proof. We then show that, for M finite and N approaching infinity, spatial multiplexing based on the beamforming weights produced by the algorithm achieves full spatial multiplexing gain of M and full per-stream array gain of N, provided the time spent "learning" the channels scales linearly in N. The network is furthermore shown to "crystallize". Finally, we characterize the corresponding crystallization rate.<|reference_end|>
arxiv
@article{thukral2007distributed, title={Distributed spatial multiplexing with 1-bit feedback}, author={J. Thukral and H. Bölcskei}, journal={arXiv preprint arXiv:0710.1522}, year={2007}, archivePrefix={arXiv}, eprint={0710.1522}, primaryClass={cs.IT math.IT} }
thukral2007distributed
arxiv-1359
0710.1525
Efficient Optimally Lazy Algorithms for Minimal-Interval Semantics
<|reference_start|>Efficient Optimally Lazy Algorithms for Minimal-Interval Semantics: Minimal-interval semantics associates with each query over a document a set of intervals, called witnesses, that are incomparable with respect to inclusion (i.e., they form an antichain): witnesses define the minimal regions of the document satisfying the query. Minimal-interval semantics makes it easy to define and compute several sophisticated proximity operators, provides snippets for user presentation, and can be used to rank documents. In this paper we provide algorithms for computing conjunction and disjunction that are linear in the number of intervals and logarithmic in the number of operands; for additional operators, such as ordered conjunction and Brouwerian difference, we provide linear algorithms. In all cases, space is linear in the number of operands. More importantly, we define a formal notion of optimal laziness, and either prove it, or prove its impossibility, for each algorithm. We cast our results in a general framework of antichains of intervals on total orders, making our algorithms directly applicable to other domains.<|reference_end|>
arxiv
@article{vigna2007efficient, title={Efficient Optimally Lazy Algorithms for Minimal-Interval Semantics}, author={Sebastiano Vigna, Paolo Boldi}, journal={arXiv preprint arXiv:0710.1525}, year={2007}, archivePrefix={arXiv}, eprint={0710.1525}, primaryClass={cs.DS cs.IR} }
vigna2007efficient
arxiv-1360
0710.1589
Fast Reliability-based Algorithm of Finding Minimum-weight Codewords for LDPC Codes
<|reference_start|>Fast Reliability-based Algorithm of Finding Minimum-weight Codewords for LDPC Codes: Although computing the minimum distance $d_m$ of a linear code is NP-hard in theory, in this paper we propose an experimental method for finding minimum-weight codewords, whose weight equals $d_m$, for LDPC codes. An existing syndrome decoding method, serial belief propagation (BP) with ordered statistic decoding (OSD), is adapted to serve our purpose. We conjecture that, among the many candidate error patterns in OSD reprocessing, the modulo-2 addition of the lightest error pattern with one of the remaining error patterns may generate a light codeword. When the decoding syndrome reaches the all-zero state, the lightest error pattern reduces to all-zero, and the lightest non-zero error pattern is a valid codeword used to update the list of lightest codewords. Given sufficiently many transmitted codewords, the surviving lightest codewords are likely to be the target. Compared with existing techniques, our method demonstrates its efficiency in simulations of several LDPC codes of interest.<|reference_end|>
arxiv
@article{li2007fast, title={Fast Reliability-based Algorithm of Finding Minimum-weight Codewords for LDPC Codes}, author={Guangwen Li, Guangzeng Feng}, journal={arXiv preprint arXiv:0710.1589}, year={2007}, archivePrefix={arXiv}, eprint={0710.1589}, primaryClass={cs.IT math.IT} }
li2007fast
arxiv-1361
0710.1595
Analysis of Fixed Outage Transmission Schemes: A Finer Look at the Full Multiplexing Point
<|reference_start|>Analysis of Fixed Outage Transmission Schemes: A Finer Look at the Full Multiplexing Point: This paper studies the performance of transmission schemes that have a rate that increases with average SNR while maintaining a fixed outage probability. This is in contrast to the classical Zheng-Tse diversity-multiplexing tradeoff (DMT) that focuses on increasing rate and decreasing outage probability. Three different systems are explored: antenna diversity systems, time/frequency diversity systems, and automatic repeat request (ARQ) systems. In order to accurately study performance in the fixed outage setting, it is necessary to go beyond the coarse, asymptotic multiplexing gain metric. In the case of antenna diversity and time/frequency diversity, an affine approximation to high SNR outage capacity (i.e., multiplexing gain plus a power/rate offset) accurately describes performance and shows the very significant benefits of diversity. ARQ is also seen to provide a significant performance advantage, but even an affine approximation to outage capacity is unable to capture this advantage and outage capacity must be directly studied in the non-asymptotic regime.<|reference_end|>
arxiv
@article{wu2007analysis, title={Analysis of Fixed Outage Transmission Schemes: A Finer Look at the Full Multiplexing Point}, author={Peng Wu and Nihar Jindal}, journal={arXiv preprint arXiv:0710.1595}, year={2007}, archivePrefix={arXiv}, eprint={0710.1595}, primaryClass={cs.IT math.IT} }
wu2007analysis
arxiv-1362
0710.1624
Hamiltonian Formulation of Quantum Error Correction and Correlated Noise: The Effects Of Syndrome Extraction in the Long Time Limit
<|reference_start|>Hamiltonian Formulation of Quantum Error Correction and Correlated Noise: The Effects Of Syndrome Extraction in the Long Time Limit: We analyze the long time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a faulty path and the residual decoherence encoded in the reduced density matrix. Systems with non-zero gate times (``long gates'') are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.<|reference_end|>
arxiv
@article{novais2007hamiltonian, title={Hamiltonian Formulation of Quantum Error Correction and Correlated Noise: The Effects Of Syndrome Extraction in the Long Time Limit}, author={E. Novais, Eduardo R. Mucciolo, Harold U. Baranger}, journal={Phys. Rev. A 78, 012314 (2008)}, year={2007}, doi={10.1103/PhysRevA.78.012314}, archivePrefix={arXiv}, eprint={0710.1624}, primaryClass={quant-ph cond-mat.stat-mech cs.IT math.IT} }
novais2007hamiltonian
arxiv-1363
0710.1626
Throughput Scaling in Random Wireless Networks: A Non-Hierarchical Multipath Routing Strategy
<|reference_start|>Throughput Scaling in Random Wireless Networks: A Non-Hierarchical Multipath Routing Strategy: Franceschetti et al. have recently shown that per-node throughput in an extended, ad hoc wireless network with $\Theta(n)$ randomly distributed nodes and multihop routing can be increased from the $\Omega({1 \over \sqrt{n} \log n})$ scaling demonstrated in the seminal paper of Gupta and Kumar to $\Omega({1 \over \sqrt{n}})$. The goal of the present paper is to understand the dependence of this interesting result on the principal new features it introduced relative to Gupta-Kumar: (1) a capacity-based formula for link transmission bit-rates in terms of received signal-to-interference-and-noise ratio (SINR); (2) hierarchical routing from sources to destinations through a system of communal highways; and (3) cell-based routes constructed by percolation. The conclusion of the present paper is that the improved throughput scaling is principally due to the percolation-based routing, which enables shorter hops and, consequently, less interference. This is established by showing that throughput $\Omega({1 \over \sqrt{n}})$ can be attained by a system that does not employ highways, but instead uses percolation to establish, for each source-destination pair, a set of $\Theta(\log n)$ routes within a narrow routing corridor running from source to destination. As a result, highways are not essential. In addition, it is shown that throughput $\Omega({1 \over \sqrt{n}})$ can be attained with the original threshold transmission bit-rate model, provided that node transmission powers are permitted to grow with $n$. Thus, the benefit of the capacity bit-rate model is simply to permit the power to remain bounded, even as the network expands.<|reference_end|>
arxiv
@article{josan2007throughput, title={Throughput Scaling in Random Wireless Networks: A Non-Hierarchical Multipath Routing Strategy}, author={Awlok Josan, Mingyan Liu, David L. Neuhoff and S. Sandeep Pradhan}, journal={arXiv preprint arXiv:0710.1626}, year={2007}, archivePrefix={arXiv}, eprint={0710.1626}, primaryClass={cs.IT math.IT} }
josan2007throughput
arxiv-1364
0710.1641
A polynomial bound for untangling geometric planar graphs
<|reference_start|>A polynomial bound for untangling geometric planar graphs: To untangle a geometric graph means to move some of the vertices so that the resulting geometric graph has no crossings. Pach and Tardos [Discrete Comput. Geom., 2002] asked if every n-vertex geometric planar graph can be untangled while keeping at least n^\epsilon vertices fixed. We answer this question in the affirmative with \epsilon=1/4. The previous best known bound was \Omega((\log n / \log\log n)^{1/2}). We also consider untangling geometric trees. It is known that every n-vertex geometric tree can be untangled while keeping at least (n/3)^{1/2} vertices fixed, while the best upper bound was O(n\log n)^{2/3}. We answer a question of Spillner and Wolff [arXiv:0709.0170 2007] by closing this gap for untangling trees. In particular, we show that for infinitely many values of n, there is an n-vertex geometric tree that cannot be untangled while keeping more than 3(n^{1/2}-1) vertices fixed. Moreover, we improve the lower bound to (n/2)^{1/2}.<|reference_end|>
arxiv
@article{bose2007a, title={A polynomial bound for untangling geometric planar graphs}, author={Prosenjit Bose, Vida Dujmovic, Ferran Hurtado, Stefan Langerman, Pat Morin, David R. Wood}, journal={Discrete & Computational Geometry 42(4):570-585, 2009}, year={2007}, doi={10.1007/s00454-008-9125-3}, archivePrefix={arXiv}, eprint={0710.1641}, primaryClass={cs.CG cs.DM math.CO} }
bose2007a
arxiv-1365
0710.1772
Cross-Participants : fostering design-use mediation in an Open Source Software community
<|reference_start|>Cross-Participants : fostering design-use mediation in an Open Source Software community: Motivation - This research aims at investigating emerging roles and forms of participation that foster design-use mediation during the Open Source Software design process. Research approach - We compare online interactions for a successful "pushed-by-users" design process with unsuccessful previous proposals. The methodology developed articulates structural analyses of the discussions (organization of discussions, participation) with actions on the code and documentation made by participants in the project. We focus on the user-oriented and the developer-oriented mailing-lists of the Python project. Findings/Design - We find that key-participants, the cross-participants, foster the design process and act as boundary spanners between the users' and developers' communities. Research limitations/Implications - These findings can be reinforced by developing software to automate the structural analysis of discussions and of actions on the code and documentation. Further analyses, supported by these tools, will be necessary to generalise our results. Originality/Value - The analysis of participation among the three interaction spaces of OSS design (discussion, documentation and implementation) is the main originality of this work compared to other OSS research, which mainly analyses one or two spaces. Take away message - Besides the idealistic picture that users may intervene freely in the process, OSS design is boosted and framed by some key-participants and specific rules, and there can be barriers to users' participation.<|reference_end|>
arxiv
@article{barcellini2007cross-participants, title={Cross-Participants : fostering design-use mediation in an Open Source Software community}, author={Flore Barcellini (INRIA Rocquencourt), Françoise Détienne (INRIA Rocquencourt), Jean-Marie Burkhardt (INRIA Rocquencourt, LEI)}, journal={In European Conference on Cognitive Ergonomics (2007) 57-64}, year={2007}, archivePrefix={arXiv}, eprint={0710.1772}, primaryClass={cs.CY cs.HC cs.SE} }
barcellini2007cross-participants
arxiv-1366
0710.1784
Designing a commutative replicated data type
<|reference_start|>Designing a commutative replicated data type: Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of \emph{any} data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.<|reference_end|>
arxiv
@article{shapiro2007designing, title={Designing a commutative replicated data type}, author={Marc Shapiro (LIP6, INRIA Rocquencourt), Nuno Preguiça (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0710.1784}, year={2007}, archivePrefix={arXiv}, eprint={0710.1784}, primaryClass={cs.DC} }
shapiro2007designing
arxiv-1367
0710.1842
An explicit universal cycle for the (n-1)-permutations of an n-set
<|reference_start|>An explicit universal cycle for the (n-1)-permutations of an n-set: We show how to construct an explicit Hamilton cycle in the directed Cayley graph Cay(\{\sigma_n, \sigma_{n-1}\} : \mathbb{S}_n), where \sigma_k = (1 2 ... k). The existence of such cycles was shown by Jackson (Discrete Mathematics, 149 (1996) 123-129) but the proof only shows that a certain directed graph is Eulerian, and Knuth (Volume 4 Fascicle 2, Generating All Tuples and Permutations (2005)) asks for an explicit construction. We show that a simple recursion describes our Hamilton cycle and that the cycle can be generated by an iterative algorithm that uses O(n) space. Moreover, the algorithm produces each successive edge of the cycle in constant time; such algorithms are said to be loopless.<|reference_end|>
arxiv
@article{ruskey2007an, title={An explicit universal cycle for the (n-1)-permutations of an n-set}, author={Frank Ruskey and Aaron Williams}, journal={arXiv preprint arXiv:0710.1842}, year={2007}, archivePrefix={arXiv}, eprint={0710.1842}, primaryClass={cs.DM cs.DS} }
ruskey2007an
arxiv-1368
0710.1870
Lossless Representation of Graphs using Distributions
<|reference_start|>Lossless Representation of Graphs using Distributions: We consider complete graphs with edge weights and/or node weights taking values in some set. In the first part of this paper, we show that a large number of graphs are completely determined, up to isomorphism, by the distribution of their sub-triangles. In the second part, we propose graph representations in terms of one-dimensional distributions (e.g., distribution of the node weights, sum of adjacent weights, etc.). For the case when the weights of the graph are real-valued vectors, we show that all graphs, except for a set of measure zero, are uniquely determined, up to isomorphism, from these distributions. The motivating application for this paper is the problem of browsing through large sets of graphs.<|reference_end|>
arxiv
@article{boutin2007lossless, title={Lossless Representation of Graphs using Distributions}, author={Mireille Boutin and Gregor Kemper}, journal={arXiv preprint arXiv:0710.1870}, year={2007}, archivePrefix={arXiv}, eprint={0710.1870}, primaryClass={math.CO cs.CV} }
boutin2007lossless
arxiv-1369
0710.1879
Cyclotomic FFTs with Reduced Additive Complexities Based on a Novel Common Subexpression Elimination Algorithm
<|reference_start|>Cyclotomic FFTs with Reduced Additive Complexities Based on a Novel Common Subexpression Elimination Algorithm: In this paper, we first propose a novel common subexpression elimination (CSE) algorithm for matrix-vector multiplications over characteristic-2 fields. As opposed to previously proposed CSE algorithms, which usually focus on complexity savings due to recurrences of subexpressions, our CSE algorithm achieves two types of complexity reductions, differential savings and recurrence savings, by taking advantage of the cancelation property of characteristic-2 fields. Using our CSE algorithm, we reduce the additive complexities of cyclotomic fast Fourier transforms (CFFTs). Using a weighted sum of the numbers of multiplications and additions as a metric, our CFFTs achieve smaller total complexities than previously proposed CFFTs and other FFTs, requiring both fewer multiplications and fewer additions in many cases.<|reference_end|>
arxiv
@article{chen2007cyclotomic, title={Cyclotomic FFTs with Reduced Additive Complexities Based on a Novel Common Subexpression Elimination Algorithm}, author={Ning Chen and Zhiyuan Yan}, journal={arXiv preprint arXiv:0710.1879}, year={2007}, archivePrefix={arXiv}, eprint={0710.1879}, primaryClass={cs.IT cs.CC math.CO math.IT} }
chen2007cyclotomic
arxiv-1370
0710.1916
Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function
<|reference_start|>Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function: The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of the maximum-likelihood decoding algorithm. But the existing bounds are not tight enough, especially for low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept named the square radius probability density function (SR-PDF) of the decision region to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed which are very close to the simulation results of the WER of interest.<|reference_end|>
arxiv
@article{chen2007evaluate, title={Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function}, author={Xiaogang Chen, Hongwen Yang, Jian Gu, Hongkui Yang}, journal={arXiv preprint arXiv:0710.1916}, year={2007}, archivePrefix={arXiv}, eprint={0710.1916}, primaryClass={cs.IT math.IT} }
chen2007evaluate
arxiv-1371
0710.1920
The Secrecy Capacity of the MIMO Wiretap Channel
<|reference_start|>The Secrecy Capacity of the MIMO Wiretap Channel: We consider the MIMO wiretap channel, that is, a MIMO broadcast channel where the transmitter sends some confidential information to one user which is a legitimate receiver, while the other user is an eavesdropper. Perfect secrecy is achieved when the transmitter and the legitimate receiver can communicate at some positive rate, while ensuring that the eavesdropper gets zero bits of information. In this paper, we compute the perfect secrecy capacity of the multiple antenna MIMO broadcast channel, where the number of antennas is arbitrary for both the transmitter and the two receivers.<|reference_end|>
arxiv
@article{oggier2007the, title={The Secrecy Capacity of the MIMO Wiretap Channel}, author={Frédérique Oggier and Babak Hassibi}, journal={arXiv preprint arXiv:0710.1920}, year={2007}, archivePrefix={arXiv}, eprint={0710.1920}, primaryClass={cs.IT cs.CR math.IT} }
oggier2007the
arxiv-1372
0710.1924
A Heuristic Routing Mechanism Using a New Addressing Scheme
<|reference_start|>A Heuristic Routing Mechanism Using a New Addressing Scheme: Current methods of routing are based on network information in the form of routing tables, in which routing protocols determine how to update the tables according to the network changes. Despite the variability of data in routing tables, node addresses are constant. In this paper, we first introduce the new concept of variable addresses, which results in a novel framework to cope with routing problems using heuristic solutions. Then we propose a heuristic routing mechanism based on the application of genes for determination of network addresses in a variable-address network and describe how this method flexibly solves different problems and induces new ideas in providing integral solutions for a variety of problems. The case of ad-hoc networks is where simulation results are most supportive and where original solutions have been proposed for issues like mobility.<|reference_end|>
arxiv
@article{ravanbakhsh2007a, title={A Heuristic Routing Mechanism Using a New Addressing Scheme}, author={Mohsen Ravanbakhsh, Yasin Abbasi-Yadkori, Maghsoud Abbaspour, Hamid Sarbazi-Azad}, journal={Proceedings of First International Conference on Bio Inspired models of Networks, Information and Computing Systems (BIONETICS), Cavalese, Italy, December 2006}, year={2007}, archivePrefix={arXiv}, eprint={0710.1924}, primaryClass={cs.NI cs.AI} }
ravanbakhsh2007a
arxiv-1373
0710.1949
Distributed Source Coding Using Continuous-Valued Syndromes
<|reference_start|>Distributed Source Coding Using Continuous-Valued Syndromes: This paper addresses the problem of coding a continuous random source correlated with another source which is only available at the decoder. The proposed approach is based on the extension of the channel coding concept of syndrome from the discrete into the continuous domain. If the correlation between the sources can be described by an additive Gaussian backward channel and capacity-achieving linear codes are employed, it is shown that the performance of the system is asymptotically close to the Wyner-Ziv bound. Even if such an additive channel is not Gaussian, the design procedure can fit the desired correlation and transmission rate. Experiments based on trellis-coded quantization show that the proposed system achieves a performance within 3-4 dB of the theoretical bound in the 0.5-3 bit/sample rate range for any Gaussian correlation, with a reasonable computational complexity.<|reference_end|>
arxiv
@article{cappellari2007distributed, title={Distributed Source Coding Using Continuous-Valued Syndromes}, author={Lorenzo Cappellari}, journal={arXiv preprint arXiv:0710.1949}, year={2007}, archivePrefix={arXiv}, eprint={0710.1949}, primaryClass={cs.IT math.IT} }
cappellari2007distributed
arxiv-1374
0710.1962
Stanford Matrix Considered Harmful
<|reference_start|>Stanford Matrix Considered Harmful: This note argues about the validity of web-graph data used in the literature.<|reference_end|>
arxiv
@article{vigna2007stanford, title={Stanford Matrix Considered Harmful}, author={Sebastiano Vigna}, journal={arXiv preprint arXiv:0710.1962}, year={2007}, archivePrefix={arXiv}, eprint={0710.1962}, primaryClass={cs.IR} }
vigna2007stanford
arxiv-1375
0710.1976
Solving Infinite Kolam in Knot Theory
<|reference_start|>Solving Infinite Kolam in Knot Theory: In south India, there are traditional patterns of line-drawings encircling dots, called ``Kolam'', among which one-line drawings or the ``infinite Kolam'' provide very interesting questions in mathematics. For example, we address the following simple question: how many patterns of infinite Kolam can we draw for a given grid pattern of dots? The simplest way is to draw possible patterns of Kolam while judging if it is infinite Kolam. Such a search problem seems to be NP complete. However, it is certainly not. In this paper, we focus on diamond-shaped grid patterns of dots, (1-3-5-3-1) and (1-3-5-7-5-3-1) in particular. By using the knot-theory description of the infinite Kolam, we show how to find the solution, which inevitably gives a sketch of the proof for the statement ``infinite Kolam is not NP complete.'' Its further discussion will be given in the final section.<|reference_end|>
arxiv
@article{ishimoto2007solving, title={Solving Infinite Kolam in Knot Theory}, author={Yukitaka Ishimoto}, journal={Forma 22 (2007) 15-30}, year={2007}, number={OIQP-06-15}, archivePrefix={arXiv}, eprint={0710.1976}, primaryClass={cs.DM cond-mat.stat-mech} }
ishimoto2007solving
arxiv-1376
0710.2018
Cognitive Interference Channels with Confidential Messages
<|reference_start|>Cognitive Interference Channels with Confidential Messages: The cognitive interference channel with confidential messages is studied. Similarly to the classical two-user interference channel, the cognitive interference channel consists of two transmitters whose signals interfere at the two receivers. It is assumed that there is a common message source (message 1) known to both transmitters, and an additional independent message source (message 2) known only to the cognitive transmitter (transmitter 2). The cognitive receiver (receiver 2) needs to decode both messages, while the non-cognitive receiver (receiver 1) should decode only the common message. Furthermore, message 2 is assumed to be a confidential message which needs to be kept as secret as possible from receiver 1, which is viewed as an eavesdropper with regard to message 2. The level of secrecy is measured by the equivocation rate. A single-letter expression for the capacity-equivocation region of the discrete memoryless cognitive interference channel is established and is further explicitly derived for the Gaussian case. Moreover, particularizing the capacity-equivocation region to the case without a secrecy constraint establishes a new capacity theorem for a class of interference channels by providing a converse theorem.<|reference_end|>
arxiv
@article{liang2007cognitive, title={Cognitive Interference Channels with Confidential Messages}, author={Yingbin Liang, Anelia Somekh-Baruch, H. Vincent Poor, Shlomo Shamai (Shitz), and Sergio Verdu}, journal={arXiv preprint arXiv:0710.2018}, year={2007}, archivePrefix={arXiv}, eprint={0710.2018}, primaryClass={cs.IT math.IT} }
liang2007cognitive
arxiv-1377
0710.2037
An Affinity Propagation Based method for Vector Quantization Codebook Design
<|reference_start|>An Affinity Propagation Based method for Vector Quantization Codebook Design: In this paper, we first modify a parameter in affinity propagation (AP) to improve its convergence ability, and then apply it to the vector quantization (VQ) codebook design problem. In order to improve the quality of the resulting codebook, we combine the improved AP (IAP) with the conventional LBG algorithm to generate an effective algorithm called IAP-LBG. According to the experimental results, the proposed method not only enhances the convergence ability but is also capable of providing higher-quality codebooks than the conventional LBG method.<|reference_end|>
arxiv
@article{jiang2007an, title={An Affinity Propagation Based method for Vector Quantization Codebook Design}, author={Wu Jiang, Fei Ding and Qiao-liang Xiang}, journal={arXiv preprint arXiv:0710.2037}, year={2007}, archivePrefix={arXiv}, eprint={0710.2037}, primaryClass={cs.CV} }
jiang2007an
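A minimal Python sketch of the two-stage idea in the record above: an off-the-shelf affinity propagation pass proposes an initial codebook, followed by a few LBG/Lloyd refinement iterations. scikit-learn's AffinityPropagation stands in for the paper's modified IAP, and the damping value, toy data, and iteration count are assumptions.

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                       # toy training vectors

# Stage 1: affinity propagation proposes exemplars as an initial codebook.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
codebook = ap.cluster_centers_.copy()

# Stage 2: a few LBG/Lloyd iterations refine the codebook.
for _ in range(20):
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(1)
    for k in range(len(codebook)):
        members = X[nearest == k]
        if len(members):
            codebook[k] = members.mean(0)

d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
print(len(codebook), d2.min(1).mean())              # codebook size and mean distortion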
arxiv-1378
0710.2083
Association Rules in the Relational Calculus
<|reference_start|>Association Rules in the Relational Calculus: One of the most utilized data mining tasks is the search for association rules. Association rules represent significant relationships between items in transactions. We extend the concept of association rule to represent a much broader class of associations, which we refer to as \emph{entity-relationship rules.} Semantically, entity-relationship rules express associations between properties of related objects. Syntactically, these rules are based on a broad subclass of safe domain relational calculus queries. We propose a new definition of support and confidence for entity-relationship rules and for the frequency of entity-relationship queries. We prove that the definition of frequency satisfies standard probability axioms and the Apriori property.<|reference_end|>
arxiv
@article{schulte2007association, title={Association Rules in the Relational Calculus}, author={Oliver Schulte, Flavia Moser, Martin Ester and Zhiyong Lu}, journal={arXiv preprint arXiv:0710.2083}, year={2007}, number={SFU School of Computing Science, TR 2007-23}, archivePrefix={arXiv}, eprint={0710.2083}, primaryClass={cs.DB cs.LG cs.LO} }
schulte2007association
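For contrast with the entity-relationship generalization described in the record above, here is a hedged Python sketch of the classical, single-table notions of support and confidence; the transactions and the example rule are made up.

# Classical itemset support and confidence on toy transactions (illustration only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

rule = ({"bread"}, {"milk"})
print(support(rule[0] | rule[1]))   # 0.5
print(confidence(*rule))            # 0.666...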
arxiv-1379
0710.2092
Self-similarity of complex networks and hidden metric spaces
<|reference_start|>Self-similarity of complex networks and hidden metric spaces: We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden variable models with underlying metric spaces are able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.<|reference_end|>
arxiv
@article{serrano2007self-similarity, title={Self-similarity of complex networks and hidden metric spaces}, author={M. Angeles Serrano, Dmitri Krioukov, and Marian Boguna}, journal={Physical Review Letters 100, 078701 (2008)}, year={2007}, doi={10.1103/PhysRevLett.100.078701}, archivePrefix={arXiv}, eprint={0710.2092}, primaryClass={cond-mat.dis-nn cs.NI physics.soc-ph} }
serrano2007self-similarity
arxiv-1380
0710.2134
Discrete entropies of orthogonal polynomials
<|reference_start|>Discrete entropies of orthogonal polynomials: Let $p_n$ be the $n$-th orthonormal polynomial on the real line, whose zeros are $\lambda_j^{(n)}$, $j=1, ..., n$. Then for each $j=1, ..., n$, $$ \vec \Psi_j^2 = (\Psi_{1j}^2, ..., \Psi_{nj}^2) $$ with $$ \Psi_{ij}^2= p_{i-1}^2 (\lambda_j^{(n)}) (\sum_{k=0}^{n-1} p_k^2(\lambda_j^{(n)}))^{-1}, \quad i=1, ..., n, $$ defines a discrete probability distribution. The Shannon entropy of the sequence $\{p_n\}$ is consequently defined as $$ \mathcal S_{n,j} = -\sum_{i=1}^n \Psi_{ij}^{2} \log (\Psi_{ij}^{2}) . $$ In the case of Chebyshev polynomials of the first and second kinds an explicit and closed formula for $\mathcal S_{n,j}$ is obtained, revealing interesting connections with number theory. Besides, several results of numerical computations exemplifying the behavior of $\mathcal S_{n,j}$ for other families are also presented.<|reference_end|>
arxiv
@article{aptekarev2007discrete, title={Discrete entropies of orthogonal polynomials}, author={A.I. Aptekarev, J.S. Dehesa, A. Martinez-Finkelshtein, R. Yáñez}, journal={arXiv preprint arXiv:0710.2134}, year={2007}, archivePrefix={arXiv}, eprint={0710.2134}, primaryClass={math.CA cs.IT math-ph math.IT math.MP} }
aptekarev2007discrete
arxiv-1381
0710.2139
Approximation algorithms and hardness for domination with propagation
<|reference_start|>Approximation algorithms and hardness for domination with propagation: The power dominating set (PDS) problem is the following extension of the well-known dominating set problem: find a smallest-size set of nodes $S$ that power dominates all the nodes, where a node $v$ is power dominated if (1) $v$ is in $S$ or $v$ has a neighbor in $S$, or (2) $v$ has a neighbor $w$ such that $w$ and all of its neighbors except $v$ are power dominated. We show a hardness of approximation threshold of $2^{\log^{1-\epsilon}{n}}$ in contrast to the logarithmic hardness for the dominating set problem. We give an $O(\sqrt{n})$ approximation algorithm for planar graphs, and show that our methods cannot improve on this approximation guarantee. Finally, we initiate the study of PDS on directed graphs, and show the same hardness threshold of $2^{\log^{1-\epsilon}{n}}$ for directed \emph{acyclic} graphs. Also we show that the directed PDS problem can be solved optimally in linear time if the underlying undirected graph has bounded tree-width.<|reference_end|>
arxiv
@article{aazami2007approximation, title={Approximation algorithms and hardness for domination with propagation}, author={Ashkan Aazami, Michael D. Stilp}, journal={arXiv preprint arXiv:0710.2139}, year={2007}, archivePrefix={arXiv}, eprint={0710.2139}, primaryClass={cs.CC cs.DM} }
aazami2007approximation
arxiv-1382
0710.2156
Collaborative OLAP with Tag Clouds: Web 2.0 OLAP Formalism and Experimental Evaluation
<|reference_start|>Collaborative OLAP with Tag Clouds: Web 2.0 OLAP Formalism and Experimental Evaluation: Increasingly, business projects are ephemeral. New Business Intelligence tools must support ad-lib data sources and quick perusal. Meanwhile, tag clouds are a popular community-driven visualization technique. Hence, we investigate tag-cloud views with support for OLAP operations such as roll-ups, slices, dices, clustering, and drill-downs. As a case study, we implemented an application where users can upload data and immediately navigate through its ad hoc dimensions. To support social networking, views can be easily shared and embedded in other Web sites. Algorithmically, our tag-cloud views are approximate range top-k queries over spontaneous data cubes. We present experimental evidence that iceberg cuboids provide adequate online approximations. We benchmark several browser-oblivious tag-cloud layout optimizations.<|reference_end|>
arxiv
@article{aouiche2007collaborative, title={Collaborative OLAP with Tag Clouds: Web 2.0 OLAP Formalism and Experimental Evaluation}, author={Kamel Aouiche, Daniel Lemire and Robert Godin}, journal={arXiv preprint arXiv:0710.2156}, year={2007}, archivePrefix={arXiv}, eprint={0710.2156}, primaryClass={cs.DB} }
aouiche2007collaborative
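As a small illustration of the presentation layer mentioned in the record above (not of the iceberg-cuboid approximation or the range top-k machinery), the Python sketch below maps aggregated measures to tag-cloud font sizes on a logarithmic scale; the measures and pixel bounds are invented.

import math

# Map aggregated OLAP measures to tag-cloud font sizes (log scale); toy values only.
measures = {"sales": 120.0, "returns": 8.0, "europe": 45.0, "q3": 30.0}
lo, hi = min(measures.values()), max(measures.values())

def font_px(value, min_px=10, max_px=36):
    if hi == lo:
        return (min_px + max_px) // 2
    t = (math.log(value) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return round(min_px + t * (max_px - min_px))

print({tag: font_px(v) for tag, v in measures.items()})
# {'sales': 36, 'returns': 10, 'europe': 27, 'q3': 23}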
arxiv-1383
0710.2227
A System for Predicting Subcellular Localization of Yeast Genome Using Neural Network
<|reference_start|>A System for Predicting Subcellular Localization of Yeast Genome Using Neural Network: The subcellular location of a protein can provide valuable information about its function. With the rapid increase of sequenced genomic data, the need for an automated and accurate tool to predict subcellular localization becomes increasingly important. Many efforts have been made to predict protein subcellular localization. This paper aims to merge artificial neural networks and bioinformatics to predict the location of proteins in the yeast genome. We introduce a new subcellular prediction method based on a backpropagation neural network. The results show that prediction within an error limit of 5 to 10 percent can be achieved with the system.<|reference_end|>
arxiv
@article{thampi2007a, title={A System for Predicting Subcellular Localization of Yeast Genome Using Neural Network}, author={Sabu M. Thampi, K. Chandra Sekaran}, journal={arXiv preprint arXiv:0710.2227}, year={2007}, archivePrefix={arXiv}, eprint={0710.2227}, primaryClass={cs.NE cs.AI} }
thampi2007a
arxiv-1384
0710.2228
Recommendation model based on opinion diffusion
<|reference_start|>Recommendation model based on opinion diffusion: Information overload in the modern society calls for highly efficient recommendation algorithms. In this letter we present a novel diffusion based recommendation model, with users' ratings built into a transition matrix. To speed up computation we introduce a Green function method. The numerical tests on a benchmark database show that our prediction is superior to the standard recommendation methods.<|reference_end|>
arxiv
@article{zhang2007recommendation, title={Recommendation model based on opinion diffusion}, author={Yi-Cheng Zhang, Matus Medo, Jie Ren, Tao Zhou, Tao Li, and Fan Yang}, journal={Europhysics Letters 80 (2007) 68003}, year={2007}, doi={10.1209/0295-5075/80/68003}, archivePrefix={arXiv}, eprint={0710.2228}, primaryClass={physics.soc-ph cs.CY cs.IR physics.data-an} }
zhang2007recommendation
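As a rough illustration of diffusion-style scoring on a rating matrix, the sketch below propagates a user's ratings through an item-item transition matrix derived from the data. It is only a generic stand-in: the particular transition matrix of the paper and its Green-function computation are not reproduced.

# Generic diffusion-style scoring on a user-item rating matrix (illustration only).
import numpy as np

R = np.array([[5., 0., 3., 0.],      # made-up ratings, rows = users, cols = items
              [4., 2., 0., 1.],
              [0., 5., 4., 0.]])

# Item-item transition matrix from two steps of the user-item bipartite walk.
user_deg = R.sum(axis=1, keepdims=True)
item_deg = R.sum(axis=0, keepdims=True)
T = (R / np.where(user_deg == 0, 1, user_deg)).T @ (R / np.where(item_deg == 0, 1, item_deg))

def predict(user_ratings, steps=3):
    s = user_ratings.copy()
    for _ in range(steps):               # diffuse the user's ratings over the items
        s = T @ s
    return s

scores = predict(R[0])
print(np.argsort(-scores))               # item ranking for user 0 (rated items included)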
arxiv-1385
0710.2231
Comparison and Combination of State-of-the-art Techniques for Handwritten Character Recognition: Topping the MNIST Benchmark
<|reference_start|>Comparison and Combination of State-of-the-art Techniques for Handwritten Character Recognition: Topping the MNIST Benchmark: Although the recognition of isolated handwritten digits has been a research topic for many years, it continues to be of interest for the research community and for commercial applications. We show that despite the maturity of the field, different approaches still deliver results that vary enough to allow improvements by using their combination. We do so by choosing four well-motivated state-of-the-art recognition systems for which results on the standard MNIST benchmark are available. When comparing the errors made, we observe that the errors made differ between all four systems, suggesting the use of classifier combination. We then determine the error rate of a hypothetical system that combines the output of the four systems. The result obtained in this manner is an error rate of 0.35% on the MNIST data, the best result published so far. We furthermore discuss the statistical significance of the combined result and of the results of the individual classifiers.<|reference_end|>
arxiv
@article{keysers2007comparison, title={Comparison and Combination of State-of-the-art Techniques for Handwritten Character Recognition: Topping the MNIST Benchmark}, author={Daniel Keysers}, journal={arXiv preprint arXiv:0710.2231}, year={2007}, archivePrefix={arXiv}, eprint={0710.2231}, primaryClass={cs.CV} }
keysers2007comparison
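Classifier combination of the kind discussed above can be illustrated with a simple plurality vote over the individual systems' outputs; the sketch below does that on made-up predictions. The four systems compared in the paper, their MNIST outputs and the significance analysis are not reproduced.

# Plurality-vote combination of several classifiers' digit predictions (toy data).
from collections import Counter

def combine(predictions_per_classifier):
    # predictions_per_classifier: list of equally long lists of predicted labels.
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions_per_classifier)]

def error_rate(predicted, truth):
    return sum(p != t for p, t in zip(predicted, truth)) / len(truth)

truth = [7, 2, 1, 0, 4]
preds = [[7, 2, 1, 0, 9],                # made-up outputs of three classifiers
         [7, 2, 1, 6, 4],
         [7, 2, 8, 0, 4]]
print(error_rate(combine(preds), truth))  # 0.0 on this toy example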
arxiv-1386
0710.2243
Edge Local Complementation and Equivalence of Binary Linear Codes
<|reference_start|>Edge Local Complementation and Equivalence of Binary Linear Codes: Orbits of graphs under the operation edge local complementation (ELC) are defined. We show that the ELC orbit of a bipartite graph corresponds to the equivalence class of a binary linear code. The information sets and the minimum distance of a code can be derived from the corresponding ELC orbit. By extending earlier results on local complementation (LC) orbits, we classify the ELC orbits of all graphs on up to 12 vertices. We also give a new method for classifying binary linear codes, with running time comparable to the best known algorithm.<|reference_end|>
arxiv
@article{danielsen2007edge, title={Edge Local Complementation and Equivalence of Binary Linear Codes}, author={Lars Eirik Danielsen and Matthew G. Parker}, journal={Des. Codes Cryptogr. 49(1-3), 161-170, 2008}, year={2007}, doi={10.1007/s10623-008-9190-x}, archivePrefix={arXiv}, eprint={0710.2243}, primaryClass={math.CO cs.IT math.IT} }
danielsen2007edge
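Edge local complementation on an edge {u,v} can be obtained as the composition of three local complementations, G*u*v*u. The sketch below implements that composition on a graph stored as adjacency sets; it illustrates the operation only, not the code-classification algorithm of the paper.

# Edge local complementation (ELC) as G * u * v * u, where * is local
# complementation (complementing the subgraph induced by a vertex's neighbourhood).

def local_complement(adj, v):
    new = {x: set(nbrs) for x, nbrs in adj.items()}
    nv = adj[v]
    for x in nv:
        for y in nv:
            if x < y:                     # toggle every edge inside N(v); nodes are ints
                if y in new[x]:
                    new[x].discard(y); new[y].discard(x)
                else:
                    new[x].add(y); new[y].add(x)
    return new

def edge_local_complement(adj, u, v):
    assert v in adj[u], "ELC is only defined on an edge"
    return local_complement(local_complement(local_complement(adj, u), v), u)

# Path 0-1-2-3; ELC on the edge {1, 2} yields a 4-cycle.
G = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
H = edge_local_complement(G, 1, 2)
print({x: sorted(n) for x, n in sorted(H.items())})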
arxiv-1387
0710.2268
Complexity of some Path Problems in DAGs and Linear Orders
<|reference_start|>Complexity of some Path Problems in DAGs and Linear Orders: We investigate here the computational complexity of three natural problems in directed acyclic graphs. We prove their NP Completeness and consider their restrictions to linear orders.<|reference_end|>
arxiv
@article{burckel2007complexity, title={Complexity of some Path Problems in DAGs and Linear Orders}, author={Serge Burckel}, journal={arXiv preprint arXiv:0710.2268}, year={2007}, archivePrefix={arXiv}, eprint={0710.2268}, primaryClass={math.CO cs.IT math.IT} }
burckel2007complexity
arxiv-1388
0710.2284
Symmetric and Synchronous Communication in Peer-to-Peer Networks
<|reference_start|>Symmetric and Synchronous Communication in Peer-to-Peer Networks: Motivated by distributed implementations of game-theoretical algorithms, we study symmetric process systems and the problem of attaining common knowledge between processes. We formalize our setting by defining a notion of peer-to-peer networks(*) and appropriate symmetry concepts in the context of Communicating Sequential Processes (CSP), due to the common knowledge creating effects of its synchronous communication primitives. We then prove that CSP with input and output guards makes common knowledge in symmetric peer-to-peer networks possible, but not the restricted version which disallows output statements in guards and is commonly implemented. (*) Please note that we are not dealing with fashionable incarnations such as file-sharing networks, but merely use this name for a mathematical notion of a network consisting of directly connected peers "treated on an equal footing", i.e. not having a client-server structure or otherwise pre-determined roles.)<|reference_end|>
arxiv
@article{witzel2007symmetric, title={Symmetric and Synchronous Communication in Peer-to-Peer Networks}, author={Andreas Witzel}, journal={arXiv preprint arXiv:0710.2284}, year={2007}, archivePrefix={arXiv}, eprint={0710.2284}, primaryClass={cs.DC cs.GT} }
witzel2007symmetric
arxiv-1389
0710.2296
Vertex Percolation on Expander Graphs
<|reference_start|>Vertex Percolation on Expander Graphs: We say that a graph $G=(V,E)$ on $n$ vertices is a $\beta$-expander for some constant $\beta>0$ if every $U\subseteq V$ of cardinality $|U|\leq \frac{n}{2}$ satisfies $|N_G(U)|\geq \beta|U|$ where $N_G(U)$ denotes the neighborhood of $U$. In this work we explore the process of deleting vertices of a $\beta$-expander independently at random with probability $n^{-\alpha}$ for some constant $\alpha>0$, and study the properties of the resulting graph. Our main result states that as $n$ tends to infinity, the deletion process performed on a $\beta$-expander graph of bounded degree will result with high probability in a graph composed of a giant component containing $n-o(n)$ vertices that is in itself an expander graph, and constant size components. We proceed by applying the main result to expander graphs with a positive spectral gap. In the particular case of $(n,d,\lambda)$-graphs, that are such expanders, we compute the values of $\alpha$, under additional constraints on the graph, for which with high probability the resulting graph will stay connected, or will be composed of a giant component and isolated vertices. As a graph sampled from the uniform probability space of $d$-regular graphs with high probability is an expander and meets the additional constraints, this result strengthens a recent result due to Greenhill, Holt and Wormald about vertex percolation on random $d$-regular graphs. We conclude by showing that performing the above described deletion process on graphs that expand sub-linear sets by an unbounded expansion ratio, with high probability results in a connected expander graph.<|reference_end|>
arxiv
@article{ben-shimon2007vertex, title={Vertex Percolation on Expander Graphs}, author={Sonny Ben-Shimon and Michael Krivelevich}, journal={European Journal of Combinatorics, 30(2), pp. 339-350, 2009}, year={2007}, doi={10.1016/j.ejc.2008.07.001}, archivePrefix={arXiv}, eprint={0710.2296}, primaryClass={math.CO cs.DM math.PR} }
ben-shimon2007vertex
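The expander condition and the deletion process above can be checked by brute force on tiny graphs. The sketch below reads $N_G(U)$ as the external neighbourhood of $U$ (the usual vertex-expansion convention) and is exponential in n, so it is purely illustrative.

# Brute-force expansion ratio and a simulation of deleting vertices independently
# with probability n**(-alpha). Exponential in n; for tiny illustrative graphs only.
import itertools, random

def expansion_ratio(adj):
    nodes = list(adj)
    n = len(nodes)
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for U in itertools.combinations(nodes, r):
            U = set(U)
            boundary = set().union(*(adj[u] for u in U)) - U   # external neighbourhood
            best = min(best, len(boundary) / len(U))
    return best          # the graph is a beta-expander for every beta <= this value

def percolate(adj, alpha, seed=0):
    random.seed(seed)
    p = len(adj) ** (-alpha)
    kept = {v for v in adj if random.random() >= p}
    return {v: adj[v] & kept for v in kept}      # induced subgraph on surviving vertices

cycle = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(expansion_ratio(cycle))           # 0.5 for the 8-cycle (worst set: 4 consecutive vertices)
print(sorted(percolate(cycle, alpha=0.5)))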
arxiv-1390
0710.2358
Success and failure of programming environments - report on the design and use of a graphic abstract syntax tree editor
<|reference_start|>Success and failure of programming environments - report on the design and use of a graphic abstract syntax tree editor: The STAPLE project investigated (at the end of the eighties) a persistent architecture for functional programming. Work has been done in two directions: the development of a programming environment for a functional language within a persistent system and an experiment on transferring the expertise of functional prototyping into industry. This paper is a report on the first activity. The first section gives a general description of Absynte - the abstract syntax tree editor developed within the Project. Following sections make an attempt at measuring the effectiveness of such an editor and discuss the problems raised by structured syntax editing - especially environments based on abstract syntax trees.<|reference_end|>
arxiv
@article{recanati2007success, title={Success and failure of programming environments - report on the design and use of a graphic abstract syntax tree editor}, author={C. Recanati}, journal={arXiv preprint arXiv:0710.2358}, year={2007}, number={Esprit project no. 891 (STAPLE), Technical Report no 90/1, Paris, Jan 1990}, archivePrefix={arXiv}, eprint={0710.2358}, primaryClass={cs.PL cs.HC} }
recanati2007success
arxiv-1391
0710.2419
The Variable Hierarchy for the Games mu-Calculus
<|reference_start|>The Variable Hierarchy for the Games mu-Calculus: Parity games are combinatorial representations of closed Boolean mu-terms. By adding to them draw positions, they have been organized by Arnold and one of the authors into a mu-calculus. As done by Berwanger et al. for the propositional modal mu-calculus, it is possible to classify parity games into levels of a hierarchy according to the number of fixed-point variables. We ask whether this hierarchy collapses w.r.t. the standard interpretation of the games mu-calculus into the class of all complete lattices. We answer this question negatively by providing, for each n >= 1, a parity game Gn with these properties: it unravels to a mu-term built up with n fixed-point variables, and it is semantically equivalent to no game with strictly fewer than n-2 fixed-point variables.<|reference_end|>
arxiv
@article{belkhir2007the, title={The Variable Hierarchy for the Games mu-Calculus}, author={Walid Belkhir (LIF), Luigi Santocanale (LIF)}, journal={arXiv preprint arXiv:0710.2419}, year={2007}, archivePrefix={arXiv}, eprint={0710.2419}, primaryClass={cs.LO cs.GT math.LO} }
belkhir2007the
arxiv-1392
0710.2446
The structure of verbal sequences analyzed with unsupervised learning techniques
<|reference_start|>The structure of verbal sequences analyzed with unsupervised learning techniques: Data mining allows the exploration of sequences of phenomena, whereas one usually tends to focus on isolated phenomena or on the relation between two phenomena. It offers invaluable tools for theoretical analyses and exploration of the structure of sentences, texts, dialogues, and speech. We report here the results of an attempt at using it for inspecting sequences of verbs from French accounts of road accidents. This analysis stems from an original unsupervised learning approach that allows the discovery of the structure of sequential data. The only inputs to the analyzer were the verbs appearing in the sentences. It provided a classification of the links between two successive verbs into four distinct clusters, thus allowing text segmentation. We give here an interpretation of these clusters by applying a statistical analysis to independent semantic annotations.<|reference_end|>
arxiv
@article{recanati2007the, title={The structure of verbal sequences analyzed with unsupervised learning techniques}, author={Catherine Recanati (LIPN), Nicoleta Rogovschi (LIPN), Younès Bennani (LIPN)}, journal={In Proceedings of the 3rd Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Poznan, Poland (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0710.2446}, primaryClass={cs.CL cs.AI cs.LG} }
recanati2007the
arxiv-1393
0710.2496
Regression estimation from an individual stable sequence
<|reference_start|>Regression estimation from an individual stable sequence: We consider univariate regression estimation from an individual (non-random) sequence $(x_1,y_1),(x_2,y_2), ... \in \mathbb{R} \times \mathbb{R}$, which is stable in the sense that for each interval $A \subseteq \mathbb{R}$, (i) the limiting relative frequency of $A$ under $x_1, x_2, ...$ is governed by an unknown probability distribution $\mu$, and (ii) the limiting average of those $y_i$ with $x_i \in A$ is governed by an unknown regression function $m(\cdot)$. A computationally simple scheme for estimating $m(\cdot)$ is exhibited, and is shown to be $L_2$ consistent for stable sequences $\{(x_i,y_i)\}$ such that $\{y_i\}$ is bounded and there is a known upper bound for the variation of $m(\cdot)$ on intervals of the form $(-i,i]$, $i \geq 1$. Complementing this positive result, it is shown that there is no consistent estimation scheme for the family of stable sequences whose regression functions have finite variation, even under the restriction that $x_i \in [0,1]$ and $y_i$ is binary-valued.<|reference_end|>
arxiv
@article{morvai2007regression, title={Regression estimation from an individual stable sequence}, author={Gusztav Morvai, Sanjeev R. Kulkarni, Andrew B. Nobel}, journal={Statistics 33 (1999), no. 2, 99--118}, year={2007}, archivePrefix={arXiv}, eprint={0710.2496}, primaryClass={math.PR cs.IT math.IT math.ST stat.TH} }
morvai2007regression
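The abstract does not spell out the estimation scheme, so the sketch below is only a schematic stand-in: a one-pass, partition-based local-averaging estimator built from an individual $(x_i,y_i)$ sequence. The bin width and the toy data are arbitrary choices of ours.

# Partition-based regression estimate from a single (x_i, y_i) sequence: average
# the y's whose x falls in the same bin as the query point. Schematic stand-in only.
from collections import defaultdict

def build_estimator(pairs, bin_width=0.5):
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y in pairs:                    # one pass over the individual sequence
        b = int(x // bin_width)
        sums[b] += y
        counts[b] += 1
    def m_hat(x):
        b = int(x // bin_width)
        return sums[b] / counts[b] if counts[b] else 0.0
    return m_hat

# Toy sequence whose limiting regression function is m(x) = 2x on [0, 3).
pairs = [(0.1 * i % 3, 2 * (0.1 * i % 3)) for i in range(300)]
m_hat = build_estimator(pairs)
print(round(m_hat(1.3), 2))               # about 2.4, the average of 2x over the bin [1.0, 1.5)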
arxiv-1394
0710.2500
Density estimation from an individual numerical sequence
<|reference_start|>Density estimation from an individual numerical sequence: This paper considers estimation of a univariate density from an individual numerical sequence. It is assumed that (i) the limiting relative frequencies of the numerical sequence are governed by an unknown density, and (ii) there is a known upper bound for the variation of the density on an increasing sequence of intervals. A simple estimation scheme is proposed, and is shown to be $L_1$ consistent when (i) and (ii) apply. In addition it is shown that there is no consistent estimation scheme for the set of individual sequences satisfying only condition (i).<|reference_end|>
arxiv
@article{nobel2007density, title={Density estimation from an individual numerical sequence}, author={Andrew B. Nobel, Gusztav Morvai, Sanjeev R. Kulkarni}, journal={IEEE Trans. Inform. Theory 44 (1998), no. 2, 537--541}, year={2007}, archivePrefix={arXiv}, eprint={0710.2500}, primaryClass={math.PR cs.IT math.IT math.ST stat.TH} }
nobel2007density
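In the same spirit, a normalised histogram built from the individual sequence gives a schematic density estimate. Again this is a stand-in for the setting above, not the scheme of the paper, and the bin width is an arbitrary choice.

# Histogram density estimate from an individual numerical sequence: normalised
# bin counts approximate the density behind the limiting relative frequencies.
from collections import Counter

def histogram_density(xs, bin_width=0.25):
    counts = Counter(int(x // bin_width) for x in xs)
    total = len(xs)
    def f_hat(x):
        return counts[int(x // bin_width)] / (total * bin_width)
    return f_hat

xs = [(i * 0.618034) % 1.0 for i in range(10000)]   # equidistributed in [0, 1)
f_hat = histogram_density(xs)
print(round(f_hat(0.4), 2))                          # close to 1.0, the uniform density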
arxiv-1395
0710.2505
Generic Trace Semantics via Coinduction
<|reference_start|>Generic Trace Semantics via Coinduction: Trace semantics has been defined for various kinds of state-based systems, notably with different forms of branching such as non-determinism vs. probability. In this paper we claim to identify one underlying mathematical structure behind these "trace semantics," namely coinduction in a Kleisli category. This claim is based on our technical result that, under a suitably order-enriched setting, a final coalgebra in a Kleisli category is given by an initial algebra in the category Sets. Formerly the theory of coalgebras has been employed mostly in Sets where coinduction yields a finer process semantics of bisimilarity. Therefore this paper extends the application field of coalgebras, providing a new instance of the principle "process semantics via coinduction."<|reference_end|>
arxiv
@article{hasuo2007generic, title={Generic Trace Semantics via Coinduction}, author={Ichiro Hasuo, Bart Jacobs and Ana Sokolova}, journal={Logical Methods in Computer Science, Volume 3, Issue 4 (November 19, 2007) lmcs:864}, year={2007}, doi={10.2168/LMCS-3(4:11)2007}, archivePrefix={arXiv}, eprint={0710.2505}, primaryClass={cs.LO} }
hasuo2007generic
arxiv-1396
0710.2532
Sleeping on the Job: Energy-Efficient Broadcast for Radio Networks
<|reference_start|>Sleeping on the Job: Energy-Efficient Broadcast for Radio Networks: We address the problem of minimizing power consumption when performing reliable broadcast on a radio network under the following popular model. Each node in the network is located on a point in a two dimensional grid, and whenever a node sends a message, all awake nodes within distance r receive the message. In the broadcast problem, some node wants to successfully send a message to all other nodes in the network even when up to a 1/2 fraction of the nodes within every neighborhood can be deleted by an adversary. The set of deleted nodes is carefully chosen by the adversary to foil our algorithm and moreover, the set of deleted nodes may change periodically. This models worst-case behavior due to mobile nodes, static nodes losing power or simply some points in the grid being unoccupied. A trivial solution requires each node in the network to be awake roughly 1/2 the time, and a trivial lower bound shows that each node must be awake for at least a 1/n fraction of the time. Our first result is an algorithm that requires each node to be awake for only a 1/sqrt(n) fraction of the time in expectation. Our algorithm achieves this while ensuring correctness with probability 1, and keeping optimal values for other resource costs such as latency and number of messages sent. We give a lower-bound that shows that this reduction in power consumption is asymptotically optimal when latency and number of messages sent must be optimal. If we can increase the latency and messages sent by only a log*n factor we give a Las Vegas algorithm that requires each node to be awake for only a (log*n)/n expected fraction of the time; we give a lower-bound showing that this second algorithm is near optimal. Finally, we show how to ensure energy-efficient broadcast in the presence of Byzantine faults.<|reference_end|>
arxiv
@article{king2007sleeping, title={Sleeping on the Job: Energy-Efficient Broadcast for Radio Networks}, author={Valerie King, Cynthia Phillips, Jared Saia and Maxwell Young}, journal={arXiv preprint arXiv:0710.2532}, year={2007}, archivePrefix={arXiv}, eprint={0710.2532}, primaryClass={cs.DS} }
king2007sleeping
arxiv-1397
0710.2553
Capacity of Linear Two-hop Mesh Networks with Rate Splitting, Decode-and-forward Relaying and Cooperation
<|reference_start|>Capacity of Linear Two-hop Mesh Networks with Rate Splitting, Decode-and-forward Relaying and Cooperation: A linear mesh network is considered in which a single user per cell communicates to a local base station via a dedicated relay (two-hop communication). Exploiting the possibly relevant inter-cell channel gains, rate splitting with successive cancellation in both hops is investigated as a promising solution to improve the rate of basic single-rate communications. Then, an alternative solution is proposed that attempts to improve the performance of the second hop (from the relays to base stations) by cooperative transmission among the relay stations. The cooperative scheme leverages the common information obtained by the relays as a by-product of the use of rate splitting in the first hop. Numerical results bring insight into the conditions (network topology and power constraints) under which rate splitting, with possible relay cooperation, is beneficial. Multi-cell processing (joint decoding at the base stations) is also considered for reference.<|reference_end|>
arxiv
@article{simeone2007capacity, title={Capacity of Linear Two-hop Mesh Networks with Rate Splitting, Decode-and-forward Relaying and Cooperation}, author={O. Simeone, O. Somekh, Y. Bar-Ness, H. V. Poor, and S. Shamai}, journal={In the Proceedings of the 45th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, September 26 - 28, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0710.2553}, primaryClass={cs.IT math.IT} }
simeone2007capacity
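As a baseline for the two-hop setting above, the end-to-end decode-and-forward rate with no direct link is limited by the weaker hop, R = min(log2(1+SNR1), log2(1+SNR2)); any prefactor for orthogonal hop scheduling is ignored here. The snippet below just evaluates that baseline; the rate-splitting, successive-cancellation and relay-cooperation schemes analysed in the paper are not coded.

# Baseline two-hop decode-and-forward rate: the weaker hop is the bottleneck.
from math import log2

def df_two_hop_rate(snr_hop1, snr_hop2):
    return min(log2(1 + snr_hop1), log2(1 + snr_hop2))

# Made-up linear-scale SNRs: strong user-to-relay link, weaker relay-to-BS link.
print(round(df_two_hop_rate(15.0, 7.0), 3))   # limited by the second hop: log2(8) = 3 bits/use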
arxiv-1398
0710.2604
Efficient Skyline Querying with Variable User Preferences on Nominal Attributes
<|reference_start|>Efficient Skyline Querying with Variable User Preferences on Nominal Attributes: Current skyline evaluation techniques assume a fixed ordering on the attributes. However, dynamic preferences on nominal attributes are more realistic in known applications. In order to generate online response for any such preference issued by a user, we propose two methods of different characteristics. The first one is a semi-materialization method and the second is an adaptive SFS method. Finally, we conduct experiments to show the efficiency of our proposed algorithms.<|reference_end|>
arxiv
@article{wong2007efficient, title={Efficient Skyline Querying with Variable User Preferences on Nominal Attributes}, author={Raymond Chi-Wing Wong, Ada Wai-chee Fu, Jian Pei, Yip Sing Ho, Tai Wong, Yubao Liu}, journal={arXiv preprint arXiv:0710.2604}, year={2007}, archivePrefix={arXiv}, eprint={0710.2604}, primaryClass={cs.DB} }
wong2007efficient
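The SFS (sort-filter-skyline) idea mentioned above can be sketched compactly for the static case in which smaller is better on every attribute: presort by a monotone score, then keep a window of incomparable points. The variable preferences on nominal attributes and the semi-materialization method of the paper are not reproduced, and the toy (distance, price) data is made up.

# Compact sort-filter-skyline (SFS) for fixed "smaller is better" preferences.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sfs_skyline(points):
    # Presorting by the attribute sum guarantees that no later point can dominate
    # an earlier one, so a single pass with a window of survivors suffices.
    window = []
    for p in sorted(points, key=sum):
        if not any(dominates(q, p) for q in window):
            window.append(p)
    return window

hotels = [(3, 120), (1, 180), (2, 140), (4, 100), (2, 200)]   # (distance, price)
print(sfs_skyline(hotels))   # [(4, 100), (3, 120), (2, 140), (1, 180)]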
arxiv-1399
0710.2611
Geometric Analogue of Holographic Reduced Representation
<|reference_start|>Geometric Analogue of Holographic Reduced Representation: Holographic reduced representations (HRR) are based on superpositions of convolution-bound $n$-tuples, but the $n$-tuples cannot be regarded as vectors since the formalism is basis dependent. This is why HRR cannot be associated with geometric structures. Replacing convolutions by geometric products one arrives at reduced representations analogous to HRR but interpretable in terms of geometry. Variable bindings occurring in both HRR and its geometric analogue mathematically correspond to two different representations of $Z_2\times...\times Z_2$ (the additive group of binary $n$-tuples with addition modulo 2). As opposed to standard HRR, variable binding performed by means of geometric product allows for computing exact inverses of all nonzero vectors, a procedure even simpler than approximate inverses employed in HRR. The formal structure of the new reduced representation is analogous to cartoon computation, a geometric analogue of quantum computation.<|reference_end|>
arxiv
@article{aerts2007geometric, title={Geometric Analogue of Holographic Reduced Representation}, author={Diederik Aerts, Marek Czachor, Bart De Moor}, journal={Journal of Mathematical Psychology 53, 389-398 (2009)}, year={2007}, archivePrefix={arXiv}, eprint={0710.2611}, primaryClass={cs.AI quant-ph} }
aerts2007geometric
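For contrast with the geometric-product binding discussed above, here is the standard convolution-based HRR binding and approximate unbinding, computed with the FFT. The geometric-algebra variant proposed in the paper is not implemented; the dimension and the N(0, 1/n) entries follow the usual HRR convention.

# Standard HRR: bind by circular convolution (via the FFT), unbind approximately
# by convolving with the involution of the cue vector. Illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1024

def rand_vec():
    return rng.normal(0.0, 1.0 / np.sqrt(n), size=n)    # i.i.d. N(0, 1/n) entries

def bind(a, b):      # circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):    # correlate with a's approximate inverse (index involution)
    a_inv = np.roll(a[::-1], 1)                          # a*[i] = a[-i mod n]
    return bind(c, a_inv)

role, filler = rand_vec(), rand_vec()
trace = bind(role, filler)
recovered = unbind(trace, role)
cos = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(round(float(cos), 2))   # well above the near-zero similarity of an unrelated vector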
arxiv-1400
0710.2659
Rigidity and persistence for ensuring shape maintenance of multiagent meta formations (ext'd version)
<|reference_start|>Rigidity and persistence for ensuring shape maintenance of multiagent meta formations (ext'd version): This paper treats the problem of merging formations, where the underlying model of a formation is graphical. We first analyze the rigidity and persistence of meta-formations, which are formations obtained by connecting several rigid or persistent formations. Persistence is a generalization to directed graphs of the undirected notion of rigidity. In the context of moving autonomous agent formations, persistence characterizes the efficacy of a directed structure of unilateral distance constraints seeking to preserve a formation shape. We then derive, for agents evolving in a two- or three-dimensional space, the conditions under which a set of persistent formations can be merged into a persistent meta-formation, and give the minimal number of interconnections needed for such a merging. We also give conditions for a meta-formation obtained by merging several persistent formations to be persistent.<|reference_end|>
arxiv
@article{hendrickx2007rigidity, title={Rigidity and persistence for ensuring shape maintenance of multiagent meta formations (ext'd version)}, author={Julien M. Hendrickx, Changbin Yu, Baris Fidan and Brian D.O. Anderson}, journal={arXiv preprint arXiv:0710.2659}, year={2007}, archivePrefix={arXiv}, eprint={0710.2659}, primaryClass={cs.MA cs.DM} }
hendrickx2007rigidity