corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-5501 | 0811.3055 | Exact phase transition of backtrack-free search with implications on the power of greedy algorithms | <|reference_start|>Exact phase transition of backtrack-free search with implications on the power of greedy algorithms: Backtracking is a basic strategy to solve constraint satisfaction problems (CSPs). A satisfiable CSP instance is backtrack-free if a solution can be found without encountering any dead-end during a backtracking search, implying that the instance is easy to solve. We prove an exact phase transition of backtrack-free search in some random CSPs, namely in Model RB and in Model RD. This is the first time an exact phase transition of backtrack-free search can be identified on some random CSPs. Our technical results also have interesting implications on the power of greedy algorithms, on the width of random hypergraphs and on the exact satisfiability threshold of random CSPs.<|reference_end|> | arxiv | @article{li2008exact,
title={Exact phase transition of backtrack-free search with implications on the
power of greedy algorithms},
author={Liang Li and Tian Liu and Ke Xu},
journal={arXiv preprint arXiv:0811.3055},
year={2008},
archivePrefix={arXiv},
eprint={0811.3055},
primaryClass={cs.AI cs.DM cs.DS}
} | li2008exact |
arxiv-5502 | 0811.3062 | Dynamic External Hashing: The Limit of Buffering | <|reference_start|>Dynamic External Hashing: The Limit of Buffering: Hash tables are one of the most fundamental data structures in computer science, in both theory and practice. They are especially useful in external memory, where their query performance approaches the ideal cost of just one disk access. Knuth gave an elegant analysis showing that with some simple collision resolution strategies such as linear probing or chaining, the expected average number of disk I/Os of a lookup is merely $1+1/2^{\Omega(b)}$, where each I/O can read a disk block containing $b$ items. Inserting a new item into the hash table also costs $1+1/2^{\Omega(b)}$ I/Os, which is again almost the best one can do if the hash table is entirely stored on disk. However, this assumption is unrealistic since any algorithm operating on an external hash table must have some internal memory (at least $\Omega(1)$ blocks) to work with. The availability of a small internal memory buffer can dramatically reduce the amortized insertion cost to $o(1)$ I/Os for many external memory data structures. In this paper we study the inherent query-insertion tradeoff of external hash tables in the presence of a memory buffer. In particular, we show that for any constant $c>1$, if the query cost is targeted at $1+O(1/b^{c})$ I/Os, then it is not possible to support insertions in less than $1-O(1/b^{\frac{c-1}{4}})$ I/Os amortized, which means that the memory buffer is essentially useless. While if the query cost is relaxed to $1+O(1/b^{c})$ I/Os for any constant $c<1$, there is a simple dynamic hash table with $o(1)$ insertion cost. These results also answer the open question recently posed by Jensen and Pagh.<|reference_end|> | arxiv | @article{wei2008dynamic,
title={Dynamic External Hashing: The Limit of Buffering},
author={Zhewei Wei and Ke Yi and Qin Zhang},
journal={arXiv preprint arXiv:0811.3062},
year={2008},
archivePrefix={arXiv},
eprint={0811.3062},
primaryClass={cs.DS}
} | wei2008dynamic |
arxiv-5503 | 0811.3116 | Geometric properties of satisfying assignments of random $\epsilon$-1-in-k SAT | <|reference_start|>Geometric properties of satisfying assignments of random $\epsilon$-1-in-k SAT: We study the geometric structure of the set of solutions of random $\epsilon$-1-in-k SAT problem. For $l\geq 1$, two satisfying assignments $A$ and $B$ are $l$-connected if there exists a sequence of satisfying assignments connecting them by changing at most $l$ bits at a time. We first prove that w.h.p. two assignments of a random $\epsilon$-1-in-$k$ SAT instance are $O(\log n)$-connected, conditional on being satisfying assignments. Also, there exists $\epsilon_{0}\in (0,\frac{1}{k-2})$ such that w.h.p. no two satisfying assignments at distance at least $\epsilon_{0}\cdot n$ form a "hole" in the set of assignments. We believe that this is true for all $\epsilon >0$, and thus satisfying assignments of a random 1-in-$k$ SAT instance form a single cluster.<|reference_end|> | arxiv | @article{istrate2008geometric,
title={Geometric properties of satisfying assignments of random
$\epsilon$-1-in-k SAT},
author={Gabriel Istrate},
journal={International Journal of Computer Mathematics, 86(12), pp.
2029-2039, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0811.3116},
primaryClass={cs.CC cs.DM}
} | istrate2008geometric |
arxiv-5504 | 0811.3137 | Collecting and Preserving Videogames and Their Related Materials: A Review of Current Practice, Game-Related Archives and Research Projects | <|reference_start|>Collecting and Preserving Videogames and Their Related Materials: A Review of Current Practice, Game-Related Archives and Research Projects: This paper reviews the major methods and theories regarding the preservation of new media artifacts such as videogames, and argues for the importance of collecting and coming to a better understanding of videogame artifacts of creation, which will help build a more detailed understanding of the essential qualities of these culturally significant artifacts. We will also review the major videogame collections in the United States, Europe and Japan to give an idea of the current state of videogame archives, and argue for a fuller, more comprehensive coverage of these materials in institutional repositories.<|reference_end|> | arxiv | @article{winget2008collecting,
title={Collecting and Preserving Videogames and Their Related Materials: A
Review of Current Practice, Game-Related Archives and Research Projects},
author={Megan A. Winget and Caitlin Murray},
journal={arXiv preprint arXiv:0811.3137},
year={2008},
archivePrefix={arXiv},
eprint={0811.3137},
primaryClass={cs.DL}
} | winget2008collecting |
arxiv-5505 | 0811.3140 | Desynched channels on IRCnet | <|reference_start|>Desynched channels on IRCnet: In this paper we describe what a desynchronised channel on IRC is. We give procedures on how to create such a channel and how to remove desynchronisation. We explain which types of desynchronisation there are, what properties desynchronised channels have, and which properties can be exploited.<|reference_end|> | arxiv | @article{hansen2008desynched,
title={Desynched channels on IRCnet},
author={Michael Hansen and Jeroen F. J. Laros},
journal={arXiv preprint arXiv:0811.3140},
year={2008},
archivePrefix={arXiv},
eprint={0811.3140},
primaryClass={cs.NI cs.CR}
} | hansen2008desynched |
arxiv-5506 | 0811.3161 | An Almost Optimal Rank Bound for Depth-3 Identities | <|reference_start|>An Almost Optimal Rank Bound for Depth-3 Identities: We show that the rank of a depth-3 circuit (over any field) that is simple, minimal and zero is at most k^3\log d. The previous best rank bound known was 2^{O(k^2)}(\log d)^{k-2} by Dvir and Shpilka (STOC 2005). This almost resolves the rank question first posed by Dvir and Shpilka (as we also provide a simple and minimal identity of rank \Omega(k\log d)). Our rank bound significantly improves (dependence on k exponentially reduced) the best known deterministic black-box identity tests for depth-3 circuits by Karnin and Shpilka (CCC 2008). Our techniques also shed light on the factorization pattern of nonzero depth-3 circuits, most strikingly: the rank of linear factors of a simple, minimal and nonzero depth-3 circuit (over any field) is at most k^3\log d. The novel feature of this work is a new notion of maps between sets of linear forms, called "ideal matchings", used to study depth-3 circuits. We prove interesting structural results about depth-3 identities using these techniques. We believe that these can lead to the goal of a deterministic polynomial time identity test for these circuits.<|reference_end|> | arxiv | @article{saxena2008an,
title={An Almost Optimal Rank Bound for Depth-3 Identities},
author={Nitin Saxena and C. Seshadhri},
journal={arXiv preprint arXiv:0811.3161},
year={2008},
archivePrefix={arXiv},
eprint={0811.3161},
primaryClass={cs.CC}
} | saxena2008an |
arxiv-5507 | 0811.3165 | Trading GRH for algebra: algorithms for factoring polynomials and related structures | <|reference_start|>Trading GRH for algebra: algorithms for factoring polynomials and related structures: In this paper we develop techniques that eliminate the need of the Generalized Riemann Hypothesis (GRH) from various (almost all) known results about deterministic polynomial factoring over finite fields. Our main result shows that given a polynomial f(x) of degree n over a finite field k, we can find in deterministic poly(n^{\log n},\log |k|) time "either" a nontrivial factor of f(x) "or" a nontrivial automorphism of k[x]/(f(x)) of order n. This main tool leads to various new GRH-free results, most striking of which are: (1) Given a noncommutative algebra over a finite field, we can find a zero divisor in deterministic subexponential time. (2) Given a positive integer r such that either 8|r or r has at least two distinct odd prime factors. There is a deterministic polynomial time algorithm to find a nontrivial factor of the r-th cyclotomic polynomial over a finite field. In this paper, following the seminal work of Lenstra (1991) on constructing isomorphisms between finite fields, we further generalize classical Galois theory constructs like cyclotomic extensions, Kummer extensions, Teichmuller subgroups, to the case of commutative semisimple algebras with automorphisms. These generalized constructs help eliminate the dependence on GRH.<|reference_end|> | arxiv | @article{ivanyos2008trading,
title={Trading GRH for algebra: algorithms for factoring polynomials and
related structures},
author={G\'abor Ivanyos and Marek Karpinski and Lajos R\'onyai and Nitin Saxena},
journal={arXiv preprint arXiv:0811.3165},
year={2008},
archivePrefix={arXiv},
eprint={0811.3165},
primaryClass={cs.CC cs.SC}
} | ivanyos2008trading |
arxiv-5508 | 0811.3176 | Self-stabilizing Numerical Iterative Computation | <|reference_start|>Self-stabilizing Numerical Iterative Computation: Many challenging tasks in sensor networks, including sensor calibration, ranking of nodes, monitoring, event region detection, collaborative filtering, collaborative signal processing, {\em etc.}, can be formulated as a problem of solving a linear system of equations. Several recent works propose different distributed algorithms for solving these problems, usually by using linear iterative numerical methods. In this work, we extend the settings of the above approaches, by adding another dimension to the problem. Specifically, we are interested in {\em self-stabilizing} algorithms, that continuously run and converge to a solution from any initial state. This aspect of the problem is highly important due to the dynamic nature of the network and the frequent changes in the measured environment. In this paper, we link together algorithms from two different domains. On the one hand, we use the rich linear algebra literature of linear iterative methods for solving systems of linear equations, which are naturally distributed with rapid convergence properties. On the other hand, we are interested in self-stabilizing algorithms, where the input to the computation is constantly changing, and we would like the algorithms to converge from any initial state. We propose a simple novel method called \syncAlg as a self-stabilizing variant of the linear iterative methods. We prove that under mild conditions the self-stabilizing algorithm converges to a desired result. We further extend these results to handle the asynchronous case. As a case study, we discuss the sensor calibration problem and provide simulation results to support the applicability of our approach.<|reference_end|> | arxiv | @article{hoch2008self-stabilizing,
title={Self-stabilizing Numerical Iterative Computation},
author={Ezra N. Hoch and Danny Bickson and Danny Dolev},
journal={In the 10th International Symposium on Stabilization, Safety, and
Security of Distributed Systems (SSS '08), Detriot, Nov. 2008},
year={2008},
doi={10.1007/978-3-540-89335-6_9},
archivePrefix={arXiv},
eprint={0811.3176},
primaryClass={cs.DC}
} | hoch2008self-stabilizing |
arxiv-5509 | 0811.3208 | Quantum algorithms for highly non-linear Boolean functions | <|reference_start|>Quantum algorithms for highly non-linear Boolean functions: Attempts to separate the power of classical and quantum models of computation have a long history. The ultimate goal is to find exponential separations for computational problems. However, such separations do not come a dime a dozen: while there were some early successes in the form of hidden subgroup problems for abelian groups--which generalize Shor's factoring algorithm perhaps most faithfully--only for a handful of non-abelian groups efficient quantum algorithms were found. Recently, problems have gotten increased attention that seek to identify hidden sub-structures of other combinatorial and algebraic objects besides groups. In this paper we provide new examples for exponential separations by considering hidden shift problems that are defined for several classes of highly non-linear Boolean functions. These so-called bent functions arise in cryptography, where their property of having perfectly flat Fourier spectra on the Boolean hypercube gives them resilience against certain types of attack. We present new quantum algorithms that solve the hidden shift problems for several well-known classes of bent functions in polynomial time and with a constant number of queries, while the classical query complexity is shown to be exponential. Our approach uses a technique that exploits the duality between bent functions and their Fourier transforms.<|reference_end|> | arxiv | @article{roetteler2008quantum,
title={Quantum algorithms for highly non-linear Boolean functions},
author={Martin Roetteler},
journal={Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA'10), pp. 448-457, 2010},
year={2008},
archivePrefix={arXiv},
eprint={0811.3208},
primaryClass={quant-ph cs.CC}
} | roetteler2008quantum |
arxiv-5510 | 0811.3231 | A Rational Deconstruction of Landin's SECD Machine with the J Operator | <|reference_start|>A Rational Deconstruction of Landin's SECD Machine with the J Operator: Landin's SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin's J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (transformation into continu-ation-passing style (CPS) and defunctionalization, chiefly) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke's double-barrelled continuations and to Felleisen's encoding of J in terms of call/cc. We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions with the J operator, based on Curien's original calculus of explicit substitutions. These reduction semantics mechanically correspond to the modernized versions of the SECD machine and to the best of our knowledge, they provide the first syntactic theories of applicative expressions with the J operator.<|reference_end|> | arxiv | @article{danvy2008a,
title={A Rational Deconstruction of Landin's SECD Machine with the J Operator},
author={Olivier Danvy and Kevin Millikin},
journal={Logical Methods in Computer Science, Volume 4, Issue 4 (November
29, 2008) lmcs:1112},
year={2008},
doi={10.2168/LMCS-4(4:12)2008},
archivePrefix={arXiv},
eprint={0811.3231},
primaryClass={cs.PL cs.LO}
} | danvy2008a |
arxiv-5511 | 0811.3233 | Cubefree words with many squares | <|reference_start|>Cubefree words with many squares: We construct infinite cubefree binary words containing exponentially many distinct squares of length n. We also show that for every positive integer n, there is a cubefree binary square of length 2n.<|reference_end|> | arxiv | @article{currie2008cubefree,
title={Cubefree words with many squares},
author={James Currie and Narad Rampersad},
journal={arXiv preprint arXiv:0811.3233},
year={2008},
archivePrefix={arXiv},
eprint={0811.3233},
primaryClass={math.CO cs.FL}
} | currie2008cubefree |
arxiv-5512 | 0811.3244 | Linear Time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems | <|reference_start|>Linear Time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems: We design a linear time approximation scheme for the Gale-Berlekamp Switching Game and generalize it to a wider class of dense fragile minimization problems including the Nearest Codeword Problem (NCP) and Unique Games Problem. Further applications include, among other things, finding a constrained form of matrix rigidity and maximum likelihood decoding of an error correcting code. As another application of our method we give the first linear time approximation schemes for correlation clustering with a fixed number of clusters and its hierarchical generalization. Our results depend on a new technique for dealing with small objective function values of optimization problems and could be of independent interest.<|reference_end|> | arxiv | @article{karpinski2008linear,
title={Linear Time Approximation Schemes for the Gale-Berlekamp Game and
Related Minimization Problems},
author={Marek Karpinski and Warren Schudy},
journal={arXiv preprint arXiv:0811.3244},
year={2008},
archivePrefix={arXiv},
eprint={0811.3244},
primaryClass={cs.DS cs.DM}
} | karpinski2008linear |
arxiv-5513 | 0811.3247 | An experimental analysis of Lemke-Howson algorithm | <|reference_start|>An experimental analysis of Lemke-Howson algorithm: We present an experimental investigation of the performance of the Lemke-Howson algorithm, which is the most widely used algorithm for the computation of a Nash equilibrium for bimatrix games. Lemke-Howson algorithm is based upon a simple pivoting strategy, which corresponds to following a path whose endpoint is a Nash equilibrium. We analyze both the basic Lemke-Howson algorithm and a heuristic modification of it, which we designed to cope with the effects of a 'bad' initial choice of the pivot. Our experimental findings show that, on uniformly random games, the heuristics achieves a linear running time, while the basic Lemke-Howson algorithm runs in time roughly proportional to a polynomial of degree seven. To conduct the experiments, we have developed our own implementation of Lemke-Howson algorithm, which turns out to be significantly faster than state-of-the-art software. This allowed us to run the algorithm on a much larger set of data, and on instances of much larger size, compared with previous work.<|reference_end|> | arxiv | @article{codenotti2008an,
title={An experimental analysis of Lemke-Howson algorithm},
author={Bruno Codenotti and Stefano De Rossi and Marino Pagan},
journal={arXiv preprint arXiv:0811.3247},
year={2008},
archivePrefix={arXiv},
eprint={0811.3247},
primaryClass={cs.DS cs.NA}
} | codenotti2008an |
arxiv-5514 | 0811.3272 | Characterizing the Robustness of Complex Networks | <|reference_start|>Characterizing the Robustness of Complex Networks: With increasingly ambitious initiatives such as GENI and FIND that seek to design the future Internet, it becomes imperative to define the characteristics of robust topologies, and build future networks optimized for robustness. This paper investigates the characteristics of network topologies that maintain a high level of throughput in spite of multiple attacks. To this end, we select network topologies belonging to the main network models and some real world networks. We consider three types of attacks: removal of random nodes, high degree nodes, and high betweenness nodes. We use elasticity as our robustness measure and, through our analysis, illustrate that different topologies can have different degrees of robustness. In particular, elasticity can fall as low as 0.8% of the upper bound based on the attack employed. This result substantiates the need for optimized network topology design. Furthermore, we implement a tradeoff function that combines elasticity under the three attack strategies and considers the cost of the network. Our extensive simulations show that, for a given network density, regular and semi-regular topologies can have higher degrees of robustness than heterogeneous topologies, and that link redundancy is a sufficient but not necessary condition for robustness.<|reference_end|> | arxiv | @article{sydney2008characterizing,
title={Characterizing the Robustness of Complex Networks},
author={Ali Sydney and Caterina Scoglio and Mina Youssef and Phillip Schumm},
journal={arXiv preprint arXiv:0811.3272},
year={2008},
archivePrefix={arXiv},
eprint={0811.3272},
primaryClass={cs.NI cs.PF physics.data-an}
} | sydney2008characterizing |
arxiv-5515 | 0811.3284 | SINR Diagrams: Towards Algorithmically Usable SINR Models of Wireless Networks | <|reference_start|>SINR Diagrams: Towards Algorithmically Usable SINR Models of Wireless Networks: The rules governing the availability and quality of connections in a wireless network are described by physical models such as the signal-to-interference & noise ratio (SINR) model. For a collection of simultaneously transmitting stations in the plane, it is possible to identify a reception zone for each station, consisting of the points where its transmission is received correctly. The resulting SINR diagram partitions the plane into a reception zone per station and the remaining plane where no station can be heard. SINR diagrams appear to be fundamental to understanding the behavior of wireless networks, and may play a key role in the development of suitable algorithms for such networks, analogous perhaps to the role played by Voronoi diagrams in the study of proximity queries and related issues in computational geometry. So far, however, the properties of SINR diagrams have not been studied systematically, and most algorithmic studies in wireless networking rely on simplified graph-based models such as the unit disk graph (UDG) model, which conveniently abstract away interference-related complications, and make it easier to handle algorithmic issues, but consequently fail to capture accurately some important aspects of wireless networks. The current paper focuses on obtaining some basic understanding of SINR diagrams, their properties and their usability in algorithmic applications. Specifically, based on some algebraic properties of the polynomials defining the reception zones we show that assuming uniform power transmissions, the reception zones are convex and relatively well-rounded. 
These results are then used to develop an efficient approximation algorithm for a fundamental point location problem in wireless networks.<|reference_end|> | arxiv | @article{avin2008sinr,
title={SINR Diagrams: Towards Algorithmically Usable SINR Models of Wireless
Networks},
author={Chen Avin (1) and Yuval Emek (2) and Erez Kantor (2) and Zvi Lotker (1)
and David Peleg (2) and Liam Roditty (3) ((1) Department of Communication
Systems Engineering, Ben Gurion University, Israel (2) Department of Computer Science
and Applied Mathematics, Weizmann Institute of Science, Israel, (3)
Department of Computer Science, Bar Ilan University, Israel)},
journal={arXiv preprint arXiv:0811.3284},
year={2008},
archivePrefix={arXiv},
eprint={0811.3284},
primaryClass={cs.NI cs.DC}
} | avin2008sinr |
arxiv-5516 | 0811.3301 | Faster Retrieval with a Two-Pass Dynamic-Time-Warping Lower Bound | <|reference_start|>Faster Retrieval with a Two-Pass Dynamic-Time-Warping Lower Bound: The Dynamic Time Warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB Keogh). We compare LB Keogh with a tighter lower bound (LB Improved). We find that LB Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.<|reference_end|> | arxiv | @article{lemire2008faster,
title={Faster Retrieval with a Two-Pass Dynamic-Time-Warping Lower Bound},
author={Daniel Lemire},
journal={Daniel Lemire, Faster Retrieval with a Two-Pass
Dynamic-Time-Warping Lower Bound, Pattern Recognition 42(9): 2169-2180 (2009)},
year={2008},
doi={10.1016/j.patcog.2008.11.030},
archivePrefix={arXiv},
eprint={0811.3301},
primaryClass={cs.DB cs.CV}
} | lemire2008faster |
arxiv-5517 | 0811.3328 | chi2TeX Semi-automatic translation from chiwriter to LaTeX | <|reference_start|>chi2TeX Semi-automatic translation from chiwriter to LaTeX: Semi-automatic translation of math-filled book from obsolete ChiWriter format to LaTeX. Is it possible? Idea of criterion whether to use automatic or hand mode for translation. Illustrations.<|reference_end|> | arxiv | @article{bogevolnov2008chi2tex,
title={chi2TeX Semi-automatic translation from chiwriter to LaTeX},
author={Justislav Bogevolnov},
journal={arXiv preprint arXiv:0811.3328},
year={2008},
archivePrefix={arXiv},
eprint={0811.3328},
primaryClass={cs.SE cs.CV}
} | bogevolnov2008chi2tex |
arxiv-5518 | 0811.3373 | Belief functions on lattices | <|reference_start|>Belief functions on lattices: We extend the notion of belief function to the case where the underlying structure is no more the Boolean lattice of subsets of some universal set, but any lattice, which we will endow with a minimal set of properties according to our needs. We show that all classical constructions and definitions (e.g., mass allocation, commonality function, plausibility functions, necessity measures with nested focal elements, possibility distributions, Dempster rule of combination, decomposition w.r.t. simple support functions, etc.) remain valid in this general setting. Moreover, our proof of decomposition of belief functions into simple support functions is much simpler and general than the original one by Shafer.<|reference_end|> | arxiv | @article{grabisch2008belief,
title={Belief functions on lattices},
author={Michel Grabisch (CERMSEM, Ces)},
journal={International Journal of Intelligent Systems (2009) 1-20},
year={2008},
archivePrefix={arXiv},
eprint={0811.3373},
primaryClass={cs.DM}
} | grabisch2008belief |
arxiv-5519 | 0811.3387 | Broadcasting in Prefix Space: P2P Data Dissemination with Predictable Performance | <|reference_start|>Broadcasting in Prefix Space: P2P Data Dissemination with Predictable Performance: A broadcast mode may augment peer-to-peer overlay networks with an efficient, scalable data replication function, but may also give rise to a virtual link layer in VPN-type solutions. We introduce a simple broadcasting mechanism that operates in the prefix space of distributed hash tables without signaling. This paper concentrates on the performance analysis of the prefix flooding scheme. Starting from simple models of recursive $k$-ary trees, we analytically derive distributions of hop counts and the replication load. Extensive simulation results are presented further on, based on an implementation within the OverSim framework. Comparisons are drawn to Scribe, taken as a general reference model for group communication according to the shared, rendezvous-point-centered distribution paradigm. The prefix flooding scheme thereby confirmed its widely predictable performance and consistently outperformed Scribe in all metrics. Reverse path selection in overlays is identified as a major cause of performance degradation.<|reference_end|> | arxiv | @article{wählisch2008broadcasting,
title={Broadcasting in Prefix Space: P2P Data Dissemination with Predictable
Performance},
author={Matthias W\"ahlisch and Thomas C. Schmidt and Georg Wittenburg},
journal={Matthias W\"ahlisch, Thomas C. Schmidt, and Georg Wittenburg,
"Broadcasting in Prefix Space: P2P Data Dissemination with Predictable
Performance," in Proc. of the Fourth ICIW: IEEE ComSoc Press, 2009, pp. 74-83},
year={2008},
doi={10.1109/ICIW.2009.19},
archivePrefix={arXiv},
eprint={0811.3387},
primaryClass={cs.NI cs.PF}
} | wählisch2008broadcasting |
arxiv-5520 | 0811.3400 | A Cloning Pushout Approach to Term-Graph Transformation | <|reference_start|>A Cloning Pushout Approach to Term-Graph Transformation: We address the problem of cyclic termgraph rewriting. We propose a new framework where rewrite rules are tuples of the form $(L,R,\tau,\sigma)$ such that $L$ and $R$ are termgraphs representing the left-hand and the right-hand sides of the rule, $\tau$ is a mapping from the nodes of $L$ to those of $R$ and $\sigma$ is a partial function from nodes of $R$ to nodes of $L$. $\tau$ describes how incident edges of the nodes in $L$ are connected in $R$. $\tau$ is not required to be a graph morphism as in classical algebraic approaches of graph transformation. The role of $\sigma$ is to indicate the parts of $L$ to be cloned (copied). Furthermore, we introduce a new notion of \emph{cloning pushout} and define rewrite steps as cloning pushouts in a given category. Among the features of the proposed rewrite systems, we quote the ability to perform local and global redirection of pointers, addition and deletion of nodes as well as cloning and collapsing substructures.<|reference_end|> | arxiv | @article{duval2008a,
title={A Cloning Pushout Approach to Term-Graph Transformation},
author={Dominique Duval (LMC - IMAG, LJK, NMST) and Rachid Echahed (LIG,
Leibniz - IMAG, IMAG) and Fr\'ed\'eric Prost (LIG)},
journal={arXiv preprint arXiv:0811.3400},
year={2008},
archivePrefix={arXiv},
eprint={0811.3400},
primaryClass={cs.LO}
} | duval2008a |
arxiv-5521 | 0811.3448 | Binar Sort: A Linear Generalized Sorting Algorithm | <|reference_start|>Binar Sort: A Linear Generalized Sorting Algorithm: Sorting is a common and ubiquitous activity for computers. It is not surprising that there exist a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial permutation into a sorted permutation. Linear O(N) sorting algorithms exist, but use a priori knowledge of the data to use a specific property of the data and thus have greater performance. In contrast, the linearithmic sorting algorithms are generalized by using a universal property of data-comparison, but have a linearithmic performance lower bound. The trade-off in sorting algorithms is generality for performance by the chosen property used to sort the data elements. A general-purpose, linear sorting algorithm in the context of the trade-off of performance for generality at first consideration seems implausible. But, there is an implicit assumption that only the ordering property is universal. But, as will be discussed and examined, it is not the only universal property for data elements. The binar sort is a general-purpose sorting algorithm that uses this other universal property to sort linearly.<|reference_end|> | arxiv | @article{gilreath2008binar,
title={Binar Sort: A Linear Generalized Sorting Algorithm},
author={William F. Gilreath},
journal={arXiv preprint arXiv:0811.3448},
year={2008},
archivePrefix={arXiv},
eprint={0811.3448},
primaryClass={cs.DS}
} | gilreath2008binar |
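The abstract above leaves the "other universal property" unnamed, but it is the binary encoding of the data elements. As a hedged illustration (a sketch of sorting by encoding, not Gilreath's actual algorithm, whose details are in the paper), a most-significant-bit binary radix sort on fixed-width non-negative keys runs in O(N*w) time for w-bit keys, i.e. linear in N for fixed width:

```python
def binar_sort(keys, bits=32):
    """Illustrative bit-by-bit sort of non-negative fixed-width integers:
    an MSD binary radix sort, O(N * w) for w-bit keys, hence linear in N
    for a fixed key width. A sketch, not the paper's exact algorithm."""
    def by_bit(arr, bit):
        if bit < 0 or len(arr) <= 1:
            return arr
        # Partition by the current bit, then recurse on the next lower bit.
        zeros = [k for k in arr if not (k >> bit) & 1]
        ones = [k for k in arr if (k >> bit) & 1]
        return by_bit(zeros, bit - 1) + by_bit(ones, bit - 1)
    return by_bit(list(keys), bits - 1)
```

The partition inspects each key's bits rather than comparing keys with each other, which is the sense in which the encoding, not ordering, drives the sort.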
arxiv-5522 | 0811.3449 | Binar Shuffle Algorithm: Shuffling Bit by Bit | <|reference_start|>Binar Shuffle Algorithm: Shuffling Bit by Bit: Randomly organized data is frequently needed to avoid anomalous operation of other algorithms and computational processes. By analogy, a deck of cards is ordered within the pack, but before a game of poker or solitaire the deck is shuffled to create a random permutation. Shuffling is the process of arranging data elements into a random permutation; it is used to assure that an aggregate of data elements in a sequence S is randomly arranged while avoiding an ordered or partially ordered permutation. For a sequence S of N data elements, there are N! possible permutations. Among this large number of possible permutations, two correspond to a sorted placement of the data elements: the ascending and the descending sorted permutation. Shuffling must avoid inadvertently creating either one. Shuffling is frequently coupled to another algorithmic function, pseudo-random number generation, so the efficiency and quality of the shuffle depend directly on the random number generation algorithm used. A more effective and efficient method of shuffling is to use parameterization to configure the shuffle, and to shuffle into sub-arrays by utilizing the encoding of the data elements. The binar shuffle algorithm uses the encoding of the data elements and parameterization to avoid any direct coupling to a random number generation algorithm, while remaining a linear O(N) shuffle algorithm.<|reference_end|> | arxiv | @article{gilreath2008binar,
title={Binar Shuffle Algorithm: Shuffling Bit by Bit},
author={William F. Gilreath},
journal={arXiv preprint arXiv:0811.3449},
year={2008},
archivePrefix={arXiv},
eprint={0811.3449},
primaryClass={cs.DS}
} | gilreath2008binar |
arxiv-5523 | 0811.3475 | Robust Network Coding in the Presence of Untrusted Nodes | <|reference_start|>Robust Network Coding in the Presence of Untrusted Nodes: While network coding can be an efficient means of information dissemination in networks, it is highly susceptible to "pollution attacks," as the injection of even a single erroneous packet has the potential to corrupt each and every packet received by a given destination. Even when suitable error-control coding is applied, an adversary can, in many interesting practical situations, overwhelm the error-correcting capability of the code. To limit the power of potential adversaries, a broadcast transformation is introduced, in which nodes are limited to just a single (broadcast) transmission per generation. Under this broadcast transformation, the multicast capacity of a network is changed (in general reduced) from the number of edge-disjoint paths between source and sink to the number of internally-disjoint paths. Exploiting this fact, we propose a family of networks whose capacity is largely unaffected by a broadcast transformation. This results in a significant achievable transmission rate for such networks, even in the presence of adversaries.<|reference_end|> | arxiv | @article{wang2008robust,
title={Robust Network Coding in the Presence of Untrusted Nodes},
author={Da Wang, Danilo Silva, Frank R. Kschischang},
journal={IEEE Transactions on Information Theory, vol. 56, no. 9, pp.
4532-4538, Sep. 2010},
year={2008},
doi={10.1109/TIT.2010.2054650},
archivePrefix={arXiv},
eprint={0811.3475},
primaryClass={cs.IT cs.NI math.IT}
} | wang2008robust |
arxiv-5524 | 0811.3476 | Error correcting code using tree-like multilayer perceptron | <|reference_start|>Error correcting code using tree-like multilayer perceptron: An error correcting code using a tree-like multilayer perceptron is proposed. An original message $\mathbf{s}^0$ is encoded into a codeword $\mathbf{y}_0$ using a tree-like committee machine (committee tree) or a tree-like parity machine (parity tree). Based on these architectures, several schemes featuring monotonic or non-monotonic units are introduced. The codeword $\mathbf{y}_0$ is then transmitted via a Binary Asymmetric Channel (BAC) where it is corrupted by noise. The analytical performance of these schemes is investigated using the replica method of statistical mechanics. Under some specific conditions, some of the proposed schemes are shown to saturate the Shannon bound at the infinite codeword length limit. The influence of the monotonicity of the units on the performance is also discussed.<|reference_end|> | arxiv | @article{cousseau2008error,
title={Error correcting code using tree-like multilayer perceptron},
author={Florent Cousseau, Kazushi Mimura, Masato Okada},
journal={Phys. Rev. E, 81, 021104 (2010)},
year={2008},
doi={10.1103/PhysRevE.81.021104},
archivePrefix={arXiv},
eprint={0811.3476},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT}
} | cousseau2008error |
arxiv-5525 | 0811.3479 | Counting number of factorizations of a natural number | <|reference_start|>Counting number of factorizations of a natural number: In this note we describe a new method of counting the number of unordered factorizations of a natural number by means of a generating function and a recurrence relation arising from it, which improves an earlier result in this direction.<|reference_end|> | arxiv | @article{ghosh2008counting,
title={Counting number of factorizations of a natural number},
author={Shamik Ghosh},
journal={arXiv preprint arXiv:0811.3479},
year={2008},
archivePrefix={arXiv},
eprint={0811.3479},
primaryClass={cs.DM math.NT}
} | ghosh2008counting |
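The abstract does not reproduce the note's generating function or recurrence, but the quantity it counts has a standard recurrence: choose factors in non-increasing order (largest first), so that each multiset of factors is counted exactly once. A hedged sketch of that standard recurrence (the paper's own relation may differ in form):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_factorizations(n, max_factor=None):
    """Number of ways to write n as an unordered product of integers >= 2,
    counting n itself as the trivial one-factor product. Factors are chosen
    in non-increasing order so each multiset is counted once."""
    if max_factor is None:
        max_factor = n
    if n == 1:
        return 1  # empty product: all factors already chosen
    return sum(count_factorizations(n // d, d)
               for d in range(2, max_factor + 1) if n % d == 0)
```

For example, 12 has the four factorizations 12, 2*6, 3*4, and 2*2*3.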
arxiv-5526 | 0811.3490 | Faster Approximate String Matching for Short Patterns | <|reference_start|>Faster Approximate String Matching for Short Patterns: We study the classical approximate string matching problem, that is, given strings $P$ and $Q$ and an error threshold $k$, find all ending positions of substrings of $Q$ whose edit distance to $P$ is at most $k$. Let $P$ and $Q$ have lengths $m$ and $n$, respectively. On a standard unit-cost word RAM with word size $w \geq \log n$ we present an algorithm using time $$ O(nk \cdot \min(\frac{\log^2 m}{\log n},\frac{\log^2 m\log w}{w}) + n) $$ When $P$ is short, namely, $m = 2^{o(\sqrt{\log n})}$ or $m = 2^{o(\sqrt{w/\log w})}$ this improves the previously best known time bounds for the problem. The result is achieved using a novel implementation of the Landau-Vishkin algorithm based on tabulation and word-level parallelism.<|reference_end|> | arxiv | @article{bille2008faster,
title={Faster Approximate String Matching for Short Patterns},
author={Philip Bille},
journal={arXiv preprint arXiv:0811.3490},
year={2008},
archivePrefix={arXiv},
eprint={0811.3490},
primaryClass={cs.DS}
} | bille2008faster |
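For reference, the problem stated above (all ending positions of substrings of Q within edit distance k of P) is specified by the textbook O(mn) dynamic program due to Sellers, which Landau-Vishkin and this paper accelerate. A baseline sketch of that dynamic program, not the paper's faster algorithm:

```python
def approx_match_ends(P, Q, k):
    """All ending positions j (1-based) in Q where some substring of Q ending
    at j has edit distance at most k to P. Textbook O(mn) dynamic program."""
    m = len(P)
    D = list(range(m + 1))  # D[i]: min distance of P[:i] to a suffix of Q[:j]
    ends = []
    for j, c in enumerate(Q, start=1):
        prev_diag = D[0]
        D[0] = 0  # a match may start at any position of Q
        for i in range(1, m + 1):
            cur = D[i]
            D[i] = min(D[i] + 1,                     # c in Q left unmatched
                       D[i - 1] + 1,                 # P[i-1] left unmatched
                       prev_diag + (P[i - 1] != c))  # match or substitution
            prev_diag = cur
        if D[m] <= k:
            ends.append(j)
    return ends
```

Keeping only one column of the distance table gives O(m) space; the paper's contribution is to cut the nk work term for short patterns using tabulation and word-level parallelism.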
arxiv-5527 | 0811.3492 | Dynamic System Adaptation by Constraint Orchestration | <|reference_start|>Dynamic System Adaptation by Constraint Orchestration: For Paradigm models, evolution is just-in-time specified coordination conducted by a special reusable component McPal. Evolution can be treated consistently and on-the-fly through Paradigm's constraint orchestration, also for originally unforeseen evolution. UML-like diagrams visually supplement such migration, as is illustrated for the case of a critical section solution evolving into a pipeline architecture.<|reference_end|> | arxiv | @article{groenewegen2008dynamic,
title={Dynamic System Adaptation by Constraint Orchestration},
author={L.P.J. Groenewegen and E.P. de Vink},
journal={arXiv preprint arXiv:0811.3492},
year={2008},
number={CSR 08/29},
archivePrefix={arXiv},
eprint={0811.3492},
primaryClass={cs.SE}
} | groenewegen2008dynamic |
arxiv-5528 | 0811.3521 | Craig Interpolation for Quantifier-Free Presburger Arithmetic | <|reference_start|>Craig Interpolation for Quantifier-Free Presburger Arithmetic: Craig interpolation has become a versatile algorithmic tool for improving software verification. Interpolants can, for instance, accelerate the convergence of fixpoint computations for infinite-state systems. They also help improve the refinement of iteratively computed lazy abstractions. Efficient interpolation procedures have been presented only for a few theories. In this paper, we introduce a complete interpolation method for the full range of quantifier-free Presburger arithmetic formulas. We propose a novel convex variable projection for integer inequalities and a technique to combine them with equalities. The derivation of the interpolant has complexity low-degree polynomial in the size of the refutation proof and is typically fast in practice.<|reference_end|> | arxiv | @article{brillout2008craig,
title={Craig Interpolation for Quantifier-Free Presburger Arithmetic},
author={Angelo Brillout, Daniel Kroening, and Thomas Wahl},
journal={arXiv preprint arXiv:0811.3521},
year={2008},
archivePrefix={arXiv},
eprint={0811.3521},
primaryClass={cs.LO cs.SC}
} | brillout2008craig |
arxiv-5529 | 0811.3536 | Analyse de la rigidit\'e des machines outils 3 axes d'architecture parall\`ele hyperstatique | <|reference_start|>Analyse de la rigidit\'e des machines outils 3 axes d'architecture parall\`ele hyperstatique: The paper presents a new stiffness modelling method for overconstrained parallel manipulators, which is applied to 3-d.o.f. translational mechanisms. It is based on a multidimensional lumped-parameter model that replaces the link flexibility by localized 6-d.o.f. virtual springs. In contrast to other works, the method includes a FEA-based link stiffness evaluation and employs a new solution strategy of the kinetostatic equations, which allows computing the stiffness matrix for the overconstrained architectures and for the singular manipulator postures. The advantages of the developed technique are confirmed by application examples, which deal with comparative stiffness analysis of two translational parallel manipulators.<|reference_end|> | arxiv | @article{pashkevich2008analyse,
title={Analyse de la rigidit\'e des machines outils 3 axes d'architecture
parall\`ele hyperstatique},
author={Anatoly Pashkevich (IRCCyN), Damien Chablat (IRCCyN), Philippe Wenger
(IRCCyN)},
journal={5eme Assises Machines et Usinage \`a grande vitesse, Nantes :
France (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0811.3536},
primaryClass={cs.RO}
} | pashkevich2008analyse |
arxiv-5530 | 0811.3585 | The Capacity of Ad hoc Networks under Random Packet Losses | <|reference_start|>The Capacity of Ad hoc Networks under Random Packet Losses: We consider the problem of determining asymptotic bounds on the capacity of a random ad hoc network. Previous approaches assumed a link layer model in which if a transmitter-receiver pair can communicate with each other, i.e., the Signal to Interference and Noise Ratio (SINR) is above a certain threshold, then every transmitted packet is received error-free by the receiver. Using this model, the per-node capacity of the network was shown to be $\Theta(\frac{1}{\sqrt{n\log{n}}})$. In reality, for any finite link SINR, there is a non-zero probability of erroneous reception of the packet. We show that in a large network, as the packet travels an asymptotically large number of hops from source to destination, the cumulative impact of packet losses over intermediate links results in a per-node throughput of only $O(\frac{1}{n})$. We then propose a new scheduling scheme to counter this effect. The proposed scheme provides tight guarantees on end-to-end packet loss probability, and improves the per-node throughput to $\Omega(\frac{1}{\sqrt{n} (\log{n})^{\frac{\alpha+2}{2(\alpha-2)}}})$ where $\alpha>2$ is the path loss exponent.<|reference_end|> | arxiv | @article{mhatre2008the,
title={The Capacity of Ad hoc Networks under Random Packet Losses},
author={Vivek P. Mhatre, Catherine P. Rosenberg, Ravi R. Mazumdar},
journal={arXiv preprint arXiv:0811.3585},
year={2008},
archivePrefix={arXiv},
eprint={0811.3585},
primaryClass={cs.IT cs.NI math.IT}
} | mhatre2008the |
arxiv-5531 | 0811.3602 | Low-Memory Adaptive Prefix Coding | <|reference_start|>Low-Memory Adaptive Prefix Coding: In this paper we study the adaptive prefix coding problem in cases where the size of the input alphabet is large. We present an online prefix coding algorithm that uses $O(\sigma^{1/\lambda + \epsilon})$ bits of space for any constants $\epsilon>0$, $\lambda>1$, and encodes the string of symbols in $O(\log \log \sigma)$ time per symbol \emph{in the worst case}, where $\sigma$ is the size of the alphabet. The upper bound on the encoding length is $\lambda n H(s) + (\lambda \ln 2 + 2 + \epsilon) n + O(\sigma^{1/\lambda} \log^2 \sigma)$ bits.<|reference_end|> | arxiv | @article{gagie2008low-memory,
title={Low-Memory Adaptive Prefix Coding},
author={Travis Gagie, Marek Karpinski, Yakov Nekrich},
journal={arXiv preprint arXiv:0811.3602},
year={2008},
archivePrefix={arXiv},
eprint={0811.3602},
primaryClass={cs.DS}
} | gagie2008low-memory |
arxiv-5532 | 0811.3617 | Distributed Scalar Quantization for Computing: High-Resolution Analysis and Extensions | <|reference_start|>Distributed Scalar Quantization for Computing: High-Resolution Analysis and Extensions: Communication of quantized information is frequently followed by a computation. We consider situations of \emph{distributed functional scalar quantization}: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and function, companding scalar quantizer designs are developed to minimize mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow. Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily-large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting.<|reference_end|> | arxiv | @article{misra2008distributed,
title={Distributed Scalar Quantization for Computing: High-Resolution Analysis
and Extensions},
author={Vinith Misra, Vivek K Goyal, Lav R. Varshney},
journal={IEEE Trans. on Information Theory, vol. 57, no. 8, pp. 5298-5325,
August 2011},
year={2008},
doi={10.1109/TIT.2011.2158882},
archivePrefix={arXiv},
eprint={0811.3617},
primaryClass={cs.IT math.IT}
} | misra2008distributed |
arxiv-5533 | 0811.3620 | Solving package dependencies: from EDOS to Mancoosi | <|reference_start|>Solving package dependencies: from EDOS to Mancoosi: Mancoosi (Managing the Complexity of the Open Source Infrastructure) is an ongoing research project funded by the European Union for addressing some of the challenges related to the "upgrade problem" of interdependent software components of which Debian packages are prototypical examples. Mancoosi is the natural continuation of the EDOS project which has already contributed tools for distribution-wide quality assurance in Debian and other GNU/Linux distributions. The consortium behind the project consists of several European public and private research institutions as well as some commercial GNU/Linux distributions from Europe and South America. Debian is represented by a small group of Debian Developers who are working in the ranks of the involved universities to drive and integrate back achievements into Debian. This paper presents relevant results from EDOS in dependency management and gives an overview of the Mancoosi project and its objectives, with a particular focus on the prospective benefits for Debian.<|reference_end|> | arxiv | @article{treinen2008solving,
title={Solving package dependencies: from EDOS to Mancoosi},
author={Ralf Treinen (PPS), Stefano Zacchiroli (PPS)},
journal={DebConf8, Argentine (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0811.3620},
primaryClass={cs.SE}
} | treinen2008solving |
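The "upgrade problem" the abstract mentions reduces, in its very simplest form, to package installability over a dependency graph. The toy checker below handles conjunctive dependencies only; real EDOS/Mancoosi instances also carry versions, disjunctive dependencies, and conflicts, which make the problem NP-complete. It is an illustrative sketch, not project code:

```python
def installable(pkg, universe, _assumed=None):
    """Toy installability check over a universe {name: [dependency names]}.
    A package is installable if all of its dependencies are recursively
    installable; cycles are allowed (each package is assumed at most once)."""
    if _assumed is None:
        _assumed = set()
    if pkg in _assumed:      # already assumed installed; handles cycles
        return True
    if pkg not in universe:  # dependency on a package missing from the universe
        return False
    _assumed.add(pkg)
    return all(installable(dep, universe, _assumed) for dep in universe[pkg])
```

With disjunctions and conflicts added, such checks require SAT-style search, which is exactly why dedicated solvers are the object of the Mancoosi competition.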
arxiv-5534 | 0811.3621 | Description of the CUDF Format | <|reference_start|>Description of the CUDF Format: This document contains several related specifications, together they describe the document formats related to the solver competition which will be organized by Mancoosi. In particular, this document describes: - DUDF (Distribution Upgradeability Description Format), the document format to be used to submit upgrade problem instances from user machines to a (distribution-specific) database of upgrade problems; - CUDF (Common Upgradeability Description Format), the document format used to encode upgrade problems, abstracting over distribution-specific details. Solvers taking part in the competition will be fed with input in CUDF format.<|reference_end|> | arxiv | @article{treinen2008description,
title={Description of the CUDF Format},
author={Ralf Treinen (PPS), Stefano Zacchiroli (PPS)},
journal={arXiv preprint arXiv:0811.3621},
year={2008},
archivePrefix={arXiv},
eprint={0811.3621},
primaryClass={cs.SE}
} | treinen2008description |
arxiv-5535 | 0811.3648 | Revisiting Norm Estimation in Data Streams | <|reference_start|>Revisiting Norm Estimation in Data Streams: The problem of estimating the pth moment F_p (p nonnegative and real) in data streams is as follows. There is a vector x which starts at 0, and many updates of the form x_i <-- x_i + v come sequentially in a stream. The algorithm also receives an error parameter 0 < eps < 1. The goal is then to output an approximation with relative error at most eps to F_p = ||x||_p^p. Previously, it was known that polylogarithmic space (in the vector length n) was achievable if and only if p <= 2. We make several new contributions in this regime, including: (*) An optimal space algorithm for 0 < p < 2, which, unlike previous algorithms which had optimal dependence on 1/eps but sub-optimal dependence on n, does not rely on a generic pseudorandom generator. (*) A near-optimal space algorithm for p = 0 with optimal update and query time. (*) A near-optimal space algorithm for the "distinct elements" problem (p = 0 and all updates have v = 1) with optimal update and query time. (*) Improved L_2 --> L_2 dimensionality reduction in a stream. (*) New 1-pass lower bounds to show optimality and near-optimality of our algorithms, as well as of some previous algorithms (the "AMS sketch" for p = 2, and the L_1-difference algorithm of Feigenbaum et al.). As corollaries of our work, we also obtain a few separations in the complexity of moment estimation problems: F_0 in 1 pass vs. 2 passes, p = 0 vs. p > 0, and F_0 with strictly positive updates vs. arbitrary updates.<|reference_end|> | arxiv | @article{kane2008revisiting,
title={Revisiting Norm Estimation in Data Streams},
author={Daniel M. Kane, Jelani Nelson, David P. Woodruff},
journal={arXiv preprint arXiv:0811.3648},
year={2008},
archivePrefix={arXiv},
eprint={0811.3648},
primaryClass={cs.DS cs.CC}
} | kane2008revisiting |
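As context for the "AMS sketch" named in the abstract's lower-bound results, here is a minimal sketch of the classic AMS F_2 estimator: each counter keeps a random +-1 signed sum of the coordinates, and the mean of the squared counters is an unbiased estimate of ||x||_2^2. The counter count and seed below are illustrative choices, not parameters from the paper:

```python
import random

def ams_f2_estimate(x, counters=2000, seed=0):
    """Classic AMS sketch for F_2 = ||x||_2^2. Each counter holds a random
    +-1 signed sum of the coordinates of x; E[counter^2] = F_2, so the mean
    of the squared counters estimates F_2. Illustrative, non-streaming form."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(counters):
        c = sum(rng.choice((-1.0, 1.0)) * xi for xi in x)
        total += c * c
    return total / counters
```

In a streaming setting each update x_i <- x_i + v adds v times the stored sign of coordinate i to every counter, so the sketch supports the turnstile updates described above.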
arxiv-5536 | 0811.3691 | Temporal Support of Regular Expressions in Sequential Pattern Mining | <|reference_start|>Temporal Support of Regular Expressions in Sequential Pattern Mining: Classic algorithms for sequential pattern discovery, return all frequent sequences present in a database, but, in general, only a few ones are interesting for the user. Languages based on regular expressions (RE) have been proposed to restrict frequent sequences to the ones that satisfy user-specified constraints. Although the support of a sequence is computed as the number of data-sequences satisfying a pattern with respect to the total number of data-sequences in the database, once regular expressions come into play, new approaches to the concept of support are needed. For example, users may be interested in computing the support of the RE as a whole, in addition to the one of a particular pattern. Also, when the items are frequently updated, the traditional way of counting support in sequential pattern mining may lead to incorrect (or, at least incomplete), conclusions. The problem gets more involved if we are interested in categorical sequential patterns. In light of the above, in this paper we propose to revise the classic notion of support in sequential pattern mining, introducing the concept of temporal support of regular expressions, intuitively defined as the number of sequences satisfying a target pattern, out of the total number of sequences that could have possibly matched such pattern, where the pattern is defined as a RE over complex items (i.e., not only item identifiers, but also attributes and functions).<|reference_end|> | arxiv | @article{gomez2008temporal,
title={Temporal Support of Regular Expressions in Sequential Pattern Mining},
author={Leticia Gomez, Bart Kuijpers, Alejandro Vaisman},
journal={arXiv preprint arXiv:0811.3691},
year={2008},
archivePrefix={arXiv},
eprint={0811.3691},
primaryClass={cs.DB}
} | gomez2008temporal |
arxiv-5537 | 0811.3704 | Highly Undecidable Problems about Recognizability by Tiling Systems | <|reference_start|>Highly Undecidable Problems about Recognizability by Tiling Systems: Altenbernd, Thomas and W\"ohrle have considered acceptance of languages of infinite two-dimensional words (infinite pictures) by finite tiling systems, with usual acceptance conditions, such as the B\"uchi and Muller ones [1]. It was proved in [9] that it is undecidable whether a B\"uchi-recognizable language of infinite pictures is E-recognizable (respectively, A-recognizable). We show here that these two decision problems are actually $\Pi_2^1$-complete, hence located at the second level of the analytical hierarchy, and "highly undecidable". We give the exact degree of numerous other undecidable problems for B\"uchi-recognizable languages of infinite pictures. In particular, the non-emptiness and the infiniteness problems are $\Sigma^1_1$-complete, and the universality problem, the inclusion problem, the equivalence problem, the determinizability problem, the complementability problem, are all $\Pi^1_2$-complete. It is also $\Pi^1_2$-complete to determine whether a given B\"uchi recognizable language of infinite pictures can be accepted row by row using an automaton model over ordinal words of length $\omega^2$.<|reference_end|> | arxiv | @article{finkel2008highly,
title={Highly Undecidable Problems about Recognizability by Tiling Systems},
author={Olivier Finkel (LIP, Elm)},
journal={Fundamenta Informaticae 2, 91 (2009) 305-323},
year={2008},
archivePrefix={arXiv},
eprint={0811.3704},
primaryClass={cs.CC cs.LO math.LO}
} | finkel2008highly |
arxiv-5538 | 0811.3712 | Performance Modeling and Evaluation for Information-Driven Networks | <|reference_start|>Performance Modeling and Evaluation for Information-Driven Networks: Information-driven networks include a large category of networking systems, where network nodes are aware of information delivered and thus can not only forward data packets but may also perform information processing. In many situations, the quality of service (QoS) in information-driven networks is provisioned with the redundancy in information. Traditional performance models generally adopt evaluation measures suitable for packet-oriented service guarantee, such as packet delay, throughput, and packet loss rate. These performance measures, however, do not align well with the actual need of information-driven networks. New performance measures and models for information-driven networks, despite their importance, have been mainly blank, largely because information processing is clearly application dependent and cannot be easily captured within a generic framework. To fill the vacancy, we present a new performance evaluation framework particularly tailored for information-driven networks, based on the recent development of stochastic network calculus. We analyze the QoS with respect to information delivery and study the scheduling problem with the new performance metrics. Our analytical framework can be used to calculate the network capacity in information delivery and in the meantime to help transmission scheduling for a large body of systems where QoS is stochastically guaranteed with the redundancy in information.<|reference_end|> | arxiv | @article{wu2008performance,
title={Performance Modeling and Evaluation for Information-Driven Networks},
author={Kui Wu, Yuming Jiang, Guoqiang Hu},
journal={arXiv preprint arXiv:0811.3712},
year={2008},
archivePrefix={arXiv},
eprint={0811.3712},
primaryClass={cs.PF cs.NI}
} | wu2008performance |
arxiv-5539 | 0811.3723 | Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-Way Cut Problem | <|reference_start|>Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-Way Cut Problem: For an edge-weighted connected undirected graph, the minimum $k$-way cut problem is to find a subset of edges of minimum total weight whose removal separates the graph into $k$ connected components. The problem is NP-hard when $k$ is part of the input and W[1]-hard when $k$ is taken as a parameter. A simple algorithm for approximating a minimum $k$-way cut is to iteratively increase the number of components of the graph by $h-1$, where $2 \le h \le k$, until the graph has $k$ components. The approximation ratio of this algorithm is known for $h \le 3$ but is open for $h \ge 4$. In this paper, we consider a general algorithm that iteratively increases the number of components of the graph by $h_i-1$, where $h_1 \le h_2 \le ... \le h_q$ and $\sum_{i=1}^q (h_i-1) = k-1$. We prove that the approximation ratio of this general algorithm is $2 - (\sum_{i=1}^q {h_i \choose 2})/{k \choose 2}$, which is tight. Our result implies that the approximation ratio of the simple algorithm is $2-h/k + O(h^2/k^2)$ in general and $2-h/k$ if $k-1$ is a multiple of $h-1$.<|reference_end|> | arxiv | @article{xiao2008tight,
title={Tight Approximation Ratio of a General Greedy Splitting Algorithm for
the Minimum k-Way Cut Problem},
author={Mingyu Xiao, Leizhen Cai and Andrew C. Yao},
journal={arXiv preprint arXiv:0811.3723},
year={2008},
archivePrefix={arXiv},
eprint={0811.3723},
primaryClass={cs.DS cs.DM}
} | xiao2008tight |
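The tight ratio stated in the abstract, $2 - (\sum_{i=1}^q {h_i \choose 2})/{k \choose 2}$, can be transcribed directly. A small helper to evaluate it and check the special cases quoted above (this just evaluates the paper's formula; it does not run the splitting algorithm):

```python
from math import comb

def greedy_ratio(h_list, k):
    """Tight approximation ratio 2 - (sum of C(h_i, 2)) / C(k, 2) from the
    paper, for split sizes h_1 <= ... <= h_q with sum (h_i - 1) = k - 1."""
    assert sum(h - 1 for h in h_list) == k - 1, "splits must yield k components"
    return 2 - sum(comb(h, 2) for h in h_list) / comb(k, 2)
```

For instance, splitting by h = 2 at every step gives the familiar 2 - 2/k bound, while a single (k-1)-fold split gives ratio 1.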
arxiv-5540 | 0811.3760 | Communication Efficiency in Self-stabilizing Silent Protocols | <|reference_start|>Communication Efficiency in Self-stabilizing Silent Protocols: Self-stabilization is a general paradigm to provide forward recovery capabilities to distributed systems and networks. Intuitively, a protocol is self-stabilizing if it is able to recover without external intervention from any catastrophic transient failure. In this paper, our focus is to lower the communication complexity of self-stabilizing protocols \emph{below} the need of checking every neighbor forever. In more details, the contribution of the paper is threefold: (i) We provide new complexity measures for communication efficiency of self-stabilizing protocols, especially in the stabilized phase or when there are no faults, (ii) On the negative side, we show that for non-trivial problems such as coloring, maximal matching, and maximal independent set, it is impossible to get (deterministic or probabilistic) self-stabilizing solutions where every participant communicates with less than every neighbor in the stabilized phase, and (iii) On the positive side, we present protocols for coloring, maximal matching, and maximal independent set such that a fraction of the participants communicates with exactly one neighbor in the stabilized phase.<|reference_end|> | arxiv | @article{devismes2008communication,
title={Communication Efficiency in Self-stabilizing Silent Protocols},
author={Stéphane Devismes, Toshimitsu Masuzawa, Sébastien Tixeuil (LIP6)},
journal={arXiv preprint arXiv:0811.3760},
year={2008},
number={RR-6731},
archivePrefix={arXiv},
eprint={0811.3760},
primaryClass={cs.DS cs.CC cs.DC cs.NI}
} | devismes2008communication |
arxiv-5541 | 0811.3777 | The Relationship between Tsallis Statistics, the Fourier Transform, and Nonlinear Coupling | <|reference_start|>The Relationship between Tsallis Statistics, the Fourier Transform, and Nonlinear Coupling: Tsallis statistics (or q-statistics) in nonextensive statistical mechanics is a one-parameter description of correlated states. In this paper we use a translated entropic index: $1 - q \to q$. The essence of this translation is to improve the mathematical symmetry of the q-algebra and make q directly proportional to the nonlinear coupling. A conjugate transformation is defined $\hat q \equiv \frac{-2q}{2+q}$ which provides a dual mapping between the heavy-tail q-Gaussian distributions, whose translated q parameter is between $-2 < q < 0$, and the compact-support q-Gaussians, between $0 < q < \infty$. This conjugate transformation is used to extend the definition of the q-Fourier transform to the domain of compact support. A conjugate q-Fourier transform is proposed which transforms a q-Gaussian into a conjugate $\hat q$-Gaussian, which has the same exponential decay as the Fourier transform of a power-law function. The nonlinear statistical coupling is defined such that the conjugate pair of q-Gaussians have equal strength but either couple (compact-support) or decouple (heavy-tail) the statistical states. Many of the nonextensive entropy applications can be shown to have physical parameters proportional to the nonlinear statistical coupling.<|reference_end|> | arxiv | @article{nelson2008the,
title={The Relationship between Tsallis Statistics, the Fourier Transform, and
Nonlinear Coupling},
author={Kenric P. Nelson and Sabir Umarov},
journal={arXiv preprint arXiv:0811.3777},
year={2008},
archivePrefix={arXiv},
eprint={0811.3777},
primaryClass={cs.IT math.IT math.PR}
} | nelson2008the |
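The claimed duality of the conjugate transformation $\hat q \equiv \frac{-2q}{2+q}$, namely that it is an involution exchanging the heavy-tail range $-2 < q < 0$ with the compact-support range $q > 0$, can be checked with exact rational arithmetic. A sketch (a numerical sanity check of the abstract's map, nothing more):

```python
from fractions import Fraction

def conjugate_q(q):
    """Conjugate transformation q_hat = -2q / (2 + q) from the abstract,
    in the translated entropic-index convention. Exact rational arithmetic
    so that the involution property can be tested with equality."""
    q = Fraction(q)
    return -2 * q / (2 + q)
```

Applying the map twice returns the original index, confirming that it pairs each heavy-tail q-Gaussian with a unique compact-support partner.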
arxiv-5542 | 0811.3779 | Finding Sparse Cuts Locally Using Evolving Sets | <|reference_start|>Finding Sparse Cuts Locally Using Evolving Sets: A {\em local graph partitioning algorithm} finds a set of vertices with small conductance (i.e. a sparse cut) by adaptively exploring part of a large graph $G$, starting from a specified vertex. For the algorithm to be local, its complexity must be bounded in terms of the size of the set that it outputs, with at most a weak dependence on the number $n$ of vertices in $G$. Previous local partitioning algorithms find sparse cuts using random walks and personalized PageRank. In this paper, we introduce a randomized local partitioning algorithm that finds a sparse cut by simulating the {\em volume-biased evolving set process}, which is a Markov chain on sets of vertices. We prove that for any set of vertices $A$ that has conductance at most $\phi$, for at least half of the starting vertices in $A$ our algorithm will output (with probability at least half), a set of conductance $O(\phi^{1/2} \log^{1/2} n)$. We prove that for a given run of the algorithm, the expected ratio between its computational complexity and the volume of the set that it outputs is $O(\phi^{-1/2} polylog(n))$. In comparison, the best previous local partitioning algorithm, due to Andersen, Chung, and Lang, has the same approximation guarantee, but a larger ratio of $O(\phi^{-1} polylog(n))$ between the complexity and output volume. Using our local partitioning algorithm as a subroutine, we construct a fast algorithm for finding balanced cuts. Given a fixed value of $\phi$, the resulting algorithm has complexity $O((m+n\phi^{-1/2}) polylog(n))$ and returns a cut with conductance $O(\phi^{1/2} \log^{1/2} n)$ and volume at least $v_{\phi}/2$, where $v_{\phi}$ is the largest volume of any set with conductance at most $\phi$.<|reference_end|> | arxiv | @article{andersen2008finding,
title={Finding Sparse Cuts Locally Using Evolving Sets},
author={Reid Andersen and Yuval Peres},
journal={arXiv preprint arXiv:0811.3779},
year={2008},
archivePrefix={arXiv},
eprint={0811.3779},
primaryClass={cs.DS}
} | andersen2008finding |
arxiv-5543 | 0811.3782 | Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability | <|reference_start|>Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability: It is folklore particularly in numerical and computer sciences that, instead of solving some general problem f:A->B, additional structural information about the input x in A (that is any kind of promise that x belongs to a certain subset A' of A) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous linear equation A*x=0 for a given singular real NxN-matrix A is possible when knowing rank(A)=0,1,...,N-1; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a BASIS of eigenvectors of) a given real symmetric NxN-matrix is possible when knowing the number of distinct eigenvalues: an integer between 1 and N (the latter corresponding to the nondegenerate case). And again we show that N-fold (i.e. roughly log N bits of) additional information is indeed necessary in order to render this problem (continuous and) computable; whereas for finding SOME SINGLE eigenvector of A, providing the truncated binary logarithm of the least-dimensional eigenspace of A--i.e. Theta(log N)-fold advice--is sufficient and optimal.<|reference_end|> | arxiv | @article{ziegler2008real,
title={Real Computation with Least Discrete Advice: A Complexity Theory of
Nonuniform Computability},
author={Martin Ziegler},
journal={arXiv preprint arXiv:0811.3782},
year={2008},
archivePrefix={arXiv},
eprint={0811.3782},
primaryClass={cs.CC math.LO}
} | ziegler2008real |
arxiv-5544 | 0811.3816 | Adaptive Fault Masking With Incoherence Scoring | <|reference_start|>Adaptive Fault Masking With Incoherence Scoring: An adaptive voting algorithm for digital media was introduced in this study. Availability was improved by incoherence scoring in the voting mechanism of Multi-Modular Redundancy. Regulation parameters give the algorithm the flexibility of adjusting priorities in the decision process. The proposed adaptive voting algorithm was shown to be more aware of the fault status of redundant modules.<|reference_end|> | arxiv | @article{alagoz2008adaptive,
title={Adaptive Fault Masking With Incoherence Scoring},
author={B. Baykant Alagoz},
journal={OncuBilim Algorithm And systems Lab, Vol.8,No:1,(2008)},
year={2008},
archivePrefix={arXiv},
eprint={0811.3816},
primaryClass={cs.OH}
} | alagoz2008adaptive |
arxiv-5545 | 0811.3828 | Optimal Filtering of Malicious IP Sources | <|reference_start|>Optimal Filtering of Malicious IP Sources: How can we protect the network infrastructure from malicious traffic, such as scanning, malicious code propagation, and distributed denial-of-service (DDoS) attacks? One mechanism for blocking malicious traffic is filtering: access control lists (ACLs) can selectively block traffic based on fields of the IP header. Filters (ACLs) are already available in the routers today but are a scarce resource because they are stored in the expensive ternary content addressable memory (TCAM). In this paper, we develop, for the first time, a framework for studying filter selection as a resource allocation problem. Within this framework, we study five practical cases of source address/prefix filtering, which correspond to different attack scenarios and operator's policies. We show that filter selection optimization leads to novel variations of the multidimensional knapsack problem and we design optimal, yet computationally efficient, algorithms to solve them. We also evaluate our approach using data from Dshield.org and demonstrate that it brings significant benefits in practice. Our set of algorithms is a building block that can be immediately used by operators and manufacturers to block malicious traffic in a cost-efficient way.<|reference_end|> | arxiv | @article{soldo2008optimal,
title={Optimal Filtering of Malicious IP Sources},
author={Fabio Soldo, Athina Markopoulou, Katerina Argyraki},
journal={arXiv preprint arXiv:0811.3828},
year={2008},
archivePrefix={arXiv},
eprint={0811.3828},
primaryClass={cs.NI}
} | soldo2008optimal |
arxiv-5546 | 0811.3859 | On the Complexity of Matroid Isomorphism Problem | <|reference_start|>On the Complexity of Matroid Isomorphism Problem: We study the complexity of testing if two given matroids are isomorphic. The problem is easily seen to be in $\Sigma_2^p$. In the case of linear matroids, which are represented over polynomially growing fields, we note that the problem is unlikely to be $\Sigma_2^p$-complete and is $\mathrm{coNP}$-hard. We show that when the rank of the matroid is bounded by a constant, linear matroid isomorphism and matroid isomorphism are both polynomial time many-one equivalent to graph isomorphism. We give a polynomial time Turing reduction from the graphic matroid isomorphism problem to the graph isomorphism problem. Using this, we are able to show that graphic matroid isomorphism testing for planar graphs can be done in deterministic polynomial time. We then give a polynomial time many-one reduction from the bounded rank matroid isomorphism problem to graphic matroid isomorphism, thus showing that all the above problems are polynomial time equivalent. Further, for linear and graphic matroids, we prove that the automorphism problem is polynomial time equivalent to the corresponding isomorphism problems. In addition, we give a polynomial time membership test algorithm for the automorphism group of a graphic matroid.<|reference_end|> | arxiv | @article{v.2008on,
title={On the Complexity of Matroid Isomorphism Problem},
author={Raghavendra Rao B.V. and Jayalal M.N. Sarma},
journal={arXiv preprint arXiv:0811.3859},
year={2008},
archivePrefix={arXiv},
eprint={0811.3859},
primaryClass={cs.CC}
} | v.2008on |
arxiv-5547 | 0811.3887 | Transmit Diversity v Spatial Multiplexing in Modern MIMO Systems | <|reference_start|>Transmit Diversity v Spatial Multiplexing in Modern MIMO Systems: A contemporary perspective on the tradeoff between transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of most modern wireless systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. In the context of such systems, therefore, there essentially is no decision to be made between transmit antenna diversity and spatial multiplexing in MIMO communication. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled and that suitable performance metrics be adopted; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP Long-Term Evolution system design.<|reference_end|> | arxiv | @article{lozano2008transmit,
title={Transmit Diversity v. Spatial Multiplexing in Modern MIMO Systems},
author={Angel Lozano and Nihar Jindal},
journal={arXiv preprint arXiv:0811.3887},
year={2008},
archivePrefix={arXiv},
eprint={0811.3887},
primaryClass={cs.IT math.IT}
} | lozano2008transmit |
arxiv-5548 | 0811.3958 | Extractors and an efficient variant of Muchnik's theorem | <|reference_start|>Extractors and an efficient variant of Muchnik's theorem: Muchnik's theorem about simple conditional description states that for all words $a$ and $b$ there exists a short program $p$ transforming $a$ to $b$ that has the least possible length and is simple conditional on $b$. This paper presents a new proof of this theorem, based on extractors. Employing the extractor technique, two new versions of Muchnik's theorem for space- and time-bounded Kolmogorov complexity are proven.<|reference_end|> | arxiv | @article{musatov2008extractors,
title={Extractors and an efficient variant of Muchnik's theorem},
author={Daniil Musatov},
journal={arXiv preprint arXiv:0811.3958},
year={2008},
archivePrefix={arXiv},
eprint={0811.3958},
primaryClass={cs.CC}
} | musatov2008extractors |
arxiv-5549 | 0811.3959 | A polytime proof of correctness of the Rabin-Miller algorithm from Fermat's little theorem | <|reference_start|>A polytime proof of correctness of the Rabin-Miller algorithm from Fermat's little theorem: Although a deterministic polytime algorithm for primality testing is now known, the Rabin-Miller randomized test of primality continues to be the most efficient and widely used algorithm. We prove the correctness of the Rabin-Miller algorithm in the theory V1 for polynomial time reasoning, from Fermat's little theorem. This is interesting because the Rabin-Miller algorithm is a polytime randomized algorithm, which runs in the class RP (i.e., the class of polytime Monte-Carlo algorithms), with a sampling space exponential in the length of the binary encoding of the input number. (The class RP contains polytime P.) However, we show how to express the correctness in the language of V1, and we also show that we can prove the formula expressing correctness with polytime reasoning from Fermat's little theorem, which is generally expected to be independent of V1. Our proof is also conceptually very basic in the sense that we use the extended Euclid's algorithm, for computing greatest common divisors, as the main workhorse of the proof. For example, we make do without proving the Chinese Remainder theorem, which is used in the standard proofs.<|reference_end|> | arxiv | @article{herman2008a,
title={A polytime proof of correctness of the Rabin-Miller algorithm from
Fermat's little theorem},
author={Grzegorz Herman and Michael Soltys},
journal={arXiv preprint arXiv:0811.3959},
year={2008},
archivePrefix={arXiv},
eprint={0811.3959},
primaryClass={cs.CC cs.CR}
} | herman2008a |
arxiv-5550 | 0811.3975 | Determinacy and Decidability of Reachability Games with Partial Observation on Both Sides | <|reference_start|>Determinacy and Decidability of Reachability Games with Partial Observation on Both Sides: We prove two determinacy and decidability results about two-players stochastic reachability games with partial observation on both sides and finitely many states, signals and actions.<|reference_end|> | arxiv | @article{bertrand2008determinacy,
title={Determinacy and Decidability of Reachability Games with Partial
Observation on Both Sides},
author={Nathalie Bertrand, Blaise Genest, Hugo Gimbert (LaBRI)},
journal={arXiv preprint arXiv:0811.3975},
year={2008},
archivePrefix={arXiv},
eprint={0811.3975},
primaryClass={cs.GT}
} | bertrand2008determinacy |
arxiv-5551 | 0811.3978 | Optimal Strategies in Perfect-Information Stochastic Games with Tail Winning Conditions | <|reference_start|>Optimal Strategies in Perfect-Information Stochastic Games with Tail Winning Conditions: We prove that optimal strategies exist in every perfect-information stochastic game with finitely many states and actions and a tail winning condition.<|reference_end|> | arxiv | @article{gimbert2008optimal,
title={Optimal Strategies in Perfect-Information Stochastic Games with Tail
Winning Conditions},
author={Hugo Gimbert (LaBRI), Florian Horn (LIAFA)},
journal={arXiv preprint arXiv:0811.3978},
year={2008},
archivePrefix={arXiv},
eprint={0811.3978},
primaryClass={cs.GT}
} | gimbert2008optimal |
arxiv-5552 | 0811.4007 | The Simultaneous Membership Problem for Chordal, Comparability and Permutation graphs | <|reference_start|>The Simultaneous Membership Problem for Chordal, Comparability and Permutation graphs: In this paper we introduce the 'simultaneous membership problem', defined for any graph class C characterized in terms of representations, e.g. any class of intersection graphs. Two graphs G_1 and G_2, sharing some vertices X (and the corresponding induced edges), are said to be 'simultaneous members' of graph class C, if there exist representations R_1 and R_2 of G_1 and G_2 that are "consistent" on X. Equivalently (for the classes C that we consider) there exist edges E' between G_1-X and G_2-X such that G_1 \cup G_2 \cup E' belongs to class C. Simultaneous membership problems have application in any situation where it is desirable to consistently represent two related graphs, for example: interval graphs capturing overlaps of DNA fragments of two similar organisms; or graphs connected in time, where one is an updated version of the other. Simultaneous membership problems are related to simultaneous planar embeddings, graph sandwich problems and probe graph recognition problems. In this paper we give efficient algorithms for the simultaneous membership problem on chordal, comparability and permutation graphs. These results imply that graph sandwich problems for the above classes are tractable for an interesting special case: when the set of optional edges form a complete bipartite graph. Our results complement the recent polynomial time recognition algorithms for probe chordal, comparability, and permutation graphs, where the set of optional edges form a clique.<|reference_end|> | arxiv | @article{jampani2008the,
title={The Simultaneous Membership Problem for Chordal, Comparability and
Permutation graphs},
author={Krishnam Raju Jampani and Anna Lubiw},
journal={arXiv preprint arXiv:0811.4007},
year={2008},
archivePrefix={arXiv},
eprint={0811.4007},
primaryClass={cs.DM cs.DS}
} | jampani2008the |
arxiv-5553 | 0811.4030 | Analytical Framework for Optimizing Weighted Average Download Time in Peer-to-Peer Networks | <|reference_start|>Analytical Framework for Optimizing Weighted Average Download Time in Peer-to-Peer Networks: This paper proposes an analytical framework for peer-to-peer (P2P) networks and introduces schemes for building P2P networks to approach the minimum weighted average download time (WADT). In the considered P2P framework, the server, which has the information of all the download bandwidths and upload bandwidths of the peers, minimizes the weighted average download time by determining the optimal transmission rate from the server to the peers and from the peers to the other peers. This paper first defines the static P2P network, the hierarchical P2P network and the strictly hierarchical P2P network. Any static P2P network can be decomposed into an equivalent network of sub-peers that is strictly hierarchical. This paper shows that convex optimization can minimize the WADT for P2P networks by equivalently minimizing the WADT for strictly hierarchical networks of sub-peers. This paper then gives an upper bound for minimizing WADT by constructing a hierarchical P2P network, and a lower bound by weakening the constraints of the convex problem. Both the upper bound and the lower bound are very tight. This paper also provides several suboptimal solutions for minimizing the WADT for strictly hierarchical networks, in which peer selection algorithms and chunk selection algorithms can be locally designed.<|reference_end|> | arxiv | @article{xie2008analytical,
title={Analytical Framework for Optimizing Weighted Average Download Time in
Peer-to-Peer Networks},
author={Bike Xie, Mihaela van der Schaar and Richard D. Wesel},
journal={arXiv preprint arXiv:0811.4030},
year={2008},
archivePrefix={arXiv},
eprint={0811.4030},
primaryClass={cs.NI}
} | xie2008analytical |
arxiv-5554 | 0811.4033 | Computation of Grobner basis for systematic encoding of generalized quasi-cyclic codes | <|reference_start|>Computation of Grobner basis for systematic encoding of generalized quasi-cyclic codes: Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes thoroughly quasi-cyclic codes, finite geometry (FG) low density parity check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Grobner basis of modules, there has been no algorithm that computes Grobner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Grobner basis for GQC codes from their parity check matrices: echelon canonical form algorithm and transpose algorithm. Both algorithms require a sufficiently small number of finite-field operations, on the order of the third power of the code length. Each algorithm has its own characteristic; the first algorithm is composed of elementary methods, and the second algorithm is based on a novel formula and is faster than the first one for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC codes is composed of linear feedback shift registers whose size is linear in the code length; to encode a binary codeword of length n, it takes less than 2n adders and 2n memory elements. Keywords: automorphism group, Buchberger's algorithm, division algorithm, circulant matrix, finite geometry low density parity check (LDPC) codes.<|reference_end|> | arxiv | @article{van2008computation,
title={Computation of Grobner basis for systematic encoding of generalized
quasi-cyclic codes},
author={Vo Tam Van, Hajime Matsui, and Seiichi Mita},
journal={arXiv preprint arXiv:0811.4033},
year={2008},
archivePrefix={arXiv},
eprint={0811.4033},
primaryClass={cs.IT cs.DM math.AC math.IT}
} | van2008computation |
arxiv-5555 | 0811.4040 | ELASTICITY: Topological Characterization of Robustness in Complex Networks | <|reference_start|>ELASTICITY: Topological Characterization of Robustness in Complex Networks: Just as a herd of animals relies on its robust social structure to survive in the wild, robustness is a crucial characteristic for the survival of a complex network under attack. The capacity to measure robustness in complex networks defines the resolve of a network to maintain functionality in the event of classical component failures and at the onset of cryptic malicious attacks. To date, robustness metrics are deficient, and unfortunately the following dilemmas exist: accurate models necessitate complex analysis while, conversely, simple models lack applicability to our definition of robustness. In this paper, we define robustness and present a novel metric, elasticity: a bridge between accuracy and complexity, and a link in the chain of network robustness. Additionally, we explore the performance of elasticity on Internet topologies and online social networks, and articulate the results.<|reference_end|> | arxiv | @article{sydney2008elasticity:,
title={ELASTICITY: Topological Characterization of Robustness in Complex
Networks},
author={Ali Sydney, Caterina Scoglio, Phillip Schumm, Robert Kooij},
journal={arXiv preprint arXiv:0811.4040},
year={2008},
archivePrefix={arXiv},
eprint={0811.4040},
primaryClass={cs.NI physics.data-an}
} | sydney2008elasticity: |
arxiv-5556 | 0811.4061 | Benchmarking the solar dynamo with Maxima | <|reference_start|>Benchmarking the solar dynamo with Maxima: Recently, Jouve et al. (A&A, 2008) published a paper that presents a numerical benchmark for solar dynamo models. Here, I show how to reproduce it with the help of the computer algebra system Maxima. This approach was used in our paper (Pipin & Seehafer, A&A 2008, in print) to test some new ideas in large-scale stellar dynamos. In the present paper I complement the dynamo benchmark with the standard test that addresses the problem of the free-decay modes in a sphere submerged in vacuum.<|reference_end|> | arxiv | @article{pipin2008benchmarking,
title={Benchmarking the solar dynamo with Maxima},
author={Valery V. Pipin},
journal={arXiv preprint arXiv:0811.4061},
year={2008},
archivePrefix={arXiv},
eprint={0811.4061},
primaryClass={cs.SE cs.SC}
} | pipin2008benchmarking |
arxiv-5557 | 0811.4089 | Interval greedoids and families of local maximum stable sets of graphs | <|reference_start|>Interval greedoids and families of local maximum stable sets of graphs: A maximum stable set in a graph G is a stable set of maximum cardinality. S is a local maximum stable set of G, if S is a maximum stable set of the subgraph induced by its closed neighborhood. Nemhauser and Trotter Jr. proved in 1975 that any local maximum stable set is a subset of a maximum stable set of G. In 2002 we showed that the family of all local maximum stable sets of a forest forms a greedoid on its vertex set. The cases where G is bipartite, triangle-free, well-covered, while the family of all local maximum stable sets is a greedoid, were analyzed in 2004, 2007, and 2008, respectively. In this paper we demonstrate that if the family of all local maximum stable sets of the graph satisfies the accessibility property, then it is an interval greedoid. We also characterize those graphs whose families of local maximum stable sets are either antimatroids or matroids.<|reference_end|> | arxiv | @article{levit2008interval,
title={Interval greedoids and families of local maximum stable sets of graphs},
author={Vadim E. Levit and Eugen Mandrescu},
journal={arXiv preprint arXiv:0811.4089},
year={2008},
archivePrefix={arXiv},
eprint={0811.4089},
primaryClass={math.CO cs.DM}
} | levit2008interval |
arxiv-5558 | 0811.4121 | String Art: Circle Drawing Using Straight Lines | <|reference_start|>String Art: Circle Drawing Using Straight Lines: An algorithm to generate the locus of a circle using the intersection points of straight lines is proposed. The pixels on the circle are plotted independently of one another, and the operations involved in finding the locus of the circle from the intersection of straight lines are parallelizable. Integer-only arithmetic and algorithmic optimizations are used for speedup. The proposed algorithm makes use of an envelope to form a parabolic arc, which is subsequently transformed into a circle. The use of parabolic arcs for the transformation results in higher pixel errors as the radius of the circle to be drawn increases. In its current state, the algorithm presented may be suitable only for generating circles for string art.<|reference_end|> | arxiv | @article{k2008string,
title={String Art: Circle Drawing Using Straight Lines},
author={Sankar K and Sarad AV},
journal={arXiv preprint arXiv:0811.4121},
year={2008},
archivePrefix={arXiv},
eprint={0811.4121},
primaryClass={cs.GR cs.OH}
} | k2008string |
arxiv-5559 | 0811.4138 | LACK - a VoIP Steganographic Method | <|reference_start|>LACK - a VoIP Steganographic Method: The paper presents a new steganographic method called LACK (Lost Audio PaCKets Steganography) which is intended mainly for VoIP. The method is presented in a broader context of network steganography and of VoIP steganography in particular. The analytical results presented in the paper concern the influence of LACK's hidden data insertion procedure on the method's impact on quality of voice transmission and its resistance to steganalysis.<|reference_end|> | arxiv | @article{mazurczyk2008lack,
title={LACK - a VoIP Steganographic Method},
author={Wojciech Mazurczyk, Jozef Lubacz},
journal={arXiv preprint arXiv:0811.4138},
year={2008},
archivePrefix={arXiv},
eprint={0811.4138},
primaryClass={cs.CR cs.MM}
} | mazurczyk2008lack |
arxiv-5560 | 0811.4139 | Artin automorphisms, Cyclotomic function fields, and Folded list-decodable codes | <|reference_start|>Artin automorphisms, Cyclotomic function fields, and Folded list-decodable codes: Algebraic codes that achieve list decoding capacity were recently constructed by a careful ``folding'' of the Reed-Solomon code. The ``low-degree'' nature of this folding operation was crucial to the list decoding algorithm. We show how such folding schemes conducive to list decoding arise out of the Artin-Frobenius automorphism at primes in Galois extensions. Using this approach, we construct new folded algebraic-geometric codes for list decoding based on cyclotomic function fields with a cyclic Galois group. Such function fields are obtained by adjoining torsion points of the Carlitz action of an irreducible $M \in \mathbb{F}_q[T]$. The Reed-Solomon case corresponds to the simplest such extension (corresponding to the case $M=T$). In the general case, we need to descend to the fixed field of a suitable Galois subgroup in order to ensure the existence of many degree one places that can be used for encoding. Our methods shed new light on algebraic codes and their list decoding, and lead to new codes achieving list decoding capacity. Quantitatively, these codes provide list decoding (and list recovery/soft decoding) guarantees similar to folded Reed-Solomon codes but with an alphabet size that is only polylogarithmic in the block length. In comparison, for folded RS codes, the alphabet size is a large polynomial in the block length. This has applications to fully explicit (with no brute-force search) binary concatenated codes for list decoding up to the Zyablov radius.<|reference_end|> | arxiv | @article{guruswami2008artin,
title={Artin automorphisms, Cyclotomic function fields, and Folded
list-decodable codes},
author={Venkatesan Guruswami},
journal={arXiv preprint arXiv:0811.4139},
year={2008},
archivePrefix={arXiv},
eprint={0811.4139},
primaryClass={math.NT cs.IT math.IT}
} | guruswami2008artin |
arxiv-5561 | 0811.4162 | Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels | <|reference_start|>Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels: Consider a memoryless degraded broadcast channel (DBC) in which the channel output is a single-letter function of the channel input and the channel noise. As examples, for the Gaussian broadcast channel (BC) this single-letter function is regular Euclidean addition and for the binary-symmetric BC this single-letter function is Galois-Field-two addition. This paper identifies several classes of discrete memoryless DBCs for which a relatively simple encoding scheme, which we call natural encoding, achieves capacity. Natural Encoding (NE) combines symbols from independent codebooks (one for each receiver) using the same single-letter function that adds distortion to the channel. The alphabet size of each NE codebook is bounded by that of the channel input. Inspired by Witsenhausen and Wyner, this paper defines the conditional entropy bound function $F^*$, studies its properties, and applies them to show that NE achieves the boundary of the capacity region for the multi-receiver broadcast Z channel. Then, this paper defines the input-symmetric DBC, introduces permutation encoding for the input-symmetric DBC, and proves its optimality. Because it is a special case of permutation encoding, NE is capacity achieving for the two-receiver group-operation DBC. Combining the broadcast Z channel and group-operation DBC results yields a proof that NE is also optimal for the discrete multiplication DBC. Along the way, the paper also provides explicit parametric expressions for the two-receiver binary-symmetric DBC and broadcast Z channel.<|reference_end|> | arxiv | @article{xie2008optimal,
title={Optimal Encoding Schemes for Several Classes of Discrete Degraded
Broadcast Channels},
author={Bike Xie, Thomas Courtade, and Richard D. Wesel},
journal={arXiv preprint arXiv:0811.4162},
year={2008},
archivePrefix={arXiv},
eprint={0811.4162},
primaryClass={cs.IT math.IT}
} | xie2008optimal |
arxiv-5562 | 0811.4163 | Packing and Covering Properties of Subspace Codes for Error Control in Random Linear Network Coding | <|reference_start|>Packing and Covering Properties of Subspace Codes for Error Control in Random Linear Network Coding: Codes in the projective space and codes in the Grassmannian over a finite field - referred to as subspace codes and constant-dimension codes (CDCs), respectively - have been proposed for error control in random linear network coding. For subspace codes and CDCs, a subspace metric was introduced to correct both errors and erasures, and an injection metric was proposed to correct adversarial errors. In this paper, we investigate the packing and covering properties of subspace codes with both metrics. We first determine some fundamental geometric properties of the projective space with both metrics. Using these properties, we then derive bounds on the cardinalities of packing and covering subspace codes, and determine the asymptotic rates of optimal packing and optimal covering subspace codes with both metrics. Our results not only provide guiding principles for the code design for error control in random linear network coding, but also illustrate the difference between the two metrics from a geometric perspective. In particular, our results show that optimal packing CDCs are optimal packing subspace codes up to a scalar for both metrics if and only if their dimension is half of their length (up to rounding). In this case, CDCs suffer from only limited rate loss as opposed to subspace codes with the same minimum distance. We also show that optimal covering CDCs can be used to construct asymptotically optimal covering subspace codes with the injection metric only.<|reference_end|> | arxiv | @article{gadouleau2008packing,
title={Packing and Covering Properties of Subspace Codes for Error Control in
Random Linear Network Coding},
author={Maximilien Gadouleau and Zhiyuan Yan},
journal={arXiv preprint arXiv:0811.4163},
year={2008},
archivePrefix={arXiv},
eprint={0811.4163},
primaryClass={cs.IT math.IT}
} | gadouleau2008packing |
arxiv-5563 | 0811.4170 | High resolution dynamical mapping of social interactions with active RFID | <|reference_start|>High resolution dynamical mapping of social interactions with active RFID: In this paper we present an experimental framework to gather data on face-to-face social interactions between individuals, with a high spatial and temporal resolution. We use active Radio Frequency Identification (RFID) devices that assess contacts with one another by exchanging low-power radio packets. When individuals wear the beacons as a badge, a persistent radio contact between the RFID devices can be used as a proxy for a social interaction between individuals. We present the results of a pilot study recently performed during a conference, and a subsequent preliminary data analysis, that provides an assessment of our method and highlights its versatility and applicability in many areas concerned with human dynamics.<|reference_end|> | arxiv | @article{barrat2008high,
title={High resolution dynamical mapping of social interactions with active
RFID},
author={Alain Barrat, Ciro Cattuto, Vittoria Colizza, Jean-Francois Pinton,
Wouter Van den Broeck, Alessandro Vespignani},
journal={PLoS ONE 5(7): e11596 (2010)},
year={2008},
doi={10.1371/journal.pone.0011596},
archivePrefix={arXiv},
eprint={0811.4170},
primaryClass={cs.CY cs.HC physics.soc-ph}
} | barrat2008high |
arxiv-5564 | 0811.4186 | Search Result Clustering via Randomized Partitioning of Query-Induced Subgraphs | <|reference_start|>Search Result Clustering via Randomized Partitioning of Query-Induced Subgraphs: In this paper, we present an approach to search result clustering, using partitioning of the underlying link graph. We define the notion of "query-induced subgraph" and formulate the problem of search result clustering as a problem of efficiently partitioning a given subgraph into topic-related clusters. Also, we propose a novel algorithm for approximate partitioning of such a graph, which yields cluster quality comparable to that obtained by deterministic algorithms while running in less computation time, making it suitable for practical implementations. Finally, we present a practical clustering search engine developed as part of this research and use it to assess the real-world performance of the proposed concepts.<|reference_end|> | arxiv | @article{bradic2008search,
title={Search Result Clustering via Randomized Partitioning of Query-Induced
Subgraphs},
author={Aleksandar Bradic},
journal={arXiv preprint arXiv:0811.4186},
year={2008},
archivePrefix={arXiv},
eprint={0811.4186},
primaryClass={cs.IR cs.DS}
} | bradic2008search |
arxiv-5565 | 0811.4191 | Performance of Hybrid-ARQ in Block-Fading Channels: A Fixed Outage Probability Analysis | <|reference_start|>Performance of Hybrid-ARQ in Block-Fading Channels: A Fixed Outage Probability Analysis: This paper studies the performance of hybrid-ARQ (automatic repeat request) in Rayleigh block fading channels. The long-term average transmitted rate is analyzed in a fast-fading scenario where the transmitter only has knowledge of channel statistics, and, consistent with contemporary wireless systems, rate adaptation is performed such that a target outage probability (after a maximum number of H-ARQ rounds) is maintained. H-ARQ allows for early termination once decoding is possible, and thus is a coarse, and implicit, mechanism for rate adaptation to the instantaneous channel quality. Although the rate with H-ARQ is not as large as the ergodic capacity, which is achievable with rate adaptation to the instantaneous channel conditions, even a few rounds of H-ARQ make the gap to ergodic capacity reasonably small for operating points of interest. Furthermore, the rate with H-ARQ provides a significant advantage compared to systems that do not use H-ARQ and only adapt rate based on the channel statistics.<|reference_end|> | arxiv | @article{wu2008performance,
title={Performance of Hybrid-ARQ in Block-Fading Channels: A Fixed Outage
Probability Analysis},
author={Peng Wu and Nihar Jindal},
journal={arXiv preprint arXiv:0811.4191},
year={2008},
archivePrefix={arXiv},
eprint={0811.4191},
primaryClass={cs.IT math.IT}
} | wu2008performance |
arxiv-5566 | 0811.4200 | Two Models for Noisy Feedback in MIMO Channels | <|reference_start|>Two Models for Noisy Feedback in MIMO Channels: Two distinct models of feedback, suited for FDD (Frequency Division Duplex) and TDD (Time Division Duplex) systems respectively, have been widely studied in the literature. In this paper, we compare these two models of feedback in terms of the diversity multiplexing tradeoff for varying amount of channel state information at the terminals. We find that, when all imperfections are accounted for, the maximum achievable diversity order in FDD systems matches the diversity order in TDD systems. TDD systems achieve better diversity order at higher multiplexing gains. In FDD systems, the maximum diversity order can be achieved with just a single bit of feedback. Additional bits of feedback (perfect or imperfect) do not affect the diversity order if the receiver does not know the channel state information.<|reference_end|> | arxiv | @article{aggarwal2008two,
title={Two Models for Noisy Feedback in MIMO Channels},
author={Vaneet Aggarwal and Gajanana Krishna and Srikrishna Bhashyam and
Ashutosh Sabharwal},
journal={arXiv preprint arXiv:0811.4200},
year={2008},
archivePrefix={arXiv},
eprint={0811.4200},
primaryClass={cs.IT math.IT}
} | aggarwal2008two |
arxiv-5567 | 0811.4227 | Entanglement-assisted communication of classical and quantum information | <|reference_start|>Entanglement-assisted communication of classical and quantum information: We consider the problem of transmitting classical and quantum information reliably over an entanglement-assisted quantum channel. Our main result is a capacity theorem that gives a three-dimensional achievable rate region. Points in the region are rate triples, consisting of the classical communication rate, the quantum communication rate, and the entanglement consumption rate of a particular coding scheme. The crucial protocol in achieving the boundary points of the capacity region is a protocol that we name the classically-enhanced father protocol. The classically-enhanced father protocol is more general than other protocols in the family tree of quantum Shannon theoretic protocols, in the sense that several previously known quantum protocols are now child protocols of it. The classically-enhanced father protocol also shows an improvement over a time-sharing strategy for the case of a qubit dephasing channel--this result justifies the need for simultaneous coding of classical and quantum information over an entanglement-assisted quantum channel. Our capacity theorem is of a multi-letter nature (requiring a limit over many uses of the channel), but it reduces to a single-letter characterization for at least three channels: the completely depolarizing channel, the quantum erasure channel, and the qubit dephasing channel.<|reference_end|> | arxiv | @article{hsieh2008entanglement-assisted,
title={Entanglement-assisted communication of classical and quantum information},
author={Min-Hsiu Hsieh and Mark M. Wilde},
journal={IEEE Transactions on Information Theory, vol. 56, no. 9, pp.
4682-4704, September 2010},
year={2008},
doi={10.1109/TIT.2010.2053903},
archivePrefix={arXiv},
eprint={0811.4227},
primaryClass={quant-ph cs.IT math.IT}
} | hsieh2008entanglement-assisted |
arxiv-5568 | 0811.4257 | Cryptanalysis of the SASI Ultralightweight RFID Authentication Protocol with Modular Rotations | <|reference_start|>Cryptanalysis of the SASI Ultralightweight RFID Authentication Protocol with Modular Rotations: In this work we present the first passive attack over the SASI lightweight authentication protocol with modular rotations. This can be used to fully recover the secret $ID$ of the RFID tag, which is the value the protocol is designed to conceal. The attack is described initially for recovering $\lfloor \log_2(96) \rfloor=6$ bits of the secret value $ID$, a result that by itself allows to mount traceability attacks on any given tag. However, the proposed scheme can be extended to obtain any amount of bits of the secret $ID$, provided a sufficiently large number of successful consecutive sessions are eavesdropped. We also present results on the attack's efficiency, and some ideas to secure this version of the SASI protocol.<|reference_end|> | arxiv | @article{hernandez-castro2008cryptanalysis,
title={Cryptanalysis of the SASI Ultralightweight RFID Authentication Protocol
with Modular Rotations},
author={Julio C. Hernandez-Castro, Juan M. E. Tapiador, Pedro Peris-Lopez,
Jean-Jacques Quisquater},
journal={arXiv preprint arXiv:0811.4257},
year={2008},
archivePrefix={arXiv},
eprint={0811.4257},
primaryClass={cs.CR}
} | hernandez-castro2008cryptanalysis |
arxiv-5569 | 0811.4324 | Ensuring Query Compatibility with Evolving XML Schemas | <|reference_start|>Ensuring Query Compatibility with Evolving XML Schemas: During the life cycle of an XML application, both schemas and queries may change from one version to another. Schema evolutions may affect query results and potentially the validity of produced data. Nowadays, a challenge is to assess and accommodate the impact of these changes in rapidly evolving XML applications. This article proposes a logical framework and tool for verifying forward/backward compatibility issues involving schemas and queries. First, it allows analyzing relations between schemas. Second, it allows XML designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. Third, it allows examining more precisely the impact of schema changes over queries, therefore facilitating their reformulation.<|reference_end|> | arxiv | @article{genevès2008ensuring,
title={Ensuring Query Compatibility with Evolving XML Schemas},
  author={Pierre Genev\`es, Nabil Laya\"ida, and Vincent Quint},
journal={arXiv preprint arXiv:0811.4324},
year={2008},
number={RR-6711},
archivePrefix={arXiv},
eprint={0811.4324},
primaryClass={cs.PL cs.SE}
} | genevès2008ensuring |
arxiv-5570 | 0811.4339 | Finite Lattice-Size Effects in MIMO Detection | <|reference_start|>Finite Lattice-Size Effects in MIMO Detection: Many powerful data detection algorithms employed in multiple-input multiple-output (MIMO) communication systems, such as sphere decoding (SD) and lattice-reduction (LR)-aided detection, were initially designed for infinite lattices. Detection in MIMO systems is, however, based on finite lattices. In this paper, we systematically study the consequences of finite lattice-size for the performance and complexity of MIMO detection algorithms formulated for infinite lattices. Specifically, we find, considering performance and complexity, that LR does not seem to offer advantages when used in conjunction with SD.<|reference_end|> | arxiv | @article{studer2008finite,
title={Finite Lattice-Size Effects in MIMO Detection},
  author={Christoph Studer, Dominik Seethaler, Helmut B\"olcskei},
journal={Proc. 42th Asilomar Conf. Signals, Systems, and Computers, Pacific
Grove, CA, Oct. 2008},
year={2008},
archivePrefix={arXiv},
eprint={0811.4339},
primaryClass={cs.IT math.IT}
} | studer2008finite |
arxiv-5571 | 0811.4346 | Dynamic Indexability: The Query-Update Tradeoff for One-Dimensional Range Queries | <|reference_start|>Dynamic Indexability: The Query-Update Tradeoff for One-Dimensional Range Queries: The B-tree is a fundamental secondary index structure that is widely used for answering one-dimensional range reporting queries. Given a set of $N$ keys, a range query can be answered in $O(\log_B \frac{N}{M} + \frac{K}{B})$ I/Os, where $B$ is the disk block size, $K$ the output size, and $M$ the size of the main memory buffer. When keys are inserted or deleted, the B-tree is updated in $O(\log_B N)$ I/Os, if we require the resulting changes to be committed to disk right away. Otherwise, the memory buffer can be used to buffer the recent updates, and changes can be written to disk in batches, which significantly lowers the amortized update cost. A systematic way of batching up updates is to use the logarithmic method, combined with fractional cascading, resulting in a dynamic B-tree that supports insertions in $O(\frac{1}{B}\log\frac{N}{M})$ I/Os and queries in $O(\log\frac{N}{M} + \frac{K}{B})$ I/Os. Such bounds have also been matched by several known dynamic B-tree variants in the database literature. In this paper, we prove that for any dynamic one-dimensional range query index structure with query cost $O(q+\frac{K}{B})$ and amortized insertion cost $O(u/B)$, the tradeoff $q\cdot \log(u/q) = \Omega(\log B)$ must hold if $q=O(\log B)$. For most reasonable values of the parameters, we have $\frac{N}{M} = B^{O(1)}$, in which case our query-insertion tradeoff implies that the bounds mentioned above are already optimal. Our lower bounds hold in a dynamic version of the {\em indexability model}, which is of independent interest.<|reference_end|> | arxiv | @article{yi2008dynamic,
title={Dynamic Indexability: The Query-Update Tradeoff for One-Dimensional
Range Queries},
author={Ke Yi},
journal={arXiv preprint arXiv:0811.4346},
year={2008},
archivePrefix={arXiv},
eprint={0811.4346},
primaryClass={cs.DS cs.DB}
} | yi2008dynamic |
arxiv-5572 | 0811.4349 | Anti Plagiarism Application with Algorithm Karp-Rabin at Thesis in Gunadarma University | <|reference_start|>Anti Plagiarism Application with Algorithm Karp-Rabin at Thesis in Gunadarma University: Plagiarism is the appropriation of another person's writing, opinions, or other work and its presentation as one's own. Plagiarism can be considered a crime, since it amounts to stealing the copyright of others, for example when a student copies part of a text without valid permission from the original author. In the academic world, a plagiarist can face severe sanctions from his or her school or university. Nevertheless, plagiarism may go undetected by the university due to several interrelated factors; for instance, the number of students at Gunadarma University reaches into the thousands, which is disproportionate to the number of examiners and thesis advisors in charge. In this paper, an application has been developed that checks a thesis against others at selected parts or chapters and reports the similarity in five percentage categories: 0%, under 15%, between 15% and 50%, above 50%, and 100%. The results are expected to be useful to thesis advisors and thesis examiners of students at Gunadarma University.<|reference_end|> | arxiv | @article{mutiara2008anti,
title={Anti Plagiarism Application with Algorithm Karp-Rabin at Thesis in
Gunadarma University},
author={A.B.Mutiara and S.Agustina},
journal={arXiv preprint arXiv:0811.4349},
year={2008},
archivePrefix={arXiv},
eprint={0811.4349},
primaryClass={cs.IT cs.DL math.IT}
} | mutiara2008anti |
arxiv-5573 | 0811.4354 | Soft-Input Soft-Output Sphere Decoding | <|reference_start|>Soft-Input Soft-Output Sphere Decoding: Soft-input soft-output (SISO) detection algorithms form the basis for iterative decoding. The associated computational complexity often poses significant challenges for practical receiver implementations, in particular in the context of multiple-input multiple-output wireless systems. In this paper, we present a low-complexity SISO sphere decoder which is based on the single tree search paradigm, proposed originally for soft-output detection in Studer et al., IEEE J-SAC, 2008. The algorithm incorporates clipping of the extrinsic log-likelihood ratios in the tree search, which not only results in significant complexity savings, but also allows to cover a large performance/complexity trade-off region by adjusting a single parameter.<|reference_end|> | arxiv | @article{studer2008soft-input,
title={Soft-Input Soft-Output Sphere Decoding},
  author={Christoph Studer and Helmut B\"olcskei},
journal={IEEE Int. Symposium on Information Theory (ISIT), Toronto, ON,
Canada, pp. 2007-2011, July 2008},
year={2008},
archivePrefix={arXiv},
eprint={0811.4354},
primaryClass={cs.IT math.IT}
} | studer2008soft-input |
arxiv-5574 | 0811.4364 | Revisiting the Core Ontology and Problem in Requirements Engineering | <|reference_start|>Revisiting the Core Ontology and Problem in Requirements Engineering: In their seminal paper in the ACM Transactions on Software Engineering and Methodology, Zave and Jackson established a core ontology for Requirements Engineering (RE) and used it to formulate the "requirements problem", thereby defining what it means to successfully complete RE. Given that stakeholders of the system-to-be communicate the information needed to perform RE, we show that Zave and Jackson's ontology is incomplete. It does not cover all types of basic concerns that the stakeholders communicate. These include beliefs, desires, intentions, and attitudes. In response, we propose a core ontology that covers these concerns and is grounded in sound conceptual foundations resting on a foundational ontology. The new core ontology for RE leads to a new formulation of the requirements problem that extends Zave and Jackson's formulation. We thereby establish new standards for what minimum information should be represented in RE languages and new criteria for determining whether RE has been successfully completed.<|reference_end|> | arxiv | @article{jureta2008revisiting,
title={Revisiting the Core Ontology and Problem in Requirements Engineering},
author={Ivan Jureta, John Mylopoulos, Stephane Faulkner},
journal={arXiv preprint arXiv:0811.4364},
year={2008},
doi={10.1109/RE.2008.13},
archivePrefix={arXiv},
eprint={0811.4364},
primaryClass={cs.SE}
} | jureta2008revisiting |
arxiv-5575 | 0811.4367 | Hybrid: A Definitional Two-Level Approach to Reasoning with Higher-Order Abstract Syntax | <|reference_start|>Hybrid: A Definitional Two-Level Approach to Reasoning with Higher-Order Abstract Syntax: Combining higher-order abstract syntax and (co)induction in a logical framework is well known to be problematic. Previous work described the implementation of a tool called Hybrid, within Isabelle HOL, which aims to address many of these difficulties. It allows object logics to be represented using higher-order abstract syntax, and reasoned about using tactical theorem proving and principles of (co)induction. In this paper we describe how to use it in a multi-level reasoning fashion, similar in spirit to other meta-logics such as Twelf. By explicitly referencing provability in a middle layer called a specification logic, we solve the problem of reasoning by (co)induction in the presence of non-stratifiable hypothetical judgments, which allow very elegant and succinct specifications of object logic inference rules.<|reference_end|> | arxiv | @article{felty2008hybrid:,
title={Hybrid: A Definitional Two-Level Approach to Reasoning with Higher-Order
Abstract Syntax},
author={Amy Felty and Alberto Momigliano},
journal={arXiv preprint arXiv:0811.4367},
year={2008},
number={University of Ottawa Technical Report, number TR-2008-03},
archivePrefix={arXiv},
eprint={0811.4367},
primaryClass={cs.LO}
} | felty2008hybrid: |
arxiv-5576 | 0811.4376 | How robust is quicksort average complexity? | <|reference_start|>How robust is quicksort average complexity?: The paper questions the robustness of average case time complexity of the fast and popular quicksort algorithm. Among the six standard probability distributions examined in the paper, only continuous uniform, exponential and standard normal are supporting it whereas the others are supporting the worst case complexity measure. To the question -- why are we getting the worst case complexity measure each time the average case measure is discredited? -- one logical answer is that average case complexity under the universal distribution equals worst case complexity. This answer, which is hard to challenge, however gives no idea as to which of the standard probability distributions come under the umbrella of universality. The moral is that average case complexity measures, in cases where they are different from those in worst case, should be deemed as robust provided only they get the support from at least the standard probability distributions, both discrete and continuous. Regretfully, this is not the case with quicksort.<|reference_end|> | arxiv | @article{sourabh2008how,
title={How robust is quicksort average complexity?},
author={Suman Kumar Sourabh and Soubhik Chakraborty},
journal={arXiv preprint arXiv:0811.4376},
year={2008},
archivePrefix={arXiv},
eprint={0811.4376},
primaryClass={cs.DS cs.CC}
} | sourabh2008how |
arxiv-5577 | 0811.4391 | Cross-Layer Link Adaptation Design for Relay Channels with Cooperative ARQ Protocol | <|reference_start|>Cross-Layer Link Adaptation Design for Relay Channels with Cooperative ARQ Protocol: The cooperative automatic repeat request (C-ARQ) is a link layer relaying protocol which exploits the spatial diversity and allows the relay node to retransmit the source data packet to the destination, when the latter is unable to decode the source data correctly. This paper presents a cross-layer link adaptation design for C-ARQ based relay channels in which both source and relay nodes employ adaptive modulation coding and power adaptation at the physical layer. For this scenario, we first derive closed-form expressions for the system spectral efficiency and average power consumption. We then present a low complexity iterative algorithm to find the optimized adaptation solution by maximizing the spectral efficiency subject to a packet loss rate (PLR) and an average power consumption constraint. The results indicate that the proposed adaptation scheme enhances the spectral efficiency noticeably when compared to other adaptive schemes, while guaranteeing the required PLR performance.<|reference_end|> | arxiv | @article{mardani2008cross-layer,
title={Cross-Layer Link Adaptation Design for Relay Channels with Cooperative
ARQ Protocol},
author={Morteza Mardani, Jalil S. Harsini and Farshad Lahouti},
journal={arXiv preprint arXiv:0811.4391},
year={2008},
archivePrefix={arXiv},
eprint={0811.4391},
primaryClass={cs.IT math.IT}
} | mardani2008cross-layer |
arxiv-5578 | 0811.4395 | List Decoding Tensor Products and Interleaved Codes | <|reference_start|>List Decoding Tensor Products and Interleaved Codes: We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. We show that for {\em every} code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes including multivariate polynomials where the degree in each variable is bounded. We show that for {\em every} code, its list decoding radius remains unchanged under $m$-wise interleaving for an integer $m$. This generalizes a recent result of Dinur et al \cite{DGKS}, who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). Using the notion of generalized Hamming weights, we give better list size bounds for {\em both} tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of bounding the list size to bounding the number of close-by low-rank codewords. For decoding linear transformations, using rank-reduction together with other ideas, we obtain list size bounds that are tight over small fields.<|reference_end|> | arxiv | @article{gopalan2008list,
title={List Decoding Tensor Products and Interleaved Codes},
author={Parikshit Gopalan, Venkatesan Guruswami, and Prasad Raghavendra},
journal={arXiv preprint arXiv:0811.4395},
year={2008},
archivePrefix={arXiv},
eprint={0811.4395},
primaryClass={cs.IT math.IT}
} | gopalan2008list |
arxiv-5579 | 0811.4397 | Joint Adaptive Modulation-Coding and Cooperative ARQ for Wireless Relay Networks | <|reference_start|>Joint Adaptive Modulation-Coding and Cooperative ARQ for Wireless Relay Networks: This paper presents a cross-layer approach to jointly design adaptive modulation and coding (AMC) at the physical layer and cooperative truncated automatic repeat request (ARQ) protocol at the data link layer. We first derive an exact closed form expression for the spectral efficiency of the proposed joint AMC-cooperative ARQ scheme. Aiming at maximizing this system performance measure, we then optimize an AMC scheme which directly satisfies a prescribed packet loss rate constraint at the data-link layer. The results indicate that utilizing cooperative ARQ as a retransmission strategy, noticeably enhances the spectral efficiency compared with the system that employs AMC alone at the physical layer. Moreover, the proposed adaptive rate cooperative ARQ scheme outperforms the fixed rate counterpart when the transmission modes at the source and relay are chosen based on the channel statistics. This in turn quantifies the possible gain achieved by joint design of AMC and ARQ in wireless relay networks.<|reference_end|> | arxiv | @article{mardani2008joint,
title={Joint Adaptive Modulation-Coding and Cooperative ARQ for Wireless Relay
Networks},
author={Morteza Mardani, Jalil S. Harsini, Farshad Lahouti, Behrouz Eliasi},
journal={arXiv preprint arXiv:0811.4397},
year={2008},
doi={10.1109/ISWCS.2008.4726069},
archivePrefix={arXiv},
eprint={0811.4397},
primaryClass={cs.IT math.IT}
} | mardani2008joint |
arxiv-5580 | 0811.4403 | Joint Adaptive Modulation Coding and Cooperative ARQ over Relay Channels-Applications to Land Mobile Satellite Communications | <|reference_start|>Joint Adaptive Modulation Coding and Cooperative ARQ over Relay Channels-Applications to Land Mobile Satellite Communications: In a cooperative relay network, a relay node (R) facilitates data transmission to the destination node (D), when the latter is unable to decode the source node (S) data correctly. This paper considers such a system model and presents a cross-layer approach to jointly design adaptive modulation and coding (AMC) at the physical layer and cooperative truncated automatic repeat request (ARQ) protocol at the data link layer. We first derive a closed form expression for the spectral efficiency of the joint cooperative ARQ-AMC scheme. Aiming at maximizing this performance measure, we then optimize two AMC schemes for S-D and R-D links, which directly satisfy a prescribed packet loss rate constraint. As an interesting application, we also consider the problem of joint link adaptation and blockage mitigation in land mobile satellite communications (LMSC). We also present a new relay-assisted transmission protocol for LMSC, which delivers the source data to the destination via the relaying link, when the S-D channel is in outage. Numerical results indicate that the proposed schemes noticeably enhance the spectral efficiency compared to a system which uses a conventional ARQ-AMC scheme at the S-D link, or a system which employs an optimized fixed rate cooperative-ARQ protocol.<|reference_end|> | arxiv | @article{mardani2008joint,
title={Joint Adaptive Modulation Coding and Cooperative ARQ over Relay
Channels-Applications to Land Mobile Satellite Communications},
author={Morteza Mardani, Jalil S. Harsini, Farshad Lahouti, Behrouz Eliasi},
journal={arXiv preprint arXiv:0811.4403},
year={2008},
archivePrefix={arXiv},
eprint={0811.4403},
primaryClass={cs.IT math.IT}
} | mardani2008joint |
arxiv-5581 | 0811.4413 | A Spectral Algorithm for Learning Hidden Markov Models | <|reference_start|>A Spectral Algorithm for Learning Hidden Markov Models: Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.<|reference_end|> | arxiv | @article{hsu2008a,
title={A Spectral Algorithm for Learning Hidden Markov Models},
author={Daniel Hsu, Sham M. Kakade, Tong Zhang},
journal={Journal of Computer and System Sciences, 78(5):1460-1480, 2012},
year={2008},
archivePrefix={arXiv},
eprint={0811.4413},
primaryClass={cs.LG cs.AI}
} | hsu2008a |
arxiv-5582 | 0811.4458 | Learning Class-Level Bayes Nets for Relational Data | <|reference_start|>Learning Class-Level Bayes Nets for Relational Data: Many databases store data in relational format, with different types of entities and information about links between the entities. The field of statistical-relational learning (SRL) has developed a number of new statistical models for such data. In this paper we focus on learning class-level or first-order dependencies, which model the general database statistics over attributes of linked objects and links (e.g., the percentage of A grades given in computer science classes). Class-level statistical relationships are important in themselves, and they support applications like policy making, strategic planning, and query optimization. Most current SRL methods find class-level dependencies, but their main task is to support instance-level predictions about the attributes or links of specific entities. We focus only on class-level prediction, and describe algorithms for learning class-level models that are orders of magnitude faster for this task. Our algorithms learn Bayes nets with relational structure, leveraging the efficiency of single-table nonrelational Bayes net learners. An evaluation of our methods on three data sets shows that they are computationally feasible for realistic table sizes, and that the learned structures represent the statistical information in the databases well. After learning compiles the database statistics into a Bayes net, querying these statistics via Bayes net inference is faster than with SQL queries, and does not depend on the size of the database.<|reference_end|> | arxiv | @article{schulte2008learning,
title={Learning Class-Level Bayes Nets for Relational Data},
author={Oliver Schulte, Hassan Khosravi, Flavia Moser, Martin Ester},
journal={arXiv preprint arXiv:0811.4458},
year={2008},
number={TR 2008-17, School of Computing Science, Simon Fraser University},
archivePrefix={arXiv},
eprint={0811.4458},
primaryClass={cs.LG cs.AI}
} | schulte2008learning |
arxiv-5583 | 0811.4483 | Wide spread spectrum watermarking with side information and interference cancellation | <|reference_start|>Wide spread spectrum watermarking with side information and interference cancellation: Nowadays, a popular method used for additive watermarking is wide spread spectrum. It consists in adding a spread signal into the host document. This signal is obtained by the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract these embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the obtained set of bits can be corrupted due to the interference with the host signal (host interference) and also due to the interference with the other carriers (inter-symbol interference (ISI) due to the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side informed methods, but the inter-symbol interference problem is still open. In this paper, we deal with interference cancellation methods, and we propose to consider ISI as side information and to integrate it into the host signal. This leads to a great improvement of extraction performance in terms of signal-to-noise ratio and/or watermark robustness.<|reference_end|> | arxiv | @article{guelvouit2008wide,
title={Wide spread spectrum watermarking with side information and interference
cancellation},
  author={Ga\"etan Le Guelvouit, St\'ephane Pateux},
  journal={Proc. IS\&T/SPIE Electronic Imaging, vol. 5020, Santa Clara, CA,
Jan. 2003},
year={2008},
doi={10.1117/12.476839},
archivePrefix={arXiv},
eprint={0811.4483},
primaryClass={cs.MM cs.IT math.IT}
} | guelvouit2008wide |
arxiv-5584 | 0811.4489 | Automatic Generation of the Axial Lines of Urban Environments to Capture What We Perceive | <|reference_start|>Automatic Generation of the Axial Lines of Urban Environments to Capture What We Perceive: Based on the concepts of isovists and medial axes, we developed a set of algorithms that can automatically generate axial lines for representing individual linearly stretched parts of open space of an urban environment. Open space is the space between buildings, where people can freely move around. The generation of the axial lines has been a key aspect of space syntax research, conventionally relying on hand-drawn axial lines of an urban environment, often called axial map, for urban morphological analysis. Although various attempts have been made towards an automatic solution, few of them can produce the axial map that consists of the least number of longest visibility lines, and none of them really works for different urban environments. Our algorithms provide a better solution than existing ones. Throughout this paper, we have also argued and demonstrated that the axial lines constitute a true skeleton, superior to medial axes, in capturing what we perceive about the urban environment. Keywords: Visibility, space syntax, topological analysis, medial axes, axial lines, isovists<|reference_end|> | arxiv | @article{jiang2008automatic,
title={Automatic Generation of the Axial Lines of Urban Environments to Capture
What We Perceive},
author={Bin Jiang and Xintao Liu},
journal={International Journal of Geographical Information Science, 24(4),
2010, 545-558},
year={2008},
archivePrefix={arXiv},
eprint={0811.4489},
primaryClass={cs.CG cs.CV}
} | jiang2008automatic |
arxiv-5585 | 0811.4497 | Homomorphism Preservation on Quasi-Wide Classes | <|reference_start|>Homomorphism Preservation on Quasi-Wide Classes: A class of structures is said to have the homomorphism-preservation property just in case every first-order formula that is preserved by homomorphisms on this class is equivalent to an existential-positive formula. It is known by a result of Rossman that the class of finite structures has this property and by previous work of Atserias et al. that various of its subclasses do. We extend the latter results by introducing the notion of a quasi-wide class and showing that any quasi-wide class that is closed under taking substructures and disjoint unions has the homomorphism-preservation property. We show, in particular, that classes of structures of bounded expansion and that locally exclude minors are quasi-wide. We also construct an example of a class of finite structures which is closed under substructures and disjoint unions but does not admit the homomorphism-preservation property.<|reference_end|> | arxiv | @article{dawar2008homomorphism,
title={Homomorphism Preservation on Quasi-Wide Classes},
author={Anuj Dawar},
journal={arXiv preprint arXiv:0811.4497},
year={2008},
archivePrefix={arXiv},
eprint={0811.4497},
primaryClass={cs.LO}
} | dawar2008homomorphism |
arxiv-5586 | 0811.4565 | Ergodic Capacity Analysis of Amplify-and-Forward MIMO Dual-Hop Systems | <|reference_start|>Ergodic Capacity Analysis of Amplify-and-Forward MIMO Dual-Hop Systems: This paper presents an analytical characterization of the ergodic capacity of amplify-and-forward (AF) MIMO dual-hop relay channels, assuming that the channel state information is available at the destination terminal only. In contrast to prior results, our expressions apply for arbitrary numbers of antennas and arbitrary relay configurations. We derive an expression for the exact ergodic capacity, simplified closed-form expressions for the high SNR regime, and tight closed-form upper and lower bounds. These results are made possible by employing recent tools from finite-dimensional random matrix theory to derive new closed-form expressions for various statistical properties of the equivalent AF MIMO dual-hop relay channel, such as the distribution of an unordered eigenvalue and certain random determinant properties. Based on the analytical capacity expressions, we investigate the impact of the system and channel characteristics, such as the antenna configuration and the relay power gain. We also demonstrate a number of interesting relationships between the dual-hop AF MIMO relay channel and conventional point-to-point MIMO channels in various asymptotic regimes.<|reference_end|> | arxiv | @article{jin2008ergodic,
title={Ergodic Capacity Analysis of Amplify-and-Forward MIMO Dual-Hop Systems},
  author={Shi Jin and Matthew R. McKay and Caijun Zhong and Kai-Kit Wong},
journal={arXiv preprint arXiv:0811.4565},
year={2008},
doi={10.1109/TIT.2010.2043765},
archivePrefix={arXiv},
eprint={0811.4565},
primaryClass={cs.IT math.IT}
} | jin2008ergodic |
arxiv-5587 | 0811.4603 | Frozen Footprints | <|reference_start|>Frozen Footprints: Bibliometrics has the ambitious goal of measuring science. To this end, it exploits the way science is disseminated through scientific publications and the resulting citation network of scientific papers. We survey the main historical contributions to the field, the most interesting bibliometric indicators, and the most popular bibliometric data sources. Moreover, we discuss distributions commonly used to model bibliometric phenomena and give an overview of methods to build bibliometric maps of science.<|reference_end|> | arxiv | @article{franceschet2008frozen,
title={Frozen Footprints},
author={Massimo Franceschet},
journal={arXiv preprint arXiv:0811.4603},
year={2008},
archivePrefix={arXiv},
eprint={0811.4603},
primaryClass={cs.DL cs.IR}
} | franceschet2008frozen |
arxiv-5588 | 0811.4630 | Channel State Prediction, Feedback and Scheduling for a Multiuser MIMO-OFDM Downlink | <|reference_start|>Channel State Prediction, Feedback and Scheduling for a Multiuser MIMO-OFDM Downlink: We consider the downlink of a MIMO-OFDM wireless system where the base-station (BS) has M antennas and serves K single-antenna user terminals (UTs) with K larger than or equal to M. Users estimate their channel vectors from common downlink pilot symbols and feed back a prediction, which is used by the BS to compute the linear beamforming matrix for the next time slot and to select the users to be served according to the proportional fair scheduling (PFS) algorithm. We consider a realistic physical channel model used as a benchmark in standardization and some alternatives for the channel estimation and prediction scheme. We show that a parametric method based on ESPRIT is able to accurately predict the channel even for relatively high user mobility. However, there exists a class of channels characterized by large Doppler spread (high mobility) and clustered angular spread for which prediction is intrinsically difficult and all considered methods fail. We propose a modified PFS that takes into account the "predictability" state of the UTs, and significantly outperforms the classical PFS in the presence of prediction errors. The main conclusion of this work is that the multiuser MIMO downlink yields very good performance even in the presence of high-mobility users, provided that the nonpredictable users are handled appropriately.<|reference_end|> | arxiv | @article{shirani-mehr2008channel,
title={Channel State Prediction, Feedback and Scheduling for a Multiuser
MIMO-OFDM Downlink},
  author={Hooman Shirani-Mehr and Daniel N. Liu and Giuseppe Caire},
journal={arXiv preprint arXiv:0811.4630},
year={2008},
archivePrefix={arXiv},
eprint={0811.4630},
primaryClass={cs.IT math.IT}
} | shirani-mehr2008channel |
arxiv-5589 | 0811.4672 | Fast and Quality-Guaranteed Data Streaming in Resource-Constrained Sensor Networks | <|reference_start|>Fast and Quality-Guaranteed Data Streaming in Resource-Constrained Sensor Networks: In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to compress data streams online and continuously with a quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly to data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality-guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense that they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low-cost nature of our methods gives them a unique edge in applications with massive and fast streams, low-bandwidth networks, and nodes heavily constrained in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.<|reference_end|> | arxiv | @article{soroush2008fast,
title={Fast and Quality-Guaranteed Data Streaming in Resource-Constrained
Sensor Networks},
  author={Emad Soroush and Kui Wu and Jian Pei},
journal={arXiv preprint arXiv:0811.4672},
year={2008},
archivePrefix={arXiv},
eprint={0811.4672},
primaryClass={cs.DS cs.MM}
} | soroush2008fast |
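As an illustration of the line-segment approximation described in the abstract above, here is a greedy connected-segment sketch in Python with an L-infinity error guarantee. It only loosely mirrors the paper's "fast algorithm which creates connecting segments": the chord-based fit test below is an assumption for illustration, not the authors' optimal algorithm.

```python
def _fits(values, i, j, eps):
    """Check that every point strictly between i and j lies within eps
    of the chord from (i, values[i]) to (j, values[j])."""
    slope = (values[j] - values[i]) / (j - i)
    return all(abs(values[i] + slope * (k - i) - values[k]) <= eps
               for k in range(i + 1, j))

def segment_stream(values, eps):
    """Greedy piecewise-linear approximation: each segment is the line
    through its two endpoints, extended while all interior points stay
    within eps.  Returns a list of (start_index, end_index) segments
    whose endpoints connect (end of one = start of the next)."""
    segments = []
    start = 0
    n = len(values)
    while start < n - 1:
        end = start + 1
        while end + 1 < n and _fits(values, start, end + 1, eps):
            end += 1
        segments.append((start, end))
        start = end
    return segments
```

Because consecutive segments share an endpoint, only one point per segment needs transmitting, which is the low-cost property the abstract emphasizes; the greedy scan is linear time and constant space but does not guarantee the minimum segment count.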
arxiv-5590 | 0811.4681 | The Good, the Bad, and the Ugly: three different approaches to break their watermarking system | <|reference_start|>The Good, the Bad, and the Ugly: three different approaches to break their watermarking system: The Good is Blondie, a wandering gunman with a strong personal sense of honor. The Bad is Angel Eyes, a sadistic hitman who always hits his mark. The Ugly is Tuco, a Mexican bandit who's always only looking out for himself. Against the backdrop of the BOWS contest, they search for a watermark in gold buried in three images. Each knows only a portion of the gold's exact location, so for the moment they're dependent on each other. However, none are particularly inclined to share...<|reference_end|> | arxiv | @article{guelvouit2008the,
title={The Good, the Bad, and the Ugly: three different approaches to break
their watermarking system},
  author={Ga\"etan Le Guelvouit and Teddy Furon and Fran\c{c}ois Cayre},
  journal={Proc. IS\&T/SPIE Electronic Imaging, vol. 6505, San Jose, CA, Jan.
2007},
year={2008},
doi={10.1117/12.703968},
archivePrefix={arXiv},
eprint={0811.4681},
primaryClass={cs.GR cs.MM}
} | guelvouit2008the |
arxiv-5591 | 0811.4697 | Informed stego-systems in active warden context: statistical undetectability and capacity | <|reference_start|>Informed stego-systems in active warden context: statistical undetectability and capacity: Several authors have studied stego-systems based on the Costa scheme, but only a few gave both theoretical and experimental justifications of these schemes' performance in an active warden context. We provide in this paper a steganographic and comparative study of three informed stego-systems in an active warden context: scalar Costa scheme, trellis-coded quantization and spread transform scalar Costa scheme. By relying on analytical formulations and on experimental evaluations, we show the advantages and limits of each scheme in terms of statistical undetectability and capacity in the case of an active warden. The undetectability is given by the distance between the stego-signal and cover-signal distributions, measured by the Kullback-Leibler distance.<|reference_end|> | arxiv | @article{braci2008informed,
title={Informed stego-systems in active warden context: statistical
undetectability and capacity},
  author={Sofiane Braci and Claude Delpha and R\'emy Boyer and Ga\"etan Le Guelvouit},
journal={Proc. IEEE Conf. on Multimedia Signal Processing, Cairns,
Australia, Oct. 2008},
year={2008},
doi={10.1109/MMSP.2008.4665167},
archivePrefix={arXiv},
eprint={0811.4697},
primaryClass={cs.IT cs.MM math.IT}
} | braci2008informed |
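Statistical undetectability in the abstract above is measured by the Kullback-Leibler distance between the cover and stego distributions. A minimal Python sketch for discrete distributions (the histogram-based formulation is an assumption for illustration, not the paper's exact setup):

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as lists of probabilities (natural log)."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return float("inf")  # q assigns zero mass where p does not
            total += pi * log(pi / qi)
    return total
```

A perfectly secure stego-system in Cachin's sense has D(p || q) = 0 between cover and stego distributions; larger values mean the warden can more reliably detect the embedding.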
arxiv-5592 | 0811.4699 | Mapping Images with the Coherence Length Diagrams | <|reference_start|>Mapping Images with the Coherence Length Diagrams: Statistical pattern recognition methods based on the Coherence Length Diagram (CLD) have been proposed for medical image analyses, such as quantitative characterisation of human skin textures, and for polarized light microscopy of liquid crystal textures. Further investigations are made on image maps originating from such diagrams, and some examples related to the irregularity of microstructures are shown.<|reference_end|> | arxiv | @article{sparavigna2008mapping,
title={Mapping Images with the Coherence Length Diagrams},
  author={A. Sparavigna and R. Marazzato},
journal={International Journal of Software Engineering and Computing, pp.
53-57, 2009, Vol. 1},
year={2008},
archivePrefix={arXiv},
eprint={0811.4699},
primaryClass={cs.CV}
} | sparavigna2008mapping |
arxiv-5593 | 0811.4700 | Trellis-coded quantization for public-key steganography | <|reference_start|>Trellis-coded quantization for public-key steganography: This paper deals with public-key steganography in the presence of a passive warden. The aim is to hide secret messages within cover-documents without making the warden suspicious, and without any preliminary secret key sharing. Whereas a practical attempt has already been made to provide a solution to this problem, it suffers from poor flexibility (since the embedding and decoding steps highly depend on cover-signal statistics) and from little capacity compared to recent data hiding techniques. Using the same framework, this paper explores the use of trellis-coded quantization techniques (TCQ and turbo TCQ) to design a more efficient public-key scheme. Experiments on audio signals show great improvements considering Cachin's security criterion.<|reference_end|> | arxiv | @article{guelvouit2008trellis-coded,
title={Trellis-coded quantization for public-key steganography},
  author={Ga\"etan Le Guelvouit},
journal={arXiv preprint arXiv:0811.4700},
year={2008},
archivePrefix={arXiv},
eprint={0811.4700},
primaryClass={cs.MM cs.IT math.IT}
} | guelvouit2008trellis-coded |
arxiv-5594 | 0811.4702 | Information-theoretic resolution of perceptual WSS watermarking of non iid Gaussian signals | <|reference_start|>Information-theoretic resolution of perceptual WSS watermarking of non iid Gaussian signals: The theoretical foundations of data hiding have been revealed by formulating the problem as message communication over a noisy channel. We revisit the problem in light of a more general characterization of the watermark channel and of weighted distortion measures. Considering spread spectrum based information hiding, we release the usual assumption of an i.i.d. cover signal. The game-theoretic resolution of the problem reveals a generalized characterization of optimum attacks. The paper then derives closed-form expressions for the different parameters exhibiting a practical embedding and extraction technique.<|reference_end|> | arxiv | @article{pateux2008information-theoretic,
title={Information-theoretic resolution of perceptual WSS watermarking of non
i.i.d. Gaussian signals},
  author={St\'ephane Pateux and Ga\"etan Le Guelvouit and Christine Guillemot},
journal={Proc. European Signal Processing Conf., Toulouse, France, Sep.
2002},
year={2008},
archivePrefix={arXiv},
eprint={0811.4702},
primaryClass={cs.IT cs.MM math.IT}
} | pateux2008information-theoretic |
arxiv-5595 | 0811.4706 | Comparing Measures of Sparsity | <|reference_start|>Comparing Measures of Sparsity: Sparsity of representations of signals has been shown to be a key concept of fundamental importance in fields such as blind source separation, compression, sampling and signal analysis. The aim of this paper is to compare several commonly-used sparsity measures based on intuitive attributes. Intuitively, a sparse representation is one in which a small number of coefficients contain a large proportion of the energy. In this paper six properties are discussed (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates and Babies), each of which a sparsity measure should have. The main contributions of this paper are the proofs and the associated summary table, which classify commonly-used sparsity measures based on whether or not they satisfy these six propositions. Only one of these measures satisfies all six: the Gini Index.<|reference_end|> | arxiv | @article{hurley2008comparing,
title={Comparing Measures of Sparsity},
author={Niall P. Hurley and Scott T. Rickard},
journal={arXiv preprint arXiv:0811.4706},
year={2008},
archivePrefix={arXiv},
eprint={0811.4706},
primaryClass={cs.IT math.IT}
} | hurley2008comparing |
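The Gini Index singled out by the abstract above can be sketched as follows; the normalization over sorted absolute coefficients follows the commonly used definition and is an assumption for illustration, not taken from the paper:

```python
def gini_index(x):
    """Gini-index sparsity of a vector: 0 for a perfectly flat vector,
    approaching 1 as the energy concentrates in a single coefficient.
    Uses the standard form over ascending sorted absolute values:
    1 - 2 * sum_k (c_k / ||c||_1) * (n - k + 1/2) / n  for k = 1..n."""
    c = sorted(abs(v) for v in x)
    n = len(c)
    total = sum(c)
    if total == 0:
        return 0.0  # all-zero vector: treat as minimally sparse
    return 1.0 - 2.0 * sum(ck / total * (n - k - 0.5) / n
                           for k, ck in enumerate(c))
```

This matches the intuitive attribute in the abstract: a sparse representation puts most of the energy in few coefficients, and the Gini Index rewards exactly that concentration.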
arxiv-5596 | 0811.4713 | Compact Labelings For Efficient First-Order Model-Checking | <|reference_start|>Compact Labelings For Efficient First-Order Model-Checking: We consider graph properties that can be checked from labels, i.e., bit sequences, of logarithmic length attached to vertices. We prove that there exists such a labeling for checking a first-order formula with free set variables in the graphs of every class that is \emph{nicely locally cwd-decomposable}. This notion generalizes that of a \emph{nicely locally tree-decomposable} class. The graphs of such classes can be covered by graphs of bounded \emph{clique-width} with limited overlaps. We also consider such labelings for \emph{bounded} first-order formulas on graph classes of \emph{bounded expansion}. Some of these results are extended to counting queries.<|reference_end|> | arxiv | @article{courcelle2008compact,
title={Compact Labelings For Efficient First-Order Model-Checking},
  author={Bruno Courcelle (LaBRI, IUF) and Cyril Gavoille (LaBRI, INRIA Futurs)
  and Mamadou Moustapha Kant\'e (LaBRI)},
journal={Journal of Combinatorial Optimisation 21(1):19-46(2011)},
year={2008},
doi={10.1007/s10878-009-9260-7},
archivePrefix={arXiv},
eprint={0811.4713},
primaryClass={cs.DS cs.LO}
} | courcelle2008compact |
arxiv-5597 | 0811.4717 | Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval | <|reference_start|>Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval: One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) approaches is represented by the semantic gap, related to the complexity of the medical knowledge. Among the methods that are able to close this gap in CBMIR, the use of medical thesauri/ontologies has interesting perspectives due to the possibility of accessing on-line updated relevant webservices and to extract real-time medical semantic structured information. The CBMIR approach proposed in this paper uses the Unified Medical Language System's (UMLS) Metathesaurus to perform a semantic indexing and fusion of medical media. This fusion operates before the query processing (retrieval) and works at an UMLS-compliant conceptual indexing level. Our purpose is to study various techniques related to semantic data alignment, preprocessing, fusion, clustering and retrieval, by evaluating the various techniques and highlighting future research directions. The alignment and the preprocessing are based on partial text/image retrieval feedback and on the data structure. We analyze various probabilistic, fuzzy and evidence-based approaches for the fusion process and different similarity functions for the retrieval process. All the proposed methods are evaluated on the Cross Language Evaluation Forum's (CLEF) medical image retrieval benchmark, by focusing also on a more homogeneous component medical image database: the Pathology Education Instructional Resource (PEIR).<|reference_end|> | arxiv | @article{teodorescu2008prospective,
title={Prospective Study for Semantic Inter-Media Fusion in Content-Based
Medical Image Retrieval},
  author={Roxana Teodorescu (UPT, LAB) and Daniel Racoceanu (LAB, IPAAL) and
  Wee-Kheng Leow (IPAAL, NUS) and Vladimir Cretu (UPT)},
journal={arXiv preprint arXiv:0811.4717},
year={2008},
number={Onco-media Teodorescu 2008},
archivePrefix={arXiv},
eprint={0811.4717},
primaryClass={cs.IR cs.CL}
} | teodorescu2008prospective |
arxiv-5598 | 0811.4718 | On the Fourier Spectra of the Infinite Families of Quadratic APN Functions | <|reference_start|>On the Fourier Spectra of the Infinite Families of Quadratic APN Functions: It is well known that a quadratic function defined on a finite field of odd degree is almost bent (AB) if and only if it is almost perfect nonlinear (APN). For the even degree case there is no apparent relationship between the values in the Fourier spectrum of a function and the APN property. In this article we compute the Fourier spectrum of the new quadranomial family of APN functions. With this result, all known infinite families of APN functions now have their Fourier spectra and hence their nonlinearities computed.<|reference_end|> | arxiv | @article{bracken2008on,
title={On the Fourier Spectra of the Infinite Families of Quadratic APN
Functions},
  author={Carl Bracken and Zhengbang Zha},
journal={arXiv preprint arXiv:0811.4718},
year={2008},
archivePrefix={arXiv},
eprint={0811.4718},
primaryClass={cs.IT cs.CR cs.DM math.IT}
} | bracken2008on |
arxiv-5599 | 0811.4720 | Automated Induction for Complex Data Structures | <|reference_start|>Automated Induction for Complex Data Structures: We propose a procedure for automated implicit inductive theorem proving for equational specifications made of rewrite rules with conditions and constraints. The constraints are interpreted over constructor terms (representing data values), and may express syntactic equality, disequality, ordering and also membership in a fixed tree language. Constrained equational axioms between constructor terms are supported and can be used in order to specify complex data structures like sets, sorted lists, trees, powerlists... Our procedure is based on tree grammars with constraints, a formalism which can describe exactly the initial model of the given specification (when it is sufficiently complete and terminating). They are used in the inductive proofs first as an induction scheme for the generation of subgoals at induction steps, second for checking validity and redundancy criteria by reduction to an emptiness problem, and third for defining and solving membership constraints. We show that the procedure is sound and refutationally complete. It generalizes former test set induction techniques and yields natural proofs for several non-trivial examples presented in the paper, these examples are difficult to specify and carry on automatically with related induction procedures.<|reference_end|> | arxiv | @article{bouhoula2008automated,
title={Automated Induction for Complex Data Structures},
author={Adel Bouhoula and Florent Jacquemard},
journal={arXiv preprint arXiv:0811.4720},
year={2008},
archivePrefix={arXiv},
eprint={0811.4720},
primaryClass={cs.LO cs.SC}
} | bouhoula2008automated |
arxiv-5600 | 0811.4733 | Kinematic Analysis of a Serial - Parallel Machine Tool: the VERNE machine | <|reference_start|>Kinematic Analysis of a Serial - Parallel Machine Tool: the VERNE machine: The paper derives the inverse and the forward kinematic equations of a serial - parallel 5-axis machine tool: the VERNE machine. This machine is composed of a three-degree-of-freedom (DOF) parallel module and a two-DOF serial tilting table. The parallel module consists of a moving platform that is connected to a fixed base by three non-identical legs. These legs are connected in such a way that the combined effects of the three legs lead to an over-constrained mechanism with complex motion. This motion is defined as a simultaneous combination of rotation and translation. In this paper we propose symbolic methods that are able to calculate all kinematic solutions and identify the acceptable one by adding analytical constraints on the disposition of the legs of the parallel module.<|reference_end|> | arxiv | @article{kanaan2008kinematic,
title={Kinematic Analysis of a Serial - Parallel Machine Tool: the VERNE
machine},
  author={Daniel Kanaan (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat
  (IRCCyN)},
journal={Mechanism and Machine Theory 44, 2 (2009) 487-498},
year={2008},
doi={10.1016/j.mechmachtheory.2008.03.002},
archivePrefix={arXiv},
eprint={0811.4733},
primaryClass={cs.RO}
} | kanaan2008kinematic |