corpus_id: string (7-12 chars)
paper_id: string (9-16 chars)
title: string (1-261 chars)
abstract: string (70-4.02k chars)
source: string (1 distinct value)
bibtex: string (208-20.9k chars)
citation_key: string (6-100 chars)
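The column summary above can be read as the schema of one record per paper. As a quick illustration only, the Python sketch below models such a record and strips the <|reference_start|> / <|reference_end|> delimiters that wrap each abstract; the PaperRecord class, the clean_abstract helper, and the truncated example values are assumptions introduced here for illustration and are not part of the dataset itself.

# Minimal sketch of the record layout shown above (illustrative, not part of the dataset).
# Field names follow the column summary; the helper strips the <|reference_start|> /
# <|reference_end|> delimiters and the repeated title prefix found in each abstract.
from dataclasses import dataclass

@dataclass
class PaperRecord:
    corpus_id: str      # e.g. "arxiv-675601"
    paper_id: str       # e.g. "cs/0702045"
    title: str
    abstract: str       # wrapped in <|reference_start|> ... <|reference_end|>
    source: str         # single distinct value in this dump: "arxiv"
    bibtex: str
    citation_key: str   # e.g. "etkin2007gaussian"

def clean_abstract(raw: str) -> str:
    """Return the abstract body without delimiters or the leading title prefix."""
    text = raw.removeprefix("<|reference_start|>").removesuffix("<|reference_end|>")
    _, sep, body = text.partition(": ")   # the title is repeated before the first ": "
    return body if sep else text

# Usage with values copied (and truncated) from the first record below.
record = PaperRecord(
    corpus_id="arxiv-675601",
    paper_id="cs/0702045",
    title="Gaussian Interference Channel Capacity to Within One Bit",
    abstract="<|reference_start|>Gaussian Interference Channel Capacity to Within One Bit: "
             "The capacity of the two-user Gaussian interference channel has been open for "
             "thirty years.<|reference_end|>",
    source="arxiv",
    bibtex="@article{etkin2007gaussian, ...}",
    citation_key="etkin2007gaussian",
)
print(clean_abstract(record.abstract))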
arxiv-675601
cs/0702045
Gaussian Interference Channel Capacity to Within One Bit
<|reference_start|>Gaussian Interference Channel Capacity to Within One Bit: The capacity of the two-user Gaussian interference channel has been open for thirty years. The understanding of this problem has been limited. The best known achievable region is due to Han and Kobayashi, but its characterization is very complicated. It is also not known how tight the existing outer bounds are. In this work, we show that the existing outer bounds can in fact be arbitrarily loose in some parameter ranges, and by deriving new outer bounds, we show that a simplified Han-Kobayashi type scheme can achieve to within a single bit the capacity for all values of the channel parameters. We also show that the scheme is asymptotically optimal in certain high SNR regimes. Using our results, we provide a natural generalization of the point-to-point classical notion of degrees of freedom to interference-limited scenarios.<|reference_end|>
arxiv
@article{etkin2007gaussian, title={Gaussian Interference Channel Capacity to Within One Bit}, author={Raul Etkin, David Tse, Hua Wang}, journal={arXiv preprint arXiv:cs/0702045}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702045}, primaryClass={cs.IT math.IT} }
etkin2007gaussian
arxiv-675602
cs/0702046
Design and Analysis of the REESSE1+ Public Key Cryptosystem v2.21
<|reference_start|>Design and Analysis of the REESSE1+ Public Key Cryptosystem v2.21: In this paper, the authors give the definitions of a coprime sequence and a lever function, and describe the five algorithms and six characteristics of a prototypal public key cryptosystem which is used for encryption and signature, and is based on three new problems and one existing problem: the multivariate permutation problem (MPP), the anomalous subset product problem (ASPP), the transcendental logarithm problem (TLP), and the polynomial root finding problem (PRFP). They prove by reduction that MPP, ASPP, and TLP are computationally at least equivalent to the discrete logarithm problem (DLP) in the same prime field, and find some evidence which inclines people to believe that each of the new problems is harder than DLP, namely unsolvable in DLP subexponential time. They demonstrate the correctness of the decryption and the verification, deduce that the probability of a plaintext solution being nonunique is nearly zero, and analyze the exact security of the cryptosystem against recovering a plaintext from a ciphertext, extracting a private key from a public key or a signature, and forging a signature through known signatures, public keys, and messages, on the assumption that IFP, DLP, and LSSP can be solved. The studies show that the running times of effectual attack tasks are greater than or equal to O(2^n) so far when n = 80, 96, 112, or 128 with lg M = 696, 864, 1030, or 1216. From a practical standpoint, it should be researched further how to decrease the length of the modulus and to increase the speed of the decryption.<|reference_end|>
arxiv
@article{su2007design, title={Design and Analysis of the REESSE1+ Public Key Cryptosystem v2.21}, author={Shenghui Su and Shuwang Lv}, journal={Theoretical Computer Science, v426-427, Apr. 2012, pp. 91-117}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702046}, primaryClass={cs.CR cs.CC} }
su2007design
arxiv-675603
cs/0702047
Hierarchical Unambiguity
<|reference_start|>Hierarchical Unambiguity: We develop techniques to investigate relativized hierarchical unambiguous computation. We apply our techniques to generalize known constructs involving relativized unambiguity based complexity classes (UP and \mathcal{UP}) to new constructs involving arbitrary higher levels of the relativized unambiguous polynomial hierarchy (UPH). Our techniques are developed on constraints imposed by hierarchical arrangement of unambiguous nondeterministic polynomial-time Turing machines, and so they differ substantially, in applicability and in nature, from standard methods (such as the switching lemma [Hastad, Computational Limitations of Small-Depth Circuits, MIT Press, 1987]), which play roles in carrying out similar generalizations. Aside from achieving these generalizations, we resolve a question posed by Cai, Hemachandra, and Vyskoc [J. Cai, L. Hemachandra, and J. Vyskoc, Promises and fault-tolerant database access, In K. Ambos-Spies, S. Homer, and U. Schoening, editors, Complexity Theory, pages 101-146. Cambridge University Press, 1993] on an issue related to nonadaptive Turing access to UP and adaptive smart Turing access to \mathcal{UP}.<|reference_end|>
arxiv
@article{spakowski2007hierarchical, title={Hierarchical Unambiguity}, author={Holger Spakowski and Rahul Tripathi}, journal={arXiv preprint arXiv:cs/0702047}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702047}, primaryClass={cs.CC} }
spakowski2007hierarchical
arxiv-675604
cs/0702048
Finding Community Structure in Mega-scale Social Networks
<|reference_start|>Finding Community Structure in Mega-scale Social Networks: The community analysis algorithm proposed by Clauset, Newman, and Moore (the CNM algorithm) finds community structure in social networks. Unfortunately, the CNM algorithm does not scale well, and its use is practically limited to networks whose sizes are up to 500,000 nodes. The paper identifies that this inefficiency is caused by merging communities in an unbalanced manner. The paper introduces three kinds of metrics (consolidation ratio) to control the process of community analysis, trying to balance the sizes of the communities being merged. Three flavors of the CNM algorithm are built incorporating those metrics. The proposed techniques are tested using data sets obtained from an existing social networking service that hosts 5.5 million users. All the methods exhibit a dramatic improvement in execution efficiency in comparison with the original CNM algorithm and show high scalability. The fastest method processes a network with 1 million nodes in 5 minutes and a network with 4 million nodes in 35 minutes. Another one processes a network with 500,000 nodes in 50 minutes (7 times faster than the original algorithm), finds community structures that have improved modularity, and scales to a network with 5.5 million nodes.<|reference_end|>
arxiv
@article{wakita2007finding, title={Finding Community Structure in Mega-scale Social Networks}, author={Ken Wakita and Toshiyuki Tsurumi}, journal={arXiv preprint arXiv:cs/0702048}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702048}, primaryClass={cs.CY physics.soc-ph} }
wakita2007finding
arxiv-675605
cs/0702049
Parameterized Algorithms for Directed Maximum Leaf Problems
<|reference_start|>Parameterized Algorithms for Directed Maximum Leaf Problems: We prove that finding a rooted subtree with at least $k$ leaves in a digraph is a fixed parameter tractable problem. A similar result holds for finding rooted spanning trees with many leaves in digraphs from a wide family $\cal L$ that includes all strong and acyclic digraphs. This settles completely an open question of Fellows and solves another one for digraphs in $\cal L$. Our algorithms are based on the following combinatorial result which can be viewed as a generalization of many results for a `spanning tree with many leaves' in the undirected case, and which is interesting on its own: If a digraph $D\in \cal L$ of order $n$ with minimum in-degree at least 3 contains a rooted spanning tree, then $D$ contains one with at least $(n/2)^{1/5}-1$ leaves.<|reference_end|>
arxiv
@article{alon2007parameterized, title={Parameterized Algorithms for Directed Maximum Leaf Problems}, author={Noga Alon, Fedor Fomin, Gregory Gutin, Michael Krivelevich and Saket Saurabh}, journal={arXiv preprint arXiv:cs/0702049}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702049}, primaryClass={cs.DS cs.DM} }
alon2007parameterized
arxiv-675606
cs/0702050
Permutation Decoding and the Stopping Redundancy Hierarchy of Linear Block Codes
<|reference_start|>Permutation Decoding and the Stopping Redundancy Hierarchy of Linear Block Codes: We investigate the stopping redundancy hierarchy of linear block codes and its connection to permutation decoding techniques. An element in the ordered list of stopping redundancy values represents the smallest number of possibly linearly dependent rows in any parity-check matrix of a code that avoids stopping sets of a given size. Redundant parity-check equations can be shown to have a similar effect on decoding performance as permuting the coordinates of the received codeword according to a selected set of automorphisms of the code. Based on this finding we develop new decoding strategies for data transmission over the binary erasure channel that combine iterative message passing and permutation decoding in order to avoid errors confined to stopping sets. We also introduce the notion of s-SAD sets, containing the smallest number of automorphisms of a code with the property that they move any set of not more than s erasures into positions that do not correspond to stopping sets within a judiciously chosen parity-check matrix.<|reference_end|>
arxiv
@article{hehn2007permutation, title={Permutation Decoding and the Stopping Redundancy Hierarchy of Linear Block Codes}, author={Thorsten Hehn, Olgica Milenkovic, Stefan Laendner, Johannes B. Huber}, journal={arXiv preprint arXiv:cs/0702050}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702050}, primaryClass={cs.IT math.IT} }
hehn2007permutation
arxiv-675607
cs/0702051
The Gaussian multiple access wire-tap channel: wireless secrecy and cooperative jamming
<|reference_start|>The Gaussian multiple access wire-tap channel: wireless secrecy and cooperative jamming: We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed eavesdropper. We define two suitable secrecy measures, termed individual and collective, to reflect the confidence in the system for this multi-access environment. We determine achievable rates such that secrecy to some pre-determined degree can be maintained, using Gaussian codebooks. We also find outer bounds for the case when the eavesdropper receives a degraded version of the intended receiver's signal. In the degraded case, Gaussian codewords are shown to achieve the sum capacity for collective constraints. In addition, a TDMA scheme is shown to also achieve sum capacity for both sets of constraints. Numerical results showing the new rate region are presented and compared with the capacity region of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints. We then find the secrecy sum-rate maximizing power allocations for the transmitters, and show that a cooperative jamming scheme can be used to increase achievable rates in this scenario.<|reference_end|>
arxiv
@article{tekin2007the, title={The Gaussian multiple access wire-tap channel: wireless secrecy and cooperative jamming}, author={Ender Tekin, Aylin Yener}, journal={arXiv preprint arXiv:cs/0702051}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702051}, primaryClass={cs.IT cs.CR math.IT} }
tekin2007the
arxiv-675608
cs/0702052
On Random Network Coding for Multicast
<|reference_start|>On Random Network Coding for Multicast: Random linear network coding is a particularly decentralized approach to the multicast problem. Use of random network codes introduces, however, a non-zero probability that some sinks will not be able to successfully decode the required sources. One of the main theoretical motivations for random network codes stems from the lower bound on the probability of successful decoding reported by Ho et al. (2003). This result demonstrates that all sinks in a linearly solvable network can successfully decode all sources provided that the random code field size is large enough. This paper develops a new bound on the probability of successful decoding.<|reference_end|>
arxiv
@article{tauste-campo2007on, title={On Random Network Coding for Multicast}, author={Adria Tauste-Campo and Alex Grant}, journal={arXiv preprint arXiv:cs/0702052}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702052}, primaryClass={cs.IT math.IT} }
tauste-campo2007on
arxiv-675609
cs/0702053
The DFAs of Finitely Different Languages
<|reference_start|>The DFAs of Finitely Different Languages: Two languages are "finitely different" if their symmetric difference is finite. We consider the DFAs of finitely different regular languages and find major structural similarities. We proceed to consider the smallest DFAs that recognize a language finitely different from some given DFA. Such "f-minimal" DFAs are not unique, and this non-uniqueness is characterized. Finally, we offer a solution to the minimization problem of finding such f-minimal DFAs.<|reference_end|>
arxiv
@article{badr2007the, title={The DFAs of Finitely Different Languages}, author={Andrew Badr, Ian Shipman}, journal={arXiv preprint arXiv:cs/0702053}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702053}, primaryClass={cs.CC} }
badr2007the
arxiv-675610
cs/0702054
Nash equilibria in Voronoi games on graphs
<|reference_start|>Nash equilibria in Voronoi games on graphs: In this paper we study a game where every player is to choose a vertex (facility) in a given undirected graph. All vertices (customers) are then assigned to the closest facilities and a player's payoff is the number of customers assigned to it. We show that deciding the existence of a Nash equilibrium for a given graph is NP-hard, which to our knowledge is the first result of this kind for a zero-sum game. We also introduce a new measure, the social cost discrepancy, defined as the ratio of the costs between the worst and the best Nash equilibria. We show that the social cost discrepancy in our game is Omega(sqrt(n/k)) and O(sqrt(kn)), where n is the number of vertices and k the number of players.<|reference_end|>
arxiv
@article{durr2007nash, title={Nash equilibria in Voronoi games on graphs}, author={Christoph Durr and Nguyen Kim Thang}, journal={arXiv preprint arXiv:cs/0702054}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702054}, primaryClass={cs.GT cs.DS} }
durr2007nash
arxiv-675611
cs/0702055
On the possibility of making the complete computer model of a human brain
<|reference_start|>On the possibility of making the complete computer model of a human brain: The development of an algorithm for building a neural network from the corresponding parts of a DNA code is discussed.<|reference_end|>
arxiv
@article{paraskevov2007on, title={On the possibility of making the complete computer model of a human brain}, author={A.V. Paraskevov}, journal={arXiv preprint arXiv:cs/0702055}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702055}, primaryClass={cs.NE} }
paraskevov2007on
arxiv-675612
cs/0702056
A probabilistic analysis of a leader election algorithm
<|reference_start|>A probabilistic analysis of a leader election algorithm: A {\em leader election} algorithm is an elimination process that recursively divides an initial group of n items into two subgroups, eliminates one subgroup and continues the procedure until a subgroup is of size 1. In this paper the biased case is analyzed. We are interested in the {\em cost} of the algorithm, i.e. the number of operations needed until the algorithm stops. Using a probabilistic approach, the asymptotic behavior of the algorithm is shown to be related to the behavior of a hitting time of two random sequences on [0,1].<|reference_end|>
arxiv
@article{mohamed2007a, title={A probabilistic analysis of a leader election algorithm}, author={Hanene Mohamed (INRIA Rocquencourt)}, journal={Fourth Colloquium on Mathematics and Computer Science Algorithms, Trees, Combinatorics and Probabilities (2006) 225-236}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702056}, primaryClass={cs.DS} }
mohamed2007a
arxiv-675613
cs/0702057
An Efficient Algorithm to Recognize Locally Equivalent Graphs in Non-Binary Case
<|reference_start|>An Efficient Algorithm to Recognize Locally Equivalent Graphs in Non-Binary Case: Let $v$ be a vertex of a graph $G$. By the local complementation of $G$ at $v$ we mean to complement the subgraph induced by the neighbors of $v$. This operator can be generalized as follows. Assume that each edge of $G$ has a label in the finite field $\mathbf{F}_q$, and let $(g_{ij})$ be the set of labels ($g_{ij}$ is the label of edge $ij$). We define two types of operators. For the first one, let $v$ be a vertex of $G$ and $a\in \mathbf{F}_q$, and obtain the graph with labels $g'_{ij}=g_{ij}+ag_{vi}g_{vj}$. For the second, if $0\neq b\in \mathbf{F}_q$, the resulting graph has labels $g''_{vi}=bg_{vi}$ and $g''_{ij}=g_{ij}$ for $i,j$ unequal to $v$. It is clear that if the field is binary, these operators are just the local complementations described above. The problem of whether two graphs are equivalent under local complementations has been studied \cite{bouchalg}. Here we consider the general case and, assuming that $q$ is odd, present the first known efficient algorithm to verify whether two graphs are locally equivalent or not.<|reference_end|>
arxiv
@article{bahramgiri2007an, title={An Efficient Algorithm to Recognize Locally Equivalent Graphs in Non-Binary Case}, author={Mohsen Bahramgiri, Salman Beigi}, journal={arXiv preprint arXiv:cs/0702057}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702057}, primaryClass={cs.DS} }
bahramgiri2007an
arxiv-675614
cs/0702058
Exploring k-Colorability
<|reference_start|>Exploring k-Colorability: An introductory paper on the graph k-colorability problem.<|reference_end|>
arxiv
@article{li2007exploring, title={Exploring k-Colorability}, author={Kia Kai Li}, journal={arXiv preprint arXiv:cs/0702058}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702058}, primaryClass={cs.CC} }
li2007exploring
arxiv-675615
cs/0702059
Redundancy-Related Bounds on Generalized Huffman Codes
<|reference_start|>Redundancy-Related Bounds on Generalized Huffman Codes: This paper presents new lower and upper bounds for the compression rate of binary prefix codes optimized over memoryless sources according to various nonlinear codeword length objectives. Like the most well-known redundancy bounds for minimum average redundancy coding - Huffman coding - these are in terms of a form of entropy and/or the probability of an input symbol, often the most probable one. The bounds here, some of which are tight, improve on known bounds of the form L in [H,H+1), where H is some form of entropy in bits (or, in the case of redundancy objectives, 0) and L is the length objective, also in bits. The objectives explored here include exponential-average length, maximum pointwise redundancy, and exponential-average pointwise redundancy (also called dth exponential redundancy). The first of these relates to various problems involving queueing, uncertainty, and lossless communications; the second relates to problems involving Shannon coding and universal modeling. For these two objectives we also explore the related problem of the necessary and sufficient conditions for the shortest codeword of a code being a specific length.<|reference_end|>
arxiv
@article{baer2007redundancy-related, title={Redundancy-Related Bounds on Generalized Huffman Codes}, author={Michael B. Baer}, journal={arXiv preprint arXiv:cs/0702059}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702059}, primaryClass={cs.IT math.IT} }
baer2007redundancy-related
arxiv-675616
cs/0702060
A local balance property of episturmian words
<|reference_start|>A local balance property of episturmian words: We prove that episturmian words and Arnoux-Rauzy sequences can be characterized using a local balance property. We also give a new characterization of epistandard words and show that the set of finite words that are not factors of an episturmian word is not context-free.<|reference_end|>
arxiv
@article{richomme2007a, title={A local balance property of episturmian words}, author={Gw\'ena\"el Richomme (LaRIA)}, journal={arXiv preprint arXiv:cs/0702060}, year={2007}, number={LaRIA-LRR-2007-02}, archivePrefix={arXiv}, eprint={cs/0702060}, primaryClass={cs.DM} }
richomme2007a
arxiv-675617
cs/0702061
Sudo-Lyndon
<|reference_start|>Sudo-Lyndon: Based on Lyndon words, a new Sudoku-like puzzle is presented and some relative theoretical questions are proposed.<|reference_end|>
arxiv
@article{richomme2007sudo-lyndon, title={Sudo-Lyndon}, author={Gw\'ena\"el Richomme (LaRIA)}, journal={arXiv preprint arXiv:cs/0702061}, year={2007}, number={LaRIA-LRR-2007-03}, archivePrefix={arXiv}, eprint={cs/0702061}, primaryClass={cs.DM} }
richomme2007sudo-lyndon
arxiv-675618
cs/0702062
Noise Limited Computational Speed
<|reference_start|>Noise Limited Computational Speed: In modern transistor based logic gates, the impact of noise on computation has become increasingly relevant since the voltage scaling strategy, aimed at decreasing the dissipated power, has increased the probability of error due to the reduced switching threshold voltages. In this paper we discuss the role of noise in a two-state model that mimics the dynamics of standard logic gates and show that the presence of noise sets a fundamental limit to the computing speed. An optimal idle time interval that minimizes the error probability is derived.<|reference_end|>
arxiv
@article{gammaitoni2007noise, title={Noise Limited Computational Speed}, author={Luca Gammaitoni}, journal={L. Gammaitoni, Applied Physics Letters, 11/2007, Volume 91, p.3, (2007)}, year={2007}, doi={10.1063/1.2817968}, archivePrefix={arXiv}, eprint={cs/0702062}, primaryClass={cs.AR cs.PF} }
gammaitoni2007noise
arxiv-675619
cs/0702063
Entropy vectors and network codes
<|reference_start|>Entropy vectors and network codes: We consider a network multicast example that relates the solvability of the multicast problem with the existence of an entropy function. As a result, we provide an alternative approach to the proving of the insufficiency of linear (and abelian) network codes and demonstrate the utility of non-Shannon inequalities to tighten outer bounds on network coding capacity regions.<|reference_end|>
arxiv
@article{chan2007entropy, title={Entropy vectors and network codes}, author={Terence Chan and Alex Grant}, journal={arXiv preprint arXiv:cs/0702063}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702063}, primaryClass={cs.IT cs.NI math.IT} }
chan2007entropy
arxiv-675620
cs/0702064
Group characterizable entropy functions
<|reference_start|>Group characterizable entropy functions: This paper studies properties of entropy functions that are induced by groups and subgroups. We show that many information-theoretic properties of these group-induced entropy functions have corresponding group-theoretic interpretations. We then propose an extension method to find outer bounds for these group-induced entropy functions.<|reference_end|>
arxiv
@article{chan2007group, title={Group characterizable entropy functions}, author={Terence H. Chan}, journal={arXiv preprint arXiv:cs/0702064}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702064}, primaryClass={cs.IT math.IT} }
chan2007group
arxiv-675621
cs/0702065
Towards a New ODE Solver Based on Cartan's Equivalence Method
<|reference_start|>Towards a New ODE Solver Based on Cartan's Equivalence Method: The aim of the present paper is to propose an algorithm for a new ODE solver which should improve the ability of current solvers to handle second order differential equations. The paper also provides a theoretical result revealing the relationship between the change of coordinates that maps the generic equation to a given target equation and the symmetry $\D$-groupoid of this target.<|reference_end|>
arxiv
@article{dridi2007towards, title={Towards a New ODE Solver Based on Cartan's Equivalence Method}, author={R. Dridi and M. Petitot}, journal={arXiv preprint arXiv:cs/0702065}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702065}, primaryClass={cs.SC} }
dridi2007towards
arxiv-675622
cs/0702066
Comments on "Design and performance evaluation of load distribution strategies for multiple loads on heterogeneous linear daisy chain networks''
<|reference_start|>Comments on "Design and performance evaluation of load distribution strategies for multiple loads on heterogeneous linear daisy chain networks'': Min, Veeravalli, and Barlas proposed strategies to minimize the overall execution time of one or several divisible loads on a heterogeneous linear network, using one or more installments. We show on a very simple example that the proposed approach does not always produce a solution and that, when it does, the solution is often suboptimal. We also show how to find an optimal scheduling for any instance, once the number of installments per load is given. Finally, we formally prove that under a linear cost model, as in the original paper, an optimal schedule has an infinite number of installments. Such a cost model can therefore not be used to design practical multi-installment strategies.<|reference_end|>
arxiv
@article{gallet2007comments, title={Comments on "Design and performance evaluation of load distribution strategies for multiple loads on heterogeneous linear daisy chain networks''}, author={Matthieu Gallet (LIP, INRIA Rh\^one-Alpes), Yves Robert (LIP, INRIA Rh\^one-Alpes), Fr\'ed\'eric Vivien (LIP, INRIA Rh\^one-Alpes)}, journal={arXiv preprint arXiv:cs/0702066}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702066}, primaryClass={cs.DC} }
gallet2007comments
arxiv-675623
cs/0702067
The Haar Wavelet Transform of a Dendrogram: Additional Notes
<|reference_start|>The Haar Wavelet Transform of a Dendrogram: Additional Notes: We consider the wavelet transform of a finite, rooted, node-ranked, $p$-way tree, focusing on the case of binary ($p = 2$) trees. We study a Haar wavelet transform on this tree. Wavelet transforms allow for multiresolution analysis through translation and dilation of a wavelet function. We explore how this works in our tree context.<|reference_end|>
arxiv
@article{murtagh2007the, title={The Haar Wavelet Transform of a Dendrogram: Additional Notes}, author={Fionn Murtagh}, journal={arXiv preprint arXiv:cs/0702067}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702067}, primaryClass={cs.IR} }
murtagh2007the
arxiv-675624
cs/0702068
Distributed Decision Through Self-Synchronizing Sensor Networks in the Presence of Propagation Delays and Nonreciprocal Channels
<|reference_start|>Distributed Decision Through Self-Synchronizing Sensor Networks in the Presence of Propagation Delays and Nonreciprocal Channels: In this paper we propose and analyze a distributed algorithm for achieving globally optimal decisions, either estimation or detection, through a self-synchronization mechanism among linearly coupled integrators initialized with local measurements. We model the interaction among the nodes as a directed graph with weights dependent on the radio interface, and we pay special attention to the effect of the propagation delays occurring in the exchange of data among sensors, as a function of the network geometry. We derive necessary and sufficient conditions for the proposed system to reach a consensus on globally optimal decision statistics. One of the major results proved in this work is that a consensus is achieved for any bounded delay condition if and only if the directed graph is quasi-strongly connected. We also provide a closed form expression for the global consensus, showing that the effect of delays is, in general, to introduce a bias in the final decision. The closed form expression is also useful to modify the consensus mechanism in order to get rid of the bias with minimum extra complexity.<|reference_end|>
arxiv
@article{scutari2007distributed, title={Distributed Decision Through Self-Synchronizing Sensor Networks in the Presence of Propagation Delays and Nonreciprocal Channels}, author={Gesualdo Scutari, Sergio Barbarossa and Loreto Pescosolido}, journal={arXiv preprint arXiv:cs/0702068}, year={2007}, doi={10.1109/SPAWC.2007.4401363}, archivePrefix={arXiv}, eprint={cs/0702068}, primaryClass={cs.IT cs.MA math.IT} }
scutari2007distributed
arxiv-675625
cs/0702069
Feasible reactivity in a synchronous pi-calculus
<|reference_start|>Feasible reactivity in a synchronous pi-calculus: Reactivity is an essential property of a synchronous program. Informally, it guarantees that at each instant the program fed with an input will `react', producing an output. In the present work, we consider a refined property that we call `feasible reactivity'. Beyond reactivity, this property guarantees that at each instant both the size of the program and its reaction time are bounded by a polynomial in the size of the parameters at the beginning of the computation and the size of the largest input. We propose a method to annotate programs and we develop related static analysis techniques that guarantee feasible reactivity for programs expressed in the S-pi-calculus. The latter is a synchronous version of the pi-calculus based on the SL synchronous programming model.<|reference_end|>
arxiv
@article{amadio2007feasible, title={Feasible reactivity in a synchronous pi-calculus}, author={Roberto Amadio (PPS), Frederique Dabrowski}, journal={Proceedings ACM SIGPLAN Principles and Practice of Declarative Programming (16/07/2007) 221-231}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702069}, primaryClass={cs.LO} }
amadio2007feasible
arxiv-675626
cs/0702070
A Practical Approach to Lossy Joint Source-Channel Coding
<|reference_start|>A Practical Approach to Lossy Joint Source-Channel Coding: This work is devoted to practical joint source channel coding. Although the proposed approach has more general scope, for the sake of clarity we focus on a specific application example, namely, the transmission of digital images over noisy binary-input output-symmetric channels. The basic building blocks of most state-of-the-art source coders are: 1) a linear transformation; 2) scalar quantization of the transform coefficients; 3) probability modeling of the sequence of quantization indices; 4) an entropy coding stage. We identify the weakness of the conventional separated source-channel coding approach in the catastrophic behavior of the entropy coding stage. Hence, we replace this stage with linear coding, which maps directly the sequence of redundant quantizer output symbols into a channel codeword. We show that this approach does not entail any loss of optimality in the asymptotic regime of large block length. However, in the practical regime of finite block length and low decoding complexity our approach yields very significant improvements. Furthermore, our scheme allows us to retain the transform, quantization and probability modeling of current state-of-the-art source coders, which are carefully matched to the features of specific classes of sources. In our working example, we make use of the ``bit-planes'' and ``contexts'' model defined by the JPEG2000 standard and we re-interpret the underlying probability model as a sequence of conditionally Markov sources. The Markov structure allows us to derive a simple successive coding and decoding scheme, where the latter is based on iterative Belief Propagation. We provide a construction example of the proposed scheme based on punctured Turbo Codes and we demonstrate the gain over a conventional separated scheme by running extensive numerical experiments on test images.<|reference_end|>
arxiv
@article{fresia2007a, title={A Practical Approach to Lossy Joint Source-Channel Coding}, author={Maria Fresia, Giuseppe Caire}, journal={arXiv preprint arXiv:cs/0702070}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702070}, primaryClass={cs.IT math.IT} }
fresia2007a
arxiv-675627
cs/0702071
What is needed to exploit knowledge of primary transmissions?
<|reference_start|>What is needed to exploit knowledge of primary transmissions?: Recently, Tarokh and others have raised the possibility that a cognitive radio might know the interference signal being transmitted by a strong primary user in a non-causal way, and use this knowledge to increase its data rates. However, there is a subtle difference between knowing the signal transmitted by the primary and the actual interference at our receiver since there is a wireless channel between these two points. We show that even an unknown phase results in a substantial decrease in the data rates that can be achieved, and thus there is a need to feedback interference channel estimates to the cognitive transmitter. We then consider the case of fading channels. We derive an upper bound on the rate for given outage error probability for faded dirt. We give a scheme that uses appropriate "training" to obtain such estimates and quantify this scheme's required overhead as a function of the relevant coherence time and interference power.<|reference_end|>
arxiv
@article{grover2007what, title={What is needed to exploit knowledge of primary transmissions?}, author={Pulkit Grover and Anant Sahai}, journal={arXiv preprint arXiv:cs/0702071}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702071}, primaryClass={cs.IT math.IT} }
grover2007what
arxiv-675628
cs/0702072
Logic Programming with Satisfiability
<|reference_start|>Logic Programming with Satisfiability: This paper presents a Prolog interface to the MiniSat satisfiability solver. Logic programming with satisfiability combines the strengths of the two paradigms: logic programming for encoding search problems into satisfiability on the one hand and efficient SAT solving on the other. This synergy between these two exposes a programming paradigm which we propose here as a logic programming pearl. To illustrate logic programming with SAT solving we give an example Prolog program which solves instances of Partial MAXSAT.<|reference_end|>
arxiv
@article{codish2007logic, title={Logic Programming with Satisfiability}, author={Michael Codish, Vitaly Lagoon, and Peter J. Stuckey}, journal={Theory and Practice of Logic Programming: 8(1):121-128, 2008}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702072}, primaryClass={cs.PL cs.AI} }
codish2007logic
arxiv-675629
cs/0702073
Tradeoff between decoding complexity and rate for codes on graphs
<|reference_start|>Tradeoff between decoding complexity and rate for codes on graphs: We consider transmission over a general memoryless channel, with bounded decoding complexity per bit under message passing decoding. We show that the achievable rate is bounded below capacity if there is a finite success in the decoding in a specified number of operations per bit at the decoder for some codes on graphs. These codes include LDPC and LDGM codes. Good performance with low decoding complexity suggests strong local structures in the graphs of these codes, which are detrimental to the code rate asymptotically. The proof method leads to an interesting necessary condition on the code structures which could achieve capacity with bounded decoding complexity. We also show that if a code sequence achieves a rate epsilon close to the channel capacity, the decoding complexity scales at least as O(log(1/epsilon)).<|reference_end|>
arxiv
@article{grover2007tradeoff, title={Tradeoff between decoding complexity and rate for codes on graphs}, author={Pulkit Grover}, journal={arXiv preprint arXiv:cs/0702073}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702073}, primaryClass={cs.IT cs.CC math.IT} }
grover2007tradeoff
arxiv-675630
cs/0702074
Dynamic Random Geometric Graphs
<|reference_start|>Dynamic Random Geometric Graphs: In this work we introduce Dynamic Random Geometric Graphs as a basic rough model for mobile wireless sensor networks, where communication distances are set to the known threshold for connectivity of static random geometric graphs. We provide precise asymptotic results for the expected length of the connectivity and disconnectivity periods of the network. We believe the formal tools developed in this work could be of use in future studies in more concrete settings. In addition, for static random geometric graphs at the threshold for connectivity, we provide asymptotic expressions on the probability of existence of components according to their sizes.<|reference_end|>
arxiv
@article{diaz2007dynamic, title={Dynamic Random Geometric Graphs}, author={Josep Diaz, Dieter Mitsche, Xavier Perez}, journal={arXiv preprint arXiv:cs/0702074}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702074}, primaryClass={cs.DM} }
diaz2007dynamic
arxiv-675631
cs/0702075
Firebird Database Backup by Serialized Database Table Dump
<|reference_start|>Firebird Database Backup by Serialized Database Table Dump: This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. This utility, fb_dump and fb_load, for dumping and loading respectively, retrieves each database table using kinterbasdb and serializes the data using the marshal module. This utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to back up and restore single database tables, which might help to recover corrupted databases. Secondly, the output is in text-coded format (from the marshal module), making it more resilient than a compressed text backup, as in the case of using gbak.<|reference_end|>
arxiv
@article{ling2007firebird, title={Firebird Database Backup by Serialized Database Table Dump}, author={Maurice HT Ling}, journal={Ling, Maurice HT. 2007. Firebird Database Backup by Serialized Database Table Dump. The Python Papers 2 (1): 10-14}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702075}, primaryClass={cs.DB} }
ling2007firebird
arxiv-675632
cs/0702076
A First Step Towards Automatically Building Network Representations
<|reference_start|>A First Step Towards Automatically Building Network Representations: To fully harness Grids, users or middlewares must have some knowledge of the topology of the platform interconnection network. As such knowledge is usually not available, one must use tools which automatically build a topological network model through some measurements. In this article, we define a methodology to assess the quality of these network model building tools, and we apply this methodology to representatives of the main classes of model builders and to two new algorithms. We show that none of the main existing techniques build models that enable accurate prediction of the running time of simple application kernels for actual platforms. However, some of the new algorithms we propose give excellent results in a wide range of situations.<|reference_end|>
arxiv
@article{eyraud-dubois2007a, title={A First Step Towards Automatically Building Network Representations}, author={Lionel Eyraud-Dubois (INRIA Rh\^one-Alpes, LIP), Arnaud Legrand (ID-IMAG, INRIA Rh\^one-Alpes / ID-IMAG), Martin Quinson (INRIA Lorraine - LORIA), Fr\'ed\'eric Vivien (INRIA Rh\^one-Alpes, LIP)}, journal={arXiv preprint arXiv:cs/0702076}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702076}, primaryClass={cs.DC} }
eyraud-dubois2007a
arxiv-675633
cs/0702077
Properties of Rank Metric Codes
<|reference_start|>Properties of Rank Metric Codes: This paper investigates general properties of codes with the rank metric. We first investigate asymptotic packing properties of rank metric codes. Then, we study sphere covering properties of rank metric codes, derive bounds on their parameters, and investigate their asymptotic covering properties. Finally, we establish several identities that relate the rank weight distribution of a linear code to that of its dual code. One of our identities is the counterpart of the MacWilliams identity for the Hamming metric, and it has a different form from the identity by Delsarte.<|reference_end|>
arxiv
@article{gadouleau2007properties, title={Properties of Rank Metric Codes}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:cs/0702077}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702077}, primaryClass={cs.IT math.IT} }
gadouleau2007properties
arxiv-675634
cs/0702078
A Local Algorithm for Finding Dense Subgraphs
<|reference_start|>A Local Algorithm for Finding Dense Subgraphs: We present a local algorithm for finding dense subgraphs of bipartite graphs, according to the definition of density proposed by Kannan and Vinay. Our algorithm takes as input a bipartite graph with a specified starting vertex, and attempts to find a dense subgraph near that vertex. We prove that for any subgraph S with k vertices and density theta, there are a significant number of starting vertices within S for which our algorithm produces a subgraph S' with density theta / O(log n) on at most O(D k^2) vertices, where D is the maximum degree. The running time of the algorithm is O(D k^2), independent of the number of vertices in the graph.<|reference_end|>
arxiv
@article{andersen2007a, title={A Local Algorithm for Finding Dense Subgraphs}, author={Reid Andersen}, journal={arXiv preprint arXiv:cs/0702078}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702078}, primaryClass={cs.DS cs.CC} }
andersen2007a
arxiv-675635
cs/0702079
The Hadwiger Number of Jordan Regions is Unbounded
<|reference_start|>The Hadwiger Number of Jordan Regions is Unbounded: We show that for every n > 0 there is a planar topological disk A_0 and n translates A_1, A_2, ..., A_n of A_0 such that the interiors of A_0, ... A_n are pairwise disjoint, but with each A_i touching A_0 for 1 <= i <= n.<|reference_end|>
arxiv
@article{cheong2007the, title={The Hadwiger Number of Jordan Regions is Unbounded}, author={Otfried Cheong, Mira Lee}, journal={arXiv preprint arXiv:cs/0702079}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702079}, primaryClass={cs.CG math.MG} }
cheong2007the
arxiv-675636
cs/0702080
Sparse geometric graphs with small dilation
<|reference_start|>Sparse geometric graphs with small dilation: Given a set S of n points in R^D, and an integer k such that 0 <= k < n, we show that a geometric graph with vertex set S, at most n - 1 + k edges, maximum degree five, and dilation O(n / (k+1)) can be computed in time O(n log n). For any k, we also construct planar n-point sets for which any geometric graph with n-1+k edges has dilation Omega(n/(k+1)); a slightly weaker statement holds if the points of S are required to be in convex position.<|reference_end|>
arxiv
@article{aronov2007sparse, title={Sparse geometric graphs with small dilation}, author={Boris Aronov, Mark de Berg, Otfried Cheong, Joachim Gudmundsson, Herman Haverkort, Michiel Smid, Antoine Vigneron}, journal={arXiv preprint arXiv:cs/0702080}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702080}, primaryClass={cs.CG} }
aronov2007sparse
arxiv-675637
cs/0702081
Random Sentences from a Generalized Phrase-Structure Grammar Interpreter
<|reference_start|>Random Sentences from a Generalized Phrase-Structure Grammar Interpreter: In numerous domains in cognitive science it is often useful to have a source for randomly generated corpora. These corpora may serve as a foundation for artificial stimuli in a learning experiment (e.g., Ellefson & Christiansen, 2000), or as input into computational models (e.g., Christiansen & Dale, 2001). The following compact and general C program interprets a phrase-structure grammar specified in a text file. It follows parameters set at a Unix or Unix-based command-line and generates a corpus of random sentences from that grammar.<|reference_end|>
arxiv
@article{dale2007random, title={Random Sentences from a Generalized Phrase-Structure Grammar Interpreter}, author={Rick Dale}, journal={arXiv preprint arXiv:cs/0702081}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702081}, primaryClass={cs.CL} }
dale2007random
arxiv-675638
cs/0702082
Invariant template matching in systems with spatiotemporal coding: a vote for instability
<|reference_start|>Invariant template matching in systems with spatiotemporal coding: a vote for instability: We consider the design of a pattern recognition system that matches templates to images, both of which are spatially sampled and encoded as temporal sequences. The image is subject to a combination of various perturbations. These include ones that can be modeled as parameterized uncertainties such as image blur, luminance, translation, and rotation as well as unmodeled ones. Biological and neural systems require that these perturbations be processed through a minimal number of channels by simple adaptation mechanisms. We found that the most suitable mathematical framework to meet this requirement is that of weakly attracting sets. This framework provides us with a normative and unifying solution to the pattern recognition problem. We analyze the consequences of its explicit implementation in neural systems. Several properties inherent to the systems designed in accordance with our normative mathematical argument coincide with known empirical facts. This is illustrated in mental rotation, visual search and blur/intensity adaptation. We demonstrate how our results can be applied to a range of practical problems in template matching and pattern recognition.<|reference_end|>
arxiv
@article{tyukin2007invariant, title={Invariant template matching in systems with spatiotemporal coding: a vote for instability}, author={Ivan Tyukin, Tatiana Tyukina, Cees van Leeuwen}, journal={Neural Networks, vol. 22, no. 4, (2009), 425-449}, year={2007}, doi={10.1016/j.neunet.2009.01.014}, archivePrefix={arXiv}, eprint={cs/0702082}, primaryClass={cs.CV cs.AI} }
tyukin2007invariant
arxiv-675639
cs/0702083
Improving Prolog programs: Refactoring for Prolog
<|reference_start|>Improving Prolog programs: Refactoring for Prolog: Refactoring is an established technique from the object-oriented (OO) programming community to restructure code: it aims at improving software readability, maintainability and extensibility. Although refactoring is not tied to the OO-paradigm in particular, its ideas have not been applied to Logic Programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO-paradigm, while others have been specifically designed for Prolog. The discrepancy between intended and operational semantics in Prolog is also addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed and the experience with applying ViPReSS to a large Prolog legacy system is reported. The main conclusion is that refactoring is both a viable technique in Prolog and a rather desirable one.<|reference_end|>
arxiv
@article{serebrenik2007improving, title={Improving Prolog programs: Refactoring for Prolog}, author={Alexander Serebrenik, Tom Schrijvers, Bart Demoen}, journal={arXiv preprint arXiv:cs/0702083}, year={2007}, number={2006-1}, archivePrefix={arXiv}, eprint={cs/0702083}, primaryClass={cs.SE} }
serebrenik2007improving
arxiv-675640
cs/0702084
Performance of Ultra-Wideband Impulse Radio in Presence of Impulsive Interference
<|reference_start|>Performance of Ultra-Wideband Impulse Radio in Presence of Impulsive Interference: We analyze the performance of a coherent impulse-radio (IR) ultra-wideband (UWB) channel in the presence of the interference generated by concurrent transmissions of systems using the same impulse radio. We derive a novel algorithm, using the Monte-Carlo method, to calculate a lower bound on the rate that can be achieved using a maximum-likelihood estimator. Using this bound we show that such a channel is very robust to interference, in contrast to the nearest-neighbor detector.<|reference_end|>
arxiv
@article{radunovic2007performance, title={Performance of Ultra-Wideband Impulse Radio in Presence of Impulsive Interference}, author={Bozidar Radunovic, Jean-Yves Le Boudec, Raymond Knopp}, journal={arXiv preprint arXiv:cs/0702084}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702084}, primaryClass={cs.IT math.IT} }
radunovic2007performance
arxiv-675641
cs/0702085
Social Behaviours Applied to P2P Systems: An efficient Algorithm for Resource Organisation
<|reference_start|>Social Behaviours Applied to P2P Systems: An efficient Algorithm for Resource Organisation: P2P systems are a great solution to the problem of distributing resources. The main issue of P2P networks is that searching and retrieving resources shared by peers is usually expensive and does not take into account similarities among peers. In this paper we present preliminary simulations of PROSA, a novel algorithm for P2P network structuring, inspired by social behaviours. Peers in PROSA self--organise in social groups of similar peers, called ``semantic--groups'', depending on the resources they are sharing. Such a network smoothly evolves to a small--world graph, where queries for resources are efficiently and effectively routed.<|reference_end|>
arxiv
@article{carchiolo2007social, title={Social Behaviours Applied to P2P Systems: An efficient Algorithm for Resource Organisation}, author={V. Carchiolo, M. Malgeri, G. Mangioni and V. Nicosia}, journal={15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2006. WETICE '06. June 2006 Page(s):65 - 72}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702085}, primaryClass={cs.DC cs.IR} }
carchiolo2007social
arxiv-675642
cs/0702086
Protection of DVB Systems by Trusted Computing
<|reference_start|>Protection of DVB Systems by Trusted Computing: We describe a concept to employ Trusted Computing technology to secure Conditional Access Systems (CAS) for DVB. Central is the embedding of a trusted platform module (TPM) into the set-top-box or residential home gateway. Various deployment scenarios exhibit possibilities of charging co-operation with mobile network operators (MNO), or other payment providers.<|reference_end|>
arxiv
@article{kuntze2007protection, title={Protection of DVB Systems by Trusted Computing}, author={Nicolai Kuntze and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0702086}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702086}, primaryClass={cs.CR} }
kuntze2007protection
arxiv-675643
cs/0702087
An Upper Bound on the Average Size of Silhouettes
<|reference_start|>An Upper Bound on the Average Size of Silhouettes: It is a widely observed phenomenon in computer graphics that the size of the silhouette of a polyhedron is much smaller than the size of the whole polyhedron. This paper provides, for the first time, theoretical evidence supporting this for a large class of objects, namely for polyhedra that approximate surfaces in some reasonable way; the surfaces may be non-convex and non-differentiable and they may have boundaries. We prove that such polyhedra have silhouettes of expected size $O(\sqrt{n})$ where the average is taken over all points of view and n is the complexity of the polyhedron.<|reference_end|>
arxiv
@article{glisse2007an, title={An Upper Bound on the Average Size of Silhouettes}, author={Marc Glisse (INRIA Lorraine - LORIA), Sylvain Lazard (INRIA Lorraine - LORIA)}, journal={arXiv preprint arXiv:cs/0702087}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702087}, primaryClass={cs.CG} }
glisse2007an
arxiv-675644
cs/0702088
Paths Beyond Local Search: A Nearly Tight Bound for Randomized Fixed-Point Computation
<|reference_start|>Paths Beyond Local Search: A Nearly Tight Bound for Randomized Fixed-Point Computation: In 1983, Aldous proved that randomization can speed up local search. For example, it reduces the query complexity of local search over [1:n]^d from Theta (n^{d-1}) to O (d^{1/2}n^{d/2}). It remains open whether randomization helps fixed-point computation. Inspired by this open problem and recent advances on equilibrium computation, we have been fascinated by the following question: Is a fixed-point or an equilibrium fundamentally harder to find than a local optimum? In this paper, we give a nearly tight bound of Omega(n^{d-1}) on the randomized query complexity for computing a fixed point of a discrete Brouwer function over [1:n]^d. Since the randomized query complexity of global optimization over [1:n]^d is Theta (n^{d}), the randomized query model over [1:n]^d strictly separates these three important search problems: Global optimization is harder than fixed-point computation, and fixed-point computation is harder than local search. Our result indeed demonstrates that randomization does not help much in fixed-point computation in the query model; the deterministic complexity of this problem is Theta (n^{d-1}).<|reference_end|>
arxiv
@article{chen2007paths, title={Paths Beyond Local Search: A Nearly Tight Bound for Randomized Fixed-Point Computation}, author={Xi Chen and Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0702088}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702088}, primaryClass={cs.GT} }
chen2007paths
arxiv-675645
cs/0702089
Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd
<|reference_start|>Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd: In recent years, several efforts have been made to enhance conceptual data modelling with automated reasoning to improve the model's quality and derive implicit information. One approach to achieve this in implementations is to constrain the language. Advances in Description Logics can help in choosing a language that offers the greatest expressiveness while remaining within the decidable fragment of first order logic, so as to realise a workable implementation with good performance using DL reasoners. The best fit DL language appears to be the ExpTime-complete DLRifd. To illustrate trade-offs and highlight features of the modelling languages, we present a precise transformation of the mappable features of the very expressive (undecidable) ORM/ORM2 conceptual data modelling languages to exactly DLRifd. Although not all ORM2 features can be mapped, this is an interesting fragment because it has been shown that DLRifd can also encode UML Class Diagrams and EER, and therefore can foster interoperation between conceptual data models and research into ontological aspects of the modelling languages.<|reference_end|>
arxiv
@article{keet2007mapping, title={Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd}, author={C. Maria Keet}, journal={arXiv preprint arXiv:cs/0702089}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702089}, primaryClass={cs.LO} }
keet2007mapping
arxiv-675646
cs/0702090
Aperture-Angle and Hausdorff-Approximation of Convex Figures
<|reference_start|>Aperture-Angle and Hausdorff-Approximation of Convex Figures: The aperture angle alpha(x, Q) of a point x not in Q in the plane with respect to a convex polygon Q is the angle of the smallest cone with apex x that contains Q. The aperture angle approximation error of a compact convex set C in the plane with respect to an inscribed convex polygon Q of C is the minimum aperture angle of any x in C \ Q with respect to Q. We show that for any compact convex set C in the plane and any k > 2, there is an inscribed convex k-gon Q of C with aperture angle approximation error (1 - 2/(k+1)) pi. This bound is optimal, and settles a conjecture by Fekete from the early 1990s. The same proof technique can be used to prove a conjecture by Brass: If a polygon P admits no approximation by a sub-k-gon (the convex hull of k vertices of P) with Hausdorff distance sigma, but all subpolygons of P (the convex hull of some vertices of P) admit such an approximation, then P is a (k+1)-gon. This implies the following result: For any k > 2 and any convex polygon P of perimeter at most 1 there is a sub-k-gon Q of P such that the Hausdorff-distance of P and Q is at most 1/(k+1) * sin(pi/(k+1)).<|reference_end|>
arxiv
@article{ahn2007aperture-angle, title={Aperture-Angle and Hausdorff-Approximation of Convex Figures}, author={Hee-Kap Ahn, Sang Won Bae, Otfried Cheong, Joachim Gudmundsson}, journal={arXiv preprint arXiv:cs/0702090}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702090}, primaryClass={cs.CG math.MG} }
ahn2007aperture-angle
arxiv-675647
cs/0702091
Observable Graphs
<|reference_start|>Observable Graphs: An edge-colored directed graph is \emph{observable} if an agent that moves along its edges is able to determine his position in the graph after a sufficiently long observation of the edge colors. When the agent is able to determine his position only from time to time, the graph is said to be \emph{partly observable}. Observability in graphs is desirable in situations where autonomous agents are moving on a network and one wants to localize them (or the agent wants to localize himself) with limited information. In this paper, we completely characterize observable and partly observable graphs and show how these concepts relate to observable discrete event systems and to local automata. Based on these characterizations, we provide polynomial time algorithms to decide observability, to decide partial observability, and to compute the minimal number of observations necessary for finding the position of an agent. In particular we prove that in the worst case this minimal number of observations increases quadratically with the number of nodes in the graph. From this it follows that it may be necessary for an agent to pass through the same node several times before he is finally able to determine his position in the graph. We then consider the more difficult question of assigning colors to a graph so as to make it observable and we prove that two different versions of this problem are NP-complete.<|reference_end|>
arxiv
@article{jungers2007observable, title={Observable Graphs}, author={Raphael M. Jungers and Vincent D. Blondel}, journal={arXiv preprint arXiv:cs/0702091}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702091}, primaryClass={cs.MA} }
jungers2007observable
arxiv-675648
cs/0702092
A Note on the Periodicity and the Output Rate of Bit Search Type Generators
<|reference_start|>A Note on the Periodicity and the Output Rate of Bit Search Type Generators: We investigate the bit-search type irregular decimation algorithms that are used within LFSR-based stream ciphers. In particular, we concentrate on BSG and ABSG, and consider two different setups for the analysis. In the first case, the input is assumed to be a m-sequence; we show that all possible output sequences can be classified into two sets, each of which is characterized by the equivalence of their elements up to shifts. Furthermore, we prove that the cardinality of each of these sets is equal to the period of one of its elements and subsequently derive the first known bounds on the expected output period (assuming that no subperiods exist). In the second setup, we work in a probabilistic framework and assume that the input sequence is evenly distributed (i.e., independent identically distributed Bernoulli process with probability 1/2). Under these assumptions, we derive closed-form expressions for the distribution of the output length and the output rate, which is shown to be asymptotically Gaussian-distributed and concentrated around the mean with exponential tightness.<|reference_end|>
arxiv
@article{altug2007a, title={A Note on the Periodicity and the Output Rate of Bit Search Type Generators}, author={Yucel Altug, N. Polat Ayerden, M. Kivanc Mihcak, Emin Anarim}, journal={arXiv preprint arXiv:cs/0702092}, year={2007}, doi={10.1109/TIT.2007.913503}, archivePrefix={arXiv}, eprint={cs/0702092}, primaryClass={cs.CR} }
altug2007a
arxiv-675649
cs/0702093
Secure Broadcasting
<|reference_start|>Secure Broadcasting: Wyner's wiretap channel is extended to parallel broadcast channels and fading channels with multiple receivers. In the first part of the paper, we consider the setup of parallel broadcast channels with one sender, multiple intended receivers, and one eavesdropper. We study the situations where the sender broadcasts either a common message or independent messages to the intended receivers. We derive upper and lower bounds on the common-message-secrecy capacity, which coincide when the users are reversely degraded. For the case of independent messages we establish the secrecy sum-capacity when the users are reversely degraded. In the second part of the paper we apply our results to fading channels: perfect channel state information of all intended receivers is known globally, whereas the eavesdropper channel is known only to her. For the common message case, a somewhat surprising result is proven: a positive rate can be achieved independently of the number of intended receivers. For independent messages, an opportunistic transmission scheme is presented that achieves the secrecy sum-capacity in the limit of large number of receivers. Our results are stated for a fast fading channel model. Extensions to the block fading model are also discussed.<|reference_end|>
arxiv
@article{khisti2007secure, title={Secure Broadcasting}, author={Ashish Khisti, Aslan Tchamkerten, Gregory Wornell}, journal={arXiv preprint arXiv:cs/0702093}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702093}, primaryClass={cs.IT math.IT} }
khisti2007secure
arxiv-675650
cs/0702094
Authentication via wireless networks
<|reference_start|>Authentication via wireless networks: Personal authentication is an important process we encounter almost every day; when we are logging on to a computer, entering a company where we work, or a restricted area, when we are using our plastic credit cards to pay for a service or to complete some other financial transaction, etc. In each of these processes of personal authentication some kind of magnetic or optical token is required. But by using novel technologies like mobile computing and wireless networking, it is possible to avoid carrying a multitude of ID cards or remembering a number of PIN codes. The article shows how to efficiently authenticate users via Personal Area Networks (PAN) like Bluetooth or IrDA using commonplace AES (Rijndael) or MD5 encryption. This method can be implemented on many types of mobile devices like Pocket PC PDAs with the Windows CE (Windows Mobile 2003) real-time operating system, or any other customized OS, so we will explain all components and key features of such a basic system.<|reference_end|>
arxiv
@article{fuduric2007authentication, title={Authentication via wireless networks}, author={Darko Fuduric, Marko Horvat and Mario Zagar}, journal={MIPRO, 2006}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702094}, primaryClass={cs.OH} }
fuduric2007authentication
arxiv-675651
cs/0702095
A note on using finite non-abelian $p$-groups in the MOR cryptosystem
<|reference_start|>A note on using finite non-abelian $p$-groups in the MOR cryptosystem: The MOR cryptosystem is a natural generalization of the El-Gamal cryptosystem to non-abelian groups. Using a $p$-group, a cryptosystem was built by this author in 'A simple generalization of El-Gamal cryptosystem to non-abelian groups'. It seems reasonable to assume the cryptosystem is as secure as the El-Gamal cryptosystem over finite fields. A natural question arises: can one make a better cryptosystem using $p$-groups? In this paper we show that the answer is no.<|reference_end|>
arxiv
@article{mahalanobis2007a, title={A note on using finite non-abelian $p$-groups in the MOR cryptosystem}, author={Ayan Mahalanobis}, journal={arXiv preprint arXiv:cs/0702095}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702095}, primaryClass={cs.CR math.GR} }
mahalanobis2007a
arxiv-675652
cs/0702096
Overcoming Hierarchical Difficulty by Hill-Climbing the Building Block Structure
<|reference_start|>Overcoming Hierarchical Difficulty by Hill-Climbing the Building Block Structure: The Building Block Hypothesis suggests that Genetic Algorithms (GAs) are well-suited for hierarchical problems, where efficient solving requires proper problem decomposition and assembly of the solution from sub-solutions with strong non-linear interdependencies. The paper proposes a hill-climber operating over the building block (BB) space that can efficiently address hierarchical problems. The new Building Block Hill-Climber (BBHC) uses past hill-climb experience to extract BB information and adapts its neighborhood structure accordingly. The perpetual adaptation of the neighborhood structure allows the method to climb the hierarchical structure, solving the hierarchical levels successively. It is expected that for fully non-deceptive hierarchical BB structures the BBHC can solve hierarchical problems in linearithmic time. Empirical results confirm that the proposed method scales almost linearly with the problem size and thus clearly outperforms population-based recombinative methods.<|reference_end|>
arxiv
@article{iclanzan2007overcoming, title={Overcoming Hierarchical Difficulty by Hill-Climbing the Building Block Structure}, author={David Iclanzan, Dan Dumitrescu}, journal={arXiv preprint arXiv:cs/0702096}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702096}, primaryClass={cs.NE cs.AI} }
iclanzan2007overcoming
arxiv-675653
cs/0702097
Avoiding bias in cards cryptography
<|reference_start|>Avoiding bias in cards cryptography: We outline the need for stricter requirements for unconditionally secure cryptographic protocols inspired by the Russian Cards problem. A new requirement CA4 is proposed that checks for bias in single card occurrence in announcements consisting of alternatives for players' holdings of cards. This requirement CA4 is shown to be equivalent to an alternative requirement CA5. All announcements found to satisfy CA4 are 2-designs. We also show that all binary designs are 3-designs. Instead of avoiding bias in announcements produced by such protocols, one may as well apply unbiased protocols such that patterns in announcements become meaningless. We give two examples of such protocols for card deal parameters (3,3,1), i.e. two of the players hold three cards, and the remaining player, playing the role of the eavesdropper, holds a single card.<|reference_end|>
arxiv
@article{atkinson2007avoiding, title={Avoiding bias in cards cryptography}, author={M.D. Atkinson and H.P. van Ditmarsch and S. Roehling}, journal={Australasian Journal of Combinatorics 44:3-17, 2009}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702097}, primaryClass={cs.CR cs.MA} }
atkinson2007avoiding
arxiv-675654
cs/0702098
A Sum-Product Model as a Physical Basis for Shadow Fading
<|reference_start|>A Sum-Product Model as a Physical Basis for Shadow Fading: Shadow fading (slow fading) effects play a central role in mobile communication system design and analysis. Experimental evidence indicates that shadow fading exhibits a log-normal power distribution almost universally, and yet it is still not well understood what causes this. In this paper, we propose a versatile sum-product signal model as a physical basis for shadow fading. Simulation results imply that the proposed model results in log-normally distributed local mean power regardless of the distributions of the interactions in the radio channel, and hence it is capable of explaining the log-normality in a wide variety of propagation scenarios. The sum-product model also includes as its special cases the conventional product model as well as the recently proposed sum model, and improves upon these by: a) being applicable in both global and local distance scales; b) being more plausible from a physical point of view; c) providing a better goodness-of-fit to the log-normal distribution than either of these models.<|reference_end|>
arxiv
@article{salo2007a, title={A Sum-Product Model as a Physical Basis for Shadow Fading}, author={Jari Salo}, journal={arXiv preprint arXiv:cs/0702098}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702098}, primaryClass={cs.OH} }
salo2007a
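A minimal Monte Carlo sketch of the general sum-product idea described in the abstract above, written in Python; the distributions and the path/interaction counts are illustrative assumptions, not the model parameters used in the paper. It sums a number of products of independent positive gains and checks informally whether the resulting local mean power looks log-normal, i.e. whether its dB value looks Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not taken from the paper).
n_samples, n_paths, n_factors = 100_000, 20, 5

# Each path contributes a product of independent positive interaction gains;
# the local mean power is the sum of these products over all paths.
gains = rng.uniform(0.1, 1.0, size=(n_samples, n_paths, n_factors))
power = np.prod(gains, axis=2).sum(axis=1)

# Log-normal power is equivalent to Gaussian power in dB.
power_db = 10.0 * np.log10(power)
z = (power_db - power_db.mean()) / power_db.std()
print("skewness of dB power:", np.mean(z**3))             # close to 0 for a Gaussian
print("excess kurtosis of dB power:", np.mean(z**4) - 3)  # close to 0 for a Gaussian
```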
arxiv-675655
cs/0702099
Discrete Memoryless Interference and Broadcast Channels with Confidential Messages: Secrecy Rate Regions
<|reference_start|>Discrete Memoryless Interference and Broadcast Channels with Confidential Messages: Secrecy Rate Regions: We study information-theoretic security for discrete memoryless interference and broadcast channels with independent confidential messages sent to two receivers. Confidential messages are transmitted to their respective receivers with information-theoretic secrecy. That is, each receiver is kept in total ignorance with respect to the message intended for the other receiver. The secrecy level is measured by the equivocation rate at the eavesdropping receiver. In this paper, we present inner and outer bounds on secrecy capacity regions for these two communication systems. The derived outer bounds have an identical mutual information expression that applies to both channel models. The difference is in the input distributions over which the expression is optimized. The inner bound rate regions are achieved by random binning techniques. For the broadcast channel, a double-binning coding scheme allows for both joint encoding and preserving of confidentiality. Furthermore, we show that, for a special case of the interference channel, referred to as the switch channel, the two bounds meet. Finally, we describe several transmission schemes for Gaussian interference channels and derive their achievable rate regions while ensuring mutual information-theoretic secrecy. An encoding scheme in which transmitters dedicate some of their power to create artificial noise is proposed and shown to outperform both time-sharing and simple multiplexed transmission of the confidential messages.<|reference_end|>
arxiv
@article{liu2007discrete, title={Discrete Memoryless Interference and Broadcast Channels with Confidential Messages: Secrecy Rate Regions}, author={Ruoheng Liu, Ivana Maric, Predrag Spasojevic, and Roy D. Yates}, journal={arXiv preprint arXiv:cs/0702099}, year={2007}, doi={10.1109/TIT.2008.921879}, archivePrefix={arXiv}, eprint={cs/0702099}, primaryClass={cs.IT math.IT} }
liu2007discrete
arxiv-675656
cs/0702100
A Class of Multi-Channel Cosine Modulated IIR Filter Banks
<|reference_start|>A Class of Multi-Channel Cosine Modulated IIR Filter Banks: This paper presents a class of multi-channel cosine-modulated filter banks satisfying the perfect reconstruction (PR) property using an IIR prototype filter. By imposing a suitable structure on the polyphase filter coefficients, we show that it is possible to greatly simplify the PR condition, while preserving the causality and stability of the system. We derive closed-form expressions for the synthesis filters and also study the numerical stability of the filter bank using frame theoretic bounds. Further, we show that it is possible to implement this filter bank with a much lower number of arithmetic operations when compared to FIR filter banks with comparable performance. The filter bank's modular structure also lends itself to efficient VLSI implementation.<|reference_end|>
arxiv
@article{vanka2007a, title={A Class of Multi-Channel Cosine Modulated IIR Filter Banks}, author={Sundaram Vanka, M. J. Dehghani, K. M. M. Prabhu, R. Aravind}, journal={arXiv preprint arXiv:cs/0702100}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702100}, primaryClass={cs.IT math.IT} }
vanka2007a
arxiv-675657
cs/0702101
An identity of Chernoff bounds with an interpretation in statistical physics and applications in information theory
<|reference_start|>An identity of Chernoff bounds with an interpretation in statistical physics and applications in information theory: An identity between two versions of the Chernoff bound on the probability of a certain large deviations event is established. This identity has an interpretation in statistical physics, namely, an isothermal equilibrium of a composite system that consists of multiple subsystems of particles. Several information-theoretic application examples, where the analysis of this large deviations probability naturally arises, are then described from the viewpoint of this statistical mechanical interpretation. This results in several relationships between information theory and statistical physics, which, we hope, the reader will find insightful.<|reference_end|>
arxiv
@article{merhav2007an, title={An identity of Chernoff bounds with an interpretation in statistical physics and applications in information theory}, author={Neri Merhav}, journal={arXiv preprint arXiv:cs/0702101}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702101}, primaryClass={cs.IT math.IT} }
merhav2007an
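For reference, the textbook Chernoff bound on the probability that an i.i.d. sum exceeds a threshold, the kind of large deviations probability the abstract above refers to, can be written as follows (this is the standard form, not the paper's specific identity):
\[
\Pr\Bigl\{\sum_{i=1}^{n} X_i \ge n a\Bigr\} \;\le\; \exp\Bigl(-n\,\sup_{s\ge 0}\bigl[\,s a-\ln \mathbb{E}\,e^{sX_1}\bigr]\Bigr),
\]
with the exponent being the Legendre transform of the log-moment generating function; reading such exponents as free energies is the statistical-physics viewpoint the paper builds on.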
arxiv-675658
cs/0702102
Paging and Registration in Cellular Networks: Jointly Optimal Policies and an Iterative Algorithm
<|reference_start|>Paging and Registration in Cellular Networks: Jointly Optimal Policies and an Iterative Algorithm: This paper explores optimization of paging and registration policies in cellular networks. Motion is modeled as a discrete-time Markov process, and minimization of the discounted, infinite-horizon average cost is addressed. The structure of jointly optimal paging and registration policies is investigated through the use of dynamic programming for partially observed Markov processes. It is shown that there exist policies with a certain simple form that are jointly optimal, though the dynamic programming approach does not directly provide an efficient method to find the policies. An iterative algorithm for policies with the simple form is proposed and investigated. The algorithm alternates between paging policy optimization and registration policy optimization. It finds a pair of individually optimal policies, but an example is given showing that the policies need not be jointly optimal. Majorization theory and Riesz's rearrangement inequality are used to show that jointly optimal paging and registration policies are given for symmetric or Gaussian random walk models by the nearest-location-first paging policy and distance threshold registration policies.<|reference_end|>
arxiv
@article{hajek2007paging, title={Paging and Registration in Cellular Networks: Jointly Optimal Policies and an Iterative Algorithm}, author={Bruce Hajek, Kevin Mitzel, and Sichao Yang}, journal={arXiv preprint arXiv:cs/0702102}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702102}, primaryClass={cs.IT cs.NI math.IT} }
hajek2007paging
arxiv-675659
cs/0702103
Exploring the academic invisible web
<|reference_start|>Exploring the academic invisible web: Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.<|reference_end|>
arxiv
@article{lewandowski2007exploring, title={Exploring the academic invisible web}, author={Dirk Lewandowski, Philipp Mayr}, journal={Library Hi Tech, 24 (2006) 4. pp. 529-539}, year={2007}, doi={10.1108/07378830610715392}, archivePrefix={arXiv}, eprint={cs/0702103}, primaryClass={cs.DL} }
lewandowski2007exploring
arxiv-675660
cs/0702104
A Union Bound Approximation for Rapid Performance Evaluation of Punctured Turbo Codes
<|reference_start|>A Union Bound Approximation for Rapid Performance Evaluation of Punctured Turbo Codes: In this paper, we present a simple technique to approximate the performance union bound of a punctured turbo code. The bound approximation exploits only those terms of the transfer function that have a major impact on the overall performance. We revisit the structure of the constituent convolutional encoder and we develop a rapid method to calculate the most significant terms of the transfer function of a turbo encoder. We demonstrate that, for a large interleaver size, this approximation is very accurate. Furthermore, we apply our proposed method to a family of punctured turbo codes, which we call pseudo-randomly punctured codes. We conclude by emphasizing the benefits of our approach compared to those employed previously. We also highlight the advantages of pseudo-random puncturing over other puncturing schemes.<|reference_end|>
arxiv
@article{chatzigeorgiou2007a, title={A Union Bound Approximation for Rapid Performance Evaluation of Punctured Turbo Codes}, author={Ioannis Chatzigeorgiou, Miguel R. D. Rodrigues, Ian J. Wassell, Rolando Carrasco}, journal={arXiv preprint arXiv:cs/0702104}, year={2007}, doi={10.1109/CISS.2007.4298352}, archivePrefix={arXiv}, eprint={cs/0702104}, primaryClass={cs.IT math.IT} }
chatzigeorgiou2007a
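For context, the standard union bound on the bit error probability of a linear code with BPSK over an AWGN channel, the quantity whose dominant terms the paper above approximates, has the familiar form (standard expression, not the paper's derivation):
\[
P_b \;\le\; \sum_{d=d_{\min}}^{N} \frac{\tilde{w}_d}{k}\, Q\!\left(\sqrt{\frac{2\,d\,R\,E_b}{N_0}}\right),
\]
where $\tilde{w}_d$ is the total information weight of codewords of Hamming weight $d$, $k$ is the number of information bits, $N$ the block length and $R=k/N$ the code rate; keeping only the low-weight terms gives the kind of approximation discussed in the abstract.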
arxiv-675661
cs/0702105
The Simplest Solution to an Underdetermined System of Linear Equations
<|reference_start|>The Simplest Solution to an Underdetermined System of Linear Equations: Consider a d*n matrix A, with d<n. The problem of solving for x in y=Ax is underdetermined, and has infinitely many solutions (if there are any). Given y, the minimum Kolmogorov complexity solution (MKCS) of the input x is defined to be an input z (out of many) with minimum Kolmogorov-complexity that satisfies y=Az. One expects that if the actual input is simple enough, then MKCS will recover the input exactly. This paper presents a preliminary study of the existence and value of the complexity level up to which such a complexity-based recovery is possible. It is shown that for the set of all d*n binary matrices (with entries 0 or 1 and d<n), MKCS exactly recovers the input for an overwhelming fraction of the matrices provided the Kolmogorov complexity of the input is O(d). A weak converse that is loose by a log n factor is also established for this case. Finally, we investigate the difficulty of finding a matrix that has the property of recovering inputs with complexity of O(d) using MKCS.<|reference_end|>
arxiv
@article{donoho2007the, title={The Simplest Solution to an Underdetermined System of Linear Equations}, author={David Donoho, Hossein Kakavand, James Mammen}, journal={arXiv preprint arXiv:cs/0702105}, year={2007}, doi={10.1109/ISIT.2006.261816}, archivePrefix={arXiv}, eprint={cs/0702105}, primaryClass={cs.IT math.IT} }
donoho2007the
arxiv-675662
cs/0702106
Wild, Wild Wikis: A way forward
<|reference_start|>Wild, Wild Wikis: A way forward: Wikis can be considered as public-domain knowledge sharing systems. They provide an opportunity for those who may not have the privilege to publish their thoughts through the traditional methods. They are one of the fastest growing systems of online encyclopaedia. In this study, we consider the importance of wikis as a way of creating, sharing and improving public knowledge. We identify some of the problems associated with wikis, including (a) identification of the identities of information and its creator, (b) accuracy of information, (c) justification of the credibility of authors, (d) vandalism of the quality of information, and (e) weak control over the contents. A solution to some of these problems is sought through the use of an annotation model. The model assumes that contributions in wikis can be seen as annotations to the initial document. It proposes a systematic control of contributors and contributions to the initiative and the keeping of records of what existed and what was done to initial documents. We believe that with this model, analysis can be done on the progress of wiki initiatives. We assume that, using this model, wikis can be better used for the creation and sharing of knowledge for public use.<|reference_end|>
arxiv
@article{robert2007wild,, title={Wild, Wild Wikis: A way forward}, author={Charles Robert (LORIA), Ranmi Adigun (YABATECH)}, journal={Dans The Fifth International Conference on Creating, Connecting and Collaborating through Computing, C5 2007 (2007)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702106}, primaryClass={cs.IR} }
robert2007wild,
arxiv-675663
cs/0702107
AMIEDoT: An annotation model for document tracking and recommendation service
<|reference_start|>AMIEDoT: An annotation model for document tracking and recommendation service: The primary objective of document annotation in whatever form, manual or electronic, is to allow those who may not have control over the original document to provide a personal view on the information source. Beyond providing a personal assessment of original information sources, we are looking at a situation where the annotations made can be used as an additional source of information for document tracking and recommendation services. Most of the annotation tools existing today were conceived for independent use with no reference to the creator of the annotation. We propose AMIEDoT (Annotation Model for Information Exchange and Document Tracking), an annotation model that can assist in document tracking and recommendation services. The model is based on three parameters in the acts of annotation. We believe that introducing document parameters, time and the parameters of the creator of the annotation into an annotation process can be a dependable way to know who used a document, when a document was used and what a document was used for. Beyond document tracking, our model can be used not only for selective dissemination of information but also for recommendation services. AMIEDoT can also be used for information sharing and information reuse.<|reference_end|>
arxiv
@article{robert2007amiedot:, title={AMIEDoT: An annotation model for document tracking and recommendation service}, author={Charles A. Robert (LORIA)}, journal={Dans International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering, (CIS2E 06) (2007)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702107}, primaryClass={cs.IR} }
robert2007amiedot:
arxiv-675664
cs/0702108
Orthogonal Codes for Robust Low-Cost Communication
<|reference_start|>Orthogonal Codes for Robust Low-Cost Communication: Orthogonal coding schemes, known to asymptotically achieve the capacity per unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost input symbol, are investigated for single-user compound memoryless channels, which exhibit uncertainties in their input-output statistical relationships. A minimax formulation is adopted to attain robustness. First, a class of achievable rates per unit cost (ARPUC) is derived, and its utility is demonstrated through several representative case studies. Second, when the uncertainty set of channel transition statistics satisfies a convexity property, optimization is performed over the class of ARPUC through utilizing results of minimax robustness. The resulting CPUC lower bound indicates the ultimate performance of the orthogonal coding scheme, and coincides with the CPUC under certain restrictive conditions. Finally, still under the convexity property, it is shown that the CPUC can generally be achieved, through utilizing a so-called mixed strategy in which an orthogonal code contains an appropriate composition of different nonzero-cost input symbols.<|reference_end|>
arxiv
@article{zhang2007orthogonal, title={Orthogonal Codes for Robust Low-Cost Communication}, author={Wenyi Zhang, Urbashi Mitra}, journal={arXiv preprint arXiv:cs/0702108}, year={2007}, number={USC CSI Technical Report CSI-07-02-01}, archivePrefix={arXiv}, eprint={cs/0702108}, primaryClass={cs.IT math.IT} }
zhang2007orthogonal
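The single-user benchmark behind the compound-channel discussion above is the classical capacity per unit cost with a zero-cost symbol (quoted here as the standard single-user result, not as a statement about the robust bounds derived in the paper):
\[
\hat{C} \;=\; \sup_{x\neq 0}\; \frac{D\bigl(P_{Y|X=x}\,\|\,P_{Y|X=0}\bigr)}{b(x)},
\]
where $b(x)$ is the cost of input symbol $x$ and $0$ denotes the zero-cost symbol; orthogonal codes achieve this quantity, which is why they are the natural starting point for the robust, compound-channel version studied in the paper.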
arxiv-675665
cs/0702109
AMIE: An annotation model for information research
<|reference_start|>AMIE: An annotation model for information research: The objective of most users when consulting any information database, information warehouse or the internet is to resolve one problem or another. Available online or offline annotation tools were not conceived with the objective of assisting users in their bid to resolve a decisional problem. Apart from the objective and usage of annotation tools, how these tools are conceived and classified has implications for their usage. Several criteria have been used to categorize annotation concepts. Typically, annotations are conceived based on how they affect the organization of the document being considered for annotation or the organization of the resulting annotation. Our approach is annotation that will assist in information research for decision making. The annotation model for information exchange (AMIE) was conceived with the objective of information sharing and reuse.<|reference_end|>
arxiv
@article{robert2007amie:, title={AMIE: An annotation model for information research}, author={Charles A. Robert (LORIA), David Amos (LORIA)}, journal={Dans International Conference on Computers in Education, (2006)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702109}, primaryClass={cs.IR} }
robert2007amie:
arxiv-675666
cs/0702110
Security Implications of Converged Networks and Protecting Them, without Compromising Efficiency
<|reference_start|>Security Implications of Converged Networks and Protecting Them, without Compromising Efficiency: This dissertation has extensively looked into all aspects of VoIP communications technology, and the information presented in the preceding chapters builds up a solid framework for discussing the conceptual design model and investigating features that could be incorporated in actual projects, with parameters tested on field values. The dissertation follows a five-course model for answering different questions, both technical and business-related, around central issues that have been crucial to the explanation of the topic: starting with a general overview of VoIP technology, analyzing current VoIP encryption methods, identifying security threats, designing a robust VoIP system based on particulars discussed in the preceding chapters, and finally, a VoIP simulation.<|reference_end|>
arxiv
@article{aksahin2007security, title={Security Implications of Converged Networks and Protecting Them, without Compromising Efficiency}, author={Saltuk Aksahin}, journal={arXiv preprint arXiv:cs/0702110}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702110}, primaryClass={cs.NI} }
aksahin2007security
arxiv-675667
cs/0702111
Informed Dynamic Scheduling for Belief-Propagation Decoding of LDPC Codes
<|reference_start|>Informed Dynamic Scheduling for Belief-Propagation Decoding of LDPC Codes: Low-Density Parity-Check (LDPC) codes are usually decoded by running an iterative belief-propagation, or message-passing, algorithm over the factor graph of the code. The traditional message-passing schedule consists of updating all the variable nodes in the graph, using the same pre-update information, followed by updating all the check nodes of the graph, again, using the same pre-update information. Recently, several studies have shown that sequential scheduling, in which messages are generated using the latest available information, significantly improves the convergence speed in terms of number of iterations. Sequential scheduling raises the problem of finding the best sequence of message updates. This paper presents practical scheduling strategies that use the value of the messages in the graph to find the next message to be updated. Simulation results show that these informed update sequences require significantly fewer iterations than standard sequential schedules. Furthermore, the paper shows that informed scheduling solves some standard trapping set errors. Therefore, it also outperforms traditional scheduling for a large number of iterations. Complexity and implementability issues are also addressed.<|reference_end|>
arxiv
@article{casado2007informed, title={Informed Dynamic Scheduling for Belief-Propagation Decoding of LDPC Codes}, author={Andres I. Vila Casado, Miguel Griot and Richard D. Wesel}, journal={arXiv preprint arXiv:cs/0702111}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702111}, primaryClass={cs.IT math.IT} }
casado2007informed
arxiv-675668
cs/0702112
The General Gaussian Multiple Access and Two-Way Wire-Tap Channels: Achievable Rates and Cooperative Jamming
<|reference_start|>The General Gaussian Multiple Access and Two-Way Wire-Tap Channels: Achievable Rates and Cooperative Jamming: The General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT) and the Gaussian Two-Way Wire-Tap Channel (GTW-WT) are considered. In the GGMAC-WT, multiple users communicate with an intended receiver in the presence of an eavesdropper who receives their signals through another GMAC. In the GTW-WT, two users communicate with each other over a common Gaussian channel, with an eavesdropper listening through a GMAC. A secrecy measure that is suitable for this multi-terminal environment is defined, and achievable secrecy rate regions are found for both channels. For both cases, the power allocations maximizing the achievable secrecy sum-rate are determined. It is seen that the optimum policy may prevent some terminals from transmission in order to preserve the secrecy of the system. Inspired by this construct, a new scheme, \emph{cooperative jamming}, is proposed, where users who are prevented from transmitting according to the secrecy sum-rate maximizing power allocation policy "jam" the eavesdropper, thereby helping the remaining users. This scheme is shown to increase the achievable secrecy sum-rate. Overall, our results show that in multiple-access scenarios, users can help each other to collectively achieve positive secrecy rates. In other words, cooperation among users can be invaluable for achieving secrecy for the system.<|reference_end|>
arxiv
@article{tekin2007the, title={The General Gaussian Multiple Access and Two-Way Wire-Tap Channels: Achievable Rates and Cooperative Jamming}, author={Ender Tekin, Aylin Yener}, journal={arXiv preprint arXiv:cs/0702112}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702112}, primaryClass={cs.IT cs.CR math.IT} }
tekin2007the
arxiv-675669
cs/0702113
Fast Computation of Small Cuts via Cycle Space Sampling
<|reference_start|>Fast Computation of Small Cuts via Cycle Space Sampling: We describe a new sampling-based method to determine cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph. In the model of distributed computing in a graph G=(V, E) with O(log V)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain simple O(Diam)-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal --- i.e. an Omega(Diam)-time lower bound holds on every graph. We obtain an O(Diam+Delta/log V)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM our approach yields a simple algorithm with optimal time complexity O(log V) for finding cut pairs and 3-edge-connected components.<|reference_end|>
arxiv
@article{pritchard2007fast, title={Fast Computation of Small Cuts via Cycle Space Sampling}, author={David Pritchard and Ramakrishna Thurimella}, journal={arXiv preprint arXiv:cs/0702113}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702113}, primaryClass={cs.DC cs.DS} }
pritchard2007fast
arxiv-675670
cs/0702114
Nearest Neighbor Network Traversal
<|reference_start|>Nearest Neighbor Network Traversal: A mobile agent in a network wants to visit every node of an n-node network, using a small number of steps. We investigate the performance of the following ``nearest neighbor'' heuristic: always go to the nearest unvisited node. If the network graph never changes, then from (Rosenkrantz, Stearns and Lewis, 1977) and (Hurkens and Woeginger, 2004) it follows that Theta(n log n) steps are necessary and sufficient in the worst case. We give a simpler proof of the upper bound and an example that improves the best known lower bound. We investigate how the performance of this heuristic changes when it is distributively implemented in a network. Even if network edges are allowed to fail over time, we show that the nearest neighbor strategy never runs for more than O(n^2) iterations. We also show that any strategy can be forced to take at least n(n-1)/2 steps before all nodes are visited, if the edges of the network are deleted in an adversarial way.<|reference_end|>
arxiv
@article{pritchard2007nearest, title={Nearest Neighbor Network Traversal}, author={David Pritchard}, journal={arXiv preprint arXiv:cs/0702114}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702114}, primaryClass={cs.DC} }
pritchard2007nearest
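A minimal Python sketch of the nearest-neighbour heuristic analysed in the abstract above, for a static unweighted graph; the example graph and starting node are illustrative assumptions, not taken from the paper.

```python
from collections import deque

def nearest_neighbor_traversal(adj, start):
    """Repeatedly move to the nearest unvisited node (by BFS distance);
    return the total number of steps walked until all nodes are visited."""
    visited = {start}
    current, steps = start, 0
    while len(visited) < len(adj):
        # BFS from the current node to find a closest unvisited node.
        dist = {current: 0}
        queue = deque([current])
        target = None
        while queue:
            u = queue.popleft()
            if u not in visited:
                target = u
                break
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if target is None:          # graph not connected
            break
        steps += dist[target]
        visited.add(target)
        current = target
    return steps

# Tiny example: a path graph 0-1-2-3, starting in the middle.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(nearest_neighbor_traversal(adj, 1))  # 1->0 (1 step), 0->2 (2), 2->3 (1): total 4
```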
arxiv-675671
cs/0702115
Guessing based on length functions
<|reference_start|>Guessing based on length functions: A guessing wiretapper's performance on a Shannon cipher system is analyzed for a source with memory. Close relationships between guessing functions and length functions are first established. Subsequently, asymptotically optimal encryption and attack strategies are identified and their performances analyzed for sources with memory. The performance metrics are exponents of guessing moments and probability of large deviations. The metrics are then characterized for unifilar sources. Universal asymptotically optimal encryption and attack strategies are also identified for unifilar sources. Guessing in the increasing order of Lempel-Ziv coding lengths is proposed for finite-state sources, and shown to be asymptotically optimal. Finally, competitive optimality properties of guessing in the increasing order of description lengths and Lempel-Ziv coding lengths are demonstrated.<|reference_end|>
arxiv
@article{sundaresan2007guessing, title={Guessing based on length functions}, author={Rajesh Sundaresan}, journal={arXiv preprint arXiv:cs/0702115}, year={2007}, doi={10.1109/ISIT.2007.4557309}, archivePrefix={arXiv}, eprint={cs/0702115}, primaryClass={cs.IT cs.CR math.IT} }
sundaresan2007guessing
arxiv-675672
cs/0702116
The Bedwyr system for model checking over syntactic expressions
<|reference_start|>The Bedwyr system for model checking over syntactic expressions: Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions possibly containing bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search. The first is centered on the fact that both finite success and finite failure can be captured in the sequent calculus by incorporating inference rules for definitions that allow fixed points to be explored. As a result, proof search in such a sequent calculus can capture simple model checking problems as well as may and must behavior in operational semantics. The second is that higher-order abstract syntax is directly supported using term-level $\lambda$-binders and the $\nabla$ quantifier. These features allow reasoning directly on expressions containing bound variables.<|reference_end|>
arxiv
@article{baelde2007the, title={The Bedwyr system for model checking over syntactic expressions}, author={David Baelde, Andrew Gacek, Dale Miller, Gopalan Nadathur, and Alwen Tiu}, journal={CADE 2007: 21th Conference on Automated Deduction, Frank Pfenning, editor, LNAI 4603, pages 391-397. Springer, 2007}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702116}, primaryClass={cs.LO} }
baelde2007the
arxiv-675673
cs/0702117
On a family of strong geometric spanners that admit local routing strategies
<|reference_start|>On a family of strong geometric spanners that admit local routing strategies: We introduce a family of directed geometric graphs, denoted $\paz$, that depend on two parameters $\lambda$ and $\theta$. For $0\leq \theta<\frac{\pi}{2}$ and ${1/2} < \lambda < 1$, the $\paz$ graph is a strong $t$-spanner, with $t=\frac{1}{(1-\lambda)\cos\theta}$. The out-degree of a node in the $\paz$ graph is at most $\lfloor2\pi/\min(\theta, \arccos\frac{1}{2\lambda})\rfloor$. Moreover, we show that routing can be achieved locally on $\paz$. Next, we show that all strong $t$-spanners are also $t$-spanners of the unit disk graph. Simulations for various values of the parameters $\lambda$ and $\theta$ indicate that for random point sets, the spanning ratio of $\paz$ is better than the proven theoretical bounds.<|reference_end|>
arxiv
@article{bose2007on, title={On a family of strong geometric spanners that admit local routing strategies}, author={Prosenjit Bose and Paz Carmi and Mathieu Couture and Michiel Smid and Daming Xu}, journal={arXiv preprint arXiv:cs/0702117}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702117}, primaryClass={cs.CG} }
bose2007on
arxiv-675674
cs/0702118
Interpolation-based Decoding of Alternant Codes
<|reference_start|>Interpolation-based Decoding of Alternant Codes: We formulate the classical decoding algorithm of alternant codes afresh based on interpolation as in Sudan's list decoding of Reed-Solomon codes, and thus get rid of the key equation and the linear recurring sequences in the theory. The result is a streamlined exposition of the decoding algorithm using a bit of the theory of Groebner bases of modules.<|reference_end|>
arxiv
@article{lee2007interpolation-based, title={Interpolation-based Decoding of Alternant Codes}, author={Kwankyu Lee}, journal={arXiv preprint arXiv:cs/0702118}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702118}, primaryClass={cs.IT math.IT} }
lee2007interpolation-based
arxiv-675675
cs/0702119
Ulam's Conjecture is True for Connected Graphs
<|reference_start|>Ulam's Conjecture is True for Connected Graphs: This submission has been withdrawn at the request of the author.<|reference_end|>
arxiv
@article{g2007ulam's, title={Ulam's Conjecture is True for Connected Graphs}, author={Raju Renjit. G}, journal={arXiv preprint arXiv:cs/0702119}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702119}, primaryClass={cs.DM} }
g2007ulam's
arxiv-675676
cs/0702120
On the decidability and complexity of Metric Temporal Logic over finite words
<|reference_start|>On the decidability and complexity of Metric Temporal Logic over finite words: Metric Temporal Logic (MTL) is a prominent specification formalism for real-time systems. In this paper, we show that the satisfiability problem for MTL over finite timed words is decidable, with non-primitive recursive complexity. We also consider the model-checking problem for MTL: whether all words accepted by a given Alur-Dill timed automaton satisfy a given MTL formula. We show that this problem is decidable over finite words. Over infinite words, we show that model checking the safety fragment of MTL--which includes invariance and time-bounded response properties--is also decidable. These results are quite surprising in that they contradict various claims to the contrary that have appeared in the literature.<|reference_end|>
arxiv
@article{ouaknine2007on, title={On the decidability and complexity of Metric Temporal Logic over finite words}, author={Joel Ouaknine and James Worrell}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February 28, 2007) lmcs:2230}, year={2007}, doi={10.2168/LMCS-3(1:8)2007}, archivePrefix={arXiv}, eprint={cs/0702120}, primaryClass={cs.LO cs.CC} }
ouaknine2007on
arxiv-675677
cs/0702121
Induced Hilbert Space, Markov Chain, Diffusion Map and Fock Space in Thermophysics
<|reference_start|>Induced Hilbert Space, Markov Chain, Diffusion Map and Fock Space in Thermophysics: In this article, we continue to explore Probability Bracket Notation (PBN), proposed in our previous article. Using both Dirac vector bracket notation (VBN) and PBN, we define induced Hilbert space and induced sample space, and propose that there exists an equivalence relation between a Hilbert space and a sample space constructed from the same base observable(s). Then we investigate Markov transition matrices and their eigenvectors to make diffusion maps with two examples: a simple graph theory example, to serve as a prototype of a bidirectional transition operator; a famous text document example from the IR literature, to serve as a tutorial of the diffusion map in text document space. We show that the sample space of the Markov chain and the Hilbert space spanned by the eigenvectors of the transition matrix are not equivalent. At the end, we apply our PBN and equivalence proposal to Thermophysics by associating sample (phase) space with the Hilbert space of a single particle and the Fock space of many-particle systems.<|reference_end|>
arxiv
@article{wang2007induced, title={Induced Hilbert Space, Markov Chain, Diffusion Map and Fock Space in Thermophysics}, author={Xing M. Wang}, journal={arXiv preprint arXiv:cs/0702121}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702121}, primaryClass={cs.OH math.PR} }
wang2007induced
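A generic Python illustration of the Markov-chain/diffusion-map machinery the abstract above refers to: build a row-stochastic transition matrix from a small undirected graph and embed the nodes using the leading non-trivial eigenvectors. The graph, the diffusion time and the embedding dimension are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Adjacency matrix of a small undirected graph (illustrative).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Row-stochastic Markov transition matrix P = D^{-1} A.
P = A / A.sum(axis=1, keepdims=True)

# Right eigenvectors of P; the eigenvalues are real here because P is
# similar to the symmetric matrix D^{-1/2} A D^{-1/2}.
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
evals, evecs = evals.real[order], evecs.real[:, order]

t = 2                                       # diffusion time (assumption)
coords = evecs[:, 1:3] * evals[1:3] ** t    # skip the trivial constant eigenvector
print(np.round(coords, 3))                  # one 2-D diffusion-map coordinate per node
```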
arxiv-675678
cs/0702122
Transmitter and Precoding Order Optimization for Nonlinear Downlink Beamforming
<|reference_start|>Transmitter and Precoding Order Optimization for Nonlinear Downlink Beamforming: The downlink of a multiple-input multiple-output (MIMO) broadcast channel (BC) is considered, where each receiver is equipped with a single antenna and the transmitter performs nonlinear Dirty-Paper Coding (DPC). We present an efficient algorithm that finds the optimum transmit filters and power allocation as well as the optimum precoding order(s), possibly affording time-sharing between individual DPC orders. Subsequently, necessary and sufficient conditions for the optimality of an arbitrary precoding order are derived. Based on these, we propose a suboptimal algorithm showing excellent performance and having low complexity.<|reference_end|>
arxiv
@article{michel2007transmitter, title={Transmitter and Precoding Order Optimization for Nonlinear Downlink Beamforming}, author={Thomas Michel, Gerhard Wunder}, journal={arXiv preprint arXiv:cs/0702122}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702122}, primaryClass={cs.IT math.IT} }
michel2007transmitter
arxiv-675679
cs/0702123
Tree automata and separable sets of input variables
<|reference_start|>Tree automata and separable sets of input variables: We consider the computational complexity of tree transducers, depending on their separable sets of input variables.<|reference_end|>
arxiv
@article{shtrakov2007tree, title={Tree automata and separable sets of input variables}, author={Slavcho Shtrakov and Vladimir Shtrakov}, journal={J. FILOMAT, v. 15, 2001, University of Nis, 61-71 p., ISSN 0354-5180 (http://www.pmf.ni.ac.yu/sajt/publikacije/filomat_15.html)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702123}, primaryClass={cs.CC cs.DM} }
shtrakov2007tree
arxiv-675680
cs/0702124
A Sequential Algorithm for Generating Random Graphs
<|reference_start|>A Sequential Algorithm for Generating Random Graphs: We present a nearly-linear time algorithm for counting and randomly generating simple graphs with a given degree sequence in a certain range. For a degree sequence $(d_i)_{i=1}^n$ with maximum degree $d_{\max}=O(m^{1/4-\tau})$, our algorithm generates almost uniform random graphs with that degree sequence in time $O(m\,d_{\max})$, where $m=\frac{1}{2}\sum_i d_i$ is the number of edges in the graph and $\tau$ is any positive constant. The fastest known algorithm for uniform generation of these graphs, due to McKay and Wormald (1990), has a running time of $O(m^2d_{\max}^2)$. Our method also gives an independent proof of McKay's estimate (McKay, 1985) for the number of such graphs. We also use sequential importance sampling to derive fully Polynomial-time Randomized Approximation Schemes (FPRAS) for counting and uniformly generating random graphs for the same range of $d_{\max}=O(m^{1/4-\tau})$. Moreover, we show that for $d = O(n^{1/2-\tau})$, our algorithm can generate an asymptotically uniform $d$-regular graph. Our results improve the previous bound of $d = O(n^{1/3-\tau})$ due to Kim and Vu (2004) for regular graphs.<|reference_end|>
arxiv
@article{bayati2007a, title={A Sequential Algorithm for Generating Random Graphs}, author={Mohsen Bayati, Jeong Han Kim and Amin Saberi}, journal={Algorithmica (2010) 58: 860-910}, year={2007}, doi={10.1007/s00453-009-9340-1}, archivePrefix={arXiv}, eprint={cs/0702124}, primaryClass={cs.CC cs.DM} }
bayati2007a
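A greatly simplified Python sketch related to the abstract above: it builds a simple graph with a prescribed degree sequence by sequentially adding edges chosen with probability proportional to the residual degrees of the endpoints. This is only the skeleton of the idea; the actual algorithm additionally reweights each candidate pair (an importance-sampling correction omitted here), which is what makes the output nearly uniform, and it avoids the quadratic candidate enumeration used below.

```python
import random

def sequential_degree_graph(degrees, max_restarts=1000):
    """Simplified sketch: realise the degree sequence by repeatedly adding an
    edge between two nodes picked with probability proportional to the product
    of their residual degrees; restart if the construction gets stuck."""
    n = len(degrees)
    for _ in range(max_restarts):
        residual = list(degrees)
        edges = set()
        while sum(residual) > 0:
            candidates = [(i, j) for i in range(n) for j in range(i + 1, n)
                          if residual[i] > 0 and residual[j] > 0
                          and (i, j) not in edges]
            if not candidates:
                break                      # stuck: restart from scratch
            weights = [residual[i] * residual[j] for i, j in candidates]
            i, j = random.choices(candidates, weights=weights, k=1)[0]
            edges.add((i, j))
            residual[i] -= 1
            residual[j] -= 1
        if sum(residual) == 0:
            return edges
    raise RuntimeError("failed to realise the degree sequence")

print(sequential_degree_graph([2, 2, 2, 2]))  # e.g. the edges of a 4-cycle
```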
arxiv-675681
cs/0702125
Bayesian Network Tomography and Inference
<|reference_start|>Bayesian Network Tomography and Inference: The aim of this technical report is to give a short overview of known techniques for network tomography (introduced in the paper of Vardi (1996)), extended by the Bayesian approach originating from Tebaldi and West (1998). Since the studies of A.K. Erlang (1878-1929) on telephone networks in the last millennium, many needs have arisen in today's applications of networks and network tomography; for instance, networks are a critical component of the information structure supporting finance, commerce and even civil and national defence. An attack on a network can be performed as an intrusion into the network or by sending a lot of faulty information and disturbing the network flow. Such attacks can be detected by modelling the traffic flows in a network, by counting the source-destination packets, and even by measuring counts over time and drawing a comparison with this 'time series', for instance.<|reference_end|>
arxiv
@article{pluch2007bayesian, title={Bayesian Network Tomography and Inference}, author={Philipp Pluch, Samo Wakounig}, journal={arXiv preprint arXiv:cs/0702125}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702125}, primaryClass={cs.NI} }
pluch2007bayesian
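The linear counting model at the heart of Vardi-style network tomography mentioned above can be illustrated in a few lines of Python; the toy topology and flow values are assumptions for illustration only.

```python
import numpy as np

# Toy 3-node network: origin-destination (OD) flows x are unobserved,
# link counts y are observed, and A is the 0/1 routing matrix
# (assumption: a fixed single route per OD pair).
# OD pairs: 1->2, 1->3, 2->3; links: (1,2) and (2,3).
A = np.array([[1, 1, 0],    # link (1,2) carries OD 1->2 and 1->3
              [0, 1, 1]])   # link (2,3) carries OD 1->3 and 2->3
x_true = np.array([5.0, 2.0, 7.0])
y = A @ x_true
print(y)  # observed link counts: [7. 9.]

# Tomography asks to infer x from y; the system is underdetermined, which is
# why Poisson modelling (Vardi) or Bayesian priors (Tebaldi and West) are used.
```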
arxiv-675682
cs/0702126
Efficient Searching and Retrieval of Documents in PROSA
<|reference_start|>Efficient Searching and Retrieval of Documents in PROSA: Retrieving resources in a distributed environment is more difficult than finding data in centralised databases. In the last decade, P2P systems have arisen as new and effective distributed architectures for resource sharing, but searching in such environments can be difficult and time-consuming. In this paper we discuss the efficiency of resource discovery in PROSA, a self-organising P2P system heavily inspired by social networks. All routing choices in PROSA are made locally, looking only at the relevance of the next peer to each query. We show that PROSA is able to effectively answer queries for rare documents, forwarding them through the most convenient path to nodes that most probably share matching resources. This result is heavily related to the small-world structure that naturally emerges in PROSA.<|reference_end|>
arxiv
@article{nicosia2007efficient, title={Efficient Searching and Retrieval of Documents in PROSA}, author={V. Nicosia, G. Mangioni, V. Carchiolo, M. Malgeri}, journal={arXiv preprint arXiv:cs/0702126}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702126}, primaryClass={cs.DC cs.IR} }
nicosia2007efficient
arxiv-675683
cs/0702127
Exploiting social networks dynamics for P2P resource organisation
<|reference_start|>Exploiting social networks dynamics for P2P resource organisation: In this paper we present a formal description of PROSA, a P2P resource management system heavily inspired by social networks. Social networks have been deeply studied in the last two decades in order to understand how communities of people arise and grow. It is a widely known result that networks of social relationships usually evolve into small-worlds, i.e. networks where nodes are strongly connected to their neighbours and separated from all other nodes by a small number of hops. This work shows that the algorithms implemented in PROSA allow an efficient small-world P2P network to be obtained.<|reference_end|>
arxiv
@article{nicosia2007exploiting, title={Exploiting social networks dynamics for P2P resource organisation}, author={V. Nicosia, G. Mangioni, V. Carchiolo, M. Malgeri}, journal={Lecture Notes on Computer Science (LNCS) 4263 (2006)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702127}, primaryClass={cs.DC cs.IR} }
nicosia2007exploiting
arxiv-675684
cs/0702128
Reconstructing the Nonlinear Filter Function of LILI-128 Stream Cipher Based on Complexity
<|reference_start|>Reconstructing the Nonlinear Filter Function of LILI-128 Stream Cipher Based on Complexity: In this letter we assert that we have reconstructed the nonlinear filter function of the LILI-128 stream cipher on an IBM notebook PC using MATLAB. Our reconstruction needs approximately 2^12~2^13, and the attack consumes 5825.016 sec (using the tic and toc statements of MATLAB), or 5825.016/3600 = 1.6181 hours. We obtained the expression of the nonlinear filter function fd of LILI-128, which has 46 items ranging from linear to nonlinear items, based on complexity, phase space reconstruction, clustering and nonlinear prediction. We have verified the correctness of our reconstruction by simulating the overview of the LILI-128 keystream generator using the fd we obtained and implementing the designers' reference module of the LILI-128 stream cipher; the two methods produce the same synchronous keystream sequence for the same initial state, so our work shows that the nonlinear filter function of the LILI-128 stream cipher has been successfully reconstructed.<|reference_end|>
arxiv
@article{huang2007reconstructing, title={Reconstructing the Nonlinear Filter Function of LILI-128 Stream Cipher Based on Complexity}, author={Xiangao Huang and Wei Huang and Xiaozhou Liu and Chao Wang and Zhu jing Wang and Tao Wang}, journal={arXiv preprint arXiv:cs/0702128}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702128}, primaryClass={cs.CR} }
huang2007reconstructing
arxiv-675685
cs/0702129
Tree Automata and Essential Input Variables
<|reference_start|>Tree Automata and Essential Input Variables: We introduce and study the essential inputs (variables) for terms (trees) and tree automata.<|reference_end|>
arxiv
@article{shtrakov2007tree, title={Tree Automata and Essential Input Variables}, author={Slavcho Shtrakov}, journal={J. Contributions to General Algebra, v.13, Verlag Johannes Heyn, Klagenfurt, 2001, 309-319 p}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702129}, primaryClass={cs.CC cs.DM} }
shtrakov2007tree
arxiv-675686
cs/0702130
Syndrome Decoding of Reed-Solomon Codes Beyond Half the Minimum Distance based on Shift-Register Synthesis
<|reference_start|>Syndrome Decoding of Reed-Solomon Codes Beyond Half the Minimum Distance based on Shift-Register Synthesis: In this paper, a new approach for decoding low-rate Reed-Solomon codes beyond half the minimum distance is considered and analyzed. Unlike the Sudan algorithm published in 1997, this new approach is based on multi-sequence shift-register synthesis, which makes it easy to understand and simple to implement. The computational complexity of this shift-register based algorithm is of the same order as the complexity of the well-known Berlekamp-Massey algorithm. Moreover, the error correcting radius coincides with the error correcting radius of the original Sudan algorithm, and the practical decoding performance observed on a q-ary symmetric channel (QSC) is virtually identical to the decoding performance of the Sudan algorithm. Bounds for the failure and error probability as well as for the QSC decoding performance of the new algorithm are derived, and the performance is illustrated by means of examples.<|reference_end|>
arxiv
@article{schmidt2007syndrome, title={Syndrome Decoding of Reed-Solomon Codes Beyond Half the Minimum Distance based on Shift-Register Synthesis}, author={Georg Schmidt, Vladimir R. Sidorenko, Martin Bossert}, journal={arXiv preprint arXiv:cs/0702130}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702130}, primaryClass={cs.IT math.IT} }
schmidt2007syndrome
arxiv-675687
cs/0702131
AICA: a New Pair Force Evaluation Method for Parallel Molecular Dynamics in Arbitrary Geometries
<|reference_start|>AICA: a New Pair Force Evaluation Method for Parallel Molecular Dynamics in Arbitrary Geometries: A new algorithm for calculating intermolecular pair forces in Molecular Dynamics (MD) simulations on a distributed parallel computer is presented. The Arbitrary Interacting Cells Algorithm (AICA) is designed to operate on geometrical domains defined by an unstructured, arbitrary polyhedral mesh, which has been spatially decomposed into irregular portions for parallelisation. It is intended for nano scale fluid mechanics simulation by MD in complex geometries, and to provide the MD component of a hybrid MD/continuum simulation. AICA has been implemented in the open-source computational toolbox OpenFOAM, and verified against a published MD code.<|reference_end|>
arxiv
@article{macpherson2007aica:, title={AICA: a New Pair Force Evaluation Method for Parallel Molecular Dynamics in Arbitrary Geometries}, author={Graham B. Macpherson and Jason M. Reese}, journal={arXiv preprint arXiv:cs/0702131}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702131}, primaryClass={cs.CE cs.DC} }
macpherson2007aica:
arxiv-675688
cs/0702132
Uplink Capacity and Interference Avoidance for Two-Tier Femtocell Networks
<|reference_start|>Uplink Capacity and Interference Avoidance for Two-Tier Femtocell Networks: Two-tier femtocell networks-- comprising a conventional macrocellular network plus embedded femtocell hotspots-- offer an economically viable solution to achieving high cellular user capacity and improved coverage. With universal frequency reuse and DS-CDMA transmission however, the ensuing cross-tier cochannel interference (CCI) causes unacceptable outage probability. This paper develops an uplink capacity analysis and interference avoidance strategy in such a two-tier CDMA network. We evaluate a network-wide area spectral efficiency metric called the \emph{operating contour (OC)} defined as the feasible combinations of the average number of active macrocell users and femtocell base stations (BS) per cell-site that satisfy a target outage constraint. The capacity analysis provides an accurate characterization of the uplink outage probability, accounting for power control, path-loss and shadowing effects. Considering worst case CCI at a corner femtocell, results reveal that interference avoidance through a time-hopped CDMA physical layer and sectorized antennas allows about a 7x higher femtocell density, relative to a split spectrum two-tier network with omnidirectional femtocell antennas. A femtocell exclusion region and a tier selection based handoff policy offers modest improvements in the OCs. These results provide guidelines for the design of robust shared spectrum two-tier networks.<|reference_end|>
arxiv
@article{chandrasekhar2007uplink, title={Uplink Capacity and Interference Avoidance for Two-Tier Femtocell Networks}, author={Vikram Chandrasekhar and Jeffrey G. Andrews}, journal={arXiv preprint arXiv:cs/0702132}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702132}, primaryClass={cs.NI cs.IT math.IT} }
chandrasekhar2007uplink
arxiv-675689
cs/0702133
Fast Exact Method for Solving the Travelling Salesman Problem
<|reference_start|>Fast Exact Method for Solving the Travelling Salesman Problem: This paper describes an exact solution of the TSP with polynomial complexity. Properties of the proposed method are considered. The effectiveness of the proposed solution is illustrated by the outcomes of computer modeling.<|reference_end|>
arxiv
@article{yatsenko2007fast, title={Fast Exact Method for Solving the Travelling Salesman Problem}, author={Vadim Yatsenko}, journal={arXiv preprint arXiv:cs/0702133}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702133}, primaryClass={cs.CC} }
yatsenko2007fast
arxiv-675690
cs/0702134
Patterns of technological progress: A Predictability-Based Perspective
<|reference_start|>Patterns of technological progress: A Predictability-Based Perspective: The paper tries to identify new emerging patterns in the context of technological progress. Just as industrialization is associated with rationalization, mechanization, and automation, the Internet age is associated with computer models, embedded knowledge, and collaboration. A comparison among the patterns is highlighted, and the analysis is carried out from a predictability-based perspective.<|reference_end|>
arxiv
@article{sati2007patterns, title={Patterns of technological progress: A Predictability-Based Perspective}, author={Pankaj Sati}, journal={arXiv preprint arXiv:cs/0702134}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702134}, primaryClass={cs.CY} }
sati2007patterns
arxiv-675691
cs/0702135
High Performance Direct Gravitational N-body Simulations on Graphics Processing Units
<|reference_start|>High Performance Direct Gravitational N-body Simulations on Graphics Processing Units: We present the results of gravitational direct $N$-body simulations using the commercial graphics processing units (GPUs) NVIDIA Quadro FX1400 and GeForce 8800GTX, and compare the results with GRAPE-6Af special purpose hardware. The force evaluation of the $N$-body problem was implemented in Cg using the GPU directly to speed up the calculations. The integration of the equations of motion was implemented in C, running on the host computer, using the 4th order predictor-corrector Hermite integrator with block time steps. We find that for a large number of particles ($N \gtrsim 10^4$) modern graphics processing units offer an attractive low cost alternative to GRAPE special purpose hardware. A modern GPU continues to give a relatively flat scaling with the number of particles, comparable to that of the GRAPE. Using the same time step criterion, the total energy of the $N$-body system was conserved to better than one part in $10^6$ on the GPU, which is only about an order of magnitude worse than obtained with GRAPE. For $N \gtrsim 10^6$ the GeForce 8800GTX was about 20 times faster than the host computer. Though still about an order of magnitude slower than GRAPE, modern GPUs outperform GRAPE in their low cost, long mean time between failures and much larger onboard memory; the GRAPE-6Af holds at most 256k particles whereas the GeForce 8800GTX can hold 9 million particles in memory.<|reference_end|>
arxiv
@article{zwart2007high, title={High Performance Direct Gravitational N-body Simulations on Graphics Processing Units}, author={Simon Portegies Zwart, Robert Belleman, Peter Geldof}, journal={arXiv preprint arXiv:cs/0702135}, year={2007}, doi={10.1016/j.newast.2007.05.004}, archivePrefix={arXiv}, eprint={cs/0702135}, primaryClass={cs.PF} }
zwart2007high
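As a point of reference for the zwart2007high record above, here is a minimal sketch of the direct-summation O(N^2) force evaluation that such codes offload to the GPU or GRAPE, written in plain NumPy with Plummer softening. It illustrates only the all-pairs acceleration step, not the paper's Cg kernel or the 4th order Hermite predictor-corrector integration; G = 1 units, the softening length, and the function name are assumptions made for the example.

    import numpy as np

    def pairwise_accelerations(pos, mass, eps=1.0e-4):
        """Direct-summation gravitational accelerations (G = 1) with Plummer
        softening eps. pos is an (N, 3) array, mass an (N,) array."""
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            dr = pos - pos[i]                         # vectors from particle i to all others
            r2 = (dr * dr).sum(axis=1) + eps * eps    # softened squared distances
            r2[i] = np.inf                            # remove the self-interaction term
            acc[i] = (mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    # Example: accelerations for 1000 random equal-mass particles.
    rng = np.random.default_rng(0)
    a = pairwise_accelerations(rng.standard_normal((1000, 3)), np.full(1000, 1.0 / 1000))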
arxiv-675692
cs/0702136
Essential Inputs and Minimal Tree Automata
<|reference_start|>Essential Inputs and Minimal Tree Automata: We continue studying essential inputs of trees and automata. Strongly essential inputs of trees are introduced and studied. Various examples of applications in Computer Science are shown.<|reference_end|>
arxiv
@article{damyanov2007essential, title={Essential Inputs and Minimal Tree Automata}, author={Ivo Damyanov and Slavcho Shtrakov}, journal={Proc. of ICDMA, 31.08-02.09.2001, Bansko, v.6, 77-85 p}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702136}, primaryClass={cs.CC cs.DM} }
damyanov2007essential
arxiv-675693
cs/0702137
Tree Automata and Essential Subtrees
<|reference_start|>Tree Automata and Essential Subtrees: We introduce essential subtrees for terms (trees) and tree automata. We present results concerning independent sets of subtrees and separable sets for a tree and an automaton.<|reference_end|>
arxiv
@article{shtrakov2007tree, title={Tree Automata and Essential Subtrees}, author={Slavcho Shtrakov}, journal={Proc. of ICDMA, 31.08-02.09.2001, Bansko, v.6, 51-60 p}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702137}, primaryClass={cs.CC cs.DM} }
shtrakov2007tree
arxiv-675694
cs/0702138
On the Maximal Diversity Order of Spatial Multiplexing with Transmit Antenna Selection
<|reference_start|>On the Maximal Diversity Order of Spatial Multiplexing with Transmit Antenna Selection: Zhang et al. recently derived upper and lower bounds on the achievable diversity of an N_R x N_T i.i.d. Rayleigh fading multiple antenna system using transmit antenna selection, spatial multiplexing and a linear receiver structure. For the case of L = 2 transmitting (out of N_T available) antennas the bounds are tight and therefore specify the maximal diversity order. For the general case with L <= min(N_R,N_T) transmitting antennas it was conjectured that the maximal diversity is (N_T-L+1)(N_R-L+1) which coincides with the lower bound. Herein, we prove this conjecture for the zero forcing and zero forcing decision feedback (with optimal detection ordering) receiver structures.<|reference_end|>
arxiv
@article{jalden2007on, title={On the Maximal Diversity Order of Spatial Multiplexing with Transmit Antenna Selection}, author={J. Jalden and B. Ottersten}, journal={arXiv preprint arXiv:cs/0702138}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702138}, primaryClass={cs.IT math.IT} }
jalden2007on
arxiv-675695
cs/0702139
Characterization of $m$-Sequences of Lengths $2^{2k}-1$ and $2^k-1$ with Three-Valued Crosscorrelation
<|reference_start|>Characterization of $m$-Sequences of Lengths $2^2k-1$ and $2^k-1$ with Three-Valued Crosscorrelation: Considered is the distribution of the crosscorrelation between $m$-sequences of length $2^m-1$, where $m=2k$, and $m$-sequences of shorter length $2^k-1$. New pairs of $m$-sequences with three-valued crosscorrelation are found and the complete correlation distribution is determined. Finally, we conjecture that there are no more cases with a three-valued crosscorrelation apart from the ones proven here.<|reference_end|>
arxiv
@article{helleseth2007characterization, title={Characterization of $m$-Sequences of Lengths $2^{2k}-1$ and $2^k-1$ with Three-Valued Crosscorrelation}, author={Tor Helleseth and Alexander Kholosha and Geir Jarle Ness}, journal={arXiv preprint arXiv:cs/0702139}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702139}, primaryClass={cs.CR cs.DM} }
helleseth2007characterization
arxiv-675696
cs/0702140
Assessing the Value of Cooperation in Wikipedia
<|reference_start|>Assessing the Value of Cooperation in Wikipedia: Since its inception six years ago, the online encyclopedia Wikipedia has accumulated 6.40 million articles and 250 million edits, contributed in a predominantly undirected and haphazard fashion by 5.77 million unvetted volunteers. Despite the apparent lack of order, the 50 million edits by 4.8 million contributors to the 1.5 million articles in the English-language Wikipedia follow certain strong overall regularities. We show that the accretion of edits to an article is described by a simple stochastic mechanism, resulting in a heavy tail of highly visible articles with a large number of edits. We also demonstrate a crucial correlation between article quality and number of edits, which validates Wikipedia as a successful collaborative effort.<|reference_end|>
arxiv
@article{wilkinson2007assessing, title={Assessing the Value of Cooperation in Wikipedia}, author={Dennis M. Wilkinson and Bernardo A. Huberman}, journal={arXiv preprint arXiv:cs/0702140}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702140}, primaryClass={cs.DL cs.CY physics.soc-ph} }
wilkinson2007assessing
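The wilkinson2007assessing record above attributes the heavy tail of edits per article to a simple stochastic accretion mechanism. The toy simulation below shows how a proportional-to-current-edit-count ("rich-get-richer") rule produces such a heavy tail; the specific rule, the parameters, and the absence of new article creation over time are assumptions made for illustration, not the model fitted in the paper.

    import random
    from collections import Counter

    def simulate_edit_accretion(n_articles=1000, n_edits=100000, seed=0):
        """Toy rich-get-richer model: every article starts with one edit, and each
        new edit lands on an article with probability proportional to its current
        edit count. Drawing uniformly from the token list implements that rule."""
        rng = random.Random(seed)
        tokens = list(range(n_articles))       # one token per edit made so far
        for _ in range(n_edits):
            article = rng.choice(tokens)       # proportional-to-count selection
            tokens.append(article)
        counts = Counter(tokens)
        return sorted(counts.values(), reverse=True)

    # The few most-edited articles absorb a large share of all edits:
    print(simulate_edit_accretion()[:10])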
arxiv-675697
cs/0702141
Recruitment, Preparation, Retention: A case study of computing culture at the University of Illinois at Urbana-Champaign
<|reference_start|>Recruitment, Preparation, Retention: A case study of computing culture at the University of Illinois at Urbana-Champaign: Computer science is seeing a decline in enrollment at all levels of education, including undergraduate and graduate study. This paper reports on the results of a study conducted at the University of Illinois at Urbana-Champaign which evaluated students' attitudes regarding three areas which can contribute to improved enrollment in the Department of Computer Science: Recruitment, preparation and retention. The results of our study revealed two themes. First, the department's tight research focus appears to draw significant attention away from other activities -- such as teaching, service, and other community-building activities -- that are necessary for a department's excellence. Yet, as demonstrated by our second theme, one partial solution is to better promote such activities already employed by the department to its students and faculty. Based on our results, we make recommendations for improvements and enhancements based on the current state of practice at peer institutions.<|reference_end|>
arxiv
@article{crenshaw2007recruitment, title={Recruitment, Preparation, Retention: A case study of computing culture at the University of Illinois at Urbana-Champaign}, author={Tanya L. Crenshaw and Erin Wolf Chambers and Heather Metcalf and Umesh Thakkar}, journal={arXiv preprint arXiv:cs/0702141}, year={2007}, number={UIUCDCS-R-2007-2811}, archivePrefix={arXiv}, eprint={cs/0702141}, primaryClass={cs.GL} }
crenshaw2007recruitment
arxiv-675698
cs/0702142
An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation
<|reference_start|>An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation: Monotonicity is a simple yet significant qualitative characteristic. We consider the problem of segmenting an array into up to K segments. We want segments to be as monotonic as possible and to alternate signs. We propose a quality metric for this problem, present an optimal linear time algorithm based on a novel formalism, and experimentally compare its performance to a linear time top-down regression algorithm. We show that our algorithm is faster and more accurate. Applications include pattern recognition and qualitative modeling.<|reference_end|>
arxiv
@article{lemire2007an, title={An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation}, author={Daniel Lemire, Martin Brooks, Yuhong Yan}, journal={arXiv preprint arXiv:cs/0702142}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702142}, primaryClass={cs.DS cs.DB} }
lemire2007an
arxiv-675699
cs/0702143
Attribute Value Reordering For Efficient Hybrid OLAP
<|reference_start|>Attribute Value Reordering For Efficient Hybrid OLAP: The normalization of a data cube is the ordering of the attribute values. For large multidimensional arrays where dense and sparse chunks are stored differently, proper normalization can lead to improved storage efficiency. We show that it is NP-hard to compute an optimal normalization even for 1x3 chunks, although we find an exact algorithm for 1x2 chunks. When dimensions are nearly statistically independent, we show that dimension-wise attribute frequency sorting is an optimal normalization and takes time O(d n log(n)) for data cubes of size n^d. When dimensions are not independent, we propose and evaluate several heuristics. The hybrid OLAP (HOLAP) storage mechanism is already 19%-30% more efficient than ROLAP, but normalization can improve it further by 9%-13% for a total gain of 29%-44% over ROLAP.<|reference_end|>
arxiv
@article{kaser2007attribute, title={Attribute Value Reordering For Efficient Hybrid OLAP}, author={Owen Kaser, Daniel Lemire}, journal={Owen Kaser, Daniel Lemire, Attribute Value Reordering For Efficient Hybrid OLAP, Information Sciences, Volume 176, Issue 16, 2006, Pages 2304-2336}, year={2007}, doi={10.1016/j.ins.2005.09.005}, archivePrefix={arXiv}, eprint={cs/0702143}, primaryClass={cs.DB} }
kaser2007attribute
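The kaser2007attribute record above states that, for nearly independent dimensions, dimension-wise attribute frequency sorting is an optimal normalization with cost O(d n log n). Below is a minimal sketch of that sorting step; representing the cube as a list of allocated-cell coordinate tuples and the function names are assumptions made for illustration, not the paper's implementation.

    from collections import Counter

    def frequency_sort_normalization(cells, d):
        """cells: iterable of d-tuples, one per allocated (nonzero) cell of the cube.
        For each dimension, rank attribute values by decreasing frequency and map
        each value to its rank; overall cost is O(d * n log n) for n cells."""
        cells = list(cells)
        orders = []
        for k in range(d):
            freq = Counter(cell[k] for cell in cells)
            ranked = sorted(freq, key=lambda v: -freq[v])
            orders.append({v: rank for rank, v in enumerate(ranked)})
        return orders

    def renumber(cells, orders):
        """Apply the normalization by replacing each attribute value with its rank."""
        return [tuple(orders[k][v] for k, v in enumerate(cell)) for cell in cells]

    # Example: frequent values move toward index 0 in every dimension.
    cube = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x")]
    print(renumber(cube, frequency_sort_normalization(cube, 2)))   # [(0, 0), (0, 1), (1, 0), (0, 0)]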
arxiv-675700
cs/0702144
Slope One Predictors for Online Rating-Based Collaborative Filtering
<|reference_start|>Slope One Predictors for Online Rating-Based Collaborative Filtering: Rating-based collaborative filtering is the process of predicting how a user would rate a given item from other user ratings. We propose three related slope one schemes with predictors of the form f(x) = x + b, which precompute the average difference between the ratings of one item and another for users who rated both. Slope one algorithms are easy to implement, efficient to query, reasonably accurate, and they support both online queries and dynamic updates, which makes them good candidates for real-world systems. The basic slope one scheme is suggested as a new reference scheme for collaborative filtering. By factoring in items that a user liked separately from items that a user disliked, we achieve results competitive with slower memory-based schemes over the standard benchmark EachMovie and Movielens data sets while better fulfilling the desiderata of CF applications.<|reference_end|>
arxiv
@article{lemire2007slope, title={Slope One Predictors for Online Rating-Based Collaborative Filtering}, author={Daniel Lemire, Anna Maclachlan}, journal={arXiv preprint arXiv:cs/0702144}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702144}, primaryClass={cs.DB cs.AI} }
lemire2007slope
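To make the predictor form in the lemire2007slope record above concrete, here is a small sketch of the weighted Slope One variant: dev[i][j] is the precomputed average difference between the ratings of items i and j over users who rated both, and a prediction for item i averages r_uj + dev[i][j] over the user's known items j, weighted by the support counts. The nested-dict data layout and function names are illustrative assumptions; the paper describes three related schemes, including a bi-polar one not sketched here.

    from collections import defaultdict

    def train_slope_one(ratings):
        """ratings: dict user -> dict item -> rating.
        Returns dev[i][j], the average of (r_ui - r_uj) over users who rated both
        i and j, together with the support counts cnt[i][j]."""
        dev = defaultdict(lambda: defaultdict(float))
        cnt = defaultdict(lambda: defaultdict(int))
        for user_ratings in ratings.values():
            for i, ri in user_ratings.items():
                for j, rj in user_ratings.items():
                    if i != j:
                        dev[i][j] += ri - rj
                        cnt[i][j] += 1
        for i in dev:
            for j in dev[i]:
                dev[i][j] /= cnt[i][j]
        return dev, cnt

    def predict(user_ratings, item, dev, cnt):
        """Weighted Slope One prediction of the form f(x) = x + b, averaged over
        the user's rated items and weighted by how many users support each pair."""
        num = den = 0.0
        for j, rj in user_ratings.items():
            c = cnt[item].get(j, 0) if item in cnt else 0
            if j != item and c:
                num += (rj + dev[item][j]) * c
                den += c
        return num / den if den else None

    # Example: a user who only rated item "A" gets a prediction for item "B".
    ratings = {"u1": {"A": 5, "B": 3}, "u2": {"A": 4, "B": 2, "C": 4}, "u3": {"A": 4}}
    dev, cnt = train_slope_one(ratings)
    print(predict(ratings["u3"], "B", dev, cnt))   # 4 + average(B - A) = 4 - 2 = 2.0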