corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-675001 | cs/0610129 | Community Detection in Complex Networks Using Agents | <|reference_start|>Community Detection in Complex Networks Using Agents: Community structure identification has been one of the most popular research areas in recent years due to its applicability across a wide range of disciplines. Many algorithms have been proposed so far to detect communities in varied settings; however, most of them still have drawbacks to be addressed. In this paper, we present an agent-based community detection algorithm. The algorithm, which is a stochastic one, makes use of agents by forcing them to perform biased moves in a smart way. Using the information collected during the traversals of these agents in the network, the network structure is revealed. Also, the network modularity is used for determining the number of communities. Our algorithm removes the need for prior knowledge about the network, such as the number of communities or any threshold values. Furthermore, a definite community structure is provided as a result, instead of structures requiring further processing. Besides, the computational and time costs are optimized by the use of thread-like working agents. The algorithm is tested on three networks of different types and sizes: the Zachary karate club, college football, and political books networks. For all three networks, the real community structures are identified in almost every run.<|reference_end|> | arxiv | @article{gunes2006community,
title={Community Detection in Complex Networks Using Agents},
author={Ismail Gunes and Haluk Bingol},
journal={arXiv preprint arXiv:cs/0610129},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610129},
primaryClass={cs.MA cs.CY}
} | gunes2006community |
arxiv-675002 | cs/0610130 | On Bounds for $E$-capacity of DMC | <|reference_start|>On Bounds for $E$-capacity of DMC: Random coding, expurgated and sphere packing bounds are derived by the method of types and the method of graph decomposition for the $E$-capacity of a discrete memoryless channel (DMC). Three decoding rules are considered: the random coding bound is attainable by each of the three rules, but the expurgated bound is achievable only by maximum-likelihood decoding. The sphere packing bound is obtained by very simple combinatorial reasoning using the method of types. The paper joins and reviews the results of previous, hardly accessible publications.<|reference_end|> | arxiv | @article{haroutunian2006on,
title={On Bounds for $E$-capacity of DMC},
author={Evgueni A. Haroutunian (Associate Member, IEEE)},
journal={arXiv preprint arXiv:cs/0610130},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610130},
primaryClass={cs.IT math.IT}
} | haroutunian2006on |
arxiv-675003 | cs/0610131 | Scheduling and data redistribution strategies on star platforms | <|reference_start|>Scheduling and data redistribution strategies on star platforms: In this work we are interested in the problem of scheduling and redistributing data on master-slave platforms. We consider the case were the workers possess initial loads, some of which having to be redistributed in order to balance their completion times. We examine two different scenarios. The first model assumes that the data consists of independent and identical tasks. We prove the NP-completeness in the strong sense for the general case, and we present two optimal algorithms for special platform types. Furthermore we propose three heuristics for the general case. Simulations consolidate the theoretical results. The second data model is based on Divisible Load Theory. This problem can be solved in polynomial time by a combination of linear programming and simple analytical manipulations.<|reference_end|> | arxiv | @article{marchal2006scheduling,
title={Scheduling and data redistribution strategies on star platforms},
author={Loris Marchal (INRIA Rh{\^o}ne-Alpes, LIP) and Veronika Rehn (INRIA
Rh{\^o}ne-Alpes, LIP) and Yves Robert (INRIA Rh{\^o}ne-Alpes, LIP) and
Fr{\'e}d{\'e}ric Vivien (INRIA Rh{\^o}ne-Alpes, LIP)},
journal={arXiv preprint arXiv:cs/0610131},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610131},
primaryClass={cs.DC}
} | marchal2006scheduling |
arxiv-675004 | cs/0610132 | List Decoding of Hermitian Codes using Groebner Bases | <|reference_start|>List Decoding of Hermitian Codes using Groebner Bases: List decoding of Hermitian codes is reformulated to allow an efficient and simple algorithm for the interpolation step. The algorithm is developed using the theory of Groebner bases of modules. The computational complexity of the algorithm seems comparable to previously known algorithms achieving the same task, and the algorithm is better suited for hardware implementation.<|reference_end|> | arxiv | @article{lee2006list,
title={List Decoding of Hermitian Codes using Groebner Bases},
author={Kwankyu Lee and Michael E. O'Sullivan},
journal={arXiv preprint arXiv:cs/0610132},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610132},
primaryClass={cs.IT cs.SC math.IT}
} | lee2006list |
arxiv-675005 | cs/0610133 | P2P IPTV Measurement: A Comparison Study | <|reference_start|>P2P IPTV Measurement: A Comparison Study: With the success of P2P file sharing, new emerging P2P applications arise on the Internet for streaming content like voice (VoIP) or live video (IPTV). Nowadays, there are lots of works measuring P2P file sharing or P2P telephony systems, but there is still no comprehensive study about P2P IPTV, whereas it should be massively used in the future. During the last FIFA world cup, we measured network traffic generated by P2P IPTV applications like PPlive, PPstream, TVants and Sopcast. In this paper we analyze some of our results during the same games for the applications. We focus on traffic statistics and churn of peers within these P2P networks. Our objectives are threefold: we point out the traffic generated to understand the impact they will have on the network, we try to infer the mechanisms of such applications and highlight differences, and we give some insights about the users' behavior.<|reference_end|> | arxiv | @article{silverston2006p2p,
title={P2P IPTV Measurement: A Comparison Study},
author={Thomas Silverston and Olivier Fourmaux},
journal={arXiv preprint arXiv:cs/0610133},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610133},
primaryClass={cs.NI cs.MM}
} | silverston2006p2p |
arxiv-675006 | cs/0610134 | A Markov Chain based method for generating long-range dependence | <|reference_start|>A Markov Chain based method for generating long-range dependence: This paper describes a model for generating time series which exhibit the statistical phenomenon known as long-range dependence (LRD). A Markov Modulated Process based upon an infinite Markov chain is described. The work described is motivated by applications in telecommunications where LRD is a known property of time-series measured on the internet. The process can generate a time series exhibiting LRD with known parameters and is particularly suitable for modelling internet traffic since the time series is in terms of ones and zeros which can be interpreted as data packets and inter-packet gaps. The method is extremely simple computationally and analytically and could prove more tractable than other methods described in the literature<|reference_end|> | arxiv | @article{clegg2006a,
title={A Markov Chain based method for generating long-range dependence},
author={Richard G. Clegg and Maurice Dodson},
journal={Phys. Rev. E 72, 026118 (2005)},
year={2006},
doi={10.1103/PhysRevE.72.026118},
archivePrefix={arXiv},
eprint={cs/0610134},
primaryClass={cs.NI cs.PF math.ST stat.TH}
} | clegg2006a |
arxiv-675007 | cs/0610135 | Markov-modulated on/off processes for long-range dependent internet traffic | <|reference_start|>Markov-modulated on/off processes for long-range dependent internet traffic: The aim of this paper is to use a very simple queuing model to compare a number of models from the literature which have been used to replicate the statistical nature of internet traffic and, in particular, the long-range dependence of this traffic. The four models all have the form of discrete-time Markov-modulated processes (two other models are introduced for comparison purposes). While it is often stated that long-range dependence has a critical effect on queuing performance, it appears that the models used here do not replicate well the queuing performance of real internet traffic. In particular, they fail to replicate the mean queue length (and hence the mean delay) and the probability of the queue length exceeding a given level.<|reference_end|> | arxiv | @article{clegg2006markov-modulated,
title={Markov-modulated on/off processes for long-range dependent internet
traffic},
author={Richard G. Clegg},
journal={arXiv preprint arXiv:cs/0610135},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610135},
primaryClass={cs.NI cs.DM cs.PF math.ST stat.TH}
} | clegg2006markov-modulated |
arxiv-675008 | cs/0610136 | Bounds on the coefficients of the characteristic and minimal polynomials | <|reference_start|>Bounds on the coefficients of the characteristic and minimal polynomials: This note presents absolute bounds on the size of the coefficients of the characteristic and minimal polynomials depending on the size of the coefficients of the associated matrix. Moreover, we present algorithms to compute more precise input-dependent bounds on these coefficients. Such bounds are useful, e.g., to perform deterministic Chinese remaindering of the characteristic or minimal polynomial of an integer matrix.<|reference_end|> | arxiv | @article{dumas2006bounds,
title={Bounds on the coefficients of the characteristic and minimal polynomials},
author={Jean-Guillaume Dumas (LJK)},
journal={Journal of Inequalities in Pure and Applied Mathematics 8, 2
(2007) art. 31, 6pp},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610136},
primaryClass={cs.SC}
} | dumas2006bounds |
arxiv-675009 | cs/0610137 | A Concurrent Calculus with Atomic Transactions | <|reference_start|>A Concurrent Calculus with Atomic Transactions: The Software Transactional Memory (STM) model is an original approach for controlling concurrent accesses to resources without the need for explicit lock-based synchronization mechanisms. A key feature of STM is to provide a way to group sequences of read and write actions inside atomic blocks, similar to database transactions, whose whole effect should occur atomically. In this paper, we investigate STM from a process algebra perspective and define an extension of asynchronous CCS with atomic blocks of actions. Our goal is not only to set a formal ground for reasoning on STM implementations but also to understand how this model fits with other concurrency control mechanisms. We also view this calculus as a test bed for extending process calculi with atomic transactions. This is an interesting direction for investigation since, for the most part, existing works that mix transactions with process calculi consider compensating transactions, a model that lacks all the well-known ACID properties. We show that the addition of atomic transactions results in a very expressive calculus, enough to easily encode other concurrent primitives such as guarded choice and multiset-synchronization (\`{a} la join-calculus). The correctness of our encodings is proved using a suitable notion of bisimulation equivalence. The equivalence is then applied to prove interesting ``laws of transactions'' and to obtain a simple normal form for transactions.<|reference_end|> | arxiv | @article{acciai2006a,
title={A Concurrent Calculus with Atomic Transactions},
author={Lucia Acciai (LIF) and Michele Boreale and Silvano Dal Zilio (LIF)},
journal={arXiv preprint arXiv:cs/0610137},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610137},
primaryClass={cs.LO cs.DC}
} | acciai2006a |
arxiv-675010 | cs/0610138 | Why block length and delay behave differently if feedback is present | <|reference_start|>Why block length and delay behave differently if feedback is present: For output-symmetric DMCs at even moderately high rates, fixed-block-length communication systems show no improvements in their error exponents with feedback. In this paper, we study systems with fixed end-to-end delay and show that feedback generally provides dramatic gains in the error exponents. A new upper bound (the uncertainty-focusing bound) is given on the probability of symbol error in a fixed-delay communication system with feedback. This bound turns out to have a similar form to Viterbi's bound used for the block error probability of convolutional codes as a function of the fixed constraint length. The uncertainty-focusing bound is shown to be asymptotically achievable with noiseless feedback for erasure channels as well as any output-symmetric DMC that has strictly positive zero-error capacity. Furthermore, it can be achieved in a delay-universal (anytime) fashion even if the feedback itself is delayed by a small amount. Finally, it is shown that for end-to-end delay, it is generally possible at high rates to beat the sphere-packing bound for general DMCs -- thereby providing a counterexample to a conjecture of Pinsker.<|reference_end|> | arxiv | @article{sahai2006why,
title={Why block length and delay behave differently if feedback is present},
author={Anant Sahai},
journal={arXiv preprint arXiv:cs/0610138},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610138},
primaryClass={cs.IT math.IT}
} | sahai2006why |
arxiv-675011 | cs/0610139 | How to beat the sphere-packing bound with feedback | <|reference_start|>How to beat the sphere-packing bound with feedback: The sphere-packing bound $E_{sp}(R)$ bounds the reliability function for fixed-length block-codes. For symmetric channels, it remains a valid bound even when strictly causal noiseless feedback is allowed from the decoder to the encoder. To beat the bound, the problem must be changed. While it has long been known that variable-length block codes can do better when trading-off error probability with expected block-length, this correspondence shows that the {\em fixed-delay} setting also presents such an opportunity for generic channels. While $E_{sp}(R)$ continues to bound the tradeoff between bit error and fixed end-to-end latency for symmetric channels used {\em without} feedback, a new bound called the ``focusing bound'' gives the limits on what can be done with feedback. If low-rate reliable flow-control is free (ie. the noisy channel has strictly positive zero-error capacity), then the focusing bound can be asymptotically achieved. Even when the channel has no zero-error capacity, it is possible to substantially beat the sphere-packing bound by synthesizing an appropriately reliable channel to carry the flow-control information.<|reference_end|> | arxiv | @article{sahai2006how,
title={How to beat the sphere-packing bound with feedback},
author={Anant Sahai},
journal={arXiv preprint arXiv:cs/0610139},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610139},
primaryClass={cs.IT math.IT}
} | sahai2006how |
arxiv-675012 | cs/0610140 | Constant for associative patterns ensemble | <|reference_start|>Constant for associative patterns ensemble: A procedure for creating an ensemble of associative patterns is formulated in terms of formal logic using a neural network (NN) model. It is shown that the set of associative patterns is created by means of a unique procedure of NN operation with individual parameters for the transformation of the input stimulus. It is ascertained that the number of selected associative patterns is a constant.<|reference_end|> | arxiv | @article{makarov2006constant,
title={Constant for associative patterns ensemble},
author={Leonid Makarov and Peter Komarov},
journal={arXiv preprint arXiv:cs/0610140},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610140},
primaryClass={cs.AI}
} | makarov2006constant |
arxiv-675013 | cs/0610141 | Stabilization using both noisy and noiseless feedback | <|reference_start|>Stabilization using both noisy and noiseless feedback: When designing a distributed control system, the system designer has a choice in how to connect the different units through communication channels. In practice, noiseless and noisy channels may coexist. Using the standard toy example of scalar stabilization, this paper shows how a small amount of noiseless feedback can perform a ``supervisory'' role and thereby boost the effectiveness of noisy feedback.<|reference_end|> | arxiv | @article{sahai2006stabilization,
title={Stabilization using both noisy and noiseless feedback},
author={Anant Sahai},
journal={arXiv preprint arXiv:cs/0610141},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610141},
primaryClass={cs.IT math.IT}
} | sahai2006stabilization |
arxiv-675014 | cs/0610142 | Coding into a source: a direct inverse Rate-Distortion theorem | <|reference_start|>Coding into a source: a direct inverse Rate-Distortion theorem: Shannon proved that if we can transmit bits reliably at rates larger than the rate distortion function $R(D)$, then we can transmit this source to within a distortion $D$. We answer the converse question ``If we can transmit a source to within a distortion $D$, can we transmit bits reliably at rates less than the rate distortion function?'' in the affirmative. This can be viewed as a direct converse of the rate distortion theorem.<|reference_end|> | arxiv | @article{agarwal2006coding,
title={Coding into a source: a direct inverse Rate-Distortion theorem},
author={Mukul Agarwal and Anant Sahai and Sanjoy Mitter},
journal={arXiv preprint arXiv:cs/0610142},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610142},
primaryClass={cs.IT math.IT}
} | agarwal2006coding |
arxiv-675015 | cs/0610143 | Source coding and channel requirements for unstable processes | <|reference_start|>Source coding and channel requirements for unstable processes: Our understanding of information in systems has been based on the foundation of memoryless processes. Extensions to stable Markov and auto-regressive processes are classical. Berger proved a source coding theorem for the marginally unstable Wiener process, but the infinite-horizon exponentially unstable case has been open since Gray's 1970 paper. There were also no theorems showing what is needed to communicate such processes across noisy channels. In this work, we give a fixed-rate source-coding theorem for the infinite-horizon problem of coding an exponentially unstable Markov process. The encoding naturally results in two distinct bitstreams that have qualitatively different QoS requirements for communicating over a noisy medium. The first stream captures the information that is accumulating within the nonstationary process and requires sufficient anytime reliability from the channel used to communicate the process. The second stream captures the historical information that dissipates within the process and is essentially classical. This historical information can also be identified with a natural stable counterpart to the unstable process. A converse demonstrating the fundamentally layered nature of unstable sources is given by means of information-embedding ideas.<|reference_end|> | arxiv | @article{sahai2006source,
title={Source coding and channel requirements for unstable processes},
author={Anant Sahai and Sanjoy Mitter},
journal={arXiv preprint arXiv:cs/0610143},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610143},
primaryClass={cs.IT math.IT}
} | sahai2006source |
arxiv-675016 | cs/0610144 | Lossless coding for distributed streaming sources | <|reference_start|>Lossless coding for distributed streaming sources: Distributed source coding is traditionally viewed in the block coding context -- all the source symbols are known in advance at the encoders. This paper instead considers a streaming setting in which iid source symbol pairs are revealed to the separate encoders in real time and need to be reconstructed at the decoder with some tolerable end-to-end delay using finite rate noiseless channels. A sequential random binning argument is used to derive a lower bound on the error exponent with delay and show that both ML decoding and universal decoding achieve the same positive error exponents inside the traditional Slepian-Wolf rate region. The error events are different from the block-coding error events and give rise to slightly different exponents. Because the sequential random binning scheme is also universal over delays, the resulting code eventually reconstructs every source symbol correctly with probability 1.<|reference_end|> | arxiv | @article{chang2006lossless,
title={Lossless coding for distributed streaming sources},
author={Cheng Chang and Stark Draper and Anant Sahai},
journal={arXiv preprint arXiv:cs/0610144},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610144},
primaryClass={cs.IT math.IT}
} | chang2006lossless |
arxiv-675017 | cs/0610145 | A Simple Converse of Burnashev's Reliability | <|reference_start|>A Simple Converse of Burnashev's Reliability: In a remarkable paper published in 1976, Burnashev determined the reliability function of variable-length block codes over discrete memoryless channels with feedback. Subsequently, an alternative achievability proof was obtained by Yamamoto and Itoh via a particularly simple and instructive scheme. Their idea is to alternate between a communication and a confirmation phase until the receiver detects the codeword used by the sender to acknowledge that the message is correct. We provide a converse that parallels the Yamamoto-Itoh achievability construction. Besides being simpler than the original, the proposed converse suggests that a communication and a confirmation phase are implicit in any scheme for which the probability of error decreases with the largest possible exponent. The proposed converse also makes it intuitively clear why the terms that appear in Burnashev's exponent are necessary.<|reference_end|> | arxiv | @article{berlin2006a,
title={A Simple Converse of Burnashev's Reliability},
author={Peter Berlin and Baris Nakiboglu and Bixio Rimoldi and Emre Telatar},
journal={IEEE Transactions on Information Theory, 55(7):3074-3080, July
2009},
year={2006},
doi={10.1109/TIT.2009.2021322},
archivePrefix={arXiv},
eprint={cs/0610145},
primaryClass={cs.IT math.IT}
} | berlin2006a |
arxiv-675018 | cs/0610146 | The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link, Part II: vector systems | <|reference_start|>The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link, Part II: vector systems: In part I, we reviewed how Shannon's classical notion of capacity is not sufficient to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system. While classical capacity is not enough, a sense of capacity (parametrized by reliability) called "anytime capacity" is both necessary and sufficient for channel evaluation in this context. The rate required is the log of the open-loop system gain and the required reliability comes from the desired sense of stability. Sufficiency is maintained even in cases with noisy observations and without any explicit feedback between the observer and the controller. This established the asymptotic equivalence between scalar stabilization problems and delay-universal communication problems with feedback. Here in part II, the vector-state generalizations are established and it is the magnitudes of the unstable eigenvalues that play an essential role. To deal with such systems, the concept of the anytime rate-region is introduced. This is the region of rates that the channel can support while still meeting potentially different anytime reliability targets for parallel message streams. All the scalar results generalize on an eigenvalue by eigenvalue basis. When there is no explicit feedback of the noisy channel outputs, the intrinsic delay of the unstable system tells us what the feedback delay needs to be while evaluating the anytime-rate-region for the channel. 
An example involving a binary erasure channel is used to illustrate how differentiated service is required in any separation-based control architecture.<|reference_end|> | arxiv | @article{sahai2006the,
title={The necessity and sufficiency of anytime capacity for stabilization of a
linear system over a noisy communication link, Part II: vector systems},
author={Anant Sahai and Sanjoy Mitter},
journal={arXiv preprint arXiv:cs/0610146},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610146},
primaryClass={cs.IT math.IT}
} | sahai2006the |
arxiv-675019 | cs/0610147 | Grooming of Dynamic Traffic in WDM Star and Tree Networks Using Genetic Algorithm | <|reference_start|>Grooming of Dynamic Traffic in WDM Star and Tree Networks Using Genetic Algorithm: Advances in WDM technology have led to great interest in traffic grooming problems. As traffic often changes over time, the problem of grooming dynamic traffic is of great practical value. In this paper, we discuss dynamic grooming of traffic in star and tree networks. A genetic algorithm (GA) based approach is proposed to support arbitrary dynamic traffic patterns, minimizing the number of ADMs and wavelengths. To evaluate the algorithm, tighter bounds are derived. Computer simulation results show that our algorithm is efficient in reducing both the number of ADMs and the number of wavelengths in tree and star networks.<|reference_end|> | arxiv | @article{liu2006grooming,
title={Grooming of Dynamic Traffic in WDM Star and Tree Networks Using Genetic
Algorithm},
author={Kun-hong Liu and Yong Xu and De-shuang Huang and Min Cheng},
journal={Photonic Network Communications, Volume 15, Number 2, 2008},
year={2006},
doi={10.1007/s11107-007-0103-0},
archivePrefix={arXiv},
eprint={cs/0610147},
primaryClass={cs.NI}
} | liu2006grooming |
arxiv-675020 | cs/0610148 | Decoder Error Probability of MRD Codes | <|reference_start|>Decoder Error Probability of MRD Codes: In this paper, we first introduce the concept of elementary linear subspace, which has similar properties to those of a set of coordinates. Using this new concept, we derive properties of maximum rank distance (MRD) codes that parallel those of maximum distance separable (MDS) codes. Using these properties, we show that the decoder error probability of MRD codes with error correction capability t decreases exponentially with t^2 based on the assumption that all errors with the same rank are equally likely. We argue that the channel based on this assumption is an approximation of a channel corrupted by crisscross errors.<|reference_end|> | arxiv | @article{gadouleau2006decoder,
title={Decoder Error Probability of MRD Codes},
author={Maximilien Gadouleau and Zhiyuan Yan},
journal={arXiv preprint arXiv:cs/0610148},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610148},
primaryClass={cs.IT math.IT}
} | gadouleau2006decoder |
arxiv-675021 | cs/0610149 | Canonical decomposition of catenation of factorial languages | <|reference_start|>Canonical decomposition of catenation of factorial languages: According to a previous result by S. V. Avgustinovich and the author, each factorial language admits a unique canonical decomposition to a catenation of factorial languages. In this paper, we analyze the appearance of the canonical decomposition of a catenation of two factorial languages whose canonical decompositions are given.<|reference_end|> | arxiv | @article{frid2006canonical,
title={Canonical decomposition of catenation of factorial languages},
author={A. Frid},
journal={arXiv preprint arXiv:cs/0610149},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610149},
primaryClass={cs.LO}
} | frid2006canonical |
arxiv-675022 | cs/0610150 | On LAO Testing of Multiple Hypotheses for Many Independent Objects | <|reference_start|>On LAO Testing of Multiple Hypotheses for Many Independent Objects: The problem of many hypotheses logarithmically asymptotically optimal (LAO) testing for a model consisting of three or more independent objects is solved. It is supposed that $M$ probability distributions are known and each object, independently of the others, follows one of them. The matrix of asymptotic interdependencies (reliability--reliability functions) of all possible pairs of the error probability exponents (reliabilities) in optimal testing for this model is studied. This problem was introduced (and solved for the case of two objects and two given probability distributions) by Ahlswede and Haroutunian. The model with two independent objects with $M$ hypotheses was explored by Haroutunian and Hakobyan.<|reference_end|> | arxiv | @article{haroutunian2006on,
title={On LAO Testing of Multiple Hypotheses for Many Independent Objects},
author={Evgueni A. Haroutunian (Associate Member, IEEE) and Parandzem M.
Hakobyan},
journal={arXiv preprint arXiv:cs/0610150},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610150},
primaryClass={cs.IT math.IT}
} | haroutunian2006on |
arxiv-675023 | cs/0610151 | Anytime coding on the infinite bandwidth AWGN channel: A sequential semi-orthogonal optimal code | <|reference_start|>Anytime coding on the infinite bandwidth AWGN channel: A sequential semi-orthogonal optimal code: It is well known that orthogonal coding can be used to approach the Shannon capacity of the power-constrained AWGN channel without a bandwidth constraint. This correspondence describes a semi-orthogonal variation of pulse position modulation that is sequential in nature -- bits can be ``streamed across'' without having to buffer up blocks of bits at the transmitter. ML decoding results in an exponentially small probability of error as a function of tolerated receiver delay and thus eventually a zero probability of error on every transmitted bit. In the high-rate regime, a matching upper bound is given on the delay error exponent. We close with some comments on the case with feedback and the connections to the capacity per unit cost problem.<|reference_end|> | arxiv | @article{sahai2006anytime,
title={Anytime coding on the infinite bandwidth AWGN channel: A sequential
semi-orthogonal optimal code},
author={Anant Sahai},
journal={arXiv preprint arXiv:cs/0610151},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610151},
primaryClass={cs.IT math.IT}
} | sahai2006anytime |
arxiv-675024 | cs/0610152 | An unbreakable cryptosystem for common people | <|reference_start|>An unbreakable cryptosystem for common people: It has been found that an algorithm can generate true random numbers on a classical computer. The algorithm can be used to generate unbreakable message PINs (personal identification numbers) and passwords.<|reference_end|> | arxiv | @article{mitra2006an,
title={An unbreakable cryptosystem for common people},
author={Arindam Mitra},
journal={arXiv preprint arXiv:cs/0610152},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610152},
primaryClass={cs.CR}
} | mitra2006an |
arxiv-675025 | cs/0610153 | Most Programs Stop Quickly or Never Halt | <|reference_start|>Most Programs Stop Quickly or Never Halt: Since many real-world problems arising in the fields of compiler optimisation, automated software engineering, formal proof systems, and so forth are equivalent to the Halting Problem--the most notorious undecidable problem--there is a growing interest, not only academically, in understanding the problem better and in providing alternative solutions. Halting computations can be recognised by simply running them; the main difficulty is to detect non-halting programs. Our approach is to have the probability space extend over both space and time and to consider the probability that a random $N$-bit program has halted by a random time. We postulate an a priori computable probability distribution on all possible runtimes and we prove that given an integer k>0, we can effectively compute a time bound T such that the probability that an N-bit program will eventually halt given that it has not halted by T is smaller than 2^{-k}. We also show that the set of halting programs (which is computably enumerable, but not computable) can be written as a disjoint union of a computable set and a set of effectively vanishing probability. Finally, we show that ``long'' runtimes are effectively rare. More formally, the set of times at which an N-bit program can stop after the time 2^{N+constant} has effectively zero density.<|reference_end|> | arxiv | @article{calude2006most,
title={Most Programs Stop Quickly or Never Halt},
author={Cristian S. Calude and Michael A. Stay},
journal={arXiv preprint arXiv:cs/0610153},
year={2006},
number={CDMTCS-284},
archivePrefix={arXiv},
eprint={cs/0610153},
primaryClass={cs.IT math.IT}
} | calude2006most |
arxiv-675026 | cs/0610154 | Usage Impact Factor: the effects of sample characteristics on usage-based impact metrics | <|reference_start|>Usage Impact Factor: the effects of sample characteristics on usage-based impact metrics: There exist ample demonstrations that indicators of scholarly impact analogous to the citation-based ISI Impact Factor can be derived from usage data. However, contrary to the ISI IF which is based on citation data generated by the global community of scholarly authors, so far usage can only be practically recorded at a local level leading to community-specific assessments of scholarly impact that are difficult to generalize to the global scholarly community. We define a journal Usage Impact Factor which mimics the definition of the Thomson Scientific's ISI Impact Factor. Usage Impact Factor rankings are calculated on the basis of a large-scale usage data set recorded for the California State University system from 2003 to 2005. The resulting journal rankings are then compared to Thomson Scientific's ISI Impact Factor which is used as a baseline indicator of general impact. Our results indicate that impact as derived from California State University usage reflects the particular scientific and demographic characteristics of its communities.<|reference_end|> | arxiv | @article{bollen2006usage,
title={Usage Impact Factor: the effects of sample characteristics on
usage-based impact metrics},
author={Johan Bollen and Herbert Van de Sompel},
journal={Journal of the American Society for Information Science and
Technology, 59(1), 2008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610154},
primaryClass={cs.DL}
} | bollen2006usage |
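The journal Usage Impact Factor defined in the abstract above mimics the 2-year ISI Impact Factor, with usage counts in place of citations. A minimal sketch of such a metric, assuming simple dict records for articles and usage events (the field names and data layout are illustrative, not the paper's actual schema):

```python
def usage_impact_factor(usage_events, articles, journal, year):
    """Usage analogue of the 2-year Impact Factor: usage recorded in `year`
    of a journal's articles published in the two preceding years, divided
    by the number of such articles (window and fields are assumptions)."""
    window = {year - 2, year - 1}
    eligible = {a["id"] for a in articles
                if a["journal"] == journal and a["year"] in window}
    if not eligible:
        return 0.0
    uses = sum(1 for e in usage_events
               if e["year"] == year and e["article"] in eligible)
    return uses / len(eligible)

articles = [
    {"id": "a1", "journal": "J", "year": 2003},
    {"id": "a2", "journal": "J", "year": 2004},
    {"id": "a3", "journal": "J", "year": 2005},   # outside the 2-year window
]
usage_events = [
    {"article": "a1", "year": 2005},
    {"article": "a1", "year": 2005},
    {"article": "a2", "year": 2005},
    {"article": "a3", "year": 2005},              # not counted
]
uif = usage_impact_factor(usage_events, articles, "J", 2005)   # 3 uses / 2 articles
```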
arxiv-675027 | cs/0610155 | Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$ Using Cauchy Random Projections | <|reference_start|>Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$ Using Cauchy Random Projections: For dimension reduction in $l_1$, the method of {\em Cauchy random projections} multiplies the original data matrix $\mathbf{A} \in\mathbb{R}^{n\times D}$ with a random matrix $\mathbf{R} \in \mathbb{R}^{D\times k}$ ($k\ll\min(n,D)$) whose entries are i.i.d. samples of the standard Cauchy C(0,1). Because of the impossibility results, one can not hope to recover the pairwise $l_1$ distances in $\mathbf{A}$ from $\mathbf{B} = \mathbf{AR} \in \mathbb{R}^{n\times k}$, using linear estimators without incurring large errors. However, nonlinear estimators are still useful for certain applications in data stream computation, information retrieval, learning, and data mining. We propose three types of nonlinear estimators: the bias-corrected sample median estimator, the bias-corrected geometric mean estimator, and the bias-corrected maximum likelihood estimator. The sample median estimator and the geometric mean estimator are asymptotically (as $k\to \infty$) equivalent but the latter is more accurate at small $k$. We derive explicit tail bounds for the geometric mean estimator and establish an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $l_1$, which is weaker than the classical JL lemma for dimension reduction in $l_2$. Asymptotically, both the sample median estimator and the geometric mean estimators are about 80% efficient compared to the maximum likelihood estimator (MLE). We analyze the moments of the MLE and propose approximating the distribution of the MLE by an inverse Gaussian.<|reference_end|> | arxiv | @article{li2006nonlinear,
title={Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$
Using Cauchy Random Projections},
author={Ping Li and Trevor J. Hastie and Kenneth W. Church},
journal={arXiv preprint arXiv:cs/0610155},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610155},
primaryClass={cs.DS cs.IR cs.LG}
} | li2006nonlinear |
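The projection step described above, and the simple sample-median estimator (shown here without the paper's small-sample bias correction), can be sketched as follows; the toy data and the choice of k are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_project(A, k, rng):
    """Multiply the n x D data matrix by a D x k matrix of i.i.d. standard
    Cauchy entries, as in the method of Cauchy random projections."""
    R = rng.standard_cauchy(size=(A.shape[1], k))
    return A @ R

def l1_median_estimate(b1, b2):
    """Plain sample-median estimator: by Cauchy stability, each entry of
    b1 - b2 is Cauchy with scale equal to the true l1 distance, and the
    median of |Cauchy(0, d)| is exactly d."""
    return np.median(np.abs(b1 - b2))

# two rows whose l1 distance is known
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 0.0, 0.0, 0.0]])
true_l1 = np.abs(A[0] - A[1]).sum()          # 10.0
B = cauchy_project(A, k=5000, rng=rng)
est = l1_median_estimate(B[0], B[1])         # close to 10 for large k
```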
arxiv-675028 | cs/0610156 | Adaptation Knowledge Discovery from a Case Base | <|reference_start|>Adaptation Knowledge Discovery from a Case Base: In case-based reasoning, the adaptation step depends in general on domain-dependent knowledge, which motivates studies on adaptation knowledge acquisition (AKA). CABAMAKA is an AKA system based on principles of knowledge discovery from databases. This system explores the variations within the case base to elicit adaptation knowledge. It has been successfully tested in an application of case-based decision support to breast cancer treatment.<|reference_end|> | arxiv | @article{d'aquin2006adaptation,
title={Adaptation Knowledge Discovery from a Case Base},
author={Mathieu D'Aquin (INRIA Lorraine - LORIA, KMI), Fadi Badra (INRIA
Lorraine - LORIA), Sandrine Lafrogne (INRIA Lorraine - LORIA), Jean Lieber
(INRIA Lorraine - LORIA), Amedeo Napoli (INRIA Lorraine - LORIA), Laszlo
Szathmary (INRIA Lorraine - LORIA)},
journal={Proceedings of the 17th European Conference on Artificial
  Intelligence (ECAI-06), Trento, G. Brewka (Ed.) (2006) 795--796},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610156},
primaryClass={cs.AI}
} | d'aquin2006adaptation |
arxiv-675029 | cs/0610157 | A Genetic Algorithm Approach to the Grooming of Dynamic Traffic in Tree and Star Networks with Bifurcation | <|reference_start|>A Genetic Algorithm Approach to the Grooming of Dynamic Traffic in Tree and Star Networks with Bifurcation: Traffic grooming is widely employed to reduce the number of ADM's and wavelengths. We consider the problem of grooming dynamic traffic in WDM tree and star networks in this paper. To achieve better results, we applied bifurcation techniques to the grooming of arbitrary dynamic traffic in a strictly non-blocking manner in these networks. Three splitting methods, including Traffic-Cutting, Traffic-Dividing and Synthesized-Splitting, were proposed. A genetic algorithm (GA) approach based on these methods was proposed to tackle such grooming problems in tree and star networks. The performance of these algorithms was tested under different conditions in star and tree networks. Computer simulation results showed that our algorithm is efficient in reducing both the numbers of ADM's and wavelengths.<|reference_end|> | arxiv | @article{liu2006a,
title={A Genetic Algorithm Approach to the Grooming of Dynamic Traffic in Tree
and Star Networks with Bifurcation},
author={Kun-hong Liu and Yong Xu and De-Shuang Huang},
journal={arXiv preprint arXiv:cs/0610157},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610157},
primaryClass={cs.NI}
} | liu2006a |
arxiv-675030 | cs/0610158 | Considering users' behaviours in improving the responses of an information base | <|reference_start|>Considering users' behaviours in improving the responses of an information base: In this paper, our aim is to propose a model that helps users make efficient use of an information system, within the organization represented by the IS, in order to resolve their decisional problems. In other words, we want to aid the user within an organization in obtaining the information that corresponds to his needs (informational needs that result from his decisional problems). This type of information system is what we refer to as an economic intelligence system because of its support for the economic intelligence processes of the organisation. Our assumption is that every EI process begins with the identification of the decisional problem, which is translated into an informational need. This need is then translated into one or many information search problems (ISPs). We also assume that an ISP is expressed in terms of the user's expectations and that these expectations determine the activities or behaviors of the user when he/she uses an IS. The model we are proposing is used in the design of the IS so that the process of retrieving solutions, and the responses given by the system to an ISP, are based on these behaviours and correspond to the needs of the user.<|reference_end|> | arxiv | @article{afolabi2006considering,
title={Considering users' behaviours in improving the responses of an
information base},
author={Babajide Afolabi (LORIA), Odile Thiery (LORIA)},
  journal={In: I International Conference on Multidisciplinary Information
Sciences and Technologies (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610158},
primaryClass={cs.LG cs.IR}
} | afolabi2006considering |
arxiv-675031 | cs/0610159 | Boolean Functions, Projection Operators and Quantum Error Correcting Codes | <|reference_start|>Boolean Functions, Projection Operators and Quantum Error Correcting Codes: This paper describes a fundamental correspondence between Boolean functions and projection operators in Hilbert space. The correspondence is widely applicable, and it is used in this paper to provide a common mathematical framework for the design of both additive and non-additive quantum error correcting codes. The new framework leads to the construction of a variety of codes including an infinite class of codes that extend the original ((5,6,2)) code found by Rains [21]. It also extends to operator quantum error correcting codes.<|reference_end|> | arxiv | @article{aggarwal2006boolean,
title={Boolean Functions, Projection Operators and Quantum Error Correcting
Codes},
author={Vaneet Aggarwal and A. Robert Calderbank},
journal={IEEE Trans. Inf. Theory, vol. 54, no. 4, pp.1700-1707, Apr. 2008.},
year={2006},
doi={10.1109/TIT.2008.917720},
archivePrefix={arXiv},
eprint={cs/0610159},
primaryClass={cs.IT math.IT quant-ph}
} | aggarwal2006boolean |
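The basic correspondence named in the abstract above can be illustrated in its simplest form: a Boolean function f on n bits determines the diagonal projector onto the span of the computational basis states on which f is 1. This is a generic sketch of that mapping, not the paper's code constructions:

```python
import numpy as np
from itertools import product

def projector_from_boolean(f, n):
    """Diagonal projection operator on C^(2^n) associated with a Boolean
    function f: the orthogonal projector onto span{|x> : f(x) = 1}."""
    diag = [f(x) for x in product((0, 1), repeat=n)]
    return np.diag(np.array(diag, dtype=float))

# example: the 2-bit AND function, true on exactly one input
P = projector_from_boolean(lambda x: x[0] & x[1], n=2)
```

A projector must be idempotent (P^2 = P) and Hermitian, which the construction guarantees because the diagonal holds only 0s and 1s.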
arxiv-675032 | cs/0610160 | A Non-Orthogonal Distributed Space-Time Coded Protocol Part II-Code Construction and DM-G Tradeoff | <|reference_start|>A Non-Orthogonal Distributed Space-Time Coded Protocol Part II-Code Construction and DM-G Tradeoff: This is the second part of a two-part series of papers. In this paper, for the generalized non-orthogonal amplify and forward (GNAF) protocol presented in Part-I, a construction of a new family of distributed space-time codes based on Co-ordinate Interleaved Orthogonal Designs (CIOD) which result in reduced Maximum Likelihood (ML) decoding complexity at the destination is proposed. Further, it is established that the recently proposed Toeplitz space-time codes as well as space-time block codes (STBCs) from cyclic division algebras can be used in GNAF protocol. Finally, a lower bound on the optimal Diversity-Multiplexing Gain (DM-G) tradeoff for the GNAF protocol is established and it is shown that this bound approaches the transmit diversity bound asymptotically as the number of relays and the number of channels uses increases.<|reference_end|> | arxiv | @article{rajan2006a,
title={A Non-Orthogonal Distributed Space-Time Coded Protocol Part II-Code
Construction and DM-G Tradeoff},
author={G. Susinder Rajan and B. Sundar Rajan},
journal={Proceedings of IEEE ITW'06, Chengdu, China, October 22-26, 2006,
pp. 488-492},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610160},
primaryClass={cs.IT math.IT}
} | rajan2006a |
arxiv-675033 | cs/0610161 | A Non-Orthogonal Distributed Space-Time Coded Protocol Part I: Signal Model and Design Criteria | <|reference_start|>A Non-Orthogonal Distributed Space-Time Coded Protocol Part I: Signal Model and Design Criteria: In this two-part series of papers, a generalized non-orthogonal amplify and forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed. Transmission in the GNAF protocol comprises of two phases - the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed by the protocol of Jing and Hassibi on the code structure. In Part-I of this paper, a code design criteria is obtained and it is shown that the GNAF protocol is delay efficient and coding gain efficient as well. Moreover GNAF protocol enables the use of sphere decoders at the destination with a non-exponential Maximum likelihood (ML) decoding complexity. In Part-II, several low decoding complexity code constructions are studied and a lower bound on the Diversity-Multiplexing Gain tradeoff of the GNAF protocol is obtained.<|reference_end|> | arxiv | @article{rajan2006a,
title={A Non-Orthogonal Distributed Space-Time Coded Protocol Part I: Signal
Model and Design Criteria},
author={G. Susinder Rajan and B. Sundar Rajan},
journal={Proceedings of 2006 IEEE Information Theory Workshop (ITW'06),
Oct. 22-26, 2006, Chengdu, China, pp.385-389},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610161},
primaryClass={cs.IT math.IT}
} | rajan2006a |
arxiv-675034 | cs/0610162 | Multigroup-Decodable STBCs from Clifford Algebras | <|reference_start|>Multigroup-Decodable STBCs from Clifford Algebras: A Space-Time Block Code (STBC) in $K$ symbols (variables) is called a $g$-group decodable STBC if its maximum-likelihood decoding metric can be written as a sum of $g$ terms such that each term is a function of a subset of the $K$ variables and each variable appears in only one term. In this paper we provide a general structure of the weight matrices of multi-group decodable codes using Clifford algebras. Without assuming the number of variables in each group to be the same, a method of explicitly constructing the weight matrices of full-diversity, delay-optimal $g$-group decodable codes is presented for an arbitrary number of antennas. For the special case of $N_t=2^a$ we construct two subclasses of codes: (i) A class of $2a$-group decodable codes with rate $\frac{a}{2^{(a-1)}}$, which is, equivalently, a class of Single-Symbol Decodable codes, (ii) A class of $(2a-2)$-group decodable codes with rate $\frac{(a-1)}{2^{(a-2)}}$, i.e., a class of Double-Symbol Decodable codes. Simulation results show that the DSD codes of this paper perform better than previously known Quasi-Orthogonal Designs.<|reference_end|> | arxiv | @article{karmakar2006multigroup-decodable,
title={Multigroup-Decodable STBCs from Clifford Algebras},
author={Sanjay Karmakar and B. Sundar Rajan},
journal={Proceedings of 2006 IEEE Information Theory Workshop (ITW 2006),
October 22-26, 2006, Chengdu, China, pp.448-452},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610162},
primaryClass={cs.IT math.IT}
} | karmakar2006multigroup-decodable |
arxiv-675035 | cs/0610163 | A Taxonomy of Peer-to-Peer Based Complex Queries: a Grid perspective | <|reference_start|>A Taxonomy of Peer-to-Peer Based Complex Queries: a Grid perspective: Grid superscheduling requires support for efficient and scalable discovery of resources. Resource discovery activities involve searching for the appropriate resource types that match the user's job requirements. To accomplish this goal, a resource discovery system that supports the desired look-up operation is mandatory. Various kinds of solutions to this problem have been suggested, including the centralised and hierarchical information server approach. However, both of these approaches have serious limitations in regards to scalability, fault-tolerance and network congestion. To overcome these limitations, organising resource information using Peer-to-Peer (P2P) network model has been proposed. Existing approaches advocate an extension to structured P2P protocols, to support the Grid resource information system (GRIS). In this paper, we identify issues related to the design of such an efficient, scalable, fault-tolerant, consistent and practical GRIS system using a P2P network model. We compile these issues into various taxonomies in sections III and IV. Further, we look into existing works that apply P2P based network protocols to GRIS. We think that this taxonomy and its mapping to relevant systems would be useful for academic and industry based researchers who are engaged in the design of scalable Grid systems.<|reference_end|> | arxiv | @article{ranjan2006a,
title={A Taxonomy of Peer-to-Peer Based Complex Queries: a Grid perspective},
author={Rajiv Ranjan, Aaron Harwood and Rajkumar Buyya},
journal={arXiv preprint arXiv:cs/0610163},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610163},
primaryClass={cs.NI cs.DC cs.DS}
} | ranjan2006a |
arxiv-675036 | cs/0610164 | Complexity of Data Flow Analysis for Non-Separable Frameworks | <|reference_start|>Complexity of Data Flow Analysis for Non-Separable Frameworks: The complexity of the round-robin method of intraprocedural data flow analysis is measured in the number of iterations over the control flow graph. Existing complexity bounds realistically explain the complexity only of bit-vector frameworks, which are separable. In this paper we define the complexity bounds for non-separable frameworks by quantifying the interdependences among the data flow information of program entities using an Entity Dependence Graph.<|reference_end|> | arxiv | @article{karkare2006complexity,
title={Complexity of Data Flow Analysis for Non-Separable Frameworks},
author={Bageshri Karkare (Sathe) and Uday Khedker},
journal={arXiv preprint arXiv:cs/0610164},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610164},
primaryClass={cs.PL}
} | karkare2006complexity |
arxiv-675037 | cs/0610165 | Decentralized Failure Diagnosis of Stochastic Discrete Event Systems | <|reference_start|>Decentralized Failure Diagnosis of Stochastic Discrete Event Systems: Recently, the diagnosability of {\it stochastic discrete event systems} (SDESs) was investigated in the literature, and the failure diagnosis considered was {\it centralized}. In this paper, we propose an approach to {\it decentralized} failure diagnosis of SDESs, where the stochastic system uses multiple local diagnosers to detect failures and each local diagnoser possesses its own information. In a way, the centralized failure diagnosis of SDESs can be viewed as a special case of the decentralized failure diagnosis presented in this paper with only one projection. The main contributions are as follows: (1) We formalize the notion of codiagnosability for stochastic automata, which means that a failure can be detected by at least one local stochastic diagnoser within a finite delay. (2) We construct a codiagnoser from a given stochastic automaton with multiple projections, and the codiagnoser associated with the local diagnosers is used to test the codiagnosability condition of SDESs. (3) We deal with a number of basic properties of the codiagnoser. In particular, a necessary and sufficient condition for the codiagnosability of SDESs is presented. (4) We give a detailed computing method to check whether codiagnosability is violated. And (5) some examples are described to illustrate the applications of the codiagnosability and its computing method.<|reference_end|> | arxiv | @article{liu2006decentralized,
title={Decentralized Failure Diagnosis of Stochastic Discrete Event Systems},
author={Fuchun Liu and Daowen Qiu and Hongyan Xing and Zhujun Fan},
journal={IEEE Transactions on Automatic Control, 53 (2) (2008) 535-546.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610165},
primaryClass={cs.AI}
} | liu2006decentralized |
arxiv-675038 | cs/0610166 | Tree Automata Make Ordinal Theory Easy | <|reference_start|>Tree Automata Make Ordinal Theory Easy: We give a new simple proof of the decidability of the First Order Theory of (omega^omega^i,+) and the Monadic Second Order Theory of (omega^i,<), improving the complexity in both cases. Our algorithm is based on tree automata and a new representation of (sets of) ordinals by (infinite) trees.<|reference_end|> | arxiv | @article{cachat2006tree,
title={Tree Automata Make Ordinal Theory Easy},
author={Thierry Cachat (LIAFA)},
journal={Foundations of Software Technology and Theoretical Computer
Science, 26th International Conference, 2006, Proceedings. FSTTCS 2006 (2006)
286-297},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610166},
primaryClass={cs.GT}
} | cachat2006tree |
arxiv-675039 | cs/0610167 | ECA-RuleML: An Approach combining ECA Rules with temporal interval-based KR Event/Action Logics and Transactional Update Logics | <|reference_start|>ECA-RuleML: An Approach combining ECA Rules with temporal interval-based KR Event/Action Logics and Transactional Update Logics: An important problem to be addressed within Event-Driven Architecture (EDA) is how to correctly and efficiently capture and process the event/action-based logic. This paper endeavors to bridge the gap between the Knowledge Representation (KR) approaches based on durable events/actions and such formalisms as event calculus, on one hand, and event-condition-action (ECA) reaction rules extending the approach of active databases that view events as instantaneous occurrences and/or sequences of events, on the other. We propose a formalism based on reaction rules (ECA rules) and a novel interval-based event logic, and present a concrete RuleML-based syntax, semantics and implementation. We further evaluate this approach theoretically, experimentally and on an example derived from common industry use cases and illustrate its benefits.<|reference_end|> | arxiv | @article{paschke2006eca-ruleml:,
title={ECA-RuleML: An Approach combining ECA Rules with temporal interval-based
KR Event/Action Logics and Transactional Update Logics},
author={Adrian Paschke},
journal={arXiv preprint arXiv:cs/0610167},
year={2006},
number={IBIS, TUM, Technical Report 11/05},
archivePrefix={arXiv},
eprint={cs/0610167},
primaryClass={cs.AI cs.LO cs.MA cs.SE}
} | paschke2006eca-ruleml: |
arxiv-675040 | cs/0610168 | Presentation Theorems for Coded Character Sets | <|reference_start|>Presentation Theorems for Coded Character Sets: The notion of 'presentation', as used in combinatorial group theory, is applied to coded character sets (CCSs) - sets which facilitate the interchange of messages in a digital computer network (DCN). By grouping each element of the set into two portions and using the idea of group presentation (whereby a group is specified by its set of generators and its set of relators), the presentation of a CCS is described. This is illustrated using the Extended Binary Coded Decimal Interchange Code (EBCDIC), which is one of the most popular CCSs in DCNs. Key words: Group presentation, coded character set, digital computer network<|reference_end|> | arxiv | @article{oluwade2006presentation,
title={Presentation Theorems for Coded Character Sets},
author={Dele Oluwade},
journal={arXiv preprint arXiv:cs/0610168},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610168},
primaryClass={cs.DM}
} | oluwade2006presentation |
arxiv-675041 | cs/0610169 | On the User Selection for MIMO Broadcast Channels | <|reference_start|>On the User Selection for MIMO Broadcast Channels: In this paper, a downlink communication system, in which a Base Station (BS) equipped with $M$ antennas communicates with $N$ users each equipped with $K$ receive antennas, is considered. An efficient suboptimum algorithm is proposed for selecting a set of users in order to maximize the sum-rate throughput of the system. For the asymptotic case when $N$ tends to infinity, the necessary and sufficient conditions for achieving the maximum sum-rate throughput, such that the difference between the achievable sum-rate and the maximum value approaches zero, are derived. The complexity of our algorithm is investigated in terms of the required amount of feedback from the users to the base station, as well as the number of searches required for selecting the users. It is shown that the proposed method is capable of achieving a large portion of the sum-rate capacity, with a very low complexity.<|reference_end|> | arxiv | @article{bayesteh2006on,
title={On the User Selection for MIMO Broadcast Channels},
author={Alireza Bayesteh and Amir Keyvan Khandani},
journal={arXiv preprint arXiv:cs/0610169},
year={2006},
number={Technical Report #2005-16},
archivePrefix={arXiv},
eprint={cs/0610169},
primaryClass={cs.IT math.IT}
} | bayesteh2006on |
arxiv-675042 | cs/0610170 | Low-complexity modular policies: learning to play Pac-Man and a new framework beyond MDPs | <|reference_start|>Low-complexity modular policies: learning to play Pac-Man and a new framework beyond MDPs: In this paper we propose a method that learns to play Pac-Man. We define a set of high-level observation and action modules. Actions are temporally extended, and multiple action modules may be in effect concurrently. A decision of the agent is represented as a rule-based policy. For learning, we apply the cross-entropy method, a recent global optimization algorithm. The learned policies reached a better score than the hand-crafted policy, and neared the score of average human players. We argue that learning is successful mainly because (i) the policy space includes the combination of individual actions and thus it is sufficiently rich, (ii) the search is biased towards low-complexity policies, and low-complexity solutions can be found quickly if they exist. Based on these principles, we formulate a new theoretical framework, which can be found in the Appendix as supporting material.<|reference_end|> | arxiv | @article{szita2006low-complexity,
title={Low-complexity modular policies: learning to play Pac-Man and a new
framework beyond MDPs},
author={Istvan Szita and Andras Lorincz},
journal={arXiv preprint arXiv:cs/0610170},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610170},
primaryClass={cs.LG cs.AI}
} | szita2006low-complexity |
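The cross-entropy method named in the abstract above can be sketched in its generic form for binary parameter vectors (e.g., which rules a policy includes). The fitness function below is a stand-in for the Pac-Man evaluator, and all parameter values are illustrative assumptions:

```python
import numpy as np

def cross_entropy_method(fitness, n_bits, pop=100, elite_frac=0.1,
                         iters=30, alpha=0.7, seed=0):
    """Generic cross-entropy optimization over {0,1}^n_bits: sample a
    population from independent Bernoulli(p), then refit p towards the
    mean of the elite (highest-fitness) samples, with smoothing alpha."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    n_elite = max(1, int(pop * elite_frac))
    best, best_fit = None, -np.inf
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(int)
        fits = np.array([fitness(s) for s in samples])
        elite = samples[np.argsort(fits)[-n_elite:]]
        p = alpha * elite.mean(axis=0) + (1 - alpha) * p
        if fits.max() > best_fit:
            best_fit = fits.max()
            best = samples[fits.argmax()].copy()
    return best, best_fit

# stand-in fitness: reward matching a fixed target subset of "rules"
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
best, best_fit = cross_entropy_method(lambda s: -np.abs(s - target).sum(),
                                      n_bits=10)
```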
arxiv-675043 | cs/0610171 | Coupling of quantum angular momenta: an insight into analogic/discrete and local/global models of computation | <|reference_start|>Coupling of quantum angular momenta: an insight into analogic/discrete and local/global models of computation: In the past few years there has been a tumultuous activity aimed at introducing novel conceptual schemes for quantum computing. The approach proposed in (Marzuoli A and Rasetti M 2002, 2005a) relies on the (re)coupling theory of SU(2) angular momenta and can be viewed as a generalization to arbitrary values of the spin variables of the usual quantum-circuit model based on `qubits' and Boolean gates. Computational states belong to finite-dimensional Hilbert spaces labelled by both discrete and continuous parameters, and unitary gates may depend on quantum numbers ranging over finite sets of values as well as continuous (angular) variables. Such a framework is an ideal playground to discuss discrete (digital) and analogic computational processes, together with their relationships occurring when a consistent semiclassical limit takes place on discrete quantum gates. When working with purely discrete unitary gates, the simulator is naturally modelled as families of quantum finite-state machines, which in turn represent discrete versions of topological quantum computation models. We argue that our model embodies a sort of unifying paradigm for computing inspired by Nature and, even more ambitiously, a universal setting in which suitably encoded quantum symbolic manipulations of combinatorial, topological and algebraic problems might find their `natural' computational reference model.<|reference_end|> | arxiv | @article{marzuoli2006coupling,
title={Coupling of quantum angular momenta: an insight into analogic/discrete
and local/global models of computation},
author={Annalisa Marzuoli and Mario Rasetti},
journal={Natural Computing Vol 6, No.2 (2007) 151-168},
year={2006},
doi={10.1007/s11047-006-9018-4},
archivePrefix={arXiv},
eprint={cs/0610171},
primaryClass={cs.CC quant-ph}
} | marzuoli2006coupling |
arxiv-675044 | cs/0610172 | On the Analysis and Generalization of Extended Visual Cryptography Schemes | <|reference_start|>On the Analysis and Generalization of Extended Visual Cryptography Schemes: An Extended Visual Cryptography Scheme (EVCS) was proposed by Ateniese et al. [3] to protect a binary secret image with meaningful (innocent-looking) shares. This is implemented by concatenating an extended matrix to each basis matrix. The minimum size of the extended matrix was obtained from a hypergraph coloring model and the scheme was designed for binary images only [3]. In this paper, we give a more concise derivation for this matrix extension for color images. Furthermore, we present a (k, n) scheme to protect multiple color images with meaningful shares. This scheme is an extension of the (n, n) VCS for multiple binary images proposed in the Droste scheme [2].<|reference_end|> | arxiv | @article{wang2006on,
title={On the Analysis and Generalization of Extended Visual Cryptography
Schemes},
author={DaoShun Wang and Feng Yi and Xiaobo Li and Ping Luo and Yiqi Dai},
journal={arXiv preprint arXiv:cs/0610172},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610172},
primaryClass={cs.CR}
} | wang2006on |
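The baseline these extended schemes build on, the classic Naor-Shamir 2-out-of-2 visual cryptography scheme, can be sketched as follows (this is the standard construction, not the extended matrices or (k, n) scheme of the paper above):

```python
import random

PATTERNS = [(0, 1), (1, 0)]          # subpixel columns; 1 = black, 0 = white

def make_shares(image, seed=0):
    """2-out-of-2 VCS: each pixel expands to two subpixels. A white pixel
    gets identical random patterns on both shares, a black pixel gets
    complementary ones, so each share alone looks uniformly random."""
    rng = random.Random(seed)
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:            # 0 = white, 1 = black
            pat = rng.choice(PATTERNS)
            r1.extend(pat)
            r2.extend(pat if pixel == 0 else tuple(1 - b for b in pat))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(s1, s2):
    """Physically stacking transparencies amounts to a pixelwise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0],
          [0, 1]]
s1, s2 = make_shares(secret)
recon = stack(s1, s2)
```

Stacking makes black pixels fully black (both subpixels) and white pixels half black (one subpixel), which is what lets the eye recover the secret with 50% contrast.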
arxiv-675045 | cs/0610173 | On Degree-Based Decentralized Search in Complex Networks | <|reference_start|>On Degree-Based Decentralized Search in Complex Networks: Decentralized search aims to find the target node in a large network by using only local information. Its applications include peer-to-peer file sharing, web search and anything else that requires locating a specific target in a complex system. In this paper, we examine the degree-based decentralized search method. Specifically, we evaluate the efficiency of the method in different cases with different amounts of available local information. In addition, we propose a simple refinement algorithm for significantly shortening the length of the route that has been found. Some insights useful for the future development of efficient decentralized search schemes have been obtained.<|reference_end|> | arxiv | @article{xiao2006on,
title={On Degree-Based Decentralized Search in Complex Networks},
author={Shi Xiao and Gaoxi Xiao},
journal={arXiv preprint arXiv:cs/0610173},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610173},
primaryClass={cs.PF}
} | xiao2006on |
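The degree-based method examined above forwards a query greedily to the highest-degree unvisited neighbour. A minimal sketch on a toy adjacency structure (the graph and hop limit are illustrative, and this omits the paper's refinement algorithm):

```python
def degree_search(adj, start, target, max_hops=50):
    """Greedy degree-based decentralized search: at each step forward the
    query to the unvisited neighbour with the highest degree, using only
    local information, until the target is a neighbour or hops run out."""
    route, visited, current = [start], {start}, start
    for _ in range(max_hops):
        if target in adj[current]:
            route.append(target)
            return route
        candidates = [v for v in adj[current] if v not in visited]
        if not candidates:
            return None                      # dead end: search failed
        current = max(candidates, key=lambda v: len(adj[v]))
        visited.add(current)
        route.append(current)
    return None

# small example: node 1 is the hub, and node 4 bridges to the target 5
adj = {
    0: {1, 2},
    1: {0, 2, 3, 4},
    2: {0, 1},
    3: {1, 4},
    4: {1, 3, 5},
    5: {4},
}
route = degree_search(adj, start=0, target=5)   # -> [0, 1, 4, 5]
```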
arxiv-675046 | cs/0610174 | A Fixed-Parameter Algorithm for #SAT with Parameter Incidence Treewidth | <|reference_start|>A Fixed-Parameter Algorithm for #SAT with Parameter Incidence Treewidth: We present an efficient fixed-parameter algorithm for #SAT parameterized by the incidence treewidth, i.e., the treewidth of the bipartite graph whose vertices are the variables and clauses of the given CNF formula; a variable and a clause are joined by an edge if and only if the variable occurs in the clause. Our algorithm runs in time O(4^k k l N), where k denotes the incidence treewidth, l denotes the size of a largest clause, and N denotes the number of nodes of the tree-decomposition.<|reference_end|> | arxiv | @article{samer2006a,
title={A Fixed-Parameter Algorithm for #SAT with Parameter Incidence Treewidth},
author={Marko Samer and Stefan Szeider},
journal={arXiv preprint arXiv:cs/0610174},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610174},
primaryClass={cs.DS cs.CC cs.LO}
} | samer2006a |
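For reference, the problem the O(4^k k l N) algorithm above solves — counting satisfying assignments of a CNF — can be stated with a naive O(2^n) enumeration counter. This is only the brute-force baseline that the treewidth-parameterized algorithm improves on when the incidence treewidth k is small, not the paper's dynamic programming over a tree decomposition:

```python
from itertools import product

def count_models(clauses, n_vars):
    """Naive #SAT: count satisfying assignments of a CNF given as a list
    of clauses, each clause a tuple of non-zero ints (negative literal =
    negated variable). Enumerates all 2^n_vars assignments."""
    count = 0
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1) and (not x1 or x2): only x1 = x2 = True satisfies both clauses
models = count_models([(1,), (-1, 2)], n_vars=2)   # -> 1
```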
arxiv-675047 | cs/0610175 | DSmT: A new paradigm shift for information fusion | <|reference_start|>DSmT: A new paradigm shift for information fusion: The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been and still remains of primal importance for the development of reliable information fusion systems. In this short survey paper, we present the theory of plausible and paradoxical reasoning, known as DSmT (Dezert-Smarandache Theory) in the literature, developed for dealing with imprecise, uncertain and potentially highly conflicting sources of information. DSmT is a new paradigm shift for information fusion and recent publications have shown the interest and the potential ability of DSmT to solve fusion problems where Dempster's rule used in Dempster-Shafer Theory (DST) provides counter-intuitive results or fails to provide a useful result at all. This paper is focused on the foundations of DSmT and on its main rules of combination (classic, hybrid and Proportional Conflict Redistribution rules). Shafer's model, on which DST is based, appears as a particular and specific case of the DSm hybrid model, which can be easily handled by DSmT as well. Several simple but illustrative examples are given throughout this paper to show the interest and the generality of this new theory.<|reference_end|> | arxiv | @article{dezert2006dsmt:,
title={DSmT: A new paradigm shift for information fusion},
author={Jean Dezert, Florentin Smarandache},
journal={arXiv preprint arXiv:cs/0610175},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610175},
primaryClass={cs.AI}
} | dezert2006dsmt: |
arxiv-675048 | cs/0611001 | A near-optimal fully dynamic distributed algorithm for maintaining sparse spanners | <|reference_start|>In this paper we devise an extremely efficient fully dynamic distributed algorithm for maintaining sparse spanners. Our results also include the first fully dynamic centralized algorithm for the problem with non-trivial bounds for both incremental and decremental updates. Finally, we devise a very efficient streaming algorithm for the problem.<|reference_end|> | arxiv | @article{elkin2006a,
title={A near-optimal fully dynamic distributed algorithm for maintaining
sparse spanners},
author={Michael Elkin},
journal={arXiv preprint arXiv:cs/0611001},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611001},
primaryClass={cs.DS}
} | elkin2006a |
arxiv-675049 | cs/0611002 | Lattice Quantization with Side Information: Codes, Asymptotics, and Applications in Sensor Networks | <|reference_start|>Lattice Quantization with Side Information: Codes, Asymptotics, and Applications in Sensor Networks: We consider the problem of rate/distortion with side information available only at the decoder. For the case of jointly-Gaussian source X and side information Y, and mean-squared error distortion, Wyner proved in 1976 that the rate/distortion function for this problem is identical to the conditional rate/distortion function R_{X|Y}, assuming the side information Y is available at the encoder. In this paper we construct a structured class of asymptotically optimal quantizers for this problem: under the assumption of high correlation between source X and side information Y, we show there exist quantizers within our class whose performance comes arbitrarily close to Wyner's bound. As an application illustrating the relevance of the high-correlation asymptotics, we also explore the use of these quantizers in the context of a problem of data compression for sensor networks, in a setup involving a large number of devices collecting highly correlated measurements within a confined area. An important feature of our formulation is that, although the per-node throughput of the network tends to zero as network size increases, so does the amount of information generated by each transmitter. This is a situation likely to be encountered often in practice, which allows us to cast under new--and more ``optimistic''--light some negative results on the transport capacity of large-scale wireless networks.<|reference_end|> | arxiv | @article{servetto2006lattice,
title={Lattice Quantization with Side Information: Codes, Asymptotics, and
Applications in Sensor Networks},
author={Sergio D. Servetto (Cornell University)},
journal={IEEE Transactions on Information Theory; 53(2):714-731, 2007.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611002},
primaryClass={cs.IT math.IT}
} | servetto2006lattice |
arxiv-675050 | cs/0611003 | A Scalable Protocol for Cooperative Time Synchronization Using Spatial Averaging | <|reference_start|>A Scalable Protocol for Cooperative Time Synchronization Using Spatial Averaging: Time synchronization is an important aspect of sensor network operation. However, it is well known that synchronization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as $1/\bar{N}$ where $\bar{N}$ is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance.<|reference_end|> | arxiv | @article{hu2006a,
title={A Scalable Protocol for Cooperative Time Synchronization Using Spatial
Averaging},
author={An-swol Hu and Sergio D. Servetto (Cornell University)},
journal={arXiv preprint arXiv:cs/0611003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611003},
primaryClass={cs.NI cs.IT math.IT}
} | hu2006a |
arxiv-675051 | cs/0611004 | Linear Abadi and Plotkin Logic | <|reference_start|>Linear Abadi and Plotkin Logic: We present a formalization of a version of Abadi and Plotkin's logic for parametricity for a polymorphic dual intuitionistic/linear type theory with fixed points, and show, following Plotkin's suggestions, that it can be used to define a wide collection of types, including existential types, inductive types, coinductive types and general recursive types. We show that the recursive types satisfy a universal property called dinaturality, and we develop reasoning principles for the constructed types. In the case of recursive types, the reasoning principle is a mixed induction/coinduction principle, with the curious property that coinduction holds for general relations, but induction only for a limited collection of ``admissible'' relations. A similar property was observed in Pitts' 1995 analysis of recursive types in domain theory. In a future paper we will develop a category theoretic notion of models of the logic presented here, and show how the results developed in the logic can be transferred to the models.<|reference_end|> | arxiv | @article{birkedal2006linear,
title={Linear Abadi and Plotkin Logic},
author={Lars Birkedal and Rasmus E. M{\o}gelberg and Rasmus Lerchedahl
Petersen},
journal={Logical Methods in Computer Science, Volume 2, Issue 5 (November
3, 2006) lmcs:2233},
year={2006},
doi={10.2168/LMCS-2(5:2)2006},
archivePrefix={arXiv},
eprint={cs/0611004},
primaryClass={cs.LO}
} | birkedal2006linear |
arxiv-675052 | cs/0611005 | Protocols for Scholarly Communication | <|reference_start|>Protocols for Scholarly Communication: CERN, the European Organization for Nuclear Research, has operated an institutional preprint repository for more than 10 years. The repository contains over 850,000 records of which more than 450,000 are full-text OA preprints, mostly in the field of particle physics, and it is integrated with the library's holdings of books, conference proceedings, journals and other grey literature. In order to encourage effective propagation and open access to scholarly material, CERN is implementing a range of innovative library services into its document repository: automatic keywording, reference extraction, collaborative management tools and bibliometric tools. Some of these services, such as user reviewing and automatic metadata extraction, could make up an interesting testbed for future publishing solutions and certainly provide an exciting environment for e-science possibilities. The future protocol for scientific communication should naturally guide authors towards OA publication and CERN wants to help reach a full open access publishing environment for the particle physics community and the related sciences in the next few years.<|reference_end|> | arxiv | @article{pepe2006protocols,
title={Protocols for Scholarly Communication},
author={Alberto Pepe and Joanne Yeomans},
journal={arXiv preprint arXiv:cs/0611005},
year={2006},
number={CERN-OPEN-2006-053},
archivePrefix={arXiv},
eprint={cs/0611005},
primaryClass={cs.DL}
} | pepe2006protocols |
arxiv-675053 | cs/0611006 | Evolving controllers for simulated car racing | <|reference_start|>Evolving controllers for simulated car racing: This paper describes the evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track. Five different controller architectures were compared, based on neural networks, force fields and action sequences. The controllers use either egocentric (first person), Newtonian (third person) or no information about the state of the car (open-loop controller). The only controller that was able to evolve good racing behaviour was based on a neural network acting on egocentric inputs.<|reference_end|> | arxiv | @article{togelius2006evolving,
title={Evolving controllers for simulated car racing},
author={Julian Togelius and Simon M. Lucas},
journal={Proceedings of the 2005 Congress on Evolutionary Computation,
pages 1906-1913},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611006},
primaryClass={cs.NE cs.LG cs.RO}
} | togelius2006evolving |
arxiv-675054 | cs/0611007 | MIMO Multichannel Beamforming: SER and Outage Using New Eigenvalue Distributions of Complex Noncentral Wishart Matrices | <|reference_start|>MIMO Multichannel Beamforming: SER and Outage Using New Eigenvalue Distributions of Complex Noncentral Wishart Matrices: This paper analyzes MIMO systems with multichannel beamforming in Ricean fading. Our results apply to a wide class of multichannel systems which transmit on the eigenmodes of the MIMO channel. We first present new closed-form expressions for the marginal ordered eigenvalue distributions of complex noncentral Wishart matrices. These are used to characterize the statistics of the signal to noise ratio (SNR) on each eigenmode. Based on this, we present exact symbol error rate (SER) expressions. We also derive closed-form expressions for the diversity order, array gain, and outage probability. We show that the global SER performance is dominated by the subchannel corresponding to the minimum channel singular value. We also show that, at low outage levels, the outage probability varies inversely with the Ricean K-factor for cases where transmission is only on the most dominant subchannel (i.e. a singlechannel beamforming system). Numerical results are presented to validate the theoretical analysis.<|reference_end|> | arxiv | @article{jin2006mimo,
title={MIMO Multichannel Beamforming: SER and Outage Using New Eigenvalue
Distributions of Complex Noncentral Wishart Matrices},
author={Shi Jin, Matthew R. McKay, Xiqi Gao, Iain B. Collings},
journal={arXiv preprint arXiv:cs/0611007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611007},
primaryClass={cs.IT math.IT}
} | jin2006mimo |
arxiv-675055 | cs/0611008 | Why Linear Programming cannot solve large instances of NP-complete problems in polynomial time | <|reference_start|>This article discusses the ability of Linear Programming models to be used as solvers of NP-complete problems. Integer Linear Programming is known to be NP-complete, but non-integer Linear Programming problems can be solved in polynomial time, which places them in the class P. During the past three years, several articles have appeared that use LP to solve NP-complete problems. These methods use a large number of variables (O(n^9)) and solve correctly almost all instances that can be solved in reasonable time. Can they solve infinitely large instances? This article gives an answer to this question.<|reference_end|> | arxiv | @article{hofman2006why,
title={Why Linear Programming cannot solve large instances of NP-complete
problems in polynomial time},
author={Radoslaw Hofman},
journal={arXiv preprint arXiv:cs/0611008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611008},
primaryClass={cs.CC cs.DM cs.DS cs.NA}
} | hofman2006why |
arxiv-675056 | cs/0611009 | Efficient constraint propagation engines | <|reference_start|>Efficient constraint propagation engines: This paper presents a model and implementation techniques for speeding up constraint propagation. Three fundamental approaches to improving constraint propagation based on propagators as implementations of constraints are explored: keeping track of which propagators are at fixpoint, choosing which propagator to apply next, and how to combine several propagators for the same constraint. We show how idempotence reasoning and events help track fixpoints more accurately. We improve these methods by using them dynamically (taking into account current domains to improve accuracy). We define priority-based approaches to choosing a next propagator and show that dynamic priorities can improve propagation. We illustrate that the use of multiple propagators for the same constraint can be advantageous with priorities, and introduce staged propagators that combine the effects of multiple propagators with priorities for greater efficiency.<|reference_end|> | arxiv | @article{schulte2006efficient,
title={Efficient constraint propagation engines},
author={Christian Schulte and Peter J. Stuckey},
journal={ACM TOPLAS, 31(1) article 2, 2008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611009},
primaryClass={cs.AI cs.PL}
} | schulte2006efficient |
arxiv-675057 | cs/0611010 | On the structure of generalized toric codes | <|reference_start|>Toric codes are obtained by evaluating rational functions of a nonsingular toric variety at the algebraic torus. One can extend toric codes to the so-called generalized toric codes. This extension consists of evaluating elements of an arbitrary polynomial algebra at the algebraic torus instead of a linear combination of monomials whose exponents are rational points of a convex polytope. We study their multicyclic and metric structure, and we use them to express their dual and to estimate their minimum distance.<|reference_end|> | arxiv | @article{ruano2006on,
title={On the structure of generalized toric codes},
author={Diego Ruano},
journal={The final version can be found in: Journal of Symbolic
Computation. Volume 44, Issue 5, May 2009, Pages 499-506},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611010},
primaryClass={cs.IT math.IT}
} | ruano2006on |
arxiv-675058 | cs/0611011 | Hedging predictions in machine learning | <|reference_start|>Hedging predictions in machine learning: Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This paper describes a new technique for "hedging" the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours, and by many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning.<|reference_end|> | arxiv | @article{gammerman2006hedging,
title={Hedging predictions in machine learning},
author={Alexander Gammerman and Vladimir Vovk},
journal={Computer Journal, 50:151-177, 2007},
year={2006},
doi={10.1093/comjnl/bxl065},
number={On-line Compression Modelling Project (New Series), Working Paper 02},
archivePrefix={arXiv},
eprint={cs/0611011},
primaryClass={cs.LG}
} | gammerman2006hedging |
arxiv-675059 | cs/0611012 | Asymptotic SER and Outage Probability of MIMO MRC in Correlated Fading | <|reference_start|>Asymptotic SER and Outage Probability of MIMO MRC in Correlated Fading: This letter derives the asymptotic symbol error rate (SER) and outage probability of multiple-input multiple-output (MIMO) maximum ratio combining (MRC) systems. We consider Rayleigh fading channels with both transmit and receive spatial correlation. Our results are based on new asymptotic expressions which we derive for the p.d.f. and c.d.f. of the maximum eigenvalue of positive-definite quadratic forms in complex Gaussian matrices. We prove that spatial correlation does not affect the diversity order, but that it reduces the array gain and hence increases the SER in the high SNR regime.<|reference_end|> | arxiv | @article{jin2006asymptotic,
title={Asymptotic SER and Outage Probability of MIMO MRC in Correlated Fading},
author={Shi Jin, Matthew R. McKay, Xiqi Gao, Iain B. Collings},
journal={arXiv preprint arXiv:cs/0611012},
year={2006},
doi={10.1109/LSP.2006.881512},
archivePrefix={arXiv},
eprint={cs/0611012},
primaryClass={cs.IT math.IT}
} | jin2006asymptotic |
arxiv-675060 | cs/0611013 | Developing strategies to produce better scientific papers: a Recipe for non-native users of English | <|reference_start|>Developing strategies to produce better scientific papers: a Recipe for non-native users of English: In this paper we introduce the AMADEUS strategy, which has been used to produce scientific writing tools for non-native users of English for 15 years, and emphasize a learn-by-doing approach through which students and novice writers can improve their scientific writing. More specifically, we provide a 9-step recipe for the students to compile writing material according to a procedure that has proven efficient in scientific writing courses.<|reference_end|> | arxiv | @article{oliveira2006developing,
title={Developing strategies to produce better scientific papers: a Recipe for
non-native users of English},
author={Osvaldo N. Oliveira Jr., Valtencir Zucolotto, Sandra M. Aluisio},
journal={arXiv preprint arXiv:cs/0611013},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611013},
primaryClass={cs.OH}
} | oliveira2006developing |
arxiv-675061 | cs/0611014 | Interactive Problem Solving in Prolog | <|reference_start|>This paper presents an environment for solving Prolog problems which has been implemented as a module for the virtual laboratory VILAB. During the problem-solving process, learners get fast adaptive feedback. By analysing the learner's actions, the system suggests the use of suitable auxiliary predicates, which are also checked for proper implementation. The focus of the environment has been set on robustness and the integration into VILAB.<|reference_end|> | arxiv | @article{braun2006interactive,
title={Interactive Problem Solving in Prolog},
author={Erik Braun, Rainer Luetticke, Ingo Gloeckner, Hermann Helbig},
journal={arXiv preprint arXiv:cs/0611014},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611014},
primaryClass={cs.HC cs.CY cs.PL}
} | braun2006interactive |
arxiv-675062 | cs/0611015 | On the Fairness of Rate Allocation in Gaussian Multiple Access Channel and Broadcast Channel | <|reference_start|>On the Fairness of Rate Allocation in Gaussian Multiple Access Channel and Broadcast Channel: The capacity region of a channel consists of all achievable rate vectors. Picking a particular point in the capacity region is synonymous with rate allocation. The issue of fairness in rate allocation is addressed in this paper. We review several notions of fairness, including max-min fairness, proportional fairness and Nash bargaining solution. Their efficiencies for general multiuser channels are discussed. We apply these ideas to the Gaussian multiple access channel (MAC) and the Gaussian broadcast channel (BC). We show that in the Gaussian MAC, max-min fairness and proportional fairness coincide. For both Gaussian MAC and BC, we devise efficient algorithms that locate the fair point in the capacity region. Some elementary properties of fair rate allocations are proved.<|reference_end|> | arxiv | @article{shum2006on,
title={On the Fairness of Rate Allocation in Gaussian Multiple Access Channel
and Broadcast Channel},
author={Kenneth W. Shum and Chi Wan Sung},
journal={arXiv preprint arXiv:cs/0611015},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611015},
primaryClass={cs.IT math.IT}
} | shum2006on |
arxiv-675063 | cs/0611016 | Increasing Data Resilience of Mobile Devices with a Collaborative Backup Service | <|reference_start|>Whoever has had his cell phone stolen knows how frustrating it is to be unable to get his contact list back. To avoid data loss when losing or destroying a mobile device like a PDA or a cell phone, data is usually backed up to a fixed station. However, in the time between the last backup and the failure, important data may have been produced and then lost. To handle this issue, we propose a transparent collaborative backup system. Indeed, by saving data on other mobile devices between two connections to a global infrastructure, we can withstand such scenarios. In this paper, after a general description of such a system, we present a way to replicate data on mobile devices to attain the required resilience for the backup.<|reference_end|> | arxiv | @article{martin-guillerez2006increasing,
title={Increasing Data Resilience of Mobile Devices with a Collaborative Backup
Service},
author={Damien Martin-Guillerez (IRISA / INRIA Rennes), Michel Ban\^atre
(IRISA / INRIA Rennes), Paul Couderc (IRISA / INRIA Rennes)},
journal={arXiv preprint arXiv:cs/0611016},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611016},
primaryClass={cs.NI}
} | martin-guillerez2006increasing |
arxiv-675064 | cs/0611017 | A New Data Processing Inequality and Its Applications in Distributed Source and Channel Coding | <|reference_start|>A New Data Processing Inequality and Its Applications in Distributed Source and Channel Coding: In the distributed coding of correlated sources, the problem of characterizing the joint probability distribution of a pair of random variables satisfying an n-letter Markov chain arises. The exact solution of this problem is intractable. In this paper, we seek a single-letter necessary condition for this n-letter Markov chain. To this end, we propose a new data processing inequality on a new measure of correlation by means of spectrum analysis. Based on this new data processing inequality, we provide a single-letter necessary condition for the required joint probability distribution. We apply our results to two specific examples involving the distributed coding of correlated sources: multi-terminal rate-distortion region and multiple access channel with correlated sources, and propose new necessary conditions for these two problems.<|reference_end|> | arxiv | @article{kang2006a,
title={A New Data Processing Inequality and Its Applications in Distributed
Source and Channel Coding},
author={W. Kang, S. Ulukus},
journal={arXiv preprint arXiv:cs/0611017},
year={2006},
doi={10.1109/TIT.2010.2090211},
archivePrefix={arXiv},
eprint={cs/0611017},
primaryClass={cs.IT math.IT}
} | kang2006a |
arxiv-675065 | cs/0611018 | Logic Column 17: A Rendezvous of Logic, Complexity, and Algebra | <|reference_start|>Logic Column 17: A Rendezvous of Logic, Complexity, and Algebra: This article surveys recent advances in applying algebraic techniques to constraint satisfaction problems.<|reference_end|> | arxiv | @article{chen2006logic,
title={Logic Column 17: A Rendezvous of Logic, Complexity, and Algebra},
author={Hubie Chen},
journal={arXiv preprint arXiv:cs/0611018},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611018},
primaryClass={cs.LO}
} | chen2006logic |
arxiv-675066 | cs/0611019 | Algorithmic Aspects of a General Modular Decomposition Theory | <|reference_start|>A new general decomposition theory inspired by modular graph decomposition is presented. This helps unify modular decomposition on different structures, including (but not restricted to) graphs. Moreover, even in the case of graphs, the terminology ``module'' not only captures the classical graph modules but also makes it possible to handle 2-connected components, star-cutsets, and other vertex subsets. The main result is that most of the nice algorithmic tools developed for modular decomposition of graphs still apply efficiently to our generalisation of modules. Besides, when an essential axiom is satisfied, almost all the important properties can be retrieved. For this case, an algorithm given by Ehrenfeucht, Gabow, McConnell and Sullivan (1994) is generalised and yields a very efficient solution to the associated decomposition problem.<|reference_end|> | arxiv | @article{bui-xuan2006algorithmic,
title={Algorithmic Aspects of a General Modular Decomposition Theory},
author={Binh-Minh Bui-Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA)},
journal={arXiv preprint arXiv:cs/0611019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611019},
primaryClass={cs.DS}
} | bui-xuan2006algorithmic |
arxiv-675067 | cs/0611020 | An associative memory for the on-line recognition and prediction of temporal sequences | <|reference_start|>An associative memory for the on-line recognition and prediction of temporal sequences: This paper presents the design of an associative memory with feedback that is capable of on-line temporal sequence learning. A framework for on-line sequence learning has been proposed, and different sequence learning models have been analysed according to this framework. The network model is an associative memory with a separate store for the sequence context of a symbol. A sparse distributed memory is used to gain scalability. The context store combines the functionality of a neural layer with a shift register. The sensitivity of the machine to the sequence context is controllable, resulting in different characteristic behaviours. The model can store and predict on-line sequences of various types and length. Numerical simulations on the model have been carried out to determine its properties.<|reference_end|> | arxiv | @article{bose2006an,
title={An associative memory for the on-line recognition and prediction of
temporal sequences},
author={J. Bose, S.B. Furber, J.L. Shapiro},
journal={arXiv preprint arXiv:cs/0611020},
year={2006},
doi={10.1109/IJCNN.2005.1556028},
archivePrefix={arXiv},
eprint={cs/0611020},
primaryClass={cs.NE cs.AI}
} | bose2006an |
arxiv-675068 | cs/0611021 | Relatively inertial delays | <|reference_start|>The paper studies relatively inertial delays, which represent one of the most important concepts in the modeling of asynchronous circuits.<|reference_end|> | arxiv | @article{vlad2006relatively,
title={Relatively inertial delays},
author={Serban E. Vlad},
journal={arXiv preprint arXiv:cs/0611021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611021},
primaryClass={cs.OH}
} | vlad2006relatively |
arxiv-675069 | cs/0611022 | Multirobot rendezvous with visibility sensors in nonconvex environments | <|reference_start|>Multirobot rendezvous with visibility sensors in nonconvex environments: This paper presents a coordination algorithm for mobile autonomous robots. Relying upon distributed sensing the robots achieve rendezvous, that is, they move to a common location. Each robot is a point mass moving in a nonconvex environment according to an omnidirectional kinematic model. Each robot is equipped with line-of-sight limited-range sensors, i.e., a robot can measure the relative position of any object (robots or environment boundary) if and only if the object is within a given distance and there are no obstacles in-between. The algorithm is designed using the notions of robust visibility, connectivity-preserving constraint sets, and proximity graphs. Simulations illustrate the theoretical results on the correctness of the proposed algorithm, and its performance in asynchronous setups and with sensor measurement and control errors.<|reference_end|> | arxiv | @article{ganguli2006multirobot,
title={Multirobot rendezvous with visibility sensors in nonconvex environments},
author={Anurag Ganguli, Jorge Cortes, Francesco Bullo},
journal={arXiv preprint arXiv:cs/0611022},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611022},
primaryClass={cs.RO}
} | ganguli2006multirobot |
arxiv-675070 | cs/0611023 | Faster Streaming algorithms for graph spanners | <|reference_start|>Given an undirected graph $G=(V,E)$ on $n$ vertices, $m$ edges, and an integer $t\ge 1$, a subgraph $(V,E_S)$, $E_S\subseteq E$ is called a $t$-spanner if for any pair of vertices $u,v \in V$, the distance between them in the subgraph is at most $t$ times the actual distance. We present streaming algorithms for computing a $t$-spanner of essentially optimal size-stretch trade-offs for any undirected graph. Our first algorithm is for the classical streaming model and works for unweighted graphs only. The algorithm performs a single pass on the stream of edges and requires $O(m)$ time to process the entire stream of edges. This drastically improves the previous best single-pass streaming algorithm for computing a $t$-spanner, which requires $\Theta(mn^{\frac{2}{t}})$ time to process the stream and computes a spanner of size slightly larger than the optimal. Our second algorithm is for the {\em StreamSort} model introduced by Aggarwal et al. [FOCS 2004], which is the streaming model augmented with a sorting primitive. The {\em StreamSort} model has been shown to be a more powerful and still very realistic model than the streaming model for massive data set applications. Our algorithm, which works for weighted graphs as well, performs $O(t)$ passes using only $O(\log n)$ bits of working memory. Both of our algorithms require only elementary data structures.<|reference_end|> | arxiv | @article{baswana2006faster,
title={Faster Streaming algorithms for graph spanners},
author={Surender Baswana},
journal={arXiv preprint arXiv:cs/0611023},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611023},
primaryClass={cs.DS}
} | baswana2006faster |
arxiv-675071 | cs/0611024 | A Relational Approach to Functional Decomposition of Logic Circuits | <|reference_start|>Functional decomposition of logic circuits has a profound influence on all quality aspects of the cost-effective implementation of modern digital systems. In this paper, a relational approach to the decomposition of logic circuits is proposed. This approach is parallel to the normalization of relational databases; both are governed by the same concepts of functional dependency (FD) and multi-valued dependency (MVD). It is manifest that the functional decomposition of switching functions actually exploits the same idea and serves a similar purpose as database normalization. Partitions play an important role in the decomposition. The interdependency of two partitions can be represented by a bipartite graph. We demonstrate that both FD and MVD can be represented by bipartite graphs with specific topological properties, which are delineated by partitions of minterms. It follows that our algorithms are procedures for constructing those specific bipartite graphs of interest to meet the information-lossless criteria of functional decomposition.<|reference_end|> | arxiv | @article{lee2006a,
title={A Relational Approach to Functional Decomposition of Logic Circuits},
author={Tony T. Lee and Tong Ye},
journal={arXiv preprint arXiv:cs/0611024},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611024},
primaryClass={cs.DM cs.LG}
} | lee2006a |
arxiv-675072 | cs/0611025 | A Logical Approach to Efficient Max-SAT solving | <|reference_start|>Weighted Max-SAT is the optimization version of SAT and many important problems can be naturally encoded as such. Solving weighted Max-SAT is an important problem from both a theoretical and a practical point of view. In recent years, there has been considerable interest in finding efficient solving techniques. Most of this work focuses on the computation of good quality lower bounds to be used within a branch and bound DPLL-like algorithm. Most often, these lower bounds are described in a procedural way. Because of that, it is difficult to realize the {\em logic} behind them. In this paper we introduce an original framework for Max-SAT that stresses the parallelism with classical SAT. Then, we extend the two basic SAT solving techniques: {\em search} and {\em inference}. We show that many algorithmic {\em tricks} used in state-of-the-art Max-SAT solvers are easily expressible in {\em logic} terms with our framework in a unified manner. Besides, we introduce an original search algorithm that performs a restricted amount of {\em weighted resolution} at each visited node. We empirically compare our algorithm with a variety of solving alternatives on several benchmarks. Our experiments, which constitute, to the best of our knowledge, the most comprehensive Max-SAT evaluation ever reported, show that our algorithm is generally orders of magnitude faster than any competitor.<|reference_end|> | arxiv | @article{larrosa2006a,
title={A Logical Approach to Efficient Max-SAT solving},
author={Javier Larrosa and Federico Heras and Simon de Givry},
journal={arXiv preprint arXiv:cs/0611025},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611025},
primaryClass={cs.AI cs.LO}
} | larrosa2006a |
arxiv-675073 | cs/0611026 | Un mod\`ele g\'en\'erique d'organisation de corpus en ligne: application \`a la FReeBank | <|reference_start|>Un mod\`ele g\'en\'erique d'organisation de corpus en ligne: application \`a la FReeBank: The few available French resources for evaluating linguistic models or algorithms on other linguistic levels than morpho-syntax are either insufficient from quantitative as well as qualitative point of view or not freely accessible. Based on this fact, the FREEBANK project intends to create French corpora constructed using manually revised output from a hybrid Constraint Grammar parser and annotated on several linguistic levels (structure, morpho-syntax, syntax, coreference), with the objective to make them available on-line for research purposes. Therefore, we will focus on using standard annotation schemes, integration of existing resources and maintenance allowing for continuous enrichment of the annotations. Prior to the actual presentation of the prototype that has been implemented, this paper describes a generic model for the organization and deployment of a linguistic resource archive, in compliance with the various works currently conducted within international standardization initiatives (TEI and ISO/TC 37/SC 4).<|reference_end|> | arxiv | @article{salmon-alt2006un,
title={Un mod\`ele g\'en\'erique d'organisation de corpus en ligne: application
\`a la FReeBank},
author={Susanne Salmon-Alt (ATILF) and Laurent Romary (INRIA Lorraine -
LORIA) and Jean-Marie Pierrel (ATILF)},
journal={Traitement Automatique des Langues (TAL) 45 (2006) 145-169},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611026},
primaryClass={cs.CL}
} | salmon-alt2006un |
arxiv-675074 | cs/0611027 | Efficient and Dynamic Group Key Agreement in Ad hoc Networks | <|reference_start|>Efficient and Dynamic Group Key Agreement in Ad hoc Networks: Confidentiality, integrity and authentication are more relevant issues in Ad hoc networks than in wired fixed networks. One way to address these issues is the use of symmetric key cryptography, relying on a secret key shared by all members of the network. But establishing and maintaining such a key (also called the session key) is a non-trivial problem. We show that Group Key Agreement (GKA) protocols are suitable for establishing and maintaining such a session key in these dynamic networks. We take an existing GKA protocol, which is robust to connectivity losses and discuss all the issues for good functioning of this protocol in Ad hoc networks. We give implementation details and network parameters, which significantly reduce the computational burden of using public key cryptography in such networks.<|reference_end|> | arxiv | @article{bhaskar2006efficient,
title={Efficient and Dynamic Group Key Agreement in Ad hoc Networks},
author={Raghav Bhaskar (INRIA Rocquencourt) and Paul M\"uhlethaler (INRIA
Rocquencourt) and Daniel Augot (INRIA Rocquencourt) and C\'edric Adjih (INRIA
Rocquencourt) and Saadi Boudjit (INRIA Rocquencourt) and Anis Laouiti (INRIA
Rocquencourt)},
journal={arXiv preprint arXiv:cs/0611027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611027},
primaryClass={cs.CR}
} | bhaskar2006efficient |
arxiv-675075 | cs/0611028 | A Decomposition Theory for Binary Linear Codes | <|reference_start|>A Decomposition Theory for Binary Linear Codes: The decomposition theory of matroids initiated by Paul Seymour in the 1980's has had an enormous impact on research in matroid theory. This theory, when applied to matrices over the binary field, yields a powerful decomposition theory for binary linear codes. In this paper, we give an overview of this code decomposition theory, and discuss some of its implications in the context of the recently discovered formulation of maximum-likelihood (ML) decoding of a binary linear code over a discrete memoryless channel as a linear programming problem. We translate matroid-theoretic results of Gr\"otschel and Truemper from the combinatorial optimization literature to give examples of non-trivial families of codes for which the ML decoding problem can be solved in time polynomial in the length of the code. One such family is that consisting of codes $C$ for which the codeword polytope is identical to the Koetter-Vontobel fundamental polytope derived from the entire dual code $C^\perp$. However, we also show that such families of codes are not good in a coding-theoretic sense -- either their dimension or their minimum distance must grow sub-linearly with codelength. As a consequence, we have that decoding by linear programming, when applied to good codes, cannot avoid failing occasionally due to the presence of pseudocodewords.<|reference_end|> | arxiv | @article{kashyap2006a,
title={A Decomposition Theory for Binary Linear Codes},
author={Navin Kashyap},
journal={arXiv preprint arXiv:cs/0611028},
year={2006},
doi={10.1109/TIT.2008.924700},
archivePrefix={arXiv},
eprint={cs/0611028},
primaryClass={cs.DM cs.IT math.IT}
} | kashyap2006a |
arxiv-675076 | cs/0611029 | Linear Encodings of Bounded LTL Model Checking | <|reference_start|>Linear Encodings of Bounded LTL Model Checking: We consider the problem of bounded model checking (BMC) for linear temporal logic (LTL). We present several efficient encodings that have size linear in the bound. Furthermore, we show how the encodings can be extended to LTL with past operators (PLTL). The generalised encoding is still of linear size, but cannot detect minimal length counterexamples. By using the virtual unrolling technique minimal length counterexamples can be captured, however, the size of the encoding is quadratic in the specification. We also extend virtual unrolling to Buchi automata, enabling them to accept minimal length counterexamples. Our BMC encodings can be made incremental in order to benefit from incremental SAT technology. With fairly small modifications the incremental encoding can be further enhanced with a termination check, allowing us to prove properties with BMC. Experiments clearly show that our new encodings improve performance of BMC considerably, particularly in the case of the incremental encoding, and that they are very competitive for finding bugs. An analysis of the liveness-to-safety transformation reveals many similarities to the BMC encodings in this paper. Using the liveness-to-safety translation with BDD-based invariant checking results in an efficient method to find shortest counterexamples that complements the BMC-based approach.<|reference_end|> | arxiv | @article{biere2006linear,
title={Linear Encodings of Bounded LTL Model Checking},
author={Armin Biere and Keijo Heljanko and Tommi Junttila and Timo Latvala
and Viktor Schuppan},
journal={Logical Methods in Computer Science, Volume 2, Issue 5 (November
15, 2006) lmcs:2236},
year={2006},
doi={10.2168/LMCS-2(5:5)2006},
archivePrefix={arXiv},
eprint={cs/0611029},
primaryClass={cs.LO}
} | biere2006linear |
arxiv-675077 | cs/0611030 | Nonextensive Pythagoras' Theorem | <|reference_start|>Nonextensive Pythagoras' Theorem: Kullback-Leibler relative-entropy, in cases involving distributions resulting from relative-entropy minimization, has a celebrated property reminiscent of squared Euclidean distance: it satisfies an analogue of the Pythagoras' theorem. Hence, this property is referred to as Pythagoras' theorem of relative-entropy minimization or triangle equality, and it plays a fundamental role in geometrical approaches of statistical estimation theory like information geometry. The equivalent of Pythagoras' theorem in the generalized nonextensive formalism is established in (Dukkipati et al., Physica A, 361 (2006) 124-138). In this paper we give a detailed account of it.<|reference_end|> | arxiv | @article{dukkipati2006nonextensive,
title={Nonextensive Pythagoras' Theorem},
author={Ambedkar Dukkipati},
journal={arXiv preprint arXiv:cs/0611030},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611030},
primaryClass={cs.IT math.IT}
} | dukkipati2006nonextensive |
arxiv-675078 | cs/0611031 | Efficient Threshold Aggregation of Moving Objects | <|reference_start|>Efficient Threshold Aggregation of Moving Objects: Calculating aggregation operators of moving point objects, using time as a continuous variable, presents unique problems when querying for congestion in a moving and changing (or dynamic) query space. We present a set of congestion query operators, based on a threshold value, that estimate the following 5 aggregation operations in d-dimensions. 1) We call the count of point objects that intersect the dynamic query space during the query time interval, the CountRange. 2) We call the Maximum (or Minimum) congestion in the dynamic query space at any time during the query time interval, the MaxCount (or MinCount). 3) We call the sum of time that the dynamic query space is congested, the ThresholdSum. 4) We call the number of times that the dynamic query space is congested, the ThresholdCount. And 5) we call the average length of time of all the time intervals when the dynamic query space is congested, the ThresholdAverage. These operators rely on a novel approach to transforming the problem of selection based on position to a problem of selection based on a threshold. These operators can be used to predict concentrations of migrating birds that may carry disease such as Bird Flu and hence the information may be used to predict high risk areas. On a smaller scale, those operators are also applicable to maintaining safety in airplane operations. We present the theory of our estimation operators and provide algorithms for exact operators. The implementations of those operators, and experiments, which include data from more than 7500 queries, indicate that our estimation operators produce fast, efficient results with error under 5%.<|reference_end|> | arxiv | @article{anderson2006efficient,
title={Efficient Threshold Aggregation of Moving Objects},
author={Scot Anderson and Peter Revesz},
journal={arXiv preprint arXiv:cs/0611031},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611031},
primaryClass={cs.DB}
} | anderson2006efficient |
arxiv-675079 | cs/0611032 | V-like formations in flocks of artificial birds | <|reference_start|>V-like formations in flocks of artificial birds: We consider flocks of artificial birds and study the emergence of V-like formations during flight. We introduce a small set of fully distributed positioning rules to guide the birds' movements and demonstrate, by means of simulations, that they tend to lead to stabilization into several of the well-known V-like formations that have been observed in nature. We also provide quantitative indicators that we believe are closely related to achieving V-like formations, and study their behavior over a large set of independent simulations.<|reference_end|> | arxiv | @article{nathan2006v-like,
title={V-like formations in flocks of artificial birds},
author={Andre Nathan and Valmir C. Barbosa},
journal={Artificial Life 14 (2008), 179-188},
year={2006},
doi={10.1162/artl.2008.14.2.179},
archivePrefix={arXiv},
eprint={cs/0611032},
primaryClass={cs.NE}
} | nathan2006v-like |
arxiv-675080 | cs/0611033 | Cryptanalyse de Achterbahn-128/80 | <|reference_start|>Cryptanalyse de Achterbahn-128/80: This paper presents two attacks against Achterbahn-128/80, the last version of one of the stream cipher proposals in the eSTREAM project. The attack against the 80-bit variant, Achterbahn-80, has complexity 2^{56.32}. The attack against Achterbahn-128 requires 2^{75.4} operations and 2^{61} keystream bits. These attacks are based on an improvement of the attack due to Hell and Johansson against Achterbahn version 2 and also on an algorithm that makes profit of the short lengths of the constituent registers. ***** Ce papier pr\'{e}sente deux attaques sur Achterbahn-128/80, la derni\`{e}re version d'un des algorithmes propos\'{e}s dans le cadre de eSTREAM. L'attaque sur la version de 80 bits, Achterbahn-80, est en 2^{56.32}. L'attaque sur Achterbahn-128 a besoin de 2^{75.4} calculs et 2^{61} bits de suite chiffrante. Ces attaques sont bas\'{e}es sur une am\'{e}lioration de l'attaque propos\'{e}e par Hell et Johansson sur la version 2 d'Achterbahn et aussi sur un algorithme qui tire profit des petites longueurs des registres.<|reference_end|> | arxiv | @article{plasencia2006cryptanalyse,
title={Cryptanalyse de Achterbahn-128/80},
author={Maria Naya Plasencia (INRIA Rocquencourt)},
journal={arXiv preprint arXiv:cs/0611033},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611033},
primaryClass={cs.CR}
} | plasencia2006cryptanalyse |
arxiv-675081 | cs/0611034 | Strategies for Replica Placement in Tree Networks | <|reference_start|>Strategies for Replica Placement in Tree Networks: In this paper, we discuss and compare several policies to place replicas in tree networks, subject to server capacity and QoS constraints. The client requests are known beforehand, while the number and location of the servers are to be determined. The standard approach in the literature is to enforce that all requests of a client be served by the closest server in the tree. We introduce and study two new policies. In the first policy, all requests from a given client are still processed by the same server, but this server can be located anywhere in the path from the client to the root. In the second policy, the requests of a given client can be processed by multiple servers. One major contribution of this paper is to assess the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, both from a theoretical and a practical perspective. In this paper, we establish several new complexity results, and provide several efficient polynomial heuristics for NP-complete instances of the problem. These heuristics are compared to an absolute lower bound provided by the formulation of the problem in terms of the solution of an integer linear program.<|reference_end|> | arxiv | @article{robert2006strategies,
title={Strategies for Replica Placement in Tree Networks},
author={Yves Robert (INRIA Rh\^one-Alpes, LIP) and Anne Benoit (INRIA
Rh\^one-Alpes, LIP) and Veronika Rehn (INRIA Rh\^one-Alpes, LIP)},
journal={arXiv preprint arXiv:cs/0611034},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611034},
primaryClass={cs.DC}
} | robert2006strategies |
arxiv-675082 | cs/0611035 | The Role of Quasi-identifiers in k-Anonymity Revisited | <|reference_start|>The Role of Quasi-identifiers in k-Anonymity Revisited: The concept of k-anonymity, used in the recent literature to formally evaluate the privacy preservation of published tables, was introduced based on the notion of quasi-identifiers (or QI for short). The process of obtaining k-anonymity for a given private table is first to recognize the QIs in the table, and then to anonymize the QI values, the latter being called k-anonymization. While k-anonymization is usually rigorously validated by the authors, the definition of QI remains mostly informal, and different authors seem to have different interpretations of the concept of QI. The purpose of this paper is to provide a formal underpinning of QI and examine the correctness and incorrectness of various interpretations of QI in our formal framework. We observe that in cases where the concept has been used correctly, its application has been conservative; this note provides a formal understanding of the conservative nature in such cases.<|reference_end|> | arxiv | @article{bettini2006the,
title={The Role of Quasi-identifiers in k-Anonymity Revisited},
author={Claudio Bettini and X. Sean Wang and Sushil Jajodia},
journal={arXiv preprint arXiv:cs/0611035},
year={2006},
number={RT-11-06},
archivePrefix={arXiv},
eprint={cs/0611035},
primaryClass={cs.DB cs.CR}
} | bettini2006the |
arxiv-675083 | cs/0611036 | Intra-site Level Cultural Heritage Documentation: Combination of Survey, Modeling and Imagery Data in a Web Information System | <|reference_start|>Intra-site Level Cultural Heritage Documentation: Combination of Survey, Modeling and Imagery Data in a Web Information System: Cultural heritage documentation induces the use of computerized techniques to manage and preserve the information produced. Geographical information systems have proved their potentialities in this scope, but they are not always adapted for the management of features at the scale of a particular archaeological site. Moreover, computer applications in archaeology are often technology driven and software constrained. Thus, we propose a tool that tries to avoid these difficulties. We are developing an information system that works over the Internet and that is joined with a web site. Aims are to assist the work of archaeological sites managers and to be a documentation tool about these sites, dedicated to everyone. We devote therefore our system both to the professionals who are in charge of the site, and to the general public who visits it or who wants to have information on it. The system permits to do exploratory analyses of the data, especially at spatial and temporal levels. We propose to record metadata about the archaeological features in XML and to access these features through interactive 2D and 3D representations, and through queries systems (keywords and images). The 2D images, photos, or vectors are generated in SVG, while 3D models are generated in X3D. Archaeological features are also automatically integrated in a MySQL database. The web site is an exchange platform with the information system and is written in PHP. Our first application case is the medieval castle of Vianden, Luxembourg.<|reference_end|> | arxiv | @article{durand2006intra-site,
title={Intra-site Level Cultural Heritage Documentation: Combination of Survey,
Modeling and Imagery Data in a Web Information System},
author={Anne Durand (CRAI) and Pierre Drap (CRAI) and Elise Meyer (CRAI) and
Pierre Grussenmeyer (CRAI) and Jean-Pierre Perrin (CRAI)},
journal={arXiv preprint arXiv:cs/0611036},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611036},
primaryClass={cs.DL}
} | durand2006intra-site |
arxiv-675084 | cs/0611037 | On Conditional Branches in Optimal Decision Trees | <|reference_start|>On Conditional Branches in Optimal Decision Trees: The decision tree is one of the most fundamental programming abstractions. A commonly used type of decision tree is the alphabetic binary tree, which uses (without loss of generality) ``less than'' versus ``greater than or equal to'' tests in order to determine one of $n$ outcome events. The process of finding an optimal alphabetic binary tree for a known probability distribution on outcome events usually has the underlying assumption that the cost (time) per decision is uniform and thus independent of the outcome of the decision. This assumption, however, is incorrect in the case of software to be optimized for a given microprocessor, e.g., in compiling switch statements or in fine-tuning program bottlenecks. The operation of the microprocessor generally means that the cost for the more likely decision outcome can or will be less -- often far less -- than the less likely decision outcome. Here we formulate a variety of $O(n^3)$-time $O(n^2)$-space dynamic programming algorithms to solve such optimal binary decision tree problems, optimizing for the behavior of processors with predictive branch capabilities, both static and dynamic. In the static case, we use existing results to arrive at entropy-based performance bounds. Solutions to this formulation are often faster in practice than ``optimal'' decision trees as formulated in the literature, and, for small problems, are easily worth the extra complexity in finding the better solution. This can be applied to the fast implementation of Huffman decoding.<|reference_end|> | arxiv | @article{baer2006on,
title={On Conditional Branches in Optimal Decision Trees},
author={Michael B. Baer},
journal={arXiv preprint arXiv:cs/0611037},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611037},
primaryClass={cs.PF cs.IT math.IT}
} | baer2006on |
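The asymmetric branch-cost model described in the baer2006on entry above can be made concrete with a small interval dynamic program. The sketch below is an editor's illustration under a simplified static-prediction model (the likelier branch of each comparison costs `fast` cycles, the other `slow`); it is not the paper's exact formulation, and all names are invented:

```python
def optimal_tree_cost(p, fast, slow):
    """Expected cost of an optimal alphabetic binary decision tree.

    p[i] is the probability of outcome i; `fast`/`slow` are the per-test
    costs of the predicted and mispredicted branch directions (fast <= slow).
    O(n^3)-time, O(n^2)-space interval DP over consecutive outcome ranges.
    """
    n = len(p)
    # W[i][j] = total probability of outcomes i..j (inclusive).
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        acc = 0.0
        for j in range(i, n):
            acc += p[j]
            W[i][j] = acc
    # C[i][j] = optimal expected cost of deciding among outcomes i..j.
    C = [[0.0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            best = float("inf")
            for k in range(i, j):  # root test splits i..k from k+1..j
                wl, wr = W[i][k], W[k + 1][j]
                # Predict the heavier side: it pays `fast`, the other `slow`.
                test = fast * max(wl, wr) + slow * min(wl, wr)
                best = min(best, C[i][k] + C[k + 1][j] + test)
            C[i][j] = best
    return C[0][n - 1]
```

With uniform costs (`fast == slow`) this reduces to the classical optimal alphabetic tree cost, e.g. four equiprobable outcomes yield an expected two comparisons.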
arxiv-675085 | cs/0611038 | Nonsymmetric entropy I: basic concepts and results | <|reference_start|>Nonsymmetric entropy I: basic concepts and results: A new concept named nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, was introduced. The maximal nonsymmetric entropy principle was proven. Some important distribution laws were derived naturally from the maximal nonsymmetric entropy principle.<|reference_end|> | arxiv | @article{liu2006nonsymmetric,
title={Nonsymmetric entropy I: basic concepts and results},
author={Chengshi Liu},
journal={arXiv preprint arXiv:cs/0611038},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611038},
primaryClass={cs.IT math.IT}
} | liu2006nonsymmetric |
arxiv-675086 | cs/0611039 | Substitutions for tilings $\{p,q\}$ | <|reference_start|>Substitutions for tilings $\{p,q\}$: In this paper we consider tiling $\{p, q \}$ of the Euclidean space and of the hyperbolic space, and its dual graph $\Gamma_{q, p}$ from a combinatorial point of view. A substitution $\sigma_{q, p}$ on an appropriate finite alphabet is constructed. The homogeneity of graph $\Gamma_{q, p}$ and its generation function are the basic tools for the construction. The tree associated with substitution $\sigma_{q, p}$ is a spanning tree of graph $\Gamma_{q, p}$. Let $u_n$ be the number of tiles of tiling $\{p, q \}$ of generation $n$. The characteristic polynomial of the transition matrix of substitution $\sigma_{q, p}$ is a characteristic polynomial of a linear recurrence. The sequence $(u_n)_{n \geq 0}$ is a solution of this recurrence. The growth of sequence $(u_n)_{n \geq 0}$ is given by the dominant root of the characteristic polynomial.<|reference_end|> | arxiv | @article{margenstern2006substitutions,
title={Substitutions for tilings $\{p,q\}$},
author={Maurice Margenstern and Guentcho Skordev},
journal={arXiv preprint arXiv:cs/0611039},
year={2006},
number={2005-102 (Publications du LITA, local research reports)},
archivePrefix={arXiv},
eprint={cs/0611039},
primaryClass={cs.CG cs.DM}
} | margenstern2006substitutions |
arxiv-675087 | cs/0611040 | The Formal System lambda-delta | <|reference_start|>The Formal System lambda-delta: The formal system lambda-delta is a typed lambda calculus that pursues the unification of terms, types, environments and contexts as the main goal. lambda-delta takes some features from the Automath-related lambda calculi and some from the pure type systems, but differs from both in that it does not include the Pi construction while it provides for an abbreviation mechanism at the level of terms. lambda-delta enjoys some important desirable properties such as the confluence of reduction, the correctness of types, the uniqueness of types up to conversion, the subject reduction of the type assignment, the strong normalization of the typed terms and, as a corollary, the decidability of type inference problem.<|reference_end|> | arxiv | @article{guidi2006the,
title={The Formal System lambda-delta},
author={F. Guidi},
journal={arXiv preprint arXiv:cs/0611040},
year={2006},
number={UBLCS-2006-25},
archivePrefix={arXiv},
eprint={cs/0611040},
primaryClass={cs.LO}
} | guidi2006the |
arxiv-675088 | cs/0611041 | Groebner Bases Applied to Systems of Linear Difference Equations | <|reference_start|>Groebner Bases Applied to Systems of Linear Difference Equations: In this paper we consider systems of partial (multidimensional) linear difference equations. Specifically, such systems arise in scientific computing under discretization of linear partial differential equations and in computational high energy physics as recurrence relations for multiloop Feynman integrals. The most universal algorithmic tool for investigation of linear difference systems is based on their transformation into an equivalent Groebner basis form. We present an algorithm for this transformation implemented in Maple. The algorithm and its implementation can be applied to automatic generation of difference schemes for linear partial differential equations and to reduction of Feynman integrals. Some illustrative examples are given.<|reference_end|> | arxiv | @article{gerdt2006groebner,
title={Groebner Bases Applied to Systems of Linear Difference Equations},
author={V. P. Gerdt},
journal={arXiv preprint arXiv:cs/0611041},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611041},
primaryClass={cs.SC}
} | gerdt2006groebner |
arxiv-675089 | cs/0611042 | CSCR: Computer Supported Collaborative Research | <|reference_start|>CSCR: Computer Supported Collaborative Research: It is suggested that a new area of CSCR (Computer Supported Collaborative Research) be distinguished from CSCW and CSCL and that the demarcation between the three areas could do with greater clarification and prescription.<|reference_end|> | arxiv | @article{hinze-hoare2006cscr:computer,
title={CSCR: Computer Supported Collaborative Research},
author={Vita Hinze-Hoare},
journal={arXiv preprint arXiv:cs/0611042},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611042},
primaryClass={cs.HC cs.LG}
} | hinze-hoare2006cscr:computer |
arxiv-675090 | cs/0611043 | On the Convexity of log det (I + K X^{-1}) | <|reference_start|>On the Convexity of log det (I + K X^{-1}): A simple proof is given for the convexity of log det (I+K X^{-1}) in the positive definite matrix variable X with a given positive semidefinite K.<|reference_end|> | arxiv | @article{kim2006on,
title={On the Convexity of log det (I + K X^{-1})},
author={Young-Han Kim and Seung-Jean Kim},
journal={arXiv preprint arXiv:cs/0611043},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611043},
primaryClass={cs.IT math.IT}
} | kim2006on |
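The convexity result summarized in the kim2006on entry above can be sanity-checked numerically: for f(X) = log det(I + K X^{-1}) with X positive definite and K positive semidefinite, midpoint convexity f((X+Y)/2) <= (f(X)+f(Y))/2 must hold. The sketch below is an editor's illustration using NumPy, not code from the paper; the helper names are invented:

```python
import numpy as np

def f(X, K):
    # f(X) = log det(I + K X^{-1}); slogdet returns (sign, log|det|).
    n = X.shape[0]
    return np.linalg.slogdet(np.eye(n) + K @ np.linalg.inv(X))[1]

def random_psd(n, rng, definite=False):
    # A A^T is positive semidefinite; adding I makes it positive definite.
    A = rng.standard_normal((n, n))
    M = A @ A.T
    return M + np.eye(n) if definite else M

rng = np.random.default_rng(0)
n = 4
K = random_psd(n, rng)
for _ in range(200):
    X = random_psd(n, rng, definite=True)
    Y = random_psd(n, rng, definite=True)
    mid = f((X + Y) / 2, K)
    avg = (f(X, K) + f(Y, K)) / 2
    assert mid <= avg + 1e-9  # midpoint convexity in X
```

A random check like this is of course no substitute for the proof; it only confirms the inequality on sampled instances.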
arxiv-675091 | cs/0611044 | Protection of the information in a complex CAD system of renovation of industrial firms | <|reference_start|>Protection of the information in a complex CAD system of renovation of industrial firms: The threats to information security that originate from involuntary operations by the users of a CAD system, and the methods of protection against them implemented in a complex CAD system for the renovation of firms, are considered: rollback, autosave, automatic backup copying and electronic subscript. The specificity of a complex CAD is reflected in the necessity of rollback and autosave of both the drawing and the parametric representations of its parts, which are the information models of the problem-oriented extensions of the CAD.<|reference_end|> | arxiv | @article{migunov2006protection,
title={Protection of the information in a complex CAD system of renovation of
industrial firms},
author={Vladimir V. Migunov and Rustem R. Kafiyatullov},
journal={arXiv preprint arXiv:cs/0611044},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611044},
primaryClass={cs.CE}
} | migunov2006protection |
arxiv-675092 | cs/0611045 | The evolution of the parametric models of drawings (modules) in the enterprises reconstruction CAD system | <|reference_start|>The evolution of the parametric models of drawings (modules) in the enterprises reconstruction CAD system: Progressive methods of automating the creation of drawings are discussed on the basis of so-called modules, which contain a parametric representation of a part of the drawing and its geometrical elements. The stages of evolution of this modular technology of engineering automation are described through the alternative uses of modules: for simple association of elements of the drawing, without parametric representation but with an opportunity of commenting on it; for creating graphic symbols in the schemas of automation and in drawings of pipelines; for storing the specific properties of elements; and for developing the specialized parts of the project: the axonometric schemas, profiles of outboard pipe networks, etc.<|reference_end|> | arxiv | @article{migunov2006the,
title={The evolution of the parametric models of drawings (modules) in the
enterprises reconstruction CAD system},
author={Vladimir V. Migunov},
journal={arXiv preprint arXiv:cs/0611045},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611045},
primaryClass={cs.CE}
} | migunov2006the |
arxiv-675093 | cs/0611046 | Analytic Tableaux Calculi for KLM Logics of Nonmonotonic Reasoning | <|reference_start|>Analytic Tableaux Calculi for KLM Logics of Nonmonotonic Reasoning: We present tableau calculi for some logics of nonmonotonic reasoning, as defined by Kraus, Lehmann and Magidor. We give a tableau proof procedure for all KLM logics, namely preferential, loop-cumulative, cumulative and rational logics. Our calculi are obtained by introducing suitable modalities to interpret conditional assertions. We provide a decision procedure for the logics considered, and we study their complexity.<|reference_end|> | arxiv | @article{giordano2006analytic,
title={Analytic Tableaux Calculi for KLM Logics of Nonmonotonic Reasoning},
author={Laura Giordano and Valentina Gliozzi and Nicola Olivetti and Gian
Luca Pozzato},
journal={arXiv preprint arXiv:cs/0611046},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611046},
primaryClass={cs.LO cs.AI}
} | giordano2006analytic |
arxiv-675094 | cs/0611047 | The Reaction RuleML Classification of the Event / Action / State Processing and Reasoning Space | <|reference_start|>The Reaction RuleML Classification of the Event / Action / State Processing and Reasoning Space: Reaction RuleML is a general, practical, compact and user-friendly XML-serialized language for the family of reaction rules. In this white paper we give a review of the history of event / action /state processing and reaction rule approaches and systems in different domains, define basic concepts and give a classification of the event, action, state processing and reasoning space as well as a discussion of relevant / related work<|reference_end|> | arxiv | @article{paschke2006the,
title={The Reaction RuleML Classification of the Event / Action / State
Processing and Reasoning Space},
author={Adrian Paschke},
journal={arXiv preprint arXiv:cs/0611047},
year={2006},
number={Paschke, A.: The Reaction RuleML Classification of the Event /
Action / State Processing and Reasoning Space, White Paper, October, 2006},
archivePrefix={arXiv},
eprint={cs/0611047},
primaryClass={cs.AI}
} | paschke2006the |
arxiv-675095 | cs/0611048 | Dense-Timed Petri Nets: Checking Zenoness, Token liveness and Boundedness | <|reference_start|>Dense-Timed Petri Nets: Checking Zenoness, Token liveness and Boundedness: We consider Dense-Timed Petri Nets (TPN), an extension of Petri nets in which each token is equipped with a real-valued clock and where the semantics is lazy (i.e., enabled transitions need not fire; time can pass and disable transitions). We consider the following verification problems for TPNs. (i) Zenoness: whether there exists a zeno-computation from a given marking, i.e., an infinite computation which takes only a finite amount of time. We show decidability of zenoness for TPNs, thus solving an open problem from [Escrig et al.]. Furthermore, the related question if there exist arbitrarily fast computations from a given marking is also decidable. On the other hand, universal zenoness, i.e., the question if all infinite computations from a given marking are zeno, is undecidable. (ii) Token liveness: whether a token is alive in a marking, i.e., whether there is a computation from the marking which eventually consumes the token. We show decidability of the problem by reducing it to the coverability problem, which is decidable for TPNs. (iii) Boundedness: whether the size of the reachable markings is bounded. We consider two versions of the problem; namely semantic boundedness where only live tokens are taken into consideration in the markings, and syntactic boundedness where also dead tokens are considered. We show undecidability of semantic boundedness, while we prove that syntactic boundedness is decidable through an extension of the Karp-Miller algorithm.<|reference_end|> | arxiv | @article{abdulla2006dense-timed,
title={Dense-Timed Petri Nets: Checking Zenoness, Token liveness and
Boundedness},
  author={Parosh Abdulla and Pritha Mahata and Richard Mayr},
journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February
7, 2007) lmcs:2223},
year={2006},
doi={10.2168/LMCS-3(1:1)2007},
archivePrefix={arXiv},
eprint={cs/0611048},
primaryClass={cs.LO}
} | abdulla2006dense-timed |
arxiv-675096 | cs/0611049 | On numerical stability of recursive present value computation method | <|reference_start|>On numerical stability of recursive present value computation method: We analyze numerical stability of a recursive computation scheme of present value (PV) amd show that the absolute error increases exponentially for positive discount rates. We show that reversing the direction of calculations in the recurrence equation yields a robust PV computation routine.<|reference_end|> | arxiv | @article{kuketayev2006on,
title={On numerical stability of recursive present value computation method},
author={Argyn Kuketayev},
journal={arXiv preprint arXiv:cs/0611049},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611049},
primaryClass={cs.CE cs.NA}
} | kuketayev2006on |
arxiv-675097 | cs/0611050 | HowTo Authenticate and Encrypt | <|reference_start|>HowTo Authenticate and Encrypt: Recently, various side-channel attacks on widely used encryption methods have been discovered. Extensive research is currently undertaken to develop new types of combined encryption and authentication mechanisms. Developers of security systems ask whether to implement methods recommended by international standards or to choose one of the new proposals. We explain the nature of the attacks and how they can be avoided, and recommend a sound, provably secure solution: the CCM standard.<|reference_end|> | arxiv | @article{thomann2006howto,
title={HowTo Authenticate and Encrypt},
author={Hans-Rudolf Thomann},
journal={arXiv preprint arXiv:cs/0611050},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611050},
primaryClass={cs.CR}
} | thomann2006howto |
arxiv-675098 | cs/0611051 | Numerical Simulation guided Lazy Abstraction Refinement for Nonlinear Hybrid Automata | <|reference_start|>Numerical Simulation guided Lazy Abstraction Refinement for Nonlinear Hybrid Automata: This draft suggests a new counterexample guided abstraction refinement (CEGAR) framework that uses the combination of numerical simulation for nonlinear differential equations with linear programming for linear hybrid automata (LHA) to perform reachability analysis on nonlinear hybrid automata. A notion of $\epsilon-$ structural robustness is also introduced which allows the algorithm to validate counterexamples using numerical simulations. Keywords: verification, model checking, hybrid systems, hybrid automata, robustness, robust hybrid systems, numerical simulation, cegar, abstraction refinement.<|reference_end|> | arxiv | @article{jha2006numerical,
title={Numerical Simulation guided Lazy Abstraction Refinement for Nonlinear
Hybrid Automata},
author={Sumit Kumar Jha},
journal={arXiv preprint arXiv:cs/0611051},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611051},
primaryClass={cs.LO}
} | jha2006numerical |
arxiv-675099 | cs/0611052 | On the Solution-Space Geometry of Random Constraint Satisfaction Problems | <|reference_start|>On the Solution-Space Geometry of Random Constraint Satisfaction Problems: For a large number of random constraint satisfaction problems, such as random k-SAT and random graph and hypergraph coloring, there are very good estimates of the largest constraint density for which solutions exist. Yet, all known polynomial-time algorithms for these problems fail to find solutions even at much lower densities. To understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added. In particular, we prove that much before solutions disappear, they organize into an exponential number of clusters, each of which is relatively small and far apart from all other clusters. Moreover, inside each cluster most variables are frozen, i.e., take only one value. The existence of such frozen variables gives a satisfying intuitive explanation for the failure of the polynomial-time algorithms analyzed so far. At the same time, our results establish rigorously one of the two main hypotheses underlying Survey Propagation, a heuristic introduced by physicists in recent years that appears to perform extraordinarily well on random constraint satisfaction problems.<|reference_end|> | arxiv | @article{achlioptas2006on,
title={On the Solution-Space Geometry of Random Constraint Satisfaction
Problems},
author={Dimitris Achlioptas and Federico Ricci-Tersenghi},
journal={arXiv preprint arXiv:cs/0611052},
year={2006},
archivePrefix={arXiv},
eprint={cs/0611052},
primaryClass={cs.CC cond-mat.dis-nn}
} | achlioptas2006on |
arxiv-675100 | cs/0611053 | Capacity of a Class of Deterministic Relay Channels | <|reference_start|>Capacity of a Class of Deterministic Relay Channels: The capacity of a class of deterministic relay channels with the transmitter input X, the receiver output Y, the relay output Y_1 = f(X, Y), and a separate communication link from the relay to the receiver with capacity R_0, is shown to be C(R_0) = \max_{p(x)} \min \{I(X;Y)+R_0, I(X;Y, Y_1) \}. Thus every bit from the relay is worth exactly one bit to the receiver. Two alternative coding schemes are presented that achieve this capacity. The first scheme, ``hash-and-forward'', is based on a simple yet novel use of random binning on the space of relay outputs, while the second scheme uses the usual ``compress-and-forward''. In fact, these two schemes can be combined together to give a class of optimal coding schemes. As a corollary, this relay capacity result confirms a conjecture by Ahlswede and Han on the capacity of a channel with rate-limited state information at the decoder in the special case when the channel state is recoverable from the channel input and the output.<|reference_end|> | arxiv | @article{cover2006capacity,
title={Capacity of a Class of Deterministic Relay Channels},
  author={Thomas M. Cover and Young-Han Kim},
journal={arXiv preprint arXiv:cs/0611053},
year={2006},
doi={10.1109/ISIT.2007.4557289},
archivePrefix={arXiv},
eprint={cs/0611053},
primaryClass={cs.IT math.IT}
} | cover2006capacity |