Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-673501
cs/0511035
Decoding the structure of the WWW: facts versus sampling biases
The understanding of the immense and intricate topological structure of the World Wide Web (WWW) is a major scientific and technological challenge. This has been tackled recently by characterizing the properties of its representative graphs, in which vertices and directed edges are identified with web pages and hyperlinks, respectively. Data gathered in large-scale crawls have been analyzed by several groups, resulting in a general picture of the WWW that encompasses many of the complex properties typical of rapidly evolving networks. In this paper, we report a detailed statistical analysis of the topological properties of four different WWW graphs obtained with different crawlers. We find that, despite the very large size of the samples, the statistical measures characterizing these graphs differ quantitatively, and in some cases qualitatively, depending on the domain analyzed and the crawl used for gathering the data. This raises the issue of sampling biases and structural differences among Web crawls that might induce properties not representative of the actual global underlying graph. In order to provide a more accurate characterization of the Web graph and identify observables that clearly discriminate with respect to the sampling process, we study the behavior of degree-degree correlation functions and the statistics of reciprocal connections. The latter appears to enclose the relevant correlations of the WWW graph and to carry most of the topological information of the Web. The analysis of this quantity is also of major interest in relation to the navigability and searchability of the Web.
arxiv
@article{serrano2005decoding, title={Decoding the structure of the WWW: facts versus sampling biases}, author={M. Angeles Serrano and Ana Maguitman and Marian Boguna and Santo Fortunato and Alessandro Vespignani}, journal={ACM Transactions on the Web (TWEB) 1, 10 (2007)}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511035}, primaryClass={cs.NI cond-mat.dis-nn physics.soc-ph} }
serrano2005decoding
arxiv-673502
cs/0511036
A Capacity Achieving and Low Complexity Multilevel Coding Scheme for ISI Channels
We propose a computationally efficient multilevel coding scheme to achieve the capacity of an ISI channel using layers of binary inputs. The transmitter employs multilevel coding with linear mapping. The receiver uses multistage decoding where each stage performs a separate linear minimum mean square error (LMMSE) equalization and decoding. The optimality of the scheme is due to the fact that the LMMSE equalizer is information lossless in an ISI channel when the signal to noise ratio is sufficiently low. The computational complexity is low and scales linearly with the length of the channel impulse response and the number of layers. The decoder at each layer sees an equivalent AWGN channel, which makes coding straightforward.
arxiv
@article{chen2005a, title={A Capacity Achieving and Low Complexity Multilevel Coding Scheme for ISI Channels}, author={Mei Chen and Teng Li and Oliver M. Collins}, journal={arXiv preprint arXiv:cs/0511036}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511036}, primaryClass={cs.IT math.IT} }
chen2005a
arxiv-673503
cs/0511037
Trellis Pruning for Peak-to-Average Power Ratio Reduction
This paper introduces a new trellis pruning method which uses nonlinear convolutional coding for peak-to-average power ratio (PAPR) reduction of filtered QPSK and 16-QAM modulations. The Nyquist filter is viewed as a convolutional encoder that controls the analog waveforms of the filter output directly. Pruning some edges of the encoder trellis can effectively reduce the PAPR. The only tradeoff is a slightly lower channel capacity and increased complexity. The paper presents simulation results of the pruning action and the resulting PAPR, and also discusses the decoding algorithm and the capacity of the filtered and pruned QPSK and 16-QAM modulations on the AWGN channel. Simulation results show that the pruning method reduces the PAPR significantly without much damage to capacity.
arxiv
@article{chen2005trellis, title={Trellis Pruning for Peak-to-Average Power Ratio Reduction}, author={Mei Chen and Oliver M. Collins}, journal={arXiv preprint arXiv:cs/0511037}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511037}, primaryClass={cs.IT math.IT} }
chen2005trellis
arxiv-673504
cs/0511038
Towards a unified theory of logic programming semantics: Level mapping characterizations of selector generated models
Currently, the variety of expressive extensions and different semantics created for logic programs with negation is diverse and heterogeneous, and there is a lack of comprehensive comparative studies which map out the multitude of perspectives in a uniform way. Most recently, however, new methodologies have been proposed which allow one to derive uniform characterizations of different declarative semantics for logic programs with negation. In this paper, we study the relationship between two of these approaches, namely the level mapping characterizations due to [Hitzler and Wendt 2005], and the selector generated models due to [Schwarz 2004]. We will show that the latter can be captured by means of the former, thereby supporting the claim that level mappings provide a very flexible framework which is applicable to very diversely defined semantics.
arxiv
@article{hitzler2005towards, title={Towards a unified theory of logic programming semantics: Level mapping characterizations of selector generated models}, author={Pascal Hitzler and Sibylle Schwarz}, journal={arXiv preprint arXiv:cs/0511038}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511038}, primaryClass={cs.AI cs.LO} }
hitzler2005towards
arxiv-673505
cs/0511039
The Generalized Area Theorem and Some of its Consequences
There is a fundamental relationship between belief propagation and maximum a posteriori decoding. The case of transmission over the binary erasure channel was investigated in detail in a companion paper. This paper investigates the extension to general memoryless channels (paying special attention to the binary case). An area theorem for transmission over general memoryless channels is introduced and some of its many consequences are discussed. We show that this area theorem gives rise to an upper bound on the maximum a posteriori threshold for sparse graph codes. In situations where this bound is tight, the extrinsic soft bit estimates delivered by the belief propagation decoder coincide with the correct a posteriori probabilities above the maximum a posteriori threshold. More generally, it is conjectured that the fundamental relationship between the maximum a posteriori and the belief propagation decoder which was observed for transmission over the binary erasure channel carries over to the general case. We finally demonstrate that in order for the design rate of an ensemble to approach the capacity under belief propagation decoding the component codes have to be perfectly matched, a statement which is well known for the special case of transmission over the binary erasure channel.
arxiv
@article{measson2005the, title={The Generalized Area Theorem and Some of its Consequences}, author={Cyril Measson and Andrea Montanari and Tom Richardson and Rudiger Urbanke}, journal={arXiv preprint arXiv:cs/0511039}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511039}, primaryClass={cs.IT math.IT} }
measson2005the
arxiv-673506
cs/0511040
Design and Analysis of Nonbinary LDPC Codes for Arbitrary Discrete-Memoryless Channels
We present an analysis, under iterative decoding, of coset LDPC codes over GF(q), designed for use over arbitrary discrete-memoryless channels (particularly nonbinary and asymmetric channels). We use a random-coset analysis to produce an effect that is similar to output-symmetry with binary channels. We show that the random selection of the nonzero elements of the GF(q) parity-check matrix induces a permutation-invariance property on the densities of the decoder messages, which simplifies their analysis and approximation. We generalize several properties, including symmetry and stability, from the analysis of binary LDPC codes. We show that under a Gaussian approximation, the entire q-1 dimensional distribution of the vector messages is described by a single scalar parameter (like the distributions of binary LDPC messages). We apply this property to develop EXIT charts for our codes. We use appropriately designed signal constellations to obtain substantial shaping gains. Simulation results indicate that our codes outperform multilevel codes at short block lengths. We also present simulation results for the AWGN channel, including results within 0.56 dB of the unconstrained Shannon limit (i.e. not restricted to any signal constellation) at a spectral efficiency of 6 bits/s/Hz.
arxiv
@article{bennatan2005design, title={Design and Analysis of Nonbinary LDPC Codes for Arbitrary Discrete-Memoryless Channels}, author={Amir Bennatan and David Burshtein}, journal={IEEE Transactions on Information Theory, vol. IT-52, no. 2, pp. 549-583, 2006}, year={2005}, doi={10.1109/TIT.2005.862080}, archivePrefix={arXiv}, eprint={cs/0511040}, primaryClass={cs.IT math.IT} }
bennatan2005design
arxiv-673507
cs/0511041
Logic Programming with Default, Weak and Strict Negations
This paper treats logic programming with three kinds of negation: default, weak and strict negations. A 3-valued logic model theory is discussed for logic programs with the three kinds of negation. A procedure is constructed for the negations such that its soundness is guaranteed in terms of the 3-valued logic model theory.
arxiv
@article{yamasaki2005logic, title={Logic Programming with Default, Weak and Strict Negations}, author={Susumu Yamasaki}, journal={arXiv preprint arXiv:cs/0511041}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511041}, primaryClass={cs.LO} }
yamasaki2005logic
arxiv-673508
cs/0511042
Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities.
arxiv
@article{bader2005dimensions, title={Dimensions of Neural-symbolic Integration - A Structured Survey}, author={Sebastian Bader and Pascal Hitzler}, journal={arXiv preprint arXiv:cs/0511042}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511042}, primaryClass={cs.AI cs.LO cs.NE} }
bader2005dimensions
arxiv-673509
cs/0511043
Poseidon: a 2-tier Anomaly-based Intrusion Detection System
We present Poseidon, a new anomaly-based intrusion detection system. Poseidon is payload-based and has a two-tier architecture: the first stage consists of a Self-Organizing Map, while the second one is a modified PAYL system. Our benchmarks on the 1999 DARPA data set show a higher detection rate and a lower number of false positives than PAYL and PHAD.
arxiv
@article{bolzoni2005poseidon, title={Poseidon: a 2-tier Anomaly-based Intrusion Detection System}, author={Damiano Bolzoni and Emmanuele Zambon and Sandro Etalle and Pieter Hartel}, journal={arXiv preprint arXiv:cs/0511043}, year={2005}, number={TR-CTIT-05-53}, archivePrefix={arXiv}, eprint={cs/0511043}, primaryClass={cs.CR} }
bolzoni2005poseidon
arxiv-673510
cs/0511044
Various Solutions to the Firing Squad Synchronization Problems
We present different classes of solutions to the Firing Squad Synchronization Problem on networks of different shapes. The nodes are finite state processors that work in unison at discrete steps. The networks considered are the line, the ring and the square. For all of these models we have considered one- and two-way communication modes and also constrained the quantity of information that adjacent processors can exchange at each step. We are given a particular time, expressed as a function $f(n)$ of the number of nodes of the network, and present synchronization algorithms in time $n^2$, $n \log n$, $n\sqrt n$, $2^n$. The solutions are presented as {\em signals} that are used as building blocks to compose new solutions for all times expressed by polynomials with nonnegative coefficients.
arxiv
@article{gruska2005various, title={Various Solutions to the Firing Squad Synchronization Problems}, author={J. Gruska and S. La Torre and M. Napoli and M. Parente}, journal={arXiv preprint arXiv:cs/0511044}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511044}, primaryClass={cs.DS cs.CC} }
gruska2005various
arxiv-673511
cs/0511045
An Invariant Cost Model for the Lambda Calculus
We define a new cost model for the call-by-value lambda-calculus satisfying the invariance thesis. That is, under the proposed cost model, Turing machines and the call-by-value lambda-calculus can simulate each other within a polynomial time overhead. The model only relies on combinatorial properties of usual beta-reduction, without any reference to a specific machine or evaluator. In particular, the cost of a single beta reduction is proportional to the difference between the size of the redex and the size of the reduct. In this way, the total cost of normalizing a lambda term will take into account the size of all intermediate results (as well as the number of steps to normal form).
arxiv
@article{lago2005an, title={An Invariant Cost Model for the Lambda Calculus}, author={Ugo Dal Lago and Simone Martini}, journal={arXiv preprint arXiv:cs/0511045}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511045}, primaryClass={cs.LO cs.CC} }
lago2005an
arxiv-673512
cs/0511046
Generalized Kasami Sequences: The Large Set
In this paper new binary sequence families $\mathcal{F}^k$ of period $2^n-1$ are constructed for even $n$ and any $k$ with ${\rm gcd}(k,n)=2$ if $n/2$ is odd or ${\rm gcd}(k,n)=1$ if $n/2$ is even. The distribution of their correlation values is completely determined. These families have maximum correlation $2^{n/2+1}+1$ and family size $2^{3n/2}+2^{n/2}$ for odd $n/2$ or $2^{3n/2}+2^{n/2}-1$ for even $n/2$. The construction of the large set of Kasami sequences, which is exactly the $\mathcal{F}^{k}$ with $k=n/2+1$, is generalized.
arxiv
@article{zeng2005generalized, title={Generalized Kasami Sequences: The Large Set}, author={Xiangyong Zeng and Qingchong Liu and Lei Hu}, journal={arXiv preprint arXiv:cs/0511046}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511046}, primaryClass={cs.IT cs.CR math.IT} }
zeng2005generalized
arxiv-673513
cs/0511047
The Secret Key-Private Key Capacity Region for Three Terminals
We consider a model for secrecy generation, with three terminals, by means of public interterminal communication, and examine the problem of characterizing all the rates at which all three terminals can generate a ``secret key,'' and -- simultaneously -- two designated terminals can generate a ``private key'' which is effectively concealed from the remaining terminal; both keys are also concealed from an eavesdropper that observes the public communication. Inner and outer bounds for the ``secret key--private key capacity region'' are derived. Under a certain special condition, these bounds coincide to yield the (exact) secret key--private key capacity region.
arxiv
@article{ye2005the, title={The Secret Key-Private Key Capacity Region for Three Terminals}, author={Chunxuan Ye and Prakash Narayan}, journal={arXiv preprint arXiv:cs/0511047}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511047}, primaryClass={cs.IT math.IT} }
ye2005the
arxiv-673514
cs/0511048
Joint Network-Source Coding: An Achievable Region with Diversity Routing
We are interested in how to best communicate a (usually real valued) source to a number of destinations (sinks) over a network with capacity constraints in a collective fidelity metric over all the sinks, a problem which we call joint network-source coding. Unlike the lossless network coding problem, lossy reconstruction of the source at the sinks is permitted. We make a first attempt to characterize the set of all distortions achievable by a set of sinks in a given network. While the entire region of all achievable distortions remains largely an open problem, we find a large, non-trivial subset of it using ideas in multiple description coding. The achievable region is derived over all balanced multiple-description codes and over all network flows, while the network nodes are allowed to forward and duplicate data packets.
arxiv
@article{sarshar2005joint, title={Joint Network-Source Coding: An Achievable Region with Diversity Routing}, author={Nima Sarshar and Xiaolin Wu}, journal={arXiv preprint arXiv:cs/0511048}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511048}, primaryClass={cs.IT math.IT} }
sarshar2005joint
arxiv-673515
cs/0511049
Entropy, Convex Optimization, and Competitive Quantum Interactions
This paper has been withdrawn by the author due to errors.
arxiv
@article{gutoski2005entropy, title={Entropy, Convex Optimization, and Competitive Quantum Interactions}, author={Gus Gutoski}, journal={arXiv preprint arXiv:cs/0511049}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511049}, primaryClass={cs.CC cs.GT quant-ph} }
gutoski2005entropy
arxiv-673516
cs/0511050
Secret Key and Private Key Constructions for Simple Multiterminal Source Models
This work is motivated by recent results of Csiszar and Narayan (IEEE Trans. on Inform. Theory, Dec. 2004), which highlight innate connections between secrecy generation by multiple terminals and multiterminal Slepian-Wolf near-lossless data compression (sans secrecy restrictions). We propose a new approach for constructing secret and private keys based on the long-known Slepian-Wolf code for sources connected by a virtual additive noise channel, due to Wyner (IEEE Trans. on Inform. Theory, Jan. 1974). Explicit procedures for such constructions, and their substantiation, are provided.
arxiv
@article{ye2005secret, title={Secret Key and Private Key Constructions for Simple Multiterminal Source Models}, author={Chunxuan Ye and Prakash Narayan}, journal={arXiv preprint arXiv:cs/0511050}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511050}, primaryClass={cs.IT cs.CR math.IT} }
ye2005secret
arxiv-673517
cs/0511051
The Private Key Capacity Region for Three Terminals
We consider a model with three terminals and examine the problem of characterizing the largest rates at which two pairs of terminals can simultaneously generate private keys, each of which is effectively concealed from the remaining terminal.
arxiv
@article{ye2005private, title={The Private Key Capacity Region for Three Terminals}, author={Chunxuan Ye and Prakash Narayan}, journal={arXiv preprint arXiv:cs/0511051}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511051}, primaryClass={cs.IT cs.CR math.IT} }
ye2005private
arxiv-673518
cs/0511052
Mining Cellular Automata DataBases throug PCA Models
Cellular automata (CA) are discrete dynamical systems that evolve following simple and local rules. Despite this local simplicity, knowledge discovery in CA is an NP problem. This is the main motivation for using data mining techniques for CA study. Principal Component Analysis (PCA) is a useful tool for data mining because it provides a compact and optimal description of data sets. This feature has been explored to compute the best subspace which maximizes the projection of the I/O patterns of CA onto the principal axis. The stability of the principal components against the input patterns is the main result of this approach. In this paper we perform such an analysis, but in the presence of noise which randomly reverses the CA output values with probability $p$. As expected, the number of principal components increases when the pattern size is increased. However, it seems to remain stable when the pattern size is unchanged but the noise intensity gets larger. We describe our experiments and point out further work using KL transform theory and parameter sensitivity analysis.
arxiv
@article{giraldi2005mining, title={Mining Cellular Automata DataBases throug PCA Models}, author={Gilson A. Giraldi and Antonio A.F. Oliveira and Leonardo Carvalho}, journal={arXiv preprint arXiv:cs/0511052}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511052}, primaryClass={cs.DM cs.DB} }
giraldi2005mining
arxiv-673519
cs/0511053
A Model Based Approach to Reachability Routing
Current directions in network routing research have not kept pace with the latest developments in network architectures, such as peer-to-peer networks, sensor networks, ad-hoc wireless networks, and overlay networks. A common characteristic among all of these new technologies is the presence of highly dynamic network topologies. Currently deployed single-path routing protocols cannot adequately cope with this dynamism, and existing multi-path algorithms make trade-offs which lead to less than optimal performance on these networks. This drives the need for routing protocols designed with the unique characteristics of these networks in mind. In this paper we propose the notion of reachability routing as a solution to the challenges posed by routing on such dynamic networks. In particular, our formulation of reachability routing provides cost-sensitive multi-path forwarding along with loop avoidance within the confines of the Internet Protocol (IP) architecture. This is achieved through the application of reinforcement learning within a probabilistic routing framework. Following an explanation of our design decisions and a description of the algorithm, we provide an evaluation of the performance of the algorithm on a variety of network topologies. The results show consistently superior performance compared to other reinforcement learning based routing algorithms.
arxiv
@article{smith2005a, title={A Model Based Approach to Reachability Routing}, author={Leland Smith and Muthukumar Thirunavukkarasu and Srinidhi Varadarajan and Naren Ramakrishnan}, journal={arXiv preprint arXiv:cs/0511053}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511053}, primaryClass={cs.NI} }
smith2005a
arxiv-673520
cs/0511054
Eigenvalue Distributions of Sums and Products of Large Random Matrices via Incremental Matrix Expansions
This paper uses an incremental matrix expansion approach to derive asymptotic eigenvalue distributions (a.e.d.'s) of sums and products of large random matrices. We show that the result can be derived directly as a consequence of two common assumptions, and matches the results obtained from using R- and S-transforms in free probability theory. We also give a direct derivation of the a.e.d. of the sum of certain random matrices which are not free. This is used to determine the asymptotic signal-to-interference-ratio of a multiuser CDMA system with a minimum mean-square error linear receiver.
arxiv
@article{peacock2005eigenvalue, title={Eigenvalue Distributions of Sums and Products of Large Random Matrices via Incremental Matrix Expansions}, author={Matthew J.M. Peacock and Iain B. Collings and Michael L. Honig}, journal={arXiv preprint arXiv:cs/0511054}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511054}, primaryClass={cs.IT math.IT} }
peacock2005eigenvalue
arxiv-673521
cs/0511055
Embedding Defeasible Logic into Logic Programming
Defeasible reasoning is a simple but efficient approach to nonmonotonic reasoning that has recently attracted considerable interest and that has found various applications. Defeasible logic and its variants are an important family of defeasible reasoning methods. So far no relationship has been established between defeasible logic and mainstream nonmonotonic reasoning approaches. In this paper we establish close links to known semantics of logic programs. In particular, we give a translation of a defeasible theory D into a meta-program P(D). We show that under a condition of decisiveness, the defeasible consequences of D correspond exactly to the sceptical conclusions of P(D) under the stable model semantics. Without decisiveness, the result holds only in one direction (all defeasible consequences of D are included in all stable models of P(D)). If we wish a complete embedding for the general case, we need to use the Kunen semantics of P(D) instead.
arxiv
@article{antoniou2005embedding, title={Embedding Defeasible Logic into Logic Programming}, author={Grigoris Antoniou and David Billington and Guido Governatori and Michael J. Maher}, journal={Theory and Practice of Logic Programming, 6, 6 (2006): 703-735}, year={2005}, doi={10.1017/S147106840600277}, archivePrefix={arXiv}, eprint={cs/0511055}, primaryClass={cs.LO} }
antoniou2005embedding
arxiv-673522
cs/0511056
Improved Upper Bounds on Stopping Redundancy
Let C be a linear code with length n and minimum distance d. The stopping redundancy of C is defined as the minimum number of rows in a parity-check matrix for C such that the smallest stopping sets in the corresponding Tanner graph have size d. We derive new upper bounds on the stopping redundancy of linear codes in general, and of maximum distance separable (MDS) codes specifically, and show how they improve upon previously known results. For MDS codes, the new bounds are found by upper bounding the stopping redundancy by a combinatorial quantity closely related to Turan numbers. (The Turan number, T(v,k,t), is the smallest number of t-subsets of a v-set, such that every k-subset of the v-set contains at least one of the t-subsets.) We further show that the stopping redundancy of MDS codes is T(n,d-1,d-2)(1+O(n^{-1})) for fixed d, and is at most T(n,d-1,d-2)(3+O(n^{-1})) for fixed code dimension k=n-d+1. For d=3,4, we prove that the stopping redundancy of MDS codes is equal to T(n,d-1,d-2), for which exact formulas are known. For d=5, we show that the stopping redundancy of MDS codes is either T(n,4,3) or T(n,4,3)+1.
arxiv
@article{han2005improved, title={Improved Upper Bounds on Stopping Redundancy}, author={Junsheng Han and Paul H. Siegel}, journal={arXiv preprint arXiv:cs/0511056}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511056}, primaryClass={cs.IT cs.DM math.IT} }
han2005improved
arxiv-673523
cs/0511057
Quantized Indexing: Beyond Arithmetic Coding
Quantized Indexing is a fast and space-efficient form of enumerative (combinatorial) coding, the strongest among asymptotically optimal universal entropy coding algorithms. The present advance in enumerative coding is similar to that made by arithmetic coding with respect to its unlimited-precision predecessor, Elias coding. The arithmetic precision, execution time, table sizes and coding delay are all reduced by a factor O(n) at a redundancy below 2*log(e)/2^g bits/symbol (for n input symbols and g-bit QI precision). Due to its tighter enumeration, QI output redundancy is below that of arithmetic coding (which can be derived as a lower-accuracy approximation of QI). The relative compression gain vanishes in large n and in high entropy limits and increases for shorter outputs and for less predictable data. QI is significantly faster than the fastest arithmetic coders, from a factor of 6 in the high entropy limit to over 100 in the low entropy limit (typically 10-20 times faster). These speedups are the result of using only 3 adds, 1 shift and 2 array lookups (all in 32-bit precision) per less probable symbol, and no coding operations for the most probable symbol. Further, the exact enumeration algorithm is sharpened and its lattice walks formulation is generalized. A new numeric type with a broader applicability, the sliding window integer, is introduced.
arxiv
@article{tomic2005quantized, title={Quantized Indexing: Beyond Arithmetic Coding}, author={Ratko V. Tomic}, journal={IEEE/DCC 2006, p. 468}, year={2005}, doi={10.1109/DCC.2006.70}, number={TR05-1115}, archivePrefix={arXiv}, eprint={cs/0511057}, primaryClass={cs.IT cs.DM math.CO math.IT} }
tomic2005quantized
arxiv-673524
cs/0511058
On-line regression competitive with reproducing kernel Hilbert spaces
We consider the problem of on-line prediction of real-valued labels, assumed bounded in absolute value by a known constant, of new objects from known labeled objects. The prediction algorithm's performance is measured by the squared deviation of the predictions from the actual labels. No stochastic assumptions are made about the way the labels and objects are generated. Instead, we are given a benchmark class of prediction rules some of which are hoped to produce good predictions. We show that for a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(sqrt(N)); the main differences from the known results are that we do not impose any upper bound on the norm of the considered prediction rules and that we achieve an optimal leading term in the excess loss of our algorithm. If the benchmark class is "universal" (dense in the class of continuous functions on each compact set), this provides an on-line non-stochastic analogue of universally consistent prediction in non-parametric statistics. We use two proof techniques: one is based on the Aggregating Algorithm and the other on the recently developed method of defensive forecasting.
arxiv
@article{vovk2005on-line, title={On-line regression competitive with reproducing kernel Hilbert spaces}, author={Vladimir Vovk}, journal={arXiv preprint arXiv:cs/0511058}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511058}, primaryClass={cs.LG} }
vovk2005on-line
arxiv-673525
cs/0511059
Virtual Coordinate Backtracking for Void Traversal in Geographic Routing
<|reference_start|>Virtual Coordinate Backtracking for Void Traversal in Geographic Routing: Geographical routing protocols have several desirable features for use in ad hoc and sensor networks but are susceptible to voids and localization errors. Virtual coordinate systems are an alternative solution to geographically based routing protocols that works by overlaying a coordinate system on the sensors relative to well chosen reference points. VC is resilient to localization errors; however, we show that it is vulnerable to different forms of the void problem and has no viable complementary approach to overcome them. Specifically, we show that there are instances when packets reach nodes with no viable next hop nodes in the forwarding set. In addition, it is possible for nodes with the same coordinates to arise at different points in the network in the presence of voids. This paper identifies and analyzes these problems. It also compares several existing routing protocols based on Virtual Coordinate systems. Finally, it presents a new routing algorithm that uses backtracking to overcome voids to achieve high connectivity in the greedy phase, higher overall path quality and more resilience to localization errors. We show these properties using extensive simulation analysis.<|reference_end|>
arxiv
@article{liu2005virtual, title={Virtual Coordinate Backtracking for Void Traversal in Geographic Routing}, author={Ke Liu, Nael Abu-Ghazaleh}, journal={AdHocNow 2006}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511059}, primaryClass={cs.NI} }
liu2005virtual
arxiv-673526
cs/0511060
On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings
<|reference_start|>On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings: An interleaver is a critical component for the channel coding performance of turbo codes. Algebraic constructions are of particular interest because they admit analytical designs and simple, practical hardware implementation. Sun and Takeshita have recently shown that the class of quadratic permutation polynomials over integer rings provides excellent performance for turbo codes. In this correspondence, a necessary and sufficient condition is proven for the existence of a quadratic inverse polynomial for a quadratic permutation polynomial over an integer ring. Further, a simple construction is given for the quadratic inverse. All but one of the quadratic interleavers proposed earlier by Sun and Takeshita are found to admit a quadratic inverse, although none were explicitly designed to do so. An explanation is argued for the observation that restriction to a quadratic inverse polynomial does not narrow the pool of good quadratic interleavers for turbo codes.<|reference_end|>
arxiv
@article{ryu2005on, title={On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings}, author={Jonghoon Ryu and Oscar Y. Takeshita}, journal={arXiv preprint arXiv:cs/0511060}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511060}, primaryClass={cs.IT math.IT} }
ryu2005on
arxiv-673527
cs/0511061
Truly On-The-Fly LTL Model Checking
<|reference_start|>Truly On-The-Fly LTL Model Checking: We propose a novel algorithm for automata-based LTL model checking that interleaves the construction of the generalized B\"{u}chi automaton for the negation of the formula and the emptiness check. Our algorithm first converts the LTL formula into a linear weak alternating automaton; configurations of the alternating automaton correspond to the locations of a generalized B\"{u}chi automaton, and a variant of Tarjan's algorithm is used to decide the existence of an accepting run of the product of the transition system and the automaton. Because we avoid an explicit construction of the B\"{u}chi automaton, our approach can yield significant improvements in runtime and memory, for large LTL formulas. The algorithm has been implemented within the SPIN model checker, and we present experimental results for some benchmark examples.<|reference_end|>
arxiv
@article{hammer2005truly, title={Truly On-The-Fly LTL Model Checking}, author={Moritz Hammer (IFI-LMU), Alexander Knapp (IFI-LMU), Stephan Merz (INRIA Lorraine - LORIA)}, journal={Dans Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2005)}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511061}, primaryClass={cs.LO} }
hammer2005truly
arxiv-673528
cs/0511062
Analytic performance comparison of routing protocols in master-slave PLC networks
<|reference_start|>Analytic performance comparison of routing protocols in master-slave PLC networks: In the wide area master-slave PLC (powerline communication) system, the source node cannot reach the destination node without packet relay. Due to the time-variable attenuation in the powerline, the communication distance cannot be defined. Two kinds of dynamic repeater algorithms are developed: dynamic source routing and flooding-based routing. In this paper, we use an analytic approach to compare the performance of these two routing protocols. We give formulas to calculate the average duration of a polling cycle for each protocol. Then we present simulation results to bolster the results of our analysis. We use three metrics, which are bandwidth consumed for routing signaling, normalized routing load and average duration of a polling cycle, to evaluate these routing protocols.<|reference_end|>
arxiv
@article{bumiller2005analytic, title={Analytic performance comparison of routing protocols in master-slave PLC networks}, author={Gerd Bumiller, Liping Lu (INRIA Lorraine - LORIA), Yeqiong Song (INRIA Lorraine - LORIA)}, journal={arXiv preprint arXiv:cs/0511062}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511062}, primaryClass={cs.NI} }
bumiller2005analytic
arxiv-673529
cs/0511063
Pathwords: a user-friendly schema for common passwords management
<|reference_start|>Pathwords: a user-friendly schema for common passwords management: Many computer-based authentication schemata are based on passwords. Logging on a computer, reading email, accessing content on a web server are all examples of applications where the identification of the user is usually accomplished by matching the data provided by the user with data known by the application. Such a widespread approach relies on some assumptions, whose satisfaction is of foremost importance to guarantee the robustness of the solution. Some of these assumptions, like having a "secure" channel to transmit data, or having sound algorithms to check the correctness of the data, are not addressed by this paper. We will focus on two simple issues: the problem of using adequate passwords and the problem of managing passwords. The proposed solution, the pathword, is a method that guarantees: (1) that the passwords generated with the help of a pathword are adequate (i.e. that they are not easy to guess), and (2) that managing pathwords is more user friendly than managing passwords and that pathwords are less amenable to problems typical of passwords.<|reference_end|>
arxiv
@article{finelli2005pathwords:, title={Pathwords: a user-friendly schema for common passwords management}, author={Michele Finelli}, journal={arXiv preprint arXiv:cs/0511063}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511063}, primaryClass={cs.CR} }
finelli2005pathwords:
arxiv-673530
cs/0511064
The consistency principle for a digitization procedure. An algorithm for building normal digital spaces of continuous n-dimensional objects
<|reference_start|>The consistency principle for a digitization procedure. An algorithm for building normal digital spaces of continuous n-dimensional objects: This paper considers conditions that allow important topological and geometric properties to be preserved in the process of digitization. For this purpose, we introduce a triplet {C,M,D} consisting of a continuous object C, an intermediate model M, which is a collection of subregions whose union is C, and a digital model D, which is the intersection graph of M, and apply the consistency principle and criteria of similarity to M in order to make its mathematical structure consistent with the natural structure of D. Specifically, this paper introduces a locally centered lump collection of subregions and shows that for any locally centered lump cover of an n-dimensional continuous manifold, the digital model of the manifold is a digital normal n-dimensional space. In addition, we give examples of locally centered lump tilings of two-manifolds. We propose an algorithm for constructing normal digital models of continuous objects.<|reference_end|>
arxiv
@article{evako2005the, title={The consistency principle for a digitization procedure. An algorithm for building normal digital spaces of continuous n-dimensional objects}, author={Alexander V. Evako}, journal={arXiv preprint arXiv:cs/0511064}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511064}, primaryClass={cs.CV cs.DM} }
evako2005the
arxiv-673531
cs/0511065
Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh Environments
<|reference_start|>Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh Environments: We consider multiple-input multiple-output (MIMO) transmit beamforming systems with maximum ratio combining (MRC) receivers. The operating environment is Rayleigh-fading with both transmit and receive spatial correlation. We present exact expressions for the probability density function (p.d.f.) of the output signal-to-noise ratio (SNR), as well as the system outage probability. The results are based on explicit closed-form expressions which we derive for the p.d.f. and c.d.f. of the maximum eigenvalue of double-correlated complex Wishart matrices. For systems with two antennas at either the transmitter or the receiver, we also derive exact closed-form expressions for the symbol error rate (SER). The new expressions are used to prove that MIMO-MRC achieves the maximum available spatial diversity order, and to demonstrate the effect of spatial correlation. The analysis is validated through comparison with Monte-Carlo simulations.<|reference_end|>
arxiv
@article{mckay2005performance, title={Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh Environments}, author={Matthew R. McKay, Alex J. Grant, and Iain B. Collings}, journal={arXiv preprint arXiv:cs/0511065}, year={2005}, doi={10.1109/TCOMM.2007.892450}, archivePrefix={arXiv}, eprint={cs/0511065}, primaryClass={cs.IT math.IT} }
mckay2005performance
arxiv-673532
cs/0511066
An introspective algorithm for the integer determinant
<|reference_start|>An introspective algorithm for the integer determinant: We present an algorithm computing the determinant of an integer matrix A. The algorithm is introspective in the sense that it uses several distinct algorithms that run in a concurrent manner. During the course of the algorithm partial results coming from distinct methods can be combined. Then, depending on the current running time of each method, the algorithm can emphasize a particular variant. With the use of very fast modular routines for linear algebra, our implementation is an order of magnitude faster than other existing implementations. Moreover, we prove that the expected complexity of our algorithm is only O(n^3 log^{2.5}(n ||A||)) bit operations in the dense case and O(Omega n^{1.5} log^2(n ||A||) + n^{2.5}log^3(n||A||)) in the sparse case, where ||A|| is the largest entry in absolute value of the matrix and Omega is the cost of matrix-vector multiplication in the case of a sparse matrix.<|reference_end|>
arxiv
@article{dumas2005an, title={An introspective algorithm for the integer determinant}, author={Jean-Guillaume Dumas (LJK), Anna Urbanska (LJK)}, journal={arXiv preprint arXiv:cs/0511066}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511066}, primaryClass={cs.SC} }
dumas2005an
arxiv-673533
cs/0511067
Effects of Initial Stance of Quadruped Trotting on Walking Stability
<|reference_start|>Effects of Initial Stance of Quadruped Trotting on Walking Stability: It is very important for a quadruped walking machine to maintain its stability in high-speed walking. It has been indicated that the moment around the supporting diagonal line of a quadruped in trotting gait largely influences walking stability. In this paper, the moment around the supporting diagonal line of a quadruped in trotting gait is modeled and its effects on body attitude are analyzed. The degree of influence varies with different initial stances of the quadruped, and we obtain the optimal initial stance of the quadruped in trotting gait with maximal walking stability. Simulation results are presented. Keywords: quadruped, trotting, attitude, walking stability.<|reference_end|>
arxiv
@article{he2005effects, title={Effects of Initial Stance of Quadruped Trotting on Walking Stability}, author={Dongqing He and Peisun Ma}, journal={arXiv preprint arXiv:cs/0511067}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511067}, primaryClass={cs.RO} }
he2005effects
arxiv-673534
cs/0511068
An Agent-based Manufacturing Management System for Production and Logistics within Cross-Company Regional and National Production Networks
<|reference_start|>An Agent-based Manufacturing Management System for Production and Logistics within Cross-Company Regional and National Production Networks: The goal is the development of simultaneous, dynamic, technological as well as logistical real-time planning and organizational control of production by the production units themselves, working in the production network with the use of multi-agent technology. The design of the multi-agent-based manufacturing management system, the models of the single agents, algorithms for the agent-based, decentralized dispatching of orders, strategies and data management concepts, as well as their integration into the SCM, based on the solution described, will be explained in the following. Keywords: production engineering and management, dynamic manufacturing planning and control, multi-agent systems (MAS), supply-chain-management (SCM), e-manufacturing<|reference_end|>
arxiv
@article{heinrich2005an, title={An Agent-based Manufacturing Management System for Production and Logistics within Cross-Company Regional and National Production Networks}, author={S. Heinrich, H. Durr, T. Hanel and J. Lassig}, journal={arXiv preprint arXiv:cs/0511068}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511068}, primaryClass={cs.RO} }
heinrich2005an
arxiv-673535
cs/0511069
Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators
<|reference_start|>Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators: The approximate nonlinear receding-horizon control law is used to treat the trajectory tracking control problem of rigid link robot manipulators. The derived nonlinear predictive law uses a quadratic performance index of the predicted tracking error and the predicted control effort. A key feature of this control law is that, for its implementation, there is no need to perform an online optimization, and asymptotic tracking of smooth reference trajectories is guaranteed. It is shown that this controller achieves the position tracking objectives via link position measurements. The convergence of the output tracking error to the origin is proved. To enhance the robustness of the closed loop system with respect to payload uncertainties and viscous friction, an integral action is introduced in the loop. A nonlinear observer is used to estimate velocity. Simulation results for a two-link rigid robot are performed to validate the performance of the proposed controller. Keywords: receding-horizon control, nonlinear observer, robot manipulators, integral action, robustness.<|reference_end|>
arxiv
@article{hedjar2005nonlinear, title={Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators}, author={R. Hedjar and P. Boucher}, journal={arXiv preprint arXiv:cs/0511069}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511069}, primaryClass={cs.RO} }
hedjar2005nonlinear
arxiv-673536
cs/0511070
A particle can carry more than one bit of information
<|reference_start|>A particle can carry more than one bit of information: It is believed that a particle cannot carry more than one bit of information. It is pointed out that a particle or single-particle quantum state can carry more than one bit of information. This implies that the minimum energy cost of transmitting a bit will be less than the accepted limit KT log 2.<|reference_end|>
arxiv
@article{mitra2005a, title={A particle can carry more than one bit of information}, author={Arindam Mitra}, journal={arXiv preprint arXiv:cs/0511070}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511070}, primaryClass={cs.IT math.IT} }
mitra2005a
arxiv-673537
cs/0511071
A polynomial-time heuristic for Circuit-SAT
<|reference_start|>A polynomial-time heuristic for Circuit-SAT: In this paper, a heuristic is presented that, in time and space polynomial in the input dimension, determines whether a circuit describes a tautology or a contradiction. If the circuit is neither a tautology nor a contradiction, then the heuristic finds an assignment to the circuit inputs such that the circuit is satisfied.<|reference_end|>
arxiv
@article{capasso2005a, title={A polynomial-time heuristic for Circuit-SAT}, author={Francesco Capasso}, journal={arXiv preprint arXiv:cs/0511071}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511071}, primaryClass={cs.CC cs.DS} }
capasso2005a
arxiv-673538
cs/0511072
Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy
<|reference_start|>Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy: We present error-correcting codes that achieve the information-theoretically best possible trade-off between the rate and error-correction radius. Specifically, for every $0 < R < 1$ and $\eps> 0$, we present an explicit construction of error-correcting codes of rate $R$ that can be list decoded in polynomial time up to a fraction $(1-R-\eps)$ of {\em worst-case} errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are {\em folded Reed-Solomon codes}, which are in fact {\em exactly} Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in {\em phased bursts}. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on $\eps$) using ideas concerning ``list recovery'' and expander-based codes from \cite{GI-focs01,GI-ieeejl}. Concatenating the folded RS codes with suitable inner codes also gives us polynomial time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.<|reference_end|>
arxiv
@article{guruswami2005explicit, title={Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy}, author={Venkatesan Guruswami and Atri Rudra}, journal={arXiv preprint arXiv:cs/0511072}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511072}, primaryClass={cs.IT math.IT} }
guruswami2005explicit
arxiv-673539
cs/0511073
Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview
<|reference_start|>Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview: We define a class of probabilistic models in terms of an operator algebra of stochastic processes, and a representation for this class in terms of stochastic parameterized grammars. A syntactic specification of a grammar is mapped to semantics given in terms of a ring of operators, so that grammatical composition corresponds to operator addition or multiplication. The operators are generators for the time-evolution of stochastic processes. Within this modeling framework one can express data clustering models, logic programs, ordinary and stochastic differential equations, graph grammars, and stochastic chemical reaction kinetics. This mathematical formulation connects these apparently distant fields to one another and to mathematical methods from quantum field theory and operator algebra.<|reference_end|>
arxiv
@article{mjolsness2005stochastic, title={Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview}, author={Eric Mjolsness}, journal={arXiv preprint arXiv:cs/0511073}, year={2005}, number={UCI ICS TR# 05-14}, archivePrefix={arXiv}, eprint={cs/0511073}, primaryClass={cs.AI cs.LO nlin.AO} }
mjolsness2005stochastic
arxiv-673540
cs/0511074
Every Sequence is Decompressible from a Random One
<|reference_start|>Every Sequence is Decompressible from a Random One: Kucera and Gacs independently showed that every infinite sequence is Turing reducible to a Martin-Lof random sequence. This result is extended by showing that every infinite sequence S is Turing reducible to a Martin-Lof random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S. It is shown that this is the optimal ratio of query bits to computed bits achievable with Turing reductions. As an application of this result, a new characterization of constructive dimension is given in terms of Turing reduction compression ratios.<|reference_end|>
arxiv
@article{doty2005every, title={Every Sequence is Decompressible from a Random One}, author={David Doty}, journal={arXiv preprint arXiv:cs/0511074}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511074}, primaryClass={cs.IT cs.CC math.IT} }
doty2005every
arxiv-673541
cs/0511075
Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted Protein and RNA Binding Sites in Rev Proteins of HIV-1 and EIAV Agree with Experimental Data
<|reference_start|>Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted Protein and RNA Binding Sites in Rev Proteins of HIV-1 and EIAV Agree with Experimental Data: Protein-protein and protein nucleic acid interactions are vitally important for a wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses. We have developed machine learning approaches for predicting which amino acids of a protein participate in its interactions with other proteins and/or nucleic acids, using only the protein sequence as input. In this paper, we describe an application of classifiers trained on datasets of well-characterized protein-protein and protein-RNA complexes for which experimental structures are available. We apply these classifiers to the problem of predicting protein and RNA binding sites in the sequence of a clinically important protein for which the structure is not known: the regulatory protein Rev, essential for the replication of HIV-1 and other lentiviruses. We compare our predictions with published biochemical, genetic and partial structural information for HIV-1 and EIAV Rev and with our own published experimental mapping of RNA binding sites in EIAV Rev. The predicted and experimentally determined binding sites are in very good agreement. The ability to predict reliably the residues of a protein that directly contribute to specific binding events - without the requirement for structural information regarding either the protein or complexes in which it participates - can potentially generate new disease intervention strategies.<|reference_end|>
arxiv
@article{terribilini2005identifying, title={Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted Protein and RNA Binding Sites in Rev Proteins of HIV-1 and EIAV Agree with Experimental Data}, author={Michael Terribilini, Jae-Hyung Lee, Changhui Yan, Robert L. Jernigan, Susan Carpenter, Vasant Honavar, Drena Dobbs}, journal={arXiv preprint arXiv:cs/0511075}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511075}, primaryClass={cs.LG cs.AI} }
terribilini2005identifying
arxiv-673542
cs/0511076
Using phonetic constraints in acoustic-to-articulatory inversion
<|reference_start|>Using phonetic constraints in acoustic-to-articulatory inversion: The goal of this work is to recover articulatory information from the speech signal by acoustic-to-articulatory inversion. One of the main difficulties with inversion is that the problem is underdetermined and inversion methods generally offer no guarantee on the phonetic realism of the inverse solutions. A way to address this issue is to use additional phonetic constraints. Knowledge of the phonetic characteristics of French vowels enables the derivation of reasonable articulatory domains in the space of Maeda parameters: given the formant frequencies (F1,F2,F3) of a speech sample, and thus the vowel identity, an "ideal" articulatory domain can be derived. The space of formant frequencies is partitioned into vowels, using either speaker-specific data or generic information on formants. Then, to each articulatory vector can be associated a phonetic score varying with the distance to the "ideal domain" associated with the corresponding vowel. Inversion experiments were conducted on isolated vowels and vowel-to-vowel transitions. Articulatory parameters were compared with those obtained without using these constraints and those measured from X-ray data.<|reference_end|>
arxiv
@article{potard2005using, title={Using phonetic constraints in acoustic-to-articulatory inversion}, author={Blaise Potard (INRIA Lorraine - LORIA), Yves Laprie (INRIA Lorraine - LORIA)}, journal={Proceedings of Interspeech, 9th European Conference on Speech Communication and Technology (2005) 3217-3220}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511076}, primaryClass={cs.CL} }
potard2005using
arxiv-673543
cs/0511077
The Availability and Persistence of Web References in D-Lib Magazine
<|reference_start|>The Availability and Persistence of Web References in D-Lib Magazine: We explore the availability and persistence of URLs cited in articles published in D-Lib Magazine. We extracted 4387 unique URLs referenced in 453 articles published from July 1995 to August 2004. The availability was checked three times a week for 25 weeks from September 2004 to February 2005. We found that approximately 28% of those URLs failed to resolve initially, and 30% failed to resolve at the last check. A majority of the unresolved URLs were due to 404 (page not found) and 500 (internal server error) errors. The content pointed to by the URLs was relatively stable; only 16% of the content registered more than a 1 KB change during the testing period. We explore possible factors which may cause a URL to fail by examining its age, path depth, top-level domain and file extension. Based on the data collected, we found the half-life of a URL referenced in a D-Lib Magazine article is approximately 10 years. We also found that URLs were more likely to be unavailable if they pointed to resources in the .net, .edu or country-specific top-level domain, used non-standard ports (i.e., not port 80), or pointed to resources with uncommon or deprecated extensions (e.g., .shtml, .ps, .txt).<|reference_end|>
arxiv
@article{mccown2005the, title={The Availability and Persistence of Web References in D-Lib Magazine}, author={Frank McCown, Sheffan Chan, Michael L. Nelson, Johan Bollen}, journal={5th International Web Archiving Workshop (IWAW05), Vienna, Austria, 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511077}, primaryClass={cs.DL} }
mccown2005the
arxiv-673544
cs/0511078
Uniqueness of Nonextensive entropy under Renyi's Recipe
<|reference_start|>Uniqueness of Nonextensive entropy under Renyi's Recipe: By replacing linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages) or quasilinear means and further imposing the additivity constraint, R\'{e}nyi proposed the first formal generalization of Shannon entropy. Using this recipe of R\'{e}nyi, one can prepare only two information measures: Shannon and R\'{e}nyi entropy. Indeed, using this formalism R\'{e}nyi characterized these additive entropies in terms of axioms of quasilinear means. As additivity is a characteristic property of Shannon entropy, pseudo-additivity of the form $x \oplus_{q} y = x + y + (1-q)x y$ is a characteristic property of nonextensive (or Tsallis) entropy. One can apply R\'{e}nyi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. In this paper we show that nonextensive entropy is unique under R\'{e}nyi's recipe, and thereby give a characterization.<|reference_end|>
arxiv
@article{dukkipati2005uniqueness, title={Uniqueness of Nonextensive entropy under Renyi's Recipe}, author={Ambedkar Dukkipati, M. Narasimha Murty and Shalabh Bhatnagar}, journal={arXiv preprint arXiv:cs/0511078}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511078}, primaryClass={cs.IT math.IT} }
dukkipati2005uniqueness
arxiv-673545
cs/0511079
An elitist approach for extracting automatically well-realized speech sounds with high confidence
<|reference_start|>An elitist approach for extracting automatically well-realized speech sounds with high confidence: This paper presents an "elitist approach" for extracting automatically well-realized speech sounds with high confidence. The elitist approach uses a speech recognition system based on Hidden Markov Models (HMMs). The HMMs are trained on speech sounds which are systematically well-detected in an iterative procedure. The results show that, by using the HMMs defined in the training phase, the speech recognizer reliably detects specific speech sounds with a small rate of errors.<|reference_end|>
arxiv
@article{maj2005an, title={An elitist approach for extracting automatically well-realized speech sounds with high confidence}, author={Jean-Baptiste Maj (LORIA), Anne Bonneau (LORIA), Dominique Fohr (LORIA), Yves Laprie (LORIA)}, journal={arXiv preprint arXiv:cs/0511079}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511079}, primaryClass={cs.CL} }
maj2005an
arxiv-673546
cs/0511080
A dissemination strategy for immunizing scale-free networks
<|reference_start|>A dissemination strategy for immunizing scale-free networks: We consider the problem of distributing a vaccine for immunizing a scale-free network against a given virus or worm. We introduce a new method, based on vaccine dissemination, that seems to reflect more accurately what is expected to occur in real-world networks. Also, since the dissemination is performed using only local information, the method can be easily employed in practice. Using a random-graph framework, we analyze our method both mathematically and by means of simulations. We demonstrate its efficacy regarding the trade-off between the expected number of nodes that receive the vaccine and the network's resulting vulnerability to develop an epidemic as the virus or worm attempts to infect one of its nodes. For some scenarios, the new method is seen to render the network practically invulnerable to attacks while requiring only a small fraction of the nodes to receive the vaccine.<|reference_end|>
arxiv
@article{stauffer2005a, title={A dissemination strategy for immunizing scale-free networks}, author={Alexandre O. Stauffer, Valmir C. Barbosa}, journal={Physical Review E 74 (2006), 056105}, year={2005}, doi={10.1103/PhysRevE.74.056105}, archivePrefix={arXiv}, eprint={cs/0511080}, primaryClass={cs.NI} }
stauffer2005a
arxiv-673547
cs/0511081
Writing on Fading Paper and Causal Transmitter CSI
<|reference_start|>Writing on Fading Paper and Causal Transmitter CSI: A wideband fading channel is considered with causal channel state information (CSI) at the transmitter and no receiver CSI. A simple orthogonal code with energy detection rule at the receiver (similar to [6]) is shown to achieve the capacity of this channel in the limit of large bandwidth. This code transmits energy only when the channel gain is large enough. In this limit, this capacity without any receiver CSI is the same as the capacity with full receiver CSI--a phenomenon also true for dirty paper coding. For Rayleigh fading, this capacity (per unit time) is proportional to the logarithm of the bandwidth. Our coding scheme is motivated by the Gel'fand-Pinsker [2,3] coding and dirty paper coding [4]. Nonetheless, for our case, only causal CSI is required at the transmitter in contrast with dirty-paper coding and Gel'fand-Pinsker coding, where non-causal CSI is required. Then we consider a general discrete channel with i.i.d. states. Each input has an associated cost and a zero cost input "0" exists. The channel state is assumed to be known at the transmitter in a causal manner. Capacity per unit cost is found for this channel and a simple orthogonal code is shown to achieve this capacity. Later, a novel orthogonal coding scheme is proposed for the case of causal transmitter CSI and a condition for equivalence of capacity per unit cost for causal and non-causal transmitter CSI is derived. Finally, some connections are made to the case of non-causal transmitter CSI in [8].<|reference_end|>
arxiv
@article{borade2005writing, title={Writing on Fading Paper and Causal Transmitter CSI}, author={Shashi Borade and Lizhong Zheng}, journal={arXiv preprint arXiv:cs/0511081}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511081}, primaryClass={cs.IT math.IT} }
borade2005writing
arxiv-673548
cs/0511082
Approximating Clustering of Fingerprint Vectors with Missing Values
<|reference_start|>Approximating Clustering of Fingerprint Vectors with Missing Values: The problem of clustering fingerprint vectors is an interesting problem in Computational Biology that has been proposed in (Figureroa et al. 2004). In this paper we show some improvements in closing the gaps between the known lower bounds and upper bounds on the approximability of some variants of the biological problem. Namely we are able to prove that the problem is APX-hard even when each fingerprint contains only two unknown positions. Moreover we have studied some variants of the original problem, and we give two 2-approximation algorithms for the IECMV and OECMV problems when the number of unknown entries for each vector is at most a constant.<|reference_end|>
arxiv
@article{bonizzoni2005approximating, title={Approximating Clustering of Fingerprint Vectors with Missing Values}, author={Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi}, journal={arXiv preprint arXiv:cs/0511082}, year={2005}, doi={10.1007/s00453-008-9265-0}, archivePrefix={arXiv}, eprint={cs/0511082}, primaryClass={cs.DS} }
bonizzoni2005approximating
arxiv-673549
cs/0511083
Gradient Based Routing in Wireless Sensor Networks: a Mixed Strategy
<|reference_start|>Gradient Based Routing in Wireless Sensor Networks: a Mixed Strategy: We show how recent theoretical advances for data-propagation in Wireless Sensor Networks (WSNs) can be combined to improve gradient-based routing (GBR) in Wireless Sensor Networks. We propose a mixed-strategy of direct transmission and multi-hop propagation of data which improves the lifespan of WSNs by reaching better energy-load-balancing amongst sensor nodes.<|reference_end|>
arxiv
@article{powell2005gradient, title={Gradient Based Routing in Wireless Sensor Networks: a Mixed Strategy}, author={Olivier Powell, Aubin Jarry, Pierre Leone, Jose Rolim}, journal={arXiv preprint arXiv:cs/0511083}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511083}, primaryClass={cs.DC} }
powell2005gradient
arxiv-673550
cs/0511084
Ramsey partitions and proximity data structures
<|reference_start|>Ramsey partitions and proximity data structures: This paper addresses two problems lying at the intersection of geometric analysis and theoretical computer science: The non-linear isomorphic Dvoretzky theorem and the design of good approximate distance oracles for large distortion. We introduce the notion of Ramsey partitions of a finite metric space, and show that the existence of good Ramsey partitions implies a solution to the metric Ramsey problem for large distortion (a.k.a. the non-linear version of the isomorphic Dvoretzky theorem, as introduced by Bourgain, Figiel, and Milman). We then proceed to construct optimal Ramsey partitions, and use them to show that for every e\in (0,1), any n-point metric space has a subset of size n^{1-e} which embeds into Hilbert space with distortion O(1/e). This result is best possible and improves part of the metric Ramsey theorem of Bartal, Linial, Mendel and Naor, in addition to considerably simplifying its proof. We use our new Ramsey partitions to design the best known approximate distance oracles when the distortion is large, closing a gap left open by Thorup and Zwick. Namely, we show that for any $n$-point metric space X, and k>1, there exists an O(k)-approximate distance oracle whose storage requirement is O(n^{1+1/k}), and whose query time is a universal constant. We also discuss applications of Ramsey partitions to various other geometric data structure problems, such as the design of efficient data structures for approximate ranking.<|reference_end|>
arxiv
@article{mendel2005ramsey, title={Ramsey partitions and proximity data structures}, author={Manor Mendel and Assaf Naor}, journal={J. European Math. Soc. 9(2): 253-275, 2007}, year={2005}, doi={10.4171/JEMS/79}, archivePrefix={arXiv}, eprint={cs/0511084}, primaryClass={cs.DS cs.CG math.FA math.MG} }
mendel2005ramsey
arxiv-673551
cs/0511085
Proving that P is not equal to NP and that P is not equal to the intersection of NP and co-NP
<|reference_start|>Proving that P is not equal to NP and that P is not equal to the intersection of NP and co-NP: The open question, P=NP?, was presented by Cook (1971). In this paper, a proof that P is not equal to NP is presented. In addition, it is shown that P is not equal to the intersection of NP and co-NP. Finally, the exact inclusion relationships between the classes P, NP and co-NP are presented.<|reference_end|>
arxiv
@article{cohen2005proving, title={Proving that P is not equal to NP and that P is not equal to the intersection of NP and co-NP}, author={R. A. Cohen}, journal={arXiv preprint arXiv:cs/0511085}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511085}, primaryClass={cs.CC} }
cohen2005proving
arxiv-673552
cs/0511086
Energy-Efficient Resource Allocation in Time Division Multiple-Access over Fading Channels
<|reference_start|>Energy-Efficient Resource Allocation in Time Division Multiple-Access over Fading Channels: We investigate energy-efficiency issues and resource allocation policies for time division multi-access (TDMA) over fading channels in the power-limited regime. Supposing that the channels are frequency-flat block-fading and transmitters have full or quantized channel state information (CSI), we first minimize power under a weighted sum-rate constraint and show that the optimal rate and time allocation policies can be obtained by water-filling over realizations of convex envelopes of the minima for cost-reward functions. We then address a related minimization under individual rate constraints and derive the optimal allocation policies via greedy water-filling. Using water-filling across frequencies and fading states, we also extend our results to frequency-selective channels. Our approaches not only provide fundamental power limits when each user can support an infinite number of capacity-achieving codebooks, but also yield guidelines for practical designs where users can only support a finite number of adaptive modulation and coding (AMC) modes with prescribed symbol error probabilities, and also for systems where only discrete-time allocations are allowed.<|reference_end|>
arxiv
@article{wang2005energy-efficient, title={Energy-Efficient Resource Allocation in Time Division Multiple-Access over Fading Channels}, author={Xin Wang and Georgios B. Giannakis}, journal={arXiv preprint arXiv:cs/0511086}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511086}, primaryClass={cs.IT math.IT} }
wang2005energy-efficient
arxiv-673553
cs/0511087
Robust Inference of Trees
<|reference_start|>Robust Inference of Trees: This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.<|reference_end|>
arxiv
@article{zaffalon2005robust, title={Robust Inference of Trees}, author={Marco Zaffalon and Marcus Hutter}, journal={Annals of Mathematics and Artificial Intelligence, 45 (2005) 215-239}, year={2005}, doi={10.1007/s10472-005-9007-9}, number={IDSIA-11-03}, archivePrefix={arXiv}, eprint={cs/0511087}, primaryClass={cs.LG cs.AI cs.IT math.IT} }
zaffalon2005robust
arxiv-673554
cs/0511088
Bounds on Query Convergence
<|reference_start|>Bounds on Query Convergence: The problem of finding an optimum using noisy evaluations of a smooth cost function arises in many contexts, including economics, business, medicine, experiment design, and foraging theory. We derive an asymptotic bound E[ (x_t - x*)^2 ] >= O(1/sqrt(t)) on the rate of convergence of a sequence (x_0, x_1, ...) generated by an unbiased feedback process observing noisy evaluations of an unknown quadratic function maximised at x*. The bound is tight, as the proof leads to a simple algorithm which meets it. We further establish a bound on the total regret, E[ sum_{i=1..t} (x_i - x*)^2 ] >= O(sqrt(t)). These bounds may impose practical limitations on an agent's performance, as O(eps^-4) queries are made before the queries converge to x* with eps accuracy.<|reference_end|>
arxiv
@article{pearlmutter2005bounds, title={Bounds on Query Convergence}, author={Barak A. Pearlmutter}, journal={arXiv preprint arXiv:cs/0511088}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511088}, primaryClass={cs.LG} }
pearlmutter2005bounds
arxiv-673555
cs/0511089
Continued Fraction Expansion as Isometry: The Law of the Iterated Logarithm for Linear, Jump, and 2--Adic Complexity
<|reference_start|>Continued Fraction Expansion as Isometry: The Law of the Iterated Logarithm for Linear, Jump, and 2--Adic Complexity: In the cryptanalysis of stream ciphers and pseudorandom sequences, the notions of linear, jump, and 2-adic complexity arise naturally to measure the (non)randomness of a given string. We define an isometry K on F_q^\infty that is the precise equivalent to Euclid's algorithm over the reals to calculate the continued fraction expansion of a formal power series. The continued fraction expansion allows one to deduce the linear and jump complexity profiles of the input sequence. Since K is an isometry, the resulting F_q^\infty-sequence is i.i.d. for i.i.d. input. Hence the linear and jump complexity profiles may be modelled via Bernoulli experiments (for F_2: coin tossing), and we can apply the very precise bounds as collected by Revesz, among others the Law of the Iterated Logarithm. The second topic is the 2-adic span and complexity, as defined by Goresky and Klapper. We derive again an isometry, this time on the dyadic integers Z_2 which induces an isometry A on F_2^\infty. The corresponding jump complexity behaves on average exactly like coin tossing. Index terms: Formal power series, isometry, linear complexity, jump complexity, 2-adic complexity, 2-adic span, law of the iterated logarithm, Levy classes, stream ciphers, pseudorandom sequences<|reference_end|>
arxiv
@article{vielhaber2005continued, title={Continued Fraction Expansion as Isometry: The Law of the Iterated Logarithm for Linear, Jump, and 2--Adic Complexity}, author={Michael Vielhaber}, journal={arXiv preprint arXiv:cs/0511089}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511089}, primaryClass={cs.IT math.IT} }
vielhaber2005continued
arxiv-673556
cs/0511090
Integration of Declarative and Constraint Programming
<|reference_start|>Integration of Declarative and Constraint Programming: Combining a set of existing constraint solvers into an integrated system of cooperating solvers is a useful and economic principle to solve hybrid constraint problems. In this paper we show that this approach can also be used to integrate different language paradigms into a unified framework. Furthermore, we study the syntactic, semantic and operational impacts of this idea for the amalgamation of declarative and constraint programming.<|reference_end|>
arxiv
@article{hofstedt2005integration, title={Integration of Declarative and Constraint Programming}, author={Petra Hofstedt, Peter Pepper}, journal={arXiv preprint arXiv:cs/0511090}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511090}, primaryClass={cs.PL cs.AI} }
hofstedt2005integration
arxiv-673557
cs/0511091
Evolution of Voronoi based Fuzzy Recurrent Controllers
<|reference_start|>Evolution of Voronoi based Fuzzy Recurrent Controllers: A fuzzy controller is usually designed by formulating the knowledge of a human expert into a set of linguistic variables and fuzzy rules. Among the most successful methods to automate the fuzzy controller development process are evolutionary algorithms. In this work, we propose the Recurrent Fuzzy Voronoi (RFV) model, a representation for recurrent fuzzy systems. It is an extension of the FV model proposed by Kavka and Schoenauer that extends the application domain to include temporal problems. The FV model is a representation for fuzzy controllers based on Voronoi diagrams that can represent fuzzy systems with synergistic rules, fulfilling the $\epsilon$-completeness property and providing a simple way to introduce a priori knowledge. In the proposed representation, the temporal relations are embedded by including internal units that provide feedback by connecting outputs to inputs. These internal units act as memory elements. In the RFV model, the semantics of the internal units can be specified together with the a priori rules. The geometric interpretation of the rules allows the use of geometric variational operators during the evolution. The representation and the algorithms are validated in two problems in the area of system identification and evolutionary robotics.<|reference_end|>
arxiv
@article{kavka2005evolution, title={Evolution of Voronoi based Fuzzy Recurrent Controllers}, author={Carlos Kavka (INRIA Futurs, UNSL-DI), Patricia Roggero (UNSL-DI), Marc Schoenauer (INRIA Futurs)}, journal={Dans GECCO 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511091}, primaryClass={cs.AI} }
kavka2005evolution
arxiv-673558
cs/0511092
The SL synchronous language, revisited
<|reference_start|>The SL synchronous language, revisited: We revisit the SL synchronous programming model introduced by Boussinot and De Simone (IEEE, Trans. on Soft. Eng., 1996). We discuss an alternative design of the model including thread spawning and recursive definitions and we explore some basic properties of the revised model: determinism, reactivity, CPS translation to a tail recursive form, computational expressivity, and a compositional notion of program equivalence.<|reference_end|>
arxiv
@article{amadio2005the, title={The SL synchronous language, revisited}, author={Roberto Amadio (PPS)}, journal={Journal of Logic and Algebraic Programming 70 (15/02/2007) 121-150}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511092}, primaryClass={cs.PL} }
amadio2005the
arxiv-673559
cs/0511093
Artificial Agents and Speculative Bubbles
<|reference_start|>Artificial Agents and Speculative Bubbles: Pertaining to Agent-based Computational Economics (ACE), this work presents two models for the rise and downfall of speculative bubbles through an exchange price fixing based on double auction mechanisms. The first model is based on a finite time horizon context, where the expected dividends decrease over time. The second model follows the {\em greater fool} hypothesis; the agent behaviour depends on the comparison of the estimated risk with the greater fool's. Simulations shed some light on the influential parameters and the necessary conditions for the emergence of speculative bubbles in an asset market within the considered framework.<|reference_end|>
arxiv
@article{semet2005artificial, title={Artificial Agents and Speculative Bubbles}, author={Yann Semet (INRIA Futurs), Sylvain Gelly (INRIA Futurs), Marc Schoenauer (INRIA Futurs), Michèle Sebag (INRIA Futurs)}, journal={arXiv preprint arXiv:cs/0511093}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511093}, primaryClass={cs.GT cs.AI} }
semet2005artificial
arxiv-673560
cs/0511094
A Machine-Independent port of the MPD language run time system to NetBSD
<|reference_start|>A Machine-Independent port of the MPD language run time system to NetBSD: SR (synchronizing resources) is a Pascal-style language enhanced with constructs for concurrent programming developed at the University of Arizona in the late 1980s. MPD (presented in Gregory Andrews' book about Foundations of Multithreaded, Parallel, and Distributed Programming) is its successor, providing the same language primitives with a different, more C-style, syntax. The run-time system (in theory, identical, but not designed for sharing) of those languages provides the illusion of a multiprocessor machine on a single Unix-like system or a (local area) network of Unix-like machines. Chair V of the Computer Science Department of the University of Bonn is operating a laboratory for a practical course in parallel programming consisting of computing nodes running NetBSD/arm, normally used via PVM, MPI etc. We are considering offering SR and MPD for this, too. As the original language distributions were only targeted at a few commercial Unix systems, some porting effort is needed. However, some of the porting effort of our earlier SR port should be reusable. The integrated POSIX threads support of NetBSD-2.0 and later allows us to use library primitives provided for NetBSD's pthread system to implement the primitives needed by the SR run-time system, thus implementing 13 target CPUs at once and automatically making use of SMP on VAX, Alpha, PowerPC, Sparc, 32-bit Intel and 64 bit AMD CPUs. We'll present some methods used for the implementation and compare some performance values to the traditional implementation.<|reference_end|>
arxiv
@article{souvatzis2005a, title={A Machine-Independent port of the MPD language run time system to NetBSD}, author={Ignatios Souvatzis}, journal={Christian Tschudin et al. (Eds.): Proceedings of the Fourth European BSD Conference, 2005 Basel, Switzerland}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511094}, primaryClass={cs.DC cs.PL} }
souvatzis2005a
arxiv-673561
cs/0511095
Carbon Copying Onto Dirty Paper
<|reference_start|>Carbon Copying Onto Dirty Paper: A generalization of the problem of writing on dirty paper is considered in which one transmitter sends a common message to multiple receivers. Each receiver experiences on its link an additive interference (in addition to the additive noise), which is known noncausally to the transmitter but not to any of the receivers. Applications range from wireless multi-antenna multicasting to robust dirty paper coding. We develop results for memoryless channels in Gaussian and binary special cases. In most cases, we observe that the availability of side information at the transmitter increases capacity relative to systems without such side information, and that the lack of side information at the receivers decreases capacity relative to systems with such side information. For the noiseless binary case, we establish the capacity when there are two receivers. When there are many receivers, we show that the transmitter side information provides a vanishingly small benefit. When the interference is large and independent across the users, we show that time sharing is optimal. For the Gaussian case we present a coding scheme and establish its optimality in the high signal-to-interference-plus-noise limit when there are two receivers. When the interference is large and independent across users we show that time-sharing is again optimal. Connections to the problem of robust dirty paper coding are also discussed.<|reference_end|>
arxiv
@article{khisti2005carbon, title={Carbon Copying Onto Dirty Paper}, author={Ashish Khisti, Uri Erez, Amos Lapidoth, Gregory Wornell}, journal={arXiv preprint arXiv:cs/0511095}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511095}, primaryClass={cs.IT math.IT} }
khisti2005carbon
arxiv-673562
cs/0511096
A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources
<|reference_start|>A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources: The capacity region of the multiple access channel with arbitrarily correlated sources remains an open problem. Cover, El Gamal and Salehi gave an achievable region in the form of single-letter entropy and mutual information expressions, without a single-letter converse. Cover, El Gamal and Salehi also gave a converse in terms of some n-letter mutual informations, which are incomputable. In this paper, we derive an upper bound for the sum rate of this channel in a single-letter expression by using spectrum analysis. The incomputability of the sum rate of Cover, El Gamal and Salehi scheme comes from the difficulty of characterizing the possible joint distributions for the n-letter channel inputs. Here we introduce a new data processing inequality, which leads to a single-letter necessary condition for these possible joint distributions. We develop a single-letter upper bound for the sum rate by using this single-letter necessary condition on the possible joint distributions.<|reference_end|>
arxiv
@article{kang2005a, title={A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources}, author={W. Kang and S. Ulukus}, journal={arXiv preprint arXiv:cs/0511096}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511096}, primaryClass={cs.IT math.IT} }
kang2005a
arxiv-673563
cs/0511097
Modularizing the Elimination of r=0 in Kleene Algebra
<|reference_start|>Modularizing the Elimination of r=0 in Kleene Algebra: Given a universal Horn formula of Kleene algebra with hypotheses of the form r = 0, it is already known that we can efficiently construct an equation which is valid if and only if the Horn formula is valid. This is an example of "elimination of hypotheses", which is useful because the equational theory of Kleene algebra is decidable while the universal Horn theory is not. We show that hypotheses of the form r = 0 can still be eliminated in the presence of other hypotheses. This lets us extend any technique for eliminating hypotheses to include hypotheses of the form r = 0.<|reference_end|>
arxiv
@article{hardin2005modularizing, title={Modularizing the Elimination of r=0 in Kleene Algebra}, author={Christopher Hardin}, journal={Logical Methods in Computer Science, Volume 1, Issue 3 (December 21, 2005) lmcs:2264}, year={2005}, doi={10.2168/LMCS-1(3:4)2005}, archivePrefix={arXiv}, eprint={cs/0511097}, primaryClass={cs.LO} }
hardin2005modularizing
arxiv-673564
cs/0511098
Information and Stock Prices: A Simple Introduction
<|reference_start|>Information and Stock Prices: A Simple Introduction: This article summarizes recent research in financial economics about why information, such as earnings announcements, moves stock prices. The article does not presume any prior exposure to finance beyond what you might read in newspapers.<|reference_end|>
arxiv
@article{yee2005information, title={Information and Stock Prices: A Simple Introduction}, author={Kenton K. Yee}, journal={arXiv preprint arXiv:cs/0511098}, year={2005}, number={http://www.columbia.edu/~kky2001/}, archivePrefix={arXiv}, eprint={cs/0511098}, primaryClass={cs.CY cs.IT math.IT nlin.AO physics.soc-ph} }
yee2005information
arxiv-673565
cs/0511099
Symmetric Boolean Function with Maximum Algebraic Immunity on Odd Number of Variables
<|reference_start|>Symmetric Boolean Function with Maximum Algebraic Immunity on Odd Number of Variables: To resist algebraic attack, a Boolean function should possess good algebraic immunity (AI). Several papers constructed symmetric functions with the maximum algebraic immunity $\lceil \frac{n}{2}\rceil $. In this correspondence we prove that for each odd $n$, there is exactly one trivial balanced $n$-variable symmetric Boolean function achieving the algebraic immunity $\lceil \frac{n}{2}\rceil $. And we also obtain a necessary condition for the algebraic normal form of a symmetric Boolean function with maximum algebraic immunity.<|reference_end|>
arxiv
@article{li2005symmetric, title={Symmetric Boolean Function with Maximum Algebraic Immunity on Odd Number of Variables}, author={Na Li and Wen-feng Qi}, journal={arXiv preprint arXiv:cs/0511099}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511099}, primaryClass={cs.CR} }
li2005symmetric
arxiv-673566
cs/0511100
Density Evolution, Thresholds and the Stability Condition for Non-binary LDPC Codes
<|reference_start|>Density Evolution, Thresholds and the Stability Condition for Non-binary LDPC Codes: We derive the density evolution equations for non-binary low-density parity-check (LDPC) ensembles when transmission takes place over the binary erasure channel. We introduce ensembles defined with respect to the general linear group over the binary field. For these ensembles the density evolution equations can be written compactly. The density evolution for the general linear group helps us in understanding the density evolution for codes defined with respect to finite fields. We compute thresholds for different alphabet sizes for various LDPC ensembles. Surprisingly, the threshold is not a monotonic function of the alphabet size. We state the stability condition for non-binary LDPC ensembles over any binary memoryless symmetric channel. We also give upper bounds on the MAP thresholds for various non-binary ensembles based on EXIT curves and the area theorem.<|reference_end|>
arxiv
@article{rathi2005density, title={Density Evolution, Thresholds and the Stability Condition for Non-binary LDPC Codes}, author={Vishwambhar Rathi, Rudiger Urbanke}, journal={arXiv preprint arXiv:cs/0511100}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511100}, primaryClass={cs.IT math.IT} }
rathi2005density
arxiv-673567
cs/0511101
Chinese Internet AS-level Topology
<|reference_start|>Chinese Internet AS-level Topology: We present the first complete measurement of the Chinese Internet topology at the autonomous systems (AS) level based on traceroute data probed from servers of major ISPs in mainland China. We show that both the Chinese Internet AS graph and the global Internet AS graph can be accurately reproduced by the Positive-Feedback Preference (PFP) model with the same parameters. This result suggests that the Chinese Internet preserves well the topological characteristics of the global Internet. This is the first demonstration of the Internet's topological fractality, or self-similarity, performed at the level of topology evolution modeling.<|reference_end|>
arxiv
@article{zhou2005chinese, title={Chinese Internet AS-level Topology}, author={Shi Zhou, Guo-Qiang Zhang and Guo-Qing Zhang}, journal={IET Communications, vol.1, no.2, pp.209-214, 2007}, year={2005}, doi={10.1049/iet-com:20060518}, archivePrefix={arXiv}, eprint={cs/0511101}, primaryClass={cs.NI} }
zhou2005chinese
arxiv-673568
cs/0511102
Evaluating Mobility Pattern Space Routing for DTNs
<|reference_start|>Evaluating Mobility Pattern Space Routing for DTNs: Because a delay tolerant network (DTN) can often be partitioned, the problem of routing is very challenging. However, routing benefits considerably if one can take advantage of knowledge concerning node mobility. This paper addresses this problem with a generic algorithm based on the use of a high-dimensional Euclidean space, that we call MobySpace, constructed upon nodes' mobility patterns. We provide here an analysis and a large-scale evaluation of this routing scheme in the context of ambient networking by replaying real mobility traces. The specific MobySpace evaluated is based on the frequency with which nodes visit each possible location. We show that the MobySpace can achieve good performance compared to that of the other algorithms we implemented, especially when we perform routing on the nodes that have a high connection time. We determine that the degree of homogeneity of mobility patterns of nodes has a high impact on routing. Finally, we study the ability of nodes to learn their own mobility patterns.<|reference_end|>
arxiv
@article{leguay2005evaluating, title={Evaluating Mobility Pattern Space Routing for DTNs}, author={Jeremie Leguay, Timur Friedman, Vania Conan}, journal={arXiv preprint arXiv:cs/0511102}, year={2005}, doi={10.1109/INFOCOM.2006.299}, archivePrefix={arXiv}, eprint={cs/0511102}, primaryClass={cs.NI} }
leguay2005evaluating
arxiv-673569
cs/0511103
An Infeasibility Result for the Multiterminal Source-Coding Problem
<|reference_start|>An Infeasibility Result for the Multiterminal Source-Coding Problem: We prove a new outer bound on the rate-distortion region for the multiterminal source-coding problem. This bound subsumes the best outer bound in the literature and improves upon it strictly in some cases. The improved bound enables us to obtain a new, conclusive result for the binary erasure version of the "CEO problem." The bound recovers many of the converse results that have been established for special cases of the problem, including the recent one for the Gaussian version of the CEO problem.<|reference_end|>
arxiv
@article{wagner2005an, title={An Infeasibility Result for the Multiterminal Source-Coding Problem}, author={Aaron B. Wagner and Venkat Anantharam}, journal={arXiv preprint arXiv:cs/0511103}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511103}, primaryClass={cs.IT math.IT} }
wagner2005an
arxiv-673570
cs/0511104
Channel Model and Upper Bound on the Information Capacity of the Fiber Optical Communication Channel Based on the Effects of XPM Induced Nonlinearity
<|reference_start|>Channel Model and Upper Bound on the Information Capacity of the Fiber Optical Communication Channel Based on the Effects of XPM Induced Nonlinearity: An upper bound to the information capacity of a wavelength-division multiplexed optical fiber communication system is derived in a model incorporating the nonlinear propagation effects of cross-phase modulation (XPM). This work is based on the paper by Mitra et al., finding lower bounds to the channel capacity, in which physical models for propagation are used to calculate statistical properties of the conditional probability distribution relating input and output in a single WDM channel. In this paper we present a tractable channel model incorporating the effects of cross phase modulation. Using this model we find an upper bound to the information capacity of the fiber optical communication channel at high SNR. The results provide physical insight into the manner in which nonlinearities degrade the information capacity.<|reference_end|>
arxiv
@article{kakavand2005channel, title={Channel Model and Upper Bound on the Information Capacity of the Fiber Optical Communication Channel Based on the Effects of XPM Induced Nonlinearity}, author={Hossein Kakavand}, journal={arXiv preprint arXiv:cs/0511104}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511104}, primaryClass={cs.IT math.IT} }
kakavand2005channel
arxiv-673571
cs/0511105
The Signed Distance Function: A New Tool for Binary Classification
<|reference_start|>The Signed Distance Function: A New Tool for Binary Classification: From a geometric perspective, most nonlinear binary classification algorithms, including state-of-the-art versions of Support Vector Machine (SVM) and Radial Basis Function Network (RBFN) classifiers, are based on the idea of reconstructing indicator functions. We propose instead to use reconstruction of the signed distance function (SDF) as a basis for binary classification. We discuss properties of the signed distance function that can be exploited in classification algorithms. We develop simple versions of such classifiers and test them on several linear and nonlinear problems. On linear tests, accuracy of the new algorithm exceeds that of standard SVM methods, with an average of 50% fewer misclassifications. Performance of the new methods also matches or exceeds that of standard methods on several nonlinear problems, including classification of benchmark diagnostic micro-array data sets.<|reference_end|>
arxiv
@article{boczko2005the, title={The Signed Distance Function: A New Tool for Binary Classification}, author={Erik M. Boczko, Todd R. Young}, journal={arXiv preprint arXiv:cs/0511105}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511105}, primaryClass={cs.LG cs.CG} }
boczko2005the
arxiv-673572
cs/0511106
Benefits of InterSite Pre-Processing and Clustering Methods in E-Commerce Domain
<|reference_start|>Benefits of InterSite Pre-Processing and Clustering Methods in E-Commerce Domain: This paper presents our preprocessing and clustering analysis of the clickstream dataset proposed for the ECML/PKDD 2005 Discovery Challenge. The main contributions of this article are twofold. First, after presenting the clickstream dataset, we show how we build a rich data warehouse based on advanced preprocessing. We take into account the intersite aspects of the given e-commerce domain, which offers an interesting data structuration. A preliminary statistical analysis based on time-period clickstreams is given, emphasizing the importance of intersite user visits in such a context. Second, we describe our crossed-clustering method, which is applied to data generated from our data warehouse. Our preliminary results are interesting and promising, illustrating the benefits of our WUM methods, even if more investigations are needed on the same dataset.<|reference_end|>
arxiv
@article{chelcea2005benefits, title={Benefits of InterSite Pre-Processing and Clustering Methods in E-Commerce Domain}, author={Sergiu Theodor Chelcea (INRIA Rocquencourt / INRIA Sophia Antipolis), Alzennyr Da Silva (INRIA Rocquencourt / INRIA Sophia Antipolis), Yves Lechevallier (INRIA Rocquencourt / INRIA Sophia Antipolis), Doru Tanasa (INRIA Rocquencourt / INRIA Sophia Antipolis), Brigitte Trousse (INRIA Rocquencourt / INRIA Sophia Antipolis)}, journal={Dans Proceedings of the ECML/PKDD2005 Discovery Challenge, A Collaborative Effort in Knowledge Discovery from Databases}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511106}, primaryClass={cs.DB} }
chelcea2005benefits
arxiv-673573
cs/0511107
Phase transition in the assignment problem for random matrices
<|reference_start|>Phase transition in the assignment problem for random matrices: We report an analytic and numerical study of a phase transition in a P problem (the assignment problem) that separates two phases whose representatives are the simple matching problem (an easy P problem) and the traveling salesman problem (an NP-complete problem). Like other phase transitions found in combinatorial problems (K-satisfiability, number partitioning), this can help to understand the nature of the difficulties in solving NP problems and to find more accurate algorithms for them.<|reference_end|>
arxiv
@article{esteve2005phase, title={Phase transition in the assignment problem for random matrices}, author={J. G. Esteve and F. Falceto}, journal={arXiv preprint arXiv:cs/0511107}, year={2005}, doi={10.1209/epl/i2005-10296-6}, archivePrefix={arXiv}, eprint={cs/0511107}, primaryClass={cs.CC cond-mat.stat-mech} }
esteve2005phase
arxiv-673574
cs/0511108
Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs Modified Baum-Welch Algorithm
<|reference_start|>Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs Modified Baum-Welch Algorithm: We propose a new method for the estimation of parameters of hidden diffusion processes. Based on parametrization of the transition matrix, the Baum-Welch algorithm is improved. The algorithm is compared to the particle filter in application to the noisy periodic systems. It is shown that the modified Baum-Welch algorithm is capable of estimating the system parameters with better accuracy than particle filters.<|reference_end|>
arxiv
@article{benabdallah2005parameter, title={Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs. Modified Baum-Welch Algorithm}, author={A. Benabdallah and G. Radons}, journal={arXiv preprint arXiv:cs/0511108}, year={2005}, archivePrefix={arXiv}, eprint={cs/0511108}, primaryClass={cs.DS cs.LG} }
benabdallah2005parameter
arxiv-673575
cs/0512001
Which n-Venn diagrams can be drawn with convex k-gons?
<|reference_start|>Which n-Venn diagrams can be drawn with convex k-gons?: We establish a new lower bound for the number of sides required for the component curves of simple Venn diagrams made from polygons. Specifically, for any n-Venn diagram of convex k-gons, we prove that k >= (2^n - 2 - n) / (n (n-2)). In the process we prove that Venn diagrams of seven curves, simple or not, cannot be formed from triangles. We then give an example achieving the new lower bound of a (simple, symmetric) Venn diagram of seven quadrilaterals. Previously Grunbaum had constructed a 7-Venn diagram of non-convex 5-gons [``Venn Diagrams II'', Geombinatorics 2:25-31, 1992].<|reference_end|>
arxiv
@article{carroll2005which, title={Which n-Venn diagrams can be drawn with convex k-gons?}, author={Jeremy Carroll (1), Frank Ruskey (2), Mark Weston (2) ((1) HP Laboratories, Bristol, UK, (2) University of Victoria, Canada)}, journal={arXiv preprint arXiv:cs/0512001}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512001}, primaryClass={cs.CG} }
carroll2005which
arxiv-673576
cs/0512002
On Self-Regulated Swarms, Societal Memory, Speed and Dynamics
<|reference_start|>On Self-Regulated Swarms, Societal Memory, Speed and Dynamics: We propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the advantageous characteristics of Swarm Intelligence, such as the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy), with a simple Evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the above exploratory swarm population, speeding it up globally. In order to test its adaptive response and robustness, we have resorted to different dynamic multimodal complex functions as well as to Dynamic Optimization Control problems, measuring reaction speeds and performance. Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches. SRS's were able to demonstrate quick adaptive responses, while outperforming the results obtained by the other approaches. Additionally, some successful behaviors were found. One of the most interesting illustrates that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling up to ten times faster than the velocity of a single individual composing that precise swarm tracking system.<|reference_end|>
arxiv
@article{ramos2005on, title={On Self-Regulated Swarms, Societal Memory, Speed and Dynamics}, author={Vitorino Ramos and Carlos Fernandes and Agostinho C. Rosa}, journal={arXiv preprint arXiv:cs/0512002}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512002}, primaryClass={cs.NE cs.AI} }
ramos2005on
arxiv-673577
cs/0512003
Societal Implicit Memory and his Speed on Tracking Extrema over Dynamic Environments using Self-Regulatory Swarms
<|reference_start|>Societal Implicit Memory and his Speed on Tracking Extrema over Dynamic Environments using Self-Regulatory Swarms: In order to overcome difficult dynamic optimization and environment extrema tracking problems, we propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the advantageous characteristics of Swarm Intelligence, such as the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy), with a simple Evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the above exploratory swarm population, speeding it up globally. In order to test its adaptive response and robustness, we have resorted to different dynamic multimodal complex functions as well as to Dynamic Optimization Control problems, measuring reaction speeds and performance. Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches. SRS's were able to demonstrate quick adaptive responses, while outperforming the results obtained by the other approaches. Additionally, some successful behaviors were found. One of the most interesting illustrates that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling up to ten times faster than the velocity of a single individual composing that precise swarm tracking system.<|reference_end|>
arxiv
@article{ramos2005societal, title={Societal Implicit Memory and his Speed on Tracking Extrema over Dynamic Environments using Self-Regulatory Swarms}, author={Vitorino Ramos and Carlos Fernandes and Agostinho C. Rosa}, journal={arXiv preprint arXiv:cs/0512003}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512003}, primaryClass={cs.MA cs.AI} }
ramos2005societal
arxiv-673578
cs/0512004
Self-Regulated Artificial Ant Colonies on Digital Image Habitats
<|reference_start|>Self-Regulated Artificial Ant Colonies on Digital Image Habitats: Artificial life models, swarm intelligence and evolutionary computation algorithms are usually built on fixed-size populations. Some studies indicate, however, that varying the population size can increase the adaptability of these systems and their capability to react to changing environments. In this paper we present an extended model of an artificial ant colony system designed to evolve on digital image habitats. We will show that the present swarm can adapt the size of the population according to the type of image on which it is evolving, reacting faster to changing images and thus converging more rapidly to the new desired regions by regulating the number of its image-foraging agents. Finally, we will show evidence that the model can be associated with the Mathematical Morphology Watershed algorithm to improve the segmentation of digital grey-scale images. KEYWORDS: Swarm Intelligence, Perception and Image Processing, Pattern Recognition, Mathematical Morphology, Social Cognitive Maps, Social Foraging, Self-Organization, Distributed Search.<|reference_end|>
arxiv
@article{fernandes2005self-regulated, title={Self-Regulated Artificial Ant Colonies on Digital Image Habitats}, author={Carlos Fernandes and Vitorino Ramos and Agostinho C. Rosa}, journal={in International Journal of Lateral Computing, IJLC, vol. 2 (1), pp. 1-8, ISSN 0973-208X, Dec. 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512004}, primaryClass={cs.MA cs.AI} }
fernandes2005self-regulated
arxiv-673579
cs/0512005
On Ants, Bacteria and Dynamic Environments
<|reference_start|>On Ants, Bacteria and Dynamic Environments: Wasps, bees, ants and termites all make effective use of their environment and resources by displaying collective swarm intelligence. Termite colonies - for instance - build nests with a complexity far beyond the comprehension of the individual termite, while ant colonies dynamically allocate labor to various vital tasks such as foraging or defense without any central decision-making ability. Recent research suggests that microbial life can be even richer: highly social, intricately networked, and teeming with interactions, as found in bacteria. What is striking about these observations is that both ant colonies and bacteria rely on similar natural mechanisms, based on Stigmergy and Self-Organization, that give rise to coherent and sophisticated patterns of global behaviour. Keeping in mind the above characteristics, we will present a simple model to tackle the collective adaptation of a social swarm based on real ant colony behaviors (SSA algorithm) for tracking extrema in dynamic environments and highly multimodal complex functions described in the well-known De Jong test suite. Then, for the purpose of comparison, a recent model of artificial bacterial foraging (BFOA algorithm) based on similar stigmergic features is described and analyzed. Final results indicate that the SSA collective intelligence is able to cope with and quickly adapt to unforeseen situations, even when, over the same cooperative foraging period, the community is requested to deal with two different and contradictory purposes, while outperforming BFOA in adaptive speed.<|reference_end|>
arxiv
@article{ramos2005on, title={On Ants, Bacteria and Dynamic Environments}, author={Vitorino Ramos, Carlos Fernandes and Agostinho C. Rosa}, journal={arXiv preprint arXiv:cs/0512005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512005}, primaryClass={cs.DC} }
ramos2005on
arxiv-673580
cs/0512006
Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity
<|reference_start|>Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity: The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with {\em bounded complexity}, per information bit, of encoding and decoding. It also introduces symmetry properties which play a central role in the construction of capacity-achieving ensembles for the BEC with bounded complexity. The results here improve on the tradeoff between performance and complexity provided by previous constructions of capacity-achieving ensembles of codes defined on graphs. The superiority of ARA codes with moderate to large block length is exemplified by computer simulations which compare their performance with those of previously reported capacity-achieving ensembles of LDPC and IRA codes. The ARA codes also have the advantage of being systematic.<|reference_end|>
arxiv
@article{pfister2005capacity-achieving, title={Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity}, author={Henry D. Pfister and Igal Sason}, journal={IEEE Transactions on Information Theory, Vol. 53 (6), pp. 2088-2115, June 2007}, year={2005}, doi={10.1109/TIT.2007.896873}, archivePrefix={arXiv}, eprint={cs/0512006}, primaryClass={cs.IT math.IT} }
pfister2005capacity-achieving
arxiv-673581
cs/0512007
Entangled messages
<|reference_start|>Entangled messages: It is sometimes necessary to send copies of the same email to different parties, but it is impossible to ensure that if one party reads the message the other parties will be bound to read it. We propose an entanglement-based scheme where if one party reads the message the other party will be bound to read it simultaneously.<|reference_end|>
arxiv
@article{mitra2005entangled, title={Entangled messages}, author={Arindam Mitra}, journal={arXiv preprint arXiv:cs/0512007}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512007}, primaryClass={cs.CR cs.IR} }
mitra2005entangled
arxiv-673582
cs/0512008
Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity
<|reference_start|>Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity: This paper is motivated by questions such as P vs. NP and other questions in Boolean complexity theory. We describe an approach to attacking such questions with cohomology, and we show that using Grothendieck topologies and other ideas from the Grothendieck school gives new hope for such an attack. We focus on circuit depth complexity, and consider only finite topological spaces or Grothendieck topologies based on finite categories; as such, we do not use algebraic geometry or manifolds. Given two sheaves on a Grothendieck topology, their "cohomological complexity" is the sum of the dimensions of their Ext groups. We seek to model the depth complexity of Boolean functions by the cohomological complexity of sheaves on a Grothendieck topology. We propose that the logical AND of two Boolean functions will have its corresponding cohomological complexity bounded in terms of those of the two functions using ``virtual zero extensions.'' We propose that the logical negation of a function will have its corresponding cohomological complexity equal to that of the original function using duality theory. We explain these approaches and show that they are stable under pullbacks and base change. It is the subject of ongoing work to achieve AND and negation bounds simultaneously in a way that yields an interesting depth lower bound.<|reference_end|>
arxiv
@article{friedman2005cohomology, title={Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity}, author={Joel Friedman}, journal={arXiv preprint arXiv:cs/0512008}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512008}, primaryClass={cs.CC math.AG} }
friedman2005cohomology
arxiv-673583
cs/0512009
Almost periodic functions, constructively
<|reference_start|>Almost periodic functions, constructively: The almost periodic functions form a natural example of a non-separable normed space. As such, it has been a challenge for constructive mathematicians to find a natural treatment of them. Here we present a simple proof of Bohr's fundamental theorem for almost periodic functions which we then generalize to almost periodic functions on general topological groups.<|reference_end|>
arxiv
@article{spitters2005almost, title={Almost periodic functions, constructively}, author={Bas Spitters}, journal={Logical Methods in Computer Science, Volume 1, Issue 3 (December 20, 2005) lmcs:2263}, year={2005}, doi={10.2168/LMCS-1(3:3)2005}, archivePrefix={arXiv}, eprint={cs/0512009}, primaryClass={cs.LO} }
spitters2005almost
arxiv-673584
cs/0512010
A geometry of information, I: Nerves, posets and differential forms
<|reference_start|>A geometry of information, I: Nerves, posets and differential forms: The main theme of this workshop (Dagstuhl seminar 04351) is `Spatial Representation: Continuous vs. Discrete'. Spatial representation has two contrasting but interacting aspects: (i) representation of spaces and (ii) representation by spaces. In this paper, we will examine two aspects that are common to both interpretations of the theme, namely nerve constructions and refinement. Representations change, data changes, spaces change. We will examine the possibility of a `differential geometry' of spatial representations of both types, and in the sequel give an algebra of differential forms that has the potential to handle the dynamical aspect of such a geometry. We will discuss briefly a conjectured class of spaces, generalising the Cantor set, which would seem ideal as a test-bed for the set of tools we are developing.<|reference_end|>
arxiv
@article{gratus2005a, title={A geometry of information, I: Nerves, posets and differential forms}, author={Jonathan Gratus and Timothy Porter}, journal={arXiv preprint arXiv:cs/0512010}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512010}, primaryClass={cs.AI cs.GR} }
gratus2005a
arxiv-673585
cs/0512011
Understanding the internet topology evolution dynamics
<|reference_start|>Understanding the internet topology evolution dynamics: The internet structure is extremely complex. The Positive-Feedback Preference (PFP) model is a recently introduced internet topology generator. The model uses two generic algorithms to replicate the evolution dynamics observed in the internet historic data. The phenomenological model was originally designed to match only two topology properties of the internet, i.e. the rich-club connectivity and the exact form of the degree distribution. However, numerical evaluation has shown that the PFP model accurately reproduces a large set of other nontrivial characteristics as well. This paper aims to investigate why and how this generative model captures so many diverse properties of the internet. Based on comprehensive simulation results, the paper presents a detailed analysis of the exact origin of each of the topology properties produced by the model. This work reveals how network evolution mechanisms control the obtained topology properties, and it also provides insights on correlations between various structural characteristics of complex networks.<|reference_end|>
arxiv
@article{zhou2005understanding, title={Understanding the internet topology evolution dynamics}, author={Shi Zhou}, journal={Physical Review E, vol. 74, no. 016124, July 2006}, year={2005}, doi={10.1103/PhysRevE.74.016124}, archivePrefix={arXiv}, eprint={cs/0512011}, primaryClass={cs.NI} }
zhou2005understanding
arxiv-673586
cs/0512012
Extending the theory of Owicki and Gries with a logic of progress
<|reference_start|>Extending the theory of Owicki and Gries with a logic of progress: This paper describes a logic of progress for concurrent programs. The logic is based on that of UNITY, molded to fit a sequential programming model. Integration of the two is achieved by using auxiliary variables in a systematic way that incorporates program counters into the program text. The rules for progress in UNITY are then modified to suit this new system. This modification is however subtle enough to allow the theory of Owicki and Gries to be used without change.<|reference_end|>
arxiv
@article{dongol2005extending, title={Extending the theory of Owicki and Gries with a logic of progress}, author={Brijesh Dongol and Doug Goldson}, journal={Logical Methods in Computer Science, Volume 2, Issue 1 (March 10, 2006) lmcs:2260}, year={2005}, doi={10.2168/LMCS-2(1:6)2006}, archivePrefix={arXiv}, eprint={cs/0512012}, primaryClass={cs.LO} }
dongol2005extending
arxiv-673587
cs/0512013
The Water-Filling Game in Fading Multiple Access Channels
<|reference_start|>The Water-Filling Game in Fading Multiple Access Channels: We adopt a game theoretic approach for the design and analysis of distributed resource allocation algorithms in fading multiple access channels. The users are assumed to be selfish, rational, and limited by average power constraints. We show that the sum-rate optimal point on the boundary of the multiple-access channel capacity region is the unique Nash Equilibrium of the corresponding water-filling game. This result sheds new light on the opportunistic communication principle and argues for the fairness of the sum-rate optimal point, at least from a game theoretic perspective. The base-station is then introduced as a player interested in maximizing a weighted sum of the individual rates. We propose a Stackelberg formulation in which the base-station is the designated game leader. In this set-up, the base-station announces first its strategy defined as the decoding order of the different users, in the successive cancellation receiver, as a function of the channel state. In the second stage, the users compete conditioned on this particular decoding strategy. We show that this formulation allows for achieving all the corner points of the capacity region, in addition to the sum-rate optimal point. On the negative side, we prove the non-existence of a base-station strategy in this formulation that achieves the rest of the boundary points. To overcome this limitation, we present a repeated game approach which achieves the capacity region of the fading multiple access channel. Finally, we extend our study to vector channels, highlighting interesting differences between this scenario and the scalar channel case.<|reference_end|>
arxiv
@article{lai2005the, title={The Water-Filling Game in Fading Multiple Access Channels}, author={Lifeng Lai and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0512013}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512013}, primaryClass={cs.IT math.IT} }
lai2005the
arxiv-673588
cs/0512014
A Game-Theoretic Approach to Energy-Efficient Power Control in Multi-Carrier CDMA Systems
<|reference_start|>A Game-Theoretic Approach to Energy-Efficient Power Control in Multi-Carrier CDMA Systems: A game-theoretic model for studying power control in multi-carrier CDMA systems is proposed. Power control is modeled as a non-cooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per Joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multi-dimensional nature of users' strategies and the non-quasiconcavity of the utility function make the multi-carrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error (MMSE) detector, a user's utility is maximized when the user transmits only on its "best" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio (SINR) at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is also characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared to a single-carrier system and also to a multi-carrier system in which each user maximizes its utility over each carrier independently.<|reference_end|>
arxiv
@article{meshkati2005a, title={A Game-Theoretic Approach to Energy-Efficient Power Control in Multi-Carrier CDMA Systems}, author={Farhad Meshkati and Mung Chiang and H. Vincent Poor and Stuart C. Schwartz}, journal={arXiv preprint arXiv:cs/0512014}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512014}, primaryClass={cs.IT math.IT} }
meshkati2005a
arxiv-673589
cs/0512015
Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources
<|reference_start|>Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources: The problem of joint universal source coding and identification is considered in the setting of fixed-rate lossy coding of continuous-alphabet memoryless sources. For a wide class of bounded distortion measures, it is shown that any compactly parametrized family of $\mathbb{R}^d$-valued i.i.d. sources with absolutely continuous distributions satisfying appropriate smoothness and Vapnik--Chervonenkis learnability conditions, admits a joint scheme for universal lossy block coding and parameter estimation, such that when the block length $n$ tends to infinity, the overhead per-letter rate and the distortion redundancies converge to zero as $O(n^{-1}\log n)$ and $O(\sqrt{n^{-1}\log n})$, respectively. Moreover, the active source can be determined at the decoder up to a ball of radius $O(\sqrt{n^{-1} \log n})$ in variational distance, asymptotically almost surely. The system has finite memory length equal to the block length, and can be thought of as blockwise application of a time-invariant nonlinear filter with initial conditions determined from the previous block. Comparisons are presented with several existing schemes for universal vector quantization, which do not include parameter estimation explicitly, and an extension to unbounded distortion measures is outlined. Finally, finite mixture classes and exponential families are given as explicit examples of parametric sources admitting joint universal compression and modeling schemes of the kind studied here.<|reference_end|>
arxiv
@article{raginsky2005joint, title={Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources}, author={Maxim Raginsky}, journal={arXiv preprint arXiv:cs/0512015}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512015}, primaryClass={cs.IT cs.LG math.IT} }
raginsky2005joint
arxiv-673590
cs/0512016
A linear-time algorithm for finding the longest segment which scores above a given threshold
<|reference_start|>A linear-time algorithm for finding the longest segment which scores above a given threshold: This paper describes a linear-time algorithm that finds the longest stretch in a sequence of real numbers (``scores'') in which the sum exceeds an input parameter. The algorithm also solves the problem of finding the longest interval in which the average of the scores is above a fixed threshold. The problem originates from molecular sequence analysis: for instance, the algorithm can be employed to identify long GC-rich regions in DNA sequences. The algorithm can also be used to trim low-quality ends of shotgun sequences in a preprocessing step of whole-genome assembly.<|reference_end|>
arxiv
@article{csűrös2005a, title={A linear-time algorithm for finding the longest segment which scores above a given threshold}, author={Mikl\'os Cs\H{u}r\"os}, journal={arXiv preprint arXiv:cs/0512016}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512016}, primaryClass={cs.DS cs.CE} }
csűrös2005a
arxiv-673591
cs/0512017
Approximately Universal Codes over Slow Fading Channels
<|reference_start|>Approximately Universal Codes over Slow Fading Channels: Performance of reliable communication over a coherent slow fading channel at high SNR is succinctly captured as a fundamental tradeoff between diversity and multiplexing gains. We study the problem of designing codes that optimally tradeoff the diversity and multiplexing gains. Our main contribution is a precise characterization of codes that are universally tradeoff-optimal, i.e., they optimally tradeoff the diversity and multiplexing gains for every statistical characterization of the fading channel. We denote this characterization as one of approximate universality where the approximation is in the connection between error probability and outage capacity with diversity and multiplexing gains, respectively. The characterization of approximate universality is then used to construct new coding schemes as well as to show optimality of several schemes proposed in the space-time coding literature.<|reference_end|>
arxiv
@article{tavildar2005approximately, title={Approximately Universal Codes over Slow Fading Channels}, author={Saurabha Tavildar and Pramod Viswanath}, journal={arXiv preprint arXiv:cs/0512017}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512017}, primaryClass={cs.IT math.IT} }
tavildar2005approximately
arxiv-673592
cs/0512018
DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation framework
<|reference_start|>DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation framework: In a Spiking Neural Networks (SNN), spike emissions are sparsely and irregularly distributed both in time and in the network architecture. Since a current feature of SNNs is a low average activity, efficient implementations of SNNs are usually based on an Event-Driven Simulation (EDS). On the other hand, simulations of large scale neural networks can take advantage of distributing the neurons on a set of processors (either workstation cluster or parallel computer). This article presents DAMNED, a large scale SNN simulation framework able to gather the benefits of EDS and parallel computing. Two levels of parallelism are combined: Distributed mapping of the neural topology, at the network level, and local multithreaded allocation of resources for simultaneous processing of events, at the neuron level. Based on the causality of events, a distributed solution is proposed for solving the complex problem of scheduling without synchronization barrier.<|reference_end|>
arxiv
@article{mouraud2005damned:, title={DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation framework}, author={Anthony Mouraud (GRIMAAG, ISC), Didier Puzenat (GRIMAAG), H\'el\`ene Paugam-Moisy (ISC)}, journal={arXiv preprint arXiv:cs/0512018}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512018}, primaryClass={cs.NE cs.LG} }
mouraud2005damned:
arxiv-673593
cs/0512019
Amazing geometry of genetic space or are genetic algorithms convergent?
<|reference_start|>Amazing geometry of genetic space or are genetic algorithms convergent?: There is no proof yet of convergence of Genetic Algorithms. We do not supply it too. Instead, we present some thoughts and arguments to convince the Reader, that Genetic Algorithms are essentially bound for success. For this purpose, we consider only the crossover operators, single- or multiple-point, together with selection procedure. We also give a proof that the soft selection is superior to other selection schemes.<|reference_end|>
arxiv
@article{gutowski2005amazing, title={Amazing geometry of genetic space or are genetic algorithms convergent?}, author={Marek W. Gutowski}, journal={arXiv preprint arXiv:cs/0512019}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512019}, primaryClass={cs.NE cs.DM cs.SE} }
gutowski2005amazing
arxiv-673594
cs/0512020
A Practical Approach to Joint Network-Source Coding
<|reference_start|>A Practical Approach to Joint Network-Source Coding: We are interested in how to best communicate a real valued source to a number of destinations (sinks) over a network with capacity constraints in a collective fidelity metric over all the sinks, a problem which we call joint network-source coding. It is demonstrated that multiple description codes along with proper diversity routing provide a powerful solution to joint network-source coding. A systematic optimization approach is proposed. It consists of optimizing the network routing given a multiple description code and designing optimal multiple description code for the corresponding optimized routes.<|reference_end|>
arxiv
@article{sarshar2005a, title={A Practical Approach to Joint Network-Source Coding}, author={Nima Sarshar and Xiaolin Wu}, journal={arXiv preprint arXiv:cs/0512020}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512020}, primaryClass={cs.IT math.IT} }
sarshar2005a
arxiv-673595
cs/0512021
The Poster Session of SSS 2005
<|reference_start|>The Poster Session of SSS 2005: This technical report documents the poster session of SSS 2005, the Symposium on Self-Stabilizing Systems published by Springer as LNCS volume 3764. The poster session included five presentations. Two of these presentations are summarized in brief abstracts contained in this technical report.<|reference_end|>
arxiv
@article{hamid2005the, title={The Poster Session of SSS 2005}, author={Brahim Hamid (1), Ted Herman (2), Morten Mjelde (3) ((1) LaBRI University of Bordeaux-1 France, (2) University of Iowa, (3) University in Bergen Norway)}, journal={arXiv preprint arXiv:cs/0512021}, year={2005}, number={TR-05-13}, archivePrefix={arXiv}, eprint={cs/0512021}, primaryClass={cs.DC cs.DS} }
hamid2005the
arxiv-673596
cs/0512022
Fat Tailed Distributions in Catastrophe Prediction
<|reference_start|>Fat Tailed Distributions in Catastrophe Prediction: This paper discusses the use of fat-tailed distributions in catastrophe prediction as opposed to the more common use of the Normal Distribution.<|reference_end|>
arxiv
@article{mello2005fat, title={Fat Tailed Distributions in Catastrophe Prediction}, author={Louis Mello}, journal={arXiv preprint arXiv:cs/0512022}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512022}, primaryClass={cs.OH} }
mello2005fat
arxiv-673597
cs/0512023
Perfect Space-Time Codes with Minimum and Non-Minimum Delay for Any Number of Antennas
<|reference_start|>Perfect Space-Time Codes with Minimum and Non-Minimum Delay for Any Number of Antennas: Perfect space-time codes were first introduced by Oggier et al. to be the space-time codes that have full rate, full diversity-gain, non-vanishing determinant for increasing spectral efficiency, uniform average transmitted energy per antenna and good shaping of the constellation. These defining conditions jointly correspond to optimality with respect to the Zheng-Tse D-MG tradeoff, independent of channel statistics, as well as to near optimality in maximizing mutual information. All the above traits endow the code with error performance that is currently unmatched. Yet perfect space-time codes have been constructed only for 2, 3, 4 and 6 transmit antennas. We construct minimum and non-minimum delay perfect codes for all channel dimensions.<|reference_end|>
arxiv
@article{elia2005perfect, title={Perfect Space-Time Codes with Minimum and Non-Minimum Delay for Any Number of Antennas}, author={Petros Elia, B. A. Sethuraman and P. Vijay Kumar}, journal={arXiv preprint arXiv:cs/0512023}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512023}, primaryClass={cs.IT math.IT} }
elia2005perfect
arxiv-673598
cs/0512024
A bound on Grassmannian codes
<|reference_start|>A bound on Grassmannian codes: We give a new asymptotic upper bound on the size of a code in the Grassmannian space. The bound is better than the upper bounds known previously in the entire range of distances except very large values.<|reference_end|>
arxiv
@article{barg2005a, title={A bound on Grassmannian codes}, author={Alexander Barg and Dmitry Nogin}, journal={Journal of Combinatorial Theory, Ser. A, vol.113,no.8, 2006, pp.1629-1635}, year={2005}, doi={10.1016/j.jcta.2006.03.025}, archivePrefix={arXiv}, eprint={cs/0512024}, primaryClass={cs.IT math.IT math.MG} }
barg2005a
arxiv-673599
cs/0512025
Spectral approach to linear programming bounds on codes
<|reference_start|>Spectral approach to linear programming bounds on codes: We give new proofs of asymptotic upper bounds of coding theory obtained within the frame of Delsarte's linear programming method. The proofs rely on the analysis of eigenvectors of some finite-dimensional operators related to orthogonal polynomials. The examples of the method considered in the paper include binary codes, binary constant-weight codes, spherical codes, and codes in the projective spaces.<|reference_end|>
arxiv
@article{barg2005spectral, title={Spectral approach to linear programming bounds on codes}, author={Alexander Barg and Dmitry Nogin}, journal={Problems of Information Transmission 42, 2, 2006, 77-89}, year={2005}, doi={10.1134/S0032946006020025}, archivePrefix={arXiv}, eprint={cs/0512025}, primaryClass={cs.IT math.CO math.IT} }
barg2005spectral
arxiv-673600
cs/0512026
Checking C++ Programs for Dimensional Consistency
<|reference_start|>Checking C++ Programs for Dimensional Consistency: I will present my implementation 'n-units' of physical units into C++ programs. It allows the compiler to check for dimensional consistency.<|reference_end|>
arxiv
@article{josopait2005checking, title={Checking C++ Programs for Dimensional Consistency}, author={I. Josopait}, journal={arXiv preprint arXiv:cs/0512026}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512026}, primaryClass={cs.PL} }
josopait2005checking