Columns (name: type, observed length range):
- corpus_id: string, 7-12 chars
- paper_id: string, 9-16 chars
- title: string, 1-261 chars
- abstract: string, 70-4.02k chars
- source: string, 1 distinct value (arxiv)
- bibtex: string, 208-20.9k chars
- citation_key: string, 6-100 chars
arxiv-674001
cs/0603088
Novel BCD Adders and Their Reversible Logic Implementation for IEEE 754r Format
<|reference_start|>Novel BCD Adders and Their Reversible Logic Implementation for IEEE 754r Format: IEEE 754r is the ongoing revision to the IEEE 754 floating point standard, and a major enhancement to the standard is the addition of a decimal format. This paper proposes two novel BCD adders, called carry skip and carry look-ahead BCD adders respectively. Furthermore, in recent years, reversible logic has emerged as a promising technology with applications in low power CMOS, quantum computing, nanotechnology, and optical computing. It is not possible to realize quantum computing without reversible logic. Thus, this paper also provides the reversible logic implementation of the conventional BCD adder as well as the proposed carry skip BCD adder using a recently proposed TSG gate. Furthermore, a new reversible gate called TS-3 is also proposed, and it is shown that the proposed reversible logic implementation of the BCD adders is much better than the recently proposed one in terms of the number of reversible gates used and garbage outputs produced. The reversible BCD circuits designed and proposed here form the basis of the decimal ALU of a primitive quantum CPU.<|reference_end|>
arxiv
@article{thapliyal2006novel, title={Novel BCD Adders and Their Reversible Logic Implementation for IEEE 754r Format}, author={Himanshu Thapliyal and Saurabh Kotiyal and M.B. Srinivas}, journal={arXiv preprint arXiv:cs/0603088}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603088}, primaryClass={cs.AR} }
thapliyal2006novel
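Note that several author fields in this dump separate names with commas, but BibTeX recognizes only the keyword ` and ` as the separator between authors; a comma-separated list is parsed as a single (malformed) name. A minimal illustration of the difference (the function name is ours):

```python
def split_authors(field: str) -> list[str]:
    """Split a BibTeX author field on ' and ', BibTeX's author separator.

    This is a deliberate simplification: real BibTeX name parsing also
    handles braces and 'Last, First' comma forms within a single name.
    """
    return [name.strip() for name in field.split(" and ")]

# A correctly separated field yields three authors; a comma-separated
# field is seen as one "author" by BibTeX-style splitting.
```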
arxiv-674002
cs/0603089
Convex Separation from Optimization via Heuristics
<|reference_start|>Convex Separation from Optimization via Heuristics: Let $K$ be a full-dimensional convex subset of $\mathbb{R}^n$. We describe a new polynomial-time Turing reduction from the weak separation problem for $K$ to the weak optimization problem for $K$ that is based on a geometric heuristic. We compare our reduction, which relies on analytic centers, with the standard, more general reduction.<|reference_end|>
arxiv
@article{ioannou2006convex, title={Convex Separation from Optimization via Heuristics}, author={Lawrence M. Ioannou and Benjamin C. Travaglione and Donny Cheung}, journal={arXiv preprint arXiv:cs/0603089}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603089}, primaryClass={cs.DS math.OC} }
ioannou2006convex
arxiv-674003
cs/0603090
Topological Grammars for Data Approximation
<|reference_start|>Topological Grammars for Data Approximation: A method of {\it topological grammars} is proposed for multidimensional data approximation. For data with complex topology we define a {\it principal cubic complex} of low dimension and given complexity that gives the best approximation for the dataset. This complex is a generalization of linear and non-linear principal manifolds and includes them as particular cases. The problem of optimal principal complex construction is transformed into a series of minimization problems for quadratic functionals. These quadratic functionals have a physically transparent interpretation in terms of elastic energy. For the energy computation, the whole complex is represented as a system of nodes and springs. Topologically, the principal complex is a product of one-dimensional continuums (represented by graphs), and the grammars describe how these continuums transform during the process of optimal complex construction. This factorization of the whole process onto one-dimensional transformations using minimization of quadratic energy functionals allow us to construct efficient algorithms.<|reference_end|>
arxiv
@article{gorban2006topological, title={Topological Grammars for Data Approximation}, author={A.N. Gorban and N.R. Sumner and A.Y. Zinovyev}, journal={Applied Mathematics Letters 20 (2007) 382--386}, year={2006}, doi={10.1016/j.aml.2006.04.022}, archivePrefix={arXiv}, eprint={cs/0603090}, primaryClass={cs.NE cs.LG} }
gorban2006topological
arxiv-674004
cs/0603091
A New Reversible TSG Gate and Its Application For Designing Efficient Adder Circuits
<|reference_start|>A New Reversible TSG Gate and Its Application For Designing Efficient Adder Circuits: In recent years, reversible logic has emerged as a promising technology with applications in low power CMOS, quantum computing, nanotechnology, and optical computing. The classical set of gates such as AND, OR, and EXOR are not reversible. This paper proposes a new 4 * 4 reversible gate called the TSG gate. The proposed gate is used to design efficient adder units. The most significant aspect of the proposed gate is that it can work singly as a reversible full adder, i.e., a reversible full adder can now be implemented with a single gate only. The proposed gate is then used to design reversible ripple carry and carry skip adders. It is demonstrated that the adder architectures designed using the proposed gate are much better and more optimized than their existing counterparts in the literature, in terms of the number of reversible gates and garbage outputs. Thus, this paper provides the initial threshold for building more complex systems which can execute more complicated operations using reversible logic.<|reference_end|>
arxiv
@article{thapliyal2006a, title={A New Reversible TSG Gate and Its Application For Designing Efficient Adder Circuits}, author={Himanshu Thapliyal and M.B. Srinivas}, journal={arXiv preprint arXiv:cs/0603091}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603091}, primaryClass={cs.AR} }
thapliyal2006a
arxiv-674005
cs/0603092
An Extension to DNA Based Fredkin Gate Circuits: Design of Reversible Sequential Circuits using Fredkin Gates
<|reference_start|>An Extension to DNA Based Fredkin Gate Circuits: Design of Reversible Sequential Circuits using Fredkin Gates: In recent years, reversible logic has emerged as a promising computing paradigm with applications in low power computing, quantum computing, nanotechnology, optical computing and DNA computing. The classical set of gates such as AND, OR, and EXOR are not reversible. Recently, it has been shown how to encode information in DNA and use DNA amplification to implement Fredkin gates. Furthermore, in the past, Fredkin gates have been constructed using DNA whose outputs are used as inputs for other Fredkin gates. Thus, it can be concluded that arbitrary circuits of Fredkin gates can be constructed using DNA. This paper provides the initial threshold for building more complex systems with reversible sequential circuits which can execute more complicated operations. The novelty of the paper lies in the reversible designs of sequential circuits using the Fredkin gate. Since the Fredkin gate has already been realized using DNA, it is expected that this work will initiate the building of complex systems using DNA. The reversible circuits designed here are highly optimized in terms of the number of gates and garbage outputs. The modularization approach, that is, synthesizing small circuits and thereafter using them to construct bigger circuits, is used for designing the optimal reversible sequential circuits.<|reference_end|>
arxiv
@article{thapliyal2006an, title={An Extension to DNA Based Fredkin Gate Circuits: Design of Reversible Sequential Circuits using Fredkin Gates}, author={Himanshu Thapliyal and M.B. Srinivas}, journal={arXiv preprint arXiv:cs/0603092}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603092}, primaryClass={cs.AR} }
thapliyal2006an
arxiv-674006
cs/0603093
About the domino problem in the hyperbolic plane from an algorithmic point of view
<|reference_start|>About the domino problem in the hyperbolic plane from an algorithmic point of view: In this paper, we prove that the general problem of tiling the hyperbolic plane with \`a la Wang tiles is undecidable.<|reference_end|>
arxiv
@article{margenstern2006about, title={About the domino problem in the hyperbolic plane from an algorithmic point of view}, author={Maurice Margenstern}, journal={Theoretical Informatics and Applications, 42(1), (2008), 21-36}, year={2006}, doi={10.1051/ita:2007045}, archivePrefix={arXiv}, eprint={cs/0603093}, primaryClass={cs.CG cs.DM} }
margenstern2006about
arxiv-674007
cs/0603094
On the Capacity Achieving Transmit Covariance Matrices of MIMO Correlated Rician Channels: A Large System Approach
<|reference_start|>On the Capacity Achieving Transmit Covariance Matrices of MIMO Correlated Rician Channels: A Large System Approach: We determine the capacity-achieving input covariance matrices for coherent block-fading correlated MIMO Rician channels. In contrast with the Rayleigh and uncorrelated Rician cases, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. Both the eigenvectors and eigenvalues have to be evaluated using numerical techniques. As the corresponding optimization algorithms are not very attractive, we evaluate the limit of the average mutual information when the numbers of transmit and receive antennas converge to infinity at the same rate. If the channel is semi-correlated, we propose an attractive optimization algorithm for the large system approximant, and establish some convergence results. Simulation results show that our approach provides reliable results even for a quite moderate number of transmit and receive antennas.<|reference_end|>
arxiv
@article{dumont2006on, title={On the Capacity Achieving Transmit Covariance Matrices of MIMO Correlated Rician Channels: A Large System Approach}, author={Julien Dumont and Philippe Loubaton and Samson Lasaulce}, journal={arXiv preprint arXiv:cs/0603094}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603094}, primaryClass={cs.IT math.IT} }
dumont2006on
arxiv-674008
cs/0603095
A Turbo Coding System for High Speed Communications
<|reference_start|>A Turbo Coding System for High Speed Communications: Conventional turbo codes (CTCs) usually employ block-oriented interleaving so that each block is separately encoded and decoded. As interleaving and de-interleaving are performed within a block, the message-passing process associated with an iterative decoder is limited to proceeding within the corresponding range. This paper presents a new turbo coding scheme that uses a special interleaver structure and a multiple-round early termination test involving both a sign check and a CRC code. The new interleaver structure is naturally suited for high speed parallel processing, and the resulting coding system offers new design options and tradeoffs that are not available to CTCs. In particular, it becomes possible for the decoder to employ an efficient inter-block collaborative decoding algorithm, passing the information obtained from blocks that have passed the termination test to other unproved blocks. It also becomes important to have a proper decoding schedule. The combined effect is improved performance and a reduction in the average decoding delay (and hence the required computing power). A memory (storage) management mechanism is included as a critical part of the decoder so as to provide an additional design tradeoff between performance and memory size. It is shown that the latter has a modular-like effect in that additional memory units render enhanced performance due not only to fewer forced early terminations but also to possible increases of the interleaving depth. Depending on the decoding schedule, the degree of parallelism and other decoding resources available, the proposed scheme admits a variety of decoder architectures that meet a large range of throughput and performance demands.<|reference_end|>
arxiv
@article{zheng2006a, title={A Turbo Coding System for High Speed Communications}, author={Yan-Xiu Zheng and Yu T. Su}, journal={arXiv preprint arXiv:cs/0603095}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603095}, primaryClass={cs.IT math.IT} }
zheng2006a
arxiv-674009
cs/0603096
On Reduced Complexity Soft-Output MIMO ML detection
<|reference_start|>On Reduced Complexity Soft-Output MIMO ML detection: In multiple-input multiple-output (MIMO) fading channels maximum likelihood (ML) detection is desirable to achieve high performance, but its complexity grows exponentially with the spectral efficiency. The current state of the art in MIMO detection is list decoding and lattice decoding. This paper proposes a new class of lattice detectors that combines some of the principles of both list and lattice decoding, thus resulting in an efficient parallelizable implementation and near-optimal soft-output ML performance. The novel detector is called the layered orthogonal lattice detector (LORD), because it adopts a new lattice formulation and relies on a channel orthogonalization process. It should be noted that the algorithm achieves optimal hard-output ML performance in the case of two transmit antennas. For two transmit antennas, max-log bit soft-output information can be generated, and for more than two antennas, approximate max-log detection is achieved. Simulation results show that LORD, in a MIMO system employing orthogonal frequency division multiplexing (OFDM) and bit interleaved coded modulation (BICM), is able to achieve very high signal-to-noise ratio (SNR) gains compared to practical soft-output detectors such as minimum mean square error (MMSE), in either a linear or a nonlinear iterative scheme. Moreover, the performance comparison with hard-output decoded algebraic space time codes shows the fundamental importance of soft-output generation capability for practical wireless applications.<|reference_end|>
arxiv
@article{siti2006on, title={On Reduced Complexity Soft-Output MIMO ML detection}, author={Massimiliano Siti and Michael P. Fitz}, journal={arXiv preprint arXiv:cs/0603096}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603096}, primaryClass={cs.IT math.IT} }
siti2006on
arxiv-674010
cs/0603097
On Pinsker's Type Inequalities and Csiszar's f-divergences Part I: Second and Fourth-Order Inequalities
<|reference_start|>On Pinsker's Type Inequalities and Csiszar's f-divergences Part I: Second and Fourth-Order Inequalities: We study conditions on $f$ under which an $f$-divergence $D_f$ will satisfy $D_f \geq c_f V^2$ or $D_f \geq c_{2,f} V^2 + c_{4,f} V^4$, where $V$ denotes variational distance and the coefficients $c_f$, $c_{2,f}$ and $c_{4,f}$ are {\em best possible}. As a consequence, we obtain lower bounds in terms of $V$ for many well known distance and divergence measures. For instance, let $D_{(\alpha)} (P,Q) = [\alpha (\alpha-1)]^{-1} [\int q^{\alpha} p^{1-\alpha} d \mu -1]$ and ${\cal I}_\alpha (P,Q) = (\alpha -1)^{-1} \log [\int p^\alpha q^{1-\alpha} d \mu]$ be respectively the {\em relative information of type} ($1-\alpha$) and {\em R\'{e}nyi's information gain of order} $\alpha$. We show that $D_{(\alpha)} \geq {1/2} V^2 + {1/72} (\alpha+1)(2-\alpha) V^4$ whenever $-1 \leq \alpha \leq 2$, $\alpha \not= 0,1$ and that ${\cal I}_{\alpha} = \frac{\alpha}{2} V^2 + {1/36} \alpha (1 + 5 \alpha - 5 \alpha^2) V^4$ for $0 < \alpha < 1$. Pinsker's inequality $D \geq {1/2} V^2$ and its extension $D \geq {1/2} V^2 + {1/36} V^4$ are special cases of each one of these.<|reference_end|>
arxiv
@article{gilardoni2006on, title={On Pinsker's Type Inequalities and Csiszar's f-divergences. Part I: Second and Fourth-Order Inequalities}, author={Gustavo L. Gilardoni}, journal={IEEE Transactions on Information Theory, Vol. 56(11), pp. 5377-5386, 2010}, year={2006}, doi={10.1109/TIT.2010.2068710}, archivePrefix={arXiv}, eprint={cs/0603097}, primaryClass={cs.IT math.IT} }
gilardoni2006on
arxiv-674011
cs/0603098
A SIMO Fiber Aided Wireless Network Architecture
<|reference_start|>A SIMO Fiber Aided Wireless Network Architecture: The concept of a fiber aided wireless network architecture (FAWNA) is introduced in [Ray et al., Allerton Conference 2005], which allows high-speed mobile connectivity by leveraging the speed of optical networks. In this paper, we consider a single-input, multiple-output (SIMO) FAWNA, which consists of a SIMO wireless channel and an optical fiber channel, connected through wireless-optical interfaces. We propose a scheme where the received wireless signal at each interface is quantized and sent over the fiber. Though our architecture is similar to that of the classical CEO problem, our problem is different from it. We show that the capacity of our scheme approaches the capacity of the architecture exponentially with fiber capacity. We also show that for a given fiber capacity, there is an optimal operating wireless bandwidth and an optimal number of wireless-optical interfaces. The wireless-optical interfaces of our scheme have low complexity and do not require knowledge of the transmitter code book. They are also extendable to FAWNAs with a large number of transmitters and interfaces, and offer adaptability to variable rates, changing channel conditions and node positions.<|reference_end|>
arxiv
@article{ray2006a, title={A SIMO Fiber Aided Wireless Network Architecture}, author={Siddharth Ray and Muriel Medard and Lizhong Zheng}, journal={arXiv preprint arXiv:cs/0603098}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603098}, primaryClass={cs.IT math.IT} }
ray2006a
arxiv-674012
cs/0603099
Benchmark Problems for Constraint Solving
<|reference_start|>Benchmark Problems for Constraint Solving: Constraint programming is, roughly speaking, a new software technology introduced by Jaffar and Lassez in 1987 for the description and effective solving of large, particularly combinatorial, problems, especially in the areas of planning and scheduling. In the following we define three constraint-solving problems from the domain of electrical networks; based on them we define 43 related problems. For the defined set of problems we benchmarked five systems: ILOG OPL, AMPL, GAMS, Mathematica and UniCalc. As expected, some of the systems performed very well on some problems while others performed very well on others.<|reference_end|>
arxiv
@article{suciu2006benchmark, title={Benchmark Problems for Constraint Solving}, author={Alin Suciu and Rodica Potolea and Tudor Muresan}, journal={arXiv preprint arXiv:cs/0603099}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603099}, primaryClass={cs.PF cs.SC} }
suciu2006benchmark
arxiv-674013
cs/0603100
Efficient Compression of Prolog Programs
<|reference_start|>Efficient Compression of Prolog Programs: We propose a special-purpose class of compression algorithms for efficient compression of Prolog programs. It is a dictionary-based compression method, specially designed for the compression of Prolog code, and therefore we name it PCA (Prolog Compression Algorithm). According to the experimental results this method provides better compression than state-of-the-art general-purpose compression algorithms. Since the algorithm works with Prolog syntactic entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is straightforward and very easy to use in any Prolog application that needs compression. Although the algorithm is designed for Prolog programs, the idea can be easily applied for the compression of programs written in other (logic) languages.<|reference_end|>
arxiv
@article{suciu2006efficient, title={Efficient Compression of Prolog Programs}, author={Alin Suciu and Kalman Pusztai}, journal={arXiv preprint arXiv:cs/0603100}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603100}, primaryClass={cs.PL} }
suciu2006efficient
arxiv-674014
cs/0603101
Prolog Server Pages
<|reference_start|>Prolog Server Pages: Prolog Server Pages (PSP) is a scripting language, based on Prolog, that can be embedded in HTML documents. To run PSP applications one needs a web server, a web browser and a PSP interpreter. The code is executed by the interpreter on the server side (web server) and the output (together with the HTML code in which the PSP code is embedded) is sent to the client side (browser). The current implementation supports the Apache web server. We implemented an Apache web server module that handles PSP files and sends the result (an HTML document) to the client. PSP supports both GET and POST HTTP requests. It also provides methods for working with HTTP cookies.<|reference_end|>
arxiv
@article{suciu2006prolog, title={Prolog Server Pages}, author={Alin Suciu and Kalman Pusztai and Andrei Vancea}, journal={arXiv preprint arXiv:cs/0603101}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603101}, primaryClass={cs.NI cs.PL} }
suciu2006prolog
arxiv-674015
cs/0603102
Enhanced Prolog Remote Predicate Call Protocol
<|reference_start|>Enhanced Prolog Remote Predicate Call Protocol: Following the ideas of the Remote Procedure Call model, we have developed a logic programming counterpart, naturally called Prolog Remote Predicate Call (Prolog RPC). The Prolog RPC protocol facilitates the integration of Prolog code into multi-language applications as well as the development of distributed intelligent applications. One of the protocol's most important uses could be the development of distributed applications that use Prolog, at least partially, to achieve their goals. Most notably, Distributed Artificial Intelligence (DAI) applications that are suitable for logic programming can profit from the use of the protocol. After proving its usefulness, we went further, developing a new version of the protocol, making it more reliable and extending its functionality. Because it has a new syntax and a new set of commands, we call this version Enhanced Prolog Remote Predicate Call. This paper describes the new features and modifications introduced in this second version.<|reference_end|>
arxiv
@article{suciu2006enhanced, title={Enhanced Prolog Remote Predicate Call Protocol}, author={Alin Suciu and Kalman Pusztai and Andrei Diaconu}, journal={arXiv preprint arXiv:cs/0603102}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603102}, primaryClass={cs.NI cs.PL} }
suciu2006enhanced
arxiv-674016
cs/0603103
Bargaining over the interference channel
<|reference_start|>Bargaining over the interference channel: In this paper we analyze the interference channel as a conflict situation. This viewpoint implies that certain points in the rate region are unreasonable to one of the players; therefore these points cannot be considered achievable based on game theoretic considerations. We then propose to use the Nash bargaining solution as a tool that provides preferred points on the boundary of the game theoretic rate region. We provide an analysis for the 2x2 interference channel using the FDM achievable rate region. We also outline how to generalize our results to other achievable rate regions for the interference channel as well as the multiple access channel. Keywords: Spectrum optimization, distributed coordination, game theory, interference channel, multiple access channel.<|reference_end|>
arxiv
@article{leshem2006bargaining, title={Bargaining over the interference channel}, author={Amir Leshem and Ephraim Zehavi}, journal={arXiv preprint arXiv:cs/0603103}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603103}, primaryClass={cs.IT math.IT} }
leshem2006bargaining
arxiv-674017
cs/0603104
Verification of Ptime reducibility for system F terms via Dual Light Affine Logic
<|reference_start|>Verification of Ptime reducibility for system F terms via Dual Light Affine Logic: In a previous work we introduced Dual Light Affine Logic (DLAL) ([BaillotTerui04]) as a variant of Light Linear Logic suitable for guaranteeing complexity properties on lambda-calculus terms: all typable terms can be evaluated in polynomial time and all Ptime functions can be represented. In the present work we address the problem of typing lambda-terms in second-order DLAL. For that we give a procedure which, starting with a term typed in system F, finds all possible ways to decorate it into a DLAL typed term. We show that our procedure can be run in time polynomial in the size of the original Church typed system F term.<|reference_end|>
arxiv
@article{atassi2006verification, title={Verification of Ptime reducibility for system F terms via Dual Light Affine Logic}, author={Vincent Atassi (LIPN) and Patrick Baillot (LIPN) and Kazushige Terui (NII)}, journal={To appear in Proceedings of Computer Science Logic 2006 (CSL'06), LNCS, Springer (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603104}, primaryClass={cs.LO} }
atassi2006verification
arxiv-674018
cs/0603105
A unifying framework for seed sensitivity and its application to subset seeds (Extended abstract)
<|reference_start|>A unifying framework for seed sensitivity and its application to subset seeds (Extended abstract): We propose a general approach to computing seed sensitivity that can be applied to different definitions of seeds. It treats separately three components of the seed sensitivity problem - a set of target alignments, an associated probability distribution, and a seed model - that are specified by distinct finite automata. The approach is then applied to a new concept of subset seeds, for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach, and can then be used in similarity search, producing better results than ordinary spaced seeds.<|reference_end|>
arxiv
@article{kucherov2006a, title={A unifying framework for seed sensitivity and its application to subset seeds (Extended abstract)}, author={Gregory Kucherov (LIFL) and Laurent Noe (LIFL) and Mikhail Roytberg (LIFL)}, journal={Algorithms in Bioinformatics, LNBI 3692 : 251-263, 2005}, year={2006}, doi={10.1007/11557067_21}, archivePrefix={arXiv}, eprint={cs/0603105}, primaryClass={cs.OH} }
kucherov2006a
arxiv-674019
cs/0603106
Estimating seed sensitivity on homogeneous alignments
<|reference_start|>Estimating seed sensitivity on homogeneous alignments: We address the problem of estimating the sensitivity of seed-based similarity search algorithms. In contrast to approaches based on Markov models [18, 6, 3, 4, 10], we study estimation based on homogeneous alignments. We describe an algorithm for counting and randomly generating those alignments, and an algorithm for exact computation of the sensitivity for a broad class of seed strategies. We provide experimental results demonstrating a bias introduced by ignoring the homogeneity condition.<|reference_end|>
arxiv
@article{kucherov2006estimating, title={Estimating seed sensitivity on homogeneous alignments}, author={Gregory Kucherov (LIFL) and Laurent Noe (LIFL) and Yann Ponty (LRI)}, journal={Proceedings of the Fourth IEEE Symposium on Bioinformatics and Bioengineering (BIBE), 387-394, 2004}, year={2006}, doi={10.1109/BIBE.2004.1317369}, archivePrefix={arXiv}, eprint={cs/0603106}, primaryClass={cs.OH} }
kucherov2006estimating
arxiv-674020
cs/0603107
Towards an information-theoretically safe cryptographic protocol
<|reference_start|>Towards an information-theoretically safe cryptographic protocol: We introduce what, if a certain kind of group action exists, is a truly (information-theoretically) safe cryptographic communication system: a protocol which provides \emph{zero} information to any passive adversary having full access to the channel.<|reference_end|>
arxiv
@article{ayuso2006towards, title={Towards an information-theoretically safe cryptographic protocol}, author={Pedro Fortuny Ayuso}, journal={arXiv preprint arXiv:cs/0603107}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603107}, primaryClass={cs.CR} }
ayuso2006towards
arxiv-674021
cs/0603108
Minimizing Symmetric Set Functions Faster
<|reference_start|>Minimizing Symmetric Set Functions Faster: We describe a combinatorial algorithm which, given a monotone and consistent symmetric set function d on a finite set V in the sense of Rizzi, constructs a non-trivial set S minimizing d(S, V-S). This includes the possibility of minimizing symmetric submodular functions. The presented algorithm requires at most as much time as the one described by Rizzi, but, depending on the function d, it may allow several improvements.<|reference_end|>
arxiv
@article{brinkmeier2006minimizing, title={Minimizing Symmetric Set Functions Faster}, author={Michael Brinkmeier}, journal={arXiv preprint arXiv:cs/0603108}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603108}, primaryClass={cs.DM math.CO} }
brinkmeier2006minimizing
arxiv-674022
cs/0603109
Encoding of Functions of Correlated Sources
<|reference_start|>Encoding of Functions of Correlated Sources: This submission is being withdrawn due to serious errors in the achievability proofs. The reviewers of the journal I had submitted to had found errors back in 2006. I had forgotten about this paper until I saw the CFP for a JSAC issue on in-network computation. http://www.jsac.ucsd.edu/Calls/in-networkcomputationcfp.pdf.<|reference_end|>
arxiv
@article{vijayakumaran2006encoding, title={Encoding of Functions of Correlated Sources}, author={Saravanan Vijayakumaran}, journal={arXiv preprint arXiv:cs/0603109}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603109}, primaryClass={cs.IT math.IT} }
vijayakumaran2006encoding
arxiv-674023
cs/0603110
Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence
<|reference_start|>Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence: We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions. The task for an agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We find some sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.<|reference_end|>
arxiv
@article{ryabko2006asymptotic, title={Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence}, author={Daniil Ryabko and Marcus Hutter}, journal={Proc. 17th International Conf. on Algorithmic Learning Theory (ALT 2006) pages 334-347}, year={2006}, number={IDSIA-09-06}, archivePrefix={arXiv}, eprint={cs/0603110}, primaryClass={cs.LG cs.AI} }
ryabko2006asymptotic
arxiv-674024
cs/0603111
Remote-control and clustering of physical computations using the XML-RPC protocol and the open-Mosix system
<|reference_start|>Remote-control and clustering of physical computations using the XML-RPC protocol and the open-Mosix system: The applications of the remote control of physical simulations performed on clustered computers running under an open-Mosix system are presented. Results from the simulation of a 2-dimensional ferromagnetic system of spins in the Ising scheme are provided. Basic parameters of a simulated hysteresis loop, like coercivity and exchange bias due to pinning of ferromagnetic spins, are given. The paper describes, in physicists' terminology, a cost-effective solution which utilizes the XML-RPC protocol (Extensible Markup Language - Remote Procedure Calling) and the standard C++ and Python languages.<|reference_end|>
arxiv
@article{blachowicz2006remote-control, title={Remote-control and clustering of physical computations using the XML-RPC protocol and the open-Mosix system}, author={T. Blachowicz and M. Wieja}, journal={arXiv preprint arXiv:cs/0603111}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603111}, primaryClass={cs.DC cs.NI} }
blachowicz2006remote-control
arxiv-674025
cs/0603112
A General Framework for Scalability and Performance Analysis of DHT Routing Systems
<|reference_start|>A General Framework for Scalability and Performance Analysis of DHT Routing Systems: In recent years, many DHT-based P2P systems have been proposed, analyzed, and certain deployments have reached a global scale with nearly one million nodes. One is thus faced with the question of which particular DHT system to choose, and whether some are inherently more robust and scalable. Toward developing such a comparative framework, we present the reachable component method (RCM) for analyzing the performance of different DHT routing systems subject to random failures. We apply RCM to five DHT systems and obtain analytical expressions that characterize their routability as a continuous function of system size and node failure probability. An important consequence is that in the large-network limit, the routability of certain DHT systems go to zero for any non-zero probability of node failure. These DHT routing algorithms are therefore unscalable, while some others, including Kademlia, which powers the popular eDonkey P2P system, are found to be scalable.<|reference_end|>
arxiv
@article{kong2006a, title={A General Framework for Scalability and Performance Analysis of DHT Routing Systems}, author={Joseph S. Kong, Jesse S. A. Bridgewater and Vwani P. Roychowdhury}, journal={arXiv preprint arXiv:cs/0603112}, year={2006}, doi={10.1109/DSN.2006.4}, archivePrefix={arXiv}, eprint={cs/0603112}, primaryClass={cs.DC} }
kong2006a
arxiv-674026
cs/0603113
Mathematical Modeling of Aerodynamic Space-to-Surface Flight with Trajectory for Avoid Intercepting Process
<|reference_start|>Mathematical Modeling of Aerodynamic Space-to-Surface Flight with Trajectory for Avoid Intercepting Process: Modeling has been created for a Space-to-Surface system, defined for an optimal targeting trajectory in the terminal phase that avoids an intercepting process. The modeling includes models for simulating the atmosphere, the speed of sound, aerodynamic flight and navigation by an infrared system. The modeling and simulation include statistical analysis of the modeling results.<|reference_end|>
arxiv
@article{gornev2006mathematical, title={Mathematical Modeling of Aerodynamic Space-to-Surface Flight with Trajectory for Avoid Intercepting Process}, author={Serge Gornev}, journal={arXiv preprint arXiv:cs/0603113}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603113}, primaryClass={cs.OH} }
gornev2006mathematical
arxiv-674027
cs/0603114
Using SMART for Customized Monitoring of Windows Services
<|reference_start|>Using SMART for Customized Monitoring of Windows Services: We focus on examining and working with an important category of computer software called Services, which are provided as a part of newer Microsoft Windows operating systems. A typical Windows user transparently utilizes many of these services but is frequently unaware of their existence. Since some services have the potential to create significant problems when they are executing, it is important for a system administrator to identify which services are running on the network, the types of processing done by each service, and any interrelationships among the various services. This information can then be used to improve the overall integrity of both the individual computer where a questionable service is running and in aggregate an entire network of computers. NCSA has developed an application called SMART (Services Monitoring And Reporting Tool) that can be used to identify and display all services currently running in the network. A commercial program called Hyena remotely monitors the services on all computers attached to the network and exports this information to SMART. SMART produces various outputs that the system administrator can analyze and then determine appropriate actions to take. In particular, SMART provides a color coordinated user interface to quickly identify and classify both potentially hazardous services and also unknown services.<|reference_end|>
arxiv
@article{pluta2006using, title={Using SMART for Customized Monitoring of Windows Services}, author={Gregory A. Pluta, Larry Brumbaugh, William Yurcik}, journal={arXiv preprint arXiv:cs/0603114}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603114}, primaryClass={cs.NI cs.CR} }
pluta2006using
arxiv-674028
cs/0603115
Implementation of float-float operators on graphics hardware
<|reference_start|>Implementation of float-float operators on graphics hardware: The Graphic Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphic processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results.<|reference_end|>
arxiv
@article{da graça2006implementation, title={Implementation of float-float operators on graphics hardware}, author={Guillaume Da Gra\c{c}a (LP2A), David Defour (LP2A)}, journal={arXiv preprint arXiv:cs/0603115}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603115}, primaryClass={cs.AR cs.GR} }
da graça2006implementation
arxiv-674029
cs/0603116
Fourier Analysis and Holographic Representations of 1D and 2D Signals
<|reference_start|>Fourier Analysis and Holographic Representations of 1D and 2D Signals: In this paper, we focus on Fourier analysis and holographic transforms for signal representation. For instance, in the case of image processing, the holographic representation has the property that an arbitrary portion of the transformed image enables reconstruction of the whole image with details missing. We focus on holographic representation defined through the Fourier Transforms. Thus, we first review some results in Fourier transform and Fourier series. Next, we review the Discrete Holographic Fourier Transform (DHFT) for image representation. Then, we describe the contributions of our work. We show a simple scheme for progressive transmission based on the DHFT. Next, we propose the Continuous Holographic Fourier Transform (CHFT) and discuss some theoretical aspects of it for 1D signals. Finally, some tests are presented in the experimental results.<|reference_end|>
arxiv
@article{giraldi2006fourier, title={Fourier Analysis and Holographic Representations of 1D and 2D Signals}, author={G.A. Giraldi and B.F. Moutinho and D.M.L. de Carvalho and J.C. de Oliveira}, journal={arXiv preprint arXiv:cs/0603116}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603116}, primaryClass={cs.CV} }
giraldi2006fourier
arxiv-674030
cs/0603117
Affine functions and series with co-inductive real numbers
<|reference_start|>Affine functions and series with co-inductive real numbers: We extend the work of A. Ciaffaglione and P. Di Gianantonio on mechanical verification of algorithms for exact computation on real numbers, using infinite streams of digits implemented as co-inductive types. Four aspects are studied: the first aspect concerns the proof that digit streams can be related to the real numbers that are already axiomatized in the proof system (axiomatized, but with no fixed representation). The second aspect re-visits the definition of an addition function, looking at techniques to let the proof search mechanism perform the effective construction of an algorithm that is correct by construction. The third aspect concerns the definition of a function to compute affine formulas with positive rational coefficients. This should be understood as a testbed to describe a technique to combine co-recursion and recursion to obtain a model for an algorithm that appears at first sight to be outside the expressive power allowed by the proof system. The fourth aspect concerns the definition of a function to compute series, with an application on the series that is used to compute Euler's number e. All these experiments should be reproducible in any proof system that supports co-inductive types, co-recursion and general forms of terminating recursion, but we performed them with the Coq system [12, 3, 14].<|reference_end|>
arxiv
@article{bertot2006affine, title={Affine functions and series with co-inductive real numbers}, author={Yves Bertot (INRIA Sophia Antipolis)}, journal={Mathematical Structures in Computer Science 17, 1 (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603117}, primaryClass={cs.LO} }
bertot2006affine
arxiv-674031
cs/0603118
Coq in a Hurry
<|reference_start|>Coq in a Hurry: These notes provide a quick introduction to the Coq system and show how it can be used to define logical concepts and functions and reason about them. It is designed as a tutorial, so that readers can quickly start their own experiments, learning only a few of the capabilities of the system. A much more comprehensive study is given in [1], which also provides an extensive collection of exercises to train on.<|reference_end|>
arxiv
@article{bertot2006coq, title={Coq in a Hurry}, author={Yves Bertot (INRIA Sophia Antipolis)}, journal={Cours (2008) 22 pages}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603118}, primaryClass={cs.LO} }
bertot2006coq
arxiv-674032
cs/0603119
CoInduction in Coq
<|reference_start|>CoInduction in Coq: We describe the basic notions of co-induction as they are available in the Coq system. As an application, we describe arithmetic properties for simple representations of real numbers.<|reference_end|>
arxiv
@article{bertot2006coinduction, title={CoInduction in Coq}, author={Yves Bertot (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:cs/0603119}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603119}, primaryClass={cs.LO} }
bertot2006coinduction
arxiv-674033
cs/0603120
Approximation Algorithms for K-Modes Clustering
<|reference_start|>Approximation Algorithms for K-Modes Clustering: In this paper, we study clustering with respect to the k-modes objective function, a natural formulation of clustering for categorical data. One of the main contributions of this paper is to establish the connection between k-modes and k-median, i.e., the optimum of k-median is at most twice the optimum of k-modes for the same categorical data clustering problem. Based on this observation, we derive a deterministic algorithm that achieves an approximation factor of 2. Furthermore, we prove that the distance measure in k-modes defines a metric. Hence, we are able to extend existing approximation algorithms for metric k-median to k-modes. Empirical results verify the superiority of our method.<|reference_end|>
arxiv
@article{he2006approximation, title={Approximation Algorithms for K-Modes Clustering}, author={Zengyou He}, journal={arXiv preprint arXiv:cs/0603120}, year={2006}, number={Tr-06-0330}, archivePrefix={arXiv}, eprint={cs/0603120}, primaryClass={cs.AI} }
he2006approximation
arxiv-674034
cs/0603121
minimUML: A Minimalist Approach to UML Diagraming for Early Computer Science Education
<|reference_start|>minimUML: A Minimalist Approach to UML Diagraming for Early Computer Science Education: The Unified Modeling Language (UML) is commonly used in introductory Computer Science to teach basic object-oriented design. However, there appears to be a lack of suitable software to support this task. Many of the available programs that support UML focus on developing code and not on enhancing learning. Those that were designed for educational use sometimes have poor interfaces or are missing common and important features, such as multiple selection and undo/redo. There is a need for software that is tailored to an instructional environment and has all the useful and needed functionality for that specific task. This is the purpose of minimUML. minimUML provides a minimum amount of UML, just what is commonly used in beginning programming classes, while providing a simple, usable interface. In particular, minimUML was designed to support abstract design while supplying features for exploratory learning and error avoidance. In addition, it allows for the annotation of diagrams, through text or freeform drawings, so students can receive feedback on their work. minimUML was developed with the goal of supporting ease of use, supporting novice students, and a requirement of no prior-training for its use.<|reference_end|>
arxiv
@article{turner2006minimuml:, title={minimUML: A Minimalist Approach to UML Diagraming for Early Computer Science Education}, author={Scott Turner, Manuel A. Perez-Quinones and Stephen H. Edwards}, journal={arXiv preprint arXiv:cs/0603121}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603121}, primaryClass={cs.HC cs.SE} }
turner2006minimuml:
arxiv-674035
cs/0603122
Complexity of Monadic inf-datalog. Application to temporal logic
<|reference_start|>Complexity of Monadic inf-datalog. Application to temporal logic: In [11] we defined Inf-Datalog and characterized the fragments of Monadic inf-Datalog that have the same expressive power as Modal Logic (resp. $CTL$, alternation-free Modal $\mu$-calculus and Modal $\mu$-calculus). We study here the time and space complexity of evaluation of Monadic inf-Datalog programs on finite models. We deduce a new unified proof that model checking has 1. linear data and program complexities (both in time and space) for $CTL$ and alternation-free Modal $\mu$-calculus, and 2. linear-space (data and program) complexities, linear-time program complexity and polynomial-time data complexity for $L\mu_k$ (Modal $\mu$-calculus with fixed alternation-depth at most $k$).<|reference_end|>
arxiv
@article{foustoucos2006complexity, title={Complexity of Monadic inf-datalog. Application to temporal logic}, author={Eug\'enie Foustoucos (MPLA), Irene Guessarian (LIAFA)}, journal={Proc. 4th Panhellenic Logic Symposium (2003) 95-99}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603122}, primaryClass={cs.DS} }
foustoucos2006complexity
arxiv-674036
cs/0603123
Towards the Optimal Amplify-and-Forward Cooperative Diversity Scheme
<|reference_start|>Towards the Optimal Amplify-and-Forward Cooperative Diversity Scheme: In a slow fading channel, how to find a cooperative diversity scheme that achieves the transmit diversity bound is still an open problem. In fact, all previously proposed amplify-and-forward (AF) and decode-and-forward (DF) schemes do not improve with the number of relays in terms of the diversity-multiplexing tradeoff (DMT) for multiplexing gains r higher than 0.5. In this work, we study the class of slotted amplify-and-forward (SAF) schemes. We first establish an upper bound on the DMT for any SAF scheme with an arbitrary number of relays N and number of slots M. Then, we propose a sequential SAF scheme that can exploit the potential diversity gain in the high multiplexing gain regime. More precisely, in certain conditions, the sequential SAF scheme achieves the proposed DMT upper bound, which tends to the transmit diversity bound when M goes to infinity. In particular, for the two-relay case, the three-slot sequential SAF scheme achieves the proposed upper bound and outperforms the two-relay non-orthogonal amplify-and-forward (NAF) scheme of Azarian et al. for multiplexing gains r < 2/3. Numerical results reveal a significant gain of our scheme over the previously proposed AF schemes, especially in the high spectral efficiency and large network size regime.<|reference_end|>
arxiv
@article{yang2006towards, title={Towards the Optimal Amplify-and-Forward Cooperative Diversity Scheme}, author={Sheng Yang and Jean-Claude Belfiore}, journal={arXiv preprint arXiv:cs/0603123}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603123}, primaryClass={cs.IT math.IT} }
yang2006towards
arxiv-674037
cs/0603124
Diversity-Multiplexing Tradeoff of Double Scattering MIMO Channels
<|reference_start|>Diversity-Multiplexing Tradeoff of Double Scattering MIMO Channels: It is well known that the presence of double scattering degrades the performance of a MIMO channel, in terms of both the multiplexing gain and the diversity gain. In this paper, a closed-form expression of the diversity-multiplexing tradeoff (DMT) of double scattering MIMO channels is obtained. It is shown that, for a channel with nT transmit antennas, nR receive antennas and nS scatterers, the DMT only depends on the ordered version of the triple (nT,nS,nR), for arbitrary nT, nS and nR. The condition under which the double scattering channel has the same DMT as the single scattering channel is also established.<|reference_end|>
arxiv
@article{yang2006diversity-multiplexing, title={Diversity-Multiplexing Tradeoff of Double Scattering MIMO Channels}, author={Sheng Yang and Jean-Claude Belfiore}, journal={arXiv preprint arXiv:cs/0603124}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603124}, primaryClass={cs.IT math.IT} }
yang2006diversity-multiplexing
arxiv-674038
cs/0603125
If a tree casts a shadow is it telling the time?
<|reference_start|>If a tree casts a shadow is it telling the time?: Physical processes are computations only when we use them to externalize thought. Computation is the performance of one or more fixed processes within a contingent environment. We reformulate the Church-Turing thesis so that it applies to programs rather than to computability. When suitably formulated agent-based computing in an open, multi-scalar environment represents the current consensus view of how we interact with the world. But we don't know how to formulate multi-scalar environments.<|reference_end|>
arxiv
@article{abbott2006if, title={If a tree casts a shadow is it telling the time?}, author={Russ Abbott}, journal={arXiv preprint arXiv:cs/0603125}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603125}, primaryClass={cs.MA cs.GL} }
abbott2006if
arxiv-674039
cs/0603126
Open at the Top; Open at the Bottom; and Continually (but Slowly) Evolving
<|reference_start|>Open at the Top; Open at the Bottom; and Continually (but Slowly) Evolving: Systems of systems differ from traditional systems in that they are open at the top, open at the bottom, and continually (but slowly) evolving. "Open at the top" means that there is no pre-defined top level application. New applications may be created at any time. "Open at the bottom" means that the system primitives are defined functionally rather than concretely. This allows the implementation of these primitives to be modified as technology changes. "Continually (but slowly) evolving" means that the system's functionality is stable enough to be useful but is understood to be subject to modification. Systems with these properties tend to be environments within which other systems operate--and hence are systems of systems. It is also important to understand the larger environment within which a system of systems exists.<|reference_end|>
arxiv
@article{abbott2006open, title={Open at the Top; Open at the Bottom; and Continually (but Slowly) Evolving}, author={Russ Abbott}, journal={arXiv preprint arXiv:cs/0603126}, year={2006}, doi={10.1109/SYSOSE.2006.1652271}, archivePrefix={arXiv}, eprint={cs/0603126}, primaryClass={cs.MA} }
abbott2006open
arxiv-674040
cs/0603127
Complex Systems + Systems Engineering = Complex Systems Engineering
<|reference_start|>Complex Systems + Systems Engineering = Complex Systems Engineering: One may define a complex system as a system in which phenomena emerge as a consequence of multiscale interaction among the system's components and their environments. The field of Complex Systems is the study of such systems--usually naturally occurring, either biological or social. Systems Engineering may be understood to include the conceptualising and building of systems that consist of a large number of concurrently operating and interacting components--usually including both human and non-human elements. It has become increasingly apparent that the kinds of systems that systems engineers build have many of the same multiscale characteristics as those of naturally occurring complex systems. In other words, systems engineering is the engineering of complex systems. This paper and the associated panel will explore some of the connections between the fields of complex systems and systems engineering.<|reference_end|>
arxiv
@article{abbott2006complex, title={Complex Systems + Systems Engineering = Complex Systems Engineering}, author={Russ Abbott}, journal={arXiv preprint arXiv:cs/0603127}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603127}, primaryClass={cs.MA} }
abbott2006complex
arxiv-674041
cs/0603128
On Cosets of the Generalized First-Order Reed-Muller Code with Low PMEPR
<|reference_start|>On Cosets of the Generalized First-Order Reed-Muller Code with Low PMEPR: Golay sequences are well suited for the use as codewords in orthogonal frequency-division multiplexing (OFDM), since their peak-to-mean envelope power ratio (PMEPR) in q-ary phase-shift keying (PSK) modulation is at most 2. It is known that a family of polyphase Golay sequences of length 2^m organizes in m!/2 cosets of a q-ary generalization of the first-order Reed-Muller code, RM_q(1,m). In this paper a more general construction technique for cosets of RM_q(1,m) with low PMEPR is established. These cosets contain so-called near-complementary sequences. The application of this theory is then illustrated by providing some construction examples. First, it is shown that the m!/2 cosets of RM_q(1,m) comprised of Golay sequences just arise as a special case. Second, further families of cosets of RM_q(1,m) with maximum PMEPR between 2 and 4 are presented, showing that some previously unexplained phenomena can now be understood within a unified framework. A lower bound on the PMEPR of cosets of RM_q(1,m) is proved as well, and it is demonstrated that the upper bound on the PMEPR is tight in many cases. Finally it is shown that all upper bounds on the PMEPR of cosets of RM_q(1,m) also hold for the peak-to-average power ratio (PAPR) under the Walsh-Hadamard transform.<|reference_end|>
arxiv
@article{schmidt2006on, title={On Cosets of the Generalized First-Order Reed-Muller Code with Low PMEPR}, author={Kai-Uwe Schmidt}, journal={IEEE Trans. Inf. Theory, vol. 52, no. 7, pp. 3220-3232, July 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603128}, primaryClass={cs.IT math.IT} }
schmidt2006on
arxiv-674042
cs/0603129
A Business Goal Driven Approach for Understanding and Specifying Information Security Requirements
<|reference_start|>A Business Goal Driven Approach for Understanding and Specifying Information Security Requirements: In this paper we present an approach for specifying and prioritizing information security requirements in organizations. It is important to prioritize security requirements since hundred per cent security is not achievable and the limited resources available should be directed to satisfy the most important ones. We propose to link explicitly security requirements with the organization's business vision, i.e. to provide business rationale for security requirements. The rationale is then used as a basis for comparing the importance of different security requirements. A conceptual framework is presented, where the relationships between business vision, critical impact factors and valuable assets (together with their security requirements) are shown.<|reference_end|>
arxiv
@article{su2006a, title={A Business Goal Driven Approach for Understanding and Specifying Information Security Requirements}, author={Xiaomeng Su, Damiano Bolzoni, Pascal van Eck}, journal={arXiv preprint arXiv:cs/0603129}, year={2006}, number={TR-CTIT-06-08}, archivePrefix={arXiv}, eprint={cs/0603129}, primaryClass={cs.CR} }
su2006a
arxiv-674043
cs/0603130
Digital watermarking in the singular vector domain
<|reference_start|>Digital watermarking in the singular vector domain: Many current watermarking algorithms insert data in the spatial or transform domains like the discrete cosine, the discrete Fourier, and the discrete wavelet transforms. In this paper, we present a data-hiding algorithm that exploits the singular value decomposition (SVD) representation of the data. We compute the SVD of the host image and the watermark and embed the watermark in the singular vectors of the host image. The proposed method leads to an imperceptible scheme for digital images, both in grey scale and color and is quite robust against attacks like noise and JPEG compression.<|reference_end|>
arxiv
@article{agarwal2006digital, title={Digital watermarking in the singular vector domain}, author={Rashmi Agarwal and M. S. Santhanam}, journal={International Journal of Image and Graphics, volume 8, page 351 (2008)}, year={2006}, doi={10.1142/S0219467808003131}, archivePrefix={arXiv}, eprint={cs/0603130}, primaryClass={cs.MM} }
agarwal2006digital
arxiv-674044
cs/0603131
Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels
<|reference_start|>Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels: This paper presents two methods for approximating the performance of coded multicarrier systems operating over frequency-selective, quasi-static fading channels with non-ideal interleaving. The first method is based on approximating the performance of the system over each realization of the channel, and is suitable for obtaining the outage performance of this type of system. The second method is based on knowledge of the correlation matrix of the frequency-domain channel gains and can be used to directly obtain the average performance. Both of the methods are applicable for convolutionally-coded interleaved systems employing Quadrature Amplitude Modulation (QAM). As examples, both methods are used to study the performance of the Multiband Orthogonal Frequency Division Multiplexing (OFDM) proposal for high data-rate Ultra-Wideband (UWB) communication.<|reference_end|>
arxiv
@article{snow2006error, title={Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels}, author={C. Snow, L. Lampe, R. Schober}, journal={arXiv preprint arXiv:cs/0603131}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603131}, primaryClass={cs.IT math.IT} }
snow2006error
arxiv-674045
cs/0603132
Graphics Turing Test
<|reference_start|>Graphics Turing Test: We define a Graphics Turing Test to measure graphics performance in a similar manner to the definition of the traditional Turing Test. To pass the test one needs to reach a computational scale, the Graphics Turing Scale, for which Computer Generated Imagery becomes comparatively indistinguishable from real images while also being interactive. We derive an estimate for this computational scale which, although large, is within reach of today's supercomputers. We consider advantages and disadvantages of various computer systems designed to pass the Graphics Turing Test. Finally we discuss commercial applications from the creation of such a system, in particular Interactive Cinema.<|reference_end|>
arxiv
@article{mcguigan2006graphics, title={Graphics Turing Test}, author={Michael McGuigan}, journal={arXiv preprint arXiv:cs/0603132}, year={2006}, archivePrefix={arXiv}, eprint={cs/0603132}, primaryClass={cs.GR} }
mcguigan2006graphics
arxiv-674046
cs/0604001
Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs
<|reference_start|>Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs: Many real world data are sampled functions. As shown by Functional Data Analysis (FDA) methods, spectra, time series, images, gesture recognition data, etc. can be processed more efficiently if their functional nature is taken into account during the data analysis process. This is done by extending standard data analysis methods so that they can apply to functional inputs. A general way to achieve this goal is to compute projections of the functional data onto a finite dimensional sub-space of the functional space. The coordinates of the data on a basis of this sub-space provide standard vector representations of the functions. The obtained vectors can be processed by any standard method. In our previous work, this general approach has been used to define projection based Multilayer Perceptrons (MLPs) with functional inputs. We study in this paper important theoretical properties of the proposed model. We show in particular that MLPs with functional inputs are universal approximators: they can approximate to arbitrary accuracy any continuous mapping from a compact sub-space of a functional space to R. Moreover, we provide a consistency result that shows that any mapping from a functional space to R can be learned thanks to examples by a projection based MLP: the generalization mean square error of the MLP decreases to the smallest possible mean square error on the data when the number of examples goes to infinity.<|reference_end|>
arxiv
@article{rossi2006theoretical, title={Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs}, author={Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc Conan-Guez (LITA)}, journal={arXiv preprint arXiv:cs/0604001}, year={2006}, doi={10.1007/s11063-005-3100-2}, archivePrefix={arXiv}, eprint={cs/0604001}, primaryClass={cs.NE} }
rossi2006theoretical
arxiv-674047
cs/0604002
Complexity of Consistent Query Answering in Databases under Cardinality-Based and Incremental Repair Semantics
<|reference_start|>Complexity of Consistent Query Answering in Databases under Cardinality-Based and Incremental Repair Semantics: Consistent Query Answering (CQA) is the problem of computing from a database the answers to a query that are consistent with respect to certain integrity constraints that the database, as a whole, may fail to satisfy. Consistent answers have been characterized as those that are invariant under certain minimal forms of restoration of the database consistency. We investigate algorithmic and complexity theoretic issues of CQA under database repairs that minimally depart (w.r.t. the cardinality of the symmetric difference) from the original database. We obtain first tight complexity bounds. We also address the problem of incremental complexity of CQA, that naturally occurs when an originally consistent database becomes inconsistent after the execution of a sequence of update operations. Tight bounds on incremental complexity are provided for various semantics under denial constraints. Fixed parameter tractability is also investigated in this dynamic context, where the size of the update sequence becomes the relevant parameter.<|reference_end|>
arxiv
@article{lopatenko2006complexity, title={Complexity of Consistent Query Answering in Databases under Cardinality-Based and Incremental Repair Semantics}, author={Andrei Lopatenko and Leopoldo Bertossi}, journal={arXiv preprint arXiv:cs/0604002}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604002}, primaryClass={cs.DB cs.CC} }
lopatenko2006complexity
arxiv-674048
cs/0604003
Hypercomputing the Mandelbrot Set?
<|reference_start|>Hypercomputing the Mandelbrot Set?: The Mandelbrot set is an extremely well-known mathematical object that can be described in a quite simple way but has very interesting and non-trivial properties. This paper surveys some results that are known concerning the (non-)computability of the set. It considers two models of decidability over the reals (which have been treated much more thoroughly and technically by Hertling (2005), Blum, Shub and Smale, Brattka (2003) and Weihrauch (1999 and 2003) among others), two over the computable reals (the Russian school and hypercomputation) and a model over the rationals.<|reference_end|>
arxiv
@article{potgieter2006hypercomputing, title={Hypercomputing the Mandelbrot Set?}, author={Petrus H. Potgieter}, journal={arXiv preprint arXiv:cs/0604003}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604003}, primaryClass={cs.CC} }
potgieter2006hypercomputing
arxiv-674049
cs/0604004
The Poincare conjecture for digital spaces. Properties of digital n-dimensional disks and spheres
<|reference_start|>The Poincare conjecture for digital spaces. Properties of digital n-dimensional disks and spheres: Motivated by the Poincare conjecture, we study properties of digital n-dimensional spheres and disks, which are digital models of their continuous counterparts. We introduce homeomorphic transformations of digital manifolds, which retain the connectedness, the dimension, the Euler characteristics and the homology groups of manifolds. We find conditions under which an n-dimensional digital manifold is the n-dimensional digital sphere and discuss the link between continuous closed n-manifolds and their digital models.<|reference_end|>
arxiv
@article{evako2006the, title={The Poincare conjecture for digital spaces. Properties of digital n-dimensional disks and spheres}, author={Alexander V. Evako}, journal={arXiv preprint arXiv:cs/0604004}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604004}, primaryClass={cs.DM cs.CV math.AT} }
evako2006the
arxiv-674050
cs/0604005
Multiterminal Source Coding with Two Encoders--I: A Computable Outer Bound
<|reference_start|>Multiterminal Source Coding with Two Encoders--I: A Computable Outer Bound: In this first part, a computable outer bound is proved for the multiterminal source coding problem, for a setup with two encoders, discrete memoryless sources, and bounded distortion measures.<|reference_end|>
arxiv
@article{servetto2006multiterminal, title={Multiterminal Source Coding with Two Encoders--I: A Computable Outer Bound}, author={Sergio D. Servetto (Cornell University)}, journal={arXiv preprint arXiv:cs/0604005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604005}, primaryClass={cs.IT math.IT} }
servetto2006multiterminal
arxiv-674051
cs/0604006
Sparse Matrix Implementation in Octave
<|reference_start|>Sparse Matrix Implementation in Octave: There are many classes of mathematical problems which give rise to matrices, where a large number of the elements are zero. In this case it makes sense to have a special matrix type to handle this class of problems where only the non-zero elements of the matrix are stored. Not only does this reduce the amount of memory needed to store the matrix, but it also means that operations on this type of matrix can take advantage of the a priori knowledge of the positions of the non-zero elements to accelerate their calculations. A matrix type that stores only the non-zero elements is generally called sparse. Until recently, Octave lacked a full implementation of sparse matrices. This article addresses the implementation of sparse matrices within Octave, including their storage, creation, the fundamental algorithms used, their implementations and the basic operations and functions implemented for sparse matrices. Mathematical issues such as the return types of sparse operations, matrix fill-in and reordering for sparse matrix factorization are discussed in the context of a real example. Benchmarks of Octave's implementation of sparse operations compared to their equivalents in Matlab are given and their implications discussed. Results are presented for multiplication and linear algebra operations for various matrix orders and densities. Furthermore, the use of Octave's sparse matrix implementation is demonstrated using a real example of a finite element model (FEM) problem. Finally, the method of using sparse matrices with Octave's oct-files is discussed. The means of creating, using and returning sparse matrices within oct-files is discussed, as well as the differences between Octave's Sparse and Array classes.<|reference_end|>
arxiv
@article{bateman2006sparse, title={Sparse Matrix Implementation in Octave}, author={David Bateman and Andy Adler}, journal={arXiv preprint arXiv:cs/0604006}, year={2006}, number={Octave2006/04}, archivePrefix={arXiv}, eprint={cs/0604006}, primaryClass={cs.MS} }
bateman2006sparse
arxiv-674052
cs/0604007
On the Complexity of Limit Sets of Cellular Automata Associated with Probability Measures
<|reference_start|>On the Complexity of Limit Sets of Cellular Automata Associated with Probability Measures: We study the notion of limit sets of cellular automata associated with probability measures (mu-limit sets). This notion was introduced by P. Kurka and A. Maass. It is a refinement of the classical notion of omega-limit sets dealing with the typical long term behavior of cellular automata. It focuses on the words whose probability of appearance does not tend to 0 as time tends to infinity (the persistent words). In this paper, we give a characterisation of the persistent language for non sensible cellular automata associated with Bernoulli measures. We also study the computational complexity of these languages. We show that the persistent language can be non-recursive. But our main result is that the set of quasi-nilpotent cellular automata (those with a single configuration in their mu-limit set) is neither recursively enumerable nor co-recursively enumerable.<|reference_end|>
arxiv
@article{boyer2006on, title={On the Complexity of Limit Sets of Cellular Automata Associated with Probability Measures}, author={Laurent Boyer (LIP), Victor Poupet (LIP), Guillaume Theyssier (LM-Savoie)}, journal={Mathematical Foundations of Computer Science 2006Springer (Ed.) (28/08/2006) 190-201}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604007}, primaryClass={cs.DM cs.CC math.DS} }
boyer2006on
arxiv-674053
cs/0604008
Minimum-Cost Coverage of Point Sets by Disks
<|reference_start|>Minimum-Cost Coverage of Point Sets by Disks: We consider a class of geometric facility location problems in which the goal is to determine a set X of disks given by their centers (t_j) and radii (r_j) that cover a given set of demand points Y in the plane at the smallest possible cost. We consider cost functions of the form sum_j f(r_j), where f(r)=r^alpha is the cost of transmission to radius r. Special cases arise for alpha=1 (sum of radii) and alpha=2 (total area); power consumption models in wireless network design often use an exponent alpha>2. Different scenarios arise according to possible restrictions on the transmission centers t_j, which may be constrained to belong to a given discrete set or to lie on a line, etc. We obtain several new results, including (a) exact and approximation algorithms for selecting transmission points t_j on a given line in order to cover demand points Y in the plane; (b) approximation algorithms (and an algebraic intractability result) for selecting an optimal line on which to place transmission points to cover Y; (c) a proof of NP-hardness for a discrete set of transmission points in the plane and any fixed alpha>1; and (d) a polynomial-time approximation scheme for the problem of computing a minimum cost covering tour (MCCT), in which the total cost is a linear combination of the transmission cost for the set of disks and the length of a tour/path that connects the centers of the disks.<|reference_end|>
arxiv
@article{arkin2006minimum-cost, title={Minimum-Cost Coverage of Point Sets by Disks}, author={Esther M. Arkin and Herve Broennimann and Jeff Erickson and Sandor P. Fekete and Christian Knauer and Jonathan Lenchner and Joseph S. B. Mitchell and Kim Whittlesey}, journal={arXiv preprint arXiv:cs/0604008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604008}, primaryClass={cs.DS cs.CG} }
arkin2006minimum-cost
arxiv-674054
cs/0604009
Can an Organism Adapt Itself to Unforeseen Circumstances?
<|reference_start|>Can an Organism Adapt Itself to Unforeseen Circumstances?: A model of an organism as an autonomous intelligent system has been proposed. This model was used to analyze the learning of an organism in various environmental conditions. Processes of learning were divided into two types: strong and weak processes, taking place in the absence and the presence of aprioristic information about an object, respectively. Weak learning is synonymous with adaptation, when aprioristic programs already available in a system (an organism) are started. It was shown that strong learning is impossible for both an organism and any autonomous intelligent system. It was also shown that the knowledge base of an organism cannot be updated. Therefore, all behavior programs of an organism are congenital. A model of a conditioned reflex as a series of consecutive measurements of environmental parameters has been advanced. Repeated measurements are necessary in this case to reduce the error during decision making.<|reference_end|>
arxiv
@article{melkikh2006can, title={Can an Organism Adapt Itself to Unforeseen Circumstances?}, author={Alexey V. Melkikh}, journal={arXiv preprint arXiv:cs/0604009}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604009}, primaryClass={cs.AI} }
melkikh2006can
arxiv-674055
cs/0604010
Nearly optimal exploration-exploitation decision thresholds
<|reference_start|>Nearly optimal exploration-exploitation decision thresholds: While in general trading off exploration and exploitation in reinforcement learning is hard, under some formulations relatively simple solutions exist. In this paper, we first derive upper bounds for the utility of selecting different actions in the multi-armed bandit setting. Unlike the common statistical upper confidence bounds, these make the link between the planning horizon, uncertainty and the need for exploration explicit. The resulting algorithm can be seen as a generalisation of the classical Thompson sampling algorithm. We experimentally test these algorithms, as well as $\epsilon$-greedy and the value of perfect information heuristics. Finally, we also introduce the idea of bagging for reinforcement learning. By employing a version of online bootstrapping, we can efficiently sample from an approximate posterior distribution.<|reference_end|>
arxiv
@article{dimitrakakis2006nearly, title={Nearly optimal exploration-exploitation decision thresholds}, author={Christos Dimitrakakis}, journal={arXiv preprint arXiv:cs/0604010}, year={2006}, number={IDIAP-RR-06-12}, archivePrefix={arXiv}, eprint={cs/0604010}, primaryClass={cs.AI cs.LG} }
dimitrakakis2006nearly
arxiv-674056
cs/0604011
Semi-Supervised Learning -- A Statistical Physics Approach
<|reference_start|>Semi-Supervised Learning -- A Statistical Physics Approach: We present a novel approach to semi-supervised learning which is based on statistical physics. Most previous work in the field of semi-supervised learning classifies the points by minimizing a certain energy function, which corresponds to a minimal k-way cut solution. In contrast to these methods, we estimate the distribution of classifications, instead of the sole minimal k-way cut, which yields more accurate and robust results. Our approach may be applied to all energy functions used for semi-supervised learning. The method is based on sampling using a Multicanonical Markov chain Monte-Carlo algorithm, and has a straightforward probabilistic interpretation, which allows for soft assignments of points to classes and for coping with previously unseen class types. The suggested approach is demonstrated on a toy data set and on two real-life data sets of gene expression.<|reference_end|>
arxiv
@article{getz2006semi-supervised, title={Semi-Supervised Learning -- A Statistical Physics Approach}, author={Gad Getz, Noam Shental, Eytan Domany}, journal={arXiv preprint arXiv:cs/0604011}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604011}, primaryClass={cs.LG cond-mat.stat-mech cs.CV} }
getz2006semi-supervised
arxiv-674057
cs/0604012
The Aryabhata Algorithm Using Least Absolute Remainders
<|reference_start|>The Aryabhata Algorithm Using Least Absolute Remainders: This paper presents an introduction to the Aryabhata algorithm for finding multiplicative inverses and solving linear congruences, both of which have applications in cryptography. We do so by the use of least absolute remainders. The exposition of the Aryabhata algorithm provided here can achieve performance exceeding that of the approach recently described by Rao and Yang.<|reference_end|>
arxiv
@article{vuppala2006the, title={The Aryabhata Algorithm Using Least Absolute Remainders}, author={Sreeram Vuppala}, journal={arXiv preprint arXiv:cs/0604012}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604012}, primaryClass={cs.CR} }
vuppala2006the
arxiv-674058
cs/0604013
On Covering a Graph Optimally with Induced Subgraphs
<|reference_start|>On Covering a Graph Optimally with Induced Subgraphs: We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number of vertices in each subgraph is minimized. We prove NP-completeness of the problem, prove lower bounds, and give approximation algorithms for certain graph classes.<|reference_end|>
arxiv
@article{thite2006on, title={On Covering a Graph Optimally with Induced Subgraphs}, author={Shripad Thite}, journal={arXiv preprint arXiv:cs/0604013}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604013}, primaryClass={cs.DM} }
thite2006on
arxiv-674059
cs/0604014
Towards Analog Reverse Time Computation
<|reference_start|>Towards Analog Reverse Time Computation: We report the consequences of a destabilization process on a simulated General Purpose Analog Computer. This new technology overcomes problems linked with serial ambiguity, and provides an analog bias to encode algorithms whose complexity is over polynomial. We also implicitly demonstrate how countermesures of the Stochastic Aperture Degeneracy could efficiently reach higher computational classes, and would open a road towards Analog Reverse Time Computation.<|reference_end|>
arxiv
@article{habibi2006towards, title={Towards Analog Reverse Time Computation}, author={O. Habibi, U.R. Patihnedj, M.O. Dhar}, journal={arXiv preprint arXiv:cs/0604014}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604014}, primaryClass={cs.CC} }
habibi2006towards
arxiv-674060
cs/0604015
Revealing the Autonomous System Taxonomy: The Machine Learning Approach
<|reference_start|>Revealing the Autonomous System Taxonomy: The Machine Learning Approach: Although the Internet AS-level topology has been extensively studied over the past few years, little is known about the details of the AS taxonomy. An AS "node" can represent a wide variety of organizations, e.g., a large ISP, a small private business, or a university, with vastly different network characteristics, external connectivity patterns, network growth tendencies, and other properties that we can hardly neglect while working on veracious Internet representations in simulation environments. In this paper, we introduce a radically new approach based on machine learning techniques to map all the ASes in the Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes with expected accuracy of 78.1%. We release to the community the AS-level topology dataset augmented with: 1) the AS taxonomy information and 2) the set of AS attributes we used to classify ASes. We believe that this dataset will serve as an invaluable addition to further understanding of the structure and evolution of the Internet.<|reference_end|>
arxiv
@article{dimitropoulos2006revealing, title={Revealing the Autonomous System Taxonomy: The Machine Learning Approach}, author={Xenofontas Dimitropoulos, Dmitri Krioukov, George Riley, kc claffy}, journal={PAM 2006, best paper award}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604015}, primaryClass={cs.NI cs.LG} }
dimitropoulos2006revealing
arxiv-674061
cs/0604016
On Conditional Branches in Optimal Search Trees
<|reference_start|>On Conditional Branches in Optimal Search Trees: Algorithms for efficiently finding optimal alphabetic decision trees -- such as the Hu-Tucker algorithm -- are well established and commonly used. However, such algorithms generally assume that the cost per decision is uniform and thus independent of the outcome of the decision. The few algorithms without this assumption instead use one cost if the decision outcome is "less than" and another cost otherwise. In practice, neither assumption is accurate for software optimized for today's microprocessors. Such software generally has one cost for the more likely decision outcome and a greater cost -- often far greater -- for the less likely decision outcome. This problem and generalizations thereof are thus applicable to hard coding static decision tree instances in software, e.g., for optimizing program bottlenecks or for compiling switch statements. An O(n^3)-time O(n^2)-space dynamic programming algorithm can solve this optimal binary decision tree problem, and this approach has many generalizations that optimize for the behavior of processors with predictive branch capabilities, both static and dynamic. Solutions to this formulation are often faster in practice than "optimal" decision trees as formulated in the literature. Different search paradigms can sometimes yield even better performance.<|reference_end|>
arxiv
@article{baer2006on, title={On Conditional Branches in Optimal Search Trees}, author={Michael B. Baer}, journal={arXiv preprint arXiv:cs/0604016}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604016}, primaryClass={cs.PF cs.DS cs.IR} }
baer2006on
arxiv-674062
cs/0604017
AS Relationships: Inference and Validation
<|reference_start|>AS Relationships: Inference and Validation: Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5% customer to provider (c2p), 82.8% peer to peer (p2p), and 90.3% sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2% of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors.<|reference_end|>
arxiv
@article{dimitropoulos2006as, title={AS Relationships: Inference and Validation}, author={Xenofontas Dimitropoulos, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker, Young Hyun, kc claffy, George Riley}, journal={ACM SIGCOMM Computer Communication Review (CCR), v.37, n.1, p.29-40, 2007}, year={2006}, doi={10.1145/1198255.1198259}, archivePrefix={arXiv}, eprint={cs/0604017}, primaryClass={cs.NI} }
dimitropoulos2006as
arxiv-674063
cs/0604018
Cryptographic Pseudo-Random Sequences from the Chaotic Henon Map
<|reference_start|>Cryptographic Pseudo-Random Sequences from the Chaotic Henon Map: A scheme for pseudo-random binary sequence generation based on the two-dimensional discrete-time Henon map is proposed. Properties of the proposed sequences pertaining to linear complexity, linear complexity profile, correlation and auto-correlation are investigated. All these properties of the sequences suggest a strong resemblance to random sequences. Results of statistical testing of the sequences are found to be encouraging. An attempt is made to estimate the keyspace size if the proposed scheme is used for cryptographic applications. The keyspace size is found to be large and is dependent on the precision of the computing platform used.<|reference_end|>
arxiv
@article{suneel2006cryptographic, title={Cryptographic Pseudo-Random Sequences from the Chaotic Henon Map}, author={Madhekar Suneel}, journal={arXiv preprint arXiv:cs/0604018}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604018}, primaryClass={cs.CR nlin.CD} }
suneel2006cryptographic
arxiv-674064
cs/0604019
The Case for Modeling Security, Privacy, Usability and Reliability (SPUR) in Automotive Software
<|reference_start|>The Case for Modeling Security, Privacy, Usability and Reliability (SPUR) in Automotive Software: Over the past five years, there has been considerable growth and established value in the practice of modeling automotive software requirements. Much of this growth has been centered on requirements of software associated with the established functional areas of an automobile, such as those associated with powertrain, chassis, body, safety and infotainment. This paper makes a case for modeling four additional attributes that are increasingly important as vehicles become information conduits: security, privacy, usability, and reliability. These four attributes are important in creating specifications for embedded in-vehicle automotive software.<|reference_end|>
arxiv
@article{prasad2006the, title={The Case for Modeling Security, Privacy, Usability and Reliability (SPUR) in Automotive Software}, author={K. Venkatesh Prasad, TJ Giuli, and David Watson}, journal={arXiv preprint arXiv:cs/0604019}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604019}, primaryClass={cs.SE cs.CR cs.HC} }
prasad2006the
arxiv-674065
cs/0604020
Approximation Algorithms for Restricted Cycle Covers Based on Cycle Decompositions
<|reference_start|>Approximation Algorithms for Restricted Cycle Covers Based on Cycle Decompositions: A cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle. An L-cycle cover is a cycle cover in which the length of every cycle is in the set L. The weight of a cycle cover of an edge-weighted graph is the sum of the weights of its edges. We come close to settling the complexity and approximability of computing L-cycle covers. On the one hand, we show that for almost all L, computing L-cycle covers of maximum weight in directed and undirected graphs is APX-hard and NP-hard. Most of our hardness results hold even if the edge weights are restricted to zero and one. On the other hand, we show that the problem of computing L-cycle covers of maximum weight can be approximated within a factor of 2 for undirected graphs and within a factor of 8/3 in the case of directed graphs. This holds for arbitrary sets L.<|reference_end|>
arxiv
@article{manthey2006approximation, title={Approximation Algorithms for Restricted Cycle Covers Based on Cycle Decompositions}, author={Bodo Manthey}, journal={arXiv preprint arXiv:cs/0604020}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604020}, primaryClass={cs.DS cs.CC cs.DM} }
manthey2006approximation
arxiv-674066
cs/0604021
Low Latency Wireless Ad-Hoc Networking: Power and Bandwidth Challenges and a Hierarchical Solution
<|reference_start|>Low Latency Wireless Ad-Hoc Networking: Power and Bandwidth Challenges and a Hierarchical Solution: This paper is concerned with the scaling of the number of hops in a large scale wireless ad-hoc network (WANET), a quantity we call network latency. A large network latency affects all aspects of data communication in a WANET, including an increase in delay, packet loss, required processing power and memory. We consider network management and data routing challenges in WANETs with scalable network latency. On the physical side, reducing network latency imposes a significantly higher power and bandwidth demand on nodes, as is reflected in a set of new bounds. On the protocol front, designing distributed routing protocols that can guarantee the delivery of data packets within a scalable number of hops is a challenging task. To solve this, we introduce multi-resolution randomized hierarchy (MRRH), a novel power and bandwidth efficient WANET protocol with scalable network latency. MRRH uses a randomized algorithm for building and maintaining a random hierarchical network topology, which, together with the proposed routing algorithm, can guarantee efficient delivery of data packets in the wireless network. For a network of size $N$, MRRH can provide an average latency of only $O(\log^{3} N)$. The power and bandwidth consumption of MRRH are shown to be nearly optimal for the latency it provides. Therefore, MRRH is a provably efficient candidate for truly large scale wireless ad-hoc networking.<|reference_end|>
arxiv
@article{sarshar2006low, title={Low Latency Wireless Ad-Hoc Networking: Power and Bandwidth Challenges and a Hierarchical Solution}, author={Nima Sarshar and Behnam A. Rezaei and Vwani P. Roychowdhury}, journal={arXiv preprint arXiv:cs/0604021}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604021}, primaryClass={cs.IT math.IT} }
sarshar2006low
arxiv-674067
cs/0604022
Locked and Unlocked Chains of Planar Shapes
<|reference_start|>Locked and Unlocked Chains of Planar Shapes: We extend linkage unfolding results from the well-studied case of polygonal linkages to the more general case of linkages of polygons. More precisely, we consider chains of nonoverlapping rigid planar shapes (Jordan regions) that are hinged together sequentially at rotatable joints. Our goal is to characterize the families of planar shapes that admit locked chains, where some configurations cannot be reached by continuous reconfiguration without self-intersection, and which families of planar shapes guarantee universal foldability, where every chain is guaranteed to have a connected configuration space. Previously, only obtuse triangles were known to admit locked shapes, and only line segments were known to guarantee universal foldability. We show that a surprisingly general family of planar shapes, called slender adornments, guarantees universal foldability: roughly, the distance from each edge along the path along the boundary of the slender adornment to each hinge should be monotone. In contrast, we show that isosceles triangles with any desired apex angle less than 90 degrees admit locked chains, which is precisely the threshold beyond which the inward-normal property no longer holds.<|reference_end|>
arxiv
@article{connelly2006locked, title={Locked and Unlocked Chains of Planar Shapes}, author={Robert Connelly and Erik D. Demaine and Martin L. Demaine and Sandor P. Fekete and Stefan Langerman and Joseph S. B. Mitchell and Ares Ribo and Guenter Rote}, journal={Discrete and Computational Geometry 44 (2010), 439-462}, year={2006}, doi={10.1007/s00454-010-9262-3}, archivePrefix={arXiv}, eprint={cs/0604022}, primaryClass={cs.CG} }
connelly2006locked
arxiv-674068
cs/0604023
Communication Bottlenecks in Scale-Free Networks
<|reference_start|>Communication Bottlenecks in Scale-Free Networks: We consider the effects of network topology on the optimality of packet routing quantified by $\gamma_c$, the rate of packet insertion beyond which congestion and queue growth occurs. The key result of this paper is to show that for any network, there exists an absolute upper bound, expressed in terms of vertex separators, for the scaling of $\gamma_c$ with network size $N$, irrespective of the routing algorithm used. We then derive an estimate to this upper bound for scale-free networks, and introduce a novel static routing protocol which is superior to shortest path routing under intense packet insertion rates.<|reference_end|>
arxiv
@article{sreenivasan2006communication, title={Communication Bottlenecks in Scale-Free Networks}, author={Sameet Sreenivasan, Reuven Cohen, Eduardo L'opez, Zolt'an Toroczkai, and H. Eugene Stanley}, journal={Phys. Rev. E 75, 036105 (2007)}, year={2006}, doi={10.1103/PhysRevE.75.036105}, archivePrefix={arXiv}, eprint={cs/0604023}, primaryClass={cs.NI cond-mat.stat-mech} }
sreenivasan2006communication
arxiv-674069
cs/0604024
Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity II: A Simple Example
<|reference_start|>Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity II: A Simple Example: In a previous paper we have suggested a number of ideas to attack circuit size complexity with cohomology. As a simple example, we take circuits that can only compute the AND of two inputs, which essentially reduces to SET COVER. We show a very special case of the cohomological approach (one particular free category, using injective and superskyscraper sheaves) gives the linear programming bound coming from the relaxation of the standard integer programming reformulation of SET COVER.<|reference_end|>
arxiv
@article{friedman2006cohomology, title={Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity II: A Simple Example}, author={Joel Friedman}, journal={arXiv preprint arXiv:cs/0604024}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604024}, primaryClass={cs.CC math.AG} }
friedman2006cohomology
arxiv-674070
cs/0604025
An Extremal Inequality Motivated by Multiterminal Information Theoretic Problems
<|reference_start|>An Extremal Inequality Motivated by Multiterminal Information Theoretic Problems: We prove a new extremal inequality, motivated by the vector Gaussian broadcast channel and the distributed source coding with a single quadratic distortion constraint problems. As a corollary, this inequality yields a generalization of the classical entropy-power inequality (EPI). As another corollary, this inequality sheds insight into maximizing the differential entropy of the sum of two dependent random variables.<|reference_end|>
arxiv
@article{liu2006an, title={An Extremal Inequality Motivated by Multiterminal Information Theoretic Problems}, author={Tie Liu and Pramod Viswanath}, journal={arXiv preprint arXiv:cs/0604025}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604025}, primaryClass={cs.IT math.IT} }
liu2006an
arxiv-674071
cs/0604026
APHRODITE: an Anomaly-based Architecture for False Positive Reduction
<|reference_start|>APHRODITE: an Anomaly-based Architecture for False Positive Reduction: We present APHRODITE, an architecture designed to reduce false positives in network intrusion detection systems. APHRODITE works by detecting anomalies in the output traffic, and by correlating them with the alerts raised by the NIDS working on the input traffic. Benchmarks show a substantial reduction of false positives and that APHRODITE is also effective after a "quick setup", i.e. in the realistic case in which it has not been "trained" and set up optimally.<|reference_end|>
arxiv
@article{bolzoni2006aphrodite:, title={APHRODITE: an Anomaly-based Architecture for False Positive Reduction}, author={Damiano Bolzoni, Sandro Etalle}, journal={arXiv preprint arXiv:cs/0604026}, year={2006}, number={TR-CTIT-06-13}, archivePrefix={arXiv}, eprint={cs/0604026}, primaryClass={cs.CR} }
bolzoni2006aphrodite:
arxiv-674072
cs/0604027
Unification of multi-lingual scientific terminological resources using the ISO 16642 standard. The TermSciences initiative
<|reference_start|>Unification of multi-lingual scientific terminological resources using the ISO 16642 standard. The TermSciences initiative: This paper presents the TermSciences portal, which deals with the implementation of a conceptual model that uses the recent ISO 16642 standard (Terminological Markup Framework). This standard turns out to be suitable for concept modeling since it allowed us to organize the original resources by concept and to associate the various terms for a given concept. Additional structuring is produced by sharing conceptual relationships, that is, cross-linking of resource results through the introduction of semantic relations which may initially have been missing.<|reference_end|>
arxiv
@article{khayari2006unification, title={Unification of multi-lingual scientific terminological resources using the ISO 16642 standard. The TermSciences initiative}, author={Majid Khayari (INIST), Stéphane Schneider (INIST), Isabelle Kramer (LORIA), Laurent Romary (LORIA), the termsciences Collaboration}, journal={arXiv preprint arXiv:cs/0604027}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604027}, primaryClass={cs.CL} }
khayari2006unification
arxiv-674073
cs/0604028
Two Proofs of the Fisher Information Inequality via Data Processing Arguments
<|reference_start|>Two Proofs of the Fisher Information Inequality via Data Processing Arguments: Two new proofs of the Fisher information inequality (FII) using data processing inequalities for mutual information and conditional variance are presented.<|reference_end|>
arxiv
@article{liu2006two, title={Two Proofs of the Fisher Information Inequality via Data Processing Arguments}, author={Tie Liu and Pramod Viswanath}, journal={arXiv preprint arXiv:cs/0604028}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604028}, primaryClass={cs.IT math.IT} }
liu2006two
arxiv-674074
cs/0604029
Order-Optimal Data Aggregation in Wireless Sensor Networks - Part I: Regular Networks
<|reference_start|>Order-Optimal Data Aggregation in Wireless Sensor Networks - Part I: Regular Networks: The predominate traffic patterns in a wireless sensor network are many-to-one and one-to-many communication. Hence, the performance of wireless sensor networks is characterized by the rate at which data can be disseminated from or aggregated to a data sink. In this paper, we consider the data aggregation problem. We demonstrate that a data aggregation rate of O(log(n)/n) is optimal and that this rate can be achieved in wireless sensor networks using a generalization of cooperative beamforming called cooperative time-reversal communication.<|reference_end|>
arxiv
@article{barton2006order-optimal, title={Order-Optimal Data Aggregation in Wireless Sensor Networks - Part I: Regular Networks}, author={Richard J. Barton and Rong Zheng}, journal={IEEE Transactions on Information Theory, vol. 56, no. 11, pp. 5811-5821, November 2010}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604029}, primaryClass={cs.IT math.IT} }
barton2006order-optimal
arxiv-674075
cs/0604030
The Influence of Adaptive Multicoding on Mutual Information and Channel Capacity for Uncertain Wideband CDMA Rayleigh Fading Channels
<|reference_start|>The Influence of Adaptive Multicoding on Mutual Information and Channel Capacity for Uncertain Wideband CDMA Rayleigh Fading Channels: We consider the problem of adaptive modulation for wideband DS-CDMA Rayleigh fading channels with imperfect channel state information (CSI). We assume a multidimensional signal subspace spanned by a collection of random spreading codes (multicoding) and study the effects of both the subspace dimension and the probability distribution of the transmitted symbols on the mutual information between the channel input and output in the presence of uncertainty regarding the true state of the channel. We develop approximations for the mutual information as well as both upper and lower bounds on the mutual information that are stated explicitly in terms of the dimension of the signal constellation, the number of resolvable fading paths on the channel, the current estimate of channel state, and the mean-squared-error of the channel estimate. We analyze these approximations and bounds in order to quantify the impact of signal dimension and symbol distribution on system performance.<|reference_end|>
arxiv
@article{barton2006the, title={The Influence of Adaptive Multicoding on Mutual Information and Channel Capacity for Uncertain Wideband CDMA Rayleigh Fading Channels}, author={Richard J. Barton}, journal={arXiv preprint arXiv:cs/0604030}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604030}, primaryClass={cs.IT math.IT} }
barton2006the
arxiv-674076
cs/0604031
On the Low SNR Capacity of Peak-Limited Non-Coherent Fading Channels with Memory
<|reference_start|>On the Low SNR Capacity of Peak-Limited Non-Coherent Fading Channels with Memory: The capacity of non-coherent stationary Gaussian fading channels with memory under a peak-power constraint is studied in the asymptotic weak-signal regime. It is assumed that the fading law is known to both transmitter and receiver but that neither is cognizant of the fading realization. A connection is demonstrated between the asymptotic behavior of channel capacity in this regime and the asymptotic behavior of the prediction error incurred in predicting the fading process from very noisy observations of its past. This connection can be viewed as the low signal-to-noise ratio (SNR) analog of recent results by Lapidoth & Moser and by Lapidoth demonstrating connections between the high SNR capacity growth and the noiseless or almost-noiseless prediction error. We distinguish between two families of fading laws: the ``slowly forgetting'' and the ``quickly forgetting''. For channels in the former category the low SNR capacity is achieved by IID inputs, whereas in the latter such inputs are typically sub-optimal. Instead, the asymptotic capacity can be approached by inputs with IID phase but block-constant magnitude.<|reference_end|>
arxiv
@article{lapidoth2006on, title={On the Low SNR Capacity of Peak-Limited Non-Coherent Fading Channels with Memory}, author={Amos Lapidoth and Ligong Wang}, journal={arXiv preprint arXiv:cs/0604031}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604031}, primaryClass={cs.IT math.IT} }
lapidoth2006on
arxiv-674077
cs/0604032
Real Computational Universality: The Word Problem for a class of groups with infinite presentation
<|reference_start|>Real Computational Universality: The Word Problem for a class of groups with infinite presentation: The word problem for discrete groups is well-known to be undecidable by a Turing Machine; more precisely, it is reducible both to and from and thus equivalent to the discrete Halting Problem. The present work introduces and studies a real extension of the word problem for a certain class of groups which are presented as quotient groups of a free group and a normal subgroup. Most importantly, the free group will be generated by an uncountable set of generators with index running over certain sets of real numbers. This allows us to include many mathematically important groups which are not captured in the framework of the classical word problem. Our contribution extends computational group theory from the discrete to the Blum-Shub-Smale (BSS) model of real number computation. We believe this to be an interesting step towards applying BSS theory, in addition to semi-algebraic geometry, also to further areas of mathematics. The main result establishes the word problem for such groups to be not only semi-decidable (and thus reducible FROM) but also reducible TO the Halting Problem for such machines. It thus provides the first non-trivial example of a problem COMPLETE, that is, computationally universal for this model.<|reference_end|>
arxiv
@article{ziegler2006real, title={Real Computational Universality: The Word Problem for a class of groups with infinite presentation}, author={Martin Ziegler and Klaus Meer}, journal={arXiv preprint arXiv:cs/0604032}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604032}, primaryClass={cs.LO cs.SC} }
ziegler2006real
arxiv-674078
cs/0604033
Statistical Properties of Eigen-Modes and Instantaneous Mutual Information in MIMO Time-Varying Rayleigh Channels
<|reference_start|>Statistical Properties of Eigen-Modes and Instantaneous Mutual Information in MIMO Time-Varying Rayleigh Channels: In this paper, we study two important metrics in multiple-input multiple-output (MIMO) time-varying Rayleigh flat fading channels. One is the eigen-mode, and the other is the instantaneous mutual information (IMI). Their second-order statistics, such as the correlation coefficient, level crossing rate (LCR), and average fade/outage duration, are investigated, assuming a general nonisotropic scattering environment. Exact closed-form expressions are derived and Monte Carlo simulations are provided to verify the accuracy of the analytical results. For the eigen-modes, we found they tend to be spatio-temporally uncorrelated in large MIMO systems. For the IMI, the results show that its correlation coefficient can be well approximated by the squared amplitude of the correlation coefficient of the channel, under certain conditions. Moreover, we also found the LCR of IMI is much more sensitive to the scattering environment than that of each eigen-mode.<|reference_end|>
arxiv
@article{wang2006statistical, title={Statistical Properties of Eigen-Modes and Instantaneous Mutual Information in MIMO Time-Varying Rayleigh Channels}, author={Shuangquan Wang, Ali Abdi}, journal={arXiv preprint arXiv:cs/0604033}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604033}, primaryClass={cs.IT math.IT} }
wang2006statistical
arxiv-674079
cs/0604034
Squarepants in a Tree: Sum of Subtree Clustering and Hyperbolic Pants Decomposition
<|reference_start|>Squarepants in a Tree: Sum of Subtree Clustering and Hyperbolic Pants Decomposition: We provide efficient constant factor approximation algorithms for the problems of finding a hierarchical clustering of a point set in any metric space, minimizing the sum of minimum spanning tree lengths within each cluster, and in the hyperbolic or Euclidean planes, minimizing the sum of cluster perimeters. Our algorithms for the hyperbolic and Euclidean planes can also be used to provide a pants decomposition, that is, a set of disjoint simple closed curves partitioning the plane minus the input points into subsets with exactly three boundary components, with approximately minimum total length. In the Euclidean case, these curves are squares; in the hyperbolic case, they combine our Euclidean square pants decomposition with our tree clustering method for general metric spaces.<|reference_end|>
arxiv
@article{eppstein2006squarepants, title={Squarepants in a Tree: Sum of Subtree Clustering and Hyperbolic Pants Decomposition}, author={David Eppstein}, journal={ACM Trans. Algorithms 5(3): 29, 2009}, year={2006}, doi={10.1145/1541885.1541890}, archivePrefix={arXiv}, eprint={cs/0604034}, primaryClass={cs.CG} }
eppstein2006squarepants
arxiv-674080
cs/0604035
Certain new M-matrices and their properties and applications
<|reference_start|>Certain new M-matrices and their properties and applications: The Mn-matrix was defined by Mohan [20], who showed a method of constructing (1,-1)-matrices and studied some of their properties. Such (1,-1)-matrices were also constructed and studied by Cohn [5], Wang [33], Ehrlich [8], and Ehrlich and Zeller [9]. In this paper, while noting some resemblances of this matrix to the Hadamard matrix, and naming it the M-matrix, we show how to construct partially balanced incomplete block (PBIB) designs and some regular bipartite graphs from it. We consider two types of these M-matrices. We also mention certain applications of these M-matrices in signal and communication processing and in network systems, and end with some open problems.<|reference_end|>
arxiv
@article{mohan2006certain, title={Certain new M-matrices and their properties and applications}, author={R.N.Mohan, Sanpei Kageyama, Moon Ho Lee, Gao Yang}, journal={arXiv preprint arXiv:cs/0604035}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604035}, primaryClass={cs.DM} }
mohan2006certain
arxiv-674081
cs/0604036
Collaborative thesaurus tagging the Wikipedia way
<|reference_start|>Collaborative thesaurus tagging the Wikipedia way: This paper explores the system of categories that is used to classify articles in Wikipedia. It is compared to collaborative tagging systems like del.icio.us and to hierarchical classification like the Dewey Decimal Classification (DDC). Specifics and commonalities of these systems of subject indexing are exposed. Analysis of structural and statistical properties (descriptors per record, records per descriptor, descriptor levels) shows that the category system of Wikimedia is a thesaurus that combines collaborative tagging and hierarchical subject indexing in a special way.<|reference_end|>
arxiv
@article{voss2006collaborative, title={Collaborative thesaurus tagging the Wikipedia way}, author={Jakob Voss}, journal={arXiv preprint arXiv:cs/0604036}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604036}, primaryClass={cs.IR cs.DL} }
voss2006collaborative
arxiv-674082
cs/0604037
An O(n^3)-Time Algorithm for Tree Edit Distance
<|reference_start|>An O(n^3)-Time Algorithm for Tree Edit Distance: The {\em edit distance} between two ordered trees with vertex labels is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. In this paper, we present a worst-case $O(n^3)$-time algorithm for this problem, improving the previous best $O(n^3\log n)$-time algorithm~\cite{Klein}. Our result requires a novel adaptive strategy for deciding how a dynamic program divides into subproblems (which is interesting in its own right), together with a deeper understanding of the previous algorithms for the problem. We also prove the optimality of our algorithm among the family of \emph{decomposition strategy} algorithms--which also includes the previous fastest algorithms--by tightening the known lower bound of $\Omega(n^2\log^2 n)$~\cite{Touzet} to $\Omega(n^3)$, matching our algorithm's running time. Furthermore, we obtain matching upper and lower bounds of $\Theta(n m^2 (1 + \log \frac{n}{m}))$ when the two trees have different sizes $m$ and~$n$, where $m < n$.<|reference_end|>
arxiv
@article{demaine2006an, title={An O(n^3)-Time Algorithm for Tree Edit Distance}, author={Erik D. Demaine, Shay Mozes, Benjamin Rossman, Oren Weimann}, journal={ACM Transactions on Algorithms 6(1): (2009)}, year={2006}, doi={10.1145/1644015.1644017}, archivePrefix={arXiv}, eprint={cs/0604037}, primaryClass={cs.DS} }
demaine2006an
arxiv-674083
cs/0604038
UniCalc.LIN: a linear constraint solver for the UniCalc system
<|reference_start|>UniCalc.LIN: a linear constraint solver for the UniCalc system: In this short paper we present a linear constraint solver for the UniCalc system, an environment for reliable solution of mathematical modeling problems.<|reference_end|>
arxiv
@article{petrov2006unicalc.lin:, title={UniCalc.LIN: a linear constraint solver for the UniCalc system}, author={E. Petrov, Yu. Kostov, E. Botoeva}, journal={arXiv preprint arXiv:cs/0604038}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604038}, primaryClass={cs.MS cs.AI} }
petrov2006unicalc.lin:
arxiv-674084
cs/0604039
A Fixed-Point Type for Octave
<|reference_start|>A Fixed-Point Type for Octave: This paper announces the availability of a fixed point toolbox for the Matlab compatible software package Octave. This toolbox is released under the GNU Public License, and can be used to model the losses in algorithms implemented in hardware. Furthermore, this paper presents as an example of the use of this toolbox, the effects of a fixed point implementation on the precision of an OFDM modulator.<|reference_end|>
arxiv
@article{bateman2006a, title={A Fixed-Point Type for Octave}, author={David Bateman, Laurent Mazet, Veronique Buzenac-Settineri and Markus Muck}, journal={arXiv preprint arXiv:cs/0604039}, year={2006}, number={octave2006/12}, archivePrefix={arXiv}, eprint={cs/0604039}, primaryClass={cs.MS} }
bateman2006a
arxiv-674085
cs/0604040
Optimal Distortion-Power Tradeoffs in Sensor Networks: Gauss-Markov Random Processes
<|reference_start|>Optimal Distortion-Power Tradeoffs in Sensor Networks: Gauss-Markov Random Processes: We investigate the optimal performance of dense sensor networks by studying the joint source-channel coding problem. The overall goal of the sensor network is to take measurements from an underlying random process, code and transmit those measurement samples to a collector node in a cooperative multiple access channel with feedback, and reconstruct the entire random process at the collector node. We provide lower and upper bounds for the minimum achievable expected distortion when the underlying random process is stationary and Gaussian. In the case where the random process is also Markovian, we evaluate the lower and upper bounds explicitly and show that they are of the same order for a wide range of sum power constraints. Thus, for a Gauss-Markov random process, under these sum power constraints, we determine the achievability scheme that is order-optimal, and express the minimum achievable expected distortion as a function of the sum power constraint.<|reference_end|>
arxiv
@article{liu2006optimal, title={Optimal Distortion-Power Tradeoffs in Sensor Networks: Gauss-Markov Random Processes}, author={Nan Liu and Sennur Ulukus}, journal={arXiv preprint arXiv:cs/0604040}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604040}, primaryClass={cs.IT math.IT} }
liu2006optimal
arxiv-674086
cs/0604041
On Orthogonality of Latin Squares
<|reference_start|>On Orthogonality of Latin Squares: An arrangement of s elements in s rows and s columns, such that no element repeats more than once in each row and each column, is called a Latin square of order s. If two Latin squares of the same order are superimposed one on the other, and in the resultant array each ordered pair occurs once and only once, then they are called orthogonal Latin squares. A frequency square is an nxn matrix such that each element from the list of n elements occurs t times in each row and in each column. These two concepts lead to a new third concept, called t-orthogonal Latin squares: from a set of m orthogonal Latin squares, if t orthogonal Latin squares are superimposed and each ordered t-tuple in the resultant array occurs once and only once, then they form a t-orthogonal Latin square. In this paper we propose a construction of such t-orthogonal Latin squares.<|reference_end|>
arxiv
@article{mohan2006on, title={On Orthogonality of Latin Squares}, author={R.N.Mohan, Moon Ho Lee, and Subash Pokreal}, journal={arXiv preprint arXiv:cs/0604041}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604041}, primaryClass={cs.DM} }
mohan2006on
arxiv-674087
cs/0604042
Adaptative combination rule and proportional conflict redistribution rule for information fusion
<|reference_start|>Adaptative combination rule and proportional conflict redistribution rule for information fusion: This paper presents two new promising rules of combination for the fusion of uncertain and potentially highly conflicting sources of evidences in the framework of the theory of belief functions in order to palliate the well-know limitations of Dempster's rule and to work beyond the limits of applicability of the Dempster-Shafer theory. We present both a new class of adaptive combination rules (ACR) and a new efficient Proportional Conflict Redistribution (PCR) rule allowing to deal with highly conflicting sources for static and dynamic fusion applications.<|reference_end|>
arxiv
@article{florea2006adaptative, title={Adaptative combination rule and proportional conflict redistribution rule for information fusion}, author={M. C. Florea, J. Dezert, P. Valin, F. Smarandache, Anne-Laure Jousselme}, journal={arXiv preprint arXiv:cs/0604042}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604042}, primaryClass={cs.AI} }
florea2006adaptative
arxiv-674088
cs/0604043
Demand-driven Inlining in a Region-based Optimizer for ILP Architectures
<|reference_start|>Demand-driven Inlining in a Region-based Optimizer for ILP Architectures: Region-based compilation repartitions a program into more desirable compilation units using profiling information and procedure inlining to enable region formation analysis. Heuristics play a key role in determining when it is most beneficial to inline procedures during region formation. An ILP optimizing compiler using a region-based approach restructures a program to better reflect dynamic behavior and increase interprocedural optimization and scheduling opportunities. This paper presents an interprocedural compilation technique which performs procedure inlining on-demand, rather than as a separate phase, to improve the ability of a region-based optimizer to control code growth, compilation time and memory usage while improving performance. The interprocedural region formation algorithm utilizes a demand-driven, heuristics-guided approach to inlining, restructuring an input program into interprocedural regions. Experimental results are presented to demonstrate the impact of the algorithm and several inlining heuristics upon a number of traditional and novel compilation characteristics within a region-based ILP compiler and simulator.<|reference_end|>
arxiv
@article{way2006demand-driven, title={Demand-driven Inlining in a Region-based Optimizer for ILP Architectures}, author={Thomas P. Way and Lori L. Pollock}, journal={arXiv preprint arXiv:cs/0604043}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604043}, primaryClass={cs.DC cs.PL} }
way2006demand-driven
arxiv-674089
cs/0604044
A new M-matrix of Type III, its properties and applications
<|reference_start|>A new M-matrix of Type III, its properties and applications: Binary matrices such as (1,-1)- and (1,0)-matrices were studied by many authors, including Cohn, Wang, Ehlich, Ehlich and Zeller, and Mohan, Kageyama, Lee, and Gao. In a recent paper, Mohan et al considered the M-matrices of Types I and II, studying some of their properties and applications. In the present paper they discuss the M-matrices of Type III and study their properties and applications. They give some constructions of SPBIB designs and some corresponding M-graphs constructed from them. This is the continuation of their earlier research work in this direction, and these papers establish the importance of non-orthogonal matrices as well.<|reference_end|>
arxiv
@article{mohan2006a, title={A new M-matrix of Type III, its properties and applications}, author={R.N.Mohan, Moon Ho Lee, and Ram Paudal}, journal={arXiv preprint arXiv:cs/0604044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604044}, primaryClass={cs.DM} }
mohan2006a
arxiv-674090
cs/0604045
An exact algorithm for higher-dimensional orthogonal packing
<|reference_start|>An exact algorithm for higher-dimensional orthogonal packing: Higher-dimensional orthogonal packing problems have a wide range of practical applications, including packing, cutting, and scheduling. Combining the use of our data structure for characterizing feasible packings with our new classes of lower bounds, and other heuristics, we develop a two-level tree search algorithm for solving higher-dimensional packing problems to optimality. Computational results are reported, including optimal solutions for all two-dimensional test problems from recent literature. This is the third in a series of articles describing new approaches to higher-dimensional packing; see cs.DS/0310032 and cs.DS/0402044.<|reference_end|>
arxiv
@article{fekete2006an, title={An exact algorithm for higher-dimensional orthogonal packing}, author={Sandor P. Fekete and Joerg Schepers and Jan C. van der Veen}, journal={arXiv preprint arXiv:cs/0604045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604045}, primaryClass={cs.DS} }
fekete2006an
arxiv-674091
cs/0604046
Concerning the differentiability of the energy function in vector quantization algorithms
<|reference_start|>Concerning the differentiability of the energy function in vector quantization algorithms: The adaptation rule for Vector Quantization algorithms, and consequently the convergence of the generated sequence, depends on the existence and properties of a function called the energy function, defined on a topological manifold. Our aim is to investigate the conditions of existence of such a function for a class of algorithms exemplified by the initial ''K-means'' and Kohonen algorithms. The results presented here supplement previous studies and show that the energy function is not always a potential but at least the uniform limit of a series of potential functions which we call a pseudo-potential. Our work also shows that a large number of existing vector quantization algorithms developed by the Artificial Neural Networks community fall into this category. The framework we define opens the way to study the convergence of all the corresponding adaptation rules at once, and a theorem gives promising insights in that direction. We also demonstrate that the ''K-means'' energy function is a pseudo-potential but not a potential in general. Consequently, the energy function associated to the ''Neural-Gas'' is not a potential in general.<|reference_end|>
arxiv
@article{lepetz2006concerning, title={Concerning the differentiability of the energy function in vector quantization algorithms}, author={Dominique Lepetz, Max Nemoz-Gaillard and Michael Aupetit}, journal={arXiv preprint arXiv:cs/0604046}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604046}, primaryClass={cs.LG cs.NE} }
lepetz2006concerning
arxiv-674092
cs/0604047
Efficient algorithms for deciding the type of growth of products of integer matrices
<|reference_start|>Efficient algorithms for deciding the type of growth of products of integer matrices: For a given finite set $\Sigma$ of matrices with nonnegative integer entries we study the growth of $$ \max_t(\Sigma) = \max\{\|A_{1}... A_{t}\|: A_i \in \Sigma\}.$$ We show how to determine in polynomial time whether the growth with $t$ is bounded, polynomial, or exponential, and we characterize precisely all possible behaviors.<|reference_end|>
arxiv
@article{jungers2006efficient, title={Efficient algorithms for deciding the type of growth of products of integer matrices}, author={Raphaël Jungers, Vladimir Protasov, Vincent D. Blondel}, journal={arXiv preprint arXiv:cs/0604047}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604047}, primaryClass={cs.CC} }
jungers2006efficient
arxiv-674093
cs/0604048
Will the Butterfly Cipher keep your Network Data secure? Developments in Computer Encryption
<|reference_start|>Will the Butterfly Cipher keep your Network Data secure? Developments in Computer Encryption: This paper explains the recent developments in security and encryption. The Butterfly cipher and quantum cryptography are reviewed and compared. Examples of their relative uses are discussed and suggestions for future developments considered. In addition application to network security together with a substantial review of classification of encryption systems and a summary of security weaknesses are considered.<|reference_end|>
arxiv
@article{hinze-hoare2006will, title={Will the Butterfly Cipher keep your Network Data secure? Developments in Computer Encryption}, author={Vita Hinze-Hoare}, journal={arXiv preprint arXiv:cs/0604048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604048}, primaryClass={cs.CR} }
hinze-hoare2006will
arxiv-674094
cs/0604049
Low SNR Capacity of Fading Channels with Peak and Average Power Constraints
<|reference_start|>Low SNR Capacity of Fading Channels with Peak and Average Power Constraints: Flat-fading channels that are correlated in time are considered under peak and average power constraints. For discrete-time channels, a new upper bound on the capacity per unit time is derived. A low SNR analysis of a full-scattering vector channel is used to derive a complementary lower bound. Together, these bounds allow us to identify the exact scaling of channel capacity for a fixed peak to average ratio, as the average power converges to zero. The upper bound is also asymptotically tight as the average power converges to zero for a fixed peak power. For a continuous time infinite bandwidth channel, Viterbi identified the capacity for M-FSK modulation. Recently, Zhang and Laneman showed that the capacity can be achieved with non-bursty signaling (QPSK). An additional contribution of this paper is to obtain similar results under peak and average power constraints.<|reference_end|>
arxiv
@article{sethuraman2006low, title={Low SNR Capacity of Fading Channels with Peak and Average Power Constraints}, author={Vignesh Sethuraman, Bruce Hajek}, journal={arXiv preprint arXiv:cs/0604049}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604049}, primaryClass={cs.IT math.IT} }
sethuraman2006low
arxiv-674095
cs/0604050
On Hadamard Conjecture
<|reference_start|>On Hadamard Conjecture: In this note we give an overview of the state of the art of the well-known Hadamard conjecture, which is more than a century old, and note that it has now been established by using the methods given in the two papers by Mohan et al [6,7].<|reference_end|>
arxiv
@article{mohan2006on, title={On Hadamard Conjecture}, author={R.N.Mohan}, journal={arXiv preprint arXiv:cs/0604050}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604050}, primaryClass={cs.DM} }
mohan2006on
arxiv-674096
cs/0604051
Structural Alignments of pseudo-knotted RNA-molecules in polynomial time
<|reference_start|>Structural Alignments of pseudo-knotted RNA-molecules in polynomial time: An RNA molecule is structured on several layers. The primary and most obvious structure is its sequence of bases, i.e. a word over the alphabet {A,C,G,U}. The higher structure is a set of one-to-one base-pairings resulting in a two-dimensional folding of the one-dimensional sequence. One speaks of a secondary structure if these pairings do not cross and of a tertiary structure otherwise. Since the folding of the molecule is important for its function, the search for related RNA molecules should not only be restricted to the primary structure. It seems sensible to incorporate the higher structures in the search. Based on this assumption and certain edit-operations a distance between two arbitrary structures can be defined. It is known that the general calculation of this measure is NP-complete \cite{zhang02similarity}. But for some special cases polynomial algorithms are known. Using a new formal description of secondary and tertiary structures, we extend the class of structures for which the distance can be calculated in polynomial time. In addition the presented algorithm may be used to approximate the edit-distance between two arbitrary structures with a constant ratio.<|reference_end|>
arxiv
@article{brinkmeier2006structural, title={Structural Alignments of pseudo-knotted RNA-molecules in polynomial time}, author={Michael Brinkmeier}, journal={arXiv preprint arXiv:cs/0604051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604051}, primaryClass={cs.DS cs.CC cs.DM} }
brinkmeier2006structural
arxiv-674097
cs/0604052
Extension of the functionality of the symbolic program FORM by external software
<|reference_start|>Extension of the functionality of the symbolic program FORM by external software: We describe the implementation of facilities for the communication with external resources in the Symbolic Manipulation System FORM. This is done according to the POSIX standards defined for the UNIX operating system. We present a number of examples that illustrate the increased power due to these new capabilities.<|reference_end|>
arxiv
@article{tentyukov2006extension, title={Extension of the functionality of the symbolic program FORM by external software}, author={M. Tentyukov and J.A.M. Vermaseren}, journal={Comput.Phys.Commun.176:385-405,2007}, year={2006}, doi={10.1016/j.cpc.2006.11.007}, number={SFB/CPP-06-15, TTP06-12, NIKHEF 06-002}, archivePrefix={arXiv}, eprint={cs/0604052}, primaryClass={cs.SC hep-ph} }
tentyukov2006extension
arxiv-674098
cs/0604053
Survivable Routing in IP-over-WDM Networks in the Presence of Multiple Failures
<|reference_start|>Survivable Routing in IP-over-WDM Networks in the Presence of Multiple Failures: Failure restoration at the IP layer in IP-over-WDM networks requires mapping the IP topology onto the WDM topology in such a way that a failure at the WDM layer leaves the IP topology connected. Such a mapping is called $survivable$. As finding a survivable mapping is known to be NP-complete, in practice it requires a heuristic approach. We have introduced in [1] a novel algorithm called ``SMART'', that is more effective and scalable than the heuristics known to date. Moreover, the formal analysis of SMART [2] has led to new applications: the formal verification of the existence of a survivable mapping, and a tool tracing and repairing the vulnerable areas of the network. In this paper we extend the theoretical analysis in [2] by considering $multiple failures$.<|reference_end|>
arxiv
@article{kurant2006survivable, title={Survivable Routing in IP-over-WDM Networks in the Presence of Multiple Failures}, author={Maciej Kurant and Patrick Thiran}, journal={arXiv preprint arXiv:cs/0604053}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604053}, primaryClass={cs.NI} }
kurant2006survivable
arxiv-674099
cs/0604054
New results on rewrite-based satisfiability procedures
<|reference_start|>New results on rewrite-based satisfiability procedures: Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that not only is the rewriting approach elegant and conceptually simple, but it also has important practical implications.<|reference_end|>
arxiv
@article{armando2006new, title={New results on rewrite-based satisfiability procedures}, author={Alessandro Armando and Maria Paola Bonacina and Silvio Ranise and Stephan Schulz}, journal={ACM Transactions on Computational Logic, 10(1):129-179, January 2009}, year={2006}, doi={10.1145/1459010.1459014}, number={RR 36/2005}, archivePrefix={arXiv}, eprint={cs/0604054}, primaryClass={cs.AI cs.LO} }
armando2006new
arxiv-674100
cs/0604055
Beyond Hirsch Conjecture: walks on random polytopes and smoothed complexity of the simplex method
<|reference_start|>Beyond Hirsch Conjecture: walks on random polytopes and smoothed complexity of the simplex method: The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the shadow-vertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an arbitrary linear program, the simplex method finds the solution after a walk on polytope(s) with expected length polynomial in the number of constraints n, the number of variables d and the inverse standard deviation of the perturbation 1/sigma. We show that the length of the walk in the simplex method is actually polylogarithmic in the number of constraints n. Spielman-Teng's bound on the walk was O(n^{86} d^{55} sigma^{-30}), up to logarithmic factors. We improve this to O(log^7 n (d^9 + d^3 sigma^{-4})). This shows that the tight Hirsch conjecture n-d on the length of the walk on polytopes is not a limitation for smoothed Linear Programming. Random perturbations create short paths between vertices. We propose a randomized phase-I for solving arbitrary linear programs, which is of independent interest. Instead of finding a vertex of a feasible set, we add a vertex at random to the feasible set. This does not affect the solution of the linear program with constant probability. This overcomes one of the major difficulties of smoothed analysis of the simplex method -- one can now statistically decouple the walk from the smoothed linear program. This yields a much better reduction of the smoothed complexity to a geometric quantity -- the size of planar sections of random polytopes. We also improve upon the known estimates for that size, showing that it is polylogarithmic in the number of vertices.<|reference_end|>
arxiv
@article{vershynin2006beyond, title={Beyond Hirsch Conjecture: walks on random polytopes and smoothed complexity of the simplex method}, author={Roman Vershynin}, journal={SIAM Journal on Computing 39 (2009), 646--678. Conference version in: FOCS'06, 133--142}, year={2006}, archivePrefix={arXiv}, eprint={cs/0604055}, primaryClass={cs.DS math.FA} }
vershynin2006beyond