corpus_id: string (lengths 7-12)
paper_id: string (lengths 9-16)
title: string (lengths 1-261)
abstract: string (lengths 70-4.02k)
source: string (1 distinct value)
bibtex: string (lengths 208-20.9k)
citation_key: string (lengths 6-100)
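The schema above describes one record per paper. As a minimal sketch of how a consumer might sanity-check a row (the field names come from the schema; the `bibtex_key` helper and its regex are assumptions, not part of the dataset), one can verify that a row's `citation_key` matches the key embedded in its `bibtex` entry:

```python
import re

def bibtex_key(entry: str) -> str:
    """Extract the citation key from an entry like '@article{key, ...}'."""
    match = re.match(r"@\w+\{([^,]+),", entry.strip())
    if match is None:
        raise ValueError("not a recognizable BibTeX entry")
    return match.group(1)

# One row of the dataset, reduced to the two fields being cross-checked.
row = {
    "citation_key": "bercher2008an",
    "bibtex": "@article{bercher2008an, title={An entropic view of "
              "Pickands' theorem}, year={2008}}",
}
assert bibtex_key(row["bibtex"]) == row["citation_key"]
```

A check like this would flag the malformed keys below (e.g. keys containing spaces or TeX escapes), since they cannot round-trip through a standard BibTeX parser.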
arxiv-2801
0802.3106
Studies of Polymer Deformation and Recovery in Hot Embossing
<|reference_start|>Studies of Polymer Deformation and Recovery in Hot Embossing: In large area micro hot embossing, the process temperature plays a critical role in both the local fidelity of microstructure formation and global uniformity. The significance of low temperature hot embossing is to improve the global flatness of embossed devices. This paper reports on experimental studies of polymer deformation and relaxation in micro embossing when the process temperature is below or near the polymer's glass transition temperature (Tg). In this investigation, an indentation system and a micro embosser were used to investigate the relationship of microstructure formation versus process temperature and load pressure. The depth of indentation was controlled and the load force at a certain indentation depth was measured. Experiments were carried out using 1 mm thick PMMA films with the process temperature ranging from Tg-55 degrees C to Tg+20 degrees C. The embossed structures included a single micro cavity and groups of micro cavity arrays. It was found that at a temperature of Tg-55 degrees C, elastic deformation dominated the formation of microstructures and significant relaxation happened after embossing. From Tg-20 degrees C to Tg, plastic deformation dominated polymer deformation, and permanent cavities could be formed on PMMA substrates without obvious relaxation. However, the formation of protrusive structures such as micro pillars was not complete since there was little polymer flow. With an increase in process temperature, microstructures could be formed under lower loading pressure. Considering the fidelity of a single microstructure and the global flatness of embossed substrates, micro hot embossing at a low process temperature, but with good fidelity, should be preferred.<|reference_end|>
arxiv
@article{shan2008studies, title={Studies of Polymer Deformation and Recovery in Hot Embossing}, author={X. C. Shan and Y. C. Liu and H. J. Lu and Z. F. Wang and Y. C. Lam}, journal={In Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2007, Stresa, Lago Maggiore, Italy (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3106}, primaryClass={cs.OH} }
shan2008studies
arxiv-2802
0802.3107
Evaluation of the thermal and hydraulic performances of a very thin sintered copper flat heat pipe for 3D microsystem packages
<|reference_start|>Evaluation of the thermal and hydraulic performances of a very thin sintered copper flat heat pipe for 3D microsystem packages: The reported research work presents numerical studies, validated by experimental results, of a flat micro heat pipe with a sintered copper wick structure. The objectives of this project are to produce and demonstrate the efficiency of the passive cooling technology (heat pipe) integrated in a very thin electronic substrate that is part of a multifunctional 3-D electronic package. The enhanced technology is dedicated to the thermal management of highly dissipative microsystems having heat densities of more than 10 W/cm2. Future applications are envisaged in the avionics sector. In this research, a 2D numerical hydraulic model has been developed to investigate the performance of a very thin flat micro heat pipe with a sintered copper wick structure, using water as a refrigerant. The finite difference method has been used to develop the model. The model has been used to determine the mass transfer and fluid flow in order to evaluate the limits of heat transport capacity as functions of the dimensions of the wick and the vapour space and for various copper sphere radii. The results are presented in terms of liquid and vapour pressures within the heat pipe. The simulated results are validated by experiments and show that the method can be further used to predict the thermal performance of the heat pipe and to optimise its design.<|reference_end|>
arxiv
@article{tzanova2008evaluation, title={Evaluation of the thermal and hydraulic performances of a very thin sintered copper flat heat pipe for 3D microsystem packages}, author={S. Tzanova and L. Kamenova and Y. Avenas and Ch. Schaeffer (CIME)}, journal={In Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2007, Stresa, Lago Maggiore, Italy (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3107}, primaryClass={cs.OH} }
tzanova2008evaluation
arxiv-2803
0802.3110
An entropic view of Pickands' theorem
<|reference_start|>An entropic view of Pickands' theorem: It is shown that distributions arising in Renyi-Tsallis maximum entropy setting are related to the Generalized Pareto Distributions (GPD) that are widely used for modeling the tails of distributions. The relevance of such modelization, as well as the ubiquity of GPD in practical situations follows from Balkema-De Haan-Pickands theorem on the distribution of excesses (over a high threshold). We provide an entropic view of this result, by showing that the distribution of a suitably normalized excess variable converges to the solution of a maximum Tsallis entropy, which is the GPD. This highlights the relevance of the so-called Tsallis distributions in many applications as well as some relevance to the use of the corresponding entropy.<|reference_end|>
arxiv
@article{bercher2008an, title={An entropic view of Pickands' theorem}, author={J.-F. Bercher and C. Vignat}, journal={arXiv preprint arXiv:0802.3110}, year={2008}, archivePrefix={arXiv}, eprint={0802.3110}, primaryClass={cs.IT math.IT} }
bercher2008an
arxiv-2804
0802.3137
Design and Implementation of Aggregate Functions in the DLV System
<|reference_start|>Design and Implementation of Aggregate Functions in the DLV System: Disjunctive Logic Programming (DLP) is a very expressive formalism: it allows for expressing every property of finite structures that is decidable in the complexity class SigmaP2 (= NP^NP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. Especially properties that require the use of arithmetic operators (like sum, times, or count) on a set or multiset of elements, which satisfy some conditions, cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics, by requiring aggregates to be stratified. We formally define the semantics of the extended language (called DLP^A), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of DLP^A, showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of DLP^A in DLV -- a state-of-the-art DLP system -- and report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.<|reference_end|>
arxiv
@article{faber2008design, title={Design and Implementation of Aggregate Functions in the DLV System}, author={Wolfgang Faber and Gerald Pfeifer and Nicola Leone and Tina Dell'Armi and Giuseppe Ielpa}, journal={arXiv preprint arXiv:0802.3137}, year={2008}, archivePrefix={arXiv}, eprint={0802.3137}, primaryClass={cs.AI cs.LO} }
faber2008design
arxiv-2805
0802.3235
Characterization of the convergence of stationary Fokker-Planck learning
<|reference_start|>Characterization of the convergence of stationary Fokker-Planck learning: The convergence properties of the stationary Fokker-Planck algorithm for the estimation of the asymptotic density of stochastic search processes are studied. Theoretical and empirical arguments for the characterization of the convergence of the estimation in the case of separable and nonseparable nonlinear optimization problems are given. Some implications of the convergence of stationary Fokker-Planck learning for the inference of parameters in artificial neural network models are outlined.<|reference_end|>
arxiv
@article{berrones2008characterization, title={Characterization of the convergence of stationary Fokker-Planck learning}, author={Arturo Berrones}, journal={arXiv preprint arXiv:0802.3235}, year={2008}, doi={10.1016/j.neucom.2008.12.042}, archivePrefix={arXiv}, eprint={0802.3235}, primaryClass={cs.NE cond-mat.dis-nn cs.AI} }
berrones2008characterization
arxiv-2806
0802.3253
On the Capacity and Design of Limited Feedback Multiuser MIMO Uplinks
<|reference_start|>On the Capacity and Design of Limited Feedback Multiuser MIMO Uplinks: The theory of multiple-input multiple-output (MIMO) technology has been well-developed to increase fading channel capacity over single-input single-output (SISO) systems. This capacity gain can often be leveraged by utilizing channel state information at the transmitter and the receiver. Users make use of this channel state information for transmit signal adaptation. In this correspondence, we derive the capacity region for the MIMO multiple access channel (MIMO MAC) when partial channel state information is available at the transmitters, where we assume a synchronous MIMO multiuser uplink. The partial channel state information feedback has a cardinality constraint and is fed back from the basestation to the users using a limited rate feedback channel. Using this feedback information, we propose a finite codebook design method to maximize sum-rate. In this correspondence, the codebook is a set of transmit signal covariance matrices. We also derive the capacity region and codebook design methods in the case that the covariance matrix is rank-one (i.e., beamforming). This is motivated by the fact that beamforming is optimal in certain conditions. The simulation results show that when the number of feedback bits increases, the capacity also increases. Even with a small number of feedback bits, the performance of the proposed system is close to an optimal solution with the full feedback.<|reference_end|>
arxiv
@article{kim2008on, title={On the Capacity and Design of Limited Feedback Multiuser MIMO Uplinks}, author={Il Han Kim and David J. Love}, journal={arXiv preprint arXiv:0802.3253}, year={2008}, archivePrefix={arXiv}, eprint={0802.3253}, primaryClass={cs.IT cs.MM math.IT} }
kim2008on
arxiv-2807
0802.3254
General Algorithms for Testing the Ambiguity of Finite Automata
<|reference_start|>General Algorithms for Testing the Ambiguity of Finite Automata: This paper presents efficient algorithms for testing the finite, polynomial, and exponential ambiguity of finite automata with $\epsilon$-transitions. It gives an algorithm for testing the exponential ambiguity of an automaton $A$ in time $O(|A|_E^2)$, and finite or polynomial ambiguity in time $O(|A|_E^3)$. These complexities significantly improve over the previous best complexities given for the same problem. Furthermore, the algorithms presented are simple and are based on a general algorithm for the composition or intersection of automata. We also give an algorithm to determine the degree of polynomial ambiguity of a finite automaton $A$ that is polynomially ambiguous in time $O(|A|_E^3)$. Finally, we present an application of our algorithms to an approximate computation of the entropy of a probabilistic automaton.<|reference_end|>
arxiv
@article{allauzen2008general, title={General Algorithms for Testing the Ambiguity of Finite Automata}, author={Cyril Allauzen and Mehryar Mohri and Ashish Rastogi}, journal={arXiv preprint arXiv:0802.3254}, year={2008}, archivePrefix={arXiv}, eprint={0802.3254}, primaryClass={cs.CC} }
allauzen2008general
arxiv-2808
0802.3267
The Forgiving Tree: A Self-Healing Distributed Data Structure
<|reference_start|>The Forgiving Tree: A Self-Healing Distributed Data Structure: We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges. We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than $O(\log \Delta)$ times its original diameter, where $\Delta$ is the maximum degree of the network initially. We note that for many peer-to-peer systems, $\Delta$ is polylogarithmic, so the diameter increase would be a O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.<|reference_end|>
arxiv
@article{hayes2008the, title={The Forgiving Tree: A Self-Healing Distributed Data Structure}, author={Tom Hayes and Navin Rustagi and Jared Saia and Amitabh Trehan}, journal={PODC '08: Proceedings of the twenty-seventh ACM symposium on Principles of distributed computing. 2008, pages 203--212}, year={2008}, archivePrefix={arXiv}, eprint={0802.3267}, primaryClass={cs.DC cs.NI} }
hayes2008the
arxiv-2809
0802.3283
An integrated model of traffic, geography and economy in the Internet
<|reference_start|>An integrated model of traffic, geography and economy in the Internet: Modeling Internet growth is important both for understanding the current network and to predict and improve its future. To date, Internet models have typically attempted to explain a subset of the following characteristics: network structure, traffic flow, geography, and economy. In this paper we present a discrete, agent-based model, that integrates all of them. We show that the model generates networks with topologies, dynamics, and (more speculatively) spatial distributions that are similar to the Internet.<|reference_end|>
arxiv
@article{holme2008an, title={An integrated model of traffic, geography and economy in the Internet}, author={Petter Holme and Josh Karlin and Stephanie Forrest}, journal={ACM SIGCOMM Computer Communication Review 38, 7-15 (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3283}, primaryClass={cs.NI} }
holme2008an
arxiv-2810
0802.3284
Tur\'an Graphs, Stability Number, and Fibonacci Index
<|reference_start|>Tur\'an Graphs, Stability Number, and Fibonacci Index: The Fibonacci index of a graph is the number of its stable sets. This parameter is widely studied and has applications in chemical graph theory. In this paper, we establish tight upper bounds for the Fibonacci index in terms of the stability number and the order of general graphs and connected graphs. Tur\'an graphs frequently appear in extremal graph theory. We show that Tur\'an graphs and a connected variant of them are also extremal for these particular problems.<|reference_end|>
arxiv
@article{bruyere2008turan, title={Tur\'an Graphs, Stability Number, and Fibonacci Index}, author={V\'eronique Bruy\`ere and Hadrien M\'elot}, journal={arXiv preprint arXiv:0802.3284}, year={2008}, doi={10.1007/978-3-540-85097-7_12}, archivePrefix={arXiv}, eprint={0802.3284}, primaryClass={cs.DM} }
bruyere2008turan
arxiv-2811
0802.3285
Some Aspects of Testing Process for Transport Streams in Digital Video Broadcasting
<|reference_start|>Some Aspects of Testing Process for Transport Streams in Digital Video Broadcasting: This paper presents some aspects related to DVB (Digital Video Broadcasting) investigation. The basic aspects of DVB are presented, with an emphasis on the DVB-T version of the standard. The main purpose of this research is to analyze the way the transmission of transport streams is realized in the case of Terrestrial Digital Video Broadcasting (DVB-T). To accomplish this, first, the Digital Video Broadcasting standard is presented, and then the main aspects of DVB testing and analysis of the transport streams are investigated. The paper also presents the results obtained using two programs designed for DVB analysis: Mosalina and TSA.<|reference_end|>
arxiv
@article{arsinte2008some, title={Some Aspects of Testing Process for Transport Streams in Digital Video Broadcasting}, author={Radu Arsinte and Ciprian Ilioaei}, journal={Acta Technica Napocensis, Electronics and Telecommunications, nr.1/2004 pp.59-74}, year={2008}, archivePrefix={arXiv}, eprint={0802.3285}, primaryClass={cs.CV cs.MM} }
arsinte2008some
arxiv-2812
0802.3288
Implementing a Test Strategy for an Advanced Video Acquisition and Processing Architecture
<|reference_start|>Implementing a Test Strategy for an Advanced Video Acquisition and Processing Architecture: This paper presents some aspects related to the test process of an advanced video system used in remote IP surveillance. The system is based on a Pentium-compatible architecture using the industrial standard PC104+. First, the overall architecture of the system is presented, involving both hardware and software aspects. The acquisition board, which is developed in a special, nonstandard architecture, is also briefly presented. The main purpose of this research was to set a coherent set of procedures in order to test all the aspects of the video acquisition board. To accomplish this, it was necessary to set up a procedure in two steps: a stand-alone video board test (functional test) and an in-system test procedure verifying compatibility with both operating systems, Linux and Windows. The paper also presents the results obtained using this procedure.<|reference_end|>
arxiv
@article{arsinte2008implementing, title={Implementing a Test Strategy for an Advanced Video Acquisition and Processing Architecture}, author={Radu Arsinte}, journal={Acta Technica Napocensis, Electronics and Telecommunications, nr.2/2005 pp.15-20}, year={2008}, archivePrefix={arXiv}, eprint={0802.3288}, primaryClass={cs.CV cs.MM} }
arsinte2008implementing
arxiv-2813
0802.3293
Use of Rapid Probabilistic Argumentation for Ranking on Large Complex Networks
<|reference_start|>Use of Rapid Probabilistic Argumentation for Ranking on Large Complex Networks: We introduce a family of novel ranking algorithms called ERank which run in linear/near linear time and build on explicitly modeling a network as uncertain evidence. The model uses Probabilistic Argumentation Systems (PAS), which are a combination of probability theory and propositional logic, and also a special case of Dempster-Shafer Theory of Evidence. ERank rapidly generates approximate results for the NP-complete problem involved, enabling the use of the technique in large networks. We use a previously introduced PAS model for citation networks, generalizing it for all networks. We propose a statistical test to be used for comparing the performances of different ranking algorithms based on a clustering validity test. Our experimentation using this test on a real-world network shows ERank to have the best performance in comparison to well-known algorithms including PageRank, closeness, and betweenness.<|reference_end|>
arxiv
@article{cetin2008use, title={Use of Rapid Probabilistic Argumentation for Ranking on Large Complex Networks}, author={Burak Cetin and Haluk Bingol}, journal={arXiv preprint arXiv:0802.3293}, year={2008}, archivePrefix={arXiv}, eprint={0802.3293}, primaryClass={cs.AI cs.IR} }
cetin2008use
arxiv-2814
0802.3300
Projective Expected Utility
<|reference_start|>Projective Expected Utility: Motivated by several classic decision-theoretic paradoxes, and by analogies with the paradoxes which in physics motivated the development of quantum mechanics, we introduce a projective generalization of expected utility along the lines of the quantum-mechanical generalization of probability theory. The resulting decision theory accommodates the dominant paradoxes, while retaining significant simplicity and tractability. In particular, every finite game within this larger class of preferences still has an equilibrium.<|reference_end|>
arxiv
@article{lamura2008projective, title={Projective Expected Utility}, author={Pierfrancesco La Mura}, journal={J. of Math. Psychology, 53:5 (2009)}, year={2008}, doi={10.1016/j.jmp.2009.02.001}, archivePrefix={arXiv}, eprint={0802.3300}, primaryClass={quant-ph cs.GT econ.TH} }
lamura2008projective
arxiv-2815
0802.3328
An Algebraic Characterization of Security of Cryptographic Protocols
<|reference_start|>An Algebraic Characterization of Security of Cryptographic Protocols: Several of the basic cryptographic constructs have associated algebraic structures. Formal models proposed by Dolev and Yao to study the (unconditional) security of public key protocols form a group. The security of some types of protocols can be neatly formulated in this algebraic setting. We investigate classes of two-party protocols. We then consider extension of the formal algebraic framework to private-key protocols. We also discuss concrete realization of the formal models. In this case, we propose a definition in terms of pseudo-free groups.<|reference_end|>
arxiv
@article{patra2008an, title={An Algebraic Characterization of Security of Cryptographic Protocols}, author={Manas K Patra and Yan Zhang}, journal={arXiv preprint arXiv:0802.3328}, year={2008}, archivePrefix={arXiv}, eprint={0802.3328}, primaryClass={cs.CR} }
patra2008an
arxiv-2816
0802.3355
PVM-Distributed Implementation of the Radiance Code
<|reference_start|>PVM-Distributed Implementation of the Radiance Code: The Parallel Virtual Machine (PVM) tool has been used for a distributed implementation of Greg Ward's Radiance code. In order to generate exactly the same primary rays with both the sequential and the parallel codes, the quincunx sampling technique used in Radiance for the reduction of the number of primary rays by interpolation, must be left untouched in the parallel implementation. The octree of local ambient values used in Radiance for the indirect illumination has been shared among all the processors. Both static and dynamic image partitioning techniques which replicate the octree of the complete scene in all the processors and have load-balancing, have been developed for one frame rendering. Speedups larger than 7.5 have been achieved in a network of 8 workstations. For animation sequences, a new dynamic partitioning distribution technique with superlinear speedups has also been developed.<|reference_end|>
arxiv
@article{villatoro2008pvm-distributed, title={PVM-Distributed Implementation of the Radiance Code}, author={Francisco R. Villatoro and Antonio J. Nebro and Jos\'e E. Fern\'andez}, journal={arXiv preprint arXiv:0802.3355}, year={2008}, archivePrefix={arXiv}, eprint={0802.3355}, primaryClass={cs.DC cs.GR} }
villatoro2008pvm-distributed
arxiv-2817
0802.3401
On the Structure of the Capacity Region of Asynchronous Memoryless Multiple-Access Channels
<|reference_start|>On the Structure of the Capacity Region of Asynchronous Memoryless Multiple-Access Channels: The asynchronous capacity region of memoryless multiple-access channels is the union of certain polytopes. It is well-known that vertices of such polytopes may be approached via a technique called successive decoding. It is also known that an extension of successive decoding applies to the dominant face of such polytopes. The extension consists of forming groups of users in such a way that users within a group are decoded jointly whereas groups are decoded successively. This paper goes one step further. It is shown that successive decoding extends to every face of the above mentioned polytopes. The group composition as well as the decoding order for all rates on a face of interest are obtained from a label assigned to that face. From the label one can extract a number of structural properties, such as the dimension of the corresponding face and whether or not two faces intersect. Expressions for the number of faces of any given dimension are also derived from the labels.<|reference_end|>
arxiv
@article{marina2008on, title={On the Structure of the Capacity Region of Asynchronous Memoryless Multiple-Access Channels}, author={Ninoslav Marina, Bixio Rimoldi}, journal={arXiv preprint arXiv:0802.3401}, year={2008}, doi={10.1109/TIT.2012.2191469}, archivePrefix={arXiv}, eprint={0802.3401}, primaryClass={cs.IT math.IT} }
marina2008on
arxiv-2818
0802.3414
A Universal In-Place Reconfiguration Algorithm for Sliding Cube-Shaped Robots in a Quadratic Number of Moves
<|reference_start|>A Universal In-Place Reconfiguration Algorithm for Sliding Cube-Shaped Robots in a Quadratic Number of Moves: In the modular robot reconfiguration problem, we are given $n$ cube-shaped modules (or robots) as well as two configurations, i.e., placements of the $n$ modules so that their union is face-connected. The goal is to find a sequence of moves that reconfigures the modules from one configuration to the other using "sliding moves," in which a module slides over the face or edge of a neighboring module, maintaining connectivity of the configuration at all times. For many years it has been known that certain module configurations in this model require at least $\Omega(n^2)$ moves to reconfigure between them. In this paper, we introduce the first universal reconfiguration algorithm -- i.e., we show that any $n$-module configuration can reconfigure itself into any specified $n$-module configuration using just sliding moves. Our algorithm achieves reconfiguration in $O(n^2)$ moves, making it asymptotically tight. We also present a variation that reconfigures in place: it ensures that, throughout the reconfiguration process, all modules except for one are contained in the union of the bounding boxes of the start and end configurations.<|reference_end|>
arxiv
@article{abel2008a, title={A Universal In-Place Reconfiguration Algorithm for Sliding Cube-Shaped Robots in a Quadratic Number of Moves}, author={Zachary Abel and Hugo A. Akitaya and Scott Duke Kominers and Matias Korman and Frederick Stock}, journal={arXiv preprint arXiv:0802.3414}, year={2008}, archivePrefix={arXiv}, eprint={0802.3414}, primaryClass={cs.CG cs.MA cs.RO} }
abel2008a
arxiv-2819
0802.3419
Randomized Frameproof Codes: Fingerprinting Plus Validation Minus Tracing
<|reference_start|>Randomized Frameproof Codes: Fingerprinting Plus Validation Minus Tracing: We propose randomized frameproof codes for content protection, which arise by studying a variation of the Boneh-Shaw fingerprinting problem. In the modified system, whenever a user tries to access his fingerprinted copy, the fingerprint is submitted to a validation algorithm to verify that it is indeed permissible before the content can be executed. We show an improvement in the achievable rates compared to deterministic frameproof codes and traditional fingerprinting codes. For coalitions of an arbitrary fixed size, we construct randomized frameproof codes which have an $O(n^2)$ complexity validation algorithm and probability of error $\exp(-\Omega(n)),$ where $n$ denotes the length of the fingerprints. Finally, we present a connection between linear frameproof codes and minimal vectors for size-2 coalitions.<|reference_end|>
arxiv
@article{anthapadmanabhan2008randomized, title={Randomized Frameproof Codes: Fingerprinting Plus Validation Minus Tracing}, author={N. Prasanth Anthapadmanabhan and Alexander Barg}, journal={arXiv preprint arXiv:0802.3419}, year={2008}, archivePrefix={arXiv}, eprint={0802.3419}, primaryClass={cs.IT cs.CR math.IT} }
anthapadmanabhan2008randomized
arxiv-2820
0802.3429
Quasi-Large Sparse-Sequence CDMA: Approach to Single-User Bound by Linearly-Complex LAS Detectors
<|reference_start|>Quasi-Large Sparse-Sequence CDMA: Approach to Single-User Bound by Linearly-Complex LAS Detectors: We have proposed a quasi-large random-sequence (QLRS) CDMA where K users access a point through a common channel with spectral spreading factor N. Each bit is extended by a temporal spreading factor B and hopped on a BN-chip random sequence that is spread in time and frequency. Each user multiplexes and transmits B extended bits and the total channel load is alpha = K/N bits/s/Hz. The linearly-complex LAS detectors detect the transmitted bits. We have obtained that as B tends to infinity, if alpha < 1/2 - 1/(4ln2), each transmitted bit achieves the single-bit bound in BER in high SNR regime as if there was no interference bit. In simulation, when bit number BK >= 500, each bit can approach the single-bit bound for alpha as high as 1 bit/s/Hz. In this paper, we further propose the quasi-large sparse-sequence (QLSS) CDMA by replacing the dense sequence in QLRS-CDMA with sparse sequence. Simulation results show that when the nonzero chips are as few as 16, the BER is already near that of QLRS-CDMA while the complexity is significantly reduced due to sequence sparsity.<|reference_end|>
arxiv
@article{sun2008quasi-large, title={Quasi-Large Sparse-Sequence CDMA: Approach to Single-User Bound by Linearly-Complex LAS Detectors}, author={Yi Sun}, journal={arXiv preprint arXiv:0802.3429}, year={2008}, archivePrefix={arXiv}, eprint={0802.3429}, primaryClass={cs.IT math.IT} }
sun2008quasi-large
arxiv-2821
0802.3430
A Class of Nonbinary Codes and Their Weight Distribution
<|reference_start|>A Class of Nonbinary Codes and Their Weight Distribution: In this paper, for an even integer $n\geq 4$ and any positive integer $k$ with ${\rm gcd}(n/2,k)={\rm gcd}(n/2-k,2k)=d$ being odd, a class of $p$-ary codes $\mathcal{C}^k$ is defined and their weight distribution is completely determined, where $p$ is an odd prime. As an application, a class of nonbinary sequence families is constructed from these codes, and the correlation distribution is also determined.<|reference_end|>
arxiv
@article{zeng2008a, title={A Class of Nonbinary Codes and Their Weight Distribution}, author={Xiangyong Zeng and Nian Li and Lei Hu}, journal={arXiv preprint arXiv:0802.3430}, year={2008}, archivePrefix={arXiv}, eprint={0802.3430}, primaryClass={cs.IT math.IT} }
zeng2008a
arxiv-2822
0802.3437
On Cusick-Cheon's Conjecture About Balanced Boolean Functions in the Cosets of the Binary Reed-Muller Code
<|reference_start|>On Cusick-Cheon's Conjecture About Balanced Boolean Functions in the Cosets of the Binary Reed-Muller Code: An amplification of Cusick-Cheon's conjecture on balanced Boolean functions in the cosets of the binary Reed-Muller code RM(k,m) of order k and length 2^m is proved, in the cases where k = 1 or k >= (m-1)/2.<|reference_end|>
arxiv
@article{borissov2008on, title={On Cusick-Cheon's Conjecture About Balanced Boolean Functions in the Cosets of the Binary Reed-Muller Code}, author={Yuri L. Borissov}, journal={arXiv preprint arXiv:0802.3437}, year={2008}, archivePrefix={arXiv}, eprint={0802.3437}, primaryClass={cs.IT math.IT} }
borissov2008on
arxiv-2823
0802.3441
Efficient implementation of GALS systems over commercial synchronous FPGAs: a new approach
<|reference_start|>Efficient implementation of GALS systems over commercial synchronous FPGAs: a new approach: The new vision presented aims to overcome the logic overhead issues that previous works exhibit when applying GALS techniques to programmable logic devices. The proposed approach relies on a 2-phase, bundled-data, parity-based protocol for data transfer and clock generation tasks. The ability of the introduced methodology to perform smart real-time delay selection allows the implementation of a variety of new methodologies for electromagnetic interference mitigation and adaptation to changes in the device environment.<|reference_end|>
arxiv
@article{garcia-lasheras2008efficient, title={Efficient implementation of GALS systems over commercial synchronous FPGAs: a new approach}, author={Javier D. Garcia-Lasheras}, journal={"Implementacion eficiente de sistemas GALS sobre FPGAs", Jornadas de Computacion Reconfigurable y Aplicaciones (JCRA'07), Zaragoza (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3441}, primaryClass={cs.AR} }
garcia-lasheras2008efficient
arxiv-2824
0802.3444
Automatic Verification of Correspondences for Security Protocols
<|reference_start|>Automatic Verification of Correspondences for Security Protocols: We present a new technique for verifying correspondences in security protocols. In particular, correspondences can be used to formalize authentication. Our technique is fully automatic, it can handle an unbounded number of sessions of the protocol, and it is efficient in practice. It significantly extends a previous technique for the verification of secrecy. The protocol is represented in an extension of the pi calculus with fairly arbitrary cryptographic primitives. This protocol representation includes the specification of the correspondence to be verified, but no other annotation. This representation is then translated into an abstract representation by Horn clauses, which is used to prove the desired correspondence. Our technique has been proved correct and implemented. We have tested it on various protocols from the literature. The experimental results show that these protocols can be verified by our technique in less than 1 s.<|reference_end|>
arxiv
@article{blanchet2008automatic, title={Automatic Verification of Correspondences for Security Protocols}, author={Bruno Blanchet}, journal={arXiv preprint arXiv:0802.3444}, year={2008}, archivePrefix={arXiv}, eprint={0802.3444}, primaryClass={cs.CR cs.LO} }
blanchet2008automatic
arxiv-2825
0802.3448
Sketch-Based Estimation of Subpopulation-Weight
<|reference_start|>Sketch-Based Estimation of Subpopulation-Weight: Summaries of massive data sets support approximate query processing over the original data. A basic aggregate over a set of records is the weight of subpopulations specified as a predicate over records' attributes. Bottom-k sketches are a powerful summarization format of weighted items that includes priority sampling and the classic weighted sampling without replacement. They can be computed efficiently for many representations of the data including distributed databases and data streams. We derive novel unbiased estimators and efficient confidence bounds for subpopulation weight. Our estimators and bounds are tailored by distinguishing between applications (such as data streams) where the total weight of the sketched set can be computed by the summarization algorithm without a significant use of additional resources, and applications (such as sketches of network neighborhoods) where this is not the case. Our rigorous derivations are based on clever applications of the Horvitz-Thompson estimator, and are complemented by efficient computational methods. We demonstrate their benefit on a wide range of Pareto distributions.<|reference_end|>
arxiv
@article{cohen2008sketch-based, title={Sketch-Based Estimation of Subpopulation-Weight}, author={Edith Cohen and Haim Kaplan}, journal={arXiv preprint arXiv:0802.3448}, year={2008}, archivePrefix={arXiv}, eprint={0802.3448}, primaryClass={cs.DB cs.DS cs.NI cs.PF} }
cohen2008sketch-based
arxiv-2826
0802.3457
Spreadsheet Errors: What We Know What We Think We Can Do
<|reference_start|>Spreadsheet Errors: What We Know What We Think We Can Do: Fifteen years of research studies have concluded unanimously that spreadsheet errors are both common and non-trivial. Now we must seek ways to reduce spreadsheet errors. Several approaches have been suggested, some of which are promising and others, while appealing because they are easy to do, are not likely to be effective. To date, only one technique, cell-by-cell code inspection, has been demonstrated to be effective. We need to conduct further research to determine the degree to which other techniques can reduce spreadsheet errors.<|reference_end|>
arxiv
@article{panko2008spreadsheet, title={Spreadsheet Errors: What We Know. What We Think We Can Do}, author={Raymond R. Panko}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 7-17 ISBN:1 86166 158 4}, year={2008}, archivePrefix={arXiv}, eprint={0802.3457}, primaryClass={cs.SE cs.HC} }
panko2008spreadsheet
arxiv-2827
0802.3473
On Cobweb Posets and Discrete F-Boxes Tilings
<|reference_start|>On Cobweb Posets and Discrete F-Boxes Tilings: F-boxes defined in [6] as hyper-boxes in N^{\infty} discrete space were applied here for the geometric description of the cobweb posets' Hasse diagram tilings. The F-boxes' edge sizes are taken to be values of terms of natural numbers' valued sequence F. The problem of partitions of hyper-boxes represented by graphs into blocks of special form is considered and these are to be called F-tilings. The proof of such tilings' existence for a certain sub-family of admissible sequences F is delivered. The family of F-tilings which we consider here includes among others F = Natural numbers, Fibonacci numbers, Gaussian integers with their corresponding F-nomial (Binomial, Fibonomial, Gaussian) coefficients. Extension of this tiling problem onto the general case of multi F-nomial coefficients is proposed here. Reformulation of the present cobweb tiling problem into a clique problem of a graph specially invented for that purpose is proposed here too. To this end we illustrate the area of our reconnaissance by means of the Venn-type map of various cobweb sequences families.<|reference_end|>
arxiv
@article{dziemianczuk2008on, title={On Cobweb Posets and Discrete F-Boxes Tilings}, author={M. Dziemianczuk}, journal={arXiv preprint arXiv:0802.3473}, year={2008}, archivePrefix={arXiv}, eprint={0802.3473}, primaryClass={math.CO cs.DM} }
dziemianczuk2008on
arxiv-2828
0802.3475
Spreadsheet Development Methodologies using Resolver: Moving spreadsheets into the 21st Century
<|reference_start|>Spreadsheet Development Methodologies using Resolver: Moving spreadsheets into the 21st Century: We intend to demonstrate the innate problems with existing spreadsheet products and to show how to tackle these issues using a new type of spreadsheet program called Resolver. It addresses the issues head-on and thereby moves the 1980's "VisiCalc paradigm" on to match the advances in computer languages and user requirements. Continuous display of the spreadsheet grid and the equivalent computer program, together with the ability to interact and add code through either interface, provides a number of new methodologies for spreadsheet development.<|reference_end|>
arxiv
@article{kemmis2008spreadsheet, title={Spreadsheet Development Methodologies using Resolver: Moving spreadsheets into the 21st Century}, author={Patrick Kemmis and Giles Thomas}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 93-104 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3475}, primaryClass={cs.SE cs.HC} }
kemmis2008spreadsheet
arxiv-2829
0802.3476
Fun Boy Three Were Wrong: it is what you do, not the way that you do it
<|reference_start|>Fun Boy Three Were Wrong: it is what you do, not the way that you do it: I revisit some classic publications on modularity, to show what problems its pioneers wanted to solve. These problems occur with spreadsheets too: to recognise them may help us avoid them.<|reference_end|>
arxiv
@article{paine2008fun, title={Fun Boy Three Were Wrong: it is what you do, not the way that you do it}, author={Jocelyn Paine}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 105-116 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3476}, primaryClass={cs.HC cs.SE} }
paine2008fun
arxiv-2830
0802.3477
Concerning the Feasibility of Example-driven Modelling Techniques
<|reference_start|>Concerning the Feasibility of Example-driven Modelling Techniques: We report on a series of experiments concerning the feasibility of example driven modelling. The main aim was to establish experimentally within an academic environment: the relationship between error and task complexity using a) Traditional spreadsheet modelling; b) example driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using: the Chi squared test, Fisher's exact test for significance, Cochran's Q test and McNemar's test on difficulty.<|reference_end|>
arxiv
@article{thorne2008concerning, title={Concerning the Feasibility of Example-driven Modelling Techniques}, author={Simon R. Thorne and David Ball and Z. Lawson}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 117-130 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3477}, primaryClass={cs.HC} }
thorne2008concerning
arxiv-2831
0802.3478
It Ain't What You View, But The Way That You View It: documenting spreadsheets with Excelsior, semantic wikis, and literate programming
<|reference_start|>It Ain't What You View, But The Way That You View It: documenting spreadsheets with Excelsior, semantic wikis, and literate programming: I describe preliminary experiments in documenting Excelsior versions of spreadsheets using semantic wikis and literate programming. The objective is to create well-structured and comprehensive documentation, easy to use by those unfamiliar with the spreadsheets documented. I discuss why so much documentation is hard to use, and briefly explain semantic wikis and literate programming; although parts of the paper are Excelsior-specific, these sections may be of more general interest.<|reference_end|>
arxiv
@article{paine2008it, title={It Ain't What You View, But The Way That You View It: documenting spreadsheets with Excelsior, semantic wikis, and literate programming}, author={Jocelyn Paine}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 131-142 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3478}, primaryClass={cs.HC cs.SE} }
paine2008it
arxiv-2832
0802.3479
An Empirical Study of End-User Behaviour in Spreadsheet Error Detection & Correction
<|reference_start|>An Empirical Study of End-User Behaviour in Spreadsheet Error Detection & Correction: Very little is known about the process by which end-user developers detect and correct spreadsheet errors. Any research pertaining to the development of spreadsheet testing methodologies or auditing tools would benefit from information on how end-users perform the debugging process in practice. Thirteen industry-based professionals and thirty-four accounting & finance students took part in a current ongoing experiment designed to record and analyse end-user behaviour in spreadsheet error detection and correction. Professionals significantly outperformed students in correcting certain error types. Time-based cell activity analysis showed that a strong correlation exists between the percentage of cells inspected and the number of errors corrected. The cell activity data was gathered through a purpose written VBA Excel plug-in that records the time and detail of all cell selection and cell change actions of individuals.<|reference_end|>
arxiv
@article{bishop2008an, title={An Empirical Study of End-User Behaviour in Spreadsheet Error Detection & Correction}, author={Brian Bishop and Kevin McDaid}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 165-176 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3479}, primaryClass={cs.HC cs.CY} }
bishop2008an
arxiv-2833
0802.3480
Why Task-Based Training is Superior to Traditional Training Methods
<|reference_start|>Why Task-Based Training is Superior to Traditional Training Methods: The risks of spreadsheet use do not just come from the misuse of formulae. As such, training needs to go beyond this technical aspect of spreadsheet use and look at the spreadsheet in its full business context. While standard training is by and large unable to do this, task-based training is perfectly suited to a contextual approach to training.<|reference_end|>
arxiv
@article{mcguire2008why, title={Why Task-Based Training is Superior to Traditional Training Methods}, author={Kath McGuire}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 191-196 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3480}, primaryClass={cs.HC} }
mcguire2008why
arxiv-2834
0802.3481
Establishing A Minimum Generic Skill Set For Risk Management Teaching In A Spreadsheet Training Course
<|reference_start|>Establishing A Minimum Generic Skill Set For Risk Management Teaching In A Spreadsheet Training Course: Past research shows that spreadsheet models are prone to such a high frequency of errors and data security implications that the risk management of spreadsheet development and spreadsheet use is of great importance to both industry and academia. The underlying rationale for this paper is that spreadsheet training courses should specifically address risk management in the development process both from a generic and a domain-specific viewpoint. This research specifically focuses on one of these, namely the generic issues of risk management that should be present in a training course that attempts to meet good practice within industry. A pilot questionnaire was constructed showing a possible minimum set of risk management issues and sent to academics and industry practitioners for feedback. The findings from this pilot survey will be used to refine the questionnaire for sending to a larger body of possible respondents. It is expected that these findings will form the basis of a risk management teaching approach to be trialled in a number of selected ongoing spreadsheet training courses.<|reference_end|>
arxiv
@article{chadwick2008establishing, title={Establishing A Minimum Generic Skill Set For Risk Management Teaching In A Spreadsheet Training Course}, author={David Chadwick}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 197-208 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3481}, primaryClass={cs.HC} }
chadwick2008establishing
arxiv-2835
0802.3483
Voice-controlled Debugging of Spreadsheets
<|reference_start|>Voice-controlled Debugging of Spreadsheets: Developments in Mobile Computing are putting pressure on the software industry to research new modes of interaction that do not rely on the traditional keyboard and mouse combination. Computer users suffering from Repetitive Strain Injury also seek an alternative to keyboard and mouse devices to reduce suffering in wrist and finger joints. Voice-control is an alternative approach to spreadsheet development and debugging that has been researched and used successfully in other domains. While voice-control technology for spreadsheets is available its effectiveness has not been investigated. This study is the first to compare the performance of a set of expert spreadsheet developers that debugged a spreadsheet using voice-control technology and another set that debugged the same spreadsheet using keyboard and mouse. The study showed that voice, despite its advantages, proved to be slower and less accurate. However, it also revealed ways in which the technology might be improved to redress this imbalance.<|reference_end|>
arxiv
@article{flood2008voice-controlled, title={Voice-controlled Debugging of Spreadsheets}, author={Derek Flood and Kevin Mc Daid}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 155-164 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0802.3483}, primaryClass={cs.HC} }
flood2008voice-controlled
arxiv-2836
0802.3490
On capacity of wireless ad hoc networks with MIMO MMSE receivers
<|reference_start|>On capacity of wireless ad hoc networks with MIMO MMSE receivers: Widely adopted at home, business places, and hot spots, wireless ad-hoc networks are expected to provide broadband services parallel to their wired counterparts in the near future. To address this need, MIMO techniques, which are capable of offering several-fold increase in capacity, hold significant promise. Most previous work on capacity analysis of ad-hoc networks is based on an implicit assumption that each node has only one antenna. Core to the analysis therein is the characterization of a geometric area, referred to as the exclusion region, which quantizes the amount of spatial resource occupied by a link. When multiple antennas are deployed at each node, however, multiple links can transmit in the vicinity of each other simultaneously, as interference can now be suppressed by spatial signal processing. As such, a link no longer exclusively occupies a geometric area, making the concept of "exclusion region" not applicable any more. In this paper, we investigate link-layer throughput capacity of MIMO ad-hoc networks. In contrast to previous work, the amount of spatial resource occupied by each link is characterized by the actual interference it imposes on other links. To calculate the link-layer capacity, we first derive the probability distribution of post-detection SINR at a receiver. The result is then used to calculate the number of active links and the corresponding data rates that can be sustained within an area. Our analysis will serve as a guideline for the design of medium access protocols for MIMO ad-hoc networks. To the best of our knowledge, this paper is the first attempt to characterize the capacity of MIMO ad-hoc networks by considering the actual PHY-layer signal and interference model.<|reference_end|>
arxiv
@article{ma2008on, title={On capacity of wireless ad hoc networks with MIMO MMSE receivers}, author={Jing Ma and Ying Jun Zhang}, journal={arXiv preprint arXiv:0802.3490}, year={2008}, archivePrefix={arXiv}, eprint={0802.3490}, primaryClass={cs.NI cs.IT math.IT} }
ma2008on
arxiv-2837
0802.3492
The RDF Virtual Machine
<|reference_start|>The RDF Virtual Machine: The Resource Description Framework (RDF) is a semantic network data model that is used to create machine-understandable descriptions of the world and is the basis of the Semantic Web. This article discusses the application of RDF to the representation of computer software and virtual computing machines. The Semantic Web is posited as not only a web of data, but also as a web of programs and processes.<|reference_end|>
arxiv
@article{rodriguez2008the, title={The RDF Virtual Machine}, author={Marko A. Rodriguez}, journal={Knowledge-Based Systems, 24(6), 890-903, August 2011}, year={2008}, doi={10.1016/j.knosys.2011.04.004}, number={LA-UR-08-03925}, archivePrefix={arXiv}, eprint={0802.3492}, primaryClass={cs.PL} }
rodriguez2008the
arxiv-2838
0802.3495
Gaussian Interference Networks: Sum Capacity in the Low Interference Regime and New Outer Bounds on the Capacity Region
<|reference_start|>Gaussian Interference Networks: Sum Capacity in the Low Interference Regime and New Outer Bounds on the Capacity Region: Establishing the capacity region of a Gaussian interference network is an open problem in information theory. Recent progress on this problem has led to the characterization of the capacity region of a general two user Gaussian interference channel within one bit. In this paper, we develop new, improved outer bounds on the capacity region. Using these bounds, we show that treating interference as noise achieves the sum capacity of the two user Gaussian interference channel in a low interference regime, where the interference parameters are below certain thresholds. We then generalize our techniques and results to Gaussian interference networks with more than two users. In particular, we demonstrate that the total interference threshold, below which treating interference as noise achieves the sum capacity, increases with the number of users.<|reference_end|>
arxiv
@article{annapureddy2008gaussian, title={Gaussian Interference Networks: Sum Capacity in the Low Interference Regime and New Outer Bounds on the Capacity Region}, author={V. Sreekanth Annapureddy and Venugopal V. Veeravalli}, journal={arXiv preprint arXiv:0802.3495}, year={2008}, archivePrefix={arXiv}, eprint={0802.3495}, primaryClass={cs.IT math.IT} }
annapureddy2008gaussian
arxiv-2839
0802.3513
The Complexity of Node Blocking for Dags
<|reference_start|>The Complexity of Node Blocking for Dags: We consider the following modification of the annihilation game, called node blocking. Given a directed graph, each vertex can be occupied by at most one token. There are two types of tokens, and each player can move tokens of his own type. The players alternate their moves and the current player $i$ selects one token of type $i$ and moves the token along a directed edge to an unoccupied vertex. If a player cannot make a move then he loses. We consider the problem of determining the complexity of the game: given an arbitrary configuration of tokens in a directed acyclic graph, does the current player have a winning strategy? We prove that the problem is PSPACE-complete.<|reference_end|>
arxiv
@article{dereniowski2008the, title={The Complexity of Node Blocking for Dags}, author={Dariusz Dereniowski}, journal={Journal of Combinatorial Theory, Series A 118 (2011) 248-256}, year={2008}, doi={10.1016/j.jcta.2010.03.011}, archivePrefix={arXiv}, eprint={0802.3513}, primaryClass={cs.GT cs.DM} }
dereniowski2008the
arxiv-2840
0802.3522
Time Warp Edit Distance
<|reference_start|>Time Warp Edit Distance: This technical report details a family of time warp distances on the set of discrete time series. This family is constructed as an editing distance whose elementary operations apply on linear segments. A specific parameter allows controlling the stiffness of the elastic matching. It is well suited for the processing of event data for which each data sample is associated with a timestamp, not necessarily obtained according to a constant sampling rate. Some properties verified by these distances are proposed and proved in this report.<|reference_end|>
arxiv
@article{marteau2008time, title={Time Warp Edit Distance}, author={Pierre-Fran\c{c}ois Marteau (VALORIA)}, journal={arXiv preprint arXiv:0802.3522}, year={2008}, number={VALORIA.2008.1V5}, archivePrefix={arXiv}, eprint={0802.3522}, primaryClass={cs.IR} }
marteau2008time
arxiv-2841
0802.3528
Wavelet and Curvelet Moments for Image Classification: Application to Aggregate Mixture Grading
<|reference_start|>Wavelet and Curvelet Moments for Image Classification: Application to Aggregate Mixture Grading: We show the potential for classifying images of mixtures of aggregate, based themselves on varying, albeit well-defined, sizes and shapes, in order to provide a far more effective approach compared to the classification of individual sizes and shapes. While a dominant (additive, stationary) Gaussian noise component in image data will ensure that wavelet coefficients are of Gaussian distribution, long tailed distributions (symptomatic, for example, of extreme values) may well hold in practice for wavelet coefficients. Energy (2nd order moment) has often been used for image characterization for image content-based retrieval, and higher order moments may be important also, not least for capturing long tailed distributional behavior. In this work, we assess 2nd, 3rd and 4th order moments of multiresolution transform -- wavelet and curvelet transform -- coefficients as features. As analysis methodology, taking account of image types, multiresolution transforms, and moments of coefficients in the scales or bands, we use correspondence analysis as well as k-nearest neighbors supervised classification.<|reference_end|>
arxiv
@article{murtagh2008wavelet, title={Wavelet and Curvelet Moments for Image Classification: Application to Aggregate Mixture Grading}, author={Fionn Murtagh and Jean-Luc Starck}, journal={Pattern Recognition Letters, 29, 1557-1564, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0802.3528}, primaryClass={cs.CV} }
murtagh2008wavelet
arxiv-2842
0802.3535
Approximate Capacity of Gaussian Relay Networks
<|reference_start|>Approximate Capacity of Gaussian Relay Networks: We present an achievable rate for general Gaussian relay networks. We show that the achievable rate is within a constant number of bits from the information-theoretic cut-set upper bound on the capacity of these networks. This constant depends on the topology of the network, but not the values of the channel gains. Therefore, we uniformly characterize the capacity of Gaussian relay networks within a constant number of bits, for all channel parameters.<|reference_end|>
arxiv
@article{avestimehr2008approximate, title={Approximate Capacity of Gaussian Relay Networks}, author={Amir Salman Avestimehr and Suhas N. Diggavi and David N. C. Tse}, journal={arXiv preprint arXiv:0802.3535}, year={2008}, doi={10.1109/ISIT.2008.4595031}, archivePrefix={arXiv}, eprint={0802.3535}, primaryClass={cs.IT math.IT} }
avestimehr2008approximate
arxiv-2843
0802.3554
Data Traffic Dynamics and Saturation on a Single Link
<|reference_start|>Data Traffic Dynamics and Saturation on a Single Link: The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows that there are two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this are the packet size and packet flow rate. However, this transition is due to a transcritical bifurcation rather than a phase transition, as in models of vehicle traffic or theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.<|reference_end|>
arxiv
@article{smith2008data, title={Data Traffic Dynamics and Saturation on a Single Link}, author={Reginald D. Smith}, journal={International Journal of Computer, Information, and Systems Science, and Engineering, vol 3, no. 1, 11-16 2009}, year={2008}, archivePrefix={arXiv}, eprint={0802.3554}, primaryClass={cs.NI cs.PF} }
smith2008data
arxiv-2844
0802.3563
Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes
<|reference_start|>Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes: The paper develops DILOC, a \emph{distributive}, \emph{iterative} algorithm that locates M sensors in $\mathbb{R}^m, m\geq 1$, with respect to a minimal number of m+1 anchors with known locations. The sensors exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there centralized knowledge about the sensors' locations. DILOC uses the barycentric coordinates of a sensor with respect to its neighbors that are computed using the Cayley-Menger determinants. These are the determinants of matrices of inter-sensor distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the anchors. We introduce a stochastic approximation version extending DILOC to random environments when the knowledge about the intercommunications among sensors and the inter-sensor distances are noisy, and the communication links among neighbors fail at random times. We show a.s. convergence of the modified DILOC and characterize the error between the final estimates and the true values of the sensors' locations. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.<|reference_end|>
arxiv
@article{khan2008distributed, title={Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes}, author={Usman A. Khan and Soummya Kar and Jos{\'e} M. F. Moura}, journal={U. A. Khan, S. Kar, and J. M. F. Moura, "Distributed sensor localization in random environments using minimal number of anchor nodes," IEEE Transactions on Signal Processing, vol. 57, no. 5, pp. 2000-2016, May 2009}, year={2008}, doi={10.1109/TSP.2009.2014812}, archivePrefix={arXiv}, eprint={0802.3563}, primaryClass={cs.IT math.IT} }
khan2008distributed
arxiv-2845
0802.3569
Delay Analysis for Wireless Local Area Networks with Multipacket Reception under Finite Load
<|reference_start|>Delay Analysis for Wireless Local Area Networks with Multipacket Reception under Finite Load: To date, most analysis of WLANs has been focused on their operation under saturation condition. This work is an attempt to understand the fundamental performance of WLANs under unsaturated condition. In particular, we are interested in the delay performance when collisions of packets are resolved by an exponential backoff mechanism. Using a multiple-vacation queueing model, we derive an explicit expression for packet delay distribution, from which necessary conditions for finite mean delay and delay jitter are established. It is found that under some circumstances, mean delay and delay jitter may approach infinity even when the traffic load is way below the saturation throughput. Saturation throughput is therefore not a sound measure of WLAN capacity when the underlying applications are delay sensitive. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require bounded mean delay and delay jitter, respectively. The analytical model in this paper is general enough to cover both single-packet reception (SPR) and multi-packet reception (MPR) WLANs, as well as carrier-sensing and non-carrier-sensing networks. We show that the SBMD and SBDJ throughputs scale super-linearly with the MPR capability of a network. Together with our earlier work that proves super-linear throughput scaling under saturation condition, our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for both delay-sensitive and delay-tolerant applications.<|reference_end|>
arxiv
@article{zhang2008delay, title={Delay Analysis for Wireless Local Area Networks with Multipacket Reception under Finite Load}, author={Ying Jun Zhang, Soung Chang Liew, and Darui Chen}, journal={arXiv preprint arXiv:0802.3569}, year={2008}, archivePrefix={arXiv}, eprint={0802.3569}, primaryClass={cs.NI} }
zhang2008delay
arxiv-2846
0802.3570
Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle
<|reference_start|>Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle: Analytical methods for finding moments of random Vandermonde matrices with entries on the unit circle are developed. Vandermonde Matrices play an important role in signal processing and wireless applications such as direction of arrival estimation, precoding, and sparse sampling theory, just to name a few. Within this framework, we extend classical freeness results on random matrices with independent, identically distributed (i.i.d.) entries and show that Vandermonde structured matrices can be treated in the same vein with different tools. We focus on various types of matrices, such as Vandermonde matrices with and without uniform phase distributions, as well as generalized Vandermonde matrices. In each case, we provide explicit expressions of the moments of the associated Gram matrix, as well as more advanced models involving the Vandermonde matrix. Comparisons with classical i.i.d. random matrix theory are provided, and deconvolution results are discussed. We review some applications of the results to the fields of signal processing and wireless communications.<|reference_end|>
arxiv
@article{ryan2008asymptotic, title={Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle}, author={{\O}yvind Ryan and M\'erouane Debbah}, journal={arXiv preprint arXiv:0802.3570}, year={2008}, archivePrefix={arXiv}, eprint={0802.3570}, primaryClass={cs.IT math.IT} }
ryan2008asymptotic
arxiv-2847
0802.3572
Random Vandermonde Matrices-Part II: Applications
<|reference_start|>Random Vandermonde Matrices-Part II: Applications: This paper has been withdrawn by the authors, since it has been merged with Part I (ID 0802.3570)<|reference_end|>
arxiv
@article{ryan2008random, title={Random Vandermonde Matrices-Part II: Applications}, author={{\O}yvind Ryan and M\'erouane Debbah}, journal={arXiv preprint arXiv:0802.3572}, year={2008}, archivePrefix={arXiv}, eprint={0802.3572}, primaryClass={cs.IT math.IT} }
ryan2008random
arxiv-2848
0802.3582
Neural Networks and Database Systems
<|reference_start|>Neural Networks and Database Systems: Object-oriented database systems proved very valuable at handling and administrating complex objects. In the following, guidelines for embedding neural networks into such systems are presented. It is our goal to treat networks as normal data in the database system. From the logical point of view, a neural network is a complex data value and can be stored as a normal data object. It is generally accepted that rule-based reasoning will play an important role in future database applications. The knowledge base consists of facts and rules, which are both stored and handled by the underlying database system. Neural networks can be seen as a representation of intensional knowledge of intelligent database systems. So they are part of a rule-based knowledge pool and can be used like conventional rules. The user has a unified view of his knowledge base regardless of the origin of the individual rules.<|reference_end|>
arxiv
@article{schikuta2008neural, title={Neural Networks and Database Systems}, author={Erich Schikuta}, journal={Austrian Computer Society, pp. 133-152, 2007}, year={2008}, archivePrefix={arXiv}, eprint={0802.3582}, primaryClass={cs.DB cs.NE} }
schikuta2008neural
arxiv-2849
0802.3597
Processing Information in Quantum Decision Theory
<|reference_start|>Processing Information in Quantum Decision Theory: A survey is given summarizing the state of the art of describing information processing in Quantum Decision Theory, which has been recently advanced as a novel variant of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intended actions. The theory characterizes entangled decision making, non-commutativity of subsequent decisions, and intention interference. The self-consistent procedure of decision making, in the frame of the quantum decision theory, takes into account both the available objective information as well as subjective contextual effects. This quantum approach avoids any paradox typical of classical decision theory. Conditional maximization of entropy, equivalent to the minimization of an information functional, makes it possible to connect the quantum and classical decision theories, showing that the latter is the limit of the former under vanishing interference terms.<|reference_end|>
arxiv
@article{yukalov2008processing, title={Processing Information in Quantum Decision Theory}, author={V.I. Yukalov and D. Sornette}, journal={Entropy 11 (2009) 1073-1120}, year={2008}, doi={10.3390/e11041073}, archivePrefix={arXiv}, eprint={0802.3597}, primaryClass={physics.soc-ph cs.AI quant-ph} }
yukalov2008processing
arxiv-2850
0802.3611
Power Allocation for Fading Channels with Peak-to-Average Power Constraints
<|reference_start|>Power Allocation for Fading Channels with Peak-to-Average Power Constraints: Power allocation with peak-to-average power ratio constraints is investigated for transmission over Nakagami-m fading channels with arbitrary input distributions. In the case of delay-limited block-fading channels, we find the solution to the minimum outage power allocation scheme with peak-to-average power constraints and arbitrary input distributions, and show that the signal-to-noise ratio exponent for any finite peak-to-average power ratio is the same as that of the peak-power limited problem, resulting in an error floor. In the case of the ergodic fully-interleaved channel, we find the power allocation rule that yields the maximal information rate for an arbitrary input distribution and show that capacities with peak-to-average power ratio constraints, even for small ratios, are very close to capacities without peak-power restrictions.<|reference_end|>
arxiv
@article{nguyen2008power, title={Power Allocation for Fading Channels with Peak-to-Average Power Constraints}, author={Khoa D. Nguyen and Albert Guillen i Fabregas and Lars K. Rasmussen}, journal={arXiv preprint arXiv:0802.3611}, year={2008}, archivePrefix={arXiv}, eprint={0802.3611}, primaryClass={cs.IT math.IT} }
nguyen2008power
arxiv-2851
0802.3617
Towards a formalization of budgets
<|reference_start|>Towards a formalization of budgets: We go into the need for, and the requirements on, a formal theory of budgets. We present a simple algebraic theory of rational budgets, i.e., budgets in which amounts of money are specified by functions on the rational numbers. This theory is based on the tuplix calculus. We go into the importance of using totalized models for the rational numbers. We present a case study on the educational budget of a university department offering master programs.<|reference_end|>
arxiv
@article{bergstra2008towards, title={Towards a formalization of budgets}, author={Jan A. Bergstra and Sanne Nolst Trenit\'e and Mark B. van der Zwaag}, journal={arXiv preprint arXiv:0802.3617}, year={2008}, number={PRG0712}, archivePrefix={arXiv}, eprint={0802.3617}, primaryClass={cs.LO} }
bergstra2008towards
arxiv-2852
0802.3626
Color Graphs: An Efficient Model For Two-Dimensional Cellular Automata Linear Rules
<|reference_start|>Color Graphs: An Efficient Model For Two-Dimensional Cellular Automata Linear Rules: Two-dimensional nine-neighborhood rectangular Cellular Automata rules can be modeled using many different techniques like Rule matrices, State Transition Diagrams, Boolean functions, Algebraic Normal Form, etc. In this paper, a new model is introduced using color graphs to model all the 512 linear rules. The graph-theoretic properties thus studied in this paper simplify the analysis of all linear rules in comparison with other ways of studying them.<|reference_end|>
arxiv
@article{nayak2008color, title={Color Graphs: An Efficient Model For Two-Dimensional Cellular Automata Linear Rules}, author={Birendra Kumar Nayak and Sudhakar Sahoo and Sushant Kumar Rout}, journal={arXiv preprint arXiv:0802.3626}, year={2008}, archivePrefix={arXiv}, eprint={0802.3626}, primaryClass={cs.LO} }
nayak2008color
arxiv-2853
0802.3627
Clusters of solutions and replica symmetry breaking in random k-satisfiability
<|reference_start|>Clusters of solutions and replica symmetry breaking in random k-satisfiability: We study the set of solutions of random k-satisfiability formulae through the cavity method. It is known that, for an interval of the clause-to-variable ratio, this decomposes into an exponential number of pure states (clusters). We substantially refine this picture by: (i) determining the precise location of the clustering transition; (ii) uncovering a second `condensation' phase transition in the structure of the solution set for k greater than or equal to 4. These results both follow from computing the large deviation rate of the internal entropy of pure states. From a technical point of view our main contributions are a simplified version of the cavity formalism for special values of the Parisi replica symmetry breaking parameter m (in particular for m=1 via a correspondence with the tree reconstruction problem) and new large-k expansions.<|reference_end|>
arxiv
@article{montanari2008clusters, title={Clusters of solutions and replica symmetry breaking in random k-satisfiability}, author={Andrea Montanari and Federico Ricci-Tersenghi and Guilhem Semerjian}, journal={J. Stat. Mech. P04004 (2008)}, year={2008}, doi={10.1088/1742-5468/2008/04/P04004}, archivePrefix={arXiv}, eprint={0802.3627}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC} }
montanari2008clusters
arxiv-2854
0802.3628
Dynamic data models: an application of MOP-based persistence in Common Lisp
<|reference_start|>Dynamic data models: an application of MOP-based persistence in Common Lisp: The data model of an application, the nature and format of data stored across executions, is typically a very rigid part of its early specification, even when prototyping, and changing it after code that relies on it has been written can prove quite expensive and error-prone. Code and data in a running Lisp image can be dynamically modified. A MOP-based persistence library can bring this dynamicity to the data model. This makes it possible to extend the easy prototyping style of development to the storage of data and helps avoid interruptions of service. This article presents the conditions to do this portably and transparently.<|reference_end|>
arxiv
@article{thierry2008dynamic, title={Dynamic data models: an application of MOP-based persistence in Common Lisp}, author={Pierre Thierry and Simon E. B. Thierry}, journal={arXiv preprint arXiv:0802.3628}, year={2008}, archivePrefix={arXiv}, eprint={0802.3628}, primaryClass={cs.SE} }
thierry2008dynamic
arxiv-2855
0802.3634
Local Information Based Algorithms for Packet Transport in Complex Networks
<|reference_start|>Local Information Based Algorithms for Packet Transport in Complex Networks: We introduce four algorithms for packet transport in complex networks. These algorithms use deterministic rules which depend, in different ways, on the degree of the node, the number of packets posted down each edge, the mean delivery time of packets sent down each edge to each destination and the time since an edge last transmitted a packet. On scale-free networks all our algorithms are considerably more efficient and can handle a larger load than the random walk algorithm. We consider in detail various attributes of our algorithms, for instance we show that an algorithm that bases its decisions on the mean delivery time jams unless it incorporates information about the degree of the destination node.<|reference_end|>
arxiv
@article{kujawski2008local, title={Local Information Based Algorithms for Packet Transport in Complex Networks}, author={B. Kujawski and G.J. Rodgers and Bosiljka Tadi\'c}, journal={In V.N. Alexandrov et al., editors, ICCS 2006, volume 3993 of Lecture Notes in Computer Science, pages 1024-1031, Berlin, 2006. Springer}, year={2008}, archivePrefix={arXiv}, eprint={0802.3634}, primaryClass={cs.NI} }
kujawski2008local
arxiv-2856
0802.3665
Outward Accessibility in Urban Street Networks: Characterization and Improvements
<|reference_start|>Outward Accessibility in Urban Street Networks: Characterization and Improvements: The dynamics of transportation through towns and cities is strongly affected by the topology of the connections and routes. The current work describes an approach combining complex networks and self-avoiding random walk dynamics in order to quantify, in an objective and accurate manner, along a range of spatial scales, the accessibility of places in towns and cities. The transition probabilities are estimated for several lengths of the walks and used to calculate the outward accessibility of each node. The potential of the methodology is illustrated with respect to the characterization and improvement of the accessibility of the town of Sao Carlos.<|reference_end|>
arxiv
@article{travençolo2008outward, title={Outward Accessibility in Urban Street Networks: Characterization and Improvements}, author={Bruno Augusto Nassif Traven\c{c}olo and Luciano da Fontoura Costa}, journal={arXiv preprint arXiv:0802.3665}, year={2008}, archivePrefix={arXiv}, eprint={0802.3665}, primaryClass={cs.CY} }
travençolo2008outward
arxiv-2857
0802.3703
On incidence algebras description of cobweb posets
<|reference_start|>On incidence algebras description of cobweb posets: Explicit formulas for the Mobius function and some other important elements of the incidence algebra of an arbitrary cobweb poset are delivered. To do that, one uses Kwasniewski's construction of his cobweb posets. The digraph representation of these cobweb posets constitutes a newly discovered class of orderable DAGs, named hereafter KoDAGs, with a kind of universality now being investigated. Namely, cobweb posets' and thus KoDAGs' defining di-bicliques are links of any complete relations' chains.<|reference_end|>
arxiv
@article{krot-sieniawska2008on, title={On incidence algebras description of cobweb posets}, author={Ewa Krot-Sieniawska}, journal={arXiv preprint arXiv:0802.3703}, year={2008}, archivePrefix={arXiv}, eprint={0802.3703}, primaryClass={math.CO cs.DM} }
krot-sieniawska2008on
arxiv-2858
0802.3718
Preventing Coordinated Attacks Via Distributed Alert Exchange
<|reference_start|>Preventing Coordinated Attacks Via Distributed Alert Exchange: Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish/subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish/subscribe middleware and evaluate our approach for GNU/Linux systems.<|reference_end|>
arxiv
@article{garcia-alfaro2008preventing, title={Preventing Coordinated Attacks Via Distributed Alert Exchange}, author={Joaquin Garcia-Alfaro and Michael A. Jaeger and Gero Muehl and Joan Borrell}, journal={IFIP International Conference on Intelligence in Communication Systems (INTELLCOMM 2005) (17/10/2005) 87-98}, year={2008}, archivePrefix={arXiv}, eprint={0802.3718}, primaryClass={cs.CR cs.NI} }
garcia-alfaro2008preventing
arxiv-2859
0802.3734
Generic case complexity and One-Way functions
<|reference_start|>Generic case complexity and One-Way functions: The goal of this paper is to introduce the ideas and methodology of generic case complexity to the cryptography community. This relatively new approach allows one to analyze the behavior of an algorithm on "most" inputs in a simple and intuitive fashion which has some practical advantages over classical methods based on averaging. We present an alternative definition of a one-way function using the concepts of generic case complexity and show its equivalence to the standard definition. In addition we demonstrate the convenience of the new approach by giving a short proof that extending adversaries to a larger class of partial algorithms with errors does not change the strength of the security assumption.<|reference_end|>
arxiv
@article{myasnikov2008generic, title={Generic case complexity and One-Way functions}, author={Alex D. Myasnikov}, journal={arXiv preprint arXiv:0802.3734}, year={2008}, archivePrefix={arXiv}, eprint={0802.3734}, primaryClass={cs.CC cs.CR} }
myasnikov2008generic
arxiv-2860
0802.3746
Information Hiding Techniques: A Tutorial Review
<|reference_start|>Information Hiding Techniques: A Tutorial Review: The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking.<|reference_end|>
arxiv
@article{thampi2008information, title={Information Hiding Techniques: A Tutorial Review}, author={Sabu M. Thampi}, journal={arXiv preprint arXiv:0802.3746}, year={2008}, archivePrefix={arXiv}, eprint={0802.3746}, primaryClass={cs.CR cs.IR} }
thampi2008information
arxiv-2861
0802.3767
Architecture for Integrated MEMS Resonators Quality Factor Measurement
<|reference_start|>Architecture for Integrated Mems Resonators Quality Factor Measurement: In this paper, an architecture designed for electrical measurement of the quality factor of MEMS resonators is proposed. An estimation of the measurement performance is made using PSPICE simulations taking into account the component's non-idealities. An error on the measured Q value of only several percent is achievable, at a small integration cost, for sufficiently high quality factor values (Q > 100).<|reference_end|>
arxiv
@article{mathias2008architecture, title={Architecture for Integrated MEMS Resonators Quality Factor Measurement}, author={H. Mathias (IEF) and F. Parrain (IEF) and J.-P. Gilles (IEF) and S. Megherbi (IEF) and M. Zhang (IEF) and Ph. Coste (IEF) and A. Dupret (IEF)}, journal={Dans Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2007, Stresa, lago Maggiore : Italie (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3767}, primaryClass={cs.OH} }
mathias2008architecture
arxiv-2862
0802.3768
Optimization of Cricket-inspired, Biomimetic Artificial Hair Sensors for Flow Sensing
<|reference_start|>Optimization of Cricket-inspired, Biomimetic Artificial Hair Sensors for Flow Sensing: High-density arrays of artificial hair sensors, biomimicking the extremely sensitive mechanoreceptive filiform hairs found on the cerci of crickets, have been fabricated successfully. We assess the sensitivity of these artificial sensors and present a scheme for further optimization addressing the deteriorating effects of stress in the structures. We show that, by removing a portion of the chromium electrodes close to the torsional beams, the upward lift at the edges of the membrane due to stress will decrease, hence increasing the sensitivity.<|reference_end|>
arxiv
@article{izadi2008optimization, title={Optimization of Cricket-inspired, Biomimetic Artificial Hair Sensors for Flow Sensing}, author={N. Izadi and R. K. Jaganatharaja and J. Floris and G. Krijnen}, journal={Dans Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2007, Stresa, lago Maggiore : Italie (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3768}, primaryClass={cs.OH} }
izadi2008optimization
arxiv-2863
0802.3784
Pattern-Oriented Analysis and Design (POAD) Theory
<|reference_start|>Pattern-Oriented Analysis and Design (POAD) Theory: Pattern-Oriented Analysis and Design (POAD) is the practice of building complex software by applying proven designs to specific problem domains. Although a great deal of research and practice has been devoted to formalizing existing design patterns and discovering new ones, there has been relatively little research into methods for combining these patterns into software applications. This is partly because the creation of complex software applications is so expensive. This paper proposes a mathematical model of POAD that may allow future research in pattern-oriented techniques to be performed using less expensive formal techniques rather than expensive, complex software development.<|reference_end|>
arxiv
@article{overton2008pattern-oriented, title={Pattern-Oriented Analysis and Design (POAD) Theory}, author={Jerry Overton}, journal={arXiv preprint arXiv:0802.3784}, year={2008}, archivePrefix={arXiv}, eprint={0802.3784}, primaryClass={cs.SE cs.IT math.IT} }
overton2008pattern-oriented
arxiv-2864
0802.3789
Knowledge Technologies
<|reference_start|>Knowledge Technologies: Several technologies are emerging that provide new ways to capture, store, present and use knowledge. This book is the first to provide a comprehensive introduction to five of the most important of these technologies: Knowledge Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and Semantic Webs. For each of these, answers are given to a number of key questions (What is it? How does it operate? How is a system developed? What can it be used for? What tools are available? What are the main issues?). The book is aimed at students, researchers and practitioners interested in Knowledge Management, Artificial Intelligence, Design Engineering and Web Technologies. During the 1990s, Nick worked at the University of Nottingham on the application of AI techniques to knowledge management and on various knowledge acquisition projects to develop expert systems for military applications. In 1999, he joined Epistemics where he worked on numerous knowledge projects and helped establish knowledge management programmes at large organisations in the engineering, technology and legal sectors. He is author of the book "Knowledge Acquisition in Practice", which describes a step-by-step procedure for acquiring and implementing expertise. He maintains strong links with leading research organisations working on knowledge technologies, such as knowledge-based engineering, ontologies and semantic technologies.<|reference_end|>
arxiv
@article{milton2008knowledge, title={Knowledge Technologies}, author={Nick Milton}, journal={"Publishing studies" book series, edited by Giandomenico Sica, ISSN 1973-6061 (Printed edition), ISSN 1973-6053 (Electronic edition)}, year={2008}, archivePrefix={arXiv}, eprint={0802.3789}, primaryClass={cs.CY cs.AI cs.LG cs.SE} }
milton2008knowledge
arxiv-2865
0802.3820
On the Kuratowski graph planarity criterion
<|reference_start|>On the Kuratowski graph planarity criterion: This paper is purely expositional. The statement of the Kuratowski graph planarity criterion is simple and well-known. However, its classical proof is not easy. In this paper we present the Makarychev proof (with further simplifications by Prasolov, Telishev, Zaslavski and the author) which is possibly the simplest. In the Russian version, before the proof we present all the necessary definitions, and afterwards we state some closely related results on graphs and more general spaces. The paper is accessible to students familiar with the notion of a graph, and could be interesting easy reading for mature mathematicians.<|reference_end|>
arxiv
@article{skopenkov2008on, title={On the Kuratowski graph planarity criterion}, author={A. Skopenkov}, journal={Mat. Prosveschenie, 9 (2005), 116-128, and 11 (2007), 159--160}, year={2008}, archivePrefix={arXiv}, eprint={0802.3820}, primaryClass={math.GT cs.DM math.CO} }
skopenkov2008on
arxiv-2866
0802.3851
Joint Source Channel Coding with Side Information Using Hybrid Digital Analog Codes
<|reference_start|>Joint Source Channel Coding with Side Information Using Hybrid Digital Analog Codes: We study the joint source channel coding problem of transmitting an analog source over a Gaussian channel in two cases - (i) the presence of interference known only to the transmitter and (ii) in the presence of side information known only to the receiver. We introduce hybrid digital analog forms of the Costa and Wyner-Ziv coding schemes. Our schemes are based on random coding arguments and are different from the nested lattice schemes by Kochman and Zamir that use dithered quantization. We also discuss superimposed digital and analog schemes for the above problems which show that there are infinitely many schemes for achieving the optimal distortion for these problems. This provides an extension of the schemes by Bross et al to the interference/side information case. We then discuss applications of the hybrid digital analog schemes for transmitting under a channel signal-to-noise ratio mismatch and for broadcasting a Gaussian source with bandwidth compression.<|reference_end|>
arxiv
@article{wilson2008joint, title={Joint Source Channel Coding with Side Information Using Hybrid Digital Analog Codes}, author={Makesh Pravin Wilson and Krishna Narayanan and Giuseppe Caire}, journal={arXiv preprint arXiv:0802.3851}, year={2008}, archivePrefix={arXiv}, eprint={0802.3851}, primaryClass={cs.IT math.IT} }
wilson2008joint
arxiv-2867
0802.3855
The Discrete Hilbert Transform for Non-Periodic Signals
<|reference_start|>The Discrete Hilbert Transform for Non-Periodic Signals: This note investigates the size of the guard band for non-periodic discrete Hilbert transform, which has recently been proposed for data hiding and security applications. It is shown that a guard band equal to the duration of the message is sufficient for a variety of analog signals and is, therefore, likely to be adequate for discrete or digital data.<|reference_end|>
arxiv
@article{gangasani2008the, title={The Discrete Hilbert Transform for Non-Periodic Signals}, author={Sumanth Kumar Reddy Gangasani}, journal={arXiv preprint arXiv:0802.3855}, year={2008}, archivePrefix={arXiv}, eprint={0802.3855}, primaryClass={cs.CR} }
gangasani2008the
arxiv-2868
0802.3860
Separating NOF communication complexity classes RP and NP
<|reference_start|>Separating NOF communication complexity classes RP and NP: We provide a non-explicit separation of the number-on-forehead communication complexity classes RP and NP when the number of players is up to \delta log(n) for any \delta<1. Recent lower bounds on Set-Disjointness [LS08,CA08] provide an explicit separation between these classes when the number of players is only up to o(loglog(n)).<|reference_end|>
arxiv
@article{david2008separating, title={Separating NOF communication complexity classes RP and NP}, author={Matei David and Toniann Pitassi}, journal={arXiv preprint arXiv:0802.3860}, year={2008}, archivePrefix={arXiv}, eprint={0802.3860}, primaryClass={cs.CC} }
david2008separating
arxiv-2869
0802.3875
Are complex systems hard to evolve?
<|reference_start|>Are complex systems hard to evolve?: Evolutionary complexity is here measured by the number of trials/evaluations needed for evolving a logical gate in a non-linear medium. Behavioural complexity of the gates evolved is characterised in terms of cellular automata behaviour. We speculate that hierarchies of behavioural and evolutionary complexities are isomorphic up to some degree, subject to substrate specificity of evolution and the spectrum of evolution parameters.<|reference_end|>
arxiv
@article{adamatzky2008are, title={Are complex systems hard to evolve?}, author={Andy Adamatzky and Larry Bull}, journal={Complexity, Volume 14, Issue 6, pages 15-20, July/August 2009}, year={2008}, doi={10.1002/cplx.20269}, archivePrefix={arXiv}, eprint={0802.3875}, primaryClass={cs.NE} }
adamatzky2008are
arxiv-2870
0802.3881
Deriving Sorting Algorithms
<|reference_start|>Deriving Sorting Algorithms: This paper proposes new derivations of three well-known sorting algorithms, in their functional formulation. The approach we use is based on three main ingredients: first, the algorithms are derived from a simpler algorithm, i.e. the specification is already a solution to the problem (in this sense our derivations are program transformations). Secondly, a mixture of inductive and coinductive arguments is used in a uniform, algebraic style in our reasoning. Finally, the approach uses structural invariants so as to strengthen the equational reasoning with logical arguments that cannot be captured in the algebraic framework.<|reference_end|>
arxiv
@article{almeida2008deriving, title={Deriving Sorting Algorithms}, author={Jos\'e Bacelar Almeida and Jorge Sousa Pinto}, journal={arXiv preprint arXiv:0802.3881}, year={2008}, number={DI-PURe-06.04.01}, archivePrefix={arXiv}, eprint={0802.3881}, primaryClass={cs.DS cs.LO} }
almeida2008deriving
arxiv-2871
0802.3885
Rich, Sturmian, and trapezoidal words
<|reference_start|>Rich, Sturmian, and trapezoidal words: In this paper we explore various interconnections between rich words, Sturmian words, and trapezoidal words. Rich words, first introduced in arXiv:0801.1656 by the second and third authors together with J. Justin and S. Widmer, constitute a new class of finite and infinite words characterized by having the maximal number of palindromic factors. Every finite Sturmian word is rich, but not conversely. Trapezoidal words were first introduced by the first author in studying the behavior of the subword complexity of finite Sturmian words. Unfortunately this property does not characterize finite Sturmian words. In this note we show that the only trapezoidal palindromes are Sturmian. More generally we show that Sturmian palindromes can be characterized either in terms of their subword complexity (the trapezoidal property) or in terms of their palindromic complexity. We also obtain a similar characterization of rich palindromes in terms of a relation between palindromic complexity and subword complexity.<|reference_end|>
arxiv
@article{deluca2008rich, title={Rich, Sturmian, and trapezoidal words}, author={Aldo de Luca and Amy Glen and Luca Q. Zamboni}, journal={Theoretical Computer Science 407 (2008) 569--573}, year={2008}, doi={10.1016/j.tcs.2008.06.009}, archivePrefix={arXiv}, eprint={0802.3885}, primaryClass={math.CO cs.DM} }
deluca2008rich
arxiv-2872
0802.3888
Directive words of episturmian words: equivalences and normalization
<|reference_start|>Directive words of episturmian words: equivalences and normalization: Episturmian morphisms constitute a powerful tool to study episturmian words. Indeed, any episturmian word can be infinitely decomposed over the set of pure episturmian morphisms. Thus, an episturmian word can be defined by one of its morphic decompositions or, equivalently, by a certain directive word. Here we characterize pairs of words directing a common episturmian word. We also propose a way to uniquely define any episturmian word through a normalization of its directive words. As a consequence of these results, we characterize episturmian words having a unique directive word.<|reference_end|>
arxiv
@article{glen2008directive, title={Directive words of episturmian words: equivalences and normalization}, author={Amy Glen and Florence Lev\'e and Gw\'ena\"el Richomme}, journal={RAIRO - Theoretical Informatics and Applications 43 (2009) 299-319}, year={2008}, doi={10.1051/ita:2008029}, archivePrefix={arXiv}, eprint={0802.3888}, primaryClass={cs.DM math.CO} }
glen2008directive
arxiv-2873
0802.3895
Complexity Metrics for Spreadsheet Models
<|reference_start|>Complexity Metrics for Spreadsheet Models: Several complexity metrics are described which are related to logic structure, data structure and size of spreadsheet models. They primarily concentrate on the dispersion of cell references and cell paths. Most metrics are newly defined, while some are adapted from traditional software engineering. Their purpose is the identification of cells which are liable to errors. In addition, they can be used to estimate the values of dependent process metrics, such as the development duration and effort, and especially to adjust the cell error rate in accordance with the contents of each individual cell, in order to accurately assess the reliability of a model. Finally, two conceptual constructs - the reference branching condition cell and the condition block - are discussed, aiming at improving the reliability, modifiability, auditability and comprehensibility of logical tests.<|reference_end|>
arxiv
@article{bregar2008complexity, title={Complexity Metrics for Spreadsheet Models}, author={Andrej Bregar}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 85-93 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0802.3895}, primaryClass={cs.SE} }
bregar2008complexity
arxiv-2874
0802.3919
A Paradigm for Spreadsheet Engineering Methodologies
<|reference_start|>A Paradigm for Spreadsheet Engineering Methodologies: Spreadsheet engineering methodologies are diverse and sometimes contradictory. It is difficult for spreadsheet developers to identify a spreadsheet engineering methodology that is appropriate for their class of spreadsheet, with its unique combination of goals, type of problem, and available time and resources. There is a lack of well-organized, proven methodologies with known costs and benefits for well-defined spreadsheet classes. It is difficult to compare and critically evaluate methodologies. We present a paradigm for organizing and interpreting spreadsheet engineering recommendations. It systematically addresses the myriad choices made when developing a spreadsheet, and explicitly considers resource constraints and other development parameters. This paradigm provides a framework for evaluation, comparison, and selection of methodologies, and a list of essential elements for developers or codifiers of new methodologies. This paradigm identifies gaps in our knowledge that merit further research.<|reference_end|>
arxiv
@article{grossman2008a, title={A Paradigm for Spreadsheet Engineering Methodologies}, author={Thomas A. Grossman and Ozgur Ozluk}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 23-33 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0802.3919}, primaryClass={cs.HC} }
grossman2008a
arxiv-2875
0802.3924
A Toolkit for Scalable Spreadsheet Visualization
<|reference_start|>A Toolkit for Scalable Spreadsheet Visualization: This paper presents a toolkit for spreadsheet visualization based on logical areas, semantic classes and data modules. Logical areas, semantic classes and data modules are abstract representations of spreadsheet programs that are meant to reduce the auditing and comprehension effort, especially for large and regular spreadsheets. The toolkit is integrated as a plug-in in the Gnumeric spreadsheet system for Linux. It can process large, industry scale spreadsheet programs in reasonable time and is tightly integrated with its host spreadsheet system. Users can generate hierarchical and graph-based representations of their spreadsheets. This allows them to spot conceptual similarities in different regions of the spreadsheet, that would otherwise not fit on a screen. As it is assumed that the learning effort for effective use of such a tool should be kept low, we aim for intuitive handling of most of the tool's functions.<|reference_end|>
arxiv
@article{clermont2008a, title={A Toolkit for Scalable Spreadsheet Visualization}, author={Markus Clermont}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 95-106 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0802.3924}, primaryClass={cs.HC} }
clermont2008a
arxiv-2876
0802.3939
Using Layout Information for Spreadsheet Visualization
<|reference_start|>Using Layout Information for Spreadsheet Visualization: This paper extends a spreadsheet visualization technique by using layout information. The original approach identifies logically or semantically related cells by relying exclusively on the content of cells for identifying semantic classes. A disadvantage of semantic classes is that users have to supply parameters which describe the possible shapes of these blocks. The correct parametrization requires a certain degree of experience and is thus not suitable for untrained users. To avoid this constraint, the approach reported in this paper uses row/column-labels as well as common format information for locating areas with common, recurring semantics. Heuristics are provided to distinguish between cell groups with intended common semantics and cell groups related in an ad-hoc manner.<|reference_end|>
arxiv
@article{hipfl2008using, title={Using Layout Information for Spreadsheet Visualization}, author={Sabine Hipfl}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 107-119 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0802.3939}, primaryClass={cs.HC} }
hipfl2008using
arxiv-2877
0802.3940
Spreadsheet Structure Discovery with Logic Programming
<|reference_start|>Spreadsheet Structure Discovery with Logic Programming: Our term "structure discovery" denotes the recovery of structure, such as the grouping of cells, that was intended by a spreadsheet's author but is not explicit in the spreadsheet. We are implementing structure discovery tools in the logic-programming language Prolog for our spreadsheet analysis program Model Master, by writing grammars for spreadsheet structures. The objective is an "intelligent structure monitor" to run beside Excel, allowing users to reconfigure spreadsheets to the representational needs of the task at hand. This could revolutionise spreadsheet "best practice". We also describe a formulation of spreadsheet reverse-engineering based on "arrows".<|reference_end|>
arxiv
@article{paine2008spreadsheet, title={Spreadsheet Structure Discovery with Logic Programming}, author={Jocelyn Paine}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 121-133 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0802.3940}, primaryClass={cs.SE} }
paine2008spreadsheet
arxiv-2878
0802.3950
Belief Propagation and Loop Series on Planar Graphs
<|reference_start|>Belief Propagation and Loop Series on Planar Graphs: We discuss a generic model of Bayesian inference with binary variables defined on edges of a planar graph. The Loop Calculus approach of [1, 2] is used to evaluate the resulting series expansion for the partition function. We show that, for planar graphs, truncating the series at single-connected loops reduces, via a map reminiscent of the Fisher transformation [3], to evaluating the partition function of the dimer matching model on an auxiliary planar graph. Thus, the truncated series can be easily re-summed, using the Pfaffian formula of Kasteleyn [4]. This allows us to identify a large class of computationally tractable planar models reducible to a dimer model via the Belief Propagation (gauge) transformation. The Pfaffian representation can also be extended to the full Loop Series, in which case the expansion becomes a sum of Pfaffian contributions, each associated with dimer matchings on an extension to a subgraph of the original graph. Algorithmic consequences of the Pfaffian representation, as well as relations to quantum and non-planar models, are discussed.<|reference_end|>
arxiv
@article{chertkov2008belief, title={Belief Propagation and Loop Series on Planar Graphs}, author={Michael Chertkov and Vladimir Y. Chernyak and Razvan Teodorescu}, journal={J. Stat. Mech. (2008) P05003}, year={2008}, doi={10.1088/1742-5468/2008/05/P05003}, archivePrefix={arXiv}, eprint={0802.3950}, primaryClass={cond-mat.stat-mech cs.AI cs.IT math.IT} }
chertkov2008belief
arxiv-2879
0802.3974
Syntax diagrams as a formalism for representation of syntactic relations of formal languages
<|reference_start|>Syntax diagrams as a formalism for representation of syntactic relations of formal languages: A new approach to the representation of the syntax of formal languages -- a formalism of syntax diagrams -- is offered. Syntax diagrams look like a convenient language for the description of syntactic relations in languages having nonlinear representation of texts, for example, for representation of the syntax laws of the language of structural chemical formulas. The formalism of neighbourhood grammars is used to describe the set of correct syntax constructs. The neighbourhood grammar consists of a set of families of "neighbourhoods" -- the diagrams defined for each symbol of the language's alphabet. A syntax diagram is correct if each symbol is included in the diagram together with some neighbourhood. In other words, correct diagrams need to be covered by elements of the neighbourhood grammar. Thus, the grammar of a formal language can be represented as a system of covers defined for each correct syntax diagram.<|reference_end|>
arxiv
@article{lapshin2008syntax, title={Syntax diagrams as a formalism for representation of syntactic relations of formal languages}, author={Vladimir Lapshin}, journal={arXiv preprint arXiv:0802.3974}, year={2008}, archivePrefix={arXiv}, eprint={0802.3974}, primaryClass={cs.LO} }
lapshin2008syntax
arxiv-2880
0802.3992
Polynomial Filtering for Fast Convergence in Distributed Consensus
<|reference_start|>Polynomial Filtering for Fast Convergence in Distributed Consensus: In the past few years, the problem of distributed consensus has received a lot of attention, particularly in the framework of ad hoc sensor networks. Most methods proposed in the literature address the consensus averaging problem by distributed linear iterative algorithms, with asymptotic convergence of the consensus solution. The convergence rate of such distributed algorithms typically depends on the network topology and the weights given to the edges between neighboring sensors, as described by the network matrix. In this paper, we propose to accelerate the convergence rate for given network matrices by the use of polynomial filtering algorithms. The main idea of the proposed methodology is to apply a polynomial filter on the network matrix that will shape its spectrum in order to increase the convergence rate. Such an algorithm is equivalent to periodic updates in each of the sensors by aggregating a few of its previous estimates. We formulate the computation of the coefficients of the optimal polynomial as a semi-definite program that can be efficiently and globally solved for both static and dynamic network topologies. We finally provide simulation results that demonstrate the effectiveness of the proposed solutions in accelerating the convergence of distributed consensus averaging problems.<|reference_end|>
arxiv
@article{kokiopoulou2008polynomial, title={Polynomial Filtering for Fast Convergence in Distributed Consensus}, author={Effrosyni Kokiopoulou and Pascal Frossard}, journal={arXiv preprint arXiv:0802.3992}, year={2008}, doi={10.1109/TSP.2008.2006147}, number={LTS-2008-005}, archivePrefix={arXiv}, eprint={0802.3992}, primaryClass={cs.IT math.IT} }
kokiopoulou2008polynomial
arxiv-2881
0802.4002
Sensing Danger: Innate Immunology for Intrusion Detection
<|reference_start|>Sensing Danger: Innate Immunology for Intrusion Detection: The immune system provides an ideal metaphor for anomaly detection in general and computer security in particular. Based on this idea, artificial immune systems have been used for a number of years for intrusion detection, unfortunately so far with little success. However, these previous systems were largely based on immunological theory from the 1970s and 1980s and over the last decade our understanding of immunological processes has vastly improved. In this paper we present two new immune inspired algorithms based on the latest immunological discoveries, such as the behaviour of Dendritic Cells. The resultant algorithms are applied to real world intrusion problems and show encouraging results. Overall, we believe there is a bright future for these next generation artificial immune algorithms.<|reference_end|>
arxiv
@article{aickelin2008sensing, title={Sensing Danger: Innate Immunology for Intrusion Detection}, author={Uwe Aickelin and Julie Greensmith}, journal={Information Security Technical Report, 12(4), pp 218-227, 2007}, year={2008}, doi={10.1016/j.istr.2007.10.003}, archivePrefix={arXiv}, eprint={0802.4002}, primaryClass={cs.NE cs.CR} }
aickelin2008sensing
arxiv-2882
0802.4010
Brain architecture: A design for natural computation
<|reference_start|>Brain architecture: A design for natural computation: Fifty years ago, John von Neumann compared the architecture of the brain with that of computers that he invented and which is still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.<|reference_end|>
arxiv
@article{kaiser2008brain, title={Brain architecture: A design for natural computation}, author={Marcus Kaiser}, journal={Philosophical Transactions of The Royal Society A, 365: 3033-3045, 2007}, year={2008}, doi={10.1098/rsta.2007.0007}, archivePrefix={arXiv}, eprint={0802.4010}, primaryClass={q-bio.NC cs.AI cs.NE physics.soc-ph} }
kaiser2008brain
arxiv-2883
0802.4018
Algebraic Pattern Matching in Join Calculus
<|reference_start|>Algebraic Pattern Matching in Join Calculus: We propose an extension of the join calculus with pattern matching on algebraic data types. Our initial motivation is twofold: to provide an intuitive semantics of the interaction between concurrency and pattern matching; to define a practical compilation scheme from extended join definitions into ordinary ones plus ML pattern matching. To assess the correctness of our compilation scheme, we develop a theory of the applied join calculus, a calculus with value passing and value matching. We implement this calculus as an extension of the current JoCaml system.<|reference_end|>
arxiv
@article{ma2008algebraic, title={Algebraic Pattern Matching in Join Calculus}, author={Qin Ma and Luc Maranget}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 21, 2008) lmcs:770}, year={2008}, doi={10.2168/LMCS-4(1:7)2008}, archivePrefix={arXiv}, eprint={0802.4018}, primaryClass={cs.PL cs.DC} }
ma2008algebraic
arxiv-2884
0802.4040
Analysis of the Karmarkar-Karp Differencing Algorithm
<|reference_start|>Analysis of the Karmarkar-Karp Differencing Algorithm: The Karmarkar-Karp differencing algorithm is the best known polynomial time heuristic for the number partitioning problem, fundamental in both theoretical computer science and statistical physics. We analyze the performance of the differencing algorithm on random instances by mapping it to a nonlinear rate equation. Our analysis reveals strong finite size effects that explain why the precise asymptotics of the differencing solution is hard to establish by simulations. The asymptotic series emerging from the rate equation satisfies all known bounds on the Karmarkar-Karp algorithm and projects a scaling $n^{-c\ln n}$, where $c=1/(2\ln2)=0.7213...$. Our calculations reveal subtle relations between the algorithm and Fibonacci-like sequences, and we establish an explicit identity to that effect.<|reference_end|>
arxiv
@article{boettcher2008analysis, title={Analysis of the Karmarkar-Karp Differencing Algorithm}, author={Stefan Boettcher and Stephan Mertens}, journal={European Physical Journal B 65, 131-140 (2008)}, year={2008}, doi={10.1140/epjb/e2008-00320-9}, archivePrefix={arXiv}, eprint={0802.4040}, primaryClass={cs.NA cond-mat.dis-nn cs.DM cs.DS} }
boettcher2008analysis
arxiv-2885
0802.4057
A Qualitative Modal Representation of Quantum Register Transformations
<|reference_start|>A Qualitative Modal Representation of Quantum Register Transformations: We introduce two modal natural deduction systems that are suitable to represent and reason about transformations of quantum registers in an abstract, qualitative, way. Quantum registers represent quantum systems, and can be viewed as the structure of quantum data for quantum operations. Our systems provide a modal framework for reasoning about operations on quantum registers (unitary transformations and measurements), in terms of possible worlds (as abstractions of quantum registers) and accessibility relations between these worlds. We give a Kripke--style semantics that formally describes quantum register transformations and prove the soundness and completeness of our systems with respect to this semantics.<|reference_end|>
arxiv
@article{masini2008a, title={A Qualitative Modal Representation of Quantum Register Transformations}, author={Andrea Masini and Luca Vigan\`o and Margherita Zorzi}, journal={arXiv preprint arXiv:0802.4057}, year={2008}, archivePrefix={arXiv}, eprint={0802.4057}, primaryClass={cs.LO} }
masini2008a
arxiv-2886
0802.4079
Families of LDPC Codes Derived from Nonprimitive BCH Codes and Cyclotomic Cosets
<|reference_start|>Families of LDPC Codes Derived from Nonprimitive BCH Codes and Cyclotomic Cosets: Low-density parity check (LDPC) codes are an important class of codes with many applications. Two algebraic methods for constructing regular LDPC codes are derived -- one based on nonprimitive narrow-sense BCH codes and the other directly based on cyclotomic cosets. The constructed codes have high rates and are free of cycles of length four; consequently, they can be decoded using standard iterative decoding algorithms. The exact dimension and bounds for the minimum distance and stopping distance are derived. These constructed codes can be used to derive quantum error-correcting codes.<|reference_end|>
arxiv
@article{aly2008families, title={Families of LDPC Codes Derived from Nonprimitive BCH Codes and Cyclotomic Cosets}, author={Salah A. Aly}, journal={arXiv preprint arXiv:0802.4079}, year={2008}, archivePrefix={arXiv}, eprint={0802.4079}, primaryClass={cs.IT math.IT} }
aly2008families
arxiv-2887
0802.4089
An algorithmic complexity interpretation of Lin's third law of information theory
<|reference_start|>An algorithmic complexity interpretation of Lin's third law of information theory: Instead of static entropy we assert that the Kolmogorov complexity of a static structure such as a solid is the proper measure of disorder (or chaoticity). A static structure in a surrounding perfectly-random universe acts as an interfering entity which introduces local disruption in randomness. This is modeled by a selection rule $R$ which selects a subsequence of the random input sequence that hits the structure. Through the inequality that relates stochasticity and chaoticity of random binary sequences we maintain that Lin's notion of stability corresponds to the stability of the frequency of 1s in the selected subsequence. This explains why more complex static structures are less stable. Lin's third law is represented as the inevitable change that static structures undergo towards conforming to the universe's perfect randomness.<|reference_end|>
arxiv
@article{ratsaby2008an, title={An algorithmic complexity interpretation of Lin's third law of information theory}, author={Joel Ratsaby}, journal={arXiv preprint arXiv:0802.4089}, year={2008}, doi={10.3390/entropy-e10010006}, archivePrefix={arXiv}, eprint={0802.4089}, primaryClass={cs.CC cs.IT math.IT} }
ratsaby2008an
arxiv-2888
0802.4095
For each $\alpha$ > 2 there is an infinite binary word with critical exponent $\alpha$
<|reference_start|>For each $\alpha$ > 2 there is an infinite binary word with critical exponent $\alpha$: For each $\alpha > 2$ there is a binary word with critical exponent $\alpha$.<|reference_end|>
arxiv
@article{currie2008for, title={For each $\alpha$ > 2 there is an infinite binary word with critical exponent $\alpha$}, author={James D. Currie and Narad Rampersad}, journal={arXiv preprint arXiv:0802.4095}, year={2008}, archivePrefix={arXiv}, eprint={0802.4095}, primaryClass={math.CO cs.FL} }
currie2008for
arxiv-2889
0802.4101
New bounds on classical and quantum one-way communication complexity
<|reference_start|>New bounds on classical and quantum one-way communication complexity: In this paper we provide new bounds on classical and quantum distributional communication complexity in the two-party, one-way model of communication. In the classical model, our bound extends the well known upper bound of Kremer, Nisan and Ron to include non-product distributions. We show that for a boolean function f:X x Y -> {0,1} and a non-product distribution mu on X x Y and epsilon in (0,1/2) constant: D_{epsilon}^{1, mu}(f)= O((I(X:Y)+1) vc(f)), where D_{epsilon}^{1, mu}(f) represents the one-way distributional communication complexity of f with error at most epsilon under mu; vc(f) represents the Vapnik-Chervonenkis dimension of f and I(X:Y) represents the mutual information, under mu, between the random inputs of the two parties. For a non-boolean function f:X x Y ->[k], we show a similar upper bound on D_{epsilon}^{1, mu}(f) in terms of k, I(X:Y) and the pseudo-dimension of f' = f/k. In the quantum one-way model we provide a lower bound on the distributional communication complexity, under product distributions, of a function f, in terms of the well studied complexity measure of f referred to as the rectangle bound or the corruption bound of f. We show for a non-boolean total function f : X x Y -> Z and a product distribution mu on XxY, Q_{epsilon^3/8}^{1, mu}(f) = Omega(rec_ epsilon^{1, mu}(f)), where Q_{epsilon^3/8}^{1, mu}(f) represents the quantum one-way distributional communication complexity of f with error at most epsilon^3/8 under mu and rec_ epsilon^{1, mu}(f) represents the one-way rectangle bound of f with error at most epsilon under mu. Similarly for a non-boolean partial function f:XxY -> Z U {*} and a product distribution mu on X x Y, we show, Q_{epsilon^6/(2 x 15^4)}^{1, mu}(f) = Omega(rec_ epsilon^{1, mu}(f)).<|reference_end|>
arxiv
@article{jain2008new, title={New bounds on classical and quantum one-way communication complexity}, author={Rahul Jain and Shengyu Zhang}, journal={arXiv preprint arXiv:0802.4101}, year={2008}, archivePrefix={arXiv}, eprint={0802.4101}, primaryClass={cs.IT cs.DC math.IT} }
jain2008new
arxiv-2890
0802.4112
Hubs in Languages: Scale Free Networks of Synonyms
<|reference_start|>Hubs in Languages: Scale Free Networks of Synonyms: Natural languages are described in this paper in terms of networks of synonyms: a word is identified with a node, and synonyms are connected by undirected links. Our statistical analysis of the network of synonyms in Polish language showed it is scale-free; similar to what is known for English. The statistical properties of the networks are also similar. Thus, the statistical aspects of the networks are good candidates for culture independent elements of human language. We hypothesize that optimization for robustness and efficiency is responsible for this universality. Despite the statistical similarity, there is no one-to-one mapping between networks of these two languages. Although many hubs in Polish are translated into similarly highly connected hubs in English, there are also hubs specific to one of these languages only: a single word in one language is equivalent to many different and disconnected words in the other, in accordance with the Whorf hypothesis about language relativity. Identifying language-specific hubs is vitally important for automatic translation, and for understanding contextual, culturally related messages that are frequently missed or twisted in a naive, literary translation.<|reference_end|>
arxiv
@article{makaruk2008hubs, title={Hubs in Languages: Scale Free Networks of Synonyms}, author={Hanna E. Makaruk and Robert Owczarek}, journal={arXiv preprint arXiv:0802.4112}, year={2008}, number={LA-UR-08-0084}, archivePrefix={arXiv}, eprint={0802.4112}, primaryClass={physics.soc-ph cs.CL physics.data-an} }
makaruk2008hubs
arxiv-2891
0802.4126
Hospital Case Cost Estimates Modelling - Algorithm Comparison
<|reference_start|>Hospital Case Cost Estimates Modelling - Algorithm Comparison: Ontario (Canada) Health System stakeholders support the idea and necessity of the integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of the Ontario healthcare system activities at the patient-specific level. At present, the actual patient-level case costs in the explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value, representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using real dataset. All models can be classified into two groups based on their underlying method: 1. Models based on using relative intensity weights of the cases, and 2. Models based on using cost per diem.<|reference_end|>
arxiv
@article{andru2008hospital, title={Hospital Case Cost Estimates Modelling - Algorithm Comparison}, author={Peter Andru and Alexei Botchkarev}, journal={arXiv preprint arXiv:0802.4126}, year={2008}, archivePrefix={arXiv}, eprint={0802.4126}, primaryClass={cs.CE cs.DB} }
andru2008hospital
arxiv-2892
0802.4130
Wideband Spectrum Sensing in Cognitive Radio Networks
<|reference_start|>Wideband Spectrum Sensing in Cognitive Radio Networks: Spectrum sensing is an essential enabling functionality for cognitive radio networks to detect spectrum holes and opportunistically use the under-utilized frequency bands without causing harmful interference to legacy networks. This paper introduces a novel wideband spectrum sensing technique, called multiband joint detection, which jointly detects the signal energy levels over multiple frequency bands rather than consider one band at a time. The proposed strategy is efficient in improving the dynamic spectrum utilization and reducing interference to the primary users. The spectrum sensing problem is formulated as a class of optimization problems in interference limited cognitive radio networks. By exploiting the hidden convexity in the seemingly non-convex problem formulations, optimal solutions for multiband joint detection are obtained under practical conditions. Simulation results show that the proposed spectrum sensing schemes can considerably improve the system performance. This paper establishes important principles for the design of wideband spectrum sensing algorithms in cognitive radio networks.<|reference_end|>
arxiv
@article{quan2008wideband, title={Wideband Spectrum Sensing in Cognitive Radio Networks}, author={Zhi Quan and Shuguang Cui and Ali H. Sayed and H. Vincent Poor}, journal={Proceedings of the 2008 IEEE International Conference on Communications, Beijing, May 19-23, 2008}, year={2008}, doi={10.1109/ICC.2008.177}, archivePrefix={arXiv}, eprint={0802.4130}, primaryClass={cs.IT math.IT} }
quan2008wideband
arxiv-2893
0802.4131
Language of Boolean functions its Grammar and Machine
<|reference_start|>Language of Boolean functions its Grammar and Machine: In this paper an algorithm is designed which generates inequivalent Boolean functions of any number of variables from the four Boolean functions of a single variable. The grammar for such a set of Boolean functions is provided. The Turing Machine that accepts such a set is constructed.<|reference_end|>
arxiv
@article{nayak2008language, title={Language of Boolean functions its Grammar and Machine}, author={Birendra Kumar Nayak and Sudhakar Sahoo}, journal={arXiv preprint arXiv:0802.4131}, year={2008}, archivePrefix={arXiv}, eprint={0802.4131}, primaryClass={cs.LO} }
nayak2008language
arxiv-2894
0802.4191
HyperSmooth : calcul et visualisation de cartes de potentiel interactives
<|reference_start|>HyperSmooth : calcul et visualisation de cartes de potentiel interactives: The HyperCarte research group wishes to offer a new cartographic tool for spatial analysis of social data, using the potential smoothing method. The purpose of this method is to view the spreading of phenomena in a continuous way, at a macroscopic scale, based on data sampled on administrative areas. We aim to offer an interactive tool, accessible via the Web, but guaranteeing the confidentiality of data. The major difficulty is induced by the high complexity of the computation, which operates on a large amount of data. We present our solution to such a technical challenge, and our perspectives of enhancements.<|reference_end|>
arxiv
@article{plumejeaud2008hypersmooth, title={HyperSmooth : calcul et visualisation de cartes de potentiel interactives}, author={Christine Plumejeaud (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble) and Jean-Marc Vincent (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble) and Claude Grasland (GC, RIATE) and J\'er\^ome Gensel (LSR - IMAG) and H\'el\`ene Mathian (GC) and Serge Guelton (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble) and Jo\"el Boulier (GC)}, journal={Dans SAGEO 2007, Rencontres internationales G\'eomatique et territoire, CdRom, France (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0802.4191}, primaryClass={stat.AP cs.HC} }
plumejeaud2008hypersmooth
arxiv-2895
0802.4198
Some properties of the Ukrainian writing system
<|reference_start|>Some properties of the Ukrainian writing system: We investigate the grapheme-phoneme relation in Ukrainian and some properties of the Ukrainian version of the Cyrillic alphabet.<|reference_end|>
arxiv
@article{buk2008some, title={Some properties of the Ukrainian writing system}, author={Solomija Buk and J\'an Ma\v{c}utek and Andrij Rovenchak}, journal={Glottometrics 16, 63-79 (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.4198}, primaryClass={cs.CL} }
buk2008some
arxiv-2896
0802.4215
Equilibrium (Zipf) and Dynamic (Grasseberg-Procaccia) method based analyses of human texts. A comparison of natural (english) and artificial (esperanto) languages
<|reference_start|>Equilibrium (Zipf) and Dynamic (Grasseberg-Procaccia) method based analyses of human texts. A comparison of natural (english) and artificial (esperanto) languages: A comparison of two english texts from Lewis Carroll, one (Alice in wonderland), also translated into esperanto, the other (Through a looking glass) are discussed in order to observe whether natural and artificial languages significantly differ from each other. One dimensional time series like signals are constructed using only word frequencies (FTS) or word lengths (LTS). The data is studied through (i) a Zipf method for sorting out correlations in the FTS and (ii) a Grassberger-Procaccia (GP) technique based method for finding correlations in LTS. Features are compared : different power laws are observed with characteristic exponents for the ranking properties, and the {\it phase space attractor dimensionality}. The Zipf exponent can take values much less than unity ($ca.$ 0.50 or 0.30) depending on how a sentence is defined. This non-universality is conjectured to be a measure of the author $style$. Moreover the attractor dimension $r$ is a simple function of the so called phase space dimension $n$, i.e., $r = n^{\lambda}$, with $\lambda = 0.79$. Such an exponent is also conjectured to be a measure of the author $creativity$. However, even though there are quantitative differences between the original english text and its esperanto translation, the qualitative differences are very minute, indicating in this case a translation relatively well respecting, along our analysis lines, the content of the author's writing.<|reference_end|>
arxiv
@article{ausloos2008equilibrium, title={Equilibrium (Zipf) and Dynamic (Grasseberg-Procaccia) method based analyses of human texts. A comparison of natural (english) and artificial (esperanto) languages}, author={M. Ausloos}, journal={Physica A 387 (25) 6411-6420 (2008)}, year={2008}, doi={10.1016/j.physa.2008.07.016}, archivePrefix={arXiv}, eprint={0802.4215}, primaryClass={physics.soc-ph cs.CL physics.data-an} }
ausloos2008equilibrium
arxiv-2897
0802.4233
Adaptive Sum Power Iterative Waterfilling for MIMO Cognitive Radio Channels
<|reference_start|>Adaptive Sum Power Iterative Waterfilling for MIMO Cognitive Radio Channels: In this paper, the sum capacity of the Gaussian Multiple Input Multiple Output (MIMO) Cognitive Radio Channel (MCC) is expressed as a convex problem with finite number of linear constraints, allowing for polynomial time interior point techniques to find the solution. In addition, a specialized class of sum power iterative waterfilling algorithms is determined that exploits the inherent structure of the sum capacity problem. These algorithms not only determine the maximizing sum capacity value, but also the transmit policies that achieve this optimum. The paper concludes by providing numerical results which demonstrate that the algorithm takes very few iterations to converge to the optimum.<|reference_end|>
arxiv
@article{soundararajan2008adaptive, title={Adaptive Sum Power Iterative Waterfilling for MIMO Cognitive Radio Channels}, author={Rajiv Soundararajan and Sriram Vishwanath}, journal={arXiv preprint arXiv:0802.4233}, year={2008}, archivePrefix={arXiv}, eprint={0802.4233}, primaryClass={cs.IT math.IT} }
soundararajan2008adaptive
arxiv-2898
0802.4237
Safety alternating automata on data words
<|reference_start|>Safety alternating automata on data words: A data word is a sequence of pairs of a letter from a finite alphabet and an element from an infinite set, where the latter can only be compared for equality. Safety one-way alternating automata with one register on infinite data words are considered, their nonemptiness is shown EXPSPACE-complete, and their inclusion decidable but not primitive recursive. The same complexity bounds are obtained for satisfiability and refinement, respectively, for the safety fragment of linear temporal logic with freeze quantification. Dropping the safety restriction, adding past temporal operators, or adding one more register, each causes undecidability.<|reference_end|>
arxiv
@article{lazic2008safety, title={Safety alternating automata on data words}, author={Ranko Lazic}, journal={arXiv preprint arXiv:0802.4237}, year={2008}, archivePrefix={arXiv}, eprint={0802.4237}, primaryClass={cs.LO} }
lazic2008safety
arxiv-2899
0802.4244
Call Admission Control Algorithm for pre-stored VBR video streams
<|reference_start|>Call Admission Control Algorithm for pre-stored VBR video streams: We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each new requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reducing it from the VERTEX COLOR problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm makes it suitable for real-time determination of the time displacement parameter during the call admission phase.<|reference_end|>
arxiv
@article{tryfonas2008call, title={Call Admission Control Algorithm for pre-stored VBR video streams}, author={Christos Tryfonas and Dimitris Papamichail and Andrew Mehler and Steven Skiena}, journal={arXiv preprint arXiv:0802.4244}, year={2008}, archivePrefix={arXiv}, eprint={0802.4244}, primaryClass={cs.NI cs.DS} }
tryfonas2008call
arxiv-2900
0802.4270
Propagation Rules of Subsystem Codes
<|reference_start|>Propagation Rules of Subsystem Codes: We demonstrate propagation rules of subsystem code constructions by extending, shortening and combining given subsystem codes. Given an $[[n,k,r,d]]_q$ subsystem code, we derive new subsystem codes with parameters $[[n+1,k,r,\geq d]]_q$, $[[n-1,k+1,r,\geq d-1]]_q$, $[[n,k-1,r+1,d]]_q$. Interested readers should consult our companion papers for upper and lower bounds on subsystem code parameters, and for an introduction, trading dimensions, families, and references on subsystem codes [1][2][3] and references therein.<|reference_end|>
arxiv
@article{aly2008propagation, title={Propagation Rules of Subsystem Codes}, author={Salah A. Aly}, journal={arXiv preprint arXiv:0802.4270}, year={2008}, archivePrefix={arXiv}, eprint={0802.4270}, primaryClass={quant-ph cs.IT math.IT} }
aly2008propagation