Dataset record fields: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-4901
0809.3140
Monadic Datalog over Finite Structures with Bounded Treewidth
<|reference_start|>Monadic Datalog over Finite Structures with Bounded Treewidth: Bounded treewidth and Monadic Second Order (MSO) logic have proved to be key concepts in establishing fixed-parameter tractability results. Indeed, by Courcelle's Theorem we know: Any property of finite structures, which is expressible by an MSO sentence, can be decided in linear time (data complexity) if the structures have bounded treewidth. In principle, Courcelle's Theorem can be applied directly to construct concrete algorithms by transforming the MSO evaluation problem into a tree language recognition problem. The latter can then be solved via a finite tree automaton (FTA). However, this approach has turned out to be problematical, since even relatively simple MSO formulae may lead to a ``state explosion'' of the FTA. In this work we propose monadic datalog (i.e., datalog where all intensional predicate symbols are unary) as an alternative method to tackle this class of fixed-parameter tractable problems. We show that if some property of finite structures is expressible in MSO then this property can also be expressed by means of a monadic datalog program over the structure plus the tree decomposition. Moreover, we show that the resulting fragment of datalog can be evaluated in linear time (both w.r.t. the program size and w.r.t. the data size). This new approach is put to work by devising new algorithms for the 3-Colorability problem of graphs and for the PRIMALITY problem of relational schemas (i.e., testing if some attribute in a relational schema is part of a key). We also report on experimental results with a prototype implementation.<|reference_end|>
arxiv
@article{gottlob2008monadic, title={Monadic Datalog over Finite Structures with Bounded Treewidth}, author={Georg Gottlob and Reinhard Pichler and Fang Wei}, journal={arXiv preprint arXiv:0809.3140}, year={2008}, archivePrefix={arXiv}, eprint={0809.3140}, primaryClass={cs.DB cs.CC cs.LO} }
gottlob2008monadic
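The abstract's central notion, monadic datalog (all intensional predicates unary), can be illustrated with a minimal bottom-up fixpoint evaluation in Python. This is a generic reachability example of the rule shape involved, not the paper's 3-Colorability or PRIMALITY programs:

```python
def monadic_fixpoint(edges, seeds):
    """Naive bottom-up evaluation of the monadic datalog program
         reach(X) :- seed(X).
         reach(Y) :- reach(X), edge(X, Y).
    The only intensional predicate, reach/1, is unary (monadic)."""
    reach = set(seeds)
    changed = True
    while changed:  # apply rules until the least fixpoint is reached
        changed = False
        for x, y in edges:
            if x in reach and y not in reach:
                reach.add(y)
                changed = True
    return reach

# Facts derivable from seed node 1 in a small edge relation.
print(monadic_fixpoint({(1, 2), (2, 3), (4, 5)}, {1}))  # {1, 2, 3}
```

The paper's point is that such programs, evaluated over a structure plus its tree decomposition, run in linear time; the naive loop above is only meant to show the unary-predicate rule shape.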
arxiv-4902
0809.3159
A Geometrical Description of the SINR Region of the Gaussian Interference Channel: the two and three-user case
<|reference_start|>A Geometrical Description of the SINR Region of the Gaussian Interference Channel: the two and three-user case: This paper addresses the problem of computing the achievable rates for two (and three) users sharing the same frequency band without coordination and thus interfering with each other. It is primarily related to the field of cognitive radio studies, as we look for the achievable increase in spectrum use efficiency. It is also strongly related to the long-standing problem of the capacity region of a Gaussian interference channel (GIC) because of the assumption of no user coordination (and the underlying assumption that all signals and interferences are Gaussian). We give a geometrical description of the SINR region for the two-user and three-user channels. This geometric approach provides a closed-form expression of the capacity region of the two-user interference channel and insight into the known optimal power allocation scheme.<|reference_end|>
arxiv
@article{bagayoko2008a, title={A Geometrical Description of the SINR Region of the Gaussian Interference Channel: the two and three-user case}, author={Abdoulaye Bagayoko and Patrick Tortelier}, journal={arXiv preprint arXiv:0809.3159}, year={2008}, number={1569150569}, archivePrefix={arXiv}, eprint={0809.3159}, primaryClass={cs.IT math.IT} }
bagayoko2008a
arxiv-4903
0809.3170
A New Framework of Multistage Hypothesis Tests
<|reference_start|>A New Framework of Multistage Hypothesis Tests: In this paper, we have established a general framework of multistage hypothesis tests which applies to arbitrarily many mutually exclusive and exhaustive composite hypotheses. Within the new framework, we have constructed specific multistage tests which rigorously control the risk of committing decision errors and are more efficient than previous tests in terms of average sample number and the number of sampling operations. Without truncation, the sample numbers of our testing plans are absolutely bounded.<|reference_end|>
arxiv
@article{chen2008a, title={A New Framework of Multistage Hypothesis Tests}, author={Xinjia Chen}, journal={arXiv preprint arXiv:0809.3170}, year={2008}, archivePrefix={arXiv}, eprint={0809.3170}, primaryClass={math.ST cs.LG math.PR stat.ME stat.TH} }
chen2008a
arxiv-4904
0809.3179
Kinematic and Dynamic Analyses of the Orthoglide 5-axis
<|reference_start|>Kinematic and Dynamic Analyses of the Orthoglide 5-axis: This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis, a three-dof translational manipulator, and ii) the Agile Eye, a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of CAD software. Then, the required motor performances are evaluated for some test trajectories. Finally, the motors are selected from the catalogue on the basis of these results.<|reference_end|>
arxiv
@article{ur-rehman2008kinematic, title={Kinematic and Dynamic Analyses of the Orthoglide 5-axis}, author={Raza Ur-Rehman (IRCCyN) and St\'ephane Caro (IRCCyN) and Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN)}, journal={Congress on Mechatronics, Le Grand-Bornand : France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.3179}, primaryClass={cs.RO} }
ur-rehman2008kinematic
arxiv-4905
0809.3180
Singularity Analysis of Limited-dof Parallel Manipulators using Grassmann-Cayley Algebra
<|reference_start|>Singularity Analysis of Limited-dof Parallel Manipulators using Grassmann-Cayley Algebra: This paper characterizes geometrically the singularities of limited DOF parallel manipulators. The geometric conditions associated with the dependency of six Pl\"ucker vectors of lines (finite and infinite) constituting the rows of the inverse Jacobian matrix are formulated using Grassmann-Cayley algebra. The manipulators under consideration do not need to have a passive spherical joint somewhere in each leg. This study is illustrated with three example robots.<|reference_end|>
arxiv
@article{kanaan2008singularity, title={Singularity Analysis of Limited-dof Parallel Manipulators using Grassmann-Cayley Algebra}, author={Daniel Kanaan (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)}, journal={11th International Symposium on Advances in Robot Kinematics, France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.3180}, primaryClass={cs.RO} }
kanaan2008singularity
arxiv-4906
0809.3181
Framework for Dynamic Evaluation of Muscle Fatigue in Manual Handling Work
<|reference_start|>Framework for Dynamic Evaluation of Muscle Fatigue in Manual Handling Work: Muscle fatigue is defined as the point at which the muscle is no longer able to sustain the required force or work output level. Overexertion of muscle force and muscle fatigue can induce acute and chronic pain in the human body. When muscle fatigue accumulates, the resulting functional disability can manifest as musculoskeletal disorders (MSD). There are several posture exposure analysis methods useful for rating MSD risks, but they are mainly based on static postures. Even in some fatigue evaluation methods, muscle fatigue evaluation is only available for static postures and is not suitable for dynamic working processes. Meanwhile, some existing muscle fatigue models based on physiological models cannot easily be used in industrial ergonomic evaluations. Since the external dynamic load is the most important factor causing muscle fatigue, we propose a new fatigue model under a framework for evaluating fatigue in dynamic working processes. Under this framework, a virtual reality system generates a virtual working environment, with which the worker can interact through haptic interfaces and an optical motion capture system. The motion and load information are collected and further processed to evaluate the overall work load of the worker, based on dynamic muscle fatigue models and other work evaluation criteria, and to provide new information characterizing the physical demands of the task during the design process.<|reference_end|>
arxiv
@article{ma2008framework, title={Framework for Dynamic Evaluation of Muscle Fatigue in Manual Handling Work}, author={Liang Ma (IRCCyN) and Fouad Bennis (IRCCyN) and Damien Chablat (IRCCyN) and Wei Zhang (DIE)}, journal={arXiv preprint arXiv:0809.3181}, year={2008}, archivePrefix={arXiv}, eprint={0809.3181}, primaryClass={cs.RO} }
ma2008framework
arxiv-4907
0809.3182
SINGULAB - A Graphical user Interface for the Singularity Analysis of Parallel Robots based on Grassmann-Cayley Algebra
<|reference_start|>SINGULAB - A Graphical user Interface for the Singularity Analysis of Parallel Robots based on Grassmann-Cayley Algebra: This paper presents SinguLab, a graphical user interface for the singularity analysis of parallel robots. The algorithm is based on Grassmann-Cayley algebra. The proposed tool is interactive and introduces the designer to the singularity analysis performed by this method, showing all the stages of the procedure and finally presenting the solution algebraically and graphically, while also allowing singularity verification for different robot poses.<|reference_end|>
arxiv
@article{ben-horin2008singulab, title={SINGULAB - A Graphical user Interface for the Singularity Analysis of Parallel Robots based on Grassmann-Cayley Algebra}, author={Patricia Ben-Horin (Technion) and Moshe Shoham (Technion) and St\'ephane Caro (IRCCyN) and Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN)}, journal={arXiv preprint arXiv:0809.3182}, year={2008}, archivePrefix={arXiv}, eprint={0809.3182}, primaryClass={cs.RO} }
ben-horin2008singulab
arxiv-4908
0809.3187
A Control Variate Approach for Improving Efficiency of Ensemble Monte Carlo
<|reference_start|>A Control Variate Approach for Improving Efficiency of Ensemble Monte Carlo: In this paper we present a new approach to control variates for improving computational efficiency of Ensemble Monte Carlo. We present the approach using simulation of paths of a time-dependent nonlinear stochastic equation. The core idea is to extract information at one or more nominal model parameters and use this information to gain estimation efficiency at neighboring parameters. This idea is the basis of a general strategy, called DataBase Monte Carlo (DBMC), for improving efficiency of Monte Carlo. In this paper we describe how this strategy can be implemented using the variance reduction technique of Control Variates (CV). We show that, once an initial setup cost for extracting information is incurred, this approach can lead to significant gains in computational efficiency. The initial setup cost is justified in projects that require a large number of estimations or in those that are to be performed under real-time constraints.<|reference_end|>
arxiv
@article{borogovac2008a, title={A Control Variate Approach for Improving Efficiency of Ensemble Monte Carlo}, author={T. Borogovac and F. J. Alexander and P. Vakili}, journal={arXiv preprint arXiv:0809.3187}, year={2008}, number={LA-UR-08-05399}, archivePrefix={arXiv}, eprint={0809.3187}, primaryClass={cs.CE cond-mat.stat-mech stat.CO} }
borogovac2008a
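The control-variate mechanism behind the DBMC strategy can be sketched on a toy integrand (a hypothetical example for illustration, not the paper's stochastic-equation setting): estimate E[exp(X)] for X ~ N(0,1), using X itself, whose mean is known to be 0, as the control variate.

```python
import math
import random
import statistics

def cv_estimate(n=100_000, seed=0):
    """Estimate E[exp(X)] for X ~ N(0,1) with X (known mean 0) as a control
    variate. True value: exp(1/2) ~= 1.6487."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [math.exp(x) for x in xs]
    # Estimate the optimal coefficient b* = Cov(Y, X) / Var(X) from the sample.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    b = cov / statistics.variance(xs)
    # Control-variate estimator: mean of Y - b * (X - E[X]), with E[X] = 0.
    return statistics.fmean(ys), statistics.fmean([y - b * x for x, y in zip(xs, ys)])

naive, cv = cv_estimate()
```

Both estimators are unbiased; the adjusted one has strictly smaller variance whenever Y and X are correlated, which is the source of the efficiency gain the abstract describes.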
arxiv-4909
0809.3204
Extended ASP tableaux and rule redundancy in normal logic programs
<|reference_start|>Extended ASP tableaux and rule redundancy in normal logic programs: We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in [Gebser&Schaub, ICLP 2006], with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof for a family of normal logic programs P_n for which ASP Tableaux has exponential-length minimal proofs with respect to n. Additionally, Extended ASP Tableaux yields interesting insight into the effect of program simplification on the lengths of proofs in ASP. In close connection with Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|>
arxiv
@article{järvisalo2008extended, title={Extended ASP tableaux and rule redundancy in normal logic programs}, author={Matti J\"arvisalo and Emilia Oikarinen}, journal={Theory and Practice of Logic Programming, 8(5-6):691-716, 2008}, year={2008}, doi={10.1017/S1471068408003578}, archivePrefix={arXiv}, eprint={0809.3204}, primaryClass={cs.AI} }
järvisalo2008extended
arxiv-4910
0809.3214
A Statistical Approach to Modeling Indian Classical Music Performance
<|reference_start|>A Statistical Approach to Modeling Indian Classical Music Performance: A raga is a melodic structure with fixed notes and a set of rules characterizing a certain mood endorsed through performance. By a vadi swar is meant that note which plays the most significant role in expressing the raga. A samvadi swar similarly is the second most significant note. However, the determination of their significance has an element of subjectivity and hence we are motivated to find some truths through an objective analysis. The paper proposes a probabilistic method of note detection and demonstrates how the relative frequency (relative number of occurrences of the pitch) of the more important notes stabilizes far more quickly than those of others. In addition, a count of distinct transitory and similar-looking non-transitory (fundamental) frequency movements (but possibly embedding distinct emotions!) between the notes is also taken, depicting the varnalankars or musical ornaments decorating the notes and note sequences as rendered by the artist. They reflect certain structural properties of the ragas. Several case studies are presented.<|reference_end|>
arxiv
@article{chakraborty2008a, title={A Statistical Approach to Modeling Indian Classical Music Performance}, author={Soubhik Chakraborty and Sandeep Singh Solanki and Sayan Roy and Shivee Chauhan and Sanjaya Shankar Tripathy and Kartik Mahto}, journal={arXiv preprint arXiv:0809.3214}, year={2008}, archivePrefix={arXiv}, eprint={0809.3214}, primaryClass={cs.SD stat.AP} }
chakraborty2008a
arxiv-4911
0809.3232
A Local Clustering Algorithm for Massive Graphs and its Application to Nearly-Linear Time Graph Partitioning
<|reference_start|>A Local Clustering Algorithm for Massive Graphs and its Application to Nearly-Linear Time Graph Partitioning: We study the design of local algorithms for massive graphs. A local algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster--a subset of vertices whose internal connections are significantly richer than its external connections--near a given vertex. The running time of our algorithm, when it finds a non-empty local cluster, is nearly linear in the size of the cluster it outputs. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and web-graphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number of edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly-linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly-linear time algorithm for solving linear systems in symmetric, diagonally-dominant matrices. The linear system solver also leads to a nearly linear-time algorithm for approximating the second-smallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.<|reference_end|>
arxiv
@article{spielman2008a, title={A Local Clustering Algorithm for Massive Graphs and its Application to Nearly-Linear Time Graph Partitioning}, author={Daniel A. Spielman and Shang-Hua Teng}, journal={arXiv preprint arXiv:0809.3232}, year={2008}, archivePrefix={arXiv}, eprint={0809.3232}, primaryClass={cs.DS cs.DM} }
spielman2008a
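The defining property of a local algorithm, touching only vertices near the seed rather than scanning the whole graph, can be illustrated with a closely related local-clustering primitive: the approximate personalized-PageRank push in the style of Andersen, Chung and Lang. This is an illustrative sketch, not the authors' own (random-walk-based) algorithm:

```python
def ppr_push(graph, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank from `seed` on an adjacency-dict graph.
    Only vertices near the seed are ever touched (a *local* computation)."""
    p, r = {}, {seed: 1.0}  # settled mass and residual mass
    queue = [seed]
    while queue:
        u = queue.pop()
        while r.get(u, 0.0) >= eps * len(graph[u]):
            ru = r[u]
            p[u] = p.get(u, 0.0) + alpha * ru        # settle a fraction at u
            r[u] = (1 - alpha) * ru / 2              # lazy self-loop share
            share = (1 - alpha) * ru / (2 * len(graph[u]))
            for v in graph[u]:                       # spread the rest to neighbors
                below = r.get(v, 0.0) < eps * len(graph[v])
                r[v] = r.get(v, 0.0) + share
                if below and r[v] >= eps * len(graph[v]):
                    queue.append(v)
    return p

# Two triangles joined by a bridge: mass from seed 0 stays mostly in {0, 1, 2}.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
p = ppr_push(g, 0)
```

Ranking vertices by p[v]/deg(v) and sweeping prefixes for low conductance then yields a local cluster, mirroring the cluster-extraction step the abstract describes.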
arxiv-4912
0809.3250
Using descriptive mark-up to formalize translation quality assessment
<|reference_start|>Using descriptive mark-up to formalize translation quality assessment: The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.<|reference_end|>
arxiv
@article{kutuzov2008using, title={Using descriptive mark-up to formalize translation quality assessment}, author={Andrey Kutuzov}, journal={Published in Russian in 'Translation industry and information supply in international business activities: materials of international conference' - Perm, 2008, pp. 90-101}, year={2008}, archivePrefix={arXiv}, eprint={0809.3250}, primaryClass={cs.CL} }
kutuzov2008using
arxiv-4913
0809.3273
Direct and Reverse Secret-Key Capacities of a Quantum Channel
<|reference_start|>Direct and Reverse Secret-Key Capacities of a Quantum Channel: We define the direct and reverse secret-key capacities of a memoryless quantum channel as the optimal rates that entanglement-based quantum key distribution protocols can reach by using a single forward classical communication (direct reconciliation) or a single feedback classical communication (reverse reconciliation). In particular, the reverse secret-key capacity can be positive for antidegradable channels, where no forward strategy is known to be secure. This property is explicitly shown in the continuous variable framework by considering arbitrary one-mode Gaussian channels.<|reference_end|>
arxiv
@article{pirandola2008direct, title={Direct and Reverse Secret-Key Capacities of a Quantum Channel}, author={Stefano Pirandola and Raul Garcia-Patron and Samuel L. Braunstein and Seth Lloyd}, journal={Phys. Rev. Lett. 102, 050503 (2009)}, year={2008}, doi={10.1103/PhysRevLett.102.050503}, archivePrefix={arXiv}, eprint={0809.3273}, primaryClass={quant-ph cs.CR cs.IT math.IT physics.optics} }
pirandola2008direct
arxiv-4914
0809.3276
Criteria on Utility Designing of Convex Optimization in FDMA Networks
<|reference_start|>Criteria on Utility Designing of Convex Optimization in FDMA Networks: In this paper, we investigate the network utility maximization problem in FDMA systems. We summarize a suite of criteria for designing utility functions so as to make the global optimization problem convex. After proposing the general form of the utility functions, we present examples of commonly used utility function forms that are consistent with the criteria proposed in this paper, which include the well-known proportional fairness function and the sigmoidal-like functions. In the second part of this paper, we use numerical results to demonstrate a case study based on the criteria mentioned above, which deals with the subcarrier scheduling problem with dynamic rate allocation in FDMA systems.<|reference_end|>
arxiv
@article{sun2008criteria, title={Criteria on Utility Designing of Convex Optimization in FDMA Networks}, author={Zheng Sun and Wenjun Xu and Zhiqiang He and Kai Niu}, journal={arXiv preprint arXiv:0809.3276}, year={2008}, archivePrefix={arXiv}, eprint={0809.3276}, primaryClass={cs.NI} }
sun2008criteria
arxiv-4915
0809.3279
Distributed Spiral Optimization in Wireless Sensor Networks without Fusion Centers
<|reference_start|>Distributed Spiral Optimization in Wireless Sensor Networks without Fusion Centers: A distributed spiral algorithm for distributed optimization in WSN is proposed. By forming a spiral-shape message passing scheme among clusters, without loss of estimation accuracy and convergence speed, the algorithm is proved to converge with a lower total transport cost than the distributed in-cluster algorithm.<|reference_end|>
arxiv
@article{sun2008distributed, title={Distributed Spiral Optimization in Wireless Sensor Networks without Fusion Centers}, author={Zheng Sun}, journal={arXiv preprint arXiv:0809.3279}, year={2008}, archivePrefix={arXiv}, eprint={0809.3279}, primaryClass={cs.NI} }
sun2008distributed
arxiv-4916
0809.3280
A Heuristic Scheduling Scheme in Multiuser OFDMA Networks
<|reference_start|>A Heuristic Scheduling Scheme in Multiuser OFDMA Networks: Conventional heterogeneous-traffic scheduling schemes utilize a zero-delay constraint for real-time services, which aims to minimize the average packet delay among real-time users. However, in lightly or moderately loaded networks this strategy is unnecessary and leads to low data throughput for non-real-time users. In this paper, we propose a heuristic scheduling scheme to solve this problem. The scheme measures and assigns scheduling priorities to both real-time and non-real-time users, and schedules the radio resources for the two user classes simultaneously. Simulation results show that the proposed scheme efficiently handles heterogeneous-traffic scheduling with diverse QoS requirements and alleviates the unfairness between real-time and non-real-time services under various traffic loads.<|reference_end|>
arxiv
@article{sun2008a, title={A Heuristic Scheduling Scheme in Multiuser OFDMA Networks}, author={Zheng Sun and Zhiqiang He and Ruochen Wang and Kai Niu}, journal={arXiv preprint arXiv:0809.3280}, year={2008}, archivePrefix={arXiv}, eprint={0809.3280}, primaryClass={cs.NI} }
sun2008a
arxiv-4917
0809.3283
Performance Comparison of Cooperative and Distributed Spectrum Sensing in Cognitive Radio
<|reference_start|>Performance Comparison of Cooperative and Distributed Spectrum Sensing in Cognitive Radio: In this paper, we compare the performances of cooperative and distributed spectrum sensing in wireless sensor networks. After introducing the basic problem, we describe two strategies: 1) a cooperative sensing strategy, which takes advantage of cooperation diversity gain to increase probability of detection and 2) a distributed sensing strategy, which by passing the results in an inter-node manner increases energy efficiency and fairness among nodes. Then, we compare the performances of the strategies in terms of three criteria: agility, energy efficiency, and robustness against SNR changes, and summarize the comparison. It shows that: 1) the non-cooperative strategy has the best fairness of energy consumption, 2) the cooperative strategy leads to the best agility, and 3) the distributed strategy leads to the lowest energy consumption and the best robustness against SNR changes.<|reference_end|>
arxiv
@article{sun2008performance, title={Performance Comparison of Cooperative and Distributed Spectrum Sensing in Cognitive Radio}, author={Zheng Sun and Wenjun Xu and Zhiqiang He and Kai Niu}, journal={arXiv preprint arXiv:0809.3283}, year={2008}, archivePrefix={arXiv}, eprint={0809.3283}, primaryClass={cs.NI} }
sun2008performance
arxiv-4918
0809.3285
Load Balancing Strategies to Solve Flowshop Scheduling on Parallel Computing
<|reference_start|>Load Balancing Strategies to Solve Flowshop Scheduling on Parallel Computing: This paper first presents a parallel solution for the Flowshop Scheduling Problem in a parallel environment, and then proposes a novel load balancing strategy. The proposed Proportional Fairness Strategy (PFS) takes the computational performance of computing process sets into account, and assigns additional load to computing nodes proportionally to their evaluated performance. In order to efficiently utilize the power of parallel resources, we also discuss the data structure used in communications among computational nodes and design an optimized data transfer strategy. This data transfer strategy, combined with the proposed load balancing strategy, has been implemented and tested on a supercomputer consisting of 86 CPUs using MPI as the middleware. The results show that the proposed PFS achieves better performance in terms of computing time than the existing Adaptive Contracting Within Neighborhood Strategy. We also show that the combination of the Proportional Fairness Strategy and the proposed data transfer strategy achieves an additional 13-15% improvement in parallel efficiency.<|reference_end|>
arxiv
@article{sun2008load, title={Load Balancing Strategies to Solve Flowshop Scheduling on Parallel Computing}, author={Zheng Sun and Xiaohong Huang and Yan Ma}, journal={arXiv preprint arXiv:0809.3285}, year={2008}, archivePrefix={arXiv}, eprint={0809.3285}, primaryClass={cs.NI} }
sun2008load
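The core of the Proportional Fairness Strategy, assigning load to nodes in proportion to their evaluated performance, can be sketched as follows. This is a minimal illustration with a hypothetical largest-remainder rounding rule, not the paper's exact implementation:

```python
def proportional_assign(total_tasks, perf):
    """Split `total_tasks` across nodes in proportion to measured performance.

    `perf` maps node -> throughput estimate. Integer rounding leftovers go to
    the nodes with the largest fractional shares (largest-remainder method).
    """
    total_perf = sum(perf.values())
    shares = {n: total_tasks * v / total_perf for n, v in perf.items()}
    alloc = {n: int(s) for n, s in shares.items()}
    leftover = total_tasks - sum(alloc.values())
    for n in sorted(perf, key=lambda n: shares[n] - alloc[n], reverse=True)[:leftover]:
        alloc[n] += 1
    return alloc

# Hypothetical cluster with node speeds 4:2:1 splitting 100 tasks.
print(proportional_assign(100, {"a": 4.0, "b": 2.0, "c": 1.0}))  # {'a': 57, 'b': 29, 'c': 14}
```

Re-running the split with updated performance estimates after each batch gives the adaptive behavior the abstract attributes to PFS.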
arxiv-4919
0809.3352
Generalized Prediction Intervals for Arbitrary Distributed High-Dimensional Data
<|reference_start|>Generalized Prediction Intervals for Arbitrary Distributed High-Dimensional Data: This paper generalizes the traditional statistical concept of prediction intervals for arbitrary probability density functions in high-dimensional feature spaces by introducing significance level distributions, which provide interval-independent probabilities for continuous random variables. The advantage of the transformation of a probability density function into a significance level distribution is that it enables one-class classification or outlier detection in a direct manner.<|reference_end|>
arxiv
@article{kuehn2008generalized, title={Generalized Prediction Intervals for Arbitrary Distributed High-Dimensional Data}, author={Steffen Kuehn}, journal={arXiv preprint arXiv:0809.3352}, year={2008}, archivePrefix={arXiv}, eprint={0809.3352}, primaryClass={cs.CV cs.AI cs.LG} }
kuehn2008generalized
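One plausible reading of the significance-level idea in the abstract is that the significance level of a point x under a density f is s(x) = P(f(X) <= f(x)). Under that assumption (ours, not necessarily the paper's exact definition), a minimal Monte Carlo sketch of the resulting outlier detector:

```python
import math
import random

def significance_level(x, density, samples):
    """s(x) = P(f(X) <= f(x)) for X ~ f, estimated from `samples` drawn from f.
    Small s(x) means x sits in a low-density region, i.e. an outlier."""
    fx = density(x)
    return sum(density(s) <= fx for s in samples) / len(samples)

# Standard normal density: x = 0 is maximally typical, x = 4 is an outlier.
f = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
s_typical = significance_level(0.0, f, samples)  # exactly 1.0: density is maximal at 0
s_outlier = significance_level(4.0, f, samples)  # tiny: roughly P(|X| >= 4)
```

Thresholding s(x) at a chosen significance level (e.g. 0.01) then gives a one-class classifier that is independent of any particular interval construction.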
arxiv-4920
0809.3357
More on Combinatorial Batch Codes
<|reference_start|>More on Combinatorial Batch Codes: Paterson, Stinson and Wei \cite{PSW} introduced combinatorial batch codes, which are a combinatorial description of batch codes. Batch codes were first presented by Ishai, Kushilevitz, Ostrovsky and Sahai \cite{IKOS} in STOC'04. In this paper we answer some of the questions put forward by Paterson, Stinson and Wei and give some results for the general case $t>1$, which was not studied by the authors.<|reference_end|>
arxiv
@article{ruj2008more, title={More on Combinatorial Batch Codes}, author={Sushmita Ruj and Bimal Roy}, journal={arXiv preprint arXiv:0809.3357}, year={2008}, archivePrefix={arXiv}, eprint={0809.3357}, primaryClass={cs.CR cs.DM} }
ruj2008more
arxiv-4921
0809.3365
Algebraic reduction for space-time codes based on quaternion algebras
<|reference_start|>Algebraic reduction for space-time codes based on quaternion algebras: In this paper we introduce a new right preprocessing method for the decoding of 2x2 algebraic STBCs, called algebraic reduction, which exploits the multiplicative structure of the code. The principle of the new reduction is to absorb part of the channel into the code, by approximating the channel matrix with an element of the maximal order of the algebra. We prove that algebraic reduction attains the receive diversity when followed by a simple ZF detection. Simulation results for the Golden Code show that using MMSE-GDFE left preprocessing, algebraic reduction with simple ZF detection has a loss of only 3 dB with respect to ML decoding.<|reference_end|>
arxiv
@article{luzzi2008algebraic, title={Algebraic reduction for space-time codes based on quaternion algebras}, author={Laura Luzzi and Ghaya Rekaya-Ben Othman and Jean-Claude Belfiore}, journal={arXiv preprint arXiv:0809.3365}, year={2008}, archivePrefix={arXiv}, eprint={0809.3365}, primaryClass={cs.IT math.IT} }
luzzi2008algebraic
arxiv-4922
0809.3370
Achievability of the Rate $1/2\log(1+\es)$ in the Discrete-Time Poisson Channel
<|reference_start|>Achievability of the Rate $1/2\log(1+\es)$ in the Discrete-Time Poisson Channel: A simple lower bound to the capacity of the discrete-time Poisson channel with average energy $\es$ is derived. The rate ${1/2}\log(1+\es)$ is shown to be the generalized mutual information of a modified minimum-distance decoder, when the input follows a gamma distribution of parameter 1/2 and mean $\es$.<|reference_end|>
arxiv
@article{martinez2008achievability, title={Achievability of the Rate ${1/2}\log(1+\es)$ in the Discrete-Time Poisson Channel}, author={Alfonso Martinez}, journal={arXiv preprint arXiv:0809.3370}, year={2008}, archivePrefix={arXiv}, eprint={0809.3370}, primaryClass={cs.IT math.IT} }
martinez2008achievability
arxiv-4923
0809.3384
Changing Assembly Modes without Passing Parallel Singularities in Non-Cuspidal 3-R\underline{P}R Planar Parallel Robots
<|reference_start|>Changing Assembly Modes without Passing Parallel Singularities in Non-Cuspidal 3-R\underlinePR Planar Parallel Robots: This paper demonstrates that any general 3-DOF three-legged planar parallel robot with extensible legs can change assembly modes without passing through parallel singularities (configurations where the mobile platform loses its stiffness). While the results are purely theoretical, this paper questions the very definition of parallel singularities.<|reference_end|>
arxiv
@article{bonev2008changing, title={Changing Assembly Modes without Passing Parallel Singularities in Non-Cuspidal 3-R\underline{P}R Planar Parallel Robots}, author={Ilian Bonev (GPA) and S\'ebastien Briot (GPA) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)}, journal={arXiv preprint arXiv:0809.3384}, year={2008}, archivePrefix={arXiv}, eprint={0809.3384}, primaryClass={cs.RO} }
bonev2008changing
arxiv-4924
0809.3415
Ten weeks in the life of an eDonkey server
<|reference_start|>Ten weeks in the life of an eDonkey server: This paper presents a capture of the queries managed by an eDonkey server during almost 10 weeks, leading to the observation of almost 9 billion messages involving almost 90 million users and more than 275 million distinct files. Acquisition and management of such data raises several challenges, which we discuss as well as the solutions we developed. We obtain a very rich dataset, orders of magnitude larger than previously available ones, which we provide for public use. We finally present basic analysis of the obtained data, which already gives evidence of non-trivial features.<|reference_end|>
arxiv
@article{aidouni2008ten, title={Ten weeks in the life of an eDonkey server}, author={Frederic Aidouni and Matthieu Latapy and Clemence Magnien}, journal={arXiv preprint arXiv:0809.3415}, year={2008}, archivePrefix={arXiv}, eprint={0809.3415}, primaryClass={cs.NI} }
aidouni2008ten
arxiv-4925
0809.3447
An Exploratory Study of Calendar Use
<|reference_start|>An Exploratory Study of Calendar Use: In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.<|reference_end|>
arxiv
@article{tungare2008an, title={An Exploratory Study of Calendar Use}, author={Manas Tungare, Manuel Perez-Quinones, Alyssa Sams}, journal={arXiv preprint arXiv:0809.3447}, year={2008}, archivePrefix={arXiv}, eprint={0809.3447}, primaryClass={cs.HC cs.IR} }
tungare2008an
arxiv-4926
0809.3479
Fermions and Loops on Graphs I Loop Calculus for Determinant
<|reference_start|>Fermions and Loops on Graphs I Loop Calculus for Determinant: This paper is the first in the series devoted to evaluation of the partition function in statistical models on graphs with loops in terms of the Berezin/fermion integrals. The paper focuses on a representation of the determinant of a square matrix in terms of a finite series, where each term corresponds to a loop on the graph. The representation is based on a fermion version of the Loop Calculus, previously introduced by the authors for graphical models with finite alphabets. Our construction contains two levels. First, we represent the determinant in terms of an integral over anti-commuting Grassman variables, with some reparametrization/gauge freedom hidden in the formulation. Second, we show that a special choice of the gauge, called BP (Bethe-Peierls or Belief Propagation) gauge, yields the desired loop representation. The set of gauge-fixing BP conditions is equivalent to the Gaussian BP equations, discussed in the past as efficient (linear scaling) heuristics for estimating the covariance of a sparse positive matrix.<|reference_end|>
arxiv
@article{chernyak2008fermions, title={Fermions and Loops on Graphs. I. Loop Calculus for Determinant}, author={Vladimir Y. Chernyak and Michael Chertkov}, journal={J.Stat.Mech.0812:P12011,2008}, year={2008}, doi={10.1088/1742-5468/2008/12/P12011}, number={LA-UR-08-05537}, archivePrefix={arXiv}, eprint={0809.3479}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT hep-th math.IT} }
chernyak2008fermions
arxiv-4927
0809.3481
Fermions and Loops on Graphs II Monomer-Dimer Model as Series of Determinants
<|reference_start|>Fermions and Loops on Graphs II Monomer-Dimer Model as Series of Determinants: We continue the discussion of the fermion models on graphs that started in the first paper of the series. Here we introduce a Graphical Gauge Model (GGM) and show that: (a) it can be stated as an average/sum of a determinant defined on the graph over a $\mathbb{Z}_{2}$ (binary) gauge field; (b) it is equivalent to the Monomer-Dimer (MD) model on the graph; (c) the partition function of the model allows an explicit expression in terms of a series over disjoint directed cycles, where each term is a product of local contributions along the cycle and the determinant of a matrix defined on the remainder of the graph (excluding the cycle). We also establish a relation between the MD model on the graph and the determinant series, discussed in the first paper, however, considered using a simple non-Belief-Propagation choice of the gauge. We conclude with a discussion of possible analytic and algorithmic consequences of these results, as well as related questions and challenges.<|reference_end|>
arxiv
@article{chernyak2008fermions, title={Fermions and Loops on Graphs. II. Monomer-Dimer Model as Series of Determinants}, author={Vladimir Y. Chernyak and Michael Chertkov}, journal={J.Stat.Mech.0812:P12012,2008}, year={2008}, doi={10.1088/1742-5468/2008/12/P12012}, number={LA-UR-08-05678}, archivePrefix={arXiv}, eprint={0809.3481}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT hep-th math.IT} }
chernyak2008fermions
arxiv-4928
0809.3485
A First Step to Convolutive Sparse Representation
<|reference_start|>A First Step to Convolutive Sparse Representation: In this paper an extension of the sparse decomposition problem is considered and an algorithm for solving it is presented. In this extension, it is known that one of the shifted versions of a signal s (not necessarily the original signal itself) has a sparse representation on an overcomplete dictionary, and we are looking for the sparsest representation among the representations of all the shifted versions of s. The proposed algorithm then simultaneously finds the amount of the required shift and the sparse representation. Experimental results emphasize the performance of our algorithm.<|reference_end|>
arxiv
@article{firouzi2008a, title={A First Step to Convolutive Sparse Representation}, author={Hamed Firouzi, Massoud Babaie-Zadeh, Aria Ghasemian, Christian Jutten}, journal={arXiv preprint arXiv:0809.3485}, year={2008}, archivePrefix={arXiv}, eprint={0809.3485}, primaryClass={cs.MM cs.OH} }
firouzi2008a
arxiv-4929
0809.3503
JDATATRANS for Array Obfuscation in Java Source Code to Defeat Reverse Engineering from Decompiled Codes
<|reference_start|>JDATATRANS for Array Obfuscation in Java Source Code to Defeat Reverse Engineering from Decompiled Codes: Software obfuscation, or obscuring software, is an approach to defeat the practice of reverse engineering software in order to use its functionality illegally in the development of other software. Java applications are more amenable to reverse engineering and re-engineering attacks through methods such as decompilation because Java class files store the program in a semi-compiled form called 'byte codes'. The existing obfuscation systems obfuscate the Java class files. Obfuscated source code produces obfuscated byte codes, and hence two-level obfuscation (source code and byte code level) of the program makes it more resilient to reverse engineering attacks. But source code obfuscation is much more difficult due to the richer set of programming constructs and the scope of the different variables used in the program, and only very little progress has been made on this front. Hence programmers resort to ad hoc manual ways of obscuring their programs, which makes them difficult to maintain and use. To address this issue partially, we developed a user-friendly tool JDATATRANS to obfuscate Java source code by obscuring array usages. Using various array restructuring techniques such as 'array splitting', 'array folding' and 'array flattening', in addition to constant hiding, our system obfuscates the input Java source code and produces an obfuscated Java source code that is functionally equivalent to the input program. We also perform a number of experiments to measure the potency, resilience and cost incurred by our tool.<|reference_end|>
arxiv
@article{sivadasan2008jdatatrans, title={JDATATRANS for Array Obfuscation in Java Source Code to Defeat Reverse Engineering from Decompiled Codes}, author={Praveen Sivadasan, P Sojan Lal, Naveen Sivadasan}, journal={arXiv preprint arXiv:0809.3503}, year={2008}, archivePrefix={arXiv}, eprint={0809.3503}, primaryClass={cs.CR} }
sivadasan2008jdatatrans
arxiv-4930
0809.3527
Inferring Company Structure from Limited Available Information
<|reference_start|>Inferring Company Structure from Limited Available Information: In this paper we present several algorithmic techniques for inferring the structure of a company when only a limited amount of information is available. We consider problems with two types of inputs: the number of pairs of employees with a given property and restricted information about the hierarchical structure of the company. We provide dynamic programming and greedy algorithms for these problems.<|reference_end|>
arxiv
@article{andreica2008inferring, title={Inferring Company Structure from Limited Available Information}, author={Mugurel Ionut Andreica, Angela Andreica, Romulus Andreica}, journal={International Symposium on Social Development and Economic Performance, Satu Mare : Romania (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.3527}, primaryClass={cs.DS} }
andreica2008inferring
arxiv-4931
0809.3528
Locating Restricted Facilities on Binary Maps
<|reference_start|>Locating Restricted Facilities on Binary Maps: In this paper we consider several facility location problems with applications to cost and social welfare optimization, when the area map is encoded as a binary (0,1) mxn matrix. We present algorithmic solutions for all the problems. Some cases are too particular to be used in practical situations, but they are at least a starting point for more generic solutions.<|reference_end|>
arxiv
@article{andreica2008locating, title={Locating Restricted Facilities on Binary Maps}, author={Mugurel Ionut Andreica, Cristina Teodora Andreica, Madalina Ecaterina Andreica}, journal={International Symposium on Social Development and Economic Performance, Satu Mare : Romania (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.3528}, primaryClass={cs.DS} }
andreica2008locating
arxiv-4932
0809.3540
A Note on the Equivalence of Gibbs Free Energy and Information Theoretic Capacity
<|reference_start|>A Note on the Equivalence of Gibbs Free Energy and Information Theoretic Capacity: The minimization of Gibbs free energy is based on the changes in work and free energy that occur in a physical or chemical system. The maximization of mutual information, the capacity, of a noisy channel is determined based on the marginal probabilities and conditional entropies associated with a communications system. As different as the procedures might first appear, through the exploration of a simple, "dual use" Ising model, it is seen that the two concepts are in fact the same. In particular, the case of a binary symmetric channel is calculated in detail.<|reference_end|>
arxiv
@article{ford2008a, title={A Note on the Equivalence of Gibbs Free Energy and Information Theoretic Capacity}, author={David Ford}, journal={arXiv preprint arXiv:0809.3540}, year={2008}, archivePrefix={arXiv}, eprint={0809.3540}, primaryClass={cond-mat.stat-mech cs.IT math.IT} }
ford2008a
arxiv-4933
0809.3542
A High Performance Memory Database for Web Application Caches
<|reference_start|>A High Performance Memory Database for Web Application Caches: This paper presents the architecture and characteristics of a memory database intended to be used as a cache engine for web applications. Primary goals of this database are speed and efficiency while running on SMP systems with several CPU cores (four and more). A secondary goal is the support for simple metadata structures associated with cached data that can aid in efficient use of the cache. Due to these goals, some data structures and algorithms normally associated with this field of computing needed to be adapted to the new environment.<|reference_end|>
arxiv
@article{voras2008a, title={A High Performance Memory Database for Web Application Caches}, author={Ivan Voras, Danko Basch, Mario Zagar}, journal={arXiv preprint arXiv:0809.3542}, year={2008}, archivePrefix={arXiv}, eprint={0809.3542}, primaryClass={cs.NI cs.DC} }
voras2008a
arxiv-4934
0809.3546
Universal Secure Network Coding via Rank-Metric Codes
<|reference_start|>Universal Secure Network Coding via Rank-Metric Codes: The problem of securing a network coding communication system against an eavesdropper adversary is considered. The network implements linear network coding to deliver n packets from source to each receiver, and the adversary can eavesdrop on \mu arbitrarily chosen links. The objective is to provide reliable communication to all receivers, while guaranteeing that the source information remains information-theoretically secure from the adversary. A coding scheme is proposed that can achieve the maximum possible rate of n-\mu packets. The scheme, which is based on rank-metric codes, has the distinctive property of being universal: it can be applied on top of any communication network without requiring knowledge of or any modifications on the underlying network code. The only requirement of the scheme is that the packet length be at least n, which is shown to be strictly necessary for universal communication at the maximum rate. A further scenario is considered where the adversary is allowed not only to eavesdrop but also to inject up to t erroneous packets into the network, and the network may suffer from a rank deficiency of at most \rho. In this case, the proposed scheme can be extended to achieve the rate of n-\rho-2t-\mu packets. This rate is shown to be optimal under the assumption of zero-error communication.<|reference_end|>
arxiv
@article{silva2008universal, title={Universal Secure Network Coding via Rank-Metric Codes}, author={Danilo Silva, Frank R. Kschischang}, journal={IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 1124-1135, Feb. 2011}, year={2008}, doi={10.1109/TIT.2010.2090212}, archivePrefix={arXiv}, eprint={0809.3546}, primaryClass={cs.IT cs.CR math.IT} }
silva2008universal
arxiv-4935
0809.3554
The Approximate Capacity of the Many-to-One and One-to-Many Gaussian Interference Channels
<|reference_start|>The Approximate Capacity of the Many-to-One and One-to-Many Gaussian Interference Channels: Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within one bit/s/Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal scale. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal scale.<|reference_end|>
arxiv
@article{bresler2008the, title={The Approximate Capacity of the Many-to-One and One-to-Many Gaussian Interference Channels}, author={Guy Bresler, Abhay Parekh, and David Tse}, journal={arXiv preprint arXiv:0809.3554}, year={2008}, doi={10.1109/TIT.2010.2054590}, archivePrefix={arXiv}, eprint={0809.3554}, primaryClass={cs.IT math.IT} }
bresler2008the
arxiv-4936
0809.3565
On fractionality of the path packing problem
<|reference_start|>On fractionality of the path packing problem: In this paper, we study fractional multiflows in undirected graphs. A fractional multiflow in a graph G with a node subset T, called terminals, is a collection of weighted paths with ends in T such that the total weight of paths traversing each edge does not exceed 1. The well-known fractional path packing problem consists of maximizing the total weight of paths with ends in a subset S of TxT over all fractional multiflows. Together, G, T and S form a network. A network is an Eulerian network if all nodes in N\T have even degrees. The term "fractionality" was defined for the fractional path packing problem by A. Karzanov as the smallest natural number D such that there exists a solution to the problem that becomes integer-valued when multiplied by D. A. Karzanov defined the class of Eulerian networks in terms of T and S, outside which D is infinite, and proved that within this class D can be 1, 2 or 4. He conjectured that D should be 1 or 2 for this class of networks. In this paper we prove this conjecture.<|reference_end|>
arxiv
@article{vanetik2008on, title={On fractionality of the path packing problem}, author={N. Vanetik}, journal={arXiv preprint arXiv:0809.3565}, year={2008}, doi={10.1007/s10878-011-9405-3}, archivePrefix={arXiv}, eprint={0809.3565}, primaryClass={cs.DM} }
vanetik2008on
arxiv-4937
0809.3571
Evaluation of an Intelligent Assistive Technology for Voice Navigation of Spreadsheets
<|reference_start|>Evaluation of an Intelligent Assistive Technology for Voice Navigation of Spreadsheets: An integral part of spreadsheet auditing is navigation. For sufferers of Repetitive Strain Injury who need to use voice recognition technology, this navigation can be highly problematic. To counter this, the authors have developed an intelligent voice navigation system, iVoice, which replicates common spreadsheet auditing behaviours through simple voice commands. This paper outlines the iVoice system and summarizes the results of a study evaluating iVoice against a leading voice recognition technology.<|reference_end|>
arxiv
@article{flood2008evaluation, title={Evaluation of an Intelligent Assistive Technology for Voice Navigation of Spreadsheets}, author={Derek Flood, Kevin Mc Daid, Fergal Mc Caffery, Brian Bishop}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 69-78 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3571}, primaryClass={cs.HC} }
flood2008evaluation
arxiv-4938
0809.3574
Spreadsheet modelling for solving combinatorial problems: The vendor selection problem
<|reference_start|>Spreadsheet modelling for solving combinatorial problems: The vendor selection problem: Spreadsheets have grown up and become very powerful and easy-to-use tools for applying analytical techniques to business problems. Operations managers, production managers, planners and schedulers can work with them in developing solid and practical Do-It-Yourself Decision Support Systems. Small and medium-size organizations can apply OR methodologies without the presence of specialized software and trained personnel, which in many cases they cannot afford anyway. This paper examines an efficient approach to solving combinatorial programming problems with the use of spreadsheets. A practical application, which demonstrates the approach, concerns the development of a spreadsheet-based DSS for the Multi Item Procurement Problem with Fixed Vendor Cost. The DSS has been built using exclusively standard spreadsheet features and can solve real problems of substantial size. The benefits and limitations of the approach are also discussed.<|reference_end|>
arxiv
@article{ipsilandis2008spreadsheet, title={Spreadsheet modelling for solving combinatorial problems: The vendor selection problem}, author={Pandelis G. Ipsilandis}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 95-107 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3574}, primaryClass={cs.SE cs.HC} }
ipsilandis2008spreadsheet
arxiv-4939
0809.3577
Dynamic tree algorithms
<|reference_start|>Dynamic tree algorithms: In this paper, a general tree algorithm processing a random flow of arrivals is analyzed. Capetanakis--Tsybakov--Mikhailov's protocol in the context of communication networks with random access is an example of such an algorithm. In computer science, this corresponds to a trie structure with a dynamic input. Mathematically, it is related to a stopped branching process with exogenous arrivals (immigration). Under quite general assumptions on the distribution of the number of arrivals and on the branching procedure, it is shown that there exists a positive constant $\lambda_c$ so that if the arrival rate is smaller than $\lambda_c$, then the algorithm is stable under the flow of requests, that is, that the total size of an associated tree is integrable. At the same time, a gap in the earlier proofs of stability in the literature is fixed. When the arrivals are Poisson, an explicit characterization of $\lambda_c$ is given. Under the stability condition, the asymptotic behavior of the average size of a tree starting with a large number of individuals is analyzed. The results are obtained with the help of a probabilistic rewriting of the functional equations describing the dynamics of the system. The proofs use extensively this stochastic background throughout the paper. In this analysis, two basic limit theorems play a key role: the renewal theorem and the convergence to equilibrium of an auto-regressive process with a moving average.<|reference_end|>
arxiv
@article{mohamed2008dynamic, title={Dynamic tree algorithms}, author={Hanène Mohamed, Philippe Robert}, journal={Annals of Applied Probability 2010, Vol. 20, No. 1, 26-51}, year={2008}, doi={10.1214/09-AAP617}, number={IMS-AAP-AAP617}, archivePrefix={arXiv}, eprint={0809.3577}, primaryClass={math.PR cs.DS} }
mohamed2008dynamic
arxiv-4940
0809.3584
Spreadsheet Components For All
<|reference_start|>Spreadsheet Components For All: We have prototyped a "spreadsheet component repository" Web site, from which users can copy "components" into their own Excel or Google spreadsheets. Components are collections of cells containing formulae: in real life, they would do useful calculations that many practitioners find hard to program, and would be rigorously tested and documented. Crucially, the user can tell the repository which cells in their spreadsheet to use for a component's inputs and outputs. The repository will then reshape the component to fit. A single component can therefore be used in many different sizes and shapes of spreadsheet. We hope to set up a spreadsheet equivalent of the high-quality numerical subroutine libraries that revolutionised scientific computing, but where instead of subroutines, the library contains such components.<|reference_end|>
arxiv
@article{paine2008spreadsheet, title={Spreadsheet Components For All}, author={Jocelyn Paine}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 109-127 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3584}, primaryClass={cs.SE cs.HC} }
paine2008spreadsheet
arxiv-4941
0809.3586
A Primer on Spreadsheet Analytics
<|reference_start|>A Primer on Spreadsheet Analytics: This paper provides guidance to an analyst who wants to extract insight from a spreadsheet model. It discusses the terminology of spreadsheet analytics, how to prepare a spreadsheet model for analysis, and a hierarchy of analytical techniques. These techniques include sensitivity analysis, tornado charts, and backsolving (or goal-seeking). This paper presents native-Excel approaches for automating these techniques, and discusses add-ins that are even more efficient. Spreadsheet optimization and spreadsheet Monte Carlo simulation are briefly discussed. The paper concludes by calling for empirical research, and by describing desired features of spreadsheet sensitivity analysis and spreadsheet optimization add-ins.<|reference_end|>
arxiv
@article{grossman2008a, title={A Primer on Spreadsheet Analytics}, author={Thomas A. Grossman}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 129-140 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3586}, primaryClass={cs.SE cs.HC} }
grossman2008a
arxiv-4942
0809.3587
Spreadsheet End-User Behaviour Analysis
<|reference_start|>Spreadsheet End-User Behaviour Analysis: To aid the development of spreadsheet debugging tools, a knowledge of end-users' natural behaviour within the Excel environment would be advantageous. This paper details the design and application of a novel data acquisition tool, which can be used for the unobtrusive recording of end-users' mouse, keyboard and Excel-specific actions during the debugging of Excel spreadsheets. A debugging experiment was conducted using this data acquisition tool, and based on analysis of end-users' performance and behaviour data, the authors developed a "spreadsheet cell coverage feedback" debugging tool. Results from the debugging experiment are presented in terms of end-user debugging performance and behaviour, and the outcomes of an evaluation experiment with the debugging tool are detailed.<|reference_end|>
arxiv
@article{bishop2008spreadsheet, title={Spreadsheet End-User Behaviour Analysis}, author={Brian Bishop, Kevin McDaid}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 141-152 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3587}, primaryClass={cs.SE cs.HC} }
bishop2008spreadsheet
arxiv-4943
0809.3595
Controlling End User Computing Applications - a case study
<|reference_start|>Controlling End User Computing Applications - a case study: We report the results of a project to control the use of end user computing tools for business critical applications in a banking environment. Several workstreams were employed in order to bring about a cultural change within the bank towards the use of spreadsheets and other end-user tools, covering policy development, awareness and skills training, inventory monitoring, user licensing, key risk metrics and mitigation approaches. The outcomes of these activities are discussed, and conclusions are drawn as to the need for appropriate organisational models to guide the use of these tools.<|reference_end|>
arxiv
@article{chambers2008controlling, title={Controlling End User Computing Applications - a case study}, author={Jamie Chambers, John Hamill}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 153-161 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3595}, primaryClass={cs.HC} }
chambers2008controlling
arxiv-4944
0809.3597
Spreadsheets: Aiming the Accountant's Hammer to Hit the Nail on the Head
<|reference_start|>Spreadsheets: Aiming the Accountant's Hammer to Hit the Nail on the Head: Accounting and Finance (A&F) Professionals are arguably the most loyal and concentrated population of spreadsheet users. The work that they perform in spreadsheets has the most significant impact on financial data and business processes within global organizations today. Spreadsheets offer the flexibility and ease of use of a desktop application, combined with the power to perform complex data analysis. They are also the lowest cost business IT tool when stacked up against other functional tools. As a result, spreadsheets are used to support critical business processes in most organizations. In fact, research indicates that over half of financial management reporting is performed with spreadsheets by an accounting and finance professional. A disparity exists in the business world between the importance of spreadsheets on financial data (created by A&F Professionals) and the resources devoted to: The development and oversight of global spreadsheet standards; A recognized and accredited certification in spreadsheet proficiency; Corporate sponsored and required training; Awareness of emerging technologies as it relates to spreadsheet use. This management paper focuses on the current topics relevant to the largest user group (A&F Professionals) of the most widely used financial software application, spreadsheets, also known as the accountant's hammer.<|reference_end|>
arxiv
@article{alliy2008spreadsheets:, title={Spreadsheets: Aiming the Accountant's Hammer to Hit the Nail on the Head}, author={Mbwana Alliy, Patty Brown}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 163-170 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3597}, primaryClass={cs.HC} }
alliy2008spreadsheets:
arxiv-4945
0809.3600
On the Capacity Improvement of Multicast Traffic with Network Coding
<|reference_start|>On the Capacity Improvement of Multicast Traffic with Network Coding: In this paper, we study the contribution of network coding (NC) in improving the multicast capacity of random wireless ad hoc networks when nodes are endowed with multi-packet transmission (MPT) and multi-packet reception (MPR) capabilities. We show that a per session throughput capacity of $\Theta(nT^{3}(n))$, where $n$ is the total number of nodes and T(n) is the communication range, can be achieved as a tight bound when each session contains a constant number of sinks. Surprisingly, an identical order capacity can be achieved when nodes have only MPR and MPT capabilities. This result proves that NC does not contribute to the order capacity of multicast traffic in wireless ad hoc networks when MPR and MPT are used in the network. The result is in sharp contrast to the general belief (conjecture) that NC improves the order capacity of multicast. Furthermore, if the communication range is selected to guarantee the connectivity in the network, i.e., $T(n)\ge \Theta(\sqrt{\log n/n})$, then the combination of MPR and MPT achieves a throughput capacity of $\Theta(\frac{\log^{{3/2}} n}{\sqrt{n}})$ which provides an order capacity gain of $\Theta(\log^2 n)$ compared to the point-to-point multicast capacity with the same number of destinations.<|reference_end|>
arxiv
@article{wang2008on, title={On the Capacity Improvement of Multicast Traffic with Network Coding}, author={Zheng Wang, Shirish Karande, Hamid R. Sadjadpour and J.J. Garcia-Luna-Aceves}, journal={arXiv preprint arXiv:0809.3600}, year={2008}, archivePrefix={arXiv}, eprint={0809.3600}, primaryClass={cs.IT math.IT} }
wang2008on
arxiv-4946
0809.3609
Information and Data Quality in Spreadsheets
<|reference_start|>Information and Data Quality in Spreadsheets: The quality of the data in spreadsheets is less discussed than the structural integrity of the formulas. Yet it is an area of great interest to the owners and users of the spreadsheet. This paper provides an overview of Information Quality (IQ) and Data Quality (DQ) with specific reference to how data is sourced, structured, and presented in spreadsheets.<|reference_end|>
arxiv
@article{o'beirne2008information, title={Information and Data Quality in Spreadsheets}, author={Patrick O'Beirne}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 171-185 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3609}, primaryClass={cs.SE cs.HC} }
o'beirne2008information
arxiv-4947
0809.3612
Overview and main results of the DidaTab project
<|reference_start|>Overview and main results of the DidaTab project: The DidaTab project (Didactics of Spreadsheet, teaching and learning spreadsheets) is a three year project (2005-2007) funded by the French Ministry of Research and dedicated to the study of personal and classroom uses of spreadsheets in the French context, focussing on the processes of appropriation and uses by secondary school students. In this paper, we present an overview of the project, briefly report the studies performed in the framework of the DidaTab project, and give the main results we obtained. We then explore the new research tracks we intend to develop, more in connection with EuSpRIG. Our main result is that the use of spreadsheets during secondary education (grade 6 to 12) is rather sparse for school work (and even rarer at home) and that student competencies are weak. Curricula have to be reviewed to include more training on dynamic tabular tools (including database queries) in order to ensure sufficient mastery of computer tools that have become necessary in many educational activities.<|reference_end|>
arxiv
@article{blondel2008overview, title={Overview and main results of the DidaTab project}, author={Francois-Marie Blondel, Eric Bruillard, Francoise Tort}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 187-198 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3612}, primaryClass={cs.HC} }
blondel2008overview
arxiv-4948
0809.3613
Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors
<|reference_start|>Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors: The purpose of this paper is to revisit the Panko-Halverson taxonomy of spreadsheet errors and suggest revisions. There are several reasons for doing so: First, the taxonomy has been widely used. Therefore, it deserves scrutiny; Second, the taxonomy has not been widely available in its original form and most users refer to secondary sources. Consequently, they often equate the taxonomy with the simplified extracts used in particular experiments or field studies; Third, perhaps as a consequence, most users use only a fraction of the taxonomy. In particular, they tend not to use the taxonomy's life-cycle dimension; Fourth, the taxonomy has been tested against spreadsheets in experiments and spreadsheets in operational use. It is time to review how it has fared in these tests; Fifth, the taxonomy was based on the types of spreadsheet errors that were known to the authors in the mid-1990s. Subsequent experience has shown that the taxonomy needs to be extended for situations beyond those original experiences; Sixth, the omission category in the taxonomy has proven to be too narrow. Although this paper will focus on the Panko-Halverson taxonomy, this does not mean that it is the only possible error taxonomy or even the best error taxonomy.<|reference_end|>
arxiv
@article{panko2008revisiting, title={Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors}, author={Raymond R. Panko}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 199-220 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0809.3613}, primaryClass={cs.SE cs.HC} }
panko2008revisiting
arxiv-4949
0809.3614
Improved Monotone Circuit Depth Upper Bound for Directed Graph Reachability
<|reference_start|>Improved Monotone Circuit Depth Upper Bound for Directed Graph Reachability: We prove that the directed graph reachability problem (transitive closure) can be solved by monotone fan-in 2 boolean circuits of depth (1/2+o(1))(log n)^2, where n is the number of nodes. This improves the previous known upper bound (1+o(1))(log n)^2. The proof is non-constructive, but we give a constructive proof of the upper bound (7/8+o(1))(log n)^2.<|reference_end|>
arxiv
@article{volkov2008improved, title={Improved Monotone Circuit Depth Upper Bound for Directed Graph Reachability}, author={Sergey Volkov}, journal={arXiv preprint arXiv:0809.3614}, year={2008}, archivePrefix={arXiv}, eprint={0809.3614}, primaryClass={cs.CC} }
volkov2008improved
arxiv-4950
0809.3618
Robust Near-Isometric Matching via Structured Learning of Graphical Models
<|reference_start|>Robust Near-Isometric Matching via Structured Learning of Graphical Models: Models for near-rigid shape matching are typically based on distance-related features, in order to infer matches that are consistent with the isometric assumption. However, real shapes from image datasets, even when expected to be related by "almost isometric" transformations, are actually subject not only to noise but also, to some limited degree, to variations in appearance and scale. In this paper, we introduce a graphical model that parameterises appearance, distance, and angle features and we learn all of the involved parameters via structured prediction. The outcome is a model for near-rigid shape matching which is robust in the sense that it is able to capture the possibly limited but still important scale and appearance variations. Our experimental results reveal substantial improvements upon recent successful models, while maintaining similar running times.<|reference_end|>
arxiv
@article{mcauley2008robust, title={Robust Near-Isometric Matching via Structured Learning of Graphical Models}, author={Julian J. McAuley, Tiberio S. Caetano, Alexander J. Smola}, journal={arXiv preprint arXiv:0809.3618}, year={2008}, archivePrefix={arXiv}, eprint={0809.3618}, primaryClass={cs.CV cs.LG} }
mcauley2008robust
arxiv-4951
0809.3646
Approximating acyclicity parameters of sparse hypergraphs
<|reference_start|>Approximating acyclicity parameters of sparse hypergraphs: The notions of hypertree width and generalized hypertree width were introduced by Gottlob, Leone, and Scarcello in order to extend the concept of hypergraph acyclicity. These notions were further generalized by Grohe and Marx, who introduced the fractional hypertree width of a hypergraph. All these width parameters on hypergraphs are useful for extending tractability of many problems in database theory and artificial intelligence. In this paper, we study the approximability of (generalized, fractional) hypertree width of sparse hypergraphs where the criterion of sparsity reflects the sparsity of their incidence graphs. Our first step is to prove that the (generalized, fractional) hypertree width of a hypergraph H is constant-factor sandwiched by the treewidth of its incidence graph, when the incidence graph belongs to some apex-minor-free graph class. This determines the combinatorial borderline above which the notion of (generalized, fractional) hypertree width becomes essentially more general than treewidth, thus justifying its functionality as a hypergraph acyclicity measure. While for more general sparse families of hypergraphs treewidth of incidence graphs and all hypertree width parameters may differ arbitrarily, there are sparse families where a constant factor approximation algorithm is possible. In particular, we give a polynomial-time constant-factor approximation algorithm for (generalized, fractional) hypertree width on hypergraphs whose incidence graphs belong to some H-minor-free graph class.<|reference_end|>
arxiv
@article{fomin2008approximating, title={Approximating acyclicity parameters of sparse hypergraphs}, author={Fedor V. Fomin, Petr A. Golovach and Dimitrios M. Thilikos}, journal={arXiv preprint arXiv:0809.3646}, year={2008}, archivePrefix={arXiv}, eprint={0809.3646}, primaryClass={cs.DS cs.CC} }
fomin2008approximating
arxiv-4952
0809.3650
Hierarchical Bayesian sparse image reconstruction with application to MRFM
<|reference_start|>Hierarchical Bayesian sparse image reconstruction with application to MRFM: This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of our hierarchical Bayesian sparse reconstruction method is illustrated on synthetic and real data collected from a tobacco virus sample using a prototype MRFM instrument.<|reference_end|>
arxiv
@article{dobigeon2008hierarchical, title={Hierarchical Bayesian sparse image reconstruction with application to MRFM}, author={Nicolas Dobigeon, Alfred O. Hero and Jean-Yves Tourneret}, journal={IEEE Trans. Image Processing, vol. 18, no. 9, pp. 2059-2070, Sept. 2009}, year={2008}, doi={10.1109/TIP.2009.2024067}, archivePrefix={arXiv}, eprint={0809.3650}, primaryClass={physics.data-an cs.IT math.IT stat.ME} }
dobigeon2008hierarchical
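The sparsity-inducing prior at the heart of this model, a weighted mixture of a mass at zero and a positive exponential distribution, is easy to sample from. Below is a minimal illustrative sketch (the function name, the weight `w`, and the scale are assumptions for illustration only; the paper additionally places hyperpriors on these quantities and explores the full posterior with a Gibbs sampler):

```python
import random

def sample_sparse_prior(n, w=0.1, scale=1.0, seed=0):
    """Draw n pixel values from (1 - w)*delta_0 + w*Exp(1/scale):
    each pixel is exactly zero with probability 1 - w, otherwise a positive
    exponential draw -- enforcing sparsity and positivity by construction."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / scale) if rng.random() < w else 0.0
            for _ in range(n)]

pixels = sample_sparse_prior(10_000, w=0.1)
sparsity = sum(1 for v in pixels if v == 0.0) / len(pixels)  # about 0.9
```

The point of the mixture form is visible directly in the draw: the Bernoulli gate produces exact zeros (sparsity), and the exponential component produces only non-negative intensities (positivity).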
arxiv-4953
0809.3688
Mathematical and computer tools of discrete dynamic modeling and analysis of complex systems in control loop
<|reference_start|>Mathematical and computer tools of discrete dynamic modeling and analysis of complex systems in control loop: We present a method of discrete modeling and analysis of multilevel dynamics of complex large-scale hierarchical dynamic systems subject to an external dynamic control mechanism. An architectural model of an information system supporting simulation and analysis of dynamic processes and development scenarios (strategies) of complex large-scale hierarchical systems is also proposed.<|reference_end|>
arxiv
@article{bagdasaryan2008mathematical, title={Mathematical and computer tools of discrete dynamic modeling and analysis of complex systems in control loop}, author={Armen Bagdasaryan}, journal={Int J Mathematical Models and Methods in Applied Sciences, vol. 2, issue 1, 2008, pp. 82-95}, year={2008}, archivePrefix={arXiv}, eprint={0809.3688}, primaryClass={cs.CE cs.MA} }
bagdasaryan2008mathematical
arxiv-4954
0809.3690
Modeling and Control with Local Linearizing Nadaraya Watson Regression
<|reference_start|>Modeling and Control with Local Linearizing Nadaraya Watson Regression: Black box models of technical systems are purely descriptive. They do not explain why a system works the way it does. Thus, black box models are insufficient for some problems. But there are numerous applications, for example, in control engineering, for which a black box model is absolutely sufficient. In this article, we describe a general stochastic framework with which such models can be built easily and fully automated by observation. Furthermore, we give a practical example and show how this framework can be used to model and control a motorcar powertrain.<|reference_end|>
arxiv
@article{kühn2008modeling, title={Modeling and Control with Local Linearizing Nadaraya Watson Regression}, author={Steffen K\"uhn and Clemens G\"uhmann}, journal={arXiv preprint arXiv:0809.3690}, year={2008}, archivePrefix={arXiv}, eprint={0809.3690}, primaryClass={cs.CV} }
kühn2008modeling
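The estimator named in this title is a standard building block that can be sketched in a few lines. The following is the plain Nadaraya-Watson kernel regressor only, not the local-linearizing variant the paper develops, and the Gaussian kernel and bandwidth value are illustrative assumptions:

```python
import math

def nadaraya_watson(x_train, y_train, x, bandwidth=0.5):
    """Kernel-weighted average of the training targets:
    sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h), with Gaussian K."""
    weights = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

# Samples of y = x^2 on a grid; the estimate at x = 1 lands close to 1
xs = [i * 0.1 for i in range(21)]
ys = [xi ** 2 for xi in xs]
estimate = nadaraya_watson(xs, ys, 1.0, bandwidth=0.1)
```

Because the prediction is a convex combination of observed targets, the model is purely descriptive, which matches the black-box framing of the abstract: it interpolates what was observed without explaining the system.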
arxiv-4955
0809.3731
Uncertainty Relations for Shift-Invariant Analog Signals
<|reference_start|>Uncertainty Relations for Shift-Invariant Analog Signals: The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely-generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.<|reference_end|>
arxiv
@article{eldar2008uncertainty, title={Uncertainty Relations for Shift-Invariant Analog Signals}, author={Yonina C. Eldar}, journal={arXiv preprint arXiv:0809.3731}, year={2008}, doi={10.1109/TIT.2009.2032711}, archivePrefix={arXiv}, eprint={0809.3731}, primaryClass={cs.IT math.IT} }
eldar2008uncertainty
arxiv-4956
0809.3908
Optimal Energy Management Policies for Energy Harvesting Sensor Nodes
<|reference_start|>Optimal Energy Management Policies for Energy Harvesting Sensor Nodes: We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.<|reference_end|>
arxiv
@article{sharma2008optimal, title={Optimal Energy Management Policies for Energy Harvesting Sensor Nodes}, author={Vinod Sharma, Utpal Mukherji, Vinay Joseph, Shrey Gupta}, journal={arXiv preprint arXiv:0809.3908}, year={2008}, doi={10.1109/TWC.2010.04.080749}, archivePrefix={arXiv}, eprint={0809.3908}, primaryClass={cs.NI} }
sharma2008optimal
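The greedy idea mentioned in the abstract, spending all currently stored energy in each slot, can be illustrated with a toy discrete-time simulation. Everything below is a hedged sketch: the harvesting and arrival distributions, the rate function 0.5*log(1+P), and the function name are illustrative assumptions, not the paper's model or policy definition:

```python
import math
import random

def simulate_greedy(slots=10_000, seed=1):
    """Each slot: harvest random energy, enqueue random data, then transmit
    at the rate the full energy buffer supports (the greedy policy)."""
    rng = random.Random(seed)
    energy, queue, served = 0.0, 0.0, 0.0
    for _ in range(slots):
        energy += rng.uniform(0.0, 2.0)      # harvested energy Y_k
        queue += rng.uniform(0.0, 0.5)       # arriving data X_k (mean 0.25)
        rate = 0.5 * math.log(1.0 + energy)  # bits/slot at power = energy
        sent = min(queue, rate)
        queue -= sent
        served += sent
        energy = 0.0                         # greedy: use all stored energy now
    return served / slots, queue

throughput, backlog = simulate_greedy()
```

With these illustrative numbers the mean service rate (about 0.32 bits/slot) exceeds the mean arrival rate (0.25), so the data queue stays stable and the long-run throughput tracks the arrival rate, which is the kind of stability property the throughput-optimality results formalize.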
arxiv-4957
0809.3935
Characterizing graphs with convex and connected configuration spaces
<|reference_start|>Characterizing graphs with convex and connected configuration spaces: We define and study exact, efficient representations of realization spaces of Euclidean Distance Constraint Systems (EDCS), which include Linkages and Frameworks. Each representation corresponds to a choice of Cayley parameters and yields a different parametrized configuration space. Significantly, we give purely graph-theoretic, forbidden minor characterizations that capture (i) the class of graphs that always admit efficient configuration spaces and (ii) the possible choices of representation parameters that yield efficient configuration spaces for a given graph. In addition, our results are tight: we show counterexamples to obvious extensions. This is the first step in a systematic and graded program of combinatorial characterizations of efficient configuration spaces. We discuss several future theoretical and applied research directions. Some of our proofs employ an unusual interplay of (a) classical analytic results related to positive semi-definiteness of Euclidean distance matrices, with (b) recent forbidden minor characterizations and algorithms related to the notion of d-realizability of EDCS. We further introduce a novel type of restricted edge contraction or reduction to a graph minor, a "trick" that we anticipate will be useful in other situations.<|reference_end|>
arxiv
@article{sitharam2008characterizing, title={Characterizing graphs with convex and connected configuration spaces}, author={Meera Sitharam, Heping Gao}, journal={arXiv preprint arXiv:0809.3935}, year={2008}, archivePrefix={arXiv}, eprint={0809.3935}, primaryClass={cs.CG} }
sitharam2008characterizing
arxiv-4958
0809.3942
A Reconfigurable Programmable Logic Block for a Multi-Style Asynchronous FPGA resistant to Side-Channel Attacks
<|reference_start|>A Reconfigurable Programmable Logic Block for a Multi-Style Asynchronous FPGA resistant to Side-Channel Attacks: Side-channel attacks are efficient attacks against cryptographic devices. They use only quantities observable from outside, such as the duration and the power consumption. Attacks against synchronous devices using electric observations are facilitated by the fact that all transitions occur simultaneously with some global clock signal. Asynchronous control removes this synchronization and therefore makes it more difficult for the attacker to insulate \emph{interesting intervals}. In addition, the coding of data in an asynchronous circuit is inherently more difficult to attack. This article describes the Programmable Logic Block of an asynchronous FPGA resistant against \emph{side-channel attacks}. Additionally it can implement different styles of asynchronous control and of data representation.<|reference_end|>
arxiv
@article{hoogvorst2008a, title={A Reconfigurable Programmable Logic Block for a Multi-Style Asynchronous FPGA resistant to Side-Channel Attacks}, author={Philippe Hoogvorst, Sylvain Guilley, Sumanta Chaudhuri, Jean-Luc Danger, Taha Beyrouthy and Laurent Fesquet}, journal={arXiv preprint arXiv:0809.3942}, year={2008}, archivePrefix={arXiv}, eprint={0809.3942}, primaryClass={cs.CR cs.OH} }
hoogvorst2008a
arxiv-4959
0809.3960
Formalising the pi-calculus using nominal logic
<|reference_start|>Formalising the pi-calculus using nominal logic: We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of a process calculus ever done inside a theorem prover. A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.<|reference_end|>
arxiv
@article{bengtson2008formalising, title={Formalising the pi-calculus using nominal logic}, author={Jesper Bengtson, Joachim Parrow}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (June 30, 2009) lmcs:832}, year={2008}, doi={10.2168/LMCS-5(2:16)2009}, archivePrefix={arXiv}, eprint={0809.3960}, primaryClass={cs.LO} }
bengtson2008formalising
arxiv-4960
0809.3994
Regularities of the distribution of abstract van der Corput sequences
<|reference_start|>Regularities of the distribution of abstract van der Corput sequences: Similarly to $\beta$-adic van der Corput sequences, abstract van der Corput sequences can be defined for abstract numeration systems. Under some assumptions, these sequences are low discrepancy sequences. The discrepancy function is computed explicitly, and a characterization of bounded remainder sets of the form $[0,y)$ is provided.<|reference_end|>
arxiv
@article{steiner2008regularities, title={Regularities of the distribution of abstract van der Corput sequences}, author={Wolfgang Steiner (LIAFA)}, journal={arXiv preprint arXiv:0809.3994}, year={2008}, archivePrefix={arXiv}, eprint={0809.3994}, primaryClass={math.NT cs.DM} }
steiner2008regularities
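For concreteness, the classical base-b van der Corput sequence, the special case that the abstract numeration systems of this paper generalize, is obtained by mirroring the digits of n across the radix point. A minimal sketch (base 2 by default; the abstract-numeration generalization is not implemented here):

```python
def van_der_corput(n, base=2):
    """Radical inverse: reflect the base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

points = [van_der_corput(n) for n in range(1, 8)]
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875] -- successively refines [0, 1)
```

Each new point falls in the largest gap left by its predecessors, which is the low-discrepancy behavior the abstract refers to.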
arxiv-4961
0809.4017
Termination Criteria for Solving Concurrent Safety and Reachability Games
<|reference_start|>Termination Criteria for Solving Concurrent Safety and Reachability Games: We consider concurrent games played on graphs. At every round of a game, each player simultaneously and independently selects a move; the moves jointly determine the transition to a successor state. Two basic objectives are the safety objective to stay forever in a given set of states, and its dual, the reachability objective to reach a given set of states. We present in this paper a strategy improvement algorithm for computing the value of a concurrent safety game, that is, the maximal probability with which player~1 can enforce the safety objective. The algorithm yields a sequence of player-1 strategies which ensure probabilities of winning that converge monotonically to the value of the safety game. Our result is significant because the strategy improvement algorithm provides, for the first time, a way to approximate the value of a concurrent safety game from below. Since a value iteration algorithm, or a strategy improvement algorithm for reachability games, can be used to approximate the same value from above, the combination of both algorithms yields a method for computing a converging sequence of upper and lower bounds for the values of concurrent reachability and safety games. Previous methods could approximate the values of these games only from one direction, and as no rates of convergence are known, they did not provide a practical way to solve these games.<|reference_end|>
arxiv
@article{chatterjee2008termination, title={Termination Criteria for Solving Concurrent Safety and Reachability Games}, author={Krishnendu Chatterjee and Luca de Alfaro and Thomas A. Henzinger}, journal={arXiv preprint arXiv:0809.4017}, year={2008}, archivePrefix={arXiv}, eprint={0809.4017}, primaryClass={cs.GT cs.LO} }
chatterjee2008termination
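The value iteration mentioned in the abstract can be sketched for the simpler turn-based (one-player) case; the concurrent version would solve a matrix game at each state instead of taking a plain maximum over actions. All state names and probabilities below are illustrative assumptions:

```python
def reach_value_iteration(trans, target, iters=100):
    """Iterate v(s) <- max over actions of the expected next-step value,
    starting from the indicator of the target set; the iterates increase
    monotonically toward the reachability value."""
    v = {s: (1.0 if s in target else 0.0) for s in trans}
    for _ in range(iters):
        v = {s: 1.0 if s in target else
                max(sum(p * v[t] for t, p in act) for act in trans[s])
             for s in trans}
    return v

# From s0: a risky action reaches the goal w.p. 0.9 (else a losing sink),
# plus a safe self-loop; the optimal reachability value is 0.9.
trans = {
    "s0":   [[("goal", 0.9), ("sink", 0.1)], [("s0", 1.0)]],
    "goal": [[("goal", 1.0)]],
    "sink": [[("sink", 1.0)]],
}
values = reach_value_iteration(trans, {"goal"})
```

The dual safety value (stay out of `sink` forever) would be approached from above by an analogous iteration started at 1 on the safe states, which is the complementary bound the paper's strategy improvement algorithm supplies for concurrent games.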
arxiv-4962
0809.4019
Throughput Scaling of Wireless Networks With Random Connections
<|reference_start|>Throughput Scaling of Wireless Networks With Random Connections: This work studies the throughput scaling laws of ad hoc wireless networks in the limit of a large number of nodes. A random connections model is assumed in which the channel connections between the nodes are drawn independently from a common distribution. Transmitting nodes are subject to an on-off strategy, and receiving nodes employ conventional single-user decoding. The following results are proven: 1) For a class of connection models with finite mean and variance, the throughput scaling is upper-bounded by $O(n^{1/3})$ for single-hop schemes, and $O(n^{1/2})$ for two-hop (and multihop) schemes. 2) The $\Theta (n^{1/2})$ throughput scaling is achievable for a specific connection model by a two-hop opportunistic relaying scheme, which employs full, but only local channel state information (CSI) at the receivers, and partial CSI at the transmitters. 3) By relaxing the constraints of finite mean and variance of the connection model, linear throughput scaling $\Theta (n)$ is achievable with Pareto-type fading models.<|reference_end|>
arxiv
@article{cui2008throughput, title={Throughput Scaling of Wireless Networks With Random Connections}, author={Shengshan Cui, Alexander M. Haimovich, Oren Somekh, H. Vincent Poor, Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0809.4019}, year={2008}, doi={10.1109/ICC.2009.5199528}, archivePrefix={arXiv}, eprint={0809.4019}, primaryClass={cs.IT math.IT} }
cui2008throughput
arxiv-4963
0809.4058
Target Localization Accuracy Gain in MIMO Radar Based Systems
<|reference_start|>Target Localization Accuracy Gain in MIMO Radar Based Systems: This paper presents an analysis of target localization accuracy, attainable by the use of MIMO (Multiple-Input Multiple-Output) radar systems, configured with multiple transmit and receive sensors, widely distributed over a given area. The Cramer-Rao lower bound (CRLB) for target localization accuracy is developed for both coherent and non-coherent processing. Coherent processing requires a common phase reference for all transmit and receive sensors. The CRLB is shown to be inversely proportional to the signal effective bandwidth in the non-coherent case, but is approximately inversely proportional to the carrier frequency in the coherent case. We further prove that optimization over the sensors' positions lowers the CRLB by a factor equal to the product of the number of transmitting and receiving sensors. The best linear unbiased estimator (BLUE) is derived for the MIMO target localization problem. The BLUE's utility is in providing a closed form localization estimate that facilitates the analysis of the relations between sensors locations, target location, and localization accuracy. Geometric dilution of precision (GDOP) contours are used to map the relative performance accuracy for a given layout of radars over a given geographic area.<|reference_end|>
arxiv
@article{godrich2008target, title={Target Localization Accuracy Gain in MIMO Radar Based Systems}, author={Hana Godrich, Alexander M. Haimovich, and Rick S. Blum}, journal={arXiv preprint arXiv:0809.4058}, year={2008}, doi={10.1109/TIT.2010.2046246}, archivePrefix={arXiv}, eprint={0809.4058}, primaryClass={cs.IT math.IT} }
godrich2008target
arxiv-4964
0809.4059
Information transmission in oscillatory neural activity
<|reference_start|>Information transmission in oscillatory neural activity: Periodic neural activity not locked to the stimulus or to motor responses is usually ignored. Here, we present new tools for modeling and quantifying the information transmission based on periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model to reproduce characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is a product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method generalizing the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the information content in such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat.<|reference_end|>
arxiv
@article{koepsell2008information, title={Information transmission in oscillatory neural activity}, author={Kilian Koepsell and Friedrich T. Sommer}, journal={Biological Cybernetics (2008) 99:403-416}, year={2008}, doi={10.1007/s00422-008-0273-6}, archivePrefix={arXiv}, eprint={0809.4059}, primaryClass={q-bio.NC cs.IT math.IT q-bio.QM} }
koepsell2008information
arxiv-4965
0809.4082
Multiprocessor Global Scheduling on Frame-Based DVFS Systems
<|reference_start|>Multiprocessor Global Scheduling on Frame-Based DVFS Systems: In this ongoing work, we are interested in multiprocessor energy efficient systems, where task durations are not known in advance, but are known stochastically. More precisely, we consider global scheduling algorithms for frame-based multiprocessor stochastic DVFS (Dynamic Voltage and Frequency Scaling) systems. Moreover, we consider processors with a discrete set of available frequencies.<|reference_end|>
arxiv
@article{berten2008multiprocessor, title={Multiprocessor Global Scheduling on Frame-Based DVFS Systems}, author={Vandy Berten and Jo\"el Goossens}, journal={arXiv preprint arXiv:0809.4082}, year={2008}, archivePrefix={arXiv}, eprint={0809.4082}, primaryClass={cs.OS} }
berten2008multiprocessor
arxiv-4966
0809.4086
Learning Hidden Markov Models using Non-Negative Matrix Factorization
<|reference_start|>Learning Hidden Markov Models using Non-Negative Matrix Factorization: The Baum-Welch algorithm together with its derivatives and variations has been the main technique for learning Hidden Markov Models (HMM) from observational data. We present an HMM learning algorithm based on the non-negative matrix factorization (NMF) of higher order Markovian statistics that is structurally different from Baum-Welch and its associated approaches. The described algorithm supports estimation of the number of recurrent states of an HMM and iterates the NMF algorithm to improve the learned HMM parameters. Numerical examples are provided as well.<|reference_end|>
arxiv
@article{cybenko2008learning, title={Learning Hidden Markov Models using Non-Negative Matrix Factorization}, author={George Cybenko and Valentino Crespi}, journal={arXiv preprint arXiv:0809.4086}, year={2008}, archivePrefix={arXiv}, eprint={0809.4086}, primaryClass={cs.LG cs.AI cs.IT math.IT} }
cybenko2008learning
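As a hedged sketch of the factorization machinery involved, here are the standard Lee-Seung multiplicative updates for non-negative matrix factorization in pure Python. This illustrates generic NMF only; the paper's contribution applies NMF to higher-order Markovian statistics to recover HMM parameters, which is not reproduced here:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, r=2, iters=500, seed=0, eps=1e-9):
    """Factor non-negative V (n x m) as W @ H with W >= 0 (n x r) and
    H >= 0 (r x m), reducing the Frobenius error ||V - WH|| via the
    Lee-Seung multiplicative update rules."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        Wt = transpose(W)                       # H <- H * (W^T V) / (W^T W H)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(r)]
        Ht = transpose(H)                       # W <- W * (V H^T) / (W H H^T)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

V = [[1, 0, 2, 0], [0, 1, 0, 2], [2, 0, 4, 0], [0, 2, 0, 4]]  # exactly rank 2
W, H = nmf(V)
```

The multiplicative form keeps all factors non-negative by construction, which is why no projection step is needed, and the updates never increase the Frobenius objective.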
arxiv-4967
0809.4093
Perspective Drawing of Surfaces with Line Hidden Line Elimination, Dibujando Superficies En Perspectiva Con Eliminacion De Lineas Ocultas
<|reference_start|>Perspective Drawing of Surfaces with Line Hidden Line Elimination, Dibujando Superficies En Perspectiva Con Eliminacion De Lineas Ocultas: An efficient computer algorithm is described for the perspective drawing of a wide class of surfaces. The class includes surfaces corresponding to single-valued, continuous functions which are defined over rectangular domains. The algorithm automatically computes and eliminates hidden lines. The number of computations in the algorithm grows linearly with the number of sample points on the surface to be drawn. An analysis of the algorithm is presented, and extensions to certain multi-valued functions are indicated. The algorithm is implemented and tested on the .Net 2.0 platform, which permits interactive use. Running times are found to be exceedingly efficient for visualization, where on-line interaction and view-point control enable effective and rapid examination of surfaces from many perspectives.<|reference_end|>
arxiv
@article{vega-paez2008perspective, title={Perspective Drawing of Surfaces with Line Hidden Line Elimination, Dibujando Superficies En Perspectiva Con Eliminacion De Lineas Ocultas}, author={Ignacio Vega-Paez, Jose Angel Ortega, Georgina G. Pulido}, journal={Proceedings in Technical Memory, XI Congreso Nacional de Ingenieria Electromecanica y de Sistemas, pp. 136-144, Mexico, DF., Nov 2009}, year={2008}, number={IBP-TR2008-08}, archivePrefix={arXiv}, eprint={0809.4093}, primaryClass={cs.GR cs.CG} }
vega-paez2008perspective
arxiv-4968
0809.4101
On Gaussian MIMO BC-MAC Duality With Multiple Transmit Covariance Constraints
<|reference_start|>On Gaussian MIMO BC-MAC Duality With Multiple Transmit Covariance Constraints: Owing to the structure of the Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC), associated optimization problems such as capacity region computation and beamforming optimization are typically non-convex, and cannot be solved directly. One feasible approach to these problems is to transform them into their dual multiple access channel (MAC) problems, which are easier to deal with due to their convexity properties. The conventional BC-MAC duality is established via BC-MAC signal transformation, and has been successfully applied to solve beamforming optimization, signal-to-interference-plus-noise ratio (SINR) balancing, and capacity region computation. However, this conventional duality approach is applicable only to the case, in which the base station (BS) of the BC is subject to a single sum power constraint. An alternative approach is minimax duality, established by Yu in the framework of Lagrange duality, which can be applied to solve the per-antenna power constraint problem. This paper extends the conventional BC-MAC duality to the general linear constraint case, and thereby establishes a general BC-MAC duality. This new duality is applied to solve the capacity computation and beamforming optimization for the MIMO and multiple-input single-output (MISO) BC, respectively, with multiple linear constraints. Moreover, the relationship between this new general BC-MAC duality and minimax duality is also presented. It is shown that the general BC-MAC duality offers more flexibility in solving BC optimization problems relative to minimax duality. Numerical results are provided to illustrate the effectiveness of the proposed algorithms.<|reference_end|>
arxiv
@article{zhang2008on, title={On Gaussian MIMO BC-MAC Duality With Multiple Transmit Covariance Constraints}, author={Lan Zhang, Rui Zhang, Ying-Chang Liang, Yan Xin, H. Vincent Poor}, journal={arXiv preprint arXiv:0809.4101}, year={2008}, archivePrefix={arXiv}, eprint={0809.4101}, primaryClass={cs.IT math.IT} }
zhang2008on
arxiv-4969
0809.4107
Modelling interdependencies between the electricity and information infrastructures
<|reference_start|>Modelling interdependencies between the electricity and information infrastructures: The aim of this paper is to provide qualitative models characterizing interdependencies related failures of two critical infrastructures: the electricity infrastructure and the associated information infrastructure. The interdependencies of these two infrastructures are increasing due to a growing connection of the power grid networks to the global information infrastructure, as a consequence of market deregulation and opening. These interdependencies increase the risk of failures. We focus on cascading, escalating and common-cause failures, which correspond to the main causes of failures due to interdependencies. We address failures in the electricity infrastructure, in combination with accidental failures in the information infrastructure, then we show briefly how malicious attacks in the information infrastructure can be addressed.<|reference_end|>
arxiv
@article{laprie2008modelling, title={Modelling interdependencies between the electricity and information infrastructures}, author={Jean-Claude Laprie (LAAS), Karama Kanoun (LAAS), Mohamed Kaaniche (LAAS)}, journal={26th International Conference on Computer Safety, Reliability and Security, SAFECOMP-2007, Nurenberg : Allemagne (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0809.4107}, primaryClass={cs.DC} }
laprie2008modelling
arxiv-4970
0809.4108
The ADAPT Tool: From AADL Architectural Models to Stochastic Petri Nets through Model Transformation
<|reference_start|>The ADAPT Tool: From AADL Architectural Models to Stochastic Petri Nets through Model Transformation: ADAPT is a tool that aims at easing the task of evaluating dependability measures in the context of modern model-driven engineering processes based on AADL (Architecture Analysis and Design Language). Hence, its input is an AADL architectural model annotated with dependability-related information. Its output is a dependability evaluation model in the form of a Generalized Stochastic Petri Net (GSPN). The latter can be processed by existing dependability evaluation tools to compute quantitative measures such as reliability, availability, etc. ADAPT interfaces OSATE (the Open Source AADL Tool Environment) on the AADL side and SURF-2 on the dependability evaluation side. In addition, ADAPT provides the GSPN in XML/XMI format, which represents a gateway to other dependability evaluation tools, as the processing techniques for XML files allow it to be easily converted to a tool-specific GSPN.<|reference_end|>
arxiv
@article{rugina2008the, title={The ADAPT Tool: From AADL Architectural Models to Stochastic Petri Nets through Model Transformation}, author={Ana E. Rugina (LAAS), Karama Kanoun (LAAS), Mohamed Kaaniche (LAAS)}, journal={7th European Dependable Computing Conference (EDCC), Kaunas, Lithuania (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.4108}, primaryClass={cs.SE} }
rugina2008the
arxiv-4971
0809.4109
Software dependability modeling using an industry-standard architecture description language
<|reference_start|>Software dependability modeling using an industry-standard architecture description language: Performing dependability evaluation along with other analyses at architectural level allows both making architectural tradeoffs and predicting the effects of architectural decisions on the dependability of an application. This paper gives guidelines for building architectural dependability models for software systems using the AADL (Architecture Analysis and Design Language). It presents reusable modeling patterns for fault-tolerant applications and shows how the presented patterns can be used in the context of a subsystem of a real-life application.<|reference_end|>
arxiv
@article{rugina2008software, title={Software dependability modeling using an industry-standard architecture description language}, author={Ana-Elena Rugina (LAAS), Peter H. Feiler (CMU-SEI), Karama Kanoun (LAAS), Mohamed Kaaniche (LAAS)}, journal={arXiv preprint arXiv:0809.4109}, year={2008}, archivePrefix={arXiv}, eprint={0809.4109}, primaryClass={cs.SE} }
rugina2008software
arxiv-4972
0809.4115
Bisimilarity and Behaviour-Preserving Reconfigurations of Open Petri Nets
<|reference_start|>Bisimilarity and Behaviour-Preserving Reconfigurations of Open Petri Nets: We propose a framework for the specification of behaviour-preserving reconfigurations of systems modelled as Petri nets. The framework is based on open nets, a mild generalisation of ordinary Place/Transition nets suited to model open systems which might interact with the surrounding environment and endowed with a colimit-based composition operation. We show that natural notions of bisimilarity over open nets are congruences with respect to the composition operation. The considered behavioural equivalences differ for the choice of the observations, which can be single firings or parallel steps. Additionally, we consider weak forms of such equivalences, arising in the presence of unobservable actions. We also provide an up-to technique for facilitating bisimilarity proofs. The theory is used to identify suitable classes of reconfiguration rules (in the double-pushout approach to rewriting) whose application preserves the observational semantics of the net.<|reference_end|>
arxiv
@article{baldan2008bisimilarity, title={Bisimilarity and Behaviour-Preserving Reconfigurations of Open Petri Nets}, author={Paolo Baldan, Andrea Corradini, Hartmut Ehrig, Reiko Heckel, Barbara K\"onig}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (October 21, 2008) lmcs:1165}, year={2008}, doi={10.2168/LMCS-4(4:3)2008}, archivePrefix={arXiv}, eprint={0809.4115}, primaryClass={cs.LO} }
baldan2008bisimilarity
arxiv-4973
0809.4149
Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding
<|reference_start|>Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding: In this paper, network error control coding is studied for robust and efficient multicast in a directed acyclic network with imperfect links. The block network error control coding framework, BNEC, is presented and the capability of the scheme to correct a mixture of symbol errors and packet erasures and to detect symbol errors is studied. The idea of syndrome-based decoding and error detection is introduced for BNEC, which removes the effect of input data and hence decreases the complexity. Next, an efficient three-stage syndrome-based BNEC decoding scheme for network error correction is proposed, in which prior to finding the error values, the positions of the edge errors are identified based on the error spaces at the receivers. In addition to bounded-distance decoding schemes for error correction up to the refined Singleton bound, a complete decoding scheme for BNEC is also introduced. Specifically, it is shown that using the proposed syndrome-based complete decoding, a network error correcting code with redundancy order d for receiver t, can correct d-1 random additive errors with a probability sufficiently close to 1, if the field size is sufficiently large. Also, a complete maximum likelihood decoding scheme for BNEC is proposed. As the probability of error in different network edges is not equal in general, and given the equivalency of certain edge errors within the network at a particular receiver, the number of edge errors, assessed in the refined Singleton bound, is not a sufficient statistic for ML decoding.<|reference_end|>
arxiv
@article{bahramgiri2008block, title={Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding}, author={Hossein Bahramgiri and Farshad Lahouti}, journal={arXiv preprint arXiv:0809.4149}, year={2008}, archivePrefix={arXiv}, eprint={0809.4149}, primaryClass={cs.IT math.IT} }
bahramgiri2008block
arxiv-4974
0809.4183
An Asymptotically Optimal RFID Authentication Protocol Against Relay Attacks
<|reference_start|>An Asymptotically Optimal RFID Authentication Protocol Against Relay Attacks: Relay attacks are a major concern for RFID systems: during an authentication process an adversary transparently relays messages between a verifier and a remote legitimate prover. We present an authentication protocol suited for RFID systems. Our solution is the first that prevents relay attacks without degrading the authentication security level: it minimizes the probability that the verifier accepts a fake proof of identity, whether or not a relay attack occurs.<|reference_end|>
arxiv
@article{avoine2008an, title={An Asymptotically Optimal RFID Authentication Protocol Against Relay Attacks}, author={Gildas Avoine and Aslan Tchamkerten}, journal={arXiv preprint arXiv:0809.4183}, year={2008}, archivePrefix={arXiv}, eprint={0809.4183}, primaryClass={cs.CR cs.IT math.IT} }
avoine2008an
arxiv-4975
0809.4194
Rate based call gapping with priorities and fairness between traffic classes
<|reference_start|>Rate based call gapping with priorities and fairness between traffic classes: This paper presents a new rate based call gapping method. Its main advantage is that it provides maximal throughput, priority handling and fairness for traffic classes without queues, unlike Token Bucket, which provides only the first two, or Weighted Fair Queuing, which uses queues. Token Bucket is used for call gapping because it has good throughput characteristics; for this reason we present a mixture of the two methods that keeps the good properties of both. A mathematical model has been developed to support our proposal. It defines the three requirements and proves theorems about whether they are satisfied by the different call gapping mechanisms. Simulation, numerical results and statistical discussion are also presented to underpin the findings.<|reference_end|>
arxiv
@article{kovacs2008rate, title={Rate based call gapping with priorities and fairness between traffic classes}, author={Benedek Kovacs}, journal={arXiv preprint arXiv:0809.4194}, year={2008}, archivePrefix={arXiv}, eprint={0809.4194}, primaryClass={cs.NI} }
kovacs2008rate
arxiv-4976
0809.4296
State dependent computation using coupled recurrent networks
<|reference_start|>State dependent computation using coupled recurrent networks: Although conditional branching between possible behavioural states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem we demonstrate by theoretical analysis and simulation how networks of richly inter-connected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable robust finite state machines. We show how a multi-stable neuronal network containing a number of states can be created very simply, by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of 'transition neurons' implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.<|reference_end|>
arxiv
@article{rutishauser2008state, title={State dependent computation using coupled recurrent networks}, author={Ueli Rutishauser, Rodney J. Douglas}, journal={Neural computation, 21(2):478-509, 2009}, year={2008}, doi={10.1162/neco.2008.03-08-734}, archivePrefix={arXiv}, eprint={0809.4296}, primaryClass={q-bio.NC cs.NE} }
rutishauser2008state
arxiv-4977
0809.4316
A Layered Lattice Coding Scheme for a Class of Three User Gaussian Interference Channels
<|reference_start|>A Layered Lattice Coding Scheme for a Class of Three User Gaussian Interference Channels: The paper studies a class of three user Gaussian interference channels. A new layered lattice coding scheme is introduced as a transmission strategy. The use of lattice codes allows for an "alignment" of the interference observed at each receiver. The layered lattice coding is shown to achieve more than one degree of freedom for a class of interference channels and also achieves rates which are better than the rates obtained using the Han-Kobayashi coding scheme.<|reference_end|>
arxiv
@article{sridharan2008a, title={A Layered Lattice Coding Scheme for a Class of Three User Gaussian Interference Channels}, author={Sriram Sridharan, Amin Jafarian, Sriram Vishwanath, Syed A. Jafar and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0809.4316}, year={2008}, archivePrefix={arXiv}, eprint={0809.4316}, primaryClass={cs.IT math.IT} }
sridharan2008a
arxiv-4978
0809.4317
On the Effect of Quantum Interaction Distance on Quantum Addition Circuits
<|reference_start|>On the Effect of Quantum Interaction Distance on Quantum Addition Circuits: We investigate the theoretical limits of the effect of the quantum interaction distance on the speed of exact quantum addition circuits. For this study, we exploit graph embedding for quantum circuit analysis. We study a logical mapping of qubits and gates of any $\Omega(\log n)$-depth quantum adder circuit for two $n$-qubit registers onto a practical architecture, which limits interaction distance to the nearest neighbors only and supports only one- and two-qubit logical gates. Unfortunately, on the chosen $k$-dimensional practical architecture, we prove that the depth lower bound of any exact quantum addition circuits is no longer $\Omega(\log n)$, but $\Omega(\sqrt[k]{n})$. This result, the first application of graph embedding to quantum circuits and devices, provides a new tool for compiler development, emphasizes the impact of quantum computer architecture on performance, and acts as a cautionary note when evaluating the time performance of quantum algorithms.<|reference_end|>
arxiv
@article{choi2008on, title={On the Effect of Quantum Interaction Distance on Quantum Addition Circuits}, author={Byung-Soo Choi and Rodney Van Meter}, journal={arXiv preprint arXiv:0809.4317}, year={2008}, doi={10.1145/2000502.2000504}, archivePrefix={arXiv}, eprint={0809.4317}, primaryClass={quant-ph cs.AR} }
choi2008on
arxiv-4979
0809.4325
On the Unicast Capacity of Stationary Multi-channel Multi-radio Wireless Networks: Separability and Multi-channel Routing
<|reference_start|>On the Unicast Capacity of Stationary Multi-channel Multi-radio Wireless Networks: Separability and Multi-channel Routing: The first result is on the separability of the unicast capacity of stationary multi-channel multi-radio wireless networks, i.e., whether the capacity of such a network is equal to the sum of the capacities of the corresponding single-channel single-radio wireless networks. For both the Arbitrary Network model and the Random Network model, given a channel assignment, the separability property does not always hold. However, if the number of radio interfaces at each node is equal to the number of channels, the separability property holds. The second result is on the impact of multi-channel routing (i.e., routing a bit through multiple channels as opposed to through a single channel) on the network capacity. For both network models, the network capacities conditioned on a channel assignment under the two routing schemes are not always equal, but if again the number of radio interfaces at each node is equal to the number of channels, the two routing schemes yield equal network capacities.<|reference_end|>
arxiv
@article{ma2008on, title={On the Unicast Capacity of Stationary Multi-channel Multi-radio Wireless Networks: Separability and Multi-channel Routing}, author={Liangping Ma}, journal={arXiv preprint arXiv:0809.4325}, year={2008}, archivePrefix={arXiv}, eprint={0809.4325}, primaryClass={cs.IT math.IT} }
ma2008on
arxiv-4980
0809.4326
Algorithms for Game Metrics
<|reference_start|>Algorithms for Game Metrics: Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative {\mu}-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in long-run average and discounted average behavior across states, indicating that the metrics can be used both in system verification, and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric our algorithm works in time O(n^4) for both turn-based games and MDPs; improving the previously best known O(n^9\cdot log(n)) time algorithm for MDPs. For a concurrent game G, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational r, can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding O(|G|^O(|G|^5)) time complexity, improving the previously known reduction, which yielded O(|G|^O(|G|^7)) time complexity. These algorithms can be iterated to approximate the metrics using binary search.<|reference_end|>
arxiv
@article{chatterjee2008algorithms, title={Algorithms for Game Metrics}, author={Krishnendu Chatterjee (Institute of Science and Technology, Vienna, Austria), Luca de Alfaro (University of California, Santa Cruz, USA), Rupak Majumdar (University of California, Los Angeles, USA), Vishwanath Raman (University of California, Santa Cruz, USA)}, journal={Logical Methods in Computer Science, Volume 6, Issue 3 (September 1, 2010) lmcs:783}, year={2008}, doi={10.2168/LMCS-6(3:13)2010}, archivePrefix={arXiv}, eprint={0809.4326}, primaryClass={cs.GT} }
chatterjee2008algorithms
arxiv-4981
0809.4332
From one solution of a 3-satisfiability formula to a solution cluster: Frozen variables and entropy
<|reference_start|>From one solution of a 3-satisfiability formula to a solution cluster: Frozen variables and entropy: A solution to a 3-satisfiability (3-SAT) formula can be expanded into a cluster, all other solutions of which are reachable from this one through a sequence of single-spin flips. Some variables in the solution cluster are frozen to the same spin values by one of two different mechanisms: frozen-core formation and long-range frustrations. While frozen cores are identified by a local whitening algorithm, long-range frustrations are very difficult to trace, and they make an entropic belief-propagation (BP) algorithm fail to converge. For BP to reach a fixed point the spin values of a tiny fraction of variables (chosen according to the whitening algorithm) are externally fixed during the iteration. From the calculated entropy values, we infer that, for a large random 3-SAT formula with constraint density close to the satisfiability threshold, the solutions obtained by the survey-propagation or the walksat algorithm belong neither to the most dominating clusters of the formula nor to the most abundant clusters. This work indicates that a single solution cluster of a random 3-SAT formula may have further community structures.<|reference_end|>
arxiv
@article{li2008from, title={From one solution of a 3-satisfiability formula to a solution cluster: Frozen variables and entropy}, author={Kang Li, Hui Ma, and Haijun Zhou}, journal={Physical Review E 79, 031102 (2009)}, year={2008}, doi={10.1103/PhysRevE.79.031102}, archivePrefix={arXiv}, eprint={0809.4332}, primaryClass={cond-mat.dis-nn cs.CC} }
li2008from
arxiv-4982
0809.4342
Towards a More Accurate Carrier Sensing Model for CSMA Wireless Networks
<|reference_start|>Towards a More Accurate Carrier Sensing Model for CSMA Wireless Networks: This work calls into question a substantial body of past work on CSMA wireless networks. In the majority of studies on CSMA wireless networks, a contention graph is used to model the carrier sensing (CS) relationships among links. This is a 0-1 model in which two links can either sense each other completely or not. In real experiments, we observed that this is generally not the case: the CS relationship between the links is often probabilistic and can vary dynamically over time. This is the case even if the distance between the links is fixed and there is no drastic change in the environment. Furthermore, this partial carrier sensing relationship is prevalent and occurs over a wide range of distances between the links. This observation is not consistent with the 0-1 contention graph and implies that many results and conclusions drawn from previous theoretical studies need to be re-examined. This paper establishes a more accurate CS model with the objective of laying down a foundation for future theoretical studies that reflect reality. Towards that end, we set up detailed experiments to investigate the partial carrier sensing phenomenon. We discuss the implications and the use of our partial carrier sensing model in network analysis.<|reference_end|>
arxiv
@article{kai2008towards, title={Towards a More Accurate Carrier Sensing Model for CSMA Wireless Networks}, author={Caihong Kai, Soung Chang Liew}, journal={arXiv preprint arXiv:0809.4342}, year={2008}, archivePrefix={arXiv}, eprint={0809.4342}, primaryClass={cs.NI} }
kai2008towards
arxiv-4983
0809.4395
Content Sharing for Mobile Devices
<|reference_start|>Content Sharing for Mobile Devices: The miniaturisation of computing devices has seen them become increasingly pervasive in society. With this increased pervasiveness, the technologies of small computing devices have also improved. Mobile devices are now capable of capturing various forms of multimedia and able to communicate wirelessly using an increasing number of communication techniques. The owners and creators of local content are motivated to share this content in ever-increasing volume; the conclusion has been that social network sites are seeing a revolution in the sharing of information between communities of people. As load on centralised systems increases, we present a novel decentralised peer-to-peer approach dubbed the Market Contact Protocol (MCP) to achieve cost effective, scalable and efficient content sharing using opportunistic networking (pocket switched networking), incentive, context-awareness, social contact and mobile devices. Within the report we describe how the MCP is simulated with a superimposed geographic framework on top of the JiST (Java in Simulation Time) framework to evaluate and measure its capability to share content between massively mobile peers. In conclusion, the MCP is shown to be a powerful means by which to share content in a massively mobile ad-hoc environment.<|reference_end|>
arxiv
@article{ball2008content, title={Content Sharing for Mobile Devices}, author={Rudi Ball}, journal={arXiv preprint arXiv:0809.4395}, year={2008}, archivePrefix={arXiv}, eprint={0809.4395}, primaryClass={cs.DC cs.NI} }
ball2008content
arxiv-4984
0809.4398
Multistep greedy algorithm identifies community structure in real-world and computer-generated networks
<|reference_start|>Multistep greedy algorithm identifies community structure in real-world and computer-generated networks: We have recently introduced a multistep extension of the greedy algorithm for modularity optimization. The extension is based on the idea that merging l pairs of communities (l>1) at each iteration prevents premature condensation into few large communities. Here, an empirical formula is presented for the choice of the step width l that generates partitions with (close to) optimal modularity for 17 real-world and 1100 computer-generated networks. Furthermore, an in-depth analysis of the communities of two real-world networks (the metabolic network of the bacterium E. coli and the graph of coappearing words in the titles of papers coauthored by Martin Karplus) provides evidence that the partition obtained by the multistep greedy algorithm is superior to the one generated by the original greedy algorithm not only with respect to modularity but also according to objective criteria. In other words, the multistep extension of the greedy algorithm reduces the danger of getting trapped in local optima of modularity and generates more reasonable partitions.<|reference_end|>
arxiv
@article{schuetz2008multistep, title={Multistep greedy algorithm identifies community structure in real-world and computer-generated networks}, author={Philipp Schuetz, Amedeo Caflisch}, journal={Phys. Rev. E 78, 026112 (2008)}, year={2008}, doi={10.1103/PhysRevE.78.026112}, archivePrefix={arXiv}, eprint={0809.4398}, primaryClass={cs.DS cond-mat.dis-nn physics.soc-ph q-bio.MN q-bio.QM} }
schuetz2008multistep
arxiv-4985
0809.4484
Llull and Copeland Voting Computationally Resist Bribery and Control
<|reference_start|>Llull and Copeland Voting Computationally Resist Bribery and Control: The only systems previously known to be resistant to all the standard control types were highly artificial election systems created by hybridization. We study a parameterized version of Copeland voting, denoted by Copeland^\alpha, where the parameter \alpha is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates. We prove that Copeland^{0.5}, the system commonly referred to as "Copeland voting," provides full resistance to constructive control, and we prove the same for Copeland^\alpha, for all rational \alpha, 0 < \alpha < 1. Copeland voting is the first natural election system proven to have full resistance to constructive control. We also prove that both Copeland^1 (Llull elections) and Copeland^0 are resistant to all standard types of constructive control other than one variant of addition of candidates. Moreover, we show that for each rational \alpha, 0 \leq \alpha \leq 1, Copeland^\alpha voting is fully resistant to bribery attacks, and we establish fixed-parameter tractability of bounded-case control for Copeland^\alpha. We also study Copeland^\alpha elections under more flexible models such as microbribery and extended control and we integrate the potential irrationality of voter preferences into many of our results.<|reference_end|>
arxiv
@article{faliszewski2008llull, title={Llull and Copeland Voting Computationally Resist Bribery and Control}, author={Piotr Faliszewski, Edith Hemaspaandra, Lane A. Hemaspaandra, Joerg Rothe}, journal={arXiv preprint arXiv:0809.4484}, year={2008}, number={URCS-TR-2008-933}, archivePrefix={arXiv}, eprint={0809.4484}, primaryClass={cs.GT cs.CC cs.MA} }
faliszewski2008llull
arxiv-4986
0809.4501
Audio Classification from Time-Frequency Texture
<|reference_start|>Audio Classification from Time-Frequency Texture: Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.<|reference_end|>
arxiv
@article{yu2008audio, title={Audio Classification from Time-Frequency Texture}, author={Guoshen Yu, Jean-Jacques Slotine}, journal={arXiv preprint arXiv:0809.4501}, year={2008}, archivePrefix={arXiv}, eprint={0809.4501}, primaryClass={cs.CV cs.SD} }
yu2008audio
arxiv-4987
0809.4529
The Equivalence of Semidefinite Relaxation MIMO Detectors for Higher-Order QAM
<|reference_start|>The Equivalence of Semidefinite Relaxation MIMO Detectors for Higher-Order QAM: In multi-input-multi-output (MIMO) detection, semidefinite relaxation (SDR) has been shown to be an efficient high-performance approach. Developed initially for BPSK and QPSK, SDR has been found to be capable of providing near-optimal performance (for those constellations). This has stimulated a number of recent research endeavors that aim to apply SDR to the high-order QAM cases. These independently developed SDRs are different in concept and structure, and presently no serious analysis has been given to compare these methods. This paper analyzes the relationship of three such SDR methods, namely the polynomial-inspired SDR (PI-SDR) by Wiesel et al., the bound-constrained SDR (BC-SDR) by Sidiropoulos and Luo, and the virtually-antipodal SDR (VA-SDR) by Mao et al. The result that we have proven is somewhat unexpected: the three SDRs are equivalent. Simply speaking, we show that solving any one SDR is equivalent to solving the other SDRs. This paper also discusses some implications arising from the SDR equivalence, and provides simulation results to verify our theoretical findings.<|reference_end|>
arxiv
@article{ma2008the, title={The Equivalence of Semidefinite Relaxation MIMO Detectors for Higher-Order QAM}, author={Wing-Kin Ma, Chao-Cheng Su, Joakim Jalden, Tsung-Hui Chang, and Chong-Yung Chi}, journal={arXiv preprint arXiv:0809.4529}, year={2008}, doi={10.1109/JSTSP.2009.2035798}, archivePrefix={arXiv}, eprint={0809.4529}, primaryClass={cs.IT math.IT math.OC} }
ma2008the
arxiv-4988
0809.4530
Mining Meaning from Wikipedia
<|reference_start|>Mining Meaning from Wikipedia: Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.<|reference_end|>
arxiv
@article{medelyan2008mining, title={Mining Meaning from Wikipedia}, author={Olena Medelyan, David Milne, Catherine Legg and Ian H. Witten}, journal={arXiv preprint arXiv:0809.4530}, year={2008}, number={ISSN 1177-777X}, archivePrefix={arXiv}, eprint={0809.4530}, primaryClass={cs.AI cs.CL cs.IR} }
medelyan2008mining
arxiv-4989
0809.4576
On-the-Fly Coding to Enable Full Reliability Without Retransmission
<|reference_start|>On-the-Fly Coding to Enable Full Reliability Without Retransmission: This paper proposes a new reliability algorithm that is specifically useful when retransmission is either problematic or not possible. In the case of multimedia or multicast communications, and in the context of Delay Tolerant Networking (DTN), classical retransmission schemes can be counterproductive in terms of data transfer performance, or simply not possible when the acknowledgment path is not always available. Indeed, over long-delay links, packet retransmission carries a cost and must be minimized. In this paper, we detail a novel reliability mechanism with an implicit acknowledgment strategy that could be used within these new DTN proposals, for multimedia traffic, or in the context of multicast transport protocols. This proposal is based on a new on-the-fly erasure coding concept specifically designed to operate efficient reliable transfer over bi-directional links. This proposal, named Tetrys, allows us to unify full reliability with an error correction scheme. In this paper, we model the performance of this proposal and demonstrate with a prototype that we can achieve full reliability without acknowledgment path confirmation. Indeed, the main findings are that Tetrys is not sensitive to the loss of acknowledgments while ensuring faster data availability to the application compared to other traditional acknowledgment schemes. Finally, we take the first step toward the integration of such an algorithm inside a congestion-controlled protocol.<|reference_end|>
arxiv
@article{lacan2008on-the-fly, title={On-the-Fly Coding to Enable Full Reliability Without Retransmission}, author={Jerome Lacan and Emmanuel Lochin}, journal={arXiv preprint arXiv:0809.4576}, year={2008}, archivePrefix={arXiv}, eprint={0809.4576}, primaryClass={cs.NI} }
lacan2008on-the-fly
arxiv-4990
0809.4577
A Generic Top-Down Dynamic-Programming Approach to Prefix-Free Coding
<|reference_start|>A Generic Top-Down Dynamic-Programming Approach to Prefix-Free Coding: Given a probability distribution over a set of n words to be transmitted, the Huffman Coding problem is to find a minimal-cost prefix-free code for transmitting those words. The basic Huffman coding problem can be solved in O(n log n) time but variations are more difficult. One of the standard techniques for solving these variations utilizes a top-down dynamic programming approach. In this paper we show that this approach is amenable to dynamic programming speedup techniques, permitting a speedup of an order of magnitude for many algorithms in the literature for such variations as mixed radix, reserved length and one-ended coding. These speedups are immediate implications of a general structural property that permits batching together the calculation of many DP entries.<|reference_end|>
arxiv
@article{golin2008a, title={A Generic Top-Down Dynamic-Programming Approach to Prefix-Free Coding}, author={Mordecai Golin and Xiaoming Xu and Jiajin Yu}, journal={arXiv preprint arXiv:0809.4577}, year={2008}, archivePrefix={arXiv}, eprint={0809.4577}, primaryClass={cs.DS cs.IT math.IT} }
golin2008a
arxiv-4991
0809.4582
Achieving compositionality of the stable model semantics for Smodels programs
<|reference_start|>Achieving compositionality of the stable model semantics for Smodels programs: In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of Smodels programs under the stable model semantics. The composition of Smodels program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire Smodels program depends directly on stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens Lifschitz and Turner's splitting-set theorem for the class of Smodels programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover Smodels programs using a translation from the latter class of programs to the former class. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of Smodels programs. To appear in Theory and Practice of Logic Programming.<|reference_end|>
arxiv
@article{oikarinen2008achieving, title={Achieving compositionality of the stable model semantics for Smodels programs}, author={Emilia Oikarinen and Tomi Janhunen}, journal={arXiv preprint arXiv:0809.4582}, year={2008}, archivePrefix={arXiv}, eprint={0809.4582}, primaryClass={cs.AI} }
oikarinen2008achieving
arxiv-4992
0809.4622
A computational approach to the covert and overt deployment of spatial attention
<|reference_start|>A computational approach to the covert and overt deployment of spatial attention: Popular computational models of visual attention tend to neglect the influence of saccadic eye movements, whereas it has been shown that primates perform on average three of them per second and that the neural substrate for the deployment of attention and the execution of an eye movement might considerably overlap. Here we propose a computational model in which the deployment of attention with or without a subsequent eye movement emerges from local, distributed and numerical computations.<|reference_end|>
arxiv
@article{fix2008a, title={A computational approach to the covert and overt deployment of spatial attention}, author={J\'er\'emy Fix (INRIA Lorraine - Loria) and Nicolas P. Rougier (INRIA Lorraine - Loria, University of Colorado, Boulder) and Fr\'ed\'eric Alexandre (INRIA Lorraine - Loria)}, journal={In NeuroComp 2008 : 2i\`eme Conf\'erence Fran\c{c}aise de Neurosciences Computationnelles (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0809.4622}, primaryClass={cs.NE} }
fix2008a
arxiv-4993
0809.4632
Surrogate Learning - An Approach for Semi-Supervised Classification
<|reference_start|>Surrogate Learning - An Approach for Semi-Supervised Classification: We consider the task of learning a classifier from the feature space $\mathcal{X}$ to the set of classes $\mathcal{Y} = \{0, 1\}$, when the features can be partitioned into class-conditionally independent feature sets $\mathcal{X}_1$ and $\mathcal{X}_2$. We show the surprising fact that the class-conditional independence can be used to represent the original learning task in terms of 1) learning a classifier from $\mathcal{X}_2$ to $\mathcal{X}_1$ and 2) learning the class-conditional distribution of the feature set $\mathcal{X}_1$. This fact can be exploited for semi-supervised learning because the former task can be accomplished purely from unlabeled samples. We present experimental evaluation of the idea in two real world applications.<|reference_end|>
arxiv
@article{veeramachaneni2008surrogate, title={Surrogate Learning - An Approach for Semi-Supervised Classification}, author={Sriharsha Veeramachaneni and Ravikumar Kondadadi}, journal={arXiv preprint arXiv:0809.4632}, year={2008}, archivePrefix={arXiv}, eprint={0809.4632}, primaryClass={cs.LG} }
veeramachaneni2008surrogate
arxiv-4994
0809.4635
Mechanistic Behavior of Single-Pass Instruction Sequences
<|reference_start|>Mechanistic Behavior of Single-Pass Instruction Sequences: Earlier work on program and thread algebra detailed the functional, observable behavior of programs under execution. In this article we add the modeling of unobservable, mechanistic processing, in particular processing due to jump instructions. We model mechanistic processing preceding some further behavior as a delay of that behavior; we borrow a unary delay operator from discrete time process algebra. We define a mechanistic improvement ordering on threads and observe that some threads do not have an optimal implementation.<|reference_end|>
arxiv
@article{bergstra2008mechanistic, title={Mechanistic Behavior of Single-Pass Instruction Sequences}, author={Jan A. Bergstra and Mark B. van der Zwaag}, journal={arXiv preprint arXiv:0809.4635}, year={2008}, archivePrefix={arXiv}, eprint={0809.4635}, primaryClass={cs.PL cs.LO} }
bergstra2008mechanistic
arxiv-4995
0809.4668
Faceted Ranking of Egos in Collaborative Tagging Systems
<|reference_start|>Faceted Ranking of Egos in Collaborative Tagging Systems: Multimedia uploaded content is tagged and recommended by users of collaborative systems, resulting in informal classifications also known as folksonomies. Faceted web ranking has proved to be a reasonable alternative to a single ranking, which does not take a personalized context into account. In this paper we analyze the online computation of rankings of users associated to facets made up of multiple tags. Possible applications are user reputation evaluation (ego-ranking) and improving the quality of retrieved content. We propose a solution based on PageRank as centrality measure: (i) a ranking for each tag is computed offline on the basis of the corresponding tag-dependent subgraph; (ii) a faceted order is generated by merging the rankings corresponding to all the tags in the facet. The fundamental assumption, validated by empirical observations, is that step (i) is scalable. We also present algorithms for part (ii) having time complexity O(k), where k is the number of tags in the facet, well suited to online computation.<|reference_end|>
arxiv
@article{orlicki2008faceted, title={Faceted Ranking of Egos in Collaborative Tagging Systems}, author={Jose Ignacio Orlicki (CoreLabs, ITBA) and Pablo Ignacio Fierens (ITBA) and Jos\'e Ignacio Alvarez-Hamelin (ITBA, CONICET)}, journal={WEBIST 2009, Lisboa : Portugal (2009)}, year={2008}, archivePrefix={arXiv}, eprint={0809.4668}, primaryClass={cs.IR} }
orlicki2008faceted
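The two-step scheme in the abstract above — offline per-tag rankings, then an online merge over the facet's tags — can be sketched as follows. This is a hypothetical illustration using a simple score-averaging merge; the paper's own O(k) merge algorithms, and the names `per_tag_scores`, `facet`, and `faceted_scores`, are not from the source.

```python
def faceted_scores(per_tag_scores, facet):
    """Combine precomputed per-tag PageRank scores into a facet ranking.

    per_tag_scores: dict mapping tag -> {user: score}, assumed computed
                    offline on each tag-dependent subgraph (step (i)).
    facet: iterable of tags making up the facet.
    Returns (user, score) pairs sorted by averaged score, descending.
    """
    tags = list(facet)
    combined = {}
    for tag in tags:
        # Accumulate each user's score across the facet's tags.
        for user, score in per_tag_scores.get(tag, {}).items():
            combined[user] = combined.get(user, 0.0) + score
    k = len(tags)
    return sorted(
        ((user, total / k) for user, total in combined.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Since dividing by the facet size k does not change the ordering, summing instead of averaging yields the same ranking; averaging merely keeps the merged scores on the same scale as the per-tag ones.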
arxiv-4996
0809.4743
The Imaginary Sliding Window As a New Data Structure for Adaptive Algorithms
<|reference_start|>The Imaginary Sliding Window As a New Data Structure for Adaptive Algorithms: The scheme of the sliding window is known in information theory, computer science, the problem of prediction, and statistics. Let a source with unknown statistics generate some word $... x_{-1}x_{0}x_{1}x_{2}...$ in some alphabet $A$. For every moment $t, t=... $ $-1, 0, 1, ...$, one stores the word ("window") $ x_{t-w} x_{t-w+1}... x_{t-1}$ where $w$, $w \geq 1$, is called the "window length". In the theory of universal coding, the code of $x_{t}$ depends on the source statistics estimated from the window; in the problem of prediction, each letter $x_{t}$ is predicted using the information in the window, etc. After that the letter $x_{t}$ is included in the window on the right, while $x_{t-w}$ is removed from the window. This is the sliding window scheme. This scheme has two merits: it allows one i) to estimate the source statistics quite precisely and ii) to adapt the code in case of a change in the source statistics. However, this scheme has a defect, namely, the necessity of storing the window (i.e. the word $x_{t-w}... x_{t-1}$), which requires a large memory size for large $w$. A new scheme named the "Imaginary Sliding Window (ISW)" is constructed. The gist of this scheme is that not the last element $x_{t-w}$ but rather a random one is removed from the window. This allows one to retain both merits of the sliding window as well as the possibility of not storing the window, thus significantly decreasing the memory size.<|reference_end|>
arxiv
@article{ryabko2008the, title={The Imaginary Sliding Window As a New Data Structure for Adaptive Algorithms}, author={Boris Ryabko}, journal={arXiv preprint arXiv:0809.4743}, year={2008}, archivePrefix={arXiv}, eprint={0809.4743}, primaryClass={cs.IT cs.DS math.IT} }
ryabko2008the
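The ISW idea described in the abstract above fits in a few lines: only per-symbol counts are kept, and eviction removes a randomly chosen window element instead of the oldest one. This is a hypothetical sketch of the concept, not the paper's reference implementation; the class and method names are invented for illustration.

```python
import random
from collections import Counter

class ImaginarySlidingWindow:
    """Frequency estimator over a conceptual window of length w.

    Instead of storing the window and evicting the oldest symbol,
    a random window element (drawn according to the current counts)
    is evicted, so only the counts need to be kept in memory.
    """

    def __init__(self, w):
        self.w = w
        self.counts = Counter()
        self.size = 0

    def update(self, symbol):
        if self.size == self.w:
            # Evict an imaginary random window element: pick a symbol
            # with probability proportional to its current count.
            victim = random.choices(
                list(self.counts), weights=self.counts.values()
            )[0]
            self.counts[victim] -= 1
            if self.counts[victim] == 0:
                del self.counts[victim]
            self.size -= 1
        self.counts[symbol] += 1
        self.size += 1

    def freq(self, symbol):
        # Estimated probability of `symbol` over the recent stream.
        return self.counts[symbol] / self.size if self.size else 0.0
```

Memory is O(|alphabet|) rather than O(w), and after a change in the source's statistics the old symbols' counts decay geometrically, which is what preserves the adaptivity of the true sliding window.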
arxiv-4997
0809.4747
On parsimonious edge-colouring of graphs with maximum degree three
<|reference_start|>On parsimonious edge-colouring of graphs with maximum degree three: In a graph $G$ of maximum degree $\Delta$ let $\gamma$ denote the largest fraction of edges that can be $\Delta$ edge-coloured. Albertson and Haas showed that $\gamma \geq 13/15$ when $G$ is cubic. We show here that this result can be extended to graphs with maximum degree 3 with the exception of a graph on 5 vertices. Moreover, there are exactly two graphs with maximum degree 3 (one being obviously the Petersen graph) for which $\gamma = 13/15$. This extends a result given by Steffen. These results are obtained by using structural properties of the so-called $\delta$-minimum edge colourings for graphs with maximum degree 3. Keywords: Cubic graph; Edge-colouring<|reference_end|>
arxiv
@article{fouquet2008on, title={On parsimonious edge-colouring of graphs with maximum degree three}, author={Jean-Luc Fouquet (LIFO), Jean-Marie Vanherpe (LIFO)}, journal={arXiv preprint arXiv:0809.4747}, year={2008}, archivePrefix={arXiv}, eprint={0809.4747}, primaryClass={cs.DM} }
fouquet2008on
arxiv-4998
0809.4784
A Computational Study on Emotions and Temperament in Multi-Agent Systems
<|reference_start|>A Computational Study on Emotions and Temperament in Multi-Agent Systems: Recent advances in neurosciences and psychology have provided evidence that affective phenomena pervade intelligence at many levels, being inseparable from the cognition-action loop. Perception, attention, memory, learning, decision-making, adaptation, communication and social interaction are some of the aspects influenced by them. This work draws its inspiration from neurobiology, psychophysics and sociology to approach the problem of building autonomous robots capable of interacting with each other and building strategies based on a temperamental decision mechanism. Modelling emotions is a relatively recent focus in artificial intelligence and cognitive modelling. Such models can ideally inform our understanding of human behavior. We may see the development of computational models of emotion as a core research focus that will facilitate advances in the large array of computational systems that model, interpret or influence human behavior. We propose a model based on a scalable, flexible and modular approach to emotion which allows runtime evaluation of the trade-off between emotional quality and performance. The results achieved showed that strategies based on a temperamental decision mechanism strongly influence system performance, and that there is an evident dependency between the emotional state of the agents and their temperamental type, as well as between team performance and the temperamental configuration of the team members. This enables us to conclude that a modular approach to emotional programming based on temperamental theory is a good choice for developing computational mind models for emotional behavioral multi-agent systems.<|reference_end|>
arxiv
@article{reis2008a, title={A Computational Study on Emotions and Temperament in Multi-Agent Systems}, author={Luis Paulo Reis and Daria Barteneva and Nuno Lau}, journal={arXiv preprint arXiv:0809.4784}, year={2008}, archivePrefix={arXiv}, eprint={0809.4784}, primaryClass={cs.AI cs.MA cs.RO} }
reis2008a
arxiv-4999
0809.4792
On Pure and (approximate) Strong Equilibria of Facility Location Games
<|reference_start|>On Pure and (approximate) Strong Equilibria of Facility Location Games: We study social cost losses in Facility Location games, where $n$ selfish agents install facilities over a network and connect to them, so as to forward their local demand (expressed by a non-negative weight per agent). Agents using the same facility share fairly its installation cost, but every agent pays individually a (weighted) connection cost to the chosen location. We study the Price of Stability (PoS) of pure Nash equilibria and the Price of Anarchy of strong equilibria (SPoA), that generalize pure equilibria by being resilient to coalitional deviations. A special case of recently studied network design games, Facility Location merits separate study as a classic model with numerous applications and individual characteristics: our analysis for unweighted agents on metric networks reveals constant upper and lower bounds for the PoS, while an $O(\ln n)$ upper bound implied by previous work is tight for non-metric networks. Strong equilibria do not always exist, even for the unweighted metric case. We show that $e$-approximate strong equilibria exist ($e=2.718...$). The SPoA is generally upper bounded by $O(\ln W)$ ($W$ is the sum of agents' weights), which becomes tight $\Theta(\ln n)$ for unweighted agents. For the unweighted metric case we prove a constant upper bound. We point out several challenging open questions that arise.<|reference_end|>
arxiv
@article{hansen2008on, title={On Pure and (approximate) Strong Equilibria of Facility Location Games}, author={Thomas Dueholm Hansen and Orestis A. Telelis}, journal={arXiv preprint arXiv:0809.4792}, year={2008}, archivePrefix={arXiv}, eprint={0809.4792}, primaryClass={cs.GT} }
hansen2008on
arxiv-5000
0809.4794
Efficient, Differentially Private Point Estimators
<|reference_start|>Efficient, Differentially Private Point Estimators: Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (further) compelling evidence that rigorous notions of privacy in statistical databases can be consistent with statistically valid inference.<|reference_end|>
arxiv
@article{smith2008efficient, title={Efficient, Differentially Private Point Estimators}, author={Adam Smith}, journal={arXiv preprint arXiv:0809.4794}, year={2008}, archivePrefix={arXiv}, eprint={0809.4794}, primaryClass={cs.CR cs.DS} }
smith2008efficient