Dataset schema (one record per paper):

    corpus_id     string, length 7 to 12
    paper_id      string, length 9 to 16
    title         string, length 1 to 261
    abstract      string, length 70 to 4.02k
    source        string, 1 class ("arxiv")
    bibtex        string, length 208 to 20.9k
    citation_key  string, length 6 to 100
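Each record's abstract field wraps its text in <|reference_start|> and <|reference_end|> markers and prefixes it with the paper title, as can be seen in the records below. The following minimal Python sketch strips that wrapper; the record dict is hand-copied (and truncated) from the first entry, and any dataset-loading code is assumed rather than shown.

    import re

    # Sketch only: strip the <|reference_start|>/<|reference_end|> wrapper and
    # the leading "Title: " echo from an abstract field.
    MARKER_RE = re.compile(r"<\|reference_start\|>(.*)<\|reference_end\|>", re.DOTALL)

    def clean_abstract(record: dict) -> str:
        """Return the bare abstract text of one record."""
        m = MARKER_RE.search(record["abstract"])
        body = m.group(1) if m else record["abstract"]
        # The wrapped text repeats the title as a "Title: " prefix; drop it.
        prefix = record["title"] + ": "
        return body[len(prefix):] if body.startswith(prefix) else body

    # Hand-copied from record arxiv-4301 below, abstract truncated for brevity.
    record = {
        "title": "Altruism in Atomic Congestion Games",
        "abstract": ("<|reference_start|>Altruism in Atomic Congestion Games: "
                     "This paper studies the effects of introducing altruistic "
                     "agents into atomic congestion games.<|reference_end|>"),
    }
    print(clean_abstract(record))  # -> "This paper studies the effects of ..."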
arxiv-4301
0807.2011
Altruism in Atomic Congestion Games
<|reference_start|>Altruism in Atomic Congestion Games: This paper studies the effects of introducing altruistic agents into atomic congestion games. Altruistic behavior is modeled by a trade-off between selfish and social objectives. In particular, we assume agents optimize a linear combination of personal delay of a strategy and the resulting increase in social cost. Our model can be embedded in the framework of congestion games with player-specific latency functions. Stable states are the Nash equilibria of these games, and we examine their existence and the convergence of sequential best-response dynamics. Previous work shows that for symmetric singleton games with convex delays Nash equilibria are guaranteed to exist. For concave delay functions we observe that there are games without Nash equilibria and provide a polynomial time algorithm to decide existence for symmetric singleton games with arbitrary delay functions. Our algorithm can be extended to compute best and worst Nash equilibria if they exist. For more general congestion games existence becomes NP-hard to decide, even for symmetric network games with quadratic delay functions. Perhaps surprisingly, if all delay functions are linear, then there is always a Nash equilibrium in any congestion game with altruists and any better-response dynamics converges. In addition to these results for uncoordinated dynamics, we consider a scenario in which a central altruistic institution can motivate agents to act altruistically. We provide constructive and hardness results for finding the minimum number of altruists to stabilize an optimal congestion profile and more general mechanisms to incentivize agents to adopt favorable behavior.<|reference_end|>
arxiv
@article{hoefer2008altruism, title={Altruism in Atomic Congestion Games}, author={Martin Hoefer and Alexander Skopalik}, journal={arXiv preprint arXiv:0807.2011}, year={2008}, archivePrefix={arXiv}, eprint={0807.2011}, primaryClass={cs.GT} }
hoefer2008altruism
arxiv-4302
0807.2023
Beyond Node Degree: Evaluating AS Topology Models
<|reference_start|>Beyond Node Degree: Evaluating AS Topology Models: Many models have been proposed to generate Internet Autonomous System (AS) topologies, most of which make structural assumptions about the AS graph. In this paper we compare AS topology generation models with several observed AS topologies. In contrast to most previous works, we avoid making assumptions about which topological properties are important to characterize the AS topology. Our analysis shows that, although they match degree-based properties, the existing AS topology generation models fail to capture the complexity of the local interconnection structure between ASs. Furthermore, we use BGP data from multiple vantage points to show that additional measurement locations significantly affect local structure properties, such as clustering and node centrality. Degree-based properties, however, are not notably affected by additional measurement locations. These observations are particularly valid in the core. The shortcomings of AS topology generation models stem from an underestimation of the complexity of the connectivity in the core caused by inappropriate use of BGP data.<|reference_end|>
arxiv
@article{haddadi2008beyond, title={Beyond Node Degree: Evaluating AS Topology Models}, author={Hamed Haddadi, Damien Fay, Almerima Jamakovic, Olaf Maennel, Andrew W. Moore, Richard Mortier, Miguel Rio, Steve Uhlig}, journal={arXiv preprint arXiv:0807.2023}, year={2008}, archivePrefix={arXiv}, eprint={0807.2023}, primaryClass={cs.NI} }
haddadi2008beyond
arxiv-4303
0807.2028
On Krause's multi-agent consensus model with state-dependent connectivity (Extended version)
<|reference_start|>On Krause's multi-agent consensus model with state-dependent connectivity (Extended version): We study a model of opinion dynamics introduced by Krause: each agent has an opinion represented by a real number, and updates its opinion by averaging all agent opinions that differ from its own by less than 1. We give a new proof of convergence into clusters of agents, with all agents in the same cluster holding the same opinion. We then introduce a particular notion of equilibrium stability and provide lower bounds on the inter-cluster distances at a stable equilibrium. To better understand the behavior of the system when the number of agents is large, we also introduce and study a variant involving a continuum of agents, obtaining partial convergence results and lower bounds on inter-cluster distances, under some mild assumptions.<|reference_end|>
arxiv
@article{blondel2008on, title={On Krause's multi-agent consensus model with state-dependent connectivity (Extended version)}, author={Vincent D. Blondel, Julien M. Hendrickx and John N. Tsitsiklis}, journal={arXiv preprint arXiv:0807.2028}, year={2008}, archivePrefix={arXiv}, eprint={0807.2028}, primaryClass={cs.MA} }
blondel2008on
arxiv-4304
0807.2043
Intrusion Detection Using Cost-Sensitive Classification
<|reference_start|>Intrusion Detection Using Cost-Sensitive Classification: Intrusion Detection is an invaluable part of computer network defense. An important consideration is the fact that raising false alarms carries a significantly lower cost than not detecting attacks. For this reason, we examine how cost-sensitive classification methods can be used in Intrusion Detection systems. The performance of the approach is evaluated under different experimental conditions, cost matrices and different classification models, in terms of expected cost, as well as detection and false alarm rates. We find that even under unfavourable conditions, cost-sensitive classification can improve performance, if only slightly.<|reference_end|>
arxiv
@article{mitrokotsa2008intrusion, title={Intrusion Detection Using Cost-Sensitive Classification}, author={Aikaterini Mitrokotsa and Christos Dimitrakakis and Christos Douligeris}, journal={arXiv preprint arXiv:0807.2043}, year={2008}, archivePrefix={arXiv}, eprint={0807.2043}, primaryClass={cs.CR cs.CV cs.NI} }
mitrokotsa2008intrusion
arxiv-4305
0807.2047
The Five Points Pose Problem: A New and Accurate Solution Adapted to any Geometric Configuration
<|reference_start|>The Five Points Pose Problem: A New and Accurate Solution Adapted to any Geometric Configuration: The goal of this paper is to estimate directly the rotation and translation between two stereoscopic images with the help of five homologous points. The presented methodology does not mix the rotation and translation parameters, an important advantage over methods using the well-known essential matrix. This results in correct behavior and accuracy in situations otherwise known to be quite unfavorable, such as planar scenes or panoramic sets of images (with a null base length), while providing quite comparable results for more "standard" cases. The resolution of the algebraic polynomials resulting from the modeling of the coplanarity constraint is carried out with the help of powerful algebraic solver tools (the Groebner bases and the Rational Univariate Representation).<|reference_end|>
arxiv
@article{kalantari2008the, title={The Five Points Pose Problem: A New and Accurate Solution Adapted to any Geometric Configuration}, author={Mahzad Kalantari, Franck Jung, Jean-Pierre Guedon, Nicolas Paparoditis}, journal={arXiv preprint arXiv:0807.2047}, year={2008}, archivePrefix={arXiv}, eprint={0807.2047}, primaryClass={cs.CV} }
kalantari2008the
arxiv-4306
0807.2049
Intrusion Detection in Mobile Ad Hoc Networks Using Classification Algorithms
<|reference_start|>Intrusion Detection in Mobile Ad Hoc Networks Using Classification Algorithms: In this paper we present the design and evaluation of intrusion detection models for MANETs using supervised classification algorithms. Specifically, we evaluate the performance of the MultiLayer Perceptron (MLP), the Linear classifier, the Gaussian Mixture Model (GMM), the Naive Bayes classifier and the Support Vector Machine (SVM). The performance of the classification algorithms is evaluated under different traffic conditions and mobility patterns for the Black Hole, Forging, Packet Dropping, and Flooding attacks. The results indicate that Support Vector Machines exhibit high accuracy for almost all simulated attacks and that Packet Dropping is the hardest attack to detect.<|reference_end|>
arxiv
@article{mitrokotsa2008intrusion, title={Intrusion Detection in Mobile Ad Hoc Networks Using Classification Algorithms}, author={Aikaterini Mitrokotsa and Manolis Tsagkaris and Christos Douligeris}, journal={arXiv preprint arXiv:0807.2049}, year={2008}, archivePrefix={arXiv}, eprint={0807.2049}, primaryClass={cs.CR cs.NI} }
mitrokotsa2008intrusion
arxiv-4307
0807.2053
Towards an Effective Intrusion Response Engine Combined with Intrusion Detection in Ad Hoc Networks
<|reference_start|>Towards an Effective Intrusion Response Engine Combined with Intrusion Detection in Ad Hoc Networks: In this paper, we present an effective intrusion response engine combined with intrusion detection in ad hoc networks. The intrusion response engine is composed of a secure communication module, a local and a global response module. Its function is based on an innovative tree-based key agreement protocol, while the intrusion detection engine is based on a class of neural networks called eSOM. The proposed intrusion response model and the tree-based protocol on which it is based are analyzed with respect to key secrecy, while the intrusion detection engine is evaluated for MANETs under different traffic conditions and mobility patterns. The results show a high detection rate for packet dropping attacks.<|reference_end|>
arxiv
@article{mitrokotsa2008towards, title={Towards an Effective Intrusion Response Engine Combined with Intrusion Detection in Ad Hoc Networks}, author={Aikaterini Mitrokotsa and Nikos Komninos and Christos Douligeris}, journal={arXiv preprint arXiv:0807.2053}, year={2008}, archivePrefix={arXiv}, eprint={0807.2053}, primaryClass={cs.CR cs.NI} }
mitrokotsa2008towards
arxiv-4308
0807.2108
On dual Schur domain decomposition method for linear first-order transient problems
<|reference_start|>On dual Schur domain decomposition method for linear first-order transient problems: This paper addresses some numerical and theoretical aspects of dual Schur domain decomposition methods for linear first-order transient partial differential equations. In this work, we consider the trapezoidal family of schemes for integrating the ordinary differential equations (ODEs) for each subdomain and present four different coupling methods, corresponding to different algebraic constraints, for enforcing kinematic continuity on the interface between the subdomains. Method 1 (d-continuity) is based on the conventional approach using continuity of the primary variable and we show that this method is unstable for many commonly used time integrators, including the mid-point rule. To alleviate this difficulty, we propose a new Method 2 (Modified d-continuity) and prove its stability for coupling all time integrators in the trapezoidal family (except the forward Euler). Method 3 (v-continuity) is based on enforcing the continuity of the time derivative of the primary variable. However, this constraint introduces a drift in the primary variable on the interface. We present Method 4 (Baumgarte stabilized) which uses Baumgarte stabilization to limit this drift and we derive bounds for the stabilization parameter to ensure stability. Our stability analysis is based on the ``energy'' method, and one of the main contributions of this paper is the extension of the energy method (which was previously introduced in the context of numerical methods for ODEs) to assess the stability of numerical formulations for index-2 differential-algebraic equations (DAEs).<|reference_end|>
arxiv
@article{nakshatrala2008on, title={On dual Schur domain decomposition method for linear first-order transient problems}, author={K.B.Nakshatrala, A. Prakash, K.D.Hjelmstad}, journal={arXiv preprint arXiv:0807.2108}, year={2008}, doi={10.1016/j.jcp.2009.07.016}, archivePrefix={arXiv}, eprint={0807.2108}, primaryClass={cs.NA cs.CE} }
nakshatrala2008on
arxiv-4309
0807.2120
Derandomizing the Lovasz Local Lemma more effectively
<|reference_start|>Derandomizing the Lovasz Local Lemma more effectively: The famous Lovasz Local Lemma [EL75] is a powerful tool to non-constructively prove the existence of combinatorial objects meeting a prescribed collection of criteria. Kratochvil et al. applied this technique to prove that a k-CNF in which each variable appears at most 2^k/(ek) times is always satisfiable [KST93]. In a breakthrough paper, Beck found that if we lower the occurrences to O(2^(k/48)/k), then a deterministic polynomial-time algorithm can find a satisfying assignment to such an instance [Bec91]. Alon randomized the algorithm and required O(2^(k/8)/k) occurrences [Alo91]. In [Mos06], we exhibited a refinement of his method which copes with O(2^(k/6)/k) of them. The hitherto best known randomized algorithm is due to Srinivasan and is capable of solving O(2^(k/4)/k) occurrence instances [Sri08]. Answering two questions asked by Srinivasan, we now present an approach that tolerates O(2^(k/2)/k) occurrences per variable and which can most easily be derandomized. The new algorithm is based on an alternative type of witness tree structure and drops a number of limiting aspects common to all previous methods.<|reference_end|>
arxiv
@article{moser2008derandomizing, title={Derandomizing the Lovasz Local Lemma more effectively}, author={Robin A. Moser}, journal={arXiv preprint arXiv:0807.2120}, year={2008}, archivePrefix={arXiv}, eprint={0807.2120}, primaryClass={cs.DS cs.CC} }
moser2008derandomizing
arxiv-4310
0807.2158
Universally-composable privacy amplification from causality constraints
<|reference_start|>Universally-composable privacy amplification from causality constraints: We consider schemes for secret key distribution which use as a resource correlations that violate Bell inequalities. We provide the first security proof for such schemes, according to the strongest notion of security, the so-called universally-composable security. Our security proof does not rely on the validity of quantum mechanics; it relies solely on the impossibility of arbitrarily fast signaling between separate physical systems. This allows for secret communication in situations where the participants distrust their quantum devices.<|reference_end|>
arxiv
@article{masanes2008universally-composable, title={Universally-composable privacy amplification from causality constraints}, author={Lluis Masanes}, journal={Phys. Rev. Lett. 102, 140501 (2009)}, year={2008}, doi={10.1103/PhysRevLett.102.140501}, archivePrefix={arXiv}, eprint={0807.2158}, primaryClass={quant-ph cs.CR cs.IT math.IT} }
masanes2008universally-composable
arxiv-4311
0807.2178
Ranking Unit Squares with Few Visibilities
<|reference_start|>Ranking Unit Squares with Few Visibilities: Given a set of n unit squares in the plane, the goal is to rank them in space in such a way that only few squares see each other vertically. We prove that ranking the squares according to the lexicographic order of their centers results in at most 3n-7 pairwise visibilities for n at least 4. We also show that this bound is best possible, by exhibiting a set of n squares with at least 3n-7 pairwise visibilities under any ranking.<|reference_end|>
arxiv
@article{gärtner2008ranking, title={Ranking Unit Squares with Few Visibilities}, author={Bernd G\"artner}, journal={arXiv preprint arXiv:0807.2178}, year={2008}, archivePrefix={arXiv}, eprint={0807.2178}, primaryClass={cs.CG cs.DS} }
gärtner2008ranking
arxiv-4312
0807.2218
Isometric Diamond Subgraphs
<|reference_start|>Isometric Diamond Subgraphs: We describe polynomial time algorithms for determining whether an undirected graph may be embedded in a distance-preserving way into the hexagonal tiling of the plane, the diamond structure in three dimensions, or analogous structures in higher dimensions. The graphs that may be embedded in this way form an interesting subclass of the partial cubes.<|reference_end|>
arxiv
@article{eppstein2008isometric, title={Isometric Diamond Subgraphs}, author={David Eppstein}, journal={arXiv preprint arXiv:0807.2218}, year={2008}, archivePrefix={arXiv}, eprint={0807.2218}, primaryClass={cs.CG} }
eppstein2008isometric
arxiv-4313
0807.2268
Multihop Diversity in Wideband OFDM Systems: The Impact of Spatial Reuse and Frequency Selectivity
<|reference_start|>Multihop Diversity in Wideband OFDM Systems: The Impact of Spatial Reuse and Frequency Selectivity: The goal of this paper is to establish which practical routing schemes for wireless networks are most suitable for wideband systems in the power-limited regime, which is, for example, a practically relevant mode of operation for the analysis of ultrawideband (UWB) mesh networks. For this purpose, we study the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) in a wideband linear multihop network in which transmissions employ orthogonal frequency-division multiplexing (OFDM) modulation and are affected by quasi-static, frequency-selective fading. Considering open-loop (fixed-rate) and closed-loop (rate-adaptive) multihop relaying techniques, we characterize the impact of routing with spatial reuse on the statistical properties of the end-to-end conditional mutual information (conditioned on the specific values of the channel fading parameters and therefore treated as a random variable) and on the energy and spectral efficiency measures of the wideband regime. Our analysis particularly deals with the convergence of these end-to-end performance measures in the case of a large number of hops, i.e., the phenomenon first observed in \cite{Oyman06b} and named ``multihop diversity''. Our results demonstrate the realizability of the multihop diversity advantages in the case of routing with spatial reuse for wideband OFDM systems under wireless channel effects such as path-loss and quasi-static frequency-selective multipath fading.<|reference_end|>
arxiv
@article{oyman2008multihop, title={Multihop Diversity in Wideband OFDM Systems: The Impact of Spatial Reuse and Frequency Selectivity}, author={Ozgur Oyman, J. Nicholas Laneman}, journal={arXiv preprint arXiv:0807.2268}, year={2008}, doi={10.1109/ISSSTA.2008.45}, archivePrefix={arXiv}, eprint={0807.2268}, primaryClass={cs.IT math.IT} }
oyman2008multihop
arxiv-4314
0807.2269
An Efficient Algorithm for a Sharp Approximation of Universally Quantified Inequalities
<|reference_start|>An Efficient Algorithm for a Sharp Approximation of Universally Quantified Inequalities: This paper introduces a new algorithm for solving a sub-class of quantified constraint satisfaction problems (QCSP) where existential quantifiers precede universally quantified inequalities on continuous domains. This class of QCSPs has numerous applications in engineering and design. We propose here a new generic branch and prune algorithm for solving such continuous QCSPs. Standard pruning operators and solution identification operators are specialized for universally quantified inequalities. Special rules are also proposed for handling the parameters of the constraints. First experiments show that our algorithm outperforms state-of-the-art methods.<|reference_end|>
arxiv
@article{goldsztejn2008an, title={An Efficient Algorithm for a Sharp Approximation of Universally Quantified Inequalities}, author={Alexandre Goldsztejn (LINA), Claude Michel (I3S, Laboratoire I3S), Michel Rueher (I3S, Laboratoire I3S)}, journal={arXiv preprint arXiv:0807.2269}, year={2008}, archivePrefix={arXiv}, eprint={0807.2269}, primaryClass={cs.NA cs.DS} }
goldsztejn2008an
arxiv-4315
0807.2282
Hardware/Software Co-Design for Spike Based Recognition
<|reference_start|>Hardware/Software Co-Design for Spike Based Recognition: The practical applications based on recurrent spiking neurons are limited due to their non-trivial learning algorithms. The temporal nature of spiking neurons is more favorable for hardware implementation, where signals can be represented in binary form and communication can be done through the use of spikes. This work investigates the potential of recurrent spiking neuron implementations on reconfigurable platforms and their applicability in temporal-based applications. A theoretical framework of reservoir computing is investigated for hardware/software implementation. In this framework, only readout neurons are trained, which overcomes the burden of training at the network level. These recurrent neural networks are termed microcircuits and are viewed as basic computational units in cortical computation. This paper investigates the potential of recurrent neural reservoirs and presents a novel hardware/software strategy for their implementation on FPGAs. The design is implemented and its functionality is tested in the context of a speech recognition application.<|reference_end|>
arxiv
@article{ghani2008hardware/software, title={Hardware/Software Co-Design for Spike Based Recognition}, author={Arfan Ghani, Martin McGinnity, Liam Maguire, Jim Harkin}, journal={arXiv preprint arXiv:0807.2282}, year={2008}, archivePrefix={arXiv}, eprint={0807.2282}, primaryClass={cs.NE cs.AI cs.CE} }
ghani2008hardware/software
arxiv-4316
0807.2292
Rate and power allocation under the pairwise distributed source coding constraint
<|reference_start|>Rate and power allocation under the pairwise distributed source coding constraint: We consider the problem of rate and power allocation for a sensor network under the pairwise distributed source coding constraint. For noiseless source-terminal channels, we show that the minimum sum rate assignment can be found by finding a minimum weight arborescence in an appropriately defined directed graph. For orthogonal noisy source-terminal channels, the minimum sum power allocation can be found by finding a minimum weight matching forest in a mixed graph. Numerical results are presented for both cases showing that our solutions always outperform previously proposed solutions. The gains are considerable when source correlations are high.<|reference_end|>
arxiv
@article{li2008rate, title={Rate and power allocation under the pairwise distributed source coding constraint}, author={Shizheng Li and Aditya Ramamoorthy}, journal={arXiv preprint arXiv:0807.2292}, year={2008}, archivePrefix={arXiv}, eprint={0807.2292}, primaryClass={cs.IT math.IT} }
li2008rate
arxiv-4317
0807.2303
A new characteristic property of rich words
<|reference_start|>A new characteristic property of rich words: Originally introduced and studied by the third and fourth authors together with J. Justin and S. Widmer in arXiv:0801.1656, rich words constitute a new class of finite and infinite words characterized by containing the maximal number of distinct palindromes. Several characterizations of rich words have already been established. A particularly nice characteristic property is that all 'complete returns' to palindromes are palindromes. In this note, we prove that rich words are also characterized by the property that each factor is uniquely determined by its longest palindromic prefix and its longest palindromic suffix.<|reference_end|>
arxiv
@article{bucci2008a, title={A new characteristic property of rich words}, author={Michelangelo Bucci, Alessandro De Luca, Amy Glen, Luca Q. Zamboni}, journal={Theoretical Computer Science 410 (2009) 2860-2863}, year={2008}, doi={10.1016/j.tcs.2008.11.001}, archivePrefix={arXiv}, eprint={0807.2303}, primaryClass={math.CO cs.DM} }
bucci2008a
arxiv-4318
0807.2328
Avatar Mobility in Networked Virtual Environments: Measurements, Analysis, and Implications
<|reference_start|>Avatar Mobility in Networked Virtual Environments: Measurements, Analysis, and Implications: We collected mobility traces of 84,208 avatars spanning 22 regions over two months in Second Life, a popular networked virtual environment. We analyzed the traces to characterize the dynamics of avatar mobility and behavior, both temporally and spatially. We discuss the implications of our findings for the design of peer-to-peer networked virtual environments, interest management, mobility modeling of avatars, server load balancing and zone partitioning, client-side caching, and prefetching.<|reference_end|>
arxiv
@article{liang2008avatar, title={Avatar Mobility in Networked Virtual Environments: Measurements, Analysis, and Implications}, author={Huiguang Liang, Ian Tay, Ming Feng Neo, Wei Tsang Ooi, Mehul Motani}, journal={arXiv preprint arXiv:0807.2328}, year={2008}, archivePrefix={arXiv}, eprint={0807.2328}, primaryClass={cs.NI cs.MM} }
liang2008avatar
arxiv-4319
0807.2330
Optimal Acyclic Hamiltonian Path Completion for Outerplanar Triangulated st-Digraphs (with Application to Upward Topological Book Embeddings)
<|reference_start|>Optimal Acyclic Hamiltonian Path Completion for Outerplanar Triangulated st-Digraphs (with Application to Upward Topological Book Embeddings): Given an embedded planar acyclic digraph G, we define the problem of "acyclic hamiltonian path completion with crossing minimization (Acyclic-HPCCM)" to be the problem of determining a hamiltonian path completion set of edges such that, when these edges are embedded on G, they create the smallest possible number of edge crossings and turn G into a hamiltonian digraph. Our results include: --We provide a characterization under which a triangulated st-digraph G is hamiltonian. --For an outerplanar triangulated st-digraph G, we define the st-polygon decomposition of G and, based on its properties, we develop a linear-time algorithm that solves the Acyclic-HPCCM problem with at most one crossing per edge of G. --For the class of st-planar digraphs, we establish an equivalence between the Acyclic-HPCCM problem and the problem of determining an upward 2-page topological book embedding with a minimum number of spine crossings. Based on this equivalence, we infer for the class of outerplanar triangulated st-digraphs an upward topological 2-page book embedding with a minimum number of spine crossings and at most one spine crossing per edge. To the best of our knowledge, this is the first time that edge-crossing minimization has been studied in conjunction with the acyclic hamiltonian completion problem and the first time that an optimal algorithm with respect to spine crossing minimization has been presented for upward topological book embeddings.<|reference_end|>
arxiv
@article{mchedlidze2008optimal, title={Optimal Acyclic Hamiltonian Path Completion for Outerplanar Triangulated st-Digraphs (with Application to Upward Topological Book Embeddings)}, author={Tamara Mchedlidze, Antonios Symvonis}, journal={arXiv preprint arXiv:0807.2330}, year={2008}, archivePrefix={arXiv}, eprint={0807.2330}, primaryClass={cs.DS cs.DM} }
mchedlidze2008optimal
arxiv-4320
0807.2358
Polygon Exploration with Time-Discrete Vision
<|reference_start|>Polygon Exploration with Time-Discrete Vision: With the advent of autonomous robots with two- and three-dimensional scanning capabilities, classical visibility-based exploration methods from computational geometry have gained in practical importance. However, real-life laser scanning of useful accuracy does not allow the robot to scan continuously while in motion; instead, it has to stop each time it surveys its environment. This requirement was studied by Fekete, Klein and Nuechter for the subproblem of looking around a corner, but until now has not been considered in an online setting for whole polygonal regions. We give the first algorithmic results for this important problem, which combines stationary art gallery-type aspects with watchman-type issues in an online scenario: We demonstrate that even for orthoconvex polygons, a competitive strategy can be achieved only for limited aspect ratio A (the ratio of the maximum and minimum edge length of the polygon), i.e., for a given lower bound on the size of an edge; we give a matching upper bound by providing an O(log A)-competitive strategy for simple rectilinear polygons, using the assumption that each edge of the polygon has to be fully visible from some scan point.<|reference_end|>
arxiv
@article{fekete2008polygon, title={Polygon Exploration with Time-Discrete Vision}, author={Sandor P. Fekete and Christiane Schmidt}, journal={Computational Geometry: Theory and Applications, 43 (2010), 148-168}, year={2008}, archivePrefix={arXiv}, eprint={0807.2358}, primaryClass={cs.CG cs.RO} }
fekete2008polygon
arxiv-4321
0807.2381
Analyse des suites al\'eatoires engendr\'ees par des automates cellulaires et applications \`a la cryptographie
<|reference_start|>Analyse des suites al\'eatoires engendr\'ees par des automates cellulaires et applications \`a la cryptographie: This paper considers interactions between cellular automata and cryptology. It is known that non-linear elementary rules which are correlation-immune do not exist. This result limits the use of cellular automata as pseudo-random generators suitable for cryptographic applications. In addition, for this kind of pseudo-random generator, a successful cryptanalysis was proposed by Meier and Staffelbach. However, other ways to design cellular automata capable of generating good pseudo-random sequences remain and are discussed at the end of this article.<|reference_end|>
arxiv
@article{martin2008analyse, title={Analyse des suites al\'eatoires engendr\'ees par des automates cellulaires et applications \`a la cryptographie}, author={Bruno Martin (I3S)}, journal={arXiv preprint arXiv:0807.2381}, year={2008}, archivePrefix={arXiv}, eprint={0807.2381}, primaryClass={cs.CR} }
martin2008analyse
arxiv-4322
0807.2382
Revisiting the upper bounding process in a safe Branch and Bound algorithm
<|reference_start|>Revisiting the upper bounding process in a safe Branch and Bound algorithm: Finding feasible points for which the proof succeeds is a critical issue in safe Branch and Bound algorithms which handle continuous problems. In this paper, we introduce a new strategy to compute very accurate approximations of feasible points. This strategy takes advantage of the Newton method for under-constrained systems of equations and inequalities. More precisely, it exploits the optimal solution of a linear relaxation of the problem to compute efficiently a promising upper bound. First experiments on the Coconuts benchmarks demonstrate that this approach is very effective.<|reference_end|>
arxiv
@article{goldsztejn2008revisiting, title={Revisiting the upper bounding process in a safe Branch and Bound algorithm}, author={Alexandre Goldsztejn (I3S), Yahia Lebbah (I3S), Claude Michel (I3S), Michel Rueher (I3S)}, journal={arXiv preprint arXiv:0807.2382}, year={2008}, archivePrefix={arXiv}, eprint={0807.2382}, primaryClass={cs.NA cs.MS math.OC} }
goldsztejn2008revisiting
arxiv-4323
0807.2383
CPBPV: A Constraint-Programming Framework for Bounded Program Verification
<|reference_start|>CPBPV: A Constraint-Programming Framework for Bounded Program Verification: This paper studies how to verify the conformity of a program with its specification and proposes a novel constraint-programming framework for bounded program verification (CPBPV). The CPBPV framework uses constraint stores to represent the specification and the program and explores execution paths nondeterministically. The input program is partially correct if each constraint store so produced implies the post-condition. CPBPV does not explore spurious execution paths as it incrementally prunes execution paths early by detecting that the constraint store is not consistent. CPBPV uses the rich language of constraint programming to express the constraint store. Finally, CPBPV is parametrized with a list of solvers which are tried in sequence, starting with the least expensive and least general. Experimental results often produce orders of magnitude improvements over earlier approaches, with running times often independent of the variable domains. Moreover, CPBPV was able to detect subtle errors in some programs while other frameworks based on model checking have failed.<|reference_end|>
arxiv
@article{collavizza2008cpbpv:, title={CPBPV: A Constraint-Programming Framework for Bounded Program Verification}, author={H\'el\`ene Collavizza (I3S), Michel Rueher (I3S), Pascal Van Hentenryck (Brown University)}, journal={The 14th International Conference on Principles and Practice of Constraint Programming, Sydney, Australia (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0807.2383}, primaryClass={cs.SE cs.AI cs.LO} }
collavizza2008cpbpv:
arxiv-4324
0807.2387
Fast computation of magnetostatic fields by Non-uniform Fast Fourier Transforms
<|reference_start|>Fast computation of magnetostatic fields by Non-uniform Fast Fourier Transforms: The bottleneck of micromagnetic simulations is the computation of the long-ranged magnetostatic fields. This can be tackled on regular N-node grids with Fast Fourier Transforms in time N log N, whereas the geometrically more versatile finite element methods (FEM) are bounded by N^(4/3) in the best case. We report the implementation of a Non-uniform Fast Fourier Transform algorithm which brings N log N scaling to FEM, with no loss of accuracy in the results.<|reference_end|>
arxiv
@article{kritsikis2008fast, title={Fast computation of magnetostatic fields by Non-uniform Fast Fourier Transforms}, author={Evaggelos Kritsikis (SPINTEC), Jean-Christophe Toussaint (NEEL), Olivier Fruchart (NEEL)}, journal={Applied Physics Letters 93, 13 (2008) 132508}, year={2008}, doi={10.1063/1.2995850}, archivePrefix={arXiv}, eprint={0807.2387}, primaryClass={cond-mat.mtrl-sci cs.NA} }
kritsikis2008fast
arxiv-4325
0807.2440
Construction of Error-Correcting Codes for Random Network Coding
<|reference_start|>Construction of Error-Correcting Codes for Random Network Coding: In this work we present error-correcting codes for random network coding based on rank-metric codes, Ferrers diagrams, and puncturing. For most parameters, the constructed codes are larger than all previously known codes.<|reference_end|>
arxiv
@article{etzion2008construction, title={Construction of Error-Correcting Codes for Random Network Coding}, author={Tuvi Etzion, Natalia Silberstein}, journal={arXiv preprint arXiv:0807.2440}, year={2008}, archivePrefix={arXiv}, eprint={0807.2440}, primaryClass={cs.IT math.IT} }
etzion2008construction
arxiv-4326
0807.2464
On "Bit-Interleaved Coded Multiple Beamforming"
<|reference_start|>On "Bit-Interleaved Coded Multiple Beamforming": The interleaver design criteria described in [1] should take into account all error patterns of interest.<|reference_end|>
arxiv
@article{akay2008on, title={On "Bit-Interleaved Coded Multiple Beamforming"}, author={E. Akay, H. J. Park, E. Ayanoglu}, journal={arXiv preprint arXiv:0807.2464}, year={2008}, archivePrefix={arXiv}, eprint={0807.2464}, primaryClass={cs.IT math.IT} }
akay2008on
arxiv-4327
0807.2466
A Grateful Dead Analysis: The Relationship Between Concert and Listening Behavior
<|reference_start|>A Grateful Dead Analysis: The Relationship Between Concert and Listening Behavior: The Grateful Dead were an American band that was born out of the San Francisco, California psychedelic movement of the 1960s. The band played music together from 1965 to 1995 and is well known for concert performances containing extended improvisations and long and unique set lists. This article presents a comparative analysis between 1,590 of the Grateful Dead's concert set lists from 1972 to 1995 and 2,616,990 last.fm Grateful Dead listening events from August 2005 to October 2007. While there is a strong correlation between how songs were played in concert and how they are listened to by last.fm members, the outlying songs in this trend identify interesting aspects of the band and their fans 10 years after the band's dissolution.<|reference_end|>
arxiv
@article{rodriguez2008a, title={A Grateful Dead Analysis: The Relationship Between Concert and Listening Behavior}, author={Marko A. Rodriguez and Vadas Gintautas and Alberto Pepe}, journal={First Monday, volume 14, number 1, ISSN:1396-0466, University of Illinois at Chicago Library, January 2009.}, year={2008}, number={LA-UR-08-04421}, archivePrefix={arXiv}, eprint={0807.2466}, primaryClass={cs.CY cs.GL} }
rodriguez2008a
arxiv-4328
0807.2471
An ESPRIT-based approach for Initial Ranging in OFDMA systems
<|reference_start|>An ESPRIT-based approach for Initial Ranging in OFDMA systems: This work presents a novel Initial Ranging scheme for orthogonal frequency-division multiple-access networks. Users that intend to establish a communication link with the base station (BS) are normally misaligned both in time and frequency and the goal is to jointly estimate their timing errors and carrier frequency offsets with respect to the BS local references. This is accomplished with affordable complexity by resorting to the ESPRIT algorithm. Computer simulations are used to assess the effectiveness of the proposed solution and to make comparisons with existing alternatives.<|reference_end|>
arxiv
@article{sanguinetti2008an, title={An ESPRIT-based approach for Initial Ranging in OFDMA systems}, author={Luca Sanguinetti, Michele Morelli and H. Vincent Poor}, journal={arXiv preprint arXiv:0807.2471}, year={2008}, archivePrefix={arXiv}, eprint={0807.2471}, primaryClass={cs.IT math.IT} }
sanguinetti2008an
arxiv-4329
0807.2472
Inapproximability for metric embeddings into R^d
<|reference_start|>Inapproximability for metric embeddings into R^d: We consider the problem of computing the smallest possible distortion for embedding of a given n-point metric space into R^d, where d is fixed (and small). For d=1, it was known that approximating the minimum distortion with a factor better than roughly n^(1/12) is NP-hard. From this result we derive inapproximability with factor roughly n^(1/(22d-10)) for every fixed d\ge 2, by a conceptually very simple reduction. However, the proof of correctness involves a nontrivial result in geometric topology (whose current proof is based on ideas due to Jussi Vaisala). For d\ge 3, we obtain a stronger inapproximability result by a different reduction: assuming P \ne NP, no polynomial-time algorithm can distinguish between spaces embeddable in R^d with constant distortion from spaces requiring distortion at least n^(c/d), for a constant c>0. The exponent c/d has the correct order of magnitude, since every n-point metric space can be embedded in R^d with distortion O(n^{2/d}\log^{3/2}n) and such an embedding can be constructed in polynomial time by random projection. For d=2, we give an example of a metric space that requires a large distortion for embedding in R^2, while all not too large subspaces of it embed almost isometrically.<|reference_end|>
arxiv
@article{matousek2008inapproximability, title={Inapproximability for metric embeddings into R^d}, author={Jiri Matousek, Anastasios Sidiropoulos}, journal={arXiv preprint arXiv:0807.2472}, year={2008}, archivePrefix={arXiv}, eprint={0807.2472}, primaryClass={cs.CG cs.CC} }
matousek2008inapproximability
arxiv-4330
0807.2475
Opportunistic Collaborative Beamforming with One-Bit Feedback
<|reference_start|>Opportunistic Collaborative Beamforming with One-Bit Feedback: An energy-efficient opportunistic collaborative beamformer with one-bit feedback is proposed for ad hoc sensor networks over Rayleigh fading channels. In contrast to conventional collaborative beamforming schemes in which each source node uses channel state information to correct its local carrier offset and channel phase, the proposed beamforming scheme opportunistically selects a subset of source nodes whose received signals combine in a quasi-coherent manner at the intended receiver. No local phase-precompensation is performed by the nodes in the opportunistic collaborative beamformer. As a result, each node requires only one bit of feedback from the destination in order to determine whether or not it should participate in the collaborative beamformer. Theoretical analysis shows that the received signal power obtained with the proposed beamforming scheme scales linearly with the number of available source nodes. Since the optimal node selection rule requires an exhaustive search over all possible subsets of source nodes, two low-complexity selection algorithms are developed. Simulation results confirm the effectiveness of opportunistic collaborative beamforming with the low-complexity selection algorithms.<|reference_end|>
arxiv
@article{pun2008opportunistic, title={Opportunistic Collaborative Beamforming with One-Bit Feedback}, author={Man-On Pun, D. Richard Brown III and H. Vincent Poor}, journal={arXiv preprint arXiv:0807.2475}, year={2008}, doi={10.1109/TWC.2009.080512}, archivePrefix={arXiv}, eprint={0807.2475}, primaryClass={cs.IT math.IT} }
pun2008opportunistic
arxiv-4331
0807.2496
Hybrid Keyword Search Auctions
<|reference_start|>Hybrid Keyword Search Auctions: Search auctions have become a dominant source of revenue generation on the Internet. Such auctions have typically used per-click bidding and pricing. We propose the use of hybrid auctions where an advertiser can make a per-impression as well as a per-click bid, and the auctioneer then chooses one of the two as the pricing mechanism. We assume that the advertiser and the auctioneer both have separate beliefs (called priors) on the click-probability of an advertisement. We first prove that the hybrid auction is truthful, assuming that the advertisers are risk-neutral. We then show that this auction is superior to the existing per-click auction in multiple ways: 1) It takes into account the risk characteristics of the advertisers. 2) For obscure keywords, the auctioneer is unlikely to have a very sharp prior on the click-probabilities. In such situations, the hybrid auction can result in significantly higher revenue. 3) An advertiser who believes that its click-probability is much higher than the auctioneer's estimate can use per-impression bids to correct the auctioneer's prior without incurring any extra cost. 4) The hybrid auction can allow the advertiser and auctioneer to implement complex dynamic programming strategies. As Internet commerce matures, we need more sophisticated pricing models to exploit all the information held by each of the participants. We believe that hybrid auctions could be an important step in this direction.<|reference_end|>
arxiv
@article{goel2008hybrid, title={Hybrid Keyword Search Auctions}, author={Ashish Goel and Kamesh Munagala}, journal={arXiv preprint arXiv:0807.2496}, year={2008}, archivePrefix={arXiv}, eprint={0807.2496}, primaryClass={cs.GT cs.DS cs.IR} }
goel2008hybrid
arxiv-4332
0807.2515
The Dark Energy Survey Data Management System
<|reference_start|>The Dark Energy Survey Data Management System: The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg2 grizY survey in the southern sky over 525 nights from 2011-2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a virtual-observatory compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used Teragrid to process 10 simulated DES nights (3TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data and internal crosschecks in the case of the real data indicate that astrometric and photometric data quality is excellent.<|reference_end|>
arxiv
@article{mohr2008the, title={The Dark Energy Survey Data Management System}, author={Joseph J. Mohr (1), Wayne Barkhouse (2), Cristina Beldica (1), Emmanuel Bertin (3), Y. Dora Cai (1), Luiz da Costa (4), J. Anthony Darnell (1), Gregory E. Daues (1), Michael Jarvis (5), Michelle Gower (1), Huan Lin (6), Leandro Martelli (4), Eric Neilsen (6), Chow-Choong Ngeow (1), Ricardo Ogando (4), Alex Parga (1), Erin Sheldon (7), Douglas Tucker (6), Nikolay Kuropatkin (6), Chris Stoughton (6) ((1) University of Illinois, (2) University of North Dakota, (3) Institut d'Astrophysique, Paris, (4) Observatorio Nacional, Brasil, (5) University of Pennsylvania, (6) Fermilab, (7) New York University)}, journal={arXiv preprint arXiv:0807.2515}, year={2008}, doi={10.1117/12.789550}, archivePrefix={arXiv}, eprint={0807.2515}, primaryClass={astro-ph cs.DC} }
mohr2008the
arxiv-4333
0807.2543
Two Fuzzy Logic Programming Paradoxes Imply Continuum Hypothesis="False" & Axiom of Choice="False" Imply ZFC is Inconsistent
<|reference_start|>Two Fuzzy Logic Programming Paradoxes Imply Continuum Hypothesis="False" & Axiom of Choice="False" Imply ZFC is Inconsistent: Two different paradoxes of the fuzzy logic programming system of [29] are presented. The first paradox is due to two distinct (contradictory) truth values for every ground atom of FLP, one syntactical, the other semantical. The second paradox concerns the cardinality of the valid FLP formulas, which is found to have contradictory values: both $\aleph_0$, the cardinality of the natural numbers, and $c$, the cardinality of the continuum. The result is that CH="False" and Axiom of Choice="False". Hence, ZFC is inconsistent.<|reference_end|>
arxiv
@article{kamouna2008two, title={Two Fuzzy Logic Programming Paradoxes Imply Continuum Hypothesis="False" & Axiom of Choice="False" Imply ZFC is Inconsistent}, author={Rafee Ebrahim Kamouna}, journal={arXiv preprint arXiv:0807.2543}, year={2008}, archivePrefix={arXiv}, eprint={0807.2543}, primaryClass={cs.LO} }
kamouna2008two
arxiv-4334
0807.2569
Text Data Mining: Theory and Methods
<|reference_start|>Text Data Mining: Theory and Methods: This paper provides the reader with a very brief introduction to some of the theory and methods of text data mining. The intent of this article is to introduce the reader to some of the current methodologies that are employed within this discipline area while at the same time making the reader aware of some of the interesting challenges that remain to be solved within the area. Finally, the article serves as a very rudimentary tutorial on some of these techniques while also providing the reader with a list of references for additional study.<|reference_end|>
arxiv
@article{solka2008text, title={Text Data Mining: Theory and Methods}, author={Jeffrey Solka}, journal={Statistics Surveys 2008, Vol. 2, 94-112}, year={2008}, doi={10.1214/07-SS016}, number={IMS-SS-SS_2007_16}, archivePrefix={arXiv}, eprint={0807.2569}, primaryClass={stat.ML cs.IR stat.CO} }
solka2008text
arxiv-4335
0807.2628
Un cadre de conception pour r\'eunir les mod\`eles d'interaction et l'ing\'enierie des interfaces
<|reference_start|>Un cadre de conception pour r\'eunir les mod\`eles d'interaction et l'ing\'enierie des interfaces: We present HIC (Human-system Interaction Container), a general framework for the integration of advanced interaction in the software development process. We show how this framework makes it possible to reconcile software development methods (such as MDA and MDE) with architectural models of software design such as MVC or PAC. We illustrate our approach with two different types of implementation of this concept in two different business areas: a software design pattern, MVIC (Model View Interaction Control), and an architectural model, IM (Interaction Middleware).<|reference_end|>
arxiv
@article{lard2008un, title={Un cadre de conception pour r\'eunir les mod\`eles d'interaction et l'ing\'enierie des interfaces}, author={J\'er\^ome Lard (LRI), Fr\'ed\'eric Landragin (LaTTice), Olivier Grisvard (ATOL), David Faure}, journal={Ing\'enierie des Syst\`emes d'Information (ISI) 12, 6 (2007) 67-91}, year={2008}, archivePrefix={arXiv}, eprint={0807.2628}, primaryClass={cs.HC} }
lard2008un
arxiv-4336
0807.2636
Topological Observations on Multiplicative Additive Linear Logic
<|reference_start|>Topological Observations on Multiplicative Additive Linear Logic: As an attempt to uncover the topological nature of composition of strategies in game semantics, we present a ``topological'' game for Multiplicative Additive Linear Logic without propositional variables, including cut moves. We recast the notion of (winning) strategy and the question of cut elimination in this context, and prove a cut elimination theorem. Finally, we prove soundness and completeness. The topology plays a crucial role, in particular through the fact that strategies form a sheaf.<|reference_end|>
arxiv
@article{hirschowitz2008topological, title={Topological Observations on Multiplicative Additive Linear Logic}, author={Andr\'e Hirschowitz (JAD), Michel Hirschowitz (LIX, LIST), Tom Hirschowitz (LM-Savoie)}, journal={arXiv preprint arXiv:0807.2636}, year={2008}, archivePrefix={arXiv}, eprint={0807.2636}, primaryClass={cs.LO} }
hirschowitz2008topological
arxiv-4337
0807.2648
On Endogenous Reconfiguration in Mobile Robotic Networks
<|reference_start|>On Endogenous Reconfiguration in Mobile Robotic Networks: In this paper, our focus is on certain applications for mobile robotic networks, where reconfiguration is driven by factors intrinsic to the network rather than changes in the external environment. In particular, we study a version of the coverage problem useful for surveillance applications, where the objective is to position the robots in order to minimize the average distance from a random point in a given environment to the closest robot. This problem has been well-studied for omni-directional robots, for which it has been shown that the optimal configuration for the network is a centroidal Voronoi configuration and that the coverage cost belongs to $\Theta(m^{-1/2})$, where $m$ is the number of robots in the network. In this paper, we study this problem for more realistic models of robots, namely the double integrator (DI) model and the differential drive (DD) model. We observe that the introduction of these motion constraints in the algorithm design problem gives rise to an interesting behavior. For a \emph{sparser} network, the optimal algorithm for these models of robots mimics that for omni-directional robots. We propose novel algorithms whose performance is within a constant factor of the optimal asymptotically (i.e., as $m \to +\infty$). In particular, we prove that the coverage cost for the DI and DD models of robots is of order $m^{-1/3}$. Additionally, we show that, as the network grows, these novel algorithms outperform the conventional algorithm, hence necessitating a reconfiguration of the network in order to maintain optimal quality of service.<|reference_end|>
arxiv
@article{savla2008on, title={On Endogenous Reconfiguration in Mobile Robotic Networks}, author={Ketan Savla and Emilio Frazzoli}, journal={arXiv preprint arXiv:0807.2648}, year={2008}, archivePrefix={arXiv}, eprint={0807.2648}, primaryClass={cs.RO} }
savla2008on
arxiv-4338
0807.2666
Source and Channel Coding for Correlated Sources Over Multiuser Channels
<|reference_start|>Source and Channel Coding for Correlated Sources Over Multiuser Channels: Source and channel coding over multiuser channels in which receivers have access to correlated source side information is considered. For several multiuser channel models necessary and sufficient conditions for optimal separation of the source and channel codes are obtained. In particular, the multiple access channel, the compound multiple access channel, the interference channel and the two-way channel with correlated sources and correlated receiver side information are considered, and the optimality of separation is shown to hold for certain source and side information structures. Interestingly, the optimal separate source and channel codes identified for these models are not necessarily the optimal codes for the underlying source coding or the channel coding problems. In other words, while separation of the source and channel codes is optimal, the nature of these optimal codes is impacted by the joint design criterion.<|reference_end|>
arxiv
@article{gunduz2008source, title={Source and Channel Coding for Correlated Sources Over Multiuser Channels}, author={Deniz Gunduz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor}, journal={arXiv preprint arXiv:0807.2666}, year={2008}, archivePrefix={arXiv}, eprint={0807.2666}, primaryClass={cs.IT math.IT} }
gunduz2008source
arxiv-4339
0807.2677
Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio
<|reference_start|>Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio: We study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed Markov decision process (POMDP). A group of cognitive users cooperatively tries to exploit vacancies in primary (licensed) channels whose occupancies follow a Markovian evolution. We first consider the scenario where the cognitive users have perfect knowledge of the distribution of the signals they receive from the primary users. For this problem, we obtain a greedy channel selection and access policy that maximizes the instantaneous reward, while satisfying a constraint on the probability of interfering with licensed transmissions. We also derive an analytical universal upper bound on the performance of the optimal policy. Through simulation, we show that our scheme achieves good performance relative to the upper bound and improved performance relative to an existing scheme. We then consider the more practical scenario where the exact distribution of the signal from the primary is unknown. We assume a parametric model for the distribution and develop an algorithm that can learn the true distribution, still guaranteeing the constraint on the interference probability. We show that this algorithm outperforms the naive design that assumes a worst case value for the parameter. We also provide a proof for the convergence of the learning algorithm.<|reference_end|>
arxiv
@article{unnikrishnan2008algorithms, title={Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio}, author={Jayakrishnan Unnikrishnan and Venugopal Veeravalli}, journal={arXiv preprint arXiv:0807.2677}, year={2008}, archivePrefix={arXiv}, eprint={0807.2677}, primaryClass={cs.NI cs.LG} }
unnikrishnan2008algorithms
arxiv-4340
0807.2678
Eigenfactor: Does the Principle of Repeated Improvement Result in Better Journal Impact Estimates than Raw Citation Counts?
<|reference_start|>Eigenfactor: Does the Principle of Repeated Improvement Result in Better Journal Impact Estimates than Raw Citation Counts?: Eigenfactor.org, a journal evaluation tool which uses an iterative algorithm to weight citations (similar to the PageRank algorithm used by Google), has been proposed as a more valid method for calculating the impact of journals. The purpose of this brief communication is to investigate whether the principle of repeated improvement provides different rankings of journals than does a simple unweighted citation count (the method used by ISI).<|reference_end|>
arxiv
@article{davis2008eigenfactor, title={Eigenfactor: Does the Principle of Repeated Improvement Result in Better Journal Impact Estimates than Raw Citation Counts?}, author={Philip M. Davis}, journal={Journal of the American Society for Information Science & Technology (2008) v59 n12 p.2186-2188}, year={2008}, doi={10.1002/asi.20943}, archivePrefix={arXiv}, eprint={0807.2678}, primaryClass={cs.DL cs.DB} }
davis2008eigenfactor
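The contrast studied in davis2008eigenfactor above can be reproduced in a few lines: score journals once by raw citation counts and once by an iteratively reweighted (PageRank-style) score. The 3x3 citation matrix and the damping factor 0.85 are illustrative assumptions, not data from the paper.

```python
# Raw citation counts vs. "repeated improvement" scores on a toy matrix.
import numpy as np

C = np.array([[0, 2, 1],    # C[i, j] = citations from journal i to journal j
              [1, 0, 3],
              [4, 1, 0]], dtype=float)

raw = C.sum(axis=0)                      # unweighted: citations received
P = C / C.sum(axis=1, keepdims=True)     # row-stochastic citation matrix
v = np.full(3, 1 / 3)
for _ in range(100):                     # power iteration with damping 0.85
    v = 0.85 * v @ P + 0.15 / 3
print(raw / raw.sum(), v / v.sum())      # compare the two rankings
```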
arxiv-4341
0807.2680
Group Divisible Codes and Their Application in the Construction of Optimal Constant-Composition Codes of Weight Three
<|reference_start|>Group Divisible Codes and Their Application in the Construction of Optimal Constant-Composition Codes of Weight Three: The concept of group divisible codes, a generalization of group divisible designs with constant block size, is introduced in this paper. This new class of codes is shown to be useful in recursive constructions for constant-weight and constant-composition codes. Large classes of group divisible codes are constructed, enabling the determination of the sizes of optimal constant-composition codes of weight three (and specified distance) and leaving only four cases undetermined. Previously, the sizes of constant-composition codes of weight three were known only for those of sufficiently large length.<|reference_end|>
arxiv
@article{chee2008group, title={Group Divisible Codes and Their Application in the Construction of Optimal Constant-Composition Codes of Weight Three}, author={Yeow Meng Chee, Gennian Ge, Alan C. H. Ling}, journal={IEEE Transactions on Information Theory, vol. 54, no. 8, pp. 3552-3564, 2008}, year={2008}, doi={10.1109/TIT.2008.926349}, archivePrefix={arXiv}, eprint={0807.2680}, primaryClass={cs.IT cs.DM math.CO math.IT} }
chee2008group
arxiv-4342
0807.2694
Algorithms for Scheduling Weighted Packets with Deadlines in a Bounded Queue
<|reference_start|>Algorithms for Scheduling Weighted Packets with Deadlines in a Bounded Queue: Motivated by the Quality-of-Service (QoS) buffer management problem, we consider online scheduling of packets with hard deadlines in a finite capacity queue. At any time, a queue can store at most $b \in \mathbb Z^+$ packets. Packets arrive over time. Each packet is associated with a non-negative value and an integer deadline. In each time step, only one packet is allowed to be sent. Our objective is to maximize the total value gained by the packets sent by their deadlines in an online manner. Due to the Internet traffic's chaotic characteristics, no stochastic assumptions are made on the packet input sequences. This model is called a {\em finite-queue model}. We use competitive analysis to measure an online algorithm's performance versus an unrealizable optimal offline algorithm that constructs the worst possible input based on the knowledge of the online algorithm. For the finite-queue model, we first present a deterministic 3-competitive memoryless online algorithm. Then, we give a randomized ($\phi^2 = ((1 + \sqrt{5}) / 2)^2 \approx 2.618$)-competitive memoryless online algorithm. The algorithmic framework and its theoretical analysis include several interesting features. First, our algorithms use (possibly) modified characteristics of packets; these characteristics may not be the same as those specified in the input sequence. Second, our analysis method is different from the classical potential function approach.<|reference_end|>
arxiv
@article{li2008algorithms, title={Algorithms for Scheduling Weighted Packets with Deadlines in a Bounded Queue}, author={Fei Li}, journal={arXiv preprint arXiv:0807.2694}, year={2008}, archivePrefix={arXiv}, eprint={0807.2694}, primaryClass={cs.DS} }
li2008algorithms
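The finite-queue model in li2008algorithms above is easy to simulate. The sketch below is a plain greedy baseline, assuming packets are (value, deadline) pairs: it evicts the cheapest packet on buffer overflow and sends the most valuable unexpired packet each step. It is meant only to make the model concrete; it is not the paper's 3-competitive algorithm.

```python
# Greedy baseline for the finite-queue model: buffer of size b, one send/step.
import heapq

def simulate(arrivals, b, horizon):
    """arrivals[t] = list of (value, deadline) packets arriving at step t."""
    queue, gained = [], 0.0
    for t in range(horizon):
        for value, deadline in arrivals.get(t, []):
            heapq.heappush(queue, (value, deadline))      # min-heap on value
            if len(queue) > b:
                heapq.heappop(queue)                      # evict cheapest
        live = [(v, d) for v, d in queue if d >= t]       # drop expired
        if live:
            v, d = max(live)                              # send most valuable
            live.remove((v, d))
            gained += v
        queue = live
        heapq.heapify(queue)
    return gained

print(simulate({0: [(5.0, 1), (3.0, 0)], 1: [(4.0, 2)]}, b=2, horizon=3))
```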
arxiv-4343
0807.2701
A Cutting Plane Method based on Redundant Rows for Improving Fractional Distance
<|reference_start|>A Cutting Plane Method based on Redundant Rows for Improving Fractional Distance: In this paper, the idea of the cutting plane method is employed to improve the fractional distance of a given binary parity check matrix. The fractional distance is the minimum weight (with respect to the l1-distance) of vertices of the fundamental polytope. The cutting polytope is defined based on redundant rows of the parity check matrix, and it plays a key role in eliminating unnecessary fractional vertices of the fundamental polytope. We propose a greedy algorithm and its efficient implementation for improving the fractional distance based on the cutting plane method.<|reference_end|>
arxiv
@article{miwa2008a, title={A Cutting Plane Method based on Redundant Rows for Improving Fractional Distance}, author={Makoto Miwa, Tadashi Wadayama, Ichi Takumi}, journal={arXiv preprint arXiv:0807.2701}, year={2008}, archivePrefix={arXiv}, eprint={0807.2701}, primaryClass={cs.IT math.IT} }
miwa2008a
arxiv-4344
0807.2724
An Asymptotic Analysis of the MIMO BC under Linear Filtering
<|reference_start|>An Asymptotic Analysis of the MIMO BC under Linear Filtering: We investigate the MIMO broadcast channel in the high SNR regime when linear filtering is applied instead of dirty paper coding. Using a user-wise rate duality where the streams of every single user are not treated as self-interference as in the hitherto existing stream-wise rate dualities for linear filtering, we solve the weighted sum rate maximization problem of the broadcast channel in the dual multiple access channel. Thus, we can exactly quantify the asymptotic rate loss of linear filtering compared to dirty paper coding for any channel realization. Having converted the optimum covariance matrices to the broadcast channel by means of the duality, we observe that the optimal covariance matrices in the broadcast channel feature quite complicated but still closed form expressions although the respective transmit covariance matrices in the dual multiple access channel share a very simple structure. We immediately come to the conclusion that block-diagonalization is the asymptotically optimum transmit strategy in the broadcast channel. Out of the set of block-diagonalizing precoders, we present the one which achieves the largest sum rate and thus corresponds to the optimum solution found in the dual multiple access channel. Additionally, we quantify the ergodic rate loss of linear coding compared to dirty paper coding for Gaussian channels with correlations at the mobiles.<|reference_end|>
arxiv
@article{hunger2008an, title={An Asymptotic Analysis of the MIMO BC under Linear Filtering}, author={Raphael Hunger and Michael Joham}, journal={arXiv preprint arXiv:0807.2724}, year={2008}, archivePrefix={arXiv}, eprint={0807.2724}, primaryClass={cs.IT math.IT} }
hunger2008an
arxiv-4345
0807.2728
Iterative ('Turbo') Multiuser Detectors For Impulse Radio Systems
<|reference_start|>Iterative ('Turbo') Multiuser Detectors For Impulse Radio Systems: In recent years, there has been a growing interest in multiple access communication systems that spread their transmitted energy over very large bandwidths. These systems, which are referred to as ultra wide-band (UWB) systems, have various advantages over narrow-band and conventional wide-band systems. The importance of multiuser detection for achieving high data rates or low bit error rates in these systems has already been established in several studies. This paper presents iterative ('turbo') multiuser detection for impulse radio (IR) UWB systems over multipath channels. While this approach is demonstrated for UWB signals, it can also be used in other systems that use similar types of signaling. When applied to the type of signals used by UWB systems, the complexity of the proposed detector can be quite low. Also, two very low complexity implementations of the iterative multiuser detection scheme are proposed based on Gaussian approximation and soft interference cancellation. The performance of these detectors is assessed using simulations that demonstrate their favorable properties.<|reference_end|>
arxiv
@article{fishler2008iterative, title={Iterative ('Turbo') Multiuser Detectors For Impulse Radio Systems}, author={E. Fishler, S. Gezici, and H. V. Poor}, journal={arXiv preprint arXiv:0807.2728}, year={2008}, doi={10.1109/TWC.2008.060711}, archivePrefix={arXiv}, eprint={0807.2728}, primaryClass={cs.IT math.IT} }
fishler2008iterative
arxiv-4346
0807.2730
Position Estimation via Ultra-Wideband Signals
<|reference_start|>Position Estimation via Ultra-Wideband Signals: The high time resolution of ultra-wideband (UWB) signals facilitates very precise position estimation in many scenarios, which makes a variety of applications possible. This paper reviews the problem of position estimation in UWB systems, beginning with an overview of the basic structure of UWB signals and their positioning applications. This overview is followed by a discussion of various position estimation techniques, with an emphasis on time-based approaches, which are particularly suitable for UWB positioning systems. Practical issues arising in UWB signal design and hardware implementation are also discussed.<|reference_end|>
arxiv
@article{gezici2008position, title={Position Estimation via Ultra-Wideband Signals}, author={S. Gezici and H. V. Poor}, journal={arXiv preprint arXiv:0807.2730}, year={2008}, archivePrefix={arXiv}, eprint={0807.2730}, primaryClass={cs.IT math.IT} }
gezici2008position
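To make the time-based approach emphasized in gezici2008position concrete, the sketch below turns time-of-arrival measurements from known anchors into a position estimate by linearizing the circle equations into a least-squares system. The anchor layout and measured delays are invented for the example (they correspond to a transmitter near (3, 4)).

```python
# Illustrative time-of-arrival (TOA) positioning from three anchors.
import numpy as np

c = 3e8                                          # speed of light, m/s
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
toas = np.array([1.6667e-8, 2.6874e-8, 2.2361e-8])  # measured arrival times, s
r = c * toas                                     # range estimates, m

# Subtracting the first circle equation |x - a_i|^2 = r_i^2 from the others
# gives a linear system A x = b in the unknown position x.
A = 2 * (anchors[1:] - anchors[0])
b = (r[0]**2 - r[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)    # least-squares position estimate, approximately (3, 4)
```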
arxiv-4347
0807.2743
Information Societies and Digital Divides
<|reference_start|>Information Societies and Digital Divides: The book argues that ICT are part of the set of goods and services that determine quality of life, social inequality and the chances for economic development. Therefore, understanding the digital divide demands a broader discussion of the place of ICT within each society and in the international system. The author argues against the perspectives that either isolate ICT from other basic social goods (in particular education and employment) or hold that new technologies are a luxury of a consumer society. Though the author accepts that new technologies are not a panacea for the problems of inequality, access to them becomes a condition of full integration into social life. Using examples mainly from Latin America, the work presents some general policy proposals on the fight against the digital divide which take into consideration other dimensions of social inequality and access to public goods. Bernardo Sorj was born in Montevideo, Uruguay. He is a naturalized Brazilian, living in Brazil since 1976. He studied anthropology and philosophy in Uruguay, and holds a B.A. and an M.A. in History and Sociology from Haifa University, Israel. He received his Ph.D. in Sociology from the University of Manchester in England. Sorj was a professor at the Department of Political Science at the Federal University of Minas Gerais and at the Institute for International Relations, PUC/RJ. The author of 20 books and more than 100 articles, he was a visiting professor and chair at many European and North American universities...<|reference_end|>
arxiv
@article{sorj2008information, title={Information Societies and Digital Divides}, author={Bernardo Sorj}, journal={arXiv preprint arXiv:0807.2743}, year={2008}, archivePrefix={arXiv}, eprint={0807.2743}, primaryClass={cs.CY} }
sorj2008information
arxiv-4348
0807.2829
Congestion Reduction Using Ad hoc Message Dissemination in Vehicular Networks
<|reference_start|>Congestion Reduction Using Ad hoc Message Dissemination in Vehicular Networks: Vehicle-to-vehicle communications can be used effectively for intelligent transport systems (ITS) and location-aware services. The ability to disseminate information in an ad-hoc fashion allows pertinent information to propagate faster through the network. In the realm of ITS, the ability to spread warning information faster and further is of great advantage to the receivers of this information. In this paper we propose and present a message-dissemination procedure that uses vehicular wireless protocols for influencing traffic flow, reducing congestion in road networks. The computational experiments presented in this paper show how an intelligent driver model (IDM) and car-following model can be adapted to 'react' to the reception of information. This model also presents the advantages of coupling together traffic modelling tools and network simulation tools.<|reference_end|>
arxiv
@article{hewer2008congestion, title={Congestion Reduction Using Ad hoc Message Dissemination in Vehicular Networks}, author={Thomas D. Hewer and Maziar Nekovee}, journal={Proceedings of the 4th IEEE International Workshop on Vehicle-to-Vehicle Communications, pages 1-7, 2008}, year={2008}, doi={10.1007/978-3-642-11284-3_14}, archivePrefix={arXiv}, eprint={0807.2829}, primaryClass={cs.NI} }
hewer2008congestion
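The intelligent driver model (IDM) mentioned in hewer2008congestion above is a one-line acceleration law, so the 'reaction' to a warning message can be sketched directly: a received warning lowers the desired speed v0. The parameter values below are common textbook defaults, not the authors' calibration.

```python
# Standard IDM acceleration update; a congestion warning reduces v0.
import math

def idm_accel(v, dv, s, v0=33.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """v: own speed (m/s), dv: approach rate to leader, s: gap to leader (m)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

v, lead_v, gap = 30.0, 25.0, 40.0
acc = idm_accel(v, v - lead_v, gap)                   # normal driving
acc_warned = idm_accel(v, v - lead_v, gap, v0=15.0)   # after a warning message
print(acc, acc_warned)    # the warned vehicle brakes noticeably harder
```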
arxiv-4349
0807.2836
Ordinateur port\'e support de r\'ealit\'e augment\'ee pour des activit\'es de maintenance et de d\'epannage
<|reference_start|>Ordinateur port\'e support de r\'ealit\'e augment\'ee pour des activit\'es de maintenance et de d\'epannage: In this paper we present a case study of the use of a wearable computer in maintenance and repair activities. Besides studying the configuration of this wearable computer and its peripherals, we show the integration of context, in-situ storage, traceability and regulation in these activities. This case study falls within the scope of a larger project called HMTD (Help Me To Do), whose aim is to apply MOCOCO (Mobility, COoperation, COntextualisation) and IMERA (Mobile Interaction in the Augmented Real Environment) principles for better use, maintenance and repair of equipment in domestic, public and professional situations.<|reference_end|>
arxiv
@article{champalle2008ordinateur, title={Ordinateur port\'e support de r\'ealit\'e augment\'ee pour des activit\'es de maintenance et de d\'epannage}, author={Olivier Champalle (ICTT, Liesp), Bertrand David (ICTT, Liesp), Ren\'e Chalon (ICTT, Liesp), Guillaume Masserey (ICTT, Liesp)}, journal={arXiv preprint arXiv:0807.2836}, year={2008}, archivePrefix={arXiv}, eprint={0807.2836}, primaryClass={cs.HC} }
champalle2008ordinateur
arxiv-4350
0807.2844
On the Performance of Selection Relaying
<|reference_start|>On the Performance of Selection Relaying: Interest in selection relaying is growing. The recent developments in this area have largely focused on information theoretic analyses such as outage performance. Some of these analyses are accurate only at high SNR regimes. In this paper error rate analyses that are sufficiently accurate over a wide range of SNR regimes are provided. The motivation for this work is that practical systems operate at far lower SNR values than those supported by the high SNR analysis. To enable designers to make informed decisions regarding network design and deployment, it is imperative that system performance is evaluated with a reasonable degree of accuracy over practical SNR regimes. Simulations have been used to corroborate the analytical results, and close agreement between the two is observed.<|reference_end|>
arxiv
@article{adinoyi2008on, title={On the Performance of Selection Relaying}, author={Abdulkareem Adinoyi, Yijia Fan, Halim Yanikomeroglu and H. Vincent Poor}, journal={arXiv preprint arXiv:0807.2844}, year={2008}, doi={10.1109/VETECF.2008.347}, archivePrefix={arXiv}, eprint={0807.2844}, primaryClass={cs.IT math.IT} }
adinoyi2008on
arxiv-4351
0807.2859
The Transport Capacity of a Wireless Network is a Subadditive Euclidean Functional
<|reference_start|>The Transport Capacity of a Wireless Network is a Subadditive Euclidean Functional: The transport capacity of a dense ad hoc network with n nodes scales like $\sqrt{n}$. We show that the transport capacity divided by $\sqrt{n}$ approaches a non-random limit with probability one when the nodes are i.i.d. distributed on the unit square. We prove that the transport capacity under the protocol model is a subadditive Euclidean functional and use the machinery of subadditive functions in the spirit of Steele to show the existence of the limit.<|reference_end|>
arxiv
@article{ganti2008the, title={The Transport Capacity of a Wireless Network is a Subadditive Euclidean Functional}, author={Radha Krishna Ganti and Martin Haenggi}, journal={arXiv preprint arXiv:0807.2859}, year={2008}, doi={10.1239/jap/1285335416}, archivePrefix={arXiv}, eprint={0807.2859}, primaryClass={cs.IT math.IT} }
ganti2008the
arxiv-4352
0807.2928
Visual Grouping by Neural Oscillators
<|reference_start|>Visual Grouping by Neural Oscillators: Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamic systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. Multi-layer algorithms and feedback mechanisms are also studied. The same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.<|reference_end|>
arxiv
@article{yu2008visual, title={Visual Grouping by Neural Oscillators}, author={Guoshen Yu and Jean-Jacques Slotine}, journal={arXiv preprint arXiv:0807.2928}, year={2008}, archivePrefix={arXiv}, eprint={0807.2928}, primaryClass={cs.CV cs.NE} }
yu2008visual
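A minimal sketch of grouping with diffusively coupled oscillators, in the spirit of yu2008visual above: coupling strength decays with distance, so oscillators within a cluster synchronize quickly while distant clusters remain weakly coupled. The graph, gains and step count are invented toy choices, not the paper's networks.

```python
# Toy grouping by phase synchronization of diffusively coupled oscillators.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated point clusters on a line; similarity decays with distance.
x = np.concatenate([rng.normal(0.0, 0.1, 5), rng.normal(3.0, 0.1, 5)])
W = np.exp(-(x[:, None] - x[None, :]) ** 2)

theta = rng.uniform(0.0, 2 * np.pi, x.size)
for _ in range(2000):                 # diffusive sine (Kuramoto-type) coupling
    theta += 0.05 * (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

print(np.round(theta % (2 * np.pi), 2))   # phases agree within each cluster
```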
arxiv-4353
0807.2961
Perturbed affine arithmetic for invariant computation in numerical program analysis
<|reference_start|>Perturbed affine arithmetic for invariant computation in numerical program analysis: We completely describe a new domain for abstract interpretation of numerical programs. Fixpoint iteration in this domain is proved to converge to finite precise invariants for (at least) the class of stable linear recursive filters of any order. Good evidence shows that it also behaves well for some non-linear schemes. The result, and the structure of the domain, rely on an interesting interplay between order and topology.<|reference_end|>
arxiv
@article{goubault2008perturbed, title={Perturbed affine arithmetic for invariant computation in numerical program analysis}, author={Eric Goubault and Sylvie Putot}, journal={arXiv preprint arXiv:0807.2961}, year={2008}, archivePrefix={arXiv}, eprint={0807.2961}, primaryClass={cs.LO cs.NA} }
goubault2008perturbed
arxiv-4354
0807.2972
DescribeX: A Framework for Exploring and Querying XML Web Collections
<|reference_start|>DescribeX: A Framework for Exploring and Querying XML Web Collections: This thesis introduces DescribeX, a powerful framework that is capable of describing arbitrarily complex XML summaries of web collections, providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries where different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expressions (AxPREs). DescribeX can significantly help in the understanding of both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) with multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can produce query evaluation times orders of magnitude better than using existing summaries. DescribeX's light-weight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage.<|reference_end|>
arxiv
@article{rizzolo2008describex:, title={DescribeX: A Framework for Exploring and Querying XML Web Collections}, author={Flavio Rizzolo}, journal={arXiv preprint arXiv:0807.2972}, year={2008}, archivePrefix={arXiv}, eprint={0807.2972}, primaryClass={cs.DB} }
rizzolo2008describex:
arxiv-4355
0807.2983
On Probability Distributions for Trees: Representations, Inference and Learning
<|reference_start|>On Probability Distributions for Trees: Representations, Inference and Learning: We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient to model probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows one to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can be easily extended to deal with unranked trees (like XML trees where a symbol may have an unbounded number of children). Both properties are particularly relevant for applications: nondeterministic automata are required for the inference problem to be relevant (recall that Hidden Markov Models are equivalent to nondeterministic string automata); nowadays, applications for Web Information Extraction, Web Services and document processing consider unranked trees.<|reference_end|>
arxiv
@article{denis2008on, title={On Probability Distributions for Trees: Representations, Inference and Learning}, author={Fran\c{c}ois Denis (LIF), Amaury Habrard (LIF), R\'emi Gilleron (LIFL, INRIA Futurs), Marc Tommasi (LIFL, INRIA Futurs, GRAPPA), \'Edouard Gilbert (INRIA Futurs)}, journal={In NIPS Workshop on Representations and Inference on Probability Distributions (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0807.2983}, primaryClass={cs.LG} }
denis2008on
arxiv-4356
0807.2993
Establishing and Measuring Standard Spreadsheet Practices for End-Users
<|reference_start|>Establishing and Measuring Standard Spreadsheet Practices for End-Users: This paper offers a brief review of cognitive verbs typically used in the literature to describe standard spreadsheet practices. The verbs identified are then categorised in terms of Bloom's Taxonomy of Hierarchical Levels, and then rated and arranged to distinguish some of their qualities and characteristics. Some measurement items are then evaluated to see how well computerised test question items validate or reinforce training or certification. The paper considers how establishing standard practices in spreadsheet training and certification can help reduce some of the risks associated with spreadsheets, and help promote productivity.<|reference_end|>
arxiv
@article{cleere2008establishing, title={Establishing and Measuring Standard Spreadsheet Practices for End-Users}, author={Garry Cleere}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 1-15 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0807.2993}, primaryClass={cs.HC} }
cleere2008establishing
arxiv-4357
0807.2997
Reducing Spreadsheet Risk with FormulaDataSleuth
<|reference_start|>Reducing Spreadsheet Risk with FormulaDataSleuth: A new MS Excel application has been developed which seeks to reduce the risks associated with the development, operation and auditing of Excel spreadsheets. FormulaDataSleuth provides a means of checking spreadsheet formulas and data as they are developed or used, enabling the users to identify actual or potential errors quickly and thereby halt their propagation. In this paper, we will describe, with examples, how the application works and how it can be applied to reduce the risks associated with Excel spreadsheets.<|reference_end|>
arxiv
@article{bekenn2008reducing, title={Reducing Spreadsheet Risk with FormulaDataSleuth}, author={Bill Bekenn and Ray Hooper}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 33-44 ISBN 978-905617-69-2}, year={2008}, archivePrefix={arXiv}, eprint={0807.2997}, primaryClass={cs.HC cs.SE} }
bekenn2008reducing
arxiv-4358
0807.3006
The rank convergence of HITS can be slow
<|reference_start|>The rank convergence of HITS can be slow: We prove that HITS, to "get right" h of the top k ranked nodes of an N>=2k node graph, can require h^(Omega(N h/k)) iterations (i.e. a substantial Omega(N h log(h)/k) matrix multiplications even with a "squaring trick"). Our proof requires no algebraic tools and is entirely self-contained.<|reference_end|>
arxiv
@article{peserico2008the, title={The rank convergence of HITS can be slow}, author={Enoch Peserico and Luca Pretto}, journal={arXiv preprint arXiv:0807.3006}, year={2008}, archivePrefix={arXiv}, eprint={0807.3006}, primaryClass={cs.DS cs.IR} }
peserico2008the
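For reference alongside the peserico2008the convergence result above, this is the standard HITS iteration whose rank convergence is being analyzed: alternate normalized updates of authority and hub scores. The tiny link matrix is a made-up example.

```python
# Standard HITS power iteration on a toy 3-node link graph.
import numpy as np

A = np.array([[0, 1, 1],      # A[i, j] = 1 iff page i links to page j
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

h = np.ones(3)
for _ in range(50):                    # each round is one HITS iteration
    a = A.T @ h; a /= np.linalg.norm(a)
    h = A @ a;  h /= np.linalg.norm(h)
print(np.argsort(-a))                  # authority ranking after 50 rounds
```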
arxiv-4359
0807.3026
Finding paths of length k in O*(2^k) time
<|reference_start|>Finding paths of length k in O*(2^k) time: We give a randomized algorithm that determines if a given graph has a simple path of length at least k in O(2^k poly(n,k)) time.<|reference_end|>
arxiv
@article{williams2008finding, title={Finding paths of length k in O*(2^k) time}, author={Ryan Williams}, journal={Information Processing Letters, 109(6):315--318, February 2009}, year={2008}, archivePrefix={arXiv}, eprint={0807.3026}, primaryClass={cs.DS cs.DM} }
williams2008finding
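For contrast with the O*(2^k) algebraic algorithm of williams2008finding above, here is a sketch of the older and simpler color-coding method (Alon-Yuster-Zwick), which looks for a path on k vertices and runs in O*((2e)^k) rather than O*(2^k). It is a different technique, shown only because the algebraic one does not fit in a few lines.

```python
# Randomized color-coding: k-color vertices, then DP over color subsets to
# find a "colorful" simple path on k vertices; repeat to boost success.
import random

def has_k_path_once(adj, k, seed):
    rng = random.Random(seed)
    color = {v: rng.randrange(k) for v in adj}
    # reach[(v, S)] = True if some path ends at v using exactly color set S
    reach = {(v, frozenset([color[v]])): True for v in adj}
    for size in range(1, k):
        new = {}
        for (v, S) in reach:
            if len(S) != size:
                continue
            for u in adj[v]:
                if color[u] not in S:
                    new[(u, S | {color[u]})] = True
        reach.update(new)
    return any(len(S) == k for (_, S) in reach)

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph on 4 vertices
print(any(has_k_path_once(adj, 3, s) for s in range(50)))  # repeated trials
```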
arxiv-4360
0807.3050
Dimensionally Distributed Learning: Models and Algorithm
<|reference_start|>Dimensionally Distributed Learning: Models and Algorithm: This paper introduces a framework for regression with dimensionally distributed data with a fusion center. A cooperative learning algorithm, the iterative conditional expectation algorithm (ICEA), is designed within this framework. The algorithm can effectively discover linear combinations of individual estimators trained by each agent without transferring and storing large amounts of data amongst the agents and the fusion center. The convergence of ICEA is explored. Specifically, for a two agent system, each complete round of ICEA is guaranteed to be a non-expansive map on the function space of each agent. The advantages and limitations of ICEA are also discussed for data sets with various distributions and various hidden rules. Moreover, several techniques are also designed to leverage the algorithm to effectively learn more complex hidden rules that are not linearly decomposable.<|reference_end|>
arxiv
@article{zheng2008dimensionally, title={Dimensionally Distributed Learning: Models and Algorithm}, author={Haipeng Zheng, Sanjeev R. Kulkarni, H. Vincent Poor}, journal={arXiv preprint arXiv:0807.3050}, year={2008}, archivePrefix={arXiv}, eprint={0807.3050}, primaryClass={cs.IT math.IT} }
zheng2008dimensionally
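A hedged sketch of the flavor of zheng2008dimensionally above: two agents each observe one coordinate and alternately refit a local least-squares estimator to the residual left by the other, a backfitting-style loop that recovers a linear combination of per-dimension estimators. The details are my own simplification, not the paper's exact ICEA updates.

```python
# Two agents, each seeing one coordinate, alternately fit the other's residual.
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
y = 2.0 * x1 - 3.0 * x2 + rng.normal(0, 0.1, 500)   # hidden additive rule

def fit_1d(x, target):                  # each agent: least squares on own dim
    w = np.dot(x, target) / np.dot(x, x)
    return lambda z: w * z

f1 = lambda z: 0.0 * z
for _ in range(10):                     # alternating residual-fitting rounds
    f2 = fit_1d(x2, y - f1(x1))
    f1 = fit_1d(x1, y - f2(x2))
print(f1(1.0), f2(1.0))                 # approximately 2.0 and -3.0
```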
arxiv-4361
0807.3065
Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes
<|reference_start|>Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes: Consider communication over a binary-input memoryless output-symmetric channel with low density parity check (LDPC) codes and maximum a posteriori (MAP) decoding. The replica method of spin glass theory allows one to conjecture an analytic formula for the average input-output conditional entropy per bit in the infinite block length limit. Montanari proved a lower bound for this entropy, in the case of LDPC ensembles with convex check degree polynomial, which matches the replica formula. Here we extend this lower bound to any irregular LDPC ensemble. The new feature of our work is an analysis of the second derivative of the conditional input-output entropy with respect to noise. A close relation arises between this second derivative and the correlation or mutual information of codebits. This allows us to extend the realm of the interpolation method; in particular, we show how channel symmetry allows one to control the fluctuations of the overlap parameters.<|reference_end|>
arxiv
@article{kudekar2008sharp, title={Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes}, author={Shrinivas Kudekar, Nicolas Macris}, journal={arXiv preprint arXiv:0807.3065}, year={2008}, doi={10.1109/TIT.2009.2027523}, archivePrefix={arXiv}, eprint={0807.3065}, primaryClass={cs.IT math.IT} }
kudekar2008sharp
arxiv-4362
0807.3094
Energy-Efficient Resource Allocation in Multiuser MIMO Systems: A Game-Theoretic Framework
<|reference_start|>Energy-Efficient Resource Allocation in Multiuser MIMO Systems: A Game-Theoretic Framework: This paper focuses on the cross-layer issue of resource allocation for energy efficiency in the uplink of a multiuser MIMO wireless communication system. Assuming that all of the transmitters and the uplink receiver are equipped with multiple antennas, the situation considered is that in which each terminal is allowed to vary its transmit power, beamforming vector, and uplink receiver in order to maximize its own utility, which is defined as the ratio of data throughput to transmit power; the case in which non-linear interference cancellation is used at the receiver is also investigated. Applying a game-theoretic formulation, several non-cooperative games for utility maximization are thus formulated, and their performance is compared in terms of achieved average utility, achieved average SINR and average transmit power at the Nash equilibrium. Numerical results show that the use of the proposed cross-layer resource allocation policies brings remarkable advantages to the network performance.<|reference_end|>
arxiv
@article{buzzi2008energy-efficient, title={Energy-Efficient Resource Allocation in Multiuser MIMO Systems: A Game-Theoretic Framework}, author={Stefano Buzzi, H. Vincent Poor, Daniela Saturnino}, journal={arXiv preprint arXiv:0807.3094}, year={2008}, archivePrefix={arXiv}, eprint={0807.3094}, primaryClass={cs.IT cs.GT math.IT} }
buzzi2008energy-efficient
arxiv-4363
0807.3096
Stochastic Maximum Principle for SPDEs with noise and control on the boundary
<|reference_start|>Stochastic Maximum Principle for SPDEs with noise and control on the boundary: In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations that are controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimension, plays a crucial role in the formulation of the maximum principle.<|reference_end|>
arxiv
@article{guatteri2008stochastic, title={Stochastic Maximum Principle for SPDEs with noise and control on the boundary}, author={Giuseppina Guatteri}, journal={Systems Control Lett. 60 (2011), no. 3, 198-204}, year={2008}, archivePrefix={arXiv}, eprint={0807.3096}, primaryClass={math.PR cs.SY math.OC} }
guatteri2008stochastic
arxiv-4364
0807.3097
Energy-Efficient Power Control in Multipath CDMA Channels via Large System Analysis
<|reference_start|>Energy-Efficient Power Control in Multipath CDMA Channels via Large System Analysis: This paper is focused on the design and analysis of power control procedures for the uplink of multipath code-division-multiple-access (CDMA) channels based on the large system analysis (LSA). Using the tools of LSA, a new decentralized power control algorithm aimed at energy efficiency maximization and requiring very little prior information on the interference background is proposed; moreover, it is also shown that LSA can be used to predict with good accuracy the performance and operational conditions of a large network operating at the equilibrium over a multipath channel, i.e. the power, signal-to-interference-plus-noise ratio (SINR) and utility profiles across users, wherein the utility is defined as the number of bits reliably delivered to the receiver for each energy-unit used for transmission. Additionally, an LSA-based performance comparison among linear receivers is carried out in terms of achieved energy efficiency at the equilibrium. Finally, the problem of the choice of the utility-maximizing training length is also considered. Numerical results show a very satisfactory agreement of the theoretical analysis with simulation results obtained with reference to systems with finite (and not so large) numbers of users.<|reference_end|>
arxiv
@article{buzzi2008energy-efficient, title={Energy-Efficient Power Control in Multipath CDMA Channels via Large System Analysis}, author={Stefano Buzzi, Valeria Massaro, H. Vincent Poor}, journal={arXiv preprint arXiv:0807.3097}, year={2008}, doi={10.1109/PIMRC.2008.4699484}, archivePrefix={arXiv}, eprint={0807.3097}, primaryClass={cs.IT cs.GT math.IT} }
buzzi2008energy-efficient
arxiv-4365
0807.3156
Algorithmic randomness and splitting of supermartingales
<|reference_start|>Algorithmic randomness and splitting of supermartingales: Randomness in the sense of Martin-L\"of can be defined in terms of lower semicomputable supermartingales. We show that such a supermartingale cannot be replaced by a pair of supermartingales that bet only on the even bits (the first one) and on the odd bits (the second one) knowing all preceding bits.<|reference_end|>
arxiv
@article{muchnik2008algorithmic, title={Algorithmic randomness and splitting of supermartingales}, author={Andrej Muchnik}, journal={arXiv preprint arXiv:0807.3156}, year={2008}, archivePrefix={arXiv}, eprint={0807.3156}, primaryClass={cs.IT math.IT} }
muchnik2008algorithmic
arxiv-4366
0807.3168
Audit and Change Analysis of Spreadsheets
<|reference_start|>Audit and Change Analysis of Spreadsheets: Because spreadsheets have a large and growing importance in real-world work, their contents need to be controlled and validated. Generally spreadsheets have been difficult to verify, since data and executable information are stored together. Spreadsheet applications with multiple authors are especially difficult to verify, since controls over access are difficult to enforce. Facing similar problems, traditional software engineering has developed numerous tools and methodologies to control, verify and audit large applications with multiple developers. We present some tools we have developed to enable 1) the audit of selected, filtered, or all changes in a spreadsheet, that is, when a cell was changed, its original and new contents and who made the change, and 2) control of access to the spreadsheet file(s) so that auditing is trustworthy. Our tools apply to OpenOffice.org calc spreadsheets, which can generally be exchanged with Microsoft Excel.<|reference_end|>
arxiv
@article{nash2008audit, title={Audit and Change Analysis of Spreadsheets}, author={John C. Nash, Neil Smith, Andy Adler}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 81-90 ISBN 1 86166 199 1}, year={2008}, archivePrefix={arXiv}, eprint={0807.3168}, primaryClass={cs.HC cs.SE} }
nash2008audit
arxiv-4367
0807.3183
Accuracy in Spreadsheet Modelling Systems
<|reference_start|>Accuracy in Spreadsheet Modelling Systems: Accuracy in spreadsheet modelling systems can be reduced due to difficulties with the inputs, the model itself, or the spreadsheet implementation of the model. When the "true" outputs from the system are unknowable, accuracy is evaluated subjectively. Less than perfect accuracy can be acceptable depending on the purpose of the model, problems with inputs, or resource constraints. Users build modelling systems iteratively, and choose to allocate limited resources to the inputs, the model, the spreadsheet implementation, and to employing the system for business analysis. When making these choices, users can suffer from expectation bias and diagnosis bias. Existing research results tend to focus on errors in the spreadsheet implementation. Because industry has tolerance for system inaccuracy, errors in spreadsheet implementations may not be a serious concern. Spreadsheet productivity may be of more interest.<|reference_end|>
arxiv
@article{grossman2008accuracy, title={Accuracy in Spreadsheet Modelling Systems}, author={Thomas A. Grossman}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 91-97 ISBN 1 86166 199 1}, year={2008}, archivePrefix={arXiv}, eprint={0807.3183}, primaryClass={cs.HC cs.SE} }
grossman2008accuracy
arxiv-4368
0807.3184
Research Strategy and Scoping Survey on Spreadsheet Practices
<|reference_start|>Research Strategy and Scoping Survey on Spreadsheet Practices: We propose a research strategy for creating and deploying prescriptive recommendations for spreadsheet practice. Empirical data on usage can be used to create a taxonomy of spreadsheet classes. Within each class, existing practices and ideal practices can be combined into proposed best practices for deployment. As a first step we propose a scoping survey to gather non-anecdotal data on spreadsheet usage. The scoping survey will interview people who develop spreadsheets. We will investigate the determinants of spreadsheet importance, identify current industry practices, and document existing standards for creation and use of spreadsheets. The survey will provide insight into user attributes, spreadsheet importance, and current practices. Results will be valuable in themselves, and will guide future empirical research.<|reference_end|>
arxiv
@article{grossman2008research, title={Research Strategy and Scoping Survey on Spreadsheet Practices}, author={Thomas A. Grossman, Ozgur Ozluk}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 23-32 ISBN 1 86166 199 1}, year={2008}, archivePrefix={arXiv}, eprint={0807.3184}, primaryClass={cs.HC} }
grossman2008research
arxiv-4369
0807.3186
New Guidelines For Spreadsheets
<|reference_start|>New Guidelines For Spreadsheets: Current prescriptions for spreadsheet style specify modular separation of data, calculation and output, based on the notion that writing a spreadsheet is like writing a computer program. Instead of a computer programming style, this article examines rules of style for text, graphics, and mathematics. Much 'common wisdom' in spreadsheets contradicts rules for these well-developed arts. A case is made here for a new style for spreadsheets that emphasises readability. The new style is described in detail with an example, and contrasted with the programming style.<|reference_end|>
arxiv
@article{raffensperger2008new, title={New Guidelines For Spreadsheets}, author={John F. Raffensperger}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2001 61-76 ISBN:1 86166 179 7}, year={2008}, archivePrefix={arXiv}, eprint={0807.3186}, primaryClass={cs.HC} }
raffensperger2008new
arxiv-4370
0807.3187
When, why and how to test spreadsheets
<|reference_start|>When, why and how to test spreadsheets: Testing is a vital part of software development, and spreadsheets are like any other software in this respect. This paper discusses the testing of spreadsheets in the light of one practitioner's experience. It considers the concept of software testing and how it differs from reviewing, and describes when it might take place. Different types of testing are described, and some techniques for performing them presented. Some of the commonly encountered problems are discussed.<|reference_end|>
arxiv
@article{pryor2008when, title={When, why and how to test spreadsheets}, author={Louise Pryor}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2004 145-151 ISBN 1 902724 94 1}, year={2008}, archivePrefix={arXiv}, eprint={0807.3187}, primaryClass={cs.SE} }
pryor2008when
arxiv-4371
0807.3198
On algebras admitting a complete set of near weights, evaluation codes and Goppa codes
<|reference_start|>On algebras admitting a complete set of near weights, evaluation codes and Goppa codes: In 1998 Hoholdt, van Lint and Pellikaan introduced the concept of a ``weight function'' defined on an F_q-algebra and used it to construct linear codes, obtaining among them the algebraic-geometric (AG) codes supported on one point. Later it was proved by Matsumoto that all codes produced using a weight function are actually AG codes supported on one point. Recently, ``near weight functions'' (a generalization of weight functions), also defined on an F_q-algebra, were introduced to study codes supported on two points. In this paper we show that an algebra admits a set of m near weight functions having a compatibility property, namely, the set is a ``complete set'', if and only if it is the ring of regular functions of an affine geometrically irreducible algebraic curve defined over F_q whose points at infinity have a total of m rational branches. Then the codes produced using the near weight functions are exactly the AG codes supported on m points. A formula for the minimum distance of these codes is presented with examples which show that in some situations it compares favorably with the usual Goppa bound.<|reference_end|>
arxiv
@article{carvalho2008on, title={On algebras admitting a complete set of near weights, evaluation codes and Goppa codes}, author={Cicero Carvalho and Ercilio Silva}, journal={arXiv preprint arXiv:0807.3198}, year={2008}, archivePrefix={arXiv}, eprint={0807.3198}, primaryClass={cs.IT math.IT} }
carvalho2008on
arxiv-4372
0807.3212
Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance
<|reference_start|>Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance: In this paper we construct constant dimension space codes with prescribed minimum distance. There is an increased interest in space codes since a paper by Koetter and Kschischang where they gave an application in network coding. There is also a connection to the theory of designs over finite fields. We will modify a method of Braun, Kerber and Laue, which they used for the construction of designs over finite fields, to construct space codes. Using this approach we found many new constant dimension space codes with a larger number of codewords than previously known codes. We finally give a table of the best constant dimension space codes found.<|reference_end|>
arxiv
@article{kohnert2008construction, title={Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance}, author={Axel Kohnert, Sascha Kurz}, journal={Lecture Notes Computer Science Vol. 5393, 2008, p. 31 - 42}, year={2008}, doi={10.1007/978-3-540-89994-5_4}, archivePrefix={arXiv}, eprint={0807.3212}, primaryClass={cs.IT cs.DM math.CO math.IT} }
kohnert2008construction
arxiv-4373
0807.3222
The two-user Gaussian interference channel: a deterministic view
<|reference_start|>The two-user Gaussian interference channel: a deterministic view: This paper explores the two-user Gaussian interference channel through the lens of a natural deterministic channel model. The main result is that the deterministic channel uniformly approximates the Gaussian channel, the capacity regions differing by a universal constant. The problem of finding the capacity of the Gaussian channel to within a constant error is therefore reduced to that of finding the capacity of the far simpler deterministic channel. Thus, the paper provides an alternative derivation of the recent constant gap capacity characterization of Etkin, Tse, and Wang. Additionally, the deterministic model gives significant insight towards the Gaussian channel.<|reference_end|>
arxiv
@article{bresler2008the, title={The two-user Gaussian interference channel: a deterministic view}, author={Guy Bresler and David Tse}, journal={Draft of version in Euro. Trans. Telecomm., Volume 19, Issue 4, pp. 333-354, June 2008}, year={2008}, archivePrefix={arXiv}, eprint={0807.3222}, primaryClass={cs.IT math.IT} }
bresler2008the
arxiv-4374
0807.3223
The NAO humanoid: a combination of performance and affordability
<|reference_start|>The NAO humanoid: a combination of performance and affordability: This article presents the design of the autonomous humanoid robot called NAO that is built by the French company Aldebaran-Robotics. With its height of 0.57 m and its weight of about 4.5 kg, this innovative robot is lightweight and compact. It distinguishes itself from its existing Japanese, American, and other counterparts thanks to its pelvis kinematics design, its proprietary actuation system based on brush DC motors, and its electronic, computer and distributed software architectures. This robot has been designed to be affordable without sacrificing quality and performance. It is an open and easy-to-handle platform where the user can change all the embedded system software or just add some applications to make the robot adopt specific behaviours. The robot's head and forearms are modular and can be changed to promote further evolution. The comprehensive and functional design is one of the reasons that helped select NAO to replace the AIBO quadrupeds in the 2008 RoboCup standard league.<|reference_end|>
arxiv
@article{gouaillier2008the, title={The NAO humanoid: a combination of performance and affordability}, author={David Gouaillier, Vincent Hugel, Pierre Blazevic, Chris Kilner, Jerome Monceaux, Pascal Lafourcade, Brice Marnier, Julien Serre, Bruno Maisonnier}, journal={arXiv preprint arXiv:0807.3223}, year={2008}, archivePrefix={arXiv}, eprint={0807.3223}, primaryClass={cs.RO} }
gouaillier2008the
arxiv-4375
0807.3225
Exploiting Bird Locomotion Kinematics Data for Robotics Modeling
<|reference_start|>Exploiting Bird Locomotion Kinematics Data for Robotics Modeling: We present here the results of an analysis carried out by biologists and roboticists with the aim of modeling bird locomotion kinematics for robotics purposes. The aim was to develop a bio-inspired kinematic model of the bird leg from biological data. We first acquired and processed kinematic data for sagittal and top views obtained by X-ray radiography of quails walking. Data processing involved filtering and specific data reconstruction in three dimensions, as two-dimensional views cannot be synchronized. We then designed a robotic model of a bird-like leg based on a kinematic analysis of the biological data. Angular velocity vectors were calculated to define the number of degrees of freedom (DOF) at each joint and the orientation of the rotation axes.<|reference_end|>
arxiv
@article{hugel2008exploiting, title={Exploiting Bird Locomotion Kinematics Data for Robotics Modeling}, author={Vincent Hugel, Remi Hackert, Anick Abourachid}, journal={arXiv preprint arXiv:0807.3225}, year={2008}, archivePrefix={arXiv}, eprint={0807.3225}, primaryClass={cs.RO} }
hugel2008exploiting
arxiv-4376
0807.3277
Another Co*cryption Method
<|reference_start|>Another Co*cryption Method: We consider the enciphering of a data stream while it is being compressed by an LZ algorithm. This is to be compared with the classical encryption-after-compression methods used in security protocols. Indeed, most cryptanalysis techniques exploit patterns found in the plaintext to crack the cipher; compression techniques reduce these attacks. Our scheme is based on an LZ compression in which a Vernam cipher has been added. We make some security remarks by trying to measure its randomness with statistical tests. Such a scheme could be employed to increase the speed of security protocols and to decrease the computing power required on mobile devices.<|reference_end|>
arxiv
@article{martin2008another, title={Another Co*cryption Method}, author={Bruno Martin (I3S)}, journal={International Conference on Science and Technology (JICT), Malaga, Spain (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0807.3277}, primaryClass={cs.CR} }
martin2008another
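The compress-then-encipher idea in martin2008another above can be prototyped with standard pieces: zlib (an LZ77-family codec) followed by a Vernam-style XOR keystream. Note the paper embeds the cipher inside the LZ coder itself; the SHA-256 counter-mode keystream here is purely a stand-in for the demo.

```python
# Demo: LZ-compress, then XOR with a keystream; decryption reverses both.
import zlib, hashlib, itertools

def keystream(key: bytes):
    for ctr in itertools.count():       # SHA-256 in counter mode (demo only)
        yield from hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

def encrypt(data: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(data)
    return bytes(b ^ k for b, k in zip(compressed, keystream(key)))

def decrypt(blob: bytes, key: bytes) -> bytes:
    plain = bytes(b ^ k for b, k in zip(blob, keystream(key)))
    return zlib.decompress(plain)

msg = b"attack at dawn " * 10
ct = encrypt(msg, b"secret")
assert decrypt(ct, b"secret") == msg
```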
arxiv-4377
0807.3287
Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods
<|reference_start|>Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods: Our aim is to build a set of rules, such that reasoning over temporal dependencies within gene regulatory networks is possible. The underlying transitions may be obtained by discretizing observed time series, or they may be generated from existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. Using the attribute exploration algorithm, an expert or a supporting computer program can decide on the validity of a minimal set of implications and thus construct a sound and complete knowledge base. From this, all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. However, the formal structures are exhibited in a fully general manner, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and nonbiological areas.<|reference_end|>
arxiv
@article{wollbold2008constructing, title={Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods}, author={Johannes Wollbold, Reinhard Guthke, Bernhard Ganter}, journal={K. Horimoto et al. (Eds.): AB 2008, LNCS 5147. Springer, Heidelberg 2008, pp. 230-244}, year={2008}, archivePrefix={arXiv}, eprint={0807.3287}, primaryClass={q-bio.MN cs.AI math.LO} }
wollbold2008constructing
arxiv-4378
0807.3326
An $O(\log n)$-approximation for the Set Cover Problem with Set Ownership
<|reference_start|>An $O(\log n)$-approximation for the Set Cover Problem with Set Ownership: In highly distributed Internet measurement systems, distributed agents periodically measure the Internet using a tool called {\tt traceroute}, which discovers a path in the network graph. Each agent performs many traceroute measurements to a set of destinations in the network, and thus reveals a portion of the Internet graph as it is seen from the agent locations. In every period we need to check whether previously discovered edges still exist in this period, a process termed {\em validation}. To this end we maintain a database of all the different measurements performed by each agent. Our aim is to be able to {\em validate} the existence of all previously discovered edges in the minimum possible time. In this work we formulate the validation problem as a generalization of the well known set cover problem. We reduce the set cover problem to the validation problem, thus proving that the validation problem is ${\cal NP}$-hard. We present an $O(\log n)$-approximation algorithm for the validation problem, where $n$ is the number of edges that need to be validated. We also show that unless ${\cal P = NP}$ the approximation ratio of the validation problem is $\Omega(\log n)$.<|reference_end|>
arxiv
@article{gonen2008an, title={An $O(\log n)$-approximation for the Set Cover Problem with Set Ownership}, author={Mira Gonen and Yuval Shavitt}, journal={arXiv preprint arXiv:0807.3326}, year={2008}, archivePrefix={arXiv}, eprint={0807.3326}, primaryClass={cs.NI cs.CC} }
gonen2008an
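The O(log n) factor quoted in gonen2008an above is achieved by the classical greedy rule: repeatedly pick the set covering the most uncovered elements. In the validation setting, elements are edges to validate and each set is the edge set revealed by one agent-destination traceroute; the tiny instance below is invented.

```python
# Classical greedy set cover: pick the set with the most uncovered elements.
def greedy_cover(universe, sets):
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & sets[s]))
        if not uncovered & sets[best]:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

edges = {1, 2, 3, 4, 5}
measurements = {"a->d1": {1, 2, 3}, "a->d2": {3, 4}, "b->d1": {4, 5}}
print(greedy_cover(edges, measurements))   # ['a->d1', 'b->d1']
```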
arxiv-4379
0807.3332
Energy-efficient Scheduling of Delay Constrained Traffic over Fading Channels
<|reference_start|>Energy-efficient Scheduling of Delay Constrained Traffic over Fading Channels: A delay-constrained scheduling problem for point-to-point communication is considered: a packet of $B$ bits must be transmitted by a hard deadline of $T$ slots over a time-varying channel. The transmitter/scheduler must determine how many bits to transmit, or equivalently how much energy to transmit with, during each time slot based on the current channel quality and the number of unserved bits, with the objective of minimizing expected total energy. In order to focus on the fundamental scheduling problem, it is assumed that no other packets are scheduled during this time period and no outage is allowed. Assuming transmission at capacity of the underlying Gaussian noise channel, a closed-form expression for the optimal scheduling policy is obtained for the case T=2 via dynamic programming; for $T>2$, the optimal policy can only be numerically determined. Thus, the focus of the work is on derivation of simple, near-optimal policies based on intuition from the T=2 solution and the structure of the general problem. The proposed bit-allocation policies consist of a linear combination of a delay-associated term and an opportunistic (channel-aware) term. In addition, a variation of the problem in which the entire packet must be transmitted in a single slot is studied, and a channel-threshold policy is shown to be optimal.<|reference_end|>
arxiv
@article{lee2008energy-efficient, title={Energy-efficient Scheduling of Delay Constrained Traffic over Fading Channels}, author={Juyul Lee and Nihar Jindal}, journal={arXiv preprint arXiv:0807.3332}, year={2008}, archivePrefix={arXiv}, eprint={0807.3332}, primaryClass={cs.IT math.IT} }
lee2008energy-efficient
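A small dynamic program makes the structure in lee2008energy-efficient concrete: sending g bits at Gaussian capacity in a slot with gain h costs energy proportional to (2^g - 1)/h, and the value recursion averages over the next channel state. The i.i.d. three-state channel and all constants are simplifying assumptions for the sketch (the paper's channel need not be discrete).

```python
# DP for deadline-constrained bit allocation over a discretized fading channel.
import math

B, T = 8, 3                                   # bits to send, slots available
gains = [0.5, 1.0, 2.0]                       # possible channel gains
prob = [0.3, 0.4, 0.3]                        # their probabilities (i.i.d.)

def energy(g, h):
    return (2.0 ** g - 1.0) / h               # slot energy at Gaussian capacity

# V[t][b] = expected energy-to-go with b bits left and t slots remaining
V = [[0.0] * (B + 1) for _ in range(T + 1)]
V[0] = [0.0] + [math.inf] * B                 # unsent bits at deadline: outage
for t in range(1, T + 1):
    for b in range(B + 1):
        exp_cost = 0.0
        for h, p in zip(gains, prob):         # average over channel states
            best = min(energy(g, h) + V[t - 1][b - g]
                       for g in range(0, b + 1))
            exp_cost += p * best
        V[t][b] = exp_cost
print(V[T][B])                                # minimum expected total energy
```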
arxiv-4380
0807.3337
Algebraic constructions of LDPC codes with no short cycles
<|reference_start|>Algebraic constructions of LDPC codes with no short cycles: An algebraic group ring method for constructing codes with no short cycles in the check matrix is derived. It is shown that the matrix of a group ring element has no short cycles if and only if the collection of group differences of this element has no repeats. When applied to elements in the group ring with small support this gives a general method for constructing and analysing low density parity check (LDPC) codes with no short cycles from group rings. Examples of LDPC codes with no short cycles are constructed from group ring elements and these are simulated and compared with known LDPC codes, including those adopted for wireless standards.<|reference_end|>
arxiv
@article{hurley2008algebraic, title={Algebraic constructions of LDPC codes with no short cycles}, author={Ted Hurley, Paul McEvoy, Jakub Wenus}, journal={arXiv preprint arXiv:0807.3337}, year={2008}, archivePrefix={arXiv}, eprint={0807.3337}, primaryClass={math.RA cs.IT math.IT} }
hurley2008algebraic
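The criterion stated in hurley2008algebraic above is directly checkable: over a cyclic group Z_n, a group ring element with support S gives a circulant check matrix free of 4-cycles iff the differences of distinct elements of S are all distinct. The sample supports below are illustrative.

```python
# Check whether all pairwise group differences of a support set are distinct.
def differences_distinct(support, n):
    diffs = [(a - b) % n for a in support for b in support if a != b]
    return len(diffs) == len(set(diffs))

print(differences_distinct([0, 1, 3], 7))    # True: a perfect difference set
print(differences_distinct([0, 1, 2], 7))    # False: difference 1 repeats
```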
arxiv-4381
0807.3374
The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena
<|reference_start|>The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena: The Internet is the most complex system ever created in human history. Therefore, its dynamics and traffic unsurprisingly take on a rich variety of complex dynamics, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from normal treatises, we will take a view from both the network engineering and physics perspectives showing the strengths and weaknesses as well as insights of both. In addition, many less covered phenomena such as traffic oscillations, large-scale effects of worm traffic, and comparisons of the Internet and biological models will be covered.<|reference_end|>
arxiv
@article{smith2008the, title={The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena}, author={Reginald D. Smith}, journal={Advances in Complex Systems, 14, 6 p. 905-949 (2011)}, year={2008}, doi={10.1142/S0219525911003451}, archivePrefix={arXiv}, eprint={0807.3374}, primaryClass={nlin.AO cs.NI nlin.CD} }
smith2008the
arxiv-4382
0807.3383
Recover plaintext attack to block ciphers
<|reference_start|>Recover plaintext attack to block ciphers: We present an estimate of the upper bound on the number of 16-byte plaintexts occurring in English text, which indicates that block ciphers with a block length of no more than 16 bytes will be subject to plaintext-recovery attacks in known-plaintext or chosen-plaintext settings.<|reference_end|>
arxiv
@article{li2008recover, title={Recover plaintext attack to block ciphers}, author={An-Ping Li}, journal={arXiv preprint arXiv:0807.3383}, year={2008}, archivePrefix={arXiv}, eprint={0807.3383}, primaryClass={cs.CR} }
li2008recover
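For a rough sense of the kind of estimate li2008recover is making, here is a back-of-the-envelope count with my own illustrative figure of about 1.5 bits of entropy per character of English (a common textbook value), not the paper's numbers:

$$ N \approx 2^{16 \times 1.5} = 2^{24} \ \text{plausible 16-byte English blocks} \;\ll\; 2^{128} \ \text{possible 16-byte blocks}, $$

so a dictionary of likely plaintext blocks is astronomically smaller than the full block space, which is what makes table-based plaintext recovery conceivable.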
arxiv-4383
0807.3387
Phylogenetic estimation with partial likelihood tensors
<|reference_start|>Phylogenetic estimation with partial likelihood tensors: We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.<|reference_end|>
arxiv
@article{sumner2008phylogenetic, title={Phylogenetic estimation with partial likelihood tensors}, author={J. G. Sumner and M. A. Charleston}, journal={arXiv preprint arXiv:0807.3387}, year={2008}, archivePrefix={arXiv}, eprint={0807.3387}, primaryClass={q-bio.QM cs.DS q-bio.PE} }
sumner2008phylogenetic
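For contrast with the tensor method above, here is a minimal sketch of the standard partial-likelihood-vector recursion (Felsenstein's pruning) that the paper generalizes. The Jukes-Cantor model and the tuple-based tree encoding are illustrative assumptions, not the paper's own:

```python
import numpy as np

def jc_matrix(t, mu=1.0):
    """Jukes-Cantor transition probabilities for branch length t."""
    p_same = 0.25 + 0.75 * np.exp(-4.0 * mu * t / 3.0)
    p_diff = 0.25 - 0.25 * np.exp(-4.0 * mu * t / 3.0)
    P = np.full((4, 4), p_diff)
    np.fill_diagonal(P, p_same)
    return P

def partial_likelihood(node):
    """Felsenstein's pruning. node is ('leaf', state) or
    ('internal', (child, branch_len), (child, branch_len))."""
    if node[0] == 'leaf':
        L = np.zeros(4)
        L[node[1]] = 1.0
        return L
    _, (left, tl), (right, tr) = node
    return (jc_matrix(tl) @ partial_likelihood(left)) * \
           (jc_matrix(tr) @ partial_likelihood(right))

# ((A:0.1, C:0.2):0.05, G:0.3) with states A=0, C=1, G=2, T=3
tree = ('internal',
        (('internal', (('leaf', 0), 0.1), (('leaf', 1), 0.2)), 0.05),
        (('leaf', 2), 0.3))
print(np.dot(np.full(4, 0.25), partial_likelihood(tree)))  # site likelihood
```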
arxiv-4384
0807.3396
Universal Denoising of Discrete-time Continuous-Amplitude Signals
<|reference_start|>Universal Denoising of Discrete-time Continuous-Amplitude Signals: We consider the problem of reconstructing a discrete-time signal (sequence) with continuous-valued components corrupted by a known memoryless channel. When performance is measured using a per-symbol loss function satisfying mild regularity conditions, we develop a sequence of denoisers that, although independent of the distribution of the underlying `clean' sequence, is universally optimal in the limit of large sequence length. This sequence of denoisers is universal in the sense of performing as well as any sliding window denoising scheme which may be optimized for the underlying clean signal. Our results are initially developed in a ``semi-stochastic'' setting, where the noiseless signal is an unknown individual sequence, and the only source of randomness is due to the channel noise. It is subsequently shown that in the fully stochastic setting, where the noiseless sequence is a stationary stochastic process, our schemes universally attain optimum performance. The proposed schemes draw from nonparametric density estimation techniques and are practically implementable. We demonstrate efficacy of the proposed schemes in denoising gray-scale images in the conventional additive white Gaussian noise setting, with additional promising results for less conventional noise distributions.<|reference_end|>
arxiv
@article{sivaramakrishnan2008universal, title={Universal Denoising of Discrete-time Continuous-Amplitude Signals}, author={Kamakshi Sivaramakrishnan and Tsachy Weissman}, journal={arXiv preprint arXiv:0807.3396}, year={2008}, archivePrefix={arXiv}, eprint={0807.3396}, primaryClass={cs.IT cs.LG math.IT math.ST stat.TH} }
sivaramakrishnan2008universal
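The reference class in the abstract above is that of sliding-window schemes. A minimal sketch of one such k-th order member (a window median — an illustrative choice, not the paper's universal scheme) against which the universal denoiser is measured:

```python
import numpy as np

def sliding_window_denoiser(noisy, k, stat=np.median):
    """A k-th order sliding-window scheme: estimate each symbol from the
    length-(2k+1) noisy window centred on it. This is one member of the
    reference class the universal denoiser is guaranteed to match."""
    n = len(noisy)
    padded = np.pad(noisy, k, mode='edge')
    return np.array([stat(padded[i:i + 2 * k + 1]) for i in range(n)])

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + rng.normal(scale=0.3, size=200)   # known AWGN channel
print(np.mean((sliding_window_denoiser(noisy, 3) - clean) ** 2))
```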
arxiv-4385
0807.3427
A characterization of 2-player mechanisms for scheduling
<|reference_start|>A characterization of 2-player mechanisms for scheduling: We study the mechanism design problem of scheduling unrelated machines and we completely characterize the decisive truthful mechanisms for two players when the domain contains both positive and negative values. We show that the class of truthful mechanisms is very limited: a decisive truthful mechanism partitions the tasks into groups so that the tasks in each group are allocated independently of the other groups. Tasks in a group of size at least two are allocated by an affine minimizer and tasks in singleton groups by a task-independent mechanism. This characterization covers all truthful mechanisms, including those with unbounded approximation ratio. A direct consequence of this approach is that the approximation ratio of mechanisms for two players is 2, even for two tasks. In fact, it follows that for two players, VCG is the unique algorithm with optimal approximation 2. This characterization provides some support for the conjecture that any decisive truthful mechanism (for 3 or more players) partitions the tasks into groups, some of which are allocated by affine minimizers, while the rest are allocated by a threshold mechanism (in which a task is allocated to a player when it is below a threshold value that depends only on the values of the other players). We also show that the class of threshold mechanisms is identical to the class of additive mechanisms.<|reference_end|>
arxiv
@article{christodoulou2008a, title={A characterization of 2-player mechanisms for scheduling}, author={George Christodoulou (1), Elias Koutsoupias (2), Angelina Vidali (2) ((1) Max-Planck-Institut für Informatik, Saarbrücken, Germany, (2) Department of Informatics, University of Athens)}, journal={arXiv preprint arXiv:0807.3427}, year={2008}, archivePrefix={arXiv}, eprint={0807.3427}, primaryClass={cs.GT} }
christodoulou2008a
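The VCG mechanism singled out above is simple to state for unrelated machines: each task goes to the player reporting the lower cost for it. A minimal sketch (the cost-matrix layout is an illustrative assumption):

```python
def vcg_allocation(costs):
    """VCG allocation for scheduling unrelated machines: each task is
    assigned to the player with the lower reported cost. Per the paper,
    for two players this achieves the optimal approximation ratio 2."""
    return [min(range(len(costs)), key=lambda p: costs[p][t])
            for t in range(len(costs[0]))]

costs = [[3, 1, 4],   # player 0's cost for each task
         [2, 2, 5]]   # player 1's cost for each task
print(vcg_allocation(costs))   # [1, 0, 0]
```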
arxiv-4386
0807.3451
A Non-Termination Criterion for Binary Constraint Logic Programs
<|reference_start|>A Non-Termination Criterion for Binary Constraint Logic Programs: On the one hand, termination analysis of logic programs is now a fairly established research topic within the logic programming community. On the other hand, non-termination analysis seems to remain a much less attractive subject. If we divide this line of research into two kinds of approaches: dynamic versus static analysis, this paper belongs to the latter. It proposes a criterion for detecting non-terminating atomic queries with respect to binary CLP rules, which strictly generalizes our previous works on this subject. We give a generic operational definition and an implemented logical form of this criterion. Then we show that the logical form is correct and complete with respect to the operational definition.<|reference_end|>
arxiv
@article{payet2008a, title={A Non-Termination Criterion for Binary Constraint Logic Programs}, author={Etienne Payet and Fred Mesnard}, journal={arXiv preprint arXiv:0807.3451}, year={2008}, archivePrefix={arXiv}, eprint={0807.3451}, primaryClass={cs.PL} }
payet2008a
arxiv-4387
0807.3483
Implementing general belief function framework with a practical codification for low complexity
<|reference_start|>Implementing general belief function framework with a practical codification for low complexity: In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to easily manipulate the focal elements. To reduce complexity, any constraints must be integrated into the codification from the beginning. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The decision step is studied in particular: indeed, when we can decide on intersections of the singletons of the discernment space, no existing decision function is easy to use. Hence, two approaches are proposed: an extension of a previous one, and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical code for a general belief function framework to researchers and users needing belief function theory.<|reference_end|>
arxiv
@article{martin2008implementing, title={Implementing general belief function framework with a practical codification for low complexity}, author={Arnaud Martin (E3I2)}, journal={arXiv preprint arXiv:0807.3483}, year={2008}, archivePrefix={arXiv}, eprint={0807.3483}, primaryClass={cs.AI} }
martin2008implementing
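One common way to realize such a codification — an illustrative choice, not necessarily the chapter's exact scheme — is to encode focal elements of $2^\Theta$ as integer bitmasks, so that the set intersections of a conjunctive combination become bitwise ANDs:

```python
from collections import defaultdict

# Theta = {A, B, C}: A=0b001, B=0b010, C=0b100, 'A u B'=0b011, etc.
def conjunctive_rule(m1, m2):
    """Conjunctive combination of two mass functions whose focal
    elements are bitmasks: intersection is a bitwise AND, so set
    manipulation reduces to fast integer operations."""
    out = defaultdict(float)
    for x, wx in m1.items():
        for y, wy in m2.items():
            out[x & y] += wx * wy
    return dict(out)

m1 = {0b001: 0.6, 0b011: 0.4}       # m1(A)=0.6, m1(A u B)=0.4
m2 = {0b010: 0.5, 0b111: 0.5}       # m2(B)=0.5, m2(Theta)=0.5
print(conjunctive_rule(m1, m2))     # mass on key 0 is the conflict
```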
arxiv-4388
0807.3566
Stabilizer Quantum Codes: A Unified View based on Forney-style Factor Graphs
<|reference_start|>Stabilizer Quantum Codes: A Unified View based on Forney-style Factor Graphs: Quantum error-correction codes (QECCs) are a vital ingredient of quantum computation and communication systems. In that context it is highly desirable to design QECCs that can be represented by graphical models which possess a structure that enables efficient and close-to-optimal iterative decoding. In this paper we focus on stabilizer QECCs, a class of QECCs whose construction is rendered non-trivial by the fact that the stabilizer label code, a code that is associated with a stabilizer QECC, has to satisfy a certain self-orthogonality condition. In order to design graphical models of stabilizer label codes that satisfy this condition, we extend a duality result for Forney-style factor graphs (FFGs) to the stabilizer label code framework. This allows us to formulate a simple FFG design rule for constructing stabilizer label codes, a design rule that unifies several earlier stabilizer label code constructions.<|reference_end|>
arxiv
@article{vontobel2008stabilizer, title={Stabilizer Quantum Codes: A Unified View based on Forney-style Factor Graphs}, author={Pascal O. Vontobel}, journal={arXiv preprint arXiv:0807.3566}, year={2008}, archivePrefix={arXiv}, eprint={0807.3566}, primaryClass={quant-ph cs.IT math.IT} }
vontobel2008stabilizer
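The self-orthogonality condition mentioned above reduces, in the binary symplectic representation of Pauli operators, to pairwise-vanishing symplectic inner products between stabilizer generators. A minimal check (the [[4,2,2]] generators are an illustrative example):

```python
import numpy as np

def commute(p, q, n):
    """Pauli operators in binary symplectic form (x | z), length 2n each.
    Two Paulis commute iff the symplectic inner product
    x1.z2 + z1.x2 vanishes mod 2."""
    ax, az = p[:n], p[n:]
    bx, bz = q[:n], q[n:]
    return (np.dot(ax, bz) + np.dot(az, bx)) % 2 == 0

# Two stabilizer generators of the [[4,2,2]] code: XXXX and ZZZZ
n = 4
XXXX = np.array([1, 1, 1, 1, 0, 0, 0, 0])
ZZZZ = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(commute(XXXX, ZZZZ, n))  # True: the X/Z overlap has even size
```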
arxiv-4389
0807.3574
Mode Switching for MIMO Communication Based on Delay and Channel Quantization
<|reference_start|>Mode Switching for MIMO Communication Based on Delay and Channel Quantization: This paper has been withdrawn by the author, as a major revision has been made and a new version has been uploaded at arXiv:0812.3120<|reference_end|>
arxiv
@article{zhang2008mode, title={Mode Switching for MIMO Communication Based on Delay and Channel Quantization}, author={Jun Zhang, Jeffrey G. Andrews, and Robert W. Heath Jr}, journal={arXiv preprint arXiv:0807.3574}, year={2008}, archivePrefix={arXiv}, eprint={0807.3574}, primaryClass={cs.IT math.IT} }
zhang2008mode
arxiv-4390
0807.3582
Error Correction Capability of Column-Weight-Three LDPC Codes: Part II
<|reference_start|>Error Correction Capability of Column-Weight-Three LDPC Codes: Part II: The relation between the girth and the error correction capability of column-weight-three LDPC codes is investigated. Specifically, it is shown that the Gallager A algorithm can correct $g/2-1$ errors in $g/2$ iterations on a Tanner graph of girth $g \geq 10$.<|reference_end|>
arxiv
@article{chilappagari2008error, title={Error Correction Capability of Column-Weight-Three LDPC Codes: Part II}, author={Shashi Kiran Chilappagari, Dung Viet Nguyen, Bane Vasic and Michael W. Marcellin}, journal={arXiv preprint arXiv:0807.3582}, year={2008}, doi={10.1109/TIT.2009.2015990}, archivePrefix={arXiv}, eprint={0807.3582}, primaryClass={cs.IT math.IT} }
chilappagari2008error
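A minimal sketch of the Gallager A algorithm analyzed above, for a binary symmetric channel. The tentative-decision rule shown is one common variant, and the small Hamming-code example is purely illustrative (the paper's guarantee concerns column-weight-three graphs of girth at least 10):

```python
import numpy as np

def gallager_a(H, y, iters):
    """Gallager A hard-decision decoding. H: binary parity-check
    matrix (m x n); y: received word from a binary symmetric channel."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: y[e[1]] for e in edges}
    x = y.copy()
    for _ in range(iters):
        # check-to-variable: parity of the other incoming messages
        c2v = {(i, j): sum(v2c[(i, k)] for k in range(n)
                           if H[i, k] and k != j) % 2
               for (i, j) in edges}
        # variable-to-check: send y[j] unless ALL other incoming
        # check messages agree on its complement
        for (i, j) in edges:
            others = [c2v[(k, j)] for k in range(m) if H[k, j] and k != i]
            v2c[(i, j)] = (1 - y[j] if others and
                           all(b != y[j] for b in others) else y[j])
        # tentative decision: flip y[j] iff every incoming message disagrees
        x = np.array([1 - y[j] if all(c2v[(k, j)] != y[j]
                      for k in range(m) if H[k, j]) else y[j]
                      for j in range(n)])
        if not (H @ x % 2).any():
            break
    return x

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] = 1                       # one error on the all-zero codeword
print(gallager_a(H, y, 10))    # recovers the all-zero word
```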
arxiv-4391
0807.3590
Counting the Faces of Randomly-Projected Hypercubes and Orthants, with Applications
<|reference_start|>Counting the Faces of Randomly-Projected Hypercubes and Orthants, with Applications: Let $A$ be an $n$ by $N$ real-valued random matrix, and let $H$ denote the $N$-dimensional hypercube. For numerous random matrix ensembles, the expected number of $k$-dimensional faces of the random $n$-dimensional zonotope $AH$ obeys the formula $E f_k(AH)/f_k(H) = 1 - P_{N-n,N-k}$, where $P_{N-n,N-k}$ is a fair-coin-tossing probability. The formula applies, for example, where the columns of $A$ are drawn i.i.d. from an absolutely continuous symmetric distribution. The formula exploits Wendel's Theorem \cite{We62}. Let $O$ denote the positive orthant; the expected number of $k$-faces of the random cone $AO$ obeys $E f_k(AO)/f_k(O) = 1 - P_{N-n,N-k}$. The formula applies to numerous matrix ensembles, including those with i.i.d. random columns from an absolutely continuous, centrally symmetric distribution. There is an asymptotically sharp threshold in the behavior of face counts of the projected hypercube; the thresholds known for projecting the simplex and the cross-polytope occur at very different locations. We briefly consider face counts of the projected orthant when $A$ does not have mean zero; these do behave similarly to those for the projected simplex. We consider non-random projectors of the orthant; the 'best possible' $A$ is the one associated with the first $n$ rows of the Fourier matrix. These geometric face-counting results have implications for signal processing, information theory, inverse problems, and optimization. Most of these flow in some way from the fact that face counting is related to conditions for uniqueness of solutions of underdetermined systems of linear equations.<|reference_end|>
arxiv
@article{donoho2008counting, title={Counting the Faces of Randomly-Projected Hypercubes and Orthants, with Applications}, author={David L. Donoho and Jared Tanner}, journal={arXiv preprint arXiv:0807.3590}, year={2008}, archivePrefix={arXiv}, eprint={0807.3590}, primaryClass={math.MG cs.IT math.IT math.OC math.PR} }
donoho2008counting
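The fair-coin-tossing probability in the formulas above stems from Wendel's Theorem. A minimal sketch of that probability (the exact indexing of $P_{N-n,N-k}$ follows the paper and is not reproduced here):

```python
from math import comb

def wendel(N, n):
    """Wendel's Theorem: probability that N i.i.d. points drawn from a
    centrally symmetric distribution in R^n all lie in a common
    half-space through the origin -- a fair-coin-tossing quantity
    of the kind appearing in the face-count formulas above."""
    return sum(comb(N - 1, k) for k in range(n)) / 2 ** (N - 1)

print(wendel(4, 2))  # 4 points in the plane: (C(3,0)+C(3,1))/8 = 0.5
```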
arxiv-4392
0807.3593
An outer bound for 2-receiver discrete memoryless broadcast channels
<|reference_start|>An outer bound for 2-receiver discrete memoryless broadcast channels: An outer bound to the two-receiver discrete memoryless broadcast channel is presented. We compare it to the known outer bounds and show that the outer bound presented is at least as tight as the existing bounds.<|reference_end|>
arxiv
@article{nair2008an, title={An outer bound for 2-receiver discrete memoryless broadcast channels}, author={Chandra Nair}, journal={arXiv preprint arXiv:0807.3593}, year={2008}, archivePrefix={arXiv}, eprint={0807.3593}, primaryClass={cs.IT math.IT} }
nair2008an
arxiv-4393
0807.3600
A new upper bound for 3-SAT
<|reference_start|>A new upper bound for 3-SAT: We show that a randomly chosen 3-CNF formula over n variables with clauses-to-variables ratio at least 4.4898 is, as n grows large, asymptotically almost surely unsatisfiable. The previous best such bound, due to Dubois in 1999, was 4.506. The first such bound, independently discovered by many groups of researchers since 1983, was 5.19. Several decreasing values between 5.19 and 4.506 were published in the years between. The probabilistic techniques we use for the proof are, we believe, of independent interest.<|reference_end|>
arxiv
@article{diaz2008a, title={A new upper bound for 3-SAT}, author={J. Diaz, L. Kirousis, D. Mitsche, X. Perez-Gimenez}, journal={arXiv preprint arXiv:0807.3600}, year={2008}, archivePrefix={arXiv}, eprint={0807.3600}, primaryClass={cs.DM} }
diaz2008a
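The threshold phenomenon behind the bound above is visible empirically even at small n: random 3-CNF formulas well below the threshold are almost always satisfiable and well above it almost never. A minimal brute-force sketch (parameters are illustrative):

```python
import random
from itertools import product

def random_3cnf(n, ratio, rng):
    """Random 3-CNF: each clause has 3 distinct variables, random signs."""
    return [[v * rng.choice((1, -1))
             for v in rng.sample(range(1, n + 1), 3)]
            for _ in range(int(ratio * n))]

def satisfiable(clauses, n):
    """Brute force over all 2^n assignments (viable for small n only)."""
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)
               for bits in product((False, True), repeat=n))

for ratio in (3.0, 4.5, 6.0):
    sat = sum(satisfiable(random_3cnf(12, ratio, random.Random(s)), 12)
              for s in range(20))
    print(f"ratio {ratio}: {sat}/20 satisfiable")
```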
arxiv-4394
0807.3622
TuLiPA: Towards a Multi-Formalism Parsing Environment for Grammar Engineering
<|reference_start|>TuLiPA: Towards a Multi-Formalism Parsing Environment for Grammar Engineering: In this paper, we present an open-source parsing environment (Tuebingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars, namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG), and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.<|reference_end|>
arxiv
@article{kallmeyer2008tulipa:, title={TuLiPA: Towards a Multi-Formalism Parsing Environment for Grammar Engineering}, author={Laura Kallmeyer (SFB 441), Timm Lichte (SFB 441), Wolfgang Maier (SFB 441), Yannick Parmentier (INRIA Lorraine - LORIA), Johannes Dellert (SFB 441), Kilian Evang (SFB 441)}, journal={arXiv preprint arXiv:0807.3622}, year={2008}, archivePrefix={arXiv}, eprint={0807.3622}, primaryClass={cs.CL} }
kallmeyer2008tulipa:
arxiv-4395
0807.3632
How to Compute Times of Random Walks based Distributed Algorithms
<|reference_start|>How to Compute Times of Random Walks based Distributed Algorithms: Random walk based distributed algorithms make use of a token that circulates in the system according to a random walk scheme to achieve their goal. To study their efficiency and compare it to that of deterministic solutions, one is led to compute certain quantities, namely the hitting times and the cover time. Until now, only bounds on these quantities were known. First, this paper presents two generalizations of the notions of hitting and cover times to weighted graphs. Indeed, the properties of random walks on symmetrically weighted graphs provide interesting results on random walk based distributed algorithms, such as local load balancing. Both of these generalizations are proposed to precisely represent the behaviour of these algorithms and to take into account what the weights represent. Then, we propose an algorithm to compute the n^2 hitting times on a weighted graph of n vertices, which we improve to obtain an O(n^3) complexity, the lowest known to date. This algorithm computes both of the generalizations that we propose for the hitting times on a weighted graph. Finally, we provide the first algorithm to compute the cover time (in both senses) of a graph. We improve it to achieve a complexity of O(n^3 2^n). The algorithms that we present are all robust to a topological change in a limited number of edges. This property allows us to use them on dynamic graphs.<|reference_end|>
arxiv
@article{bui2008how, title={How to Compute Times of Random Walks based Distributed Algorithms}, author={Alain Bui and Devan Sohier}, journal={arXiv preprint arXiv:0807.3632}, year={2008}, archivePrefix={arXiv}, eprint={0807.3632}, primaryClass={cs.DC cs.DM} }
bui2008how
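Hitting times on a weighted graph satisfy a simple linear system: h(v,v) = 0 and h(u,v) = 1 + sum_w P(u,w) h(w,v). A minimal sketch that solves it once per target — a naive O(n^4) baseline, not the paper's O(n^3) algorithm:

```python
import numpy as np

def hitting_times(W):
    """All pairwise hitting times H[u, v] of the random walk on a graph
    with symmetric weight matrix W, from the linear system
    h(v, v) = 0, h(u, v) = 1 + sum_w P(u, w) h(w, v).
    One solve per target vertex: a naive O(n^4) baseline."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)     # transition probabilities
    H = np.zeros((n, n))
    for v in range(n):
        idx = [u for u in range(n) if u != v]
        Q = P[np.ix_(idx, idx)]              # walk restricted to u != v
        H[idx, v] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return H

W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                 # unit-weight path 0-1-2
print(hitting_times(W)[0, 2])                # 4.0
```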
arxiv-4396
0807.3648
Proposition Algebra with Projective Limits
<|reference_start|>Proposition Algebra with Projective Limits: Sequential propositional logic deviates from ordinary propositional logic by taking into account that during the sequential evaluation of a propositional statement, atomic propositions may yield different Boolean values at repeated occurrences. We introduce `free valuations' to capture this dynamics of a propositional statement's environment. The resulting logic is phrased as an equationally specified algebra rather than in the form of proof rules, and is named `proposition algebra'. It is strictly more general than Boolean algebra to the extent that the classical connectives fail to be expressively complete in the sequential case. The four axioms for free valuation congruence are then combined with other axioms in order to define a few more valuation congruences that gradually identify more propositional statements, up to static valuation congruence (which is the setting of conventional propositional logic). Proposition algebra is developed in a fashion similar to the process algebra ACP and the program algebra PGA, via an algebraic specification which has a meaningful initial algebra for which a range of coarser congruences is considered important as well. In addition, infinite objects (that is, propositional statements, processes and programs respectively) are dealt with by means of an inverse limit construction, which allows the transfer of knowledge concerning finite objects to facts about infinite ones while reducing all facts about infinite objects to an infinity of facts about finite ones in return.<|reference_end|>
arxiv
@article{bergstra2008proposition, title={Proposition Algebra with Projective Limits}, author={J.A. Bergstra and A. Ponse}, journal={ACM Transactions on Computational Logic, 12 (3), Article 21, 2011}, year={2008}, doi={10.1145/1929954.1929958}, archivePrefix={arXiv}, eprint={0807.3648}, primaryClass={cs.LO} }
bergstra2008proposition
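The deviation described above — an atom may yield different values at repeated occurrences — can be made concrete with a stateful valuation. A loose illustrative sketch, not the paper's axiomatization:

```python
class FreeValuation:
    """Each query of an atom consumes the next reply, modelling an
    environment whose answers may change during sequential evaluation."""
    def __init__(self, replies):
        self._replies = iter(replies)

    def query(self, atom):
        return next(self._replies)

def left_seq_and(x, y, val):
    """Left-sequential conjunction: y is evaluated only if x holds."""
    return val.query(x) and val.query(y)

# The same atom queried twice need not agree: 'a and a' can be False
print(left_seq_and('a', 'a', FreeValuation([True, False])))  # False
```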
arxiv-4397
0807.3669
A new probabilistic transformation of belief mass assignment
<|reference_start|>A new probabilistic transformation of belief mass assignment: In this paper, we propose, in the Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, in order to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works, and we compare it to the main existing transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in terms of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented.<|reference_end|>
arxiv
@article{dezert2008a, title={A new probabilistic transformation of belief mass assignment}, author={Jean Dezert (ONERA), Florentin Smarandache}, journal={Fusion 2008 International Conference on Information Fusion, Cologne : Allemagne (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0807.3669}, primaryClass={cs.AI} }
dezert2008a
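The main classical baseline such transformations are compared against is the pignistic transformation BetP. A minimal sketch on $2^\Theta$ (the bitmask encoding is an illustrative choice, and $m(\emptyset) = 0$ is assumed):

```python
def bet_p(m, theta_size):
    """Classical pignistic transformation BetP: each focal element
    (a bitmask over the singletons of Theta) shares its mass equally
    among the singletons it contains."""
    betp = [0.0] * theta_size
    for focal, mass in m.items():
        members = [i for i in range(theta_size) if focal >> i & 1]
        for i in members:
            betp[i] += mass / len(members)
    return betp

# Theta = {A, B, C}; m(A)=0.4, m(A u B)=0.3, m(Theta)=0.3
m = {0b001: 0.4, 0b011: 0.3, 0b111: 0.3}
print(bet_p(m, 3))  # [0.65, 0.25, 0.1]
```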
arxiv-4398
0807.3699
Multiplication in Cyclotomic Rings and its Application to Finite Fields
<|reference_start|>Multiplication in Cyclotomic Rings and its Application to Finite Fields: A representation of finite fields that has proved useful when implementing finite field arithmetic in hardware is based on an isomorphism between subrings and fields. In this paper, we present a unified formulation for multiplication in cyclotomic rings and cyclotomic fields in which most arithmetic operations are done on vectors. From this formulation we can generate optimized algorithms for multiplication. For example, one of the proposed algorithms requires approximately half the number of coordinate-level multiplications at the expense of extra coordinate-level additions. Our method is then applied to the finite fields GF(q^m) to further reduce the number of operations. We then present optimized algorithms for multiplication in finite fields with type-I and type-II optimal normal bases.<|reference_end|>
arxiv
@article{arguello2008multiplication, title={Multiplication in Cyclotomic Rings and its Application to Finite Fields}, author={Francisco Arguello}, journal={arXiv preprint arXiv:0807.3699}, year={2008}, archivePrefix={arXiv}, eprint={0807.3699}, primaryClass={cs.DM} }
arguello2008multiplication
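The vector formulation above rests on the fact that multiplication in F_q[x]/(x^n - 1) is a circular convolution of coefficient vectors. A minimal schoolbook sketch — the O(n^2) baseline whose multiplication count the paper's optimized algorithms reduce:

```python
def cyclic_mul(a, b, q):
    """Multiplication in F_q[x]/(x^n - 1): circular convolution of the
    coefficient vectors a and b, reduced mod q."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

# Over F_2: (1+x)(1+x+x^2) = 1 + x^3 = 0 mod (x^3 - 1), a pair of
# zero divisors in the cyclotomic ring
print(cyclic_mul([1, 1, 0], [1, 1, 1], 2))  # [0, 0, 0]
```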
arxiv-4399
0807.3732
An adaptive embedded architecture for real-time Particle Image Velocimetry algorithms
<|reference_start|>An adaptive embedded architecture for real-time Particle Image Velocimetry algorithms: Particle Image Velocimetry (PIV) is a method of imaging and analysing fields of flows. PIV techniques compute and display all the motion vectors of the field in a resulting image. Speeds of more than a thousand vectors per second can be required, each speed being environment-dependent. The essence of this work is to propose an adaptive FPGA-based system for real-time PIV algorithms. The proposed structure is generic, so that this unique structure can be re-used for any PIV application that uses the cross-correlation technique. The major structure remains unchanged; adaptations only concern the number of processing operations. The required speed (corresponding to the number of vectors per second) is obtained thanks to a parallel processing strategy. The image processing designer duplicates the processing modules to distribute the operations. The result is an FPGA-based architecture which is easily adapted to algorithm specifications without any hardware requirement. The design flow is fast and reliable.<|reference_end|>
arxiv
@article{aubert2008an, title={An adaptive embedded architecture for real-time Particle Image Velocimetry algorithms}, author={Alain Aubert (LAHC), Nathalie Bochard (LAHC), Virginie Fresse (LAHC)}, journal={arXiv preprint arXiv:0807.3732}, year={2008}, archivePrefix={arXiv}, eprint={0807.3732}, primaryClass={cs.AR} }
aubert2008an
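The per-window operation that the architecture above duplicates for parallelism is a cross-correlation of two interrogation windows. A minimal FFT-based sketch (window size and synthetic data are illustrative, and the FPGA design itself is not modelled):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """One PIV interrogation: circular cross-correlation of two windows
    via FFT; the location of the correlation peak gives the
    displacement vector of the particle pattern."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    # map wrap-around indices to signed shifts
    return (dy if dy <= n // 2 else dy - n,
            dx if dx <= m // 2 else dx - m)

rng = np.random.default_rng(1)
frame = rng.random((32, 32))
shifted = np.roll(np.roll(frame, 3, axis=0), -2, axis=1)
print(piv_displacement(frame, shifted))   # (3, -2)
```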
arxiv-4400
0807.3755
Approximating Document Frequency with Term Count Values
<|reference_start|>Approximating Document Frequency with Term Count Values: For bounded datasets such as the TREC Web Track (WT10g) the computation of term frequency (TF) and inverse document frequency (IDF) is not difficult. However, when the corpus is the entire web, direct IDF calculation is impossible and values must instead be estimated. Most available datasets provide values for term count (TC) meaning the number of times a certain term occurs in the entire corpus. Intuitively this value is different from document frequency (DF), the number of documents (e.g., web pages) a certain term occurs in. We conduct a comparison study between TC and DF values within the Web as Corpus (WaC). We found a very strong correlation with Spearman's rho >0.8 (p<0.005) which makes us confident in claiming that for such recently created corpora the TC and DF values can be used interchangeably to compute IDF values. These results are useful for the generation of accurate lexical signatures based on the TF-IDF scheme.<|reference_end|>
arxiv
@article{klein2008approximating, title={Approximating Document Frequency with Term Count Values}, author={Martin Klein, Michael L. Nelson}, journal={arXiv preprint arXiv:0807.3755}, year={2008}, archivePrefix={arXiv}, eprint={0807.3755}, primaryClass={cs.IR cs.DL} }
klein2008approximating
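The TC/DF comparison above is straightforward to reproduce on any tokenized corpus. A minimal sketch with a toy corpus (illustrative only; the paper's rho > 0.8 is measured on the Web as Corpus):

```python
from collections import Counter
from scipy.stats import spearmanr

def tc_df(corpus):
    """Term count TC (total occurrences in the corpus) versus document
    frequency DF (number of documents containing the term)."""
    tc, df = Counter(), Counter()
    for doc in corpus:
        tokens = doc.lower().split()
        tc.update(tokens)
        df.update(set(tokens))
    terms = sorted(tc)
    return [tc[t] for t in terms], [df[t] for t in terms]

corpus = ["the cat sat on the mat",
          "the dog sat",
          "a cat and a dog"]
tc, df = tc_df(corpus)
rho, p = spearmanr(tc, df)
print(rho)
```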