Columns: corpus_id (string, 7-12 chars), paper_id (string, 9-16 chars), title (string, 1-261 chars), abstract (string, 70-4.02k chars), source (string, 1 class), bibtex (string, 208-20.9k chars), citation_key (string, 6-100 chars)
arxiv-5101
0810.1261
Soft Uncoupling of Markov Chains for Permeable Language Distinction: A New Algorithm
<|reference_start|>Soft Uncoupling of Markov Chains for Permeable Language Distinction: A New Algorithm: Without prior knowledge, distinguishing different languages may be a hard task, especially when their borders are permeable. We develop an extension of spectral clustering -- a powerful unsupervised classification toolbox -- that is shown to resolve accurately the task of soft language distinction. At the heart of our approach, we replace the usual hard membership assignment of spectral clustering by a soft, probabilistic assignment, which also has the advantage of bypassing a well-known complexity bottleneck of the method. Furthermore, our approach relies on a novel, convenient construction of a Markov chain out of a corpus. Extensive experiments with a readily available system clearly display the potential of the method, which brings a visually appealing soft distinction of languages that may altogether define a whole corpus.<|reference_end|>
arxiv
@article{nock2008soft, title={Soft Uncoupling of Markov Chains for Permeable Language Distinction: A New Algorithm}, author={Richard Nock and Pascal Vaillant and Frank Nielsen and Claudia Henry}, journal={ECAI 2006: 17th European Conference on Artificial Intelligence. Riva del Garda, Italy, 29 August - 1st September 2006}, year={2008}, archivePrefix={arXiv}, eprint={0810.1261}, primaryClass={cs.CL cs.IR} }
nock2008soft
arxiv-5102
0810.1267
Information Theory vs Queueing Theory for Resource Allocation in Multiple Access Channels
<|reference_start|>Information Theory vs Queueing Theory for Resource Allocation in Multiple Access Channels: We consider the problem of rate allocation in a fading Gaussian multiple-access channel with fixed transmission powers. The goal is to maximize a general concave utility function of the expected achieved rates of the users. There are different approaches to this problem in the literature. From an information theoretic point of view, rates are allocated only by using the channel state information. The queueing theory approach utilizes the global queue-length information for rate allocation to guarantee throughput optimality as well as maximizing a utility function of the rates. In this work, we make a connection between these two approaches by showing that the information theoretic capacity region of a multiple-access channel and its stability region are equivalent. Moreover, our numerical results show that a simple greedy policy which does not use the queue-length information can outperform queue-length based policies in terms of convergence rate and fairness.<|reference_end|>
arxiv
@article{parandehgheibi2008information, title={Information Theory vs. Queueing Theory for Resource Allocation in Multiple Access Channels}, author={Ali ParandehGheibi and Muriel Medard and Asuman Ozdaglar and Atilla Eryilmaz}, journal={arXiv preprint arXiv:0810.1267}, year={2008}, archivePrefix={arXiv}, eprint={0810.1267}, primaryClass={cs.IT cs.NI math.IT math.OC} }
parandehgheibi2008information
arxiv-5103
0810.1268
Bi-directional half-duplex protocols with multiple relays
<|reference_start|>Bi-directional half-duplex protocols with multiple relays: In a bi-directional relay channel, two nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has considered information theoretic limits of the bi-directional relay channel with a single relay. In this work we consider bi-directional relaying with multiple relays. We derive achievable rate regions and outer bounds for half-duplex protocols with multiple decode and forward relays and compare these to the same protocols with amplify and forward relays in an additive white Gaussian noise channel. We consider three novel classes of half-duplex protocols: the (m,2) 2 phase protocol with m relays, the (m,3) 3 phase protocol with m relays, and general (m, t) Multiple Hops and Multiple Relays (MHMR) protocols, where m is the total number of relays and 3<t< m+3 is the number of temporal phases in the protocol. The (m,2) and (m,3) protocols extend previous bi-directional relaying protocols for a single m=1 relay, while the new (m,t) protocol efficiently combines multi-hop routing with message-level network coding. Finally, we provide a comprehensive treatment of the MHMR protocols with decode and forward relaying and amplify and forward relaying in the Gaussian noise, obtaining their respective achievable rate regions, outer bounds and relative performance under different SNRs and relay geometries, including an analytical comparison on the protocols at low and high SNR.<|reference_end|>
arxiv
@article{kim2008bi-directional, title={Bi-directional half-duplex protocols with multiple relays}, author={Sang Joon Kim and Natasha Devroye and Vahid Tarokh}, journal={arXiv preprint arXiv:0810.1268}, year={2008}, archivePrefix={arXiv}, eprint={0810.1268}, primaryClass={cs.IT math.IT} }
kim2008bi-directional
arxiv-5104
0810.1316
The meaning of concurrent programs
<|reference_start|>The meaning of concurrent programs: The semantics of assignment and mutual exclusion in concurrent and multi-core/multi-processor systems is presented with attention to low level architectural features in an attempt to make the presentation realistic. Recursive functions on event sequences are used to define state dependent functions and variables in ordinary (non-formal-method) algebra.<|reference_end|>
arxiv
@article{yodaiken2008the, title={The meaning of concurrent programs}, author={Victor Yodaiken}, journal={arXiv preprint arXiv:0810.1316}, year={2008}, archivePrefix={arXiv}, eprint={0810.1316}, primaryClass={cs.DM cs.OS} }
yodaiken2008the
arxiv-5105
0810.1319
ARQ-Based Secret Key Sharing
<|reference_start|>ARQ-Based Secret Key Sharing: This paper develops a novel framework for sharing secret keys using existing Automatic Repeat reQuest (ARQ) protocols. Our approach exploits the multi-path nature of the wireless environment to hide the key from passive eavesdroppers. The proposed framework does not assume the availability of any prior channel state information (CSI) and exploits only the one bit ACK/NACK feedback from the legitimate receiver. Compared with earlier approaches, the main innovation lies in the distribution of key bits among multiple ARQ frames. Interestingly, this idea allows for achieving a positive secrecy rate even when the eavesdropper experiences more favorable channel conditions, on average, than the legitimate receiver. In the sequel, we characterize the information theoretic limits of the proposed schemes, develop low complexity explicit implementations, and conclude with numerical results that validate our theoretical claims.<|reference_end|>
arxiv
@article{ghany2008arq-based, title={ARQ-Based Secret Key Sharing}, author={Mohamed Abdel Ghany and Ahmed Sultan and Hesham El Gamal}, journal={arXiv preprint arXiv:0810.1319}, year={2008}, number={WINC-TR-1002}, archivePrefix={arXiv}, eprint={0810.1319}, primaryClass={cs.IT cs.CR math.IT} }
ghany2008arq-based
arxiv-5106
0810.1355
Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters
<|reference_start|>Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters: A large body of work has been devoted to defining and identifying clusters or communities in social and information networks. We explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks, and we come to several striking conclusions. We employ approximation algorithms for the graph partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be interpreted as communities. In particular, we define the network community profile plot, which characterizes the "best" possible community--according to the conductance measure--over a wide range of size scales. We study over 100 large real-world social and information networks. Our results suggest a significantly more refined picture of community structure in large networks than has been appreciated previously. In particular, we observe tight communities that are barely connected to the rest of the network at very small size scales; and communities of larger size scales gradually "blend into" the expander-like core of the network and thus become less "community-like." This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, it is exactly the opposite of what one would expect based on intuition from expander graphs, low-dimensional or manifold-like graphs, and from small social networks that have served as testbeds of community detection algorithms. We have found that a generative graph model, in which new edges are added via an iterative "forest fire" burning process, is able to produce graphs exhibiting a network community profile plot similar to what we observe in our network datasets.<|reference_end|>
arxiv
@article{leskovec2008community, title={Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters}, author={Jure Leskovec and Kevin J. Lang and Anirban Dasgupta and Michael W. Mahoney}, journal={arXiv preprint arXiv:0810.1355}, year={2008}, archivePrefix={arXiv}, eprint={0810.1355}, primaryClass={cs.DS physics.data-an physics.soc-ph} }
leskovec2008community
arxiv-5107
0810.1368
Demonstration of Time-Reversal in Indoor Ultra-Wideband Communication: Time Domain Measurement
<|reference_start|>Demonstration of Time-Reversal in Indoor Ultra-Wideband Communication: Time Domain Measurement: Using time domain measurements, we assess the feasibility of time-reversal technique in ultra-wideband (UWB) communication. A typical indoor propagation channel is selected for the exploration. The channel response between receive and transmit antenna pairs is measured using time domain equipments which include an arbitrary wave generator (AWG) and a digital storage oscilloscope (DSO). The time-reversed version of the channel response is constructed with AWG and re-transmitted in the channel. The equivalent time reversed channel response is recorded. The properties of the time reversal technique in the line of sight (LOS) co-polar and cross-polar scenarios are measured.<|reference_end|>
arxiv
@article{khaleghi2008demonstration, title={Demonstration of Time-Reversal in Indoor Ultra-Wideband Communication: Time Domain Measurement}, author={Ali Khaleghi (IETR) and Gha\"is El Zein (IETR) and Ijaz Haider Naqvi (IETR)}, journal={IEEE International Symposium on Wireless Communication Systems 2007, Trondheim : Norv\`ege (2007)}, year={2008}, doi={10.1109/ISWCS.2007.4392383}, archivePrefix={arXiv}, eprint={0810.1368}, primaryClass={cs.NI} }
khaleghi2008demonstration
arxiv-5108
0810.1383
Sequential pivotal mechanisms for public project problems
<|reference_start|>Sequential pivotal mechanisms for public project problems: It is well-known that for several natural decision problems no budget balanced Groves mechanisms exist. This has motivated recent research on designing variants of feasible Groves mechanisms (termed as `redistribution of VCG (Vickrey-Clarke-Groves) payments') that generate reduced deficit. With this in mind, we study sequential mechanisms and consider optimal strategies that could reduce the deficit resulting under the simultaneous mechanism. We show that such strategies exist for the sequential pivotal mechanism of the well-known public project problem. We also exhibit an optimal strategy with the property that a maximal social welfare is generated when each player follows it. Finally, we show that these strategies can be achieved by an implementation in Nash equilibrium.<|reference_end|>
arxiv
@article{apt2008sequential, title={Sequential pivotal mechanisms for public project problems}, author={Krzysztof R. Apt and Arantza Est\'evez-Fern\'andez}, journal={arXiv preprint arXiv:0810.1383}, year={2008}, doi={10.1007/978-3-642-04645-2_9}, archivePrefix={arXiv}, eprint={0810.1383}, primaryClass={cs.GT} }
apt2008sequential
arxiv-5109
0810.1424
"Real" Slepian-Wolf Codes
<|reference_start|>"Real" Slepian-Wolf Codes: We provide a novel achievability proof of the Slepian-Wolf theorem for i.i.d. sources over finite alphabets. We demonstrate that random codes that are linear over the real field achieve the classical Slepian-Wolf rate-region. For finite alphabets we show that typicality decoding is equivalent to solving an integer program. Minimum entropy decoding is also shown to achieve exponentially small probability of error. The techniques used may be of independent interest for code design for a wide class of information theory problems, and for the field of compressed sensing.<|reference_end|>
arxiv
@article{dey2008real, title={"Real" Slepian-Wolf Codes}, author={Bikash Kumar Dey and Sidharth Jaggi and Michael Langberg}, journal={arXiv preprint arXiv:0810.1424}, year={2008}, archivePrefix={arXiv}, eprint={0810.1424}, primaryClass={cs.IT math.IT} }
dey2008real
arxiv-5110
0810.1430
Blind Cognitive MAC Protocols
<|reference_start|>Blind Cognitive MAC Protocols: We consider the design of cognitive Medium Access Control (MAC) protocols enabling an unlicensed (secondary) transmitter-receiver pair to communicate over the idle periods of a set of licensed channels, i.e., the primary network. The objective is to maximize data throughput while maintaining the synchronization between secondary users and avoiding interference with licensed (primary) users. No statistical information about the primary traffic is assumed to be available a-priori to the secondary user. We investigate two distinct sensing scenarios. In the first, the secondary transmitter is capable of sensing all the primary channels, whereas it senses one channel only in the second scenario. In both cases, we propose MAC protocols that efficiently learn the statistics of the primary traffic online. Our simulation results demonstrate that the proposed blind protocols asymptotically achieve the throughput obtained when prior knowledge of primary traffic statistics is available.<|reference_end|>
arxiv
@article{mehanna2008blind, title={Blind Cognitive MAC Protocols}, author={Omar Mehanna and Ahmed Sultan and Hesham El Gamal}, journal={arXiv preprint arXiv:0810.1430}, year={2008}, archivePrefix={arXiv}, eprint={0810.1430}, primaryClass={cs.NI cs.LG} }
mehanna2008blind
arxiv-5111
0810.1481
An Evidential Path Logic for Multi-Relational Networks
<|reference_start|>An Evidential Path Logic for Multi-Relational Networks: Multi-relational networks are used extensively to structure knowledge. Perhaps the most popular instance, due to the widespread adoption of the Semantic Web, is the Resource Description Framework (RDF). One of the primary purposes of a knowledge network is to reason; that is, to alter the topology of the network according to an algorithm that uses the existing topological structure as its input. There exist many such reasoning algorithms. With respect to the Semantic Web, the bivalent, monotonic reasoners of the RDF Schema (RDFS) and the Web Ontology Language (OWL) are the most prevalent. However, nothing prevents other forms of reasoning from existing in the Semantic Web. This article presents a non-bivalent, non-monotonic, evidential logic and reasoner that is an algebraic ring over a multi-relational network equipped with two binary operations that can be composed to execute various forms of inference. Given its multi-relational grounding, it is possible to use the presented evidential framework as another method for structuring knowledge and reasoning in the Semantic Web. The benefits of this framework are that it works with arbitrary, partial, and contradictory knowledge while, at the same time, it supports a tractable approximate reasoning process.<|reference_end|>
arxiv
@article{rodriguez2008an, title={An Evidential Path Logic for Multi-Relational Networks}, author={Marko A. Rodriguez and Joe Geldart}, journal={Proceedings of the Association for the Advancement of Artificial Intelligence Spring Symposium: Technosocial Predictive Analytics, volume SS-09-09, pages 114-119, ISBN:978-1-57735-416-1, AAAI Press, Stanford University, March 2009.}, year={2008}, number={LA-UR-08-06397}, archivePrefix={arXiv}, eprint={0810.1481}, primaryClass={cs.LO cs.SC} }
rodriguez2008an
arxiv-5112
0810.1499
Constraint satisfaction problems with isolated solutions are hard
<|reference_start|>Constraint satisfaction problems with isolated solutions are hard: We study the phase diagram and the algorithmic hardness of the random `locked' constraint satisfaction problems, and compare them to the commonly studied 'non-locked' problems like satisfiability of boolean formulas or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This simplifies significantly the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.<|reference_end|>
arxiv
@article{zdeborova2008constraint, title={Constraint satisfaction problems with isolated solutions are hard}, author={Lenka Zdeborov\'a and Marc M\'ezard}, journal={J. Stat. Mech. (2008) P12004}, year={2008}, doi={10.1088/1742-5468/2008/12/P12004}, archivePrefix={arXiv}, eprint={0810.1499}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC cs.DS} }
zdeborova2008constraint
arxiv-5113
0810.1506
Performance Enhancement of Multiuser Time Reversal UWB Communication System
<|reference_start|>Performance Enhancement of Multiuser Time Reversal UWB Communication System: UWB communication is a recent research area for indoor propagation channels. Time Reversal (TR) communication in UWB has shown promising results for improving the system performance. In multiuser environment, the system performance is significantly degraded due to the interference among different users. TR reduces the interference caused by multiusers due to its spatial focusing property. The performance of a multiuser TR communication system is further improved if the TR filter is modified. In this paper, multiuser TR in UWB communication is investigated using simple TR filter and a modified TR filter with circular shift operation. The concept of circular shift in TR is analytically studied. Thereafter, the channel impulse responses (CIR) of a typical indoor laboratory environment are measured. The measured CIRs are used to analyze the received signal peak power and signal to interference ratio (SIR) with and without performing the circular shift operation in a multiuser environment.<|reference_end|>
arxiv
@article{naqvi2008performance, title={Performance Enhancement of Multiuser Time Reversal UWB Communication System}, author={Ijaz Haider Naqvi (IETR) and Ali Khaleghi (IETR) and Gha\"is El Zein (IETR)}, journal={IEEE International Symposium on Wireless Communication Systems 2007, Trondheim : Norv\`ege (2007)}, year={2008}, doi={10.1109/ISWCS.2007.4392404}, archivePrefix={arXiv}, eprint={0810.1506}, primaryClass={cs.NI} }
naqvi2008performance
arxiv-5114
0810.1513
A DCCP Congestion Control Mechanism for Wired-cum-Wireless Environments
<|reference_start|>A DCCP Congestion Control Mechanism for Wired- cum-Wireless Environments: Existing transport protocols, be it TCP, SCTP or DCCP, do not provide an efficient congestion control mechanism for heterogeneous wired-cum-wireless networks. Solutions involving implicit loss discrimination schemes have been proposed but were never implemented. Appropriate mechanisms can dramatically improve bandwidth usage over the Internet, especially for multimedia transport based on partial reliability. In this paper we have implemented and evaluated a congestion control mechanism that implicitly discriminates congestion and wireless losses in the datagram congestion control protocol (DCCP) congestion control identification (CCID) framework. The new CCID was implemented as a NS-2 module. Comparisons were made with the TCP-like CCID and showed that the bandwidth utilization was improved by more than 30% and up to 50% in significant setups.<|reference_end|>
arxiv
@article{naqvi2008a, title={A DCCP Congestion Control Mechanism for Wired-cum-Wireless Environments}, author={Ijaz Haider Naqvi (IETR) and Tanguy P\'erennou (LAAS)}, journal={IEEE Wireless Communications and Networking Conference, Hong-Kong (2007)}, year={2008}, doi={10.1109/WCNC.2007.715}, archivePrefix={arXiv}, eprint={0810.1513}, primaryClass={cs.NI} }
naqvi2008a
arxiv-5115
0810.1571
An Analytical Model of Information Dissemination for a Gossip-based Protocol
<|reference_start|>An Analytical Model of Information Dissemination for a Gossip-based Protocol: We develop an analytical model of information dissemination for a gossiping protocol that combines both pull and push approaches. With this model we analyse how fast an item is replicated through a network, and how fast the item spreads in the network, and how fast the item covers the network. We also determine the optimal size of the exchange buffer, to obtain fast replication. Our results are confirmed by large-scale simulation experiments.<|reference_end|>
arxiv
@article{bakhshi2008an, title={An Analytical Model of Information Dissemination for a Gossip-based Protocol}, author={Rena Bakhshi and Daniela Gavidia and Wan Fokkink and Maarten van Steen}, journal={arXiv preprint arXiv:0810.1571}, year={2008}, doi={10.1016/j.comnet.2009.03.017}, archivePrefix={arXiv}, eprint={0810.1571}, primaryClass={cs.DC cs.DM cs.IT cs.PF math.IT} }
bakhshi2008an
arxiv-5116
0810.1574
Liouvillian Solutions of Difference-Differential Equations
<|reference_start|>Liouvillian Solutions of Difference-Differential Equations: For a field $k$ with an automorphism $\sigma$ and a derivation $\delta$, we introduce the notion of Liouvillian solutions of linear difference-differential systems $\{\sigma(Y) = AY, \delta(Y) = BY\}$ over $k$ and characterize the existence of Liouvillian solutions in terms of the Galois group of the systems. We give an algorithm to decide whether such a system has Liouvillian solutions when $k = C(x,t)$, $\sigma(x) = x+1$, $\delta = d/dt$, and the size of the system is a prime.<|reference_end|>
arxiv
@article{feng2008liouvillian, title={Liouvillian Solutions of Difference-Differential Equations}, author={Ruyong Feng and Michael F. Singer and Min Wu}, journal={arXiv preprint arXiv:0810.1574}, year={2008}, archivePrefix={arXiv}, eprint={0810.1574}, primaryClass={cs.SC math.CA} }
feng2008liouvillian
arxiv-5117
0810.1624
Peer-to-Peer Secure Multi-Party Numerical Computation
<|reference_start|>Peer-to-Peer Secure Multi-Party Numerical Computation: We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring and numerous other tasks, where the computing nodes would like to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we examine several possible approaches and discuss their feasibility. Among the possible approaches, we identify a single approach which is both scalable and theoretically secure. An additional novel contribution is that we show how to compute the neighborhood based collaborative filtering, a state-of-the-art collaborative filtering algorithm, winner of the Netflix progress prize of the year 2007. Our solution computes this algorithm in a Peer-to-Peer network, using a privacy preserving computation, without loss of accuracy. Using extensive large scale simulations on top of real Internet topologies, we demonstrate the applicability of our approach. As far as we know, we are the first to implement such a large scale secure multi-party simulation of networks of millions of nodes and hundreds of millions of edges.<|reference_end|>
arxiv
@article{bickson2008peer-to-peer, title={Peer-to-Peer Secure Multi-Party Numerical Computation}, author={Danny Bickson and Genia Bezman and Danny Dolev and Benny Pinkas}, journal={The 8th IEEE Peer-to-Peer Computing (P2P 2008), Aachen, Germany, Sept. 2008}, year={2008}, doi={10.1109/P2P.2008.22}, archivePrefix={arXiv}, eprint={0810.1624}, primaryClass={cs.CR cs.DC} }
bickson2008peer-to-peer
arxiv-5118
0810.1628
Distributed Kalman Filter via Gaussian Belief Propagation
<|reference_start|>Distributed Kalman Filter via Gaussian Belief Propagation: Recent result shows how to compute distributively and efficiently the linear MMSE for the multiuser detection problem, using the Gaussian BP algorithm. In the current work, we extend this construction, and show that operating this algorithm twice on the matching inputs, has several interesting interpretations. First, we show equivalence to computing one iteration of the Kalman filter. Second, we show that the Kalman filter is a special case of the Gaussian information bottleneck algorithm, when the weight parameter $\beta = 1$. Third, we discuss the relation to the Affine-scaling interior-point method and show it is a special case of Kalman filter. Besides of the theoretical interest of this linking estimation, compression/clustering and optimization, we allow a single distributed implementation of those algorithms, which is a highly practical and important task in sensor and mobile ad-hoc networks. Application to numerous problem domains includes collaborative signal processing and distributed allocation of resources in a communication network.<|reference_end|>
arxiv
@article{bickson2008distributed, title={Distributed Kalman Filter via Gaussian Belief Propagation}, author={Danny Bickson and Ori Shental and Danny Dolev}, journal={The 46th Annual Allerton Conference on Communication, Control and Computing, Allerton House, Illinois, Sept. 2008}, year={2008}, doi={10.1109/ALLERTON.2008.4797617}, archivePrefix={arXiv}, eprint={0810.1628}, primaryClass={cs.IT math.IT} }
bickson2008distributed
arxiv-5119
0810.1631
Polynomial Linear Programming with Gaussian Belief Propagation
<|reference_start|>Polynomial Linear Programming with Gaussian Belief Propagation: Interior-point methods are state-of-the-art algorithms for solving linear programming (LP) problems with polynomial complexity. Specifically, the Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where $n$ is the number of unknown variables. Karmarkar's celebrated algorithm is known to be an instance of the log-barrier method using the Newton iteration. The main computational overhead of this method is in inverting the Hessian matrix of the Newton iteration. In this contribution, we propose the application of the Gaussian belief propagation (GaBP) algorithm as part of an efficient and distributed LP solver that exploits the sparse and symmetric structure of the Hessian matrix and avoids the need for direct matrix inversion. This approach shifts the computation from realm of linear algebra to that of probabilistic inference on graphical models, thus applying GaBP as an efficient inference engine. Our construction is general and can be used for any interior-point algorithm which uses the Newton method, including non-linear program solvers.<|reference_end|>
arxiv
@article{bickson2008polynomial, title={Polynomial Linear Programming with Gaussian Belief Propagation}, author={Danny Bickson and Yoav Tock and Ori Shental and Danny Dolev}, journal={The 46th Annual Allerton Conference on Communication, Control and Computing, Allerton House, Illinois, Sept. 2008}, year={2008}, doi={10.1109/ALLERTON.2008.4797652}, archivePrefix={arXiv}, eprint={0810.1631}, primaryClass={cs.IT math.IT} }
bickson2008polynomial
arxiv-5120
0810.1639
Identifying almost sorted permutations from TCP buffer dynamics
<|reference_start|>Identifying almost sorted permutations from TCP buffer dynamics: Associate to each sequence $A$ of integers (intending to represent packet IDs) a sequence of positive integers of the same length ${\mathcal M}(A)$. The $i$'th entry of ${\mathcal M}(A)$ is the size (at time $i$) of the smallest buffer needed to hold out-of-order packets, where space is accounted for unreceived packets as well. Call two sequences $A$, $B$ {\em equivalent} (written $A\equiv_{FB} B$) if ${\mathcal M}(A)={\mathcal M}(B)$. We prove the following result: any two permutations $A,B$ of the same length with $SUS(A)$, $SUS(B)\leq 3$ (where SUS is the {\em shuffled-up-sequences} reordering measure), and such that $A\equiv_{FB} B$ are identical. The result (which is no longer valid if we replace the upper bound 3 by 4) was motivated by RESTORED, a receiver-oriented model of network traffic we have previously introduced.<|reference_end|>
arxiv
@article{istrate2008identifying, title={Identifying almost sorted permutations from TCP buffer dynamics}, author={Gabriel Istrate}, journal={arXiv preprint arXiv:0810.1639}, year={2008}, archivePrefix={arXiv}, eprint={0810.1639}, primaryClass={cs.DS cs.DM math.CO} }
istrate2008identifying
arxiv-5121
0810.1648
A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines
<|reference_start|>A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines: Support vector machines (SVMs) are an extremely successful type of classification and regression algorithms. Building an SVM entails solving a constrained convex quadratic programming problem, which is quadratic in the number of training samples. We introduce an efficient parallel implementation of an support vector regression solver, based on the Gaussian Belief Propagation algorithm (GaBP). In this paper, we demonstrate that methods from the complex system domain could be utilized for performing efficient distributed computation. We compare the proposed algorithm to previously proposed distributed and single-node SVM solvers. Our comparison shows that the proposed algorithm is just as accurate as these solvers, while being significantly faster, especially for large datasets. We demonstrate scalability of the proposed algorithm to up to 1,024 computing nodes and hundreds of thousands of data points using an IBM Blue Gene supercomputer. As far as we know, our work is the largest parallel implementation of belief propagation ever done, demonstrating the applicability of this algorithm for large scale distributed computing systems.<|reference_end|>
arxiv
@article{bickson2008a, title={A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines}, author={Danny Bickson and Elad Yom-Tov and Danny Dolev}, journal={The 5th European Complex Systems Conference (ECCS 2008), Jerusalem, Sept. 2008}, year={2008}, archivePrefix={arXiv}, eprint={0810.1648}, primaryClass={cs.LG cs.IT math.IT} }
bickson2008a
arxiv-5122
0810.1650
Demand allocation with latency cost functions
<|reference_start|>Demand allocation with latency cost functions: We address the exact resolution of a MINLP model where resources can be activated in order to satisfy a demand (a partitioning constraint) while minimizing total cost. Cost functions are convex latency functions plus a fixed activation cost. A branch and bound algorithm is devised, featuring three important characteristics. First, the lower bound (therefore each subproblem) can be computed in O(n log n). Second, to break symmetries resulting in improved efficiency, the branching scheme is n-ary (instead of the "classical" binary). Third, a very effective heuristic is used to compute a good upper bound at the root node of the enumeration tree. All three features lead to a successful comparison against CPLEX MIPQ, which is the fastest among several commercial and open-source solvers: computational results showing this fact are provided.<|reference_end|>
arxiv
@article{agnetis2008demand, title={Demand allocation with latency cost functions}, author={Alessandro Agnetis, Enrico Grande, Andrea Pacifici}, journal={arXiv preprint arXiv:0810.1650}, year={2008}, archivePrefix={arXiv}, eprint={0810.1650}, primaryClass={cs.DM cs.GT} }
agnetis2008demand
arxiv-5123
0810.1655
On the Utility of Anonymized Flow Traces for Anomaly Detection
<|reference_start|>On the Utility of Anonymized Flow Traces for Anomaly Detection: The sharing of network traces is an important prerequisite for the development and evaluation of efficient anomaly detection mechanisms. Unfortunately, privacy concerns and data protection laws prevent network operators from sharing these data. Anonymization is a promising solution in this context; however, it is unclear if the sanitization of data preserves the traffic characteristics or introduces artifacts that may falsify traffic analysis results. In this paper, we examine the utility of anonymized flow traces for anomaly detection. We quantitatively evaluate the impact of IP address anonymization, namely variations of permutation and truncation, on the detectability of large-scale anomalies. Specifically, we analyze three weeks of un-sampled and non-anonymized network traces from a medium-sized backbone network. We find that all anonymization techniques, except prefix-preserving permutation, degrade the utility of data for anomaly detection. We show that the degree of degradation depends to a large extent on the nature and mix of anomalies present in a trace. Moreover, we present a case study that illustrates how traffic characteristics of individual hosts are distorted by anonymization.<|reference_end|>
arxiv
@article{burkhart2008on, title={On the Utility of Anonymized Flow Traces for Anomaly Detection}, author={Martin Burkhart, Daniela Brauckhoff, Martin May}, journal={Proceedings of the 19th ITC Specialist Seminar on Network Usage and Traffic (ITC SS 19), October 2008, Berlin, Germany}, year={2008}, archivePrefix={arXiv}, eprint={0810.1655}, primaryClass={cs.NI} }
burkhart2008on
arxiv-5124
0810.1729
Gaussian Belief Propagation Based Multiuser Detection
<|reference_start|>Gaussian Belief Propagation Based Multiuser Detection: In this work, we present a novel construction for solving the linear multiuser detection problem using the Gaussian Belief Propagation algorithm. Our algorithm yields an efficient, iterative and distributed implementation of the MMSE detector. We compare our algorithm's performance to a recent result and show an improved memory consumption, reduced computation steps and a reduction in the number of sent messages. We prove that recent work by Montanari et al. is an instance of our general algorithm, providing new convergence results for both algorithms.<|reference_end|>
arxiv
@article{bickson2008gaussian, title={Gaussian Belief Propagation Based Multiuser Detection}, author={Danny Bickson, Danny Dolev, Ori Shental, Paul H. Siegel and Jack K. Wolf}, journal={The 2008 IEEE International Symposium on Information Theory (ISIT 2008), Toronto, July 2008}, year={2008}, doi={10.1109/ISIT.2008.4595314}, archivePrefix={arXiv}, eprint={0810.1729}, primaryClass={cs.IT math.IT} }
bickson2008gaussian
arxiv-5125
0810.1732
Introduction to Searching with Regular Expressions
<|reference_start|>Introduction to Searching with Regular Expressions: The explosive rate of information growth and availability often makes it increasingly difficult to locate information pertinent to your needs. These problems are often compounded when keyword based search methodologies are not adequate for describing the information you seek. In many instances, information such as Web site URLs, phone numbers, etc. can often be better identified through the use of a textual pattern than by keyword. For example, many more phone numbers could be picked up by a search for the pattern (XXX) XXX-XXXX, where X could be any digit, than would be by a search for any specific phone number (i.e. the keyword approach). Programming languages typically allow for the matching of textual patterns via the usage of regular expressions. This tutorial will provide an introduction to the basics of programming regular expressions as well as provide an introduction to how regular expressions can be applied to data processing tasks such as information extraction and search refinement.<|reference_end|>
arxiv
@article{frenz2008introduction, title={Introduction to Searching with Regular Expressions}, author={Christopher M. Frenz}, journal={arXiv preprint arXiv:0810.1732}, year={2008}, archivePrefix={arXiv}, eprint={0810.1732}, primaryClass={cs.IR} }
frenz2008introduction
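The (XXX) XXX-XXXX phone-number pattern described in the abstract above maps directly onto regular-expression syntax; a minimal Python sketch (the sample text and variable names are illustrative, not taken from the paper):

```python
import re

# Sample text (illustrative): the pattern matches the (XXX) XXX-XXXX shape,
# where X is any digit, so it finds numbers a keyword search would miss.
text = "Call (212) 555-0148 or (914) 555-0199; office ext. 42."
phone_pattern = re.compile(r"\(\d{3}\) \d{3}-\d{4}")

matches = phone_pattern.findall(text)
print(matches)
```

`findall` returns every non-overlapping match, here both phone numbers but not the bare extension.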
arxiv-5126
0810.1735
Network Coding in a Multicast Switch
<|reference_start|>Network Coding in a Multicast Switch: The problem of serving multicast flows in a crossbar switch is considered. Intra-flow linear network coding is shown to achieve a larger rate region than the case without coding. A traffic pattern is presented which is achievable with coding but requires a switch speedup when coding is not allowed. The rate region with coding can be characterized in a simple graph-theoretic manner, in terms of the stable set polytope of the "enhanced conflict graph". No such graph-theoretic characterization is known for the case of fanout splitting without coding. The minimum speedup needed to achieve 100% throughput with coding is shown to be upper bounded by the imperfection ratio of the enhanced conflict graph. When applied to KxN switches with unicasts and broadcasts only, this gives a bound of min{(2K-1)/K,2N/(N+1)} on the speedup. This shows that speedup, which is usually implemented in hardware, can often be substituted by network coding, which can be done in software. Computing an offline schedule (using prior knowledge of the flow rates) is reduced to fractional weighted graph coloring. A graph-theoretic online scheduling algorithm (using only queue occupancy information) is also proposed, that stabilizes the queues for all rates within the rate region.<|reference_end|>
arxiv
@article{kim2008network, title={Network Coding in a Multicast Switch}, author={MinJi Kim, Jay Kumar Sundararajan, Muriel Medard, Atilla Eryilmaz, Ralf Koetter}, journal={arXiv preprint arXiv:0810.1735}, year={2008}, doi={10.1109/TIT.2010.2090213}, archivePrefix={arXiv}, eprint={0810.1735}, primaryClass={cs.NI cs.IT math.IT} }
kim2008network
arxiv-5127
0810.1736
Gaussian Belief Propagation Solver for Systems of Linear Equations
<|reference_start|>Gaussian Belief Propagation Solver for Systems of Linear Equations: The canonical problem of solving a system of linear equations arises in numerous contexts in information theory, communication theory, and related fields. In this contribution, we develop a solution based upon Gaussian belief propagation (GaBP) that does not involve direct matrix inversion. The iterative nature of our approach allows for a distributed message-passing implementation of the solution algorithm. We also address some properties of the GaBP solver, including convergence, exactness, its max-product version and relation to classical solution methods. The application example of decorrelation in CDMA is used to demonstrate the faster convergence rate of the proposed solver in comparison to conventional linear-algebraic iterative solution methods.<|reference_end|>
arxiv
@article{shental2008gaussian, title={Gaussian Belief Propagation Solver for Systems of Linear Equations}, author={Ori Shental, Paul H. Siegel, Jack K. Wolf, Danny Bickson and Danny Dolev}, journal={The 2008 IEEE International Symposium on Information Theory (ISIT 2008), Toronto, July 2008}, year={2008}, doi={10.1109/ISIT.2008.4595311}, archivePrefix={arXiv}, eprint={0810.1736}, primaryClass={cs.IT math.IT} }
shental2008gaussian
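The solver described in the abstract above rests on the GaBP message-passing equations for A x = b. A minimal dense-matrix sketch follows; the sequential message schedule, the test matrix and the function name are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def gabp_solve(A, b, iters=50):
    """Sketch of Gaussian belief propagation for A x = b, with A symmetric
    and diagonally dominant. Node i has prior precision A[i, i] and prior
    mean b[i] / A[i, i]; an edge (i, j) exists wherever A[i, j] != 0."""
    n = len(b)
    P = np.zeros((n, n))  # P[i, j]: precision of the message i -> j
    m = np.zeros((n, n))  # m[i, j]: mean of the message i -> j
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0:
                    continue
                # combine the prior at i with all incoming messages except j's
                P_excl = A[i, i] + P[:, i].sum() - P[j, i]
                mu_excl = (b[i] + (P[:, i] * m[:, i]).sum()
                           - P[j, i] * m[j, i]) / P_excl
                P[i, j] = -A[i, j] ** 2 / P_excl
                m[i, j] = P_excl * mu_excl / A[i, j]
    # marginal means are the solution estimate
    x = np.empty(n)
    for i in range(n):
        P_i = A[i, i] + P[:, i].sum()
        x[i] = (b[i] + (P[:, i] * m[:, i]).sum()) / P_i
    return x
```

On a symmetric diagonally dominant matrix the iteration converges, and the converged means coincide with the exact solution of A x = b (only the inferred variances are approximate on loopy graphs).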
arxiv-5128
0810.1756
Near-Optimal Radio Use For Wireless Network Synchronization
<|reference_start|>Near-Optimal Radio Use For Wireless Network Synchronization: We consider the model of communication where wireless devices can either switch their radios off to save energy, or switch their radios on and engage in communication. We distill a clean theoretical formulation of this problem of minimizing radio use and present near-optimal solutions. Our base model ignores issues of communication interference, although we also extend the model to handle this requirement. We assume that nodes intend to communicate periodically, or according to some time-based schedule. Clearly, perfectly synchronized devices could switch their radios on for exactly the minimum periods required by their joint schedules. The main challenge in the deployment of wireless networks is to synchronize the devices' schedules, given that their initial schedules may be offset relative to one another (even if their clocks run at the same speed). We significantly improve previous results, and show optimal use of the radio for two processors and near-optimal use of the radio for synchronization of an arbitrary number of processors. In particular, for two processors we prove deterministically matching $\Theta(\sqrt{n})$ upper and lower bounds on the number of times the radio has to be on, where $n$ is the discretized uncertainty period of the clock shift between the two processors. (In contrast, all previous results for two processors are randomized.) For $m=n^\beta$ processors (for any $\beta < 1$) we prove $\Omega(n^{(1-\beta)/2})$ is the lower bound on the number of times the radio has to be switched on (per processor), and show a nearly matching (in terms of the radio use) $\tilde{O}(n^{(1-\beta)/2})$ randomized upper bound per processor, with failure probability exponentially close to 0. For $\beta \geq 1$ our algorithm runs with at most $\mathrm{polylog}(n)$ radio invocations per processor.
Our bounds also hold in a radio-broadcast model where interference must be taken into account.<|reference_end|>
arxiv
@article{bradonjic2008near-optimal, title={Near-Optimal Radio Use For Wireless Network Synchronization}, author={Milan Bradonjic, Eddie Kohler, Rafail Ostrovsky}, journal={arXiv preprint arXiv:0810.1756}, year={2008}, doi={10.1016/j.tcs.2011.09.026}, archivePrefix={arXiv}, eprint={0810.1756}, primaryClass={cs.DS cs.DM} }
bradonjic2008near-optimal
arxiv-5129
0810.1773
Finite Word Length Effects on Transmission Rate in Zero Forcing Linear Precoding for Multichannel DSL
<|reference_start|>Finite Word Length Effects on Transmission Rate in Zero Forcing Linear Precoding for Multichannel DSL: Crosstalk interference is the limiting factor in transmission over copper lines. Crosstalk cancelation techniques show great potential for enabling the next leap in DSL transmission rates. An important issue when implementing crosstalk cancelation techniques in hardware is the effect of finite word length on performance. In this paper we provide an analysis of the performance of linear zero-forcing precoders, used for crosstalk compensation, in the presence of finite word length errors. We quantify analytically the trade-off between precoder word length and transmission rate degradation. More specifically, we prove a simple formula for the transmission rate loss as a function of the number of bits used for precoding, the signal to noise ratio, and the standard line parameters. We demonstrate, through simulations on real lines, the accuracy of our estimates. Moreover, our results are stable in the presence of channel estimation errors. Finally, we show how to use these estimates as a design tool for DSL linear crosstalk precoders. For example, we show that for standard VDSL2 precoded systems, a 14-bit representation of the precoder entries results in capacity loss below 1% for lines over 300m.<|reference_end|>
arxiv
@article{sayag2008finite, title={Finite Word Length Effects on Transmission Rate in Zero Forcing Linear Precoding for Multichannel DSL}, author={Eitan Sayag, Amir Leshem, Nikolaos D. Sidiropoulos}, journal={arXiv preprint arXiv:0810.1773}, year={2008}, doi={10.1109/TSP.2009.2012889}, archivePrefix={arXiv}, eprint={0810.1773}, primaryClass={cs.IT math.IT} }
sayag2008finite
arxiv-5130
0810.1808
A Central Limit Theorem for the SINR at the LMMSE Estimator Output for Large Dimensional Signals
<|reference_start|>A Central Limit Theorem for the SINR at the LMMSE Estimator Output for Large Dimensional Signals: This paper is devoted to the performance study of the Linear Minimum Mean Squared Error estimator for multidimensional signals in the large dimension regime. Such an estimator is frequently encountered in wireless communications and in array processing, and the Signal to Interference and Noise Ratio (SINR) at its output is a popular performance index. The SINR can be modeled as a random quadratic form which can be studied with the help of large random matrix theory, if one assumes that the dimension of the received and transmitted signals go to infinity at the same pace. This paper considers the asymptotic behavior of the SINR for a wide class of multidimensional signal models that includes general multi-antenna as well as spread spectrum transmission models. The expression of the deterministic approximation of the SINR in the large dimension regime is recalled and the SINR fluctuations around this deterministic approximation are studied. These fluctuations are shown to converge in distribution to the Gaussian law in the large dimension regime, and their variance is shown to decrease as the inverse of the signal dimension.<|reference_end|>
arxiv
@article{kammoun2008a, title={A Central Limit Theorem for the SINR at the LMMSE Estimator Output for Large Dimensional Signals}, author={Abla Kammoun (LTCI), Malika Kharouf (LTCI), Walid Hachem (LTCI), Jamal Najim (LTCI)}, journal={arXiv preprint arXiv:0810.1808}, year={2008}, archivePrefix={arXiv}, eprint={0810.1808}, primaryClass={cs.IT math.IT} }
kammoun2008a
arxiv-5131
0810.1823
Split decomposition and graph-labelled trees: characterizations and fully-dynamic algorithms for totally decomposable graphs
<|reference_start|>Split decomposition and graph-labelled trees: characterizations and fully-dynamic algorithms for totally decomposable graphs: In this paper, we revisit the split decomposition of graphs and give new combinatorial and algorithmic results for the class of totally decomposable graphs, also known as the distance hereditary graphs, and for two non-trivial subclasses, namely the cographs and the 3-leaf power graphs. Precisely, we give structural and incremental characterizations, leading to optimal fully-dynamic recognition algorithms for vertex and edge modifications, for each of these classes. These results rely on a new framework to represent the split decomposition, namely the graph-labelled trees, which also captures the modular decomposition of graphs and thereby unifies these two decomposition techniques. The point of the paper is to use bijections between these graph classes and trees whose nodes are labelled by cliques and stars. Doing so, we are also able to derive an intersection model for distance hereditary graphs, which answers an open problem.<|reference_end|>
arxiv
@article{gioan2008split, title={Split decomposition and graph-labelled trees: characterizations and fully-dynamic algorithms for totally decomposable graphs}, author={Emeric Gioan and Christophe Paul}, journal={arXiv preprint arXiv:0810.1823}, year={2008}, archivePrefix={arXiv}, eprint={0810.1823}, primaryClass={cs.DM cs.DS} }
gioan2008split
arxiv-5132
0810.1851
1.25 Approximation Algorithm for the Steiner Tree Problem with Distances One and Two
<|reference_start|>1.25 Approximation Algorithm for the Steiner Tree Problem with Distances One and Two: We give a 1.25 approximation algorithm for the Steiner Tree Problem with distances one and two, improving on the best known bound for that problem.<|reference_end|>
arxiv
@article{berman20081.25, title={1.25 Approximation Algorithm for the Steiner Tree Problem with Distances One and Two}, author={Piotr Berman, Marek Karpinski, Alex Zelikovsky}, journal={arXiv preprint arXiv:0810.1851}, year={2008}, archivePrefix={arXiv}, eprint={0810.1851}, primaryClass={cs.CC cs.DM cs.DS} }
berman20081.25
arxiv-5133
0810.1858
SOSEMANUK: a fast software-oriented stream cipher
<|reference_start|>SOSEMANUK: a fast software-oriented stream cipher: Sosemanuk is a new synchronous software-oriented stream cipher, corresponding to Profile 1 of the ECRYPT call for stream cipher primitives. Its key length is variable between 128 and 256 bits. It accommodates a 128-bit initial value. Any key length is claimed to achieve 128-bit security. The Sosemanuk cipher uses both some basic design principles from the stream cipher SNOW 2.0 and some transformations derived from the block cipher SERPENT. Sosemanuk aims at improving SNOW 2.0 both from the security and from the efficiency points of view. Most notably, it uses a faster IV-setup procedure. It also requires a reduced amount of static data, yielding better performance on several architectures.<|reference_end|>
arxiv
@article{berbain2008sosemanuk:, title={SOSEMANUK: a fast software-oriented stream cipher}, author={Come Berbain (FT R&D), Olivier Billet (FT R&D), Anne Canteaut (INRIA Rocquencourt), Nicolas Courtois, Henri Gilbert (FT R&D), Louis Goubin, Aline Gouget, Louis Granboulan, Cedric Lauradoux (INRIA Rocquencourt), Marine Minier (INRIA Rocquencourt), Thomas Pornin, Herve Sibert}, journal={New Stream Cipher Designs - The eSTREAM finalists (2008) 98-118}, year={2008}, archivePrefix={arXiv}, eprint={0810.1858}, primaryClass={cs.CR} }
berbain2008sosemanuk:
arxiv-5134
0810.1904
Unsatisfiable (k,(4*2^k/k))-CNF formulas
<|reference_start|>Unsatisfiable (k,(4*2^k/k))-CNF formulas: A boolean formula in conjunctive normal form is called a (k,s)-formula if every clause contains exactly k variables and every variable occurs in at most s clauses. We prove the existence of a (k, 4 * (2^k/k))-CNF formula which is unsatisfiable.<|reference_end|>
arxiv
@article{gebauer2008unsatisfiable, title={Unsatisfiable (k,(4*2^k/k))-CNF formulas}, author={Heidi Gebauer}, journal={arXiv preprint arXiv:0810.1904}, year={2008}, archivePrefix={arXiv}, eprint={0810.1904}, primaryClass={cs.DM cs.GT} }
gebauer2008unsatisfiable
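The (k,s)-CNF property defined in the abstract above is mechanical to verify; a small illustrative Python checker (the signed-integer, DIMACS-style clause encoding is our assumption, not the paper's):

```python
from collections import Counter

def is_k_s_formula(cnf, k, s):
    """Check the (k,s)-CNF property: every clause mentions exactly k
    distinct variables and no variable occurs in more than s clauses.
    Literals are nonzero signed integers (DIMACS style)."""
    occurrences = Counter()
    for clause in cnf:
        variables = {abs(lit) for lit in clause}
        if len(clause) != k or len(variables) != k:
            return False
        for v in variables:
            occurrences[v] += 1
    return all(count <= s for count in occurrences.values())

# The four clauses over two variables form an unsatisfiable 2-CNF in which
# each variable occurs 4 times, well below the bound 4 * 2^2 / 2 = 8.
demo = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
```

The demo formula is the standard small witness that unsatisfiable (k,s)-formulas exist for k = 2.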
arxiv-5135
0810.1973
Alphabet Sizes of Auxiliary Variables in Canonical Inner Bounds
<|reference_start|>Alphabet Sizes of Auxiliary Variables in Canonical Inner Bounds: The alphabet size of auxiliary random variables in our canonical description is derived. Our analysis improves upon estimates known in special cases, and generalizes to an arbitrary multiterminal setup. The salient steps include decomposition of constituent rate polytopes into orthants, translation of a hyperplane until it becomes tangent to the achievable region at an extreme point, and derivation of minimum auxiliary alphabet sizes based on Caratheodory's theorem.<|reference_end|>
arxiv
@article{jana2008alphabet, title={Alphabet Sizes of Auxiliary Variables in Canonical Inner Bounds}, author={Soumya Jana}, journal={arXiv preprint arXiv:0810.1973}, year={2008}, doi={10.1109/CISS.2009.5054692}, archivePrefix={arXiv}, eprint={0810.1973}, primaryClass={cs.IT math.IT} }
jana2008alphabet
arxiv-5136
0810.1980
Error Exponents of Optimum Decoding for the Interference Channel
<|reference_start|>Error Exponents of Optimum Decoding for the Interference Channel: Exponential error bounds for the finite-alphabet interference channel (IFC) with two transmitter-receiver pairs are investigated under the random coding regime. Our focus is on optimum decoding, as opposed to heuristic decoding rules that have been used in previous works, like joint typicality decoding, decoding based on interference cancellation, and decoding that considers the interference as additional noise. Indeed, the fact that the actual interfering signal is a codeword and not an i.i.d. noise process complicates the application of conventional techniques to the performance analysis of the optimum decoder. Using analytical tools rooted in statistical physics, we derive a single-letter expression for error exponents achievable under optimum decoding and demonstrate strict improvement over error exponents obtainable using suboptimal decoding rules, but which are amenable to more conventional analysis.<|reference_end|>
arxiv
@article{etkin2008error, title={Error Exponents of Optimum Decoding for the Interference Channel}, author={Raul Etkin, Neri Merhav, Erik Ordentlich}, journal={arXiv preprint arXiv:0810.1980}, year={2008}, archivePrefix={arXiv}, eprint={0810.1980}, primaryClass={cs.IT math.IT} }
etkin2008error
arxiv-5137
0810.1981
Disproving the Neighborhood Conjecture
<|reference_start|>Disproving the Neighborhood Conjecture: We study the following Maker/Breaker game. Maker and Breaker take turns in choosing vertices from a given n-uniform hypergraph F, with Maker going first. Maker's goal is to completely occupy a hyperedge and Breaker tries to avoid this. Beck conjectures that if the maximum neighborhood size of F is at most 2^(n-1) then Breaker has a winning strategy. We disprove this conjecture by establishing an n-uniform hypergraph with maximum neighborhood size 3*2^(n-3) where Maker has a winning strategy. Moreover, we show how to construct an n-uniform hypergraph with maximum degree 2^(n-1)/n where Maker has a winning strategy. Finally we show that each n-uniform hypergraph with maximum degree at most 2^(n-2)/(en) has a proper halving 2-coloring, which solves another open problem posed by Beck related to the Neighborhood Conjecture.<|reference_end|>
arxiv
@article{gebauer2008disproving, title={Disproving the Neighborhood Conjecture}, author={Heidi Gebauer}, journal={arXiv preprint arXiv:0810.1981}, year={2008}, archivePrefix={arXiv}, eprint={0810.1981}, primaryClass={cs.GT cs.DM} }
gebauer2008disproving
arxiv-5138
0810.1991
A global physician-oriented medical information system
<|reference_start|>A global physician-oriented medical information system: We propose to improve medical decision making and reduce global health care costs by employing a free Internet-based medical information system with two main target groups: practicing physicians and medical researchers. After acquiring patients' consent, physicians enter medical histories, physiological data and symptoms or disorders into the system; an integrated expert system can then assist in diagnosis and statistical software provides a list of the most promising treatment options and medications, tailored to the patient. Physicians later enter information about the outcomes of the chosen treatments, data the system uses to optimize future treatment recommendations. Medical researchers can analyze the aggregate data to compare various drugs or treatments in defined patient populations on a large scale.<|reference_end|>
arxiv
@article{boldt2008a, title={A global physician-oriented medical information system}, author={Axel Boldt and Michael Janich}, journal={arXiv preprint arXiv:0810.1991}, year={2008}, archivePrefix={arXiv}, eprint={0810.1991}, primaryClass={cs.CY cs.AI cs.DB} }
boldt2008a
arxiv-5139
0810.1997
Characterizing 1-Dof Henneberg-I graphs with efficient configuration spaces
<|reference_start|>Characterizing 1-Dof Henneberg-I graphs with efficient configuration spaces: We define and study exact, efficient representations of realization spaces of a natural class of underconstrained 2D Euclidean Distance Constraint Systems(EDCS) or Frameworks based on 1-dof Henneberg-I graphs. Each representation corresponds to a choice of parameters and yields a different parametrized configuration space. Our notion of efficiency is based on the algebraic complexities of sampling the configuration space and of obtaining a realization from the sample (parametrized) configuration. Significantly, we give purely combinatorial characterizations that capture (i) the class of graphs that have efficient configuration spaces and (ii) the possible choices of representation parameters that yield efficient configuration spaces for a given graph. Our results automatically yield an efficient algorithm for sampling realizations, without missing extreme or boundary realizations. In addition, our results formally show that our definition of efficient configuration space is robust and that our characterizations are tight. We choose the class of 1-dof Henneberg-I graphs in order to take the next step in a systematic and graded program of combinatorial characterizations of efficient configuration spaces. In particular, the results presented here are the first characterizations that go beyond graphs that have connected and convex configuration spaces.<|reference_end|>
arxiv
@article{gao2008characterizing, title={Characterizing 1-Dof Henneberg-I graphs with efficient configuration spaces}, author={Heping Gao, Meera Sitharam}, journal={arXiv preprint arXiv:0810.1997}, year={2008}, archivePrefix={arXiv}, eprint={0810.1997}, primaryClass={cs.CG cs.RO cs.SC} }
gao2008characterizing
arxiv-5140
0810.2021
Visualization Optimization : Application to the RoboCup Rescue Domain
<|reference_start|>Visualization Optimization : Application to the RoboCup Rescue Domain: In this paper we demonstrate the use of intelligent optimization methodologies for the visualization optimization of virtual / simulated environments. The problem of automatically selecting an optimized set of views that best describes an on-going simulation of a virtual environment is addressed in the context of the RoboCup Rescue Simulation domain. A generic architecture for optimization is proposed and described. We outline possible extensions of this architecture and argue on how several problems within the fields of Interactive Rendering and Visualization can benefit from it.<|reference_end|>
arxiv
@article{moreira2008visualization, title={Visualization Optimization : Application to the RoboCup Rescue Domain}, author={Pedro Miguel Moreira, Luís Paulo Reis and António Augusto de Sousa}, journal={Proceedings SIACG 2006 - Ibero American Symposium in Computer Graphics, Santiago de Compostela, Spain, 5-7 July 2006}, year={2008}, archivePrefix={arXiv}, eprint={0810.2021}, primaryClass={cs.GR cs.AI} }
moreira2008visualization
arxiv-5141
0810.2042
Counting cocircuits and convex two-colourings is #P-complete
<|reference_start|>Counting cocircuits and convex two-colourings is #P-complete: We prove that the problem of counting the number of colourings of the vertices of a graph with at most two colours, such that the colour classes induce connected subgraphs is #P-complete. We also show that the closely related problem of counting the number of cocircuits of a graph is #P-complete.<|reference_end|>
arxiv
@article{goodall2008counting, title={Counting cocircuits and convex two-colourings is #P-complete}, author={Andrew J. Goodall, Steven D. Noble}, journal={arXiv preprint arXiv:0810.2042}, year={2008}, archivePrefix={arXiv}, eprint={0810.2042}, primaryClass={math.CO cs.CC} }
goodall2008counting
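Although counting the colourings described above is #P-complete, the counted objects are easy to enumerate by brute force on small graphs; an illustrative exponential-time Python counter (the edge-list encoding and the convention that an empty colour class counts as connected are our assumptions):

```python
from itertools import product

def is_connected(vertices, edges):
    """DFS connectivity check on the subgraph induced by `vertices`."""
    vs = set(vertices)
    if not vs:
        return True  # convention: the empty colour class is connected
    start = next(iter(vs))
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y in vs and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return seen == vs

def count_convex_two_colourings(n, edges):
    """Count 2-colourings of vertices 0..n-1 in which both colour
    classes induce connected subgraphs (brute force over 2^n maps)."""
    count = 0
    for colouring in product((0, 1), repeat=n):
        class0 = [v for v in range(n) if colouring[v] == 0]
        class1 = [v for v in range(n) if colouring[v] == 1]
        if is_connected(class0, edges) and is_connected(class1, edges):
            count += 1
    return count
```

For the path on 3 vertices this yields 6 valid colourings; for the triangle, all 8, since every induced subgraph of a complete graph is connected.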
arxiv-5142
0810.2046
Modeling of Social Transitions Using Intelligent Systems
<|reference_start|>Modeling of Social Transitions Using Intelligent Systems: In this study, we reproduce two new hybrid intelligent systems, called SONFIS and SORST, that involve three prominent intelligent computing and approximate reasoning methods: Self-Organizing Feature Map (SOM), Neuro-Fuzzy Inference System and Rough Set Theory (RST). We show how our algorithms can be construed as a linkage of government-society interactions, where the government adopts various states of behavior: solid (absolute) or flexible. Thus, the transition of society from order to disorder, driven by changes in connectivity parameters (noise), is inferred.<|reference_end|>
arxiv
@article{owladeghaffari2008modeling, title={Modeling of Social Transitions Using Intelligent Systems}, author={Hamed Owladeghaffari, Witold Pedrycz, Mostafa Sharifzadeh}, journal={arXiv preprint arXiv:0810.2046}, year={2008}, doi={10.1109/CANS.2008.8}, archivePrefix={arXiv}, eprint={0810.2046}, primaryClass={cs.AI} }
owladeghaffari2008modeling
arxiv-5143
0810.2061
On characterising strong bisimilarity in a fragment of CCS with replication
<|reference_start|>On characterising strong bisimilarity in a fragment of CCS with replication: We provide a characterisation of strong bisimilarity in a fragment of CCS that contains only prefix, parallel composition, synchronisation and a limited form of replication. The characterisation is not an axiomatisation, but is instead presented as a rewriting system. We discuss how our method allows us to derive a new congruence result in the $\pi$-calculus: congruence holds in the sub-calculus that does not include restriction nor sum, and features a limited form of replication. We have not formalised the latter result in all details.<|reference_end|>
arxiv
@article{hirschkoff2008on, title={On characterising strong bisimilarity in a fragment of CCS with replication}, author={Daniel Hirschkoff (LIP), Damien Pous (INRIA Rhône-Alpes / LIG Laboratoire d'Informatique de Grenoble)}, journal={arXiv preprint arXiv:0810.2061}, year={2008}, archivePrefix={arXiv}, eprint={0810.2061}, primaryClass={cs.LO} }
hirschkoff2008on
arxiv-5144
0810.2063
Initial Offset Placement in p2p Live Streaming Systems
<|reference_start|>Initial Offset Placement in p2p Live Streaming Systems: Initial offset placement in p2p streaming systems is studied in this paper. A proportional placement (PP) scheme is proposed, in which a peer places its initial offset at the offset reported by a reference peer, shifted by an amount proportional to that peer's buffer width or offset lag. This yields a stable placement that supports a large buffer width for peers and a small buffer width for the tracker. The placement method actually deployed in PPLive is then studied through measurement. It shows that the placement is based on the buffer width of the reference peer rather than on its offset lag, to facilitate initial chunk fetching. We prove that such a PP scheme may not be stable under arbitrary buffer occupation in the reference peer, and we derive the required average buffer width. A simple good-peer selection mechanism that checks the buffer occupation of the reference peer is proposed to make the buffer-width-based PP scheme stable.<|reference_end|>
arxiv
@article{li2008initial, title={Initial Offset Placement in p2p Live Streaming Systems}, author={Chunxi Li and Changjia Chen}, journal={arXiv preprint arXiv:0810.2063}, year={2008}, archivePrefix={arXiv}, eprint={0810.2063}, primaryClass={cs.MM} }
li2008initial
arxiv-5145
0810.2067
Divisibility, Smoothness and Cryptographic Applications
<|reference_start|>Divisibility, Smoothness and Cryptographic Applications: This paper deals with products of moderate-size primes, familiarly known as smooth numbers. Smooth numbers play a crucial role in information theory, signal processing and cryptography. We present various properties of smooth numbers relating to their enumeration, distribution and occurrence in various integer sequences. We then turn our attention to cryptographic applications in which smooth numbers play a pivotal role.<|reference_end|>
arxiv
@article{naccache2008divisibility, title={Divisibility, Smoothness and Cryptographic Applications}, author={David Naccache and Igor E. Shparlinski}, journal={arXiv preprint arXiv:0810.2067}, year={2008}, archivePrefix={arXiv}, eprint={0810.2067}, primaryClass={math.NT cs.CC cs.CR} }
naccache2008divisibility
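A B-smooth number, the central object of the abstract above, is one all of whose prime factors are at most B; a minimal trial-division check (the function name is our own):

```python
def is_smooth(n, bound):
    """Return True if n is bound-smooth, i.e. every prime factor of n
    is at most bound. Plain trial division; composite trial divisors
    are harmless because their prime factors are stripped out first."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    for p in range(2, bound + 1):
        while n % p == 0:
            n //= p
    return n == 1
```

For example, 240 = 2^4 * 3 * 5 is 5-smooth, while 14 = 2 * 7 is not.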
arxiv-5146
0810.2133
Diversity-Multiplexing Tradeoff of the Half-Duplex Relay Channel
<|reference_start|>Diversity-Multiplexing Tradeoff of the Half-Duplex Relay Channel: We show that the diversity-multiplexing tradeoff of a half-duplex single-relay channel with identically distributed Rayleigh fading channel gains meets the 2 by 1 MISO bound. We generalize the result to the case when there are N non-interfering relays and show that the diversity-multiplexing tradeoff is equal to the N + 1 by 1 MISO bound.<|reference_end|>
arxiv
@article{pawar2008diversity-multiplexing, title={Diversity-Multiplexing Tradeoff of the Half-Duplex Relay Channel}, author={Sameer Pawar, Amir Salman Avestimehr and David N. C. Tse}, journal={arXiv preprint arXiv:0810.2133}, year={2008}, archivePrefix={arXiv}, eprint={0810.2133}, primaryClass={cs.IT math.IT} }
pawar2008diversity-multiplexing
arxiv-5147
0810.2134
Fetching Strategy in the Startup Stage of p2p Live Streaming
<|reference_start|>Fetching Strategy in the Startup Stage of p2p Live Streaming: A protocol named Threshold Bipolar (TB) is proposed as a fetching strategy at the startup stage of p2p live streaming systems. In this protocol, chunks are fetched consecutively from buffer head at the beginning. After the buffer is filled into a threshold, chunks at the buffer tail will be fetched first while keeping the contiguously filled part in the buffer above the threshold even when the buffer is drained at a playback rate. High download rate, small startup latency and natural strategy handover can be reached at the same time by this protocol. Important parameters in this protocol are identified. The buffer progress under this protocol is then expressed as piecewise lines specified by those parameters. Startup traces of peers measured from PPLive are studied to show the real performance of TB protocol in a real system. A simple design model of TB protocol is proposed to reveal important considerations in a practical design.<|reference_end|>
arxiv
@article{li2008fetching, title={Fetching Strategy in the Startup Stage of p2p Live Streaming}, author={Chunxi Li and Changjia Chen}, journal={arXiv preprint arXiv:0810.2134}, year={2008}, archivePrefix={arXiv}, eprint={0810.2134}, primaryClass={cs.NI} }
li2008fetching
arxiv-5148
0810.2144
Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains
<|reference_start|>Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains: We derive an asymptotic formula for entropy rate of a hidden Markov chain around a "weak Black Hole". We also discuss applications of the asymptotic formula to the asymptotic behaviors of certain channels.<|reference_end|>
arxiv
@article{han2008asymptotics, title={Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains}, author={Guangyue Han, Brian Marcus}, journal={arXiv preprint arXiv:0810.2144}, year={2008}, archivePrefix={arXiv}, eprint={0810.2144}, primaryClass={cs.IT math.IT} }
han2008asymptotics
arxiv-5149
0810.2150
A Model for Communication in Clusters of Multi-core Machines
<|reference_start|>A Model for Communication in Clusters of Multi-core Machines: A common paradigm for scientific computing is distributed message-passing systems, and a common approach to these systems is to implement them across clusters of high-performance workstations. As multi-core architectures become increasingly mainstream, these clusters are very likely to include multi-core machines. However, the theoretical models which are currently used to develop communication algorithms across these systems do not take into account the unique properties of processes running on shared-memory architectures, including shared external network connections and communication via shared memory locations. Because of this, existing algorithms are far from optimal for modern clusters. Additionally, recent attempts to adapt these algorithms to multicore systems have proceeded without the introduction of a more accurate formal model and have generally neglected to capitalize on the full power these systems offer. We propose a new model which simply and effectively captures the strengths of multi-core machines in collective communications patterns and suggest how it could be used to properly optimize these patterns.<|reference_end|>
arxiv
@article{task2008a, title={A Model for Communication in Clusters of Multi-core Machines}, author={Christine Task, Arun Chauhan}, journal={arXiv preprint arXiv:0810.2150}, year={2008}, archivePrefix={arXiv}, eprint={0810.2150}, primaryClass={cs.DC cs.DS} }
task2008a
arxiv-5150
0810.2164
Joint source-channel coding via statistical mechanics: thermal equilibrium between the source and the channel
<|reference_start|>Joint source-channel coding via statistical mechanics: thermal equilibrium between the source and the channel: We examine the classical joint source--channel coding problem from the viewpoint of statistical physics and demonstrate that in the random coding regime, the posterior probability distribution of the source given the channel output is dominated by source sequences, which exhibit a behavior that is highly parallel to that of thermal equilibrium between two systems of particles that exchange energy, where one system corresponds to the source and the other corresponds to the channel. The thermodynamic entropies of the dual physical problem are analogous to conditional and unconditional Shannon entropies of the source, and so, their balance in thermal equilibrium yields a simple formula for the mutual information between the source and the channel output, that is induced by the typical code in an ensemble of joint source--channel codes under certain conditions. We also demonstrate how our results can be used in applications, like the wiretap channel, and how they can be extended to multiuser scenarios, like that of the multiple access channel.<|reference_end|>
arxiv
@article{merhav2008joint, title={Joint source-channel coding via statistical mechanics: thermal equilibrium between the source and the channel}, author={Neri Merhav}, journal={arXiv preprint arXiv:0810.2164}, year={2008}, archivePrefix={arXiv}, eprint={0810.2164}, primaryClass={cs.IT math.IT} }
merhav2008joint
arxiv-5151
0810.2175
A simple local 3-approximation algorithm for vertex cover
<|reference_start|>A simple local 3-approximation algorithm for vertex cover: We present a local algorithm (constant-time distributed algorithm) for finding a 3-approximate vertex cover in bounded-degree graphs. The algorithm is deterministic, and no auxiliary information besides port numbering is required.<|reference_end|>
arxiv
@article{polishchuk2008a, title={A simple local 3-approximation algorithm for vertex cover}, author={Valentin Polishchuk, Jukka Suomela}, journal={Information Processing Letters 109 (2009) 642-645}, year={2008}, doi={10.1016/j.ipl.2009.02.017}, archivePrefix={arXiv}, eprint={0810.2175}, primaryClass={cs.DC} }
polishchuk2008a
arxiv-5152
0810.2179
Structural abstract interpretation, A formal study using Coq
<|reference_start|>Structural abstract interpretation, A formal study using Coq: Abstract interpreters are tools to compute approximations for behaviors of a program. These approximations can then be used for optimisation or for error detection. In this paper, we show how to describe an abstract interpreter using the type-theory based theorem prover Coq, using inductive types for syntax and structural recursive programming for the abstract interpreter's kernel. The abstract interpreter can then be proved correct with respect to a Hoare logic for the programming language.<|reference_end|>
arxiv
@article{bertot2008structural, title={Structural abstract interpretation, A formal study using Coq}, author={Yves Bertot (INRIA Sophia Antipolis)}, journal={Dans LERNET Summer School (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0810.2179}, primaryClass={cs.LO} }
bertot2008structural
arxiv-5153
0810.2208
Multipath Channels of Unbounded Capacity
<|reference_start|>Multipath Channels of Unbounded Capacity: The capacity of discrete-time, noncoherent, multipath fading channels is considered. It is shown that if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the transmit power.<|reference_end|>
arxiv
@article{koch2008multipath, title={Multipath Channels of Unbounded Capacity}, author={Tobias Koch and Amos Lapidoth}, journal={arXiv preprint arXiv:0810.2208}, year={2008}, archivePrefix={arXiv}, eprint={0810.2208}, primaryClass={cs.IT math.IT} }
koch2008multipath
arxiv-5154
0810.2226
Enabling Lock-Free Concurrent Fine-Grain Access to Massive Distributed Data: Application to Supernovae Detection
<|reference_start|>Enabling Lock-Free Concurrent Fine-Grain Access to Massive Distributed Data: Application to Supernovae Detection: We consider the problem of efficiently managing massive data in a large-scale distributed environment. We consider data strings of size in the order of Terabytes, shared and accessed by concurrent clients. On each individual access, a segment of a string, of the order of Megabytes, is read or modified. Our goal is to provide the clients with efficient fine-grain access to the data string, as concurrently as possible, without locking the string itself. This issue is crucial in the context of applications in the field of astronomy, databases, data mining and multimedia. We illustrate these requirements with the case of an application for searching supernovae. Our solution relies on distributed, RAM-based data storage, while leveraging a DHT-based, parallel metadata management scheme. The proposed architecture and algorithms have been validated through a software prototype and evaluated in a cluster environment.<|reference_end|>
arxiv
@article{nicolae2008enabling, title={Enabling Lock-Free Concurrent Fine-Grain Access to Massive Distributed Data: Application to Supernovae Detection}, author={Bogdan Nicolae (IRISA), Gabriel Antoniu (IRISA), Luc Boug'e (IRISA)}, journal={Dans IEEE Cluster 2008 - Poster Session (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0810.2226}, primaryClass={cs.DC} }
nicolae2008enabling
arxiv-5155
0810.2227
Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme
<|reference_start|>Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme: This paper addresses the problem of efficiently storing and accessing massive data blocks in a large-scale distributed environment, while providing efficient fine-grain access to data subsets. This issue is crucial in the context of applications in the field of databases, data mining and multimedia. We propose a data sharing service based on distributed, RAM-based storage of data, while leveraging a DHT-based, natively parallel metadata management scheme. As opposed to the most commonly used grid storage infrastructures that provide mechanisms for explicit data localization and transfer, we provide a transparent access model, where data are accessed through global identifiers. Our proposal has been validated through a prototype implementation whose preliminary evaluation provides promising results.<|reference_end|>
arxiv
@article{nicolae2008distributed, title={Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme}, author={Bogdan Nicolae (IRISA), Gabriel Antoniu (IRISA), Luc Boug'e (IRISA)}, journal={Dans VECPAR 2008 (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0810.2227}, primaryClass={cs.DC} }
nicolae2008distributed
arxiv-5156
0810.2263
A continuous rating method for preferential voting
<|reference_start|>A continuous rating method for preferential voting: A method is given for quantitatively rating the social acceptance of different options which are the matter of a preferential vote. The proposed method is proved to satisfy certain desirable conditions, among which there is a majority principle, a property of clone consistency, and the continuity of the rates with respect to the data. One can view this method as a quantitative complement for a qualitative method introduced in 1997 by Markus Schulze. It is also related to certain methods of one-dimensional scaling or cluster analysis.<|reference_end|>
arxiv
@article{camps2008a, title={A continuous rating method for preferential voting}, author={Rosa Camps, Xavier Mora, Laia Saumell}, journal={arXiv preprint arXiv:0810.2263}, year={2008}, archivePrefix={arXiv}, eprint={0810.2263}, primaryClass={math.OC cs.GT} }
camps2008a
arxiv-5157
0810.2279
On finite functions with non-trivial arity gap
<|reference_start|>On finite functions with non-trivial arity gap: Given an $n$-ary $k-$valued function $f$, $gap(f)$ denotes the minimal number of essential variables in $f$ which become fictive when identifying any two distinct essential variables in $f$. We particularly solve a problem concerning the explicit determination of $n$-ary $k-$valued functions $f$ with $2\leq gap(f)\leq n\leq k$. Our methods yield new combinatorial results about the number of such functions.<|reference_end|>
arxiv
@article{shtrakov2008on, title={On finite functions with non-trivial arity gap}, author={Slavcho Shtrakov and Joerg Koppitz}, journal={arXiv preprint arXiv:0810.2279}, year={2008}, archivePrefix={arXiv}, eprint={0810.2279}, primaryClass={cs.DM cs.CC} }
shtrakov2008on
arxiv-5158
0810.2311
Non-Negative Matrix Factorization, Convexity and Isometry
<|reference_start|>Non-Negative Matrix Factorization, Convexity and Isometry: In this paper we explore avenues for improving the reliability of dimensionality reduction methods such as Non-Negative Matrix Factorization (NMF) as interpretive exploratory data analysis tools. We first explore the difficulties of the optimization problem underlying NMF, showing for the first time that non-trivial NMF solutions always exist and that the optimization problem is actually convex, by using the theory of Completely Positive Factorization. We subsequently explore four novel approaches to finding globally-optimal NMF solutions using various ideas from convex optimization. We then develop a new method, isometric NMF (isoNMF), which preserves non-negativity while also providing an isometric embedding, simultaneously achieving two properties which are helpful for interpretation. Though it results in a more difficult optimization problem, we show experimentally that the resulting method is scalable and even achieves more compact spectra than standard NMF.<|reference_end|>
arxiv
@article{vasiloglou2008non-negative, title={Non-Negative Matrix Factorization, Convexity and Isometry}, author={Nikolaos Vasiloglou, Alexander G. Gray, David V. Anderson}, journal={arXiv preprint arXiv:0810.2311}, year={2008}, archivePrefix={arXiv}, eprint={0810.2311}, primaryClass={cs.AI cs.CV} }
vasiloglou2008non-negative
arxiv-5159
0810.2323
On Outage and Error Rate Analysis of the Ordered V-BLAST
<|reference_start|>On Outage and Error Rate Analysis of the Ordered V-BLAST: Outage and error rate performance of the ordered BLAST with more than 2 transmit antennas is evaluated for i.i.d. Rayleigh fading channels. A number of lower and upper bounds on the 1st step outage probability at any SNR are derived, which are further used to obtain accurate approximations to average block and total error rates. For m Tx antennas, the effect of the optimal ordering at the first step is an m-fold SNR gain. As m increases to infinity, the BLER decreases to zero, which is a manifestation of the space-time autocoding effect in the V-BLAST. While the sub-optimal ordering (based on the before-projection SNR) suffers a few dB SNR penalty compared to the optimal one, it has a lower computational complexity and a 3 dB SNR gain compared to the unordered V-BLAST and can be an attractive solution for low-complexity/low-energy systems. Uncoded D-BLAST exhibits the same outage and error rate performance as that of the V-BLAST. An SNR penalty of the linear receiver interfaces compared to the BLAST is also evaluated.<|reference_end|>
arxiv
@article{loyka2008on, title={On Outage and Error Rate Analysis of the Ordered V-BLAST}, author={Sergey Loyka, Francois Gagnon}, journal={arXiv preprint arXiv:0810.2323}, year={2008}, doi={10.1109/T-WC.2008.070271}, archivePrefix={arXiv}, eprint={0810.2323}, primaryClass={cs.IT math.IT} }
loyka2008on
arxiv-5160
0810.2336
A Mordell Inequality for Lattices over Maximal Orders
<|reference_start|>A Mordell Inequality for Lattices over Maximal Orders: In this paper we prove an analogue of Mordell's inequality for lattices in finite-dimensional complex or quaternionic Hermitian space that are modules over a maximal order in an imaginary quadratic number field or a totally definite rational quaternion algebra. This inequality implies that the 16-dimensional Barnes-Wall lattice has optimal density among all 16-dimensional lattices with Hurwitz structures.<|reference_end|>
arxiv
@article{vance2008a, title={A Mordell Inequality for Lattices over Maximal Orders}, author={Stephanie Vance}, journal={Trans. Amer. Math. Soc. 362 (2010), no. 7, 3827-3839}, year={2008}, archivePrefix={arXiv}, eprint={0810.2336}, primaryClass={math.MG cs.IT math.IT math.NT} }
vance2008a
arxiv-5161
0810.2352
Interference Channels with Correlated Receiver Side Information
<|reference_start|>Interference Channels with Correlated Receiver Side Information: The problem of joint source-channel coding in transmitting independent sources over interference channels with correlated receiver side information is studied. When each receiver has side information correlated with its own desired source, it is shown that source-channel code separation is optimal. When each receiver has side information correlated with the interfering source, sufficient conditions for reliable transmission are provided based on a joint source-channel coding scheme using the superposition encoding and partial decoding idea of Han and Kobayashi. When the receiver side information is a deterministic function of the interfering source, source-channel code separation is again shown to be optimal. As a special case, for a class of Z-interference channels, when the side information of the receiver facing interference is a deterministic function of the interfering source, necessary and sufficient conditions for reliable transmission are provided in the form of single letter expressions. As a byproduct of these joint source-channel coding results, the capacity region of a class of Z-channels with degraded message sets is also provided.<|reference_end|>
arxiv
@article{liu2008interference, title={Interference Channels with Correlated Receiver Side Information}, author={Nan Liu, Deniz Gunduz, Andrea J. Goldsmith, H. Vincent Poor}, journal={arXiv preprint arXiv:0810.2352}, year={2008}, archivePrefix={arXiv}, eprint={0810.2352}, primaryClass={cs.IT math.IT} }
liu2008interference
arxiv-5162
0810.2390
Efficient Pattern Matching on Binary Strings
<|reference_start|>Efficient Pattern Matching on Binary Strings: The binary string matching problem consists in finding all the occurrences of a pattern in a text where both strings are built on a binary alphabet. This is an interesting problem in computer science, since binary data are omnipresent in telecom and computer network applications. Moreover the problem finds applications also in the field of image processing and in pattern matching on compressed texts. Recently it has been shown that adaptations of classical exact string matching algorithms are not very efficient on binary data. In this paper we present two efficient algorithms for the problem, adapted to completely avoid any reference to bits, allowing pattern and text to be processed byte by byte. Experimental results show that the new algorithms outperform existing solutions in most cases.<|reference_end|>
arxiv
@article{faro2008efficient, title={Efficient Pattern Matching on Binary Strings}, author={Simone Faro and Thierry Lecroq}, journal={arXiv preprint arXiv:0810.2390}, year={2008}, archivePrefix={arXiv}, eprint={0810.2390}, primaryClass={cs.DS cs.IR} }
faro2008efficient
arxiv-5163
0810.2434
Faster and better: a machine learning approach to corner detection
<|reference_start|>Faster and better: a machine learning approach to corner detection: The repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000]. The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection, and using machine learning we derive a feature detector from this which can fully process live PAL video using less than 5% of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality.<|reference_end|>
arxiv
@article{rosten2008faster, title={Faster and better: a machine learning approach to corner detection}, author={Edward Rosten, Reid Porter, Tom Drummond}, journal={IEEE Trans. PAMI, 32 (2010), 105--119}, year={2008}, doi={10.1109/TPAMI.2008.275}, number={07-3912}, archivePrefix={arXiv}, eprint={0810.2434}, primaryClass={cs.CV cs.LG} }
rosten2008faster
arxiv-5164
0810.2486
Dynamic assignment: there is an equilibrium !
<|reference_start|>Dynamic assignment: there is an equilibrium !: Given a network with a continuum of users at some origins, suppose that the users wish to reach specific destinations, but that they are not indifferent to the time needed to reach their destination. They may have several possibilities (of routes or departure time), but their choices modify the travel times on the network. Hence, each user faces the following problem: given a pattern of travel times for the different possible routes that reach the destination, find a shortest path. The situation in a context of perfect information is a so-called Nash equilibrium, and the question of whether there is such an equilibrium, and of finding it if it exists, is the so-called equilibrium assignment problem. It arises for various kinds of networks, such as computer, communication or transportation networks. When each user occupies permanently the whole route from the origin to its destination, we call it the static assignment problem, which has been extensively studied with pioneering works by Wardrop or Beckmann. A less studied, but more realistic, and maybe more difficult, problem is when the time needed to reach an arc is taken into account. We speak then of a dynamic assignment problem. Several models have been proposed. For some of them, the existence of an equilibrium has been proved, but always under some technical assumptions or in a very special case (a network with one arc for the case when the users may choose their departure time). The present paper proposes a compact model, with minimal and natural assumptions. For this model, we prove that there is always an equilibrium. To our knowledge, this implies all previous results about existence of an equilibrium for the dynamic assignment problem.<|reference_end|>
arxiv
@article{meunier2008dynamic, title={Dynamic assignment: there is an equilibrium !}, author={Fr'ed'eric Meunier and Nicolas Wagner}, journal={arXiv preprint arXiv:0810.2486}, year={2008}, archivePrefix={arXiv}, eprint={0810.2486}, primaryClass={cs.GT} }
meunier2008dynamic
arxiv-5165
0810.2513
The Impact of Mobility on Gossip Algorithms
<|reference_start|>The Impact of Mobility on Gossip Algorithms: The influence of node mobility on the convergence time of averaging gossip algorithms in networks is studied. It is shown that a small number of fully mobile nodes can yield a significant decrease in convergence time. A method is developed for deriving lower bounds on the convergence time by merging nodes according to their mobility pattern. This method is used to show that if the agents have one-dimensional mobility in the same direction the convergence time is improved by at most a constant. Upper bounds are obtained on the convergence time using techniques from the theory of Markov chains and show that simple models of mobility can dramatically accelerate gossip as long as the mobility paths significantly overlap. Simulations verify that different mobility patterns can have significantly different effects on the convergence of distributed algorithms.<|reference_end|>
arxiv
@article{sarwate2008the, title={The Impact of Mobility on Gossip Algorithms}, author={Anand D. Sarwate, Alexandros G. Dimakis}, journal={arXiv preprint arXiv:0810.2513}, year={2008}, archivePrefix={arXiv}, eprint={0810.2513}, primaryClass={cs.NI cs.DC cs.IT math.IT} }
sarwate2008the
arxiv-5166
0810.2529
On the Throughput Maximization in Decentralized Wireless Networks
<|reference_start|>On the Throughput Maximization in Decentralized Wireless Networks: A distributed single-hop wireless network with $K$ links is considered, where the links are partitioned into a fixed number ($M$) of clusters each operating in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be orthogonal to each other. A general shadow-fading model, described by parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average cross-link gains. The main goal of this paper is to find the maximum network throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i) proposing a distributed and non-iterative power allocation strategy, where the objective of each user is to maximize its best estimate (based on its local information, i.e., direct channel gain) of the average network throughput, and ii) choosing the optimum value for $M$. In the first part of the paper, the network throughput is defined as the \textit{average sum-rate} of the network, which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in the strong interference scenario, the optimum power allocation strategy for each user is a threshold-based on-off scheme. In the second part, the network throughput is defined as the \textit{guaranteed sum-rate}, when the outage probability approaches zero. In this scenario, it is demonstrated that the on-off power allocation scheme maximizes the throughput, which scales as $\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for maximizing the average sum-rate and the guaranteed sum-rate is achieved at M=1.<|reference_end|>
arxiv
@article{abouei2008on, title={On the Throughput Maximization in Decentralized Wireless Networks}, author={Jamshid Abouei, Alireza Bayesteh, Masoud Ebrahimi, and Amir K. Khandani}, journal={arXiv preprint arXiv:0810.2529}, year={2008}, archivePrefix={arXiv}, eprint={0810.2529}, primaryClass={cs.IT math.IT} }
abouei2008on
arxiv-5167
0810.2598
New avenue to the Parton Distribution Functions: Self-Organizing Maps
<|reference_start|>New avenue to the Parton Distribution Functions: Self-Organizing Maps: Neural network algorithms have been recently applied to construct Parton Distribution Function (PDF) parametrizations which provide an alternative to standard global fitting procedures. We propose a technique based on an interactive neural network algorithm using Self-Organizing Maps (SOMs). SOMs are a class of clustering algorithms based on competitive learning among spatially-ordered neurons. Our SOMs are trained on selections of stochastically generated PDF samples. The selection criterion for every optimization iteration is based on the features of the clustered PDFs. Our main goal is to provide a fitting procedure that, at variance with the standard neural network approaches, allows for an increased control of the systematic bias by enabling user interaction in the various stages of the process.<|reference_end|>
arxiv
@article{carnahan2008new, title={New avenue to the Parton Distribution Functions: Self-Organizing Maps}, author={J. Carnahan, H. Honkanen, S. Liuti, Y. Loitiere, P. R. Reynolds}, journal={Phys.Rev.D79:034022,2009}, year={2008}, doi={10.1103/PhysRevD.79.034022}, archivePrefix={arXiv}, eprint={0810.2598}, primaryClass={hep-ph cs.CE} }
carnahan2008new
arxiv-5168
0810.2653
On combinations of local theory extensions
<|reference_start|>On combinations of local theory extensions: In this paper we study possibilities of efficient reasoning in combinations of theories over possibly non-disjoint signatures. We first present a class of theory extensions (called local extensions) in which hierarchical reasoning is possible, and give several examples from computer science and mathematics in which such extensions occur in a natural way. We then identify situations in which combinations of local extensions of a theory are again local extensions of that theory. We thus obtain criteria both for recognizing wider classes of local theory extensions, and for modular reasoning in combinations of theories over non-disjoint signatures.<|reference_end|>
arxiv
@article{sofronie-stokkermans2008on, title={On combinations of local theory extensions}, author={Viorica Sofronie-Stokkermans}, journal={arXiv preprint arXiv:0810.2653}, year={2008}, archivePrefix={arXiv}, eprint={0810.2653}, primaryClass={cs.LO cs.AI} }
sofronie-stokkermans2008on
arxiv-5169
0810.2659
DSTC Layering Protocols in Wireless Cooperative Networks
<|reference_start|>DSTC Layering Protocols in Wireless Cooperative Networks: In a radio network with a single source-destination pair and some relays, a link between any two nodes is considered to have the same or zero path loss. However, in practice some links may have considerably higher path loss than others while still being useful. In this report, we also take into account the signals received from these links. Our system model consists of a source-destination pair with two layers of relays, in which the weaker links between the source and the second layer and between the first layer and the destination are also considered. We propose some protocols in this system model, run simulations under optimum power allocation, and compare these protocols. We show that under reasonable channel strength of these weaker links, the proposed protocols perform ($ \approx 2$ dB) better than the existing basic protocol. As expected, the degree of improvement increases with the strength of the weaker links. We also show that with receive channel knowledge at the relays, the reliability and data rate are improved.<|reference_end|>
arxiv
@article{elamvazhuthi2008dstc, title={DSTC Layering Protocols in Wireless Cooperative Networks}, author={P.S. Elamvazhuthi (1 and 2), P.S. Kulkarni (1 and 3), and B.K. Dey (1) ((1) Indian Institute of Technology Bombay, Mumbai, India, (2) Cognizant Technology Solutions India Pvt. Ltd., Chennai, India, (3) Juniper Networks Inc., Bengaluru, India)}, journal={arXiv preprint arXiv:0810.2659}, year={2008}, archivePrefix={arXiv}, eprint={0810.2659}, primaryClass={cs.NI} }
elamvazhuthi2008dstc
arxiv-5170
0810.2665
Path Planner for Objects, Robots and Mannequins by Multi-Agents Systems or Motion Captures
<|reference_start|>Path Planner for Objects, Robots and Mannequins by Multi-Agents Systems or Motion Captures: In order to optimise the costs and time of design of new products while improving their quality, concurrent engineering is based on the digital model of these products. However, in order to definitively dispense with the physical model without loss of information, new tools must be available. In particular, a tool is needed that makes it possible to check simply and quickly the maintainability of complex mechanical assemblies using the digital model. For a decade, the MCM team of IRCCyN has worked on the creation of tools for the generation and analysis of trajectories of virtual mannequins. The simulation of human tasks can be carried out either by robot-like simulation or by simulation based on motion capture. This paper presents results on both methods. The first method is based on a multi-agent system and on digital mock-up technology, providing an efficient path planner for a manikin or a robot for access and visibility tasks, taking into account ergonomic constraints or joint limits. The human operator is integrated into the optimisation process to contribute to a global perception of the environment. This operator cooperates, in real time, with several automatic local elementary agents. In the second method, we worked with CEA and EADS/CCR to resolve the constraints related to the evolution of a virtual human in its environment on the basis of data from a motion capture system. An approach using virtual guides was developed to allow the user to perform precise trajectories in the absence of force feedback.<|reference_end|>
arxiv
@article{chablat2008path, title={Path Planner for Objects, Robots and Mannequins by Multi-Agents Systems or Motion Captures}, author={Damien Chablat (IRCCyN)}, journal={International Conference on Digital Enterprise Technology, Nantes : France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0810.2665}, primaryClass={cs.RO} }
chablat2008path
arxiv-5171
0810.2666
A Vision-based Computed Torque Control for Parallel Kinematic Machines
<|reference_start|>A Vision-based Computed Torque Control for Parallel Kinematic Machines: In this paper, a novel approach for parallel kinematic machine control relying on a fast exteroceptive measure is implemented and validated on the Orthoglide robot. This approach begins with rewriting the robot models as a function of the only end-effector pose. It is shown that such an operation reduces the model complexity. Then, this approach uses a classical Cartesian space computed torque control with a fast exteroceptive measure, reducing the control schemes complexity. Simulation results are given to show the expected performance improvements and experiments prove the practical feasibility of the approach.<|reference_end|>
arxiv
@article{paccot2008a, title={A Vision-based Computed Torque Control for Parallel Kinematic Machines}, author={Flavien Paccot (LASMEA), Philippe Lemoine (IRCCyN), Nicolas Andreff (LASMEA), Damien Chablat (IRCCyN), Philippe Martinet (LASMEA)}, journal={IEEE International Conference on Robotics and Automation, Pasadena : \'Etats-Unis d'Am\'erique (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0810.2666}, primaryClass={cs.RO} }
paccot2008a
arxiv-5172
0810.2697
On the cubicity of bipartite graphs
<|reference_start|>On the cubicity of bipartite graphs: A unit cube in $k$ dimensions (or a $k$-cube) is defined as the Cartesian product $R_1 \times R_2 \times ... \times R_k$, where each $R_i$ is a closed interval on the real line of the form $[a_i, a_i+1]$. The {\it cubicity} of a graph $G$, denoted as $cub(G)$, is the minimum $k$ such that $G$ is the intersection graph of a collection of $k$-cubes. Many NP-complete graph problems can be solved efficiently or have good approximation ratios in graphs of low cubicity. In most of these cases the first step is to get a low dimensional cube representation of the given graph. It is known that for a graph $G$, $cub(G) \leq \lfloor\frac{2n}{3}\rfloor$. Recently it has been shown that for a graph $G$, $cub(G) \leq 4(\Delta + 1)\ln n$, where $n$ and $\Delta$ are the number of vertices and maximum degree of $G$, respectively. In this paper, we show that for a bipartite graph $G = (A \cup B, E)$ with $|A| = n_1$, $|B| = n_2$, $n_1 \leq n_2$, and $\Delta' = \min\{\Delta_A, \Delta_B\}$, where $\Delta_A = \max_{a \in A}d(a)$ and $\Delta_B = \max_{b \in B}d(b)$, $d(a)$ and $d(b)$ being the degrees of $a$ and $b$ in $G$ respectively, $cub(G) \leq 2(\Delta'+2) \lceil \ln n_2 \rceil$. We also give an efficient randomized algorithm to construct the cube representation of $G$ in $3(\Delta'+2)\lceil \ln n_2 \rceil$ dimensions. The reader may note that in general $\Delta'$ can be much smaller than $\Delta$.<|reference_end|>
arxiv
@article{chandran2008on, title={On the cubicity of bipartite graphs}, author={L. Sunil Chandran, Anita Das, Naveen Sivadasan}, journal={arXiv preprint arXiv:0810.2697}, year={2008}, archivePrefix={arXiv}, eprint={0810.2697}, primaryClass={cs.DM} }
chandran2008on
arxiv-5173
0810.2717
A Class of Graph-Geodetic Distances Generalizing the Shortest-Path and the Resistance Distances
<|reference_start|>A Class of Graph-Geodetic Distances Generalizing the Shortest-Path and the Resistance Distances: A new class of distances for graph vertices is proposed. This class contains parametric families of distances which reduce to the shortest-path, weighted shortest-path, and the resistance distances at the limiting values of the family parameters. The main property of the class is that all distances it comprises are graph-geodetic: $d(i,j)+d(j,k)=d(i,k)$ if and only if every path from $i$ to $k$ passes through $j$. The construction of the class is based on the matrix forest theorem and the transition inequality.<|reference_end|>
arxiv
@article{chebotarev2008a, title={A Class of Graph-Geodetic Distances Generalizing the Shortest-Path and the Resistance Distances}, author={Pavel Chebotarev}, journal={Discrete Applied Mathematics 159(2011) No. 5. 295-302}, year={2008}, doi={10.1016/j.dam.2010.11.017}, archivePrefix={arXiv}, eprint={0810.2717}, primaryClass={math.CO cs.DM math.MG} }
chebotarev2008a
arxiv-5174
0810.2746
Finite-SNR Diversity-Multiplexing Tradeoff and Optimum Power Allocation in Bidirectional Cooperative Networks
<|reference_start|>Finite-SNR Diversity-Multiplexing Tradeoff and Optimum Power Allocation in Bidirectional Cooperative Networks: This paper focuses on analog network coding (ANC) and time division broadcasting (TDBC) which are two major protocols used in bidirectional cooperative networks. Lower bounds of the outage probabilities of those two protocols are derived first. Those lower bounds are extremely tight in the whole signal-to-noise ratio (SNR) range irrespective of the values of channel variances. Based on those lower bounds, finite-SNR diversity-multiplexing tradeoffs of the ANC and TDBC protocols are obtained. Secondly, we investigate how to efficiently use channel state information (CSI) in those two protocols. Specifically, an optimum power allocation scheme is proposed for the ANC protocol. It simultaneously minimizes the outage probability and maximizes the total mutual information of this protocol. For the TDBC protocol, an optimum method to combine the received signals at the relay terminal is developed under an equal power allocation assumption. This method minimizes the outage probability and maximizes the total mutual information of the TDBC protocol at the same time.<|reference_end|>
arxiv
@article{yi2008finite-snr, title={Finite-SNR Diversity-Multiplexing Tradeoff and Optimum Power Allocation in Bidirectional Cooperative Networks}, author={Zhihang Yi and Il-Min Kim}, journal={arXiv preprint arXiv:0810.2746}, year={2008}, archivePrefix={arXiv}, eprint={0810.2746}, primaryClass={cs.IT math.IT} }
yi2008finite-snr
arxiv-5175
0810.2764
A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables
<|reference_start|>A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables: The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Algorithms participating in the challenge are required to assign score values to search results for a collection of queries, and are measured using standard IR ranking measures (NDCG, precision, MAP) that depend only on the relative score-induced order of the results. Similarly to many of the ideas proposed in the participating algorithms, we train a linear classifier. In contrast with other participating algorithms, we define an additional free variable (intercept, or benchmark) for each query. This allows expressing the fact that results for different queries are incomparable for the purpose of determining relevance. The cost of this idea is the addition of relatively few nuisance parameters. Our approach is simple, and we used a standard logistic regression library to test it. The results beat the reported participating algorithms. Hence, it seems promising to combine our approach with other more complex ideas.<|reference_end|>
arxiv
@article{ailon2008a, title={A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables}, author={Nir Ailon}, journal={arXiv preprint arXiv:0810.2764}, year={2008}, archivePrefix={arXiv}, eprint={0810.2764}, primaryClass={cs.IR cs.LG} }
ailon2008a
arxiv-5176
0810.2781
Linear Time Encoding of LDPC Codes
<|reference_start|>Linear Time Encoding of LDPC Codes: In this paper, we propose a linear complexity encoding method for arbitrary LDPC codes. We start from a simple graph-based encoding method ``label-and-decide.'' We prove that the ``label-and-decide'' method is applicable to Tanner graphs with a hierarchical structure--pseudo-trees-- and that the resulting encoding complexity is linear with the code block length. Next, we define a second type of Tanner graphs--the encoding stopping set. The encoding stopping set is encoded in linear complexity by a revised label-and-decide algorithm--the ``label-decide-recompute.'' Finally, we prove that any Tanner graph can be partitioned into encoding stopping sets and pseudo-trees. By encoding each encoding stopping set or pseudo-tree sequentially, we develop a linear complexity encoding method for general LDPC codes where the encoding complexity is proved to be less than $4 \cdot M \cdot (\overline{k} - 1)$, where $M$ is the number of independent rows in the parity check matrix and $\overline{k}$ represents the mean row weight of the parity check matrix.<|reference_end|>
arxiv
@article{lu2008linear, title={Linear Time Encoding of LDPC Codes}, author={Jin Lu, Jos\'e M. F. Moura}, journal={arXiv preprint arXiv:0810.2781}, year={2008}, doi={10.1109/TIT.2009.2034823}, archivePrefix={arXiv}, eprint={0810.2781}, primaryClass={cs.IT math.IT} }
lu2008linear
arxiv-5177
0810.2837
Adaptive Hybrid Deflection and Retransmission Routing for Optical Burst-Switched Networks
<|reference_start|>Adaptive Hybrid Deflection and Retransmission Routing for Optical Burst-Switched Networks: Burst contention is a well known challenging problem in Optical Burst Switching (OBS) networks. Deflection routing is used to resolve contention. Burst retransmission is used to reduce the Burst Loss Ratio (BLR) by retransmitting dropped bursts. Previous works show that combining deflection and retransmission outperforms both pure deflection and pure retransmission approaches. This paper proposes a new Adaptive Hybrid Deflection and Retransmission (AHDR) approach that dynamically combines deflection and retransmission approaches based on network conditions such as BLR and link utilization. Network Simulator 2 (ns-2) is used to simulate the proposed approach on different network topologies. Simulation results show that the proposed approach outperforms static approaches in terms of BLR and goodput.<|reference_end|>
arxiv
@article{levesque2008adaptive, title={Adaptive Hybrid Deflection and Retransmission Routing for Optical Burst-Switched Networks}, author={Martin Levesque, Halima Elbiaze, Wael Hosny Fouad Aly}, journal={arXiv preprint arXiv:0810.2837}, year={2008}, archivePrefix={arXiv}, eprint={0810.2837}, primaryClass={cs.NI} }
levesque2008adaptive
arxiv-5178
0810.2861
A comparison of the notions of optimality in soft constraints and graphical games
<|reference_start|>A comparison of the notions of optimality in soft constraints and graphical games: The notion of optimality naturally arises in many areas of applied mathematics and computer science concerned with decision making. Here we consider this notion in the context of two formalisms used for different purposes and in different research areas: graphical games and soft constraints. We relate the notion of optimality used in the area of soft constraint satisfaction problems (SCSPs) to that used in graphical games, showing that for a large class of SCSPs that includes weighted constraints every optimal solution corresponds to a Nash equilibrium that is also a Pareto efficient joint strategy.<|reference_end|>
arxiv
@article{apt2008a, title={A comparison of the notions of optimality in soft constraints and graphical games}, author={Krzysztof R. Apt, Francesca Rossi, and K. Brent Venable}, journal={arXiv preprint arXiv:0810.2861}, year={2008}, archivePrefix={arXiv}, eprint={0810.2861}, primaryClass={cs.AI cs.GT} }
apt2008a
arxiv-5179
0810.2865
Welfare Undominated Groves Mechanisms
<|reference_start|>Welfare Undominated Groves Mechanisms: A common objective in mechanism design is to choose the outcome (for example, allocation of resources) that maximizes the sum of the agents' valuations, without introducing incentives for agents to misreport their preferences. The class of Groves mechanisms achieves this; however, these mechanisms require the agents to make payments, thereby reducing the agents' total welfare. In this paper we introduce a measure for comparing two mechanisms with respect to the final welfare they generate. This measure induces a partial order on mechanisms and we study the question of finding minimal elements with respect to this partial order. In particular, we say a non-deficit Groves mechanism is welfare undominated if there exists no other non-deficit Groves mechanism that always has a smaller or equal sum of payments. We focus on two domains: (i) auctions with multiple identical units and unit-demand bidders, and (ii) mechanisms for public project problems. In the first domain we analytically characterize all welfare undominated Groves mechanisms that are anonymous and have linear payment functions, by showing that the family of optimal-in-expectation linear redistribution mechanisms, which were introduced in [6] and include the Bailey-Cavallo mechanism [1,2], coincides with the family of welfare undominated Groves mechanisms that are anonymous and linear in the setting we study. In the second domain we show that the classic VCG (Clarke) mechanism is welfare undominated for the class of public project problems with equal participation costs, but is not undominated for a more general class.<|reference_end|>
arxiv
@article{apt2008welfare, title={Welfare Undominated Groves Mechanisms}, author={Krzysztof R. Apt, Vincent Conitzer, Mingyu Guo and Evangelos Markakis}, journal={arXiv preprint arXiv:0810.2865}, year={2008}, archivePrefix={arXiv}, eprint={0810.2865}, primaryClass={cs.GT} }
apt2008welfare
arxiv-5180
0810.2877
Sheaves and geometric logic and applications to the modular verification of complex systems
<|reference_start|>Sheaves and geometric logic and applications to the modular verification of complex systems: In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as sheaves over a suitable topological space. In this context, geometric logic can be used to describe which local properties (i.e. properties of individual systems) are preserved, at a global level, when interconnecting the systems. The main area of application is to modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting controllers for trains on a rail track.<|reference_end|>
arxiv
@article{sofronie-stokkermans2008sheaves, title={Sheaves and geometric logic and applications to the modular verification of complex systems}, author={Viorica Sofronie-Stokkermans}, journal={arXiv preprint arXiv:0810.2877}, year={2008}, archivePrefix={arXiv}, eprint={0810.2877}, primaryClass={cs.LO} }
sofronie-stokkermans2008sheaves
arxiv-5181
0810.2891
Taming Modal Impredicativity: Superlazy Reduction
<|reference_start|>Taming Modal Impredicativity: Superlazy Reduction: Pure, or type-free, Linear Logic proof nets are Turing complete once cut-elimination is considered as computation. We introduce modal impredicativity as a new form of impredicativity that makes the cost of cut-elimination problematic from a complexity point of view. Modal impredicativity occurs when, during reduction, the conclusion of a residual of a box b interacts with a node that belongs to the proof net inside another residual of b. Technically speaking, superlazy reduction is a new notion of reduction that makes it possible to control modal impredicativity. More specifically, superlazy reduction replicates a box only when all its copies are opened. This makes the overall cost of reducing a proof net finite and predictable. Specifically, superlazy reduction applied to any pure proof net takes primitive recursive time. Moreover, any primitive recursive function can be computed by a pure proof net via superlazy reduction.<|reference_end|>
arxiv
@article{lago2008taming, title={Taming Modal Impredicativity: Superlazy Reduction}, author={Ugo Dal Lago, Luca Roversi, Luca Vercelli}, journal={arXiv preprint arXiv:0810.2891}, year={2008}, archivePrefix={arXiv}, eprint={0810.2891}, primaryClass={cs.LO} }
lago2008taming
arxiv-5182
0810.2924
BER and Outage Probability Approximations for LMMSE Detectors on Correlated MIMO Channels
<|reference_start|>BER and Outage Probability Approximations for LMMSE Detectors on Correlated MIMO Channels: This paper is devoted to the study of the performance of the Linear Minimum Mean-Square Error receiver for (receive) correlated Multiple-Input Multiple-Output systems. By random matrix theory, it is well-known that the Signal-to-Noise Ratio (SNR) at the output of this receiver behaves asymptotically like a Gaussian random variable as the number of receive and transmit antennas converge to +$\infty$ at the same rate. However, since this approximation is inaccurate for the estimation of some performance metrics such as the Bit Error Rate and the outage probability, especially for small system dimensions, Li et al. convincingly proposed to assume that the SNR follows a generalized Gamma distribution whose parameters are tuned by computing the first three asymptotic moments of the SNR. In this article, this technique is generalized to (receive) correlated channels, and closed-form expressions for the first three asymptotic moments of the SNR are provided. To obtain these results, a random matrix theory technique adapted to matrices with Gaussian elements is used. This technique is believed to be simple, efficient, and of broad interest in wireless communications. Simulations are provided, and show that the proposed technique yields in general a good accuracy, even for small system dimensions.<|reference_end|>
arxiv
@article{kammoun2008ber, title={BER and Outage Probability Approximations for LMMSE Detectors on Correlated MIMO Channels}, author={Abla Kammoun, Malika Kharouf, Walid Hachem, Jamal Najim}, journal={arXiv preprint arXiv:0810.2924}, year={2008}, archivePrefix={arXiv}, eprint={0810.2924}, primaryClass={cs.IT math.IT} }
kammoun2008ber
arxiv-5183
0810.2953
On Power Control and Frequency Reuse in the Two User Cognitive Channel
<|reference_start|>On Power Control and Frequency Reuse in the Two User Cognitive Channel: This paper considers the generalized cognitive radio channel where the secondary user is allowed to reuse the frequency during both the idle and active periods of the primary user, as long as the primary rate remains the same. In this setting, the optimal power allocation policy with single-input single-output (SISO) primary and secondary channels is explored. Interestingly, the offered gain resulting from the frequency reuse during the active periods of the spectrum is shown to disappear in both the low and high signal-to-noise ratio (SNR) regimes. We then argue that this drawback in the high SNR region can be avoided by equipping both the primary and secondary transmitters with multiple antennas. Finally, the scenario consisting of SISO primary and multi-input multi-output (MIMO) secondary channels is investigated. Here, a simple Zero-Forcing approach is shown to significantly outperform the celebrated Decoding-Forwarding-Dirty Paper Coding strategy (especially in the high SNR regime).<|reference_end|>
arxiv
@article{koyluoglu2008on, title={On Power Control and Frequency Reuse in the Two User Cognitive Channel}, author={Onur Ozan Koyluoglu and Hesham El Gamal}, journal={arXiv preprint arXiv:0810.2953}, year={2008}, archivePrefix={arXiv}, eprint={0810.2953}, primaryClass={cs.IT math.IT} }
koyluoglu2008on
arxiv-5184
0810.3023
Iterated Regret Minimization: A More Realistic Solution Concept
<|reference_start|>Iterated Regret Minimization: A More Realistic Solution Concept: For some well-known games, such as the Traveler's Dilemma or the Centipede Game, traditional game-theoretic solution concepts--and most notably Nash equilibrium--predict outcomes that are not consistent with empirical observations. In this paper, we introduce a new solution concept, iterated regret minimization, which exhibits the same qualitative behavior as that observed in experiments in many games of interest, including Traveler's Dilemma, the Centipede Game, Nash bargaining, and Bertrand competition. As the name suggests, iterated regret minimization involves the iterated deletion of strategies that do not minimize regret.<|reference_end|>
arxiv
@article{halpern2008iterated, title={Iterated Regret Minimization: A More Realistic Solution Concept}, author={Joseph Y. Halpern and Rafael Pass}, journal={arXiv preprint arXiv:0810.3023}, year={2008}, archivePrefix={arXiv}, eprint={0810.3023}, primaryClass={cs.GT} }
halpern2008iterated
arxiv-5185
0810.3058
Watermarking Digital Images Based on a Content Based Image Retrieval Technique
<|reference_start|>Watermarking Digital Images Based on a Content Based Image Retrieval Technique: This work focuses on the implementation of a robust watermarking algorithm for digital images, based on an innovative spread-spectrum analysis algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. Highly robust watermarking algorithms apply "detectable watermarks", for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the detection algorithm to apply all the keys to the digital images. This is inefficient for very large image databases. On the other hand, "readable" watermarks may prove weaker but easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both "detectable" and "readable" watermarks. The result is a fast and robust watermarking algorithm.<|reference_end|>
arxiv
@article{tsolis2008watermarking, title={Watermarking Digital Images Based on a Content Based Image Retrieval Technique}, author={Dimitrios K. Tsolis, Spyros Sioutas and Theodore S. Papatheodorou}, journal={arXiv preprint arXiv:0810.3058}, year={2008}, archivePrefix={arXiv}, eprint={0810.3058}, primaryClass={cs.DS cs.CR} }
tsolis2008watermarking
arxiv-5186
0810.3076
Combining Semantic Wikis and Controlled Natural Language
<|reference_start|>Combining Semantic Wikis and Controlled Natural Language: We demonstrate AceWiki, a semantic wiki that uses the controlled natural language Attempto Controlled English (ACE). The goal is to enable easy creation and modification of ontologies through the web. Texts in ACE can automatically be translated into first-order logic and other languages, for example OWL. A previous evaluation showed that ordinary people are able to use AceWiki without being instructed.<|reference_end|>
arxiv
@article{kuhn2008combining, title={Combining Semantic Wikis and Controlled Natural Language}, author={Tobias Kuhn}, journal={In Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC2008), CEUR Workshop Proceedings, Volume 401, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0810.3076}, primaryClass={cs.HC cs.AI} }
kuhn2008combining
arxiv-5187
0810.3093
Detect overlapping and hierarchical community structure in networks
<|reference_start|>Detect overlapping and hierarchical community structure in networks: Clustering and community structure is crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. This algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. The examples of application to real world networks give excellent results.<|reference_end|>
arxiv
@article{shen2008detect, title={Detect overlapping and hierarchical community structure in networks}, author={Huawei Shen, Xueqi Cheng, Kai Cai, Mao-Bin Hu}, journal={Physica A 388 (2009) 1706-1712}, year={2008}, doi={10.1016/j.physa.2008.12.021}, archivePrefix={arXiv}, eprint={0810.3093}, primaryClass={cs.CY physics.soc-ph} }
shen2008detect
arxiv-5188
0810.3125
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts
<|reference_start|>On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts: The article presents a new interpretation for Zipf-Mandelbrot's law in natural language which rests on two areas of information theory. Firstly, we construct a new class of grammar-based codes and, secondly, we investigate properties of strongly nonergodic stationary processes. The motivation for the joint discussion is to prove a proposition with a simple informal statement: If a text of length $n$ describes $n^\beta$ independent facts in a repetitive way then the text contains at least $n^\beta/\log n$ different words, under suitable conditions on $n$. In the formal statement, two modeling postulates are adopted. Firstly, the words are understood as nonterminal symbols of the shortest grammar-based encoding of the text. Secondly, the text is assumed to be emitted by a finite-energy strongly nonergodic source whereas the facts are binary IID variables predictable in a shift-invariant way.<|reference_end|>
arxiv
@article{dębowski2008on, title={On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts}, author={{\L}ukasz D\k{e}bowski}, journal={IEEE Transactions on Information Theory 57:4589-4599, 2011}, year={2008}, doi={10.1109/TIT.2011.2145170}, archivePrefix={arXiv}, eprint={0810.3125}, primaryClass={cs.IT cs.CL math.IT} }
dębowski2008on
arxiv-5189
0810.3136
On the Complexity of Core, Kernel, and Bargaining Set
<|reference_start|>On the Complexity of Core, Kernel, and Bargaining Set: Coalitional games are mathematical models suited to analyze scenarios where players can collaborate by forming coalitions in order to obtain higher worths than by acting in isolation. A fundamental problem for coalitional games is to single out the most desirable outcomes in terms of appropriate notions of worth distributions, which are usually called solution concepts. Motivated by the fact that decisions taken by realistic players cannot involve unbounded resources, recent computer science literature reconsidered the definition of such concepts by advocating the relevance of assessing the amount of resources needed for their computation in terms of their computational complexity. By following this avenue of research, the paper provides a complete picture of the complexity issues arising with three prominent solution concepts for coalitional games with transferable utility, namely, the core, the kernel, and the bargaining set, whenever the game worth-function is represented in some reasonable compact form (otherwise, if the worths of all coalitions are explicitly listed, the input sizes are so large that complexity problems are---artificially---trivial). The starting investigation point is the setting of graph games, about which various open questions were stated in the literature. The paper gives an answer to these questions, and in addition provides new insights on the setting, by characterizing the computational complexity of the three concepts in some relevant generalizations and specializations.<|reference_end|>
arxiv
@article{greco2008on, title={On the Complexity of Core, Kernel, and Bargaining Set}, author={Gianluigi Greco, Enrico Malizia, Luigi Palopoli, Francesco Scarcello}, journal={Artif. Intell. 175(12-13): 1877-1910 (2011)}, year={2008}, doi={10.1016/j.artint.2011.06.002}, archivePrefix={arXiv}, eprint={0810.3136}, primaryClass={cs.GT cs.AI cs.CC} }
greco2008on
arxiv-5190
0810.3150
Semidefinite Programming for Min-Max Problems and Games
<|reference_start|>Semidefinite Programming for Min-Max Problems and Games: We introduce two min-max problems: the first problem is to minimize the supremum of finitely many rational functions over a compact basic semi-algebraic set whereas the second problem is a 2-player zero-sum polynomial game in randomized strategies and with compact basic semi-algebraic pure strategy sets. It is proved that their optimal solution can be approximated by solving a hierarchy of semidefinite relaxations, in the spirit of the moment approach developed in Lasserre. This provides a unified approach and a class of algorithms to approximate all Nash equilibria and min-max strategies of many static and dynamic games. Each semidefinite relaxation can be solved in time which is polynomial in its input size and practice from global optimization suggests that very often few relaxations are needed for a good approximation (and sometimes even finite convergence).<|reference_end|>
arxiv
@article{laraki2008semidefinite, title={Semidefinite Programming for Min-Max Problems and Games}, author={Rida Laraki (CECO), Jean B. Lasserre (LAAS)}, journal={arXiv preprint arXiv:0810.3150}, year={2008}, number={Rapport LAAS 08582}, archivePrefix={arXiv}, eprint={0810.3150}, primaryClass={math.OC cs.GT} }
laraki2008semidefinite
arxiv-5191
0810.3162
Clone Theory: Its Syntax and Semantics, Applications to Universal Algebra, Lambda Calculus and Algebraic Logic
<|reference_start|>Clone Theory: Its Syntax and Semantics, Applications to Universal Algebra, Lambda Calculus and Algebraic Logic: The primary goal of this paper is to present a unified way to transform the syntax of a logic system into a certain initial algebraic structure so that it can be studied algebraically. The algebraic structures which one may choose for this purpose are various clones over a full subcategory of a category. We show that the syntax of equational logic, lambda calculus and first order logic can be represented as clones or right algebras of clones over the set of positive integers. The semantics is then represented by structures derived from left algebras of these clones.<|reference_end|>
arxiv
@article{luo2008clone, title={Clone Theory: Its Syntax and Semantics, Applications to Universal Algebra, Lambda Calculus and Algebraic Logic}, author={Zhaohua Luo}, journal={arXiv preprint arXiv:0810.3162}, year={2008}, archivePrefix={arXiv}, eprint={0810.3162}, primaryClass={cs.LO} }
luo2008clone
arxiv-5192
0810.3163
Reduced Kronecker coefficients and counter-examples to Mulmuley's strong saturation conjecture SH
<|reference_start|>Reduced Kronecker coefficients and counter-examples to Mulmuley's strong saturation conjecture SH: We provide counter-examples to Mulmuley's strong saturation conjecture (strong SH) for the Kronecker coefficients. This conjecture was proposed in the setting of Geometric Complexity Theory to show that deciding whether or not a Kronecker coefficient is zero can be done in polynomial time. We also provide a short proof of the #P-hardness of computing the Kronecker coefficients. Both results rely on the connections between the Kronecker coefficients and another family of structural constants in the representation theory of the symmetric groups: Murnaghan's reduced Kronecker coefficients. An appendix by Mulmuley introduces a relaxed form of the saturation hypothesis SH, still strong enough for the aims of Geometric Complexity Theory.<|reference_end|>
arxiv
@article{briand2008reduced, title={Reduced Kronecker coefficients and counter-examples to Mulmuley's strong saturation conjecture SH}, author={Emmanuel Briand (Universidad de Sevilla), Rosa Orellana (Dartmouth College), Mercedes Rosas (Universidad de Sevilla)}, journal={Computational Complexity, vol. 18(4) pp. 577-600 (2009)}, year={2008}, doi={10.1007/s00037-009-0279-z}, archivePrefix={arXiv}, eprint={0810.3163}, primaryClass={math.CO cs.CC math.RT} }
briand2008reduced
arxiv-5193
0810.3182
Optimal Strategies in Sequential Bidding
<|reference_start|>Optimal Strategies in Sequential Bidding: We are interested in mechanisms that maximize social welfare. In [1] this problem was studied for multi-unit auctions with unit demand bidders and for the public project problem, and in each case social welfare undominated mechanisms in the class of feasible and incentive compatible mechanisms were identified. One way to improve upon these optimality results is by allowing the players to move sequentially. With this in mind, we study here sequential versions of two feasible Groves mechanisms used for single item auctions: the Vickrey auction and the Bailey-Cavallo mechanism. Because of the absence of dominant strategies in this sequential setting, we focus on a weaker concept of an optimal strategy. For each mechanism we introduce natural optimal strategies and observe that in each mechanism these strategies exhibit different behaviour. However, we then show that among all optimal strategies, the one we introduce for each mechanism maximizes the social welfare when each player follows it. The resulting social welfare can be larger than the one obtained in the simultaneous setting. Finally, we show that, when interpreting both mechanisms as simultaneous ones, the vectors of the proposed strategies form a Pareto optimal Nash equilibrium in the class of optimal strategies.<|reference_end|>
arxiv
@article{apt2008optimal, title={Optimal Strategies in Sequential Bidding}, author={Krzysztof R. Apt and Vangelis Markakis}, journal={arXiv preprint arXiv:0810.3182}, year={2008}, archivePrefix={arXiv}, eprint={0810.3182}, primaryClass={cs.GT} }
apt2008optimal
arxiv-5194
0810.3199
A Distributed Platform for Mechanism Design
<|reference_start|>A Distributed Platform for Mechanism Design: We describe a structured system for distributed mechanism design. It consists of a sequence of layers. The lower layers deal with the operations relevant for distributed computing only, while the upper layers are concerned only with communication among players, including broadcasting and multicasting, and distributed decision making. This yields a highly flexible distributed system whose specific applications are realized as instances of its top layer. This design supports fault-tolerance, prevents manipulations and makes it possible to implement distributed policing. The system is implemented in Java. We illustrate it by discussing a number of implemented examples.<|reference_end|>
arxiv
@article{apt2008a, title={A Distributed Platform for Mechanism Design}, author={Krzysztof R. Apt and Farhad Arbab and Huiye Ma}, journal={arXiv preprint arXiv:0810.3199}, year={2008}, doi={10.1109/CIMCA.2008.9}, archivePrefix={arXiv}, eprint={0810.3199}, primaryClass={cs.GT cs.DC} }
apt2008a
arxiv-5195
0810.3203
A cache-friendly truncated FFT
<|reference_start|>A cache-friendly truncated FFT: We describe a cache-friendly version of van der Hoeven's truncated FFT and inverse truncated FFT, focusing on the case of `large' coefficients, such as those arising in the Schonhage--Strassen algorithm for multiplication in Z[x]. We describe two implementations and examine their performance.<|reference_end|>
arxiv
@article{harvey2008a, title={A cache-friendly truncated FFT}, author={David Harvey}, journal={arXiv preprint arXiv:0810.3203}, year={2008}, archivePrefix={arXiv}, eprint={0810.3203}, primaryClass={cs.SC cs.DS} }
harvey2008a
arxiv-5196
0810.3226
Optimal Transmission Strategy and Explicit Capacity Region for Broadcast Z Channels
<|reference_start|>Optimal Transmission Strategy and Explicit Capacity Region for Broadcast Z Channels: This paper provides an explicit expression for the capacity region of the two-user broadcast Z channel and proves that the optimal boundary can be achieved by independent encoding of each user. Specifically, the information messages corresponding to each user are encoded independently and the OR of these two encoded streams is transmitted. Nonlinear turbo codes that provide a controlled distribution of ones and zeros are used to demonstrate a low-complexity scheme that operates close to the optimal boundary.<|reference_end|>
arxiv
@article{xie2008optimal, title={Optimal Transmission Strategy and Explicit Capacity Region for Broadcast Z Channels}, author={Bike Xie and Miguel Griot and Andres I. Vila Casado and Richard D. Wesel}, journal={IEEE Transactions on Information Theory, Vol. 53, No. 9, pp 4296-4304, September 2008}, year={2008}, archivePrefix={arXiv}, eprint={0810.3226}, primaryClass={cs.IT math.IT} }
xie2008optimal
arxiv-5197
0810.3227
Dynamic Approaches to In-Network Aggregation
<|reference_start|>Dynamic Approaches to In-Network Aggregation: Collaboration between small-scale wireless devices hinges on their ability to infer properties shared across multiple nearby nodes. Wireless-enabled mobile devices in particular create a highly dynamic environment not conducive to distributed reasoning about such global properties. This paper addresses a specific instance of this problem: distributed aggregation. We present extensions to existing unstructured aggregation protocols that enable estimation of count, sum, and average aggregates in highly dynamic environments. With the modified protocols, devices with only limited connectivity can maintain estimates of the aggregate, despite \textit{unexpected} peer departures and arrivals. Our analysis of these aggregate maintenance extensions demonstrates their effectiveness in unstructured environments despite high levels of node mobility.<|reference_end|>
arxiv
@article{kennedy2008dynamic, title={Dynamic Approaches to In-Network Aggregation}, author={Oliver Kennedy and Christoph Koch and Al Demers}, journal={arXiv preprint arXiv:0810.3227}, year={2008}, archivePrefix={arXiv}, eprint={0810.3227}, primaryClass={cs.DC cs.DB cs.DS} }
kennedy2008dynamic
arxiv-5198
0810.3283
Quantum robot: structure, algorithms and applications
<|reference_start|>Quantum robot: structure, algorithms and applications: This paper has been withdrawn.<|reference_end|>
arxiv
@article{dong2008quantum, title={Quantum robot: structure, algorithms and applications}, author={Daoyi Dong and Chunlin Chen and Chenbin Zhang and Zonghai Chen}, journal={arXiv preprint arXiv:0810.3283}, year={2008}, archivePrefix={arXiv}, eprint={0810.3283}, primaryClass={cs.RO cs.AI quant-ph} }
dong2008quantum
arxiv-5199
0810.3294
A static theory of promises
<|reference_start|>A static theory of promises: We discuss the concept of promises within a framework that can be applied to either humans or technology. We compare promises to the more established notion of obligations and find promises to be both simpler and more effective at reducing uncertainty in behavioural outcomes.<|reference_end|>
arxiv
@article{bergstra2008a, title={A static theory of promises}, author={Jan A. Bergstra and Mark Burgess}, journal={arXiv preprint arXiv:0810.3294}, year={2008}, archivePrefix={arXiv}, eprint={0810.3294}, primaryClass={cs.MA cs.SE} }
bergstra2008a
arxiv-5200
0810.3332
A sound spatio-temporal Hoare logic for the verification of structured interactive programs with registers and voices
<|reference_start|>A sound spatio-temporal Hoare logic for the verification of structured interactive programs with registers and voices: Interactive systems with registers and voices (shortly, "rv-systems") are a model for interactive computing obtained closing register machines with respect to a space-time duality transformation ("voices" are the time-dual counterparts of "registers"). In the same vein, AGAPIA v0.1, a structured programming language for rv-systems, is the space-time dual closure of classical while programs (over a specific type of data). Typical AGAPIA programs describe open processes located at various sites and having their temporal windows of adequate reaction to the environment. The language naturally supports process migration, structured interaction, and deployment of components on heterogeneous machines. In this paper a sound Hoare-like spatio-temporal logic for the verification of AGAPIA v0.1 programs is introduced. As a case study, a formal verification proof of a popular distributed termination detection protocol is presented.<|reference_end|>
arxiv
@article{dragoi2008a, title={A sound spatio-temporal Hoare logic for the verification of structured interactive programs with registers and voices}, author={Cezara Dragoi and Gheorghe Stefanescu}, journal={arXiv preprint arXiv:0810.3332}, year={2008}, archivePrefix={arXiv}, eprint={0810.3332}, primaryClass={cs.PL cs.LO} }
dragoi2008a