corpus_id: string (lengths 7–12)
paper_id: string (lengths 9–16)
title: string (lengths 1–261)
abstract: string (lengths 70–4.02k)
source: string (1 distinct value)
bibtex: string (lengths 208–20.9k)
citation_key: string (lengths 6–100)
arxiv-7101
0904.2136
On the Cooperation of the Constraint Domains H, R and FD in CFLP
<|reference_start|>On the Cooperation of the Constraint Domains H, R and FD in CFLP: This paper presents a computational model for the cooperation of constraint domains and an implementation for a particular case of practical importance. The computational model supports declarative programming with lazy and possibly higher-order functions, predicates, and the cooperation of different constraint domains equipped with their respective solvers, relying on a so-called Constraint Functional Logic Programming (CFLP) scheme. The implementation has been developed on top of the CFLP system TOY, supporting the cooperation of the three domains H, R and FD, which supply equality and disequality constraints over symbolic terms, arithmetic constraints over the real numbers, and finite domain constraints over the integers, respectively. The computational model has been proved sound and complete w.r.t. the declarative semantics provided by the CFLP scheme, while the implemented system has been tested with a set of benchmarks and shown to behave quite efficiently in comparison to the closest related approach we are aware of. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|>
arxiv
@article{estévez-martín2009on, title={On the Cooperation of the Constraint Domains H, R and FD in CFLP}, author={S. Estévez-Martín, T. Hortalá-González, M. Rodríguez-Artalejo, R. del Vado-Vírseda, F. Sáenz-Pérez, and A. J. Fernández}, journal={arXiv preprint arXiv:0904.2136}, year={2009}, archivePrefix={arXiv}, eprint={0904.2136}, primaryClass={cs.PL cs.SC} }
estévez-martín2009on
arxiv-7102
0904.2160
Inferring Dynamic Bayesian Networks using Frequent Episode Mining
<|reference_start|>Inferring Dynamic Bayesian Networks using Frequent Episode Mining: Motivation: Several different threads of research have been proposed for modeling and mining temporal data. On the one hand, approaches such as dynamic Bayesian networks (DBNs) provide a formal probabilistic basis to model relationships between time-indexed random variables but these models are intractable to learn in the general case. On the other, algorithms such as frequent episode mining are scalable to large datasets but do not exhibit the rigorous probabilistic interpretations that are the mainstay of the graphical models literature. Results: We present a unification of these two seemingly diverse threads of research, by demonstrating how dynamic (discrete) Bayesian networks can be inferred from the results of frequent episode mining. This helps bridge the modeling emphasis of the former with the counting emphasis of the latter. First, we show how, under reasonable assumptions on data characteristics and on influences of random variables, the optimal DBN structure can be computed using a greedy, local algorithm. Next, we connect the optimality of the DBN structure with the notion of fixed-delay episodes and their counts of distinct occurrences. Finally, to demonstrate the practical feasibility of our approach, we focus on a specific (but broadly applicable) class of networks, called excitatory networks, and show how the search for the optimal DBN structure can be conducted using just information from frequent episodes. Applications to datasets gathered from mathematical models of spiking neurons, as well as to real neuroscience datasets, are presented. Availability: Algorithmic implementations, simulator codebases, and datasets are available from our website at http://neural-code.cs.vt.edu/dbn<|reference_end|>
arxiv
@article{patnaik2009inferring, title={Inferring Dynamic Bayesian Networks using Frequent Episode Mining}, author={Debprakash Patnaik and Srivatsan Laxman and Naren Ramakrishnan}, journal={arXiv preprint arXiv:0904.2160}, year={2009}, archivePrefix={arXiv}, eprint={0904.2160}, primaryClass={cs.LG} }
patnaik2009inferring
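
The abstract above (arxiv-7102) ties optimal DBN structure to counts of distinct occurrences of fixed-delay episodes. Below is a minimal Python sketch of that counting primitive only, not the authors' algorithm; the stream format and function name are our own assumptions.

from collections import defaultdict

def count_fixed_delay(stream, a, b, d):
    # Count distinct occurrences of the fixed-delay episode
    # "event a, then event b exactly d time steps later" (illustrative only).
    times = defaultdict(set)
    for t, e in stream:
        times[e].add(t)
    return sum(1 for t in times[a] if t + d in times[b])

stream = [(0, "A"), (2, "B"), (3, "A"), (5, "B"), (6, "C")]
print(count_fixed_delay(stream, "A", "B", 2))  # -> 2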
arxiv-7103
0904.2203
A Weakly-Robust PTAS for Minimum Clique Partition in Unit Disk Graphs
<|reference_start|>A Weakly-Robust PTAS for Minimum Clique Partition in Unit Disk Graphs: We consider the problem of partitioning the set of vertices of a given unit disk graph (UDG) into a minimum number of cliques. The problem is NP-hard and various constant factor approximations are known, with the current best ratio of 3. Our main result is a {\em weakly robust} polynomial time approximation scheme (PTAS) for UDGs expressed with edge-lengths: it either (i) computes a clique partition, or (ii) gives a certificate that the graph is not a UDG. In case (i), we show that the computed clique partition is guaranteed to be within a $(1+\epsilon)$ ratio of the optimum if the input is a UDG; however, if the input is not a UDG, the algorithm either computes a clique partition as in case (i), with no guarantee on the quality of the partition, or detects that the graph is not a UDG. Noting that recognition of UDGs is NP-hard even if we are given edge lengths, our PTAS is a weakly-robust algorithm. Our algorithm can be transformed into an $O(\frac{\log^* n}{\epsilon^{O(1)}})$ time distributed PTAS. We also consider a weighted version of the clique partition problem on vertex-weighted UDGs that generalizes the problem. We note some key distinctions with the unweighted version, where ideas useful in obtaining a PTAS break down. Yet, surprisingly, the weighted case admits a $(2+\epsilon)$-approximation algorithm even when the graph is expressed, say, as an adjacency matrix. This improves on the best known 8-approximation for the {\em unweighted} case for UDGs expressed in standard form.<|reference_end|>
arxiv
@article{pirwani2009a, title={A Weakly-Robust PTAS for Minimum Clique Partition in Unit Disk Graphs}, author={Imran A. Pirwani (1), Mohammad R. Salavatipour (1) ((1) Department of Computing Science, University of Alberta, Edmonton, Canada)}, journal={arXiv preprint arXiv:0904.2203}, year={2009}, doi={10.1007/978-3-642-13731-0_19}, archivePrefix={arXiv}, eprint={0904.2203}, primaryClass={cs.CG cs.DC cs.DM cs.DS} }
pirwani2009a
arxiv-7104
0904.2237
On Binary Cyclic Codes with Five Nonzero Weights
<|reference_start|>On Binary Cyclic Codes with Five Nonzero Weights: Let $q=2^n$, $0\leq k\leq n-1$, $n/\gcd(n,k)$ be odd and $k\neq n/3, 2n/3$. In this paper the value distribution of the following exponential sums \[\sum\limits_{x\in \mathbb{F}_q}(-1)^{\mathrm{Tr}_1^n(\alpha x^{2^{2k}+1}+\beta x^{2^k+1}+\gamma x)}\quad(\alpha,\beta,\gamma\in \mathbb{F}_{q})\] is determined. As an application, the weight distribution of the binary cyclic code $\mathcal{C}$, with parity-check polynomial $h_1(x)h_2(x)h_3(x)$ where $h_1(x)$, $h_2(x)$ and $h_3(x)$ are the minimal polynomials of $\pi^{-1}$, $\pi^{-(2^k+1)}$ and $\pi^{-(2^{2k}+1)}$ respectively for a primitive element $\pi$ of $\mathbb{F}_q$, is also determined.<|reference_end|>
arxiv
@article{luo2009on, title={On Binary Cyclic Codes with Five Nonzero Weights}, author={Jinquan Luo}, journal={arXiv preprint arXiv:0904.2237}, year={2009}, archivePrefix={arXiv}, eprint={0904.2237}, primaryClass={cs.IT cs.DM math.CO math.IT} }
luo2009on
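
The value distribution stated in the abstract of arxiv-7104 can be checked by brute force for the smallest admissible parameters. The sketch below takes n = 5 and k = 1 (so the exponents are 2^{2k}+1 = 5 and 2^k+1 = 3) and builds GF(2^5) from the primitive polynomial x^5 + x^2 + 1; these parameter choices are ours, not the paper's.

from collections import Counter

MOD = 0b100101  # x^5 + x^2 + 1, primitive over GF(2)

def gf_mul(a, b):
    # Carry-less multiplication in GF(2^5) with on-the-fly reduction.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b100000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr(a):
    # Absolute trace Tr_1^5(a) = a + a^2 + a^4 + a^8 + a^16, lands in {0, 1}.
    s = 0
    for _ in range(5):
        s ^= a
        a = gf_mul(a, a)
    return s

pows = [(gf_pow(x, 5), gf_pow(x, 3), x) for x in range(32)]

def exp_sum(alpha, beta, gamma):
    return sum((-1) ** tr(gf_mul(alpha, x5) ^ gf_mul(beta, x3) ^ gf_mul(gamma, x1))
               for x5, x3, x1 in pows)

dist = Counter(exp_sum(a, b, c) for a in range(32) for b in range(32) for c in range(32))
print(sorted(dist.items()))  # empirical value distribution over all (alpha, beta, gamma)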
arxiv-7105
0904.2257
The equality problem for infinite words generated by primitive morphisms
<|reference_start|>The equality problem for infinite words generated by primitive morphisms: We study the equality problem for infinite words obtained by iterating morphisms. In particular, we give a practical algorithm to decide whether or not two words generated by primitive morphisms are equal.<|reference_end|>
arxiv
@article{honkala2009the, title={The equality problem for infinite words generated by primitive morphisms}, author={Juha Honkala}, journal={arXiv preprint arXiv:0904.2257}, year={2009}, archivePrefix={arXiv}, eprint={0904.2257}, primaryClass={cs.FL} }
honkala2009the
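
A quick way to experiment with the statement in arxiv-7105 is to iterate two morphisms and compare prefixes of their fixed points; note that agreement on a finite prefix can only refute inequality, whereas the paper's algorithm actually decides equality. The morphisms and prefix length below are our own illustrative choices.

def fixed_point_prefix(morphism, seed, length):
    # Iterate an expanding morphism (dict: letter -> word) from a seed letter
    # until a prefix of the requested length is available.
    w = seed
    while len(w) < length:
        w = "".join(morphism[c] for c in w)
    return w[:length]

thue_morse = {"a": "ab", "b": "ba"}
squared = {"a": "abba", "b": "baab"}  # the square of the Thue-Morse morphism
print(fixed_point_prefix(thue_morse, "a", 32) == fixed_point_prefix(squared, "a", 32))  # True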
arxiv-7106
0904.2290
A Comprehensive study of a New Multipath Energy Aware Routing Protocol for Mobile Ad-hoc Networks
<|reference_start|>A Comprehensive study of a New Multipath Energy Aware Routing Protocol for Mobile Ad-hoc Networks: Maximizing network lifetime is a very challenging issue in routing protocol design for Mobile Ad-hoc NETworks (MANETs), since mobile nodes are powered by limited-capacity batteries. Furthermore, replacing or recharging batteries is often impossible in critical environments (e.g. battlefields, disaster areas, etc.). The proposed MEA-DSR (Multipath Energy-Aware on Demand Source Routing) protocol uses a load distribution policy in order to maximize network lifetime. The simulation results have shown the efficiency of the proposed protocol in comparison to the DSR routing protocol in many difficult scenarios.<|reference_end|>
arxiv
@article{chettibi2009a, title={A Comprehensive study of a New Multipath Energy Aware Routing Protocol for Mobile Ad-hoc Networks}, author={Saloua Chettibi}, journal={arXiv preprint arXiv:0904.2290}, year={2009}, archivePrefix={arXiv}, eprint={0904.2290}, primaryClass={cs.NI} }
chettibi2009a
arxiv-7107
0904.2302
A Fundamental Characterization of Stability in Broadcast Queueing Systems
<|reference_start|>A Fundamental Characterization of Stability in Broadcast Queueing Systems: Stability with respect to a given scheduling policy has become an important issue for wireless communication systems, but it is hard to prove in particular scenarios. In this paper two simple conditions for stability in broadcast channels are derived, which are easy to check. Heuristically, the conditions imply that if the queue length in the system becomes large, the rate allocation is always the solution of a weighted sum rate maximization problem. Furthermore, the change of the weight factors between two time slots becomes smaller, and the weight factors of the users whose queues are bounded while the other queues expand tend to zero. Then it is shown that for any mean arrival rate vector inside the ergodic achievable rate region the system is stable in the strong sense when the given scheduling policy complies with the conditions. In this case the policy is said to be throughput-optimal. Subsequently, some results on the necessity of the presented conditions are provided. Finally, in several application examples it is shown that the results in the paper provide a convenient way to verify the throughput-optimality of policies.<|reference_end|>
arxiv
@article{zhou2009a, title={A Fundamental Characterization of Stability in Broadcast Queueing Systems}, author={Chan Zhou and Gerhard Wunder}, journal={arXiv preprint arXiv:0904.2302}, year={2009}, archivePrefix={arXiv}, eprint={0904.2302}, primaryClass={cs.NI cs.IT math.IT} }
zhou2009a
arxiv-7108
0904.2306
On irreversible dynamic monopolies in general graphs
<|reference_start|>On irreversible dynamic monopolies in general graphs: Consider the following coloring process in a simple directed graph $G(V,E)$ with positive indegrees. Initially, a set $S$ of vertices are white, whereas all the others are black. Thereafter, a black vertex is colored white whenever more than half of its in-neighbors are white. The coloring process ends when no additional vertices can be colored white. If all vertices end up white, we call $S$ an irreversible dynamic monopoly (or dynamo for short) under the strict-majority scenario. An irreversible dynamo under the simple-majority scenario is defined similarly except that a black vertex is colored white when at least half of its in-neighbors are white. We derive upper bounds of $(2/3)\,|\,V\,|$ and $|\,V\,|/2$ on the minimum sizes of irreversible dynamos under the strict and the simple-majority scenarios, respectively. For the special case when $G$ is an undirected connected graph, we prove the existence of an irreversible dynamo with size at most $\lceil |\,V\,|/2 \rceil$ under the strict-majority scenario. Let $\epsilon>0$ be any constant. We also show that, unless $\text{NP}\subseteq \text{TIME}(n^{O(\ln \ln n)}),$ no polynomial-time, $((1/2-\epsilon)\ln |\,V\,|)$-approximation algorithms exist for finding the minimum irreversible dynamo under either the strict or the simple-majority scenario. The inapproximability results hold even for bipartite graphs with diameter at most 8.<|reference_end|>
arxiv
@article{chang2009on, title={On irreversible dynamic monopolies in general graphs}, author={Ching-Lueh Chang and Yuh-Dauh Lyuu}, journal={arXiv preprint arXiv:0904.2306}, year={2009}, archivePrefix={arXiv}, eprint={0904.2306}, primaryClass={cs.DM cs.DC} }
chang2009on
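
The coloring process defined in the abstract of arxiv-7108 is easy to simulate directly, which helps when experimenting with candidate dynamos. A minimal sketch, assuming the graph is given as an in-neighbor map (our own representation):

def final_white_set(adj_in, white, strict=True):
    # adj_in: vertex -> list of in-neighbors (all indegrees positive).
    # Repeatedly color a black vertex white when more than half (strict
    # majority) or at least half (simple majority) of its in-neighbors are white.
    white = set(white)
    changed = True
    while changed:
        changed = False
        for v, preds in adj_in.items():
            if v in white:
                continue
            w = sum(1 for u in preds if u in white)
            if (w > len(preds) / 2) if strict else (w >= len(preds) / 2):
                white.add(v)
                changed = True
    return white

adj_in = {1: [3], 2: [1], 3: [1, 2]}
print(final_white_set(adj_in, {1}) == {1, 2, 3})  # {1} is an irreversible dynamo here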
arxiv-7109
0904.2310
Exact and Approximation Algorithms for Geometric and Capacitated Set Cover Problems with Applications
<|reference_start|>Exact and Approximation Algorithms for Geometric and Capacitated Set Cover Problems with Applications: First, we study geometric variants of the standard set cover problem motivated by the assignment of directional antennas and by shipping with deadlines, providing the first known polynomial-time exact solutions. Next, we consider the following general capacitated set cover problem. We are given a set of elements with real weights and a family S of sets of elements. One can use a set if it is a subset of one of the sets in S and the sum of the weights of its elements is at most one. The goal is to cover all the elements with the allowed sets. We show that any polynomial-time algorithm that approximates the uncapacitated version of the set cover problem with ratio r can be converted to an approximation algorithm for the capacitated version with ratio r + 1.357. In particular, the composition of these two results yields a polynomial-time approximation algorithm with ratio 2.357 for the problem of covering a set of customers, represented by a weighted n-point set, with a minimum number of antennas of variable angular range and fixed capacity. Finally, we provide a PTAS for the dual problem where the number of sets (e.g., antennas) to use is fixed and the task is to minimize the maximum set load, in case the sets correspond to line intervals or arcs.<|reference_end|>
arxiv
@article{berman2009exact, title={Exact and Approximation Algorithms for Geometric and Capacitated Set Cover Problems with Applications}, author={Piotr Berman, Marek Karpinski, Andrzej Lingas}, journal={arXiv preprint arXiv:0904.2310}, year={2009}, archivePrefix={arXiv}, eprint={0904.2310}, primaryClass={cs.CC cs.DM cs.DS} }
berman2009exact
arxiv-7110
0904.2311
Source Coding with a Side Information "Vending Machine"
<|reference_start|>Source Coding with a Side Information "Vending Machine": We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commensurate with the quality of the side information they yield, and an overall per-symbol cost constraint may be imposed. We characterize the achievable tradeoffs between rate, distortion, and cost in some of these problem settings. Among our findings is the fact that even in the absence of a cost constraint, greedily choosing the action associated with the `best' side information is, in general, sub-optimal. A few examples are worked out.<|reference_end|>
arxiv
@article{weissman2009source, title={Source Coding with a Side Information "Vending Machine"}, author={Tsachy Weissman and Haim H. Permuter}, journal={arXiv preprint arXiv:0904.2311}, year={2009}, archivePrefix={arXiv}, eprint={0904.2311}, primaryClass={cs.IT math.IT} }
weissman2009source
arxiv-7111
0904.2320
Why Global Performance is a Poor Metric for Verifying Convergence of Multi-agent Learning
<|reference_start|>Why Global Performance is a Poor Metric for Verifying Convergence of Multi-agent Learning: Experimental verification has been the method of choice for verifying the stability of a multi-agent reinforcement learning (MARL) algorithm as the number of agents grows and theoretical analysis becomes prohibitively complex. For cooperative agents, where the ultimate goal is to optimize some global metric, the stability is usually verified by observing the evolution of the global performance metric over time. If the global metric improves and eventually stabilizes, it is considered a reasonable verification of the system's stability. The main contribution of this note is establishing the need for better experimental frameworks and measures to assess the stability of large-scale adaptive cooperative systems. We show an experimental case study where the stability of the global performance metric can be rather deceiving, hiding an underlying instability in the system that later leads to a significant drop in performance. We then propose an alternative metric that relies on agents' local policies and show, experimentally, that our proposed metric is more effective (than the traditional global performance metric) in exposing the instability of MARL algorithms.<|reference_end|>
arxiv
@article{abdallah2009why, title={Why Global Performance is a Poor Metric for Verifying Convergence of Multi-agent Learning}, author={Sherief Abdallah}, journal={arXiv preprint arXiv:0904.2320}, year={2009}, archivePrefix={arXiv}, eprint={0904.2320}, primaryClass={cs.MA cs.LG} }
abdallah2009why
arxiv-7112
0904.2323
A Symbolic Summation Approach to Find Optimal Nested Sum Representations
<|reference_start|>A Symbolic Summation Approach to Find Optimal Nested Sum Representations: We consider the following problem: Given a nested sum expression, find a sum representation such that the nested depth is minimal. We obtain a symbolic summation framework that solves this problem for sums defined, e.g., over hypergeometric, $q$-hypergeometric or mixed hypergeometric expressions. Recently, our methods have found applications in quantum field theory.<|reference_end|>
arxiv
@article{schneider2009a, title={A Symbolic Summation Approach to Find Optimal Nested Sum Representations}, author={Carsten Schneider}, journal={arXiv preprint arXiv:0904.2323}, year={2009}, archivePrefix={arXiv}, eprint={0904.2323}, primaryClass={cs.SC math.CO math.NT} }
schneider2009a
arxiv-7113
0904.2340
Explicit fairness in testing semantics
<|reference_start|>Explicit fairness in testing semantics: In this paper we investigate fair computations in the pi-calculus. Following Costa and Stirling's approach for CCS-like languages, we consider a method to label process actions in order to filter out unfair computations. We contrast the existing fair-testing notion with those that naturally arise by imposing weak and strong fairness. This comparison provides insight about the expressiveness of the various `fair' testing semantics and about their discriminating power.<|reference_end|>
arxiv
@article{cacciagrano2009explicit, title={Explicit fairness in testing semantics}, author={D. Cacciagrano, F. Corradini, C. Palamidessi}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (June 22, 2009) lmcs:1134}, year={2009}, doi={10.2168/LMCS-5(2:15)2009}, archivePrefix={arXiv}, eprint={0904.2340}, primaryClass={cs.LO} }
cacciagrano2009explicit
arxiv-7114
0904.2375
The Zeta Function of a Periodic-Finite-Type Shift
<|reference_start|>The Zeta Function of a Periodic-Finite-Type Shift: The class of periodic-finite-type shifts (PFTs) is a class of sofic shifts that strictly includes the class of shifts of finite type (SFTs), and the zeta function of a PFT is a generating function for the number of periodic sequences in the shift. In this paper, we derive a useful formula for the zeta function of a PFT. This formula allows the zeta function of a PFT to be computed more efficiently than by specializing a formula known for generic sofic shifts.<|reference_end|>
arxiv
@article{manada2009the, title={The Zeta Function of a Periodic-Finite-Type Shift}, author={Akiko Manada and Navin Kashyap}, journal={arXiv preprint arXiv:0904.2375}, year={2009}, archivePrefix={arXiv}, eprint={0904.2375}, primaryClass={cs.IT math.IT} }
manada2009the
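
For the subclass of shifts of finite type mentioned in the abstract of arxiv-7114, the zeta function has the classical closed form 1/det(I - zA), where A is the 0/1 transition matrix and tr(A^n) counts period-n points; the paper derives the analogous (more involved) formula for PFTs, which the sketch below does not implement.

import numpy as np

A = np.array([[1, 1], [1, 0]])  # golden-mean shift: no two consecutive 1s

def periodic_points(A, n):
    # Number of period-n points of the SFT defined by transition matrix A.
    return int(np.trace(np.linalg.matrix_power(A, n)))

print([periodic_points(A, n) for n in range(1, 7)])  # [1, 3, 4, 7, 11, 18] (Lucas numbers)

z = 0.1
print(1.0 / np.linalg.det(np.eye(2) - z * A))  # the zeta function evaluated at z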
arxiv-7115
0904.2385
The Category Theoretic Solution of Recursive Program Schemes
<|reference_start|>The Category Theoretic Solution of Recursive Program Schemes: This paper provides a general account of the notion of recursive program schemes, studying both uninterpreted and interpreted solutions. It can be regarded as the category-theoretic version of the classical area of algebraic semantics. The overall assumptions needed are small indeed: working only in categories with "enough final coalgebras" we show how to formulate, solve, and study recursive program schemes. Our general theory is algebraic and so avoids using ordered or metric structures. Our work generalizes the previous approaches which do use this extra structure by isolating the key concepts needed to study substitution in infinite trees, including second-order substitution. As special cases of our interpreted solutions we obtain the usual denotational semantics using complete partial orders, and the one using complete metric spaces. Our theory also encompasses implicitly defined objects which are not usually taken to be related to recursive program schemes. For example, the classical Cantor two-thirds set falls out as an interpreted solution (in our sense) of a recursive program scheme.<|reference_end|>
arxiv
@article{milius2009the, title={The Category Theoretic Solution of Recursive Program Schemes}, author={Stefan Milius, Lawrence S. Moss}, journal={Theoret. Comput. Sci. 366 (2006), 3-59}, year={2009}, archivePrefix={arXiv}, eprint={0904.2385}, primaryClass={cs.LO math.CT} }
milius2009the
arxiv-7116
0904.2389
Extracting the multiscale backbone of complex weighted networks
<|reference_start|>Extracting the multiscale backbone of complex weighted networks: A large number of complex systems find a natural abstraction in the form of weighted networks whose nodes represent the elements of the system and the weighted edges identify the presence of an interaction and its relative strength. In recent years, the study of an increasing number of large scale networks has highlighted the statistical heterogeneity of their interaction pattern, with degree and weight distributions which vary over many orders of magnitude. These features, along with the large number of elements and links, make the extraction of the truly relevant connections forming the network's backbone a very challenging problem. More specifically, coarse-graining approaches and filtering techniques struggle with the multiscale nature of large scale systems. Here we define a filtering method that offers a practical procedure to extract the relevant connection backbone in complex multiscale networks, preserving the edges that represent statistically significant deviations with respect to a null model for the local assignment of weights to edges. An important aspect of the method is that it does not belittle small-scale interactions and operates at all scales defined by the weight distribution. We apply our method to real world network instances and compare the obtained results with alternative backbone extraction techniques.<|reference_end|>
arxiv
@article{serrano2009extracting, title={Extracting the multiscale backbone of complex weighted networks}, author={M. Angeles Serrano, Marian Boguna, Alessandro Vespignani}, journal={Proc. Natl. Acad. Sci. USA 106, 6483-6488 (2009)}, year={2009}, doi={10.1073/pnas.0808904106}, archivePrefix={arXiv}, eprint={0904.2389}, primaryClass={physics.soc-ph cond-mat.dis-nn cs.NI} }
serrano2009extracting
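
The null-model test described in the abstract of arxiv-7116 has a simple closed form: if an edge carries a fraction p of the strength of a degree-k node, its significance under uniformly random weight assignment is (1 - p)^(k - 1), and the edge is kept when this falls below a chosen level alpha. A minimal sketch of such a disparity-style filter, with data layout and threshold of our own choosing:

def multiscale_backbone(edges, alpha=0.05):
    # edges: list of (u, v, weight) for an undirected weighted graph.
    strength, degree = {}, {}
    for u, v, w in edges:
        for x in (u, v):
            strength[x] = strength.get(x, 0.0) + w
            degree[x] = degree.get(x, 0) + 1
    backbone = []
    for u, v, w in edges:
        for x in (u, v):  # keep the edge if it is significant for either endpoint
            k, p = degree[x], w / strength[x]
            if k > 1 and (1 - p) ** (k - 1) < alpha:
                backbone.append((u, v, w))
                break
    return backbone

edges = [("a", "b", 10.0), ("a", "c", 0.1), ("a", "d", 0.1), ("b", "c", 0.2)]
print(multiscale_backbone(edges))  # only the dominant edge ("a", "b", 10.0) survives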
arxiv-7117
0904.2400
Pricing Randomized Allocations
<|reference_start|>Pricing Randomized Allocations: Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes (henceforth, "lotteries") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? Our results show that the answers to these questions hinge on whether consumers can purchase only one lottery (the buy-one model) or purchase any set of lotteries and receive an independent sample from each (the buy-many model). In the buy-one model, there is a polynomial-time algorithm to compute the revenue-maximizing envy-free prices (thus overcoming the inapproximability of the corresponding item pricing problem) and the revenue of the optimal lottery system can exceed the revenue of the optimal item pricing by an unbounded factor as long as the number of item types exceeds 4. In the buy-many model with n item types, the profit achieved by lottery pricing can exceed item pricing by a factor of O(log n) but not more, and optimal lottery pricing cannot be approximated within a factor of O(n^eps) for some eps>0, unless NP has subexponential-time randomized algorithms. Our lower bounds rely on a mixture of geometric and algebraic techniques, whereas the upper bounds use a novel rounding scheme to transform a mechanism with randomized outcomes into one with deterministic outcomes while losing only a bounded amount of revenue.<|reference_end|>
arxiv
@article{briest2009pricing, title={Pricing Randomized Allocations}, author={Patrick Briest, Shuchi Chawla, Robert Kleinberg, and S. Matthew Weinberg}, journal={arXiv preprint arXiv:0904.2400}, year={2009}, archivePrefix={arXiv}, eprint={0904.2400}, primaryClass={cs.GT cs.DS} }
briest2009pricing
arxiv-7118
0904.2401
A Combinatorial Study of Linear Deterministic Relay Networks
<|reference_start|>A Combinatorial Study of Linear Deterministic Relay Networks: In the last few years the so-called "linear deterministic" model of relay channels has gained popularity as a means of studying the flow of information over wireless communication networks, and this approach generalizes the model of wireline networks which is standard in network optimization. There is recent work extending the celebrated max-flow/min-cut theorem to the capacity of a unicast session over a linear deterministic relay network which is modeled by a layered directed graph. This result was first proved by a random coding scheme over large blocks of transmitted signals. We demonstrate the same result with a simple, deterministic, polynomial-time algorithm which takes as input a single transmitted signal instead of a long block of signals. Our capacity-achieving transmission scheme for a two-layer network requires the extension of a one-dimensional Rado-Hall transversal theorem on the independent subsets of rows of a row-partitioned matrix into a two-dimensional variation for block matrices. To generalize our approach to larger networks we use the submodularity of the capacity of a cut for our model and show that our complete transmission scheme can be obtained by solving a linear program over the intersection of two polymatroids. We prove that our transmission scheme can achieve the max-flow/min-cut capacity by applying a theorem of Edmonds about such linear programs. We use standard submodular function minimization techniques as part of our polynomial-time algorithm to construct our capacity-achieving transmission scheme.<|reference_end|>
arxiv
@article{yazdi2009a, title={A Combinatorial Study of Linear Deterministic Relay Networks}, author={S. M. Sadegh Tabatabaei Yazdi and Serap A. Savari}, journal={arXiv preprint arXiv:0904.2401}, year={2009}, archivePrefix={arXiv}, eprint={0904.2401}, primaryClass={cs.IT math.IT} }
yazdi2009a
arxiv-7119
0904.2441
Reliable Identification of RFID Tags Using Multiple Independent Reader Sessions
<|reference_start|>Reliable Identification of RFID Tags Using Multiple Independent Reader Sessions: Radio Frequency Identification (RFID) systems are gaining momentum in various applications of logistics, inventory, etc. A generic problem in such systems is to ensure that the RFID readers can reliably read a set of RFID tags, such that the probability of missing tags stays below an acceptable value. A tag may be missing (left unread) due to errors in the communication link towards the reader, e.g. due to obstacles in the radio path. The present paper proposes techniques that use multiple reader sessions, during which the system of readers obtains a running estimate of the probability that at least one tag is missing. Based on such an estimate, it is decided whether an additional reader session is required. Two methods are proposed; they rely on the statistical independence of the tag reading errors across different reader sessions, which is a plausible assumption when e.g. each reader session is executed on a different reader. The first method uses statistical relationships that are valid when the reader sessions are independent. The second method is obtained by modifying an existing capture-recapture estimator. The results show that, when the reader sessions are independent, the proposed mechanisms provide a good approximation to the probability of missing tags, such that the number of reader sessions made meets the target specification. If the assumption of independence is violated, the estimators are still useful, but they should be corrected by a margin of additional reader sessions to ensure that the target probability of missing tags is met.<|reference_end|>
arxiv
@article{jacobsen2009reliable, title={Reliable Identification of RFID Tags Using Multiple Independent Reader Sessions}, author={Rasmus Jacobsen, Karsten Fyhn Nielsen, Petar Popovski, Torben Larsen}, journal={arXiv preprint arXiv:0904.2441}, year={2009}, doi={10.1109/RFID.2009.4911187}, archivePrefix={arXiv}, eprint={0904.2441}, primaryClass={cs.IT math.IT} }
jacobsen2009reliable
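
The second method in the abstract of arxiv-7119 modifies an existing capture-recapture estimator. For intuition only, here is the textbook Lincoln-Petersen estimator applied to two independent reader sessions; the paper's estimator differs in detail, and the tag IDs below are made up.

def lincoln_petersen(session1, session2):
    # Estimate the total tag population from two sets of read tag IDs.
    s1, s2 = set(session1), set(session2)
    recaptured = len(s1 & s2)
    if recaptured == 0:
        raise ValueError("no tag read in both sessions; estimate undefined")
    return len(s1) * len(s2) / recaptured

reads1 = {"t1", "t2", "t3", "t4"}
reads2 = {"t3", "t4", "t5"}
estimate = lincoln_petersen(reads1, reads2)  # 4 * 3 / 2 = 6.0
print(estimate - len(reads1 | reads2))       # about 1 tag estimated to be still missing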
arxiv-7120
0904.2448
All that Glisters is not Galled
<|reference_start|>All that Glisters is not Galled: Galled trees, evolutionary networks with isolated reticulation cycles, have appeared under several slightly different definitions in the literature. In this paper we establish the actual relationships between the four main alternative definitions: namely, the original galled trees, level-1 networks, nested networks with nesting depth 1, and evolutionary networks with arc-disjoint reticulation cycles.<|reference_end|>
arxiv
@article{rossello2009all, title={All that Glisters is not Galled}, author={Francesc Rossello, Gabriel Valiente}, journal={arXiv preprint arXiv:0904.2448}, year={2009}, archivePrefix={arXiv}, eprint={0904.2448}, primaryClass={cs.DM cs.CE q-bio.PE} }
rossello2009all
arxiv-7121
0904.2452
Effective Bounds for P-Recursive Sequences
<|reference_start|>Effective Bounds for P-Recursive Sequences: We describe an algorithm that takes as input a complex sequence $(u_n)$ given by a linear recurrence relation with polynomial coefficients along with initial values, and outputs a simple explicit upper bound $(v_n)$ such that $|u_n| \leq v_n$ for all $n$. Generically, the bound is tight, in the sense that its asymptotic behaviour matches that of $u_n$. We discuss applications to the evaluation of power series with guaranteed precision.<|reference_end|>
arxiv
@article{mezzarobba2009effective, title={Effective Bounds for P-Recursive Sequences}, author={Marc Mezzarobba and Bruno Salvy}, journal={arXiv preprint arXiv:0904.2452}, year={2009}, doi={10.1016/j.jsc.2010.06.024}, archivePrefix={arXiv}, eprint={0904.2452}, primaryClass={cs.SC} }
mezzarobba2009effective
arxiv-7122
0904.2457
Subshifts, Languages and Logic
<|reference_start|>Subshifts, Languages and Logic: We study the Monadic Second Order (MSO) Hierarchy over infinite pictures, that is, tilings. We give a characterization of existential MSO in terms of tilings and projections of tilings. Conversely, we characterize logic fragments corresponding to various classes of infinite pictures (subshifts of finite type, sofic subshifts).<|reference_end|>
arxiv
@article{jeandel2009subshifts, title={Subshifts, Languages and Logic}, author={Emmanuel Jeandel (LIF), Guillaume Theyssier (LM-Savoie)}, journal={13th International Conference on Developments in Language Theory, Stuttgart, Germany (2009)}, year={2009}, archivePrefix={arXiv}, eprint={0904.2457}, primaryClass={cs.DM cs.LO} }
jeandel2009subshifts
arxiv-7123
0904.2477
Joint Range of Rényi Entropies
<|reference_start|>Joint Range of Rényi Entropies: The exact range of the joint values of several Rényi entropies is determined. The method is based on topology with special emphasis on the orientation of the objects studied. As in the case where only two orders of Rényi entropies are studied, one can parametrize the upper and lower bounds, but an explicit formula for a tight upper or lower bound cannot be given.<|reference_end|>
arxiv
@article{harremoës2009joint, title={Joint Range of R\'enyi Entropies}, author={Peter Harremoës}, journal={arXiv preprint arXiv:0904.2477}, year={2009}, archivePrefix={arXiv}, eprint={0904.2477}, primaryClass={cs.IT math.IT math.PR} }
harremoës2009joint
arxiv-7124
0904.2482
Good Concatenated Code Ensembles for the Binary Erasure Channel
<|reference_start|>Good Concatenated Code Ensembles for the Binary Erasure Channel: In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland) consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent code oriented decoding using maximum a posteriori erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance hmin, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance dmin, whose growth rate coefficient increases with the number of accumulate codes, the hmin growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic hmin does not grow linearly with the block length, contrary to the asymptotic dmin, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare the convergence properties.<|reference_end|>
arxiv
@article{amat2009good, title={Good Concatenated Code Ensembles for the Binary Erasure Channel}, author={Alexandre Graell i Amat and Eirik Rosnes}, journal={IEEE J. Select. Areas Commun., vol. 27, no. 6, pp. 928-943, Aug. 2009}, year={2009}, archivePrefix={arXiv}, eprint={0904.2482}, primaryClass={cs.IT math.IT} }
amat2009good
arxiv-7125
0904.2511
One-Counter Markov Decision Processes
<|reference_start|>One-Counter Markov Decision Processes: We study the computational complexity of central analysis problems for One-Counter Markov Decision Processes (OC-MDPs), a class of finitely-presented, countable-state MDPs. OC-MDPs are equivalent to a controlled extension of (discrete-time) Quasi-Birth-Death processes (QBDs), a stochastic model studied heavily in queueing theory and applied probability. They can thus be viewed as a natural "adversarial" version of a classic stochastic model. Alternatively, they can also be viewed as a natural probabilistic/controlled extension of classic one-counter automata. OC-MDPs also subsume (as a very restricted special case) a recently studied MDP model called "solvency games" that models a risk-averse gambling scenario. Basic computational questions about these models include "termination" questions and "limit" questions, such as the following: does the controller have a "strategy" (or "policy") to ensure that the counter (which may, for example, count the number of jobs in the queue) will hit value 0 (the empty queue) almost surely (a.s.)? Or that it will have infinite limsup value, a.s.? Or that it will hit value 0 in selected terminal states, a.s.? Or, in case these are not satisfied a.s., compute the maximum (supremum) such probability over all strategies. We provide new upper and lower bounds on the complexity of such problems. For some of them we present a polynomial-time algorithm, whereas for others we show PSPACE- or BH-hardness and give an EXPTIME upper bound. Our upper bounds combine techniques from the theory of MDP reward models, the theory of random walks, and a variety of automata-theoretic methods.<|reference_end|>
arxiv
@article{brázdil2009one-counter, title={One-Counter Markov Decision Processes}, author={Tomáš Brázdil, Václav Brožek, Kousha Etessami, Antonín Kučera, Dominik Wojtczak}, journal={arXiv preprint arXiv:0904.2511}, year={2009}, archivePrefix={arXiv}, eprint={0904.2511}, primaryClass={cs.GT cs.FL} }
brázdil2009one-counter
arxiv-7126
0904.2521
Universal Structures and the logic of Forbidden Patterns
<|reference_start|>Universal Structures and the logic of Forbidden Patterns: Forbidden Patterns Problems (FPPs) are a proper generalisation of Constraint Satisfaction Problems (CSPs). However, we show that when the input is connected and belongs to a class which has low tree-depth decomposition (e.g. structures of bounded degree, proper minor-closed classes and, more generally, classes of bounded expansion) any FPP becomes a CSP. This result can also be rephrased in terms of expressiveness of the logic MMSNP, introduced by Feder and Vardi in relation to CSPs. Our proof generalises that of a recent paper by Nesetril and Ossona de Mendez. Note that our result holds in the general setting of problems over arbitrary relational structures (not just for graphs).<|reference_end|>
arxiv
@article{madelaine2009universal, title={Universal Structures and the logic of Forbidden Patterns}, author={Florent R. Madelaine}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (June 2, 2009) lmcs:1237}, year={2009}, doi={10.2168/LMCS-5(2:13)2009}, archivePrefix={arXiv}, eprint={0904.2521}, primaryClass={cs.LO cs.DM} }
madelaine2009universal
arxiv-7127
0904.2540
What does Newcomb's paradox teach us?
<|reference_start|>What does Newcomb's paradox teach us?: In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose though, an antagonist uses a prediction algorithm to deduce your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory's expected utility and dominance principles appear to provide conflicting recommendations for what you should choose. A recent extension of game theory provides a powerful tool for resolving paradoxes concerning human choice, which formulates such paradoxes in terms of Bayes nets. Here we apply this tool to Newcomb's scenario. We show that the conflicting recommendations in Newcomb's scenario use different Bayes nets to relate your choice and the algorithm's prediction. These two Bayes nets are incompatible. This resolves the paradox: the reason there appear to be two conflicting recommendations is that the specification of the underlying Bayes net is open to two conflicting interpretations. We then show that the accuracy of the prediction algorithm in Newcomb's paradox, the focus of much previous work, is irrelevant. We similarly show that the utility functions of you and the antagonist are irrelevant. We end by showing that Newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its `prediction' \emph{after} you make your choice rather than before.<|reference_end|>
arxiv
@article{benford2009what, title={What does Newcomb's paradox teach us?}, author={David H. Wolpert and Gregory Benford}, journal={arXiv preprint arXiv:0904.2540}, year={2009}, archivePrefix={arXiv}, eprint={0904.2540}, primaryClass={cs.GT} }
benford2009what
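
The incompatibility of the two Bayes nets in the abstract of arxiv-7127 can be made concrete with a toy calculation; the accuracy and payoff numbers below are arbitrary (indeed, the paper argues they are irrelevant to the resolution).

acc = 0.9                      # hypothetical predictor accuracy
FULL, SMALL = 1_000_000, 1_000

# Net 1: the prediction tracks your choice, so expected utility favors one-boxing.
eu_one_box = acc * FULL
eu_two_box = (1 - acc) * FULL + SMALL
print(eu_one_box > eu_two_box)  # True

# Net 2: box contents are fixed independently of your choice, so dominance
# favors two-boxing whatever the boxes contain.
for filled in (True, False):
    base = FULL if filled else 0
    print(base + SMALL > base)  # True in both cases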
arxiv-7128
0904.2541
Disproof of the Neighborhood Conjecture with Implications to SAT
<|reference_start|>Disproof of the Neighborhood Conjecture with Implications to SAT: We study a Maker/Breaker game described by Beck. As a result we disprove a conjecture of Beck on positional games, establish a connection between this game and SAT and construct an unsatisfiable k-CNF formula with few occurrences per variable, thereby improving a previous result by Hoory and Szeider and showing that the bound obtained from the Lovasz Local Lemma is tight up to a constant factor. The Maker/Breaker game we study is as follows. Maker and Breaker take turns in choosing vertices from a given n-uniform hypergraph F, with Maker going first. Maker's goal is to completely occupy a hyperedge and Breaker tries to avoid this. Beck conjectures that if the maximum neighborhood size of F is at most 2^(n-1) then Breaker has a winning strategy. We disprove this conjecture by constructing an n-uniform hypergraph with maximum neighborhood size 3*2^(n-3) where Maker has a winning strategy. Moreover, we show how to construct an n-uniform hypergraph with maximum degree (2^(n-1))/n where Maker has a winning strategy. Finally, we establish a connection between SAT and the Maker/Breaker game we study. We can use this connection to derive new results in SAT. Kratochvil, Savicky and Tuza showed that for every k >= 3 there is an integer f(k) such that every (k,f(k))-formula is satisfiable, but (k,f(k) + 1)-SAT is already NP-complete (it is not known whether f(k) is computable). Kratochvil, Savicky and Tuza also gave the best known lower bound f(k) = Omega(2^k/k), which is a consequence of the Lovasz Local Lemma. We prove that, in fact, f(k) = Theta(2^k/k), improving upon the best known upper bound O((log k) * 2^k/k) by Hoory and Szeider.<|reference_end|>
arxiv
@article{gebauer2009disproof, title={Disproof of the Neighborhood Conjecture with Implications to SAT}, author={Heidi Gebauer}, journal={arXiv preprint arXiv:0904.2541}, year={2009}, archivePrefix={arXiv}, eprint={0904.2541}, primaryClass={cs.GT} }
gebauer2009disproof
arxiv-7129
0904.2550
Geodesic Paths On 3D Surfaces: Survey and Open Problems
<|reference_start|>Geodesic Paths On 3D Surfaces: Survey and Open Problems: This survey gives a brief overview of theoretically and practically relevant algorithms to compute geodesic paths and distances on three-dimensional surfaces. The survey focuses on polyhedral three-dimensional surfaces.<|reference_end|>
arxiv
@article{maheshwari2009geodesic, title={Geodesic Paths On 3D Surfaces: Survey and Open Problems}, author={Anil Maheshwari and Stefanie Wuhrer}, journal={Extended version in: Computational Geometry - Theory and Applications, 44(9):486-498, 2011}, year={2009}, doi={10.1016/j.comgeo.2011.05.006}, archivePrefix={arXiv}, eprint={0904.2550}, primaryClass={cs.CG} }
maheshwari2009geodesic
arxiv-7130
0904.2576
PTAS for k-tour cover problem on the plane for moderately large values of k
<|reference_start|>PTAS for k-tour cover problem on the plane for moderately large values of k: Let P be a set of n points in the Euclidean plane and let O be the origin point in the plane. In the k-tour cover problem (called frequently the capacitated vehicle routing problem), the goal is to minimize the total length of tours that cover all points in P, such that each tour starts and ends in O and covers at most k points from P. The k-tour cover problem is known to be NP-hard. It is also known to admit constant factor approximation algorithms for all values of k and even a polynomial-time approximation scheme (PTAS) for small values of k, i.e., k=O(log n / log log n). We significantly enlarge the set of values of k for which a PTAS is provable. We present a new PTAS for all values of k <= 2^{log^{\delta}n}, where \delta = \delta(\epsilon). The main technical result proved in the paper is a novel reduction of the k-tour cover problem with a set of n points to a small set of instances of the problem, each with O((k/\epsilon)^O(1)) points.<|reference_end|>
arxiv
@article{adamaszek2009ptas, title={PTAS for k-tour cover problem on the plane for moderately large values of k}, author={Anna Adamaszek, Artur Czumaj, Andrzej Lingas}, journal={arXiv preprint arXiv:0904.2576}, year={2009}, archivePrefix={arXiv}, eprint={0904.2576}, primaryClass={cs.DS} }
adamaszek2009ptas
arxiv-7131
0904.2584
New technologies for high speed computer networks: a wavelet approach
<|reference_start|>New technologies for high speed computer networks: a wavelet approach: The indoor multipath propagation channel is modeled by the Kaiser electromagnetic wavelet. A method for channel characterization is proposed that models all the reflections of indoor propagation in a kernel function instead of an impulse response. This leads us to consider a fractal modulation scheme in which Kaiser wavelets substitute for the traditional sinusoidal carrier.<|reference_end|>
arxiv
@article{lizcano2009new, title={New technologies for high speed computer networks: a wavelet approach}, author={J. A. Manzano Lizcano, S. A. Jaramillo Florez}, journal={arXiv preprint arXiv:0904.2584}, year={2009}, archivePrefix={arXiv}, eprint={0904.2584}, primaryClass={cs.NI} }
lizcano2009new
arxiv-7132
0904.2585
Interference Relay Channels - Part I: Transmission Rates
<|reference_start|>Interference Relay Channels - Part I: Transmission Rates: We analyze the performance of a system composed of two interfering point-to-point links where the transmitters can exploit a common relay to improve their individual transmission rate. When the relay uses the amplify-and-forward protocol we prove that it is not always optimal (in some sense defined later on) to exploit all the relay transmit power and derive the corresponding optimal amplification factor. For the case of the decode-and-forward protocol, already investigated in [1], we show that this protocol, through the cooperation degree between each transmitter and the relay, is the only one that naturally introduces a game between the transmitters. For the estimate-and-forward protocol, we derive two rate regions for the general case of discrete interference relay channels (IRCs) and specialize these results to obtain the Gaussian case; these regions correspond to two compression schemes at the relay, having different resolution levels. These schemes are compared analytically in some special cases. All the results mentioned are illustrated by simulations, given in this part, and exploited to study power allocation games in multi-band IRCs in the second part of this two-part paper.<|reference_end|>
arxiv
@article{djeumou2009interference, title={Interference Relay Channels - Part I: Transmission Rates}, author={Brice Djeumou, Elena Veronica Belmega, and Samson Lasaulce}, journal={arXiv preprint arXiv:0904.2585}, year={2009}, archivePrefix={arXiv}, eprint={0904.2585}, primaryClass={cs.IT math.IT} }
djeumou2009interference
arxiv-7133
0904.2587
Interference Relay Channels - Part II: Power Allocation Games
<|reference_start|>Interference Relay Channels - Part II: Power Allocation Games: In the first part of this paper we have derived achievable transmission rates for the (single-band) interference relay channel (IRC) when the relay implements either the amplify-and-forward, decode-and-forward or estimate-and-forward protocol. Here, we consider wireless networks that can be modeled by a multi-band IRC. We tackle the existence issue of Nash equilibria (NE) in these networks where each information source is assumed to selfishly allocate its power between the available bands in order to maximize its individual transmission rate. Interestingly, it is possible to show that the three power allocation (PA) games (corresponding to the three protocols assumed) under investigation are concave, which guarantees the existence of a pure NE after Rosen [3]. Then, as the relay can also optimize several parameters e.g., its position and transmit power, it is further considered as the leader of a Stackelberg game where the information sources are the followers. Our theoretical analysis is illustrated by simulations giving more insights on the addressed issues.<|reference_end|>
arxiv
@article{belmega2009interference, title={Interference Relay Channels - Part II: Power Allocation Games}, author={Elena Veronica Belmega, Brice Djeumou, and Samson Lasaulce}, journal={arXiv preprint arXiv:0904.2587}, year={2009}, archivePrefix={arXiv}, eprint={0904.2587}, primaryClass={cs.IT math.IT} }
belmega2009interference
arxiv-7134
0904.2595
A Methodology for Learning Players' Styles from Game Records
<|reference_start|>A Methodology for Learning Players' Styles from Game Records: We describe a preliminary investigation into learning a Chess player's style from game records. The method is based on attempting to learn features of a player's individual evaluation function using the method of temporal differences, with the aid of a conventional Chess engine architecture. Some encouraging results were obtained in learning the styles of two recent Chess world champions, and we report on our attempt to use the learnt styles to discriminate between the players from game records by trying to detect who was playing white and who was playing black. We also discuss some limitations of our approach and propose possible directions for future research. The method we have presented may also be applicable to other strategic games, and may even be generalisable to other domains where sequences of agents' actions are recorded.<|reference_end|>
arxiv
@article{levene2009a, title={A Methodology for Learning Players' Styles from Game Records}, author={Mark Levene and Trevor Fenner}, journal={arXiv preprint arXiv:0904.2595}, year={2009}, archivePrefix={arXiv}, eprint={0904.2595}, primaryClass={cs.AI cs.LG} }
levene2009a
arxiv-7135
0904.2623
Exponential Family Graph Matching and Ranking
<|reference_start|>Exponential Family Graph Matching and Ranking: We present a method for learning max-weight matching predictors in bipartite graphs. The method consists of performing maximum a posteriori estimation in exponential families with sufficient statistics that encode permutations and data features. Although inference is in general hard, we show that for one very relevant application - web page ranking - exact inference is efficient. For general model instances, an appropriate sampler is readily available. Contrary to existing max-margin matching models, our approach is statistically consistent and, in addition, experiments with increasing sample sizes indicate superior improvement over such models. We apply the method to graph matching in computer vision as well as to a standard benchmark dataset for learning web page ranking, in which we obtain state-of-the-art results, in particular improving on max-margin variants. The drawback of this method with respect to max-margin alternatives is its runtime for large graphs, which is comparatively high.<|reference_end|>
arxiv
@article{petterson2009exponential, title={Exponential Family Graph Matching and Ranking}, author={James Petterson, Tiberio Caetano, Julian McAuley, Jin Yu}, journal={arXiv preprint arXiv:0904.2623}, year={2009}, archivePrefix={arXiv}, eprint={0904.2623}, primaryClass={cs.LG cs.AI} }
petterson2009exponential
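
The MAP prediction step in arxiv-7135 reduces to a max-weight bipartite matching, which off-the-shelf solvers handle. A minimal sketch using scipy's Hungarian-algorithm routine, with a made-up score matrix standing in for the learned feature scores:

import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.array([[3.0, 1.0, 0.5],
                   [1.0, 2.0, 2.5],
                   [0.2, 2.2, 1.0]])  # scores[i, j]: affinity of pairing i with j

rows, cols = linear_sum_assignment(scores, maximize=True)
print([(int(i), int(j)) for i, j in zip(rows, cols)])  # [(0, 0), (1, 2), (2, 1)]
print(scores[rows, cols].sum())                        # 7.7, the matching's weight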
arxiv-7136
0904.2638
Better Quality in Synthesis through Quantitative Objectives
<|reference_start|>Better Quality in Synthesis through Quantitative Objectives: Most specification languages express only qualitative constraints. However, among two implementations that satisfy a given specification, one may be preferred to another. For example, if a specification asks that every request is followed by a response, one may prefer an implementation that generates responses quickly but does not generate unnecessary responses. We use quantitative properties to measure the "goodness" of an implementation. Using games with corresponding quantitative objectives, we can synthesize "optimal" implementations, which are preferred among the set of possible implementations that satisfy a given specification. In particular, we show how automata with lexicographic mean-payoff conditions can be used to express many interesting quantitative properties for reactive systems. In this framework, the synthesis of optimal implementations requires the solution of lexicographic mean-payoff games (for safety requirements), and the solution of games with both lexicographic mean-payoff and parity objectives (for liveness requirements). We present algorithms for solving both kinds of novel graph games.<|reference_end|>
arxiv
@article{bloem2009better, title={Better Quality in Synthesis through Quantitative Objectives}, author={Roderick Bloem, Krishnendu Chatterjee, Thomas A. Henzinger and Barbara Jobstmann}, journal={arXiv preprint arXiv:0904.2638}, year={2009}, archivePrefix={arXiv}, eprint={0904.2638}, primaryClass={cs.LO cs.GT} }
bloem2009better
arxiv-7137
0904.2658
On Finding Directed Trees with Many Leaves
<|reference_start|>On Finding Directed Trees with Many Leaves: The Rooted Maximum Leaf Outbranching problem consists in finding a spanning directed tree rooted at some prescribed vertex of a digraph with the maximum number of leaves. Its parameterized version asks if there exists such a tree with at least $k$ leaves. We use the notion of $s-t$ numbering to exhibit combinatorial bounds on the existence of spanning directed trees with many leaves. These combinatorial bounds allow us to produce a constant factor approximation algorithm for finding directed trees with many leaves, whereas the best known approximation algorithm has a $\sqrt{OPT}$-factor. We also show that Rooted Maximum Leaf Outbranching admits a quadratic kernel, improving over the cubic kernel given by Fernau et al.<|reference_end|>
arxiv
@article{daligault2009on, title={On Finding Directed Trees with Many Leaves}, author={Jean Daligault, Stephan Thomasse}, journal={arXiv preprint arXiv:0904.2658}, year={2009}, archivePrefix={arXiv}, eprint={0904.2658}, primaryClass={cs.DM} }
daligault2009on
arxiv-7138
0904.2675
Bounded Linear Logic, Revisited
<|reference_start|>Bounded Linear Logic, Revisited: We present QBAL, an extension of Girard, Scedrov and Scott's bounded linear logic. The main novelty of the system is the possibility of quantifying over resource variables. This generalization makes bounded linear logic considerably more flexible, while preserving soundness and completeness for polynomial time. In particular, we provide compositional embeddings of Leivant's RRW and Hofmann's LFPL into QBAL.<|reference_end|>
arxiv
@article{lago2009bounded, title={Bounded Linear Logic, Revisited}, author={Ugo Dal Lago (Università di Bologna), Martin Hofmann (LMU, München)}, journal={Logical Methods in Computer Science, Volume 6, Issue 4 (December 18, 2010) lmcs:1064}, year={2009}, doi={10.2168/LMCS-6(4:7)2010}, archivePrefix={arXiv}, eprint={0904.2675}, primaryClass={cs.LO} }
lago2009bounded
arxiv-7139
0904.2695
Compressive Diffraction Tomography for Weakly Scattering
<|reference_start|>Compressive Diffraction Tomography for Weakly Scattering: An appealing requirement for the well-known diffraction tomography (DT) is successful reconstruction from few-view and limited-angle data. Inspired by the well-known compressive sensing (CS), accurate super-resolution reconstruction from highly sparse data for weakly scattering objects is investigated in this paper. To realize the compressive data measurement, and in particular to obtain super-resolution reconstruction from highly sparse data, a compressive system realized by surrounding the probed obstacles with random media is proposed and empirically studied. Several interesting conclusions are drawn: (a) if the desired resolution is within a certain range, K-sparse imaging with N unknowns can be obtained exactly from a number of measurements comparable to that required by the Gaussian random matrix in the compressive sensing literature. (b) By incorporating the random media, which enforce the multi-path effect of wave propagation, the resulting measurement matrix is incoherent with the wavelet matrix; in other words, when the probed obstacles are sparse within the wavelet framework, the required number of measurements for successful reconstruction is similar to the above. (c) If the expected resolution is lower, the required number of measurements of the proposed compressive system is almost identical to that in the free-space case. (d) A tradeoff must also be made between the imaging resolution and the number of measurements. In addition, by introducing a complex Gaussian variable, a fast sparse Bayesian algorithm has been slightly modified to deal with complex-valued optimization with sparse constraints.<|reference_end|>
arxiv
@article{li2009compressive, title={Compressive Diffraction Tomography for Weakly Scattering}, author={Lianlin Li, Wenji Zhang, Fang Li}, journal={arXiv preprint arXiv:0904.2695}, year={2009}, archivePrefix={arXiv}, eprint={0904.2695}, primaryClass={cs.CE cs.IT math.IT} }
li2009compressive
arxiv-7140
0904.2712
New Branching Rules: Improvements on Independent Set and Vertex Cover in Sparse Graphs
<|reference_start|>New Branching Rules: Improvements on Independent Set and Vertex Cover in Sparse Graphs: We present an $O^*(1.0919^n)$-time algorithm for finding a maximum independent set in an $n$-vertex graph with degree bounded by 3, which improves the previously known algorithm of running time $O^*(1.0977^n)$ by Bourgeois, Escoffier and Paschos [IWPEC 2008]. We also present an $O^*(1.1923^k)$-time algorithm to decide if a graph with degree bounded by 3 has a vertex cover of size $k$, which improves the previously known algorithm of running time $O^*(1.1939^k)$ by Chen, Kanj and Xia [ISAAC 2003]. Two new branching techniques, \emph{branching on a bottle} and \emph{branching on a 4-cycle}, are introduced, which help us to design simple and fast algorithms for the maximum independent set and minimum vertex cover problems and avoid tedious branching rules.<|reference_end|>
arxiv
@article{xiao2009new, title={New Branching Rules: Improvements on Independent Set and Vertex Cover in Sparse Graphs}, author={Mingyu Xiao}, journal={arXiv preprint arXiv:0904.2712}, year={2009}, archivePrefix={arXiv}, eprint={0904.2712}, primaryClass={cs.DS cs.DM} }
xiao2009new
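To make the setting concrete, here is a minimal branch-and-reduce skeleton for maximum independent set in Python. It uses only the generic include/exclude branching and a degree-at-most-one reduction; it is a hypothetical illustration, not the paper's algorithm (the "bottle" and "4-cycle" branchings would slot in as further cases).

```python
def mis_size(adj):
    """Size of a maximum independent set, by generic branch-and-reduce.
    adj: dict mapping each vertex to the set of its neighbours."""
    if not adj:
        return 0
    # Reduction: a vertex of degree <= 1 belongs to some optimal solution.
    v = min(adj, key=lambda u: len(adj[u]))
    if len(adj[v]) <= 1:
        return 1 + mis_size(without(adj, {v} | adj[v]))
    # Branch on a maximum-degree vertex: take it (deleting its closed
    # neighbourhood) or discard it.
    v = max(adj, key=lambda u: len(adj[u]))
    return max(1 + mis_size(without(adj, {v} | adj[v])),
               mis_size(without(adj, {v})))

def without(adj, drop):
    """Copy of the graph with the vertices in `drop` removed."""
    return {u: adj[u] - drop for u in adj if u not in drop}

# A 5-cycle has independence number 2.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert mis_size(cycle5) == 2
```

The running-time improvements in the paper come from replacing the naive branch above with more careful case analysis, so that each branch shrinks the instance faster.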
arxiv-7141
0904.2716
Fast dynamics in Internet topology: preliminary observations and explanations
<|reference_start|>Fast dynamics in Internet topology: preliminary observations and explanations: By focusing on what can be observed by running traceroute-like measurements at a high frequency from a single monitor to a fixed destination set, we show that the observed view of the topology is constantly evolving at a pace much higher than expected. Repeated measurements discover new IP addresses at a constant rate, for long periods of time (up to several months). In order to provide explanations, we study this phenomenon at both the IP and the Autonomous System levels. We show that this renewal of IP addresses is partially caused by BGP routing dynamics altering paths between existing ASes. Furthermore, we conjecture that intra-AS routing dynamics are another cause of this phenomenon.<|reference_end|>
arxiv
@article{magnien2009fast, title={Fast dynamics in Internet topology: preliminary observations and explanations}, author={Clemence Magnien (1), Frederic Ouedraogo (1 and 2), Guillaume Valadon (1), Matthieu Latapy (1) ((1) LIP6 (CNRS - UPMC), (2) LTIC (University of Ouagadougou))}, journal={arXiv preprint arXiv:0904.2716}, year={2009}, archivePrefix={arXiv}, eprint={0904.2716}, primaryClass={cs.NI} }
magnien2009fast
arxiv-7142
0904.2722
On Counteracting Byzantine Attacks in Network Coded Peer-to-Peer Networks
<|reference_start|>On Counteracting Byzantine Attacks in Network Coded Peer-to-Peer Networks: Random linear network coding can be used in peer-to-peer networks to increase the efficiency of content distribution and distributed storage. However, these systems are particularly susceptible to Byzantine attacks. We quantify the impact of Byzantine attacks on the coded system by evaluating the probability that a receiver node fails to correctly recover a file. We show that even for a small probability of attack, the system fails with overwhelming probability. We then propose a novel signature scheme that allows packet-level Byzantine detection. This scheme allows one-hop containment of the contamination, and saves bandwidth by allowing nodes to detect and drop the contaminated packets. We compare the net cost of our signature scheme with various other Byzantine schemes, and show that when the probability of Byzantine attacks is high, our scheme is the most bandwidth efficient.<|reference_end|>
arxiv
@article{kim2009on, title={On Counteracting Byzantine Attacks in Network Coded Peer-to-Peer Networks}, author={MinJi Kim, Lu'isa Lima, Fang Zhao, Joao Barros, Muriel Medard, Ralf Koetter, Ton Kalker, Keesook Han}, journal={arXiv preprint arXiv:0904.2722}, year={2009}, doi={10.1109/JSAC.2010.100607}, archivePrefix={arXiv}, eprint={0904.2722}, primaryClass={cs.NI cs.CR} }
kim2009on
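For concreteness, the random linear network coding substrate the paper builds on can be sketched as below, here over GF(2) with packets as Python ints; this is an illustrative toy, not the proposed signature scheme. It also shows why Byzantine contamination is damaging: one corrupted coded packet spoils the whole Gaussian-elimination decode, which is what motivates packet-level detection.

```python
import random

def encode(packets, rng=random):
    """One coded packet: a random GF(2) combination of the source packets.
    Packets are ints used as bit-vectors; GF(2) addition is XOR."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    """Recover the k source packets from (coeffs, payload) pairs by
    Gauss-Jordan elimination over GF(2); None if the rank is below k."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not yet enough independent combinations
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

packets = [0b1010, 0b0111, 0b1100]
coded, recovered = [], None
while recovered is None:          # keep collecting until decodable
    coded.append(encode(packets))
    recovered = decode(coded, len(packets))
assert recovered == packets
```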
arxiv-7143
0904.2728
Fast Computation of Empirically Tight Bounds for the Diameter of Massive Graphs
<|reference_start|>Fast Computation of Empirically Tight Bounds for the Diameter of Massive Graphs: The diameter of a graph is among its most basic parameters. In recent years, computing it for massive graphs has moreover become a key issue in complex network analysis. However, known algorithms, including those producing approximate values, have too high a time and/or space complexity to be used in such cases. We propose here a new approach relying on very simple and fast algorithms that compute (upper and lower) bounds for the diameter. We show empirically that, on various real-world cases representative of complex networks studied in the literature, the obtained bounds are very tight (and even equal in some cases). This leads to rigorous and very accurate estimations of the actual diameter in cases which were previously intractable in practice.<|reference_end|>
arxiv
@article{magnien2009fast, title={Fast Computation of Empirically Tight Bounds for the Diameter of Massive Graphs}, author={Clemence Magnien (1), Matthieu Latapy (1) and Michel Habib (2) ((1) LIP6 (CNRS - UPMC), (2) LIAFA (CNRS - Universite Paris Diderot))}, journal={arXiv preprint arXiv:0904.2728}, year={2009}, archivePrefix={arXiv}, eprint={0904.2728}, primaryClass={cs.DS} }
magnien2009fast
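The flavour of such bounds can be illustrated with the classical observation they build on: a single BFS from any vertex v gives ecc(v) <= diameter <= 2*ecc(v), and restarting from the farthest vertex found tightens the bounds. A minimal Python sketch of this double-sweep idea, as an assumption-level illustration rather than the paper's exact strategy:

```python
from collections import deque

def eccentricity(adj, src):
    """Largest BFS distance from src, plus a vertex realising it.
    adj maps each vertex to an iterable of neighbours (connected graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values()), max(dist, key=dist.get)

def diameter_bounds(adj, sweeps=10):
    """Empirical lower/upper bounds on the diameter of a connected graph,
    each sweep restarting the BFS from the farthest vertex found so far."""
    v = next(iter(adj))
    lower, upper = 0, float('inf')
    for _ in range(sweeps):
        ecc, far = eccentricity(adj, v)
        lower = max(lower, ecc)        # some pair realises this distance
        upper = min(upper, 2 * ecc)    # triangle inequality through v
        v = far
    return lower, upper

# Path graph on 6 vertices: diameter 5.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(diameter_bounds(path))  # prints (5, 10): the lower bound is tight here
```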
arxiv-7144
0904.2733
Detection, Understanding, and Prevention of Traceroute Measurement Artifacts
<|reference_start|>Detection, Understanding, and Prevention of Traceroute Measurement Artifacts: Traceroute is widely used: from the diagnosis of network problems to the assemblage of internet maps. Unfortunately, there are a number of problems with traceroute methodology, which lead to the inference of erroneous routes. This paper studies particular structures arising in nearly all traceroute measurements. We characterize them as "loops", "cycles", and "diamonds". We identify load balancing as a possible cause for the appearance of false loops, cycles and diamonds, i.e., artifacts that do not represent the internet topology. We provide a new publicly-available traceroute, called Paris traceroute, which, by controlling the packet header contents, provides a truer picture of the actual routes that packets follow. We performed measurements, from the perspective of a single source tracing towards multiple destinations, and Paris traceroute allowed us to show that many of the particular structures we observe are indeed traceroute measurement artifacts.<|reference_end|>
arxiv
@article{viger2009detection, title={Detection, Understanding, and Prevention of Traceroute Measurement Artifacts}, author={Fabien Viger (1), Brice Augustin (1), Xavier Cuvellier (1), Clemence Magnien (1), Matthieu Latapy (1), Timur Friedman (1), and Renata Teixeira (1) ((1) LIP6 (CNRS - UPMC))}, journal={arXiv preprint arXiv:0904.2733}, year={2009}, archivePrefix={arXiv}, eprint={0904.2733}, primaryClass={cs.NI} }
viger2009detection
arxiv-7145
0904.2751
Reconstruction and Clustering in Random Constraint Satisfaction Problems
<|reference_start|>Reconstruction and Clustering in Random Constraint Satisfaction Problems: Random instances of Constraint Satisfaction Problems (CSPs) appear to be hard for all known algorithms when the number of constraints per variable lies in a certain interval. Contributing to the general understanding of the structure of the solution space of a CSP in the satisfiable regime, we formulate a set of natural technical conditions on a large family of (random) CSPs, and prove bounds on the three most interesting thresholds for the density of such an ensemble: namely, the satisfiability threshold, the threshold for clustering of the solution space, and the threshold for an appropriate reconstruction problem on the CSPs. The bounds become asymptotically tight as the number of degrees of freedom in each clause diverges. The families are general enough to include commonly studied problems such as random instances of Not-All-Equal-SAT, k-XOR formulae, hypergraph 2-coloring, and graph k-coloring. An important new ingredient is a condition involving the Fourier expansion of clauses, which characterizes the class of problems with a similar threshold structure.<|reference_end|>
arxiv
@article{montanari2009reconstruction, title={Reconstruction and Clustering in Random Constraint Satisfaction Problems}, author={Andrea Montanari, Ricardo Restrepo and Prasad Tetali}, journal={arXiv preprint arXiv:0904.2751}, year={2009}, archivePrefix={arXiv}, eprint={0904.2751}, primaryClass={cs.DM} }
montanari2009reconstruction
arxiv-7146
0904.2759
Span programs and quantum query complexity: The general adversary bound is nearly tight for every boolean function
<|reference_start|>Span programs and quantum query complexity: The general adversary bound is nearly tight for every boolean function: The general adversary bound is a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. We turn this lower bound into an upper bound, by giving a quantum walk algorithm based on the dual SDP that has query complexity at most the general adversary bound, up to a logarithmic factor. In more detail, the proof has two steps, each based on "span programs," a certain linear-algebraic model of computation. First, we give an SDP that outputs for any boolean function a span program computing it that has optimal "witness size." The optimal witness size is shown to coincide with the general adversary lower bound. Second, we give a quantum algorithm for evaluating span programs with only a logarithmic query overhead on the witness size. The first result is motivated by a quantum algorithm for evaluating composed span programs. The algorithm is known to be optimal for evaluating a large class of formulas. The allowed gates include all constant-size functions for which there is an optimal span program. So far, good span programs have been found in an ad hoc manner, and the SDP automates this procedure. Surprisingly, the SDP's value equals the general adversary bound. A corollary is an optimal quantum algorithm for evaluating "balanced" formulas over any finite boolean gate set. The second result extends span programs' applicability beyond the formula evaluation problem. A strong universality result for span programs follows. A good quantum query algorithm for a problem implies a good span program, and vice versa. Although nearly tight, this equivalence is nontrivial. Span programs are a promising model for developing more quantum algorithms.<|reference_end|>
arxiv
@article{reichardt2009span, title={Span programs and quantum query complexity: The general adversary bound is nearly tight for every boolean function}, author={Ben W. Reichardt}, journal={Extended abstract in Proc. 50th IEEE Symp. on Foundations of Computer Science (FOCS), 2009, pages 544-551}, year={2009}, doi={10.1109/FOCS.2009.55}, archivePrefix={arXiv}, eprint={0904.2759}, primaryClass={quant-ph cs.CC} }
reichardt2009span
arxiv-7147
0904.2761
A Non-Holonomic Systems Approach to Special Function Identities
<|reference_start|>A Non-Holonomic Systems Approach to Special Function Identities: We extend Zeilberger's approach to special function identities to cases that are not holonomic. The method of creative telescoping is thus applied to definite sums or integrals involving Stirling or Bernoulli numbers, incomplete Gamma function or polylogarithms, which are not covered by the holonomic framework. The basic idea is to take into account the dimension of appropriate ideals in Ore algebras. This unifies several earlier extensions and provides algorithms for summation and integration in classes that had not been accessible to computer algebra before.<|reference_end|>
arxiv
@article{chyzak2009a, title={A Non-Holonomic Systems Approach to Special Function Identities}, author={Fr'ed'eric Chyzak (INRIA Rocquencourt), Manuel Kauers, Bruno Salvy (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0904.2761}, year={2009}, doi={10.1145/1576702.1576720}, archivePrefix={arXiv}, eprint={0904.2761}, primaryClass={cs.SC} }
chyzak2009a
arxiv-7148
0904.2769
Non Homogeneous Poisson Process Model based Optimal Modular Software Testing using Fault Tolerance
<|reference_start|>Non Homogeneous Poisson Process Model based Optimal Modular Software Testing using Fault Tolerance: In the software development process we come across various modules, which raises the idea of prioritizing the different modules of a software system so that important modules are tested by preference. This approach is desirable because it is not possible to test each module exhaustively under time and cost constraints. This paper discusses some parameters required to prioritize the modules of a software system, and provides a measure of the optimal time and cost for testing based on a non-homogeneous Poisson process.<|reference_end|>
arxiv
@article{awasthi2009non, title={Non Homogeneous Poisson Process Model based Optimal Modular Software Testing using Fault Tolerance}, author={Amit K Awasthi and Sanjay Chaudhary}, journal={arXiv preprint arXiv:0904.2769}, year={2009}, archivePrefix={arXiv}, eprint={0904.2769}, primaryClass={cs.SE} }
awasthi2009non
arxiv-7149
0904.2785
Decomposition width - a new width parameter for matroids
<|reference_start|>Decomposition width - a new width parameter for matroids: We introduce a new width parameter for matroids called decomposition width and prove that every matroid property expressible in monadic second-order logic can be computed in linear time for matroids with bounded decomposition width if their decomposition is given. Since decompositions of small width for our new notion can be computed in polynomial time for matroids of bounded branch-width represented over finite fields, our results include recent algorithmic results of Hlineny [J. Combin. Theory Ser. B 96 (2006), 325-351] in this area and extend his results to matroids not necessarily representable over finite fields.<|reference_end|>
arxiv
@article{kral2009decomposition, title={Decomposition width - a new width parameter for matroids}, author={Daniel Kral}, journal={arXiv preprint arXiv:0904.2785}, year={2009}, archivePrefix={arXiv}, eprint={0904.2785}, primaryClass={cs.DM cs.DS} }
kral2009decomposition
arxiv-7150
0904.2827
Principle of development
<|reference_start|>Principle of development: Today, science has a powerful tool for the description of reality - the numbers. However, the concept of number did not appear at once; let us try to trace its evolution. Numbers emerged from the need for accurate estimates of quantity, in order to permit the comparison of objects. Yet if one observes how many times a day a person uses numbers and how many times a person compares, it becomes evident that comparison is used much more frequently. However, comparison is not possible without two opposite basic standards. Thus, to introduce the concept of comparison one must have two opposing standards, and in turn the operation of comparison is necessary to introduce the concept of number. Arguably, the scientific description of reality is impossible without the concept of opposites. This paper analyzes the concept of opposites as the basis for the introduction of the principle of development.<|reference_end|>
arxiv
@article{vishnevksaya2009principle, title={Principle of development}, author={Elena S. Vishnevksaya}, journal={arXiv preprint arXiv:0904.2827}, year={2009}, archivePrefix={arXiv}, eprint={0904.2827}, primaryClass={cs.AI} }
vishnevksaya2009principle
arxiv-7151
0904.2861
A simple algorithm for decoding both errors and erasures of Reed-Solomon codes
<|reference_start|>A simple algorithm for decoding both errors and erasures of Reed-Solomon codes: A simple algorithm for decoding both errors and erasures of Reed-Solomon codes is described.<|reference_end|>
arxiv
@article{fedorenko2009a, title={A simple algorithm for decoding both errors and erasures of Reed-Solomon codes}, author={Sergei V. Fedorenko}, journal={arXiv preprint arXiv:0904.2861}, year={2009}, archivePrefix={arXiv}, eprint={0904.2861}, primaryClass={cs.IT math.IT} }
fedorenko2009a
arxiv-7152
0904.2863
Error Scaling Laws for Linear Optimal Estimation from Relative Measurements
<|reference_start|>Error Scaling Laws for Linear Optimal Estimation from Relative Measurements: We study the problem of estimating vector-valued variables from noisy "relative" measurements. This problem arises in several sensor network applications. The measurement model can be expressed in terms of a graph, whose nodes correspond to the variables and edges to noisy measurements of the difference between two variables. We take an arbitrary variable as the reference and consider the optimal (minimum variance) linear unbiased estimate of the remaining variables. We investigate how the error in the optimal linear unbiased estimate of a node variable grows with the distance of the node to the reference node. We establish a classification of graphs, namely, dense or sparse in $\mathbb{R}^d$, $1 \le d \le 3$, that determines how the linear unbiased optimal estimation error of a node grows with its distance from the reference node. In particular, if a graph is dense in 1, 2, or 3D, then a node variable's estimation error is upper bounded by a linear, logarithmic, or bounded function of distance from the reference, respectively. Corresponding lower bounds are obtained if the graph is sparse in 1, 2 and 3D. Our results also show that naive measures of graph density, such as node degree, are inadequate predictors of the estimation error. Being true for the optimal linear unbiased estimate, these scaling laws determine algorithm-independent limits on the estimation accuracy achievable in large graphs.<|reference_end|>
arxiv
@article{barooah2009error, title={Error Scaling Laws for Linear Optimal Estimation from Relative Measurements}, author={Prabir Barooah, Joao P. Hespanha}, journal={arXiv preprint arXiv:0904.2863}, year={2009}, doi={10.1109/TIT.2009.2032805}, archivePrefix={arXiv}, eprint={0904.2863}, primaryClass={cs.IT math.IT} }
barooah2009error
arxiv-7153
0904.2894
On FO2 quantifier alternation over words
<|reference_start|>On FO2 quantifier alternation over words: We show that each level of the quantifier alternation hierarchy within FO^2[<] -- the 2-variable fragment of the first order logic of order on words -- is a variety of languages. We then use the notion of condensed rankers, a refinement of the rankers defined by Weis and Immerman, to produce a decidable hierarchy of varieties which is interwoven with the quantifier alternation hierarchy -- and conjecturally equal to it. It follows that the latter hierarchy is decidable within one unit: given a formula alpha in FO^2[<], one can effectively compute an integer m such that alpha is equivalent to a formula with at most m+1 alternating blocks of quantifiers, but not to a formula with only m-1 blocks. This is a much more precise result than what is known about the quantifier alternation hierarchy within FO[<], where no decidability result is known beyond the very first levels.<|reference_end|>
arxiv
@article{kufleitner2009on, title={On FO2 quantifier alternation over words}, author={Manfred Kufleitner, Pascal Weil (LaBRI)}, journal={Mathematical Foundations of Computer Science 2009, Slovak Republic (2009)}, year={2009}, doi={10.1007/978-3-642-03816-7_44}, archivePrefix={arXiv}, eprint={0904.2894}, primaryClass={cs.LO} }
kufleitner2009on
arxiv-7154
0904.2921
Inter-Session Network Coding with Strategic Users: A Game-Theoretic Analysis of Network Coding
<|reference_start|>Inter-Session Network Coding with Strategic Users: A Game-Theoretic Analysis of Network Coding: A common assumption in the existing network coding literature is that the users are cooperative and non-selfish. However, this assumption can be violated in practice. In this paper, we analyze inter-session network coding in a wired network using game theory. We assume selfish users acting strategically to maximize their own utility, leading to a resource allocation game among users. In particular, we study the well-known butterfly network topology where a bottleneck link is shared by several network coding and routing flows. We prove the existence of a Nash equilibrium for a wide range of utility functions. We show that the number of Nash equilibria can be large (even infinite) for certain choices of system parameters. This is in sharp contrast to a similar game setting with traditional packet forwarding where the Nash equilibrium is always unique. We then characterize the worst-case efficiency bounds, i.e., the Price-of-Anarchy (PoA), compared to an optimal and cooperative network design. We show that by using a novel discriminatory pricing scheme which charges encoded and forwarded packets differently, we can improve the PoA. However, regardless of the discriminatory pricing scheme being used, the PoA is still worse than for the case when network coding is not applied. This implies that, although inter-session network coding can improve performance compared to ordinary routing, it is significantly more sensitive to users' strategic behaviour. For example, in a butterfly network where the side links have zero cost, the efficiency can be as low as 25%. If the side links have non-zero cost, then the efficiency can further reduce to only 20%. These results generalize the well-known result of guaranteed 67% worst-case efficiency for traditional packet forwarding networks.<|reference_end|>
arxiv
@article{mohsenian-rad2009inter-session, title={Inter-Session Network Coding with Strategic Users: A Game-Theoretic Analysis of Network Coding}, author={Amir-Hamed Mohsenian-Rad, Jianwei Huang, Vincent W.S. Wong, Sidharth Jaggi, and Robert Schober}, journal={arXiv preprint arXiv:0904.2921}, year={2009}, doi={10.1109/TCOMM.2013.021413.110555}, archivePrefix={arXiv}, eprint={0904.2921}, primaryClass={cs.IT math.IT} }
mohsenian-rad2009inter-session
arxiv-7155
0904.2953
Towards an Intelligent System for Risk Prevention and Management
<|reference_start|>Towards an Intelligent System for Risk Prevention and Management: Making a decision in a changeable and dynamic environment is an arduous task owing to the lack of information, its uncertainty, and the unawareness of planners about the future evolution of incidents. The use of a decision support system is an efficient solution to this issue. Such a system can help emergency planners and responders to detect possible emergencies, as well as to suggest and evaluate possible courses of action to deal with the emergency. Our work concerns the modeling of a preventive monitoring and emergency management system, in which we stress the generic aspect. In this paper we propose an agent-based architecture for this system and describe the first step of our approach, which is the modeling of information and its representation using a multiagent system.<|reference_end|>
arxiv
@article{kebair2009towards, title={Towards an Intelligent System for Risk Prevention and Management}, author={Fahem Kebair and Frederic Serin}, journal={Proceedings of the 5th International ISCRAM Conference. 526-535, Washington, DC, USA May 2008}, year={2009}, archivePrefix={arXiv}, eprint={0904.2953}, primaryClass={cs.AI cs.MA} }
kebair2009towards
arxiv-7156
0904.2954
Agent-Based Decision Support System to Prevent and Manage Risk Situations
<|reference_start|>Agent-Based Decision Support System to Prevent and Manage Risk Situations: The topic of risk prevention and emergency response has become a key social and political concern. One approach to address this challenge is to develop Decision Support Systems (DSS) that can help emergency planners and responders to detect emergencies, as well as to suggest possible courses of action to deal with the emergency. Our research work falls within this framework and aims to develop a DSS that is as generic as possible and independent of the case study.<|reference_end|>
arxiv
@article{kebair2009agent-based, title={Agent-Based Decision Support System to Prevent and Manage Risk Situations}, author={Fahem Kebair and Frederic Serin}, journal={arXiv preprint arXiv:0904.2954}, year={2009}, archivePrefix={arXiv}, eprint={0904.2954}, primaryClass={cs.AI cs.MA} }
kebair2009agent-based
arxiv-7157
0904.2955
A short proof that adding some permutation rules to beta preserves SN
<|reference_start|>A short proof that adding some permutation rules to beta preserves SN: I show that, if a term is $SN$ for $\beta$, it remains $SN$ when some permutation rules are added.<|reference_end|>
arxiv
@article{david2009a, title={A short proof that adding some permutation rules to beta preserves SN}, author={Ren'e David (LAMA)}, journal={arXiv preprint arXiv:0904.2955}, year={2009}, archivePrefix={arXiv}, eprint={0904.2955}, primaryClass={math.LO cs.LO} }
david2009a
arxiv-7158
0904.3036
Inconsistency Robustness in Logic Programs
<|reference_start|>Inconsistency Robustness in Logic Programs: Inconsistency robustness is "information system performance in the face of continually pervasive inconsistencies." A fundamental principle of Inconsistency Robustness is to make contradictions explicit so that arguments for and against propositions can be formalized. This paper explores the role of Inconsistency Robustness in the history and theory of Logic Programs. Robert Kowalski put forward a bold thesis: "Looking back on our early discoveries, I value most the discovery that computation could be subsumed by deduction." However, mathematical logic cannot always infer computational steps because computational systems make use of arbitration for determining which message is processed next by a recipient that is sent multiple messages concurrently. Since reception orders are in general indeterminate, they cannot be inferred from prior information by mathematical logic alone. Therefore mathematical logic cannot in general implement computation. Over the course of history, the term "Functional Program" has grown more precise and technical as the field has matured. "Logic Program" should be on a similar trajectory. Accordingly, "Logic Program" should have a general precise characterization. In the fall of 1972, different characterizations of Logic Programs emerged that have continued to this day: * A Logic Program uses Horn-Clause syntax for forward and backward chaining * Each computational step (according to the Actor Model) of a Logic Program is deductively inferred (e.g. in Direct Logic). The above examples are illustrative of how issues of inconsistency robustness have repeatedly arisen in Logic Programs.<|reference_end|>
arxiv
@article{hewitt2009inconsistency, title={Inconsistency Robustness in Logic Programs}, author={Carl Hewitt}, journal={arXiv preprint arXiv:0904.3036}, year={2009}, archivePrefix={arXiv}, eprint={0904.3036}, primaryClass={cs.LO} }
hewitt2009inconsistency
arxiv-7159
0904.3060
An efficient quantum search engine on unsorted database
<|reference_start|>An efficient quantum search engine on unsorted database: We consider the problem of finding one or more desired items out of an unsorted database. Patel has shown that if the database permits quantum queries, then mere digitization is sufficient for efficient search for one desired item. The algorithm he presented, called the factorized quantum search algorithm, can locate the desired item in an unsorted database using $O(\log_4 N)$ queries to factorized oracles. But the algorithm requires that all the property values be distinct from each other. In this paper, we discuss how to make a database satisfy the requirements, and present a quantum search engine based on the algorithm. Our goal is achieved by introducing auxiliary files for the property values that are not distinct, and converting every complex query request into a sequence of calls to the factorized quantum search algorithm. The query complexity of our algorithm is $O(P \cdot Q \cdot M \cdot \log_4 N)$, where P is the number of potential simple query requests in the complex query request, Q is the maximum number of calls to the factorized quantum search algorithm among the simple queries, and M is the number of auxiliary files for the property on which our algorithm is searching for desired items. This implies that managing an unsorted database on an actual quantum computer is possible and efficient.<|reference_end|>
arxiv
@article{hu2009an, title={An efficient quantum search engine on unsorted database}, author={Heping Hu, Yingyu Zhang, Zhengding Lu}, journal={arXiv preprint arXiv:0904.3060}, year={2009}, archivePrefix={arXiv}, eprint={0904.3060}, primaryClass={cs.DB cs.DS} }
hu2009an
arxiv-7160
0904.3062
Approximate counting with a floating-point counter
<|reference_start|>Approximate counting with a floating-point counter: Memory becomes a limiting factor in contemporary applications, such as analyses of the Webgraph and molecular sequences, when many objects need to be counted simultaneously. Robert Morris [Communications of the ACM, 21:840--842, 1978] proposed a probabilistic technique for approximate counting that is extremely space-efficient. The basic idea is to increment a counter containing the value $X$ with probability $2^{-X}$. As a result, the counter contains an approximation of $\lg n$ after $n$ probabilistic updates stored in $\lg\lg n$ bits. Here we revisit the original idea of Morris, and introduce a binary floating-point counter that uses a $d$-bit significand in conjunction with a binary exponent. The counter yields a simple formula for an unbiased estimation of $n$ with a standard deviation of about $0.6\cdot n2^{-d/2}$, and uses $d+\lg\lg n$ bits. We analyze the floating-point counter's performance in a general framework that applies to any probabilistic counter, and derive practical formulas to assess its accuracy.<|reference_end|>
arxiv
@article{csuros2009approximate, title={Approximate counting with a floating-point counter}, author={Miklos Csuros}, journal={arXiv preprint arXiv:0904.3062}, year={2009}, archivePrefix={arXiv}, eprint={0904.3062}, primaryClass={cs.DS} }
csuros2009approximate
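The update rule quoted in the abstract translates directly into code. The sketch below implements a floating-point counter with a d-bit significand; with d = 0 it reduces to Morris's original counter. The estimator follows the standard unbiasedness argument (each probabilistic increment raises the estimate by exactly 1 in expectation); consult the paper for the variance analysis.

```python
import random

class FloatingPointCounter:
    """Probabilistic counter with a d-bit significand plus binary exponent.
    The state t encodes exponent e = t >> d and significand t mod 2^d, and
    is incremented with probability 2^{-e}; d = 0 gives Morris's counter."""

    def __init__(self, d):
        self.d = d
        self.t = 0

    def increment(self, rng=random):
        e = self.t >> self.d
        if e == 0 or rng.getrandbits(e) == 0:  # event of probability 2^{-e}
            self.t += 1

    def estimate(self):
        """Unbiased estimate of the number of increment() calls so far."""
        e = self.t >> self.d
        m = self.t & ((1 << self.d) - 1)
        return ((1 << self.d) + m) * (1 << e) - (1 << self.d)

c = FloatingPointCounter(d=8)
for _ in range(100000):
    c.increment()
print(c.estimate())  # close to 100000; the paper gives std dev ~0.6*n*2^(-d/2)
```

Note that the stored state t only needs about d + lglg n bits, matching the space bound stated in the abstract.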
arxiv-7161
0904.3063
Using Dissortative Mating Genetic Algorithms to Track the Extrema of Dynamic Deceptive Functions
<|reference_start|>Using Dissortative Mating Genetic Algorithms to Track the Extrema of Dynamic Deceptive Functions: Traditional Genetic Algorithms (GAs) mating schemes select individuals for crossover independently of their genotypic or phenotypic similarities. In Nature, this behaviour is known as random mating. However, non-random schemes - in which individuals mate according to their kinship or likeness - are more common in natural systems. Previous studies indicate that, when applied to GAs, negative assortative mating (a specific type of non-random mating, also known as dissortative mating) may improve their performance (in both speed and reliability) on a wide range of problems. Dissortative mating maintains genetic diversity at a higher level during the run, and this fact is frequently offered as an explanation for dissortative GAs' ability to escape local optima traps. Dynamic problems, due to their specificities, demand special care when tuning a GA, because diversity plays an even more crucial role than it does when tackling static ones. This paper investigates the behaviour of dissortative mating GAs, namely the recently proposed Adaptive Dissortative Mating GA (ADMGA), on dynamic trap functions. ADMGA selects parents according to their Hamming distance, via a self-adjustable threshold value. The method, by keeping population diversity during the run, provides an effective means to deal with dynamic problems. Tests conducted with deceptive and nearly deceptive trap functions indicate that ADMGA is able to outperform other GAs, some specifically designed for tracking moving extrema, on a wide range of tests, being particularly effective when the speed of change is not very high. When comparing the algorithm to a previously proposed dissortative GA, results show that performance is equivalent on the majority of the experiments, but that ADMGA performs better when solving the hardest instances of the test set.<|reference_end|>
arxiv
@article{fernandes2009using, title={Using Dissortative Mating Genetic Algorithms to Track the Extrema of Dynamic Deceptive Functions}, author={C. M. Fernandes, J.J. Merelo and A.C. Rosa}, journal={arXiv preprint arXiv:0904.3063}, year={2009}, archivePrefix={arXiv}, eprint={0904.3063}, primaryClass={cs.NE} }
fernandes2009using
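The mating rule itself is simple to render in code. The fragment below is a loose, hypothetical sketch of dissortative pair selection: a pair of parents is accepted only if their Hamming distance reaches a threshold, and the threshold is relaxed when no sufficiently dissimilar pair can be found, mimicking (only loosely) the self-adjustable threshold ADMGA relies on; the published algorithm differs in its details.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold, attempts=50, rng=random):
    """Try to pick two parents at Hamming distance >= threshold.
    Returns (p1, p2, threshold); if no suitable pair is found, returns
    (None, None, threshold - 1) so the caller retries with a relaxed
    threshold. An illustrative sketch, not ADMGA itself."""
    for _ in range(attempts):
        p1, p2 = rng.sample(population, 2)
        if hamming(p1, p2) >= threshold:
            return p1, p2, threshold
    return None, None, max(0, threshold - 1)

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
p1, p2, th = dissortative_pair(pop, threshold=5)
```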
arxiv-7162
0904.3074
P vs NP Problem in the field anthropology
<|reference_start|>P vs NP Problem in the field anthropology: An attempt at a new kind of complexity-based anthropology is considered.<|reference_end|>
arxiv
@article{popov2009p, title={P vs NP Problem in the field anthropology}, author={Michael A. Popov}, journal={arXiv preprint arXiv:0904.3074}, year={2009}, archivePrefix={arXiv}, eprint={0904.3074}, primaryClass={cs.OH} }
popov2009p
arxiv-7163
0904.3087
Distributed Maintenance of Anytime Available Spanning Trees in Dynamic Networks
<|reference_start|>Distributed Maintenance of Anytime Available Spanning Trees in Dynamic Networks: We address the problem of building and maintaining distributed spanning trees in highly dynamic networks, in which topological events can occur at any time and any rate, and no stable periods can be assumed. In these harsh environments, we strive to preserve some properties such as cycle-freeness or the existence of a root in each tree, in order to make it possible to keep using the trees uninterruptedly (to a possible extent). Our algorithm operates at a coarse-grain level, using atomic pairwise interactions in a way akin to recent population protocol models. The algorithm relies on a perpetual alternation of \emph{topology-induced splittings} and \emph{computation-induced mergings} of a forest of spanning trees. Each tree in the forest hosts exactly one token (also called root) that performs a random walk {\em inside} the tree, switching parent-child relationships as it crosses edges. When two tokens are located on both sides of a same edge, their trees are merged upon this edge and one token disappears. Whenever an edge that belongs to a tree disappears, its child endpoint regenerates a new token instantly. The main feature of this approach is that both \emph{merging} and \emph{splitting} are purely localized phenomena. In this paper, we present and motivate the algorithm, and we prove its correctness in arbitrary dynamic networks. Then we discuss several implementation choices around this general principle. Preliminary results regarding its analysis are also discussed, in particular an analytical expression of the expected merging time for two given trees in a static context.<|reference_end|>
arxiv
@article{casteigts2009distributed, title={Distributed Maintenance of Anytime Available Spanning Trees in Dynamic Networks}, author={Arnaud Casteigts (LaBRI), Serge Chaumette (LaBRI), Fr'ed'eric Guinand (LITIS), Yoann Pign'e (LITIS)}, journal={arXiv preprint arXiv:0904.3087}, year={2009}, archivePrefix={arXiv}, eprint={0904.3087}, primaryClass={cs.DC cs.NI} }
casteigts2009distributed
arxiv-7164
0904.3093
Counting Paths and Packings in Halves
<|reference_start|>Counting Paths and Packings in Halves: It is shown that one can count $k$-edge paths in an $n$-vertex graph and $m$-set $k$-packings on an $n$-element universe, respectively, in time ${n \choose k/2}$ and ${n \choose mk/2}$, up to a factor polynomial in $n$, $k$, and $m$; in polynomial space, the bounds hold if multiplied by $3^{k/2}$ or $5^{mk/2}$, respectively. These are implications of a more general result: given two set families on an $n$-element universe, one can count the disjoint pairs of sets in the Cartesian product of the two families with $\tilde{O}(n \ell)$ basic operations, where $\ell$ is the number of members in the two families and their subsets.<|reference_end|>
arxiv
@article{björklund2009counting, title={Counting Paths and Packings in Halves}, author={Andreas Bj"orklund and Thore Husfeldt and Petteri Kaski and Mikko Koivisto}, journal={arXiv preprint arXiv:0904.3093}, year={2009}, archivePrefix={arXiv}, eprint={0904.3093}, primaryClass={cs.DS cs.DM} }
björklund2009counting
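The disjoint-pair primitive in the last sentence of the abstract has a classical rendering via the subset zeta transform, which may help fix ideas (the paper's contribution is a refinement of this idea): for every set X, count the members of the second family contained in X, then sum that quantity over the complements of the first family's members. A small Python sketch for bitmask set families, feasible only for small universes since it tabulates all 2^n subsets:

```python
def count_disjoint_pairs(family_a, family_b, n):
    """Number of pairs (A, B) in family_a x family_b with A and B disjoint.
    Sets are bitmasks over an n-element universe; runs in O(2^n * n) time
    plus the family sizes, via the subset-sum (zeta) transform."""
    full = (1 << n) - 1
    inside = [0] * (1 << n)
    for b in family_b:
        inside[b] += 1
    # Zeta transform: inside[X] becomes |{B in family_b : B subset of X}|.
    for i in range(n):
        for x in range(1 << n):
            if x >> i & 1:
                inside[x] += inside[x ^ (1 << i)]
    # B is disjoint from A iff B is a subset of the complement of A.
    return sum(inside[full ^ a] for a in family_a)

# Universe {0,1,2}; A-family {{0}}, B-family {{1}, {0,1}}: only ({0},{1}).
assert count_disjoint_pairs([0b001], [0b010, 0b011], 3) == 1
```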
arxiv-7165
0904.3116
Variations on Muchnik's Conditional Complexity Theorem
<|reference_start|>Variations on Muchnik's Conditional Complexity Theorem: Muchnik's theorem about simple conditional descriptions states that for all strings $a$ and $b$ there exists a short program $p$ transforming $a$ to $b$ that has the least possible length and is simple conditional on $b$. In this paper we present two new proofs of this theorem. The first one is based on the on-line matching algorithm for bipartite graphs. The second one, based on extractors, can be generalized to prove a version of Muchnik's theorem for space-bounded Kolmogorov complexity.<|reference_end|>
arxiv
@article{musatov2009variations, title={Variations on Muchnik's Conditional Complexity Theorem}, author={Daniil Musatov, Andrei Romashchenko, Alexander Shen}, journal={arXiv preprint arXiv:0904.3116}, year={2009}, archivePrefix={arXiv}, eprint={0904.3116}, primaryClass={cs.CC} }
musatov2009variations
arxiv-7166
0904.3148
CRT-Based High Speed Parallel Architecture for Long BCH Encoding
<|reference_start|>CRT-Based High Speed Parallel Architecture for Long BCH Encoding: BCH (Bose-Chaudhuri-Hocquenghem) error correcting codes ([1]-[2]) are now widely used in communication systems and digital technology. Direct LFSR (linear feedback shift register)-based encoding of a long BCH code suffers from the serial-in, serial-out limitation and from the large fanout effect of some XOR gates. As a consequence, LFSR-based encoders of long BCH codes cannot keep up with the data transmission speed in some applications. Several parallel encoders for long cyclic codes have been proposed in [3]-[8]. The technique for eliminating the large fanout effect by the J-unfolding method and some algebraic manipulation was presented in [7] and [8]. In this paper we propose a CRT (Chinese Remainder Theorem)-based parallel architecture for long BCH encoding. Our novel technique can be used to eliminate the fanout bottleneck. The only restriction on the speed of long BCH encoding in our CRT-based architecture is $\log_2 N$, where $N$ is the length of the BCH code.<|reference_end|>
arxiv
@article{chen2009crt-based, title={CRT-Based High Speed Parallel Architecture for Long BCH Encoding}, author={Hao Chen}, journal={arXiv preprint arXiv:0904.3148}, year={2009}, archivePrefix={arXiv}, eprint={0904.3148}, primaryClass={cs.AR cs.IT math.IT} }
chen2009crt-based
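For reference, the bit-serial LFSR encoder whose throughput the paper's parallel architecture improves on can be simulated in a few lines: systematic encoding of a binary cyclic code is division of the shifted message polynomial by the generator, one message bit per clock tick, which is exactly the serial bottleneck. A minimal sketch; the small generator below is illustrative, a real long BCH generator polynomial would be used in practice.

```python
def lfsr_encode(msg_bits, gen):
    """Systematic cyclic-code encoding over GF(2) via an LFSR.
    msg_bits: list of 0/1 message bits, most significant first.
    gen: generator polynomial coefficients, highest degree first,
         e.g. [1, 0, 1, 1] for x^3 + x + 1 (a Hamming(7,4) generator).
    Returns the message followed by the parity bits (division remainder)."""
    r = len(gen) - 1
    reg = [0] * r  # LFSR state: the running remainder, reg[0] = top bit
    for bit in msg_bits:
        feedback = bit ^ reg[0]          # input is fed in at the MSB end
        reg = [reg[i + 1] ^ (feedback & gen[i + 1]) for i in range(r - 1)]
        reg.append(feedback & gen[r])
    return msg_bits + reg

# Hamming(7,4) with generator x^3 + x + 1: message 1000 gets parity 101.
assert lfsr_encode([1, 0, 0, 0], [1, 0, 1, 1]) == [1, 0, 0, 0, 1, 0, 1]
```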
arxiv-7167
0904.3151
Efficient Construction of Neighborhood Graphs by the Multiple Sorting Method
<|reference_start|>Efficient Construction of Neighborhood Graphs by the Multiple Sorting Method: Neighborhood graphs are gaining popularity as a concise data representation in machine learning. However, naive graph construction by pairwise distance calculation takes $O(n^2)$ runtime for $n$ data points and this is prohibitively slow for millions of data points. For strings of equal length, the multiple sorting method (Uno, 2008) can construct an $\epsilon$-neighbor graph in $O(n+m)$ time, where $m$ is the number of $\epsilon$-neighbor pairs in the data. To introduce this remarkably efficient algorithm to continuous domains such as images, signals and texts, we employ a random projection method to convert vectors to strings. Theoretical results are presented to elucidate the trade-off between approximation quality and computation time. Empirical results show the efficiency of our method in comparison to fast nearest neighbor alternatives.<|reference_end|>
arxiv
@article{uno2009efficient, title={Efficient Construction of Neighborhood Graphs by the Multiple Sorting Method}, author={Takeaki Uno, Masashi Sugiyama, Koji Tsuda}, journal={arXiv preprint arXiv:0904.3151}, year={2009}, archivePrefix={arXiv}, eprint={0904.3151}, primaryClass={cs.DS cs.LG} }
uno2009efficient
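A rough sketch of the pipeline, under simplifying assumptions (sign-of-random-projection bits, and a plain block-wise sort rather than Uno's full multiple sorting method): points whose bit strings collide on a sorted block become candidate neighbour pairs, which a final exact distance check would then verify.

```python
import itertools
import random

def signatures(points, n_bits, rng=random):
    """Bit-string signatures: bit i is the sign of a random projection."""
    dim = len(points[0])
    dirs = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    return [tuple(int(sum(x * w for x, w in zip(pt, d)) >= 0) for d in dirs)
            for pt in points]

def candidate_pairs(sigs, n_blocks):
    """Index pairs whose signatures agree on at least one of n_blocks
    contiguous blocks, found by sorting once per block (near-linear in
    the number of points, instead of all-pairs comparison)."""
    n_bits = len(sigs[0])
    bounds = [n_bits * b // n_blocks for b in range(n_blocks + 1)]
    cands = set()
    for b in range(n_blocks):
        key = lambda i: sigs[i][bounds[b]:bounds[b + 1]]
        order = sorted(range(len(sigs)), key=key)
        for _, run in itertools.groupby(order, key=key):
            run = list(run)
            cands.update((min(i, j), max(i, j))
                         for i, j in itertools.combinations(run, 2))
    return cands  # verify each candidate with an exact distance check
```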
arxiv-7168
0904.3157
On the distributed evaluation of recursive queries over graphs
<|reference_start|>On the distributed evaluation of recursive queries over graphs: Logical formalisms such as first-order logic (FO) and fixpoint logic (FP) are well suited to express in a declarative manner fundamental graph functionalities required in distributed systems. We show that these logics constitute good abstractions for programming distributed systems as a whole, since they can be evaluated in a fully distributed manner with reasonable complexity upper-bounds. We first prove that FO and FP can be evaluated with a polynomial number of messages of logarithmic size. We then show that the (global) logical formulas can be translated into rule programs describing the local behavior of the nodes of the distributed system, which compute equivalent results. Finally, we introduce local fragments of these logics, which preserve as much as possible the locality of their distributed computation, while offering a rich expressive power for networking functionalities. We prove that they admit tighter upper-bounds with bounded number of messages of bounded size. Finally, we show that the semantics and the complexity of the local fragments are preserved over locally consistent networks as well as anonymous networks, thus showing the robustness of the proposed local logical formalisms.<|reference_end|>
arxiv
@article{grumbach2009on, title={On the distributed evaluation of recursive queries over graphs}, author={Stephane Grumbach (INRIA Liama), Fang Wang (ISCAS Sklcs) and Zhilin Wu (CASIA Liama)}, journal={arXiv preprint arXiv:0904.3157}, year={2009}, archivePrefix={arXiv}, eprint={0904.3157}, primaryClass={cs.LO cs.DC} }
grumbach2009on
arxiv-7169
0904.3165
Fading Broadcast Channels with State Information at the Receivers
<|reference_start|>Fading Broadcast Channels with State Information at the Receivers: Despite considerable progress on the information-theoretic broadcast channel, the capacity region of fading broadcast channels with channel state known at the receivers but unknown at the transmitter remains unresolved. We address this subject by introducing a layered erasure broadcast channel model in which each component channel has a state that specifies the received signal levels in an instance of a deterministic binary expansion channel. We find the capacity region of this class of broadcast channels. The capacity-achieving strategy assigns each signal level to the user that derives the maximum expected rate from that level. The outer bound is based on a channel enhancement that creates a degraded broadcast channel for which the capacity region is known. This same approach is then used to find inner and outer bounds on the capacity region of fading Gaussian broadcast channels. The achievability scheme employs a superposition of binary inputs. For intermittent AWGN channels and for Rayleigh fading channels, the achievable rates are observed to be within 1-2 bits of the outer bound at high SNR. We also prove that the achievable rate region is within 6.386 bits/s/Hz of the capacity region for all fading AWGN broadcast channels.<|reference_end|>
arxiv
@article{tse2009fading, title={Fading Broadcast Channels with State Information at the Receivers}, author={David Tse, Roy Yates}, journal={arXiv preprint arXiv:0904.3165}, year={2009}, archivePrefix={arXiv}, eprint={0904.3165}, primaryClass={cs.IT math.IT} }
tse2009fading
arxiv-7170
0904.3169
Reconstructing 3-colored grids from horizontal and vertical projections is NP-hard
<|reference_start|>Reconstructing 3-colored grids from horizontal and vertical projections is NP-hard: We consider the problem of coloring a grid using k colors with the restriction that each row and each column has a specific number of cells of each color. In an already classical result, Ryser obtained a necessary and sufficient condition for the existence of such a coloring when two colors are considered. This characterization yields a linear time algorithm for constructing such a coloring when it exists. Gardner et al. showed that for k>=7 the problem is NP-hard. Afterward Chrobak and Durr improved this result by proving that it remains NP-hard for k>=4. We close the gap by showing that the problem is already NP-hard for 3 colors. Besides, we also give some results on tiling tomography problems.<|reference_end|>
arxiv
@article{durr2009reconstructing, title={Reconstructing 3-colored grids from horizontal and vertical projections is NP-hard}, author={Christoph Durr and Flavio Guinez and Martin Matamala}, journal={arXiv preprint arXiv:0904.3169}, year={2009}, archivePrefix={arXiv}, eprint={0904.3169}, primaryClass={cs.DS cs.CC} }
durr2009reconstructing
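The two-color base case attributed to Ryser in the abstract admits a simple greedy realization, sketched here as a compact version of the constructive argument (not tuned to linear time): each row places its cells in the columns with the largest remaining demand, and the construction succeeds exactly when a valid grid exists.

```python
def two_color_grid(row_sums, col_sums):
    """Build a 0/1 grid with the prescribed row and column sums, or return
    None if none exists. Greedy realization: each row fills the columns
    with the largest remaining demand."""
    if sum(row_sums) != sum(col_sums):
        return None
    remaining = list(col_sums)
    grid = []
    for r in row_sums:
        cols = sorted(range(len(remaining)), key=lambda c: -remaining[c])[:r]
        if r > len(remaining) or any(remaining[c] == 0 for c in cols):
            return None  # some column would be overfilled: no realization
        row = [0] * len(remaining)
        for c in cols:
            row[c] = 1
            remaining[c] -= 1
        grid.append(row)
    return grid if all(v == 0 for v in remaining) else None

grid = two_color_grid([2, 1], [1, 2])
assert [sum(col) for col in zip(*grid)] == [1, 2]
```

The hardness result of the paper says precisely that no analogous efficient construction can exist for three colors unless P = NP.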
arxiv-7171
0904.3183
On the Complexity of Submodular Function Minimisation on Diamonds
<|reference_start|>On the Complexity of Submodular Function Minimisation on Diamonds: Let $(L; \sqcap, \sqcup)$ be a finite lattice and let $n$ be a positive integer. A function $f : L^n \to \mathbb{R}$ is said to be submodular if $f(\mathbf{a} \sqcap \mathbf{b}) + f(\mathbf{a} \sqcup \mathbf{b}) \leq f(\mathbf{a}) + f(\mathbf{b})$ for all $\mathbf{a}, \mathbf{b} \in L^n$. In this paper we study submodular functions when $L$ is a diamond. Given oracle access to $f$ we are interested in finding $\mathbf{x} \in L^n$ such that $f(\mathbf{x}) = \min_{\mathbf{y} \in L^n} f(\mathbf{y})$ as efficiently as possible. We establish a min--max theorem, which states that the minimum of the submodular function is equal to the maximum of a certain function defined over a certain polyhedron; a good characterisation of the minimisation problem, i.e., we show that given an oracle for computing a submodular $f : L^n \to \mathbb{Z}$ and an integer $m$ such that $\min_{\mathbf{x} \in L^n} f(\mathbf{x}) = m$, there is a proof of this fact which can be verified in time polynomial in $n$ and $\max_{\mathbf{t} \in L^n} \log |f(\mathbf{t})|$; and a pseudo-polynomial time algorithm for the minimisation problem, i.e., given an oracle for computing a submodular $f : L^n \to \mathbb{Z}$ one can find $\min_{\mathbf{t} \in L^n} f(\mathbf{t})$ in time bounded by a polynomial in $n$ and $\max_{\mathbf{t} \in L^n} |f(\mathbf{t})|$.<|reference_end|>
arxiv
@article{kuivinen2009on, title={On the Complexity of Submodular Function Minimisation on Diamonds}, author={Fredrik Kuivinen}, journal={arXiv preprint arXiv:0904.3183}, year={2009}, archivePrefix={arXiv}, eprint={0904.3183}, primaryClass={cs.DS cs.CC} }
kuivinen2009on
arxiv-7172
0904.3215
Measurement of eDonkey Activity with Distributed Honeypots
<|reference_start|>Measurement of eDonkey Activity with Distributed Honeypots: Collecting information about user activity in peer-to-peer systems is a key but challenging task. We describe here a distributed platform for doing so on the eDonkey network, relying on a group of honeypot peers which claim to have certain files and log queries they receive for these files. We then conduct some measurements with typical scenarios and use the obtained data to analyze the impact of key parameters like measurement duration, number of honeypots involved, and number of advertised files. This illustrates both the possible uses of our measurement system, and the kind of data one may collect using it.<|reference_end|>
arxiv
@article{allali2009measurement, title={Measurement of eDonkey Activity with Distributed Honeypots}, author={Oussama Allali (1), Matthieu Latapy (1) and Clemence Magnien (1) ((1) LIP6 (CNRS - UPMC))}, journal={arXiv preprint arXiv:0904.3215}, year={2009}, archivePrefix={arXiv}, eprint={0904.3215}, primaryClass={cs.NI} }
allali2009measurement
arxiv-7173
0904.3222
Efficient Measurement of Complex Networks Using Link Queries
<|reference_start|>Efficient Measurement of Complex Networks Using Link Queries: Complex networks are at the core of an intense research activity. However, in most cases, intricate and costly measurement procedures are needed to explore their structure. In some cases, these measurements rely on link queries: given two nodes, it is possible to test the existence of a link between them. These tests may be costly, and thus minimizing their number while maximizing the number of discovered links is a key issue. This paper studies this problem: we observe that properties classically observed on real-world complex networks give hints for their efficient measurement; we derive simple principles and several measurement strategies based on this, and experimentally evaluate their efficiency on real-world cases. In order to do so, we introduce methods to evaluate the efficiency of strategies. We also explore the bias that different measurement strategies may induce.<|reference_end|>
arxiv
@article{tarissan2009efficient, title={Efficient Measurement of Complex Networks Using Link Queries}, author={Fabien Tarissan (1), Matthieu Latapy (2) and Christophe Prieur (3) ((1) ISC (CNRS - Ecole Polytechnique), (2) LIP6 (CNRS - UPMC), (3) LIAFA (Universite Paris Diderot))}, journal={arXiv preprint arXiv:0904.3222}, year={2009}, archivePrefix={arXiv}, eprint={0904.3222}, primaryClass={cs.NI} }
tarissan2009efficient
arxiv-7174
0904.3243
The Business of Selling Electronic Documents
<|reference_start|>The Business of Selling Electronic Documents: The music industry has huge trouble adapting to new technologies. As many have pointed out, when copying music is essentially free and socially accepted, it becomes increasingly tempting for users to infringe copyrights and copy music from one person to another. The answer of the music industry is to outlaw a majority of citizens. This article describes how the music industry should reinvent itself and adapt to a world where the network is ubiquitous and exchanging information is essentially free. It relies on adapting prices to demand and dramatically lowering the costs of electronic documents.<|reference_end|>
arxiv
@article{oriol2009the, title={The Business of Selling Electronic Documents}, author={Manuel Oriol}, journal={arXiv preprint arXiv:0904.3243}, year={2009}, archivePrefix={arXiv}, eprint={0904.3243}, primaryClass={cs.GL cs.GT} }
oriol2009the
arxiv-7175
0904.3251
On evaluation of permanents
<|reference_start|>On evaluation of permanents: We study the time and space complexity of matrix permanents over rings and semirings.<|reference_end|>
arxiv
@article{björklund2009on, title={On evaluation of permanents}, author={Andreas Bj"orklund and Thore Husfeldt and Petteri Kaski and Mikko Koivisto}, journal={arXiv preprint arXiv:0904.3251}, year={2009}, archivePrefix={arXiv}, eprint={0904.3251}, primaryClass={cs.DS cs.DM} }
björklund2009on
arxiv-7176
0904.3273
A Thermodynamic Turing Machine: Artificial Molecular Computing Using Classical Reversible Logic Switching Networks
<|reference_start|>A Thermodynamic Turing Machine: Artificial Molecular Computing Using Classical Reversible Logic Switching Networks: This paper discusses how to implement certain classes of quantum computer algorithms using classical discrete switching networks that are amenable to implementation in mainstream CMOS transistor IC technology. The methods differ from other classical approaches in that asynchronous feedback is exploited in classical transistor reversible logic circuits to implement the Hadamard transform in one simultaneous step over all qubits, as in a true quantum computer. The Simon problem is used as an example. The method provides an order-n execution-time procedure for the Gaussian elimination step in the Simon problem. The approach is referred to as a Thermodynamic Turing Machine in that it behaves like an artificial molecule where solutions to a problem are found by evolving the classical circuits from one thermodynamic equilibrium state to another.<|reference_end|>
arxiv
@article{hamel2009a, title={A Thermodynamic Turing Machine: Artificial Molecular Computing Using Classical Reversible Logic Switching Networks}, author={John S. Hamel}, journal={arXiv preprint arXiv:0904.3273}, year={2009}, archivePrefix={arXiv}, eprint={0904.3273}, primaryClass={cs.CC quant-ph} }
hamel2009a
arxiv-7177
0904.3310
FastLMFI: An Efficient Approach for Local Maximal Patterns Propagation and Maximal Patterns Superset Checking
<|reference_start|>FastLMFI: An Efficient Approach for Local Maximal Patterns Propagation and Maximal Patterns Superset Checking: Maximal frequent pattern superset checking plays an important role in the efficient mining of complete Maximal Frequent Itemsets (MFI) and in maximal search space pruning. In this paper we present a new indexing approach, FastLMFI, for local maximal frequent pattern (itemset) propagation and maximal pattern superset checking. Experimental results on different sparse and dense datasets show that our work is better than the previous well-known progressive focusing technique. We have also integrated our superset checking approach with an existing state-of-the-art maximal itemset algorithm, Mafia, and compare our results with the current best maximal itemset algorithms afopt-max and FP (zhu)-max. Our results outperform afopt-max and FP (zhu)-max on dense (chess and mushroom) datasets on almost all support thresholds, which shows the effectiveness of our approach.<|reference_end|>
arxiv
@article{bashir2009fastlmfi:, title={FastLMFI: An Efficient Approach for Local Maximal Patterns Propagation and Maximal Patterns Superset Checking}, author={Shariq Bashir, Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3310}, year={2009}, doi={10.1109/AICCSA.2006.205130}, archivePrefix={arXiv}, eprint={0904.3310}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009fastlmfi:
arxiv-7178
0904.3312
HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database Representation Approach
<|reference_start|>HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database Representation Approach: In this paper we present a novel hybrid (array-based layout and vertical bitmap layout) database representation approach for mining complete Maximal Frequent Itemsets (MFI) on sparse and large datasets. Our work is novel in terms of scalability, item search order, and two horizontal and vertical projection techniques. We also present a maximal algorithm using this hybrid database representation approach. Different experimental results on real and sparse benchmark datasets show that our approach is better than previous state-of-the-art maximal algorithms.<|reference_end|>
arxiv
@article{bashir2009hybridminer:, title={HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database Representation Approach}, author={Shariq Bashir, and Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3312}, year={2009}, doi={10.1109/INMIC.2005.334484}, archivePrefix={arXiv}, eprint={0904.3312}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009hybridminer:
arxiv-7179
0904.3316
Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection Technique
<|reference_start|>Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection Technique: Mining frequent itemsets using a bit-vector representation approach is very efficient for dense datasets, but highly inefficient for sparse datasets due to the lack of any efficient bit-vector projection technique. In this paper we present a novel, efficient bit-vector projection technique for sparse and dense datasets. To check the efficiency of our bit-vector projection technique, we present a new frequent itemset mining algorithm, Ramp (Real Algorithm for Mining Patterns), built upon our bit-vector projection technique. The performance of Ramp is compared with the current best (all, maximal and closed) frequent itemset mining algorithms on benchmark datasets. Different experimental results on sparse and dense datasets show that mining frequent itemsets using Ramp is faster than the current best algorithms, which shows the effectiveness of our bit-vector projection idea. We also present a new local maximal frequent itemset propagation and maximal itemset superset checking approach, FastLMFI, built upon our PBR bit-vector projection technique. Our different computational experiments suggest that itemset maximality checking using FastLMFI is faster and more efficient than the previously well-known progressive focusing approach.<|reference_end|>
arxiv
@article{bashir2009ramp:, title={Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection Technique}, author={Shariq Bashir, and Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3316}, year={2009}, archivePrefix={arXiv}, eprint={0904.3316}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009ramp:
arxiv-7180
0904.3319
Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum Support
<|reference_start|>Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum Support: Real-world datasets are sparse, dirty, and contain hundreds of items. In such situations, discovering interesting rules (results) using the traditional frequent itemset mining approach with a user-defined support threshold is not appropriate: without domain knowledge, setting the support threshold too small or too large can output nothing or a large number of redundant, uninteresting results. Recently a novel approach of mining only the N-most/Top-K interesting frequent itemsets has been proposed, which discovers the top N interesting results without requiring any user-defined support threshold. However, mining interesting frequent itemsets without a minimum support threshold is more costly in terms of itemset search space exploration and processing cost. The efficiency of such mining therefore depends heavily upon three main factors: (1) the database representation approach used for itemset frequency counting, (2) the projection of relevant transactions to lower-level nodes of the search space, and (3) the algorithm implementation technique. To improve the efficiency of the mining process, in this paper we present two novel algorithms, N-MostMiner and Top-K-Miner, using a bit-vector representation approach which is very efficient in terms of itemset frequency counting and transaction projection. In addition, we present several efficient implementation techniques for N-MostMiner and Top-K-Miner which we experienced in our implementation. Our experimental results on benchmark datasets suggest that N-MostMiner and Top-K-Miner are very efficient in terms of processing time as compared to the current best algorithms, BOMO and TFP.<|reference_end|>
arxiv
@article{bashir2009fast, title={Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum Support}, author={Shariq Bashir, Zahoor Jan, Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3319}, year={2009}, archivePrefix={arXiv}, eprint={0904.3319}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009fast
arxiv-7181
0904.3320
Using Association Rules for Better Treatment of Missing Values
<|reference_start|>Using Association Rules for Better Treatment of Missing Values: The quality of training data for knowledge discovery in databases (KDD) and data mining depends upon many factors, but handling missing values is considered to be a crucial factor in overall data quality. Today's real-world datasets contain missing values due to human error, operational error, hardware malfunction, and many other factors. The quality of the knowledge extracted, and of learning and decision problems, depends directly upon the quality of training data. Considering the importance of handling missing values in KDD and data mining tasks, in this paper we propose a novel Hybrid Missing values Imputation Technique (HMiT) using association rule mining combined with a k-nearest neighbor approach. To check the effectiveness of our HMiT missing values imputation technique, we also report detailed experimental results on real-world datasets. Our results suggest that the HMiT technique is not only better in terms of accuracy but also takes less processing time compared to the current best missing values imputation technique based on the k-nearest neighbor approach, which shows the effectiveness of our missing values imputation technique.<|reference_end|>
arxiv
@article{bashir2009using, title={Using Association Rules for Better Treatment of Missing Values}, author={Shariq Bashir, Saad Razzaq, Umer Maqbool, Sonya Tahir, Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3320}, year={2009}, archivePrefix={arXiv}, eprint={0904.3320}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009using
arxiv-7182
0904.3321
Introducing Partial Matching Approach in Association Rules for Better Treatment of Missing Values
<|reference_start|>Introducing Partial Matching Approach in Association Rules for Better Treatment of Missing Values: Handling missing values in training datasets for constructing learning models or extracting useful information is considered to be an important research task in data mining and knowledge discovery in databases. In recent years, many techniques have been proposed for imputing missing values by considering the attribute relationships between the missing-value observation and the other observations of the training dataset. The main deficiency of such techniques is that they depend upon a single approach and do not combine multiple approaches, which is why they are less accurate. To improve the accuracy of missing values imputation, in this paper we introduce a novel partial matching concept in association rule mining, which shows better results compared to the full matching concept that we described in our previous work. Our imputation technique combines the partial matching concept in association rules with a k-nearest neighbor approach. Since this is a hybrid technique, its accuracy is much better than that of techniques which depend upon a single approach. To check the efficiency of our technique, we also provide detailed experimental results on a number of benchmark datasets, which show better results compared to previous approaches.<|reference_end|>
arxiv
@article{bashir2009introducing, title={Introducing Partial Matching Approach in Association Rules for Better Treatment of Missing Values}, author={Shariq Bashir, Saad Razzaq, Umer Maqbool, Sonya Tahir, Abdul Rauf Baig}, journal={arXiv preprint arXiv:0904.3321}, year={2009}, archivePrefix={arXiv}, eprint={0904.3321}, primaryClass={cs.DB cs.AI cs.DS} }
bashir2009introducing
arxiv-7183
0904.3325
Decision Problems for Nash Equilibria in Stochastic Games
<|reference_start|>Decision Problems for Nash Equilibria in Stochastic Games: We analyse the computational complexity of finding Nash equilibria in stochastic multiplayer games with $\omega$-regular objectives. While the existence of an equilibrium whose payoff falls into a certain interval may be undecidable, we single out several decidable restrictions of the problem. First, restricting the search space to stationary, or pure stationary, equilibria results in problems that are typically contained in PSPACE and NP, respectively. Second, we show that the existence of an equilibrium with a binary payoff (i.e. an equilibrium where each player either wins or loses with probability 1) is decidable. We also establish that the existence of a Nash equilibrium with a certain binary payoff entails the existence of an equilibrium with the same payoff in pure, finite-state strategies.<|reference_end|>
arxiv
@article{ummels2009decision, title={Decision Problems for Nash Equilibria in Stochastic Games}, author={Michael Ummels and Dominik Wojtczak}, journal={arXiv preprint arXiv:0904.3325}, year={2009}, doi={10.1007/978-3-642-04027-6_37}, number={EDI-INF-RR-1325}, archivePrefix={arXiv}, eprint={0904.3325}, primaryClass={cs.GT cs.LO} }
ummels2009decision
arxiv-7184
0904.3340
Lossy Compression in Near-Linear Time via Efficient Random Codebooks and Databases
<|reference_start|>Lossy Compression in Near-Linear Time via Efficient Random Codebooks and Databases: The compression-complexity trade-off of lossy compression algorithms that are based on a random codebook or a random database is examined. Motivated, in part, by recent results of Gupta-Verd\'{u}-Weissman (GVW) and their underlying connections with the pattern-matching scheme of Kontoyiannis' lossy Lempel-Ziv algorithm, we introduce a non-universal version of the lossy Lempel-Ziv method (termed LLZ). The optimality of LLZ for memoryless sources is established, and its performance is compared to that of the GVW divide-and-conquer approach. Experimental results indicate that the GVW approach often yields better compression than LLZ, but at the price of much higher memory requirements. To combine the advantages of both, we introduce a hybrid algorithm (HYB) that utilizes both the divide-and-conquer idea of GVW and the single-database structure of LLZ. It is proved that HYB shares with GVW the exact same rate-distortion performance and implementation complexity, while, like LLZ, requiring less memory, by a factor which may become unbounded, depending on the choice of the relevant design parameters. Experimental results are also presented, illustrating the performance of all three methods on data generated by simple discrete memoryless sources. In particular, the HYB algorithm is shown to outperform existing schemes for the compression of some simple discrete sources with respect to the Hamming distortion criterion.<|reference_end|>
arxiv
@article{gioran2009lossy, title={Lossy Compression in Near-Linear Time via Efficient Random Codebooks and Databases}, author={Chris Gioran, Ioannis Kontoyiannis}, journal={arXiv preprint arXiv:0904.3340}, year={2009}, archivePrefix={arXiv}, eprint={0904.3340}, primaryClass={cs.IT math.IT} }
gioran2009lossy
arxiv-7185
0904.3351
A Subsequence-Histogram Method for Generic Vocabulary Recognition over Deletion Channels
<|reference_start|>A Subsequence-Histogram Method for Generic Vocabulary Recognition over Deletion Channels: We consider the problem of recognizing a vocabulary--a collection of words (sequences) over a finite alphabet--from a potential subsequence of one of its words. We assume the given subsequence is received through a deletion channel as a result of transmission of a random word from one of the two generic underlying vocabularies. An exact maximum a posteriori (MAP) solution for this problem counts the number of ways a given subsequence can be derived from particular subsets of candidate vocabularies, requiring exponential time or space. We present a polynomial approximation algorithm for this problem. The algorithm makes no prior assumption about the rules and patterns governing the structure of vocabularies. Instead, through off-line processing of vocabularies, it extracts data regarding regularity patterns in the subsequences of each vocabulary. In the recognition phase, the algorithm just uses this data, called the subsequence-histogram, to decide in favor of one of the vocabularies. We provide examples to demonstrate the performance of the algorithm and show that it can achieve the same performance as MAP in some situations. Potential applications include bioinformatics, storage systems, and search engines.<|reference_end|>
arxiv
@article{fozunbal2009a, title={A Subsequence-Histogram Method for Generic Vocabulary Recognition over Deletion Channels}, author={Majid Fozunbal}, journal={arXiv preprint arXiv:0904.3351}, year={2009}, number={HP Labs Technical Report HPL-2009-2}, archivePrefix={arXiv}, eprint={0904.3351}, primaryClass={cs.IT cs.DS math.IT stat.AP} }
fozunbal2009a
arxiv-7186
0904.3352
Optimistic Initialization and Greediness Lead to Polynomial Time Learning in Factored MDPs - Extended Version
<|reference_start|>Optimistic Initialization and Greediness Lead to Polynomial Time Learning in Factored MDPs - Extended Version: In this paper we propose an algorithm for polynomial-time reinforcement learning in factored Markov decision processes (FMDPs). The factored optimistic initial model (FOIM) algorithm maintains an empirical model of the FMDP in a conventional way, and always follows a greedy policy with respect to its model. The only trick of the algorithm is that the model is initialized optimistically. We prove that with suitable initialization (i) FOIM converges to the fixed point of approximate value iteration (AVI); (ii) the number of steps when the agent makes non-near-optimal decisions (with respect to the solution of AVI) is polynomial in all relevant quantities; (iii) the per-step costs of the algorithm are also polynomial. To the best of our knowledge, FOIM is the first algorithm with these properties. This extended version contains the rigorous proofs of the main theorem. A version of this paper appeared in ICML'09.<|reference_end|>
arxiv
@article{szita2009optimistic, title={Optimistic Initialization and Greediness Lead to Polynomial Time Learning in Factored MDPs - Extended Version}, author={Istvan Szita, Andras Lorincz}, journal={arXiv preprint arXiv:0904.3352}, year={2009}, archivePrefix={arXiv}, eprint={0904.3352}, primaryClass={cs.AI cs.LG} }
szita2009optimistic
arxiv-7187
0904.3356
A method for Hedging in continuous time
<|reference_start|>A method for Hedging in continuous time: We present a method for hedging in continuous time.<|reference_end|>
arxiv
@article{freund2009a, title={A method for Hedging in continuous time}, author={Yoav Freund}, journal={arXiv preprint arXiv:0904.3356}, year={2009}, archivePrefix={arXiv}, eprint={0904.3356}, primaryClass={cs.IT cs.AI math.IT math.PR} }
freund2009a
arxiv-7188
0904.3366
State complexity of orthogonal catenation
<|reference_start|>State complexity of orthogonal catenation: A language $L$ is the orthogonal catenation of languages $L_1$ and $L_2$ if every word of $L$ can be written in a unique way as a catenation of a word in $L_1$ and a word in $L_2$. We establish a tight bound for the state complexity of orthogonal catenation of regular languages. The bound is smaller than the bound for arbitrary catenation.<|reference_end|>
arxiv
@article{daley2009state, title={State complexity of orthogonal catenation}, author={Mark Daley, Michael Domaratzki, Kai Salomaa}, journal={arXiv preprint arXiv:0904.3366}, year={2009}, archivePrefix={arXiv}, eprint={0904.3366}, primaryClass={cs.FL} }
daley2009state
arxiv-7189
0904.3395
On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms
<|reference_start|>On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms: We introduce a version of the cavity method for diluted mean-field spin models that allows the computation of thermodynamic quantities similar to the Franz-Parisi quenched potential in sparse random graph models. This method is developed in the particular case of partially decimated random constraint satisfaction problems. This allows us to develop a theoretical understanding of a class of algorithms for solving constraint satisfaction problems, in which elementary degrees of freedom are sequentially assigned according to the results of a message passing procedure (belief-propagation). We compare this theoretical analysis with the results of extensive numerical simulations.<|reference_end|>
arxiv
@article{ricci-tersenghi2009on, title={On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms}, author={Federico Ricci-Tersenghi and Guilhem Semerjian}, journal={J Stat. Mech. P09001 (2009)}, year={2009}, doi={10.1088/1742-5468/2009/09/P09001}, archivePrefix={arXiv}, eprint={0904.3395}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.DM} }
ricci-tersenghi2009on
arxiv-7190
0904.3420
Identity Based Strong Designated Verifier Parallel Multi-Proxy Signature Scheme
<|reference_start|>Identity Based Strong Designated Verifier Parallel Multi-Proxy Signature Scheme: This paper presents a new identity-based strong designated verifier parallel multi-proxy signature scheme. Multi-proxy signatures allow the original signer to delegate his signing power to a group of proxy signers. In our scheme, the designated verifier can only validate proxy signatures created by the group of proxy signers.<|reference_end|>
arxiv
@article{lal2009identity, title={Identity Based Strong Designated Verifier Parallel Multi-Proxy Signature Scheme}, author={Sunder Lal and Vandani Verma}, journal={arXiv preprint arXiv:0904.3420}, year={2009}, archivePrefix={arXiv}, eprint={0904.3420}, primaryClass={cs.CR} }
lal2009identity
arxiv-7191
0904.3422
Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups
<|reference_start|>Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups: Braid groups provide an alternative to number-theoretic public-key cryptography and can be implemented quite efficiently. The paper proposes five signature schemes based on braid groups: Proxy Signature, Designated Verifier, Bi-Designated Verifier, Designated Verifier Proxy Signature, and Bi-Designated Verifier Proxy Signature. We also discuss the security aspects of each of the proposed schemes.<|reference_end|>
arxiv
@article{lal2009some, title={Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups}, author={Sunder Lal and Vandani Verma}, journal={arXiv preprint arXiv:0904.3422}, year={2009}, archivePrefix={arXiv}, eprint={0904.3422}, primaryClass={cs.CR} }
lal2009some
arxiv-7192
0904.3444
Comment to "Coverage by Randomly Deployed Wireless Sensor Networks"
<|reference_start|>Comment to "Coverage by Randomly Deployed Wireless Sensor Networks": This is a correction to the paper "P.J. Wan and C.W. Yi, 'Coverage by Randomly Deployed Wireless Sensor Networks', IEEE Transactions on Information Theory, vol. 52, no. 6, June 2006." In the above paper, Lemma 4 on page 2659 plays the key role in deriving the main results. The statement as well as the proof of Lemma 4 on page 2659 is incorrect. We give the correct version of the lemma. This change in the lemma leads to a drastic change in all the results derived in the above paper.<|reference_end|>
arxiv
@article{gupta2009comment, title={Comment to "Coverage by Randomly Deployed Wireless Sensor Networks"}, author={Bhupendra Gupta (Indian Institute of Information Technology-DM-Jabalpur)}, journal={arXiv preprint arXiv:0904.3444}, year={2009}, archivePrefix={arXiv}, eprint={0904.3444}, primaryClass={cs.IT math.IT} }
gupta2009comment
arxiv-7193
0904.3458
JConstHide: A Framework for Java Source Code Constant Hiding
<|reference_start|>JConstHide: A Framework for Java Source Code Constant Hiding: Software obfuscation, or obscuring software, is an approach to defeating the practice of reverse engineering a program in order to use its functionality illegally in the development of other software. Java applications are particularly amenable to reverse engineering and re-engineering attacks through methods such as decompilation, because Java class files store the program in a semi-compiled form called bytecode. Existing obfuscation systems obfuscate the Java class files. Obfuscated source code produces obfuscated bytecode, and hence two-level obfuscation (at the source code and bytecode levels) makes the program more resilient to reverse engineering attacks. But source code obfuscation is much more difficult, due to the richer set of programming constructs and the scopes of the different variables used in the program, and only very little progress has been made on this front. In this paper we propose a framework named JConstHide for hiding constants, especially integers, in Java source code, to defeat reverse engineering through decompilation. To the best of our knowledge, no data-hiding software is available for Java source code constant hiding.<|reference_end|>
arxiv
@article{sivadasan2009jconsthide:, title={JConstHide: A Framework for Java Source Code Constant Hiding}, author={Praveen Sivadasan, P Sojan Lal}, journal={arXiv preprint arXiv:0904.3458}, year={2009}, archivePrefix={arXiv}, eprint={0904.3458}, primaryClass={cs.CR} }
sivadasan2009jconsthide:
arxiv-7194
0904.3469
Toggling operators in computability logic
<|reference_start|>Toggling operators in computability logic: Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html ) is a research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for interactive computational problems, seen as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution. The formalism of CL is open-ended, and may undergo series of extensions as the studies of the subject advance. So far three -- parallel, sequential and choice -- sorts of conjunction and disjunction have been studied. The present paper adds one more natural kind to this collection, termed toggling. The toggling operations can be characterized as lenient versions of choice operations where choices are retractable, being allowed to be reconsidered any finite number of times. This way, they model trial-and-error style decision steps in interactive computation. The main technical result of this paper is constructing a sound and complete axiomatization for the propositional fragment of computability logic whose vocabulary, together with negation, includes all four -- parallel, toggling, sequential and choice -- kinds of conjunction and disjunction. Along with toggling conjunction and disjunction, the paper also introduces the toggling versions of quantifiers and recurrence operations.<|reference_end|>
arxiv
@article{japaridze2009toggling, title={Toggling operators in computability logic}, author={Giorgi Japaridze}, journal={Theoretical Computer Science 412 (2011), pp. 971-1004}, year={2009}, doi={10.1016/j.tcs.2010.11.037}, archivePrefix={arXiv}, eprint={0904.3469}, primaryClass={cs.LO cs.AI math.LO} }
japaridze2009toggling
arxiv-7195
0904.3501
Incentive Compatible Budget Elicitation in Multi-unit Auctions
<|reference_start|>Incentive Compatible Budget Elicitation in Multi-unit Auctions: In this paper, we consider the problem of designing incentive compatible auctions for multiple (homogeneous) units of a good, when bidders have private valuations and private budget constraints. When only the valuations are private and the budgets are public, Dobzinski et al. show that the adaptive clinching auction is the unique incentive-compatible auction achieving Pareto-optimality. They further show that there is no deterministic Pareto-optimal auction with private budgets. Our main contribution is to show the following Budget Monotonicity property of this auction: When there is only one infinitely divisible good, a bidder cannot improve her utility by reporting a budget smaller than the truth. This implies that a randomized modification to the adaptive clinching auction is incentive compatible and Pareto-optimal with private budgets. The Budget Monotonicity property also implies other improved results in this context. For revenue maximization, the same auction improves the best-known competitive ratio due to Abrams by a factor of 4, and asymptotically approaches the performance of the optimal single-price auction. Finally, we consider the problem of revenue maximization (or social welfare) in a Bayesian setting. We allow the bidders to have public size constraints (on the amount of good they are willing to buy) in addition to private budget constraints. We show a simple poly-time computable 5.83-approximation to the optimal Bayesian incentive compatible mechanism that is implementable in dominant strategies. Our technique again crucially needs the ability to prevent bidders from over-reporting budgets via randomization.<|reference_end|>
arxiv
@article{bhattacharya2009incentive, title={Incentive Compatible Budget Elicitation in Multi-unit Auctions}, author={Sayan Bhattacharya and Vincent Conitzer and Kamesh Munagala and Lirong Xia}, journal={arXiv preprint arXiv:0904.3501}, year={2009}, archivePrefix={arXiv}, eprint={0904.3501}, primaryClass={cs.GT cs.MA} }
bhattacharya2009incentive
arxiv-7196
0904.3503
On the Complexity of Searching in Trees: Average-case Minimization
<|reference_start|>On the Complexity of Searching in Trees: Average-case Minimization: We focus on the average-case analysis: A function w : V -> Z+ is given which defines the likelihood of a node being the one marked, and we want the strategy that minimizes the expected number of queries. Prior to this paper, very little was known about this natural question, and the complexity of the problem had so far remained open. We close this question and prove that the above tree search problem is NP-complete even for the class of trees with diameter at most 4. This results in a complete characterization of the complexity of the problem with respect to the diameter size. In fact, for diameter not larger than 3 the problem can be shown to be polynomially solvable using a dynamic programming approach. In addition, we prove that the problem is NP-complete even for the class of trees of maximum degree at most 16. To the best of our knowledge, the only known result in this direction is that the tree search problem is solvable in O(|V| log|V|) time for trees with degree at most 2 (paths). We match the above complexity results with a tight algorithmic analysis. We first show that a natural greedy algorithm attains a 2-approximation. Furthermore, for bounded-degree instances, we show that any optimal strategy (i.e., one that minimizes the expected number of queries) performs at most O(\Delta(T) (log |V| + log w(T))) queries in the worst case, where w(T) is the sum of the likelihoods of the nodes of T and \Delta(T) is the maximum degree of T. We combine this result with a non-trivial exponential-time algorithm to provide an FPTAS for trees with bounded degree.<|reference_end|>
arxiv
@article{cicalese2009on, title={On the Complexity of Searching in Trees: Average-case Minimization}, author={Ferdinando Cicalese, Tobias Jacobs, Eduardo Laber and Marco Molinaro}, journal={arXiv preprint arXiv:0904.3503}, year={2009}, archivePrefix={arXiv}, eprint={0904.3503}, primaryClass={cs.DS cs.DM} }
cicalese2009on
arxiv-7197
0904.3525
On using floating-point computations to help an exact linear arithmetic decision procedure
<|reference_start|>On using floating-point computations to help an exact linear arithmetic decision procedure: We consider the decision problem for quantifier-free formulas whose atoms are linear inequalities interpreted over the reals or rationals. This problem may be decided using satisfiability modulo theory (SMT), using a mixture of a SAT solver and a simplex-based decision procedure for conjunctions. State-of-the-art SMT solvers use simplex implementations over rational numbers, which perform well for typical problems arising from model-checking and program analysis (sparse inequalities, small coefficients) but are slow for other applications (denser problems, larger coefficients). We propose a simple preprocessing phase that can be adapted to existing SMT solvers and that may be optionally triggered. Despite using floating-point computations, our method is sound and complete - it merely affects efficiency. We implemented the method and provide benchmarks showing that this change brings a naive and slow decision procedure ("textbook simplex" with rational numbers) up to the efficiency of recent SMT solvers, over test cases arising from model-checking, and makes it definitely faster than state-of-the-art SMT solvers on dense examples.<|reference_end|>
arxiv
@article{monniaux2009on, title={On using floating-point computations to help an exact linear arithmetic decision procedure}, author={David Monniaux (VERIMAG - Imag)}, journal={arXiv preprint arXiv:0904.3525}, year={2009}, archivePrefix={arXiv}, eprint={0904.3525}, primaryClass={cs.LO cs.NA} }
monniaux2009on
arxiv-7198
0904.3528
Deconstruction of Infinite Extensive Games using coinduction
<|reference_start|>Deconstruction of Infinite Extensive Games using coinduction: Finite objects, and more specifically finite games, are formalized using induction, whereas infinite objects are formalized using coinduction. In this article, after an introduction to the concept of coinduction, we revisit the basic notions of game theory for infinite (discrete) extensive games. Among other things, we introduce a definition of Nash equilibrium and a notion of subgame perfect equilibrium for infinite games. We use those concepts to analyze well-known infinite games, like the dollar auction game and the centipede game, and we show that human behaviors that are often considered illogical are perfectly rational, if one admits that human agents reason coinductively.<|reference_end|>
arxiv
@article{lescanne2009deconstruction, title={Deconstruction of Infinite Extensive Games using coinduction}, author={Pierre Lescanne (LIP)}, journal={arXiv preprint arXiv:0904.3528}, year={2009}, archivePrefix={arXiv}, eprint={0904.3528}, primaryClass={cs.GT cs.LO} }
lescanne2009deconstruction
arxiv-7199
0904.3552
Internet: Romania vs Europe
<|reference_start|>Internet: Romania vs Europe: This paper presents the various ways home users can access the Internet, both for low consumers (billed by time spent online or monthly traffic) and for large consumers (unlimited connections). The main purpose of the work is to compare the situation of the Internet in Romania with that of other European countries: Hungary (more western than Romania and thus a little more developed, yet still an Eastern country compared to Western Europe), well-developed countries such as England, Italy, and France, developing countries such as Poland, and countries at the periphery of Europe such as Ukraine.<|reference_end|>
arxiv
@article{nicolae2009internet:, title={Internet: Romania vs. Europe}, author={Papin Nicolae, Tiberiu Marius Karnyanszky}, journal={Ann. Univ. Tibiscus Comp. Sci. Series IV (2006), 153-166}, year={2009}, archivePrefix={arXiv}, eprint={0904.3552}, primaryClass={cs.OH} }
nicolae2009internet:
arxiv-7200
0904.3588
Termination of Linear Programs with Nonlinear Constraints
<|reference_start|>Termination of Linear Programs with Nonlinear Constraints: Tiwari proved that termination of linear programs (loops with linear loop conditions and updates) over the reals is decidable through Jordan forms and eigenvector computation. Braverman proved that it is also decidable over the integers. In this paper, we consider the termination of loops with polynomial loop conditions and linear updates over the reals and integers. First, we prove that the termination of such loops over the integers is undecidable. Second, under an assumption, we provide a complete algorithm to decide the termination of a class of such programs over the reals. Our method is similar to that of Tiwari in spirit but uses different techniques. Finally, we conjecture that the termination of linear programs with polynomial loop conditions over the reals is undecidable in general, by constructing a loop and reducing the problem to another decision problem related to number theory and ergodic theory, which we conjecture to be undecidable.<|reference_end|>
arxiv
@article{xia2009termination, title={Termination of Linear Programs with Nonlinear Constraints}, author={Bican Xia, Zhihai Zhang}, journal={arXiv preprint arXiv:0904.3588}, year={2009}, archivePrefix={arXiv}, eprint={0904.3588}, primaryClass={cs.LO} }
xia2009termination