corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-6401 | 0902.2260 | Network Coding with Two-Way Relaying: Achievable Rate Regions and Diversity-Multiplexing Tradeoffs | <|reference_start|>Network Coding with Two-Way Relaying: Achievable Rate Regions and Diversity-Multiplexing Tradeoffs: This paper addresses the fundamental characteristics of information exchange via multihop network coding over two-way relaying in a wireless ad hoc network. The end-to-end rate regions achieved by time-division multihop (TDMH), MAC-layer network coding (MLNC) and PHY-layer network coding (PLNC) are first characterized. It is shown that MLNC does not always achieve better rates than TDMH, time sharing between TDMH and MLNC is able to achieve a larger rate region, and PLNC dominates the rate regions achieved by TDMH and MLNC. An opportunistic scheduling algorithm for MLNC and PLNC is then proposed to stabilize the two-way relaying system for Poisson arrivals whenever the rate pair is within the Shannon rate regions of MLNC and PLNC. To understand the two-way transmission limits of multihop network coding, the sum-rate optimization with or without certain traffic pattern and the end-to-end diversity-multiplexing tradeoffs (DMTs) of two-way transmission over multiple relay nodes are also analyzed.<|reference_end|> | arxiv | @article{liu2009network,
title={Network Coding with Two-Way Relaying: Achievable Rate Regions and
Diversity-Multiplexing Tradeoffs},
author={Chun-Hung Liu and Feng Xue and Jeffrey G. Andrews},
journal={arXiv preprint arXiv:0902.2260},
year={2009},
archivePrefix={arXiv},
eprint={0902.2260},
primaryClass={cs.IT math.IT}
} | liu2009network |
arxiv-6402 | 0902.2300 | A Dichotomy Theorem for Polynomial Evaluation | <|reference_start|>A Dichotomy Theorem for Polynomial Evaluation: A dichotomy theorem for counting problems due to Creignou and Hermann states that for any finite set S of logical relations, the counting problem #SAT(S) is either in FP, or #P-complete. In the present paper we show a dichotomy theorem for polynomial evaluation. That is, we show that for a given set S, either there exists a VNP-complete family of polynomials associated to S, or the associated families of polynomials are all in VP. We give a concise characterization of the sets S that give rise to "easy" and "hard" polynomials. We also prove that several problems which were known to be #P-complete under Turing reductions only are in fact #P-complete under many-one reductions.<|reference_end|> | arxiv | @article{briquel2009a,
title={A Dichotomy Theorem for Polynomial Evaluation},
author={Ir\'en\'ee Briquel (LIP) and Pascal Koiran (LIP)},
journal={Mathematical Foundations of Computer Science 2009, Novy Smokovec,
Slovakia (2009)},
year={2009},
doi={10.1007/978-3-642-03816-7},
archivePrefix={arXiv},
eprint={0902.2300},
primaryClass={cs.CC}
} | briquel2009a |
arxiv-6403 | 0902.2316 | On weak isometries of Preparata codes | <|reference_start|>On weak isometries of Preparata codes: Let C1 and C2 be codes with code distance d. Codes C1 and C2 are called weakly isometric, if there exists a mapping J:C1->C2, such that for any x,y from C1 the equality d(x,y)=d holds if and only if d(J(x),J(y))=d. Obviously two codes are weakly isometric if and only if the minimal distance graphs of these codes are isomorphic. In this paper we prove that Preparata codes of length n>=2^12 are weakly isometric if and only if these codes are equivalent. The analogous result is obtained for punctured Preparata codes of length not less than 2^10-1.<|reference_end|> | arxiv | @article{mogilnykh2009on,
title={On weak isometries of Preparata codes},
author={Ivan Yu. Mogilnykh},
journal={arXiv preprint arXiv:0902.2316},
year={2009},
archivePrefix={arXiv},
eprint={0902.2316},
primaryClass={cs.IT math.IT}
} | mogilnykh2009on |
arxiv-6404 | 0902.2345 | What's in a Message? | <|reference_start|>What's in a Message?: In this paper we present the first step in a larger series of experiments for the induction of predicate/argument structures. The structures that we are inducing are very similar to the conceptual structures that are used in Frame Semantics (such as FrameNet). Those structures are called messages and they were previously used in the context of a multi-document summarization system of evolving events. The series of experiments that we are proposing are essentially composed from two stages. In the first stage we are trying to extract a representative vocabulary of words. This vocabulary is later used in the second stage, during which we apply to it various clustering approaches in order to identify the clusters of predicates and arguments--or frames and semantic roles, to use the jargon of Frame Semantics. This paper presents in detail and evaluates the first stage.<|reference_end|> | arxiv | @article{afantenos2009what's,
title={What's in a Message?},
author={Stergos D. Afantenos and Nicolas Hernandez},
journal={12th Conference of the European Chapter of the Association for
Computational Linguistics (EACL 2009), workshop on Cognitive Aspects of
Computational Language Acquisition. Athens, Greece},
year={2009},
archivePrefix={arXiv},
eprint={0902.2345},
primaryClass={cs.CL}
} | afantenos2009what's |
arxiv-6405 | 0902.2362 | XML Representation of Constraint Networks: Format XCSP 2.1 | <|reference_start|>XML Representation of Constraint Networks: Format XCSP 2.1: We propose a new extended format to represent constraint networks using XML. This format allows us to represent constraints defined either in extension or in intension. It also allows us to reference global constraints. Any instance of the problems CSP (Constraint Satisfaction Problem), QCSP (Quantified CSP) and WCSP (Weighted CSP) can be represented using this format.<|reference_end|> | arxiv | @article{roussel2009xml,
title={XML Representation of Constraint Networks: Format XCSP 2.1},
author={Olivier Roussel and Christophe Lecoutre},
journal={arXiv preprint arXiv:0902.2362},
year={2009},
archivePrefix={arXiv},
eprint={0902.2362},
primaryClass={cs.AI}
} | roussel2009xml |
arxiv-6406 | 0902.2367 | Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine | <|reference_start|>Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine: In this paper we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment $p$ (BPDQ$_p$), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a data-fidelity constraint expressed in the $\ell_p$-norm of the residual error for $2\leq p\leq \infty$. We show theoretically that, (i) the reconstruction error of these new decoders is bounded if the sensing matrix satisfies an extended Restricted Isometry Property involving the $\ell_p$ norm, and (ii), for Gaussian random matrices and uniformly quantized measurements, BPDQ$_p$ performance exceeds that of BPDN by dividing the reconstruction error due to quantization by $\sqrt{p+1}$. This last effect happens with high probability when the number of measurements exceeds a value growing with $p$, i.e. in an oversampled situation compared to what is commonly required by BPDN = BPDQ$_2$. To demonstrate the theoretical power of BPDQ$_p$, we report numerical simulations on signal and image reconstruction problems.<|reference_end|> | arxiv | @article{jacques2009dequantizing,
title={Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian
Constraints Combine},
author={Laurent Jacques and David K. Hammond and M. Jalal Fadili},
journal={arXiv preprint arXiv:0902.2367},
year={2009},
archivePrefix={arXiv},
eprint={0902.2367},
primaryClass={math.OC cs.IT math.IT}
} | jacques2009dequantizing |
arxiv-6407 | 0902.2370 | Outer Bounds on the Admissible Source Region for Broadcast Channels with Dependent Sources | <|reference_start|>Outer Bounds on the Admissible Source Region for Broadcast Channels with Dependent Sources: Outer bounds on the admissible source region for broadcast channels with dependent sources are developed and used to prove capacity results for several classes of sources and channels.<|reference_end|> | arxiv | @article{kramer2009outer,
title={Outer Bounds on the Admissible Source Region for Broadcast Channels with
Dependent Sources},
author={Gerhard Kramer and Yingbin Liang and Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0902.2370},
year={2009},
archivePrefix={arXiv},
eprint={0902.2370},
primaryClass={cs.IT math.IT}
} | kramer2009outer |
arxiv-6408 | 0902.2399 | A Multi-Round Communication Lower Bound for Gap Hamming and Some Consequences | <|reference_start|>A Multi-Round Communication Lower Bound for Gap Hamming and Some Consequences: The Gap-Hamming-Distance problem arose in the context of proving space lower bounds for a number of key problems in the data stream model. In this problem, Alice and Bob have to decide whether the Hamming distance between their $n$-bit input strings is large (i.e., at least $n/2 + \sqrt n$) or small (i.e., at most $n/2 - \sqrt n$); they do not care if it is neither large nor small. This $\Theta(\sqrt n)$ gap in the problem specification is crucial for capturing the approximation allowed to a data stream algorithm. Thus far, for randomized communication, an $\Omega(n)$ lower bound on this problem was known only in the one-way setting. We prove an $\Omega(n)$ lower bound for randomized protocols that use any constant number of rounds. As a consequence we conclude, for instance, that $\epsilon$-approximately counting the number of distinct elements in a data stream requires $\Omega(1/\epsilon^2)$ space, even with multiple (a constant number of) passes over the input stream. This extends earlier one-pass lower bounds, answering a long-standing open question. We obtain similar results for approximating the frequency moments and for approximating the empirical entropy of a data stream. In the process, we also obtain tight $n - \Theta(\sqrt{n}\log n)$ lower and upper bounds on the one-way deterministic communication complexity of the problem. Finally, we give a simple combinatorial proof of an $\Omega(n)$ lower bound on the one-way randomized communication complexity.<|reference_end|> | arxiv | @article{brody2009a,
title={A Multi-Round Communication Lower Bound for Gap Hamming and Some
Consequences},
author={Joshua Brody and Amit Chakrabarti},
journal={arXiv preprint arXiv:0902.2399},
year={2009},
archivePrefix={arXiv},
eprint={0902.2399},
primaryClass={cs.CC cs.DB cs.DS}
} | brody2009a |
arxiv-6409 | 0902.2407 | Group-Theoretic Partial Matrix Multiplication | <|reference_start|>Group-Theoretic Partial Matrix Multiplication: A generalization of recent group-theoretic matrix multiplication algorithms to an analogue of the theory of partial matrix multiplication is presented. We demonstrate that the added flexibility of this approach can in some cases improve upper bounds on the exponent of matrix multiplication yielded by group-theoretic full matrix multiplication. The group theory behind our partial matrix multiplication algorithms leads to the problem of maximizing a quantity representing the "fullness" of a given partial matrix pattern. This problem is shown to be NP-hard, and two algorithms, one optimal and another non-optimal but polynomial-time, are given for solving it.<|reference_end|> | arxiv | @article{bowen2009group-theoretic,
title={Group-Theoretic Partial Matrix Multiplication},
author={Richard Strong Bowen and Bo Chen and Hendrik Orem and Martijn van
Schaardenburg},
journal={arXiv preprint arXiv:0902.2407},
year={2009},
archivePrefix={arXiv},
eprint={0902.2407},
primaryClass={cs.CC cs.SC}
} | bowen2009group-theoretic |
arxiv-6410 | 0902.2415 | Collectively optimal routing for congested traffic limited by link capacity | <|reference_start|>Collectively optimal routing for congested traffic limited by link capacity: We show that the capacity of a complex network that models a city street grid to support congested traffic can be optimized by using routes that collectively minimize the maximum ratio of betweenness to capacity in any link. Networks with a heterogeneous distribution of link capacities and with a heterogeneous transport load are considered. We find that overall traffic congestion and average travel times can be significantly reduced by a judicious use of slower, smaller capacity links.<|reference_end|> | arxiv | @article{danila2009collectively,
title={Collectively optimal routing for congested traffic limited by link
capacity},
author={Bogdan Danila and Yudong Sun and Kevin E. Bassler},
journal={Phys Rev E 80 (6) 066116, 2009},
year={2009},
doi={10.1103/PhysRevE.80.066116},
archivePrefix={arXiv},
eprint={0902.2415},
primaryClass={physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.NI physics.comp-ph}
} | danila2009collectively |
arxiv-6411 | 0902.2420 | Self-Assembly as Graph Grammar as Distributed System | <|reference_start|>Self-Assembly as Graph Grammar as Distributed System: In 2004, Klavins et al. introduced the use of graph grammars to describe -- and to program -- systems of self-assembly. It turns out that these graph grammars are a "dual notion" of a graph rewriting characterization of distributed systems that was proposed by Degano and Montanari over twenty years ago. By applying techniques obtained from this observation, we prove a generalized version of Soloveichik and Winfree's theorem on local determinism, and we also present a canonical method to simulate asynchronous constant-size-message-passing models of distributed computing with systems of self-assembly.<|reference_end|> | arxiv | @article{sterling2009self-assembly,
title={Self-Assembly as Graph Grammar as Distributed System},
author={Aaron Sterling},
journal={arXiv preprint arXiv:0902.2420},
year={2009},
archivePrefix={arXiv},
eprint={0902.2420},
primaryClass={cs.DC cs.NE}
} | sterling2009self-assembly |
arxiv-6412 | 0902.2422 | A Time Lower Bound for Multiple Nucleation on a Surface | <|reference_start|>A Time Lower Bound for Multiple Nucleation on a Surface: Majumder, Reif and Sahu have presented a stochastic model of reversible, error-permitting, two-dimensional tile self-assembly, and showed that restricted classes of tile assembly systems achieved equilibrium in (expected) polynomial time. One open question they asked was how much computational power would be added if the model permitted multiple nucleation, i.e., independent groups of tiles growing before attaching to the original seed assembly. This paper provides a partial answer, by proving that if a tile assembly model uses only local binding rules, then it cannot use multiple nucleation on a surface to solve certain "simple" problems in constant time (time independent of the size of the surface). Moreover, this time bound applies to macroscale robotic systems that assemble in a three-dimensional grid, not just to tile assembly systems on a two-dimensional surface. The proof technique defines a new model of distributed computing that simulates tile (and robotic) self-assembly. Keywords: self-assembly, multiple nucleation, locally checkable labeling.<|reference_end|> | arxiv | @article{sterling2009a,
title={A Time Lower Bound for Multiple Nucleation on a Surface},
author={Aaron Sterling},
journal={arXiv preprint arXiv:0902.2422},
year={2009},
archivePrefix={arXiv},
eprint={0902.2422},
primaryClass={cs.CC cs.DC}
} | sterling2009a |
arxiv-6413 | 0902.2425 | Finding Community Structure Based on Subgraph Similarity | <|reference_start|>Finding Community Structure Based on Subgraph Similarity: Community identification is a long-standing challenge in modern network science, especially for very large-scale networks containing millions of nodes. In this paper, we propose a new metric to quantify the structural similarity between subgraphs, based on which an algorithm for community identification is designed. Extensive empirical results on several real networks from disparate fields have demonstrated that the present algorithm can provide the same level of reliability, measured by modularity, while taking much less time than the well-known fast algorithm proposed by Clauset, Newman and Moore (CNM). We further propose a hybrid algorithm that can simultaneously enhance modularity and save computational time compared with the CNM algorithm.<|reference_end|> | arxiv | @article{xiang2009finding,
title={Finding Community Structure Based on Subgraph Similarity},
author={Biao Xiang and En-Hong Chen and Tao Zhou},
journal={Studies in Computational Intelligence 207 (2009) 73-81},
year={2009},
doi={10.1007/978-3-642-01206-8},
archivePrefix={arXiv},
eprint={0902.2425},
primaryClass={cs.NI cs.IR physics.soc-ph}
} | xiang2009finding |
arxiv-6414 | 0902.2436 | Nested Lattice Codes for Gaussian Relay Networks with Interference | <|reference_start|>Nested Lattice Codes for Gaussian Relay Networks with Interference: In this paper, a class of relay networks is considered. We assume that, at a node, outgoing channels to its neighbors are orthogonal, while incoming signals from neighbors can interfere with each other. We are interested in the multicast capacity of these networks. As a subclass, we first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. It is shown that there is a constant gap between our achievable rate and the information theoretic cut-set bound. This is similar to the recent result by Avestimehr, Diggavi, and Tse, who showed such an approximate characterization of the capacity of general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. Using the same idea used in the Gaussian case, we also consider linear finite-field symmetric networks with interference and characterize the capacity using a linear coding scheme.<|reference_end|> | arxiv | @article{nam2009nested,
title={Nested Lattice Codes for Gaussian Relay Networks with Interference},
author={Wooseok Nam and Sae-Young Chung and Yong H. Lee},
journal={arXiv preprint arXiv:0902.2436},
year={2009},
doi={10.1109/TIT.2011.2170102},
archivePrefix={arXiv},
eprint={0902.2436},
primaryClass={cs.IT math.IT}
} | nam2009nested |
arxiv-6415 | 0902.2438 | Capacity of the Gaussian Two-way Relay Channel to within 1/2 Bit | <|reference_start|>Capacity of the Gaussian Two-way Relay Channel to within 1/2 Bit: In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. We show that the scheme achieves within 1/2 bit from the cut-set bound for all channel parameters and becomes asymptotically optimal as the signal to noise ratios increase.<|reference_end|> | arxiv | @article{nam2009capacity,
title={Capacity of the Gaussian Two-way Relay Channel to within 1/2 Bit},
author={Wooseok Nam and Sae-Young Chung and Yong H. Lee},
journal={arXiv preprint arXiv:0902.2438},
year={2009},
archivePrefix={arXiv},
eprint={0902.2438},
primaryClass={cs.IT math.IT}
} | nam2009capacity |
arxiv-6416 | 0902.2446 | Detection of Gaussian signals via hexagonal sensor networks | <|reference_start|>Detection of Gaussian signals via hexagonal sensor networks: This paper considers a special case of the problem of identifying a static scalar signal, depending on the location, using a planar network of sensors in a distributed fashion. Motivated by the application to monitoring wild-fires spreading and pollutants dispersion, we assume the signal to be Gaussian in space. Using a network of sensors positioned to form a regular hexagonal tessellation, we prove that each node can estimate the parameters of the Gaussian from local measurements. Moreover, we study the sensitivity of these estimates to additive errors affecting the measurements. Finally, we show how a consensus algorithm can be designed to fuse the local estimates into a shared global estimate, effectively compensating the measurement errors.<|reference_end|> | arxiv | @article{frasca2009detection,
title={Detection of Gaussian signals via hexagonal sensor networks},
author={Paolo Frasca and Paolo Mason and Benedetto Piccoli},
journal={arXiv preprint arXiv:0902.2446},
year={2009},
archivePrefix={arXiv},
eprint={0902.2446},
primaryClass={math.OC cs.SY}
} | frasca2009detection |
arxiv-6417 | 0902.2487 | A Recursive Threshold Visual Cryptography Scheme | <|reference_start|>A Recursive Threshold Visual Cryptography Scheme: This paper presents a recursive hiding scheme for 2 out of 3 secret sharing. In recursive hiding of secrets, the user encodes additional information about smaller secrets in the shares of a larger secret without an expansion in the size of the latter, thereby increasing the efficiency of secret sharing. We present applications of our proposed protocol to images as well as text.<|reference_end|> | arxiv | @article{parakh2009a,
title={A Recursive Threshold Visual Cryptography Scheme},
author={Abhishek Parakh and Subhash Kak},
journal={arXiv preprint arXiv:0902.2487},
year={2009},
number={Cryptology ePrint Archive, Report 2008/535},
archivePrefix={arXiv},
eprint={0902.2487},
primaryClass={cs.CR}
} | parakh2009a |
arxiv-6418 | 0902.2501 | The Forgiving Graph: A distributed data structure for low stretch under adversarial attack | <|reference_start|>The Forgiving Graph: A distributed data structure for low stretch under adversarial attack: We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes $v$ and $w$ whose distance would have been $\ell$ in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most $\ell \log n$ in the actual graph, where $n$ is the total number of vertices seen so far. Similarly, at any point, a node $v$ whose degree would have been $d$ in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our algorithm is completely distributed and has low latency and bandwidth requirements.<|reference_end|> | arxiv | @article{hayes2009the,
title={The Forgiving Graph: A distributed data structure for low stretch under
adversarial attack},
author={Tom Hayes and Jared Saia and Amitabh Trehan},
journal={Distributed Computing, 2012, Volume 25, Number 4, Pages 261-278},
year={2009},
archivePrefix={arXiv},
eprint={0902.2501},
primaryClass={cs.DS cs.DC}
} | hayes2009the |
arxiv-6419 | 0902.2504 | Hyperset Approach to Semi-structured Databases and the Experimental Implementation of the Query Language Delta | <|reference_start|>Hyperset Approach to Semi-structured Databases and the Experimental Implementation of the Query Language Delta: This thesis presents practical suggestions towards the implementation of the hyperset approach to semi-structured databases and the associated query language Delta. This work can be characterised as part of a top-down approach to semi-structured databases, from theory to practice. The main original part of this work consisted in the implementation of the hyperset Delta query language to semi-structured databases, including worked example queries. In fact, the goal was to demonstrate the practical details of this approach and language. This required the development of an extended, practical version of the language, based on the existing theoretical version, and of the corresponding operational semantics. Here we present a detailed description of the most essential steps of the implementation. Another crucial problem for this approach was to demonstrate how to deal in reality with the concept of the equality relation between (hyper)sets, which is computationally realised by the bisimulation relation. In fact, this expensive procedure, especially in the case of distributed semi-structured data, required some additional theoretical considerations and practical suggestions for efficient implementation. To this end the 'local/global' strategy for computing the bisimulation relation over distributed semi-structured data was developed and its efficiency was experimentally confirmed.<|reference_end|> | arxiv | @article{molyneux2009hyperset,
title={Hyperset Approach to Semi-structured Databases and the Experimental
Implementation of the Query Language Delta},
author={Richard Molyneux},
journal={arXiv preprint arXiv:0902.2504},
year={2009},
number={ULCS-09-003},
archivePrefix={arXiv},
eprint={0902.2504},
primaryClass={cs.DB}
} | molyneux2009hyperset |
arxiv-6420 | 0902.2537 | Communication-optimal Parallel and Sequential Cholesky Decomposition | <|reference_start|>Communication-optimal Parallel and Sequential Cholesky Decomposition: Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional (O(n^3)) matrix multiplication to Cholesky factorization, which is used for solving dense symmetric positive definite linear systems. Second, we compare the costs of various Cholesky decomposition implementations to these lower bounds and identify the algorithms and data structures that attain them. In the sequential case, we consider both the two-level and hierarchical memory models. Combined with prior results in [13, 14, 15], this gives a set of communication-optimal algorithms for O(n^3) implementations of the three basic factorizations of dense linear algebra: LU with pivoting, QR and Cholesky. But it goes beyond this prior work on sequential LU by optimizing communication for any number of levels of memory hierarchy.<|reference_end|> | arxiv | @article{ballard2009communication-optimal,
title={Communication-optimal Parallel and Sequential Cholesky Decomposition},
author={Grey Ballard and James Demmel and Olga Holtz and Oded Schwartz},
journal={SIAM J. Sci. Comput. 32, (2010) pp. 3495-3523},
year={2009},
doi={10.1137/090760969},
archivePrefix={arXiv},
eprint={0902.2537},
primaryClass={cs.NA cs.CC cs.DS math.NA}
} | ballard2009communication-optimal |
arxiv-6421 | 0902.2559 | Power Allocation Games for MIMO Multiple Access Channels with Coordination | <|reference_start|>Power Allocation Games for MIMO Multiple Access Channels with Coordination: A game theoretic approach is used to derive the optimal decentralized power allocation (PA) in fast fading multiple access channels where the transmitters and receiver are equipped with multiple antennas. The players (the mobile terminals) are free to choose their PA in order to maximize their individual transmission rates (in particular they can ignore some specified centralized policies). A simple coordination mechanism between users is introduced. The nature and influence of this mechanism is studied in detail. The coordination signal indicates to the users the order in which the receiver applies successive interference cancellation and the frequency at which this order is used. Two different games are investigated: the users can either adapt their temporal PA to their decoding rank at the receiver or optimize their spatial PA between their transmit antennas. For both games a thorough analysis of the existence, uniqueness and sum-rate efficiency of the network Nash equilibrium is conducted. Analytical and simulation results are provided to assess the gap between the decentralized network performance and its equivalent virtual multiple input multiple output system, which is shown to be zero in some cases and relatively small in general.<|reference_end|> | arxiv | @article{belmega2009power,
title={Power Allocation Games for MIMO Multiple Access Channels with
Coordination},
author={Elena Veronica Belmega and Samson Lasaulce and M\'erouane Debbah},
journal={arXiv preprint arXiv:0902.2559},
year={2009},
doi={10.1109/TWC.2009.081182},
archivePrefix={arXiv},
eprint={0902.2559},
primaryClass={cs.IT math.IT}
} | belmega2009power |
arxiv-6422 | 0902.2621 | Creating modular and reusable DSL textual syntax definitions with Grammatic/ANTLR | <|reference_start|>Creating modular and reusable DSL textual syntax definitions with Grammatic/ANTLR: In this paper we present Grammatic -- a tool for textual syntax definition. Grammatic serves as a front-end for parser generators (and other tools) and brings modularity and reuse to their development artifacts. It adapts techniques for separation of concerns from Aspect-Oriented Programming to grammars and uses templates for grammar reuse. We illustrate usage of Grammatic by describing a case study: bringing separation of concerns to the ANTLR parser generator, which is achieved without the common time- and memory-consuming technique of building an AST to separate semantic actions from a grammar definition.<|reference_end|> | arxiv | @article{breslav2009creating,
title={Creating modular and reusable DSL textual syntax definitions with
Grammatic/ANTLR},
author={Andrey Breslav},
journal={arXiv preprint arXiv:0902.2621},
year={2009},
archivePrefix={arXiv},
eprint={0902.2621},
primaryClass={cs.PL cs.SE}
} | breslav2009creating |
arxiv-6423 | 0902.2648 | More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries | <|reference_start|>More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries: We consider the problem of representing, in a compressed format, a bit-vector $S$ of $m$ bits with $n$ 1s, supporting the following operations, where $b \in \{0, 1 \}$: $rank_b(S,i)$ returns the number of occurrences of bit $b$ in the prefix $S[1..i]$; $select_b(S,i)$ returns the position of the $i$th occurrence of bit $b$ in $S$. Such a data structure is called \emph{fully indexable dictionary (FID)} [Raman et al.,2007], and is at least as powerful as predecessor data structures. Our focus is on space-efficient FIDs on the \textsc{ram} model with word size $\Theta(\lg m)$ and constant time for all operations, so that the time cost is independent of the input size. Given the bitstring $S$ to be encoded, having length $m$ and containing $n$ ones, the minimal amount of information that needs to be stored is $B(n,m) = \lceil \log {{m}\choose{n}} \rceil$. The state of the art in building a FID for $S$ is given in [Patrascu,2008] using $B(m,n)+O(m / ((\log m/ t) ^t)) + O(m^{3/4}) $ bits, to support the operations in $O(t)$ time. Here, we propose a parametric data structure exhibiting a time/space trade-off such that, for any real constants $0 < \delta \leq 1/2$, $0 < \eps \leq 1$, and integer $s > 0$, it uses \[ B(n,m) + O(n^{1+\delta} + n (\frac{m}{n^s})^\eps) \] bits and performs all the operations in time $O(s\delta^{-1} + \eps^{-1})$. The improvement is twofold: our redundancy can be lowered parametrically and, fixing $s = O(1)$, we get a constant-time FID whose space is $B(n,m) + O(m^\eps/\poly{n})$ bits, for sufficiently large $m$. This is a significant improvement compared to the previous bounds for the general case.<|reference_end|> | arxiv | @article{grossi2009more,
title={More Haste, Less Waste: Lowering the Redundancy in Fully Indexable
Dictionaries},
author={Roberto Grossi, Alessio Orlandi, Rajeev Raman, S. Srinivasa Rao},
journal={26th International Symposium on Theoretical Aspects of Computer
Science - STACS 2009 (2009) 517-528},
year={2009},
archivePrefix={arXiv},
eprint={0902.2648},
primaryClass={cs.DS}
} | grossi2009more |
arxiv-6424 | 0902.2649 | A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression | <|reference_start|>A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression: We present a unified framework for accelerating edit-distance computation between two compressible strings using straight-line programs. For two strings of total length $N$ having straight-line program representations of total size $n$, we provide an algorithm running in $O(n^{1.4}N^{1.2})$ time for computing the edit-distance of these two strings under any rational scoring function, and an $O(n^{1.34}N^{1.34})$ time algorithm for arbitrary scoring functions. This improves on a recent algorithm of Tiskin that runs in $O(nN^{1.5})$ time, and works only for rational scoring functions. Also, in the last part of the paper, we show how the classical Four-Russians technique can be incorporated into our SLP edit-distance scheme, giving us a simple $\Omega(\lg N)$ speed-up in the case of arbitrary scoring functions, for any pair of strings.<|reference_end|> | arxiv | @article{hermelin2009a,
title={A Unified Algorithm for Accelerating Edit-Distance Computation via
Text-Compression},
author={Danny Hermelin, Gad M. Landau, Shir Landau, Oren Weimann (CSAIL)},
journal={26th International Symposium on Theoretical Aspects of Computer
Science - STACS 2009 (2009) 529-540},
year={2009},
archivePrefix={arXiv},
eprint={0902.2649},
primaryClass={cs.CC cs.DS}
} | hermelin2009a |
arxiv-6425 | 0902.2674 | Inseparability and Strong Hypotheses for Disjoint NP Pairs | <|reference_start|>Inseparability and Strong Hypotheses for Disjoint NP Pairs: This paper investigates the existence of inseparable disjoint pairs of NP languages and related strong hypotheses in computational complexity. Our main theorem says that, if NP does not have measure 0 in EXP, then there exist disjoint pairs of NP languages that are P-inseparable, in fact TIME(2^(n^k))-inseparable. We also relate these conditions to strong hypotheses concerning randomness and genericity of disjoint pairs.<|reference_end|> | arxiv | @article{fortnow2009inseparability,
title={Inseparability and Strong Hypotheses for Disjoint NP Pairs},
author={Lance Fortnow, Jack H. Lutz, Elvira Mayordomo},
journal={arXiv preprint arXiv:0902.2674},
year={2009},
archivePrefix={arXiv},
eprint={0902.2674},
primaryClass={cs.CC}
} | fortnow2009inseparability |
arxiv-6426 | 0902.2685 | Ganga: a tool for computational-task management and easy access to Grid resources | <|reference_start|>Ganga: a tool for computational-task management and easy access to Grid resources: In this paper, we present the computational task-management tool Ganga, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. Ganga has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. Ganga provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. Ganga has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the Ganga architecture, give examples of current use, and demonstrate how Ganga can be used in many different areas of science.<|reference_end|> | arxiv | @article{mościcki2009ganga:,
title={Ganga: a tool for computational-task management and easy access to Grid
resources},
author={J.T. Mościcki, F. Brochu, J. Ebke, U. Egede, J. Elmsheuser, K. Harrison,
R.W.L. Jones, H.C. Lee, D. Liko, A. Maier, A. Muraru, G.N. Patrick, K. Pajchel,
W. Reece, B.H. Samset, M.W. Slater, A. Soroko, C.L. Tan, D.C. Vanderster,
M. Williams},
journal={arXiv preprint arXiv:0902.2685},
year={2009},
doi={10.1016/j.cpc.2009.06.016},
archivePrefix={arXiv},
eprint={0902.2685},
primaryClass={cs.DC}
} | mościcki2009ganga: |
arxiv-6427 | 0902.2692 | Combining coded signals with arbitrary modulations in orthogonal relay channels | <|reference_start|>Combining coded signals with arbitrary modulations in orthogonal relay channels: We consider a relay channel for which the following assumptions are made. (1) The source-destination and relay-destination channels are orthogonal (frequency division relay channel). (2) The relay implements the decode-and-forward protocol. (3) The source and relay implement the same channel encoder, namely, a convolutional encoder. (4) They can use arbitrary and possibly different modulations. In this framework, we derive the best combiner in the sense of the maximum likelihood (ML) at the destination and the branch metrics of the trellis associated with its channel decoder for the ML combiner and also for the maximum ratio combiner (MRC), cooperative-MRC (C-MRC), and the minimum mean-square error (MMSE) combiner.<|reference_end|> | arxiv | @article{djeumou2009combining,
title={Combining coded signals with arbitrary modulations in orthogonal relay
channels},
author={Brice Djeumou (LSS), Samson Lasaulce (LSS), Antoine Berthet},
journal={EURASIP Journal on Wireless Communications and Networking 2008,
Article ID 287320, 4 pages (2008)},
year={2009},
doi={10.1155/2008/287320},
archivePrefix={arXiv},
eprint={0902.2692},
primaryClass={cs.IT math.IT}
} | djeumou2009combining |
arxiv-6428 | 0902.2736 | Random Fruits on the Zielonka Tree | <|reference_start|>Random Fruits on the Zielonka Tree: Stochastic games are a natural model for the synthesis of controllers confronted to adversarial and/or random actions. In particular, $\omega$-regular games of infinite length can represent reactive systems which are not expected to reach a correct state, but rather to handle a continuous stream of events. One critical resource in such applications is the memory used by the controller. In this paper, we study the amount of memory that can be saved through the use of randomisation in strategies, and present matching upper and lower bounds for stochastic Muller games.<|reference_end|> | arxiv | @article{horn2009random,
title={Random Fruits on the Zielonka Tree},
author={Florian Horn},
journal={26th International Symposium on Theoretical Aspects of Computer
Science - STACS 2009 (2009) 541-552},
year={2009},
archivePrefix={arXiv},
eprint={0902.2736},
primaryClass={cs.GT cs.PF}
} | horn2009random |
arxiv-6429 | 0902.2751 | Object Classification by means of Multi-Feature Concept Learning in a Multi Expert-Agent System | <|reference_start|>Object Classification by means of Multi-Feature Concept Learning in a Multi Expert-Agent System: Classification of objects into classes of concepts is an essential and challenging task in many applications. A solution based on Multi-Agent systems is discussed here. A kernel of expert agents, each expert in one of several classes, consults with a central agent to decide the classification of a given object. This kernel is moderated by the central agent, which manages the querying agents for each decision problem by means of a data-header-like feature set. Agents cooperate on concepts related to the classes involved in this classification decision-making, and may affect each other's results on a given query object in a multi-agent learning approach. This leads to online feature learning via the consulting process. The performance is shown to be much better in comparison to some prior approaches, while the system's message-passing overhead is decreased by involving fewer agents, and the agents' expertise improves the performance and operability of the system.<|reference_end|> | arxiv | @article{mirbakhsh2009object,
title={Object Classification by means of Multi-Feature Concept Learning in a
Multi Expert-Agent System},
author={Nima Mirbakhsh, Arman Didandeh},
journal={arXiv preprint arXiv:0902.2751},
year={2009},
archivePrefix={arXiv},
eprint={0902.2751},
primaryClass={cs.MA cs.LG}
} | mirbakhsh2009object |
arxiv-6430 | 0902.2774 | Pseudorandom Generators Against Advised Context-Free Languages | <|reference_start|>Pseudorandom Generators Against Advised Context-Free Languages: Pseudorandomness has played a central role in modern cryptography, finding theoretical and practical applications to various fields of computer science. A function that generates pseudorandom strings from shorter but truly random seeds is known as a pseudorandom generator. Our generators are designed to fool languages (or equivalently, Boolean-valued functions). In particular, our generator fools advised context-free languages, namely, context-free languages assisted by external information known as advice, and moreover our generator is made almost one-to-one, stretching $n$-bit seeds to $n+1$ bits. We explicitly construct such a pseudorandom generator, which is computed by a deterministic Turing machine using logarithmic space and also belongs to CFLMV(2)/n---a functional extension of the 2-conjunctive closure of CFL with the help of appropriate deterministic advice. In contrast, we show that there is no almost one-to-one pseudorandom generator against context-free languages if we demand that it should be computed by a nondeterministic pushdown automaton equipped with a write-only output tape. Our generator naturally extends known pseudorandom generators against advised regular languages. Our proof of the CFL/n-pseudorandomness of the generator is quite elementary, and in particular, one part of the proof utilizes a special feature of the behaviors of nondeterministic pushdown automata, called a swapping property, which is interesting in its own right, generalizing the swapping lemma for context-free languages.<|reference_end|> | arxiv | @article{yamakami2009pseudorandom,
title={Pseudorandom Generators Against Advised Context-Free Languages},
author={Tomoyuki Yamakami},
journal={Theoretical Computer Science, vol. 613, pp. 1-27, 2016},
year={2009},
archivePrefix={arXiv},
eprint={0902.2774},
primaryClass={cs.FL cs.CC}
} | yamakami2009pseudorandom |
arxiv-6431 | 0902.2783 | New Ica-Beamforming Method to Under-Determined BSS | <|reference_start|>New Ica-Beamforming Method to Under-Determined BSS: This paper has been withdrawn by the author Ali Pourmohammad.<|reference_end|> | arxiv | @article{pourmohammad2009new,
title={New Ica-Beamforming Method to Under-Determined BSS},
author={Ali Pourmohammad, Seyed Mohammad Ahadi},
journal={arXiv preprint arXiv:0902.2783},
year={2009},
archivePrefix={arXiv},
eprint={0902.2783},
primaryClass={cs.SD}
} | pourmohammad2009new |
arxiv-6432 | 0902.2788 | Using SLP Neural Network to Persian Handwritten Digits Recognition | <|reference_start|>Using SLP Neural Network to Persian Handwritten Digits Recognition: This paper has been withdrawn by the author Ali Pourmohammad.<|reference_end|> | arxiv | @article{pourmohammad2009using,
title={Using SLP Neural Network to Persian Handwritten Digits Recognition},
author={Ali Pourmohammad, Seyed Mohammad Ahadi},
journal={arXiv preprint arXiv:0902.2788},
year={2009},
archivePrefix={arXiv},
eprint={0902.2788},
primaryClass={cs.CV}
} | pourmohammad2009using |
arxiv-6433 | 0902.2795 | A Graph Reduction Step Preserving Element-Connectivity and Applications | <|reference_start|>A Graph Reduction Step Preserving Element-Connectivity and Applications: Given an undirected graph G=(V,E) and subset of terminals T \subseteq V, the element-connectivity of two terminals u,v \in T is the maximum number of u-v paths that are pairwise disjoint in both edges and non-terminals V \setminus T (the paths need not be disjoint in terminals). Element-connectivity is more general than edge-connectivity and less general than vertex-connectivity. Hind and Oellermann gave a graph reduction step that preserves the global element-connectivity of the graph. We show that this step also preserves local connectivity, that is, all the pairwise element-connectivities of the terminals. We give two applications of this reduction step to connectivity and network design problems: 1. Given a graph G and disjoint terminal sets T_1, T_2, ..., T_m, we seek a maximum number of element-disjoint Steiner forests where each forest connects each T_i. We prove that if each T_i is k-element-connected then there exist \Omega(\frac{k}{\log h \log m}) element-disjoint Steiner forests, where h = |\bigcup_i T_i|. If G is planar (or more generally, has fixed genus), we show that there exist \Omega(k) Steiner forests. Our proofs are constructive, giving poly-time algorithms to find these forests; these are the first non-trivial algorithms for packing element-disjoint Steiner Forests. 2. We give a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna in the context of the single-sink k-vertex-connectivity problem; this yields a simple and alternative analysis of an O(k \log n) approximation. Our results highlight the effectiveness of the element-connectivity reduction step; we believe it will find more applications in the future.<|reference_end|> | arxiv | @article{chekuri2009a,
title={A Graph Reduction Step Preserving Element-Connectivity and Applications},
author={Chandra Chekuri and Nitish Korula},
journal={arXiv preprint arXiv:0902.2795},
year={2009},
archivePrefix={arXiv},
eprint={0902.2795},
primaryClass={cs.DS}
} | chekuri2009a |
arxiv-6434 | 0902.2851 | Leader Election Problem Versus Pattern Formation Problem | <|reference_start|>Leader Election Problem Versus Pattern Formation Problem: Leader election and arbitrary pattern formation are fundamental tasks for a set of autonomous mobile robots. The former consists in distinguishing a unique robot, called the leader. The latter aims at arranging the robots in the plane to form any given pattern. The solvability of both these tasks turns out to be necessary in order to achieve more complex tasks. In this paper, we study the relationship between these two tasks in a model, called CORDA, wherein the robots are weak in several aspects. In particular, they are fully asynchronous and they have no direct means of communication. They cannot remember any previous observation nor computation performed in any previous step. Such robots are said to be oblivious. The robots are also uniform and anonymous, i.e., they all have the same program using no global parameter (such as an identity) that would allow differentiating any of them. Moreover, we assume that none of them share any kind of common coordinate mechanism or common sense of direction, and we discuss the influence of a common handedness (i.e., chirality). In such a system, Flocchini et al. proved in [11] that it is possible to elect a leader for n \geq 3 robots if it is possible to form any pattern for n \geq 3. In this paper, we show that the converse is true for n \geq 4 when the robots share a common handedness and for n \geq 5 when they do not. Thus, we deduce that with chirality (resp. without chirality) both problems are equivalent for n \geq 4 (resp. n \geq 5) in CORDA.<|reference_end|> | arxiv | @article{dieudonné2009leader,
title={Leader Election Problem Versus Pattern Formation Problem},
author={Yoann Dieudonné (MIS), Franck Petit (LIP6), Vincent Villain (MIS)},
journal={arXiv preprint arXiv:0902.2851},
year={2009},
archivePrefix={arXiv},
eprint={0902.2851},
primaryClass={cs.DC cs.MA}
} | dieudonné2009leader |
arxiv-6435 | 0902.2853 | A formal calculus on the Riordan near algebra | <|reference_start|>A formal calculus on the Riordan near algebra: The Riordan group is the semi-direct product of a multiplicative group of invertible series and a group, under substitution, of non units. The Riordan near algebra, as introduced in this paper, is the Cartesian product of the algebra of formal power series and its principal ideal of non units, equipped with a product that extends the multiplication of the Riordan group. The later is naturally embedded as a subgroup of units into the former. In this paper, we prove the existence of a formal calculus on the Riordan algebra. This formal calculus plays a role similar to those of holomorphic calculi in the Banach or Fr\'echet algebras setting, but without the constraint of a radius of convergence. Using this calculus, we define \emph{en passant} a notion of generalized powers in the Riordan group.<|reference_end|> | arxiv | @article{poinsot2009a,
title={A formal calculus on the Riordan near algebra},
author={Laurent Poinsot (LIPN), Gérard Duchamp (LIPN)},
journal={Advances and Applications in Discrete Mathematics 6, 1 (2010)
11-44},
year={2009},
archivePrefix={arXiv},
eprint={0902.2853},
primaryClass={cs.SC math.CO}
} | poinsot2009a |
arxiv-6436 | 0902.2859 | Transmission protocols for instruction streams | <|reference_start|>Transmission protocols for instruction streams: Threads as considered in thread algebra model behaviours to be controlled by some execution environment: upon each action performed by a thread, a reply from its execution environment -- which takes the action as an instruction to be processed -- determines how the thread proceeds. In this paper, we are concerned with the case where the execution environment is remote: we describe and analyse some transmission protocols for passing instructions from a thread to a remote execution environment.<|reference_end|> | arxiv | @article{bergstra2009transmission,
title={Transmission protocols for instruction streams},
author={J. A. Bergstra, C. A. Middelburg},
journal={In ICTAC 2009, pages 127--139. Springer-Verlag, LNCS 5684, 2009},
year={2009},
doi={10.1007/978-3-642-03466-4_8},
number={PRG0903},
archivePrefix={arXiv},
eprint={0902.2859},
primaryClass={cs.PL cs.DC}
} | bergstra2009transmission |
arxiv-6437 | 0902.2866 | Collective dynamics of social annotation | <|reference_start|>Collective dynamics of social annotation: The enormous increase of popularity and use of the WWW has led in the recent years to important changes in the ways people communicate. An interesting example of this fact is provided by the now very popular social annotation systems, through which users annotate resources (such as web pages or digital photographs) with text keywords dubbed tags. Understanding the rich emerging structures resulting from the uncoordinated actions of users calls for an interdisciplinary effort. In particular concepts borrowed from statistical physics, such as random walks, and the complex networks framework, can effectively contribute to the mathematical modeling of social annotation systems. Here we show that the process of social annotation can be seen as a collective but uncoordinated exploration of an underlying semantic space, pictured as a graph, through a series of random walks. This modeling framework reproduces several aspects, so far unexplained, of social annotation, among which the peculiar growth of the size of the vocabulary used by the community and its complex network structure that represents an externalization of semantic structures grounded in cognition and typically hard to access.<|reference_end|> | arxiv | @article{cattuto2009collective,
title={Collective dynamics of social annotation},
author={Ciro Cattuto, Alain Barrat (CPT), Andrea Baldassarri, G. Schehr (LPT),
Vittorio Loreto},
journal={Proceedings of the National Academy of Sciences 106 (2009) 10511},
year={2009},
doi={10.1073/pnas.0901136106},
archivePrefix={arXiv},
eprint={0902.2866},
primaryClass={cs.CY cond-mat.stat-mech physics.soc-ph}
} | cattuto2009collective |
arxiv-6438 | 0902.2871 | The Semantics of Kalah Game | <|reference_start|>The Semantics of Kalah Game: The present work consisted in developing a board game. There are the traditional ones (Monopoly, Cluedo, etc.), but those which interest us leave less room to chance (luck) than to strategy, such as the game of chess. Kalah is an old African game; its rules are simple, but the strategies to be used are very complex to implement. Of course, they rest on a strongly mathematical basis, as in the film "Rain Man", where one can see that gambling can be played with strategies based on mathematical theories. Artificial Intelligence gives a machine the possibility "of thinking" and, therefore, allows it to make decisions. In our work, we use it to give the computer the means to choose its best move.<|reference_end|> | arxiv | @article{musumbu2009the,
title={The Semantics of Kalah Game},
author={Kaninda Musumbu (LaBRI)},
journal={ACM International Conference Proceeding Series, ISBN 0-9544145-6-X
(2005) 191-196},
year={2009},
archivePrefix={arXiv},
eprint={0902.2871},
primaryClass={cs.AI}
} | musumbu2009the |
arxiv-6439 | 0902.2917 | Full Rate L2-Orthogonal Space-Time CPM for Three Antennas | <|reference_start|>Full Rate L2-Orthogonal Space-Time CPM for Three Antennas: To combine the power efficiency of Continuous Phase Modulation (CPM) with enhanced performance in fading environments, some authors have suggested using CPM in combination with Space-Time Codes (STC). Recently, we have proposed a CPM ST-coding scheme based on L2-orthogonality for two transmitting antennas. In this paper we extend this approach to the three-antenna case. We analytically derive a family of coding schemes which we call Parallel Code (PC). This code family has full rate and we prove that the proposed coding scheme achieves full diversity, as confirmed by accompanying simulations. We detail an example of the proposed ST codes that can be interpreted as a conventional CPM scheme with different alphabet sets for the different transmit antennas, which results in a simplified implementation. Thanks to L2-orthogonality, the decoding complexity, usually exponential in the number of transmitting antennas, is reduced to linear complexity.<|reference_end|> | arxiv | @article{hesse2009full,
title={Full Rate L2-Orthogonal Space-Time CPM for Three Antennas},
author={Matthias Hesse (I3S), Jerome Lebrun (I3S), Luc Deneire (I3S)},
journal={arXiv preprint arXiv:0902.2917},
year={2009},
archivePrefix={arXiv},
eprint={0902.2917},
primaryClass={cs.IT math.IT}
} | hesse2009full |
arxiv-6440 | 0902.2948 | Optimized L2-Orthogonal STC CPM for 3 Antennas | <|reference_start|>Optimized L2-Orthogonal STC CPM for 3 Antennas: In this paper, we further develop our recently designed family of L2-orthogonal Space-Time codes for CPM. These codes maintain the constant-envelope property of CPM, the diversity of Space-Time codes, and, moreover, orthogonality (and thus reduced decoding complexity), while remaining full rate even for more than two transmitting antennas. The issue of power efficiency for these codes is first dealt with by proving that the inherent increase in bandwidth in these systems is quite moderate. It is then detailed how the initial state of the code influences the coding gain and has to be optimized. For the two- and three-antenna cases, we determine the optimal values by computer simulations and show how the coding gain, and therewith the bit error performance, are significantly improved by this optimization.<|reference_end|> | arxiv | @article{hesse2009optimized,
title={Optimized L2-Orthogonal STC CPM for 3 Antennas},
author={Matthias Hesse (I3S), Jerome Lebrun (I3S), Luc Deneire (I3S)},
journal={arXiv preprint arXiv:0902.2948},
year={2009},
archivePrefix={arXiv},
eprint={0902.2948},
primaryClass={cs.IT math.IT}
} | hesse2009optimized |
arxiv-6441 | 0902.2953 | ImageSpace: An Environment for Image Ontology Management | <|reference_start|>ImageSpace: An Environment for Image Ontology Management: More and more researchers have realized that ontologies will play a critical role in the development of the Semantic Web, the next generation Web in which content is not only consumable by humans, but also by software agents. The development of tools to support ontology management including creation, visualization, annotation, database storage, and retrieval is thus extremely important. We have developed ImageSpace, an image ontology creation and annotation tool that features (1) full support for the standard web ontology language DAML+OIL; (2) image ontology creation, visualization, image annotation and display in one integrated framework; (3) ontology consistency assurance; and (4) storing ontologies and annotations in relational databases. It is expected that the availability of such a tool will greatly facilitate the creation of image repositories as islands of the Semantic Web.<|reference_end|> | arxiv | @article{lu2009imagespace:,
title={ImageSpace: An Environment for Image Ontology Management},
author={Shiyong Lu, Rong Huang, Artem Chebotko, Yu Deng, Farshad Fotouhi},
journal={International Journal of Information Theories and Applications
(IJITA), 11(2), pp. 127-134, 2004},
year={2009},
archivePrefix={arXiv},
eprint={0902.2953},
primaryClass={cs.DL cs.DB cs.MM cs.SE}
} | lu2009imagespace: |
arxiv-6442 | 0902.2969 | Ptarithmetic | <|reference_start|>Ptarithmetic: The present article introduces ptarithmetic (short for "polynomial time arithmetic") -- a formal number theory similar to the well known Peano arithmetic, but based on the recently born computability logic (see http://www.cis.upenn.edu/~giorgi/cl.html) instead of classical logic. The formulas of ptarithmetic represent interactive computational problems rather than just true/false statements, and their "truth" is understood as existence of a polynomial time solution. The system of ptarithmetic elaborated in this article is shown to be sound and complete. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a polynomial time solution and, furthermore, such a solution can be effectively extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a polynomial time solution is represented by some theorem T of the system. The paper is self-contained, and can be read without any previous familiarity with computability logic.<|reference_end|> | arxiv | @article{japaridze2009ptarithmetic,
title={Ptarithmetic},
author={Giorgi Japaridze},
journal={The Baltic International Yearbook on Cognition, Logic and
Communication 8 (2013), Article 5, pp. 1-186},
year={2009},
doi={10.4148/1944-3676.1074},
archivePrefix={arXiv},
eprint={0902.2969},
primaryClass={cs.LO cs.AI cs.CC}
} | japaridze2009ptarithmetic |
arxiv-6443 | 0902.2975 | Writing Positive/Negative-Conditional Equations Conveniently | <|reference_start|>Writing Positive/Negative-Conditional Equations Conveniently: We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.<|reference_end|> | arxiv | @article{wirth2009writing,
title={Writing Positive/Negative-Conditional Equations Conveniently},
author={Claus-Peter Wirth, Ruediger Lunde},
journal={arXiv preprint arXiv:0902.2975},
year={2009},
number={SEKI Working-Paper SWP-94-04},
archivePrefix={arXiv},
eprint={0902.2975},
primaryClass={cs.AI cs.LO}
} | wirth2009writing |
arxiv-6444 | 0902.2995 | ASF+ --- eine ASF-aehnliche Spezifikationssprache | <|reference_start|>ASF+ --- eine ASF-aehnliche Spezifikationssprache: Maintaining the main aspects of the algebraic specification language ASF as presented in [Bergstra&al.89] we have extended ASF with the following concepts: While once exported names in ASF must stay visible up to the top of the module hierarchy, ASF+ permits a more sophisticated hiding of signature names. The erroneous merging of distinct structures that occurs when importing different actualizations of the same parameterized module in ASF is avoided in ASF+ by a more adequate form of parameter binding. The new ``Namensraum''-concept of ASF+ permits the specifier on the one hand directly to identify the origin of hidden names and on the other to decide whether an imported module is only to be accessed or whether an important property of it is to be modified. In the first case he can access one single globally provided version; in the second he has to import a copy of the module. Finally ASF+ permits semantic conditions on parameters and the specification of tasks for a theorem prover.<|reference_end|> | arxiv | @article{lunde2009asf+,
title={ASF+ --- eine ASF-aehnliche Spezifikationssprache},
author={Ruediger Lunde, Claus-Peter Wirth},
journal={arXiv preprint arXiv:0902.2995},
year={2009},
number={SEKI Working-Paper SWP-94-05},
archivePrefix={arXiv},
eprint={0902.2995},
primaryClass={cs.AI cs.SC}
} | lunde2009asf+ |
arxiv-6445 | 0902.3026 | OntoELAN: An Ontology-based Linguistic Multimedia Annotator | <|reference_start|>OntoELAN: An Ontology-based Linguistic Multimedia Annotator: Despite its scientific, political, and practical value, comprehensive information about human languages, in all their variety and complexity, is not readily obtainable and searchable. One reason is that many language data are collected as audio and video recordings, which imposes a challenge to document indexing and retrieval. Annotation of multimedia data provides an opportunity for making the semantics explicit and facilitates the searching of multimedia documents. We have developed OntoELAN, an ontology-based linguistic multimedia annotator that features: (1) support for loading and displaying ontologies specified in OWL; (2) creation of a language profile, which allows a user to choose a subset of terms from an ontology and conveniently rename them if needed; (3) creation of ontological tiers, which can be annotated with profile terms and, therefore, corresponding ontological terms; and (4) saving annotations in the XML format as Multimedia Ontology class instances and, linked to them, class instances of other ontologies used in ontological tiers. To the best of our knowledge, OntoELAN is the first audio/video annotation tool in the linguistic domain that provides support for ontology-based annotation.<|reference_end|> | arxiv | @article{chebotko2009ontoelan:,
title={OntoELAN: An Ontology-based Linguistic Multimedia Annotator},
author={Artem Chebotko, Yu Deng, Shiyong Lu, Farshad Fotouhi, Anthony Aristar,
Hennie Brugman, Alexander Klassmann, Han Sloetjes, Albert Russel, Peter
Wittenburg},
journal={Proceedings of the IEEE Sixth International Symposium on
Multimedia Software Engineering (IEEE-MSE'04), pp. 329-336, Miami, FL, USA,
December, 2004},
year={2009},
doi={10.1109/MMSE.2004.58},
archivePrefix={arXiv},
eprint={0902.3026},
primaryClass={cs.DL cs.DB cs.MM cs.SE}
} | chebotko2009ontoelan: |
arxiv-6446 | 0902.3027 | Ontology-Based Annotation of Multimedia Language Data for the Semantic Web | <|reference_start|>Ontology-Based Annotation of Multimedia Language Data for the Semantic Web: There is an increasing interest and effort in preserving and documenting endangered languages. Language data are valuable only when they are well-cataloged, indexed and searchable. Many language data, particularly those of lesser-spoken languages, are collected as audio and video recordings. While multimedia data provide more channels and dimensions to describe a language's function, and give a better presentation of the cultural system associated with the language of that community, they are not text-based or structured (in binary format), and their semantics is implicit in their content. The content is thus easy for a human being to understand, but difficult for computers to interpret. Hence, there is a great need for a powerful and user-friendly system to annotate multimedia data with text-based, well-structured and searchable metadata. This chapter describes an ontology-based multimedia annotation tool, OntoELAN, that enables annotation of language multimedia data with a linguistic ontology.<|reference_end|> | arxiv | @article{chebotko2009ontology-based,
title={Ontology-Based Annotation of Multimedia Language Data for the Semantic
Web},
author={Artem Chebotko, Shiyong Lu, Farshad Fotouhi, Anthony Aristar},
journal={arXiv preprint arXiv:0902.3027},
year={2009},
archivePrefix={arXiv},
eprint={0902.3027},
primaryClass={cs.DL cs.DB cs.MM}
} | chebotko2009ontology-based |
arxiv-6447 | 0902.3056 | New Results in the Simultaneous Message Passing Model | <|reference_start|>New Results in the Simultaneous Message Passing Model: Consider the following Simultaneous Message Passing (SMP) model for computing a relation f subset of X x Y x Z. In this model Alice, on input x in X and Bob, on input y in Y, send one message each to a third party Referee who then outputs a z in Z such that (x,y,z) in f. We first show optimal 'Direct sum' results for all relations f in this model, both in the quantum and classical settings, in the situation where we allow shared resources (shared entanglement in quantum protocols and public coins in classical protocols) between Alice and Referee and Bob and Referee and no shared resource between Alice and Bob. This implies that, in this model, the communication required to compute k simultaneous instances of f, with constant success overall, is at least k-times the communication required to compute one instance with constant success. This in particular implies an earlier Direct sum result, shown by Chakrabarti, Shi, Wirth and Yao, 2001, for the Equality function (and a class of other so-called robust functions), in the classical smp model with no shared resources between any parties. Furthermore we investigate the gap between the smp model and the one-way model in communication complexity and exhibit a partial function that is exponentially more expensive in the former if quantum communication with entanglement is allowed, compared to the latter even in the deterministic case.<|reference_end|> | arxiv | @article{jain2009new,
title={New Results in the Simultaneous Message Passing Model},
author={Rahul Jain, Hartmut Klauck},
journal={arXiv preprint arXiv:0902.3056},
year={2009},
archivePrefix={arXiv},
eprint={0902.3056},
primaryClass={cs.DC cs.CC cs.IT math.IT quant-ph}
} | jain2009new |
arxiv-6448 | 0902.3065 | The Multi-Branched Method of Moments for Queueing Networks | <|reference_start|>The Multi-Branched Method of Moments for Queueing Networks: We propose a new exact solution algorithm for closed multiclass product-form queueing networks that is several orders of magnitude faster and less memory consuming than established methods for multiclass models, such as the Mean Value Analysis (MVA) algorithm. The technique is an important generalization of the recently proposed Method of Moments (MoM) which, differently from MVA, recursively computes higher-order moments of queue-lengths instead of mean values. The main contribution of this paper is to prove that the information used in the MoM recursion can be increased by considering multiple recursive branches that evaluate models with different number of queues. This reformulation allows to formulate a simpler matrix difference equation which leads to large computational savings with respect to the original MoM recursion. Computational analysis shows several cases where the proposed algorithm is between 1,000 and 10,000 times faster and less memory consuming than the original MoM, thus extending the range of multiclass models where exact solutions are feasible.<|reference_end|> | arxiv | @article{casale2009the,
title={The Multi-Branched Method of Moments for Queueing Networks},
author={Giuliano Casale},
journal={arXiv preprint arXiv:0902.3065},
year={2009},
archivePrefix={arXiv},
eprint={0902.3065},
primaryClass={cs.PF}
} | casale2009the |
arxiv-6449 | 0902.3072 | Syntactic variation of support verb constructions | <|reference_start|>Syntactic variation of support verb constructions: We report experiments about the syntactic variations of support verb constructions, a special type of multiword expressions (MWEs) containing predicative nouns. In these expressions, the noun can occur with or without the verb, with no clear-cut semantic difference. We extracted from a large French corpus a set of examples of the two situations and derived statistical results from these data. The extraction involved large-coverage language resources and finite-state techniques. The results show that, most frequently, predicative nouns occur without a support verb. This fact has consequences on methods of extracting or recognising MWEs.<|reference_end|> | arxiv | @article{laporte2009syntactic,
title={Syntactic variation of support verb constructions},
author={Eric Laporte (IGM-LabInfo), Elisabete Ranchhod (ONSET-CEL), Anastasia
Yannacopoulou (IGM-LabInfo)},
journal={Lingvisticae Investigationes 31, 2 (2008) 173-185},
year={2009},
archivePrefix={arXiv},
eprint={0902.3072},
primaryClass={cs.CL}
} | laporte2009syntactic |
arxiv-6450 | 0902.3076 | Coding for the Non-Orthogonal Amplify-and-Forward Cooperative Channel | <|reference_start|>Coding for the Non-Orthogonal Amplify-and-Forward Cooperative Channel: In this work, we consider the problem of coding for the half-duplex non-orthogonal amplify-and-forward (NAF) cooperative channel where the transmitter-to-relay and the inter-relay links are highly reliable. We derive bounds on the diversity order of the NAF protocol that are achieved by a distributed space-time bit-interleaved coded modulation (D-ST-BICM) scheme under iterative APP detection and decoding. These bounds lead to the design of space-time precoders that ensure maximum diversity order and high coding gains. The word error rate performance of D-ST-BICM is also compared to outage probability limits.<|reference_end|> | arxiv | @article{kraidy2009coding,
title={Coding for the Non-Orthogonal Amplify-and-Forward Cooperative Channel},
author={Ghassan M. Kraidy, Nicolas Gresset, and Joseph J. Boutros},
journal={arXiv preprint arXiv:0902.3076},
year={2009},
doi={10.1109/ITW.2007.4313147},
archivePrefix={arXiv},
eprint={0902.3076},
primaryClass={cs.IT math.IT}
} | kraidy2009coding |
arxiv-6451 | 0902.3081 | Compact Ancestry Labeling Schemes for Trees of Small Depth | <|reference_start|>Compact Ancestry Labeling Schemes for Trees of Small Depth: An {\em ancestry labeling scheme} labels the nodes of any tree in such a way that ancestry queries between any two nodes in a tree can be answered just by looking at their corresponding labels. The common measure to evaluate the quality of an ancestry labeling scheme is by its {\em label size}, that is the maximal number of bits stored in a label, taken over all $n$-node trees. The design of ancestry labeling schemes finds applications in XML search engines. In the context of these applications, even small improvements in the label size are important. In fact, the literature about this topic is interested in the exact label size rather than just its order of magnitude. As a result, following the proposal of an original scheme of size $2\log n$ bits, a considerable amount of work was devoted to improving the bound on the label size. The current state-of-the-art upper bound is $\log n + O(\sqrt{\log n})$ bits which is still far from the known $\log n + \Omega(\log\log n)$ lower bound. Moreover, the hidden constant factor in the additive $O(\sqrt{\log n})$ term is large, which makes this term dominate the label size for typical current XML trees. In an attempt to provide good performance for real XML data, we rely on the observation that the depth of a typical XML tree is bounded from above by a small constant. Having this in mind, we present an ancestry labeling scheme of size $\log n+2\log d +O(1)$, for the family of trees with at most $n$ nodes and depth at most $d$. In addition to our main result, we prove a result that may be of independent interest concerning the existence of a linear {\em universal graph} for the family of forests with trees of bounded depth.<|reference_end|> | arxiv | @article{fraigniaud2009compact,
title={Compact Ancestry Labeling Schemes for Trees of Small Depth},
author={Pierre Fraigniaud and Amos Korman},
journal={arXiv preprint arXiv:0902.3081},
year={2009},
archivePrefix={arXiv},
eprint={0902.3081},
primaryClass={cs.DS cs.DC cs.DM}
} | fraigniaud2009compact |
arxiv-6452 | 0902.3088 | Automatic generation of non-uniform random variates for arbitrary pointwise computable probability densities by tiling | <|reference_start|>Automatic generation of non-uniform random variates for arbitrary pointwise computable probability densities by tiling: We present a rejection method based on recursive covering of the probability density function with equal tiles. The concept works for any probability density function that is pointwise computable or representable by tabular data. By the implicit construction of piecewise constant majorizing and minorizing functions that are arbitrarily close to the density function the production of random variates is arbitrarily independent of the computation of the density function and extremely fast. The method works unattended for probability densities with discontinuities (jumps and poles). The setup time is short, marginally independent of the shape of the probability density and linear in table size. Recently formulated requirements to a general and automatic non-uniform random number generator are topped. We give benchmarks together with a similar rejection method and with a transformation method.<|reference_end|> | arxiv | @article{fulger2009automatic,
title={Automatic generation of non-uniform random variates for arbitrary
pointwise computable probability densities by tiling},
author={Daniel Fulger and Guido Germano},
journal={arXiv preprint arXiv:0902.3088},
year={2009},
archivePrefix={arXiv},
eprint={0902.3088},
primaryClass={cs.MS cs.NA}
} | fulger2009automatic |
arxiv-6453 | 0902.3104 | On Framework and Hybrid Auction Approach to the Spectrum Licensing Procedure | <|reference_start|>On Framework and Hybrid Auction Approach to the Spectrum Licensing Procedure: Inspired by the recent developments in the field of Spectrum Auctions, we have tried to provide a comprehensive framework for the complete procedure of Spectrum Licensing. We have identified the various issues the Governments need to decide upon while designing the licensing procedure and what are the various options available in each issue. We also provide an in-depth study of how each of these options impacts the overall procedure along with theoretical and practical results from the past. Lastly, we argue as to how we can combine the positives of the two most widely used Spectrum Auctions mechanisms into the Hybrid Multiple Round Auction mechanism being proposed by us.<|reference_end|> | arxiv | @article{dikshit2009on,
title={On Framework and Hybrid Auction Approach to the Spectrum Licensing
Procedure},
author={Devansh Dikshit, Y. Narahari},
journal={arXiv preprint arXiv:0902.3104},
year={2009},
archivePrefix={arXiv},
eprint={0902.3104},
primaryClass={cs.GT cs.CY}
} | dikshit2009on |
arxiv-6454 | 0902.3114 | Analysis of the Second Moment of the LT Decoder | <|reference_start|>Analysis of the Second Moment of the LT Decoder: We analyze the second moment of the ripple size during the LT decoding process and prove that the standard deviation of the ripple size for an LT-code with length $k$ is of the order of $\sqrt k.$ Together with a result by Karp et. al stating that the expectation of the ripple size is of the order of $k$ [3], this gives bounds on the error probability of the LT decoder. We also give an analytic expression for the variance of the ripple size up to terms of constant order, and refine the expression in [3] for the expectation of the ripple size up to terms of the order of $1/k$, thus providing a first step towards an analytic finite-length analysis of LT decoding.<|reference_end|> | arxiv | @article{maatouk2009analysis,
title={Analysis of the Second Moment of the LT Decoder},
author={Ghid Maatouk, Amin Shokrollahi},
journal={arXiv preprint arXiv:0902.3114},
year={2009},
archivePrefix={arXiv},
eprint={0902.3114},
primaryClass={cs.IT math.IT}
} | maatouk2009analysis |
arxiv-6455 | 0902.3121 | Parallel machine scheduling with precedence constraints and setup times | <|reference_start|>Parallel machine scheduling with precedence constraints and setup times: This paper presents different methods for solving parallel machine scheduling problems with precedence constraints and setup times between the jobs. Limited discrepancy search methods mixed with local search principles, dominance conditions and specific lower bounds are proposed. The proposed methods are evaluated on a set of randomly generated instances and compared with previous results from the literature and those obtained with an efficient commercial solver. We conclude that our propositions are quite competitive and our results even outperform other approaches in most cases.<|reference_end|> | arxiv | @article{gacias2009parallel,
title={Parallel machine scheduling with precedence constraints and setup times},
author={Bernat Gacias (LAAS), Christian Artigues (LAAS), Pierre Lopez (LAAS)},
journal={arXiv preprint arXiv:0902.3121},
year={2009},
archivePrefix={arXiv},
eprint={0902.3121},
primaryClass={cs.DS}
} | gacias2009parallel |
arxiv-6456 | 0902.3136 | A distributed editing environment for XML documents | <|reference_start|>A distributed editing environment for XML documents: XML is based on two essential aspects: the modelization of data in a tree like structure and the separation between the information itself and the way it is displayed. XML structures are easily serializable. The separation between an abstract representation and one or several views on it allows the elaboration of specialized interfaces to visualize or modify data. A lot of developments were made to interact with XML data but the use of these applications over the Internet is just starting. This paper presents a prototype of a distributed editing environment over the Internet. The key point of our system is the way user interactions are handled. Selections and modifications made by a user are not directly reflected on the concrete view, they are serialized in XML and transmitted to a server which applies them to the document and broadcasts updates to the views. This organization has several advantages. XML documents coding selection and modification operations are usually smaller than the edited document and can be directly processed with a transformation engine which can adapt them to different representations. In addition, several selections or modifications can be combined into an unique XML document. This allows one to update multiple views with different frequencies and fits the requirement of an asynchronous communication mode like HTTP.<|reference_end|> | arxiv | @article{pasquier2009a,
title={A distributed editing environment for XML documents},
author={Claude Pasquier (INRIA Sophia Antipolis), Laurent Th\'ery (INRIA
Sophia Antipolis)},
journal={1st ECOOP Workshop on XML and Object Technology, Sophia Antipolis,
France (2000)},
year={2009},
archivePrefix={arXiv},
eprint={0902.3136},
primaryClass={cs.SE}
} | pasquier2009a |
arxiv-6457 | 0902.3175 | The One-Way Communication Complexity of Group Membership | <|reference_start|>The One-Way Communication Complexity of Group Membership: This paper studies the one-way communication complexity of the subgroup membership problem, a classical problem closely related to basic questions in quantum computing. Here Alice receives, as input, a subgroup $H$ of a finite group $G$; Bob receives an element $x \in G$. Alice is permitted to send a single message to Bob, after which he must decide if his input $x$ is an element of $H$. We prove the following upper bounds on the classical communication complexity of this problem in the bounded-error setting: (1) The problem can be solved with $O(\log |G|)$ communication, provided the subgroup $H$ is normal; (2) The problem can be solved with $O(d_{\max} \cdot \log |G|)$ communication, where $d_{\max}$ is the maximum of the dimensions of the irreducible complex representations of $G$; (3) For any prime $p$ not dividing $|G|$, the problem can be solved with $O(d_{\max} \cdot \log p)$ communication, where $d_{\max}$ is the maximum of the dimensions of the irreducible $\F_p$-representations of $G$.<|reference_end|> | arxiv | @article{aaronson2009the,
title={The One-Way Communication Complexity of Group Membership},
author={Scott Aaronson, Fran\c{c}ois Le Gall, Alexander Russell, Seiichiro
Tani},
journal={Chicago Journal of Theoretical Computer Science, Vol. 2011,
Article 6, 2011},
year={2009},
doi={10.4086/cjtcs.2011.006},
archivePrefix={arXiv},
eprint={0902.3175},
primaryClass={cs.CC quant-ph}
} | aaronson2009the |
arxiv-6458 | 0902.3176 | Error-Correcting Tournaments | <|reference_start|>Error-Correcting Tournaments: We present a family of pairwise tournaments reducing $k$-class classification to binary classification. These reductions are provably robust against a constant fraction of binary errors. The results improve on the PECOC construction \cite{SECOC} with an exponential improvement in computation, from $O(k)$ to $O(\log_2 k)$, and the removal of a square root in the regret dependence, matching the best possible computation and regret up to a constant.<|reference_end|> | arxiv | @article{beygelzimer2009error-correcting,
title={Error-Correcting Tournaments},
author={Alina Beygelzimer, John Langford, and Pradeep Ravikumar},
journal={arXiv preprint arXiv:0902.3176},
year={2009},
archivePrefix={arXiv},
eprint={0902.3176},
primaryClass={cs.AI cs.LG}
} | beygelzimer2009error-correcting |
arxiv-6459 | 0902.3178 | Multiple Multicasts with the Help of a Relay | <|reference_start|>Multiple Multicasts with the Help of a Relay: The problem of simultaneous multicasting of multiple messages with the help of a relay terminal is considered. In particular, a model is studied in which a relay station simultaneously assists two transmitters in multicasting their independent messages to two receivers. The relay may also have an independent message of its own to multicast. As a first step to address this general model, referred to as the compound multiple access channel with a relay (cMACr), the capacity region of the multiple access channel with a "cognitive" relay is characterized, including the cases of partial and rate-limited cognition. Then, achievable rate regions for the cMACr model are presented based on decode-and-forward (DF) and compress-and-forward (CF) relaying strategies. Moreover, an outer bound is derived for the special case, called the cMACr without cross-reception, in which each transmitter has a direct link to one of the receivers while the connection to the other receiver is enabled only through the relay terminal. The capacity region is characterized for a binary modulo additive cMACr without cross-reception, showing the optimality of binary linear block codes, thus highlighting the benefits of physical layer network coding and structured codes. Results are extended to the Gaussian channel model as well, providing achievable rate regions for DF and CF, as well as for a structured code design based on lattice codes. It is shown that the performance with lattice codes approaches the upper bound for increasing power, surpassing the rates achieved by the considered random coding-based techniques.<|reference_end|> | arxiv | @article{gunduz2009multiple,
title={Multiple Multicasts with the Help of a Relay},
author={Deniz Gunduz, Osvaldo Simeone, Andrea J. Goldsmith, H. Vincent Poor
and Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0902.3178},
year={2009},
archivePrefix={arXiv},
eprint={0902.3178},
primaryClass={cs.IT math.IT}
} | gunduz2009multiple |
arxiv-6460 | 0902.3196 | Symbolic Computing with Incremental Mindmaps to Manage and Mine Data Streams - Some Applications | <|reference_start|>Symbolic Computing with Incremental Mindmaps to Manage and Mine Data Streams - Some Applications: In our understanding, a mind-map is an adaptive engine that basically works incrementally on the fundament of existing transactional streams. Generally, mind-maps consist of symbolic cells that are connected with each other and that become either stronger or weaker depending on the transactional stream. Based on the underlying biologic principle, these symbolic cells and their connections as well may adaptively survive or die, forming different cell agglomerates of arbitrary size. In this work, we intend to prove mind-maps' eligibility following diverse application scenarios, for example being an underlying management system to represent normal and abnormal traffic behaviour in computer networks, supporting the detection of the user behaviour within search engines, or being a hidden communication layer for natural language interaction.<|reference_end|> | arxiv | @article{brucks2009symbolic,
title={Symbolic Computing with Incremental Mindmaps to Manage and Mine Data
Streams - Some Applications},
author={Claudine Brucks, Michael Hilker, Christoph Schommer, Cynthia Wagner,
Ralph Weires},
journal={Proceedings of the 4th International Workshop on Neural-Symbolic
Learning and Reasoning (NeSy '08), Patras, Greece, July 2008},
year={2009},
archivePrefix={arXiv},
eprint={0902.3196},
primaryClass={cs.NE cs.AI}
} | brucks2009symbolic |
arxiv-6461 | 0902.3207 | Random numbers from the tails of probability distributions using the transformation method | <|reference_start|>Random numbers from the tails of probability distributions using the transformation method: The speed of many one-line transformation methods for the production of, for example, Levy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities fast rejection implementations like the Ziggurat by Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater or smaller than a certain value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method showing the properties of the transform maps of the above mentioned distributions as examples of stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation.<|reference_end|> | arxiv | @article{fulger2009random,
title={Random numbers from the tails of probability distributions using the
transformation method},
author={Daniel Fulger, Enrico Scalas and Guido Germano},
journal={Fractional Calculus and Applied Analysis 16 (2), 332-353, 2013},
year={2009},
doi={10.2478/s13540-013-0021-z},
archivePrefix={arXiv},
eprint={0902.3207},
primaryClass={cs.MS cs.NA}
} | fulger2009random |
arxiv-6462 | 0902.3208 | A Fast Multigrid Algorithm for Energy Minimization Under Planar Density Constraints | <|reference_start|>A Fast Multigrid Algorithm for Energy Minimization Under Planar Density Constraints: The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving correction to this problem. The method is demonstrated on various graph drawing (visualization) instances.<|reference_end|> | arxiv | @article{ron2009a,
title={A Fast Multigrid Algorithm for Energy Minimization Under Planar Density
Constraints},
author={Dorit Ron, Ilya Safro, Achi Brandt},
journal={arXiv preprint arXiv:0902.3208},
year={2009},
archivePrefix={arXiv},
eprint={0902.3208},
primaryClass={cs.DS cs.MS cs.NA}
} | ron2009a |
arxiv-6463 | 0902.3210 | Coverage in Multi-Antenna Two-Tier Networks | <|reference_start|>Coverage in Multi-Antenna Two-Tier Networks: In two-tier networks -- comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays) -- with universal frequency reuse, the near-far effect from cross-tier interference creates dead spots where reliable coverage cannot be guaranteed to users in either tier. Equipping the macrocell and femtocells with multiple antennas enhances robustness against the near-far problem. This work derives the maximum number of simultaneously transmitting multiple antenna femtocells meeting a per-tier outage probability constraint. Coverage dead zones are presented wherein cross-tier interference bottlenecks cellular and hotspot coverage. Two operating regimes are shown namely 1) a cellular-limited regime in which femtocell users experience unacceptable cross-tier interference and 2) a hotspot-limited regime wherein both femtocell users and cellular users are limited by hotspot interference. Our analysis accounts for the per-tier transmit powers, the number of transmit antennas (single antenna transmission being a special case) and terrestrial propagation such as the Rayleigh fading and the path loss exponents. Single-user (SU) multiple antenna transmission at each tier is shown to provide significantly superior coverage and spatial reuse relative to multiuser (MU) transmission. We propose a decentralized carrier-sensing approach to regulate femtocell transmission powers based on their location. Considering a worst-case cell-edge location, simulations using typical path loss scenarios show that our interference management strategy provides reliable cellular coverage with about 60 femtocells per cellsite.<|reference_end|> | arxiv | @article{chandrasekhar2009coverage,
title={Coverage in Multi-Antenna Two-Tier Networks},
author={Vikram Chandrasekhar, Marios Kountouris and Jeffrey G. Andrews},
journal={arXiv preprint arXiv:0902.3210},
year={2009},
doi={10.1109/TWC.2009.090241},
archivePrefix={arXiv},
eprint={0902.3210},
primaryClass={cs.NI cs.IT math.IT}
} | chandrasekhar2009coverage |
arxiv-6464 | 0902.3223 | An Exact Algorithm for the Stratification Problem with Proportional Allocation | <|reference_start|>An Exact Algorithm for the Stratification Problem with Proportional Allocation: We report a new optimal resolution for the statistical stratification problem under proportional sampling allocation among strata. Consider a finite population of N units, a random sample of n units selected from this population and a number L of strata. Thus, we have to define which units belong to each stratum so as to minimize the variance of a total estimator for one desired variable of interest in each stratum, and consequently reduce the overall variance for such quantity. In order to solve this problem, an exact algorithm based on the concept of minimal path in a graph is proposed and assessed. Computational results using real data from IBGE (Brazilian Central Statistical Office) are provided.<|reference_end|> | arxiv | @article{brito2009an,
title={An Exact Algorithm for the Stratification Problem with Proportional
Allocation},
author={Jose Brito, Mauricio Lila, Flavio Montenegro, Nelson Maculan},
journal={arXiv preprint arXiv:0902.3223},
year={2009},
archivePrefix={arXiv},
eprint={0902.3223},
primaryClass={cs.LG cs.DM cs.DS}
} | brito2009an |
arxiv-6465 | 0902.3282 | Computing k-Centers On a Line | <|reference_start|>Computing k-Centers On a Line: In this paper we consider several instances of the k-center on a line problem where the goal is, given a set of points S in the plane and a parameter k >= 1, to find k disks with centers on a line l such that their union covers S and the maximum radius of the disks is minimized. This problem is a constraint version of the well-known k-center problem in which the centers are constrained to lie in a particular region such as a segment, a line, and a polygon. We first consider the simplest version of the problem where the line l is given in advance; we can solve this problem in O(n log^2 n) time. We then investigate the cases where only the orientation of the line l is fixed and where the line l can be arbitrary. We can solve these problems in O(n^2 log^2 n) time and in O(n^4 log^2 n) expected time, respectively. For the last two problems, we present (1 + e)-approximation algorithms, which run in O((1/e) n log^2 n) time and O((1/e^2) n log^2 n) time, respectively.<|reference_end|> | arxiv | @article{brass2009computing,
title={Computing k-Centers On a Line},
author={Peter Brass, Christian Knauer, Hyeon-Suk Na, Chan-Su Shin, Antoine
Vigneron},
journal={arXiv preprint arXiv:0902.3282},
year={2009},
archivePrefix={arXiv},
eprint={0902.3282},
primaryClass={cs.CG}
} | brass2009computing |
arxiv-6466 | 0902.3286 | MDS codes on the erasure-erasure wiretap channel | <|reference_start|>MDS codes on the erasure-erasure wiretap channel: This paper considers the problem of perfectly secure communication on a modified version of Wyner's wiretap channel II where both the main and wiretapper's channels have some erasures. A secret message is to be encoded into $n$ channel symbols and transmitted. The main channel is such that the legitimate receiver receives the transmitted codeword with exactly $n - \nu$ erasures, where the positions of the erasures are random. Additionally, an eavesdropper (wire-tapper) is able to observe the transmitted codeword with $n - \mu$ erasures in a similar fashion. This paper studies the maximum achievable information rate with perfect secrecy on this channel and gives a coding scheme using nested codes that achieves the secrecy capacity.<|reference_end|> | arxiv | @article{subramanian2009mds,
title={MDS codes on the erasure-erasure wiretap channel},
author={Arunkumar Subramanian, Steven W. McLaughlin},
journal={arXiv preprint arXiv:0902.3286},
year={2009},
archivePrefix={arXiv},
eprint={0902.3286},
primaryClass={cs.IT math.IT}
} | subramanian2009mds |
arxiv-6467 | 0902.3287 | Adaptive Decoding of LDPC Codes with Binary Messages | <|reference_start|>Adaptive Decoding of LDPC Codes with Binary Messages: A novel adaptive binary decoding algorithm for LDPC codes is proposed, which reduces the decoding complexity while having a comparable or even better performance than corresponding non-adaptive alternatives. In each iteration the variable node decoders use the binary check node decoders multiple times; each single use is referred to as a sub-iteration. To process the sequences of binary messages in each iteration, the variable node decoders employ pre-computed look-up tables. These look-up tables as well as the number of sub-iterations per iteration are dynamically adapted during the decoding process based on the decoder state, represented by the mutual information between the current messages and the syndrome bits. The look-up tables and the number of sub-iterations per iteration are determined and optimized using density evolution. The performance and the complexity of the proposed adaptive decoding algorithm is exemplified by simulations.<|reference_end|> | arxiv | @article{land2009adaptive,
title={Adaptive Decoding of LDPC Codes with Binary Messages},
author={Ingmar Land, Gottfried Lechner, Lars K. Rasmussen},
journal={arXiv preprint arXiv:0902.3287},
year={2009},
archivePrefix={arXiv},
eprint={0902.3287},
primaryClass={cs.IT math.IT}
} | land2009adaptive |
arxiv-6468 | 0902.3294 | Progress in Computer-Assisted Inductive Theorem Proving by Human-Orientedness and Descente Infinie? | <|reference_start|>Progress in Computer-Assisted Inductive Theorem Proving by Human-Orientedness and Descente Infinie?: In this short position paper we briefly review the development history of automated inductive theorem proving and computer-assisted mathematical induction. We think that the current low expectations on progress in this field result from a faulty narrow-scope historical projection. Our main motivation is to explain--on an abstract but hopefully sufficiently descriptive level--why we believe that future progress in the field is to result from human-orientedness and descente infinie.<|reference_end|> | arxiv | @article{wirth2009progress,
title={Progress in Computer-Assisted Inductive Theorem Proving by
Human-Orientedness and Descente Infinie?},
author={Claus-Peter Wirth},
journal={Logic Journal of the IGPL, 2012, Volume 20, Pp. 1046-1063},
year={2009},
doi={10.1093/jigpal/jzr048},
number={SEKI Working-Paper SR-2006-01},
archivePrefix={arXiv},
eprint={0902.3294},
primaryClass={cs.AI cs.LO}
} | wirth2009progress |
arxiv-6469 | 0902.3304 | A bound on the minimum of a real positive polynomial over the standard simplex | <|reference_start|>A bound on the minimum of a real positive polynomial over the standard simplex: We consider the problem of bounding away from 0 the minimum value m taken by a polynomial P of Z[X_1,...,X_k] over the standard simplex, assuming that m>0. Recent algorithmic developments in real algebraic geometry enable us to obtain a positive lower bound on m in terms of the dimension k, the degree d and the bitsize of the coefficients of P. The bound is explicit, and obtained without any extra assumption on P, in contrast with previous results reported in the literature.<|reference_end|> | arxiv | @article{basu2009a,
title={A bound on the minimum of a real positive polynomial over the standard
simplex},
author={Saugata Basu, Richard Leroy, Marie-Francoise Roy},
journal={arXiv preprint arXiv:0902.3304},
year={2009},
archivePrefix={arXiv},
eprint={0902.3304},
primaryClass={cs.SC}
} | basu2009a |
arxiv-6470 | 0902.3372 | Gaussian Fading Is the Worst Fading | <|reference_start|>Gaussian Fading Is the Worst Fading: The capacity of peak-power limited, single-antenna, noncoherent, flat-fading channels with memory is considered. The emphasis is on the capacity pre-log, i.e., on the limiting ratio of channel capacity to the logarithm of the signal-to-noise ratio (SNR), as the SNR tends to infinity. It is shown that, among all stationary & ergodic fading processes of a given spectral distribution function and whose law has no mass point at zero, the Gaussian process gives rise to the smallest pre-log. The assumption that the law of the fading process has no mass point at zero is essential in the sense that there exist stationary & ergodic fading processes whose law has a mass point at zero and that give rise to a smaller pre-log than the Gaussian process of equal spectral distribution function. An extension of our results to multiple-input single-output fading channels with memory is also presented.<|reference_end|> | arxiv | @article{koch2009gaussian,
title={Gaussian Fading Is the Worst Fading},
author={Tobias Koch and Amos Lapidoth},
journal={arXiv preprint arXiv:0902.3372},
year={2009},
archivePrefix={arXiv},
eprint={0902.3372},
primaryClass={cs.IT math.IT}
} | koch2009gaussian |
arxiv-6471 | 0902.3373 | Learning rules from multisource data for cardiac monitoring | <|reference_start|>Learning rules from multisource data for cardiac monitoring: This paper formalises the concept of learning symbolic rules from multisource data in a cardiac monitoring context. Our sources, electrocardiograms and arterial blood pressure measures, describe cardiac behaviours from different viewpoints. To learn interpretable rules, we use an Inductive Logic Programming (ILP) method. We develop an original strategy to cope with the dimensionality issues caused by using this ILP technique on a rich multisource language. The results show that our method greatly improves the feasibility and the efficiency of the process while staying accurate. They also confirm the benefits of using multiple sources to improve the diagnosis of cardiac arrhythmias.<|reference_end|> | arxiv | @article{cordier2009learning,
title={Learning rules from multisource data for cardiac monitoring},
author={Marie-Odile Cordier (INRIA - Irisa), Elisa Fromont (LAHC), Ren\'e
Quiniou (INRIA - Irisa)},
journal={arXiv preprint arXiv:0902.3373},
year={2009},
archivePrefix={arXiv},
eprint={0902.3373},
primaryClass={cs.LG}
} | cordier2009learning |
arxiv-6472 | 0902.3430 | Domain Adaptation: Learning Bounds and Algorithms | <|reference_start|>Domain Adaptation: Learning Bounds and Algorithms: This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.<|reference_end|> | arxiv | @article{mansour2009domain,
title={Domain Adaptation: Learning Bounds and Algorithms},
author={Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh},
journal={arXiv preprint arXiv:0902.3430},
year={2009},
archivePrefix={arXiv},
eprint={0902.3430},
primaryClass={cs.LG cs.AI}
} | mansour2009domain |
arxiv-6473 | 0902.3485 | Pricing strategies for viral marketing on Social Networks | <|reference_start|>Pricing strategies for viral marketing on Social Networks: We study the use of viral marketing strategies on social networks to maximize revenue from the sale of a single product. We propose a model in which the decision of a buyer to buy the product is influenced by friends that own the product and the price at which the product is offered. The influence model we analyze is quite general, naturally extending both the Linear Threshold model and the Independent Cascade model, while also incorporating price information. We consider sales proceeding in a cascading manner through the network, i.e. a buyer is offered the product via recommendations from its neighbors who own the product. In this setting, the seller influences events by offering a cashback to recommenders and by setting prices (via coupons or discounts) for each buyer in the social network. Finding a seller strategy which maximizes the expected revenue in this setting turns out to be NP-hard. However, we propose a seller strategy that generates revenue guaranteed to be within a constant factor of the optimal strategy in a wide variety of models. The strategy is based on an influence-and-exploit idea, and it consists of finding the right trade-off at each time step between: generating revenue from the current user versus offering the product for free and using the influence generated from this sale later in the process. We also show how local search can be used to improve the performance of this technique in practice.<|reference_end|> | arxiv | @article{arthur2009pricing,
title={Pricing strategies for viral marketing on Social Networks},
author={David Arthur, Rajeev Motwani, Aneesh Sharma, Ying Xu},
journal={arXiv preprint arXiv:0902.3485},
year={2009},
archivePrefix={arXiv},
eprint={0902.3485},
primaryClass={cs.DS cs.CY}
} | arthur2009pricing |
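The influence-and-exploit strategy described in this abstract can be illustrated with a toy simulation. Everything below (the function name, the single posted price, and the linear valuation `weight` times the number of product-owning neighbors) is an illustrative assumption, not the paper's actual model or its approximation guarantee:

```python
def influence_and_exploit(neighbors, seeds, price, weight):
    # Toy influence-and-exploit sweep: seed nodes receive the product for
    # free ("influence"), then sales cascade at a fixed posted price
    # ("exploit").  A node buys iff its valuation -- weight times the
    # number of neighbors that already own the product -- meets the price.
    owners = set(seeds)
    revenue = 0.0
    changed = True
    while changed:  # propagate recommendations until no new sale occurs
        changed = False
        for v in neighbors:
            if v not in owners:
                value = weight * sum(u in owners for u in neighbors[v])
                if value >= price:
                    owners.add(v)
                    revenue += price
                    changed = True
    return owners, revenue
```

On a path a-b-c-d with a as the free seed, unit price and unit weight, the sale cascades down the whole path and yields revenue 3; lowering the price would speed adoption at the cost of per-sale revenue, which is exactly the trade-off the abstract describes.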
arxiv-6474 | 0902.3503 | Generalised sequential crossover of words and languages | <|reference_start|>Generalised sequential crossover of words and languages: In this paper, we propose a new operation, Generalised Sequential Crossover (GSCO) of words, which is in some sense an abstract model of the crossing over of chromosomes in living organisms. We extend GSCO iteratively over a language $L$ ($GSCO^*(L)$), as well as iterated GSCO over two languages ($GSCO^*(L_1,L_2)$). Our study reveals that $GSCO^*(L)$ is a subclass of regular languages for any $L$. We compare the different classes of GSCO languages with the prominent sub-regular classes.<|reference_end|> | arxiv | @article{jeganathan2009generalised,
title={Generalised sequential crossover of words and languages},
author={L Jeganathan, R Rama, Ritabrata Sengupta},
journal={arXiv preprint arXiv:0902.3503},
year={2009},
archivePrefix={arXiv},
eprint={0902.3503},
primaryClass={cs.DM}
} | jeganathan2009generalised |
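The abstract does not reproduce the paper's formal definition of GSCO; as rough intuition for how crossover acts on words, here is the classic single-point crossover of two words at chosen cut positions (an illustrative stand-in for the chromosome-crossing analogy, not the GSCO operation itself):

```python
def single_point_crossover(u, v, i, j):
    # Exchange the suffixes of words u and v at cut positions i and j,
    # mimicking how two chromosomes swap segments during crossing over.
    return u[:i] + v[j:], v[:j] + u[i:]
```

For example, crossing "abcd" and "xyz" at positions 2 and 1 yields the offspring "abyz" and "xcd".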
arxiv-6475 | 0902.3513 | A Systematic Approach to Artificial Agents | <|reference_start|>A Systematic Approach to Artificial Agents: Agents and agent systems are becoming more and more important in the development of a variety of fields such as ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems and intelligent robotics. The need to improve our basic knowledge of agents is essential. We take a systematic approach and present an extended classification of artificial agents, which can be useful for understanding what artificial agents are and what they can be in the future. The aim of this classification is to give us insight into what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution.<|reference_end|> | arxiv | @article{burgin2009a,
title={A Systematic Approach to Artificial Agents},
author={Mark Burgin and Gordana Dodig-Crnkovic},
journal={arXiv preprint arXiv:0902.3513},
year={2009},
archivePrefix={arXiv},
eprint={0902.3513},
primaryClass={cs.AI cs.MA}
} | burgin2009a |
arxiv-6476 | 0902.3517 | Energy-Efficient Shortest Path Algorithms for Convergecast in Sensor Networks | <|reference_start|>Energy-Efficient Shortest Path Algorithms for Convergecast in Sensor Networks: We introduce a variant of the capacitated vehicle routing problem that is encountered in sensor networks for scientific data collection. Consider an undirected graph $G=(V \cup \{\mathbf{sink}\},E)$. Each vertex $v \in V$ holds a constant-sized reading normalized to 1 byte that needs to be communicated to the $\mathbf{sink}$. The communication protocol is defined such that readings travel in packets. The packets have a capacity of $k$ bytes. We define a {\em packet hop} to be the communication of a packet from a vertex to its neighbor. Each packet hop drains one unit of energy and therefore, we need to communicate the readings to the $\mathbf{sink}$ with the fewest number of hops. We show this problem to be NP-hard and counter it with a simple distributed $(2-\frac{3}{2k})$-approximation algorithm called {\tt SPT} that uses the shortest path tree rooted at the $\mathbf{sink}$. We also show that {\tt SPT} is absolutely optimal when $G$ is a tree and asymptotically optimal when $G$ is a grid. Furthermore, {\tt SPT} has two nice properties. Firstly, the readings always travel along a shortest path toward the $\mathbf{sink}$, which makes it an appealing solution to the convergecast problem as it fits the natural intuition. Secondly, each node employs a very elementary packing strategy. Given all the readings that enter into the node, it sends out as many fully packed packets as possible followed by at most 1 partial packet. We show that any solution that has either one of the two properties cannot be a $(2-\epsilon)$-approximation, for any fixed $\epsilon > 0$. This makes {\tt SPT} optimal for the class of algorithms that obey either one of those properties.<|reference_end|> | arxiv | @article{augustine2009energy-efficient,
title={Energy-Efficient Shortest Path Algorithms for Convergecast in Sensor
Networks},
author={John Augustine, Qi Han, Philip Loden, Sachin Lodha and Sasanka Roy},
journal={arXiv preprint arXiv:0902.3517},
year={2009},
archivePrefix={arXiv},
eprint={0902.3517},
primaryClass={cs.DS cs.DC cs.DM}
} | augustine2009energy-efficient |
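On a tree, the elementary packing strategy described above is easy to account for: every reading in the subtree below a vertex must cross the edge from that vertex toward the sink, and a node that forwards as many full packets as possible plus at most one partial packet sends ceil(s/k) packets over that edge when its subtree holds s readings. A minimal sketch of this hop count (the function name and adjacency-dict representation are assumptions for illustration):

```python
import math

def spt_hops(tree, sink, k):
    # Total packet hops of the full-packets-plus-one-partial strategy on a
    # tree rooted at the sink, where every non-sink vertex holds 1 reading
    # and packets have a capacity of k bytes.
    def visit(v, parent):
        readings = 0 if v == sink else 1
        hops = 0
        for u in tree[v]:
            if u != parent:
                r, h = visit(u, v)
                readings += r
                hops += h
        if v != sink:
            # ceil(readings / k) packets cross the edge from v to its parent
            hops += math.ceil(readings / k)
        return readings, hops

    return visit(sink, None)[1]
```

On the path sink-a-b-c with k = 2, the three edges carry 2, 1 and 1 packets respectively, for 4 hops in total.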
arxiv-6477 | 0902.3526 | Online Multi-task Learning with Hard Constraints | <|reference_start|>Online Multi-task Learning with Hard Constraints: We discuss multi-task online learning when a decision maker has to deal simultaneously with M tasks. The tasks are related, which is modeled by imposing that the M-tuple of actions taken by the decision maker needs to satisfy certain constraints. We give natural examples of such restrictions and then discuss a general class of tractable constraints, for which we introduce computationally efficient ways of selecting actions, essentially by reducing to an on-line shortest path problem. We briefly discuss "tracking" and "bandit" versions of the problem and extend the model in various ways, including non-additive global losses and uncountably infinite sets of tasks.<|reference_end|> | arxiv | @article{lugosi2009online,
title={Online Multi-task Learning with Hard Constraints},
author={Gabor Lugosi, Omiros Papaspiliopoulos, Gilles Stoltz (DMA, GREGH)},
journal={arXiv preprint arXiv:0902.3526},
year={2009},
archivePrefix={arXiv},
eprint={0902.3526},
primaryClass={stat.ML cs.LG math.ST stat.TH}
} | lugosi2009online |
arxiv-6478 | 0902.3528 | A Superstabilizing $\log(n)$-Approximation Algorithm for Dynamic Steiner Trees | <|reference_start|>A Superstabilizing $\log(n)$-Approximation Algorithm for Dynamic Steiner Trees: In this paper we design and prove correct a fully dynamic distributed algorithm for maintaining an approximate Steiner tree that connects via a minimum-weight spanning tree a subset of nodes of a network (referred to as Steiner members or the Steiner group). Steiner trees are good candidates to efficiently implement communication primitives such as publish/subscribe or multicast, essential building blocks for the new emergent networks (e.g. P2P, sensor or ad hoc networks). The cost of the solution returned by our algorithm is at most $\log |S|$ times the cost of an optimal solution, where $S$ is the group of members. Our algorithm improves over existing solutions in several ways. First, it tolerates the dynamism of both the group members and the network. Next, our algorithm is self-stabilizing, that is, it copes with node memory corruption. Last but not least, our algorithm is \emph{superstabilizing}. That is, while converging to a correct configuration (i.e., a Steiner tree) after a modification of the network, it keeps offering the Steiner tree service during the stabilization time to all members that have not been affected by this modification.<|reference_end|> | arxiv | @article{blin2009a,
title={A Superstabilizing $\log(n)$-Approximation Algorithm for Dynamic Steiner
Trees},
author={L\'elia Blin (IBISC), Maria Gradinariu Potop-Butucaru (LIP6), Stephane
Rovedakis (IBISC)},
journal={arXiv preprint arXiv:0902.3528},
year={2009},
doi={10.1007/978-3-642-05118-0_10},
archivePrefix={arXiv},
eprint={0902.3528},
primaryClass={cs.DC cs.DS cs.NI}
} | blin2009a |
arxiv-6479 | 0902.3532 | Relational Lattice Foundation for Algebraic Logic | <|reference_start|>Relational Lattice Foundation for Algebraic Logic: Relational Lattice is a succinct mathematical model for Relational Algebra. It reduces the set of six classic relational algebra operators to two: natural join and inner union. In this paper we push relational lattice theory in two directions. First, we uncover a pair of complementary lattice operators, and organize the model into a bilattice of four operations and four distinguished constants. We take notice of a peculiar way in which bilattice symmetry is broken. Then, we give an axiomatic introduction of a unary negation operation and prove several laws, including double negation and De Morgan. Next we reduce the model back to two basic binary operations and twelve axioms, and exhibit a convincing argument that the resulting system is complete in the model-theoretic sense. The final parts of the paper cast a relational lattice perspective onto database dependency theory and cylindric algebras.<|reference_end|> | arxiv | @article{tropashko2009relational,
title={Relational Lattice Foundation for Algebraic Logic},
author={Vadim Tropashko},
journal={arXiv preprint arXiv:0902.3532},
year={2009},
archivePrefix={arXiv},
eprint={0902.3532},
primaryClass={cs.DB cs.LO}
} | tropashko2009relational |
arxiv-6480 | 0902.3541 | System approach to synthesis, modeling and control of complex dynamical systems | <|reference_start|>System approach to synthesis, modeling and control of complex dynamical systems: We consider the basic features of complex dynamical and control systems. Special attention is paid to the problems of synthesis of dynamical models of complex systems, to the construction of efficient control models, and to the development of simulation techniques. We propose an approach to the synthesis of dynamic models of complex systems that integrates expert knowledge with the process of modeling. A set-theoretic model of a complex system is defined and briefly analyzed. A mathematical model of a complex dynamical system with control, based on an aggregate description, is also proposed. The structure of the model is described, the architecture of a computer simulation system is presented, and requirements for and components of computer simulation systems are analyzed.<|reference_end|> | arxiv | @article{bagdasaryan2009system,
title={System approach to synthesis, modeling and control of complex dynamical
systems},
author={Armen Bagdasaryan},
journal={WSEAS Trans. Systems and Control, vol. 4, no. 2, 2009, pp. 77-87},
year={2009},
archivePrefix={arXiv},
eprint={0902.3541},
primaryClass={cs.CE}
} | bagdasaryan2009system |
arxiv-6481 | 0902.3548 | Model of Wikipedia growth based on information exchange via reciprocal arcs | <|reference_start|>Model of Wikipedia growth based on information exchange via reciprocal arcs: We show how reciprocal arcs significantly influence the structural organization of Wikipedias, online encyclopedias. It is shown that random addition of reciprocal arcs in the static network cannot explain the observed reciprocity of Wikipedias. A model of Wikipedia growth based on preferential attachment and on information exchange via reciprocal arcs is presented. An excellent agreement between in-degree distributions of our model and real Wikipedia networks is achieved without fitting the distributions, but by merely extracting a small number of model parameters from the measurement of real networks.<|reference_end|> | arxiv | @article{zlatić2009model,
title={Model of Wikipedia growth based on information exchange via reciprocal
arcs},
author={Vinko Zlati\'c, Hrvoje \v{S}tefan\v{c}i\'c},
journal={EPL 93 (2011) 58005},
year={2009},
doi={10.1209/0295-5075/93/58005},
archivePrefix={arXiv},
eprint={0902.3548},
primaryClass={physics.soc-ph cond-mat.stat-mech cs.CY}
} | zlatić2009model |
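A toy rendition of a growth process of this flavor, in which each new node attaches preferentially by in-degree and the chosen target reciprocates with probability r (the parameterization, the initial configuration, and the degree smoothing are assumptions for illustration; they are not the paper's exact model):

```python
import random

def grow(n, r, seed=0):
    # Grow a directed network: each new node v sends one arc to an existing
    # node chosen proportionally to (smoothed) in-degree; with probability r
    # the target immediately answers with the reciprocal arc, modeling the
    # information exchange that creates reciprocity.
    rng = random.Random(seed)
    indeg = [1, 1]   # two initial nodes; +1 smoothing avoids zero weights
    arcs = [(1, 0)]  # node 1 initially points at node 0
    for v in range(2, n):
        t = rng.choices(range(v), weights=indeg[:v])[0]  # preferential attachment
        arcs.append((v, t))
        indeg.append(1)
        indeg[t] += 1
        if rng.random() < r:
            arcs.append((t, v))  # reciprocal arc back to the newcomer
            indeg[v] += 1
    return arcs
```

With r = 0 this collapses to plain directed preferential attachment; raising r increases the fraction of reciprocal arc pairs, which is the structural feature the abstract argues random arc addition cannot reproduce.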
arxiv-6482 | 0902.3549 | Deaf, Dumb, and Chatting Robots, Enabling Distributed Computation and Fault-Tolerance Among Stigmergic Robot | <|reference_start|>Deaf, Dumb, and Chatting Robots, Enabling Distributed Computation and Fault-Tolerance Among Stigmergic Robot: We investigate ways for the exchange of information (explicit communication) among deaf and dumb mobile robots scattered in the plane. We introduce the use of movement-signals (analogously to flight signals and the bees' waggle dance) as a means to transfer messages, enabling the use of distributed algorithms among the robots. We propose one-to-one deterministic movement protocols that implement explicit communication. We first present protocols for synchronous robots. We begin with a very simple coding protocol for two robots. Based on this protocol, we provide one-to-one communication for any system of n \geq 2 robots equipped with observable IDs that agree on a common direction (sense of direction). We then propose two solutions enabling one-to-one communication among anonymous robots. Since the robots are devoid of observable IDs, both protocols build recognition mechanisms using the (weak) capabilities offered to the robots. The first protocol assumes that the robots agree on a common direction and a common handedness (chirality), while the second protocol assumes chirality only. Next, we show how the movements of robots can provide implicit acknowledgments in asynchronous systems. We use this result to design asynchronous one-to-one communication with two robots only. Finally, we combine this solution with the schemes developed in synchronous settings to fit the general case of asynchronous one-to-one communication among any number of robots. Our protocols enable the use of distributed algorithms based on message exchanges among swarms of stigmergic robots. Furthermore, they equip robots with means of communication to overcome faults of their communication devices.<|reference_end|> | arxiv | @article{dieudonné2009deaf,
title={Deaf, Dumb, and Chatting Robots, Enabling Distributed Computation and
Fault-Tolerance Among Stigmergic Robot},
author={Yoann Dieudonn\'e (LaRIA, MIS), Shlomi Dolev, Franck Petit (LaRIA,
LIP, INRIA Rh\^one-Alpes / LIP Laboratoire de l'Informatique du
Parall\'elisme), Michael Segal},
journal={arXiv preprint arXiv:0902.3549},
year={2009},
archivePrefix={arXiv},
eprint={0902.3549},
primaryClass={cs.MA nlin.AO}
} | dieudonné2009deaf
arxiv-6483 | 0902.3593 | Performance of MMSE MIMO Receivers: A Large N Analysis for Correlated Channels | <|reference_start|>Performance of MMSE MIMO Receivers: A Large N Analysis for Correlated Channels: Linear receivers are considered as an attractive low-complexity alternative to optimal processing for multi-antenna MIMO communications. In this paper we characterize the performance of MMSE MIMO receivers in the limit of large antenna numbers in the presence of channel correlations. Using the replica method, we generalize our results obtained in arXiv:0810.0883 to Kronecker-product correlated channels and calculate the asymptotic mean and variance of the mutual information of a MIMO system of parallel MMSE subchannels. The replica method allows us to use the ties between the optimal receiver mutual information and the MMSE SIR of Gaussian inputs to calculate the joint moments of the SIRs of the MMSE subchannels. Using the methodology discussed in arXiv:0810.0883 it can be shown that the mutual information converges in distribution to a Gaussian random variable. Our results agree very well with simulations even with a moderate number of antennas.<|reference_end|> | arxiv | @article{moustakas2009performance,
title={Performance of MMSE MIMO Receivers: A Large N Analysis for Correlated
Channels},
author={Aris L. Moustakas, K Raj Kumar and Giuseppe Caire},
journal={arXiv preprint arXiv:0902.3593},
year={2009},
doi={10.1109/VETECS.2009.5073796},
archivePrefix={arXiv},
eprint={0902.3593},
primaryClass={cs.IT math.IT}
} | moustakas2009performance |
arxiv-6484 | 0902.3595 | On Optimum End-to-End Distortion in MIMO Systems | <|reference_start|>On Optimum End-to-End Distortion in MIMO Systems: This paper presents the joint impact of the number of antennas, the source-to-channel bandwidth ratio and spatial correlation on the optimum expected end-to-end distortion in an outage-free MIMO system. In particular, based on an analytical expression valid for any SNR, a closed-form expression of the optimum asymptotic expected end-to-end distortion valid for high SNR is derived. It comprises the optimum distortion exponent and the multiplicative optimum distortion factor. As demonstrated by the simulation results, the analysis of the joint impact of the optimum distortion exponent and the optimum distortion factor explains the behavior of the optimum expected end-to-end distortion as it varies with the number of antennas, the source-to-channel bandwidth ratio and spatial correlation. It is also proved that as the correlation tends to zero, the optimum asymptotic expected end-to-end distortion in the correlated-channel setting approaches that in the uncorrelated-channel setting. The results in this paper could serve as performance objectives for analog-source transmission systems. To some extent, they are instructive for system design.<|reference_end|> | arxiv | @article{chen2009on,
title={On Optimum End-to-End Distortion in MIMO Systems},
author={Jinhui Chen, Dirk T. M. Slock},
journal={arXiv preprint arXiv:0902.3595},
year={2009},
archivePrefix={arXiv},
eprint={0902.3595},
primaryClass={cs.IT math.IT}
} | chen2009on |
arxiv-6485 | 0902.3614 | Syntactic Confluence Criteria for Positive/Negative-Conditional Term Rewriting Systems | <|reference_start|>Syntactic Confluence Criteria for Positive/Negative-Conditional Term Rewriting Systems: We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: Our syntactical separation into constructor and non-constructor symbols, Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems, the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems, the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs, and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied for the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: We strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactical separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not criteria for shallow confluence actually and also able to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper may also give a practically useful overview of the syntactical means for showing confluence of conditional term rewriting systems.<|reference_end|> | arxiv | @article{wirth2009syntactic,
title={Syntactic Confluence Criteria for Positive/Negative-Conditional Term
Rewriting Systems},
author={Claus-Peter Wirth},
journal={J. Symbolic Computation, 2009, 44:60--98},
year={2009},
number={SEKI Report SR-95-09},
archivePrefix={arXiv},
eprint={0902.3614},
primaryClass={cs.AI cs.LO}
} | wirth2009syntactic |
arxiv-6486 | 0902.3616 | Algorithmic Meta-Theorems | <|reference_start|>Algorithmic Meta-Theorems: Algorithmic meta-theorems are general algorithmic results applying to a whole range of problems, rather than just to a single problem alone. They often have a "logical" and a "structural" component, that is they are results of the form: every computational problem that can be formalised in a given logic L can be solved efficiently on every class C of structures satisfying certain conditions. This paper gives a survey of algorithmic meta-theorems obtained in recent years and the methods used to prove them. As many meta-theorems use results from graph minor theory, we give a brief introduction to the theory developed by Robertson and Seymour for their proof of the graph minor theorem and state the main algorithmic consequences of this theory as far as they are needed in the theory of algorithmic meta-theorems.<|reference_end|> | arxiv | @article{kreutzer2009algorithmic,
title={Algorithmic Meta-Theorems},
author={Stephan Kreutzer},
journal={arXiv preprint arXiv:0902.3616},
year={2009},
archivePrefix={arXiv},
eprint={0902.3616},
primaryClass={cs.LO}
} | kreutzer2009algorithmic |
arxiv-6487 | 0902.3623 | A Self-Contained and Easily Accessible Discussion of the Method of Descente Infinie and Fermat's Only Explicitly Known Proof by Descente Infinie | <|reference_start|>A Self-Contained and Easily Accessible Discussion of the Method of Descente Infinie and Fermat's Only Explicitly Known Proof by Descente Infinie: We present the only proof of Pierre Fermat by descente infinie that is known to exist today. As the text of its Latin original requires active mathematical interpretation, it is more a proof sketch than a proper mathematical proof. We discuss descente infinie from the mathematical, logical, historical, linguistic, and refined logic-historical points of view. We provide the required preliminaries from number theory and develop a self-contained proof in a modern form, which nevertheless is intended to follow Fermat's ideas closely. We then annotate an English translation of Fermat's original proof with terms from the modern proof. Including all important facts, we present a concise and self-contained discussion of Fermat's proof sketch, which is easily accessible to laymen in number theory as well as to laymen in the history of mathematics, and which provides new clarification of the Method of Descente Infinie to the experts in these fields. Last but not least, this paper fills a gap regarding the easy accessibility of the subject.<|reference_end|> | arxiv | @article{wirth2009a,
title={A Self-Contained and Easily Accessible Discussion of the Method of
Descente Infinie and Fermat's Only Explicitly Known Proof by Descente Infinie},
author={Claus-Peter Wirth},
journal={arXiv preprint arXiv:0902.3623},
year={2009},
number={SEKI Working-Paper SWP-2006-02, Second edition},
archivePrefix={arXiv},
eprint={0902.3623},
primaryClass={cs.AI cs.LO}
} | wirth2009a |
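For readers who just want the proof principle at hand, descente infinie can be stated as the following standard schema (a textbook formulation, not a quotation from the paper): a property P holds of all natural numbers if every counterexample would yield a strictly smaller counterexample, because the natural numbers admit no infinite strictly descending sequence.

```latex
\[
  \Bigl( \forall n \in \mathbb{N}.\;
    \bigl( \lnot P(n) \;\Rightarrow\;
      \exists m \in \mathbb{N}.\; m < n \,\wedge\, \lnot P(m) \bigr) \Bigr)
  \;\Longrightarrow\;
  \forall n \in \mathbb{N}.\; P(n)
\]
```

The same schema works over any well-founded ordering in place of $<$ on $\mathbb{N}$, which is what makes the method usable beyond elementary number theory.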
arxiv-6488 | 0902.3631 | Distributed Agreement in Tile Self-Assembly | <|reference_start|>Distributed Agreement in Tile Self-Assembly: Laboratory investigations have shown that a formal theory of fault-tolerance will be essential to harness nanoscale self-assembly as a medium of computation. Several researchers have voiced an intuition that self-assembly phenomena are related to the field of distributed computing. This paper formalizes some of that intuition. We construct tile assembly systems that are able to simulate the solution of the wait-free consensus problem in some distributed systems. (For potential future work, this may allow binding errors in tile assembly to be analyzed, and managed, with positive results in distributed computing, as a "blockage" in our tile assembly model is analogous to a crash failure in a distributed computing model.) We also define a strengthening of the "traditional" consensus problem, to make explicit an expectation about consensus algorithms that is often implicit in distributed computing literature. We show that solution of this strengthened consensus problem can be simulated by a two-dimensional tile assembly model only for two processes, whereas a three-dimensional tile assembly model can simulate its solution in a distributed system with any number of processes.<|reference_end|> | arxiv | @article{sterling2009distributed,
title={Distributed Agreement in Tile Self-Assembly},
author={Aaron Sterling},
journal={arXiv preprint arXiv:0902.3631},
year={2009},
archivePrefix={arXiv},
eprint={0902.3631},
primaryClass={cs.DC cs.NE}
} | sterling2009distributed |
arxiv-6489 | 0902.3635 | lim+, delta+, and Non-Permutability of beta-Steps | <|reference_start|>lim+, delta+, and Non-Permutability of beta-Steps: Using a human-oriented formal example proof of the (lim+) theorem, i.e. that the sum of limits is the limit of the sum, which is of value for reference on its own, we exhibit a non-permutability of beta-steps and delta+-steps (according to Smullyan's classification), which is not visible with non-liberalized delta-rules and not serious with further liberalized delta-rules, such as the delta++-rule. Besides a careful presentation of the search for a proof of (lim+) with several pedagogical intentions, the main subject is to explain why the order of beta-steps plays such a practically important role in some calculi.<|reference_end|> | arxiv | @article{wirth2009lim+,
title={lim+, delta+, and Non-Permutability of beta-Steps},
author={Claus-Peter Wirth},
journal={Journal of Symbolic Computation, 2012, Volume 47, Pp. 1109-1135},
year={2009},
doi={10.1016/j.jsc.2011.12.035},
number={SEKI Report SR-2005-01},
archivePrefix={arXiv},
eprint={0902.3635},
primaryClass={cs.AI cs.LO}
} | wirth2009lim+, |
arxiv-6490 | 0902.3648 | An Algebraic Dexter-Based Hypertext Reference Model | <|reference_start|>An Algebraic Dexter-Based Hypertext Reference Model: We present the first formal algebraic specification of a hypertext reference model. It is based on the well-known Dexter Hypertext Reference Model and includes modifications with respect to the development of hypertext since the WWW came up. Our hypertext model was developed as a product model with the aim to automatically support the design process and is extended to a model of hypertext-systems in order to be able to describe the state transitions in this process. While the specification should be easy to read for non-experts in algebraic specification, it guarantees a unique understanding and enables a close connection to logic-based development and verification.<|reference_end|> | arxiv | @article{mattick2009an,
title={An Algebraic Dexter-Based Hypertext Reference Model},
author={Volker Mattick, Claus-Peter Wirth},
journal={arXiv preprint arXiv:0902.3648},
year={2009},
number={Research Report 719/1999 (green/grey series), Fachbereich
Informatik, University of Dortmund},
archivePrefix={arXiv},
eprint={0902.3648},
primaryClass={cs.AI cs.LO}
} | mattick2009an |
arxiv-6491 | 0902.3722 | A minimalistic look at widening operators | <|reference_start|>A minimalistic look at widening operators: We consider the problem of formalizing the familiar notion of widening in abstract interpretation in higher-order logic. It turns out that many axioms of widening (e.g. widening sequences are ascending) are not useful for proving correctness. After keeping only useful axioms, we give an equivalent characterization of widening as a lazily constructed well-founded tree. In type systems supporting dependent products and sums, this tree can be made to reflect the condition of correct termination of the widening sequence.<|reference_end|> | arxiv | @article{monniaux2009a,
title={A minimalistic look at widening operators},
author={David Monniaux (VERIMAG - Imag)},
journal={arXiv preprint arXiv:0902.3722},
year={2009},
archivePrefix={arXiv},
eprint={0902.3722},
primaryClass={cs.LO cs.PL}
} | monniaux2009a |
arxiv-6492 | 0902.3725 | Statistical Inference of Functional Connectivity in Neuronal Networks using Frequent Episodes | <|reference_start|>Statistical Inference of Functional Connectivity in Neuronal Networks using Frequent Episodes: Identifying the spatio-temporal network structure of brain activity from multi-neuronal data streams is one of the biggest challenges in neuroscience. Repeating patterns of precisely timed activity across a group of neurons is potentially indicative of a microcircuit in the underlying neural tissue. Frequent episode discovery, a temporal data mining framework, has recently been shown to be a computationally efficient method of counting the occurrences of such patterns. In this paper, we propose a framework to determine when the counts are statistically significant by modeling the counting process. Our model allows direct estimation of the strengths of functional connections between neurons with improved resolution over previously published methods. It can also be used to rank the patterns discovered in a network of neurons according to their strengths and begin to reconstruct the graph structure of the network that produced the spike data. We validate our methods on simulated data and present analysis of patterns discovered in data from cultures of cortical neurons.<|reference_end|> | arxiv | @article{diekman2009statistical,
title={Statistical Inference of Functional Connectivity in Neuronal Networks
using Frequent Episodes},
author={Casey Diekman, Kohinoor Dasgupta, Vijay Nair, P.S. Sastry, K.P.
Unnikrishnan},
journal={arXiv preprint arXiv:0902.3725},
year={2009},
archivePrefix={arXiv},
eprint={0902.3725},
primaryClass={q-bio.NC cond-mat.dis-nn cs.DB q-bio.QM stat.ME}
} | diekman2009statistical |
arxiv-6493 | 0902.3730 | Full First-Order Sequent and Tableau Calculi With Preservation of Solutions and the Liberalized delta-Rule but Without Skolemization | <|reference_start|>Full First-Order Sequent and Tableau Calculi With Preservation of Solutions and the Liberalized delta-Rule but Without Skolemization: We present a combination of raising, explicit variable dependency representation, the liberalized delta-rule, and preservation of solutions for first-order deductive theorem proving. Our main motivation is to provide the foundation for our work on inductive theorem proving, where the preservation of solutions is indispensable.<|reference_end|> | arxiv | @article{wirth2009full,
title={Full First-Order Sequent and Tableau Calculi With Preservation of
Solutions and the Liberalized delta-Rule but Without Skolemization},
author={Claus-Peter Wirth},
journal={Caferra, R. and Salzer, G., eds., Automated Deduction in Classical
and Non-Classical Logics (FTP'98), LNAI 1761, pp. 283-298, Springer, 2000},
year={2009},
number={Research Report 698/1998 (green/grey series), Fachbereich
Informatik, University of Dortmund},
archivePrefix={arXiv},
eprint={0902.3730},
primaryClass={cs.AI cs.LO}
} | wirth2009full |
arxiv-6494 | 0902.3749 | Hilbert's epsilon as an Operator of Indefinite Committed Choice | <|reference_start|>Hilbert's epsilon as an Operator of Indefinite Committed Choice: Paul Bernays and David Hilbert carefully avoided overspecification of Hilbert's epsilon-operator and axiomatized only what was relevant for their proof-theoretic investigations. Semantically, this left the epsilon-operator underspecified. In the meanwhile, there have been several suggestions for semantics of the epsilon as a choice operator. After reviewing the literature on semantics of Hilbert's epsilon operator, we propose a new semantics with the following features: We avoid overspecification (such as right-uniqueness), but admit indefinite choice, committed choice, and classical logics. Moreover, our semantics for the epsilon supports proof search optimally and is natural in the sense that it does not only mirror some cases of referential interpretation of indefinite articles in natural language, but may also contribute to philosophy of language. Finally, we ask the question whether our epsilon within our free-variable framework can serve as a paradigm useful in the specification and computation of semantics of discourses in natural language.<|reference_end|> | arxiv | @article{wirth2009hilbert's,
title={Hilbert's epsilon as an Operator of Indefinite Committed Choice},
author={Claus-Peter Wirth},
journal={Journal of Applied Logic 6 (2008), pp. 287-317},
year={2009},
doi={10.1016/j.jal.2007.07.009},
number={SEKI Report SR-2006-02},
archivePrefix={arXiv},
eprint={0902.3749},
primaryClass={cs.AI cs.LO}
} | wirth2009hilbert's |
arxiv-6495 | 0902.3757 | Bounded Independence Fools Halfspaces | <|reference_start|>Bounded Independence Fools Halfspaces: We show that any distribution on {-1,1}^n that is k-wise independent fools any halfspace h with error \eps for k = O(\log^2(1/\eps) /\eps^2). Up to logarithmic factors, our result matches a lower bound by Benjamini, Gurel-Gurevich, and Peled (2007) showing that k = \Omega(1/(\eps^2 \cdot \log(1/\eps))). Using standard constructions of k-wise independent distributions, we obtain the first explicit pseudorandom generators G: {-1,1}^s --> {-1,1}^n that fool halfspaces. Specifically, we fool halfspaces with error \eps and seed length s = k \log n = O(\log n \cdot \log^2(1/\eps) /\eps^2). Our approach combines classical tools from real approximation theory with structural results on halfspaces by Servedio (Computational Complexity 2007).<|reference_end|> | arxiv | @article{diakonikolas2009bounded,
title={Bounded Independence Fools Halfspaces},
author={Ilias Diakonikolas, Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio,
Emanuele Viola},
journal={arXiv preprint arXiv:0902.3757},
year={2009},
archivePrefix={arXiv},
eprint={0902.3757},
primaryClass={cs.CC}
} | diakonikolas2009bounded |
arxiv-6496 | 0902.3780 | Treewidth reduction for constrained separation and bipartization problems | <|reference_start|>Treewidth reduction for constrained separation and bipartization problems: We present a method for reducing the treewidth of a graph while preserving all the minimal $s-t$ separators. This technique turns out to be very useful for establishing the fixed-parameter tractability of constrained separation and bipartization problems. To demonstrate the power of this technique, we prove the fixed-parameter tractability of a number of well-known separation and bipartization problems with various additional restrictions (e.g., the vertices being removed from the graph form an independent set). These results answer a number of open questions in the area of parameterized complexity.<|reference_end|> | arxiv | @article{marx2009treewidth,
title={Treewidth reduction for constrained separation and bipartization
problems},
author={D\'aniel Marx (1), Barry O'Sullivan (2), Igor Razgon (2) ((1) Budapest
University of Technology and Economics, (2) Cork Constraint Computation
Centre, University College Cork)},
journal={arXiv preprint arXiv:0902.3780},
year={2009},
archivePrefix={arXiv},
eprint={0902.3780},
primaryClass={cs.DS cs.DM}
} | marx2009treewidth |
arxiv-6497 | 0902.3788 | Communities in Networks | <|reference_start|>Communities in Networks: We survey some of the concepts, methods, and applications of community detection, which has become an increasingly important area of network science. To help ease newcomers into the field, we provide a guide to available methodology and open problems, and discuss why scientists from diverse backgrounds are interested in these problems. As a running theme, we emphasize the connections of community detection to problems in statistical physics and computational optimization.<|reference_end|> | arxiv | @article{porter2009communities,
title={Communities in Networks},
author={Mason A. Porter, Jukka-Pekka Onnela, and Peter J. Mucha},
journal={Notices of the American Mathematical Society, Vol. 56, No. 9:
1082-1097, 1164-1166, 2009},
year={2009},
archivePrefix={arXiv},
eprint={0902.3788},
primaryClass={physics.soc-ph cond-mat.stat-mech cs.CY cs.DM math.ST nlin.AO physics.comp-ph stat.TH}
} | porter2009communities |
arxiv-6498 | 0902.3818 | Application of Generalised sequential crossover of languages to generalised splicing | <|reference_start|>Application of Generalised sequential crossover of languages to generalised splicing: This paper outlines an application of iterated version of generalised sequential crossover of two languages (which in some sense, an abstraction of the crossover of chromosomes in living organisms) in studying some classes of the newly proposed generalised splicing ($GS$) over two languages. It is proved that, for $X,Y \in \{FIN, REG, LIN, CF, CS, RE \}, \sg \in FIN$, the subclass of generalized splicing languages namely $GS(X,Y,\sg)$, (which is a subclass of the class $GS(X,Y,FIN)$) is always regular.<|reference_end|> | arxiv | @article{jeganathan2009application,
title={Application of Generalised sequential crossover of languages to
generalised splicing},
author={L. Jeganathan, R. Rama, Ritabrata Sengupta},
journal={arXiv preprint arXiv:0902.3818},
year={2009},
archivePrefix={arXiv},
eprint={0902.3818},
primaryClass={cs.DM}
} | jeganathan2009application |
arxiv-6499 | 0902.3846 | Uniqueness of Low-Rank Matrix Completion by Rigidity Theory | <|reference_start|>Uniqueness of Low-Rank Matrix Completion by Rigidity Theory: The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model with applications in collaborative filtering, computer vision and control. Most recent work had been focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries and proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem from both the mathematical and algorithmic point of view is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to an efficient randomized algorithm for testing both local and global unique completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix.<|reference_end|> | arxiv | @article{singer2009uniqueness,
title={Uniqueness of Low-Rank Matrix Completion by Rigidity Theory},
author={Amit Singer, Mihai Cucuringu},
journal={arXiv preprint arXiv:0902.3846},
year={2009},
archivePrefix={arXiv},
eprint={0902.3846},
primaryClass={cs.LG}
} | singer2009uniqueness |
arxiv-6500 | 0902.3858 | Why Would You Trust B? | <|reference_start|>Why Would You Trust B?: The use of formal methods provides confidence in the correctness of developments. Yet one may argue about the actual level of confidence obtained when the method itself -- or its implementation -- is not formally checked. We address this question for the B, a widely used formal method that allows for the derivation of correct programs from specifications. Through a deep embedding of the B logic in Coq, we check the B theory but also implement B tools. Both aspects are illustrated by the description of a proved prover for the B logic.<|reference_end|> | arxiv | @article{jaeger2009why,
title={Why Would You Trust B?},
author={Eric Jaeger (DCSSI/SDS/Lti, Lip6), Catherine Dubois (CEDRIC)},
journal={Logic for Programming, Artificial Intelligence, and Reasoning,
Yerevan, Armenia (2007)},
year={2009},
doi={10.1007/978-3-540-75560-9},
archivePrefix={arXiv},
eprint={0902.3858},
primaryClass={cs.LO}
} | jaeger2009why |