Dataset fields (type and observed value-length range):
corpus_id: string, lengths 7 to 12
paper_id: string, lengths 9 to 16
title: string, lengths 1 to 261
abstract: string, lengths 70 to 4.02k
source: string, 1 class
bibtex: string, lengths 208 to 20.9k
citation_key: string, lengths 6 to 100
arxiv-5401
0811.0819
Persistent Queries
<|reference_start|>Persistent Queries: We propose a syntax and semantics for interactive abstract state machines to deal with the following situation. A query is issued during a certain step, but the step ends before any reply is received. Later, a reply arrives, and later yet the algorithm makes use of this reply. By a persistent query, we mean a query for which a late reply might be used. Syntactically, our proposal involves issuing, along with a persistent query, a location where a late reply is to be stored. Semantically, it involves only a minor modification of the existing theory of interactive small-step abstract state machines.<|reference_end|>
arxiv
@article{blass2008persistent, title={Persistent Queries}, author={Andreas Blass (University of Michigan) and Yuri Gurevich (Microsoft Research)}, journal={arXiv preprint arXiv:0811.0819}, year={2008}, archivePrefix={arXiv}, eprint={0811.0819}, primaryClass={cs.PL cs.LO} }
blass2008persistent
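The abstract above hinges on one mechanism: a persistent query is issued together with a named location where a late reply is to be stored. The following is a minimal Python sketch of that idea under our own assumptions; it is an illustration, not the paper's ASM formalism, and all names (Environment, Machine, the "loc" slot) are invented.

    # Hypothetical illustration of persistent queries: a query is issued
    # together with a named location; a reply that arrives after the issuing
    # step has ended is written into that location and used in a later step.
    class Environment:
        def __init__(self):
            self.pending = []                    # (query, deliver-callback) pairs
        def issue(self, query, callback):
            self.pending.append((query, callback))
        def deliver(self, query, reply):         # may happen steps later
            for q, cb in list(self.pending):
                if q == query:
                    cb(reply)
                    self.pending.remove((q, cb))

    class Machine:
        def __init__(self, env):
            self.env = env
            self.state = {}                      # named locations of the state
        def step_issue(self):
            # Step 1 ends before any reply arrives; "loc" is reserved for it.
            self.state["loc"] = None
            self.env.issue("lookup(x)", lambda r: self.state.__setitem__("loc", r))
        def step_use(self):
            # A later step makes use of the late reply, if it has arrived.
            return self.state["loc"]

    env = Environment()
    m = Machine(env)
    m.step_issue()                               # query issued, step ends
    env.deliver("lookup(x)", 42)                 # reply arrives later
    print(m.step_use())                          # -> 42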
arxiv-5402
0811.0823
Distributed Constrained Optimization with Semicoordinate Transformations
<|reference_start|>Distributed Constrained Optimization with Semicoordinate Transformations: Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of ``semicoordinate'' variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for $k$-sat constraint satisfaction problems and for unconstrained minimization of $NK$ functions.<|reference_end|>
arxiv
@article{macready2008distributed, title={Distributed Constrained Optimization with Semicoordinate Transformations}, author={William Macready and David Wolpert}, journal={arXiv preprint arXiv:0811.0823}, year={2008}, archivePrefix={arXiv}, eprint={0811.0823}, primaryClass={cs.NE cs.AI} }
macready2008distributed
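The iterated-game setup in the abstract above (each agent controls one variable, and the product distribution is annealed toward the optimizer) can be sketched numerically. A minimal Python sketch under our own assumptions: a toy two-variable objective, Boltzmann-style updates of each agent's independent distribution against the other agent's current distribution, and a geometric temperature schedule standing in for the Lagrange-parameter updates.

    import itertools
    import numpy as np

    # Toy objective over two binary variables; each agent controls one of them.
    def G(x0, x1):
        return (x0 - 1) ** 2 + (x0 + x1 - 1) ** 2

    moves = [0, 1]
    q = [np.full(2, 0.5), np.full(2, 0.5)]   # independent per-agent distributions

    T = 2.0
    for _ in range(50):
        for i in range(2):
            # Expected objective for each of agent i's moves, under the
            # other agent's current distribution.
            exp_cost = np.zeros(2)
            for mi in moves:
                for mj in moves:
                    x = [mi, mj] if i == 0 else [mj, mi]
                    exp_cost[mi] += q[1 - i][mj] * G(*x)
            # Boltzmann update: low expected cost -> high probability.
            w = np.exp(-exp_cost / T)
            q[i] = w / w.sum()
        T *= 0.9                              # "automated annealing" stand-in

    # The product distribution concentrates on the minimizing joint move.
    best = max(itertools.product(moves, moves), key=lambda x: q[0][x[0]] * q[1][x[1]])
    print(best, G(*best))                     # -> (1, 0) with G = 0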
arxiv-5403
0811.0851
Solitaire: Recent Developments
<|reference_start|>Solitaire: Recent Developments: This special issue on Peg Solitaire has been put together by John Beasley as guest editor, and reports work by John Harris, Alain Maye, Jean-Charles Meyrignac, George Bell, and others. Topics include: short solutions on the 6 x 6 board and the 37-hole "French" board, solving generalized cross boards and long-arm boards. Five new problems are given for readers to solve, with solutions provided.<|reference_end|>
arxiv
@article{beasley2008solitaire:, title={Solitaire: Recent Developments}, author={John D. Beasley}, journal={arXiv preprint arXiv:0811.0851}, year={2008}, archivePrefix={arXiv}, eprint={0811.0851}, primaryClass={math.CO cs.DM} }
beasley2008solitaire:
arxiv-5404
0811.0881
Non-classical Role of Potential Energy in Adiabatic Quantum Annealing
<|reference_start|>Non-classical Role of Potential Energy in Adiabatic Quantum Annealing: Adiabatic quantum annealing is a paradigm of analog quantum computation, where a given computational job is converted to the task of finding the global minimum of some classical potential energy function, and the search for the global potential minimum is performed by employing external kinetic quantum fluctuations and their subsequent slow reduction (annealing). In this method, the entire potential energy landscape (PEL) may be accessed simultaneously through a delocalized wave-function, in contrast to a classical search, where the searcher has to visit different points in the landscape (i.e., individual classical configurations) sequentially. Thus in such searches, the role of the potential energy might be significantly different in the two cases. Here we discuss this in the context of searching for a single isolated hole (potential minimum) in a golf-course-type gradient-free PEL. We show that the quantum particle would be able to locate the hole faster if the hole is deeper, while the classical particle of course has no scope to exploit the depth of the hole. We also discuss the effect of the underlying quantum phase transition on the adiabatic dynamics.<|reference_end|>
arxiv
@article{das2008non-classical, title={Non-classical Role of Potential Energy in Adiabatic Quantum Annealing}, author={Arnab Das}, journal={J. Phys.: Conf. Ser. vol. 143 012001 (2009)}, year={2008}, doi={10.1088/1742-6596/143/1/012001}, archivePrefix={arXiv}, eprint={0811.0881}, primaryClass={quant-ph cond-mat.stat-mech cs.CC physics.comp-ph} }
das2008non-classical
arxiv-5405
0811.0935
A New Training Protocol for Channel State Estimation in Wireless Relay Networks
<|reference_start|>A New Training Protocol for Channel State Estimation in Wireless Relay Networks: The accuracy of channel state information (CSI) is critical for improving the capacity of wireless networks. In this paper, we introduce a training protocol for wireless relay networks that uses channel estimation and feedforwarding methods. The feedforwarding method is the distinctive feature of the proposed protocol. As we show, each relay feedforwards the imperfect CSI to the destination in a way that provides a higher network capacity and a faster transfer of the CSI than the existing protocols. In addition, we show the importance of the effective CSI accuracy on the wireless relay network capacity by comparing networks with the perfect effective CSI, imperfect effective CSI, and noisy imperfect effective CSI available at the destination.<|reference_end|>
arxiv
@article{yetis2008a, title={A New Training Protocol for Channel State Estimation in Wireless Relay Networks}, author={Cenk M. Yetis and Ahmet H. Kayran}, journal={arXiv preprint arXiv:0811.0935}, year={2008}, archivePrefix={arXiv}, eprint={0811.0935}, primaryClass={cs.IT math.IT} }
yetis2008a
arxiv-5406
0811.0942
Étude longitudinale d'une procédure de modélisation de connaissances en matière de gestion du territoire agricole
<|reference_start|>Étude longitudinale d'une procédure de modélisation de connaissances en matière de gestion du territoire agricole: This paper gives an introduction to this issue and presents the framework and the main steps of the Rosa project. Four teams of researchers, agronomists, computer scientists, psychologists and linguists, were involved for five years in this project, which aimed at the development of a knowledge-based system. The purpose of the Rosa system is the modelling and the comparison of farm spatial organizations. It relies on a formalization of agronomical knowledge and thus induces a joint knowledge building process involving both the agronomists and the computer scientists. The paper describes the steps of the modelling process as well as the filming procedures set up by the psychologists and linguists in order to make explicit and to analyze the underlying knowledge building process.<|reference_end|>
arxiv
@article{ber2008étude, title={\'Etude longitudinale d'une proc\'edure de mod\'elisation de connaissances en mati\`ere de gestion du territoire agricole}, author={Florence Le Ber (INRIA Lorraine - Loria, Cevh), Christian Brassac (LABPSYLOR, L2P)}, journal={Revue d'Anthropologie des Connaissances 2, 2 (2008) 151-168}, year={2008}, archivePrefix={arXiv}, eprint={0811.0942}, primaryClass={cs.AI} }
ber2008étude
arxiv-5407
0811.0952
Raptor Codes and Cryptographic Issues
<|reference_start|>Raptor Codes and Cryptographic Issues: In this paper two cryptographic methods are introduced. In the first method, the presence of a subgroup of persons of a certain size can be checked before an action takes place. For this we use fragments of Raptor codes delivered to the group members. In the second method, the selection of a subset of objects can be kept secret, and it can be proven afterwards what the original selection was.<|reference_end|>
arxiv
@article{malinen2008raptor, title={Raptor Codes and Cryptographic Issues}, author={Mikko Malinen}, journal={arXiv preprint arXiv:0811.0952}, year={2008}, archivePrefix={arXiv}, eprint={0811.0952}, primaryClass={cs.IT math.IT} }
malinen2008raptor
arxiv-5408
0811.0959
The Complexity of Propositional Implication
<|reference_start|>The Complexity of Propositional Implication: The question whether a set of formulae G implies a formula f is fundamental. The present paper studies the complexity of the above implication problem for propositional formulae that are built from a systematically restricted set of Boolean connectives. We give a complete complexity classification for all sets of Boolean functions in the sense of Post's lattice and show that the implication problem is efficiently solvable only if the connectives are definable using the constants {false,true} and only one of {and,or,xor}. The problem remains coNP-complete in all other cases. We also consider the restriction of G to singletons.<|reference_end|>
arxiv
@article{beyersdorff2008the, title={The Complexity of Propositional Implication}, author={Olaf Beyersdorff, Arne Meier, Michael Thomas, Heribert Vollmer}, journal={arXiv preprint arXiv:0811.0959}, year={2008}, doi={10.1016/j.ipl.2009.06.015}, archivePrefix={arXiv}, eprint={0811.0959}, primaryClass={cs.CC cs.LO} }
beyersdorff2008the
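The implication problem described in the abstract above asks whether every assignment satisfying all formulae in G also satisfies f. A brute-force checker, exponential in the number of variables and hence consistent with the coNP-completeness of the general case, is easy to sketch; representing formulae as Python predicates is our own convenience.

    import itertools

    def implies(G, f, variables):
        """Return True iff every assignment satisfying all g in G satisfies f.
        Formulae are Python predicates over a dict of variable values."""
        for values in itertools.product([False, True], repeat=len(variables)):
            a = dict(zip(variables, values))
            if all(g(a) for g in G) and not f(a):
                return False                 # counterexample assignment found
        return True

    # {x and y} implies (x or y), but not the converse.
    conj = lambda a: a["x"] and a["y"]
    disj = lambda a: a["x"] or a["y"]
    print(implies([conj], disj, ["x", "y"]))   # True
    print(implies([disj], conj, ["x", "y"]))   # False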
arxiv-5409
0811.0964
One useful logic that defines its own truth
<|reference_start|>One useful logic that defines its own truth: Existential fixed point logic (EFPL) is a natural fit for some applications, and the purpose of this talk is to attract attention to EFPL. The logic is also interesting in its own right as it has attractive properties. One of those properties is rather unusual: truth of formulas can be defined (given appropriate syntactic apparatus) in the logic. We mentioned that property elsewhere, and we use this opportunity to provide the proof.<|reference_end|>
arxiv
@article{blass2008one, title={One useful logic that defines its own truth}, author={Andreas Blass (University of Michigan) and Yuri Gurevich (Microsoft Research)}, journal={arXiv preprint arXiv:0811.0964}, year={2008}, archivePrefix={arXiv}, eprint={0811.0964}, primaryClass={cs.LO} }
blass2008one
arxiv-5410
0811.0971
Mining Complex Hydrobiological Data with Galois Lattices
<|reference_start|>Mining Complex Hydrobiological Data with Galois Lattices: We have used Galois lattices for mining hydrobiological data. These data concern macrophytes, which are macroscopic plants living in water bodies. These plants are characterized by several biological traits, each of which has several modalities. Our aim is to cluster the plants according to their common traits and modalities and to find out the relations between traits. Galois lattices are efficient methods for such an aim, but they apply only to binary data. In this article, we detail a few approaches we used to transform complex hydrobiological data into binary data and compare the first results obtained thanks to Galois lattices.<|reference_end|>
arxiv
@article{bertaux2008mining, title={Mining Complex Hydrobiological Data with Galois Lattices}, author={Aur\'elie Bertaux (CEVH, LSIIT), Agn\`es Braud (LSIIT), Florence Le Ber (CEVH, INRIA Lorraine - Loria)}, journal={International Workshop on Advances in Conceptual Knowledge Engineering (ACKE'07), Regensburg, Germany (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0811.0971}, primaryClass={cs.AI q-bio.QM} }
bertaux2008mining
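The pipeline in the abstract above, scaling multi-modality traits into a binary object-attribute context and then computing the Galois (concept) lattice, can be illustrated on toy data. A minimal Python sketch; the plants, traits and the naive concept enumeration are our own, and a real study would use an FCA toolkit.

    from itertools import combinations

    # Binary context: plant -> set of (trait=modality) attributes,
    # obtained by "scaling" each multi-valued trait into binary columns.
    context = {
        "plantA": {"size=tall", "leaf=broad"},
        "plantB": {"size=tall", "leaf=thin"},
        "plantC": {"size=short", "leaf=broad"},
    }
    attributes = sorted(set().union(*context.values()))

    def extent(attrs):                       # objects having all given attributes
        return {o for o, s in context.items() if attrs <= s}

    def intent(objs):                        # attributes common to all given objects
        return set(attributes) if not objs else set.intersection(*(context[o] for o in objs))

    # Enumerate formal concepts as closed (extent, intent) pairs.
    concepts = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(attributes, r):
            e = extent(set(attrs))
            concepts.add((frozenset(e), frozenset(intent(e))))
    for e, i in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(e), sorted(i))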
arxiv-5411
0811.0977
Two Forms of One Useful Logic: Existential Fixed Point Logic and Liberal Datalog
<|reference_start|>Two Forms of One Useful Logic: Existential Fixed Point Logic and Liberal Datalog: A natural liberalization of Datalog is used in the Distributed Knowledge Authorization Language (DKAL). We show that the expressive power of this liberal Datalog is that of existential fixed-point logic. The exposition is self-contained.<|reference_end|>
arxiv
@article{blass2008two, title={Two Forms of One Useful Logic: Existential Fixed Point Logic and Liberal Datalog}, author={Andreas Blass (University of Michigan) and Yuri Gurevich (Microsoft Research)}, journal={arXiv preprint arXiv:0811.0977}, year={2008}, archivePrefix={arXiv}, eprint={0811.0977}, primaryClass={cs.LO} }
blass2008two
arxiv-5412
0811.0980
Self-organized criticality and adaptation in discrete dynamical networks
<|reference_start|>Self-organized criticality and adaptation in discrete dynamical networks: It has been proposed that adaptation in complex systems is optimized at the critical boundary between ordered and disordered dynamical regimes. Here, we review models of evolving dynamical networks that lead to self-organization of network topology based on a local coupling between a dynamical order parameter and rewiring of network connectivity, with convergence towards criticality in the limit of large network size $N$. In particular, two adaptive schemes are discussed and compared in the context of Boolean Networks and Threshold Networks: 1) Active nodes lose links, frozen nodes acquire new links, 2) Nodes with correlated activity connect, de-correlated nodes disconnect. These simple local adaptive rules lead to co-evolution of network topology and dynamics. Adaptive networks are strikingly different from random networks: They evolve inhomogeneous topologies and broad plateaus of homeostatic regulation, dynamical activity exhibits $1/f$ noise and attractor periods obey a scale-free distribution. The proposed co-evolutionary mechanism of topological self-organization is robust against noise and does not depend on the details of dynamical transition rules. Using finite-size scaling, it is shown that networks converge to a self-organized critical state in the thermodynamic limit. Finally, we discuss open questions and directions for future research, and outline possible applications of these models to adaptive systems in diverse areas.<|reference_end|>
arxiv
@article{rohlf2008self-organized, title={Self-organized criticality and adaptation in discrete dynamical networks}, author={Thimo Rohlf and Stefan Bornholdt}, journal={arXiv preprint arXiv:0811.0980}, year={2008}, archivePrefix={arXiv}, eprint={0811.0980}, primaryClass={nlin.AO cond-mat.dis-nn cs.NE} }
rohlf2008self-organized
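Adaptive scheme 1 from the abstract above (active nodes lose links, frozen nodes acquire new links) is short enough to state in code. A stripped-down Python sketch under our own assumptions: a threshold network with unit couplings, activity measured as whether a node changed state during a short run, and one rewiring per epoch.

    import random
    import numpy as np

    N = 64
    rng = np.random.default_rng(0)
    links = {i: set(rng.choice(N, size=2, replace=False)) for i in range(N)}  # in-links

    def run(state, steps=30):
        """Iterate the threshold dynamics; track which nodes ever changed state."""
        changed = np.zeros(N, dtype=bool)
        for _ in range(steps):
            new = state.copy()
            for i in range(N):
                s = sum(state[j] for j in links[i])
                new[i] = 1 if s >= 0 else -1
            changed |= (new != state)
            state = new
        return state, changed

    state = rng.choice([-1, 1], size=N)
    for _ in range(200):
        state, changed = run(state)
        i = random.randrange(N)
        if changed[i] and links[i]:
            links[i].remove(random.choice(sorted(links[i])))   # active: lose a link
        elif not changed[i]:
            links[i].add(random.randrange(N))                  # frozen: acquire a link
    print("mean in-degree:", sum(len(v) for v in links.values()) / N)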
arxiv-5413
0811.0987
Modular difference logic is hard
<|reference_start|>Modular difference logic is hard: In connection with machine arithmetic, we are interested in systems of constraints of the form x + k \leq y + k'. Over the integers, the satisfiability problem for such systems is solvable in polynomial time. The problem becomes NP-complete if we restrict attention to the residues for a fixed modulus N.<|reference_end|>
arxiv
@article{bjørner2008modular, title={Modular difference logic is hard}, author={Nikolaj Bj{\o}rner (1), Andreas Blass (2), Yuri Gurevich (1), and Madan Musuvathi (1)}, journal={arXiv preprint arXiv:0811.0987}, year={2008}, archivePrefix={arXiv}, eprint={0811.0987}, primaryClass={cs.CC} }
bjørner2008modular
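A brute-force satisfiability checker makes the setting of the abstract above concrete. One reading, which is our assumption, is that sums are taken modulo N and the resulting residues in {0, ..., N-1} are compared as integers; the example shows how wrap-around can make an integer-infeasible system satisfiable.

    import itertools

    def satisfiable(constraints, variables, N):
        """constraints: list of (x, k, y, kp) encoding  x + k <= y + k'  (mod N)."""
        for vals in itertools.product(range(N), repeat=len(variables)):
            a = dict(zip(variables, vals))
            if all((a[x] + k) % N <= (a[y] + kp) % N for x, k, y, kp in constraints):
                return True
        return False

    # x + 3 <= y and y + 3 <= x together are unsatisfiable over the integers,
    # but wrap-around makes them satisfiable modulo 4 (e.g., x = 3, y = 2).
    cs = [("x", 3, "y", 0), ("y", 3, "x", 0)]
    print(satisfiable(cs, ["x", "y"], 4))      # -> True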
arxiv-5414
0811.1000
Hard and Soft Spherical-Bound Stack decoder for MIMO systems
<|reference_start|>Hard and Soft Spherical-Bound Stack decoder for MIMO systems: Classical ML decoders for MIMO systems, like the sphere decoder, the Schnorr-Euchner algorithm, and the Fano and stack decoders, suffer from high complexity for large numbers of antennas and large constellation sizes. We propose in this paper a novel sequential algorithm which combines the stack algorithm's search strategy with the sphere decoder's search region. The proposed decoder, which we call the Spherical-Bound-Stack decoder (SB-Stack), can then be used to decode lattices and large constellations with a reduced complexity compared to the classical ML decoders. The SB-Stack decoder is further extended to support soft-output detection over linear channels. It is shown that the soft SB-Stack decoder outperforms other MIMO soft decoders in terms of performance and complexity.<|reference_end|>
arxiv
@article{ouertani2008hard, title={Hard and Soft Spherical-Bound Stack decoder for MIMO systems}, author={Rym Ouertani, Ghaya Rekaya Ben-Othman, Abdellatif Salah}, journal={arXiv preprint arXiv:0811.1000}, year={2008}, archivePrefix={arXiv}, eprint={0811.1000}, primaryClass={cs.IT math.IT} }
ouertani2008hard
arxiv-5415
0811.1061
How to turn a scripting language into a domain specific language for computer algebra
<|reference_start|>How to turn a scripting language into a domain specific language for computer algebra: We have developed two computer algebra systems, meditor [Jolly:2007] and JAS [Kredel:2006]. These CAS systems are available as Java libraries. For the use-case of interactively entering and manipulating mathematical expressions, there is a need of a scripting front-end for our libraries. Most other CAS invent and implement their own scripting interface for this purpose. We, however, do not want to reinvent the wheel and propose to use a contemporary scripting language with access to Java code. In this paper we discuss the requirements for a scripting language in computer algebra and check whether the languages Python, Ruby, Groovy and Scala meet these requirements. We conclude that, with minor problems, any of these languages is suitable for our purpose.<|reference_end|>
arxiv
@article{jolly2008how, title={How to turn a scripting language into a domain specific language for computer algebra}, author={Raphael Jolly and Heinz Kredel}, journal={arXiv preprint arXiv:0811.1061}, year={2008}, archivePrefix={arXiv}, eprint={0811.1061}, primaryClass={cs.SC} }
jolly2008how
arxiv-5416
0811.1067
Statistical ranking and combinatorial Hodge theory
<|reference_start|>Statistical ranking and combinatorial Hodge theory: We propose a number of techniques for obtaining a global ranking from data that may be incomplete and imbalanced -- characteristics almost universal to modern datasets coming from e-commerce and internet applications. We are primarily interested in score or rating-based cardinal data. From raw ranking data, we construct pairwise rankings, represented as edge flows on an appropriate graph. Our statistical ranking method uses the graph Helmholtzian, the graph theoretic analogue of the Helmholtz operator or vector Laplacian, in much the same way the graph Laplacian is an analogue of the Laplace operator or scalar Laplacian. We study the graph Helmholtzian using combinatorial Hodge theory: we show that every edge flow representing pairwise ranking can be resolved into two orthogonal components, a gradient flow that represents the L2-optimal global ranking and a divergence-free flow (cyclic) that measures the validity of the global ranking obtained -- if this is large, then the data does not have a meaningful global ranking. This divergence-free flow can be further decomposed orthogonally into a curl flow (locally cyclic) and a harmonic flow (locally acyclic but globally cyclic); these provide information on whether inconsistency arises locally or globally. An obvious advantage over the NP-hard Kemeny optimization is that discrete Hodge decomposition may be computed via a linear least squares regression. We also investigate the L1-projection of edge flows, showing that this is dual to correlation maximization over bounded divergence-free flows, and the L1-approximate sparse cyclic ranking, showing that this is dual to correlation maximization over bounded curl-free flows. We discuss relations with Kemeny optimization, Borda count, and Kendall-Smith consistency index from social choice theory and statistics.<|reference_end|>
arxiv
@article{jiang2008statistical, title={Statistical ranking and combinatorial Hodge theory}, author={Xiaoye Jiang, Lek-Heng Lim, Yuan Yao, Yinyu Ye}, journal={arXiv preprint arXiv:0811.1067}, year={2008}, archivePrefix={arXiv}, eprint={0811.1067}, primaryClass={stat.ML cs.DM} }
jiang2008statistical
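The gradient component described in the abstract above, i.e. the l2-optimal global ranking, is exactly a least-squares fit of potential scores to the pairwise edge flow. A minimal numpy sketch on invented comparison data; the residual left over is the divergence-free (cyclic) part the abstract mentions.

    import numpy as np

    items = ["a", "b", "c", "d"]
    # Edge flow Y[u -> v]: how much item v is preferred over item u.
    comparisons = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 2.5, ("c", "d"): 0.5}

    idx = {v: k for k, v in enumerate(items)}
    B = np.zeros((len(comparisons), len(items)))   # edge-vertex incidence matrix
    y = np.zeros(len(comparisons))
    for e, ((u, v), w) in enumerate(comparisons.items()):
        B[e, idx[u]], B[e, idx[v]] = -1.0, 1.0     # gradient of scores: s[v] - s[u]
        y[e] = w

    # Global scores s minimize ||B s - y||_2; unique up to an additive constant.
    s, *_ = np.linalg.lstsq(B, y, rcond=None)
    s -= s.mean()
    print(dict(zip(items, np.round(s, 3))))
    # The residual y - B s is the divergence-free (cyclic) component: a large
    # residual means the comparisons admit no consistent global ranking.
    print("cyclic residual:", np.round(y - B @ s, 3))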
arxiv-5417
0811.1075
Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning
<|reference_start|>Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning: Resolution refinements called w-resolution trees with lemmas (WRTL) and with input lemmas (WRTI) are introduced. Dag-like resolution is equivalent to both WRTL and WRTI when there is no regularity condition. For regular proofs, an exponential separation between regular dag-like resolution and both regular WRTL and regular WRTI is given. It is proved that DLL proof search algorithms that use clause learning based on unit propagation can be polynomially simulated by regular WRTI. More generally, non-greedy DLL algorithms with learning by unit propagation are equivalent to regular WRTI. A general form of clause learning, called DLL-Learn, is defined that is equivalent to regular WRTL. A variable extension method is used to give simulations of resolution by regular WRTI, using a simplified form of proof trace extensions. DLL-Learn and non-greedy DLL algorithms with learning by unit propagation can use variable extensions to simulate general resolution without doing restarts. Finally, an exponential lower bound for WRTL where the lemmas are restricted to short clauses is shown.<|reference_end|>
arxiv
@article{buss2008resolution, title={Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning}, author={Samuel R. Buss, Jan Hoffmann, Jan Johannsen}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (December 5, 2008) lmcs:860}, year={2008}, doi={10.2168/LMCS-4(4:13)2008}, archivePrefix={arXiv}, eprint={0811.1075}, primaryClass={cs.LO cs.CC} }
buss2008resolution
arxiv-5418
0811.1081
Parallel GPU Implementation of Iterative PCA Algorithms
<|reference_start|>Parallel GPU Implementation of Iterative PCA Algorithms: Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA) are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).<|reference_end|>
arxiv
@article{andrecut2008parallel, title={Parallel GPU Implementation of Iterative PCA Algorithms}, author={M. Andrecut}, journal={arXiv preprint arXiv:0811.1081}, year={2008}, archivePrefix={arXiv}, eprint={0811.1081}, primaryClass={q-bio.QM cs.MS physics.comp-ph} }
andrecut2008parallel
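The Gram-Schmidt correction that distinguishes GS-PCA from NIPALS-PCA can be sketched on the CPU: after each power-iteration update, the current score and loading vectors are reorthogonalized against the components already extracted. This is a simplified numpy rendering of that idea under our own conventions (orthonormal scores and loadings), not the paper's CUBLAS implementation.

    import numpy as np

    def gs_pca(X, n_components, n_iter=100):
        X = X - X.mean(axis=0)                 # center the data
        n, m = X.shape
        T = np.zeros((n, n_components))        # orthonormal scores
        P = np.zeros((m, n_components))        # orthonormal loadings
        R = X.copy()
        for k in range(n_components):
            t = R[:, 0] / np.linalg.norm(R[:, 0])
            for _ in range(n_iter):
                p = R.T @ t
                p -= P[:, :k] @ (P[:, :k].T @ p)   # Gram-Schmidt vs. found loadings
                p /= np.linalg.norm(p)
                t = R @ p
                t -= T[:, :k] @ (T[:, :k].T @ t)   # Gram-Schmidt vs. found scores
                t /= np.linalg.norm(t)
            T[:, k], P[:, k] = t, p
            R -= np.outer(t, t @ R)            # deflate along the found score

        return T, P

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))
    T, P = gs_pca(X, 3)
    print(np.round(P.T @ P, 6))                # ~identity: orthogonality preserved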
arxiv-5419
0811.1083
A role-free approach to indexing large RDF data sets in secondary memory for efficient SPARQL evaluation
<|reference_start|>A role-free approach to indexing large RDF data sets in secondary memory for efficient SPARQL evaluation: Massive RDF data sets are becoming commonplace. RDF data is typically generated in social semantic domains (such as personal information management) wherein a fixed schema is often not available a priori. We propose a simple Three-way Triple Tree (TripleT) secondary-memory indexing technique to facilitate efficient SPARQL query evaluation on such data sets. The novelty of TripleT is that (1) the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and (2) the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state of the art on RDF indexing, in terms of both storage and query processing costs.<|reference_end|>
arxiv
@article{fletcher2008a, title={A role-free approach to indexing large RDF data sets in secondary memory for efficient SPARQL evaluation}, author={George H. L. Fletcher and Peter W. Beck}, journal={arXiv preprint arXiv:0811.1083}, year={2008}, archivePrefix={arXiv}, eprint={0811.1083}, primaryClass={cs.DB cs.DS} }
fletcher2008a
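The role-free idea in the abstract above reduces to a single map from atoms to occurrences, rather than separate subject/predicate/object indexes. A toy in-memory Python sketch; the actual TripleT structure is a secondary-memory index, which this does not model.

    from collections import defaultdict

    triples = [
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("alice", "type", "Person"),
    ]

    # One index over atoms, regardless of role: atom -> [(triple_id, role), ...]
    index = defaultdict(list)
    for tid, triple in enumerate(triples):
        for role, atom in zip(("subject", "predicate", "object"), triple):
            index[atom].append((tid, role))

    # A single lookup answers "where does 'bob' occur?" in any role.
    for tid, role in index["bob"]:
        print(triples[tid], "as", role)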
arxiv-5420
0811.1095
Allocation of control and data channels for Large-Scale Wireless Sensor Networks
<|reference_start|>Allocation of control and data channels for Large-Scale Wireless Sensor Networks: Both the IEEE 802.15.4 and 802.15.4a standards allow for dynamic channel allocation and use of the multiple channels available at their physical layers, but their MAC protocols are designed for only a single channel. Also, sensor transceivers such as the CC2420 provide multiple channels, and, as shown in [1], [2] and [3], the channel-switch latency of the CC2420 transceiver is only about 200$\mu$s. In order to enhance energy efficiency and shorten end-to-end delay, we propose, in this report, spectrum-efficient frequency allocation schemes that statically assign control channels and dynamically reuse data channels for Personal Area Networks (PANs) inside a Large-Scale WSN based on UWB technology.<|reference_end|>
arxiv
@article{slimane2008allocation, title={Allocation of control and data channels for Large-Scale Wireless Sensor Networks}, author={Jamila Ben Slimane (INRIA Lorraine - Loria, Mediatron), Ye-Qiong Song (INRIA Lorraine - Loria, Loria), Anis Koub\^aa (IPP-Hurray! Research Group), Mounir Frikha (MEDIATRON)}, journal={arXiv preprint arXiv:0811.1095}, year={2008}, archivePrefix={arXiv}, eprint={0811.1095}, primaryClass={cs.NI} }
slimane2008allocation
arxiv-5421
0811.1103
Symbolic Backwards-Reachability Analysis for Higher-Order Pushdown Systems
<|reference_start|>Symbolic Backwards-Reachability Analysis for Higher-Order Pushdown Systems: Higher-order pushdown systems (PDSs) generalise pushdown systems through the use of higher-order stacks, that is, a nested "stack of stacks" structure. These systems may be used to model higher-order programs and are closely related to the Caucal hierarchy of infinite graphs and safe higher-order recursion schemes. We consider the backwards-reachability problem over higher-order Alternating PDSs (APDSs), a generalisation of higher-order PDSs. This builds on and extends previous work on pushdown systems and context-free higher-order processes in a non-trivial manner. In particular, we show that the set of configurations from which a regular set of higher-order APDS configurations is reachable is regular and computable in n-EXPTIME. In fact, the problem is n-EXPTIME-complete. We show that this work has several applications in the verification of higher-order PDSs, such as linear-time model-checking, alternation-free mu-calculus model-checking and the computation of winning regions of reachability games.<|reference_end|>
arxiv
@article{hague2008symbolic, title={Symbolic Backwards-Reachability Analysis for Higher-Order Pushdown Systems}, author={Matthew Hague and C.-H. Luke Ong}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (December 5, 2008) lmcs:831}, year={2008}, doi={10.2168/LMCS-4(4:14)2008}, archivePrefix={arXiv}, eprint={0811.1103}, primaryClass={cs.CC cs.GT} }
hague2008symbolic
arxiv-5422
0811.1108
Resource Allocation for Downlink Cellular OFDMA Systems: Part I - Optimal Allocation
<|reference_start|>Resource Allocation for Downlink Cellular OFDMA Systems: Part I - Optimal Allocation: In this pair of papers (Part I and Part II in this issue), we investigate the issue of power control and subcarrier assignment in a sectorized two-cell downlink OFDMA system impaired by multicell interference. As recommended for WiMAX, we assume that the first part of the available bandwidth is likely to be reused by different base stations (and is thus subject to multicell interference) and that the second part of the bandwidth is shared in an orthogonal way between the different base stations (and is thus protected from multicell interference). Although the problem of multicell resource allocation is nonconvex in this scenario, we provide in Part I the general form of the global solution. In particular, the optimal resource allocation turns out to be "binary" in the sense that, except for at most one pivot-user in each cell, any user receives data either in the reused bandwidth or in the protected bandwidth, but not in both. The determination of the optimal resource allocation essentially reduces to the determination of the latter pivot-position.<|reference_end|>
arxiv
@article{ksairi2008resource, title={Resource Allocation for Downlink Cellular OFDMA Systems: Part I - Optimal Allocation}, author={Nassar Ksairi, Pascal Bianchi, Philippe Ciblat, Walid Hachem}, journal={arXiv preprint arXiv:0811.1108}, year={2008}, archivePrefix={arXiv}, eprint={0811.1108}, primaryClass={cs.IT math.IT} }
ksairi2008resource
arxiv-5423
0811.1112
Resource Allocation for Downlink Cellular OFDMA Systems: Part II - Practical Algorithms and Optimal Reuse Factor
<|reference_start|>Resource Allocation for Downlink Cellular OFDMA Systems: Part II - Practical Algorithms and Optimal Reuse Factor: In a companion paper, we characterized the optimal resource allocation in terms of power control and subcarrier assignment for a downlink sectorized OFDMA system. In our model, the network is assumed to be one dimensional for the sake of analysis. We also assume that a certain part of the available bandwidth is likely to be reused by different base stations while the other part of the bandwidth is shared in an orthogonal way between these base stations. The optimal resource allocation characterized in Part I is obtained by minimizing the total power spent by the network under the constraint that all users' rate requirements are satisfied. When optimal resource allocation is used, any user receives data either in the reused bandwidth or in the protected bandwidth, but not in both (except for at most one pivot-user in each cell). We also proposed an algorithm that determines the optimal values of users' resource allocation parameters. The optimal allocation algorithm proposed in Part I requires a large number of operations. In the present paper, we propose a distributed practical resource allocation algorithm with low complexity. We study the asymptotic behavior of both this simplified resource allocation algorithm and the optimal resource allocation algorithm of Part I as the number of users in each cell tends to infinity. Our analysis allows us to prove that the proposed simplified algorithm is asymptotically optimal. As a byproduct of our analysis, we characterize the optimal value of the frequency reuse factor.<|reference_end|>
arxiv
@article{ksairi2008resource, title={Resource Allocation for Downlink Cellular OFDMA Systems: Part II - Practical Algorithms and Optimal Reuse Factor}, author={Nassar Ksairi, Pascal Bianchi, Philippe Ciblat, Walid Hachem}, journal={arXiv preprint arXiv:0811.1112}, year={2008}, archivePrefix={arXiv}, eprint={0811.1112}, primaryClass={cs.IT math.IT} }
ksairi2008resource
arxiv-5424
0811.1151
A Model for Probabilistic Reasoning on Assume/Guarantee Contracts
<|reference_start|>A Model for Probabilistic Reasoning on Assume/Guarantee Contracts: In this paper, we present a probabilistic adaptation of an Assume/Guarantee contract formalism. For the sake of generality, we assume that the extended state machines used in the contracts and implementations define sets of runs on a given set of variables, that compose by intersection over the common variables. In order to enable probabilistic reasoning, we consider that the contracts dictate how certain input variables will behave, being either non-deterministic or probabilistic; the introduction of probabilistic variables leads us to adjust the notions of implementation, refinement and composition. As shown in the report, this probabilistic adaptation of the Assume/Guarantee contract theory preserves compositionality and therefore allows modular reliability analysis, either with a top-down or a bottom-up approach.<|reference_end|>
arxiv
@article{delahaye2008a, title={A Model for Probabilistic Reasoning on Assume/Guarantee Contracts}, author={Beno\^it Delahaye (IRISA), Beno\^it Caillaud (IRISA)}, journal={arXiv preprint arXiv:0811.1151}, year={2008}, number={RR-6719}, archivePrefix={arXiv}, eprint={0811.1151}, primaryClass={cs.PF} }
delahaye2008a
arxiv-5425
0811.1250
Adaptive Base Class Boost for Multi-class Classification
<|reference_start|>Adaptive Base Class Boost for Multi-class Classification: We develop the concept of ABC-Boost (Adaptive Base Class Boost) for multi-class classification and present ABC-MART, a concrete implementation of ABC-Boost. The original MART (Multiple Additive Regression Trees) algorithm has been very successful in large-scale applications. For binary classification, ABC-MART recovers MART. For multi-class classification, ABC-MART considerably improves MART, as evaluated on several public data sets.<|reference_end|>
arxiv
@article{li2008adaptive, title={Adaptive Base Class Boost for Multi-class Classification}, author={Ping Li}, journal={arXiv preprint arXiv:0811.1250}, year={2008}, archivePrefix={arXiv}, eprint={0811.1250}, primaryClass={cs.LG cs.IR} }
li2008adaptive
arxiv-5426
0811.1254
Coding Theory and Algebraic Combinatorics
<|reference_start|>Coding Theory and Algebraic Combinatorics: This chapter introduces and elaborates on the fruitful interplay of coding theory and algebraic combinatorics, with most of the focus on the interaction of codes with combinatorial designs, finite geometries, simple groups, sphere packings, kissing numbers, lattices, and association schemes. In particular, special interest is devoted to the relationship between codes and combinatorial designs. We describe and recapitulate important results in the development of the state of the art. In addition, we give illustrative examples and constructions, and highlight recent advances. Finally, we provide a collection of significant open problems and challenges concerning future research.<|reference_end|>
arxiv
@article{huber2008coding, title={Coding Theory and Algebraic Combinatorics}, author={Michael Huber}, journal={arXiv preprint arXiv:0811.1254}, year={2008}, doi={10.1142/9789812837172_0004}, archivePrefix={arXiv}, eprint={0811.1254}, primaryClass={math.CO cs.IT math.IT} }
huber2008coding
arxiv-5427
0811.1260
The Application of Fuzzy Logic to Collocation Extraction
<|reference_start|>The Application of Fuzzy Logic to Collocation Extraction: Collocations are important for many tasks of natural language processing, such as information retrieval, machine translation, and computational lexicography. So far, many statistical methods have been used for collocation extraction, and almost all of them form a classical crisp set of collocations. We propose a fuzzy logic approach to collocation extraction that forms a fuzzy set of collocations, in which each word combination has a certain grade of membership for being a collocation. Fuzzy logic provides an easy way to express natural language in fuzzy logic rules. Two existing measures, mutual information and the t-test, have been utilized as the inputs of the fuzzy inference system. The resulting membership function can be easily seen and demonstrated. To show the utility of the fuzzy logic, some word pairs have been examined as an example. The working data are based on a corpus of about one million words contained in different novels from Project Gutenberg (www.gutenberg.org). The proposed method has all the advantages of the two methods, while overcoming their drawbacks; hence it provides better results than either method.<|reference_end|>
arxiv
@article{bisht2008the, title={The Application of Fuzzy Logic to Collocation Extraction}, author={Raj Kishor Bisht, H.S.Dhami}, journal={arXiv preprint arXiv:0811.1260}, year={2008}, archivePrefix={arXiv}, eprint={0811.1260}, primaryClass={cs.CL} }
bisht2008the
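The pipeline in the abstract above computes mutual information and a t-score for each word pair and feeds both into a fuzzy inference system. A minimal Python sketch on a toy corpus; the corpus, the ramp membership functions and the combination rule are our own illustrative choices, not the paper's exact inference system.

    import math
    from collections import Counter

    text = ("new york is a big city . new york has a big port . "
            "the big apple is new york").split()

    unigrams = Counter(text)
    bigrams = Counter(zip(text, text[1:]))
    N = len(text)

    def mi(w1, w2):      # pointwise mutual information of an observed bigram
        return math.log2(bigrams[(w1, w2)] * N / (unigrams[w1] * unigrams[w2]))

    def t_score(w1, w2): # t-test statistic against independence
        observed = bigrams[(w1, w2)] / N
        expected = (unigrams[w1] / N) * (unigrams[w2] / N)
        return (observed - expected) / math.sqrt(observed / N)

    def membership(x, lo, hi):   # simple ramp membership function on [lo, hi]
        return min(1.0, max(0.0, (x - lo) / (hi - lo)))

    for pair in [("new", "york"), ("is", "a")]:
        m = membership(mi(*pair), 0.0, 4.0)
        t = membership(t_score(*pair), 0.0, 2.0)
        # Illustrative fuzzy combination: mostly the stronger evidence,
        # moderated by the weaker one.
        grade = 0.7 * max(m, t) + 0.3 * min(m, t)
        print(pair, round(grade, 2))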
arxiv-5428
0811.1301
Distributed Algorithms for Computing Alternate Paths Avoiding Failed Nodes and Links
<|reference_start|>Distributed Algorithms for Computing Alternate Paths Avoiding Failed Nodes and Links: A recent study characterizing failures in computer networks shows that transient single element (node/link) failures are the dominant failures in large communication networks like the Internet. Thus, having the routing paths globally recomputed on a failure does not pay off since the failed element recovers fairly quickly, and the recomputed routing paths need to be discarded. In this paper, we present the first distributed algorithm that computes the alternate paths required by some "proactive recovery schemes" for handling transient failures. Our algorithm computes paths that avoid a failed node, and provides an alternate path to a particular destination from an upstream neighbor of the failed node. With minor modifications, we can have the algorithm compute alternate paths that avoid a failed link as well. To the best of our knowledge all previous algorithms proposed for computing alternate paths are centralized, and need complete information of the network graph as input to the algorithm.<|reference_end|>
arxiv
@article{bhosle2008distributed, title={Distributed Algorithms for Computing Alternate Paths Avoiding Failed Nodes and Links}, author={Amit M. Bhosle and Teofilo F. Gonzalez}, journal={arXiv preprint arXiv:0811.1301}, year={2008}, archivePrefix={arXiv}, eprint={0811.1301}, primaryClass={cs.DC cs.DS cs.NI} }
bhosle2008distributed
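The object the abstract above is about, an alternate path to the destination that avoids a failed node, is easy to define centrally, which is the form prior work takes. A centralized Dijkstra-based Python illustration; the paper's contribution is computing these paths distributively, which this sketch deliberately does not attempt.

    import heapq

    def dijkstra(adj, src, dst, avoid=None):
        """Shortest path from src to dst skipping node `avoid`; None if cut off."""
        dist, prev, heap = {src: 0}, {}, [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return list(reversed(path))
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v, w in adj[u]:
                if v == avoid:
                    continue                  # route around the failed node
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        return None

    adj = {                                   # undirected weighted toy network
        "s": [("f", 1), ("a", 4)], "f": [("s", 1), ("t", 1)],
        "a": [("s", 4), ("t", 1)], "t": [("f", 1), ("a", 1)],
    }
    print(dijkstra(adj, "s", "t"))            # primary path through node "f"
    print(dijkstra(adj, "s", "t", avoid="f")) # alternate path avoiding failed "f"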
arxiv-5429
0811.1304
NB-FEB: An Easy-to-Use and Scalable Universal Synchronization Primitive for Parallel Programming
<|reference_start|>NB-FEB: An Easy-to-Use and Scalable Universal Synchronization Primitive for Parallel Programming: This paper addresses the problem of universal synchronization primitives that can support scalable thread synchronization for large-scale many-core architectures. The universal synchronization primitives that have been deployed widely in conventional architectures, like CAS and LL/SC, are expected to reach their scalability limits in the evolution to many-core architectures with thousands of cores. We introduce a non-blocking full/empty bit primitive, or NB-FEB for short, as a promising synchronization primitive for parallel programming on many-core architectures. We show that the NB-FEB primitive is universal, scalable, feasible and convenient to use. NB-FEB, together with registers, can solve the consensus problem for an arbitrary number of processes (universality). NB-FEB is combinable, namely its memory requests to the same memory location can be combined into only one memory request, which consequently mitigates performance degradation due to synchronization "hot spots" (scalability). Since NB-FEB is a variant of the original full/empty bit that always returns a value instead of waiting for a conditional flag, it is as feasible as the original full/empty bit, which has been implemented in many computer systems (feasibility). The original full/empty bit is well-known as a special-purpose primitive for fast producer-consumer synchronization and has been used extensively in the specific domain of applications. In this paper, we show that NB-FEB can be deployed easily as a general-purpose primitive. Using NB-FEB, we construct a non-blocking software transactional memory system called NBFEB-STM, which can be used to handle concurrent threads conveniently. NBFEB-STM is space efficient: the space complexity of each object updated by $N$ concurrent threads/transactions is $\Theta(N)$, which is optimal.<|reference_end|>
arxiv
@article{ha2008nb-feb:, title={NB-FEB: An Easy-to-Use and Scalable Universal Synchronization Primitive for Parallel Programming}, author={Phuong Hoai Ha, Philippas Tsigas and Otto J. Anshus}, journal={arXiv preprint arXiv:0811.1304}, year={2008}, number={CS:2008-69}, archivePrefix={arXiv}, eprint={0811.1304}, primaryClass={cs.DC cs.AR cs.DS} }
ha2008nb-feb:
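The behavioral point of NB-FEB in the abstract above is that full/empty-bit operations return a value immediately instead of blocking on the flag. A sequential Python sketch of that semantics; the operation names and return conventions are our invention rather than the paper's instruction set, and a lock stands in for hardware atomicity.

    import threading

    class NBFullEmptyWord:
        """Toy word with a full/empty flag; operations never block."""
        def __init__(self):
            self._lock = threading.Lock()
            self.value, self.full = 0, False

        def store_if_empty(self, v):
            # Write only when the flag is empty; report the old (value, flag)
            # so the caller can tell whether the write took effect.
            with self._lock:
                old = (self.value, self.full)
                if not self.full:
                    self.value, self.full = v, True
                return old

        def load_and_clear(self):
            # Read the value and mark the word empty; never waits for "full".
            with self._lock:
                old = (self.value, self.full)
                self.full = False
                return old

    w = NBFullEmptyWord()
    print(w.store_if_empty(7))    # (0, False): slot was empty, write succeeded
    print(w.store_if_empty(9))    # (7, True): slot full, write had no effect
    print(w.load_and_clear())     # (7, True): consumed the value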
arxiv-5430
0811.1305
Applying Practice to Theory
<|reference_start|>Applying Practice to Theory: How can complexity theory and algorithms benefit from practical advances in computing? We give a short overview of some prior work using practical computing to attack problems in computational complexity and algorithms, informally describe how linear program solvers may be used to help prove new lower bounds for satisfiability, and suggest a research program for developing new understanding in circuit complexity.<|reference_end|>
arxiv
@article{williams2008applying, title={Applying Practice to Theory}, author={Ryan Williams}, journal={arXiv preprint arXiv:0811.1305}, year={2008}, archivePrefix={arXiv}, eprint={0811.1305}, primaryClass={cs.CC cs.DS} }
williams2008applying
arxiv-5431
0811.1317
Secrecy in Cooperative Relay Broadcast Channels
<|reference_start|>Secrecy in Cooperative Relay Broadcast Channels: We investigate the effects of user cooperation on the secrecy of broadcast channels by considering a cooperative relay broadcast channel. We show that user cooperation can increase the achievable secrecy region. We propose an achievable scheme that combines Marton's coding scheme for broadcast channels and Cover and El Gamal's compress-and-forward scheme for relay channels. We derive outer bounds for the rate-equivocation region using auxiliary random variables for single-letterization. Finally, we consider a Gaussian channel and show that both users can have positive secrecy rates, which is not possible for scalar Gaussian broadcast channels without cooperation.<|reference_end|>
arxiv
@article{ekrem2008secrecy, title={Secrecy in Cooperative Relay Broadcast Channels}, author={E. Ekrem and S. Ulukus}, journal={arXiv preprint arXiv:0811.1317}, year={2008}, archivePrefix={arXiv}, eprint={0811.1317}, primaryClass={cs.IT math.IT} }
ekrem2008secrecy
arxiv-5432
0811.1319
Modeling Social Annotation: a Bayesian Approach
<|reference_start|>Modeling Social Annotation: a Bayesian Approach: Collaborative tagging systems, such as Delicious, CiteULike, and others, allow users to annotate resources, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations contributed by thousands of users can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make the best use of social annotations, since they do not take into account variations in individual users' perspectives and vocabulary. In previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated resources. Unfortunately, that approach had one major shortcoming: the number of topics and interests must be specified a priori. To address this drawback, we extend the model to a fully Bayesian framework, which offers a way to automatically estimate these numbers. In particular, the model allows the number of interests and topics to change as suggested by the structure of the data. We evaluate the proposed model in detail on synthetic and real-world data by comparing its performance to Latent Dirichlet Allocation on the topic extraction task. For the latter evaluation, we apply the model to infer topics of Web resources from social annotations obtained from Delicious in order to discover new resources similar to a specified one. Our empirical results demonstrate that the proposed model is a promising method for exploiting social knowledge contained in user-generated annotations.<|reference_end|>
arxiv
@article{plangprasopchok2008modeling, title={Modeling Social Annotation: a Bayesian Approach}, author={Anon Plangprasopchok, Kristina Lerman}, journal={arXiv preprint arXiv:0811.1319}, year={2008}, archivePrefix={arXiv}, eprint={0811.1319}, primaryClass={cs.AI} }
plangprasopchok2008modeling
arxiv-5433
0811.1335
Algorithmic Techniques for Several Optimization Problems Regarding Distributed Systems with Tree Topologies
<|reference_start|>Algorithmic Techniques for Several Optimization Problems Regarding Distributed Systems with Tree Topologies: As the development of distributed systems progresses, more and more challenges arise and the need for developing optimized systems and for optimizing existing systems from multiple perspectives becomes more stringent. In this paper I present novel algorithmic techniques for solving several optimization problems regarding distributed systems with tree topologies. I address topics like: reliability improvement, partitioning, coloring, content delivery, optimal matchings, as well as some tree counting aspects. Some of the presented techniques are only of theoretical interest, while others can be used in practical settings.<|reference_end|>
arxiv
@article{andreica2008algorithmic, title={Algorithmic Techniques for Several Optimization Problems Regarding Distributed Systems with Tree Topologies}, author={Mugurel Ionut Andreica}, journal={ROMAI Journal, vol. 4, no. 1, pp. 1-25, 2008 (ISSN: 1841-5512) ; http://www.romai.ro}, year={2008}, archivePrefix={arXiv}, eprint={0811.1335}, primaryClass={cs.DS cs.DM cs.NI} }
andreica2008algorithmic
arxiv-5434
0811.1355
Matrix approach to discrete fractional calculus II: partial fractional differential equations
<|reference_start|>Matrix approach to discrete fractional calculus II: partial fractional differential equations: A new method that enables easy and convenient discretization of partial differential equations with derivatives of arbitrary real order (so-called fractional derivatives) and delays is presented and illustrated on numerical solution of various types of fractional diffusion equation. The suggested method is the development of Podlubny's matrix approach (Fractional Calculus and Applied Analysis, vol. 3, no. 4, 2000, 359--386). Four examples of numerical solution of fractional diffusion equation with various combinations of time/space fractional derivatives (integer/integer, fractional/integer, integer/fractional, and fractional/fractional) with respect to time and to the spatial variable are provided in order to illustrate how simple and general is the suggested approach. The fifth example illustrates that the method can be equally simply used for fractional differential equations with delays. A set of MATLAB routines for the implementation of the method as well as sample code used to solve the examples have been developed.<|reference_end|>
arxiv
@article{podlubny2008matrix, title={Matrix approach to discrete fractional calculus II: partial fractional differential equations}, author={Igor Podlubny, Aleksei V. Chechkin, Tomas Skovranek, YangQuan Chen, Blas M. Vinagre Jara}, journal={Journal of Computational Physics, vol. 228, no. 8, 1 May 2009, pp. 3137-3153}, year={2008}, doi={10.1016/j.jcp.2009.01.014}, archivePrefix={arXiv}, eprint={0811.1355}, primaryClass={math.NA cs.NA math-ph math.CA math.MP physics.comp-ph} }
podlubny2008matrix
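The matrix approach that the paper extends represents discrete fractional differentiation as multiplication by a lower-triangular strip matrix of Grunwald-Letnikov coefficients. A minimal numpy sketch for one variable, checked against the known half-derivative of f(x) = x; the paper's partial-differential discretizations are assembled from blocks like this.

    import numpy as np

    def gl_matrix(alpha, n, h):
        """Lower-triangular matrix B with (B f) ~ the alpha-th derivative of f
        sampled on n points with step h (Grunwald-Letnikov definition)."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):                  # w_k = w_{k-1} * (1 - (alpha+1)/k)
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        B = np.zeros((n, n))
        for i in range(n):
            B[i, : i + 1] = w[i::-1]           # row i: [w_i, ..., w_1, w_0]
        return B / h ** alpha

    # Check against a known result: the 0.5-th derivative of x is 2*sqrt(x/pi).
    n, h = 1000, 0.001
    x = np.arange(n) * h
    approx = gl_matrix(0.5, n, h) @ x
    exact = 2.0 * np.sqrt(x / np.pi)
    print(np.max(np.abs(approx - exact)))      # small; shrinks as h decreases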
arxiv-5435
0811.1365
Configuration spaces of convex and embedded polygons in the plane
<|reference_start|>Configuration spaces of convex and embedded polygons in the plane: This paper studies the configuration spaces of linkages whose underlying graph is a single cycle. Assume that the edge lengths are such that there are no configurations in which all the edges lie along a line. The main results are that, modulo translations and rotations, each component of the space of convex configurations is homeomorphic to a closed Euclidean ball and each component of the space of embedded configurations is homeomorphic to a Euclidean space. This represents an elaboration on the topological information that follows from the convexification theorem of Connelly, Demaine, and Rote.<|reference_end|>
arxiv
@article{shimamoto2008configuration, title={Configuration spaces of convex and embedded polygons in the plane}, author={Don Shimamoto, Mary Wootters}, journal={arXiv preprint arXiv:0811.1365}, year={2008}, archivePrefix={arXiv}, eprint={0811.1365}, primaryClass={cs.CG} }
shimamoto2008configuration
arxiv-5436
0811.1449
Fibonacci Index and Stability Number of Graphs: a Polyhedral Study
<|reference_start|>Fibonacci Index and Stability Number of Graphs: a Polyhedral Study: The Fibonacci index of a graph is the number of its stable sets. This parameter is widely studied and has applications in chemical graph theory. In this paper, we establish tight upper bounds for the Fibonacci index in terms of the stability number and the order of general graphs and connected graphs. Turán graphs frequently appear in extremal graph theory. We show that Turán graphs and a connected variant of them are also extremal for these particular problems. We also make a polyhedral study by establishing all the optimal linear inequalities for the stability number and the Fibonacci index, inside the classes of general and connected graphs of order $n$.<|reference_end|>
arxiv
@article{bruyère2008fibonacci, title={Fibonacci Index and Stability Number of Graphs: a Polyhedral Study}, author={V\'eronique Bruy\`ere and Hadrien M\'elot}, journal={arXiv preprint arXiv:0811.1449}, year={2008}, doi={10.1007/s10878-009-9228-7}, archivePrefix={arXiv}, eprint={0811.1449}, primaryClass={cs.DM} }
bruyère2008fibonacci
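The Fibonacci index defined in the abstract above, the number of stable (independent) sets of a graph, can be computed by brute force for small graphs, which is enough to check the paper's bounds on examples. A short Python sketch; on path graphs it reproduces the Fibonacci numbers that give the parameter its name.

    from itertools import combinations

    def fibonacci_index(n, edges):
        """Count the stable (independent) sets of a graph on vertices 0..n-1,
        including the empty set."""
        count = 0
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                s = set(subset)
                if not any(u in s and v in s for u, v in edges):
                    count += 1
        return count

    # Path graphs P_1, P_2, ...: 2, 3, 5, 8, 13, ... (Fibonacci numbers).
    for n in range(1, 6):
        path = [(i, i + 1) for i in range(n - 1)]
        print(n, fibonacci_index(n, path))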
arxiv-5437
0811.1500
Linear Processing and Sum Throughput in the Multiuser MIMO Downlink
<|reference_start|>Linear Processing and Sum Throughput in the Multiuser MIMO Downlink: We consider linear precoding and decoding in the downlink of a multiuser multiple-input, multiple-output (MIMO) system, wherein each user may receive more than one data stream. We propose several mean squared error (MSE) based criteria for joint transmit-receive optimization and establish a series of relationships linking these criteria to the signal-to-interference-plus-noise ratios of individual data streams and the information theoretic channel capacity under linear minimum MSE decoding. In particular, we show that achieving the maximum sum throughput is equivalent to minimizing the product of MSE matrix determinants (PDetMSE). Since the PDetMSE minimization problem does not admit a computationally efficient solution, a simplified scalar version of the problem is considered that minimizes the product of mean squared errors (PMSE). An iterative algorithm is proposed to solve the PMSE problem, and is shown to provide near-optimal performance with greatly reduced computational complexity. Our simulations compare the achievable sum rates under linear precoding strategies to the sum capacity for the broadcast channel.<|reference_end|>
arxiv
@article{tenenbaum2008linear, title={Linear Processing and Sum Throughput in the Multiuser MIMO Downlink}, author={Adam J. Tenenbaum and Raviraj S. Adve}, journal={arXiv preprint arXiv:0811.1500}, year={2008}, archivePrefix={arXiv}, eprint={0811.1500}, primaryClass={cs.IT math.IT} }
tenenbaum2008linear
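The link the abstract above draws between sum throughput and MSE-matrix determinants has a standard single-user instance: with a linear MMSE receiver, the Gaussian mutual information equals -log2 det(E), where E is the MSE matrix. A quick numerical illustration of that identity; E is taken from its known closed form, so this demonstrates the algebraic relation, not the paper's multiuser derivation.

    import numpy as np

    rng = np.random.default_rng(1)
    nt, nr, sigma2 = 3, 4, 0.5
    H = rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))

    # MMSE error covariance for x ~ CN(0, I) observed as y = H x + noise.
    E = np.linalg.inv(np.eye(nt) + H.conj().T @ H / sigma2)

    rate_capacity = np.log2(np.linalg.det(np.eye(nt) + H.conj().T @ H / sigma2).real)
    rate_from_mse = -np.log2(np.linalg.det(E).real)
    print(rate_capacity, rate_from_mse)       # the two values coincide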
arxiv-5438
0811.1504
Parallel execution of portfolio optimization
<|reference_start|>Parallel execution of portfolio optimization: Analysis of asset liability management (ALM) strategies, especially over long time horizons, is a crucial issue for banks, funds and insurance companies. Modern economic models, investment strategies and optimization criteria make ALM studies a computationally very intensive task. This attracts attention to multiprocessor systems, and especially to the cheapest ones: multi-core PCs and PC clusters. In this article we analyze the problem of parallel organization of portfolio optimization, the results of using clusters for optimization, and the most efficient cluster architecture for these kinds of tasks.<|reference_end|>
arxiv
@article{nuriyev2008parallel, title={Parallel execution of portfolio optimization}, author={R. Nuriyev}, journal={arXiv preprint arXiv:0811.1504}, year={2008}, archivePrefix={arXiv}, eprint={0811.1504}, primaryClass={cs.DC} }
nuriyev2008parallel
arxiv-5439
0811.1520
Modeling Microscopic Chemical Sensors in Capillaries
<|reference_start|>Modeling Microscopic Chemical Sensors in Capillaries: Nanotechnology-based microscopic robots could provide accurate in vivo measurement of chemicals in the bloodstream for detailed biological research and as an aid to medical treatment. Quantitative performance estimates of such devices require models of how chemicals in the blood diffuse to the devices. This paper models microscopic robots and red blood cells (erythrocytes) in capillaries using realistic distorted cell shapes. The models evaluate two sensing scenarios: robots moving with the cells past a chemical source on the vessel wall, and robots attached to the wall for longer-term chemical monitoring. Using axial symmetric geometry with realistic flow speeds and diffusion coefficients, we compare detection performance with a simpler model that does not include the cells. The average chemical absorption is quantitatively similar in both models, indicating the simpler model is an adequate design guide to sensor performance in capillaries. However, determining the variation in forces and absorption as cells move requires the full model.<|reference_end|>
arxiv
@article{hogg2008modeling, title={Modeling Microscopic Chemical Sensors in Capillaries}, author={Tad Hogg}, journal={The Open Nanomedicine Journal 2:1-9 (2009)}, year={2008}, archivePrefix={arXiv}, eprint={0811.1520}, primaryClass={cs.RO physics.bio-ph q-bio.TO} }
hogg2008modeling
arxiv-5440
0811.1570
Constructions of Subsystem Codes over Finite Fields
<|reference_start|>Constructions of Subsystem Codes over Finite Fields: Subsystem codes protect quantum information by encoding it in a tensor factor of a subspace of the physical state space. Subsystem codes generalize all major quantum error protection schemes, and therefore are especially versatile. This paper introduces numerous constructions of subsystem codes. It is shown how one can derive subsystem codes from classical cyclic codes. Methods to trade the dimensions of subsystem and co-subsystem are introduced that maintain or improve the minimum distance. As a consequence, many optimal subsystem codes are obtained. Furthermore, it is shown how given subsystem codes can be extended, shortened, or combined to yield new subsystem codes. These subsystem code constructions are used to derive tables of upper and lower bounds on the subsystem code parameters.<|reference_end|>
arxiv
@article{aly2008constructions, title={Constructions of Subsystem Codes over Finite Fields}, author={Salah A. Aly, Andreas Klappenecker}, journal={arXiv preprint arXiv:0811.1570}, year={2008}, archivePrefix={arXiv}, eprint={0811.1570}, primaryClass={quant-ph cs.IT math.IT} }
aly2008constructions
arxiv-5441
0811.1618
Airport Gate Assignment: New Model and Implementation
<|reference_start|>Airport Gate Assignment: New Model and Implementation: Airport gate assignment is of great importance in airport operations. In this paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model, and implement the model with the Optimization Programming Language (OPL). With the objective to minimize the number of conflicts of any two adjacent aircraft assigned to the same gate, we build a mathematical model with logical constraints and binary constraints, which can provide an efficient evaluation criterion for airlines to estimate their current gate assignment. To illustrate the feasibility of the model, we construct experiments with data obtained from Continental Airlines at Houston George Bush Intercontinental Airport (IAH), which indicate that our model is both feasible and effective. Moreover, we interpret the experimental results, which further demonstrate that our proposed model can provide a powerful tool for airline companies to estimate the efficiency of their current gate assignments.<|reference_end|>
arxiv
@article{li2008airport, title={Airport Gate Assignment: New Model and Implementation}, author={Chendong Li}, journal={arXiv preprint arXiv:0811.1618}, year={2008}, archivePrefix={arXiv}, eprint={0811.1618}, primaryClass={cs.AI} }
li2008airport
arxiv-5442
0811.1629
Stability Bound for Stationary Phi-mixing and Beta-mixing Processes
<|reference_start|>Stability Bound for Stationary Phi-mixing and Beta-mixing Processes: Most generalization bounds in learning theory are based on some measure of the complexity of the hypothesis class used, independently of any algorithm. In contrast, the notion of algorithmic stability can be used to derive tight generalization bounds that are tailored to specific learning algorithms by exploiting their particular properties. However, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed. In many machine learning applications, however, this assumption does not hold. The observations received by the learning algorithm often have some inherent temporal dependence. This paper studies the scenario where the observations are drawn from a stationary phi-mixing or beta-mixing sequence, a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between observations weakening over time. We prove novel and distinct stability-based generalization bounds for stationary phi-mixing and beta-mixing sequences. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all stable learning algorithms, thereby extending the use of stability bounds to non-i.i.d. scenarios. We also illustrate the application of our phi-mixing generalization bounds to general classes of learning algorithms, including Support Vector Regression, Kernel Ridge Regression, and Support Vector Machines, and many other kernel regularization-based and relative entropy-based regularization algorithms. These novel bounds can thus be viewed as the first theoretical basis for the use of these algorithms in non-i.i.d. scenarios.<|reference_end|>
arxiv
@article{mohri2008stability, title={Stability Bound for Stationary Phi-mixing and Beta-mixing Processes}, author={Mehryar Mohri and Afshin Rostamizadeh}, journal={arXiv preprint arXiv:0811.1629}, year={2008}, archivePrefix={arXiv}, eprint={0811.1629}, primaryClass={cs.LG} }
mohri2008stability
arxiv-5443
0811.1664
Best-Effort Strategies for Losing States
<|reference_start|>Best-Effort Strategies for Losing States: We consider games played on finite graphs, whose goal is to obtain a trace belonging to a given set of winning traces. We focus on those states from which Player 1 cannot force a win. We explore and compare several criteria for establishing what is the preferable behavior of Player 1 from those states. Along the way, we prove several results of theoretical and practical interest, such as a characterization of admissible strategies, which also provides a simple algorithm for computing such strategies for various common goals, and the equivalence between the existence of positional winning strategies and the existence of positional subgame perfect strategies.<|reference_end|>
arxiv
@article{faella2008best-effort, title={Best-Effort Strategies for Losing States}, author={Marco Faella}, journal={arXiv preprint arXiv:0811.1664}, year={2008}, archivePrefix={arXiv}, eprint={0811.1664}, primaryClass={cs.GT} }
faella2008best-effort
arxiv-5444
0811.1693
Protection Schemes for Two Link Failures in Optical Networks
<|reference_start|>Protection Schemes for Two Link Failures in Optical Networks: In this paper we develop network protection schemes against two link failures in optical networks. The motivation behind this work is the fact that the majority of all available links in an optical network suffer from single and double link failures. In the proposed network protection schemes, NPS2-I and NPS2-II, we deploy network coding and reduced capacity on the working paths to provide backup protection paths. In addition, we demonstrate the encoding and decoding aspects of the proposed schemes.<|reference_end|>
arxiv
@article{aly2008protection, title={Protection Schemes for Two Link Failures in Optical Networks}, author={Salah A. Aly, Ahmed E. Kamal}, journal={Proc. of ICCTA, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0811.1693}, primaryClass={cs.IT cs.NI math.IT} }
aly2008protection
arxiv-5445
0811.1711
Artificial Intelligence Techniques for Steam Generator Modelling
<|reference_start|>Artificial Intelligence Techniques for Steam Generator Modelling: This paper investigates the use of different Artificial Intelligence methods to predict the values of several continuous variables from a Steam Generator. The objective was to determine how the different artificial intelligence methods performed in making predictions on the given dataset. The artificial intelligence methods evaluated were Neural Networks, Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks investigated were Multi-Layer Perceptrons and Radial Basis Function networks. Bayesian and committee techniques were applied to these neural networks. Each of the AI methods considered was simulated in Matlab. The results of the simulations showed that all the AI methods were capable of predicting the Steam Generator data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference System outperformed the other methods in terms of accuracy and ease of implementation, while still achieving a fast execution time as well as a reasonable training time.<|reference_end|>
arxiv
@article{wright2008artificial, title={Artificial Intelligence Techniques for Steam Generator Modelling}, author={Sarah Wright and Tshilidzi Marwala}, journal={arXiv preprint arXiv:0811.1711}, year={2008}, archivePrefix={arXiv}, eprint={0811.1711}, primaryClass={cs.AI} }
wright2008artificial
arxiv-5446
0811.1714
Efficient Multiplication of Dense Matrices over GF(2)
<|reference_start|>Efficient Multiplication of Dense Matrices over GF(2): We describe an efficient implementation of a hierarchy of algorithms for multiplication of dense matrices over the field with two elements (GF(2)). In particular we present our implementation -- in the M4RI library -- of Strassen-Winograd matrix multiplication and the "Method of the Four Russians" multiplication (M4RM) and compare it against other available implementations. Good performance is demonstrated on AMD's Opteron and particularly good performance on Intel's Core 2 Duo. The open-source M4RI library is available stand-alone as well as part of the Sage mathematics software. In machine terms, addition in GF(2) is logical-XOR, and multiplication is logical-AND, thus a machine word of 64 bits allows one to operate on 64 elements of GF(2) in parallel: at most one CPU cycle for 64 parallel additions or multiplications. As such, element-wise operations over GF(2) are relatively cheap. In fact, in this paper, we conclude that the actual bottlenecks are memory reads and writes and issues of data locality. We present our empirical findings in relation to minimizing these and give an analysis thereof.<|reference_end|>
arxiv
@article{albrecht2008efficient, title={Efficient Multiplication of Dense Matrices over GF(2)}, author={Martin Albrecht, Gregory Bard, William Hart}, journal={arXiv preprint arXiv:0811.1714}, year={2008}, doi={10.1145/1644001.1644010}, archivePrefix={arXiv}, eprint={0811.1714}, primaryClass={cs.MS} }
albrecht2008efficient
arxiv-5447
0811.1770
A Class of Transformations that Polarize Symmetric Binary-Input Memoryless Channels
<|reference_start|>A Class of Transformations that Polarize Symmetric Binary-Input Memoryless Channels: A generalization of Ar\i kan's polar code construction using transformations of the form $G^{\otimes n}$ where $G$ is an $\ell \times \ell$ matrix is considered. Necessary and sufficient conditions are given for these transformations to ensure channel polarization. It is shown that a large class of such transformations polarize symmetric binary-input memoryless channels.<|reference_end|>
arxiv
@article{korada2008a, title={A Class of Transformations that Polarize Symmetric Binary-Input Memoryless Channels}, author={Satish Babu Korada and Eren Sasoglu}, journal={arXiv preprint arXiv:0811.1770}, year={2008}, archivePrefix={arXiv}, eprint={0811.1770}, primaryClass={cs.IT math.IT} }
korada2008a
arxiv-5448
0811.1790
Robust Regression and Lasso
<|reference_start|>Robust Regression and Lasso: Lasso, or $\ell^1$ regularized least squares, has been explored extensively for its remarkable sparsity properties. It is shown in this paper that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Secondly, robustness can itself be used as an avenue to exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given using robustness directly. Finally, a theorem saying that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable, is presented.<|reference_end|>
arxiv
@article{xu2008robust, title={Robust Regression and Lasso}, author={Huan Xu, Constantine Caramanis and Shie Mannor}, journal={arXiv preprint arXiv:0811.1790}, year={2008}, archivePrefix={arXiv}, eprint={0811.1790}, primaryClass={cs.IT cs.LG math.IT} }
xu2008robust
arxiv-5449
0811.1825
A Divergence Formula for Randomness and Dimension
<|reference_start|>A Divergence Formula for Randomness and Dimension: If $S$ is an infinite sequence over a finite alphabet $\Sigma$ and $\beta$ is a probability measure on $\Sigma$, then the {\it dimension} of $S$ with respect to $\beta$, written $\dim^\beta(S)$, is a constructive version of Billingsley dimension that coincides with the (constructive Hausdorff) dimension $\dim(S)$ when $\beta$ is the uniform probability measure. This paper shows that $\dim^\beta(S)$ and its dual $\mathrm{Dim}^\beta(S)$, the {\it strong dimension} of $S$ with respect to $\beta$, can be used in conjunction with randomness to measure the similarity of two probability measures $\alpha$ and $\beta$ on $\Sigma$. Specifically, we prove that the {\it divergence formula} \[ \dim^\beta(R) = \mathrm{Dim}^\beta(R) = \frac{\mathcal{H}(\alpha)}{\mathcal{H}(\alpha) + \mathcal{D}(\alpha \| \beta)} \] holds whenever $\alpha$ and $\beta$ are computable, positive probability measures on $\Sigma$ and $R \in \Sigma^\infty$ is random with respect to $\alpha$. In this formula, $\mathcal{H}(\alpha)$ is the Shannon entropy of $\alpha$, and $\mathcal{D}(\alpha \| \beta)$ is the Kullback-Leibler divergence between $\alpha$ and $\beta$. We also show that the above formula holds for all sequences $R$ that are $\alpha$-normal (in the sense of Borel) when $\dim^\beta(R)$ and $\mathrm{Dim}^\beta(R)$ are replaced by the more effective finite-state dimensions $\dim_{\mathrm{FS}}^\beta(R)$ and $\mathrm{Dim}_{\mathrm{FS}}^\beta(R)$. In the course of proving this, we also prove finite-state compression characterizations of $\dim_{\mathrm{FS}}^\beta(S)$ and $\mathrm{Dim}_{\mathrm{FS}}^\beta(S)$.<|reference_end|>
arxiv
@article{lutz2008a, title={A Divergence Formula for Randomness and Dimension}, author={Jack H. Lutz}, journal={arXiv preprint arXiv:0811.1825}, year={2008}, archivePrefix={arXiv}, eprint={0811.1825}, primaryClass={cs.CC cs.IT math.IT} }
lutz2008a
arxiv-5450
0811.1859
A Basic Framework for the Cryptanalysis of Digital Chaos-Based Cryptography
<|reference_start|>A Basic Framework for the Cryptanalysis of Digital Chaos-Based Cryptography: Chaotic cryptography is based on the properties of chaos as a source of entropy. Many different schemes have been proposed to take advantage of those properties and to design new strategies to encrypt information. However, the right and efficient use of chaos in the context of cryptography requires a thorough knowledge of the dynamics of the selected chaotic system. Indeed, if the final encryption system reveals enough information about the underlying chaotic system, it could be possible for a cryptanalyst to obtain the key, part of the key, or some information equivalent to the key just by analyzing the dynamical properties leaked by the cryptosystem. This paper shows what those dynamical properties are and how a cryptanalyst can use them to prove the inadequacy of an encryption system for the secure exchange of information. This study is performed through the introduction of a series of mathematical tools which should form the basic framework of cryptanalysis in the context of digital chaos-based cryptography.<|reference_end|>
arxiv
@article{arroyo2008a, title={A Basic Framework for the Cryptanalysis of Digital Chaos-Based Cryptography}, author={David Arroyo and Gonzalo Alvarez and Veronica Fernandez}, journal={arXiv preprint arXiv:0811.1859}, year={2008}, archivePrefix={arXiv}, eprint={0811.1859}, primaryClass={cs.CR} }
arroyo2008a
arxiv-5451
0811.1868
Necessary Conditions for Discontinuities of Multidimensional Size Functions
<|reference_start|>Necessary Conditions for Discontinuities of Multidimensional Size Functions: Some new results about multidimensional Topological Persistence are presented, proving that the discontinuity points of a k-dimensional size function are necessarily related to the pseudocritical or special values of the associated measuring function.<|reference_end|>
arxiv
@article{cerri2008necessary, title={Necessary Conditions for Discontinuities of Multidimensional Size Functions}, author={Andrea Cerri and Patrizio Frosini}, journal={arXiv preprint arXiv:0811.1868}, year={2008}, archivePrefix={arXiv}, eprint={0811.1868}, primaryClass={cs.CG cs.CV math.AT} }
cerri2008necessary
arxiv-5452
0811.1875
Exact Exponential Time Algorithms for Max Internal Spanning Tree
<|reference_start|>Exact Exponential Time Algorithms for Max Internal Spanning Tree: We consider the NP-hard problem of finding a spanning tree with a maximum number of internal vertices. This problem is a generalization of the famous Hamiltonian Path problem. Our dynamic-programming algorithms for general and degree-bounded graphs have running times of the form O*(c^n) (c <= 3). The main result, however, is a branching algorithm for graphs with maximum degree three. It only needs polynomial space and has a running time of O*(1.8669^n) when analyzed with respect to the number of vertices. We also show that its running time is 2.1364^k n^O(1) when the goal is to find a spanning tree with at least k internal vertices. Both running time bounds are obtained via a Measure & Conquer analysis, the latter one being a novel use of this kind of analyses for parameterized algorithms.<|reference_end|>
arxiv
@article{fernau2008exact, title={Exact Exponential Time Algorithms for Max Internal Spanning Tree}, author={Henning Fernau, Serge Gaspers, Daniel Raible}, journal={arXiv preprint arXiv:0811.1875}, year={2008}, archivePrefix={arXiv}, eprint={0811.1875}, primaryClass={cs.DS cs.DM} }
fernau2008exact
arxiv-5453
0811.1878
Action Theory Evolution
<|reference_start|>Action Theory Evolution: Like any other logical theory, domain descriptions in reasoning about actions may evolve, and thus need revision methods to adequately accommodate new information about the behavior of actions. The present work is about changing action domain descriptions in propositional dynamic logic. Its contribution is threefold: first, we revisit the semantics of action theory contraction developed in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke models. Second, we give algorithms for syntactical action theory contraction and establish their correctness w.r.t. our semantics. Finally, we state postulates for action theory contraction and assess the behavior of our operators w.r.t. them. We also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.<|reference_end|>
arxiv
@article{varzinczak2008action, title={Action Theory Evolution}, author={Ivan Varzinczak}, journal={arXiv preprint arXiv:0811.1878}, year={2008}, archivePrefix={arXiv}, eprint={0811.1878}, primaryClass={cs.AI cs.LO} }
varzinczak2008action
arxiv-5454
0811.1882
Ferrers Dimension and Boxicity
<|reference_start|>Ferrers Dimension and Boxicity: This note explores the relation between the boxicity of undirected graphs and the Ferrers dimension of digraphs.<|reference_end|>
arxiv
@article{chatterjee2008ferrers, title={Ferrers Dimension and Boxicity}, author={Soumyottam Chatterjee and Shamik Ghosh}, journal={arXiv preprint arXiv:0811.1882}, year={2008}, archivePrefix={arXiv}, eprint={0811.1882}, primaryClass={cs.DM} }
chatterjee2008ferrers
arxiv-5455
0811.1885
The Expressive Power of Binary Submodular Functions
<|reference_start|>The Expressive Power of Binary Submodular Functions: It has previously been an open problem whether all Boolean submodular functions can be decomposed into a sum of binary submodular functions over a possibly larger set of variables. This problem has been considered within several different contexts in computer science, including computer vision, artificial intelligence, and pseudo-Boolean optimisation. Using a connection between the expressive power of valued constraints and certain algebraic properties of functions, we answer this question negatively. Our results have several corollaries. First, we characterise precisely which submodular functions of arity 4 can be expressed by binary submodular functions. Next, we identify a novel class of submodular functions of arbitrary arities which can be expressed by binary submodular functions, and therefore minimised efficiently using a so-called expressibility reduction to the Min-Cut problem. More importantly, our results imply limitations on this kind of reduction and establish for the first time that it cannot be used in general to minimise arbitrary submodular functions. Finally, we refute a conjecture of Promislow and Young on the structure of the extreme rays of the cone of Boolean submodular functions.<|reference_end|>
arxiv
@article{zivny2008the, title={The Expressive Power of Binary Submodular Functions}, author={Stanislav Zivny, David A. Cohen, Peter G. Jeavons}, journal={Discrete Applied Mathematics 157(15) (2009) 3347-3358}, year={2008}, doi={10.1016/j.dam.2009.07.001}, archivePrefix={arXiv}, eprint={0811.1885}, primaryClass={cs.DM cs.AI cs.CV} }
zivny2008the
arxiv-5456
0811.1914
A TLA+ Proof System
<|reference_start|>A TLA+ Proof System: We describe an extension to the TLA+ specification language with constructs for writing proofs, and a proof environment, called the Proof Manager (PM), to check those proofs. The language and the PM support the incremental development and checking of hierarchically structured proofs. The PM translates a proof into a set of independent proof obligations and calls upon a collection of back-end provers to verify them. Different provers can be used to verify different obligations. The currently supported back-ends are the tableau prover Zenon and Isabelle/TLA+, an axiomatisation of TLA+ in Isabelle/Pure. The proof obligations for a complete TLA+ proof can also be used to certify the theorem in Isabelle/TLA+.<|reference_end|>
arxiv
@article{chaudhuri2008a, title={A TLA+ Proof System}, author={Kaustuv C. Chaudhuri (MRI), Damien Doligez (INRIA Rocquencourt), Leslie Lamport, Stephan Merz (INRIA Lorraine - LORIA)}, journal={Knowledge Exchange: Automated Provers and Proof Assistants (KEAPPA) (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0811.1914}, primaryClass={cs.LO} }
chaudhuri2008a
arxiv-5457
0811.1947
Pilotage des processus collaboratifs dans les syst\`emes PLM. Quels indicateurs pour quelle \'evaluation des performances ?
<|reference_start|>Pilotage des processus collaboratifs dans les syst\`emes PLM. Quels indicateurs pour quelle \'evaluation des performances ?: Companies that collaborate within product development processes need to implement effective management of their collaborative activities. Despite the implementation of a PLM system, collaborative activities are not as efficient as might be expected. This paper presents an analysis of the problems related to collaborative work using a PLM system, identified through a survey. From this analysis, we propose an approach for improving collaborative processes within a PLM system, based on monitoring indicators. This approach makes it possible to identify, and therefore to mitigate, the obstacles to collaborative work.<|reference_end|>
arxiv
@article{elkadiri2008pilotage, title={Pilotage des processus collaboratifs dans les syst\`emes PLM. Quels indicateurs pour quelle \'evaluation des performances ?}, author={Soumaya Elkadiri (LIESP), Philippe Pernelle (LIESP), Miguel Delattre (LIESP), Abdelaziz Bouras (LIESP)}, journal={1er Congr\`es des innovations m\'ecaniques CIM'08, Sousse : Tunisie (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0811.1947}, primaryClass={cs.SE} }
elkadiri2008pilotage
arxiv-5458
0811.1950
Collaborative process control: Observation of tracks generated by PLM system
<|reference_start|>Collaborative process control: Observation of tracks generated by PLM system: This paper aims at analyzing the problems related to collaborative work using a PLM system. This research is mainly focused on the organisational aspects of SMEs involved in networks composed of large companies, subcontractors and other industrial partners. From this analysis, we propose the deployment of an approach based on observing the tracks generated by the PLM system. The specific contributions are twofold. The first is to identify the obstacles to collaborative work. The second, thanks to the exploitation of the generated tracks, is to reduce risks by reacting in real time to the incidents or malfunctions that may occur. The overall system architecture, based on services technology and supporting the proposed approach, is described, as well as an associated prototype developed using an industrial PLM system.<|reference_end|>
arxiv
@article{elkadiri2008collaborative, title={Collaborative process control: Observation of tracks generated by PLM system}, author={Soumaya Elkadiri (LIESP), Philippe Pernelle (LIESP), Miguel Delattre (LIESP), Abdelaziz Bouras (LIESP)}, journal={APMS 2008 - Innovations in Networks, Espoo : Finlande (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0811.1950}, primaryClass={cs.SE} }
elkadiri2008collaborative
arxiv-5459
0811.1959
Characterization and collection of information from heterogeneous multimedia sources with users' parameters for decision support
<|reference_start|>Characterization and collection of information from heterogeneous multimedia sources with users' parameters for decision support: No single information source can be good enough to satisfy the divergent and dynamic needs of users all the time. Integrating information from divergent sources can be a solution to deficiencies in information content. We present how information from multimedia documents can be collected by associating a generic database with a federated database. Information collected in this way is made relevant by integrating usage parameters and user parameters for decision making. We identified seven different classifications of multimedia documents.<|reference_end|>
arxiv
@article{robert2008characterization, title={Characterization and collection of information from heterogeneous multimedia sources with users' parameters for decision support}, author={Charles A. B. Robert (LORIA)}, journal={arXiv preprint arXiv:0811.1959}, year={2008}, archivePrefix={arXiv}, eprint={0811.1959}, primaryClass={cs.MM} }
robert2008characterization
arxiv-5460
0811.1974
Magic Fairy Tales as Source for Interface Metaphors
<|reference_start|>Magic Fairy Tales as Source for Interface Metaphors: This work is devoted to the problem of finding metaphors for interactive systems and for systems based on Virtual Reality (VR) environments. An analysis of magic fairy tales as a source of metaphors for interfaces and virtual reality is offered. Some results of a design process based on magic metaphors are considered.<|reference_end|>
arxiv
@article{averbukh2008magic, title={Magic Fairy Tales as Source for Interface Metaphors}, author={Vladimir L. Averbukh}, journal={arXiv preprint arXiv:0811.1974}, year={2008}, archivePrefix={arXiv}, eprint={0811.1974}, primaryClass={cs.HC} }
averbukh2008magic
arxiv-5461
0811.1976
Coalgebraic Automata Theory: Basic Results
<|reference_start|>Coalgebraic Automata Theory: Basic Results: We generalize some of the central results in automata theory to the abstraction level of coalgebras and thus lay out the foundations of a universal theory of automata operating on infinite objects. Let F be any set functor that preserves weak pullbacks. We show that the class of recognizable languages of F-coalgebras is closed under taking unions, intersections, and projections. We also prove that if a nondeterministic F-automaton accepts some coalgebra, it accepts a finite one of the size of the automaton. Our main technical result concerns an explicit construction which transforms a given alternating F-automaton into an equivalent nondeterministic one, whose size is exponentially bounded by the size of the original automaton.<|reference_end|>
arxiv
@article{kupke2008coalgebraic, title={Coalgebraic Automata Theory: Basic Results}, author={C. Kupke, Y. Venema}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (November 21, 2008) lmcs:1203}, year={2008}, doi={10.2168/LMCS-4(4:10)2008}, archivePrefix={arXiv}, eprint={0811.1976}, primaryClass={cs.LO} }
kupke2008coalgebraic
arxiv-5462
0811.2016
Land Cover Mapping Using Ensemble Feature Selection Methods
<|reference_start|>Land Cover Mapping Using Ensemble Feature Selection Methods: Ensemble classification is an emerging approach to land cover mapping whereby the final classification output is a result of a consensus of classifiers. Intuitively, an ensemble system should consist of base classifiers which are diverse, i.e., classifiers whose decision boundaries err differently. In this paper, ensemble feature selection is used to impose diversity in ensembles. The features of the constituent base classifiers for each ensemble were created through an exhaustive search algorithm using different separability indices. For each ensemble, the classification accuracy was derived, as well as a diversity measure purported to give a measure of the in-ensemble diversity. The correlation between ensemble classification accuracy and the diversity measure was determined to establish the interplay between the two variables. From the findings of this paper, diversity measures as currently formulated do not provide an adequate means upon which to constitute ensembles for land cover mapping.<|reference_end|>
arxiv
@article{gidudu2008land, title={Land Cover Mapping Using Ensemble Feature Selection Methods}, author={A. Gidudu, B. Abe and T. Marwala}, journal={arXiv preprint arXiv:0811.2016}, year={2008}, archivePrefix={arXiv}, eprint={0811.2016}, primaryClass={cs.LG} }
gidudu2008land
arxiv-5463
0811.2055
GPU-Based Interactive Visualization of Billion Point Cosmological Simulations
<|reference_start|>GPU-Based Interactive Visualization of Billion Point Cosmological Simulations: Despite the recent advances in graphics hardware capabilities, a brute force approach is incapable of interactively displaying terabytes of data. We have implemented a system that uses hierarchical level-of-detail rendering for the results of cosmological simulations, in order to display visually accurate results without loading the full dataset (containing over 10 billion points). The guiding principle of the program is that the user should not be able to distinguish what they are seeing from a full rendering of the original data. Furthermore, by using a tree-based system for levels of detail, the size of the underlying data is limited only by the capacity of the IO system containing it.<|reference_end|>
arxiv
@article{szalay2008gpu-based, title={GPU-Based Interactive Visualization of Billion Point Cosmological Simulations}, author={Tamas Szalay, Volker Springel, Gerard Lemson}, journal={arXiv preprint arXiv:0811.2055}, year={2008}, archivePrefix={arXiv}, eprint={0811.2055}, primaryClass={cs.GR astro-ph} }
szalay2008gpu-based
arxiv-5464
0811.2113
Compactly accessible categories and quantum key distribution
<|reference_start|>Compactly accessible categories and quantum key distribution: Compact categories have lately seen renewed interest via applications to quantum physics. Being essentially finite-dimensional, they cannot accommodate (co)limit-based constructions. For example, they cannot capture protocols such as quantum key distribution that rely on the law of large numbers. To overcome this limitation, we introduce the notion of a compactly accessible category, relying on the extra structure of a factorisation system. This notion allows for infinite dimension while retaining key properties of compact categories: the main technical result is that the choice-of-duals functor on the compact part extends canonically to the whole compactly accessible category. As an example, we model a quantum key distribution protocol and prove its correctness categorically.<|reference_end|>
arxiv
@article{heunen2008compactly, title={Compactly accessible categories and quantum key distribution}, author={Chris Heunen}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (November 17, 2008) lmcs:1129}, year={2008}, doi={10.2168/LMCS-4(4:9)2008}, archivePrefix={arXiv}, eprint={0811.2113}, primaryClass={cs.LO cs.PL quant-ph} }
heunen2008compactly
arxiv-5465
0811.2117
Disjunctive Databases for Representing Repairs
<|reference_start|>Disjunctive Databases for Representing Repairs: This paper addresses the problem of representing the set of repairs of a possibly inconsistent database by means of a disjunctive database. Specifically, the class of denial constraints is considered. We show that, given a database and a set of denial constraints, there exists a (unique) disjunctive database, called canonical, which represents the repairs of the database w.r.t. the constraints and is contained in any other disjunctive database with the same set of minimal models. We propose an algorithm for computing the canonical disjunctive database. Finally, we study the size of the canonical disjunctive database in the presence of functional dependencies for both repairs and cardinality-based repairs.<|reference_end|>
arxiv
@article{molinaro2008disjunctive, title={Disjunctive Databases for Representing Repairs}, author={Cristian Molinaro, Jan Chomicki, Jerzy Marcinkowski}, journal={arXiv preprint arXiv:0811.2117}, year={2008}, archivePrefix={arXiv}, eprint={0811.2117}, primaryClass={cs.DB} }
molinaro2008disjunctive
arxiv-5466
0811.2180
On the long time behavior of the TCP window size process
<|reference_start|>On the long time behavior of the TCP window size process: The TCP window size process appears in the modeling of the famous Transmission Control Protocol used for data transmission over the Internet. This continuous time Markov process takes its values in $[0,\infty)$, is ergodic and irreversible. It belongs to the Additive Increase Multiplicative Decrease class of processes. The sample paths are piecewise linear deterministic and the whole randomness of the dynamics comes from the jump mechanism. Several aspects of this process have already been investigated in the literature. In the present paper, we mainly get quantitative estimates for the convergence to equilibrium, in terms of the $W_1$ Wasserstein coupling distance, for the process and also for its embedded chain.<|reference_end|>
arxiv
@article{chafai2008on, title={On the long time behavior of the TCP window size process}, author={Djalil Chafai (LAMA), Florent Malrieu (IRMAR), Katy Paroux (LM-Besan\c{c}on, INRIA - IRISA)}, journal={Stochastic Processes and their Applications 120, 8 (2010) 1518-1534}, year={2008}, doi={10.1016/j.spa.2010.03.019}, archivePrefix={arXiv}, eprint={0811.2180}, primaryClass={math.PR cs.NI} }
chafai2008on
arxiv-5467
0811.2198
The Church Problem for Countable Ordinals
<|reference_start|>The Church Problem for Countable Ordinals: A fundamental theorem of B\"uchi and Landweber shows that the Church synthesis problem is computable. B\"uchi and Landweber reduced the Church Problem to problems about $\omega$-games and used the determinacy of such games as one of the main tools to show its computability. We consider a natural generalization of the Church problem to countable ordinals and investigate games of arbitrary countable length. We prove that the determinacy and decidability parts of the B\"uchi and Landweber theorem hold for all countable ordinals and that its full extension holds for all ordinals $< \omega^\omega$.<|reference_end|>
arxiv
@article{rabinovich2008the, title={The Church Problem for Countable Ordinals}, author={Alexander Rabinovich}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (April 27, 2009) lmcs:1204}, year={2008}, doi={10.2168/LMCS-5(2:5)2009}, archivePrefix={arXiv}, eprint={0811.2198}, primaryClass={cs.LO} }
rabinovich2008the
arxiv-5468
0811.2201
Fast Maximum-Likelihood Decoding of the Golden Code
<|reference_start|>Fast Maximum-Likelihood Decoding of the Golden Code: The golden code is a full-rate full-diversity space-time code for two transmit antennas that has a maximal coding gain. Because each codeword conveys four information symbols from an M-ary quadrature-amplitude modulation alphabet, the complexity of an exhaustive search decoder is proportional to M^2. In this paper we present a new fast algorithm for maximum-likelihood decoding of the golden code that has a worst-case complexity of only O(2M^2.5). We also present an efficient implementation of the fast decoder that exhibits a low average complexity. Finally, in contrast to the overlaid Alamouti codes, which lose their fast decodability property on time-varying channels, we show that the golden code is fast decodable on both quasistatic and rapid time-varying channels.<|reference_end|>
arxiv
@article{sinnokrot2008fast, title={Fast Maximum-Likelihood Decoding of the Golden Code}, author={Mohanned O. Sinnokrot and John R. Barry}, journal={arXiv preprint arXiv:0811.2201}, year={2008}, archivePrefix={arXiv}, eprint={0811.2201}, primaryClass={cs.IT math.IT} }
sinnokrot2008fast
arxiv-5469
0811.2250
Semantics and Evaluation of Top-k Queries in Probabilistic Databases
<|reference_start|>Semantics and Evaluation of Top-k Queries in Probabilistic Databases: We study here fundamental issues involved in top-k query evaluation in probabilistic databases. We consider simple probabilistic databases in which probabilities are associated with individual tuples, and general probabilistic databases in which, additionally, exclusivity relationships between tuples can be represented. In contrast to other recent research in this area, we do not limit ourselves to injective scoring functions. We formulate three intuitive postulates that the semantics of top-k queries in probabilistic databases should satisfy, and introduce a new semantics, Global-Topk, that satisfies those postulates to a large degree. We also show how to evaluate queries under the Global-Topk semantics. For simple databases we design dynamic-programming based algorithms, and for general databases we show polynomial-time reductions to the simple cases. For example, we demonstrate that for a fixed k the time complexity of top-k query evaluation is as low as linear, under the assumption that probabilistic databases are simple and scoring functions are injective.<|reference_end|>
arxiv
@article{zhang2008semantics, title={Semantics and Evaluation of Top-k Queries in Probabilistic Databases}, author={Xi Zhang and Jan Chomicki}, journal={arXiv preprint arXiv:0811.2250}, year={2008}, archivePrefix={arXiv}, eprint={0811.2250}, primaryClass={cs.DB} }
zhang2008semantics
arxiv-5470
0811.2306
Multipath Amplification of Chaotic Radio Pulses and UWB Communications
<|reference_start|>Multipath Amplification of Chaotic Radio Pulses and UWB Communications: An effect of multipath amplification is found in ultrawideband wireless communication systems with a chaotic carrier, in which information is transmitted with chaotic radio pulses. This effect is observed in multipath environments (residential, office, industrial, or other indoor spaces). It manifests itself as an increase of signal power at the receiver input with respect to the case of free space. The multipath amplification effect gives a 5-15 dB energy gain (depending on the environment), which allows a 2-6 times longer distance range for the same transmitter power.<|reference_end|>
arxiv
@article{andreyev2008multipath, title={Multipath Amplification of Chaotic Radio Pulses and UWB Communications}, author={Yuri V. Andreyev, Alexander S. Dmitriev (Member, IEEE), Andrey V. Kletsov}, journal={arXiv preprint arXiv:0811.2306}, year={2008}, archivePrefix={arXiv}, eprint={0811.2306}, primaryClass={nlin.CD cs.NI} }
andreyev2008multipath
arxiv-5471
0811.2356
The List-Decoding Size of Reed-Muller Codes
<|reference_start|>The List-Decoding Size of Reed-Muller Codes: In this work we study the list-decoding size of Reed-Muller codes. Given a received word and a distance parameter, we are interested in bounding the size of the list of Reed-Muller codewords that are within that distance from the received word. Previous bounds of Gopalan, Klivans and Zuckerman \cite{GKZ08} on the list size of Reed-Muller codes apply only up to the minimum distance of the code. In this work we provide asymptotic bounds for the list-decoding size of Reed-Muller codes that apply for {\em all} distances. Additionally, we study the weight distribution of Reed-Muller codes. Prior results of Kasami and Tokura \cite{KT70} on the structure of Reed-Muller codewords up to twice the minimum distance imply bounds on the weight distribution of the code that apply only up to twice the minimum distance. We provide accumulative bounds for the weight distribution of Reed-Muller codes that apply to {\em all} distances.<|reference_end|>
arxiv
@article{kaufman2008the, title={The List-Decoding Size of Reed-Muller Codes}, author={Tali Kaufman, Shachar Lovett}, journal={arXiv preprint arXiv:0811.2356}, year={2008}, archivePrefix={arXiv}, eprint={0811.2356}, primaryClass={cs.IT cs.DM math.IT} }
kaufman2008the
arxiv-5472
0811.2403
Composite CDMA - A statistical mechanics analysis
<|reference_start|>Composite CDMA - A statistical mechanics analysis: Code Division Multiple Access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive as it provides robustness and flexibility in utilising multi-access channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined with a focus on the high multi-access interference regime. In both the large size limit and finite systems we demonstrate scenarios in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.<|reference_end|>
arxiv
@article{raymond2008composite, title={Composite CDMA - A statistical mechanics analysis}, author={Jack Raymond and David Saad}, journal={J. Stat. Mech. (2009) P05015}, year={2008}, doi={10.1088/1742-5468/2009/05/P05015}, archivePrefix={arXiv}, eprint={0811.2403}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT} }
raymond2008composite
arxiv-5473
0811.2457
Perfect Matchings via Uniform Sampling in Regular Bipartite Graphs
<|reference_start|>Perfect Matchings via Uniform Sampling in Regular Bipartite Graphs: In this paper we further investigate the well-studied problem of finding a perfect matching in a regular bipartite graph. The first non-trivial algorithm, with running time $O(mn)$, dates back to K\"{o}nig's work in 1916 (here $m=nd$ is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is the degree of each node). The currently most efficient algorithm takes time $O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than $O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform sampling theorem: if we sample each edge in a $d$-regular bipartite graph independently with a probability $p = O(\frac{n\ln n}{d^2})$ then the resulting graph has a perfect matching with high probability. The proof involves a decomposition of the graph into pieces which are guaranteed to have many perfect matchings but do not have any small cuts. We then establish a correspondence between potential witnesses to non-existence of a matching (after sampling) in any piece and cuts of comparable size in that same piece. Karger's sampling theorem for preserving cuts in a graph can now be adapted to prove our uniform sampling theorem for preserving perfect matchings. Using the $O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum matchings in bipartite graphs on the sampled graph then yields the stated running time. We also provide an infinite family of instances to show that our uniform sampling result is tight up to poly-logarithmic factors (in fact, up to $\ln^2 n$).<|reference_end|>
arxiv
@article{goel2008perfect, title={Perfect Matchings via Uniform Sampling in Regular Bipartite Graphs}, author={Ashish Goel, Michael Kapralov, Sanjeev Khanna}, journal={arXiv preprint arXiv:0811.2457}, year={2008}, archivePrefix={arXiv}, eprint={0811.2457}, primaryClass={cs.DS cs.DM} }
goel2008perfect
arxiv-5474
0811.2497
Computing voting power in easy weighted voting games
<|reference_start|>Computing voting power in easy weighted voting games: Weighted voting games are ubiquitous mathematical models which are used in economics, political science, neuroscience, threshold logic, reliability theory and distributed systems. They model situations where agents with variable voting weight vote in favour of or against a decision. A coalition of agents is winning if and only if the sum of weights of the coalition exceeds or equals a specified quota. The Banzhaf index is a measure of the voting power of an agent in a weighted voting game. It depends on the number of coalitions in which the agent makes the difference between the coalition winning and losing. It is well known that computing Banzhaf indices in a weighted voting game is NP-hard. We give a comprehensive classification of weighted voting games which can be solved in polynomial time. Among other results, we provide a polynomial ($O(k{(\frac{n}{k})}^k)$) algorithm to compute the Banzhaf indices in weighted voting games in which the number of weight values is bounded by $k$.<|reference_end|>
arxiv
@article{aziz2008computing, title={Computing voting power in easy weighted voting games}, author={Haris Aziz and Mike Paterson}, journal={arXiv preprint arXiv:0811.2497}, year={2008}, archivePrefix={arXiv}, eprint={0811.2497}, primaryClass={cs.GT cs.CC cs.DS} }
aziz2008computing
arxiv-5475
0811.2518
Gaussian Belief Propagation: Theory and Application
<|reference_start|>Gaussian Belief Propagation: Theory and Application: The canonical problem of solving a system of linear equations arises in numerous contexts in information theory, communication theory, and related fields. In this contribution, we develop a solution based upon Gaussian belief propagation (GaBP) that does not involve direct matrix inversion. The iterative nature of our approach allows for a distributed message-passing implementation of the solution algorithm. In the first part of this thesis, we address the properties of the GaBP solver. We characterize the rate of convergence, enhance its message-passing efficiency by introducing a broadcast version, and discuss its relation to classical solution methods, with numerical examples. We present a new method for forcing the GaBP algorithm to converge to the correct solution for arbitrary column-dependent matrices. In the second part we give five applications to illustrate the applicability of the GaBP algorithm to very large computer networks: peer-to-peer rating, linear detection, distributed computation of support vector regression, efficient computation of the Kalman filter, and distributed linear programming. Using extensive simulations on up to 1,024 CPUs in parallel on an IBM BlueGene supercomputer, we demonstrate the attractiveness and applicability of the GaBP algorithm on real network topologies with up to millions of nodes and hundreds of millions of communication links. We further relate the GaBP algorithm to several other algorithms and explore their connections.<|reference_end|>
arxiv
@article{bickson2008gaussian, title={Gaussian Belief Propagation: Theory and Application}, author={Danny Bickson}, journal={arXiv preprint arXiv:0811.2518}, year={2008}, archivePrefix={arXiv}, eprint={0811.2518}, primaryClass={cs.IT math.IT} }
bickson2008gaussian
arxiv-5476
0811.2519
Origins of Modern Data Analysis Linked to the Beginnings and Early Development of Computer Science and Information Engineering
<|reference_start|>Origins of Modern Data Analysis Linked to the Beginnings and Early Development of Computer Science and Information Engineering: The history of data analysis that is addressed here is underpinned by two themes: tabular data analysis, and the analysis of collected heterogeneous data. "Exploratory data analysis" is taken as the heuristic approach that begins with data and information and seeks underlying explanation for what is observed or measured. I also cover some of the evolving context of research and applications, including scholarly publishing, technology transfer and the economic relationship of the university to society.<|reference_end|>
arxiv
@article{murtagh2008origins, title={Origins of Modern Data Analysis Linked to the Beginnings and Early Development of Computer Science and Information Engineering}, author={Fionn Murtagh}, journal={Electronic Journal for History of Probability and Statistics, Vol. 4, no. 2, Dec. 2008}, year={2008}, archivePrefix={arXiv}, eprint={0811.2519}, primaryClass={cs.CY cs.DL} }
murtagh2008origins
arxiv-5477
0811.2525
Amendment to "Performance Analysis of the V-BLAST Algorithm: An Analytical Approach" [1]
<|reference_start|>Amendment to "Performance Analysis of the V-BLAST Algorithm: An Analytical Approach" [1]: An analytical technique for the outage and BER analysis of the nx2 V-BLAST algorithm with the optimal ordering has been presented in [1], including closed-form exact expressions for average BER and outage probabilities, and simple high-SNR approximations. The analysis in [1] is based on the following essential approximations: 1. The SNR was defined in terms of total after-projection signal and noise powers, and the BER was analyzed based on their ratio. This corresponds to a non-coherent (power-wise) equal-gain combining of both the signal and the noise, and it is not optimum since it does not provide the maximum output SNR. 2. The definition of the total after-projection noise power at each step ignored the fact that the after-projection noise vector had correlated components. 3. The after-combining noises at different steps (and hence the errors) were implicitly assumed to be independent of each other. Under non-coherent equal-gain combining, that is not the case. It turns out that the results in [1] also hold true without these approximations, subject to minor modifications only. The purpose of this note is to show this and also to extend the average BER results in [1] to the case of BPSK-modulated V-BLAST with more than two Rx antennas (eq. 18-20). Additionally, we emphasize that the block error rate is dominated by the first-step BER in the high-SNR regime (eq. 14 and 21).<|reference_end|>
arxiv
@article{loyka2008amendment, title={Amendment to "Performance Analysis of the V-BLAST Algorithm: An Analytical Approach." [1]}, author={Sergey Loyka, Francois Gagnon}, journal={arXiv preprint arXiv:0811.2525}, year={2008}, archivePrefix={arXiv}, eprint={0811.2525}, primaryClass={cs.IT math.IT} }
loyka2008amendment
arxiv-5478
0811.2535
A Transformation--Based Approach for the Design of Parallel/Distributed Scientific Software: the FFT
<|reference_start|>A Transformation--Based Approach for the Design of Parallel/Distributed Scientific Software: the FFT: We describe a methodology for designing efficient parallel and distributed scientific software. This methodology utilizes sequences of mechanizable algebra--based optimizing transformations. In this study, we apply our methodology to the FFT, starting from a high--level algebraic algorithm description. Abstract multiprocessor plans are developed and refined to specify which computations are to be done by each processor. Templates are then created that specify the locations of computations and data on the processors, as well as data flow among processors. Templates are developed in both the MPI and OpenMP programming styles. Preliminary experiments comparing code constructed using our methodology with code from several standard scientific libraries show that our code is often competitive and sometimes performs better. Interestingly, our code handled a larger range of problem sizes on one target architecture.<|reference_end|>
arxiv
@article{hunt2008a, title={A Transformation--Based Approach for the Design of Parallel/Distributed Scientific Software: the FFT}, author={Harry B. Hunt, Lenore R. Mullin, Daniel J. Rosenkrantz, and James E. Raynolds}, journal={arXiv preprint arXiv:0811.2535}, year={2008}, archivePrefix={arXiv}, eprint={0811.2535}, primaryClass={cs.SE cs.PL} }
hunt2008a
arxiv-5479
0811.2546
Phase transition for Local Search on planted SAT
<|reference_start|>Phase transition for Local Search on planted SAT: The Local Search algorithm (or Hill Climbing, or Iterative Improvement) is one of the simplest heuristics to solve the Satisfiability and Max-Satisfiability problems. It is a part of many satisfiability and max-satisfiability solvers, where it is used to find a good starting point for more sophisticated heuristics, and to improve a candidate solution. In this paper we give an analysis of Local Search on random planted 3-CNF formulas. We show that if there is k<7/6 such that the clause-to-variable ratio is less than k ln(n) (n is the number of variables in a CNF), then Local Search whp (with high probability) does not find a satisfying assignment, and if there is k>7/6 such that the clause-to-variable ratio is greater than k ln(n), then Local Search whp finds a satisfying assignment. As a byproduct we also show that for any constant r there is g such that Local Search applied to a random (not necessarily planted) 3-CNF with clause-to-variable ratio r produces an assignment that satisfies at least gn fewer clauses than the maximum number of satisfiable clauses.<|reference_end|>
arxiv
@article{bulatov2008phase, title={Phase transition for Local Search on planted SAT}, author={Andrei A. Bulatov, Evgeny S. Skvortsov}, journal={arXiv preprint arXiv:0811.2546}, year={2008}, archivePrefix={arXiv}, eprint={0811.2546}, primaryClass={cs.DS cs.LO} }
bulatov2008phase
arxiv-5480
0811.2551
Modeling Cultural Dynamics
<|reference_start|>Modeling Cultural Dynamics: EVOC (for EVOlution of Culture) is a computer model of culture that enables us to investigate how various factors such as barriers to cultural diffusion, the presence and choice of leaders, or changes in the ratio of innovation to imitation affect the diversity and effectiveness of ideas. It consists of neural network based agents that invent ideas for actions, and imitate neighbors' actions. The model is based on a theory of culture according to which what evolves through culture is not memes or artifacts, but the internal models of the world that give rise to them, and they evolve not through a Darwinian process of competitive exclusion but a Lamarckian process involving exchange of innovation protocols. EVOC shows an increase in mean fitness of actions over time, and an increase and then decrease in the diversity of actions. Diversity of actions is positively correlated with population size and density, and with barriers between populations. Slowly eroding borders increase fitness without sacrificing diversity by fostering specialization followed by sharing of fit actions. Introducing a leader that broadcasts its actions throughout the population increases the fitness of actions but reduces diversity of actions. Increasing the number of leaders reduces this effect. Efforts are underway to simulate the conditions under which an agent immigrating from one culture to another contributes new ideas while still fitting in.<|reference_end|>
arxiv
@article{gabora2008modeling, title={Modeling Cultural Dynamics}, author={Liane Gabora}, journal={In A. Davis & J. Ludwig (Co-Chairs), Adaptive agents in a cultural context: Papers from the AAAI Fall Symposium (pp. 18-25). Association for the Advancement of Artificial Intelligence (AAAI), Palo Alto, CA. (2018)}, year={2008}, archivePrefix={arXiv}, eprint={0811.2551}, primaryClass={cs.MA cs.AI q-bio.NC} }
gabora2008modeling
arxiv-5481
0811.2563
Decentralized Overlay for Federation of Enterprise Clouds
<|reference_start|>Decentralized Overlay for Federation of Enterprise Clouds: This chapter describes Aneka-Federation, a decentralized and distributed system that combines enterprise Clouds, overlay networking, and structured peer-to-peer techniques to create scalable wide-area networking of compute nodes for high-throughput computing. The Aneka-Federation integrates numerous small scale Aneka Enterprise Cloud services and nodes that are distributed over multiple control and enterprise domains as parts of a single coordinated resource leasing abstraction. The system is designed with the aim of making distributed enterprise Cloud resource integration and application programming flexible, efficient, and scalable. The system is engineered such that it: enables seamless integration of existing Aneka Enterprise Clouds as part of a single wide-area resource leasing federation; self-organizes the system components based on a structured peer-to-peer routing methodology; and presents end-users with a distributed application composition environment that can support a variety of programming and execution models. This chapter describes the design and implementation of a novel, extensible and decentralized peer-to-peer technique that helps to discover, connect and provision the services of Aneka Enterprise Clouds among users who can use different programming models to compose their applications. Evaluations of the system, with applications programmed using the Task and Thread execution models on top of an overlay of Aneka Enterprise Clouds, are described here.<|reference_end|>
arxiv
@article{ranjan2008decentralized, title={Decentralized Overlay for Federation of Enterprise Clouds}, author={Rajiv Ranjan and Rajkumar Buyya}, journal={arXiv preprint arXiv:0811.2563}, year={2008}, archivePrefix={arXiv}, eprint={0811.2563}, primaryClass={cs.DC cs.NI} }
ranjan2008decentralized
arxiv-5482
0811.2572
An Efficient Algorithm for Partial Order Production
<|reference_start|>An Efficient Algorithm for Partial Order Production: We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the worst case. Here, n denotes the size of the ground sets, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. Our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information theoretic lower bound too much. We then solve the problem by applying an efficient multiple selection algorithm. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput. 18, 1989). We base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.<|reference_end|>
arxiv
@article{cardinal2008an, title={An Efficient Algorithm for Partial Order Production}, author={Jean Cardinal, Samuel Fiorini, Gwenaël Joret, Raphaël M. Jungers, J. Ian Munro}, journal={SIAM J. Comput. Volume 39, Issue 7, pp. 2927-2940 (2010)}, year={2008}, doi={10.1137/090759860}, archivePrefix={arXiv}, eprint={0811.2572}, primaryClass={cs.DS} }
cardinal2008an
arxiv-5483
0811.2578
Encapsulation theory: the configuration efficiency limit
<|reference_start|>Encapsulation theory: the configuration efficiency limit: This paper shows how the maximum possible configuration efficiency of an indefinitely large software system is constrained by choosing a fixed upper limit on the number of program units per subsystem. It is then shown how the configuration efficiency of an indefinitely large software system depends on the ratio of the total number of information hiding violational software units to the total number of program units.<|reference_end|>
arxiv
@article{kirwan2008encapsulation, title={Encapsulation theory: the configuration efficiency limit}, author={Edmund Kirwan}, journal={arXiv preprint arXiv:0811.2578}, year={2008}, archivePrefix={arXiv}, eprint={0811.2578}, primaryClass={cs.SE} }
kirwan2008encapsulation
arxiv-5484
0811.2586
On models of a nondeterministic computation
<|reference_start|>On models of a nondeterministic computation: In this paper we consider nondeterministic computation by deterministic multi-head 2-way automata having read-only access to an auxiliary memory. The memory contains additional data (a guess) and the computation is successful iff it is successful for some memory content. We also consider the case of restricted guesses, in which a guess should satisfy some constraint. We show that standard complexity classes such as L, NL, P, NP, and PSPACE can be characterized in terms of these models of nondeterministic computation. These characterizations differ from the well-known ones by the absence of alternation.<|reference_end|>
arxiv
@article{vyalyi2008on, title={On models of a nondeterministic computation}, author={M. N. Vyalyi}, journal={arXiv preprint arXiv:0811.2586}, year={2008}, archivePrefix={arXiv}, eprint={0811.2586}, primaryClass={cs.CC} }
vyalyi2008on
arxiv-5485
0811.2596
An Enhanced Mathematical Model for Performance Evaluation of Optical Burst Switched Networks
<|reference_start|>An Enhanced Mathematical Model for Performance Evaluation of Optical Burst Switched Networks: This paper has been withdrawn by the authors.<|reference_end|>
arxiv
@article{morsy2008an, title={An Enhanced Mathematical Model for Performance Evaluation of Optical Burst Switched Networks}, author={Mohamed H.S. Morsy, Mohamad Y.S. Sowailem and Hossam M.H. Shalaby}, journal={arXiv preprint arXiv:0811.2596}, year={2008}, archivePrefix={arXiv}, eprint={0811.2596}, primaryClass={cs.NI cs.PF} }
morsy2008an
arxiv-5486
0811.2609
Noise-Resilient Group Testing: Limitations and Constructions
<|reference_start|>Noise-Resilient Group Testing: Limitations and Constructions: We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m=O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors.<|reference_end|>
arxiv
@article{cheraghchi2008noise-resilient, title={Noise-Resilient Group Testing: Limitations and Constructions}, author={Mahdi Cheraghchi}, journal={arXiv preprint arXiv:0811.2609}, year={2008}, doi={10.1007/978-3-642-03409-1_7}, archivePrefix={arXiv}, eprint={0811.2609}, primaryClass={cs.DM cs.IT math.CO math.IT} }
cheraghchi2008noise-resilient
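A minimal numerical sketch of the noisy non-adaptive setting described above: a random disjunctive design, a fraction of flipped outcomes, and a threshold decoder that tolerates a few errors. The design density, noise level, and the 0.85 threshold are ad hoc illustrative choices, not the paper's condenser-based construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 1000, 10, 600                    # items, sparsity, number of tests
x = np.zeros(n, dtype=bool)
x[rng.choice(n, d, replace=False)] = True  # hidden d-sparse vector

A = rng.random((m, n)) < 1.0 / d           # each item joins each test w.p. 1/d
y = (A.astype(int) @ x.astype(int)) > 0    # noiseless OR (disjunctive) outcomes

flips = rng.choice(m, m // 20, replace=False)
y[flips] = ~y[flips]                       # adversarially flipped outcomes (5%)

# Threshold decoder: keep items whose tests came back mostly positive.
scores = np.array([y[A[:, i]].mean() if A[:, i].any() else 0.0 for i in range(n)])
xhat = scores > 0.85
print("false positives:", int((xhat & ~x).sum()),
      "false negatives:", int((~xhat & x).sum()))
```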
arxiv-5487
0811.2612
Evaluation of the matrix exponential function using finite elements in time
<|reference_start|>Evaluation of the matrix exponential function using finite elements in time: The evaluation of a matrix exponential function is a classic problem of computational linear algebra. Many different methods have been employed for its numerical evaluation [Moler C and van Loan C 1978 SIAM Review 20 4], none of which produces a definitive algorithm that is broadly applicable and sufficiently accurate, as well as reasonably fast. Herein, we employ a method which evaluates a matrix exponential as the solution to a first-order initial value problem in a fictitious time variable. The new aspect of the present implementation of this method is to use finite elements in the fictitious time variable [Weatherford C A, Red E, and Wynn A 2002 Journal of Molecular Structure 592 47]. Then, using an expansion in a properly chosen time basis, we are able to make accurate calculations of the exponential of any given matrix as the solution to a set of simultaneous equations.<|reference_end|>
arxiv
@article{gebremedhin2008evaluation, title={Evaluation of the matrix exponential function using finite elements in time}, author={D H Gebremedhin, C A Weatherford, X Zhang, A Wynn III, G Tanaka}, journal={arXiv preprint arXiv:0811.2612}, year={2008}, archivePrefix={arXiv}, eprint={0811.2612}, primaryClass={math-ph cs.NA math.MP} }
gebremedhin2008evaluation
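The core idea — evaluating exp(A) as the solution of a first-order initial value problem in a fictitious time variable — can be checked with an off-the-shelf ODE integrator standing in for the paper's finite elements in time. A sketch, where SciPy's Runge-Kutta integrator is an assumption made purely for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))

# Column j of exp(A) is y(1) for the IVP y' = A y, y(0) = e_j.
def rhs(t, y):
    return A @ y

E = np.column_stack([
    solve_ivp(rhs, (0.0, 1.0), np.eye(n)[:, j], rtol=1e-10, atol=1e-12).y[:, -1]
    for j in range(n)
])
print("max abs error vs scipy expm:", np.abs(E - expm(A)).max())
```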
arxiv-5488
0811.2637
The Design of Compressive Sensing Filter
<|reference_start|>The Design of Compressive Sensing Filter: In this paper, the design of a universal compressive sensing filter based on normal filters, including lowpass, highpass, bandpass, and bandstop filters with different cutoff frequencies (or bandwidths), has been developed to enable signal acquisition with sub-Nyquist sampling. Moreover, to flexibly control the size and the coherence of the compressive sensing filter, a microstrip filter based on defected ground structure (DGS) has been employed as an example realization. Of course, the compressive sensing filter can also be constructed along the same lines from many other structures, for example, man-made electromagnetic materials, or plasmas with different electron densities. With the proposed architecture, n-dimensional signals that are S-sparse in an arbitrary orthogonal frame can be exactly reconstructed from measurements on the order of Slog(n) with overwhelming probability, which is consistent with the bounds estimated by theoretical analysis.<|reference_end|>
arxiv
@article{li2008the, title={The Design of Compressive Sensing Filter}, author={Lianlin Li, Wenji Zhang, Yin Xiang and Fang Li}, journal={arXiv preprint arXiv:0811.2637}, year={2008}, archivePrefix={arXiv}, eprint={0811.2637}, primaryClass={cs.CE cs.IT math.IT} }
li2008the
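The abstract concerns the measurement hardware; the recovery step it relies on is standard compressive sensing. As a self-contained illustration of reconstructing an S-sparse signal from on the order of S log(n) random measurements, here is textbook Orthogonal Matching Pursuit — an assumed decoder chosen for illustration, not one prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 256, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x                                      # compressed measurements

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(s):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
xhat = np.zeros(n)
xhat[support] = coef
print("recovery error:", np.linalg.norm(xhat - x))
```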
arxiv-5489
0811.2675
Characterizations of probe interval graphs
<|reference_start|>Characterizations of probe interval graphs: In this paper we obtain several characterizations of the adjacency matrix of a probe interval graph. In the course of this study we describe an easy method of obtaining an interval representation of an interval bipartite graph from its adjacency matrix. Finally, we note that if we add a loop at every probe vertex of a probe interval graph, then the Ferrers dimension of the corresponding symmetric bipartite graph is at most 3.<|reference_end|>
arxiv
@article{ghosh2008characterizations, title={Characterizations of probe interval graphs}, author={Shamik Ghosh, Maitry Podder and Malay K. Sen}, journal={arXiv preprint arXiv:0811.2675}, year={2008}, archivePrefix={arXiv}, eprint={0811.2675}, primaryClass={cs.DM} }
ghosh2008characterizations
arxiv-5490
0811.2690
A framework for the local information dynamics of distributed computation in complex systems
<|reference_start|>A framework for the local information dynamics of distributed computation in complex systems: The nature of distributed computation has often been described in terms of the component operations of universal computation: information storage, transfer and modification. We review the first complete framework that quantifies each of these individual information dynamics on a local scale within a system, and describes the manner in which they interact to create non-trivial computation where "the whole is greater than the sum of the parts". We describe the application of the framework to cellular automata, a simple yet powerful model of distributed computation. This is an important application, because the framework is the first to provide quantitative evidence for several important conjectures about distributed computation in cellular automata: that blinkers embody information storage, particles are information transfer agents, and particle collisions are information modification events. The framework is also shown to contrast the computations conducted by several well-known cellular automata, highlighting the importance of information coherence in complex computation. The results reviewed here provide important quantitative insights into the fundamental nature of distributed computation and the dynamics of complex systems, as well as impetus for the framework to be applied to the analysis and design of other systems.<|reference_end|>
arxiv
@article{lizier2008a, title={A framework for the local information dynamics of distributed computation in complex systems}, author={Joseph T. Lizier, Mikhail Prokopenko, Albert Y. Zomaya}, journal={in "Guided Self-Organization: Inception", edited by M. Prokopenko, pp. 115-158, Springer, Berlin/Heidelberg, 2014}, year={2008}, doi={10.1007/978-3-642-53734-9_5}, number={ICT 08/320}, archivePrefix={arXiv}, eprint={0811.2690}, primaryClass={nlin.CG cs.IT math.IT nlin.AO nlin.PS physics.data-an} }
lizier2008a
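A plug-in-estimator sketch of one quantity from the framework: local active information storage, the log-ratio of a cell's next-state probability conditioned on its own length-k past to its unconditioned probability, computed here for elementary CA rule 110. The grid size, history length k=4, and choice of rule are illustrative assumptions; the full framework also quantifies transfer and modification, and mature implementations use more careful estimators.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
WIDTH, T, K = 200, 600, 4                  # cells, time steps, history length

# Run elementary CA rule 110 from a random initial row.
rule = 110
table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.int8)
grid = np.empty((T, WIDTH), dtype=np.int8)
grid[0] = rng.integers(0, 2, WIDTH)
for t in range(T - 1):
    l, c, r = np.roll(grid[t], 1), grid[t], np.roll(grid[t], -1)
    grid[t + 1] = table[(l << 2) | (c << 1) | r]

# Estimate p(past_k), p(next), p(past_k, next) over all cells and times.
joint, past, nxt = Counter(), Counter(), Counter()
for t in range(K, T - 1):
    for i in range(WIDTH):
        h, x = tuple(grid[t - K + 1:t + 1, i]), grid[t + 1, i]
        joint[(h, x)] += 1
        past[h] += 1
        nxt[x] += 1
total = sum(joint.values())

def local_ais(h, x):
    # a(i, t) = log2  p(x | h) / p(x)  -- local active information storage.
    return np.log2((joint[(h, x)] / past[h]) / (nxt[x] / total))

h, x = tuple(grid[T - K - 1:T - 1, 0]), grid[T - 1, 0]
print("a(i=0, t=T-1) =", local_ais(h, x))
```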
arxiv-5491
0811.2696
AG Codes from Polyhedral Divisors
<|reference_start|>AG Codes from Polyhedral Divisors: A description of complete normal varieties with lower dimensional torus action has been given by Altmann, Hausen, and Suess, generalizing the theory of toric varieties. Considering the case where the acting torus T has codimension one, we describe T-invariant Weil and Cartier divisors and provide formulae for calculating global sections, intersection numbers, and Euler characteristics. As an application, we use divisors on these so-called T-varieties to define new evaluation codes called T-codes. We find estimates on their minimum distance using intersection theory. This generalizes the theory of toric codes and combines it with AG codes on curves. As the simplest application of our general techniques we look at codes on ruled surfaces coming from decomposable vector bundles. Already this construction gives codes that are better than the related product code. Further examples show that we can improve these codes by constructing more sophisticated T-varieties. These results suggest looking further for good codes on T-varieties.<|reference_end|>
arxiv
@article{ilten2008ag, title={AG Codes from Polyhedral Divisors}, author={Nathan Ilten and Hendrik Süß}, journal={Journal of Symbolic Computation 45 (2010) 734}, year={2008}, doi={10.1016/j.jsc.2010.03.008}, archivePrefix={arXiv}, eprint={0811.2696}, primaryClass={math.AG cs.IT math.IT} }
ilten2008ag
arxiv-5492
0811.2731
Topological Dynamics of Cellular Automata: Dimension Matters
<|reference_start|>Topological Dynamics of Cellular Automata: Dimension Matters: Topological dynamics of cellular automata (CA), inherited from classical dynamical systems theory, has essentially been studied in dimension 1. This paper focuses on higher-dimensional CA and aims at showing that the situation is different and more complex starting from dimension 2. The main results are the existence of non-sensitive CA without equicontinuous points, the non-recursivity of sensitivity constants, the existence of CA having only non-recursive equicontinuous points and the existence of CA having only countably many equicontinuous points. They all show a difference between dimension 1 and higher dimensions. Thanks to these new constructions, we also extend undecidability results concerning topological classification previously obtained in the 1D case. Finally, we show that the set of sensitive CA is only $\Pi_2$ in dimension 1, but becomes $\Sigma_3$-hard for dimension 3.<|reference_end|>
arxiv
@article{sablik2008topological, title={Topological Dynamics of Cellular Automata: Dimension Matters}, author={Mathieu Sablik (LATP), Guillaume Theyssier (LAMA)}, journal={arXiv preprint arXiv:0811.2731}, year={2008}, archivePrefix={arXiv}, eprint={0811.2731}, primaryClass={cs.DM cs.CC} }
sablik2008topological
arxiv-5493
0811.2827
Evolutionary Construction of Geographical Networks with Nearly Optimal Robustness and Efficient Routing Properties
<|reference_start|>Evolutionary Construction of Geographical Networks with Nearly Optimal Robustness and Efficient Routing Properties: Robust and efficient design of networks on a realistic geographical space is one of the important issues for the realization of dependable communication systems. In this paper, based on percolation theory and a geometric graph property, we investigate such a design from the following viewpoints: 1) network evolution according to a spatially heterogeneous population, 2) trimodal low degrees for connectivity that is tolerant to both failures and attacks, and 3) decentralized routing along short paths. Furthermore, we point out that geographical constraints on local cycles weaken this tolerance, and propose a practical strategy of adding a small fraction of shortcut links between randomly chosen nodes in order to improve the robustness to a level similar to that of the optimal bimodal networks, which have a larger degree $O(\sqrt{N})$ for network size $N$. These properties will be useful for constructing future ad-hoc networks in wide-area communications.<|reference_end|>
arxiv
@article{hayashi2008evolutionary, title={Evolutionary Construction of Geographical Networks with Nearly Optimal Robustness and Efficient Routing Properties}, author={Yukio Hayashi}, journal={Physica A 388, pp.991-998, 2009}, year={2008}, doi={10.1016/j.physa.2008.11.027}, archivePrefix={arXiv}, eprint={0811.2827}, primaryClass={physics.data-an cs.CG cs.NI physics.soc-ph} }
hayashi2008evolutionary
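A small experiment in the spirit of the shortcut strategy described above, using NetworkX: build a random geometric graph (a stand-in for a geographically constrained network), add a small fraction of random long-range links, and compare the giant component that survives removal of the highest-degree nodes (an "attack"). All sizes, radii, and fractions below are illustrative assumptions, not the paper's setup.

```python
import random
import networkx as nx

random.seed(5)
N, SHORTCUT_FRAC = 500, 0.05

# Geometric graph: nodes connect only to spatially close nodes.
G = nx.random_geometric_graph(N, radius=0.08, seed=5)
H = G.copy()
for _ in range(int(SHORTCUT_FRAC * G.number_of_edges())):
    u, v = random.sample(range(N), 2)
    H.add_edge(u, v)                      # a few long-range shortcut links

def giant_after_attack(graph, frac):
    # Remove the highest-degree nodes, return giant component fraction.
    g = graph.copy()
    victims = sorted(g.degree, key=lambda kv: -kv[1])[:int(frac * N)]
    g.remove_nodes_from(node for node, _ in victims)
    return max(len(c) for c in nx.connected_components(g)) / N

for g, name in [(G, "geometric"), (H, "with shortcuts")]:
    print(name, round(giant_after_attack(g, 0.05), 3))
```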
arxiv-5494
0811.2841
Universally Utility-Maximizing Privacy Mechanisms
<|reference_start|>Universally Utility-Maximizing Privacy Mechanisms: A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of {\em differential privacy}, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a {\em geometric mechanism} $M^*$ -- a discrete variant of the simple and well-studied Laplace mechanism -- that is {\em simultaneously expected loss-minimizing} for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: {\em every} potential user $u$, no matter what its side information and preferences, derives as much utility from $M^*$ as from interacting with a differentially private mechanism $M_u$ that is optimally tailored to $u$.<|reference_end|>
arxiv
@article{ghosh2008universally, title={Universally Utility-Maximizing Privacy Mechanisms}, author={Arpita Ghosh, Tim Roughgarden, Mukund Sundararajan}, journal={arXiv preprint arXiv:0811.2841}, year={2008}, archivePrefix={arXiv}, eprint={0811.2841}, primaryClass={cs.DB cs.GT} }
ghosh2008universally
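The geometric mechanism is simple to state: add two-sided geometric noise Z with P(Z = z) proportional to alpha^{|z|}, alpha = e^{-epsilon}, to the true count. A minimal sketch, using the fact that the difference of two i.i.d. geometric variables has exactly this law:

```python
import numpy as np

rng = np.random.default_rng(6)

def geometric_mechanism(true_count, epsilon):
    # Two-sided geometric noise: P(Z = z) = (1-a)/(1+a) * a^|z|, a = e^-eps.
    # The difference of two i.i.d. Geometric(1 - a) draws has this law.
    a = np.exp(-epsilon)
    z = int(rng.geometric(1 - a)) - int(rng.geometric(1 - a))
    return true_count + z

print([geometric_mechanism(42, 0.5) for _ in range(8)])
```

The paper's utility result then says that every user, whatever their prior and loss function, can post-process the output of this one mechanism to match their own optimally tailored mechanism.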
arxiv-5495
0811.2847
Boosting the Accuracy of Finite Difference Schemes via Optimal Time Step Selection and Non-Iterative Defect Correction
<|reference_start|>Boosting the Accuracy of Finite Difference Schemes via Optimal Time Step Selection and Non-Iterative Defect Correction: In this article, we present a simple technique for boosting the order of accuracy of finite difference schemes for time dependent partial differential equations by optimally selecting the time step used to advance the numerical solution and adding defect correction terms in a non-iterative manner. The power of the technique is its ability to extract as much accuracy as possible from existing finite difference schemes with minimal additional effort. Through straightforward numerical analysis arguments, we explain the origin of the boost in accuracy and estimate the computational cost of the resulting numerical method. We demonstrate the utility of optimal time step (OTS) selection combined with non-iterative defect correction (NIDC) on several different types of finite difference schemes for a wide array of classical linear and semilinear PDEs in one and more space dimensions on both regular and irregular domains.<|reference_end|>
arxiv
@article{chu2008boosting, title={Boosting the Accuracy of Finite Difference Schemes via Optimal Time Step Selection and Non-Iterative Defect Correction}, author={Kevin T. Chu}, journal={arXiv preprint arXiv:0811.2847}, year={2008}, archivePrefix={arXiv}, eprint={0811.2847}, primaryClass={math.NA cs.NA} }
chu2008boosting
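A classic special case of the optimal-time-step idea: for FTCS applied to the 1D heat equation u_t = u_xx, choosing dt = dx^2/6 cancels the leading truncation-error term and lifts the scheme from second- to fourth-order accuracy in dx. A sketch comparing the OTS choice against a generic stable step; the problem setup is illustrative, and the paper treats a much wider class of PDEs and adds defect correction on top:

```python
import numpy as np

def heat_ftcs(npts, optimal=True):
    # FTCS for u_t = u_xx on [0, pi], u(x, 0) = sin x, u = 0 at the ends.
    dx = np.pi / (npts - 1)
    dt = dx**2 / 6 if optimal else dx**2 / 4   # OTS choice: dt = dx^2 / 6
    steps = int(round(0.5 / dt))
    x = np.linspace(0.0, np.pi, npts)
    u = np.sin(x)
    for _ in range(steps):
        u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    exact = np.exp(-steps * dt) * np.sin(x)    # exact solution at final time
    return np.abs(u - exact).max()

for npts in (21, 41, 81):
    print(npts, "OTS:", heat_ftcs(npts, True), "plain:", heat_ftcs(npts, False))
```

Halving dx should cut the OTS error by roughly 16x versus 4x for the generic step, which is the accuracy boost the abstract describes.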
arxiv-5496
0811.2850
Codes against Online Adversaries
<|reference_start|>Codes against Online Adversaries: In this work we consider the communication of information in the presence of an online adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=x_1,...,x_n symbol-by-symbol over a communication channel. The adversarial jammer can view the transmitted symbols x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. More generally, for a delay parameter 0<d<1, we study the scenario in which the jammer's decision on the corruption of x_i must depend solely on x_j for j < i - dn. In this work, we initiate the study of codes for online adversaries, and present a tight characterization of the amount of information one can transmit in both the 0-delay and, more generally, the d-delay online setting. We prove tight results for both additive and overwrite jammers when the transmitted symbols are assumed to be over a sufficiently large field F. Finally, we extend our results to a jam-or-listen online model, where the online adversary can either jam a symbol or eavesdrop on it. We again provide a tight characterization of the achievable rate for several variants of this model. The rate regions we prove for each model are information-theoretic in nature and hold for computationally unbounded adversaries. The rate regions are characterized by "simple" piecewise linear functions of p and d. The codes we construct to attain the optimal rate for each scenario are computationally efficient.<|reference_end|>
arxiv
@article{dey2008codes, title={Codes against Online Adversaries}, author={Bikash Kumar Dey, Sidharth Jaggi, Michael Langberg}, journal={arXiv preprint arXiv:0811.2850}, year={2008}, archivePrefix={arXiv}, eprint={0811.2850}, primaryClass={cs.IT math.IT} }
dey2008codes
arxiv-5497
0811.2853
Generating Random Networks Without Short Cycles
<|reference_start|>Generating Random Networks Without Short Cycles: Random graph generation is an important tool for studying large complex networks. Despite the abundance of random graph models, constructing models with application-driven constraints is poorly understood. In order to advance the state of the art in this area, we focus on random graphs without short cycles as a stylized family of graphs, and propose the RandGraph algorithm for randomly generating them. For any constant k, when m=O(n^{1+1/[2k(k+3)]}), RandGraph generates an asymptotically uniform random graph with n vertices, m edges, and no cycle of length at most k using O(n^2m) operations. We also characterize the approximation error for finite values of n. To the best of our knowledge, this is the first polynomial-time algorithm for the problem. RandGraph works by sequentially adding m edges to an empty graph with n vertices. Recently, such sequential algorithms have been successful for random sampling problems. Our main contributions to this line of research include introducing a new approach for sequentially approximating edge-specific probabilities at each step of the algorithm, and providing a new method for analyzing such algorithms.<|reference_end|>
arxiv
@article{bayati2008generating, title={Generating Random Networks Without Short Cycles}, author={Mohsen Bayati, Andrea Montanari and Amin Saberi}, journal={arXiv preprint arXiv:0811.2853}, year={2008}, archivePrefix={arXiv}, eprint={0811.2853}, primaryClass={cs.DS cs.IT math.IT} }
bayati2008generating
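For contrast with the paper's algorithm, here is the naive sequential sampler: add random edges one at a time, rejecting any edge that would close a cycle of length at most k (edge (u, v) does so iff u and v are already within distance k-1). This sampler is simple but not asymptotically uniform; correcting its bias by sequentially approximating edge-specific probabilities is precisely RandGraph's contribution. Parameter values are illustrative.

```python
import random
from collections import deque

def short_cycle_after(adj, u, v, k):
    # Would adding edge (u, v) close a cycle of length <= k?
    # True iff dist(u, v) <= k - 1 in the current graph (depth-limited BFS).
    dist = {u: 0}
    q = deque([u])
    while q:
        w = q.popleft()
        if dist[w] >= k - 1:
            continue
        for z in adj[w]:
            if z not in dist:
                dist[z] = dist[w] + 1
                q.append(z)
    return dist.get(v, k) <= k - 1

def sample_graph(n, m, k, seed=0):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        if (min(u, v), max(u, v)) in edges or short_cycle_after(adj, u, v, k):
            continue                      # reject: duplicate or short cycle
        edges.add((min(u, v), max(u, v)))
        adj[u].add(v)
        adj[v].add(u)
    return edges

print(len(sample_graph(500, 800, k=4)))
```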
arxiv-5498
0811.2868
Approximate Sparse Decomposition Based on Smoothed L0-Norm
<|reference_start|>Approximate Sparse Decomposition Based on Smoothed L0-Norm: In this paper, we propose a method to address the problem of source estimation for Sparse Component Analysis (SCA) in the presence of additive noise. Our method is a generalization of a recently proposed method (SL0), which has the advantage of directly minimizing the L0-norm instead of the L1-norm, while being very fast. SL0 is based on minimization of the smoothed L0-norm subject to As=x. In order to better estimate the source vector for noisy mixtures, we then suggest removing the constraint As=x by relaxing exact equality to an approximation (we call our method Smoothed L0-norm Denoising or SL0DN). The final result can then be obtained by minimization of a proper linear combination of the smoothed L0-norm and a cost function for the approximation. Experimental results emphasize the significant improvement of the modified method in noisy cases.<|reference_end|>
arxiv
@article{firouzi2008approximate, title={Approximate Sparse Decomposition Based on Smoothed L0-Norm}, author={Hamed Firouzi, Masoud Farivar, Massoud Babaie-Zadeh, Christian Jutten}, journal={arXiv preprint arXiv:0811.2868}, year={2008}, archivePrefix={arXiv}, eprint={0811.2868}, primaryClass={cs.MM cs.IT math.IT} }
firouzi2008approximate
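For reference, here is a sketch of the original SL0 iteration that the abstract generalizes: graduated minimization of the smoothed L0 measure with projection back onto As = x. The denoising variant proposed in the paper (SL0DN) would replace the hard projection with a penalized approximation term; the step sizes and schedule below are the usual illustrative defaults, not the paper's.

```python
import numpy as np

def sl0(A, x, sigma_min=0.01, sigma_decrease=0.5, mu=2.0, inner=3):
    # Smoothed-L0 sketch: start from the minimum-L2 solution, then for a
    # decreasing sequence of sigma take gradient steps on the smoothed
    # L0 measure and project back onto the constraint A s = x.
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x
    sigma = 2.0 * np.abs(s).max()
    while sigma > sigma_min:
        for _ in range(inner):
            delta = s * np.exp(-s**2 / (2 * sigma**2))  # grad of smoothed L0
            s = s - mu * delta
            s = s - A_pinv @ (A @ s - x)                # project onto As = x
        sigma *= sigma_decrease
    return s

rng = np.random.default_rng(7)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n))
s0 = np.zeros(n)
s0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = A @ s0
print("recovery error:", np.linalg.norm(sl0(A, x) - s0))
```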
arxiv-5499
0811.2904
Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes
<|reference_start|>Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes: Let S be a finite, ordered alphabet, and let x = x_1 x_2 ... x_n be a string over S. A "secondary index" for x answers alphabet range queries of the form: Given a range [a_l,a_r] over S, return the set I_{[a_l,a_r]} = {i | x_i \in [a_l, a_r]}. Secondary indexes are heavily used in relational databases and scientific data analysis. It is well-known that the obvious solution, storing a dictionary for the position set associated with each character, does not always give optimal query time. In this paper we give the first theoretically optimal data structure for the secondary indexing problem. In the I/O model, the amount of data read when answering a query is within a constant factor of the minimum space needed to represent I_{[a_l,a_r]}, assuming that the size of internal memory is (|S| log n)^{delta} blocks, for some constant delta > 0. The space usage of the data structure is O(n log |S|) bits in the worst case, and we further show how to bound the size of the data structure in terms of the 0-th order entropy of x. We show how to support updates achieving various time-space trade-offs. We also consider an approximate version of the basic secondary indexing problem where a query reports a superset of I_{[a_l,a_r]} containing each element not in I_{[a_l,a_r]} with probability at most epsilon, where epsilon > 0 is the false positive probability. For this problem the amount of data that needs to be read by the query algorithm is reduced to O(|I_{[a_l,a_r]}| log(1/epsilon)) bits.<|reference_end|>
arxiv
@article{pagh2008secondary, title={Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes}, author={Rasmus Pagh and S. Srinivasa Rao}, journal={arXiv preprint arXiv:0811.2904}, year={2008}, archivePrefix={arXiv}, eprint={0811.2904}, primaryClass={cs.DB cs.DS} }
pagh2008secondary
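The "obvious solution" the abstract mentions — one sorted position list per character, merged at query time — fits in a few lines and makes the problem concrete; the paper's point is that this baseline can read far more data than the optimal structure:

```python
from heapq import merge

class NaiveSecondaryIndex:
    """One sorted posting list per character; range queries merge them."""

    def __init__(self, s):
        self.lists = {}
        for i, c in enumerate(s):
            self.lists.setdefault(c, []).append(i)

    def range_query(self, lo, hi):
        # All positions i with lo <= s[i] <= hi, in increasing order.
        hits = [v for c, v in self.lists.items() if lo <= c <= hi]
        return list(merge(*hits))

idx = NaiveSecondaryIndex("mississippi")
print(idx.range_query("p", "s"))   # -> [2, 3, 5, 6, 8, 9]
```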
arxiv-5500
0811.2984
Sensitivity Analysis Using a Fixed Point Interval Iteration
<|reference_start|>Sensitivity Analysis Using a Fixed Point Interval Iteration: Proving the existence of a solution to a system of real equations is a central issue in numerical analysis. In many situations, the system of equations depends on parameters which are not exactly known. It is then natural to aim at proving the existence of a solution for all values of these parameters in some given domains. This is the aim of the parametrization of existence tests. A new parametric existence test based on the Hansen-Sengupta operator is presented and compared to a similar one based on the Krawczyk operator. It is used as the basis of a fixed point iteration dedicated to rigorous sensitivity analysis of parametric systems of equations.<|reference_end|>
arxiv
@article{goldsztejn2008sensitivity, title={Sensitivity Analysis Using a Fixed Point Interval Iteration}, author={Alexandre Goldsztejn (LINA)}, journal={arXiv preprint arXiv:0811.2984}, year={2008}, archivePrefix={arXiv}, eprint={0811.2984}, primaryClass={cs.NA} }
goldsztejn2008sensitivity
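To make the flavor of such existence tests concrete, here is a toy Krawczyk-operator check (a close relative of the Hansen-Sengupta operator the paper builds on) that a unique root of f(x) = x^2 - 2 lies in [1, 2]. Real implementations need outward rounding; this sketch uses plain floats and is illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))
    def subset(self, o): return o.lo < self.lo and self.hi < o.hi

def point(v):
    return Interval(v, v)

# Krawczyk test for f(x) = x^2 - 2 on X = [1, 2]:
#   K(X) = m - C f(m) + (1 - C f'(X)) (X - m),  C = 1 / f'(m).
# If K(X) lies strictly inside X, a unique zero of f exists in X.
X = Interval(1.0, 2.0)
m = (X.lo + X.hi) / 2
C = 1.0 / (2.0 * m)                           # f'(x) = 2x at the midpoint
fX_prime = Interval(2.0 * X.lo, 2.0 * X.hi)   # interval enclosure of f'(X)
K = point(m - C * (m * m - 2.0)) + (point(1.0) - point(C) * fX_prime) * (X - point(m))
print(K, "subset of X:", K.subset(X))         # True => unique root in [1, 2]
```

Here K(X) evaluates to roughly [1.25, 1.58], strictly inside [1, 2], certifying the root sqrt(2).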