corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-3301 | 0804.1493 | Distributed Space Time Codes for the Amplify-and-Forward Multiple-Access Relay Channel | <|reference_start|>Distributed Space Time Codes for the Amplify-and-Forward Multiple-Access Relay Channel: In this work, we present a construction of a family of space-time block codes for a Multi-Access Amplify-and-Forward Relay channel with two users and a single half-duplex relay. It is assumed that there is no Channel Side Information at the transmitters and that they are not allowed to cooperate. Using the Diversity Multiplexing Tradeoff as a tool to evaluate the performance, we prove that the proposed scheme is optimal in some sense. Moreover, we provide numerical results which show that the new scheme outperforms the orthogonal transmission scheme, e.g. time sharing, and offers a significant gain.<|reference_end|> | arxiv | @article{badr2008distributed,
title={Distributed Space Time Codes for the Amplify-and-Forward Multiple-Access
Relay Channel},
author={Maya Badr and Jean-Claude Belfiore},
journal={arXiv preprint arXiv:0804.1493},
year={2008},
archivePrefix={arXiv},
eprint={0804.1493},
primaryClass={cs.IT math.IT}
} | badr2008distributed |
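Each row's `bibtex` cell is a plain `@article` entry like the one above. As an illustration, a minimal sketch of pulling the `key={value}` fields out of such an entry — the helper name and regex are our own, and the regex assumes values without nested braces, which holds for the entries in this dump:

```python
import re

def parse_bibtex_fields(entry: str) -> dict:
    """Extract key={value} pairs from one BibTeX entry.

    Sketch only: assumes field values contain no nested braces,
    as in the entries of this dump.
    """
    fields = {}
    for key, value in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry):
        fields[key.lower()] = " ".join(value.split())  # collapse line wraps
    return fields

entry = """@article{badr2008distributed,
title={Distributed Space Time Codes for the Amplify-and-Forward Multiple-Access
Relay Channel},
author={Maya Badr and Jean-Claude Belfiore},
year={2008},
eprint={0804.1493}
}"""
print(parse_bibtex_fields(entry)["eprint"])  # -> 0804.1493
```

Note that joining on `value.split()` also undoes the hard line wraps inside multi-line fields such as `title`.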
arxiv-3302 | 0804.1602 | Multiterminal source coding with complementary delivery | <|reference_start|>Multiterminal source coding with complementary delivery: A coding problem for correlated information sources is investigated. Messages emitted from two correlated sources are jointly encoded, and delivered to two decoders. Each decoder has access to one of the two messages to enable it to reproduce the other message. The rate-distortion function for the coding problem and its interesting properties are clarified.<|reference_end|> | arxiv | @article{kimura2008multiterminal,
title={Multiterminal source coding with complementary delivery},
author={Akisato Kimura and Tomohiko Uyematsu},
journal={arXiv preprint arXiv:0804.1602},
year={2008},
archivePrefix={arXiv},
eprint={0804.1602},
primaryClass={cs.IT math.IT}
} | kimura2008multiterminal |
arxiv-3303 | 0804.1607 | Distributed and Recursive Parameter Estimation in Parametrized Linear State-Space Models | <|reference_start|>Distributed and Recursive Parameter Estimation in Parametrized Linear State-Space Models: We consider a network of sensors deployed to sense a spatio-temporal field and estimate a parameter of interest. We are interested in the case where the temporal process sensed by each sensor can be modeled as a state-space process that is perturbed by random noise and parametrized by an unknown parameter. To estimate the unknown parameter from the measurements that the sensors sequentially collect, we propose a distributed and recursive estimation algorithm, which we refer to as the incremental recursive prediction error algorithm. This algorithm has the distributed property of incremental gradient algorithms and the on-line property of recursive prediction error algorithms. We study the convergence behavior of the algorithm and provide sufficient conditions for its convergence. Our convergence result is rather general and contains as special cases the known convergence results for the incremental versions of the least-mean square algorithm. Finally, we use the algorithm developed in this paper to identify the source of a gas-leak (diffusing source) in a closed warehouse and also report numerical simulations to verify convergence.<|reference_end|> | arxiv | @article{ram2008distributed,
title={Distributed and Recursive Parameter Estimation in Parametrized Linear
State-Space Models},
author={S. Sundhar Ram and V. V. Veeravalli and A. Nedic},
journal={arXiv preprint arXiv:0804.1607},
year={2008},
archivePrefix={arXiv},
eprint={0804.1607},
primaryClass={cs.DC}
} | ram2008distributed |
arxiv-3304 | 0804.1617 | Optimal Power Control over Fading Cognitive Radio Channels by Exploiting Primary User CSI | <|reference_start|>Optimal Power Control over Fading Cognitive Radio Channels by Exploiting Primary User CSI: This paper is concerned with spectrum sharing cognitive radio networks, where a secondary user (SU) or cognitive radio link communicates simultaneously over the same frequency band with an existing primary user (PU) link. It is assumed that the SU transmitter has perfect channel state information (CSI) on the fading channels from SU transmitter to both PU and SU receivers (as usually assumed in the literature), as well as the fading channel from PU transmitter to PU receiver (a new assumption). With the additional PU CSI, we study the optimal power control for the SU over different fading states to maximize the SU ergodic capacity subject to a new proposed constraint to protect the PU transmission, which limits the maximum ergodic capacity loss of the PU resulting from the SU transmission. It is shown that the proposed SU power-control policy is superior to the conventional policy under the constraint on the maximum tolerable interference power/interference temperature at the PU receiver, in terms of the achievable ergodic capacities of both PU and SU.<|reference_end|> | arxiv | @article{zhang2008optimal,
title={Optimal Power Control over Fading Cognitive Radio Channels by Exploiting
Primary User CSI},
author={Rui Zhang},
journal={arXiv preprint arXiv:0804.1617},
year={2008},
archivePrefix={arXiv},
eprint={0804.1617},
primaryClass={cs.IT math.IT}
} | zhang2008optimal |
arxiv-3305 | 0804.1649 | On decomposition of tame polynomials and rational functions | <|reference_start|>On decomposition of tame polynomials and rational functions: In this paper we present algorithmic considerations and theoretical results about the relation between the orders of certain groups associated to the components of a polynomial and the order of the group that corresponds to the polynomial, proving it for arbitrary tame polynomials, and considering the case of rational functions.<|reference_end|> | arxiv | @article{gutierrez2008on,
title={On decomposition of tame polynomials and rational functions},
author={Jaime Gutierrez and David Sevilla},
journal={Computer algebra in scientific computing, 219--226, Lecture Notes
in Comput. Sci., 4194, Springer, Berlin, 2006. MR2279795 (2008d:12001)},
year={2008},
archivePrefix={arXiv},
eprint={0804.1649},
primaryClass={cs.SC math.AC}
} | gutierrez2008on |
arxiv-3306 | 0804.1653 | Nonextensive Generalizations of the Jensen-Shannon Divergence | <|reference_start|>Nonextensive Generalizations of the Jensen-Shannon Divergence: Convexity is a key concept in information theory, namely via the many implications of Jensen's inequality, such as the non-negativity of the Kullback-Leibler divergence (KLD). Jensen's inequality also underlies the concept of Jensen-Shannon divergence (JSD), which is a symmetrized and smoothed version of the KLD. This paper introduces new JSD-type divergences, by extending its two building blocks: convexity and Shannon's entropy. In particular, a new concept of q-convexity is introduced and shown to satisfy a Jensen's q-inequality. Based on this Jensen's q-inequality, the Jensen-Tsallis q-difference is built, which is a nonextensive generalization of the JSD, based on Tsallis entropies. Finally, the Jensen-Tsallis q-difference is characterized in terms of convexity and extrema.<|reference_end|> | arxiv | @article{martins2008nonextensive,
title={Nonextensive Generalizations of the Jensen-Shannon Divergence},
author={Andre Martins and Pedro Aguiar and Mario Figueiredo},
journal={arXiv preprint arXiv:0804.1653},
year={2008},
archivePrefix={arXiv},
eprint={0804.1653},
primaryClass={cs.IT math.IT math.ST stat.TH}
} | martins2008nonextensive |
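For reference, the ordinary (extensive, q = 1) Jensen-Shannon divergence that this paper generalizes has a short closed form, JSD(p, q) = H((p+q)/2) - (H(p) + H(q))/2. A minimal numerical sketch of that base case — the Jensen-Tsallis q-difference itself is not reproduced here:

```python
import math

def entropy(p):
    """Shannon entropy in nats; zero-probability entries contribute nothing."""
    return -sum(x * math.log(x) for x in p if x > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: a symmetrized, smoothed KL divergence."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2

print(jsd([1.0, 0.0], [0.0, 1.0]))  # -> log 2 ~ 0.6931 (disjoint supports: maximum)
print(jsd([0.5, 0.5], [0.5, 0.5]))  # -> 0.0 (identical distributions)
```

Unlike the KLD, this quantity is symmetric in p and q and always finite, which is why it is a convenient building block for the generalizations above.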
arxiv-3307 | 0804.1667 | Mechanizing the Metatheory of LF | <|reference_start|>Mechanizing the Metatheory of LF: LF is a dependent type theory in which many other formal systems can be conveniently embedded. However, correct use of LF relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for LF's judgments. Although detailed informal proofs of these properties have been published, they have not been formally verified in a theorem prover. We have formalized these properties within Isabelle/HOL using the Nominal Datatype Package, closely following a recent article by Harper and Pfenning. In the process, we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others. We also formally derive a version of the type checking algorithm from which Isabelle/HOL can generate executable code. Besides its intrinsic interest, our formalization provides a foundation for studying the adequacy of LF encodings, the correctness of Twelf-style metatheoretic reasoning, and the metatheory of extensions to LF.<|reference_end|> | arxiv | @article{urban2008mechanizing,
title={Mechanizing the Metatheory of LF},
author={Christian Urban and James Cheney and Stefan Berghofer},
journal={arXiv preprint arXiv:0804.1667},
year={2008},
archivePrefix={arXiv},
eprint={0804.1667},
primaryClass={cs.LO}
} | urban2008mechanizing |
arxiv-3308 | 0804.1669 | Subclose Families, Threshold Graphs, and the Weight Hierarchy of Grassmann and Schubert Codes | <|reference_start|>Subclose Families, Threshold Graphs, and the Weight Hierarchy of Grassmann and Schubert Codes: We discuss the problem of determining the complete weight hierarchy of linear error correcting codes associated to Grassmann varieties and, more generally, to Schubert varieties in Grassmannians. The problem is partially solved in the case of Grassmann codes, and one of the solutions uses the combinatorial notion of a closed family. We propose a generalization of this to what is called a subclose family. A number of properties of subclose families are proved, and its connection with the notion of threshold graphs and graphs with maximum sum of squares of vertex degrees is outlined.<|reference_end|> | arxiv | @article{ghorpade2008subclose,
title={Subclose Families, Threshold Graphs, and the Weight Hierarchy of
Grassmann and Schubert Codes},
author={Sudhir R. Ghorpade and Arunkumar R. Patil and Harish K. Pillai},
journal={Arithmetic, Geometry, Cryptography and Coding Theory, 87-99,
Contemp. Math. 487, Amer. Math. Soc., Providence, RI, 2009.},
year={2008},
archivePrefix={arXiv},
eprint={0804.1669},
primaryClass={math.CO cs.IT math.IT}
} | ghorpade2008subclose |
arxiv-3309 | 0804.1679 | Computation of unirational fields | <|reference_start|>Computation of unirational fields: One of the main contributions which Volker Weispfenning made to mathematics is related to Groebner bases theory. In this paper we present an algorithm for computing all algebraic intermediate subfields in a separably generated unirational field extension (which in particular includes the zero characteristic case). One of the main tools is Groebner bases theory. Our algorithm also requires computing primitive elements and factoring over algebraic extensions. Moreover, the method can be extended to finitely generated K-algebras.<|reference_end|> | arxiv | @article{gutierrez2008computation,
title={Computation of unirational fields},
author={Jaime Gutierrez and David Sevilla},
journal={J. Symbolic Comput. 41 (2006), no. 11, 1222--1244. MR2267134 (2007g:12003)},
year={2008},
archivePrefix={arXiv},
eprint={0804.1679},
primaryClass={cs.SC math.AC}
} | gutierrez2008computation |
arxiv-3310 | 0804.1687 | Building counterexamples to generalizations for rational functions of Ritt's decomposition theorem | <|reference_start|>Building counterexamples to generalizations for rational functions of Ritt's decomposition theorem: The classical Ritt's Theorems state several properties of univariate polynomial decomposition. In this paper we present new counterexamples to Ritt's first theorem, which states the equality of length of decomposition chains of a polynomial, in the case of rational functions. Namely, we provide an explicit example of a rational function with coefficients in Q and two decompositions of different length. Another aspect is the use of some techniques that could allow for other counterexamples, namely, relating groups and decompositions and using the fact that the alternating group A_4 has two subgroup chains of different lengths; and we provide more information about the generalizations of another property of polynomial decomposition: the stability of the base field. We also present an algorithm for computing the fixing group of a rational function providing the complexity over Q.<|reference_end|> | arxiv | @article{gutierrez2008building,
title={Building counterexamples to generalizations for rational functions of
Ritt's decomposition theorem},
author={Jaime Gutierrez and David Sevilla},
journal={J. Algebra 303 (2006), no. 2, 655--667. MR2255128 (2007e:13032)},
year={2008},
archivePrefix={arXiv},
eprint={0804.1687},
primaryClass={math.AC cs.SC}
} | gutierrez2008building |
arxiv-3311 | 0804.1696 | A classification of invasive patterns in AOP | <|reference_start|>A classification of invasive patterns in AOP: Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms to compose aspects allow invasiveness as a means to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionality to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the aspects' invasive behavior and allows developers to reason abstractly about the aspects' incidence on the programs they crosscut.<|reference_end|> | arxiv | @article{munoz2008a,
title={A classification of invasive patterns in AOP},
author={Freddy Munoz (IRISA) and Benoit Baudry (IRISA) and Olivier Barais (IRISA)},
journal={arXiv preprint arXiv:0804.1696},
year={2008},
number={RR-6501},
archivePrefix={arXiv},
eprint={0804.1696},
primaryClass={cs.PL cs.SE}
} | munoz2008a |
arxiv-3312 | 0804.1697 | Lower Bounds on the Rate-Distortion Function of Individual LDGM Codes | <|reference_start|>Lower Bounds on the Rate-Distortion Function of Individual LDGM Codes: We consider lossy compression of a binary symmetric source by means of a low-density generator-matrix code. We derive two lower bounds on the rate distortion function which are valid for any low-density generator-matrix code with a given node degree distribution L(x) on the set of generators and for any encoding algorithm. These bounds show that, due to the sparseness of the code, the performance is strictly bounded away from the Shannon rate-distortion function. In this sense, our bounds represent a natural generalization of Gallager's bound on the maximum rate at which low-density parity-check codes can be used for reliable transmission. Our bounds are similar in spirit to the technique recently developed by Dimakis, Wainwright, and Ramchandran, but they apply to individual codes.<|reference_end|> | arxiv | @article{kudekar2008lower,
title={Lower Bounds on the Rate-Distortion Function of Individual LDGM Codes},
author={Shrinivas Kudekar and Ruediger Urbanke},
journal={arXiv preprint arXiv:0804.1697},
year={2008},
archivePrefix={arXiv},
eprint={0804.1697},
primaryClass={cs.IT math.IT}
} | kudekar2008lower |
arxiv-3313 | 0804.1707 | Computation of unirational fields (extended abstract) | <|reference_start|>Computation of unirational fields (extended abstract): In this paper we present an algorithm for computing all algebraic intermediate subfields in a separably generated unirational field extension (which in particular includes the zero characteristic case). One of the main tools is Groebner bases theory. Our algorithm also requires computing primitive elements and factoring over algebraic extensions. Moreover, the method can be extended to finitely generated K-algebras.<|reference_end|> | arxiv | @article{gutierrez2008computation,
title={Computation of unirational fields (extended abstract)},
author={Jaime Gutierrez and David Sevilla},
journal={Proceedings of the 2005 Algorithmic Algebra and Logic (A3L), p.
129--134, BOD Norderstedt, Germany, 2005. ISBN 3-8334-2669-1},
year={2008},
archivePrefix={arXiv},
eprint={0804.1707},
primaryClass={cs.SC math.AC}
} | gutierrez2008computation |
arxiv-3314 | 0804.1724 | Information Acquisition and Exploitation in Multichannel Wireless Networks | <|reference_start|>Information Acquisition and Exploitation in Multichannel Wireless Networks: A wireless system with multiple channels is considered, where each channel has several transmission states. A user learns about the instantaneous state of an available channel by transmitting a control packet in it. Since probing all channels consumes significant energy and time, a user needs to determine what and how much information it needs to acquire about the instantaneous states of the available channels so that it can maximize its transmission rate. This motivates the study of the trade-off between the cost of information acquisition and its value towards improving the transmission rate. A simple model is presented for studying this information acquisition and exploitation trade-off when the channels are multi-state, with different distributions and information acquisition costs. The objective is to maximize a utility function which depends on both the cost and value of information. Solution techniques are presented for computing near-optimal policies with succinct representation in polynomial time. These policies provably achieve at least a fixed constant factor of the optimal utility on any problem instance, and in addition, have natural characterizations. The techniques are based on exploiting the structure of the optimal policy, and use of Lagrangean relaxations which simplify the space of approximately optimal solutions.<|reference_end|> | arxiv | @article{guha2008information,
title={Information Acquisition and Exploitation in Multichannel Wireless
Networks},
author={Sudipto Guha and Kamesh Munagala and Saswati Sarkar},
journal={arXiv preprint arXiv:0804.1724},
year={2008},
archivePrefix={arXiv},
eprint={0804.1724},
primaryClass={cs.DS cs.NI}
} | guha2008information |
arxiv-3315 | 0804.1728 | On cobweb posets most relevant codings | <|reference_start|>On cobweb posets most relevant codings: We consider here orderable acyclic digraphs named KoDAGs, which represent the most general chains of dibicliques and thus the most general chains of binary relations. Because of this, KoDAGs are becoming an outstanding concept of current investigation. We propose examples of codings of KoDAGs viewed as infinite hyper-boxes as well as chains of rectangular hyper-boxes in N^\infty. None of the KoDAG codings considered here is a poset isomorphism with Pi = <P, \leq>; nevertheless, each coding supplies a new view on the possible investigation of KoDAG properties. The codes proposed below are by now recognized as the most relevant codes for practical purposes, including visualization. Moreover, employing quite arbitrary sequences F=\{n_F\}_{n\geq 0}, infinitely many new representations of natural numbers, called base-of-F number system representations, are introduced. These constitute mixed-radix-type numeral systems: nonstandard positional numeral systems in which the numerical base varies from position to position. They have a picturesque interpretation due to KoDAG graphs and their corresponding posets, which in turn carry a combinatorial interpretation via the F-nomial coefficients uniquely assigned to KoDAGs. The base-of-F number systems are used for KoDAG coding and are interpreted as chain coordinatizations in KoDAG pictures, as well as systems of infinitely many boxes: sequences of containers whose capacities vary with F from box to box. Needless to say, this base-of-F number system is crucial for KoDAGs and hence for arbitrary chains of binary relations. The new F-based numeral systems are umbral base-of-F number systems, in a sense to be explained in what follows.<|reference_end|> | arxiv | @article{kwasniewski2008on,
title={On cobweb posets most relevant codings},
author={A. K. Kwasniewski and M. Dziemianczuk},
journal={arXiv preprint arXiv:0804.1728},
year={2008},
archivePrefix={arXiv},
eprint={0804.1728},
primaryClass={math.CO cs.DM}
} | kwasniewski2008on |
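The "base of F" representations described above are mixed-radix positional systems, where the radix varies from position to position. A generic sketch of converting to and from such a system — the particular F-sequences tied to KoDAGs are not reproduced; the radices 2, 3, 4, 5 below are just an illustration:

```python
def to_mixed_radix(n, radices):
    """Digits of n, least significant first; position i has base radices[i]."""
    digits = []
    for b in radices:
        digits.append(n % b)
        n //= b
    if n:
        raise ValueError("n too large for the given radix sequence")
    return digits

def from_mixed_radix(digits, radices):
    """Inverse of to_mixed_radix: positional weights are running radix products."""
    value, weight = 0, 1
    for d, b in zip(digits, radices):
        value += d * weight
        weight *= b
    return value

radices = [2, 3, 4, 5]                 # illustrative varying bases
print(to_mixed_radix(59, radices))     # -> [1, 2, 1, 2]
print(from_mixed_radix([1, 2, 1, 2], radices))  # -> 59
```

With these radices the representable range is 0..(2*3*4*5 - 1) = 0..119; fixed-base systems are the special case of a constant radix sequence.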
arxiv-3316 | 0804.1729 | On affine usages in signal-based communication | <|reference_start|>On affine usages in signal-based communication: We describe a type system for a synchronous pi-calculus formalising the notion of affine usage in signal-based communication. In particular, we identify a limited number of usages that preserve affinity and that can be composed. As a main application of the resulting system, we show that typable programs are deterministic.<|reference_end|> | arxiv | @article{amadio2008on,
title={On affine usages in signal-based communication},
author={Roberto Amadio (PPS) and Mehdi Dogguy (PPS)},
journal={Programming Languages and Systems, 6th Asian Symposium, APLAS
2008, France (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0804.1729},
primaryClass={cs.LO}
} | amadio2008on |
arxiv-3317 | 0804.1740 | Pseudo Quasi-3 Designs and their Applications to Coding Theory | <|reference_start|>Pseudo Quasi-3 Designs and their Applications to Coding Theory: We define a pseudo quasi-3 design as a symmetric design with the property that the derived and residual designs with respect to at least one block are quasi-symmetric. Quasi-symmetric designs can be used to construct optimal self complementary codes. In this article we give a construction of an infinite family of pseudo quasi-3 designs whose residual designs allow us to construct a family of codes with a new parameter set that meet the Grey Rankin bound.<|reference_end|> | arxiv | @article{bracken2008pseudo,
title={Pseudo Quasi-3 Designs and their Applications to Coding Theory},
author={Carl Bracken},
journal={arXiv preprint arXiv:0804.1740},
year={2008},
archivePrefix={arXiv},
eprint={0804.1740},
primaryClass={math.CO cs.IT math.IT}
} | bracken2008pseudo |
arxiv-3318 | 0804.1748 | Noncoherent Capacity of Underspread Fading Channels | <|reference_start|>Noncoherent Capacity of Underspread Fading Channels: We derive bounds on the noncoherent capacity of wide-sense stationary uncorrelated scattering (WSSUS) channels that are selective both in time and frequency, and are underspread, i.e., the product of the channel's delay spread and Doppler spread is small. For input signals that are peak constrained in time and frequency, we obtain upper and lower bounds on capacity that are explicit in the channel's scattering function, are accurate for a large range of bandwidth and allow to coarsely identify the capacity-optimal bandwidth as a function of the peak power and the channel's scattering function. We also obtain a closed-form expression for the first-order Taylor series expansion of capacity in the limit of large bandwidth, and show that our bounds are tight in the wideband regime. For input signals that are peak constrained in time only (and, hence, allowed to be peaky in frequency), we provide upper and lower bounds on the infinite-bandwidth capacity and find cases when the bounds coincide and the infinite-bandwidth capacity is characterized exactly. Our lower bound is closely related to a result by Viterbi (1967). The analysis in this paper is based on a discrete-time discrete-frequency approximation of WSSUS time- and frequency-selective channels. This discretization explicitly takes into account the underspread property, which is satisfied by virtually all wireless communication channels.<|reference_end|> | arxiv | @article{durisi2008noncoherent,
title={Noncoherent Capacity of Underspread Fading Channels},
author={Giuseppe Durisi and Ulrich G. Schuster and Helmut B\"olcskei and Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0804.1748},
year={2008},
doi={10.1109/TIT.2009.2034807},
archivePrefix={arXiv},
eprint={0804.1748},
primaryClass={cs.IT math.IT}
} | durisi2008noncoherent |
arxiv-3319 | 0804.1758 | Measure and integral with purely ordinal scales | <|reference_start|>Measure and integral with purely ordinal scales: We develop a purely ordinal model for aggregation functionals for lattice valued functions, comprising as special cases quantiles, the Ky Fan metric and the Sugeno integral. For modeling findings of psychological experiments like the reflection effect in decision behaviour under risk or uncertainty, we introduce reflection lattices. These are complete linear lattices endowed with an order reversing bijection like the reflection at 0 on the real interval $[-1,1]$. Mathematically we investigate the lattice of non-void intervals in a complete linear lattice, then the class of monotone interval-valued functions and their inner product.<|reference_end|> | arxiv | @article{denneberg2008measure,
title={Measure and integral with purely ordinal scales},
author={Dieter Denneberg and Michel Grabisch (LIP6)},
journal={Journal of Mathematical Psychology 48 (2004) 15-27},
year={2008},
archivePrefix={arXiv},
eprint={0804.1758},
primaryClass={cs.DM math.PR math.RA}
} | denneberg2008measure |
arxiv-3320 | 0804.1760 | The Symmetric Sugeno Integral | <|reference_start|>The Symmetric Sugeno Integral: We propose an extension of the Sugeno integral for negative numbers, in the spirit of the symmetric extension of the Choquet integral, also called the Šipoš integral. Our framework is purely ordinal, since the Sugeno integral has its interest when the underlying structure is ordinal. We begin by defining negative numbers on a linearly ordered set, and we endow this new structure with a suitable algebra, very close to the ring of real numbers. In a second step, we introduce the Möbius transform on this new structure. Lastly, we define the symmetric Sugeno integral, and show its similarity with the symmetric Choquet integral.<|reference_end|> | arxiv | @article{grabisch2008the,
title={The Symmetric Sugeno Integral},
author={Michel Grabisch (LIP6)},
journal={Fuzzy Sets and Systems 139 (2003) 473-490},
year={2008},
archivePrefix={arXiv},
eprint={0804.1760},
primaryClass={cs.DM math.PR math.RA}
} | grabisch2008the |
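For context, the ordinary (nonnegative) discrete Sugeno integral that this paper extends is sup_i min(f_(i), mu(A_(i))), taken over the decreasing rearrangement of f with A_(i) the set of the i top-ranked elements. A minimal sketch under that standard definition — the toy capacity is our own choice, and the paper's symmetric extension to negative values is not reproduced:

```python
def sugeno_integral(values, mu):
    """Discrete Sugeno integral of f w.r.t. capacity mu.

    values: {element: f(element)} with f >= 0; mu: set function on frozensets.
    Returns max over i of min(f_(i), mu(top-i elements)), f sorted decreasing.
    """
    ranked = sorted(values, key=values.get, reverse=True)
    best, top = 0.0, set()
    for x in ranked:
        top.add(x)
        best = max(best, min(values[x], mu(frozenset(top))))
    return best

# Toy capacity: normalized cardinality (illustrative only)
universe = {"a", "b", "c"}
mu = lambda s: len(s) / len(universe)
print(sugeno_integral({"a": 0.9, "b": 0.5, "c": 0.2}, mu))  # -> 0.5
```

Note the purely ordinal character: only max and min are used, never sums or products, which is why the construction survives on ordered structures where arithmetic is unavailable.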
arxiv-3321 | 0804.1762 | The Choquet integral for the aggregation of interval scales in multicriteria decision making | <|reference_start|>The Choquet integral for the aggregation of interval scales in multicriteria decision making: This paper addresses the question of which models fit with information concerning the preferences of the decision maker over each attribute, and his preferences about aggregation of criteria (interacting criteria). We show that the conditions induced by these information plus some intuitive conditions lead to a unique possible aggregation operator: the Choquet integral.<|reference_end|> | arxiv | @article{labreuche2008the,
title={The Choquet integral for the aggregation of interval scales in
multicriteria decision making},
author={Christophe Labreuche (TRT) and Michel Grabisch (LIP6)},
journal={Fuzzy Sets and Systems 137 (2003) 11-26},
year={2008},
archivePrefix={arXiv},
eprint={0804.1762},
primaryClass={cs.DM cs.AI}
} | labreuche2008the |
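The discrete Choquet integral the paper argues for can be written as the sum over the ascending rearrangement of f of (f_(i) - f_(i-1)) * mu(A_(i)), where A_(i) is the set of criteria scoring at least f_(i). A minimal sketch — the additive toy capacity, under which the Choquet integral collapses to a weighted mean, is our own illustration:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of f w.r.t. capacity mu.

    values: {criterion: score}; mu: set function on frozensets of criteria.
    Sums (f_(i) - f_(i-1)) * mu(A_(i)) over the ascending rearrangement.
    """
    ranked = sorted(values, key=values.get)        # ascending scores
    total, prev = 0.0, 0.0
    for i, x in enumerate(ranked):
        coalition = frozenset(ranked[i:])          # criteria scoring >= f(x)
        total += (values[x] - prev) * mu(coalition)
        prev = values[x]
    return total

# Additive capacity -> Choquet reduces to a weighted mean (illustrative)
weights = {"math": 0.5, "lang": 0.5}
mu = lambda s: sum(weights[c] for c in s)
print(choquet_integral({"math": 0.8, "lang": 0.4}, mu))  # -> 0.6 up to float rounding
```

Interaction between criteria enters precisely when mu is non-additive; the additive case above is the degenerate one in which the Choquet integral cannot model it.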
arxiv-3322 | 0804.1777 | Phutball is PSPACE-hard | <|reference_start|>Phutball is PSPACE-hard: We consider the $n\times n$ game of Phutball. It is shown that, given an arbitrary position of stones on the board, it is a PSPACE-hard problem to determine whether the specified player can win the game, regardless of the opponent's choices made during the game.<|reference_end|> | arxiv | @article{dereniowski2008phutball,
title={Phutball is PSPACE-hard},
author={Dariusz Dereniowski},
journal={Theoretical Computer Science 411 (2010) 3971-3978},
year={2008},
doi={10.1016/j.tcs.2010.08.019},
archivePrefix={arXiv},
eprint={0804.1777},
primaryClass={cs.GT cs.DM}
} | dereniowski2008phutball |
arxiv-3323 | 0804.1788 | Prediciendo el generador cuadratico (in Spanish) | <|reference_start|>Prediciendo el generador cuadratico (in Spanish): Let p be a prime and a, c be integers such that a ≠ 0 mod p. The quadratic generator is a sequence (u_n) of pseudorandom numbers defined by u_{n+1}=a*(u_n)^2+c mod p. In this article we prove that if we know sufficiently many of the most significant bits of two consecutive values u_n, u_{n+1}, then we can compute the seed u_0 in polynomial time, except for a small number of exceptional values.<|reference_end|> | arxiv | @article{gomez-perez2008prediciendo,
title={Prediciendo el generador cuadratico (in Spanish)},
author={Domingo Gomez-Perez and Jaime Gutierrez and Alvar Ibeas and David Sevilla},
journal={Proceedings of the VIII Reunion Espanola sobre Criptologia y
Seguridad de la Informacion (RECSI), p. 185-195, Diaz de Santos, 2004. ISBN
84-7978-650-7},
year={2008},
archivePrefix={arXiv},
eprint={0804.1788},
primaryClass={cs.CR}
} | gomez-perez2008prediciendo |
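The recurrence in the abstract, u_{n+1} = a*(u_n)^2 + c mod p, is easy to iterate. A minimal sketch — the toy parameters are our own and carry no cryptographic meaning, and the paper's attack recovering u_0 from most-significant bits is not reproduced:

```python
def quadratic_generator(seed, a, c, p, n):
    """First n values of the quadratic congruential generator
    u_{k+1} = a * u_k^2 + c (mod p), starting from u_0 = seed."""
    u, out = seed, []
    for _ in range(n):
        u = (a * u * u + c) % p
        out.append(u)
    return out

# Tiny illustrative parameters (not cryptographically meaningful)
print(quadratic_generator(seed=3, a=2, c=1, p=101, n=5))  # -> [19, 16, 8, 28, 54]
```

The result in the paper says that leaking enough leading bits of just two consecutive outputs of such a sequence already determines the seed, outside a small exceptional set.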
arxiv-3324 | 0804.1811 | Space-Time Codes from Structured Lattices | <|reference_start|>Space-Time Codes from Structured Lattices: We present constructions of Space-Time (ST) codes based on lattice coset coding. First, we focus on ST code constructions for the short block-length case, i.e., when the block-length is equal to or slightly larger than the number of transmit antennas. We present constructions based on dense lattice packings and nested lattice (Voronoi) shaping. Our codes achieve the optimal diversity-multiplexing tradeoff of quasi-static MIMO fading channels for any fading statistics, and perform very well also at practical, moderate values of signal to noise ratios (SNR). Then, we extend the construction to the case of large block lengths, by using trellis coset coding. We provide constructions of trellis coded modulation (TCM) schemes that are endowed with good packing and shaping properties. Both short-block and trellis constructions allow for a reduced complexity decoding algorithm based on minimum mean squared error generalized decision feedback equalizer (MMSE-GDFE) lattice decoding and a combination of this with a Viterbi TCM decoder for the TCM case. Beyond the interesting algebraic structure, we exhibit codes whose performance is among the state-of-the art considering codes with similar encoding/decoding complexity.<|reference_end|> | arxiv | @article{kumar2008space-time,
title={Space-Time Codes from Structured Lattices},
author={K. Raj Kumar and Giuseppe Caire},
journal={arXiv preprint arXiv:0804.1811},
year={2008},
archivePrefix={arXiv},
eprint={0804.1811},
primaryClass={cs.IT math.IT}
} | kumar2008space-time |
arxiv-3325 | 0804.1839 | Necessary and Sufficient Conditions on Sparsity Pattern Recovery | <|reference_start|>Necessary and Sufficient Conditions on Sparsity Pattern Recovery: The problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m, with signal dimension n and sparsity-level nonzeros k, for asymptotically-reliable detection. We show a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Omega(k log(n-k)) measurements. Conversely, it is shown that this scaling of Omega(k log(n-k)) measurements is sufficient for a remarkably simple ``maximum correlation'' estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and for low SNR improves upon previous work. The sufficient condition provides the first asymptotically-reliable detection guarantee at finite SNR.<|reference_end|> | arxiv | @article{fletcher2008necessary,
title={Necessary and Sufficient Conditions on Sparsity Pattern Recovery},
author={Alyson K. Fletcher and Sundeep Rangan and Vivek K. Goyal},
journal={IEEE Trans. on Information Theory, vol. 55, no. 12, pp. 5758-5772,
December 2009},
year={2008},
doi={10.1109/TIT.2009.2032726},
archivePrefix={arXiv},
eprint={0804.1839},
primaryClass={cs.IT math.IT}
} | fletcher2008necessary |
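The "maximum correlation" estimator that the abstract shows to be sufficient is simple enough to sketch: score every column of the measurement matrix by its absolute correlation with the observation and keep the k best-scoring indices. A minimal pure-Python illustration (the matrix, dimensions, signal values, and noise level below are toy choices of ours, not the paper's):

```python
import random

def max_correlation_support(A, y, k):
    """Estimate the sparsity pattern as the k columns of A most
    correlated (in absolute value) with the observation y."""
    m, n = len(A), len(A[0])
    scores = []
    for j in range(n):
        c = sum(A[i][j] * y[i] for i in range(m))
        scores.append((abs(c), j))
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])

# Toy instance: x is 2-sparse in R^20, observed through a random
# Gaussian matrix with mild additive noise (illustrative sizes only).
random.seed(0)
n, m, k = 20, 200, 2
true_support = [3, 11]
x = [0.0] * n
for j in true_support:
    x[j] = 10.0
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x[j] for j in true_support) + random.gauss(0, 0.1)
     for i in range(m)]
est = max_correlation_support(A, y, k)
```

With far fewer measurements the same estimator starts missing support indices, which is the regime the paper's necessary condition quantifies.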
arxiv-3326 | 0804.1840 | Selfish Distributed Compression over Networks: Correlation Induces Anarchy | <|reference_start|>Selfish Distributed Compression over Networks: Correlation Induces Anarchy: We consider the min-cost multicast problem (under network coding) with multiple correlated sources where each terminal wants to losslessly reconstruct all the sources. We study the inefficiency brought forth by the selfish behavior of the terminals in this scenario by modeling it as a noncooperative game among the terminals. The degradation in performance due to the lack of regulation is measured by the Price of Anarchy (POA), which is defined as the ratio between the cost of the worst possible Wardrop equilibrium and the socially optimum cost. Our main result is that in contrast with the case of independent sources, the presence of source correlations can significantly increase the price of anarchy. Towards establishing this result, we first characterize the socially optimal flow and rate allocation in terms of four intuitive conditions. Next, we show that the Wardrop equilibrium is a socially optimal solution for a different set of (related) cost functions. Using this, we construct explicit examples that demonstrate that the POA $> 1$ and determine near-tight upper bounds on the POA as well. The main techniques in our analysis are Lagrangian duality theory and the usage of the supermodularity of conditional entropy.<|reference_end|> | arxiv | @article{ramamoorthy2008selfish,
title={Selfish Distributed Compression over Networks: Correlation Induces
Anarchy},
author={Aditya Ramamoorthy and Vwani Roychowdhury and Sudhir Kumar Singh},
journal={arXiv preprint arXiv:0804.1840},
year={2008},
archivePrefix={arXiv},
eprint={0804.1840},
primaryClass={cs.GT cs.IT math.IT}
} | ramamoorthy2008selfish |
arxiv-3327 | 0804.1845 | An Optimal Bloom Filter Replacement Based on Matrix Solving | <|reference_start|>An Optimal Bloom Filter Replacement Based on Matrix Solving: We suggest a method for holding a dictionary data structure, which maps keys to values, in the spirit of Bloom Filters. The space requirements of the dictionary we suggest are much smaller than those of a hashtable. We allow storing n keys, each mapped to a value which is a string of k bits. Our suggested method requires nk + o(n) bits of space to store the dictionary, and O(n) time to produce the data structure, and allows answering a membership query in O(1) memory probes. The dictionary size does not depend on the size of the keys. However, reducing the space requirements of the data structure comes at a certain cost. Our dictionary has a small probability of a one sided error. When attempting to obtain the value for a key that is stored in the dictionary we always get the correct answer. However, when testing for membership of an element that is not stored in the dictionary, we may get an incorrect answer, and when requesting the value of such an element we may get a certain random value. Our method is based on solving equations in GF(2^k) and using several hash functions. Another significant advantage of our suggested method is that we do not require using sophisticated hash functions. We only require pairwise independent hash functions. We also suggest a data structure that requires only nk bits of space, has O(n^2) preprocessing time, and has an O(log n) query time. However, this data structure requires uniform hash functions. In order to replace a Bloom Filter of n elements with an error probability of 2^{-k}, we require nk + o(n) memory bits, O(1) query time, O(n) preprocessing time, and only pairwise independent hash functions. 
Even the most advanced previously known Bloom Filter would require nk+O(n) space and uniform hash functions, so our method is significantly less space consuming, especially when k is small.<|reference_end|> | arxiv | @article{porat2008an,
title={An Optimal Bloom Filter Replacement Based on Matrix Solving},
author={Ely Porat},
journal={arXiv preprint arXiv:0804.1845},
year={2008},
archivePrefix={arXiv},
eprint={0804.1845},
primaryClass={cs.DS cs.DB}
} | porat2008an |
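The abstract only sketches the construction, so the following is a speculative, Bloomier-filter-style illustration of the core idea it names: hash each key to a few table positions and solve a linear system over GF(2) so that the XOR of the table entries at those positions reproduces each stored value. The hash family, table size, retry loop, and integer keys are our illustrative choices, not Porat's parameters:

```python
import random

def make_positions(m, r, seed):
    """r pairwise-independent-style hashed positions for integer keys."""
    rnd = random.Random(seed)
    coeffs = [(rnd.randrange(1, 2**31 - 1), rnd.randrange(2**31 - 1))
              for _ in range(r)]
    def positions(key):
        return sorted({(a * key + b) % (2**31 - 1) % m for a, b in coeffs})
    return positions

def build_table(kv, m, r=3):
    """Solve, by Gaussian elimination over GF(2), for a table of k-bit
    ints whose XOR over each key's positions equals that key's value;
    retry with fresh hash functions if the system is singular."""
    for seed in range(100):
        positions = make_positions(m, r, seed)
        eqs = []
        for key, val in kv.items():
            mask = 0
            for p in positions(key):
                mask |= 1 << p
            eqs.append([mask, val])
        pivots, ok = {}, True
        for idx in range(len(eqs)):
            eq = eqs[idx]
            while eq[0]:
                p = eq[0].bit_length() - 1
                if p not in pivots:
                    pivots[p] = idx
                    break
                eq[0] ^= eqs[pivots[p]][0]
                eq[1] ^= eqs[pivots[p]][1]
            else:
                ok = eq[1] == 0        # reduced to 0 = v, v != 0: singular
            if not ok:
                break
        if not ok:
            continue
        table = [0] * m
        for p in sorted(pivots):       # back-substitute, lowest pivot first
            mask, rhs = eqs[pivots[p]]
            acc, rest = rhs, mask ^ (1 << p)
            while rest:
                q = rest.bit_length() - 1
                acc ^= table[q]
                rest ^= 1 << q
            table[p] = acc
        return table, positions
    raise ValueError("no workable hash functions found")

def lookup(table, positions, key):
    acc = 0
    for p in positions(key):    # absent keys yield an arbitrary value:
        acc ^= table[p]         # the one-sided error the abstract describes
    return acc

kv = {10: 11, 22: 7, 35: 0, 47: 255, 58: 1, 63: 42}   # toy 8-bit values
table, positions = build_table(kv, m=20)
```

The table stores m values of k bits each, in line with the abstract's point that space depends on n and k but not on the key size.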
arxiv-3328 | 0804.1879 | Lambda-Free Logical Frameworks | <|reference_start|>Lambda-Free Logical Frameworks: We present the definition of the logical framework TF, the Type Framework. TF is a lambda-free logical framework; it does not include lambda-abstraction or product kinds. We give formal proofs of several results in the metatheory of TF, and show how it can be conservatively embedded in the logical framework LF: its judgements can be seen as the judgements of LF that are in beta-normal, eta-long normal form. We show how several properties, such as adequacy theorems for object theories and the injectivity of constants, can be proven more easily in TF, and then `lifted' to LF.<|reference_end|> | arxiv | @article{adams2008lambda-free,
title={Lambda-Free Logical Frameworks},
author={Robin Adams},
journal={arXiv preprint arXiv:0804.1879},
year={2008},
archivePrefix={arXiv},
eprint={0804.1879},
primaryClass={cs.LO}
} | adams2008lambda-free |
arxiv-3329 | 0804.1888 | Quantum circuits for strongly correlated quantum systems | <|reference_start|>Quantum circuits for strongly correlated quantum systems: In recent years, we have witnessed an explosion of experimental tools by which quantum systems can be manipulated in a controlled and coherent way. One of the most important goals now is to build quantum simulators, which would open up the possibility of exciting experiments probing various theories in regimes that are not achievable under normal lab circumstances. Here we present a novel approach to gain detailed control on the quantum simulation of strongly correlated quantum many-body systems by constructing the explicit quantum circuits that diagonalize their dynamics. We show that the exact quantum circuits underlying some of the most relevant many-body Hamiltonians only need a finite amount of local gates. As a particularly simple instance, the full dynamics of a one-dimensional Quantum Ising model in a transverse field with four spins is shown to be reproduced using a quantum circuit of only six local gates. This opens up the possibility of experimentally producing strongly correlated states, their time evolution at zero time and even thermal superpositions at zero temperature. Our method also allows to uncover the exact circuits corresponding to models that exhibit topological order and to stabilizer states.<|reference_end|> | arxiv | @article{verstraete2008quantum,
title={Quantum circuits for strongly correlated quantum systems},
author={Frank Verstraete and J. Ignacio Cirac and Jose I. Latorre},
journal={arXiv preprint arXiv:0804.1888},
year={2008},
doi={10.1103/PhysRevA.79.032316},
archivePrefix={arXiv},
eprint={0804.1888},
primaryClass={quant-ph cond-mat.str-el cs.DS hep-th}
} | verstraete2008quantum |
arxiv-3330 | 0804.1893 | The F.A.S.T.-Model | <|reference_start|>The F.A.S.T.-Model: A discrete model of pedestrian motion is presented that is implemented in the Floor field- and Agentbased Simulation Tool (F.A.S.T.) which has already been applied to a variety of real life scenarios.<|reference_end|> | arxiv | @article{kretz2008the,
title={The F.A.S.T.-Model},
author={Tobias Kretz and Michael Schreckenberg},
journal={Cellular Automata -- 7th International Conference on Cellular
Automata for Research and Industry, ACRI 2006, Perpignan, France, September
20-23, pages 712--715, Springer 2006, Proceedings},
year={2008},
doi={10.1007/11861201_85},
archivePrefix={arXiv},
eprint={0804.1893},
primaryClass={cs.MA physics.soc-ph}
} | kretz2008the |
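The floor-field mechanics behind discrete pedestrian models of this kind can be previewed compactly: a static field stores each cell's distance to the exit, and agents greedily descend it. The grid size, metric, and single-agent update rule below are illustrative simplifications of ours, not the F.A.S.T. specification:

```python
from collections import deque

def static_floor_field(grid, exit_cell):
    """BFS distance-to-exit on a 0/1 grid (1 = wall): the static floor
    field that floor-field pedestrian models use to steer agents."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[exit_cell[0]][exit_cell[1]] = 0
    dq = deque([exit_cell])
    while dq:
        r, c = dq.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                dq.append((nr, nc))
    return dist

def step(dist, cell):
    """Greedy agent update: move to the neighbouring cell (or stay put)
    with the smallest field value."""
    r, c = cell
    best = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(dist) and 0 <= nc < len(dist[0]) \
                and dist[nr][nc] is not None \
                and dist[nr][nc] < dist[best[0]][best[1]]:
            best = (nr, nc)
    return best

grid = [[0] * 5 for _ in range(5)]      # open 5x5 room, exit at (0, 0)
field = static_floor_field(grid, (0, 0))
agent = (4, 4)
for _ in range(8):                      # each step reduces the field by 1
    agent = step(field, agent)
```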
arxiv-3331 | 0804.1894 | A Reliability-based Framework for Multi-path Routing Analysis in Mobile Ad-Hoc Networks | <|reference_start|>A Reliability-based Framework for Multi-path Routing Analysis in Mobile Ad-Hoc Networks: Unlike traditional routing procedures that, at best, single out a unique route, multi-path routing protocols discover proactively several alternative routes. It has been recognized that multi-path routing can be more efficient than the traditional one, mainly for mobile ad hoc networks, where route failure events are frequent. Most studies in the area of multi-path routing focus on heuristic methods, and the performances of these strategies are commonly evaluated by numerical simulations. The need for a theoretical analysis motivates this paper, which proposes to resort to the terminal-pair routing reliability as performance metric. This metric allows one to assess the performance gain due to the availability of route diversity. By resorting to graph theory, we propose an analytical framework to evaluate the tolerance of multi-path route discovery processes against route failures for mobile ad hoc networks. Moreover, we derive a useful bound to easily estimate the performance improvements achieved by multi-path routing with respect to any traditional routing protocol. Finally, numerical simulation results show the effectiveness of this performance analysis.<|reference_end|> | arxiv | @article{caleffi2008a,
title={A Reliability-based Framework for Multi-path Routing Analysis in Mobile
Ad-Hoc Networks},
author={Marcello Caleffi and Giancarlo Ferraiuolo and Luigi Paura},
journal={International Journal of Communication Networks and Distributed
Systems, Vol.1 No.4/5/6, 2008},
year={2008},
doi={10.1504/IJCNDS.2008.021079},
archivePrefix={arXiv},
eprint={0804.1894},
primaryClass={cs.NI}
} | caleffi2008a |
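Terminal-pair reliability, the metric this entry adopts, has a direct brute-force definition that is handy for checking small cases: sum the probabilities of all edge-failure states in which the source still reaches the destination. The graphs and the survival probability below are toy inputs, and the enumeration is exponential, so this is a sanity-checking tool rather than the paper's analytical framework:

```python
from itertools import product

def terminal_pair_reliability(nodes, edges, s, t, p):
    """Exact two-terminal reliability: probability that s still reaches
    t when each edge survives independently with probability p."""
    total = 0.0
    for state in product([True, False], repeat=len(edges)):
        prob = 1.0
        alive = []
        for up, e in zip(state, edges):
            prob *= p if up else (1 - p)
            if up:
                alive.append(e)
        adj = {v: [] for v in nodes}    # DFS over surviving edges
        for u, v in alive:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if t in seen:
            total += prob
    return total

rel_parallel = terminal_pair_reliability(
    ['s', 't'], [('s', 't'), ('s', 't')], 's', 't', 0.9)
rel_series = terminal_pair_reliability(
    ['s', 'a', 't'], [('s', 'a'), ('a', 't')], 's', 't', 0.9)
```

Two disjoint one-hop routes give 1 - (1 - 0.9)^2 = 0.99, while a single two-hop route gives 0.9^2 = 0.81: the kind of route-diversity gain the paper bounds analytically.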
arxiv-3332 | 0804.1921 | On the Extension of Pseudo-Boolean Functions for the Aggregation of Interacting Criteria | <|reference_start|>On the Extension of Pseudo-Boolean Functions for the Aggregation of Interacting Criteria: The paper presents an analysis on the use of integrals defined for non-additive measures (or capacities) as the Choquet and the Sipos integral, and the multilinear model, all seen as extensions of pseudo-Boolean functions, and used as a means to model interaction between criteria in a multicriteria decision making problem. The emphasis is put on the use, besides classical comparative information, of information about difference of attractiveness between acts, and on the existence, for each point of view, of a "neutral level", allowing to introduce the absolute notion of attractive or repulsive act. It is shown that in this case, the Sipos integral is a suitable solution, although not unique. Properties of the Sipos integral as a new way of aggregating criteria are shown, with emphasis on the interaction among criteria.<|reference_end|> | arxiv | @article{grabisch2008on,
title={On the Extension of Pseudo-Boolean Functions for the Aggregation of
Interacting Criteria},
author={Michel Grabisch (LIP6) and Christophe Labreuche (TRT) and Jean-Claude Vansnick},
journal={European Journal of Operational Research (2003) 28-47},
year={2008},
archivePrefix={arXiv},
eprint={0804.1921},
primaryClass={cs.DM}
} | grabisch2008on |
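The two integrals compared in this entry are easy to state computationally: the Choquet integral weights sorted increments by the capacity of the still-active criteria, and the Sipos (symmetric) integral applies it separately to the positive and negative parts around the neutral level. A small sketch; the example capacities are our own:

```python
def choquet(x, mu):
    """Discrete Choquet integral of nonnegative x w.r.t. a capacity mu:
    sort values increasingly and weight the increments by the capacity
    of the criteria whose value is still at least the current one."""
    idx = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for pos, i in enumerate(idx):
        active = frozenset(idx[pos:])        # criteria with x_j >= x_i
        total += (x[i] - prev) * mu(active)
        prev = x[i]
    return total

def sipos(x, mu):
    """Symmetric (Sipos) integral: take 0 as the neutral level and
    integrate the positive and negative parts separately."""
    pos = [max(v, 0.0) for v in x]
    neg = [max(-v, 0.0) for v in x]
    return choquet(pos, mu) - choquet(neg, mu)

w = {0: 0.5, 1: 0.2, 2: 0.3}
additive = lambda S: sum(w[i] for i in S)    # an additive (non-interacting) capacity
val = choquet([3.0, 1.0, 2.0], additive)     # reduces to the weighted sum 2.3
```

With an additive capacity the Choquet integral collapses to a weighted mean; interaction between criteria appears exactly when the capacity is non-additive.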
arxiv-3333 | 0804.1928 | Characterizing User Mobility in Second Life | <|reference_start|>Characterizing User Mobility in Second Life: In this work we present a measurement study of user mobility in Second Life. We first discuss different techniques to collect user traces and then focus on results obtained using a crawler that we built. Tempted by the question whether our methodology could provide similar results to those obtained in real-world experiments, we study the statistical distribution of user contacts and show that from a qualitative point of view user mobility in Second Life presents similar traits to those of real humans. We further push our analysis to line of sight networks that emerge from user interaction and show that they are highly clustered. Lastly, we focus on the spatial properties of user movements and observe that users in Second Life revolve around several points of interest, traveling in general short distances. Besides our findings, the traces collected in this work can be very useful for trace-driven simulations of communication schemes in delay tolerant networks and their performance evaluation.<|reference_end|> | arxiv | @article{la2008characterizing,
title={Characterizing User Mobility in Second Life},
author={Chi-Anh La and Pietro Michiardi},
journal={arXiv preprint arXiv:0804.1928},
year={2008},
number={rr08212},
archivePrefix={arXiv},
eprint={0804.1928},
primaryClass={cs.CY}
} | la2008characterizing |
arxiv-3334 | 0804.1932 | A complexity dichotomy for partition functions with mixed signs | <|reference_start|>A complexity dichotomy for partition functions with mixed signs: Partition functions, also known as homomorphism functions, form a rich family of graph invariants that contain combinatorial invariants such as the number of k-colourings or the number of independent sets of a graph and also the partition functions of certain "spin glass" models of statistical physics such as the Ising model. Building on earlier work by Dyer, Greenhill and Bulatov, Grohe, we completely classify the computational complexity of partition functions. Our main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete. Partition functions are described by symmetric matrices with real entries, and we prove that it is decidable in polynomial time in terms of the matrix whether a given partition function is in polynomial time or #P-complete. While in general it is very complicated to give an explicit algebraic or combinatorial description of the tractable cases, for partition functions described by Hadamard matrices -- these turn out to be central in our proofs -- we obtain a simple algebraic tractability criterion, which says that the tractable cases are those "representable" by a quadratic polynomial over the field GF(2).<|reference_end|> | arxiv | @article{goldberg2008a,
title={A complexity dichotomy for partition functions with mixed signs},
author={Leslie Ann Goldberg and Martin Grohe and Mark Jerrum and Marc Thurley},
journal={arXiv preprint arXiv:0804.1932},
year={2008},
archivePrefix={arXiv},
eprint={0804.1932},
primaryClass={cs.CC cs.DM}
} | goldberg2008a |
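A partition function in the sense of this entry is concrete enough to brute-force for intuition: fix a symmetric matrix A and sum, over all assignments of matrix indices ("spins") to vertices, the product of A-entries along the edges. The sketch below is exponential in the number of vertices, so it is for tiny illustrative graphs only:

```python
from itertools import product

def partition_function(vertices, edges, A):
    """Brute-force homomorphism partition function Z_A(G): sum over all
    spin assignments of the product of matrix entries along the edges."""
    k = len(A)
    total = 0
    for sigma in product(range(k), repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        w = 1
        for u, v in edges:
            w *= A[spin[u]][spin[v]]
        total += w
    return total

# A = [[1,1],[1,0]] forbids two adjacent 1-spins, so Z counts
# independent sets: a single edge has 3 of them.
z_indep = partition_function(['a', 'b'], [('a', 'b')], [[1, 1], [1, 0]])
```

Taking A = [[0,1],[1,0]] counts proper 2-colourings, and entries exp(beta), exp(-beta) give the Ising partition function; the dichotomy classifies for which matrices A this computation stays polynomial.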
arxiv-3335 | 0804.1970 | A Security Protocol for Multi-User Authentication | <|reference_start|>A Security Protocol for Multi-User Authentication: In this note we propose an encryption communication protocol which also provides database security. For the encryption of the data communication we use a transformation similar to the Cubic Public-key transformation. This method represents a many-to-one mapping which increases the complexity for any brute force attack. Some interesting properties of the transformation are also included which are basic to the authentication protocol.<|reference_end|> | arxiv | @article{chava2008a,
title={A Security Protocol for Multi-User Authentication},
author={Srikanth Chava},
journal={arXiv preprint arXiv:0804.1970},
year={2008},
archivePrefix={arXiv},
eprint={0804.1970},
primaryClass={cs.CR}
} | chava2008a |
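The note does not spell out the transformation itself, but its headline property, a many-to-one map that raises the cost of brute-force inversion, can be illustrated with plain modular cubing: whenever 3 divides p - 1, every nonzero cube modulo p has exactly three cube roots. This is a generic illustration, not the paper's exact construction:

```python
def cube_map_fibres(p):
    """Group the nonzero residues mod p by their cube x^3 mod p,
    showing how many preimages each image value has."""
    fibres = {}
    for x in range(1, p):
        fibres.setdefault(pow(x, 3, p), []).append(x)
    return fibres

fib7 = cube_map_fibres(7)   # 3 divides 7 - 1: cubing is 3-to-1 on nonzero residues
fib5 = cube_map_fibres(5)   # 3 does not divide 5 - 1: cubing is a bijection
```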
arxiv-3336 | 0804.1974 | Schemes for Deterministic Polynomial Factoring | <|reference_start|>Schemes for Deterministic Polynomial Factoring: In this work we relate the deterministic complexity of factoring polynomials (over finite fields) to certain combinatorial objects we call m-schemes. We extend the known conditional deterministic subexponential time polynomial factoring algorithm for finite fields to get an underlying m-scheme. We demonstrate how the properties of m-schemes relate to improvements in the deterministic complexity of factoring polynomials over finite fields assuming the generalized Riemann Hypothesis (GRH). In particular, we give the first deterministic polynomial time algorithm (assuming GRH) to find a nontrivial factor of a polynomial of prime degree n where (n-1) is a smooth number.<|reference_end|> | arxiv | @article{ivanyos2008schemes,
title={Schemes for Deterministic Polynomial Factoring},
author={Gábor Ivanyos and Marek Karpinski and Nitin Saxena},
journal={arXiv preprint arXiv:0804.1974},
year={2008},
archivePrefix={arXiv},
eprint={0804.1974},
primaryClass={cs.CC cs.SC}
} | ivanyos2008schemes |
arxiv-3337 | 0804.1982 | Linear Time Recognition Algorithms for Topological Invariants in 3D | <|reference_start|>Linear Time Recognition Algorithms for Topological Invariants in 3D: In this paper, we design linear time algorithms to recognize and determine topological invariants such as the genus and homology groups in 3D. These properties can be used to identify patterns in 3D image recognition. This has a tremendous number of applications in 3D medical image analysis. Our method is based on cubical images with direct adjacency, also called (6,26)-connectivity images in discrete geometry. According to the fact that there are only six types of local surface points in 3D and a discrete version of the well-known Gauss-Bonnet Theorem in differential geometry, we first determine the genus of a closed 2D-connected component (a closed digital surface). Then, we use Alexander duality to obtain the homology groups of a 3D object in 3D space.<|reference_end|> | arxiv | @article{chen2008linear,
title={Linear Time Recognition Algorithms for Topological Invariants in 3D},
author={Li Chen and Yongwu Rong},
journal={arXiv preprint arXiv:0804.1982},
year={2008},
archivePrefix={arXiv},
eprint={0804.1982},
primaryClass={cs.CV}
} | chen2008linear |
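The topological bookkeeping behind such algorithms can be previewed with a direct (deliberately non-linear-time) computation: enumerate the vertices, edges, faces, and cubes of the cubical complex filled in by the voxels and take the alternating sum. For a solid handlebody the Euler characteristic satisfies chi = 1 - g, so the genus of its boundary surface is 1 - chi. This toy enumeration is ours, not the paper's surface-point classification:

```python
def euler_characteristic(voxels):
    """chi = V - E + F - C for the cubical complex spanned by unit
    cubes given by their integer corner (x, y, z)."""
    voxels = set(voxels)
    verts, edges, faces = set(), set(), set()
    for x, y, z in voxels:
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    verts.add((x + dx, y + dy, z + dz))
        for a in (0, 1):
            for b in (0, 1):
                edges.add(('x', x, y + a, z + b))   # edge along the x axis
                edges.add(('y', x + a, y, z + b))
                edges.add(('z', x + a, y + b, z))
        for d in (0, 1):
            faces.add(('xy', x, y, z + d))
            faces.add(('xz', x, y + d, z))
            faces.add(('yz', x + d, y, z))
    return len(verts) - len(edges) + len(faces) - len(voxels)

chi_ball = euler_characteristic([(0, 0, 0)])   # one cube: a ball, chi = 1
# 3x3x1 block minus its centre: a solid torus, chi = 0, boundary genus 1
ring = [(x, y, 0) for x in range(3) for y in range(3) if (x, y) != (1, 1)]
chi_ring = euler_characteristic(ring)
```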
arxiv-3338 | 0804.2032 | Tight Bounds and Faster Algorithms for Directed Max-Leaf Problems | <|reference_start|>Tight Bounds and Faster Algorithms for Directed Max-Leaf Problems: An out-tree $T$ of a directed graph $D$ is a rooted tree subgraph with all arcs directed outwards from the root. An out-branching is a spanning out-tree. By $l(D)$ and $l_s(D)$ we denote the maximum number of leaves over all out-trees and out-branchings of $D$, respectively. We give fixed parameter tractable algorithms for deciding whether $l_s(D)\geq k$ and whether $l(D)\geq k$ for a digraph $D$ on $n$ vertices, both with time complexity $2^{O(k\log k)} \cdot n^{O(1)}$. This improves on previous algorithms with complexity $2^{O(k^3\log k)} \cdot n^{O(1)}$ and $2^{O(k\log^2 k)} \cdot n^{O(1)}$, respectively. To obtain the complexity bound in the case of out-branchings, we prove that when all arcs of $D$ are part of at least one out-branching, $l_s(D)\geq l(D)/3$. The second bound we prove in this paper states that for strongly connected digraphs $D$ with minimum in-degree 3, $l_s(D)\geq \Theta(\sqrt{n})$, where previously $l_s(D)\geq \Theta(\sqrt[3]{n})$ was the best known bound. This bound is tight, and also holds for the larger class of digraphs with minimum in-degree 3 in which every arc is part of at least one out-branching.<|reference_end|> | arxiv | @article{bonsma2008tight,
title={Tight Bounds and Faster Algorithms for Directed Max-Leaf Problems},
author={Paul Bonsma and Frederic Dorn},
journal={arXiv preprint arXiv:0804.2032},
year={2008},
archivePrefix={arXiv},
eprint={0804.2032},
primaryClass={cs.DS cs.DM}
} | bonsma2008tight |
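For intuition about the quantity l_s(D), here is an exhaustive check usable on tiny digraphs: try every root and every choice of in-arc ("parent") for each remaining vertex, keep the acyclic choices (these are exactly the spanning out-branchings), and count the vertices that parent nobody. This is plain brute force for illustration; the paper's algorithms are fixed-parameter tractable, unlike this:

```python
from itertools import product

def max_leaf_out_branching(n, arcs):
    """Brute-force l_s(D): maximum number of leaves over all spanning
    out-branchings of a digraph on vertices 0..n-1 (tiny n only).
    Returns 0 if no out-branching exists."""
    preds = [[u for (u, v) in arcs if v == w] for w in range(n)]
    best = 0
    for root in range(n):
        choices = [preds[v] if v != root else [None] for v in range(n)]
        if any(len(c) == 0 for c in choices):
            continue
        for parent in product(*choices):
            ok = True
            for v in range(n):          # every vertex must reach the root
                seen, u = set(), v
                while u is not None and u not in seen:
                    seen.add(u)
                    u = parent[u]
                if u is not None:       # revisited a vertex: a cycle
                    ok = False
                    break
            if ok:
                leaves = sum(1 for v in range(n) if v not in parent)
                best = max(best, leaves)
    return best

star = max_leaf_out_branching(4, [(0, 1), (0, 2), (0, 3)])   # out-star: 3 leaves
chain = max_leaf_out_branching(3, [(0, 1), (1, 2)])          # directed path: 1 leaf
```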
arxiv-3339 | 0804.2035 | The non-anticipation of the asynchronous systems | <|reference_start|>The non-anticipation of the asynchronous systems: The asynchronous systems are the models of the asynchronous circuits from the digital electrical engineering and non-anticipation is one of the most important properties in systems theory. Our present purpose is to introduce several concepts of non-anticipation of the asynchronous systems.<|reference_end|> | arxiv | @article{vlad2008the,
title={The non-anticipation of the asynchronous systems},
author={Serban E. Vlad},
journal={arXiv preprint arXiv:0804.2035},
year={2008},
archivePrefix={arXiv},
eprint={0804.2035},
primaryClass={cs.GL}
} | vlad2008the |
arxiv-3340 | 0804.2036 | Towards Physarum robots: computing and manipulating on water surface | <|reference_start|>Towards Physarum robots: computing and manipulating on water surface: Plasmodium of Physarum polycephalum is an ideal biological substrate for implementing concurrent and parallel computation, including combinatorial geometry and optimization on graphs. We report results of scoping experiments on Physarum computing in conditions of minimal friction, on the water surface. We show that plasmodium of Physarum is capable of computing basic spanning trees and manipulating light-weight objects. We speculate that our results pave the pathways towards design and implementation of amorphous biological robots.<|reference_end|> | arxiv | @article{adamatzky2008towards,
title={Towards Physarum robots: computing and manipulating on water surface},
author={Andrew Adamatzky},
journal={Journal of Bionic Engineering Volume 5, Issue 4, December 2008,
Pages 348-357},
year={2008},
doi={10.1016/S1672-6529(08)60180-8},
archivePrefix={arXiv},
eprint={0804.2036},
primaryClass={cs.RO cs.AI}
} | adamatzky2008towards |
arxiv-3341 | 0804.2037 | Some properties of the regular asynchronous systems | <|reference_start|>Some properties of the regular asynchronous systems: The asynchronous systems are the models of the asynchronous circuits from the digital electrical engineering. An asynchronous system f is a multi-valued function that assigns to each admissible input u a set f(u) of possible states x in f(u). A special case of asynchronous system consists in the existence of a Boolean function \Upsilon such that for any u and any x in f(u), a certain equation involving \Upsilon is fulfilled. Then \Upsilon is called the generator function of f (Moisil used the terminology of network function) and we say that f is generated by \Upsilon. The systems that have a generator function are called regular. Our purpose is to continue the study of the generation of the asynchronous systems that was started in [2], [3].<|reference_end|> | arxiv | @article{vlad2008some,
title={Some properties of the regular asynchronous systems},
author={Serban E. Vlad},
journal={arXiv preprint arXiv:0804.2037},
year={2008},
archivePrefix={arXiv},
eprint={0804.2037},
primaryClass={cs.OH}
} | vlad2008some |
arxiv-3342 | 0804.2043 | Analytical correlation of routing table length index and routing path length index in hierarchical routing model | <|reference_start|>Analytical correlation of routing table length index and routing path length index in hierarchical routing model: In Kleinrock and Kamoun's paper, the inverse relation of routing table length index and routing path length index in hierarchical routing model is illustrated. In this paper we give the analytical correlation of routing table length index and routing path length index in hierarchical routing model.<|reference_end|> | arxiv | @article{lu2008analytical,
title={Analytical correlation of routing table length index and routing path
length index in hierarchical routing model},
author={Tingrong Lu},
journal={arXiv preprint arXiv:0804.2043},
year={2008},
archivePrefix={arXiv},
eprint={0804.2043},
primaryClass={cs.NI}
} | lu2008analytical |
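For context, the classic Kleinrock-Kamoun tradeoff that this note builds on: a k-level hierarchy over n nodes keeps roughly k * n^(1/k) routing-table entries, minimized near k = ln n where the table length is about e * ln n, at the cost of longer routing paths. A quick numerical check of that scaling (the note's own analytical correlation is not reproduced here):

```python
import math

def table_length(n, k):
    """Approximate per-node routing-table size for a k-level hierarchy
    over n nodes, following the Kleinrock-Kamoun scaling k * n**(1/k)."""
    return k * n ** (1.0 / k)

n = 1000
best_k = min(range(1, 40), key=lambda k: table_length(n, k))
best = table_length(n, best_k)
bound = math.e * math.log(n)   # analytic minimum e * ln n, attained at k = ln n
```

For n = 1000 the integer optimum lands at k = 7 (ln 1000 is about 6.9), and the minimized table length essentially matches e * ln n, a huge saving over the flat-routing size of n entries.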
arxiv-3343 | 0804.2057 | Comparing and Combining Methods for Automatic Query Expansion | <|reference_start|>Comparing and Combining Methods for Automatic Query Expansion: Query expansion is a well known method to improve the performance of information retrieval systems. In this work we have tested different approaches to extract the candidate query terms from the top ranked documents returned by the first-pass retrieval. One of them is the cooccurrence approach, based on measures of cooccurrence of the candidate and the query terms in the retrieved documents. The other one, the probabilistic approach, is based on the probability distribution of terms in the collection and in the top ranked set. We compare the retrieval improvement achieved by expanding the query with terms obtained with different methods belonging to both approaches. Besides, we have developed a naïve combination of both kinds of method, with which we have obtained results that improve those obtained with any of them separately. This result confirms that the information provided by each approach is of a different nature and, therefore, can be used in a combined manner.<|reference_end|> | arxiv | @article{pérez-agüera2008comparing,
title={Comparing and Combining Methods for Automatic Query Expansion},
author={José R. Pérez-Agüera and Lourdes Araujo},
journal={Advances in Natural Language Processing and Applications. Research
in Computing Science 33, 2008, pp. 177-188},
year={2008},
archivePrefix={arXiv},
eprint={0804.2057},
primaryClass={cs.IR}
} | pérez-agüera2008comparing |
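A minimal version of the cooccurrence approach described in the abstract: build a document set for every term occurring in the top-ranked documents and rank candidate expansion terms by their average Tanimoto overlap with the query terms' document sets. The corpus and the choice of Tanimoto as the cooccurrence measure are illustrative; the paper also evaluates probabilistic term-distribution measures not sketched here:

```python
def expansion_terms(query_terms, top_docs, n_terms=2):
    """Rank candidate expansion terms by mean Tanimoto overlap of their
    document sets with each query term's document set, computed over
    the top-ranked documents of the first-pass retrieval."""
    docsets = {}
    for i, doc in enumerate(top_docs):
        for term in set(doc.split()):
            docsets.setdefault(term, set()).add(i)

    def tanimoto(a, b):
        da, db = docsets.get(a, set()), docsets.get(b, set())
        return len(da & db) / len(da | db) if da | db else 0.0

    cands = [t for t in docsets if t not in query_terms]
    scored = sorted(
        cands,
        key=lambda t: -sum(tanimoto(t, q) for q in query_terms) / len(query_terms),
    )
    return scored[:n_terms]

docs = ["car engine repair", "car engine oil", "dog food"]
top = expansion_terms(["car"], docs, 1)   # "engine" cooccurs in both car docs
```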
arxiv-3344 | 0804.2058 | Physical Layer Network Coding Over Finite And Infinite Fields | <|reference_start|>Physical Layer Network Coding Over Finite And Infinite Fields: Direct application of network coding at the physical layer - physical layer network coding (PNC) - is a promising technique for two-way relay wireless networks. In a two-way relay network, relay nodes are used to relay two-way information flows between pairs of end nodes. This paper proposes a precise definition for PNC. Specifically, in PNC, a relay node does not decode the source information from the two ends separately, but rather directly maps the combined signals received simultaneously to a signal to be relayed. Based on this definition, PNC can be further sub-classed into two categories - PNCF (PNC over finite field) and PNCI (PNC over infinite field) - according to whether the network-code field (or groups, rings) adopted is finite or infinite. For each of PNCF and PNCI, we consider two specific estimation techniques for dealing with noise in the mapping process. The performance of the four schemes is investigated by means of analysis and simulation, assuming symbol-level synchronization only.<|reference_end|> | arxiv | @article{shengli2008physical,
title={Physical Layer Network Coding Over Finite And Infinite Fields},
author={Zhang Shengli and Soung Chang Liew and Lu Lu},
journal={arXiv preprint arXiv:0804.2058},
year={2008},
archivePrefix={arXiv},
eprint={0804.2058},
primaryClass={cs.NI}
} | shengli2008physical |
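The finite-field (PNCF) mapping for BPSK is compact enough to show directly: with s_i = 1 - 2*b_i, the superposed signal received by the relay is ±2 when the two source bits agree and 0 when they differ, so a simple magnitude threshold maps the combined signal straight to b1 XOR b2 without decoding either bit separately. This is a noiseless sketch; the paper's estimation techniques handle the noisy mapping:

```python
def bpsk(bit):
    """Map bit 0 -> +1 and bit 1 -> -1."""
    return 1 - 2 * bit

def pnc_relay_map(y, threshold=1.0):
    """PNCF-style demapping at the relay: |y| near 0 means the source
    bits differ (XOR = 1); |y| near 2 means they agree (XOR = 0)."""
    return 1 if abs(y) < threshold else 0

# The relay's view of all four bit combinations, noise-free.
demo = {(b1, b2): pnc_relay_map(bpsk(b1) + bpsk(b2))
        for b1 in (0, 1) for b2 in (0, 1)}
```

Broadcasting the XOR bit lets each end node recover the other's bit by XOR-ing with its own, which is what makes the two-way exchange take fewer slots than time sharing.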
arxiv-3345 | 0804.2095 | A Logic Programming Framework for Combinational Circuit Synthesis | <|reference_start|>A Logic Programming Framework for Combinational Circuit Synthesis: Logic Programming languages and combinational circuit synthesis tools share a common "combinatorial search over logic formulae" background. This paper attempts to reconnect the two fields with a fresh look at Prolog encodings for the combinatorial objects involved in circuit synthesis. While benefiting from Prolog's fast unification algorithm and built-in backtracking mechanism, efficiency of our search algorithm is ensured by using parallel bitstring operations together with logic variable equality propagation, as a mapping mechanism from primary inputs to the leaves of candidate Leaf-DAGs implementing a combinational circuit specification. After an exhaustive expressiveness comparison of various minimal libraries, a surprising first-runner, Strict Boolean Inequality "<" together with constant function "1" also turns out to have small transistor-count implementations, competitive to NAND-only or NOR-only libraries. As a practical outcome, a more realistic circuit synthesizer is implemented that combines rewriting-based simplification of (<,1) circuits with exhaustive Leaf-DAG circuit search. Keywords: logic programming and circuit design, combinatorial object generation, exact combinational circuit synthesis, universal boolean logic libraries, symbolic rewriting, minimal transistor-count circuit synthesis<|reference_end|> | arxiv | @article{tarau2008a,
title={A Logic Programming Framework for Combinational Circuit Synthesis},
author={Paul Tarau and Brenda Luderman},
journal={23rd International Conference on Logic Programming (ICLP), LNCS
4670, 2007, pages 180-194},
year={2008},
archivePrefix={arXiv},
eprint={0804.2095},
primaryClass={cs.LO cs.CE cs.DM cs.PL}
} | tarau2008a |
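The claim that strict Boolean inequality "<" together with the constant 1 forms a universal library can be machine-checked by brute force: close a set of truth tables under the single operation f < g = (NOT f) AND g and observe that every Boolean function appears. This is a verification sketch of ours, not the paper's Prolog machinery:

```python
def closure_lt_one(n_vars=2):
    """Set of truth tables (bitmasks over 2**n_vars rows) expressible
    from the variable projections and the constant 1, closed under the
    strict-inequality gate f < g = (NOT f) AND g."""
    rows = 2 ** n_vars
    full = (1 << rows) - 1              # truth table of the constant 1
    projections = []
    for v in range(n_vars):
        tt = 0
        for row in range(rows):
            if (row >> v) & 1:
                tt |= 1 << row
        projections.append(tt)
    funcs = set(projections) | {full}
    changed = True
    while changed:
        changed = False
        for f in list(funcs):
            for g in list(funcs):
                h = (~f & g) & full     # truth table of f < g
                if h not in funcs:
                    funcs.add(h)
                    changed = True
    return funcs

n_expressible = len(closure_lt_one(2))  # all 16 two-variable functions
```

Universality follows because NOT a = (a < 1) and a AND b = ((a < 1) < b), and {AND, NOT} is functionally complete.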
arxiv-3346 | 0804.2097 | Optimal Mechanism Design and Money Burning | <|reference_start|>Optimal Mechanism Design and Money Burning: Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality--routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such "payments" can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus--the total value of the chosen outcome minus the payments required.<|reference_end|> | arxiv | @article{hartline2008optimal,
title={Optimal Mechanism Design and Money Burning},
author={Jason D. Hartline and Tim Roughgarden},
journal={arXiv preprint arXiv:0804.2097},
year={2008},
archivePrefix={arXiv},
eprint={0804.2097},
primaryClass={cs.GT cs.DS}
} | hartline2008optimal |
arxiv-3347 | 0804.2112 | Truthful Unsplittable Flow for Large Capacity Networks | <|reference_start|>Truthful Unsplittable Flow for Large Capacity Networks: In this paper, we focus our attention on the large capacities unsplittable flow problem in a game theoretic setting. In this setting, there are selfish agents, which control some of the requests characteristics, and may be dishonest about them. It is worth noting that in game theoretic settings many standard techniques, such as randomized rounding, violate certain monotonicity properties, which are imperative for truthfulness, and therefore cannot be employed. In light of this state of affairs, we design a monotone deterministic algorithm, which is based on a primal-dual machinery, which attains an approximation ratio of $\frac{e}{e-1}$, up to a disparity of $\epsilon$ away. This implies an improvement on the current best truthful mechanism, as well as an improvement on the current best combinatorial algorithm for the problem under consideration. Surprisingly, we demonstrate that any algorithm in the family of reasonable iterative path minimizing algorithms, cannot yield a better approximation ratio. Consequently, it follows that in order to achieve a monotone PTAS, if exists, one would have to exert different techniques. We also consider the large capacities single-minded multi-unit combinatorial auction problem. This problem is closely related to the unsplittable flow problem since one can formulate it as a special case of the integer linear program of the unsplittable flow problem. Accordingly, we obtain a comparable performance guarantee by refining the algorithm suggested for the unsplittable flow problem.<|reference_end|> | arxiv | @article{azar2008truthful,
title={Truthful Unsplittable Flow for Large Capacity Networks},
author={Yossi Azar and Iftah Gamzu and Shai Gutner},
journal={Proc. of 19th SPAA (2007), 320-329},
year={2008},
archivePrefix={arXiv},
eprint={0804.2112},
primaryClass={cs.DS cs.GT}
} | azar2008truthful |
arxiv-3348 | 0804.2138 | A constructive proof of the existence of Viterbi processes | <|reference_start|>A constructive proof of the existence of Viterbi processes: Since the early days of digital communication, hidden Markov models (HMMs) have now been also routinely used in speech recognition, processing of natural languages, images, and in bioinformatics. In an HMM $(X_i,Y_i)_{i\ge 1}$, observations $X_1,X_2,...$ are assumed to be conditionally independent given an ``explanatory'' Markov process $Y_1,Y_2,...$, which itself is not observed; moreover, the conditional distribution of $X_i$ depends solely on $Y_i$. Central to the theory and applications of HMM is the Viterbi algorithm to find {\em a maximum a posteriori} (MAP) estimate $q_{1:n}=(q_1,q_2,...,q_n)$ of $Y_{1:n}$ given observed data $x_{1:n}$. Maximum {\em a posteriori} paths are also known as Viterbi paths or alignments. Recently, attempts have been made to study the behavior of Viterbi alignments when $n\to \infty$. Thus, it has been shown that in some special cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions and involved proofs which are existential. This work proves the existence of infinite Viterbi alignments in a more constructive manner and for a very general class of HMMs.<|reference_end|> | arxiv | @article{lember2008a,
title={A constructive proof of the existence of Viterbi processes},
author={J. Lember and A. Koloydenko},
journal={IEEE Transactions on Information Theory, volume 56, issue 4, 2010,
pages 2017 - 2033},
year={2008},
doi={10.1109/TIT.2010.2040897},
archivePrefix={arXiv},
eprint={0804.2138},
primaryClass={math.ST cs.IT math.IT math.PR stat.CO stat.ML stat.TH}
} | lember2008a |
arxiv-3349 | 0804.2155 | From Qualitative to Quantitative Proofs of Security Properties Using First-Order Conditional Logic | <|reference_start|>From Qualitative to Quantitative Proofs of Security Properties Using First-Order Conditional Logic: A first-order conditional logic is considered, with semantics given by a variant of epsilon-semantics, where p -> q means that Pr(q | p) approaches 1 super-polynomially --faster than any inverse polynomial. This type of convergence is needed for reasoning about security protocols. A complete axiomatization is provided for this semantics, and it is shown how a qualitative proof of the correctness of a security protocol can be automatically converted to a quantitative proof appropriate for reasoning about concrete security.<|reference_end|> | arxiv | @article{halpern2008from,
title={From Qualitative to Quantitative Proofs of Security Properties Using
First-Order Conditional Logic},
author={Joseph Y. Halpern},
journal={arXiv preprint arXiv:0804.2155},
year={2008},
archivePrefix={arXiv},
eprint={0804.2155},
primaryClass={cs.CR cs.AI cs.LO}
} | halpern2008from |
arxiv-3350 | 0804.2181 | Products of Ordinary Differential Operators by Evaluation and Interpolation | <|reference_start|>Products of Ordinary Differential Operators by Evaluation and Interpolation: It is known that multiplication of linear differential operators over ground fields of characteristic zero can be reduced to a constant number of matrix products. We give a new algorithm by evaluation and interpolation which is faster than the previously-known one by a constant factor, and prove that in characteristic zero, multiplication of differential operators and of matrices are computationally equivalent problems. In positive characteristic, we show that differential operators can be multiplied in nearly optimal time. Theoretical results are validated by intensive experiments.<|reference_end|> | arxiv | @article{bostan2008products,
title={Products of Ordinary Differential Operators by Evaluation and
Interpolation},
author={Alin Bostan (INRIA Rocquencourt) and Fr\'ed\'eric Chyzak (INRIA
Rocquencourt) and Nicolas Le Roux (INRIA Rocquencourt)},
journal={Dans ISSAC'08 (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0804.2181},
primaryClass={cs.SC}
} | bostan2008products |
arxiv-3351 | 0804.2189 | Impact of Spatial Correlation on the Finite-SNR Diversity-Multiplexing Tradeoff | <|reference_start|>Impact of Spatial Correlation on the Finite-SNR Diversity-Multiplexing Tradeoff: The impact of spatial correlation on the performance limits of multielement antenna (MEA) channels is analyzed in terms of the diversity-multiplexing tradeoff (DMT) at finite signal-to-noise ratio (SNR) values. A lower bound on the outage probability is first derived. Using this bound accurate finite-SNR estimate of the DMT is then derived. This estimate allows to gain insight on the impact of spatial correlation on the DMT at finite SNR. As expected, the DMT is severely degraded as the spatial correlation increases. Moreover, using asymptotic analysis, we show that our framework encompasses well-known results concerning the asymptotic behavior of the DMT.<|reference_end|> | arxiv | @article{rezki2008impact,
title={Impact of Spatial Correlation on the Finite-SNR Diversity-Multiplexing
Tradeoff},
author={Z. Rezki and David Haccoun and Fran\c{c}ois Gagnon and Wessam Ajib},
journal={arXiv preprint arXiv:0804.2189},
year={2008},
doi={10.1109/TWC.2008.060984},
archivePrefix={arXiv},
eprint={0804.2189},
primaryClass={cs.IT math.IT}
} | rezki2008impact |
arxiv-3352 | 0804.2191 | Push & Pull: autonomous deployment of mobile sensors for a complete coverage | <|reference_start|>Push & Pull: autonomous deployment of mobile sensors for a complete coverage: Mobile sensor networks are important for several strategic applications devoted to monitoring critical areas. In such hostile scenarios, sensors cannot be deployed manually and are either sent from a safe location or dropped from an aircraft. Mobile devices permit a dynamic deployment reconfiguration that improves the coverage in terms of completeness and uniformity. In this paper we propose a distributed algorithm for the autonomous deployment of mobile sensors called Push&Pull. According to our proposal, movement decisions are made by each sensor on the basis of locally available information and do not require any prior knowledge of the operating conditions or any manual tuning of key parameters. We formally prove that, when a sufficient number of sensors are available, our approach guarantees a complete and uniform coverage. Furthermore, we demonstrate that the algorithm execution always terminates preventing movement oscillations. Numerous simulations show that our algorithm reaches a complete coverage within reasonable time with moderate energy consumption, even when the target area has irregular shapes. Performance comparisons between Push&Pull and one of the most acknowledged algorithms show how the former one can efficiently reach a more uniform and complete coverage under a wide range of working scenarios.<|reference_end|> | arxiv | @article{bartolini2008push,
title={Push & Pull: autonomous deployment of mobile sensors for a complete
coverage},
author={N. Bartolini and T. Calamoneri and E. G. Fusco and A. Massini and S. Silvestri},
journal={arXiv preprint arXiv:0804.2191},
year={2008},
archivePrefix={arXiv},
eprint={0804.2191},
primaryClass={cs.NI cs.DC}
} | bartolini2008push |
arxiv-3353 | 0804.2249 | The Secrecy Graph and Some of its Properties | <|reference_start|>The Secrecy Graph and Some of its Properties: A new random geometric graph model, the so-called secrecy graph, is introduced and studied. The graph represents a wireless network and includes only edges over which secure communication in the presence of eavesdroppers is possible. The underlying point process models considered are lattices and Poisson point processes. In the lattice case, analogies to standard bond and site percolation can be exploited to determine percolation thresholds. In the Poisson case, the node degrees are determined and percolation is studied using analytical bounds and simulations. It turns out that a small density of eavesdroppers already has a drastic impact on the connectivity of the secrecy graph.<|reference_end|> | arxiv | @article{haenggi2008the,
title={The Secrecy Graph and Some of its Properties},
author={Martin Haenggi},
journal={arXiv preprint arXiv:0804.2249},
year={2008},
doi={10.1109/ISIT.2008.4595044},
archivePrefix={arXiv},
eprint={0804.2249},
primaryClass={cs.IT cs.DM math.IT math.PR}
} | haenggi2008the |
arxiv-3354 | 0804.2273 | Object Re-Use & Exchange: A Resource-Centric Approach | <|reference_start|>Object Re-Use & Exchange: A Resource-Centric Approach: The OAI Object Reuse and Exchange (OAI-ORE) framework recasts the repository-centric notion of digital object to a bounded aggregation of Web resources. In this manner, digital library content is more integrated with the Web architecture, and thereby more accessible to Web applications and clients. This generalized notion of an aggregation that is independent of repository containment conforms more closely with notions in eScience and eScholarship, where content is distributed across multiple services and databases. We provide a motivation for the OAI-ORE project, review previous interoperability efforts, describe draft ORE specifications and report on promising results from early experimentation that illustrate improved interoperability and reuse of digital objects.<|reference_end|> | arxiv | @article{lagoze2008object,
title={Object Re-Use & Exchange: A Resource-Centric Approach},
author={Carl Lagoze and Herbert Van de Sompel and Michael L. Nelson and
Simeon Warner and Robert Sanderson and Pete Johnston},
journal={arXiv preprint arXiv:0804.2273},
year={2008},
archivePrefix={arXiv},
eprint={0804.2273},
primaryClass={cs.DL cs.NI}
} | lagoze2008object |
arxiv-3355 | 0804.2288 | Parimutuel Betting on Permutations | <|reference_start|>Parimutuel Betting on Permutations: We focus on a permutation betting market under parimutuel call auction model where traders bet on the final ranking of n candidates. We present a Proportional Betting mechanism for this market. Our mechanism allows the traders to bet on any subset of the n x n 'candidate-rank' pairs, and rewards them proportionally to the number of pairs that appear in the final outcome. We show that market organizer's decision problem for this mechanism can be formulated as a convex program of polynomial size. More importantly, the formulation yields a set of n x n unique marginal prices that are sufficient to price the bets in this mechanism, and are computable in polynomial-time. The marginal prices reflect the traders' beliefs about the marginal distributions over outcomes. We also propose techniques to compute the joint distribution over n! permutations from these marginal distributions. We show that using a maximum entropy criterion, we can obtain a concise parametric form (with only n x n parameters) for the joint distribution which is defined over an exponentially large state space. We then present an approximation algorithm for computing the parameters of this distribution. In fact, the algorithm addresses the generic problem of finding the maximum entropy distribution over permutations that has a given mean, and may be of independent interest.<|reference_end|> | arxiv | @article{agrawal2008parimutuel,
title={Parimutuel Betting on Permutations},
author={Shipra Agrawal and Zizhuo Wang and Yinyu Ye},
journal={arXiv preprint arXiv:0804.2288},
year={2008},
archivePrefix={arXiv},
eprint={0804.2288},
primaryClass={cs.GT cs.CC cs.DS cs.MA}
} | agrawal2008parimutuel |
arxiv-3356 | 0804.2337 | Power Series Composition and Change of Basis | <|reference_start|>Power Series Composition and Change of Basis: Efficient algorithms are known for many operations on truncated power series (multiplication, powering, exponential, ...). Composition is a more complex task. We isolate a large class of power series for which composition can be performed efficiently. We deduce fast algorithms for converting polynomials between various bases, including Euler, Bernoulli, Fibonacci, and the orthogonal Laguerre, Hermite, Jacobi, Krawtchouk, Meixner and Meixner-Pollaczek.<|reference_end|> | arxiv | @article{bostan2008power,
title={Power Series Composition and Change of Basis},
author={Alin Bostan (INRIA Rocquencourt) and Bruno Salvy (INRIA
Rocquencourt) and \'Eric Schost},
journal={arXiv preprint arXiv:0804.2337},
year={2008},
doi={10.1145/1390768.1390806},
archivePrefix={arXiv},
eprint={0804.2337},
primaryClass={cs.SC}
} | bostan2008power |
arxiv-3357 | 0804.2343 | Random sampling of colourings of sparse random graphs with a constant number of colours | <|reference_start|>Random sampling of colourings of sparse random graphs with a constant number of colours: In this work we present a simple and efficient algorithm which, with high probability, provides an almost uniform sample from the set of proper k-colourings on an instance of a sparse random graph G(n,d/n), where k=k(d) is a sufficiently large constant. Our algorithm is not based on the Markov Chain Monte Carlo method (M.C.M.C.). Instead, we provide a novel proof of correctness of our Algorithm that is based on interesting "spatial mixing" properties of colourings of G(n,d/n). Our result improves upon previous results (based on M.C.M.C.) that required a number of colours growing unboundedly with n.<|reference_end|> | arxiv | @article{efthymiou2008random,
title={Random sampling of colourings of sparse random graphs with a constant
number of colours},
author={Charilaos Efthymiou (1,2) and Paul G. Spirakis (1,2) ((1) Research
Academic Computer Technology Institute (2) Computer Engineering and
Informatics Department of the University of Patras, Greece)},
journal={arXiv preprint arXiv:0804.2343},
year={2008},
doi={10.1016/j.tcs.2008.05.008},
archivePrefix={arXiv},
eprint={0804.2343},
primaryClass={cs.DM}
} | efthymiou2008random |
arxiv-3358 | 0804.2346 | Theory and Applications of Two-dimensional, Null-boundary, Nine-Neighborhood, Cellular Automata Linear rules | <|reference_start|>Theory and Applications of Two-dimensional, Null-boundary, Nine-Neighborhood, Cellular Automata Linear rules: This paper deals with the theory and application of 2-Dimensional, nine-neighborhood, null-boundary, uniform as well as hybrid Cellular Automata (2D CA) linear rules in image processing. These rules are classified into nine groups depending upon the number of neighboring cells that influence the cell under consideration. All the Uniform rules have been found to be rendering multiple copies of a given image depending on the groups to which they belong, whereas Hybrid rules are also shown to be characterizing the phenomena of zooming in, zooming out, thickening and thinning of a given image. Further, using hybrid CA rules a new searching algorithm is developed called Sweepers algorithm, which is found to be applicable to simulate many interdisciplinary research areas like migration of organisms towards a single point destination, Single Attractor and Multiple Attractor Cellular Automata Theory, Pattern Classification and Clustering Problem, Image compression, Encryption and Decryption problems, Density Classification problem etc.<|reference_end|> | arxiv | @article{choudhury2008theory,
title={Theory and Applications of Two-dimensional, Null-boundary,
Nine-Neighborhood, Cellular Automata Linear rules},
author={Pabitra Pal Choudhury and Birendra Kumar Nayak and Sudhakar Sahoo
and Sunil Pankaj Rath},
journal={arXiv preprint arXiv:0804.2346},
year={2008},
number={Tech.Report No. ASD/2005/4, 13 May 2005},
archivePrefix={arXiv},
eprint={0804.2346},
primaryClass={cs.DM cs.CC cs.CV}
} | choudhury2008theory |
arxiv-3359 | 0804.2349 | Secure Remote Voting Using Paper Ballots | <|reference_start|>Secure Remote Voting Using Paper Ballots: Internet voting will probably be one of the most significant achievements of the future information society. It will have an enormous impact on the election process making it fast, reliable and inexpensive. Nonetheless, so far remote voting is considered to be very difficult, as one has to take into account susceptibility of the voter's PC to various cyber-attacks. As a result, most of the research effort is put into developing protocols and machines for poll-site electronic voting. Although these solutions yield promising results, they cannot be directly adopted to Internet voting because of the secure platform problem. However, the cryptographic components they utilize may be very useful. This paper presents a scheme based on a combination of mixnets and homomorphic encryption borrowed from robust poll-site voting, along with techniques recommended for remote voting -- code sheets and test ballots. The protocol tries to minimize the trust put in the voter's PC by making the voter responsible for manual encryption of his vote. To achieve this, the voter obtains a paper ballot that allows him to scramble the vote by performing simple operations (lookup in a table). Creation of paper ballots, as well as decryption of votes, is performed by a group of cooperating trusted servers. As a result, the scheme is characterized by strong asymmetry -- all computations are carried out on the server side. In consequence it does not require any additional hardware on the voter's side, and offers distributed trust, receipt-freeness and verifiability.<|reference_end|> | arxiv | @article{nitschke2008secure,
title={Secure Remote Voting Using Paper Ballots},
author={Lukasz Nitschke},
journal={arXiv preprint arXiv:0804.2349},
year={2008},
archivePrefix={arXiv},
eprint={0804.2349},
primaryClass={cs.CR}
} | nitschke2008secure |
arxiv-3360 | 0804.2354 | Information filtering based on wiki index database | <|reference_start|>Information filtering based on wiki index database: In this paper we present a profile-based approach to information filtering by an analysis of the content of text documents. The Wikipedia index database is created and used to automatically generate the user profile from the user document collection. The problem-oriented Wikipedia subcorpora are created (using knowledge extracted from the user profile) for each topic of user interests. The index databases of these subcorpora are applied to filtering information flow (e.g., mails, news). Thus, the analyzed texts are classified into several topics explicitly presented in the user profile. The paper concentrates on the indexing part of the approach. The architecture of an application implementing the Wikipedia indexing is described. The indexing method is evaluated using the Russian and Simple English Wikipedia.<|reference_end|> | arxiv | @article{smirnov2008information,
title={Information filtering based on wiki index database},
author={A. V. Smirnov, A. A. Krizhanovsky},
journal={arXiv preprint arXiv:0804.2354},
year={2008},
archivePrefix={arXiv},
eprint={0804.2354},
primaryClass={cs.IR cs.CL}
} | smirnov2008information |
arxiv-3361 | 0804.2373 | Fast Conversion Algorithms for Orthogonal Polynomials | <|reference_start|>Fast Conversion Algorithms for Orthogonal Polynomials: We discuss efficient conversion algorithms for orthogonal polynomials. We describe a known conversion algorithm from an arbitrary orthogonal basis to the monomial basis, and deduce a new algorithm of the same complexity for the converse operation.<|reference_end|> | arxiv | @article{bostan2008fast,
title={Fast Conversion Algorithms for Orthogonal Polynomials},
author={Alin Bostan (INRIA Rocquencourt) and Bruno Salvy (INRIA
Rocquencourt) and \'Eric Schost},
journal={arXiv preprint arXiv:0804.2373},
year={2008},
doi={10.1016/j.laa.2009.08.002},
archivePrefix={arXiv},
eprint={0804.2373},
primaryClass={cs.SC}
} | bostan2008fast |
arxiv-3362 | 0804.2401 | Causal models have no complete axiomatic characterization | <|reference_start|>Causal models have no complete axiomatic characterization: Markov networks and Bayesian networks are effective graphic representations of the dependencies embedded in probabilistic models. It is well known that independencies captured by Markov networks (called graph-isomorphs) have a finite axiomatic characterization. This paper, however, shows that independencies captured by Bayesian networks (called causal models) have no axiomatization by using even countably many Horn or disjunctive clauses. This is because a sub-independency model of a causal model may be not causal, while graph-isomorphs are closed under sub-models.<|reference_end|> | arxiv | @article{li2008causal,
title={Causal models have no complete axiomatic characterization},
author={Sanjiang Li},
journal={arXiv preprint arXiv:0804.2401},
year={2008},
archivePrefix={arXiv},
eprint={0804.2401},
primaryClass={cs.AI cs.LO}
} | li2008causal |
arxiv-3363 | 0804.2429 | Universal Quantum Circuits | <|reference_start|>Universal Quantum Circuits: We define and construct efficient depth-universal and almost-size-universal quantum circuits. Such circuits can be viewed as general-purpose simulators for central classes of quantum circuits and can be used to capture the computational power of the circuit class being simulated. For depth we construct universal circuits whose depth is the same order as the circuits being simulated. For size, there is a log factor blow-up in the universal circuits constructed here. We prove that this construction is nearly optimal.<|reference_end|> | arxiv | @article{bera2008universal,
title={Universal Quantum Circuits},
author={Debajyoti Bera and Stephen Fenner and Frederic Green and Steve Homer},
journal={arXiv preprint arXiv:0804.2429},
year={2008},
archivePrefix={arXiv},
eprint={0804.2429},
primaryClass={cs.CC quant-ph}
} | bera2008universal |
arxiv-3364 | 0804.2435 | On the Expressiveness and Complexity of ATL | <|reference_start|>On the Expressiveness and Complexity of ATL: ATL is a temporal logic geared towards the specification and verification of properties in multi-agent systems. It allows to reason on the existence of strategies for coalitions of agents in order to enforce a given property. In this paper, we first precisely characterize the complexity of ATL model-checking over Alternating Transition Systems and Concurrent Game Structures when the number of agents is not fixed. We prove that it is \Delta^P_2- and \Delta^P_3-complete, depending on the underlying multi-agent model (ATS and CGS resp.). We also consider the same problems for some extensions of ATL. We then consider expressiveness issues. We show how ATS and CGS are related and provide translations between these models w.r.t. alternating bisimulation. We also prove that the standard definition of ATL (built on modalities "Next", "Always" and "Until") cannot express the duals of its modalities: it is necessary to explicitly add the modality "Release".<|reference_end|> | arxiv | @article{laroussinie2008on,
title={On the Expressiveness and Complexity of ATL},
author={Francois Laroussinie and Nicolas Markey and Ghassan Oreiby},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (May 15,
2008) lmcs:826},
year={2008},
doi={10.2168/LMCS-4(2:7)2008},
archivePrefix={arXiv},
eprint={0804.2435},
primaryClass={cs.LO cs.GT cs.MA}
} | laroussinie2008on |
arxiv-3365 | 0804.2469 | On analytic properties of entropy rate | <|reference_start|>On analytic properties of entropy rate: Entropy rate is a real valued functional on the space of discrete random sources which lacks a closed formula even for subclasses of sources which have intuitive parameterizations. A good way to overcome this problem is to examine its analytic properties relative to some reasonable topology. A canonical choice of a topology is that of the norm of total variation as it immediately arises with the idea of a discrete random source as a probability measure on sequence space. It is shown that entropy rate is Lipschitzian relative to this topology, which, by well known facts, is close to differentiability. An application of this theorem leads to a simple and elementary proof of the existence of entropy rate of random sources with finite evolution dimension. This class of sources encompasses arbitrary hidden Markov sources and quantum random walks.<|reference_end|> | arxiv | @article{schönhuth2008on,
title={On analytic properties of entropy rate},
author={Alexander Sch\"onhuth},
journal={IEEE Transactions on Information Theory, 55(5), 2119-2127, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0804.2469},
primaryClass={cs.IT math.IT}
} | schönhuth2008on |
arxiv-3366 | 0804.2473 | A Design Framework for Limited Feedback MIMO Systems with Zero-Forcing DFE | <|reference_start|>A Design Framework for Limited Feedback MIMO Systems with Zero-Forcing DFE: We consider the design of multiple-input multiple-output communication systems with a linear precoder at the transmitter, zero-forcing decision feedback equalization (ZF-DFE) at the receiver, and a low-rate feedback channel that enables communication from the receiver to the transmitter. The channel state information (CSI) available at the receiver is assumed to be perfect, and based on this information the receiver selects a suitable precoder from a codebook and feeds back the index of this precoder to the transmitter. Our approach to the design of the components of this limited feedback scheme is based on the development, herein, of a unified framework for the joint design of the precoder and the ZF-DFE under the assumption that perfect CSI is available at both the transmitter and the receiver. The framework is general and embraces a wide range of design criteria. This framework enables us to characterize the statistical distribution of the optimal precoder in a standard Rayleigh fading environment. Using this distribution, we show that codebooks constructed from Grassmann packings minimize an upper bound on an average distortion measure, and hence are natural candidates for the codebook in limited feedback systems. We also show that for any given codebook the performance of the proposed limited feedback schemes is an upper bound on the corresponding schemes with linear zero-forcing receivers. Our simulation studies show that the proposed limited feedback scheme can provide significantly better performance at a lower feedback rate than existing schemes in which the detection order is fed back to the transmitter.<|reference_end|> | arxiv | @article{shenouda2008a,
title={A Design Framework for Limited Feedback MIMO Systems with Zero-Forcing
DFE},
author={Michael Botros Shenouda and Timothy Davidson},
journal={arXiv preprint arXiv:0804.2473},
year={2008},
archivePrefix={arXiv},
eprint={0804.2473},
primaryClass={cs.IT math.IT}
} | shenouda2008a |
arxiv-3367 | 0804.2487 | The ergodic decomposition of asymptotically mean stationary random sources | <|reference_start|>The ergodic decomposition of asymptotically mean stationary random sources: It is demonstrated how to represent asymptotically mean stationary (AMS) random sources with values in standard spaces as mixtures of ergodic AMS sources. This an extension of the well known decomposition of stationary sources which has facilitated the generalization of prominent source coding theorems to arbitrary, not necessarily ergodic, stationary sources. Asymptotic mean stationarity generalizes the definition of stationarity and covers a much larger variety of real-world examples of random sources of practical interest. It is sketched how to obtain source coding and related theorems for arbitrary, not necessarily ergodic, AMS sources, based on the presented ergodic decomposition.<|reference_end|> | arxiv | @article{schoenhuth2008the,
title={The ergodic decomposition of asymptotically mean stationary random
sources},
author={Alexander Schoenhuth},
journal={arXiv preprint arXiv:0804.2487},
year={2008},
archivePrefix={arXiv},
eprint={0804.2487},
primaryClass={cs.IT math.IT math.PR}
} | schoenhuth2008the |
arxiv-3368 | 0804.2535 | Short proofs of strong normalization | <|reference_start|>Short proofs of strong normalization: This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives, and all the permutative reductions. The normalization proofs use translations of terms and types to systems, for which strong normalization property is known.<|reference_end|> | arxiv | @article{wojdyga2008short,
title={Short proofs of strong normalization},
author={Aleksander Wojdyga},
journal={arXiv preprint arXiv:0804.2535},
year={2008},
archivePrefix={arXiv},
eprint={0804.2535},
primaryClass={cs.LO}
} | wojdyga2008short |
arxiv-3369 | 0804.2576 | Interlace Polynomials: Enumeration, Unimodality, and Connections to Codes | <|reference_start|>Interlace Polynomials: Enumeration, Unimodality, and Connections to Codes: The interlace polynomial q was introduced by Arratia, Bollobas, and Sorkin. It encodes many properties of the orbit of a graph under edge local complementation (ELC). The interlace polynomial Q, introduced by Aigner and van der Holst, similarly contains information about the orbit of a graph under local complementation (LC). We have previously classified LC and ELC orbits, and now give an enumeration of the corresponding interlace polynomials of all graphs of order up to 12. An enumeration of all circle graphs of order up to 12 is also given. We show that there exist graphs of all orders greater than 9 with interlace polynomials q whose coefficient sequences are non-unimodal, thereby disproving a conjecture by Arratia et al. We have verified that for graphs of order up to 12, all polynomials Q have unimodal coefficients. It has been shown that LC and ELC orbits of graphs correspond to equivalence classes of certain error-correcting codes and quantum states. We show that the properties of these codes and quantum states are related to properties of the associated interlace polynomials.<|reference_end|> | arxiv | @article{danielsen2008interlace,
title={Interlace Polynomials: Enumeration, Unimodality, and Connections to
Codes},
author={Lars Eirik Danielsen and Matthew G. Parker},
journal={Discrete Appl. Math. 158(6), pp. 636-648, 2010.},
year={2008},
doi={10.1016/j.dam.2009.11.011},
archivePrefix={arXiv},
eprint={0804.2576},
primaryClass={math.CO cs.IT math.IT}
} | danielsen2008interlace |
arxiv-3370 | 0804.2614 | Augmenting Actual Life Through MUVEs | <|reference_start|>Augmenting Actual Life Through MUVEs: The necessity of supporting more and more social interaction (and not only the mere information sharing) in online environments is the disruptive force upon which phenomena ascribed to the Web2.0 paradigm continuously bud. People interacting in online socio-technical environments mould technology on their needs, seamlessly integrating it into their everyday life. MUVEs (Multi User Virtual Environments) are no exception and, in several cases, represent the new frontier in this field. In this work we analyze if and how MUVEs can be considered a mean for augmenting communities (and more in general people) life. We trace a framework of analysis based on four main observations, and through these lenses we look at Second Life and at several projects we are currently developing in that synthetic world.<|reference_end|> | arxiv | @article{ripamonti2008augmenting,
title={Augmenting Actual Life Through MUVEs},
author={Laura Anna Ripamonti and Ines Di Loreto and Dario Maggiorini},
journal={arXiv preprint arXiv:0804.2614},
year={2008},
archivePrefix={arXiv},
eprint={0804.2614},
primaryClass={cs.HC cs.CY}
} | ripamonti2008augmenting |
arxiv-3371 | 0804.2621 | Design and Implementation of a Master of Science in Information and Computer Sciences - An Inventory and retrospect for the last four years | <|reference_start|>Design and Implementation of a Master of Science in Information and Computer Sciences - An Inventory and retrospect for the last four years: This Master of Science in Computer and Information Sciences (MICS) is an internationally accredited master program that has been initiated in 2004 and started in September 2005. MICS is a research-oriented academic study of 4 semesters and a continuation of the Bachelor towards the PhD. It is completely taught in English, supported by lecturers coming from more than ten different countries. This report encompasses a description of its underlying architecture, describes some implementation details and gives a presentation of diverse experiences and results. As the program has been designed and implemented right after the creation of the University, the significance of the program is moreover a self-discovery of the computer science department, which has finally led to the creation of today's research institutes and research axes.<|reference_end|> | arxiv | @article{schommer2008design,
title={Design and Implementation of a Master of Science in Information and
Computer Sciences - An Inventory and retrospect for the last four years},
author={Christoph Schommer},
journal={arXiv preprint arXiv:0804.2621},
year={2008},
archivePrefix={arXiv},
eprint={0804.2621},
primaryClass={cs.GL}
} | schommer2008design |
arxiv-3372 | 0804.2642 | p-Symmetric fuzzy measures | <|reference_start|>p-Symmetric fuzzy measures: In this paper we propose a generalization of the concept of symmetric fuzzy measure based on a decomposition of the universal set into what we have called subsets of indifference. Some properties of these measures are studied, as well as their Choquet integral. Finally, a degree of interaction between the subsets of indifference is defined.<|reference_end|> | arxiv | @article{miranda2008p-symmetric,
title={p-Symmetric fuzzy measures},
author={Pedro Miranda and Michel Grabisch (LIP6) and Pedro Gil},
journal={International Journal of Uncertainty Fuzziness and Knowledge-Based
Systems (2002) 105-123},
year={2008},
archivePrefix={arXiv},
eprint={0804.2642},
primaryClass={cs.DM}
} | miranda2008p-symmetric |
arxiv-3373 | 0804.2699 | A Critique of a Polynomial-time SAT Solver Devised by Sergey Gubin | <|reference_start|>A Critique of a Polynomial-time SAT Solver Devised by Sergey Gubin: This paper refutes the validity of the polynomial-time algorithm for solving satisfiability proposed by Sergey Gubin. Gubin introduces the algorithm using 3-SAT and eventually expands it to accept a broad range of forms of the Boolean satisfiability problem. Because 3-SAT is NP-complete, the algorithm would have implied P = NP, had it been correct. Additionally, this paper refutes the correctness of his polynomial-time reduction of SAT to 2-SAT.<|reference_end|> | arxiv | @article{christopher2008a,
title={A Critique of a Polynomial-time SAT Solver Devised by Sergey Gubin},
author={Ian Christopher, Dennis Huo, and Bryan Jacobs},
journal={arXiv preprint arXiv:0804.2699},
year={2008},
archivePrefix={arXiv},
eprint={0804.2699},
primaryClass={cs.CC cs.DS}
} | christopher2008a |
arxiv-3374 | 0804.2701 | Information Resources in High-Energy Physics: Surveying the Present Landscape and Charting the Future Course | <|reference_start|>Information Resources in High-Energy Physics: Surveying the Present Landscape and Charting the Future Course: Access to previous results is of paramount importance in the scientific process. Recent progress in information management focuses on building e-infrastructures for the optimization of the research workflow, through both policy-driven and user-pulled dynamics. For decades, High-Energy Physics (HEP) has pioneered innovative solutions in the field of information management and dissemination. In light of a transforming information environment, it is important to assess the current usage of information resources by researchers and HEP provides a unique test-bed for this assessment. A survey of about 10% of practitioners in the field reveals usage trends and information needs. Community-based services, such as the pioneering arXiv and SPIRES systems, largely answer the need of the scientists, with a limited but increasing fraction of younger users relying on Google. Commercial services offered by publishers or database vendors are essentially unused in the field. The survey offers an insight into the most important features that users require to optimize their research workflow. These results inform the future evolution of information management in HEP and, as these researchers are traditionally ``early adopters'' of innovation in scholarly communication, can inspire developments of disciplinary repositories serving other communities.<|reference_end|> | arxiv | @article{gentil-beccot2008information,
title={Information Resources in High-Energy Physics: Surveying the Present
Landscape and Charting the Future Course},
author={Anne Gentil-Beccot and Salvatore Mele and Annette Holtkamp and Heath B.
O'Connell and Travis C. Brooks},
journal={J. Am. Soc. Inf. Sci. 60:150-160, 2009},
year={2008},
doi={10.1002/asi.20944},
number={CERN-OPEN-2008-010,DESY-08-040,FERMILAB-PUB-08-077-BSS,SLAC-PUB-13199},
archivePrefix={arXiv},
eprint={0804.2701},
primaryClass={cs.DL}
} | gentil-beccot2008information |
arxiv-3375 | 0804.2729 | Generalized Modal Satisfiability | <|reference_start|>Generalized Modal Satisfiability: It is well known that modal satisfiability is PSPACE-complete (Ladner 1977). However, the complexity may decrease if we restrict the set of propositional operators used. Note that there exist an infinite number of propositional operators, since a propositional operator is simply a Boolean function. We completely classify the complexity of modal satisfiability for every finite set of propositional operators, i.e., in contrast to previous work, we classify an infinite number of problems. We show that, depending on the set of propositional operators, modal satisfiability is PSPACE-complete, coNP-complete, or in P. We obtain this trichotomy not only for modal formulas, but also for their more succinct representation using modal circuits. We consider both the uni-modal and the multi-modal case, and study the dual problem of validity as well.<|reference_end|> | arxiv | @article{hemaspaandra2008generalized,
title={Generalized Modal Satisfiability},
author={Edith Hemaspaandra and Henning Schnoor and Ilka Schnoor},
journal={arXiv preprint arXiv:0804.2729},
year={2008},
archivePrefix={arXiv},
eprint={0804.2729},
primaryClass={cs.CC cs.LO}
} | hemaspaandra2008generalized |
arxiv-3376 | 0804.2808 | Robust Precoder for Multiuser MISO Downlink with SINR Constraints | <|reference_start|>Robust Precoder for Multiuser MISO Downlink with SINR Constraints: In this paper, we consider linear precoding with SINR constraints for the downlink of a multiuser MISO (multiple-input single-output) communication system in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas and each user terminal is equipped with a single receive antenna. We propose a robust design of linear precoder which transmits minimum power to provide the required SINR at the user terminals when the true channel state lies in a region of a given size around the channel state available at the transmitter. We show that this design problem can be formulated as a Second Order Cone Program (SOCP) which can be solved efficiently. We compare the performance of the proposed design with some of the robust designs reported in the literature. Simulation results show that the proposed robust design provides better performance with reduced complexity.<|reference_end|> | arxiv | @article{ubaidulla2008robust,
title={Robust Precoder for Multiuser MISO Downlink with SINR Constraints},
author={P. Ubaidulla and A. Chockalingam},
journal={arXiv preprint arXiv:0804.2808},
year={2008},
archivePrefix={arXiv},
eprint={0804.2808},
primaryClass={cs.IT math.IT}
} | ubaidulla2008robust |
arxiv-3377 | 0804.2819 | Bipolarization of posets and natural interpolation | <|reference_start|>Bipolarization of posets and natural interpolation: The Choquet integral w.r.t. a capacity can be seen in the finite case as a parsimonious linear interpolator between vertices of $[0,1]^n$. We take this basic fact as a starting point to define the Choquet integral in a very general way, using the geometric realization of lattices and their natural triangulation, as in the work of Koshevoy. A second aim of the paper is to define a general mechanism for the bipolarization of ordered structures. Bisets (or signed sets), as well as bisubmodular functions, bicapacities, bicooperative games, as well as the Choquet integral defined for them can be seen as particular instances of this scheme. Lastly, an application to multicriteria aggregation with multiple reference levels illustrates all the results presented in the paper.<|reference_end|> | arxiv | @article{grabisch2008bipolarization,
title={Bipolarization of posets and natural interpolation},
author={Michel Grabisch (CES), Christophe Labreuche (TRT)},
journal={Journal of Mathematical Analysis and Applications (2008) 1080-1097},
year={2008},
doi={10.1016/j.jmaa.2008.02.008},
archivePrefix={arXiv},
eprint={0804.2819},
primaryClass={cs.DM math.PR}
} | grabisch2008bipolarization |
arxiv-3378 | 0804.2831 | Decentralized Knowledge and Learning in Strategic Multi-user Communication | <|reference_start|>Decentralized Knowledge and Learning in Strategic Multi-user Communication: Please see the content of this report.<|reference_end|> | arxiv | @article{su2008decentralized,
title={Decentralized Knowledge and Learning in Strategic Multi-user
Communication},
author={Yi Su and Mihaela van der Schaar},
journal={arXiv preprint arXiv:0804.2831},
year={2008},
archivePrefix={arXiv},
eprint={0804.2831},
primaryClass={cs.GT cs.NI}
} | su2008decentralized |
arxiv-3379 | 0804.2844 | An Analysis of Key Factors for the Success of the Communal Management of Knowledge | <|reference_start|>An Analysis of Key Factors for the Success of the Communal Management of Knowledge: This paper explores the links between Knowledge Management and new community-based models of the organization from both a theoretical and an empirical perspective. From a theoretical standpoint, we look at Communities of Practice (CoPs) and Knowledge Management (KM) and explore the links between the two as they relate to the use of information systems to manage knowledge. We begin by reviewing technologically supported approaches to KM and introduce the idea of "Systemes d'Aide a la Gestion des Connaissances" SAGC (Systems to aid the Management of Knowledge). Following this we examine the contribution that communal structures such as CoPs can make to intraorganizational KM and highlight some of 'success factors' for this approach to KM that are found in the literature. From an empirical standpoint, we present the results of a survey involving the Chief Knowledge Officers (CKOs) of twelve large French businesses; the objective of this study was to identify the factors that might influence the success of such approaches. The survey was analysed using thematic content analysis and the results are presented here with some short illustrative quotes from the CKOs. Finally, the paper concludes with some brief reflections on what can be learnt from looking at this problem from these two perspectives.<|reference_end|> | arxiv | @article{bourdon2008an,
title={An Analysis of Key Factors for the Success of the Communal Management of
Knowledge},
author={Isabelle Bourdon and Chris Kimble},
journal={arXiv preprint arXiv:0804.2844},
year={2008},
archivePrefix={arXiv},
eprint={0804.2844},
primaryClass={cs.HC cs.AI}
} | bourdon2008an |
arxiv-3380 | 0804.2847 | The Concept of Appropriation as a Heuristic for Conceptualising the Relationship between Technology, People and Organisations | <|reference_start|>The Concept of Appropriation as a Heuristic for Conceptualising the Relationship between Technology, People and Organisations: The stated aim of this conference is to debate the continuing evolution of IS in businesses and other organisations. This paper seeks to contribute to this debate by exploring the concept of appropriation from a number of different epistemological, cultural and linguistic viewpoints to allow us to explore 'the black box' of appropriation and to gain a fuller understanding of the term. At the conceptual level, it will examine some of the different ways in which people have attempted to explain the relationship between the objective and concrete features of technology and the subjective and shifting nature of the people and organisation within which that technology is deployed. At the cultural and linguistic level the paper will examine the notion as it is found in the Francophone literature, where the term has a long and rich history, and the Anglophone literature where appropriation is seen as a rather more specialist term. The paper will conclude with some observations on the ongoing nature of the debate, the value of reading beyond the literature with which one is familiar and the rewards that come from exploring different historical (and linguistic) viewpoints.<|reference_end|> | arxiv | @article{baillette2008the,
title={The Concept of Appropriation as a Heuristic for Conceptualising the
Relationship between Technology, People and Organisations},
author={Pamela Baillette and Chris Kimble},
journal={arXiv preprint arXiv:0804.2847},
year={2008},
archivePrefix={arXiv},
eprint={0804.2847},
primaryClass={cs.CY cs.HC}
} | baillette2008the |
arxiv-3381 | 0804.2851 | Identifying 'Hidden' Communities of Practice within Electronic Networks: Some Preliminary Premises | <|reference_start|>Identifying 'Hidden' Communities of Practice within Electronic Networks: Some Preliminary Premises: This paper examines the possibility of discovering 'hidden' (potential) Communities of Practice (CoPs) inside electronic networks, and then using this knowledge to nurture them into a fully developed Virtual Community of Practice (VCoP). Starting from the standpoint of the need to manage knowledge, it discusses several questions related to this subject: the characteristics of 'hidden' communities; the relation between CoPs, Virtual Communities (VCs), Distributed Communities of Practice (DCoPs) and Virtual Communities of Practice (VCoPs); the methods used to search for 'hidden' CoPs; and the possible ways of changing 'hidden' CoPs into fully developed VCoPs. The paper also presents some preliminary findings from a semi-structured interview conducted in The Higher Education Academy Psychology Network (UK). These findings are contrasted against the theory discussed and some additional proposals are suggested at the end.<|reference_end|> | arxiv | @article{ribeiro2008identifying,
title={Identifying 'Hidden' Communities of Practice within Electronic Networks:
Some Preliminary Premises},
author={Richard Ribeiro and Chris Kimble},
journal={arXiv preprint arXiv:0804.2851},
year={2008},
archivePrefix={arXiv},
eprint={0804.2851},
primaryClass={cs.HC cs.CY}
} | ribeiro2008identifying |
arxiv-3382 | 0804.2852 | Philosophical Smoke Signals: Theory and Practice in Information Systems Design | <|reference_start|>Philosophical Smoke Signals: Theory and Practice in Information Systems Design: Although the gulf between the theory and practice in Information Systems is much lamented, few researchers have offered a way forward except through a number of (failed) attempts to develop a single systematic theory for Information Systems. In this paper, we encourage researchers to re-examine the practical consequences of their theoretical arguments. By examining these arguments we may be able to form a number of more rigorous theories of Information Systems, allowing us to draw theory and practice together without undertaking yet another attempt at the holy grail of a single unified systematic theory of Information Systems.<|reference_end|> | arxiv | @article{king2008philosophical,
title={Philosophical Smoke Signals: Theory and Practice in Information Systems
Design},
author={David King and Chris Kimble},
journal={arXiv preprint arXiv:0804.2852},
year={2008},
archivePrefix={arXiv},
eprint={0804.2852},
primaryClass={cs.SE cs.GL}
} | king2008philosophical |
arxiv-3383 | 0804.2940 | Secret Key Agreement by Soft-decision of Signals in Gaussian Maurer's Model | <|reference_start|>Secret Key Agreement by Soft-decision of Signals in Gaussian Maurer's Model: We consider the problem of secret key agreement in Gaussian Maurer's Model. In Gaussian Maurer's model, legitimate receivers, Alice and Bob, and a wire-tapper, Eve, receive signals randomly generated by a satellite through three independent memoryless Gaussian channels respectively. Then Alice and Bob generate a common secret key from their received signals. In this model, we propose a protocol for generating a common secret key by using the result of soft-decision of Alice and Bob's received signals. Then, we calculate a lower bound on the secret key rate in our proposed protocol. As a result of comparison with the protocol that only uses hard-decision, we found that the higher rate is obtained by using our protocol.<|reference_end|> | arxiv | @article{naito2008secret,
title={Secret Key Agreement by Soft-decision of Signals in Gaussian Maurer's
Model},
author={Masashi Naito and Shun Watanabe and Ryutaroh Matsumoto and Tomohiko
Uyematsu},
journal={IEICE Trans. Fundamentals, vol. 92, no. 2, pp. 525-534, February
2009},
year={2008},
doi={10.1587/transfun.E92.A.525},
archivePrefix={arXiv},
eprint={0804.2940},
primaryClass={cs.IT cs.CR math.IT}
} | naito2008secret |
arxiv-3384 | 0804.2950 | An Adaptive-Parity Error-Resilient LZ'77 Compression Algorithm | <|reference_start|>An Adaptive-Parity Error-Resilient LZ'77 Compression Algorithm: The paper proposes an improved error-resilient Lempel-Ziv'77 (LZ'77) algorithm employing an adaptive amount of parity bits for error protection. It is a modified version of error resilient algorithm LZRS'77, proposed recently, which uses a constant amount of parity over all of the encoded blocks of data. The constant amount of parity is bounded by the lowest-redundancy part of the encoded string, whereas the adaptive parity more efficiently utilizes the available redundancy of the encoded string, and can be on average much higher. The proposed algorithm thus provides better error protection of encoded data. The performance of both algorithms was measured. The comparison showed a noticeable improvement by use of adaptive parity. The proposed algorithm is capable of correcting up to a few times as many errors as the original algorithm, while the compression performance remains practically unchanged.<|reference_end|> | arxiv | @article{korosec2008an,
title={An Adaptive-Parity Error-Resilient LZ'77 Compression Algorithm},
author={Tomaz Korosec and Saso Tomazic (Faculty of Electrical Engineering,
University of Ljubljana, Ljubljana, Slovenia)},
journal={arXiv preprint arXiv:0804.2950},
year={2008},
archivePrefix={arXiv},
eprint={0804.2950},
primaryClass={cs.IT math.IT}
} | korosec2008an |
arxiv-3385 | 0804.2960 | Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio | <|reference_start|>Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio: Spectrum sensing is a fundamental component in a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to minimum eigenvalue; the other is based on the ratio of the average eigenvalue to minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and probabilities of detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring the knowledge of signal, channel and noise power. Simulations based on randomly generated signals, wireless microphone signals and captured ATSC DTV signals are presented to verify the effectiveness of the proposed methods.<|reference_end|> | arxiv | @article{zeng2008eigenvalue,
title={Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio},
author={Yonghong Zeng and Ying-Chang Liang},
journal={arXiv preprint arXiv:0804.2960},
year={2008},
archivePrefix={arXiv},
eprint={0804.2960},
primaryClass={cs.IT math.IT}
} | zeng2008eigenvalue |
arxiv-3386 | 0804.2991 | Low-Complexity LDPC Codes with Near-Optimum Performance over the BEC | <|reference_start|>Low-Complexity LDPC Codes with Near-Optimum Performance over the BEC: Recent works showed how low-density parity-check (LDPC) erasure correcting codes, under maximum likelihood (ML) decoding, are capable of tightly approaching the performance of an ideal maximum-distance-separable code on the binary erasure channel. Such result is achievable down to low error rates, even for small and moderate block sizes, while keeping the decoding complexity low, thanks to a class of decoding algorithms which exploits the sparseness of the parity-check matrix to reduce the complexity of Gaussian elimination (GE). In this paper the main concepts underlying ML decoding of LDPC codes are recalled. A performance analysis among various LDPC code classes is then carried out, including a comparison with fixed-rate Raptor codes. The results show that LDPC and Raptor codes provide almost identical performance in terms of decoding failure probability vs. overhead.<|reference_end|> | arxiv | @article{paolini2008low-complexity,
title={Low-Complexity LDPC Codes with Near-Optimum Performance over the BEC},
author={Enrico Paolini and Gianluigi Liva and Michela Varrella and Balazs Matuz
and Marco Chiani},
journal={arXiv preprint arXiv:0804.2991},
year={2008},
archivePrefix={arXiv},
eprint={0804.2991},
primaryClass={cs.IT math.IT}
} | paolini2008low-complexity |
arxiv-3387 | 0804.2992 | "E pluribus unum" or How to Derive Single-equation Descriptions for Output-quantities in Nonlinear Circuits using Differential Algebra | <|reference_start|>"E pluribus unum" or How to Derive Single-equation Descriptions for Output-quantities in Nonlinear Circuits using Differential Algebra: In this paper we describe by a number of examples how to deduce one single characterizing higher order differential equation for output quantities of an analog circuit. In the linear case, we apply basic "symbolic" methods from linear algebra to the system of differential equations which is used to model the analog circuit. For nonlinear circuits and their corresponding nonlinear differential equations, we show how to employ computer algebra tools implemented in Maple, which are based on differential algebra.<|reference_end|> | arxiv | @article{gerbracht2008"e,
title={"E pluribus unum" or How to Derive Single-equation Descriptions for
Output-quantities in Nonlinear Circuits using Differential Algebra},
author={Eberhard H.-A. Gerbracht},
journal={Proceedings of the 7th International Workshop on Symbolic Methods
and Applications to Circuit Design, SMACD 2002, Sinaia, Romania, October
10-11, 2002; pp. 65-70; ISBN 973-85072-5-1},
year={2008},
archivePrefix={arXiv},
eprint={0804.2992},
primaryClass={cs.SC math.CA}
} | gerbracht2008"e |
arxiv-3388 | 0804.2998 | OFDM based Distributed Space Time Coding for Asynchronous Relay Networks | <|reference_start|>OFDM based Distributed Space Time Coding for Asynchronous Relay Networks: Recently Li and Xia have proposed a transmission scheme for wireless relay networks based on the Alamouti space time code and orthogonal frequency division multiplexing to combat the effect of timing errors at the relay nodes. This transmission scheme is amazingly simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space time block code (DSTBC) structure that admit its application in the proposed transmission scheme are identified and it is pointed out that the recently proposed full diversity four group decodable DSTBCs from precoded co-ordinate interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and also no timing error knowledge at the destination node. Finally, four group decodable distributed differential space time block codes applicable in this new transmission scheme for power of two number of relays are also provided.<|reference_end|> | arxiv | @article{rajan2008ofdm,
title={OFDM based Distributed Space Time Coding for Asynchronous Relay Networks},
author={G. Susinder Rajan and B. Sundar Rajan},
journal={arXiv preprint arXiv:0804.2998},
year={2008},
doi={10.1109/ICC.2008.218},
archivePrefix={arXiv},
eprint={0804.2998},
primaryClass={cs.IT math.IT}
} | rajan2008ofdm |
arxiv-3389 | 0804.3023 | Experiments in Model-Checking Optimistic Replication Algorithms | <|reference_start|>Experiments in Model-Checking Optimistic Replication Algorithms: This paper describes a series of model-checking experiments to verify optimistic replication algorithms based on Operational Transformation (OT) approach used for supporting collaborative edition. We formally define, using tool UPPAAL, the behavior and the main consistency requirement (i.e. convergence property) of the collaborative editing systems, as well as the abstract behavior of the environment where these systems are supposed to operate. Due to data replication and the unpredictable nature of user interactions, such systems have infinitely many states. So, we show how to exploit some features of the UPPAAL specification language to attenuate the severe state explosion problem. Two models are proposed. The first one, called concrete model, is very close to the system implementation but runs up against a severe explosion of states. The second model, called symbolic model, aims to overcome the limitation of the concrete model by delaying the effective selection and execution of editing operations until the construction of symbolic execution traces of all sites is completed. Experimental results have shown that the symbolic model allows a significant gain in both space and time. Using the symbolic model, we have been able to show that if the number of sites exceeds 2 then the convergence property is not satisfied for all OT algorithms considered here. A counterexample is provided for every algorithm.<|reference_end|> | arxiv | @article{boucheneb2008experiments,
title={Experiments in Model-Checking Optimistic Replication Algorithms},
author={Hanifa Boucheneb (VeriForm) and Abdessamad Imine (INRIA Lorraine -
LORIA / LIFC)},
journal={arXiv preprint arXiv:0804.3023},
year={2008},
number={RR-6510},
archivePrefix={arXiv},
eprint={0804.3023},
primaryClass={cs.LO cs.SC}
} | boucheneb2008experiments |
arxiv-3390 | 0804.3028 | Parameterized Low-distortion Embeddings - Graph metrics into lines and trees | <|reference_start|>Parameterized Low-distortion Embeddings - Graph metrics into lines and trees: We revisit the issue of low-distortion embedding of metric spaces into the line, and more generally, into the shortest path metric of trees, from the parameterized complexity perspective.Let $M=M(G)$ be the shortest path metric of an edge weighted graph $G=(V,E)$ on $n$ vertices. We describe algorithms for the problem of finding a low distortion non-contracting embedding of $M$ into line and tree metrics. We give an $O(nd^4(2d+1)^{2d})$ time algorithm that for an unweighted graph metric $M$ and integer $d$ either constructs an embedding of $M$ into the line with distortion at most $d$, or concludes that no such embedding exists. We find the result surprising, because the considered problem bears a strong resemblance to the notoriously hard Bandwidth Minimization problem which does not admit any FPT algorithm unless an unlikely collapse of parameterized complexity classes occurs. We show that our algorithm can also be applied to construct small distortion embeddings of weighted graph metrics. The running time of our algorithm is $O(n(dW)^4(2d+1)^{2dW})$ where $W$ is the largest edge weight of the input graph. We also show that deciding whether a weighted graph metric $M(G)$ with maximum weight $W < |V(G)|$ can be embedded into the line with distortion at most $d$ is NP-Complete for every fixed rational $d \geq 2$. This rules out any possibility of an algorithm with running time $O((nW)^{h(d)})$ where $h$ is a function of $d$ alone. We generalize the result on embedding into the line by proving that for any tree $T$ with maximum degree $\Delta$, embedding of $M$ into a shortest path metric of $T$ is FPT, parameterized by $(\Delta,d)$.<|reference_end|> | arxiv | @article{fellows2008parameterized,
title={Parameterized Low-distortion Embeddings - Graph metrics into lines and
trees},
author={Michael Fellows and Fedor Fomin and Daniel Lokshtanov and Elena
Losievskaja and Frances A. Rosamond and Saket Saurabh},
journal={arXiv preprint arXiv:0804.3028},
year={2008},
archivePrefix={arXiv},
eprint={0804.3028},
primaryClass={cs.DS cs.CC}
} | fellows2008parameterized |
arxiv-3391 | 0804.3064 | Intelligence gathering by capturing the social processes within prisons | <|reference_start|>Intelligence gathering by capturing the social processes within prisons: We present a prototype system that can be used to capture longitudinal socialising processes by recording people's encounters in space. We argue that such a system can usefully be deployed in prisons and other detention facilities in order to help intelligence analysts assess the behaviour of terrorist and organised crime groups, and their potential relationships. Here we present the results of a longitudinal study, carried out with civilians, which demonstrates the capabilities of our system.<|reference_end|> | arxiv | @article{kostakos2008intelligence,
title={Intelligence gathering by capturing the social processes within prisons},
author={Vassilis Kostakos and Panos A. Kostakos},
journal={International Journal of Pervasive Computing and Communications,
6(4):423-431, 2010},
year={2008},
doi={10.1108/17427371011097622},
archivePrefix={arXiv},
eprint={0804.3064},
primaryClass={cs.CY}
} | kostakos2008intelligence |
arxiv-3392 | 0804.3065 | Visibly Tree Automata with Memory and Constraints | <|reference_start|>Visibly Tree Automata with Memory and Constraints: Tree automata with one memory have been introduced in 2001. They generalize both pushdown (word) automata and the tree automata with constraints of equality between brothers of Bogaert and Tison. Though it has a decidable emptiness problem, the main weakness of this model is its lack of good closure properties. We propose a generalization of the visibly pushdown automata of Alur and Madhusudan to a family of tree recognizers which carry along their (bottom-up) computation an auxiliary unbounded memory with a tree structure (instead of a symbol stack). In other words, these recognizers, called Visibly Tree Automata with Memory (VTAM) define a subclass of tree automata with one memory enjoying Boolean closure properties. We show in particular that they can be determinized and the problems like emptiness, membership, inclusion and universality are decidable for VTAM. Moreover, we propose several extensions of VTAM whose transitions may be constrained by different kinds of tests between memories and also constraints a la Bogaert and Tison comparing brother subtrees in the tree in input. We show that some of these classes of constrained VTAM keep the good closure and decidability properties, and we demonstrate their expressiveness with relevant examples of tree languages.<|reference_end|> | arxiv | @article{comon-lundh2008visibly,
title={Visibly Tree Automata with Memory and Constraints},
author={Hubert Comon-Lundh and Florent Jacquemard and Nicolas Perrin},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (June 18,
2008) lmcs:827},
year={2008},
doi={10.2168/LMCS-4(2:8)2008},
archivePrefix={arXiv},
eprint={0804.3065},
primaryClass={cs.LO}
} | comon-lundh2008visibly |
arxiv-3393 | 0804.3103 | Size matters: performance declines if your pixels are too big or too small | <|reference_start|>Size matters: performance declines if your pixels are too big or too small: We present a conceptual model that describes the effect of pixel size on target acquisition. We demonstrate the use of our conceptual model by applying it to predict and explain the results of an experiment to evaluate users' performance in a target acquisition task involving three distinct display sizes: standard desktop, small and large displays. The results indicate that users are fastest on standard desktop displays, undershoots are the most common error on small displays and overshoots are the most common error on large displays. We propose heuristics to maintain usability when changing displays. Finally, we contribute to the growing body of evidence that amplitude does affect performance in a display-based pointing task.<|reference_end|> | arxiv | @article{kostakos2008size,
title={Size matters: performance declines if your pixels are too big or too
small},
author={Vassilis Kostakos and Eamonn O'Neill},
journal={arXiv preprint arXiv:0804.3103},
year={2008},
archivePrefix={arXiv},
eprint={0804.3103},
primaryClass={cs.GR cs.HC}
} | kostakos2008size |
arxiv-3394 | 0804.3105 | A lower bound on web services composition | <|reference_start|>A lower bound on web services composition: A web service is modeled here as a finite state machine. A composition problem for web services is to decide if a given web service can be constructed from a given set of web services, where the construction is understood as a simulation of the specification by a fully asynchronous product of the given services. We show an EXPTIME lower bound for this problem, thus matching the known upper bound. Our result also applies to richer models of web services, such as the Roman model.<|reference_end|> | arxiv | @article{muscholl2008a,
title={A lower bound on web services composition},
author={Anca Muscholl and Igor Walukiewicz},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (May 15,
2008) lmcs:824},
year={2008},
doi={10.2168/LMCS-4(2:5)2008},
archivePrefix={arXiv},
eprint={0804.3105},
primaryClass={cs.LO}
} | muscholl2008a |
arxiv-3395 | 0804.3109 | Partial Cross-Correlation of D-Sequences based CDMA System | <|reference_start|>Partial Cross-Correlation of D-Sequences based CDMA System: Like other pseudorandom sequences, decimal sequences may be used in designing a Code Division Multiple Access (CDMA) system. They appear to be ideally suited for this since the cross-correlation of d-sequences taken over the LCM of their periods is zero. But a practical system will not, in all likelihood, satisfy the condition that the number of chips per bit is equal to the LCM for all sequences that are assigned to different users. It is essential, therefore, to determine the partial cross-correlation properties of d-sequences. This paper reports experiments on d-sequences showing that their partial cross-correlation is lower than that of PN sequences, indicating that d-sequences can be effective for use in CDMA.<|reference_end|> | arxiv | @article{chalasani2008partial,
title={Partial Cross-Correlation of D-Sequences based CDMA System},
author={Sandeep Chalasani},
journal={arXiv preprint arXiv:0804.3109},
year={2008},
archivePrefix={arXiv},
eprint={0804.3109},
primaryClass={cs.IT math.IT}
} | chalasani2008partial |
arxiv-3396 | 0804.3120 | The Capacity Of Two Way Relay Channel | <|reference_start|>The Capacity Of Two Way Relay Channel: This paper investigates the capacity of a wireless two way relay channel in which two end nodes exchange information via a relay node. The capacity is defined in the information-theoretic sense as the maximum information exchange rate between the two end nodes. We give an upper bound of the capacity by applying the cut-set theorem. We prove that this upper bound can be approached in low SNR region using "separated" multiple access for uplinks from the end nodes to the relay in which the data from the end nodes are individually decoded at the relay; and network-coding broadcast for downlinks from the relay to the end nodes in which the relay mixes the information from end nodes before forwarding. We further prove that the capacity is approachable in high SNR region using physical-layer network coding (PNC) multiple access for uplinks, and network-coding broadcast for downlinks. From our proof and observations, we conjecture that the upper bound may be achieved with PNC in all SNR regions.<|reference_end|> | arxiv | @article{shengli2008the,
title={The Capacity Of Two Way Relay Channel},
author={Zhang Shengli and Soung Chang Liew},
journal={arXiv preprint arXiv:0804.3120},
year={2008},
archivePrefix={arXiv},
eprint={0804.3120},
primaryClass={cs.IT math.IT}
} | shengli2008the |
arxiv-3397 | 0804.3155 | Leveraging Coherent Distributed Space-Time Codes for Noncoherent Communication in Relay Networks via Training | <|reference_start|>Leveraging Coherent Distributed Space-Time Codes for Noncoherent Communication in Relay Networks via Training: For point to point multiple input multiple output systems, Dayal-Brehler-Varanasi have proved that training codes achieve the same diversity order as that of the underlying coherent space time block code (STBC) if a simple minimum mean squared error estimate of the channel formed using the training part is employed for coherent detection of the underlying STBC. In this letter, a similar strategy involving a combination of training, channel estimation and detection in conjunction with existing coherent distributed STBCs is proposed for noncoherent communication in AF relay networks. Simulation results show that the proposed simple strategy outperforms distributed differential space-time coding for AF relay networks. Finally, the proposed strategy is extended to asynchronous relay networks using orthogonal frequency division multiplexing.<|reference_end|> | arxiv | @article{rajan2008leveraging,
title={Leveraging Coherent Distributed Space-Time Codes for Noncoherent
Communication in Relay Networks via Training},
author={G. Susinder Rajan and B. Sundar Rajan},
journal={arXiv preprint arXiv:0804.3155},
year={2008},
archivePrefix={arXiv},
eprint={0804.3155},
primaryClass={cs.IT math.IT}
} | rajan2008leveraging |
arxiv-3398 | 0804.3160 | On the performance of approximate equilibria in congestion games | <|reference_start|>On the performance of approximate equilibria in congestion games: We study the performance of approximate Nash equilibria for linear congestion games. We consider how much the price of anarchy worsens and how much the price of stability improves as a function of the approximation factor $\epsilon$. We give (almost) tight upper and lower bounds for both the price of anarchy and the price of stability for atomic and non-atomic congestion games. Our results not only encompass and generalize the existing results of exact equilibria to $\epsilon$-Nash equilibria, but they also provide a unified approach which reveals the common threads of the atomic and non-atomic price of anarchy results. By expanding the spectrum, we also cast the existing results in a new light. For example, the Pigou network, which gives tight results for exact Nash equilibria of selfish routing, remains tight for the price of stability of $\epsilon$-Nash equilibria but not for the price of anarchy.<|reference_end|> | arxiv | @article{christodoulou2008on,
title={On the performance of approximate equilibria in congestion games},
author={George Christodoulou and Elias Koutsoupias and Paul Spirakis},
journal={arXiv preprint arXiv:0804.3160},
year={2008},
archivePrefix={arXiv},
eprint={0804.3160},
primaryClass={cs.GT cs.AI cs.NI}
} | christodoulou2008on |
arxiv-3399 | 0804.3171 | Optimization Approach for Detecting the Critical Data on a Database | <|reference_start|>Optimization Approach for Detecting the Critical Data on a Database: Through purposeful introduction of malicious transactions (tracking transactions) into randomly selected nodes of a (database) graph, soiled and clean segments are identified. Soiled and clean measures corresponding to those segments are then computed. These measures are used to recast the detection of critical database elements as an optimization problem over the graph. This method is universally applicable over a large class of graphs (including directed, weighted, disconnected, cyclic) that occur in several contexts of databases. A generalization argument is presented which extends the critical data problem to abstract settings.<|reference_end|> | arxiv | @article{alluvada2008optimization,
title={Optimization Approach for Detecting the Critical Data on a Database},
author={Prashanth Alluvada},
journal={arXiv preprint arXiv:0804.3171},
year={2008},
archivePrefix={arXiv},
eprint={0804.3171},
primaryClass={cs.DB}
} | alluvada2008optimization |
arxiv-3400 | 0804.3193 | Symbolic computations in differential geometry | <|reference_start|>Symbolic computations in differential geometry: We introduce the C++ library Wedge, based on GiNaC, for symbolic computations in differential geometry. We show how Wedge makes it possible to use the language C++ to perform such computations, and illustrate some advantages of this approach with explicit examples. In particular, we describe a short program to determine whether a given linear exterior differential system is involutive.<|reference_end|> | arxiv | @article{conti2008symbolic,
title={Symbolic computations in differential geometry},
author={Diego Conti},
journal={arXiv preprint arXiv:0804.3193},
year={2008},
archivePrefix={arXiv},
eprint={0804.3193},
primaryClass={math.DG cs.SC}
} | conti2008symbolic |