corpus_id: string (length 7–12)
paper_id: string (length 9–16)
title: string (length 1–261)
abstract: string (length 70–4.02k)
source: string (1 distinct value)
bibtex: string (length 208–20.9k)
citation_key: string (length 6–100)
arxiv-2301
0801.0841
Capacity of the Bosonic Wiretap Channel and the Entropy Photon-Number Inequality
<|reference_start|>Capacity of the Bosonic Wiretap Channel and the Entropy Photon-Number Inequality: Determining the ultimate classical information carrying capacity of electromagnetic waves requires quantum-mechanical analysis to properly account for the bosonic nature of these waves. Recent work has established capacity theorems for bosonic single-user and broadcast channels, under the presumption of two minimum output entropy conjectures. Despite considerable accumulated evidence that supports the validity of these conjectures, they have yet to be proven. In this paper, it is shown that the second conjecture suffices to prove the classical capacity of the bosonic wiretap channel, which in turn would also prove the quantum capacity of the lossy bosonic channel. The preceding minimum output entropy conjectures are then shown to be simple consequences of an Entropy Photon-Number Inequality (EPnI), which is a conjectured quantum-mechanical analog of the Entropy Power Inequality (EPI) from classical information theory.<|reference_end|>
arxiv
@article{guha2008capacity, title={Capacity of the Bosonic Wiretap Channel and the Entropy Photon-Number Inequality}, author={Saikat Guha and Jeffrey H. Shapiro and Baris I. Erkmen}, journal={arXiv preprint arXiv:0801.0841}, year={2008}, archivePrefix={arXiv}, eprint={0801.0841}, primaryClass={quant-ph cs.IT math.IT} }
guha2008capacity
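Each `bibtex` cell in this dataset holds a single flat `@article` entry like the one in the record above. A minimal sketch of pulling the entry type, citation key, and field values out of such a string with Python's `re` module (the entry literal below is modeled on the first record, and the regex assumes field values contain no nested braces, which holds for these records):

```python
import re

# A flat @article entry modeled on the first record's `bibtex` cell.
entry = ('@article{guha2008capacity, '
         'title={Capacity of the Bosonic Wiretap Channel and the Entropy Photon-Number Inequality}, '
         'author={Saikat Guha and Jeffrey H. Shapiro and Baris I. Erkmen}, '
         'journal={arXiv preprint arXiv:0801.0841}, year={2008}, '
         'archivePrefix={arXiv}, eprint={0801.0841}, '
         'primaryClass={quant-ph cs.IT math.IT} }')

# Entry type and citation key: "@article{guha2008capacity,".
kind, key = re.match(r'@(\w+)\{([^,]+),', entry).groups()

# Flat field={value} pairs; values are assumed brace-free.
fields = dict(re.findall(r'(\w+)=\{([^{}]*)\}', entry))
```

After parsing, `kind` is `article`, `key` matches the row's `citation_key` column, and `fields` maps `title`, `author`, `eprint`, etc. to their values. A full BibTeX parser would have to handle nested braces (e.g. `{\"o}` escapes in some entries below), which this sketch deliberately does not.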
arxiv-2302
0801.0857
Period-Different $m$-Sequences With At Most A Four-Valued Cross Correlation
<|reference_start|>Period-Different $m$-Sequences With At Most A Four-Valued Cross Correlation: In this paper, we follow the recent work of Helleseth, Kholosha, Johanssen and Ness to study the cross correlation between an $m$-sequence of period $2^m-1$ and the $d$-decimation of an $m$-sequence of shorter period $2^{n}-1$ for an even number $m=2n$. Assuming that $d$ satisfies $d(2^l+1)=2^i \pmod{2^n-1}$ for some $l$ and $i$, we prove that the cross correlation takes exactly three or four values, depending on whether ${\rm gcd}(l,n)$ is equal to or larger than 1. The distribution of the correlation values is also completely determined. Our result confirms the numerical phenomenon found by Helleseth et al. It is conjectured that, apart from the cases proved here, there are no other values of $d$ that give at most a four-valued cross correlation.<|reference_end|>
arxiv
@article{hu2008period-different, title={Period-Different $m$-Sequences With At Most A Four-Valued Cross Correlation}, author={Lei Hu and Xiangyong Zeng and Nian Li and Wenfeng Jiang}, journal={arXiv preprint arXiv:0801.0857}, year={2008}, archivePrefix={arXiv}, eprint={0801.0857}, primaryClass={cs.IT cs.DM math.IT} }
hu2008period-different
arxiv-2303
0801.0882
Call-by-value Termination in the Untyped lambda-calculus
<|reference_start|>Call-by-value Termination in the Untyped lambda-calculus: A fully-automated algorithm is developed that can show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The ``size-change principle'' from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound, and is proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms, including programs with mutual recursion and parameter exchanges, and Colson's ``minimum'' algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.<|reference_end|>
arxiv
@article{jones2008call-by-value, title={Call-by-value Termination in the Untyped lambda-calculus}, author={Neil D. Jones and Nina Bohr}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 17, 2008) lmcs:915}, year={2008}, doi={10.2168/LMCS-4(1:3)2008}, archivePrefix={arXiv}, eprint={0801.0882}, primaryClass={cs.PL} }
jones2008call-by-value
arxiv-2304
0801.0931
The Asymptotic Bit Error Probability of LDPC Codes for the Binary Erasure Channel with Finite Iteration Number
<|reference_start|>The Asymptotic Bit Error Probability of LDPC Codes for the Binary Erasure Channel with Finite Iteration Number: We consider communication over the binary erasure channel (BEC) using low-density parity-check (LDPC) codes and belief propagation (BP) decoding. The bit error probability for infinite block length is known from density evolution, and it is well known that the difference between the bit error probability at a finite iteration number for finite block length $n$ and that for infinite block length is asymptotically $\alpha/n$, where $\alpha$ is a constant depending on the degree distribution, the iteration number and the erasure probability. Our main result is an efficient algorithm for calculating $\alpha$ for regular ensembles. The approximation using $\alpha$ is accurate for $(2,r)$-regular ensembles even at small block lengths.<|reference_end|>
arxiv
@article{mori2008the, title={The Asymptotic Bit Error Probability of LDPC Codes for the Binary Erasure Channel with Finite Iteration Number}, author={Ryuhei Mori and Kenta Kasai and Tomoharu Shibuya and Kohichi Sakaniwa}, journal={arXiv preprint arXiv:0801.0931}, year={2008}, archivePrefix={arXiv}, eprint={0801.0931}, primaryClass={cs.IT math.IT} }
mori2008the
arxiv-2305
0801.0938
Cognitive Networks Achieve Throughput Scaling of a Homogeneous Network
<|reference_start|>Cognitive Networks Achieve Throughput Scaling of a Homogeneous Network: We study two distinct, but overlapping, networks that operate at the same time, space, and frequency. The first network consists of $n$ randomly distributed \emph{primary users}, which form either an ad hoc network, or an infrastructure-supported ad hoc network with $l$ additional base stations. The second network consists of $m$ randomly distributed, ad hoc secondary users or cognitive users. The primary users have priority access to the spectrum and do not need to change their communication protocol in the presence of secondary users. The secondary users, however, need to adjust their protocol based on knowledge about the locations of the primary nodes to bring little loss to the primary network's throughput. By introducing preservation regions around primary receivers and avoidance regions around primary base stations, we propose two modified multihop routing protocols for the cognitive users. Based on percolation theory, we show that when the secondary network is denser than the primary network, both networks can simultaneously achieve the same throughput scaling law as a stand-alone network. Furthermore, the primary network throughput suffers only a vanishingly small fractional loss. Specifically, for the ad hoc and the infrastructure-supported primary models, the primary network achieves sum throughputs of order $n^{1/2}$ and $\max\{n^{1/2},l\}$, respectively. For both primary network models, for any $\delta>0$, the secondary network can achieve sum throughput of order $m^{1/2-\delta}$ with an arbitrarily small fraction of outage. Thus, almost all secondary source-destination pairs can communicate at a rate of order $m^{-1/2-\delta}$.<|reference_end|>
arxiv
@article{jeon2008cognitive, title={Cognitive Networks Achieve Throughput Scaling of a Homogeneous Network}, author={Sang-Woon Jeon and Natasha Devroye and Mai Vu and Sae-Young Chung and Vahid Tarokh}, journal={IEEE Transactions on Information Theory, vol. 57, no. 8, pp. 5103-5115, Aug. 2011}, year={2008}, doi={10.1109/TIT.2011.2158874}, archivePrefix={arXiv}, eprint={0801.0938}, primaryClass={cs.IT math.IT} }
jeon2008cognitive
arxiv-2306
0801.0949
On the Refinement of Liveness Properties of Distributed Systems
<|reference_start|>On the Refinement of Liveness Properties of Distributed Systems: We present a new approach for reasoning about liveness properties of distributed systems, represented as automata. Our approach is based on simulation relations, and requires reasoning only over finite execution fragments. Current simulation-relation based methods for reasoning about liveness properties of automata require reasoning over entire executions, since they involve a proof obligation of the form: if a concrete and abstract execution ``correspond'' via the simulation, and the concrete execution is live, then so is the abstract execution. Our contribution consists of (1) a formalism for defining liveness properties, (2) a proof method for liveness properties based on that formalism, and (3) two expressive completeness results: firstly, our formalism can express any liveness property which satisfies a natural ``robustness'' condition, and secondly, our formalism can express any liveness property at all, provided that history variables can be used.<|reference_end|>
arxiv
@article{attie2008on, title={On the Refinement of Liveness Properties of Distributed Systems}, author={Paul C. Attie}, journal={arXiv preprint arXiv:0801.0949}, year={2008}, archivePrefix={arXiv}, eprint={0801.0949}, primaryClass={cs.LO} }
attie2008on
arxiv-2307
0801.0969
Pareto and Boltzmann-Gibbs behaviors in a deterministic multi-agent system
<|reference_start|>Pareto and Boltzmann-Gibbs behaviors in a deterministic multi-agent system: A deterministic system of interacting agents is considered as a model for economic dynamics. The dynamics of the system is described by a coupled map lattice with near neighbor interactions. The evolution of each agent results from the competition between two factors: the agent's own tendency to grow and the environmental influence that moderates this growth. Depending on the values of the parameters that control these factors, the system can display Pareto or Boltzmann-Gibbs statistical behaviors in its asymptotic dynamical regime. The regions where these behaviors appear are calculated on the space of parameters of the system. Other statistical properties, such as the mean wealth, the standard deviation, and the Gini coefficient characterizing the degree of equity in the wealth distribution are also calculated on the space of parameters of the system.<|reference_end|>
arxiv
@article{gonzalez-estevez2008pareto, title={Pareto and Boltzmann-Gibbs behaviors in a deterministic multi-agent system}, author={J. Gonzalez-Estevez and M. G. Cosenza and R. Lopez-Ruiz and J. R. Sanchez}, journal={arXiv preprint arXiv:0801.0969}, year={2008}, doi={10.1016/j.physa.2008.03.013}, archivePrefix={arXiv}, eprint={0801.0969}, primaryClass={q-fin.GN cond-mat.stat-mech cs.MA nlin.AO nlin.CD physics.comp-ph physics.soc-ph} }
gonzalez-estevez2008pareto
arxiv-2308
0801.1002
Capacity Bounds for Peak-Constrained Multiantenna Wideband Channels
<|reference_start|>Capacity Bounds for Peak-Constrained Multiantenna Wideband Channels: We derive bounds on the noncoherent capacity of a very general class of multiple-input multiple-output channels that allow for selectivity in time and frequency as well as for spatial correlation. The bounds apply to peak-constrained inputs; they are explicit in the channel's scattering function, are useful for a large range of bandwidths, and allow coarse identification of the capacity-optimal combination of bandwidth and number of transmit antennas. Furthermore, we obtain a closed-form expression for the first-order Taylor series expansion of capacity in the limit of infinite bandwidth. From this expression, we conclude that in the wideband regime: (i) it is optimal to use only one transmit antenna when the channel is spatially uncorrelated; (ii) rank-one statistical beamforming is optimal if the channel is spatially correlated; and (iii) spatial correlation, be it at the transmitter, the receiver, or both, is beneficial.<|reference_end|>
arxiv
@article{schuster2008capacity, title={Capacity Bounds for Peak-Constrained Multiantenna Wideband Channels}, author={Ulrich G. Schuster and Giuseppe Durisi and Helmut B{\"o}lcskei and H. Vincent Poor}, journal={arXiv preprint arXiv:0801.1002}, year={2008}, archivePrefix={arXiv}, eprint={0801.1002}, primaryClass={cs.IT math.IT} }
schuster2008capacity
arxiv-2309
0801.1033
The What, Who, Where, When, Why and How of Context-Awareness
<|reference_start|>The What, Who, Where, When, Why and How of Context-Awareness: The understanding of context and context-awareness is very important for the areas of handheld and ubiquitous computing. Unfortunately, at present, there has not been a satisfactory definition of these two concepts that would lead to more effective communication in human-computer interaction. As a result, on the one hand, application designers are not able to choose what context to use in their applications and, on the other, they cannot determine the type of context-awareness behaviours their applications should exhibit. In this work, we aim to provide answers to some fundamental questions that could enlighten us on the definition of context and its functionality.<|reference_end|>
arxiv
@article{tsibidis2008the, title={The What, Who, Where, When, Why and How of Context-Awareness}, author={George Tsibidis and Theodoros N. Arvanitis and Chris Baber}, journal={arXiv preprint arXiv:0801.1033}, year={2008}, archivePrefix={arXiv}, eprint={0801.1033}, primaryClass={cs.HC} }
tsibidis2008the
arxiv-2310
0801.1060
On the Period of a Periodic-Finite-Type Shift
<|reference_start|>On the Period of a Periodic-Finite-Type Shift: Periodic-finite-type shifts (PFTs) form a class of sofic shifts that strictly contains the class of shifts of finite type (SFTs). In this paper, we investigate how the notion of "period" inherent in the definition of a PFT causes it to differ from an SFT, and how the period influences the properties of a PFT.<|reference_end|>
arxiv
@article{manada2008on, title={On the Period of a Periodic-Finite-Type Shift}, author={Akiko Manada and Navin Kashyap}, journal={arXiv preprint arXiv:0801.1060}, year={2008}, archivePrefix={arXiv}, eprint={0801.1060}, primaryClass={cs.IT math.IT} }
manada2008on
arxiv-2311
0801.1063
Modeling Online Reviews with Multi-grain Topic Models
<|reference_start|>Modeling Online Reviews with Multi-grain Topic Models: In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., `waitress' and `bartender' are part of the same topic `staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.<|reference_end|>
arxiv
@article{titov2008modeling, title={Modeling Online Reviews with Multi-grain Topic Models}, author={Ivan Titov and Ryan McDonald}, journal={arXiv preprint arXiv:0801.1063}, year={2008}, archivePrefix={arXiv}, eprint={0801.1063}, primaryClass={cs.IR cs.DB} }
titov2008modeling
arxiv-2312
0801.1067
The lowest-possible BER and FER for any discrete memoryless channel with given capacity
<|reference_start|>The lowest-possible BER and FER for any discrete memoryless channel with given capacity: We investigate properties of a channel coding scheme leading to the minimum-possible frame error ratio when transmitting over a memoryless channel with rate R>C. The results are compared to the well-known properties of a channel coding scheme leading to the minimum bit error ratio. It is concluded that these two optimization goals are contradictory. A valuable application of the derived results is presented.<|reference_end|>
arxiv
@article{huber2008the, title={The lowest-possible BER and FER for any discrete memoryless channel with given capacity}, author={Johannes B. Huber and Thorsten Hehn}, journal={arXiv preprint arXiv:0801.1067}, year={2008}, archivePrefix={arXiv}, eprint={0801.1067}, primaryClass={cs.IT math.IT} }
huber2008the
arxiv-2313
0801.1126
Concave Programming Upper Bounds on the Capacity of 2-D Constraints
<|reference_start|>Concave Programming Upper Bounds on the Capacity of 2-D Constraints: The capacity of 1-D constraints is given by the entropy of a corresponding stationary maxentropic Markov chain. Namely, the entropy is maximized over a set of probability distributions, which is defined by some linear requirements. In this paper, certain aspects of this characterization are extended to 2-D constraints. The result is a method for calculating an upper bound on the capacity of 2-D constraints. The key steps are: The maxentropic stationary probability distribution on square configurations is considered. A set of linear equalities and inequalities is derived from this stationarity. The result is a concave program, which can be easily solved numerically. Our method improves upon previous upper bounds for the capacity of the 2-D ``no independent bits'' constraint, as well as certain 2-D RLL constraints.<|reference_end|>
arxiv
@article{tal2008concave, title={Concave Programming Upper Bounds on the Capacity of 2-D Constraints}, author={Ido Tal and Ron M. Roth}, journal={arXiv preprint arXiv:0801.1126}, year={2008}, archivePrefix={arXiv}, eprint={0801.1126}, primaryClass={cs.IT math.IT} }
tal2008concave
arxiv-2314
0801.1136
A Constrained Channel Coding Approach to Joint Communication and Channel Estimation
<|reference_start|>A Constrained Channel Coding Approach to Joint Communication and Channel Estimation: A joint communication and channel state estimation problem is investigated, in which reliable information transmission over a noisy channel, and high-fidelity estimation of the channel state, are simultaneously sought. The tradeoff between the achievable information rate and the estimation distortion is quantified by formulating the problem as a constrained channel coding problem, and the resulting capacity-distortion function characterizes the fundamental limit of the joint communication and channel estimation problem. The analytical results are illustrated through case studies, and further issues such as multiple cost constraints, channel uncertainty, and capacity per unit distortion are also briefly discussed.<|reference_end|>
arxiv
@article{zhang2008a, title={A Constrained Channel Coding Approach to Joint Communication and Channel Estimation}, author={Wenyi Zhang and Satish Vedantam and Urbashi Mitra}, journal={arXiv preprint arXiv:0801.1136}, year={2008}, archivePrefix={arXiv}, eprint={0801.1136}, primaryClass={cs.IT math.IT} }
zhang2008a
arxiv-2315
0801.1138
An Addendum to "How Good is PSK for Peak-Limited Fading Channels in the Low-SNR Regime?"
<|reference_start|>An Addendum to "How Good is PSK for Peak-Limited Fading Channels in the Low-SNR Regime?": A proof is provided of the operational achievability of $R_\mathrm{rt}$ by the recursive training scheme in \cite{zhang07:it}, for general wide-sense stationary and ergodic Rayleigh fading processes.<|reference_end|>
arxiv
@article{zhang2008an, title={An Addendum to "How Good is PSK for Peak-Limited Fading Channels in the Low-SNR Regime?"}, author={Wenyi Zhang}, journal={arXiv preprint arXiv:0801.1138}, year={2008}, archivePrefix={arXiv}, eprint={0801.1138}, primaryClass={cs.IT math.IT} }
zhang2008an
arxiv-2316
0801.1141
Coding Strategies for Noise-Free Relay Cascades with Half-Duplex Constraint
<|reference_start|>Coding Strategies for Noise-Free Relay Cascades with Half-Duplex Constraint: Two types of noise-free relay cascades are investigated: networks where a source communicates with a distant receiver via a cascade of half-duplex constrained relays, and networks where not only the source but also a single relay node intends to transmit information to the same destination. We introduce two relay channel models capturing the half-duplex constraint, and within the framework of these models capacity is determined for the first network type. It turns out that capacity is significantly higher than the rates achievable with a straightforward time-sharing approach. A capacity-achieving coding strategy is presented, based on allocating the transmit and receive time slots of a node depending on the node's previously received data. For networks of the second type, an upper bound on the rate region is derived from the cut-set bound. Further, achievability of the cut-set bound in the single-relay case is shown, given that the source rate exceeds a certain minimum value.<|reference_end|>
arxiv
@article{lutz2008coding, title={Coding Strategies for Noise-Free Relay Cascades with Half-Duplex Constraint}, author={Tobias Lutz and Christoph Hausl and Ralf K{\"o}tter}, journal={arXiv preprint arXiv:0801.1141}, year={2008}, doi={10.1109/ISIT.2008.4595418}, archivePrefix={arXiv}, eprint={0801.1141}, primaryClass={cs.IT math.IT} }
lutz2008coding
arxiv-2317
0801.1179
Corpus sp\'ecialis\'e et ressource de sp\'ecialit\'e
<|reference_start|>Corpus sp\'ecialis\'e et ressource de sp\'ecialit\'e: "Semantic Atlas" is a mathematical and statistical model to visualise word senses according to relations between words. The model, which has been applied to proximity relations from a corpus, has shown its ability to distinguish word senses as the corpus' contributors comprehend them. We propose to use the model and a specialised corpus in order to automatically create a specialised dictionary for the corpus' domain. A morpho-syntactic analysis performed on the corpus makes it possible to create the dictionary from syntactic relations between lexical units. The semantic resource can be used to navigate semantically - and not only lexically - through the corpus, to create classical dictionaries, or for diachronic studies of the language.<|reference_end|>
arxiv
@article{jacquemin2008corpus, title={Corpus sp{\'e}cialis{\'e} et ressource de sp{\'e}cialit{\'e}}, author={Bernard Jacquemin (ISC, UMR 7044, GERIICO) and Sabine Ploux (ISC)}, journal={Appears in Fran\c{c}ois Maniez; Pascaline Dury; Nathalie Arlin; Claire Rougemont. Corpus et dictionnaires de langues de sp{\'e}cialit{\'e}, Presses Universitaires de Grenoble, pp.197-212, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0801.1179}, primaryClass={cs.IR cs.CL} }
jacquemin2008corpus
arxiv-2318
0801.1185
Capacity of the Discrete-Time AWGN Channel Under Output Quantization
<|reference_start|>Capacity of the Discrete-Time AWGN Channel Under Output Quantization: We investigate the limits of communication over the discrete-time Additive White Gaussian Noise (AWGN) channel, when the channel output is quantized using a small number of bits. We first provide a proof of our recent conjecture on the optimality of a discrete input distribution in this scenario. Specifically, we show that for any given output quantizer choice with K quantization bins (i.e., a precision of log2 K bits), the input distribution, under an average power constraint, need not have any more than K + 1 mass points to achieve the channel capacity. The cutting-plane algorithm is employed to compute this capacity and to generate optimum input distributions. Numerical optimization over the choice of the quantizer is then performed (for 2-bit and 3-bit symmetric quantization), and the results we obtain show that the loss due to low-precision output quantization, which is small at low signal-to-noise ratio (SNR) as expected, can be quite acceptable even for moderate to high SNR values. For example, at SNRs up to 20 dB, 2-3 bit quantization achieves 80-90% of the capacity achievable using infinite-precision quantization.<|reference_end|>
arxiv
@article{singh2008capacity, title={Capacity of the Discrete-Time AWGN Channel Under Output Quantization}, author={Jaspreet Singh, Onkar Dabeer and Upamanyu Madhow}, journal={arXiv preprint arXiv:0801.1185}, year={2008}, archivePrefix={arXiv}, eprint={0801.1185}, primaryClass={cs.IT math.IT} }
singh2008capacity
arxiv-2319
0801.1208
Hybrid Decoding of Finite Geometry LDPC Codes
<|reference_start|>Hybrid Decoding of Finite Geometry LDPC Codes: For finite geometry low-density parity-check codes, the heavy row and column weights of the parity-check matrix make decoding computationally expensive, even with Min-Sum (MS) variants. To alleviate this, we present a class of hybrid schemes that concatenate a parallel bit flipping (BF) variant with a Min-Sum (MS) variant. In most of the SNR region of interest, simulation results show that the proposed hybrid schemes save substantial computational complexity with respect to MS variant decoding alone, without compromising performance or convergence rate. Specifically, the BF variant, with much less computational complexity, bears most of the decoding load before resorting to the MS variant. Computational and hardware complexity is also analysed to justify the feasibility of the hybrid schemes.<|reference_end|>
arxiv
@article{li2008hybrid, title={Hybrid Decoding of Finite Geometry LDPC Codes}, author={Guangwen Li, Dashe Li, Yuling Wang, Wenyan Sun}, journal={arXiv preprint arXiv:0801.1208}, year={2008}, archivePrefix={arXiv}, eprint={0801.1208}, primaryClass={cs.IT math.IT} }
li2008hybrid
arxiv-2320
0801.1210
Increasing GP Computing Power via Volunteer Computing
<|reference_start|>Increasing GP Computing Power via Volunteer Computing: This paper describes how GP computing power can be increased via Volunteer Computing (VC) using the BOINC framework. Two experiments using the well-known GP tools Lil-gp and ECJ are performed in order to demonstrate the benefit of using VC in terms of computing power and speed-up. Finally, we present an extension of the model where any GP tool or framework can be used inside BOINC regardless of its programming language, complexity or required operating system.<|reference_end|>
arxiv
@article{gonzalez2008increasing, title={Increasing GP Computing Power via Volunteer Computing}, author={Daniel Lombrana Gonzalez and Francisco Fernandez de Vega and L. Trujillo and G. Olague and F. Chavez de la O and M. Cardenas and L. Araujo and P. Castillo and K. Sharman}, journal={arXiv preprint arXiv:0801.1210}, year={2008}, archivePrefix={arXiv}, eprint={0801.1210}, primaryClass={cs.DC} }
gonzalez2008increasing
arxiv-2321
0801.1219
DSL development based on target meta-models. Using AST transformations for automating semantic analysis in a textual DSL framework
<|reference_start|>DSL development based on target meta-models. Using AST transformations for automating semantic analysis in a textual DSL framework: This paper describes an approach to creating textual syntax for Domain-Specific Languages (DSL). We consider the target meta-model to be the main artifact and hence to be developed first. The key idea is to represent the analysis of textual syntax as a sequence of transformations. This is done by explicit operations on abstract syntax trees (ASTs), for which a simple language is proposed. The text-to-model transformation is divided into two parts: text-to-AST (developed by openArchitectureWare [1]) and AST-to-model (proposed by this paper). Our approach simplifies semantic analysis and helps to generate as much as possible.<|reference_end|>
arxiv
@article{breslav2008dsl, title={DSL development based on target meta-models. Using AST transformations for automating semantic analysis in a textual DSL framework}, author={Andrey Breslav}, journal={arXiv preprint arXiv:0801.1219}, year={2008}, archivePrefix={arXiv}, eprint={0801.1219}, primaryClass={cs.PL} }
breslav2008dsl
arxiv-2322
0801.1245
Matrix Graph Grammars
<|reference_start|>Matrix Graph Grammars: This book's objective is to develop an algebraization of graph grammars; equivalently, we study graph dynamics. From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars, for which a purely algebraic approach has not existed up to now. A Chomsky (or string) grammar is, roughly speaking, a precise description of a formal language (which in essence is a set of strings). In a more discrete-mathematical style, it can be said that graph grammars -- Matrix Graph Grammars in particular -- study the dynamics of graphs. Ideally, this algebraization would reinforce our understanding of grammars in general, providing new analysis techniques and generalizations of concepts, problems and results known so far.<|reference_end|>
arxiv
@article{velasco2008matrix, title={Matrix Graph Grammars}, author={Pedro Pablo Perez Velasco}, journal={arXiv preprint arXiv:0801.1245}, year={2008}, archivePrefix={arXiv}, eprint={0801.1245}, primaryClass={cs.DM} }
velasco2008matrix
arxiv-2323
0801.1251
Generative Unbinding of Names
<|reference_start|>Generative Unbinding of Names: This paper is concerned with the form of typed name binding used by the FreshML family of languages. Its characteristic feature is that a name binding is represented by an abstract (name,value)-pair that may only be deconstructed via the generation of fresh bound names. The paper proves a new result about what operations on names can co-exist with this construct. In FreshML the only observation one can make of names is to test whether or not they are equal. This restricted amount of observation was thought necessary to ensure that there is no observable difference between alpha-equivalent name binders. Yet from an algorithmic point of view it would be desirable to allow other operations and relations on names, such as a total ordering. This paper shows that, contrary to expectations, one may add not just ordering, but almost any relation or numerical function on names without disturbing the fundamental correctness result about this form of typed name binding (that object-level alpha-equivalence precisely corresponds to contextual equivalence at the programming meta-level), so long as one takes the state of dynamically created names into account.<|reference_end|>
arxiv
@article{pitts2008generative, title={Generative Unbinding of Names}, author={Andrew M. Pitts and Mark R. Shinwell}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 18, 2008) lmcs:916}, year={2008}, doi={10.2168/LMCS-4(1:4)2008}, archivePrefix={arXiv}, eprint={0801.1251}, primaryClass={cs.PL cs.LO} }
pitts2008generative
arxiv-2324
0801.1253
Linear Logic by Levels and Bounded Time Complexity
<|reference_start|>Linear Logic by Levels and Bounded Time Complexity: We give a new characterization of elementary and deterministic polynomial time computation in linear logic through the proofs-as-programs correspondence. Girard's seminal results, concerning elementary and light linear logic, achieve this characterization by enforcing a stratification principle on proofs, using the notion of depth in proof nets. Here, we propose a more general form of stratification, based on inducing levels in proof nets by means of indexes, which allows us to extend Girard's systems while keeping the same complexity properties. In particular, it turns out that Girard's systems can be recovered by forcing depth and level to coincide. A consequence of the higher flexibility of levels with respect to depth is the absence of boxes for handling the paragraph modality. We use this fact to propose a variant of our polytime system in which the paragraph modality is only allowed on atoms, and which may thus serve as a basis for developing lambda-calculus type assignment systems with more efficient typing algorithms than existing ones.<|reference_end|>
arxiv
@article{baillot2008linear, title={Linear Logic by Levels and Bounded Time Complexity}, author={Patrick Baillot and Damiano Mazza}, journal={Theoretical Computer Science 411 (2010) 470-503}, year={2008}, doi={10.1016/j.tcs.2009.09.015}, archivePrefix={arXiv}, eprint={0801.1253}, primaryClass={cs.LO cs.CC} }
baillot2008linear
arxiv-2325
0801.1275
Le terme et le concept : fondements d'une ontoterminologie
<|reference_start|>Le terme et le concept : fondements d'une ontoterminologie: Most definitions of ontology, viewed as a "specification of a conceptualization", agree on the fact that if an ontology can take different forms, it necessarily includes a vocabulary of terms and some specification of their meaning in relation to the domain's conceptualization. And as domain knowledge is mainly conveyed through scientific and technical texts, we can hope to extract some useful information from them for building ontology. But is it as simple as this? In this article we shall see that the lexical structure, i.e. the network of words linked by linguistic relationships, does not necessarily match the domain conceptualization. We have to bear in mind that writing documents is the concern of textual linguistics, of which one of the principles is the incompleteness of text, whereas building ontology - viewed as task-independent knowledge - is concerned with conceptualization based on formal and not natural languages. Nevertheless, the famous Sapir and Whorf hypothesis, concerning the interdependence of thought and language, is also applicable to formal languages. This means that the way an ontology is built and a concept is defined depends directly on the formal language which is used; and the results will not be the same. The introduction of the notion of ontoterminology allows to take into account epistemological principles for formal ontology building.<|reference_end|>
arxiv
@article{roche2008le, title={Le terme et le concept : fondements d'une ontoterminologie}, author={Christophe Roche (LISTIC)}, journal={Dans TOTh 2007 : Terminologie et Ontologie : Th\'eories et Applications - TOTh 2007 : Terminologie et Ontologie : Th\'eories et Applications, Annecy : France (2007)}, year={2008}, archivePrefix={arXiv}, eprint={0801.1275}, primaryClass={cs.AI} }
roche2008le
arxiv-2326
0801.1276
On the guaranteed error correction capability of LDPC codes
<|reference_start|>On the guaranteed error correction capability of LDPC codes: We investigate the relation between the girth and the guaranteed error correction capability of $\gamma$-left regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms. A lower bound on the number of variable nodes which expand by a factor of at least $3 \gamma/4$ is found based on the Moore bound. An upper bound on the guaranteed correction capability is established by studying the sizes of smallest possible trapping sets.<|reference_end|>
arxiv
@article{chilappagari2008on, title={On the guaranteed error correction capability of LDPC codes}, author={Shashi Kiran Chilappagari, Dung Viet Nguyen, Bane Vasic, Michael Marcellin}, journal={arXiv preprint arXiv:0801.1276}, year={2008}, doi={10.1109/ISIT.2008.4595023}, archivePrefix={arXiv}, eprint={0801.1276}, primaryClass={cs.IT math.IT} }
chilappagari2008on
arxiv-2327
0801.1282
LDPC Codes Which Can Correct Three Errors Under Iterative Decoding
<|reference_start|>LDPC Codes Which Can Correct Three Errors Under Iterative Decoding: In this paper, we provide necessary and sufficient conditions for a column-weight-three LDPC code to correct three errors when decoded using Gallager A algorithm. We then provide a construction technique which results in a code satisfying the above conditions. We also provide numerical assessment of code performance via simulation results.<|reference_end|>
arxiv
@article{chilappagari2008ldpc, title={LDPC Codes Which Can Correct Three Errors Under Iterative Decoding}, author={Shashi Kiran Chilappagari, Anantha Raman Krishnan, Bane Vasic}, journal={arXiv preprint arXiv:0801.1282}, year={2008}, doi={10.1109/ITW.2008.4578696}, archivePrefix={arXiv}, eprint={0801.1282}, primaryClass={cs.IT math.IT} }
chilappagari2008ldpc
arxiv-2328
0801.1300
Almost 2-SAT is Fixed-Parameter Tractable
<|reference_start|>Almost 2-SAT is Fixed-Parameter Tractable: We consider the following problem. Given a 2-CNF formula, is it possible to remove at most $k$ clauses so that the resulting 2-CNF formula is satisfiable? This problem is known to different research communities in Theoretical Computer Science under the names 'Almost 2-SAT', 'All-but-$k$ 2-SAT', '2-CNF deletion', '2-SAT deletion'. The status of fixed-parameter tractability of this problem is a long-standing open question in the area of Parameterized Complexity. We resolve this open question by proposing an algorithm which solves this problem in $O(15^k*k*m^3)$ and thus we show that this problem is fixed-parameter tractable.<|reference_end|>
arxiv
@article{razgon2008almost, title={Almost 2-SAT is Fixed-Parameter Tractable}, author={Igor Razgon and Barry O'Sullivan}, journal={arXiv preprint arXiv:0801.1300}, year={2008}, archivePrefix={arXiv}, eprint={0801.1300}, primaryClass={cs.DS cs.CG cs.LO} }
razgon2008almost
arxiv-2329
0801.1306
Capacity Bounds for the Gaussian Interference Channel
<|reference_start|>Capacity Bounds for the Gaussian Interference Channel: The capacity region of the two-user Gaussian Interference Channel (IC) is studied. Three classes of channels are considered: weak, one-sided, and mixed Gaussian IC. For the weak Gaussian IC, a new outer bound on the capacity region is obtained that outperforms previously known outer bounds. The sum capacity for a certain range of channel parameters is derived. For this range, it is proved that using Gaussian codebooks and treating interference as noise is optimal. It is shown that when Gaussian codebooks are used, the full Han-Kobayashi achievable rate region can be obtained by using the naive Han-Kobayashi achievable scheme over three frequency bands (equivalently, three subspaces). For the one-sided Gaussian IC, an alternative proof for the Sato's outer bound is presented. We derive the full Han-Kobayashi achievable rate region when Gaussian codebooks are utilized. For the mixed Gaussian IC, a new outer bound is obtained that outperforms previously known outer bounds. For this case, the sum capacity for the entire range of channel parameters is derived. It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of channel parameters.<|reference_end|>
arxiv
@article{motahari2008capacity, title={Capacity Bounds for the Gaussian Interference Channel}, author={Abolfazl S. Motahari, Amir K. Khandani}, journal={arXiv preprint arXiv:0801.1306}, year={2008}, archivePrefix={arXiv}, eprint={0801.1306}, primaryClass={cs.IT math.IT} }
motahari2008capacity
arxiv-2330
0801.1307
Alternating Hierarchies for Time-Space Tradeoffs
<|reference_start|>Alternating Hierarchies for Time-Space Tradeoffs: Nepomnjascii's Theorem states that for all 0 <= \epsilon < 1 and k > 0 the class of languages recognized in nondeterministic time n^k and space n^\epsilon, NTISP[n^k, n^\epsilon], is contained in the linear time hierarchy. By considering restrictions on the size of the universal quantifiers in the linear time hierarchy, this paper refines Nepomnjascii's result to give a sub-hierarchy, Eu-LinH, of the linear time hierarchy that is contained in NP and which contains NTISP[n^k, n^\epsilon]. Hence, Eu-LinH contains NL and SC. This paper investigates basic structural properties of Eu-LinH. Then the relationships between Eu-LinH and the classes NL, SC, and NP are considered to see if they can shed light on the NL = NP or SC = NP questions. Finally, a new hierarchy, zeta-LinH, is defined to reduce the space requirements needed for the upper bound on Eu-LinH.<|reference_end|>
arxiv
@article{pollett2008alternating, title={Alternating Hierarchies for Time-Space Tradeoffs}, author={Chris Pollett and Eric Miles}, journal={arXiv preprint arXiv:0801.1307}, year={2008}, archivePrefix={arXiv}, eprint={0801.1307}, primaryClass={cs.CC cs.LO} }
pollett2008alternating
arxiv-2331
0801.1336
Stream Computing
<|reference_start|>Stream Computing: Stream computing is the use of multiple autonomic and parallel modules together with integrative processors at a higher level of abstraction to embody "intelligent" processing. The biological basis of this computing is sketched and the matter of learning is examined.<|reference_end|>
arxiv
@article{kak2008stream, title={Stream Computing}, author={Subhash Kak}, journal={arXiv preprint arXiv:0801.1336}, year={2008}, archivePrefix={arXiv}, eprint={0801.1336}, primaryClass={cs.AI} }
kak2008stream
arxiv-2332
0801.1341
Factorization in categories of systems of linear partial differential equations
<|reference_start|>Factorization in categories of systems of linear partial differential equations: We start with elementary algebraic theory of factorization of linear ordinary differential equations developed in the period 1880-1930. After exposing these classical results we sketch more sophisticated algorithmic approaches developed in the last 20 years. The main part of this paper is devoted to modern generalizations of the notion of factorization to the case of systems of linear partial differential equations and their relation with explicit solvability of nonlinear partial differential equations based on some constructions from the theory of abelian categories.<|reference_end|>
arxiv
@article{tsarev2008factorization, title={Factorization in categories of systems of linear partial differential equations}, author={S.P. Tsarev}, journal={arXiv preprint arXiv:0801.1341}, year={2008}, archivePrefix={arXiv}, eprint={0801.1341}, primaryClass={cs.SC} }
tsarev2008factorization
arxiv-2333
0801.1362
A new key exchange cryptosystem
<|reference_start|>A new key exchange cryptosystem: In this paper, we will present a new key exchange cryptosystem based on linear algebra, which takes fewer operations but is weaker in security than Diffie-Hellman's.<|reference_end|>
arxiv
@article{li2008a, title={A new key exchange cryptosystem}, author={An-Ping Li}, journal={arXiv preprint arXiv:0801.1362}, year={2008}, archivePrefix={arXiv}, eprint={0801.1362}, primaryClass={cs.CR} }
li2008a
arxiv-2334
0801.1364
An Algorithm to Compute the Nearest Point in the Lattice $A_n^*$
<|reference_start|>An Algorithm to Compute the Nearest Point in the Lattice $A_n^*$: The lattice $A_n^*$ is an important lattice because of its covering properties in low dimensions. Clarkson \cite{Clarkson1999:Anstar} described an algorithm to compute the nearest lattice point in $A_n^*$ that requires $O(n\log{n})$ arithmetic operations. In this paper, we describe a new algorithm. While the complexity is still $O(n\log{n})$, it is significantly simpler to describe and verify. In practice, we find that the new algorithm also runs faster.<|reference_end|>
arxiv
@article{mckilliam2008an, title={An Algorithm to Compute the Nearest Point in the Lattice $A_{n}^*$}, author={Robby G. McKilliam, I. Vaughan L. Clarkson, Barry G. Quinn}, journal={IEEE Transactions on Information Theory, Vol. 54, No. 9, pp 4378-4381, Sept. 2008}, year={2008}, doi={10.1109/TIT.2008.928280}, archivePrefix={arXiv}, eprint={0801.1364}, primaryClass={cs.IT math.IT} }
mckilliam2008an
arxiv-2335
0801.1410
Two graph isomorphism polytopes
<|reference_start|>Two graph isomorphism polytopes: The convex hull $\psi_{n,n}$ of certain $(n!)^2$ tensors was considered recently in connection with graph isomorphism. We consider the convex hull $\psi_n$ of the $n!$ diagonals among these tensors. We show: 1. The polytope $\psi_n$ is a face of $\psi_{n,n}$. 2. Deciding if a graph $G$ has a subgraph isomorphic to $H$ reduces to optimization over $\psi_n$. 3. Optimization over $\psi_n$ reduces to optimization over $\psi_{n,n}$. In particular, this implies that the subgraph isomorphism problem reduces to optimization over $\psi_{n,n}$.<|reference_end|>
arxiv
@article{onn2008two, title={Two graph isomorphism polytopes}, author={Shmuel Onn}, journal={Discrete Mathematics, 309:2934--2936, 2009}, year={2008}, archivePrefix={arXiv}, eprint={0801.1410}, primaryClass={cs.CC cs.DM math.CO math.OC} }
onn2008two
arxiv-2336
0801.1415
The emerging field of language dynamics
<|reference_start|>The emerging field of language dynamics: A simple review by a linguist, citing many articles by physicists: Quantitative methods, agent-based computer simulations, language dynamics, language typology, historical linguistics<|reference_end|>
arxiv
@article{wichmann2008the, title={The emerging field of language dynamics}, author={S. Wichmann}, journal={arXiv preprint arXiv:0801.1415}, year={2008}, archivePrefix={arXiv}, eprint={0801.1415}, primaryClass={cs.CL physics.soc-ph} }
wichmann2008the
arxiv-2337
0801.1416
Fast Integer Multiplication using Modular Arithmetic
<|reference_start|>Fast Integer Multiplication using Modular Arithmetic: We give an $O(N\cdot \log N\cdot 2^{O(\log^*N)})$ algorithm for multiplying two $N$-bit integers that improves the $O(N\cdot \log N\cdot \log\log N)$ algorithm by Sch\"{o}nhage-Strassen. Both these algorithms use modular arithmetic. Recently, F\"{u}rer gave an $O(N\cdot \log N\cdot 2^{O(\log^*N)})$ algorithm which however uses arithmetic over complex numbers as opposed to modular arithmetic. In this paper, we use multivariate polynomial multiplication along with ideas from F\"{u}rer's algorithm to achieve this improvement in the modular setting. Our algorithm can also be viewed as a $p$-adic version of F\"{u}rer's algorithm. Thus, we show that the two seemingly different approaches to integer multiplication, modular and complex arithmetic, are similar.<|reference_end|>
arxiv
@article{de2008fast, title={Fast Integer Multiplication using Modular Arithmetic}, author={Anindya De, Piyush P Kurur, Chandan Saha and Ramprasad Saptharishi}, journal={arXiv preprint arXiv:0801.1416}, year={2008}, archivePrefix={arXiv}, eprint={0801.1416}, primaryClass={cs.SC cs.DS} }
de2008fast
arxiv-2338
0801.1419
Core Persistence in Peer-to-Peer Systems: Relating Size to Lifetime
<|reference_start|>Core Persistence in Peer-to-Peer Systems: Relating Size to Lifetime: Distributed systems are now both very large and highly dynamic. Peer to peer overlay networks have been proved efficient to cope with this new deal that traditional approaches can no longer accommodate. While the challenge of organizing peers in an overlay network has generated a lot of interest leading to a large number of solutions, maintaining critical data in such a network remains an open issue. In this paper, we are interested in defining the portion of nodes and frequency one has to probe, given the churn observed in the system, in order to achieve a given probability of maintaining the persistence of some critical data. More specifically, we provide a clear result relating the size and the frequency of the probing set along with its proof as well as an analysis of the way of leveraging such an information in a large scale dynamic distributed system.<|reference_end|>
arxiv
@article{gramoli2008core, title={Core Persistence in Peer-to-Peer Systems: Relating Size to Lifetime}, author={Vincent Gramoli (IRISA), Anne-Marie Kermarrec (IRISA), Achour Mostefaoui (IRISA), Michel Raynal (IRISA), Bruno Sericola (IRISA)}, journal={Dans Proceedings of the Workshop on Reliability in Decentralized Distributed Systems 4278 (2006) 1470--1479}, year={2008}, archivePrefix={arXiv}, eprint={0801.1419}, primaryClass={cs.DC} }
gramoli2008core
arxiv-2339
0801.1500
Toward the Graphics Turing Scale on a Blue Gene Supercomputer
<|reference_start|>Toward the Graphics Turing Scale on a Blue Gene Supercomputer: We investigate raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822 times speedup over a Pentium IV on a 6144 processor Blue Gene/L. We measure the computational performance as a function of number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large number of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations which is less than three percent of the capacity of the world's largest Blue Gene computer.<|reference_end|>
arxiv
@article{mcguigan2008toward, title={Toward the Graphics Turing Scale on a Blue Gene Supercomputer}, author={Michael McGuigan}, journal={arXiv preprint arXiv:0801.1500}, year={2008}, archivePrefix={arXiv}, eprint={0801.1500}, primaryClass={cs.GR} }
mcguigan2008toward
arxiv-2340
0801.1514
Teaching spreadsheet development using peer audit and self-audit methods for reducing error
<|reference_start|>Teaching spreadsheet development using peer audit and self-audit methods for reducing error: Recent research has highlighted the high incidence of errors in spreadsheet models used in industry. In an attempt to reduce the incidence of such errors, a teaching approach has been devised which aids students to reduce their likelihood of making common errors during development. The approach comprises spreadsheet checking methods based on the commonly accepted educational paradigms of peer assessment and self-assessment. However, these paradigms are here based upon practical techniques commonly used by the internal audit function such as peer audit and control and risk self-assessment. The result of this symbiosis between educational assessment and professional audit is a method that educates students in a set of structured, transferable skills for spreadsheet error-checking which are useful for increasing error-awareness in the classroom and for reducing business risk in the workplace.<|reference_end|>
arxiv
@article{chadwick2008teaching, title={Teaching spreadsheet development using peer audit and self-audit methods for reducing error}, author={David Chadwick, Rodney E. Sue}, journal={European Spreadsheet Risks Int. Grp. 2001 95-105 ISBN:1 86166 179 7}, year={2008}, archivePrefix={arXiv}, eprint={0801.1514}, primaryClass={cs.CY} }
chadwick2008teaching
arxiv-2341
0801.1516
An Evaluation of a Structured Spreadsheet Development Methodology
<|reference_start|>An Evaluation of a Structured Spreadsheet Development Methodology: This paper presents the results of an empirical evaluation of the quality of a structured methodology for the development of spreadsheet models, proposed in numerous previous papers by Rajalingham K, Knight B and Chadwick D et al. This paper also describes an improved version of their methodology, supported by appropriate examples. The principal objective of a structured and disciplined methodology for the construction of spreadsheet models is to reduce the occurrence of user-generated errors in the models. The evaluation of the effectiveness of the methodology has been carried out based on a number of real-life experiments. The results of these experiments demonstrate the methodology's potential for improved integrity control and enhanced comprehensibility of spreadsheet models.<|reference_end|>
arxiv
@article{rajalingham2008an, title={An Evaluation of a Structured Spreadsheet Development Methodology}, author={Kamalasen Rajalingham, David Chadwick, Brian Knight}, journal={European Spreadsheet Risks Int. Grp. 2001 39-59 ISBN:1 86166 179}, year={2008}, archivePrefix={arXiv}, eprint={0801.1516}, primaryClass={cs.CY} }
rajalingham2008an
arxiv-2342
0801.1573
Taking a shower in Youth Hostels: risks and delights of heterogeneity
<|reference_start|>Taking a shower in Youth Hostels: risks and delights of heterogeneity: Tuning one's shower in some hotels may turn into a challenging coordination game with imperfect information. The temperature sensitivity increases with the number of agents, making the problem possibly unlearnable. Because there is in practice a finite number of possible tap positions, identical agents are unlikely to reach even approximately their favorite water temperature. We show that a population of agents with homogeneous strategies is evolutionary unstable, which gives insights into the emergence of heterogeneity, the latter being tempting but risky.<|reference_end|>
arxiv
@article{matzke2008taking, title={Taking a shower in Youth Hostels: risks and delights of heterogeneity}, author={Christina Matzke and Damien Challet}, journal={arXiv preprint arXiv:0801.1573}, year={2008}, doi={10.1103/PhysRevE.84.016107}, archivePrefix={arXiv}, eprint={0801.1573}, primaryClass={physics.soc-ph cs.GT} }
matzke2008taking
arxiv-2343
0801.1600
A Most General Edge Elimination Polynomial - Thickening of Edges
<|reference_start|>A Most General Edge Elimination Polynomial - Thickening of Edges: We consider a graph polynomial \xi(G;x,y,z) introduced by Averbouch, Godlin, and Makowsky (2007). This graph polynomial simultaneously generalizes the Tutte polynomial as well as a bivariate chromatic polynomial defined by Dohmen, Poenitz and Tittmann (2003). We derive an identity which relates the graph polynomial of a thicked graph (i.e. a graph with each edge replaced by k copies of it) to the graph polynomial of the original graph. As a consequence, we observe that at every point (x,y,z), except for points lying within some set of dimension 2, evaluating \xi is #P-hard.<|reference_end|>
arxiv
@article{hoffmann2008a, title={A Most General Edge Elimination Polynomial - Thickening of Edges}, author={Christian Hoffmann}, journal={arXiv preprint arXiv:0801.1600}, year={2008}, archivePrefix={arXiv}, eprint={0801.1600}, primaryClass={math.CO cs.CC} }
hoffmann2008a
arxiv-2344
0801.1630
Computational Solutions for Today's Navy
<|reference_start|>Computational Solutions for Today's Navy: New methods are being employed to meet the Navy's changing software-development environment.<|reference_end|>
arxiv
@article{bentrem2008computational, title={Computational Solutions for Today's Navy}, author={Frank W. Bentrem, John T. Sample, and Michael M. Harris (Naval Research Laboratory)}, journal={Scientific Computing, vol. 25, no. 2, pp. 30-32 (March 2008)}, year={2008}, archivePrefix={arXiv}, eprint={0801.1630}, primaryClass={cs.MA cs.GL} }
bentrem2008computational
arxiv-2345
0801.1655
Episturmian words: a survey
<|reference_start|>Episturmian words: a survey: In this paper, we survey the rich theory of infinite episturmian words which generalize to any finite alphabet, in a rather resembling way, the well-known family of Sturmian words on two letters. After recalling definitions and basic properties, we consider episturmian morphisms that allow for a deeper study of these words. Some properties of factors are described, including factor complexity, palindromes, fractional powers, frequencies, and return words. We also consider lexicographical properties of episturmian words, as well as their connection to the balance property, and related notions such as finite episturmian words, Arnoux-Rauzy sequences, and "episkew words" that generalize the skew words of Morse and Hedlund.<|reference_end|>
arxiv
@article{glen2008episturmian, title={Episturmian words: a survey}, author={Amy Glen, Jacques Justin}, journal={RAIRO - Theoretical Informatics and Applications 43 (2009) 402-433}, year={2008}, doi={10.1051/ita/2009003}, archivePrefix={arXiv}, eprint={0801.1655}, primaryClass={math.CO cs.DM} }
glen2008episturmian
arxiv-2346
0801.1656
Palindromic Richness
<|reference_start|>Palindromic Richness: In this paper, we study combinatorial and structural properties of a new class of finite and infinite words that are 'rich' in palindromes in the utmost sense. A characteristic property of so-called "rich words" is that all complete returns to any palindromic factor are themselves palindromes. These words encompass the well-known episturmian words, originally introduced by the second author together with X. Droubay and G. Pirillo in 2001. Other examples of rich words have appeared in many different contexts. Here we present the first unified approach to the study of this intriguing family of words. Amongst our main results, we give an explicit description of the periodic rich infinite words and show that the recurrent balanced rich infinite words coincide with the balanced episturmian words. We also consider two wider classes of infinite words, namely "weakly rich words" and almost rich words (both strictly contain all rich words, but neither one is contained in the other). In particular, we classify all recurrent balanced weakly rich words. As a consequence, we show that any such word on at least three letters is necessarily episturmian; hence weakly rich words obey Fraenkel's conjecture. Likewise, we prove that a certain class of almost rich words obeys Fraenkel's conjecture by showing that the recurrent balanced ones are episturmian or contain at least two distinct letters with the same frequency. Lastly, we study the action of morphisms on (almost) rich words with particular interest in morphisms that preserve (almost) richness. Such morphisms belong to the class of "P-morphisms" that was introduced by A. Hof, O. Knill, and B. Simon in 1995.<|reference_end|>
arxiv
@article{glen2008palindromic, title={Palindromic Richness}, author={Amy Glen, Jacques Justin, Steve Widmer, Luca Q. Zamboni}, journal={European Journal of Combinatorics 30 (2009) 510-531}, year={2008}, doi={10.1016/j.ejc.2008.04.006}, archivePrefix={arXiv}, eprint={0801.1656}, primaryClass={math.CO cs.DM} }
glen2008palindromic
arxiv-2347
0801.1658
Computational approach to the emergence and evolution of language - evolutionary naming game model
<|reference_start|>Computational approach to the emergence and evolution of language - evolutionary naming game model: Computational modelling with multi-agent systems is becoming an important technique of studying language evolution. We present a brief introduction into this rapidly developing field, as well as our own contributions that include an analysis of the evolutionary naming-game model. In this model communicating agents, that try to establish a common vocabulary, are equipped with an evolutionarily selected learning ability. Such a coupling of biological and linguistic ingredients results in an abrupt transition: upon a small change of the model control parameter a poorly communicating group of linguistically unskilled agents transforms into almost perfectly communicating group with large learning abilities. Genetic imprinting of the learning abilities proceeds via Baldwin effect: initially unskilled communicating agents learn a language and that creates a niche in which there is an evolutionary pressure for the increase of learning ability. Under the assumption that communication intensity increases continuously with finite speed, the transition is split into several transition-like changes. It shows that the speed of cultural changes, that sets an additional characteristic timescale, might be yet another factor affecting the evolution of language. In our opinion, this model shows that linguistic and biological processes have a strong influence on each other and this effect certainly has contributed to an explosive development of our species.<|reference_end|>
arxiv
@article{lipowski2008computational, title={Computational approach to the emergence and evolution of language - evolutionary naming game model}, author={Adam Lipowski, Dorota Lipowska}, journal={arXiv preprint arXiv:0801.1658}, year={2008}, archivePrefix={arXiv}, eprint={0801.1658}, primaryClass={physics.soc-ph cs.CL cs.MA} }
lipowski2008computational
arxiv-2348
0801.1676
Analyzing the Topology Types arising in a Family of Algebraic Curves Depending On Two Parameters
<|reference_start|>Analyzing the Topology Types arising in a Family of Algebraic Curves Depending On Two Parameters: Given the implicit equation $F(x,y,t,s)$ of a family of algebraic plane curves depending on the parameters $t,s$, we provide an algorithm for studying the topology types arising in the family. For this purpose, the algorithm computes a finite partition of the parameter space so that the topology type of the family stays invariant over each element of the partition. The ideas contained in the paper can be seen as a generalization of the ideas in \cite{JGRS}, where the problem is solved for families of algebraic curves depending on one parameter, to the two-parameters case.<|reference_end|>
arxiv
@article{alcazar2008analyzing, title={Analyzing the Topology Types arising in a Family of Algebraic Curves Depending On Two Parameters}, author={Juan Gerardo Alcazar}, journal={arXiv preprint arXiv:0801.1676}, year={2008}, archivePrefix={arXiv}, eprint={0801.1676}, primaryClass={cs.SC} }
alcazar2008analyzing
arxiv-2349
0801.1687
Synthesis of Large Dynamic Concurrent Programs from Dynamic Specifications
<|reference_start|>Synthesis of Large Dynamic Concurrent Programs from Dynamic Specifications: We present a tractable method for synthesizing arbitrarily large concurrent programs, for a shared memory model with common hardware-available primitives such as atomic registers, compare-and-swap, load-linked/store conditional, etc. The programs we synthesize are dynamic: new processes can be created and added at run-time, and so our programs are not finite-state, in general. Nevertheless, we successfully exploit automatic synthesis and model-checking methods based on propositional temporal logic. Our method is algorithmically efficient, with complexity polynomial in the number of component processes (of the program) that are ``alive'' at any time. Our method does not explicitly construct the automata-theoretic product of all processes that are alive, thereby avoiding \intr{state explosion}. Instead, for each pair of processes which interact, our method constructs an automata-theoretic product (\intr{pair-machine}) which embodies all the possible interactions of these two processes. From each pair-machine, we can synthesize a correct \intr{pair-program} which coordinates the two involved processes as needed. We allow such pair-programs to be added dynamically at run-time. They are then ``composed conjunctively'' with the currently alive pair-programs to re-synthesize the program as it results after addition of the new pair-program. We are thus able to add new behaviors, which result in new properties being satisfied, at run-time. We establish a ``large model'' theorem which shows that the synthesized large program inherits correctness properties from the pair-programs.<|reference_end|>
arxiv
@article{attie2008synthesis, title={Synthesis of Large Dynamic Concurrent Programs from Dynamic Specifications}, author={Paul C. Attie}, journal={arXiv preprint arXiv:0801.1687}, year={2008}, archivePrefix={arXiv}, eprint={0801.1687}, primaryClass={cs.LO} }
attie2008synthesis
arxiv-2350
0801.1703
The Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions
<|reference_start|>The Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions: We characterize the rate-distortion function for zero-mean stationary Gaussian sources under the MSE fidelity criterion and subject to the additional constraint that the distortion is uncorrelated to the input. The solution is given by two equations coupled through a single scalar parameter. This has a structure similar to the well known water-filling solution obtained without the uncorrelated distortion restriction. Our results fully characterize the unique statistics of the optimal distortion. We also show that, for all positive distortions, the minimum achievable rate subject to the uncorrelation constraint is strictly larger than that given by the un-constrained rate-distortion function. This gap increases with the distortion and tends to infinity and zero, respectively, as the distortion tends to zero and infinity.<|reference_end|>
arxiv
@article{derpich2008the, title={The Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions}, author={Milan S. Derpich, Jan Ostergaard and Graham C. Goodwin}, journal={arXiv preprint arXiv:0801.1703}, year={2008}, archivePrefix={arXiv}, eprint={0801.1703}, primaryClass={cs.IT math.IT} }
derpich2008the
arxiv-2351
0801.1715
On Breaching Enterprise Data Privacy Through Adversarial Information Fusion
<|reference_start|>On Breaching Enterprise Data Privacy Through Adversarial Information Fusion: Data privacy is one of the key challenges faced by enterprises today. Anonymization techniques address this problem by sanitizing sensitive data such that individual privacy is preserved while allowing enterprises to maintain and share sensitive data. However, existing work on this problem makes inherent assumptions about the data that are impractical in day-to-day enterprise data management scenarios. Further, application of existing anonymization schemes on enterprise data could lead to adversarial attacks in which an intruder could use information fusion techniques to inflict a privacy breach. In this paper, we shed light on the shortcomings of current anonymization schemes in the context of enterprise data. We define and experimentally demonstrate a Web-based Information-Fusion Attack on anonymized enterprise data. We formulate the problem of Fusion Resilient Enterprise Data Anonymization and propose a prototype solution to address this problem.<|reference_end|>
arxiv
@article{ganta2008on, title={On Breaching Enterprise Data Privacy Through Adversarial Information Fusion}, author={Srivatsava Ranjit Ganta, Raj Acharya}, journal={arXiv preprint arXiv:0801.1715}, year={2008}, archivePrefix={arXiv}, eprint={0801.1715}, primaryClass={cs.DB cs.CR cs.OH} }
ganta2008on
arxiv-2352
0801.1718
Achieving the Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions
<|reference_start|>Achieving the Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions: We prove achievability of the recently characterized quadratic Gaussian rate-distortion function (RDF) subject to the constraint that the distortion is uncorrelated to the source. This result is based on shaped dithered lattice quantization in the limit as the lattice dimension tends to infinity and holds for all positive distortions. It turns out that this uncorrelated distortion RDF can be realized causally. This feature, which stands in contrast to Shannon's RDF, is illustrated by causal transform coding. Moreover, we prove that by using feedback noise shaping the uncorrelated distortion RDF can be achieved causally and with memoryless entropy coding. Whilst achievability relies upon infinite dimensional quantizers, we prove that the rate loss incurred in the finite dimensional case can be upper-bounded by the space filling loss of the quantizer and, thus, is at most 0.254 bit/dimension.<|reference_end|>
arxiv
@article{derpich2008achieving, title={Achieving the Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions}, author={Milan S. Derpich, Jan Ostergaard and Daniel E. Quevedo}, journal={arXiv preprint arXiv:0801.1718}, year={2008}, archivePrefix={arXiv}, eprint={0801.1718}, primaryClass={cs.IT math.IT} }
derpich2008achieving
arxiv-2353
0801.1736
A Central Limit Theorem for the SNR at the Wiener Filter Output for Large Dimensional Signals
<|reference_start|>A Central Limit Theorem for the SNR at the Wiener Filter Output for Large Dimensional Signals: Consider the quadratic form $\beta = {\bf y}^* ({\bf YY}^* + \rho {\bf I})^{-1} {\bf y}$ where $\rho$ is a positive number, where ${\bf y}$ is a random vector and ${\bf Y}$ is a $N \times K$ random matrix both having independent elements with different variances, and where ${\bf y}$ and ${\bf Y}$ are independent. Such quadratic forms represent the Signal to Noise Ratio at the output of the linear Wiener receiver for multi-dimensional signals frequently encountered in wireless communications and in array processing. Using well known results of Random Matrix Theory, the quadratic form $\beta$ can be approximated with a known deterministic real number $\bar\beta_K$ in the asymptotic regime where $K\to\infty$ and $K/N \to \alpha > 0$. This paper addresses the problem of convergence of $\beta$. More specifically, it is shown here that $\sqrt{K}(\beta - \bar\beta_K)$ behaves for large $K$ like a Gaussian random variable whose variance is provided.<|reference_end|>
arxiv
@article{kammoun2008a, title={A Central Limit Theorem for the SNR at the Wiener Filter Output for Large Dimensional Signals}, author={Abla Kammoun, Malika Kharouf, Walid Hachem, Jamal Najim}, journal={arXiv preprint arXiv:0801.1736}, year={2008}, archivePrefix={arXiv}, eprint={0801.1736}, primaryClass={cs.IT math.IT} }
kammoun2008a
arxiv-2354
0801.1737
Extensions to Network Flow Interdiction on Planar Graphs
<|reference_start|>Extensions to Network Flow Interdiction on Planar Graphs: Network flow interdiction analysis studies by how much the value of a maximum flow in a network can be diminished by removing components of the network constrained to some budget. Although this problem is strongly NP-complete on general networks, pseudo-polynomial algorithms were found for planar networks with a single source and a single sink and without the possibility to remove vertices. In this work we introduce pseudo-polynomial algorithms which overcome some of the restrictions of previous methods. We propose a planarity-preserving transformation that allows one to incorporate vertex removals and vertex capacities in pseudo-polynomial interdiction algorithms for planar graphs. Additionally, a pseudo-polynomial algorithm is introduced for the problem of determining the minimal interdiction budget which is at least needed to make it impossible to satisfy the demand of all sink nodes, on planar networks with multiple sources and sinks satisfying that the sum of the supplies at the source nodes equals the sum of the demands at the sink nodes. Furthermore we show that the k-densest subgraph problem on planar graphs can be reduced to a network flow interdiction problem on a planar graph with multiple sources and sinks and polynomially bounded input numbers. However, it is still not known whether either of these problems can be solved in polynomial time.<|reference_end|>
arxiv
@article{zenklusen2008extensions, title={Extensions to Network Flow Interdiction on Planar Graphs}, author={Rico Zenklusen}, journal={arXiv preprint arXiv:0801.1737}, year={2008}, archivePrefix={arXiv}, eprint={0801.1737}, primaryClass={cs.DM} }
zenklusen2008extensions
arxiv-2355
0801.1766
A Family of Counter Examples to an Approach to Graph Isomorphism
<|reference_start|>A Family of Counter Examples to an Approach to Graph Isomorphism: We give a family of counter examples showing that the two sequences of polytopes $\Phi_{n,n}$ and $\Psi_{n,n}$ are different. These polytopes were defined recently by S. Friedland in an attempt at a polynomial time algorithm for graph isomorphism.<|reference_end|>
arxiv
@article{cai2008a, title={A Family of Counter Examples to an Approach to Graph Isomorphism}, author={Jin-Yi Cai, Pinyan Lu and Mingji Xia}, journal={arXiv preprint arXiv:0801.1766}, year={2008}, archivePrefix={arXiv}, eprint={0801.1766}, primaryClass={cs.CC cs.DM} }
cai2008a
arxiv-2356
0801.1772
Bi-criteria Pipeline Mappings for Parallel Image Processing
<|reference_start|>Bi-criteria Pipeline Mappings for Parallel Image Processing: Mapping workflow applications onto parallel platforms is a challenging problem, even for simple application patterns such as pipeline graphs. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination). Typical applications include digital image processing, where images are processed in steady-state mode. In this paper, we study the mapping of a particular image processing application, the JPEG encoding. Mapping pipelined JPEG encoding onto parallel platforms is useful for instance for encoding Motion JPEG images. As the bi-criteria mapping problem is NP-complete, we concentrate on the evaluation and performance of polynomial heuristics.<|reference_end|>
arxiv
@article{benoit2008bi-criteria, title={Bi-criteria Pipeline Mappings for Parallel Image Processing}, author={Anne Benoit (INRIA Rh^one-Alpes, LIP), Harald Kosch, Veronika Rehn-Sonigo (INRIA Rh^one-Alpes, LIP), Yves Robert (INRIA Rh^one-Alpes, LIP)}, journal={arXiv preprint arXiv:0801.1772}, year={2008}, archivePrefix={arXiv}, eprint={0801.1772}, primaryClass={cs.DC} }
benoit2008bi-criteria
arxiv-2357
0801.1783
An omega-power of a context-free language which is Borel above Delta^0_omega
<|reference_start|>An omega-power of a context-free language which is Borel above Delta^0_omega: We use erasers-like basic operations on words to construct a set that is both Borel and above Delta^0_omega, built as a set V^\omega where V is a language of finite words accepted by a pushdown automaton. In particular, this gives a first example of an omega-power of a context free language which is a Borel set of infinite rank.<|reference_end|>
arxiv
@article{duparc2008an, title={An omega-power of a context-free language which is Borel above Delta^0_omega}, author={Jacques Duparc (UNIL), Olivier Finkel (LIP)}, journal={Dans Proceedings of the International Conference on Foundations of the Formal Sciences V : Infinite Games - Foundations of the Formal Sciences V : Infinite Games, November 26-29, 2004, Bonn : Allemagne}, year={2008}, archivePrefix={arXiv}, eprint={0801.1783}, primaryClass={cs.CC cs.GT cs.LO math.LO} }
duparc2008an
arxiv-2358
0801.1784
Ideal synchronizer for marked pairs in fork-join network
<|reference_start|>Ideal synchronizer for marked pairs in fork-join network: We introduce a new functional element (synchronizer for marked pairs) meant to join results of parallel processing in two-branch fork-join queueing network. Approximations for distribution of sojourn time at the synchronizer are derived along with a validity domain. Calculations are performed assuming that: arrivals to the network form a Poisson process, each branch operates like an M/M/N queueing system. It is shown that mean sojourn time at a real synchronizer node is bounded below by the value, defined by parameters of the network (which contains the synchronizer) and does not depend upon performance and particular properties of the synchronizer.<|reference_end|>
arxiv
@article{vyshenski2008ideal, title={Ideal synchronizer for marked pairs in fork-join network}, author={S. V. Vyshenski, P. V. Grigoriev, Yu. Yu. Dubenskaya}, journal={arXiv preprint arXiv:0801.1784}, year={2008}, archivePrefix={arXiv}, eprint={0801.1784}, primaryClass={cs.DM} }
vyshenski2008ideal
arxiv-2359
0801.1856
Interpretation as a factor in understanding flawed spreadsheets
<|reference_start|>Interpretation as a factor in understanding flawed spreadsheets: The spreadsheet has been used by the business community for many years and yet still raises a number of significant concerns. As educators our concern is to try to develop the students' skills both in the development of spreadsheets and in taking a critical view of their potential defects. In this paper we consider both the problems of mechanical production and the problems of translating a problem into a spreadsheet representation.<|reference_end|>
arxiv
@article{banks2008interpretation, title={Interpretation as a factor in understanding flawed spreadsheets}, author={David A. Banks, Ann Monday}, journal={Proc. European Spreadsheet Risks Int. Grp. 2002 13 21 ISBN 1 86166 182 7}, year={2008}, archivePrefix={arXiv}, eprint={0801.1856}, primaryClass={cs.CY cs.HC} }
banks2008interpretation
arxiv-2360
0801.1883
D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks
<|reference_start|>D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks: We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop Bayesian active learning technique in order to optimize the interrogating stimuli given past experiences. In particular, we consider the unknown parameters as stochastic variables and use the D-optimality principle, also known as `\emph{infomax method}', to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning network parameters at each time step. We also derive the D-optimal estimation of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free. The analytic derivation gives rise to attractive quadratic update rules.<|reference_end|>
arxiv
@article{poczos2008d-optimal, title={D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks}, author={Barnabas Poczos and Andras Lorincz}, journal={arXiv preprint arXiv:0801.1883}, year={2008}, archivePrefix={arXiv}, eprint={0801.1883}, primaryClass={cs.NE cs.IT math.IT} }
poczos2008d-optimal
arxiv-2361
0801.1925
A Framework for Designing Teleconsultation Systems in Africa
<|reference_start|>A Framework for Designing Teleconsultation Systems in Africa: All of the countries within Africa experience a serious shortage of medical professionals, particularly specialists, a problem that is only exacerbated by high emigration of doctors with better prospects overseas. As a result, those that remain in Africa, particularly those practicing in rural regions, experience a shortage of specialists and other colleagues with whom to exchange ideas. Telemedicine and teleconsultation are key areas that attempt to address this problem by leveraging remote expertise for local problems. This paper presents an overview of teleconsultation in the developing world, with a particular focus on how lessons learned apply to Africa. By teleconsultation, we are addressing non-real-time communication between health care professionals for the purposes of providing expertise and informal recommendations, without the real-time, interactive requirements typical of diagnosis and patient care, which is impractical for the vast majority of existing medical practices. From these previous experiences, we draw a set of guidelines and examine their relevance to Ghana in particular. Based on 6 weeks of needs assessment, we identify key variables that guide our framework, and then illustrate how our framework is used to inform the iterative design of a prototype system.<|reference_end|>
arxiv
@article{luk2008a, title={A Framework for Designing Teleconsultation Systems in Africa}, author={Rowena Luk, Melissa Ho, Paul M. Aoki}, journal={Proc. Int'l Conf. on Health Informatics in Africa (HELINA), Bamako, Mali, Jan. 2007, 28(1-5)}, year={2008}, archivePrefix={arXiv}, eprint={0801.1925}, primaryClass={cs.HC} }
luk2008a
arxiv-2362
0801.1927
Asynchronous Remote Medical Consultation for Ghana
<|reference_start|>Asynchronous Remote Medical Consultation for Ghana: Computer-mediated communication systems can be used to bridge the gap between doctors in underserved regions with local shortages of medical expertise and medical specialists worldwide. To this end, we describe the design of a prototype remote consultation system intended to provide the social, institutional and infrastructural context for sustained, self-organizing growth of a globally-distributed Ghanaian medical community. The design is grounded in an iterative design process that included two rounds of extended design fieldwork throughout Ghana and draws on three key design principles (social networks as a framework on which to build incentives within a self-organizing network; optional and incremental integration with existing referral mechanisms; and a weakly-connected, distributed architecture that allows for a highly interactive, responsive system despite failures in connectivity). We discuss initial experiences from an ongoing trial deployment in southern Ghana.<|reference_end|>
arxiv
@article{luk2008asynchronous, title={Asynchronous Remote Medical Consultation for Ghana}, author={Rowena Luk, Melissa Ho, Paul M. Aoki}, journal={arXiv preprint arXiv:0801.1927}, year={2008}, doi={10.1145/1357054.1357173}, archivePrefix={arXiv}, eprint={0801.1927}, primaryClass={cs.HC} }
luk2008asynchronous
arxiv-2363
0801.1979
Minimum Leaf Out-branching and Related Problems
<|reference_start|>Minimum Leaf Out-branching and Related Problems: Given a digraph $D$, the Minimum Leaf Out-Branching problem (MinLOB) is the problem of finding in $D$ an out-branching with the minimum possible number of leaves, i.e., vertices of out-degree 0. We prove that MinLOB is polynomial-time solvable for acyclic digraphs. In general, MinLOB is NP-hard and we consider three parameterizations of MinLOB. We prove that two of them are NP-complete for every value of the parameter, but the third one is fixed-parameter tractable (FPT). The FPT parameterization is as follows: given a digraph $D$ of order $n$ and a positive integral parameter $k$, check whether $D$ contains an out-branching with at most $n-k$ leaves (and find such an out-branching if it exists). We find a problem kernel of order $O(k^2)$ and construct an algorithm of running time $O(2^{O(k\log k)}+n^6),$ which is an `additive' FPT algorithm. We also consider transformations from two related problems, the minimum path covering and the maximum internal out-tree problems into MinLOB, which imply that some parameterizations of the two problems are FPT as well.<|reference_end|>
arxiv
@article{gutin2008minimum, title={Minimum Leaf Out-branching and Related Problems}, author={G. Gutin, I. Razgon, E.J. Kim}, journal={arXiv preprint arXiv:0801.1979}, year={2008}, archivePrefix={arXiv}, eprint={0801.1979}, primaryClass={cs.DS cs.DM} }
gutin2008minimum
arxiv-2364
0801.1987
A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs
<|reference_start|>A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs: We give an approximation algorithm for packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with n non-zeros, r rows, and c columns, the algorithm computes feasible primal and dual solutions whose costs are within a factor of 1+eps of the optimal cost in time O((r+c)log(n)/eps^2 + n).<|reference_end|>
arxiv
@article{koufogiannakis2008a, title={A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs}, author={Christos Koufogiannakis and Neal E. Young}, journal={Algorithmica 70(4):648-674(2014)}, year={2008}, doi={10.1007/s00453-013-9771-6}, archivePrefix={arXiv}, eprint={0801.1987}, primaryClass={cs.DS} }
koufogiannakis2008a
arxiv-2365
0801.1988
Online variants of the cross-entropy method
<|reference_start|>Online variants of the cross-entropy method: The cross-entropy method is a simple but efficient method for global optimization. In this paper we provide two online variants of the basic CEM, together with a proof of convergence.<|reference_end|>
arxiv
@article{szita2008online, title={Online variants of the cross-entropy method}, author={Istvan Szita and Andras Lorincz}, journal={arXiv preprint arXiv:0801.1988}, year={2008}, archivePrefix={arXiv}, eprint={0801.1988}, primaryClass={cs.LG} }
szita2008online
arxiv-2366
0801.2034
On the Boundedness of the Support of Optimal Input Measures for Rayleigh Fading Channels
<|reference_start|>On the Boundedness of the Support of Optimal Input Measures for Rayleigh Fading Channels: We consider transmission over a wireless multiple antenna communication system operating in a Rayleigh flat fading environment with no channel state information at the receiver and the transmitter with coherence time T=1. We show that, subject to the average power constraint, the support of the capacity achieving input distribution is bounded. Moreover, we show by a simple example concerning the identity theorem (or uniqueness theorem) from the complex analysis in several variables that some of the existing results in the field are not rigorous.<|reference_end|>
arxiv
@article{sommerfeld2008on, title={On the Boundedness of the Support of Optimal Input Measures for Rayleigh Fading Channels}, author={Jochen Sommerfeld, Igor Bjelakovic and Holger Boche}, journal={arXiv preprint arXiv:0801.2034}, year={2008}, archivePrefix={arXiv}, eprint={0801.2034}, primaryClass={cs.IT math.IT} }
sommerfeld2008on
arxiv-2367
0801.2069
Factored Value Iteration Converges
<|reference_start|>Factored Value Iteration Converges: In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computational complexity and their convergence when combined with approximate value iteration.<|reference_end|>
arxiv
@article{szita2008factored, title={Factored Value Iteration Converges}, author={Istvan Szita and Andras Lorincz}, journal={arXiv preprint arXiv:0801.2069}, year={2008}, archivePrefix={arXiv}, eprint={0801.2069}, primaryClass={cs.AI cs.LG} }
szita2008factored
arxiv-2368
0801.2088
Persistence of Wandering Intervals in Self-Similar Affine Interval Exchange Transformations
<|reference_start|>Persistence of Wandering Intervals in Self-Similar Affine Interval Exchange Transformations: In this article we prove that given a self-similar interval exchange transformation T, whose associated matrix verifies a quite general algebraic condition, there exists an affine interval exchange transformation with wandering intervals that is semi-conjugated to it. That is, in this context the existence of Denjoy counterexamples occurs very often, generalizing the result of M. Cobo in [C].<|reference_end|>
arxiv
@article{bressaud2008persistence, title={Persistence of Wandering Intervals in Self-Similar Affine Interval Exchange Transformations}, author={Xavier Bressaud, Pascal Hubert, Alejandro Maass}, journal={arXiv preprint arXiv:0801.2088}, year={2008}, archivePrefix={arXiv}, eprint={0801.2088}, primaryClass={math.DS cs.IT math.IT} }
bressaud2008persistence
arxiv-2369
0801.2092
Model for synchronizer of marked pairs in fork-join network
<|reference_start|>Model for synchronizer of marked pairs in fork-join network: We introduce a model for synchronizer of marked pairs, which is a node for joining results of parallel processing in two-branch fork-join queueing network. A distribution for number of jobs in the synchronizer is obtained. Calculations are performed assuming that: arrivals to the network form a Poisson process, each branch operates like an M/M/N queueing system. It is shown that a mean quantity of jobs in the synchronizer is bounded below by the value, defined by parameters of the network (which contains the synchronizer) and does not depend upon performance and particular properties of the synchronizer. A domain of network parameters is found, where the flow of jobs departing from the synchronizer does not manifest a statistically significant difference from the Poisson type, despite the correlation between job flows from both branches of the fork-join network.<|reference_end|>
arxiv
@article{vyshenski2008model, title={Model for synchronizer of marked pairs in fork-join network}, author={S. V. Vyshenski, P. V. Grigoriev, Yu. Yu. Dubenskaya}, journal={arXiv preprint arXiv:0801.2092}, year={2008}, archivePrefix={arXiv}, eprint={0801.2092}, primaryClass={cs.DM} }
vyshenski2008model
arxiv-2370
0801.2144
Non-Additive Quantum Codes from Goethals and Preparata Codes
<|reference_start|>Non-Additive Quantum Codes from Goethals and Preparata Codes: We extend the stabilizer formalism to a class of non-additive quantum codes which are constructed from non-linear classical codes. As an example, we present infinite families of non-additive codes which are derived from Goethals and Preparata codes.<|reference_end|>
arxiv
@article{grassl2008non-additive, title={Non-Additive Quantum Codes from Goethals and Preparata Codes}, author={Markus Grassl and Martin Roetteler}, journal={Proceedings IEEE Information Theory Workshop 2008 (ITW 2008), Porto, Portugal, May 2008, pp. 396-400}, year={2008}, doi={10.1109/ITW.2008.4578694}, archivePrefix={arXiv}, eprint={0801.2144}, primaryClass={quant-ph cs.IT math.IT} }
grassl2008non-additive
arxiv-2371
0801.2150
Quantum Goethals-Preparata Codes
<|reference_start|>Quantum Goethals-Preparata Codes: We present a family of non-additive quantum codes based on Goethals and Preparata codes with parameters ((2^m,2^{2^m-5m+1},8)). The dimension of these codes is eight times higher than the dimension of the best known additive quantum codes of equal length and minimum distance.<|reference_end|>
arxiv
@article{grassl2008quantum, title={Quantum Goethals-Preparata Codes}, author={Markus Grassl and Martin Roetteler}, journal={Proceedings 2008 IEEE International Symposium on Information Theory (ISIT 2008), Toronto, Canada, July 2008, pp. 300-304}, year={2008}, doi={10.1109/ISIT.2008.4594996}, archivePrefix={arXiv}, eprint={0801.2150}, primaryClass={quant-ph cs.IT math.IT} }
grassl2008quantum
arxiv-2372
0801.2175
MathPSfrag 2: Convenient LaTeX Labels in Mathematica
<|reference_start|>MathPSfrag 2: Convenient LaTeX Labels in Mathematica: This article introduces the next version of MathPSfrag. MathPSfrag is a Mathematica package that during export automatically replaces all expressions in a plot by corresponding LaTeX commands. The new version can also produce LaTeX independent images; e.g., PDF files for inclusion in pdfLaTeX. Moreover from these files a preview is generated and shown within Mathematica.<|reference_end|>
arxiv
@article{große2008mathpsfrag, title={MathPSfrag 2: Convenient LaTeX Labels in Mathematica}, author={Johannes Gro{\ss}e}, journal={arXiv preprint arXiv:0801.2175}, year={2008}, archivePrefix={arXiv}, eprint={0801.2175}, primaryClass={cs.GR} }
große2008mathpsfrag
arxiv-2373
0801.2185
New Outer Bounds on the Capacity Region of Gaussian Interference Channels
<|reference_start|>New Outer Bounds on the Capacity Region of Gaussian Interference Channels: Recent outer bounds on the capacity region of Gaussian interference channels are generalized to $m$-user channels with $m>2$ and asymmetric powers and crosstalk coefficients. The bounds are again shown to give the sum-rate capacity for Gaussian interference channels with low powers and crosstalk coefficients. The capacity is achieved by using single-user detection at each receiver, i.e., treating the interference as noise incurs no loss in performance.<|reference_end|>
arxiv
@article{shang2008new, title={New Outer Bounds on the Capacity Region of Gaussian Interference Channels}, author={Xiaohu Shang, Gerhard Kramer, Biao Chen}, journal={arXiv preprint arXiv:0801.2185}, year={2008}, archivePrefix={arXiv}, eprint={0801.2185}, primaryClass={cs.IT math.IT} }
shang2008new
arxiv-2374
0801.2187
A One-Way Function Based On The Extended Euclidean Algorithm
<|reference_start|>A One-Way Function Based On The Extended Euclidean Algorithm: A problem based on the Extended Euclidean Algorithm applied to a class of polynomials with many factors is presented and believed to be hard. If so, it is a one-way function well suited for applications in digital signatures.<|reference_end|>
arxiv
@article{feig2008a, title={A One-Way Function Based On The Extended Euclidean Algorithm}, author={Ephraim Feig, Vivian Feig}, journal={arXiv preprint arXiv:0801.2187}, year={2008}, archivePrefix={arXiv}, eprint={0801.2187}, primaryClass={cs.CR} }
feig2008a
arxiv-2375
0801.2201
Policies of System Level Pipeline Modeling
<|reference_start|>Policies of System Level Pipeline Modeling: Pipelining is a well understood and often used implementation technique for increasing the performance of a hardware system. We develop several SystemC/C++ modeling techniques that allow us to quickly model, simulate, and evaluate pipelines. We employ a small domain specific language (DSL) based on resource usage patterns that automates the drudgery of boilerplate code needed to configure connectivity in simulation models. The DSL is embedded directly in the host modeling language SystemC/C++. Additionally we develop several techniques for parameterizing a pipeline's behavior based on policies of function, communication, and timing (performance modeling).<|reference_end|>
arxiv
@article{harcourt2008policies, title={Policies of System Level Pipeline Modeling}, author={Ed Harcourt}, journal={arXiv preprint arXiv:0801.2201}, year={2008}, archivePrefix={arXiv}, eprint={0801.2201}, primaryClass={cs.AR cs.PL} }
harcourt2008policies
arxiv-2376
0801.2226
Programming an interpreter using molecular dynamics
<|reference_start|>Programming an interpreter using molecular dynamics: PGA (ProGram Algebra) is an algebra of programs which concerns programs in their simplest form: sequences of instructions. Molecular dynamics is a simple model of computation developed in the setting of PGA, which bears on the use of dynamic data structures in programming. We consider the programming of an interpreter for a program notation that is close to existing assembly languages using PGA with the primitives of molecular dynamics as basic instructions. It happens that, although primarily meant for explaining programming language features relating to the use of dynamic data structures, the collection of primitives of molecular dynamics in itself is suited to our programming wants.<|reference_end|>
arxiv
@article{bergstra2008programming, title={Programming an interpreter using molecular dynamics}, author={J. A. Bergstra, C. A. Middelburg}, journal={Scientific Annals of Computer Science, 17:47--81, 2007. http://www.infoiasi.ro/bin/download/Annals/XVII/XVII_2.pdf}, year={2008}, number={PRG0801}, archivePrefix={arXiv}, eprint={0801.2226}, primaryClass={cs.PL} }
bergstra2008programming
arxiv-2377
0801.2233
Analysis of Non-binary Hybrid LDPC Codes
<|reference_start|>Analysis of Non-binary Hybrid LDPC Codes: In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to conclude on asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.<|reference_end|>
arxiv
@article{sassatelli2008analysis, title={Analysis of Non-binary Hybrid LDPC Codes}, author={Lucile Sassatelli and David Declercq}, journal={arXiv preprint arXiv:0801.2233}, year={2008}, archivePrefix={arXiv}, eprint={0801.2233}, primaryClass={cs.IT math.IT} }
sassatelli2008analysis
arxiv-2378
0801.2242
Information Spectrum Approach to Second-Order Coding Rate in Channel Coding
<|reference_start|>Information Spectrum Approach to Second-Order Coding Rate in Channel Coding: Second-order coding rate of channel coding is discussed for general sequence of channels. The optimum second-order transmission rate with a constant error constraint $\epsilon$ is obtained by using the information spectrum method. We apply this result to the discrete memoryless case, the discrete memoryless case with a cost constraint, the additive Markovian case, and the Gaussian channel case with an energy constraint. We also clarify that the Gallager bound does not give the optimum evaluation in the second-order coding rate.<|reference_end|>
arxiv
@article{hayashi2008information, title={Information Spectrum Approach to Second-Order Coding Rate in Channel Coding}, author={Masahito Hayashi}, journal={IEEE Transactions on Information Theory Volume 55, Issue 11, 4947 - 4966 (2009)}, year={2008}, doi={10.1109/TIT.2009.2030478}, archivePrefix={arXiv}, eprint={0801.2242}, primaryClass={cs.IT math.IT} }
hayashi2008information
arxiv-2379
0801.2284
Le probleme de l'isomorphisme de graphes est dans P
<|reference_start|>Le probleme de l'isomorphisme de graphes est dans P: This paper has been withdrawn by the author, due to possible counter-examples.<|reference_end|>
arxiv
@article{kettani2008le, title={Le probleme de l'isomorphisme de graphes est dans P}, author={Omar Kettani}, journal={arXiv preprint arXiv:0801.2284}, year={2008}, archivePrefix={arXiv}, eprint={0801.2284}, primaryClass={cs.DM cs.DS} }
kettani2008le
arxiv-2380
0801.2323
Decentralized Two-Hop Opportunistic Relaying With Limited Channel State Information
<|reference_start|>Decentralized Two-Hop Opportunistic Relaying With Limited Channel State Information: A network consisting of $n$ source-destination pairs and $m$ relays is considered. Focusing on the large system limit (large $n$), the throughput scaling laws of two-hop relaying protocols are studied for Rayleigh fading channels. It is shown that, under the practical constraints of single-user encoding-decoding scheme, and partial channel state information (CSI) at the transmitters (via integer-value feedback from the receivers), the maximal throughput scales as $\log n$ even if full relay cooperation is allowed. Furthermore, a novel decentralized opportunistic relaying scheme with receiver CSI, partial transmitter CSI, and no relay cooperation, is shown to achieve the optimal throughput scaling law of $\log n$.<|reference_end|>
arxiv
@article{cui2008decentralized, title={Decentralized Two-Hop Opportunistic Relaying With Limited Channel State Information}, author={Shengshan Cui, Alexander M. Haimovich, Oren Somekh, and H. Vincent Poor}, journal={arXiv preprint arXiv:0801.2323}, year={2008}, archivePrefix={arXiv}, eprint={0801.2323}, primaryClass={cs.IT math.IT} }
cui2008decentralized
arxiv-2381
0801.2345
On the relationship between the structural and socioacademic communities of a coauthorship network
<|reference_start|>On the relationship between the structural and socioacademic communities of a coauthorship network: This article presents a study that compares detected structural communities in a coauthorship network to the socioacademic characteristics of the scholars that compose the network. The coauthorship network was created from the bibliographic record of a multi-institution, interdisciplinary research group focused on the study of sensor networks and wireless communication. Four different community detection algorithms were employed to assign a structural community to each scholar in the network: leading eigenvector, walktrap, edge betweenness and spinglass. Socioacademic characteristics were gathered from the scholars and include such information as their academic department, academic affiliation, country of origin, and academic position. A Pearson's $\chi^2$ test, with a simulated Monte Carlo, revealed that structural communities best represent groupings of individuals working in the same academic department and at the same institution. A generalization of this result suggests that, even in interdisciplinary, multi-institutional research groups, coauthorship is primarily driven by departmental and institutional affiliation.<|reference_end|>
arxiv
@article{rodriguez2008on, title={On the relationship between the structural and socioacademic communities of a coauthorship network}, author={Marko A. Rodriguez and Alberto Pepe}, journal={Journal of Informetrics, volume 2, issue 3, pages 195-201, ISSN: 1751-1577, Elsevier, July 2008}, year={2008}, doi={10.1016/j.joi.2008.04.002}, number={LA-UR-07-8339}, archivePrefix={arXiv}, eprint={0801.2345}, primaryClass={cs.DL physics.soc-ph} }
rodriguez2008on
arxiv-2382
0801.2347
On the Minimum Spanning Tree for Directed Graphs with Potential Weights
<|reference_start|>On the Minimum Spanning Tree for Directed Graphs with Potential Weights: In general, the problem of finding a minimum spanning tree for a weighted directed graph is difficult but solvable. There are many differences between the problems for directed and undirected graphs, so the algorithms for undirected graphs cannot usually be applied to the directed case. In this paper we examine the kinds of weights for which the problems are equivalent, so that a minimum spanning tree of a directed graph may be found by a simple algorithm for an undirected graph.<|reference_end|>
arxiv
@article{buslov2008on, title={On the Minimum Spanning Tree for Directed Graphs with Potential Weights}, author={V. A. Buslov, V. A. Khudobakhshov}, journal={arXiv preprint arXiv:0801.2347}, year={2008}, archivePrefix={arXiv}, eprint={0801.2347}, primaryClass={cs.DM} }
buslov2008on
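The abstract above reduces the directed problem, for its class of weights, to "a simple algorithm for an undirected graph". The paper's weight class is not reproduced here; as an illustrative sketch only, one standard such undirected algorithm is Kruskal's with union-find:

```python
def kruskal(n, edges):
    """Kruskal's algorithm on an undirected graph with n vertices.
    edges: list of (weight, u, v). Returns (total weight, tree edges)."""
    parent = list(range(n))

    def find(x):
        # Find the root of x's component, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, tree = 0, []
    for w, u, v in sorted(edges):  # scan edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:               # keep the edge iff it joins two components
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree
```

For example, on a 4-cycle with weights 1..4 the heaviest edge is dropped and the tree has weight 6.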
arxiv-2383
0801.2378
String algorithms and data structures
<|reference_start|>String algorithms and data structures: The string-matching field has grown to such a complicated stage that various issues come into play when studying it: data structure and algorithmic design, database principles, compression techniques, architectural features, cache and prefetching policies. The expertise nowadays required to design good string data structures and algorithms is therefore transversal to many computer science fields, and much more study on the orchestration of known, or novel, techniques is needed to make progress in this fascinating topic. This survey is aimed at illustrating the key ideas which should constitute, in our opinion, the current background of every index designer. We also discuss the positive features and drawbacks of known indexing schemes and algorithms, and devote much attention to detailing research issues and open problems on both the theoretical and the experimental side.<|reference_end|>
arxiv
@article{ferragina2008string, title={String algorithms and data structures}, author={Paolo Ferragina}, journal={arXiv preprint arXiv:0801.2378}, year={2008}, archivePrefix={arXiv}, eprint={0801.2378}, primaryClass={cs.DS cs.IR} }
ferragina2008string
arxiv-2384
0801.2398
Removing the Stiffness of Elastic Force from the Immersed Boundary Method for the 2D Stokes Equations
<|reference_start|>Removing the Stiffness of Elastic Force from the Immersed Boundary Method for the 2D Stokes Equations: The Immersed Boundary method has evolved into one of the most useful computational methods in studying fluid structure interaction. On the other hand, the Immersed Boundary method is also known to suffer from a severe timestep stability restriction when using an explicit time discretization. In this paper, we propose several efficient semi-implicit schemes to remove this stiffness from the Immersed Boundary method for the two-dimensional Stokes flow. First, we obtain a novel unconditionally stable semi-implicit discretization for the immersed boundary problem. Using this unconditionally stable discretization as a building block, we derive several efficient semi-implicit schemes for the immersed boundary problem by applying the Small Scale Decomposition to this unconditionally stable discretization. Our stability analysis and extensive numerical experiments show that our semi-implicit schemes offer a much better stability property than the explicit scheme. Unlike other implicit or semi-implicit schemes proposed in the literature, our semi-implicit schemes can be solved explicitly in the spectral space. Thus the computational cost of our semi-implicit schemes is comparable to that of an explicit scheme, but with a much better stability property.<|reference_end|>
arxiv
@article{hou2008removing, title={Removing the Stiffness of Elastic Force from the Immersed Boundary Method for the 2D Stokes Equations}, author={Thomas Y. Hou, Zuoqiang Shi}, journal={arXiv preprint arXiv:0801.2398}, year={2008}, doi={10.1016/j.jcp.2008.03.002}, archivePrefix={arXiv}, eprint={0801.2398}, primaryClass={cs.CE cs.NA math.NA} }
hou2008removing
arxiv-2385
0801.2405
Multiple Uncertainties in Time-Variant Cosmological Particle Data
<|reference_start|>Multiple Uncertainties in Time-Variant Cosmological Particle Data: Though the media for visualization are limited, the potential dimensions of a dataset are not. In many areas of scientific study, understanding the correlations between those dimensions and their uncertainties is pivotal to mining useful information from a dataset. Obtaining this insight can necessitate visualizing the many relationships among temporal, spatial, and other dimensionalities of data and its uncertainties. We utilize multiple views for interactive dataset exploration and selection of important features, and we apply those techniques to the unique challenges of cosmological particle datasets. We show how interactivity and incorporation of multiple visualization techniques help overcome the problem of limited visualization dimensions and allow many types of uncertainty to be seen in correlation with other variables.<|reference_end|>
arxiv
@article{haroz2008multiple, title={Multiple Uncertainties in Time-Variant Cosmological Particle Data}, author={Steve Haroz, Kwan-Liu Ma, Katrin Heitmann}, journal={Haroz, S; Ma, K-L; Heitmann, K, "Multiple Uncertainties in Time-Variant Cosmological Particle Data" IEEE PacificVIS '08, pp.207-214, 5-7 March 2008}, year={2008}, doi={10.1109/PACIFICVIS.2008.4475478}, number={LAUR-08-0052}, archivePrefix={arXiv}, eprint={0801.2405}, primaryClass={astro-ph cs.GR cs.HC} }
haroz2008multiple
arxiv-2386
0801.2423
Design and Analysis of LDGM-Based Codes for MSE Quantization
<|reference_start|>Design and Analysis of LDGM-Based Codes for MSE Quantization: Approaching the 1.5329-dB shaping (granular) gain limit in mean-squared error (MSE) quantization of R^n is important in a number of problems, notably dirty-paper coding. For this purpose, we start with a binary low-density generator-matrix (LDGM) code, and construct the quantization codebook by periodically repeating its set of binary codewords, or those codewords mapped to m-ary ones with Gray mapping. The quantization algorithm is based on belief propagation, and it uses a decimation procedure to do the guessing necessary for convergence. Using the results of a true typical decimator (TTD) as reference, it is shown that the asymptotic performance of the proposed quantizer can be characterized by certain monotonicity conditions on the code's fixed point properties, which can be analyzed with density evolution, and degree distribution optimization can be carried out accordingly. When the number of iterations is finite, the resulting loss is made amenable to analysis through the introduction of a recovery algorithm from ``bad'' guesses, and the results of such analysis enable further optimization of the pace of decimation and the degree distribution. Simulation results show that the proposed LDGM-based quantizer can achieve a shaping gain of 1.4906 dB, or 0.0423 dB from the limit, and significantly outperforms trellis-coded quantization (TCQ) at a similar computational complexity.<|reference_end|>
arxiv
@article{wang2008design, title={Design and Analysis of LDGM-Based Codes for MSE Quantization}, author={Qingchuan Wang, Chen He}, journal={arXiv preprint arXiv:0801.2423}, year={2008}, archivePrefix={arXiv}, eprint={0801.2423}, primaryClass={cs.IT math.IT} }
wang2008design
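The Gray mapping mentioned in the abstract labels m-ary symbols so that adjacent symbols differ in exactly one bit. A minimal sketch of the standard reflected binary Gray code (illustrative only, not the paper's exact labeling):

```python
def gray(n):
    """Reflected binary Gray code of integer n: consecutive integers
    map to codewords that differ in exactly one bit."""
    return n ^ (n >> 1)

# Bit labels for the 8 symbols of an 8-ary constellation under Gray mapping.
labels = [format(gray(i), "03b") for i in range(8)]
```

Here `labels` comes out as `['000', '001', '011', '010', '110', '111', '101', '100']`: each neighboring pair differs in a single bit, which is what makes the binary-to-m-ary mapping robust to nearest-symbol errors.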
arxiv-2387
0801.2480
Asynchronous Iterative Waterfilling for Gaussian Frequency-Selective Interference Channels
<|reference_start|>Asynchronous Iterative Waterfilling for Gaussian Frequency-Selective Interference Channels: This paper considers the maximization of information rates for the Gaussian frequency-selective interference channel, subject to power and spectral mask constraints on each link. To derive decentralized solutions that do not require any cooperation among the users, the optimization problem is formulated as a static noncooperative game of complete information. To achieve the so-called Nash equilibria of the game, we propose a new distributed algorithm called the asynchronous iterative waterfilling algorithm. In this algorithm, the users update their power spectral density in a completely distributed and asynchronous way: some users may update their power allocation more frequently than others and they may even use outdated measurements of the received interference. The proposed algorithm represents a unified framework that encompasses and generalizes all known iterative waterfilling algorithms, e.g., sequential and simultaneous versions. The main result of the paper consists of a unified set of conditions that guarantee the global convergence of the proposed algorithm to the (unique) Nash equilibrium of the game.<|reference_end|>
arxiv
@article{scutari2008asynchronous, title={Asynchronous Iterative Waterfilling for Gaussian Frequency-Selective Interference Channels}, author={Gesualdo Scutari, Daniel P. Palomar, and Sergio Barbarossa}, journal={arXiv preprint arXiv:0801.2480}, year={2008}, archivePrefix={arXiv}, eprint={0801.2480}, primaryClass={cs.IT cs.GT math.IT} }
scutari2008asynchronous
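The iterative waterfilling family that this abstract unifies builds on single-user waterfilling. A minimal sketch of the sequential variant for two users follows; the function names, gain values, and bisection tolerance are illustrative assumptions, and the paper's asynchronous update schedule is not modeled:

```python
def waterfill(inv_gains, power):
    """Single-user waterfilling: p_k = max(0, mu - inv_gains[k]),
    with the water level mu chosen by bisection so sum(p) = power."""
    lo, hi = 0.0, max(inv_gains) + power
    for _ in range(100):
        mu = (lo + hi) / 2
        if sum(max(0.0, mu - g) for g in inv_gains) > power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - g) for g in inv_gains]

def sequential_iwf(direct, cross, noise, powers, iters=50):
    """Sequential iterative waterfilling for 2 users: each user in turn
    waterfills against noise plus the interference produced by the other
    user's current allocation (interference treated as noise)."""
    n = len(noise)
    p = [[powers[u] / n] * n for u in range(2)]   # start from flat power
    for _ in range(iters):
        for u in range(2):
            v = 1 - u
            # Effective inverse SNR per bin seen by user u.
            inv = [(noise[k] + cross[u][k] * p[v][k]) / direct[u][k]
                   for k in range(n)]
            p[u] = waterfill(inv, powers[u])
    return p
```

The per-user power constraint is enforced at every update, so each returned allocation is nonnegative and sums to the user's budget regardless of whether the iteration has converged.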
arxiv-2388
0801.2498
An Application of the Feferman-Vaught Theorem to Automata and Logics for Words over an Infinite Alphabet
<|reference_start|>An Application of the Feferman-Vaught Theorem to Automata and Logics for Words over an Infinite Alphabet: We show that a special case of the Feferman-Vaught composition theorem gives rise to a natural notion of automata for finite words over an infinite alphabet, with good closure and decidability properties, as well as several logical characterizations. We also consider a slight extension of the Feferman-Vaught formalism which allows one to express more relations between component values (such as equality), and prove related decidability results. From this result we get new classes of decidable logics for words over an infinite alphabet.<|reference_end|>
arxiv
@article{bès2008an, title={An Application of the Feferman-Vaught Theorem to Automata and Logics for Words over an Infinite Alphabet}, author={Alexis B\`es}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 25, 2008) lmcs:1202}, year={2008}, doi={10.2168/LMCS-4(1:8)2008}, archivePrefix={arXiv}, eprint={0801.2498}, primaryClass={cs.LO} }
bès2008an
arxiv-2389
0801.2510
A Comparison of natural (english) and artificial (esperanto) languages. A Multifractal method based analysis
<|reference_start|>A Comparison of natural (english) and artificial (esperanto) languages. A Multifractal method based analysis: We present a comparison of two english texts written by Lewis Carroll, one (Alice in wonderland) and the other (Through a looking glass), the former translated into esperanto, in order to observe whether natural and artificial languages significantly differ from each other. We construct one-dimensional time-series-like signals using either word lengths or word frequencies. We use multifractal ideas for sorting out correlations in the writings. In order to check the robustness of the methods we also write the corresponding shuffled texts. We compare characteristic functions and, e.g., observe marked differences in the (far from parabolic) f(alpha) curves, differences which we attribute to Tsallis non-extensive statistical features in the ``frequency time series'' and ``length time series''. The esperanto text has more extreme values. A very rough approximation consists in modeling the texts as a random Cantor set, as if resulting from a binomial cascade of long and short words (or words and blanks). This leads to parameters characterizing the text style and, most likely, in the end the author's writings.<|reference_end|>
arxiv
@article{gillet2008a, title={A Comparison of natural (english) and artificial (esperanto) languages. A Multifractal method based analysis}, author={J. Gillet and M. Ausloos}, journal={arXiv preprint arXiv:0801.2510}, year={2008}, archivePrefix={arXiv}, eprint={0801.2510}, primaryClass={cs.CL physics.data-an} }
gillet2008a
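The construction this abstract starts from, word-length signals and their shuffled controls, can be sketched directly. The multifractal machinery itself (the f(alpha) spectrum) is not reproduced here; the partition sum below is only the first ingredient of such an analysis, and the function names are illustrative:

```python
import random

def word_length_series(text):
    """Map a text to the 1-D signal of its successive word lengths."""
    return [len(w) for w in text.split()]

def shuffled_control(series, seed=0):
    """Shuffled copy of the series: identical length statistics,
    correlations destroyed (the robustness check used in the paper)."""
    rng = random.Random(seed)
    s = series[:]
    rng.shuffle(s)
    return s

def partition_sum(series, box, q):
    """Crude partition sum Z(q) over non-overlapping boxes of the
    normalized series mass; its scaling in `box` is what a multifractal
    analysis examines. At q = 1 it always equals 1."""
    total = sum(series)
    masses = [sum(series[i:i + box]) / total
              for i in range(0, len(series), box)]
    return sum(m ** q for m in masses if m > 0)
```

For instance, `word_length_series("the quick brown fox")` gives `[3, 5, 5, 3]`, and its partition sum at q = 1 is 1.0 for any box size.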
arxiv-2390
0801.2588
Coding and Decoding for the Dynamic Decode and Forward Relay Protocol
<|reference_start|>Coding and Decoding for the Dynamic Decode and Forward Relay Protocol: We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman-Pearson rule), such that the relay triggers transmission only when its decision is reliable. Also, we show that a receiver based on the Generalized Likelihood Ratio Test rule that jointly decodes the relay decision time and the information message achieves the optimal DMT. Our results show that no cyclic redundancy check (CRC) for error detection or additional protocol overhead to communicate the decision time are needed for DDF. Finally, we investigate the use of minimum mean squared error generalized decision feedback equalizer (MMSE-GDFE) lattice decoding at both the relay and the destination, and show that it provides near optimal performance at moderate complexity.<|reference_end|>
arxiv
@article{kumar2008coding, title={Coding and Decoding for the Dynamic Decode and Forward Relay Protocol}, author={K. Raj Kumar and Giuseppe Caire}, journal={arXiv preprint arXiv:0801.2588}, year={2008}, archivePrefix={arXiv}, eprint={0801.2588}, primaryClass={cs.IT math.IT} }
kumar2008coding
arxiv-2391
0801.2618
Survey of Technologies for Web Application Development
<|reference_start|>Survey of Technologies for Web Application Development: Web-based application developers face a dizzying array of platforms, languages, frameworks and technical artifacts to choose from. We survey, classify, and compare technologies supporting Web application development. The classification is based on (1) foundational technologies; (2)integration with other information sources; and (3) dynamic content generation. We further survey and classify software engineering techniques and tools that have been adopted from traditional programming into Web programming. We conclude that, although the infrastructure problems of the Web have largely been solved, the cacophony of technologies for Web-based applications reflects the lack of a solid model tailored for this domain.<|reference_end|>
arxiv
@article{doyle2008survey, title={Survey of Technologies for Web Application Development}, author={Barry Doyle (University of California, Irvine) and Cristina Videira Lopes (University of California, Irvine)}, journal={arXiv preprint arXiv:0801.2618}, year={2008}, archivePrefix={arXiv}, eprint={0801.2618}, primaryClass={cs.SE cs.IR cs.NI} }
doyle2008survey
arxiv-2392
0801.2666
MRI/TRUS data fusion for prostate brachytherapy. Preliminary results
<|reference_start|>MRI/TRUS data fusion for prostate brachytherapy Preliminary results: Prostate brachytherapy involves implanting radioactive seeds (I125 for instance) permanently in the gland for the treatment of localized prostate cancers, e.g., cT1c-T2a N0 M0 with good prognostic factors. Treatment planning and seed implanting are most often based on the intensive use of transrectal ultrasound (TRUS) imaging. This is not easy because prostate visualization is difficult in this imaging modality particularly as regards the apex of the gland and from an intra- and interobserver variability standpoint. Radioactive seeds are implanted inside open interventional MR machines in some centers. Since MRI was shown to be sensitive and specific for prostate imaging whilst open MR is prohibitive for most centers and makes surgical procedures very complex, this work suggests bringing the MR virtually in the operating room with MRI/TRUS data fusion. This involves providing the physician with bi-modality images (TRUS plus MRI) intended to improve treatment planning from the data registration stage. The paper describes the method developed and implemented in the PROCUR system. Results are reported for a phantom and first series of patients. Phantom experiments helped characterize the accuracy of the process. Patient experiments have shown that using MRI data linked with TRUS data improves TRUS image segmentation especially regarding the apex and base of the prostate. This may significantly modify prostate volume definition and have an impact on treatment planning.<|reference_end|>
arxiv
@article{reynier2008mri/trus, title={MRI/TRUS data fusion for prostate brachytherapy. Preliminary results}, author={Christophe Reynier (TIMC), Jocelyne Troccaz (TIMC), Philippe Fourneret (TIMC), Andr\'e Dusserre, C\'ecile Gay-Jeune (CHU-Grenoble radio), Jean-Luc Descotes, Michel Bolla, Jean-Yves Giraud}, journal={Medical Physics 31, 6 (2004) 1568-75}, year={2008}, archivePrefix={arXiv}, eprint={0801.2666}, primaryClass={cs.OH} }
reynier2008mri/trus
arxiv-2393
0801.2793
Algorithms for eps-approximations of Terrains
<|reference_start|>Algorithms for eps-approximations of Terrains: Consider a point set D with a measure function w : D -> R. Let A be the set of subsets of D induced by containment in a shape from some geometric family (e.g. axis-aligned rectangles, half planes, balls, k-oriented polygons). We say a range space (D, A) has an eps-approximation P if max_{R \in A} | w(R \cap P)/w(P) - w(R \cap D)/w(D) | <= eps. We describe algorithms for deterministically constructing discrete eps-approximations for continuous point sets such as distributions or terrains. Furthermore, for certain families of subsets A, such as those described by axis-aligned rectangles, we reduce the size of the eps-approximations by almost a square root from O(1/eps^2 log 1/eps) to O(1/eps polylog 1/eps). This is often the first step in transforming a continuous problem into a discrete one for which combinatorial techniques can be applied. We describe applications of this result in geo-spatial analysis, biosurveillance, and sensor networks.<|reference_end|>
arxiv
@article{phillips2008algorithms, title={Algorithms for eps-approximations of Terrains}, author={Jeff M. Phillips}, journal={arXiv preprint arXiv:0801.2793}, year={2008}, archivePrefix={arXiv}, eprint={0801.2793}, primaryClass={cs.CG} }
phillips2008algorithms
arxiv-2394
0801.2823
3D/4D ultrasound registration of bone
<|reference_start|>3D/4D ultrasound registration of bone: This paper presents a method to reduce the invasiveness of Computer Assisted Orthopaedic Surgery (CAOS) using ultrasound. To this end, we need to develop a method for 3D/4D ultrasound registration. The preliminary results of this study suggest that the development of a robust and ``real-time'' 3D/4D ultrasound registration is feasible.<|reference_end|>
arxiv
@article{schers20083d/4d, title={3D/4D ultrasound registration of bone}, author={Jonathan Schers (TIMC), Jocelyne Troccaz (TIMC), Vincent Daanen (TIMC), C\'eline Fouard (TIMC), Christopher Plaskos, Pascal Kilian}, journal={In IEEE International Ultrasonic Symposium, New York, United States, 2007}, year={2008}, doi={10.1109/ULTSYM.2007.634}, archivePrefix={arXiv}, eprint={0801.2823}, primaryClass={cs.OH physics.med-ph} }
schers20083d/4d
arxiv-2395
0801.2838
An Algorithm for Road Coloring
<|reference_start|>An Algorithm for Road Coloring: A coloring of the edges of a finite directed graph turns the graph into a finite-state automaton. A synchronizing word of a deterministic automaton is a word in the alphabet of colors (considered as letters) of its edges that maps the automaton to a single state. A coloring of the edges of a directed graph of uniform outdegree (constant outdegree of every vertex) is synchronizing if the coloring turns the graph into a deterministic finite automaton possessing a synchronizing word. The road coloring problem is the problem of finding a synchronizing coloring of a finite, directed, strongly connected graph of uniform outdegree in which the greatest common divisor of the lengths of all its cycles is one. The problem, posed in 1970, has evoked noticeable interest among specialists in the theory of graphs, automata, codes and symbolic dynamics, as well as in the wide mathematical community. A polynomial-time algorithm, of $O(n^3)$ complexity in the worst case and quadratic in the majority of studied cases, for the road coloring of the considered graph is presented below. The work is based on the recent positive solution of the road coloring problem. The algorithm was implemented in the package TESTAS.<|reference_end|>
arxiv
@article{trahtman2008an, title={An Algorithm for Road Coloring}, author={A.N. Trahtman}, journal={arXiv preprint arXiv:0801.2838}, year={2008}, archivePrefix={arXiv}, eprint={0801.2838}, primaryClass={cs.DM} }
trahtman2008an
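The notion of a synchronizing word used above can be checked mechanically: apply the word to every state at once and see whether a single state remains. A minimal sketch (the function and automaton names are illustrative and unrelated to the TESTAS package), exercised on the classical Cerny automaton on three states:

```python
def is_synchronizing(delta, word):
    """delta: dict state -> {color: next_state} for a deterministic
    automaton. The word synchronizes the automaton iff applying it to
    the full state set leaves exactly one state."""
    states = set(delta)
    for color in word:
        states = {delta[s][color] for s in states}
    return len(states) == 1

# Cerny automaton on 3 states: 'a' is a cyclic shift, 'b' merges 0 into 1.
cerny3 = {
    0: {"a": 1, "b": 1},
    1: {"a": 2, "b": 1},
    2: {"a": 0, "b": 2},
}
```

For this automaton the word "baab" is synchronizing ({0,1,2} -> {1,2} -> {2,0} -> {0,1} -> {1}), while "ab" is not.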
arxiv-2396
0801.2858
Theoretical analysis of optimization problems - Some properties of random k-SAT and k-XORSAT
<|reference_start|>Theoretical analysis of optimization problems - Some properties of random k-SAT and k-XORSAT: This thesis is divided into two parts. The first presents an overview of known results in statistical mechanics of disordered systems and its approach to random combinatorial optimization problems. The second part is a discussion of two original results. The first result concerns DPLL heuristics for random k-XORSAT, which is equivalent to the diluted Ising p-spin model. It is well known that DPLL is unable to find the ground states in the clustered phase of the problem, i.e. that it leads to contradictions with probability 1. However, no solid argument supports this in general. A class of heuristics, which includes the well-known UC and GUC, is introduced and studied. It is shown that any heuristic in this class must fail if the clause to variable ratio is larger than some constant, which depends on the heuristic but is always smaller than the clustering threshold. The second result concerns the properties of random k-SAT at large clause to variable ratios. In this regime, it is well known that the uniform distribution of random instances is dominated by unsatisfiable instances. A general technique (based on the Replica method) to restrict the distribution to satisfiable instances with uniform weight is introduced, and is used to characterize their solutions. It is found that in the limit of large clause to variable ratios, the uniform distribution of satisfiable random k-SAT formulas is asymptotically equal to the much studied Planted distribution. Both results are already published and available as arXiv:0709.0367 and arXiv:cs/0609101. A more detailed and self-contained derivation is presented here.<|reference_end|>
arxiv
@article{altarelli2008theoretical, title={Theoretical analysis of optimization problems - Some properties of random k-SAT and k-XORSAT}, author={Fabrizio Altarelli}, journal={arXiv preprint arXiv:0801.2858}, year={2008}, archivePrefix={arXiv}, eprint={0801.2858}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC} }
altarelli2008theoretical
arxiv-2397
0801.2890
Entropy landscape and non-Gibbs solutions in constraint satisfaction problems
<|reference_start|>Entropy landscape and non-Gibbs solutions in constraint satisfaction problems: We study the entropy landscape of solutions for the bicoloring problem in random graphs, a representative difficult constraint satisfaction problem. Our goal is to classify which type of clusters of solutions are addressed by different algorithms. In the first part of the study we use the cavity method to obtain the number of clusters with a given internal entropy and determine the phase diagram of the problem, e.g. dynamical, rigidity and SAT-UNSAT transitions. In the second part of the paper we analyze different algorithms and locate their behavior in the entropy landscape of the problem. For instance we show that a smoothed version of a decimation strategy based on Belief Propagation is able to find solutions belonging to sub-dominant clusters even beyond the so called rigidity transition where the thermodynamically relevant clusters become frozen. These non-equilibrium solutions belong to the most probable unfrozen clusters.<|reference_end|>
arxiv
@article{dall'asta2008entropy, title={Entropy landscape and non-Gibbs solutions in constraint satisfaction problems}, author={L. Dall'Asta, A. Ramezanpour and R. Zecchina}, journal={Phys. Rev. E 77, 031118 (2008)}, year={2008}, doi={10.1103/PhysRevE.77.031118}, archivePrefix={arXiv}, eprint={0801.2890}, primaryClass={cond-mat.stat-mech cs.CC} }
dall'asta2008entropy
arxiv-2398
0801.2931
A Truthful Mechanism for Offline Ad Slot Scheduling
<|reference_start|>A Truthful Mechanism for Offline Ad Slot Scheduling: We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well as a maximum cost per click, and may not be assigned to more than one slot for a particular search. We give a truthful mechanism under the utility model where bidders try to maximize their clicks, subject to their personal constraints. In addition, we show that the revenue-maximizing mechanism is not truthful, but has a Nash equilibrium whose outcome is identical to our mechanism. As far as we can tell, this is the first treatment of sponsored search that directly incorporates both multiple slots and budget constraints into an analysis of incentives. Our mechanism employs a descending-price auction that maintains a solution to a certain machine scheduling problem whose job lengths depend on the price, and hence is variable over the auction. The price stops when the set of bidders that can afford that price pack exactly into a block of ad slots, at which point the mechanism allocates that block and continues on the remaining slots. To prove our result on the equilibrium of the revenue-maximizing mechanism, we first show that a greedy algorithm suffices to solve the revenue-maximizing linear program; we then use this insight to prove that bidders allocated in the same block of our mechanism have no incentive to deviate from bidding the fixed price of that block.<|reference_end|>
arxiv
@article{feldman2008a, title={A Truthful Mechanism for Offline Ad Slot Scheduling}, author={Jon Feldman, S. Muthukrishnan, Evdokia Nikolova, Martin Pal}, journal={arXiv preprint arXiv:0801.2931}, year={2008}, archivePrefix={arXiv}, eprint={0801.2931}, primaryClass={cs.GT cs.DS} }
feldman2008a
arxiv-2399
0801.3024
Construction of Z4-linear Reed-Muller codes
<|reference_start|>Construction of Z4-linear Reed-Muller codes: New quaternary Plotkin constructions are given and are used to obtain new families of quaternary codes. The parameters of the obtained codes, such as the length, the dimension and the minimum distance, are studied. Using these constructions, new families of quaternary Reed-Muller codes are built, with the peculiarity that, after using the Gray map, the obtained Z4-linear codes have the same parameters and fundamental properties as the codes in the usual binary linear Reed-Muller family. To make the duality relationships in the constructed families more evident, the concept of a Kronecker inner product is introduced.<|reference_end|>
arxiv
@article{pujol2008construction, title={Construction of Z4-linear Reed-Muller codes}, author={J. Pujol, J. Rif\'a, F. I. Solov'eva}, journal={arXiv preprint arXiv:0801.3024}, year={2008}, archivePrefix={arXiv}, eprint={0801.3024}, primaryClass={cs.IT math.IT} }
pujol2008construction
arxiv-2400
0801.3042
Performance Analysis of a Cross-layer Collaborative Beamforming Approach in the Presence of Channel and Phase Errors
<|reference_start|>Performance Analysis of a Cross-layer Collaborative Beamforming Approach in the Presence of Channel and Phase Errors: Collaborative beamforming enables nodes in a wireless network to transmit a common message over long distances in an energy efficient fashion. However, the process of making available the same message to all collaborating nodes introduces delays. The authors recently proposed a MAC-PHY cross-layer scheme that enables collaborative beamforming with significantly reduced collaboration overhead. The method requires knowledge of node locations and internode channel coefficients. In this paper, the performance of that approach is studied analytically in terms of average beampattern and symbol error probability (SEP) under realistic conditions, i.e., when imperfect channel estimates are used and when there are phase errors in the contributions of the collaborating nodes at the receiver.<|reference_end|>
arxiv
@article{dong2008performance, title={Performance Analysis of a Cross-layer Collaborative Beamforming Approach in the Presence of Channel and Phase Errors}, author={Lun Dong, Athina P. Petropulu, H. Vincent Poor}, journal={arXiv preprint arXiv:0801.3042}, year={2008}, archivePrefix={arXiv}, eprint={0801.3042}, primaryClass={cs.IT math.IT} }
dong2008performance