corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-6001 | 0901.1761 | Towards Optimal Range Medians | <|reference_start|>Towards Optimal Range Medians: We consider the following problem: given an unsorted array of $n$ elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which uses O(n) space and needs $O(n\log k + k\log n)$ time to answer the first $k$ queries. This improves previous algorithms by a logarithmic factor and matches a lower bound for $k=O(n)$. Since the algorithm decomposes the range of element values rather than the array, it has natural generalizations to higher dimensional problems -- it reduces a range median query to a logarithmic number of range counting queries.<|reference_end|> | arxiv | @article{gfeller2009towards,
title={Towards Optimal Range Medians},
author={Beat Gfeller and Peter Sanders},
journal={arXiv preprint arXiv:0901.1761},
year={2009},
archivePrefix={arXiv},
eprint={0901.1761},
primaryClass={cs.DS}
} | gfeller2009towards |
arxiv-6002 | 0901.1762 | A Tight Estimate for Decoding Error-Probability of LT Codes Using Kovalenko's Rank Distribution | <|reference_start|>A Tight Estimate for Decoding Error-Probability of LT Codes Using Kovalenko's Rank Distribution: A new approach for estimating the Decoding Error-Probability (DEP) of LT codes with dense rows is derived by using the conditional Kovalenko's rank distribution. The estimate by the proposed approach is very close to the DEP approximated by Gaussian Elimination, and is significantly less complex. As a key application, we utilize the estimates for obtaining optimal LT codes with dense rows, whose DEP is very close to the Kovalenko's Full-Rank Limit within a desired error-bound. Experimental evidences which show the viability of the estimates are also provided.<|reference_end|> | arxiv | @article{lee2009a,
title={A Tight Estimate for Decoding Error-Probability of LT Codes Using
Kovalenko's Rank Distribution},
author={Ki-Moon Lee and Hayder Radha and Beom-Jin Kim},
journal={arXiv preprint arXiv:0901.1762},
year={2009},
archivePrefix={arXiv},
eprint={0901.1762},
primaryClass={cs.IT cs.DM math.CO math.IT}
} | lee2009a |
arxiv-6003 | 0901.1782 | A Holistic Approach to Information Distribution in Ad Hoc Networks | <|reference_start|>A Holistic Approach to Information Distribution in Ad Hoc Networks: We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication/drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.<|reference_end|> | arxiv | @article{casetti2009a,
title={A Holistic Approach to Information Distribution in Ad Hoc Networks},
author={Claudio Casetti and Carla-Fabiana Chiasserini and Marco Fiore and
Chi-Anh La and Pietro Michiardi},
journal={arXiv preprint arXiv:0901.1782},
year={2009},
archivePrefix={arXiv},
eprint={0901.1782},
primaryClass={cs.NI cs.PF}
} | casetti2009a |
arxiv-6004 | 0901.1821 | Semidefinite representation of convex hulls of rational varieties | <|reference_start|>Semidefinite representation of convex hulls of rational varieties: Using elementary duality properties of positive semidefinite moment matrices and polynomial sum-of-squares decompositions, we prove that the convex hull of rationally parameterized algebraic varieties is semidefinite representable (that is, it can be represented as a projection of an affine section of the cone of positive semidefinite matrices) in the case of (a) curves; (b) hypersurfaces parameterized by quadratics; and (c) hypersurfaces parameterized by bivariate quartics; all in an ambient space of arbitrary dimension.<|reference_end|> | arxiv | @article{henrion2009semidefinite,
title={Semidefinite representation of convex hulls of rational varieties},
author={Didier Henrion (LAAS, CTU/FEE)},
journal={arXiv preprint arXiv:0901.1821},
year={2009},
number={Rapport LAAS No. 09001},
archivePrefix={arXiv},
eprint={0901.1821},
primaryClass={math.OC cs.SY math.AG}
} | henrion2009semidefinite |
arxiv-6005 | 0901.1824 | A Highly Nonlinear Differentially 4 Uniform Power Mapping That Permutes Fields of Even Degree | <|reference_start|>A Highly Nonlinear Differentially 4 Uniform Power Mapping That Permutes Fields of Even Degree: Functions with low differential uniformity can be used as the s-boxes of symmetric cryptosystems as they have good resistance to differential attacks. The AES (Advanced Encryption Standard) uses a differentially-4 uniform function called the inverse function. Any function used in a symmetric cryptosystem should be a permutation. Also, it is required that the function is highly nonlinear so that it is resistant to Matsui's linear attack. In this article we demonstrate that a highly nonlinear permutation discovered by Hans Dobbertin has differential uniformity of four and hence, with respect to differential and linear cryptanalysis, is just as suitable for use in a symmetric cryptosystem as the inverse function.<|reference_end|> | arxiv | @article{bracken2009a,
title={A Highly Nonlinear Differentially 4 Uniform Power Mapping That Permutes
Fields of Even Degree},
author={Carl Bracken and Gregor Leander},
journal={arXiv preprint arXiv:0901.1824},
year={2009},
archivePrefix={arXiv},
eprint={0901.1824},
primaryClass={cs.IT math.IT}
} | bracken2009a |
arxiv-6006 | 0901.1827 | Triple-Error-Correcting BCH-Like Codes | <|reference_start|>Triple-Error-Correcting BCH-Like Codes: The binary primitive triple-error-correcting BCH code is a cyclic code of minimum distance 7 with generator polynomial having zeros $\alpha$, $\alpha^3$ and $\alpha^5$ where $\alpha$ is a primitive root of unity. The zero set of the code is said to be {1,3,5}. In the 1970's Kasami showed that one can construct similar triple-error-correcting codes using zero sets consisting of different triples than the BCH codes. Furthermore, in 2000 Chang et. al. found new triples leading to triple-error-correcting codes. In this paper a new such triple is presented. In addition a new method is presented that may be of interest in finding further such triples.<|reference_end|> | arxiv | @article{bracken2009triple-error-correcting,
title={Triple-Error-Correcting BCH-Like Codes},
author={Carl Bracken and Tor Helleseth},
journal={arXiv preprint arXiv:0901.1827},
year={2009},
archivePrefix={arXiv},
eprint={0901.1827},
primaryClass={cs.IT math.IT}
} | bracken2009triple-error-correcting |
arxiv-6007 | 0901.1848 | Detecting lacunary perfect powers and computing their roots | <|reference_start|>Detecting lacunary perfect powers and computing their roots: We consider solutions to the equation f = h^r for polynomials f and h and integer r > 1. Given a polynomial f in the lacunary (also called sparse or super-sparse) representation, we first show how to determine if f can be written as h^r and, if so, to find such an r. This is a Monte Carlo randomized algorithm whose cost is polynomial in the number of non-zero terms of f and in log(deg f), i.e., polynomial in the size of the lacunary representation, and it works over GF(q)[x] (for large characteristic) as well as Q[x]. We also give two deterministic algorithms to compute the perfect root h given f and r. The first is output-sensitive (based on the sparsity of h) and works only over Q[x]. A sparsity-sensitive Newton iteration forms the basis for the second approach to computing h, which is extremely efficient and works over both GF(q)[x] (for large characteristic) and Q[x], but depends on a number-theoretic conjecture. Work of Erdos, Schinzel, Zannier, and others suggests that both of these algorithms are unconditionally polynomial-time in the lacunary size of the input polynomial f. Finally, we demonstrate the efficiency of the randomized detection algorithm and the latter perfect root computation algorithm with an implementation in the C++ library NTL.<|reference_end|> | arxiv | @article{giesbrecht2009detecting,
title={Detecting lacunary perfect powers and computing their roots},
author={Mark Giesbrecht and Daniel S. Roche},
journal={arXiv preprint arXiv:0901.1848},
year={2009},
archivePrefix={arXiv},
eprint={0901.1848},
primaryClass={cs.SC}
} | giesbrecht2009detecting |
arxiv-6008 | 0901.1849 | Randomized Self-Assembly for Exact Shapes | <|reference_start|>Randomized Self-Assembly for Exact Shapes: Working in Winfree's abstract tile assembly model, we show that a constant-size tile assembly system can be programmed through relative tile concentrations to build an n x n square with high probability, for any sufficiently large n. This answers an open question of Kao and Schweller (Randomized Self-Assembly for Approximate Shapes, ICALP 2008), who showed how to build an approximately n x n square using tile concentration programming, and asked whether the approximation could be made exact with high probability. We show how this technique can be modified to answer another question of Kao and Schweller, by showing that a constant-size tile assembly system can be programmed through tile concentrations to assemble arbitrary finite *scaled shapes*, which are shapes modified by replacing each point with a c x c block of points, for some integer c. Furthermore, we exhibit a smooth tradeoff between specifying bits of n via tile concentrations versus specifying them via hard-coded tile types, which allows tile concentration programming to be employed for specifying a fraction of the bits of "input" to a tile assembly system, under the constraint that concentrations can only be specified to a limited precision. Finally, to account for some unrealistic aspects of the tile concentration programming model, we show how to modify the construction to use only concentrations that are arbitrarily close to uniform.<|reference_end|> | arxiv | @article{doty2009randomized,
title={Randomized Self-Assembly for Exact Shapes},
author={David Doty},
journal={arXiv preprint arXiv:0901.1849},
year={2009},
archivePrefix={arXiv},
eprint={0901.1849},
primaryClass={cs.CC cs.DS}
} | doty2009randomized |
arxiv-6009 | 0901.1853 | Binary Causal-Adversary Channels | <|reference_start|>Binary Causal-Adversary Channels: In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min{1-H(p),(1-4p)^+}. Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.<|reference_end|> | arxiv | @article{langberg2009binary,
title={Binary Causal-Adversary Channels},
author={Michael Langberg and Sidharth Jaggi and Bikash Kumar Dey},
journal={arXiv preprint arXiv:0901.1853},
year={2009},
archivePrefix={arXiv},
eprint={0901.1853},
primaryClass={cs.IT math.IT}
} | langberg2009binary |
arxiv-6010 | 0901.1864 | Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs using Reactive Tabu Search | <|reference_start|>Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs using Reactive Tabu Search: Non-orthogonal space-time block codes (STBC) with {\em large dimensions} are attractive because they can simultaneously achieve both high spectral efficiencies (same spectral efficiency as in V-BLAST for a given number of transmit antennas) {\em as well as} full transmit diversity. Decoding of non-orthogonal STBCs with large dimensions has been a challenge. In this paper, we present a reactive tabu search (RTS) based algorithm for decoding non-orthogonal STBCs from cyclic division algebras (CDA) having large dimensions. Under i.i.d fading and perfect channel state information at the receiver (CSIR), our simulation results show that RTS based decoding of $12\times 12$ STBC from CDA and 4-QAM with 288 real dimensions achieves $i)$ $10^{-3}$ uncoded BER at an SNR of just 0.5 dB away from SISO AWGN performance, and $ii)$ a coded BER performance close to within about 5 dB of the theoretical MIMO capacity, using rate-3/4 turbo code at a spectral efficiency of 18 bps/Hz. RTS is shown to achieve near SISO AWGN performance with less number of dimensions than with LAS algorithm (which we reported recently) at some extra complexity than LAS. We also report good BER performance of RTS when i.i.d fading and perfect CSIR assumptions are relaxed by considering a spatially correlated MIMO channel model, and by using a training based iterative RTS decoding/channel estimation scheme.<|reference_end|> | arxiv | @article{srinidhi2009low-complexity,
title={Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs using
Reactive Tabu Search},
author={N. Srinidhi and Saif K. Mohammed and A. Chockalingam and B. Sundar Rajan},
journal={arXiv preprint arXiv:0901.1864},
year={2009},
archivePrefix={arXiv},
eprint={0901.1864},
primaryClass={cs.IT math.IT}
} | srinidhi2009low-complexity |
arxiv-6011 | 0901.1866 | Capacity Achieving Codes From Randomness Condensers | <|reference_start|>Capacity Achieving Codes From Randomness Condensers: We establish a general framework for construction of small ensembles of capacity achieving linear codes for a wide range of (not necessarily memoryless) discrete symmetric channels, and in particular, the binary erasure and symmetric channels. The main tool used in our constructions is the notion of randomness extractors and lossless condensers that are regarded as central tools in theoretical computer science. Same as random codes, the resulting ensembles preserve their capacity achieving properties under any change of basis. Using known explicit constructions of condensers, we obtain specific ensembles whose size is as small as polynomial in the block length. By applying our construction to Justesen's concatenation scheme (Justesen, 1972) we obtain explicit capacity achieving codes for BEC (resp., BSC) with almost linear time encoding and almost linear time (resp., quadratic time) decoding and exponentially small error probability.<|reference_end|> | arxiv | @article{cheraghchi2009capacity,
title={Capacity Achieving Codes From Randomness Condensers},
author={Mahdi Cheraghchi},
journal={arXiv preprint arXiv:0901.1866},
year={2009},
archivePrefix={arXiv},
eprint={0901.1866},
primaryClass={cs.IT math.IT}
} | cheraghchi2009capacity |
arxiv-6012 | 0901.1867 | Belief Propagation Based Decoding of Large Non-Orthogonal STBCs | <|reference_start|>Belief Propagation Based Decoding of Large Non-Orthogonal STBCs: In this paper, we present a belief propagation (BP) based algorithm for decoding non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) having {\em large dimensions}. The proposed approach involves message passing on Markov random field (MRF) representation of the STBC MIMO system. Adoption of BP approach to decode non-orthogonal STBCs of large dimensions has not been reported so far. Our simulation results show that the proposed BP-based decoding achieves increasingly closer to SISO AWGN performance for increased number of dimensions. In addition, it also achieves near-capacity turbo coded BER performance; for e.g., with BP decoding of $24\times 24$ STBC from CDA using BPSK (i.e., 576 real dimensions) and rate-1/2 turbo code (i.e., 12 bps/Hz spectral efficiency), coded BER performance close to within just about 2.5 dB from the theoretical MIMO capacity is achieved.<|reference_end|> | arxiv | @article{suneel2009belief,
title={Belief Propagation Based Decoding of Large Non-Orthogonal STBCs},
author={Madhekar Suneel and Pritam Som and A. Chockalingam and B. Sundar Rajan},
journal={arXiv preprint arXiv:0901.1867},
year={2009},
archivePrefix={arXiv},
eprint={0901.1867},
primaryClass={cs.IT math.IT}
} | suneel2009belief |
arxiv-6013 | 0901.1869 | Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs Using PDA | <|reference_start|>Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs Using PDA: Non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) having large dimensions are attractive because they can simultaneously achieve both high spectral efficiencies (same spectral efficiency as in V-BLAST for a given number of transmit antennas) {\em as well as} full transmit diversity. Decoding of non-orthogonal STBCs with hundreds of dimensions has been a challenge. In this paper, we present a probabilistic data association (PDA) based algorithm for decoding non-orthogonal STBCs with large dimensions. Our simulation results show that the proposed PDA-based algorithm achieves near SISO AWGN uncoded BER as well as near-capacity coded BER (within about 5 dB of the theoretical capacity) for large non-orthogonal STBCs from CDA. We study the effect of spatial correlation on the BER, and show that the performance loss due to spatial correlation can be alleviated by providing more receive spatial dimensions. We report good BER performance when a training-based iterative decoding/channel estimation is used (instead of assuming perfect channel knowledge) in channels with large coherence times. A comparison of the performances of the PDA algorithm and the likelihood ascent search (LAS) algorithm (reported in our recent work) is also presented.<|reference_end|> | arxiv | @article{mohammed2009low-complexity,
title={Low-Complexity Near-ML Decoding of Large Non-Orthogonal STBCs Using PDA},
author={Saif K. Mohammed and A. Chockalingam and B. Sundar Rajan},
journal={arXiv preprint arXiv:0901.1869},
year={2009},
archivePrefix={arXiv},
eprint={0901.1869},
primaryClass={cs.IT math.IT}
} | mohammed2009low-complexity |
arxiv-6014 | 0901.1886 | Efficient erasure decoding of Reed-Solomon codes | <|reference_start|>Efficient erasure decoding of Reed-Solomon codes: We present a practical algorithm to decode erasures of Reed-Solomon codes over the q elements binary field in O(q \log_2^2 q) time where the constant implied by the O-notation is very small. Asymptotically fast algorithms based on fast polynomial arithmetic were already known, but even if their complexity is similar, they are mostly impractical. By comparison our algorithm uses only a few Walsh transforms and has been easily implemented.<|reference_end|> | arxiv | @article{didier2009efficient,
title={Efficient erasure decoding of Reed-Solomon codes},
author={Frederic Didier},
journal={arXiv preprint arXiv:0901.1886},
year={2009},
archivePrefix={arXiv},
eprint={0901.1886},
primaryClass={cs.IT cs.DS math.IT}
} | didier2009efficient |
arxiv-6015 | 0901.1892 | A New Achievable Rate Region for the Discrete Memoryless Multiple-Access Channel with Noiseless Feedback | <|reference_start|>A New Achievable Rate Region for the Discrete Memoryless Multiple-Access Channel with Noiseless Feedback: A new single-letter achievable rate region is proposed for the two-user discrete memoryless multiple-access channel(MAC) with noiseless feedback. The proposed region includes the Cover-Leung rate region [1], and it is shown that the inclusion is strict. The proof uses a block-Markov superposition strategy based on the observation that the messages of the two users are correlated given the feedback. The rates of transmission are too high for each encoder to decode the other's message directly using the feedback, so they transmit correlated information in the next block to learn the message of one another. They then cooperate in the following block to resolve the residual uncertainty of the decoder. The coding scheme may be viewed as a natural generalization of the Cover-Leung scheme with a delay of one extra block and a pair of additional auxiliary random variables. We compute the proposed rate region for two different MACs and compare the results with other known rate regions for the MAC with feedback. Finally, we show how the coding scheme can be extended to obtain larger rate regions with more auxiliary random variables.<|reference_end|> | arxiv | @article{venkataramanan2009a,
title={A New Achievable Rate Region for the Discrete Memoryless Multiple-Access
Channel with Noiseless Feedback},
author={Ramji Venkataramanan and S. Sandeep Pradhan},
journal={IEEE Transactions on Information Theory, vol. 57, no.12, pp.
8038-8054, Dec. 2011},
year={2009},
archivePrefix={arXiv},
eprint={0901.1892},
primaryClass={cs.IT math.IT}
} | venkataramanan2009a |
arxiv-6016 | 0901.1898 | Efficient and Guaranteed Rank Minimization by Atomic Decomposition | <|reference_start|>Efficient and Guaranteed Rank Minimization by Atomic Decomposition: Recht, Fazel, and Parrilo provided an analogy between rank minimization and $\ell_0$-norm minimization. Subject to the rank-restricted isometry property, nuclear norm minimization is a guaranteed algorithm for rank minimization. The resulting semidefinite formulation is a convex problem but in practice the algorithms for it do not scale well to large instances. Instead, we explore missing terms in the analogy and propose a new algorithm which is computationally efficient and also has a performance guarantee. The algorithm is based on the atomic decomposition of the matrix variable and extends the idea in the CoSaMP algorithm for $\ell_0$-norm minimization. Combined with the recent fast low rank approximation of matrices based on randomization, the proposed algorithm can efficiently handle large scale rank minimization problems.<|reference_end|> | arxiv | @article{lee2009efficient,
title={Efficient and Guaranteed Rank Minimization by Atomic Decomposition},
author={Kiryung Lee and Yoram Bresler},
journal={arXiv preprint arXiv:0901.1898},
year={2009},
archivePrefix={arXiv},
eprint={0901.1898},
primaryClass={math.NA cs.IT math.IT}
} | lee2009efficient |
arxiv-6017 | 0901.1900 | Performance bounds on compressed sensing with Poisson noise | <|reference_start|>Performance bounds on compressed sensing with Poisson noise: This paper describes performance bounds for compressed sensing in the presence of Poisson noise when the underlying signal, a vector of Poisson intensities, is sparse or compressible (admits a sparse approximation). The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, where sensing hardware is large or expensive, and limiting the number of measurements collected is important. In this paper, we describe how a feasible positivity-preserving sensing matrix can be constructed, and then analyze the performance of a compressed sensing reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity.<|reference_end|> | arxiv | @article{willett2009performance,
title={Performance bounds on compressed sensing with Poisson noise},
author={Rebecca M. Willett and Maxim Raginsky},
journal={arXiv preprint arXiv:0901.1900},
year={2009},
archivePrefix={arXiv},
eprint={0901.1900},
primaryClass={cs.IT math.IT}
} | willett2009performance |
arxiv-6018 | 0901.1904 | Joint universal lossy coding and identification of stationary mixing sources with general alphabets | <|reference_start|>Joint universal lossy coding and identification of stationary mixing sources with general alphabets: We consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary $\beta$-mixing sources with general (Polish) alphabets. Compression performance is measured in terms of Lagrangians, while identification performance is measured by the variational distance between the true source and the estimated source. Provided that the sources are mixing at a sufficiently fast rate and satisfy certain smoothness and Vapnik-Chervonenkis learnability conditions, it is shown that, for bounded metric distortions, there exist universal schemes for joint lossy compression and identification whose Lagrangian redundancies converge to zero as $\sqrt{V_n \log n /n}$ as the block length $n$ tends to infinity, where $V_n$ is the Vapnik-Chervonenkis dimension of a certain class of decision regions defined by the $n$-dimensional marginal distributions of the sources; furthermore, for each $n$, the decoder can identify $n$-dimensional marginal of the active source up to a ball of radius $O(\sqrt{V_n\log n/n})$ in variational distance, eventually with probability one. The results are supplemented by several examples of parametric sources satisfying the regularity conditions.<|reference_end|> | arxiv | @article{raginsky2009joint,
title={Joint universal lossy coding and identification of stationary mixing
sources with general alphabets},
author={Maxim Raginsky},
journal={arXiv preprint arXiv:0901.1904},
year={2009},
doi={10.1109/TIT.2009.2015987},
archivePrefix={arXiv},
eprint={0901.1904},
primaryClass={cs.IT cs.LG math.IT}
} | raginsky2009joint |
arxiv-6019 | 0901.1905 | Achievability results for statistical learning under communication constraints | <|reference_start|>Achievability results for statistical learning under communication constraints: The problem of statistical learning is to construct an accurate predictor of a random variable as a function of a correlated random variable on the basis of an i.i.d. training sample from their joint distribution. Allowable predictors are constrained to lie in some specified class, and the goal is to approach asymptotically the performance of the best predictor in the class. We consider two settings in which the learning agent only has access to rate-limited descriptions of the training data, and present information-theoretic bounds on the predictor performance achievable in the presence of these communication constraints. Our proofs do not assume any separation structure between compression and learning and rely on a new class of operational criteria specifically tailored to joint design of encoders and learning algorithms in rate-constrained settings.<|reference_end|> | arxiv | @article{raginsky2009achievability,
title={Achievability results for statistical learning under communication
constraints},
author={Maxim Raginsky},
journal={arXiv preprint arXiv:0901.1905},
year={2009},
archivePrefix={arXiv},
eprint={0901.1905},
primaryClass={cs.IT cs.LG math.IT}
} | raginsky2009achievability |
arxiv-6020 | 0901.1906 | How to improve the accuracy of the discrete gradient method in the one-dimensional case | <|reference_start|>How to improve the accuracy of the discrete gradient method in the one-dimensional case: We present a new numerical scheme for one dimensional dynamical systems. This is a modification of the discrete gradient method and keeps its advantages, including the stability and the conservation of the energy integral. However, its accuracy is higher by several orders of magnitude.<|reference_end|> | arxiv | @article{cieslinski2009how,
title={How to improve the accuracy of the discrete gradient method in the
one-dimensional case},
author={Jan L. Cieslinski and Boguslaw Ratkiewicz},
journal={arXiv preprint arXiv:0901.1906},
year={2009},
doi={10.1103/PhysRevE.81.016704},
archivePrefix={arXiv},
eprint={0901.1906},
primaryClass={cs.NA}
} | cieslinski2009how |
arxiv-6021 | 0901.1908 | Entropy, Triangulation, and Point Location in Planar Subdivisions | <|reference_start|>Entropy, Triangulation, and Point Location in Planar Subdivisions: A data structure is presented for point location in connected planar subdivisions when the distribution of queries is known in advance. The data structure has an expected query time that is within a constant factor of optimal. More specifically, an algorithm is presented that preprocesses a connected planar subdivision G of size n and a query distribution D to produce a point location data structure for G. The expected number of point-line comparisons performed by this data structure, when the queries are distributed according to D, is H + O(H^{2/3}+1) where H=H(G,D) is a lower bound on the expected number of point-line comparisons performed by any linear decision tree for point location in G under the query distribution D. The preprocessing algorithm runs in O(n log n) time and produces a data structure of size O(n). These results are obtained by creating a Steiner triangulation of G that has near-minimum entropy.<|reference_end|> | arxiv | @article{collette2009entropy,,
title={Entropy, Triangulation, and Point Location in Planar Subdivisions},
author={Sebastien Collette and Vida Dujmovic and John Iacono and Stefan
Langerman and Pat Morin},
journal={ACM Transactions on Algorithms (TALG), Volume 8 Issue 3, July 2012
Article No. 29},
year={2009},
doi={10.1145/2229163.2229173},
archivePrefix={arXiv},
eprint={0901.1908},
primaryClass={cs.CG cs.DS}
} | collette2009entropy |
arxiv-6022 | 0901.1924 | Interference Avoidance Game in the Gaussian Interference Channel: Sub-Optimal and Optimal Schemes | <|reference_start|>Interference Avoidance Game in the Gaussian Interference Channel: Sub-Optimal and Optimal Schemes: This paper considers a distributed interference avoidance problem employing frequency assignment in the Gaussian interference channel (IC). We divide the common channel into several subchannels and each user chooses the subchannel with less amount of interference from other users as the transmit channel. This mechanism named interference avoidance in this paper can be modeled as a competitive game model. And a completely autonomous distributed iterative algorithm called distributed interference avoidance algorithm (DIA) is adopted to achieve the Nash equilibrium (NE) of the game. Due to the self-optimum, DIA is a sub-optimal algorithm. Therefore, through introducing an optimal compensation into the competitive game model, we successfully develop a compensation-based game model to approximate the optimal interference avoidance problem. Moreover, an optimal algorithm called iterative optimal interference avoidance algorithm (IOIA) is proposed to reach the optimality of the interference avoidance scheme. We analyze the implementation complexities of the two algorithms. We also give the proof on the convergence of the proposed algorithms. The performance upper bound and lower bound are also derived for the proposed algorithms. The simulation results show that IOIA does reach the optimality under condition of interference avoidance mechanism.<|reference_end|> | arxiv | @article{jing2009interference,
title={Interference Avoidance Game in the Gaussian Interference Channel:
Sub-Optimal and Optimal Schemes},
author={Zhenhai Jing, Baoming Bai, Xiao Ma, Ying Li},
journal={arXiv preprint arXiv:0901.1924},
year={2009},
archivePrefix={arXiv},
eprint={0901.1924},
primaryClass={cs.IT math.IT}
} | jing2009interference |
arxiv-6023 | 0901.1936 | A Lower Bound on the Capacity of Wireless Erasure Networks with Random Node Locations | <|reference_start|>A Lower Bound on the Capacity of Wireless Erasure Networks with Random Node Locations: In this paper, a lower bound on the capacity of wireless ad hoc erasure networks is derived in closed form in the canonical case where $n$ nodes are uniformly and independently distributed in the unit area square. The bound holds almost surely and is asymptotically tight. We assume all nodes have fixed transmit power and hence two nodes should be within a specified distance $r_n$ of each other to overcome noise. In this context, interference determines outages, so we model each transmitter-receiver pair as an erasure channel with a broadcast constraint, i.e. each node can transmit only one signal across all its outgoing links. A lower bound of $\Theta(n r_n)$ for the capacity of this class of networks is derived. If the broadcast constraint is relaxed and each node can send distinct signals on distinct outgoing links, we show that the gain is a function of $r_n$ and the link erasure probabilities, and is at most a constant if the link erasure probabilities grow sufficiently large with $n$. Finally, the case where the erasure probabilities are themselves random variables, for example due to randomness in geometry or channels, is analyzed. We prove somewhat surprisingly that in this setting, variability in erasure probabilities increases network capacity.<|reference_end|> | arxiv | @article{jaber2009a,
title={A Lower Bound on the Capacity of Wireless Erasure Networks with Random
Node Locations},
author={Rayyan G. Jaber and Jeffrey G. Andrews},
journal={arXiv preprint arXiv:0901.1936},
year={2009},
archivePrefix={arXiv},
eprint={0901.1936},
primaryClass={cs.IT cs.NI math.IT math.PR}
} | jaber2009a |
arxiv-6024 | 0901.1945 | A mathematical proof of the existence of trends in financial time series | <|reference_start|>We are settling a longstanding quarrel in quantitative finance by proving the existence of trends in financial time series thanks to a theorem due to P. Cartier and Y. Perrin, which is expressed in the language of nonstandard analysis (Integration over finite sets, F. & M. Diener (Eds): Nonstandard Analysis in Practice, Springer, 1995, pp. 195--204). Those trends, which might coexist with some altered random walk paradigm and efficient market hypothesis, seem nevertheless difficult to reconcile with the celebrated Black-Scholes model. They are estimated via recent techniques stemming from control and signal theory. Several quite convincing computer simulations on the forecast of various financial quantities are depicted. We conclude by discussing the rôle of probability theory.<|reference_end|> | arxiv | @article{fliess2009a,
title={A mathematical proof of the existence of trends in financial time series},
author={Michel Fliess (LIX, INRIA Saclay - Ile de France), Cédric Join
(INRIA Saclay - Ile de France, CRAN)},
journal={Systems Theory: Modelling, Analysis and Control (2009) 43-62},
year={2009},
archivePrefix={arXiv},
eprint={0901.1945},
primaryClass={q-fin.ST cs.CE math.CA math.PR q-fin.CP stat.AP}
} | fliess2009a |
arxiv-6025 | 0901.1954 | Two-Way Relay Channels: Error Exponents and Resource Allocation | <|reference_start|>Two-Way Relay Channels: Error Exponents and Resource Allocation: In a two-way relay network, two terminals exchange information over a shared wireless half-duplex channel with the help of a relay. Due to its fundamental and practical importance, there has been an increasing interest in this channel. However, surprisingly, there has been little work that characterizes the fundamental tradeoff between the communication reliability and transmission rate across all signal-to-noise ratios. In this paper, we consider amplify-and-forward (AF) two-way relaying due to its simplicity. We first derive the random coding error exponent for the link in each direction. From the exponent expression, the capacity and cutoff rate for each link are also deduced. We then put forth the notion of the bottleneck error exponent, which is the worst exponent decay between the two links, to give us insight into the fundamental tradeoff between the rate pair and information-exchange reliability of the two terminals. As applications of the error exponent analysis, we present two optimal resource allocations to maximize the bottleneck error exponent: i) the optimal rate allocation under a sum-rate constraint and its closed-form quasi-optimal solution that requires only knowledge of the capacity and cutoff rate of each link; and ii) the optimal power allocation under a total power constraint, which is formulated as a quasi-convex optimization problem. Numerical results verify our analysis and the effectiveness of the optimal rate and power allocations in maximizing the bottleneck error exponent.<|reference_end|> | arxiv | @article{ngo2009two-way,
title={Two-Way Relay Channels: Error Exponents and Resource Allocation},
author={Hien Quoc Ngo, Tony Q.S. Quek and Hyundong Shin},
journal={arXiv preprint arXiv:0901.1954},
year={2009},
archivePrefix={arXiv},
eprint={0901.1954},
primaryClass={cs.IT math.IT}
} | ngo2009two-way |
arxiv-6026 | 0901.1964 | Optimal Detector for Channels with Non-Gaussian Interference | <|reference_start|>The detection problem in the Gaussian interference channel is addressed, when transmitters employ non-Gaussian schemes designed for the single-user Gaussian channel. A structure consisting of a separate symbol-by-symbol detector and a hard decoder is considered. Given this structure, an optimal detector is presented that is compared to an interference-unaware conventional detector, an interference-aware successive interference cancellation (SIC) detector, and a minimum-distance detector. It is demonstrated analytically and by simulation that the optimal detector outperforms both the conventional and the SIC detector, and that it attains decreasing symbol error rates even in the presence of strong interference. Moreover, the minimum-distance detector performs almost as well as the optimal detector in most scenarios and is significantly less complex.<|reference_end|> | arxiv | @article{lee2009optimal,
title={Optimal Detector for Channels with Non-Gaussian Interference},
author={Jungwon Lee, Dimitris Toumpakaris, Hui-Ling Lou},
journal={arXiv preprint arXiv:0901.1964},
year={2009},
archivePrefix={arXiv},
eprint={0901.1964},
primaryClass={cs.IT math.IT}
} | lee2009optimal |
arxiv-6027 | 0901.1971 | Decoding Frequency Permutation Arrays under Infinite norm | <|reference_start|>Decoding Frequency Permutation Arrays under Infinite norm: A frequency permutation array (FPA) of length $n=m\lambda$ and distance $d$ is a set of permutations on a multiset over $m$ symbols, where each symbol appears exactly $\lambda$ times and the distance between any two elements in the array is at least $d$. FPA generalizes the notion of permutation array. In this paper, under the distance metric $\ell_\infty$-norm, we first prove lower and upper bounds on the size of FPA. Then we give a construction of FPA with efficient encoding and decoding capabilities. Moreover, we show our design is locally decodable, i.e., we can decode a message bit by reading at most $\lambda+1$ symbols, which has an interesting application for private information retrieval.<|reference_end|> | arxiv | @article{shieh2009decoding,
title={Decoding Frequency Permutation Arrays under Infinite norm},
author={Min-Zheng Shieh and Shi-Chun Tsai},
journal={arXiv preprint arXiv:0901.1971},
year={2009},
archivePrefix={arXiv},
eprint={0901.1971},
primaryClass={cs.IT math.IT}
} | shieh2009decoding |
arxiv-6028 | 0901.1988 | Many-Help-One Problem for Gaussian Sources with a Tree Structure on their Correlation | <|reference_start|>In this paper we consider the separate coding problem for $L+1$ correlated Gaussian memoryless sources. We deal with the case where $L$ separately encoded data of sources work as side information at the decoder for the reconstruction of the remaining source. The determination problem of the rate distortion region for this system is the so called many-help-one problem and has been known as a highly challenging problem. The author determined the rate distortion region in the case where the $L$ sources working as partial side information are conditionally independent if the remaining source we wish to reconstruct is given. This condition on the correlation is called the CI condition. In this paper we extend the author's previous result to the case where $L+1$ sources satisfy a kind of tree structure on their correlation. We call this tree structure of information sources the TS condition, which contains the CI condition as a special case. In this paper we derive an explicit outer bound of the rate distortion region when information sources satisfy the TS condition. We further derive an explicit sufficient condition for this outer bound to be tight. In particular, we determine the sum rate part of the rate distortion region for the case where information sources satisfy the TS condition. For some class of Gaussian sources with the TS condition we derive an explicit recursive formula of this sum rate part.<|reference_end|> | arxiv | @article{oohama2009many-help-one,
title={Many-Help-One Problem for Gaussian Sources with a Tree Structure on
their Correlation},
author={Yasutada Oohama},
journal={arXiv preprint arXiv:0901.1988},
year={2009},
archivePrefix={arXiv},
eprint={0901.1988},
primaryClass={cs.IT math.IT}
} | oohama2009many-help-one |
arxiv-6029 | 0901.2042 | Average Capacity Analysis of Continuous-Time Frequency-Selective Rayleigh Fading Channels with Correlated Scattering Using Majorization | <|reference_start|>Average Capacity Analysis of Continuous-Time Frequency-Selective Rayleigh Fading Channels with Correlated Scattering Using Majorization: Correlated scattering occurs naturally in frequency-selective fading channels and its impact on the performance needs to be understood. In particular, we answer the question whether the uncorrelated scattering model leads to an optimistic or pessimistic estimation of the actual average capacity. In the paper, we use majorization for functions to show that the average rate with perfectly informed receiver is largest for uncorrelated scattering if the transmitter is uninformed. If the transmitter knows the channel statistics, it can exploit this knowledge. We show that for small SNR, the behavior is opposite, uncorrelated scattering leads to a lower bound on the average capacity. Finally, we provide an example of the theoretical results for an attenuated Ornstein-Uhlenbeck process including illustrations.<|reference_end|> | arxiv | @article{jorswieck2009average,
title={Average Capacity Analysis of Continuous-Time Frequency-Selective
Rayleigh Fading Channels with Correlated Scattering Using Majorization},
author={Eduard Jorswieck and Martin Mittelbach},
journal={arXiv preprint arXiv:0901.2042},
year={2009},
archivePrefix={arXiv},
eprint={0901.2042},
primaryClass={cs.IT math.IT}
} | jorswieck2009average |
arxiv-6030 | 0901.2062 | Notes on Reed-Muller Codes | <|reference_start|>Notes on Reed-Muller Codes: In this paper, we consider the Reed-Muller (RM) codes. For the first order RM code, we prove that it is unique in the sense that any linear code with the same length, dimension and minimum distance must be the first order RM code; For the second order RM code, we give a constructive linear sub-code family for the case when m is even. This is an extension of Corollary 17 of Ch. 15 in the coding book by MacWilliams and Sloane. Furthermore, we show that the specified sub-codes of length <= 256 have minimum distance equal to the upper bound or the best known lower bound for all linear codes of the same length and dimension. As another interesting result, we derive an additive commutative group of the symplectic matrices with full rank.<|reference_end|> | arxiv | @article{chen2009notes,
title={Notes on Reed-Muller Codes},
author={Yanling Chen and Han Vinck},
journal={arXiv preprint arXiv:0901.2062},
year={2009},
archivePrefix={arXiv},
eprint={0901.2062},
primaryClass={cs.IT math.IT}
} | chen2009notes |
arxiv-6031 | 0901.2068 | Beyond Language Equivalence on Visibly Pushdown Automata | <|reference_start|>Beyond Language Equivalence on Visibly Pushdown Automata: We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata and show that they can be decided in polynomial time.<|reference_end|> | arxiv | @article{srba2009beyond,
title={Beyond Language Equivalence on Visibly Pushdown Automata},
author={Jiří Srba},
journal={Logical Methods in Computer Science, Volume 5, Issue 1 (January
26, 2009) lmcs:756},
year={2009},
doi={10.2168/LMCS-5(1:2)2009},
archivePrefix={arXiv},
eprint={0901.2068},
primaryClass={cs.CC cs.LO}
} | srba2009beyond |
arxiv-6032 | 0901.2069 | Encapsulation theory: the transformation equations of absolute information hiding | <|reference_start|>Encapsulation theory: the transformation equations of absolute information hiding: This paper describes how the maximum potential number of edges of an encapsulated graph varies as the graph is transformed, that is, as nodes are created and modified. The equations governing these changes of maximum potential number of edges caused by the transformations are derived and briefly analysed.<|reference_end|> | arxiv | @article{kirwan2009encapsulation,
title={Encapsulation theory: the transformation equations of absolute
information hiding},
author={Edmund Kirwan},
journal={arXiv preprint arXiv:0901.2069},
year={2009},
archivePrefix={arXiv},
eprint={0901.2069},
primaryClass={cs.SE}
} | kirwan2009encapsulation |
arxiv-6033 | 0901.2082 | On Source-Channel Separation in Networks | <|reference_start|>On Source-Channel Separation in Networks: This paper has been withdrawn.<|reference_end|> | arxiv | @article{avestimehr2009on,
title={On Source-Channel Separation in Networks},
author={Salman Avestimehr, Giuseppe Caire, David Tse},
journal={arXiv preprint arXiv:0901.2082},
year={2009},
archivePrefix={arXiv},
eprint={0901.2082},
primaryClass={cs.IT math.IT}
} | avestimehr2009on |
arxiv-6034 | 0901.2090 | Two-Bit Message Passing Decoders for LDPC Codes Over the Binary Symmetric Channel | <|reference_start|>Two-Bit Message Passing Decoders for LDPC Codes Over the Binary Symmetric Channel: In this paper, we consider quantized decoding of LDPC codes on the binary symmetric channel. The binary message passing algorithms, while allowing extremely fast hardware implementation, are not very attractive from the perspective of performance. More complex decoders such as the ones based on belief propagation exhibit superior performance but lead to slower decoders. The approach in this paper is to consider message passing decoders that have larger message alphabet (thereby providing performance improvement) as well as low complexity (thereby ensuring fast decoding). We propose a class of message-passing decoders whose messages are represented by two bits. The thresholds for various decoders in this class are derived using density evolution. The problem of correcting a fixed number of errors assumes significance in the error floor region. For a specific decoder, the sufficient conditions for correcting all patterns with up to three errors are derived. By comparing these conditions and thresholds to the similar ones when Gallager B decoder is used, we emphasize the advantage of decoding on a higher number of bits, even if the channel observation is still one bit.<|reference_end|> | arxiv | @article{sassatelli2009two-bit,
title={Two-Bit Message Passing Decoders for LDPC Codes Over the Binary
Symmetric Channel},
author={Lucile Sassatelli, Shashi Kiran Chilappagari, Bane Vasic, David
Declercq},
journal={arXiv preprint arXiv:0901.2090},
year={2009},
doi={10.1109/ISIT.2009.5205790},
archivePrefix={arXiv},
eprint={0901.2090},
primaryClass={cs.IT math.IT}
} | sassatelli2009two-bit |
arxiv-6035 | 0901.2094 | The Sensing Capacity of Sensor Networks | <|reference_start|>This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define a quantity called the sensing capacity and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that a fixed sensor configuration encodes all states of the environment. As a result, codewords are dependent and non-identically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.<|reference_end|> | arxiv | @article{rachlin2009the,
title={The Sensing Capacity of Sensor Networks},
author={Yaron Rachlin, Rohit Negi, Pradeep Khosla},
journal={arXiv preprint arXiv:0901.2094},
year={2009},
doi={10.1109/TIT.2010.2103733},
archivePrefix={arXiv},
eprint={0901.2094},
primaryClass={cs.IT math.IT}
} | rachlin2009the |
arxiv-6036 | 0901.2120 | Invertible Extractors and Wiretap Protocols | <|reference_start|>Invertible Extractors and Wiretap Protocols: A wiretap protocol is a pair of randomized encoding and decoding functions such that knowledge of a bounded fraction of the encoding of a message reveals essentially no information about the message, while knowledge of the entire encoding reveals the message using the decoder. In this paper we study the notion of efficiently invertible extractors and show that a wiretap protocol can be constructed from such an extractor. We will then construct invertible extractors for symbol-fixing, affine, and general sources and apply them to create wiretap protocols with asymptotically optimal trade-offs between their rate (ratio of the length of the message versus its encoding) and resilience (ratio of the observed positions of the encoding and the length of the encoding). We will then apply our results to create wiretap protocols for challenging communication problems, such as active intruders who change portions of the encoding, network coding, and intruders observing arbitrary boolean functions of the encoding. As a by-product of our constructions we obtain new explicit extractors for a restricted family of affine sources over large fields (that in particular generalizes the notion of symbol-fixing sources) which is of independent interest. These extractors are able to extract the entire source entropy with zero error. Keywords: Wiretap Channel, Extractors, Network Coding, Active Intrusion, Exposure Resilient Cryptography.<|reference_end|> | arxiv | @article{cheraghchi2009invertible,
title={Invertible Extractors and Wiretap Protocols},
author={Mahdi Cheraghchi, Frederic Didier, Amin Shokrollahi},
journal={arXiv preprint arXiv:0901.2120},
year={2009},
archivePrefix={arXiv},
eprint={0901.2120},
primaryClass={cs.IT math.IT}
} | cheraghchi2009invertible |
arxiv-6037 | 0901.2130 | Hiding Quiet Solutions in Random Constraint Satisfaction Problems | <|reference_start|>Hiding Quiet Solutions in Random Constraint Satisfaction Problems: We study constraint satisfaction problems on the so-called 'planted' random ensemble. We show that for a certain class of problems, e.g. graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions, and the easy/hard/easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid/glass/solid phenomenology.<|reference_end|> | arxiv | @article{krzakala2009hiding,
title={Hiding Quiet Solutions in Random Constraint Satisfaction Problems},
author={Florent Krzakala and Lenka Zdeborová},
journal={Phys. Rev. Lett. 102, 238701 (2009)},
year={2009},
doi={10.1103/PhysRevLett.102.238701},
archivePrefix={arXiv},
eprint={0901.2130},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.AI cs.CC}
} | krzakala2009hiding |
arxiv-6038 | 0901.2140 | Efficient reconciliation protocol for discrete-variable quantum key distribution | <|reference_start|>Efficient reconciliation protocol for discrete-variable quantum key distribution: Reconciliation is an essential part of any secret-key agreement protocol and hence of a Quantum Key Distribution (QKD) protocol, where two legitimate parties are given correlated data and want to agree on a common string in the presence of an adversary, while revealing a minimum amount of information. In this paper, we show that for discrete-variable QKD protocols, this problem can be advantageously solved with Low Density Parity Check (LDPC) codes optimized for the BSC. In particular, we demonstrate that our method leads to a significant improvement of the achievable secret key rate, with respect to earlier interactive reconciliation methods used in QKD.<|reference_end|> | arxiv | @article{elkouss2009efficient,
title={Efficient reconciliation protocol for discrete-variable quantum key
distribution},
author={David Elkouss, Anthony Leverrier, Romain Alléaume, Joseph Boutros},
journal={arXiv preprint arXiv:0901.2140},
year={2009},
doi={10.1109/ISIT.2009.5205475},
archivePrefix={arXiv},
eprint={0901.2140},
primaryClass={cs.IT math.IT quant-ph}
} | elkouss2009efficient |
arxiv-6039 | 0901.2143 | Coding for Parallel Links to Maximize Expected Decodable-Message Value | <|reference_start|>Coding for Parallel Links to Maximize Expected Decodable-Message Value: Future communication scenarios for NASA spacecraft may involve multiple communication links and relay nodes, so that there is essentially a network in which there may be multiple paths from a sender to a destination. The availability of individual links may be uncertain. In this paper, scenarios are considered in which the goal is to maximize a payoff that assigns weight based on the worth of data and the probability of successful transmission. Ideally, the choice of what information to send over the various links will provide protection of high value data when many links are unavailable, yet result in communication of significant additional data when most links are available. Here the focus is on the simple network of multiple parallel links, where the links have known capacities and outage probabilities. Given a set of simple inter-link codes, linear programming can be used to find the optimal timesharing strategy among these codes. Some observations are made about the problem of determining all potentially useful codes, and techniques to assist in such determination are presented.<|reference_end|> | arxiv | @article{chang2009coding,
title={Coding for Parallel Links to Maximize Expected Decodable-Message Value},
author={Christopher S. Chang, Matthew A. Klimesh},
journal={arXiv preprint arXiv:0901.2143},
year={2009},
archivePrefix={arXiv},
eprint={0901.2143},
primaryClass={cs.IT math.IT}
} | chang2009coding |
arxiv-6040 | 0901.2147 | Bit Precision Analysis for Compressed Sensing | <|reference_start|>This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a $2k \times n$ Vandermonde matrix with roots on the unit circle as the measurement matrix, $O(\ell + k \log(n/k))$ bits of precision per measurement are sufficient to reconstruct a $k$-sparse signal $x \in \mathbb{R}^n$ with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most $2^\ell$ within $\ell$ bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.<|reference_end|> | arxiv | @article{ardestanizadeh2009bit,
title={Bit Precision Analysis for Compressed Sensing},
author={Ehsan Ardestanizadeh, Mahdi Cheraghchi, Amin Shokrollahi},
journal={arXiv preprint arXiv:0901.2147},
year={2009},
archivePrefix={arXiv},
eprint={0901.2147},
primaryClass={cs.IT math.IT}
} | ardestanizadeh2009bit |
arxiv-6041 | 0901.2151 | Improved community structure detection using a modified fine tuning strategy | <|reference_start|>The community structure of a complex network can be determined by finding the partitioning of its nodes that maximizes modularity. Many of the proposed algorithms for doing this work by recursively bisecting the network. We show that this unduly constrains their results, leading to a bias in the size of the communities they find and limiting their effectiveness. To solve this problem, we propose adding a step to the existing algorithms that does not increase the order of their computational complexity. We show that, if this step is combined with a commonly used method, the identified constraint and resulting bias are removed, and its ability to find the optimal partitioning is improved. The effectiveness of this combined algorithm is also demonstrated by using it on real-world example networks. For a number of these examples, it achieves the best results of any known algorithm.<|reference_end|> | arxiv | @article{sun2009improved,
title={Improved community structure detection using a modified fine tuning
strategy},
author={Yudong Sun, Bogdan Danila, Kresimir Josic, and Kevin E. Bassler},
journal={arXiv preprint arXiv:0901.2151},
year={2009},
doi={10.1209/0295-5075/86/28004},
archivePrefix={arXiv},
eprint={0901.2151},
primaryClass={cs.CY cond-mat.stat-mech cs.DS physics.comp-ph physics.soc-ph q-bio.QM}
} | sun2009improved |
arxiv-6042 | 0901.2160 | Analysis of Uncoordinated Opportunistic Two-Hop Wireless Ad Hoc Systems | <|reference_start|>Analysis of Uncoordinated Opportunistic Two-Hop Wireless Ad Hoc Systems: We consider a time-slotted two-hop wireless system in which the sources transmit to the relays in the even time slots (first hop) and the relays forward the packets to the destinations in the odd time slots (second hop). Each source may connect to multiple relays in the first hop. In the presence of interference and without tight coordination of the relays, it is not clear which relays should transmit the packet. We propose four decentralized methods of relay selection, some based on location information and others based on the received signal strength (RSS). We provide a complete analytical characterization of these methods using tools from stochastic geometry. We use simulation results to compare these methods in terms of end-to-end success probability.<|reference_end|> | arxiv | @article{ganti2009analysis,
title={Analysis of Uncoordinated Opportunistic Two-Hop Wireless Ad Hoc Systems},
author={Radha Krishna Ganti and Martin Haenggi},
journal={arXiv preprint arXiv:0901.2160},
year={2009},
archivePrefix={arXiv},
eprint={0901.2160},
primaryClass={cs.IT math.IT}
} | ganti2009analysis |
arxiv-6043 | 0901.2164 | Cooperative Multiplexing in the Multiple Antenna Half Duplex Relay Channel | <|reference_start|>Cooperative Multiplexing in the Multiple Antenna Half Duplex Relay Channel: Cooperation between terminals has been proposed to improve the reliability and throughput of wireless communication. While recent work has shown that relay cooperation provides increased diversity, increased multiplexing gain over that offered by direct link has largely been unexplored. In this work we show that cooperative multiplexing gain can be achieved by using a half duplex relay. We capture relative distances between terminals in the high SNR diversity multiplexing tradeoff (DMT) framework. The DMT performance is then characterized for a network having a single antenna half-duplex relay between a single-antenna source and two-antenna destination. Our results show that the achievable multiplexing gain using cooperation can be greater than that of the direct link and is a function of the relative distance between source and relay compared to the destination. Moreover, for multiplexing gains less than 1, a simple scheme of the relay listening 1/3 of the time and transmitting 2/3 of the time can achieve the 2 by 2 MIMO DMT.<|reference_end|> | arxiv | @article{nagpal2009cooperative,
title={Cooperative Multiplexing in the Multiple Antenna Half Duplex Relay
Channel},
author={Vinayak Nagpal, Sameer Pawar, David Tse, Borivoje Nikolic},
journal={2009 IEEE International Symposium on Information Theory (ISIT 2009),
pp. 1438-1442, June 28-July 3, 2009},
year={2009},
doi={10.1109/ISIT.2009.5205885},
archivePrefix={arXiv},
eprint={0901.2164},
primaryClass={cs.IT math.IT}
} | nagpal2009cooperative |
arxiv-6044 | 0901.2166 | A Trace Based Bisimulation for the Spi Calculus | <|reference_start|>A notion of open bisimulation is formulated for the spi calculus, an extension of the pi-calculus with cryptographic primitives. In this formulation, open bisimulation is indexed by pairs of symbolic traces, which represent the history of interactions between the environment and the pairs of processes being checked for bisimilarity. The use of symbolic traces allows for a symbolic treatment of bound input in bisimulation checking which avoids quantification over input values. Open bisimilarity is shown to be sound with respect to testing equivalence, and further, it is shown to be an equivalence relation on processes and a congruence relation on finite processes. As far as we know, this is the first formulation of open bisimulation for the spi calculus for which the congruence result is proved.<|reference_end|> | arxiv | @article{tiu2009a,
title={A Trace Based Bisimulation for the Spi Calculus},
author={Alwen Tiu},
journal={arXiv preprint arXiv:0901.2166},
year={2009},
archivePrefix={arXiv},
eprint={0901.2166},
primaryClass={cs.CR cs.LO}
} | tiu2009a |
arxiv-6045 | 0901.2192 | On Optimal Secure Message Transmission by Public Discussion | <|reference_start|>On Optimal Secure Message Transmission by Public Discussion: In a secure message transmission (SMT) scenario a sender wants to send a message in a private and reliable way to a receiver. Sender and receiver are connected by $n$ vertex disjoint paths, referred to as wires, $t$ of which can be controlled by an adaptive adversary with unlimited computational resources. In Eurocrypt 2008, Garay and Ostrovsky considered an SMT scenario where sender and receiver have access to a public discussion channel and showed that secure and reliable communication is possible when $n \geq t+1$. In this paper we will show that a secure protocol requires at least 3 rounds of communication and 2 rounds invocation of the public channel and hence give a complete answer to the open question raised by Garay and Ostrovsky. We also describe a round optimal protocol that has \emph{constant} transmission rate over the public channel.<|reference_end|> | arxiv | @article{shi2009on,
title={On Optimal Secure Message Transmission by Public Discussion},
author={Hongsong Shi, Shaoquan Jiang, Rei Safavi-Naini, Mohammed Ashraful
Tuhin},
journal={arXiv preprint arXiv:0901.2192},
year={2009},
archivePrefix={arXiv},
eprint={0901.2192},
primaryClass={cs.CR cs.IT math.IT}
} | shi2009on |
arxiv-6046 | 0901.2194 | Iterative Spectrum Shaping with Opportunistic Multiuser Detection | <|reference_start|>Iterative Spectrum Shaping with Opportunistic Multiuser Detection: This paper studies a new decentralized resource allocation strategy, named iterative spectrum shaping (ISS), for the multi-carrier-based multiuser communication system, where two coexisting users independently and sequentially update transmit power allocations over parallel subcarriers to maximize their individual transmit rates. Unlike the conventional iterative water-filling (IWF) algorithm that applies the single-user detection (SD) at each user's receiver by treating the interference from the other user as additional noise, the proposed ISS algorithm applies multiuser detection techniques to decode both the desired user's and interference user's messages if it is feasible, thus termed as opportunistic multiuser detection (OMD). Two encoding methods are considered for ISS: One is carrier independent encoding where independent codewords are modulated by different subcarriers for which different decoding methods can be applied; the other is carrier joint encoding where a single codeword is modulated by all the subcarriers for which a single decoder is applied. For each encoding method, this paper presents the associated optimal user power and rate allocation strategy at each iteration of transmit adaptation. It is shown that under many circumstances the proposed ISS algorithm employing OMD is able to achieve substantial throughput gains over the conventional IWF algorithm employing SD for decentralized spectrum sharing. Applications of ISS in cognitive radio communication systems are also discussed.<|reference_end|> | arxiv | @article{zhang2009iterative,
title={Iterative Spectrum Shaping with Opportunistic Multiuser Detection},
author={Rui Zhang and John Cioffi},
journal={arXiv preprint arXiv:0901.2194},
year={2009},
archivePrefix={arXiv},
eprint={0901.2194},
primaryClass={cs.IT math.IT}
} | zhang2009iterative |
arxiv-6047 | 0901.2198 | Feasible alphabets for communicating the sum of sources over a network | <|reference_start|>Feasible alphabets for communicating the sum of sources over a network: We consider directed acyclic {\em sum-networks} with $m$ sources and $n$ terminals where the sources generate symbols from an arbitrary alphabet field $F$, and the terminals need to recover the sum of the sources over $F$. We show that for any co-finite set of primes, there is a sum-network which is solvable only over fields of characteristics belonging to that set. We further construct a sum-network where a scalar solution exists over all fields other than the binary field $F_2$. We also show that a sum-network is solvable over a field if and only if its reverse network is solvable over the same field.<|reference_end|> | arxiv | @article{rai2009feasible,
title={Feasible alphabets for communicating the sum of sources over a network},
author={Brijesh Kumar Rai and Bikash Kumar Dey},
journal={arXiv preprint arXiv:0901.2198},
year={2009},
archivePrefix={arXiv},
eprint={0901.2198},
primaryClass={cs.IT math.IT}
} | rai2009feasible |
arxiv-6048 | 0901.2204 | Finite-Length Analysis of Irregular Expurgated LDPC Codes under Finite Number of Iterations | <|reference_start|>Finite-Length Analysis of Irregular Expurgated LDPC Codes under Finite Number of Iterations: Communication over the binary erasure channel (BEC) using low-density parity-check (LDPC) codes and belief propagation (BP) decoding is considered. The average bit error probability of an irregular LDPC code ensemble after a fixed number of iterations converges to a limit, which is calculated via density evolution, as the blocklength $n$ tends to infinity. The difference between the bit error probability with blocklength $n$ and the large-blocklength limit behaves asymptotically like $\alpha/n$, where the coefficient $\alpha$ depends on the ensemble, the number of iterations and the erasure probability of the BEC. In [1], $\alpha$ is calculated for regular ensembles. In this paper, $\alpha$ for irregular expurgated ensembles is derived. It is demonstrated that convergence of numerical estimates of $\alpha$ to the analytic result is significantly fast for irregular unexpurgated ensembles.<|reference_end|> | arxiv | @article{mori2009finite-length,
title={Finite-Length Analysis of Irregular Expurgated LDPC Codes under Finite
Number of Iterations},
author={Ryuhei Mori, Toshiyuki Tanaka, Kenta Kasai, and Kohichi Sakaniwa},
journal={arXiv preprint arXiv:0901.2204},
year={2009},
archivePrefix={arXiv},
eprint={0901.2204},
primaryClass={cs.IT math.IT}
} | mori2009finite-length |
arxiv-6049 | 0901.2207 | Performance and Construction of Polar Codes on Symmetric Binary-Input Memoryless Channels | <|reference_start|>Performance and Construction of Polar Codes on Symmetric Binary-Input Memoryless Channels: Channel polarization is a method of constructing capacity achieving codes for symmetric binary-input discrete memoryless channels (B-DMCs) [1]. In the original paper, the construction complexity is exponential in the blocklength. In this paper, a new construction method for arbitrary symmetric binary memoryless channel (B-MC) with linear complexity in the blocklength is proposed. Furthermore, new upper and lower bounds of the block error probability of polar codes are derived for the BEC and the arbitrary symmetric B-MC, respectively.<|reference_end|> | arxiv | @article{mori2009performance,
title={Performance and Construction of Polar Codes on Symmetric Binary-Input
Memoryless Channels},
author={Ryuhei Mori and Toshiyuki Tanaka},
journal={arXiv preprint arXiv:0901.2207},
year={2009},
archivePrefix={arXiv},
eprint={0901.2207},
primaryClass={cs.IT math.IT}
} | mori2009performance |
arxiv-6050 | 0901.2216 | Discovering Global Patterns in Linguistic Networks through Spectral Analysis: A Case Study of the Consonant Inventories | <|reference_start|>Discovering Global Patterns in Linguistic Networks through Spectral Analysis: A Case Study of the Consonant Inventories: Recent research has shown that language and the socio-cognitive phenomena associated with it can be aptly modeled and visualized through networks of linguistic entities. However, most of the existing works on linguistic networks focus only on the local properties of the networks. This study is an attempt to analyze the structure of languages via a purely structural technique, namely spectral analysis, which is ideally suited for discovering the global correlations in a network. Application of this technique to PhoNet, the co-occurrence network of consonants, not only reveals several natural linguistic principles governing the structure of the consonant inventories, but is also able to quantify their relative importance. We believe that this powerful technique can be successfully applied, in general, to study the structure of natural languages.<|reference_end|> | arxiv | @article{mukherjee2009discovering,
title={Discovering Global Patterns in Linguistic Networks through Spectral
Analysis: A Case Study of the Consonant Inventories},
author={Animesh Mukherjee, Monojit Choudhury and Ravi Kannan},
journal={arXiv preprint arXiv:0901.2216},
year={2009},
archivePrefix={arXiv},
eprint={0901.2216},
primaryClass={cs.CL physics.data-an}
} | mukherjee2009discovering |
arxiv-6051 | 0901.2218 | Slepian-Wolf Coding over Cooperative Networks | <|reference_start|>Slepian-Wolf Coding over Cooperative Networks: We present sufficient conditions for multicasting a set of correlated sources over cooperative networks. We propose joint source-Wyner-Ziv encoding/sliding-window decoding scheme, in which each receiver considers an ordered partition of other nodes. Subject to this scheme, we obtain a set of feasibility constraints for each ordered partition. We consolidate the results of different ordered partitions by utilizing a result of geometrical approach to obtain the sufficient conditions. We observe that these sufficient conditions are indeed necessary conditions for Aref networks. As a consequence of the main result, we obtain an achievable rate region for networks with multicast demands. Also, we deduce an achievability result for two-way relay networks, in which two nodes want to communicate over a relay network.<|reference_end|> | arxiv | @article{yassaee2009slepian-wolf,
title={Slepian-Wolf Coding over Cooperative Networks},
author={Mohammad Hossein Yassaee, Mohammad Reza Aref},
journal={arXiv preprint arXiv:0901.2218},
year={2009},
archivePrefix={arXiv},
eprint={0901.2218},
primaryClass={cs.IT math.IT}
} | yassaee2009slepian-wolf |
arxiv-6052 | 0901.2224 | Concept-Oriented Model and Query Language | <|reference_start|>Concept-Oriented Model and Query Language: We describe a new approach to data modeling, called the concept-oriented model (COM), and a novel concept-oriented query language (COQL). The model is based on three principles: duality principle postulates that any element is a couple consisting of one identity and one entity, inclusion principle postulates that any element has a super-element, and order principle assumes that any element has a number of greater elements within a partially ordered set. Concept-oriented query language is based on a new data modeling construct, called concept, inclusion relation between concepts, and concept partial ordering in which greater concepts are represented by their field types. It is demonstrated how COM and COQL can be used to solve three general data modeling tasks: logical navigation, multidimensional analysis and inference. Logical navigation is based on two operations of projection and de-projection. Multidimensional analysis uses product operation for producing a cube from level concepts chosen along the chosen dimension paths. Inference is defined as a two-step procedure where input constraints are first propagated downwards using de-projection and then the constrained result is propagated upwards using projection.<|reference_end|> | arxiv | @article{savinov2009concept-oriented,
title={Concept-Oriented Model and Query Language},
author={Alexandr Savinov},
journal={arXiv preprint arXiv:0901.2224},
year={2009},
archivePrefix={arXiv},
eprint={0901.2224},
primaryClass={cs.DB}
} | savinov2009concept-oriented |
arxiv-6053 | 0901.2270 | A Plotkin-Alamouti Superposition Coding Scheme for Cooperative Broadcasting in Wireless Networks | <|reference_start|>A Plotkin-Alamouti Superposition Coding Scheme for Cooperative Broadcasting in Wireless Networks: This paper deals with superposition coding for cooperative broadcasting in the case of two coordinated source nodes, as introduced in the seminal work of Bergmans and Cover in 1974. A scheme is introduced for two classes of destination (or relay) nodes: Close nodes and far nodes, as ranked by their spatial distances to the pair of transmitting nodes. Two linear codes are combined using the (u,u+v)-construction devised by Plotkin to construct two-level linear unequal error protection (LUEP) codes. However, instead of binary addition of subcode codewords in the source encoder, here modulated subcode sequences are combined at the destination (or relay) nodes antennae. Bergmans and Cover referred to this as over-the-air mixing. In the case of Rayleigh fading, additional diversity order as well as robustness to channel estimation errors are obtained when source nodes transmit pairs of coded sequences in accordance to Alamouti's transmit diversity scheme. We refer to this combination as a Plotkin-Alamouti scheme and study its performance over AWGN and Rayleigh fading channels with a properly partitioned QPSK constellation.<|reference_end|> | arxiv | @article{morelos-zaragoza2009a,
title={A Plotkin-Alamouti Superposition Coding Scheme for Cooperative
Broadcasting in Wireless Networks},
author={Robert Morelos-Zaragoza},
journal={arXiv preprint arXiv:0901.2270},
year={2009},
archivePrefix={arXiv},
eprint={0901.2270},
primaryClass={cs.IT math.IT}
} | morelos-zaragoza2009a |
arxiv-6054 | 0901.2310 | An e-Infrastructure for Collaborative Research in Human Embryo Development | <|reference_start|>An e-Infrastructure for Collaborative Research in Human Embryo Development: Within the context of the EU Design Study Developmental Gene Expression Map, we identify a set of challenges when facilitating collaborative research on early human embryo development. These challenges bring forth requirements, for which we have identified solutions and technology. We summarise our solutions and demonstrate how they integrate to form an e-infrastructure to support collaborative research in this area of developmental biology.<|reference_end|> | arxiv | @article{barker2009an,
title={An e-Infrastructure for Collaborative Research in Human Embryo
Development},
author={Adam Barker, Jano I. van Hemert, Richard A. Baldock, Malcolm P.
Atkinson},
journal={arXiv preprint arXiv:0901.2310},
year={2009},
archivePrefix={arXiv},
eprint={0901.2310},
primaryClass={cs.DC cs.SE}
} | barker2009an |
arxiv-6055 | 0901.2321 | The Redundancy of a Computable Code on a Noncomputable Distribution | <|reference_start|>The Redundancy of a Computable Code on a Noncomputable Distribution: We introduce new definitions of universal and superuniversal computable codes, which are based on a code's ability to approximate Kolmogorov complexity within the prescribed margin for all individual sequences from a given set. Such sets of sequences may be singled out almost surely with respect to certain probability measures. Consider a measure parameterized with a real parameter and put an arbitrary prior on the parameter. The Bayesian measure is the expectation of the parameterized measure with respect to the prior. It appears that a modified Shannon-Fano code for any computable Bayesian measure, which we call the Bayesian code, is superuniversal on a set of parameterized measure-almost all sequences for prior-almost every parameter. According to this result, in the typical setting of mathematical statistics no computable code enjoys redundancy which is ultimately much less than that of the Bayesian code. Thus we introduce another characteristic of computable codes: The catch-up time is the length of data for which the code length drops below the Kolmogorov complexity plus the prescribed margin. Some codes may have smaller catch-up times than Bayesian codes.<|reference_end|> | arxiv | @article{dębowski2009the,
title={The Redundancy of a Computable Code on a Noncomputable Distribution},
author={{\L}ukasz D\k{e}bowski},
journal={arXiv preprint arXiv:0901.2321},
year={2009},
archivePrefix={arXiv},
eprint={0901.2321},
primaryClass={stat.ML cs.IT math.IT}
} | dębowski2009the |
arxiv-6056 | 0901.2333 | Q-CSMA: Queue-Length Based CSMA/CA Algorithms for Achieving Maximum Throughput and Low Delay in Wireless Networks | <|reference_start|>Q-CSMA: Queue-Length Based CSMA/CA Algorithms for Achieving Maximum Throughput and Low Delay in Wireless Networks: Recently, it has been shown that CSMA-type random access algorithms can achieve the maximum possible throughput in ad hoc wireless networks. However, these algorithms assume an idealized continuous-time CSMA protocol where collisions can never occur. In addition, simulation results indicate that the delay performance of these algorithms can be quite bad. On the other hand, although some simple heuristics (such as distributed approximations of greedy maximal scheduling) can yield much better delay performance for a large set of arrival rates, they may only achieve a fraction of the capacity region in general. In this paper, we propose a discrete-time version of the CSMA algorithm. Central to our results is a discrete-time distributed randomized algorithm which is based on a generalization of the so-called Glauber dynamics from statistical physics, where multiple links are allowed to update their states in a single time slot. The algorithm generates collision-free transmission schedules while explicitly taking collisions into account during the control phase of the protocol, thus relaxing the perfect CSMA assumption. More importantly, the algorithm allows us to incorporate mechanisms which lead to very good delay performance while retaining the throughput-optimality property. It also resolves the hidden and exposed terminal problems associated with wireless networks.<|reference_end|> | arxiv | @article{ni2009q-csma:,
title={Q-CSMA: Queue-Length Based CSMA/CA Algorithms for Achieving Maximum
Throughput and Low Delay in Wireless Networks},
author={Jian Ni and Bo Tan and R. Srikant},
journal={IEEE/ACM Transactions on Networking, 20(3), 2012},
year={2009},
doi={10.1109/TNET.2011.2177101},
archivePrefix={arXiv},
eprint={0901.2333},
primaryClass={cs.NI cs.IT math.IT}
} | ni2009q-csma: |
arxiv-6057 | 0901.2349 | Beyond word frequency: Bursts, lulls, and scaling in the temporal distributions of words | <|reference_start|>Beyond word frequency: Bursts, lulls, and scaling in the temporal distributions of words: Background: Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings: By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type -- a measure of the logicality of each word -- and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance: Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics.<|reference_end|> | arxiv | @article{altmann2009beyond,
title={Beyond word frequency: Bursts, lulls, and scaling in the temporal
distributions of words},
author={Eduardo G. Altmann, Janet B. Pierrehumbert, and Adilson E. Motter},
journal={PLoS ONE 4 (11): e7678 (2009)},
year={2009},
doi={10.1371/journal.pone.0007678},
archivePrefix={arXiv},
eprint={0901.2349},
primaryClass={cs.CL cond-mat.dis-nn physics.data-an physics.soc-ph}
} | altmann2009beyond |
arxiv-6058 | 0901.2356 | Information-Theoretic Bounds for Multiround Function Computation in Collocated Networks | <|reference_start|>Information-Theoretic Bounds for Multiround Function Computation in Collocated Networks: We study the limits of communication efficiency for function computation in collocated networks within the framework of multi-terminal block source coding theory. With the goal of computing a desired function of sources at a sink, nodes interact with each other through a sequence of error-free, network-wide broadcasts of finite-rate messages. For any function of independent sources, we derive a computable characterization of the set of all feasible message coding rates - the rate region - in terms of single-letter information measures. We show that when computing symmetric functions of binary sources, the sink will inevitably learn certain additional information which is not demanded in computing the function. This conceptual understanding leads to new improved bounds for the minimum sum-rate. The new bounds are shown to be orderwise better than those based on cut-sets as the network scales. The scaling law of the minimum sum-rate is explored for different classes of symmetric functions and source parameters.<|reference_end|> | arxiv | @article{ma2009information-theoretic,
title={Information-Theoretic Bounds for Multiround Function Computation in
Collocated Networks},
author={Nan Ma, Prakash Ishwar, Piyush Gupta},
journal={arXiv preprint arXiv:0901.2356},
year={2009},
doi={10.1109/ISIT.2009.5205926},
archivePrefix={arXiv},
eprint={0901.2356},
primaryClass={cs.IT math.IT}
} | ma2009information-theoretic |
arxiv-6059 | 0901.2367 | An Implementable Scheme for Universal Lossy Compression of Discrete Markov Sources | <|reference_start|>An Implementable Scheme for Universal Lossy Compression of Discrete Markov Sources: We present a new lossy compressor for discrete sources. For coding a source sequence $x^n$, the encoder starts by assigning a certain cost to each reconstruction sequence. It then finds the reconstruction that minimizes this cost and describes it losslessly to the decoder via a universal lossless compressor. The cost of a sequence is given by a linear combination of its empirical probabilities of some order $k+1$ and its distortion relative to the source sequence. The linear structure of the cost in the empirical count matrix allows the encoder to employ a Viterbi-like algorithm for obtaining the minimizing reconstruction sequence simply. We identify a choice of coefficients for the linear combination in the cost function which ensures that the algorithm universally achieves the optimum rate-distortion performance of any Markov source in the limit of large $n$, provided $k$ is increased as $o(\log n)$.<|reference_end|> | arxiv | @article{jalali2009an,
title={An Implementable Scheme for Universal Lossy Compression of Discrete
Markov Sources},
author={Shirin Jalali, Andrea Montanari, Tsachy Weissman},
journal={arXiv preprint arXiv:0901.2367},
year={2009},
archivePrefix={arXiv},
eprint={0901.2367},
primaryClass={cs.IT math.IT}
} | jalali2009an |
arxiv-6060 | 0901.2370 | Performance of Polar Codes for Channel and Source Coding | <|reference_start|>Performance of Polar Codes for Channel and Source Coding: Polar codes, introduced recently by Arıkan, are the first family of codes known to achieve capacity of symmetric channels using a low complexity successive cancellation decoder. Although these codes, combined with successive cancellation, are optimal in this respect, their finite-length performance is not record breaking. We discuss several techniques through which their finite-length performance can be improved. We also study the performance of these codes in the context of source coding, both lossless and lossy, in the single-user context as well as for distributed applications.<|reference_end|> | arxiv | @article{hussami2009performance,
title={Performance of Polar Codes for Channel and Source Coding},
author={Nadine Hussami, Satish Babu Korada, Rudiger Urbanke},
journal={arXiv preprint arXiv:0901.2370},
year={2009},
archivePrefix={arXiv},
eprint={0901.2370},
primaryClass={cs.IT math.IT}
} | hussami2009performance |
arxiv-6061 | 0901.2376 | A Limit Theorem in Singular Regression Problem | <|reference_start|>A Limit Theorem in Singular Regression Problem: In statistical problems, a set of parameterized probability distributions is used to estimate the true probability distribution. If Fisher information matrix at the true distribution is singular, then it has been left unknown what we can estimate about the true distribution from random samples. In this paper, we study a singular regression problem and prove a limit theorem which shows the relation between the singular regression problem and two birational invariants, a real log canonical threshold and a singular fluctuation. The obtained theorem has an important application to statistics, because it enables us to estimate the generalization error from the training error without any knowledge of the true probability distribution.<|reference_end|> | arxiv | @article{watanabe2009a,
title={A Limit Theorem in Singular Regression Problem},
author={Sumio Watanabe},
journal={arXiv preprint arXiv:0901.2376},
year={2009},
archivePrefix={arXiv},
eprint={0901.2376},
primaryClass={cs.LG}
} | watanabe2009a |
arxiv-6062 | 0901.2391 | Weight Distribution of A p-ary Cyclic Code | <|reference_start|>Weight Distribution of A p-ary Cyclic Code: For an odd prime $p$ and two positive integers $n\geq 3$ and $k$ with $\frac{n}{{\rm gcd}(n,k)}$ being odd, the paper determines the weight distribution of a $p$-ary cyclic code $\mathcal{C}$ over $\mathbb{F}_{p}$ with nonzeros $\alpha^{-1}$, $\alpha^{-(p^k+1)}$ and $\alpha^{-(p^{3k}+1)}$, where $\alpha$ is a primitive element of $\mathbb{F}_{p^n}$<|reference_end|> | arxiv | @article{zeng2009weight,
title={Weight Distribution of A p-ary Cyclic Code},
author={Xiangyong Zeng, Lei Hu, Wenfeng Jiang, Qin Yue, Xiwang Cao},
journal={arXiv preprint arXiv:0901.2391},
year={2009},
archivePrefix={arXiv},
eprint={0901.2391},
primaryClass={cs.IT cs.DM math.IT}
} | zeng2009weight |
arxiv-6063 | 0901.2396 | Joint Source-Channel Coding at the Application Layer for Parallel Gaussian Sources | <|reference_start|>Joint Source-Channel Coding at the Application Layer for Parallel Gaussian Sources: In this paper the multicasting of independent parallel Gaussian sources over a binary erasure broadcasted channel is considered. Multiresolution embedded quantizer and layered joint source-channel coding schemes are used in order to serve simultaneously several users at different channel capacities. The convex nature of the rate-distortion function, computed by means of reverse water-filling, allows us to solve relevant convex optimization problems corresponding to different performance criteria. Then, layered joint source-channel codes are constructed based on the concatenation of embedded scalar quantizers with binary rateless encoders.<|reference_end|> | arxiv | @article{bursalioglu2009joint,
title={Joint Source-Channel Coding at the Application Layer for Parallel
Gaussian Sources},
author={Ozgun Y. Bursalioglu, Maria Fresia, Giuseppe Caire, H. Vincent Poor},
journal={arXiv preprint arXiv:0901.2396},
year={2009},
archivePrefix={arXiv},
eprint={0901.2396},
primaryClass={cs.IT math.IT}
} | bursalioglu2009joint |
arxiv-6064 | 0901.2399 | The Safe Lambda Calculus | <|reference_start|>The Safe Lambda Calculus: Safety is a syntactic condition of higher-order grammars that constrains occurrences of variables in the production rules according to their type-theoretic order. In this paper, we introduce the safe lambda calculus, which is obtained by transposing (and generalizing) the safety condition to the setting of the simply-typed lambda calculus. In contrast to the original definition of safety, our calculus does not constrain types (to be homogeneous). We show that in the safe lambda calculus, there is no need to rename bound variables when performing substitution, as variable capture is guaranteed not to happen. We also propose an adequate notion of beta-reduction that preserves safety. In the same vein as Schwichtenberg's 1976 characterization of the simply-typed lambda calculus, we show that the numeric functions representable in the safe lambda calculus are exactly the multivariate polynomials; thus conditional is not definable. We also give a characterization of representable word functions. We then study the complexity of deciding beta-eta equality of two safe simply-typed terms and show that this problem is PSPACE-hard. Finally we give a game-semantic analysis of safety: We show that safe terms are denoted by `P-incrementally justified strategies'. Consequently pointers in the game semantics of safe lambda-terms are only necessary from order 4 onwards.<|reference_end|> | arxiv | @article{blum2009the,
title={The Safe Lambda Calculus},
author={William Blum and C.-H. Luke Ong},
journal={Logical Methods in Computer Science, Volume 5, Issue 1 (February
19, 2009) lmcs:1145},
year={2009},
doi={10.2168/LMCS-5(1:3)2009},
archivePrefix={arXiv},
eprint={0901.2399},
primaryClass={cs.PL cs.GT}
} | blum2009the |
arxiv-6065 | 0901.2401 | MIMO Broadcast Channel Optimization under General Linear Constraints | <|reference_start|>MIMO Broadcast Channel Optimization under General Linear Constraints: The optimization of the transmit parameters (power allocation and steering vectors) for the MIMO BC under general linear constraints is treated under the optimal DPC coding strategy and the simple suboptimal linear zero-forcing beamforming strategy. In the case of DPC, we show that "SINR duality" and "min-max duality" yield the same dual MAC problem, and compare two alternatives for its efficient solution. In the case of zero-forcing beamforming, we provide a new efficient algorithm based on the direct optimization of a generalized inverse matrix. In both cases, the algorithms presented here address the problems in the most general form and can be applied to special cases previously considered, such as per-antenna and per-group of antennas power constraints, "forbidden interference direction" constraints, or any combination thereof.<|reference_end|> | arxiv | @article{huh2009mimo,
title={MIMO Broadcast Channel Optimization under General Linear Constraints},
author={Hoon Huh, Haralabos Papadopoulos, Giuseppe Caire},
journal={arXiv preprint arXiv:0901.2401},
year={2009},
archivePrefix={arXiv},
eprint={0901.2401},
primaryClass={cs.IT math.IT}
} | huh2009mimo |
arxiv-6066 | 0901.2410 | On the Energy Benefit of Network Coding for Wireless Multiple Unicast | <|reference_start|>On the Energy Benefit of Network Coding for Wireless Multiple Unicast: We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding solutions, where the maximum is over all configurations. It is shown that if coding and routing solutions are using the same transmission range, the benefit in $d$-dimensional networks is at least $2d/\lfloor\sqrt{d}\rfloor$. Moreover, it is shown that if the transmission range can be optimized for routing and coding individually, the benefit in 2-dimensional networks is at least 3. Our results imply that codes following a \emph{decode-and-recombine} strategy are not always optimal regarding energy efficiency.<|reference_end|> | arxiv | @article{goseling2009on,
title={On the Energy Benefit of Network Coding for Wireless Multiple Unicast},
  author={Jasper Goseling, Ryutaroh Matsumoto, Tomohiko Uyematsu and Jos H.
Weber},
journal={EURASIP Journal on Wireless Communications and Networking, Vol.
2010 (2010), Art.ID 605421},
year={2009},
doi={10.1155/2010/605421},
archivePrefix={arXiv},
eprint={0901.2410},
primaryClass={cs.IT math.IT}
} | goseling2009on |
arxiv-6067 | 0901.2416 | TR01: Time-continuous Sparse Imputation | <|reference_start|>TR01: Time-continuous Sparse Imputation: An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing) prior to decoding, and to replace the missing ones by clean speech estimates. We present a novel method to obtain such clean speech estimates. Unlike previous imputation frameworks which work on a frame-by-frame basis, our method focuses on exploiting information from a large time-context. Using a sliding window approach, denoised speech representations are constructed using a sparse representation of the reliable features in an overcomplete basis of fixed-length exemplar fragments. We demonstrate the potential of our approach with experiments on the AURORA-2 connected digit database.<|reference_end|> | arxiv | @article{gemmeke2009tr01:,
title={TR01: Time-continuous Sparse Imputation},
author={J. F. Gemmeke and B. Cranen},
journal={arXiv preprint arXiv:0901.2416},
year={2009},
archivePrefix={arXiv},
eprint={0901.2416},
primaryClass={cs.SD}
} | gemmeke2009tr01: |
arxiv-6068 | 0901.2434 | The compositional construction of Markov processes | <|reference_start|>The compositional construction of Markov processes: We describe an algebra for composing automata in which the actions have probabilities. We illustrate by showing how to calculate the probability of reaching deadlock in k steps in a model of the classical Dining Philosopher problem, and show, using the Perron-Frobenius Theorem, that this probability tends to 1 as k tends to infinity.<|reference_end|> | arxiv | @article{albasini2009the,
title={The compositional construction of Markov processes},
author={L. de Francesco Albasini, N. Sabadini, R.F.C. Walters},
journal={arXiv preprint arXiv:0901.2434},
year={2009},
archivePrefix={arXiv},
eprint={0901.2434},
primaryClass={cs.LO math.CT math.PR}
} | albasini2009the |
arxiv-6069 | 0901.2461 | Grammatic -- a tool for grammar definition reuse and modularity | <|reference_start|>Grammatic -- a tool for grammar definition reuse and modularity: Grammatic is a tool for grammar definition and manipulation aimed to improve modularity and reuse of grammars and related development artifacts. It is independent from parsing technology and any other details of target system implementation. Grammatic provides a way for annotating grammars with arbitrary metadata (like associativity attributes, semantic actions or anything else). It might be used as a front-end for external tools like parser generators to make their input grammars modular and reusable. This paper describes main principles behind Grammatic and gives an overview of languages it provides and their ability to separate concerns and define reusable modules. Also it presents sketches of possible use cases for the tool.<|reference_end|> | arxiv | @article{breslav2009grammatic,
title={Grammatic -- a tool for grammar definition reuse and modularity},
author={Andrey Breslav},
journal={arXiv preprint arXiv:0901.2461},
year={2009},
archivePrefix={arXiv},
eprint={0901.2461},
primaryClass={cs.PL cs.SE}
} | breslav2009grammatic |
arxiv-6070 | 0901.2483 | Fast Encoding and Decoding of Gabidulin Codes | <|reference_start|>Fast Encoding and Decoding of Gabidulin Codes: Gabidulin codes are the rank-metric analogs of Reed-Solomon codes and have a major role in practical error control for network coding. This paper presents new encoding and decoding algorithms for Gabidulin codes based on low-complexity normal bases. In addition, a new decoding algorithm is proposed based on a transform-domain approach. Together, these represent the fastest known algorithms for encoding and decoding Gabidulin codes.<|reference_end|> | arxiv | @article{silva2009fast,
title={Fast Encoding and Decoding of Gabidulin Codes},
author={Danilo Silva and Frank R. Kschischang},
journal={arXiv preprint arXiv:0901.2483},
year={2009},
doi={10.1109/ISIT.2009.5205272},
archivePrefix={arXiv},
eprint={0901.2483},
primaryClass={cs.IT math.IT}
} | silva2009fast |
arxiv-6071 | 0901.2518 | A Faithful Semantics for Generalised Symbolic Trajectory Evaluation | <|reference_start|>A Faithful Semantics for Generalised Symbolic Trajectory Evaluation: Generalised Symbolic Trajectory Evaluation (GSTE) is a high-capacity formal verification technique for hardware. GSTE uses abstraction, meaning that details of the circuit behaviour are removed from the circuit model. A semantics for GSTE can be used to predict and understand why certain circuit properties can or cannot be proven by GSTE. Several semantics have been described for GSTE. These semantics, however, are not faithful to the proving power of GSTE-algorithms, that is, the GSTE-algorithms are incomplete with respect to the semantics. The abstraction used in GSTE makes it hard to understand why a specific property can, or cannot, be proven by GSTE. The semantics mentioned above cannot help the user in doing so. The contribution of this paper is a faithful semantics for GSTE. That is, we give a simple formal theory that deems a property to be true if-and-only-if the property can be proven by a GSTE-model checker. We prove that the GSTE algorithm is sound and complete with respect to this semantics.<|reference_end|> | arxiv | @article{claessen2009a,
title={A Faithful Semantics for Generalised Symbolic Trajectory Evaluation},
author={Koen Claessen, Jan-Willem Roorda},
journal={Logical Methods in Computer Science, Volume 5, Issue 2 (April 8,
2009) lmcs:1028},
year={2009},
doi={10.2168/LMCS-5(2:1)2009},
archivePrefix={arXiv},
eprint={0901.2518},
primaryClass={cs.LO}
} | claessen2009a |
arxiv-6072 | 0901.2538 | Capacity Scaling of SDMA in Wireless Ad Hoc Networks | <|reference_start|>Capacity Scaling of SDMA in Wireless Ad Hoc Networks: We consider an ad hoc network in which each multi-antenna transmitter sends independent streams to multiple receivers in a Poisson field of interferers. We provide the outage probability and transmission capacity scaling laws, aiming at investigating the fundamental limits of Space Division Multiple Access (SDMA). We first show that super linear capacity scaling with the number of receive/transmit antennas can be achieved using dirty paper coding. Nevertheless, the potential benefits of multi-stream, multi-antenna communications fall off quickly if linear precoding is employed, leading to sublinear capacity growth in the case of single-antenna receivers. A key finding is that receive antenna array processing is of vital importance in SDMA ad hoc networks, as a means to cancel the increased residual interference and boost the signal power through diversity.<|reference_end|> | arxiv | @article{kountouris2009capacity,
title={Capacity Scaling of SDMA in Wireless Ad Hoc Networks},
author={Marios Kountouris and Jeffrey G. Andrews},
journal={arXiv preprint arXiv:0901.2538},
year={2009},
archivePrefix={arXiv},
eprint={0901.2538},
primaryClass={cs.IT math.IT}
} | kountouris2009capacity |
arxiv-6073 | 0901.2545 | On the Capacity of the Discrete-Time Channel with Uniform Output Quantization | <|reference_start|>On the Capacity of the Discrete-Time Channel with Uniform Output Quantization: This paper provides new insight into the classical problem of determining both the capacity of the discrete-time channel with uniform output quantization and the capacity achieving input distribution. It builds on earlier work by Gallager and Witsenhausen to provide a detailed analysis of two particular quantization schemes. The first is saturation quantization where overflows are mapped to the nearest quantization bin, and the second is wrapping quantization where overflows are mapped to the nearest quantization bin after reduction by some modulus. Both the capacity of wrapping quantization and the capacity achieving input distribution are determined. When the additive noise is gaussian and relatively small, the capacity of saturation quantization is shown to be bounded below by that of wrapping quantization. In the limit of arbitrarily many uniform quantization levels, it is shown that the difference between the upper and lower bounds on capacity given by Ihara is only 0.26 bits.<|reference_end|> | arxiv | @article{wu2009on,
title={On the Capacity of the Discrete-Time Channel with Uniform Output
Quantization},
author={Yiyue Wu, Linda M. Davis and Robert Calderbank},
journal={arXiv preprint arXiv:0901.2545},
year={2009},
archivePrefix={arXiv},
eprint={0901.2545},
primaryClass={cs.IT math.IT}
} | wu2009on |
arxiv-6074 | 0901.2586 | Information geometries and Microeconomic Theories | <|reference_start|>Information geometries and Microeconomic Theories: More than thirty years ago, Charnes, Cooper and Schinnar (1976) established an enlightening contact between economic production functions (EPFs) -- a cornerstone of neoclassical economics -- and information theory, showing how a generalization of the Cobb-Douglas production function encodes homogeneous functions. As expected by Charnes \textit{et al.}, the contact turns out to be much broader: we show how information geometry as pioneered by Amari and others underpins static and dynamic descriptions of microeconomic cornerstones. We show that the most popular EPFs are fundamentally grounded in a very weak axiomatization of economic transition costs between inputs. The strength of this characterization is surprising, as it geometrically bonds altogether a wealth of collateral economic notions -- advocating for applications in various economic fields --: among all, it characterizes (i) Marshallian and Hicksian demands and their geometric duality, (ii) Slutsky-type properties for the transformation paths, (iii) Roy-type properties for their elementary variations.<|reference_end|> | arxiv | @article{nock2009information,
title={Information geometries and Microeconomic Theories},
author={Richard Nock, Brice Magdalou, Nicolas Sanz, Eric Briys, Fred Celimene,
Frank Nielsen},
journal={arXiv preprint arXiv:0901.2586},
year={2009},
archivePrefix={arXiv},
eprint={0901.2586},
primaryClass={q-fin.GN cs.IT math.IT}
} | nock2009information |
arxiv-6075 | 0901.2588 | The MIMO Wireless Switch: Relaying Can Increase the Multiplexing Gain | <|reference_start|>The MIMO Wireless Switch: Relaying Can Increase the Multiplexing Gain: This paper considers an interference network composed of K half-duplex single-antenna pairs of users who wish to establish bi-directional communication with the aid of a multi-input-multi-output (MIMO) half-duplex relay node. This channel is referred to as the "MIMO Wireless Switch" since, for the sake of simplicity, our model assumes no direct link between the two end nodes of each pair, implying that all communication must go through the relay node (i.e., the MIMO switch). Assuming a delay-limited scenario, the fundamental limits in the high signal-to-noise ratio (SNR) regime are analyzed using the diversity multiplexing tradeoff (DMT) framework. Our results shed light on the structure of optimal transmission schemes and the gain offered by the relay node in two distinct cases, namely reciprocal and non-reciprocal channels (between the relay and end-users). In particular, the existence of a relay node, equipped with a sufficient number of antennas, is shown to increase the multiplexing gain, as compared with the traditional fully connected K-pair interference channel. To the best of our knowledge, this is the first known example where adding a relay node results in enlarging the pre-log factor of the sum rate. Moreover, for the case of reciprocal channels, it is shown that, when the relay has a number of antennas at least equal to the sum of antennas of all the users, static time allocation of decode and forward (DF) type schemes is optimal. On the other hand, in the non-reciprocal scenario, we establish the optimality of dynamic decode and forward in certain relevant scenarios.<|reference_end|> | arxiv | @article{ghozlan2009the,
title={The MIMO Wireless Switch: Relaying Can Increase the Multiplexing Gain},
author={Hassan Ghozlan, Yahya Mohasseb, Hesham El Gamal, Gerhard Kramer},
journal={arXiv preprint arXiv:0901.2588},
year={2009},
archivePrefix={arXiv},
eprint={0901.2588},
primaryClass={cs.IT math.IT}
} | ghozlan2009the |
arxiv-6076 | 0901.2606 | Capacity Bounds of Half-Duplex Gaussian Cooperative Interference Channel | <|reference_start|>Capacity Bounds of Half-Duplex Gaussian Cooperative Interference Channel: In this paper, we investigate the half-duplex cooperative communication scheme of a two-user Gaussian interference channel. We develop an achievable region and an outer bound for the case when the system allows either transmitter or receiver cooperation. We show that by using our transmitter cooperation scheme, there is significant capacity improvement compared to the previous results, especially when the cooperation link is strong. Further, if the cooperation channel gain is infinity, both our transmitter and receiver cooperation rates achieve their respective outer bounds. It is also shown that transmitter cooperation provides a larger achievable region than receiver cooperation under the same channel and power conditions.<|reference_end|> | arxiv | @article{peng2009capacity,
title={Capacity Bounds of Half-Duplex Gaussian Cooperative Interference Channel},
author={Yong Peng and Dinesh Rajan},
journal={arXiv preprint arXiv:0901.2606},
year={2009},
archivePrefix={arXiv},
eprint={0901.2606},
primaryClass={cs.IT math.IT}
} | peng2009capacity |
arxiv-6077 | 0901.2612 | Some Open Problems in Combinatorial Physics | <|reference_start|>Some Open Problems in Combinatorial Physics: We point out four problems which have arisen during the recent research in the domain of Combinatorial Physics.<|reference_end|> | arxiv | @article{duchamp2009some,
title={Some Open Problems in Combinatorial Physics},
  author={Gérard Henry Edmond Duchamp (LIPN), H. Cheballah (LIPN)},
journal={arXiv preprint arXiv:0901.2612},
year={2009},
archivePrefix={arXiv},
eprint={0901.2612},
primaryClass={cs.SC math.CO quant-ph}
} | duchamp2009some |
arxiv-6078 | 0901.2616 | On the Delay Limited Secrecy Capacity of Fading Channels | <|reference_start|>On the Delay Limited Secrecy Capacity of Fading Channels: In this paper, the delay limited secrecy capacity of the flat fading channel is investigated under two different assumptions on the available transmitter channel state information (CSI). The first scenario assumes perfect prior knowledge of both the main and eavesdropper channel gains. Here, upper and lower bounds on the secure delay limited capacity are derived and shown to be tight in the high signal-to-noise ratio (SNR) regime (for a wide class of channel distributions). In the second scenario, only the main channel CSI is assumed to be available at the transmitter. Remarkably, under this assumption, we establish the achievability of non-zero secure rate (for a wide class of channel distributions) under a strict delay constraint. In the two cases, our achievability arguments are based on a novel two-stage approach that overcomes the secrecy outage phenomenon observed in earlier works.<|reference_end|> | arxiv | @article{khalil2009on,
title={On the Delay Limited Secrecy Capacity of Fading Channels},
author={Karim Khalil, Moustafa Youssef, O. Ozan Koyluoglu, Hesham El Gamal},
journal={arXiv preprint arXiv:0901.2616},
year={2009},
doi={10.1109/ISIT.2009.5205955},
archivePrefix={arXiv},
eprint={0901.2616},
primaryClass={cs.IT cs.CR math.IT}
} | khalil2009on |
arxiv-6079 | 0901.2645 | On some simplicial elimination schemes for chordal graphs | <|reference_start|>On some simplicial elimination schemes for chordal graphs: We present here some results on particular elimination schemes for chordal graphs, namely we show that for any chordal graph we can construct in linear time a simplicial elimination scheme starting with a pending maximal clique attached via a minimal separator maximal (resp. minimal) under inclusion among all minimal separators.<|reference_end|> | arxiv | @article{habib2009on,
title={On some simplicial elimination schemes for chordal graphs},
author={Michel Habib (LIAFA), Vincent Limouzy},
journal={arXiv preprint arXiv:0901.2645},
year={2009},
archivePrefix={arXiv},
eprint={0901.2645},
primaryClass={cs.DS}
} | habib2009on |
arxiv-6080 | 0901.2665 | A Density Matrix-based Algorithm for Solving Eigenvalue Problems | <|reference_start|>A Density Matrix-based Algorithm for Solving Eigenvalue Problems: A new numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques, and takes its inspiration from the contour integration and density matrix representation in quantum mechanics. It will be shown that this new algorithm - named FEAST - exhibits high efficiency, robustness, accuracy and scalability on parallel architectures. Examples from electronic structure calculations of Carbon nanotubes (CNT) are presented, and numerical performances and capabilities are discussed.<|reference_end|> | arxiv | @article{polizzi2009a,
title={A Density Matrix-based Algorithm for Solving Eigenvalue Problems},
author={Eric Polizzi},
journal={arXiv preprint arXiv:0901.2665},
year={2009},
doi={10.1103/PhysRevB.79.115112},
archivePrefix={arXiv},
eprint={0901.2665},
primaryClass={cs.CE cs.MS}
} | polizzi2009a |
arxiv-6081 | 0901.2682 | Self-stabilizing Numerical Iterative Computation | <|reference_start|>Self-stabilizing Numerical Iterative Computation: Many challenging tasks in sensor networks, including sensor calibration, ranking of nodes, monitoring, event region detection, collaborative filtering, collaborative signal processing, {\em etc.}, can be formulated as a problem of solving a linear system of equations. Several recent works propose different distributed algorithms for solving these problems, usually by using linear iterative numerical methods. The main problem with previous approaches is that once the problem inputs change during the process of computation, the computation may output unexpected results. In real life settings, sensor measurements are subject to varying environmental conditions and to measurement noise. We present a simple iterative scheme called SS-Iterative for solving systems of linear equations, and examine its properties in the self-stabilizing perspective. We analyze the behavior of the proposed scheme under changing input sequences using two different assumptions on the input: a box bound, and a probabilistic distribution. As a case study, we discuss the sensor calibration problem and provide simulation results to support the applicability of our approach.<|reference_end|> | arxiv | @article{bickson2009self-stabilizing,
title={Self-stabilizing Numerical Iterative Computation},
author={Danny Bickson, Ezra N. Hoch, Harel Avissar and Danny Dolev},
journal={arXiv preprint arXiv:0901.2682},
year={2009},
number={TCS09},
archivePrefix={arXiv},
eprint={0901.2682},
primaryClass={cs.NA cs.DC}
} | bickson2009self-stabilizing |
arxiv-6082 | 0901.2684 | Distributed Large Scale Network Utility Maximization | <|reference_start|>Distributed Large Scale Network Utility Maximization: Recent work by Zymnis et al. proposes an efficient primal-dual interior-point method, using a truncated Newton method, for solving the network utility maximization (NUM) problem. This method has shown superior performance relative to the traditional dual-decomposition approach. Other recent work by Bickson et al. shows how to compute efficiently and distributively the Newton step, which is the main computational bottleneck of the Newton method, utilizing the Gaussian belief propagation algorithm. In the current work, we combine both approaches to create an efficient distributed algorithm for solving the NUM problem. Unlike the work of Zymnis, which uses a centralized approach, our new algorithm is easily distributed. Using an empirical evaluation we show that our new method outperforms previous approaches, including the truncated Newton method and dual-decomposition methods. As an additional contribution, this is the first work that evaluates the performance of the Gaussian belief propagation algorithm vs. the preconditioned conjugate gradient method, for a large scale problem.<|reference_end|> | arxiv | @article{bickson2009distributed,
title={Distributed Large Scale Network Utility Maximization},
author={Danny Bickson, Yoav Tock, Argyris Zymnis, Stephen Boyd and Danny Dolev},
journal={arXiv preprint arXiv:0901.2684},
year={2009},
doi={10.1109/ISIT.2009.5205655},
archivePrefix={arXiv},
eprint={0901.2684},
primaryClass={cs.IT cs.DC math.IT math.OC}
} | bickson2009distributed |
arxiv-6083 | 0901.2685 | A Statistical Approach to Performance Monitoring in Soft Real-Time Distributed Systems | <|reference_start|>A Statistical Approach to Performance Monitoring in Soft Real-Time Distributed Systems: Soft real-time applications require timely delivery of messages conforming to the soft real-time constraints. Satisfying such requirements is a complex task both due to the volatile nature of distributed environments, as well as due to numerous domain-specific factors that affect message latency. Prompt detection of the root-cause of excessive message delay allows a distributed system to react accordingly. This may significantly improve compliance with the required timeliness constraints. In this work, we present a novel approach for distributed performance monitoring of soft real-time distributed systems. We propose to employ recent distributed algorithms from the statistical signal processing and learning domains, and to utilize them in a different context of online performance monitoring and root-cause analysis, for pinpointing the reasons for violation of performance requirements. Our approach is general and can be used for monitoring of any distributed system, and is not limited to the soft real-time domain. We have implemented the proposed framework in TransFab, an IBM prototype of a soft real-time messaging fabric. In addition to root-cause analysis, the framework includes facilities to resolve resource allocation problems, such as memory and bandwidth deficiency. The experiments demonstrate that the system can identify and resolve latency problems in a timely fashion.<|reference_end|> | arxiv | @article{bickson2009a,
title={A Statistical Approach to Performance Monitoring in Soft Real-Time
Distributed Systems},
author={Danny Bickson, Gidon Gershinsky, Ezra N. Hoch and Konstantin Shagin},
journal={arXiv preprint arXiv:0901.2685},
year={2009},
archivePrefix={arXiv},
eprint={0901.2685},
primaryClass={cs.NI cs.DC}
} | bickson2009a |
arxiv-6084 | 0901.2687 | A Hybrid Multicast-Unicast Infrastructure for Efficient Publish-Subscribe in Enterprise Networks | <|reference_start|>A Hybrid Multicast-Unicast Infrastructure for Efficient Publish-Subscribe in Enterprise Networks: One of the main challenges in building a large scale publish-subscribe infrastructure in an enterprise network is to provide the subscribers with the required information, while minimizing the consumed host and network resources. Typically, previous approaches utilize either IP multicast or point-to-point unicast for efficient dissemination of the information. In this work, we propose a novel hybrid framework, which is a combination of both multicast and unicast data dissemination. Our hybrid framework allows us to take the advantages of both multicast and unicast, while avoiding their drawbacks. We investigate several algorithms for computing the best mapping of publishers' transmissions into multicast and unicast transport. Using extensive simulations, we show that our hybrid framework reduces consumed host and network resources, outperforming traditional solutions. To ensure the subscribers' interests closely resemble those of real-world settings, our simulations are based on stock market data and on recorded IBM WebSphere subscriptions.<|reference_end|> | arxiv | @article{bickson2009a,
title={A Hybrid Multicast-Unicast Infrastructure for Efficient
Publish-Subscribe in Enterprise Networks},
author={Danny Bickson, Ezra N. Hoch, Nir Naaman and Yoav Tock},
journal={SYSTOR 2010 - The 3rd Annual Haifa Experimental Systems
Conference, Haifa, Israel, May 24-26, 2010},
year={2009},
doi={10.1145/1815695.1815722},
archivePrefix={arXiv},
eprint={0901.2687},
primaryClass={cs.NI cs.DC}
} | bickson2009a |
arxiv-6085 | 0901.2689 | Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries | <|reference_start|>Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries: We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring and other tasks, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we try to bridge the gap between theoretical algorithms in the security domain, and a practical Peer-to-Peer deployment. We consider two security models. The first is the semi-honest model where peers correctly follow the protocol, but try to reveal private information. We provide three possible schemes for secure multi-party numerical computation for this model and identify a single light-weight scheme which outperforms the others. Using extensive simulation results over real Internet topologies, we demonstrate that our scheme is scalable to very large networks, with up to millions of nodes. The second model we consider is the malicious peers model, where peers can behave arbitrarily, deliberately trying to affect the results of the computation as well as compromising the privacy of other peers. For this model we provide a fourth scheme to defend the execution of the computation against the malicious peers. The proposed scheme has a higher complexity relative to the semi-honest model. 
Overall, we provide the Peer-to-Peer network designer a set of tools to choose from, based on the desired level of security.<|reference_end|> | arxiv | @article{bickson2009peer-to-peer,
title={Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious
Adversaries},
author={Danny Bickson, Tzachy Reinman, Danny Dolev and Benny Pinkas},
journal={arXiv preprint arXiv:0901.2689},
year={2009},
doi={10.1007/s12083-009-0051-9},
archivePrefix={arXiv},
eprint={0901.2689},
primaryClass={cs.CR cs.NI}
} | bickson2009peer-to-peer |
arxiv-6086 | 0901.2698 | On integral probability metrics, \phi-divergences and binary classification | <|reference_start|>On integral probability metrics, \phi-divergences and binary classification: A class of distance measures on probabilities -- the integral probability metrics (IPMs) -- is addressed: these include the Wasserstein distance, Dudley metric, and Maximum Mean Discrepancy. IPMs have thus far mostly been used in more abstract settings, for instance as theoretical tools in mass transportation problems, and in metrizing the weak topology on the set of all Borel probability measures defined on a metric space. Practical applications of IPMs are less common, with some exceptions in the kernel machines literature. The present work contributes a number of novel properties of IPMs, which should contribute to making IPMs more widely used in practice, for instance in areas where $\phi$-divergences are currently popular. First, to understand the relation between IPMs and $\phi$-divergences, the necessary and sufficient conditions under which these classes intersect are derived: the total variation distance is shown to be the only non-trivial $\phi$-divergence that is also an IPM. This shows that IPMs are essentially different from $\phi$-divergences. Second, empirical estimates of several IPMs from finite i.i.d. samples are obtained, and their consistency and convergence rates are analyzed. These estimators are shown to be easily computable, with better rates of convergence than estimators of $\phi$-divergences. Third, a novel interpretation is provided for IPMs by relating them to binary classification, where it is shown that the IPM between class-conditional distributions is the negative of the optimal risk associated with a binary classifier. 
In addition, the smoothness of an appropriate binary classifier is proved to be inversely related to the distance between the class-conditional distributions, measured in terms of an IPM.<|reference_end|> | arxiv | @article{sriperumbudur2009on,
title={On integral probability metrics, \phi-divergences and binary
classification},
author={Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard
  Schölkopf and Gert R. G. Lanckriet},
journal={arXiv preprint arXiv:0901.2698},
year={2009},
archivePrefix={arXiv},
eprint={0901.2698},
primaryClass={cs.IT math.IT}
} | sriperumbudur2009on |
arxiv-6087 | 0901.2703 | Language recognition by generalized quantum finite automata with unbounded error (abstract & poster) | <|reference_start|>Language recognition by generalized quantum finite automata with unbounded error (abstract & poster): In this note, we generalize the results of arXiv:0901.2703v1 We show that all one-way quantum finite automaton (QFA) models that are at least as general as Kondacs-Watrous QFA's are equivalent in power to classical probabilistic finite automata in this setting. Unlike their probabilistic counterparts, allowing the tape head to stay put for some steps during its traversal of the input does enlarge the class of languages recognized by such QFA's with unbounded error. (Note that, the proof of Theorem 1 in the abstract was presented in the previous version (arXiv:0901.2703v1).)<|reference_end|> | arxiv | @article{yakaryilmaz2009language,
title={Language recognition by generalized quantum finite automata with
unbounded error (abstract & poster)},
author={Abuzer Yakaryilmaz and A. C. Cem Say},
journal={arXiv preprint arXiv:0901.2703},
year={2009},
archivePrefix={arXiv},
eprint={0901.2703},
primaryClass={cs.CC}
} | yakaryilmaz2009language |
arxiv-6088 | 0901.2723 | Information science and technology as applications of the physics of signalling | <|reference_start|>Information science and technology as applications of the physics of signalling: Adopting the scientific method a theoretical model is proposed as foundation for information science and technology, extending the existing theory of signaling: a fact f becomes known in a physical system only following the success of a test f, tests performed primarily by human sensors and applied to (physical) phenomena within which further tests may be performed. Tests are phenomena and classify phenomena. A phenomenon occupies both time and space, facts and inferences having physical counterparts which are phenomena of specified classes. Identifiers such as f are conventional, assigned by humans; a fact (f', f'') reports the success of a test of generic class f', the outcome f'' of the reported application classifying the successful test in more detail. Facts then exist only within structures of a form dictated by constraints on the structural design of tests. The model explains why responses of real time systems are not uniquely predictable and why restrictions, on concurrency in performing inferences within them, are needed. Improved methods, based on the model and applicable throughout the software life-cycle, are summarised in the paper. No report of similar work has been found in the literature.<|reference_end|> | arxiv | @article{young2009information,
title={Information science and technology as applications of the physics of
signalling},
author={A. P. Young},
journal={arXiv preprint arXiv:0901.2723},
year={2009},
archivePrefix={arXiv},
eprint={0901.2723},
primaryClass={cs.OH}
} | young2009information |
arxiv-6089 | 0901.2731 | A Super-Polynomial Lower Bound for the Parity Game Strategy Improvement Algorithm as We Know it | <|reference_start|>A Super-Polynomial Lower Bound for the Parity Game Strategy Improvement Algorithm as We Know it: This paper presents a new lower bound for the discrete strategy improvement algorithm for solving parity games due to Vöge and Jurdziński. First, we informally show which structures are difficult to solve for the algorithm. Second, we outline a family of games of quadratic size on which the algorithm requires exponentially many strategy iterations, answering in the negative the long-standing question whether this algorithm runs in polynomial time. Additionally we note that the same family of games can be used to prove a similar result w.r.t. the strategy improvement variant by Schewe.<|reference_end|> | arxiv | @article{friedmann2009a,
title={A Super-Polynomial Lower Bound for the Parity Game Strategy Improvement
Algorithm as We Know it},
author={Oliver Friedmann},
journal={arXiv preprint arXiv:0901.2731},
year={2009},
archivePrefix={arXiv},
eprint={0901.2731},
primaryClass={cs.GT}
} | friedmann2009a |
arxiv-6090 | 0901.2742 | Sample-Align-D: A High Performance Multiple Sequence Alignment System using Phylogenetic Sampling and Domain Decomposition | <|reference_start|>Sample-Align-D: A High Performance Multiple Sequence Alignment System using Phylogenetic Sampling and Domain Decomposition: Multiple Sequence Alignment (MSA) is one of the most computationally intensive tasks in Computational Biology. Existing best known solutions for multiple sequence alignment take several hours (in some cases days) of computation time to align, for example, 2000 homologous sequences of average length 300. Inspired by the Sample Sort approach in parallel processing, in this paper we propose a highly scalable multiprocessor solution for the MSA problem in phylogenetically diverse sequences. Our method employs an intelligent scheme to partition the set of sequences into smaller subsets using a k-mer count based similarity index, referred to as k-mer rank. Each subset is then independently aligned in parallel using any sequential approach. Further fine tuning of the local alignments is achieved using constraints derived from a global ancestor of the entire set. The proposed Sample-Align-D Algorithm has been implemented on a cluster of workstations using MPI message passing library. The accuracy of the proposed solution has been tested on standard benchmarks such as PREFAB. The accuracy of the alignment produced by our methods is comparable to that of well known sequential MSA techniques. We were able to align 2000 randomly selected sequences from the Methanosarcina acetivorans genome in less than 10 minutes using Sample-Align-D on a 16 node cluster, compared to over 23 hours on sequential MUSCLE system running on a single cluster node.<|reference_end|> | arxiv | @article{saeed2009sample-align-d:,
title={Sample-Align-D: A High Performance Multiple Sequence Alignment System
using Phylogenetic Sampling and Domain Decomposition},
author={Fahad Saeed and Ashfaq Khokhar},
journal={arXiv preprint arXiv:0901.2742},
year={2009},
archivePrefix={arXiv},
eprint={0901.2742},
primaryClass={cs.DC q-bio.GN q-bio.QM}
} | saeed2009sample-align-d: |
arxiv-6091 | 0901.2747 | An Overview of Multiple Sequence Alignment Systems | <|reference_start|>An Overview of Multiple Sequence Alignment Systems: An overview of current multiple alignment systems to date is presented. The useful algorithms, the procedures adopted and their limitations are described. We also present the quality of the alignments obtained and in which cases (kind of alignments, kind of sequences, etc.) the particular systems are useful.<|reference_end|> | arxiv | @article{saeed2009an,
title={An Overview of Multiple Sequence Alignment Systems},
author={Fahad Saeed and Ashfaq Khokhar},
journal={arXiv preprint arXiv:0901.2747},
year={2009},
number={PAMS-05-2007},
archivePrefix={arXiv},
eprint={0901.2747},
primaryClass={cs.DS q-bio.GN q-bio.QM}
} | saeed2009an |
arxiv-6092 | 0901.2751 | Pyro-Align: Sample-Align based Multiple Alignment system for Pyrosequencing Reads of Large Number | <|reference_start|>Pyro-Align: Sample-Align based Multiple Alignment system for Pyrosequencing Reads of Large Number: Pyro-Align is a multiple alignment program specifically designed for pyrosequencing reads of huge number. Multiple sequence alignment is shown to be NP-hard and heuristics are designed for approximate solutions. Multiple sequence alignment of pyrosequencing reads is complex mainly because of two factors. The first is the huge number of reads, which makes traditional heuristics, which scale very poorly with the number of sequences, unsuitable. The second reason is that the alignment cannot be performed arbitrarily, because the position of the reads with respect to the original genome is important and has to be taken into account. In this report we present a short description of the multiple alignment system for pyrosequencing reads.<|reference_end|> | arxiv | @article{saeed2009pyro-align:,
title={Pyro-Align: Sample-Align based Multiple Alignment system for
Pyrosequencing Reads of Large Number},
author={Fahad Saeed},
journal={arXiv preprint arXiv:0901.2751},
year={2009},
number={DBSSE-08-2008},
archivePrefix={arXiv},
eprint={0901.2751},
primaryClass={cs.DS cs.DC q-bio.GN q-bio.QM}
} | saeed2009pyro-align: |
arxiv-6093 | 0901.2764 | Dirty Paper Coding for Fading Channels with Partial Transmitter Side Information | <|reference_start|>Dirty Paper Coding for Fading Channels with Partial Transmitter Side Information: The problem of Dirty Paper Coding (DPC) over the Fading Dirty Paper Channel (FDPC) Y = H(X + S)+Z, a more general version of Costa's channel, is studied for the case in which there is partial and perfect knowledge of the fading process H at the transmitter (CSIT) and the receiver (CSIR), respectively. A key step in this problem is to determine the optimal inflation factor (under Costa's choice of auxiliary random variable) when there is only partial CSIT. Towards this end, two iterative numerical algorithms are proposed. Both of these algorithms are seen to yield a good choice for the inflation factor. Finally, the high-SNR (signal-to-noise ratio) behavior of the achievable rate over the FDPC is dealt with. It is proved that FDPC (with t transmit and r receive antennas) achieves the largest possible scaling factor of min(t,r) log SNR even with no CSIT. Furthermore, in the high SNR regime, the optimality of Costa's choice of auxiliary random variable is established even when there is partial (or no) CSIT in the special case of FDPC with t <= r. Using the high-SNR scaling-law result of the FDPC (mentioned before), it is shown that a DPC-based multi-user transmission strategy, unlike other beamforming-based multi-user strategies, can achieve a single-user sum-rate scaling factor over the multiple-input multiple-output Gaussian Broadcast Channel with partial (or no) CSIT.<|reference_end|> | arxiv | @article{vaze2009dirty,
title={Dirty Paper Coding for Fading Channels with Partial Transmitter Side
Information},
author={Chinmay S. Vaze and Mahesh K. Varanasi},
journal={arXiv preprint arXiv:0901.2764},
year={2009},
archivePrefix={arXiv},
eprint={0901.2764},
primaryClass={cs.IT math.IT}
} | vaze2009dirty |
arxiv-6094 | 0901.2768 | FRFD MIMO Systems: Precoded V-BLAST with Limited Feedback Versus Non-orthogonal STBC MIMO | <|reference_start|>FRFD MIMO Systems: Precoded V-BLAST with Limited Feedback Versus Non-orthogonal STBC MIMO: Full-rate (FR) and full-diversity (FD) are attractive features in MIMO systems. We refer to systems which achieve both FR and FD simultaneously as FRFD systems. Non-orthogonal STBCs can achieve FRFD without feedback, but their ML decoding complexities are high. V-BLAST without precoding achieves FR but not FD. FRFD can be achieved in V-BLAST through precoding given full channel state information at the transmitter (CSIT). However, with limited feedback precoding, V-BLAST achieves FD, but with some rate loss. Our contribution in this paper is two-fold: $i)$ we propose a limited feedback (LFB) precoding scheme which achieves FRFD in $2\times 2$, $3\times 3$ and $4\times 4$ V-BLAST systems (we refer to this scheme as FRFD-VBLAST-LFB scheme), and $ii)$ comparing the performances of the FRFD-VBLAST-LFB scheme and non-orthogonal STBCs without feedback (e.g., Golden code, perfect codes) under ML decoding, we show that in $2\times 2$ MIMO system with 4-QAM/16-QAM, FRFD-VBLAST-LFB scheme outperforms the Golden code by about 0.6 dB; in $3\times 3$ and $4\times 4$ MIMO systems, the performance of FRFD-VBLAST-LFB scheme is comparable to the performance of perfect codes. The FRFD-VBLAST-LFB scheme is attractive because 1) ML decoding becomes less complex compared to that of non-orthogonal STBCs, 2) the number of feedback bits required to achieve the above performance is small, 3) in slow-fading, it is adequate to send feedback bits only occasionally, and 4) in most practical wireless systems feedback channel is often available (e.g., for adaptive modulation, rate/power control).<|reference_end|> | arxiv | @article{barik2009frfd,
title={FRFD MIMO Systems: Precoded V-BLAST with Limited Feedback Versus
Non-orthogonal STBC MIMO},
author={S. Barik, Saif K. Mohammed, A. Chockalingam, and B. Sundar Rajan},
journal={arXiv preprint arXiv:0901.2768},
year={2009},
archivePrefix={arXiv},
eprint={0901.2768},
primaryClass={cs.IT math.IT}
} | barik2009frfd |
arxiv-6095 | 0901.2771 | Automatic Analog Beamforming Transceiver for 60 GHz Radios | <|reference_start|>Automatic Analog Beamforming Transceiver for 60 GHz Radios: We propose a transceiver architecture for automatic beamforming and instantaneous setup of a multigigabit-per-second wireless link between two millimeter wave radios. The retro-directive architecture eliminates necessity of slow and complex digital algorithms required for searching and tracking the directions of opposite end radios. Simulations predict <5 micro-seconds setup time for a 2-Gbps bidirectional 60-GHz communication link between two 10-meters apart radios. The radios have 4-element arrayed antennas, and use QPSK modulation with 1.5 GHz analog bandwidth.<|reference_end|> | arxiv | @article{gupta2009automatic,
title={Automatic Analog Beamforming Transceiver for 60 GHz Radios},
author={Shalabh Gupta},
journal={arXiv preprint arXiv:0901.2771},
year={2009},
archivePrefix={arXiv},
eprint={0901.2771},
primaryClass={cs.NI}
} | gupta2009automatic |
arxiv-6096 | 0901.2778 | On the Computation of Matrices of Traces and Radicals of Ideals | <|reference_start|>On the Computation of Matrices of Traces and Radicals of Ideals: Let $f_1,...,f_s \in \mathbb{K}[x_1,...,x_m]$ be a system of polynomials generating a zero-dimensional ideal $I$, where $\mathbb{K}$ is an arbitrary algebraically closed field. We study the computation of "matrices of traces" for the factor algebra $\mathcal{A} := \mathbb{K}[x_1, ..., x_m]/I$, i.e. matrices with entries which are trace functions of the roots of $I$. Such matrices of traces in turn allow us to compute a system of multiplication matrices $\{M_{x_i} \mid i=1,...,m\}$ of the radical $\sqrt{I}$. We first propose a method using Macaulay type resultant matrices of $f_1,...,f_s$ and a polynomial $J$ to compute moment matrices, and in particular matrices of traces for $\mathcal{A}$. Here $J$ is a polynomial generalizing the Jacobian. We prove bounds on the degrees needed for the Macaulay matrix in the case when $I$ has finitely many projective roots in $\mathbb{P}^m_{\mathbb{K}}$. We also extend previous results which work only for the case where $\mathcal{A}$ is Gorenstein to the non-Gorenstein case. The second proposed method uses Bezoutian matrices to compute matrices of traces of $\mathcal{A}$. Here we need the assumption that $s=m$ and $f_1,...,f_m$ define an affine complete intersection. This second method also works if we have higher dimensional components at infinity. A new explicit description of the generators of $\sqrt{I}$ is given in terms of Bezoutians.<|reference_end|> | arxiv | @article{janovitz-freireich2009on,
title={On the Computation of Matrices of Traces and Radicals of Ideals},
author={Itnuit Janovitz-Freireich, Bernard Mourrain (INRIA Sophia Antipolis),
Lajos Rónyai, Ágnes Szántó},
journal={Journal of Symbolic Computation 47, 1 (2012) 102-122},
year={2009},
doi={10.1016/j.jsc.2011.08.020},
archivePrefix={arXiv},
eprint={0901.2778},
primaryClass={cs.SC math.AC}
} | janovitz-freireich2009on |
arxiv-6097 | 0901.2804 | The Secrecy Capacity for a 3-Receiver Broadcast Channel with Degraded Message Sets | <|reference_start|>The Secrecy Capacity for a 3-Receiver Broadcast Channel with Degraded Message Sets: This paper has been withdrawn by the author due to some errors.<|reference_end|> | arxiv | @article{choo2009the,
title={The Secrecy Capacity for a 3-Receiver Broadcast Channel with Degraded
Message Sets},
author={Li-Chia Choo and Kai-Kit Wong},
journal={arXiv preprint arXiv:0901.2804},
year={2009},
archivePrefix={arXiv},
eprint={0901.2804},
primaryClass={cs.IT math.IT}
} | choo2009the |
arxiv-6098 | 0901.2838 | Analytical Solution of Covariance Evolution for Regular LDPC Codes | <|reference_start|>Analytical Solution of Covariance Evolution for Regular LDPC Codes: The covariance evolution is a system of differential equations with respect to the covariance of the number of edges connecting to the nodes of each residual degree. Solving the covariance evolution, we can derive distributions of the number of check nodes of residual degree 1, which helps us to estimate the block error probability for finite-length LDPC codes. Amraoui et al. resorted to numerical computations to solve the covariance evolution. In this paper, we give the analytical solution of the covariance evolution.<|reference_end|> | arxiv | @article{nozaki2009analytical,
title={Analytical Solution of Covariance Evolution for Regular LDPC Codes},
author={Takayuki Nozaki, Kenta Kasai, Kohichi Sakaniwa},
journal={arXiv preprint arXiv:0901.2838},
year={2009},
archivePrefix={arXiv},
eprint={0901.2838},
primaryClass={cs.IT math.IT}
} | nozaki2009analytical |
arxiv-6099 | 0901.2847 | Average-case analysis of perfect sorting by reversals | <|reference_start|>Average-case analysis of perfect sorting by reversals: A sequence of reversals that takes a signed permutation to the identity is perfect if at no step a common interval is broken. Determining a parsimonious perfect sequence of reversals that sorts a signed permutation is NP-hard. Here we show that, despite this worst-case analysis, with probability one, sorting can be done in polynomial time. Further, we find asymptotic expressions for the average length and number of reversals in commuting permutations, an interesting sub-class of signed permutations.<|reference_end|> | arxiv | @article{bouvel2009average-case,
title={Average-case analysis of perfect sorting by reversals},
author={Mathilde Bouvel (LIAFA), Cedric Chauve, Marni Mishna, Dominique Rossin
(LIAFA)},
journal={CPM'09, Lille, France (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.2847},
primaryClass={math.CO cs.DS q-bio.QM}
} | bouvel2009average-case |
arxiv-6100 | 0901.2850 | On finitely recursive programs | <|reference_start|>On finitely recursive programs: Disjunctive finitary programs are a class of logic programs admitting function symbols and hence infinite domains. They have very good computational properties, for example ground queries are decidable while in the general case the stable model semantics is highly undecidable. In this paper we prove that a larger class of programs, called finitely recursive programs, preserves most of the good properties of finitary programs under the stable model semantics, namely: (i) finitely recursive programs enjoy a compactness property; (ii) inconsistency checking and skeptical reasoning are semidecidable; (iii) skeptical resolution is complete for normal finitely recursive programs. Moreover, we show how to check inconsistency and answer skeptical queries using finite subsets of the ground program instantiation. We achieve this by extending the splitting sequence theorem by Lifschitz and Turner: We prove that if the input program P is finitely recursive, then the partial stable models determined by any smooth splitting omega-sequence converge to a stable model of P.<|reference_end|> | arxiv | @article{baselice2009on,
title={On finitely recursive programs},
author={Sabrina Baselice, Piero A. Bonatti, Giovanni Criscuolo},
journal={Theory and Practice of Logic Programming, 9(2), 213-238, 2009},
year={2009},
archivePrefix={arXiv},
eprint={0901.2850},
primaryClass={cs.AI cs.LO}
} | baselice2009on |