corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-6201 | 0901.4267 | LR-aided MMSE lattice decoding is DMT optimal for all approximately universal codes | <|reference_start|>LR-aided MMSE lattice decoding is DMT optimal for all approximately universal codes: Currently for the nt x nr MIMO channel, any explicitly constructed space-time (ST) designs that achieve optimality with respect to the diversity multiplexing tradeoff (DMT) are known to do so only when decoded using maximum likelihood (ML) decoding, which may incur prohibitive decoding complexity. In this paper we prove that MMSE regularized lattice decoding, as well as the computationally efficient lattice reduction (LR) aided MMSE decoder, allows for efficient and DMT optimal decoding of any approximately universal lattice-based code. The result identifies for the first time an explicitly constructed encoder and a computationally efficient decoder that achieve DMT optimality for all multiplexing gains and all channel dimensions. The results hold irrespective of the fading statistics.<|reference_end|> | arxiv | @article{jalden2009lr-aided,
title={LR-aided MMSE lattice decoding is DMT optimal for all approximately
universal codes},
author={Joakim Jalden and Petros Elia},
journal={arXiv preprint arXiv:0901.4267},
year={2009},
archivePrefix={arXiv},
eprint={0901.4267},
primaryClass={cs.IT math.IT}
} | jalden2009lr-aided |
arxiv-6202 | 0901.4272 | Dynamic Control of a Flow-Rack Automated Storage and Retrieval System | <|reference_start|>Dynamic Control of a Flow-Rack Automated Storage and Retrieval System: In this paper we propose a control scheme based on coloured Petri net (CPN) for a flow-rack automated storage and retrieval system. The AS/RS is modelled using Coloured Petri nets, the developed model has been used to capture and provide the rack state. We introduce in the control system an optimization module as a decision process which performs a real-time optimization working on a discrete events time scale. The objective is to find bin locations for the retrieval requests by minimizing the total number of retrieval cycles for a batch of requests and thereby increase the system throughput. By solving the optimization model, the proposed method gives according to customers request and the rack state, the best bin locations for retrieval, i.e. allowing at the same time to satisfy the customers request and carrying out the minimum of retrieval cycles.<|reference_end|> | arxiv | @article{hachemi2009dynamic,
title={Dynamic Control of a Flow-Rack Automated Storage and Retrieval System},
  author={Khalid Hachemi (GIPSA-lab), Hassane Alla (GIPSA-lab)},
journal={arXiv preprint arXiv:0901.4272},
year={2009},
archivePrefix={arXiv},
eprint={0901.4272},
primaryClass={cs.IT math.IT}
} | hachemi2009dynamic |
arxiv-6203 | 0901.4275 | Informative Sensing | <|reference_start|>Informative Sensing: Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y=Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.<|reference_end|> | arxiv | @article{chang2009informative,
title={Informative Sensing},
author={Hyun Sung Chang, Yair Weiss, William T. Freeman},
journal={arXiv preprint arXiv:0901.4275},
year={2009},
archivePrefix={arXiv},
eprint={0901.4275},
primaryClass={cs.IT math.IT}
} | chang2009informative |
arxiv-6204 | 0901.4322 | Bounds on the degree of APN polynomials The Case of $x^{-1}+g(x)$ | <|reference_start|>Bounds on the degree of APN polynomials The Case of $x^{-1}+g(x)$: We prove that functions $f:\f{2^m} \to \f{2^m}$ of the form $f(x)=x^{-1}+g(x)$ where $g$ is any non-affine polynomial are APN on at most a finite number of fields $\f{2^m}$. Furthermore we prove that when the degree of $g$ is less than 7 such functions are APN only if $m \le 3$ where these functions are equivalent to $x^3$.<|reference_end|> | arxiv | @article{leander2009bounds,
title={Bounds on the degree of APN polynomials The Case of $x^{-1}+g(x)$},
  author={Gregor Leander (IML), Fran\c{c}ois Rodier (IML)},
journal={arXiv preprint arXiv:0901.4322},
year={2009},
archivePrefix={arXiv},
eprint={0901.4322},
primaryClass={math.AG cs.CR}
} | leander2009bounds |
arxiv-6205 | 0901.4323 | On the bit-complexity of sparse polynomial multiplication | <|reference_start|>On the bit-complexity of sparse polynomial multiplication: In this paper, we present fast algorithms for the product of two multivariate polynomials in sparse representation. The bit complexity of our algorithms are studied in detail for various types of coefficients, and we derive new complexity results for the power series multiplication in many variables. Our algorithms are implemented and freely available within the Mathemagix software. We show that their theoretical costs are well-reflected in practice.<|reference_end|> | arxiv | @article{van der hoeven2009on,
title={On the bit-complexity of sparse polynomial multiplication},
  author={Joris van der Hoeven, Gr\'egoire Lecerf},
journal={arXiv preprint arXiv:0901.4323},
year={2009},
archivePrefix={arXiv},
eprint={0901.4323},
primaryClass={cs.DS cs.MS}
} | van der hoeven2009on |
arxiv-6206 | 0901.4375 | Extracting Spooky-activation-at-a-distance from Considerations of Entanglement | <|reference_start|>Extracting Spooky-activation-at-a-distance from Considerations of Entanglement: Following an early claim by Nelson & McEvoy \cite{Nelson:McEvoy:2007} suggesting that word associations can display `spooky action at a distance behaviour', a serious investigation of the potentially quantum nature of such associations is currently underway. This paper presents a simple quantum model of a word association system. It is shown that a quantum model of word entanglement can recover aspects of both the Spreading Activation equation and the Spooky-activation-at-a-distance equation, both of which are used to model the activation level of words in human memory.<|reference_end|> | arxiv | @article{bruza2009extracting,
title={Extracting Spooky-activation-at-a-distance from Considerations of
Entanglement},
author={P.D. Bruza, K. Kitto, D. Nelson, C. McEvoy},
journal={arXiv preprint arXiv:0901.4375},
year={2009},
archivePrefix={arXiv},
eprint={0901.4375},
primaryClass={physics.data-an cs.CL quant-ph}
} | bruza2009extracting |
arxiv-6207 | 0901.4379 | Ergodic Interference Alignment | <|reference_start|>Ergodic Interference Alignment: This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading. At any particular time, each receiver will see a superposition of the transmitted signals plus noise. The standard approach to such a scenario results in each transmitter-receiver pair achieving a rate proportional to 1/K its interference-free ergodic capacity. However, given two well-chosen time indices, the channel coefficients from interfering users can be made to exactly cancel. By adding up these two observations, each receiver can obtain its desired signal without any interference. If the channel gains have independent, uniform phases, this technique allows each user to achieve at least 1/2 its interference-free ergodic capacity at any signal-to-noise ratio. Prior interference alignment techniques were only able to attain this performance as the signal-to-noise ratio tended to infinity. Extensions are given for the case where each receiver wants a message from more than one transmitter as well as the "X channel" case (with two receivers) where each transmitter has an independent message for each receiver. Finally, it is shown how to generalize this strategy beyond Gaussian channel models. For a class of finite field interference channels, this approach yields the ergodic capacity region.<|reference_end|> | arxiv | @article{nazer2009ergodic,
title={Ergodic Interference Alignment},
author={Bobak Nazer, Michael Gastpar, Syed A. Jafar, and Sriram Vishwanath},
journal={arXiv preprint arXiv:0901.4379},
year={2009},
archivePrefix={arXiv},
eprint={0901.4379},
primaryClass={cs.IT math.IT}
} | nazer2009ergodic |
arxiv-6208 | 0901.4400 | Faster Real Feasibility via Circuit Discriminants | <|reference_start|>Faster Real Feasibility via Circuit Discriminants: We show that detecting real roots for honestly n-variate (n+2)-nomials (with integer exponents and coefficients) can be done in time polynomial in the sparse encoding for any fixed n. The best previous complexity bounds were exponential in the sparse encoding, even for n fixed. We then give a characterization of those functions k(n) such that the complexity of detecting real roots for n-variate (n+k(n))-nomials transitions from P to NP-hardness as n tends to infinity. Our proofs follow in large part from a new complexity threshold for deciding the vanishing of A-discriminants of n-variate (n+k(n))-nomials. Diophantine approximation, through linear forms in logarithms, also arises as a key tool.<|reference_end|> | arxiv | @article{bihan2009faster,
title={Faster Real Feasibility via Circuit Discriminants},
author={Frederic Bihan, J. Maurice Rojas, Casey Stella},
journal={arXiv preprint arXiv:0901.4400},
year={2009},
archivePrefix={arXiv},
eprint={0901.4400},
primaryClass={math.AG cs.CC math.OC}
} | bihan2009faster |
arxiv-6209 | 0901.4404 | Performance of Buchberger's Improved Algorithm using Prime Based Ordering | <|reference_start|>Performance of Buchberger's Improved Algorithm using Prime Based Ordering: Prime-based ordering which is proved to be admissible, is the encoding of indeterminates in power-products with prime numbers and ordering them by using the natural number order. Using Eiffel, four versions of Buchberger's improved algorithm for obtaining Groebner Bases have been developed: two total degree versions, representing power products as strings and the other two as integers based on prime-based ordering. The versions are further distinguished by implementing coefficients as 64-bit integers and as multiple-precision integers. By using primebased power product coding, iterative or recursive operations on power products are replaced with integer operations. It is found that on a series of example polynomial sets, significant reductions in computation time of 30% or more are almost always obtained.<|reference_end|> | arxiv | @article{horan2009performance,
title={Performance of Buchberger's Improved Algorithm using Prime Based
Ordering},
author={Peter Horan and John Carminati},
journal={arXiv preprint arXiv:0901.4404},
year={2009},
archivePrefix={arXiv},
eprint={0901.4404},
primaryClass={cs.SE cs.SC}
} | horan2009performance |
arxiv-6210 | 0901.4417 | ALLSAT compressed with wildcards: All, or all maximum independent sets | <|reference_start|>ALLSAT compressed with wildcards: All, or all maximum independent sets: An odd cycle cover is a vertex set whose removal makes a graph bipartite. We show that if a $k$-element odd cycle cover of a graph with $w$ vertices is known then all $N$ maximum anticliques (= independent sets) can be generated in time $O(2^k w^3 + N w^2)$. Generating ${\it all}\ N'$ anticliques (maximum or not) is easier and works for arbitrary graphs in time $O(N'w^2)$. In fact the use of wildcards allows the anticliques to be compactly generated in clusters.<|reference_end|> | arxiv | @article{wild2009allsat,
title={ALLSAT compressed with wildcards: All, or all maximum independent sets},
author={Marcel Wild},
journal={arXiv preprint arXiv:0901.4417},
year={2009},
archivePrefix={arXiv},
eprint={0901.4417},
primaryClass={cs.DS cs.DM cs.MS}
} | wild2009allsat |
arxiv-6211 | 0901.4420 | Some Generalizations of the Capacity Theorem for AWGN Channels | <|reference_start|>Some Generalizations of the Capacity Theorem for AWGN Channels: The channel capacity theorem for additive white Gaussian noise channel (AWGN), widely known as the Shannon-Hartley Law, expresses the information capacity of a channel bandlimited in the conventional Fourier domain in terms of the signal-to-noise ratio in it. In this letter generalized versions of the Shannon-Hartley Law using the linear canonical transform (LCT) are presented. The channel capacity for AWGN channels is found to be a function of the LCT parameters.<|reference_end|> | arxiv | @article{sharma2009some,
title={Some Generalizations of the Capacity Theorem for AWGN Channels},
author={Kamalesh Kumar Sharma},
journal={arXiv preprint arXiv:0901.4420},
year={2009},
archivePrefix={arXiv},
eprint={0901.4420},
primaryClass={cs.IT math.IT}
} | sharma2009some |
arxiv-6212 | 0901.4430 | Neighbourhood Structures: Bisimilarity and Basic Model Theory | <|reference_start|>Neighbourhood Structures: Bisimilarity and Basic Model Theory: Neighbourhood structures are the standard semantic tool used to reason about non-normal modal logics. The logic of all neighbourhood models is called classical modal logic. In coalgebraic terms, a neighbourhood frame is a coalgebra for the contravariant powerset functor composed with itself, denoted by 2^2. We use this coalgebraic modelling to derive notions of equivalence between neighbourhood structures. 2^2-bisimilarity and behavioural equivalence are well known coalgebraic concepts, and they are distinct, since 2^2 does not preserve weak pullbacks. We introduce a third, intermediate notion whose witnessing relations we call precocongruences (based on pushouts). We give back-and-forth style characterisations for 2^2-bisimulations and precocongruences, we show that on a single coalgebra, precocongruences capture behavioural equivalence, and that between neighbourhood structures, precocongruences are a better approximation of behavioural equivalence than 2^2-bisimulations. We also introduce a notion of modal saturation for neighbourhood models, and investigate its relationship with definability and image-finiteness. We prove a Hennessy-Milner theorem for modally saturated and for image-finite neighbourhood models. Our main results are an analogue of Van Benthem's characterisation theorem and a model-theoretic proof of Craig interpolation for classical modal logic.<|reference_end|> | arxiv | @article{hansen2009neighbourhood,
title={Neighbourhood Structures: Bisimilarity and Basic Model Theory},
author={Helle Hvid Hansen and Clemens Kupke and Eric Pacuit},
journal={Logical Methods in Computer Science, Volume 5, Issue 2 (April 9,
2009) lmcs:1167},
year={2009},
doi={10.2168/LMCS-5(2:2)2009},
archivePrefix={arXiv},
eprint={0901.4430},
primaryClass={cs.LO}
} | hansen2009neighbourhood |
arxiv-6213 | 0901.4466 | Physarum boats: If plasmodium sailed it would never leave a port | <|reference_start|>Physarum boats: If plasmodium sailed it would never leave a port: Plasmodium of \emph{Physarum polycephalum} is a single huge (visible by naked eye) cell with myriad of nuclei. The plasmodium is a promising substrate for non-classical, nature-inspired, computing devices. It is capable for approximation of shortest path, computation of planar proximity graphs and plane tessellations, primitive memory and decision-making. The unique properties of the plasmodium make it an ideal candidate for a role of amorphous biological robots with massive parallel information processing and distributed inputs and outputs. We show that when adhered to light-weight object resting on a water surface the plasmodium can propel the object by oscillating its protoplasmic pseudopodia. In experimental laboratory conditions and computational experiments we study phenomenology of the plasmodium-floater system, and possible mechanisms of controlling motion of objects propelled by on board plasmodium.<|reference_end|> | arxiv | @article{adamatzky2009physarum,
title={Physarum boats: If plasmodium sailed it would never leave a port},
author={Andrew Adamatzky},
  journal={Applied Bionics and Biomechanics, Volume 7, Issue 1, March 2010,
  pages 31-39},
year={2009},
doi={10.1080/11762320902863890},
archivePrefix={arXiv},
eprint={0901.4466},
primaryClass={cs.RO q-bio.CB}
} | adamatzky2009physarum |
arxiv-6214 | 0901.4467 | Efficient LDPC Codes over GF(q) for Lossy Data Compression | <|reference_start|>Efficient LDPC Codes over GF(q) for Lossy Data Compression: In this paper we consider the lossy compression of a binary symmetric source. We present a scheme that provides a low complexity lossy compressor with near optimal empirical performance. The proposed scheme is based on b-reduced ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced Belief Propagation algorithm, a variant of Belief Propagation. The computational complexity at the encoder is O(<d>.n.q.log q), where <d> is the average degree of the check nodes. For our code ensemble, decoding can be performed iteratively following the inverse steps of the leaf removal algorithm. For a sparse parity-check matrix the number of needed operations is O(n).<|reference_end|> | arxiv | @article{braunstein2009efficient,
title={Efficient LDPC Codes over GF(q) for Lossy Data Compression},
author={Alfredo Braunstein, Farbod Kayhan, Riccardo Zecchina},
journal={In: IEEE International Symposium on Information Theory, 2009. ISIT
2009. Seul, Korea; 2009},
year={2009},
doi={10.1109/ISIT.2009.5205707},
archivePrefix={arXiv},
eprint={0901.4467},
primaryClass={cs.IT math.IT}
} | braunstein2009efficient |
arxiv-6215 | 0901.4496 | Ethemba Trusted Host Environment Mainly Based on Attestation | <|reference_start|>Ethemba Trusted Host Environment Mainly Based on Attestation: Ethemba provides a framework and demonstrator for TPM applications.<|reference_end|> | arxiv | @article{brett2009ethemba,
  title={Ethemba Trusted Host Environment Mainly Based on Attestation},
author={Andreas Brett, Andreas Leicher},
journal={arXiv preprint arXiv:0901.4496},
year={2009},
archivePrefix={arXiv},
eprint={0901.4496},
primaryClass={cs.CR}
} | brett2009ethemba |
arxiv-6216 | 0901.4551 | Robust Key Agreement Schemes | <|reference_start|>Robust Key Agreement Schemes: This paper considers a key agreement problem in which two parties aim to agree on a key by exchanging messages in the presence of adversarial tampering. The aim of the adversary is to disrupt the key agreement process, but there are no secrecy constraints (i.e. we do not insist that the key is kept secret from the adversary). The main results of the paper are coding schemes and bounds on maximum key generation rates for this problem.<|reference_end|> | arxiv | @article{chan2009robust,
title={Robust Key Agreement Schemes},
author={Terence Chan, Ning Cai, Alex Grant},
journal={arXiv preprint arXiv:0901.4551},
year={2009},
archivePrefix={arXiv},
eprint={0901.4551},
primaryClass={cs.IT math.IT}
} | chan2009robust |
arxiv-6217 | 0901.4571 | Everyone is a Curator: Human-Assisted Preservation for ORE Aggregations | <|reference_start|>Everyone is a Curator: Human-Assisted Preservation for ORE Aggregations: The Open Archives Initiative (OAI) has recently created the Object Reuse and Exchange (ORE) project that defines Resource Maps (ReMs) for describing aggregations of web resources. These aggregations are susceptible to many of the same preservation challenges that face other web resources. In this paper, we investigate how the aggregations of web resources can be preserved outside of the typical repository environment and instead rely on the thousands of interactive users in the web community and the Web Infrastructure (the collection of web archives, search engines, and personal archiving services) to facilitate preservation. Inspired by Web 2.0 services such as digg, deli.cio.us, and Yahoo! Buzz, we have developed a lightweight system called ReMember that attempts to harness the collective abilities of the web community for preservation purposes instead of solely placing the burden of curatorial responsibilities on a small number of experts.<|reference_end|> | arxiv | @article{mccown2009everyone,
title={Everyone is a Curator: Human-Assisted Preservation for ORE Aggregations},
author={Frank McCown, Michael L. Nelson, Herbert Van de Sompel},
journal={arXiv preprint arXiv:0901.4571},
year={2009},
archivePrefix={arXiv},
eprint={0901.4571},
primaryClass={cs.DL cs.IR}
} | mccown2009everyone |
arxiv-6218 | 0901.4591 | Network Coding-Based Protection Strategy Against Node Failures | <|reference_start|>Network Coding-Based Protection Strategy Against Node Failures: The enormous increase in the usage of communication networks has made protection against node and link failures essential in the deployment of reliable networks. To prevent loss of data due to node failures, a network protection strategy is proposed that aims to withstand such failures. Particularly, a protection strategy against any single node failure is designed for a given network with a set of $n$ disjoint paths between senders and receivers. Network coding and reduced capacity are deployed in this strategy without adding extra working paths to the readily available connection paths. This strategy is based on protection against node failures as protection against multiple link failures. In addition, the encoding and decoding operational aspects of the premeditated protection strategy are demonstrated.<|reference_end|> | arxiv | @article{aly2009network,
title={Network Coding-Based Protection Strategy Against Node Failures},
author={Salah A. Aly, Ahmed E. Kamal},
journal={arXiv preprint arXiv:0901.4591},
year={2009},
archivePrefix={arXiv},
eprint={0901.4591},
primaryClass={cs.IT cs.CR cs.NI math.IT}
} | aly2009network |
arxiv-6219 | 0901.4612 | Network Coding Capacity: A Functional Dependence Bound | <|reference_start|>Network Coding Capacity: A Functional Dependence Bound: Explicit characterization and computation of the multi-source network coding capacity region (or even bounds) is a long-standing open problem. In fact, finding the capacity region requires determination of the set of all entropic vectors $\Gamma^{*}$, which is known to be an extremely hard problem. On the other hand, calculating the explicitly known linear programming bound is very hard in practice due to an exponential growth in complexity as a function of network size. We give a new, easily computable outer bound, based on characterization of all functional dependencies in networks. We also show that the proposed bound is tighter than some known bounds.<|reference_end|> | arxiv | @article{thakor2009network,
title={Network Coding Capacity: A Functional Dependence Bound},
author={Satyajit Thakor, Alex Grant and Terence Chan},
journal={arXiv preprint arXiv:0901.4612},
year={2009},
archivePrefix={arXiv},
eprint={0901.4612},
primaryClass={cs.IT math.IT}
} | thakor2009network |
arxiv-6220 | 0901.4642 | Fast Dual-Radio Cross-Layer Handoffs in Multi-Hop Infrastructure-mode 802.11 Wireless Networks for In-Vehicle Multimedia Infotainment | <|reference_start|>Fast Dual-Radio Cross-Layer Handoffs in Multi-Hop Infrastructure-mode 802.11 Wireless Networks for In-Vehicle Multimedia Infotainment: Minimizing handoff latency and achieving near-zero packet loss is critical for delivering multimedia infotainment applications to fast-moving vehicles that are likely to encounter frequent handoffs. In this paper, we propose a dual-radio cross-layer handoff scheme for infrastructure-mode 802.11 Wireless Networks that achieves this goal. We present performance results of an implementation of our algorithm in a Linux-based On-Board-Unit prototype.<|reference_end|> | arxiv | @article{poroor2009fast,
title={Fast Dual-Radio Cross-Layer Handoffs in Multi-Hop Infrastructure-mode
802.11 Wireless Networks for In-Vehicle Multimedia Infotainment},
author={Jayaraj Poroor, Sriram Karunagaran, Sudharsan Sundararajan, Ranjith
Pillai},
journal={arXiv preprint arXiv:0901.4642},
year={2009},
doi={10.1109/ANTS.2008.4937782},
archivePrefix={arXiv},
eprint={0901.4642},
primaryClass={cs.NI}
} | poroor2009fast |
arxiv-6221 | 0901.4643 | Visual tool for estimating the fractal dimension of images | <|reference_start|>Visual tool for estimating the fractal dimension of images: This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. Following the attempt to separate the real information from noise, we considered also the family of all band-pass filters with the same band-width (specified as parameter). The fractal dimension can be thus represented as a function of the pixel color code. The program was used for the study of paintings cracks, as an additional tool which can help the critic to decide if an artistic work is original or not. In its second version, the application was extended for working also with csv files and three-dimensional images.<|reference_end|> | arxiv | @article{grossu2009visual,
title={Visual tool for estimating the fractal dimension of images},
  author={I. V. Grossu, C. Besliu, M. V. Rusu, Al. Jipa, C. C. Bordeianu, D.
  Felea, E. Stan, T. Esanu},
journal={Computer Physics Communications 180 (2009) p.1999-2001; CPC,
Volume 181, Issue 4, April 2010, Pages 831-832},
year={2009},
doi={10.1016/j.cpc.2009.05.015},
archivePrefix={arXiv},
eprint={0901.4643},
primaryClass={physics.comp-ph cs.GR nlin.PS}
} | grossu2009visual |
arxiv-6222 | 0901.4646 | A Quantum Key Distribution Network Through Single Mode Optical Fiber | <|reference_start|>A Quantum Key Distribution Network Through Single Mode Optical Fiber: Quantum key distribution (QKD) has been developed within the last decade that is provably secure against arbitrary computing power, and even against quantum computer attacks. Now there is a strong need of research to exploit this technology in the existing communication networks. In this paper we have presented various experimental results pertaining to QKD like Raw key rate and Quantum bit error rate (QBER). We found these results over 25 km single mode optical fiber. The experimental setup implemented the enhanced version of BB84 QKD protocol. Based upon the results obtained, we have presented a network design which can be implemented for the realization of large scale QKD networks. Furthermore, several new ideas are presented and discussed to integrate the QKD technique in the classical communication networks.<|reference_end|> | arxiv | @article{khan2009a,
title={A Quantum Key Distribution Network Through Single Mode Optical Fiber},
author={Muhammad Mubashir Khan, Salahuddin Hyder, Mahmood K Pathan, Kashif H
Sheikh},
journal={Khan, M.M., et al., A Quantum Key Distribution Network through
Single Mode Optical Fiber. Proceedings of the International Symposium on
Collaborative Technologies and Systems, 2006: p. 386-391},
year={2009},
doi={10.1109/CTS.2006.10},
archivePrefix={arXiv},
eprint={0901.4646},
primaryClass={cs.CR}
} | khan2009a |
arxiv-6223 | 0901.4648 | On The Positive Definiteness of Polarity Coincidence Correlation Coefficient Matrix | <|reference_start|>On The Positive Definiteness of Polarity Coincidence Correlation Coefficient Matrix: Polarity coincidence correlator (PCC), when used to estimate the covariance matrix on an element-by-element basis, may not yield a positive semi-definite (PSD) estimate. Devlin et al. [1] claimed that element-wise PCC is not guaranteed to be PSD in dimensions p>3 for real signals. However, no justification or proof was available on this issue. In this letter, it is proved that for real signals with p<=3 and for complex signals with p<=2, a PSD estimate is guaranteed. Counterexamples are presented for higher dimensions which yield invalid covariance estimates.<|reference_end|> | arxiv | @article{haddadi2009on,
title={On The Positive Definiteness of Polarity Coincidence Correlation
Coefficient Matrix},
author={Farzan Haddadi, Mohammad Mahdi Nayebi, Mohammad Reza Aref},
journal={arXiv preprint arXiv:0901.4648},
year={2009},
doi={10.1109/LSP.2007.911193},
archivePrefix={arXiv},
eprint={0901.4648},
primaryClass={cs.IT math.IT}
} | haddadi2009on |
arxiv-6224 | 0901.4664 | Square root meadows | <|reference_start|>Square root meadows: Let Q_0 denote the rational numbers expanded to a meadow by totalizing inversion such that 0^{-1}=0. Q_0 can be expanded by a total sign function s that extracts the sign of a rational number. In this paper we discuss an extension Q_0(s ,\sqrt) of the signed rationals in which every number has a unique square root.<|reference_end|> | arxiv | @article{bergstra2009square,
title={Square root meadows},
author={Jan A. Bergstra and I. Bethke},
journal={arXiv preprint arXiv:0901.4664},
year={2009},
archivePrefix={arXiv},
eprint={0901.4664},
primaryClass={cs.LO}
} | bergstra2009square |
arxiv-6225 | 0901.4694 | Limit on the Addressability of Fault-Tolerant Nanowire Decoders | <|reference_start|>Limit on the Addressability of Fault-Tolerant Nanowire Decoders: Although prone to fabrication error, the nanowire crossbar is a promising candidate component for next generation nanometer-scale circuits. In the nanowire crossbar architecture, nanowires are addressed by controlling voltages on the mesowires. For area efficiency, we are interested in the maximum number of nanowires $N(m,e)$ that can be addressed by $m$ mesowires, in the face of up to $e$ fabrication errors. Asymptotically tight bounds on $N(m,e)$ are established in this paper. In particular, it is shown that $N(m,e) = \Theta(2^m / m^{e+1/2})$. Interesting observations are made on the equivalence between this problem and the problem of constructing optimal EC/AUED codes, superimposed distance codes, pooling designs, and diffbounded set systems. Results in this paper also improve upon those in the EC/AUED codes literature.<|reference_end|> | arxiv | @article{chee2009limit,
title={Limit on the Addressability of Fault-Tolerant Nanowire Decoders},
author={Yeow Meng Chee and Alan C. H. Ling},
journal={IEEE Transactions on Computers, vol. 58, no. 1, pp. 60-68, 2009},
year={2009},
doi={10.1109/TC.2008.130},
archivePrefix={arXiv},
eprint={0901.4694},
primaryClass={cs.AR cs.DM cs.IT math.IT}
} | chee2009limit |
arxiv-6226 | 0901.4723 | On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography | <|reference_start|>On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography: This paper deals with improvements to the contrast source inversion method which is widely used in microwave tomography. First, the method is reviewed and weaknesses of both the criterion form and the optimization strategy are underlined. Then, two new algorithms are proposed. Both of them are based on the same criterion, similar but more robust than the one used in contrast source inversion. The first technique keeps the main characteristics of the contrast source inversion optimization scheme but is based on a better exploitation of the conjugate gradient algorithm. The second technique is based on a preconditioned conjugate gradient algorithm and performs simultaneous updates of sets of unknowns that are normally processed sequentially. Both techniques are shown to be more efficient than original contrast source inversion.<|reference_end|> | arxiv | @article{barrière2009on,
title={On Algorithms Based on Joint Estimation of Currents and Contrast in
Microwave Tomography},
author={Paul-Andr\'e Barri\`ere, J\'er\^ome Idier, Yves Goussard, and
Jean-Jacques Laurin},
journal={arXiv preprint arXiv:0901.4723},
year={2009},
archivePrefix={arXiv},
eprint={0901.4723},
primaryClass={math.NA cs.IT math.IT}
} | barrière2009on |
arxiv-6227 | 0901.4727 | Arrow's Impossibility Theorem Without Unanimity | <|reference_start|>Arrow's Impossibility Theorem Without Unanimity: Arrow's Impossibility Theorem states that any constitution which satisfies Transitivity, Independence of Irrelevant Alternatives (IIA) and Unanimity is a dictatorship. Wilson derived properties of constitutions satisfying Transitivity and IIA for unrestricted domains where ties are allowed. In this paper we consider the case where only strict preferences are allowed. In this case we derive a new short proof of Arrow's theorem and further obtain a new and complete characterization of all functions satisfying Transitivity and IIA.<|reference_end|> | arxiv | @article{mossel2009arrow's,
title={Arrow's Impossibility Theorem Without Unanimity},
author={Elchanan Mossel},
journal={arXiv preprint arXiv:0901.4727},
year={2009},
archivePrefix={arXiv},
eprint={0901.4727},
primaryClass={cs.GT cs.DM}
} | mossel2009arrow's |
arxiv-6228 | 0901.4728 | Alpaga: A Tool for Solving Parity Games with Imperfect Information | <|reference_start|>Alpaga: A Tool for Solving Parity Games with Imperfect Information: Alpaga is a solver for two-player parity games with imperfect information. Given the description of a game, it determines whether the first player can ensure to win and, if so, it constructs a winning strategy. The tool provides a symbolic implementation of a recent algorithm based on antichains.<|reference_end|> | arxiv | @article{berwanger2009alpaga:,
title={Alpaga: A Tool for Solving Parity Games with Imperfect Information},
author={Dietmar Berwanger and Krishnendu Chatterjee and Martin De Wulf and
Laurent Doyen and Thomas A. Henzinger},
journal={arXiv preprint arXiv:0901.4728},
year={2009},
archivePrefix={arXiv},
eprint={0901.4728},
primaryClass={cs.GT cs.LO}
} | berwanger2009alpaga: |
arxiv-6229 | 0901.4747 | On finding multiplicities of characteristic polynomial factors of black-box matrices | <|reference_start|>On finding multiplicities of characteristic polynomial factors of black-box matrices: We present algorithms and heuristics to compute the characteristic polynomial of a matrix given its minimal polynomial. The matrix is represented as a black-box, i.e., by a function to compute its matrix-vector product. The methods apply to matrices either over the integers or over a large enough finite field. Experiments show that these methods perform efficiently in practice. Combined in an adaptive strategy, these algorithms reach significant speedups in practice for some integer matrices arising in an application from graph theory.<|reference_end|> | arxiv | @article{dumas2009on,
title={On finding multiplicities of characteristic polynomial factors of
black-box matrices},
author={Jean-Guillaume Dumas (LJK), Cl\'ement Pernet (INRIA Rh\^one-Alpes /
LIG Laboratoire d'Informatique de Grenoble), B. David Saunders (CIS)},
journal={(International Symposium on Symbolic and Algebraic Computation
2009), Seoul, Republic of Korea (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.4747},
primaryClass={cs.SC}
} | dumas2009on |
arxiv-6230 | 0901.4754 | An algebra of automata which includes both classical and quantum entities | <|reference_start|>An algebra of automata which includes both classical and quantum entities: We describe an algebra for composing automata which includes both classical and quantum entities and their communications. We illustrate by describing in detail a quantum protocol.<|reference_end|> | arxiv | @article{albasini2009an,
title={An algebra of automata which includes both classical and quantum
entities},
author={L. de Francesco Albasini, N. Sabadini, R.F.C. Walters},
journal={arXiv preprint arXiv:0901.4754},
year={2009},
archivePrefix={arXiv},
eprint={0901.4754},
primaryClass={cs.LO math.CT}
} | albasini2009an |
arxiv-6231 | 0901.4761 | A Knowledge Discovery Framework for Learning Task Models from User Interactions in Intelligent Tutoring Systems | <|reference_start|>A Knowledge Discovery Framework for Learning Task Models from User Interactions in Intelligent Tutoring Systems: Domain experts should provide relevant domain knowledge to an Intelligent Tutoring System (ITS) so that it can guide a learner during problem-solving learning activities. However, for many ill-defined domains, the domain knowledge is hard to define explicitly. In previous works, we showed how sequential pattern mining can be used to extract a partial problem space from logged user interactions, and how it can support tutoring services during problem-solving exercises. This article describes an extension of this approach to extract a problem space that is richer and more adapted for supporting tutoring services. We combined sequential pattern mining with (1) dimensional pattern mining, (2) time intervals, (3) the automatic clustering of valued actions and (4) closed sequences mining. Some tutoring services have been implemented and an experiment has been conducted in a tutoring system.<|reference_end|> | arxiv | @article{fournier-viger2009a,
title={A Knowledge Discovery Framework for Learning Task Models from User
Interactions in Intelligent Tutoring Systems},
author={P. Fournier-Viger, R. Nkambou and E. Mephu Nguifo},
journal={arXiv preprint arXiv:0901.4761},
year={2009},
doi={10.1007/978-3-540-88636-5},
archivePrefix={arXiv},
eprint={0901.4761},
primaryClass={cs.AI}
} | fournier-viger2009a |
arxiv-6232 | 0901.4762 | Optimizing Service Orchestrations | <|reference_start|>Optimizing Service Orchestrations: As the number of services and the size of data involved in workflows increases, centralised orchestration techniques are reaching the limits of scalability. In the classic orchestration model, all data passes through a centralised engine, which results in unnecessary data transfer, wasted bandwidth and the engine becoming a bottleneck to the execution of a workflow. This paper presents and evaluates the Circulate architecture which maintains the robustness and simplicity of centralised orchestration, but facilitates choreography by allowing services to exchange data directly with one another. Circulate could be realised within any existing workflow framework; in this paper, we focus on WS-Circulate, a Web services based implementation. Taking inspiration from the Montage workflow, a number of common workflow patterns (sequence, fan-in and fan-out), input to output data size relationships and network configurations are identified and evaluated. The performance analysis concludes that a substantial reduction in communication overhead results in a 2-4 fold performance benefit across all patterns. An end-to-end pattern through the Montage workflow results in an 8 fold performance benefit and demonstrates how the advantage of using the Circulate architecture increases as the complexity of a workflow grows.<|reference_end|> | arxiv | @article{barker2009optimizing,
title={Optimizing Service Orchestrations},
author={Adam Barker},
journal={arXiv preprint arXiv:0901.4762},
year={2009},
archivePrefix={arXiv},
eprint={0901.4762},
primaryClass={cs.DC cs.SE}
} | barker2009optimizing |
arxiv-6233 | 0901.4784 | On the Entropy of Written Spanish | <|reference_start|>On the Entropy of Written Spanish: This paper reports on results on the entropy of the Spanish language. They are based on an analysis of natural language for n-word symbols (n = 1 to 18), trigrams, digrams, and characters. The results obtained in this work are based on the analysis of twelve different literary works in Spanish, as well as a 279917 word news file provided by the Spanish press agency EFE. Entropy values are calculated by a direct method using computer processing and the probability law of large numbers. Three samples of artificial Spanish language produced by a first-order model software source are also analyzed and compared with natural Spanish language.<|reference_end|> | arxiv | @article{guerrero2009on,
title={On the Entropy of Written Spanish},
author={Fabio G. Guerrero},
journal={Revista Colombiana de Estadistica (RCE), Vol. 35, No. 3, Dec.
2012, pp. 423-440},
year={2009},
archivePrefix={arXiv},
eprint={0901.4784},
primaryClass={cs.CL cs.IT math.IT}
} | guerrero2009on |
arxiv-6234 | 0901.4798 | Space Efficient Secret Sharing | <|reference_start|>Space Efficient Secret Sharing: This note proposes a method of space efficient secret sharing in which k secrets are mapped into n shares (n>=k) of the same size. Since n can be chosen to be equal to k, the method is space efficient. This method may be compared with conventional secret sharing schemes that divide a single secret into n shares.<|reference_end|> | arxiv | @article{parakh2009space,
title={Space Efficient Secret Sharing},
author={Abhishek Parakh and Subhash Kak},
journal={4th Annual Computer Science Research Conference at the University
of Oklahoma, April 18, 2009},
year={2009},
archivePrefix={arXiv},
eprint={0901.4798},
primaryClass={cs.CR}
} | parakh2009space |
arxiv-6235 | 0901.4814 | Space Efficient Secret Sharing: A Recursive Approach | <|reference_start|>Space Efficient Secret Sharing: A Recursive Approach: This paper presents a recursive secret sharing technique that distributes k-1 secrets of length b each into n shares such that each share is effectively of length (n/(k-1))*b and any k pieces suffice for reconstructing all the k-1 secrets. Since n/(k-1) is near the optimal factor of n/k, and can be chosen to be close to 1, the proposed technique is space efficient. Furthermore, each share is information theoretically secure, i.e. it does not depend on any unproven assumption of computational intractability. Such a recursive technique has potential applications in secure and reliable storage of information on the Web and in sensor networks.<|reference_end|> | arxiv | @article{parakh2009space,
title={Space Efficient Secret Sharing: A Recursive Approach},
author={Abhishek Parakh and Subhash Kak},
journal={arXiv preprint arXiv:0901.4814},
year={2009},
number={Cryptology ePrint Archive: Report 2009/365},
archivePrefix={arXiv},
eprint={0901.4814},
primaryClass={cs.CR}
} | parakh2009space |
arxiv-6236 | 0901.4830 | On the Relationship Between the Multi-antenna Secrecy Communications and Cognitive Radio Communications | <|reference_start|>On the Relationship Between the Multi-antenna Secrecy Communications and Cognitive Radio Communications: This paper studies the capacity of the multi-antenna or multiple-input multiple-output (MIMO) secrecy channels with multiple eavesdroppers having single/multiple antennas. It is known that the MIMO secrecy capacity is achievable with the optimal transmit covariance matrix that maximizes the minimum difference between the channel mutual information of the secrecy user and those of the eavesdroppers. The MIMO secrecy capacity computation can thus be formulated as a non-convex max-min problem, which cannot be solved efficiently by standard convex optimization techniques. To handle this difficulty, we explore a relationship between the MIMO secrecy channel and the recently developed MIMO cognitive radio (CR) channel, in which the multi-antenna secondary user transmits over the same spectrum simultaneously with multiple primary users, subject to the received interference power constraints at the primary users, or the so-called ``interference temperature (IT)'' constraints. By constructing an auxiliary CR MIMO channel that has the same channel responses as the MIMO secrecy channel, we prove that the optimal transmit covariance matrix to achieve the secrecy capacity is the same as that to achieve the CR spectrum sharing capacity with properly selected IT constraints. Based on this relationship, several algorithms are proposed to solve the non-convex secrecy capacity computation problem by transforming it into a sequence of CR spectrum sharing capacity computation problems that are convex. 
For the case with single-antenna eavesdroppers, the proposed algorithms obtain the exact capacity of the MIMO secrecy channel, while for the case with multi-antenna eavesdroppers, the proposed algorithms obtain both upper and lower bounds on the MIMO secrecy capacity.<|reference_end|> | arxiv | @article{zhang2009on,
title={On the Relationship Between the Multi-antenna Secrecy Communications and
Cognitive Radio Communications},
author={Lan Zhang, Rui Zhang, Ying-Chang Liang, Yan Xin, Shuguang Cui},
journal={arXiv preprint arXiv:0901.4830},
year={2009},
archivePrefix={arXiv},
eprint={0901.4830},
primaryClass={cs.IT math.IT}
} | zhang2009on |
arxiv-6237 | 0901.4835 | A Mathematical Basis for the Chaining of Lossy Interface Adapters | <|reference_start|>A Mathematical Basis for the Chaining of Lossy Interface Adapters: Despite providing similar functionality, multiple network services may require the use of different interfaces to access the functionality, and this problem will only get worse with the widespread deployment of ubiquitous computing environments. One way around this problem is to use interface adapters that adapt one interface into another. Chaining these adapters allows flexible interface adaptation with fewer adapters, but the loss incurred due to imperfect interface adaptation must be considered. This paper outlines a mathematical basis for analyzing the chaining of lossy interface adapters. We also show that the problem of finding an optimal interface adapter chain is NP-complete.<|reference_end|> | arxiv | @article{chung2009a,
title={A Mathematical Basis for the Chaining of Lossy Interface Adapters},
author={Yoo Chung, Dongman Lee},
journal={IET Software, 4(1):54-54, February 2010},
year={2009},
doi={10.1049/iet-sen.2009.0019},
archivePrefix={arXiv},
eprint={0901.4835},
primaryClass={cs.DM cs.DC cs.SE}
} | chung2009a |
arxiv-6238 | 0901.4846 | Adaptive algorithms for identifying large flows in IP traffic | <|reference_start|>Adaptive algorithms for identifying large flows in IP traffic: We propose in this paper an on-line algorithm based on Bloom filters for identifying large flows in IP traffic (a.k.a. elephants). Because of the large number of small flows, hash tables of these algorithms have to be regularly refreshed. Recognizing that the periodic erasure scheme usually used in the technical literature turns out to be quite inefficient when using real traffic traces over a long period of time, we introduce a simple adaptive scheme that closely follows the variations of traffic. When tested against real traffic traces, the proposed on-line algorithm performs well in the sense that the detection ratio of long flows by the algorithm over a long time period is quite high. Beyond the identification of elephants, this same class of algorithms is applied to the closely related problem of detection of anomalies in IP traffic, e.g., SYN flood due for instance to attacks. An algorithm for detecting SYN and volume flood anomalies in Internet traffic is designed. Experiments show that an anomaly is detected in less than one minute and the targeted destinations are identified at the same time.<|reference_end|> | arxiv | @article{azzana2009adaptive,
title={Adaptive algorithms for identifying large flows in IP traffic},
author={Youssef Azzana (EI), Yousra Chabchoub (INRIA), Christine Fricker,
Fabrice Guillemin (FT R&D), Philippe Robert},
journal={arXiv preprint arXiv:0901.4846},
year={2009},
archivePrefix={arXiv},
eprint={0901.4846},
primaryClass={cs.NI}
} | azzana2009adaptive |
arxiv-6239 | 0901.4876 | Non-Confluent NLC Graph Grammar Inference by Compressing Disjoint Subgraphs | <|reference_start|>Non-Confluent NLC Graph Grammar Inference by Compressing Disjoint Subgraphs: Grammar inference deals with determining (preferable simple) models/grammars consistent with a set of observations. There is a large body of research on grammar inference within the theory of formal languages. However, there is surprisingly little known on grammar inference for graph grammars. In this paper we take a further step in this direction and work within the framework of node label controlled (NLC) graph grammars. Specifically, we characterize, given a set of disjoint and isomorphic subgraphs of a graph $G$, whether or not there is a NLC graph grammar rule which can generate these subgraphs to obtain $G$. This generalizes previous results by assuming that the set of isomorphic subgraphs is disjoint instead of non-touching. This leads naturally to consider the more involved ``non-confluent'' graph grammar rules.<|reference_end|> | arxiv | @article{blockeel2009non-confluent,
title={Non-Confluent NLC Graph Grammar Inference by Compressing Disjoint
Subgraphs},
author={Hendrik Blockeel, Robert Brijder},
journal={arXiv preprint arXiv:0901.4876},
year={2009},
archivePrefix={arXiv},
eprint={0901.4876},
primaryClass={cs.LG cs.DM}
} | blockeel2009non-confluent |
arxiv-6240 | 0901.4898 | Effective Delay Control in Online Network Coding | <|reference_start|>Effective Delay Control in Online Network Coding: Motivated by streaming applications with stringent delay constraints, we consider the design of online network coding algorithms with timely delivery guarantees. Assuming that the sender is providing the same data to multiple receivers over independent packet erasure channels, we focus on the case of perfect feedback and heterogeneous erasure probabilities. Based on a general analytical framework for evaluating the decoding delay, we show that existing ARQ schemes fail to ensure that receivers with weak channels are able to recover from packet losses within reasonable time. To overcome this problem, we re-define the encoding rules in order to break the chains of linear combinations that cannot be decoded after one of the packets is lost. Our results show that sending uncoded packets at key times ensures that all the receivers are able to meet specific delay requirements with very high probability.<|reference_end|> | arxiv | @article{barros2009effective,
title={Effective Delay Control in Online Network Coding},
author={Joao Barros, Rui A. Costa, Daniele Munaretto, Joerg Widmer},
journal={arXiv preprint arXiv:0901.4898},
year={2009},
doi={10.1109/INFCOM.2009.5061923},
archivePrefix={arXiv},
eprint={0901.4898},
primaryClass={cs.IT math.IT}
} | barros2009effective |
arxiv-6241 | 0901.4904 | Finite-size effects in the dependency networks of free and open-source software | <|reference_start|>Finite-size effects in the dependency networks of free and open-source software: We propose a continuum model for the degree distribution of directed networks in free and open-source software. The degree distributions of links in both the in-directed and out-directed dependency networks follow Zipf's law for the intermediate nodes, but the heavily linked nodes and the poorly linked nodes deviate from this trend and exhibit finite-size effects. The finite-size parameters make a quantitative distinction between the in-directed and out-directed networks. For the out-degree distribution, the initial condition for a dynamic evolution corresponds to the limiting count of the most heavily linked nodes that the out-directed network can finally have. The number of nodes contributing out-directed links grows with every generation of software release, but this growth ultimately saturates towards a terminal value due to the finiteness of semantic possibilities in the network.<|reference_end|> | arxiv | @article{nair2009finite-size,
title={Finite-size effects in the dependency networks of free and open-source
software},
author={Rajiv Nair, G. Nagarjuna, Arnab K. Ray},
journal={Complex Systems, 23, 71, 2014},
year={2009},
archivePrefix={arXiv},
eprint={0901.4904},
primaryClass={cs.OH physics.soc-ph}
} | nair2009finite-size |
arxiv-6242 | 0901.4934 | A historical perspective on developing foundations iInfo(TM) information systems: iConsult(TM) and iEntertain(TM) apps using iDescribers(TM) information integration for iOrgs(TM) information systems | <|reference_start|>A historical perspective on developing foundations iInfo(TM) information systems: iConsult(TM) and iEntertain(TM) apps using iDescribers(TM) information integration for iOrgs(TM) information systems: Technology now at hand can integrate all kinds of digital information for individuals, groups, and organizations so their information usefully links together. iInfo(TM) information integration works by making connections including examples like the following: - A statistical connection between "being in a traffic jam" and "driving in downtown Trenton between 5PM and 6PM on a weekday." - A terminological connection between "MSR" and "Microsoft Research." - A causal connection between "joining a group" and "being a member of the group." - A syntactic connection between "a pin dropped" and "a dropped pin." - A biological connection between "a dolphin" and "a mammal". - A demographic connection between "undocumented residents of California" and "7% of the population of California." - A geographical connection between "Leeds" and "England." - A temporal connection between "turning on a computer" and "joining an on-line discussion." By making these connections, iInfo offers tremendous value for individuals, families, groups, and organizations in making more effective use of information technology. In practice, integrated information is invariably pervasively inconsistent. Therefore iInfo must be able to make connections even in the face of inconsistency. The business of iInfo is not to make difficult decisions like deciding the ultimate truth or probability of propositions. 
Instead it provides means for processing information and carefully recording its provenance including arguments (including arguments about arguments) for and against propositions that is used by iConsult(TM) and iEntertain(TM) apps in iOrgs(TM) Information Systems. A historical perspective on the above questions is highly pertinent to the current quest to develop foundations for privacy-friendly client-cloud computing.<|reference_end|> | arxiv | @article{hewitt2009a,
title={A historical perspective on developing foundations iInfo(TM) information
systems: iConsult(TM) and iEntertain(TM) apps using iDescribers(TM)
information integration for iOrgs(TM) information systems},
author={Carl Hewitt},
journal={arXiv preprint arXiv:0901.4934},
year={2009},
archivePrefix={arXiv},
eprint={0901.4934},
primaryClass={cs.DC cs.DB cs.LO}
} | hewitt2009a |
arxiv-6243 | 0901.4953 | A Keygraph Classification Framework for Real-Time Object Detection | <|reference_start|>A Keygraph Classification Framework for Real-Time Object Detection: In this paper, we propose a new approach for keypoint-based object detection. Traditional keypoint-based methods consist in classifying individual points and using pose estimation to discard misclassifications. Since a single point carries no relational features, such methods inherently restrict the usage of structural information to the pose estimation phase. Therefore, the classifier considers purely appearance-based feature vectors, thus requiring computationally expensive feature extraction or complex probabilistic modelling to achieve satisfactory robustness. In contrast, our approach consists in classifying graphs of keypoints, which incorporates structural information during the classification phase and allows the extraction of simpler feature vectors that are naturally robust. In the present work, 3-vertex graphs have been considered, though the methodology is general and larger-order graphs may be adopted. Successful experimental results obtained for real-time object detection in video sequences are reported.<|reference_end|> | arxiv | @article{hashimoto2009a,
title={A Keygraph Classification Framework for Real-Time Object Detection},
author={Marcelo Hashimoto and Roberto M. Cesar Jr},
journal={arXiv preprint arXiv:0901.4953},
year={2009},
archivePrefix={arXiv},
eprint={0901.4953},
primaryClass={cs.CV}
} | hashimoto2009a |
arxiv-6244 | 0901.4963 | How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent | <|reference_start|>How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent: In this paper we propose the CTS (Conscious Tutoring System) technology, a biologically plausible cognitive agent based on human brain functions. This agent is capable of learning and remembering events and any related information such as corresponding procedures, stimuli and their emotional valences. Our proposed episodic memory and episodic learning mechanism are closer to the current multiple-trace theory in neuroscience, because they are inspired by it [5], contrary to other mechanisms that are incorporated in cognitive agents. This is because in our model emotions play a role in the encoding and remembering of events. This allows the agent to improve its behavior by remembering previously selected behaviors, which are influenced by its emotional mechanism. Moreover, the architecture incorporates a realistic memory consolidation process based on a data mining algorithm.<|reference_end|> | arxiv | @article{faghihi2009how,
title={How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent},
author={Usef Faghihi, Philippe Fournier-Viger, Roger Nkambou, Pierre Poirier,
Andre Mayers},
journal={arXiv preprint arXiv:0901.4963},
year={2009},
archivePrefix={arXiv},
eprint={0901.4963},
primaryClass={cs.AI}
} | faghihi2009how |
arxiv-6245 | 0902.0019 | CPAchecker: A Tool for Configurable Software Verification | <|reference_start|>CPAchecker: A Tool for Configurable Software Verification: Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker BLAST. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.<|reference_end|> | arxiv | @article{beyer2009cpachecker:,
title={CPAchecker: A Tool for Configurable Software Verification},
author={Dirk Beyer, M. Erkan Keremoglu},
journal={arXiv preprint arXiv:0902.0019},
year={2009},
number={SFU-CS-2009-02},
archivePrefix={arXiv},
eprint={0902.0019},
primaryClass={cs.PL cs.SE}
} | beyer2009cpachecker: |
arxiv-6246 | 0902.0026 | Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals | <|reference_start|>Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals: Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.<|reference_end|> | arxiv | @article{tropp2009beyond,
title={Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals},
author={Joel A. Tropp, Jason N. Laska, Marco F. Duarte, Justin K. Romberg,
Richard G. Baraniuk},
journal={IEEE Trans. Inform. Theory, vol. 56, no. 1, pp. 520-544, Jan. 2010},
year={2009},
doi={10.1109/TIT.2009.2034811},
archivePrefix={arXiv},
eprint={0902.0026},
primaryClass={cs.IT math.IT}
} | tropp2009beyond |
arxiv-6247 | 0902.0043 | Cut-Simulation and Impredicativity | <|reference_start|>Cut-Simulation and Impredicativity: We investigate cut-elimination and cut-simulation in impredicative (higher-order) logics. We illustrate that adding simple axioms such as Leibniz equations to a calculus for an impredicative logic -- in our case a sequent calculus for classical type theory -- is like adding cut. The phenomenon equally applies to prominent axioms like Boolean- and functional extensionality, induction, choice, and description. This calls for the development of calculi where these principles are built-in instead of being treated axiomatically.<|reference_end|> | arxiv | @article{benzmueller2009cut-simulation,
title={Cut-Simulation and Impredicativity},
author={Christoph Benzmueller, Chad E. Brown, Michael Kohlhase},
journal={Logical Methods in Computer Science, Volume 5, Issue 1 (March 3,
2009) lmcs:1144},
year={2009},
doi={10.2168/LMCS-5(1:6)2009},
archivePrefix={arXiv},
eprint={0902.0043},
primaryClass={cs.LO cs.AI}
} | benzmueller2009cut-simulation |
arxiv-6248 | 0902.0047 | Bounds on the Size of Small Depth Circuits for Approximating Majority | <|reference_start|>Bounds on the Size of Small Depth Circuits for Approximating Majority: In this paper, we show that for every constant $0 < \epsilon < 1/2$ and for every constant $d \geq 2$, the minimum size of a depth $d$ Boolean circuit that $\epsilon$-approximates Majority function on $n$ variables is exp$(\Theta(n^{1/(2d-2)}))$. The lower bound for every $d \geq 2$ and the upper bound for $d=2$ have been previously shown by O'Donnell and Wimmer [ICALP'07], and the contribution of this paper is to give a matching upper bound for $d \geq 3$.<|reference_end|> | arxiv | @article{amano2009bounds,
title={Bounds on the Size of Small Depth Circuits for Approximating Majority},
author={Kazuyuki Amano},
journal={arXiv preprint arXiv:0902.0047},
year={2009},
archivePrefix={arXiv},
eprint={0902.0047},
primaryClass={cs.CC}
} | amano2009bounds |
arxiv-6249 | 0902.0056 | An Alternative Cracking of The Genetic Code | <|reference_start|>An Alternative Cracking of The Genetic Code: We propose 22 unique solutions to the genetic code: an alternative cracking, from the perspective of a mathematician.<|reference_end|> | arxiv | @article{okunoye2009an,
title={An Alternative Cracking of The Genetic Code},
author={O. Babatunde Okunoye},
journal={arXiv preprint arXiv:0902.0056},
year={2009},
archivePrefix={arXiv},
eprint={0902.0056},
primaryClass={cs.OH}
} | okunoye2009an |
arxiv-6250 | 0902.0058 | The second weight of generalized Reed-Muller codes in most cases | <|reference_start|>The second weight of generalized Reed-Muller codes in most cases: The second weight of the Generalized Reed-Muller code of order $d$ over the finite field with $q$ elements is now known for $d < q$ and $d > (n-1)(q-1)$. In this paper, we determine the second weight for the other values of $d$ that are not a multiple of $q-1$ plus 1. For the special case $d = a(q-1)+1$ we give an estimate.<|reference_end|> | arxiv | @article{rolland2009the,
title={The second weight of generalized Reed-Muller codes in most cases},
author={Robert Rolland},
journal={arXiv preprint arXiv:0902.0058},
year={2009},
archivePrefix={arXiv},
eprint={0902.0058},
primaryClass={cs.IT math.IT}
} | rolland2009the |
arxiv-6251 | 0902.0084 | On a problem of Frobenius in three numbers | <|reference_start|>For three positive integers ai, aj, ak pairwise coprime, we present an algorithm that finds the least multiple of ai that is a positive linear combination of aj, ak. The average running time of this algorithm is O(1). Using this algorithm and the Chinese remainder theorem leads to a direct computation of the Frobenius number f(a1, a2, a3).<|reference_end|> | arxiv | @article{miled2009on,
title={On a problem of Frobenius in three numbers},
author={Abdelwaheb Miled},
journal={arXiv preprint arXiv:0902.0084},
year={2009},
archivePrefix={arXiv},
eprint={0902.0084},
primaryClass={cs.DM}
} | miled2009on |
arxiv-6252 | 0902.0101 | The Complexity of Nash Equilibria in Simple Stochastic Multiplayer Games | <|reference_start|>We analyse the computational complexity of finding Nash equilibria in simple stochastic multiplayer games. We show that restricting the search space to equilibria whose payoffs fall into a certain interval may lead to undecidability. In particular, we prove that the following problem is undecidable: Given a game G, does there exist a pure-strategy Nash equilibrium of G where player 0 wins with probability 1? Moreover, this problem remains undecidable if it is restricted to strategies with (unbounded) finite memory. However, if mixed strategies are allowed, decidability remains an open problem. One way to obtain a provably decidable variant of the problem is restricting the strategies to be positional or stationary. For the complexity of these two problems, we obtain a common lower bound of NP and upper bounds of NP and PSPACE respectively.<|reference_end|> | arxiv | @article{ummels2009the,
title={The Complexity of Nash Equilibria in Simple Stochastic Multiplayer Games},
author={Michael Ummels and Dominik Wojtczak},
journal={arXiv preprint arXiv:0902.0101},
year={2009},
doi={10.1007/978-3-642-02930-1_25},
number={EDI-INF-RR-1323},
archivePrefix={arXiv},
eprint={0902.0101},
primaryClass={cs.GT cs.CC cs.LO}
} | ummels2009the |
arxiv-6253 | 0902.0133 | New Algorithms and Lower Bounds for Sequential-Access Data Compression | <|reference_start|>This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.<|reference_end|> | arxiv | @article{gagie2009new,
title={New Algorithms and Lower Bounds for Sequential-Access Data Compression},
author={Travis Gagie},
journal={arXiv preprint arXiv:0902.0133},
year={2009},
archivePrefix={arXiv},
eprint={0902.0133},
primaryClass={cs.IT math.IT}
} | gagie2009new |
arxiv-6254 | 0902.0140 | Graph Sparsification in the Semi-streaming Model | <|reference_start|>Analyzing massive data sets has been one of the key motivations for studying streaming algorithms. In recent years, there has been significant progress in analysing distributions in a streaming setting, but the progress on graph problems has been limited. A main reason for this has been the existence of linear space lower bounds for even simple problems such as determining the connectedness of a graph. However, in many new scenarios that arise from social and other interaction networks, the number of vertices is significantly less than the number of edges. This has led to the formulation of the semi-streaming model where we assume that the space is (near) linear in the number of vertices (but not necessarily the edges), and the edges appear in an arbitrary (and possibly adversarial) order. In this paper we focus on graph sparsification, which is one of the major building blocks in a variety of graph algorithms. There has been a long history of (non-streaming) sampling algorithms that provide sparse graph approximations, and it is a natural question to ask whether the sparsification can be achieved using a small space and, in addition, a single pass over the data. The question is interesting from the standpoint of both theory and practice and we answer the question in the affirmative, by providing a one pass $\tilde{O}(n/\epsilon^{2})$ space algorithm that produces a sparsification that approximates each cut to a $(1+\epsilon)$ factor. We also show that $\Omega(n \log \frac1\epsilon)$ space is necessary for a one pass streaming algorithm to approximate the min-cut, improving upon the $\Omega(n)$ lower bound that arises from lower bounds for testing connectivity.<|reference_end|> | arxiv | @article{ahn2009graph,
title={Graph Sparsification in the Semi-streaming Model},
author={Kook Jin Ahn, Sudipto Guha},
journal={arXiv preprint arXiv:0902.0140},
year={2009},
archivePrefix={arXiv},
eprint={0902.0140},
primaryClass={cs.DS}
} | ahn2009graph |
arxiv-6255 | 0902.0189 | The Ergodic Capacity of The MIMO Wire-Tap Channel | <|reference_start|>The Ergodic Capacity of The MIMO Wire-Tap Channel: This paper has been withdrawn to provide a more rigorous proof of the converse of Theorem 1 and Lemma 1 as well.<|reference_end|> | arxiv | @article{rezki2009the,
title={The Ergodic Capacity of The MIMO Wire-Tap Channel},
author={Zouheir Rezki, Francois Gagnon, Vijay Bhargava},
journal={arXiv preprint arXiv:0902.0189},
year={2009},
archivePrefix={arXiv},
eprint={0902.0189},
primaryClass={cs.IT math.IT}
} | rezki2009the |
arxiv-6256 | 0902.0221 | Over-enhancement Reduction in Local Histogram Equalization using its Degrees of Freedom | <|reference_start|>A well-known issue of local (adaptive) histogram equalization (LHE) is over-enhancement (i.e., generation of spurious details) in homogeneous areas of the image. In this paper, we show that the LHE problem has many solutions due to the ambiguity in ranking pixels with the same intensity. The LHE solution space can be searched for the images having the maximum PSNR or structural similarity (SSIM) with the input image. As compared to the results of the prior art, these solutions are more similar to the input image while offering the same local contrast. Index Terms: histogram modification or specification, contrast enhancement<|reference_end|> | arxiv | @article{avanaki2009over-enhancement,
title={Over-enhancement Reduction in Local Histogram Equalization using its
Degrees of Freedom},
author={Alireza Avanaki},
journal={arXiv preprint arXiv:0902.0221},
year={2009},
archivePrefix={arXiv},
eprint={0902.0221},
primaryClass={cs.CV cs.MM}
} | avanaki2009over-enhancement |
arxiv-6257 | 0902.0239 | The acoustic wave equation in the expanding universe Sachs-Wolfe theorem | <|reference_start|>In this paper the acoustic field propagating in the early hot ($p=\epsilon/3$) universe of arbitrary space curvature ($K=0, \pm 1$) is considered. The field equations are reduced to the d'Alembert equation in an auxiliary static Robertson-Walker space-time. Symbolic computation in {\em Mathematica} is applied.<|reference_end|> | arxiv | @article{czaja2009the,
title={The acoustic wave equation in the expanding universe. Sachs-Wolfe
theorem},
author={Wojciech Czaja, Zdzislaw A. Golda, Andrzej Woszczyna},
journal={arXiv preprint arXiv:0902.0239},
year={2009},
archivePrefix={arXiv},
eprint={0902.0239},
primaryClass={cs.SC gr-qc physics.comp-ph}
} | czaja2009the |
arxiv-6258 | 0902.0241 | Hierarchical Triple-Modular Redundancy (H-TMR) Network For Digital Systems | <|reference_start|>Hierarchical application of Triple-Modular Redundancy (TMR) increases the fault tolerance of digital integrated circuits (ICs). In this paper, a simple probabilistic model is proposed for the analysis of the fault-masking performance of hierarchical TMR networks. Performance improvements obtained by the second-order TMR network are theoretically compared with those of the first-order TMR network.<|reference_end|> | arxiv | @article{alagoz2009hierarchical,
title={Hierarchical Triple-Modular Redundancy (H-TMR) Network For Digital
Systems},
author={B. Baykant Alagoz},
journal={OncuBilim Algorithm And Systems Labs. Vol.08, Art.No:05,(2008)},
year={2009},
archivePrefix={arXiv},
eprint={0902.0241},
primaryClass={cs.OH}
} | alagoz2009hierarchical |
arxiv-6259 | 0902.0261 | Immunity and Pseudorandomness of Context-Free Languages | <|reference_start|>Immunity and Pseudorandomness of Context-Free Languages: We discuss the computational complexity of context-free languages, concentrating on two well-known structural properties---immunity and pseudorandomness. An infinite language is REG-immune (resp., CFL-immune) if it contains no infinite subset that is a regular (resp., context-free) language. We prove that (i) there is a context-free REG-immune language outside REG/n and (ii) there is a REG-bi-immune language that can be computed deterministically using logarithmic space. We also show that (iii) there is a CFL-simple set, where a CFL-simple language is an infinite context-free language whose complement is CFL-immune. Similar to the REG-immunity, a REG-primeimmune language has no polynomially dense subsets that are also regular. We further prove that (iv) there is a context-free language that is REG/n-bi-primeimmune. Concerning pseudorandomness of context-free languages, we show that (v) CFL contains REG/n-pseudorandom languages. Finally, we prove that (vi) against REG/n, there exists an almost 1-1 pseudorandom generator computable in nondeterministic pushdown automata equipped with a write-only output tape and (vii) against REG, there is no almost 1-1 weakly pseudorandom generator computable deterministically in linear time by a single-tape Turing machine.<|reference_end|> | arxiv | @article{yamakami2009immunity,
title={Immunity and Pseudorandomness of Context-Free Languages},
author={Tomoyuki Yamakami},
journal={Theoretical Computer Science, vol. 412, pp.6432-6450, 2011},
year={2009},
doi={10.1016/j.tcs.2011.07.013},
archivePrefix={arXiv},
eprint={0902.0261},
primaryClass={cs.CC cs.FL}
} | yamakami2009immunity |
arxiv-6260 | 0902.0271 | Asymmetric numeral systems | <|reference_start|>This paper presents a new approach to entropy coding: a family of generalizations of standard numeral systems, which are optimal for encoding sequences of equiprobable symbols, into asymmetric numeral systems - optimal for freely chosen probability distributions of symbols. It has some similarities to range coding, but instead of encoding a symbol by choosing a range, we spread these ranges uniformly over the whole interval. This leads to a simpler encoder - instead of using two states to define a range, we need only one. This approach is very universal - we can obtain anything from extremely precise encoding (ABS) to extremely fast encoding with the possibility to additionally encrypt the data (ANS). This encryption uses the key to initialize a random number generator, which is used to calculate the coding tables. Such preinitialized encryption has an additional advantage: it is resistant to brute-force attack - to check a key, the whole initialization has to be performed. We also present an application of the new approach to error correction: after an error, in each step we have a chosen probability of observing that something went wrong. In this way we can get near Shannon's limit for any noise level, with expected linear correction time.<|reference_end|> | arxiv | @article{duda2009asymmetric,
title={Asymmetric numeral systems},
author={Jarek Duda},
journal={arXiv preprint arXiv:0902.0271},
year={2009},
archivePrefix={arXiv},
eprint={0902.0271},
primaryClass={cs.IT cs.CR math.GM math.IT}
} | duda2009asymmetric |
arxiv-6261 | 0902.0320 | Planar Graphical Models which are Easy | <|reference_start|>Planar Graphical Models which are Easy: We describe a rich family of binary variables statistical mechanics models on a given planar graph which are equivalent to Gaussian Grassmann Graphical models (free fermions) defined on the same graph. Calculation of the partition function (weighted counting) for such a model is easy (of polynomial complexity) as reducible to evaluation of a Pfaffian of a matrix of size equal to twice the number of edges in the graph. In particular, this approach touches upon Holographic Algorithms of Valiant and utilizes the Gauge Transformations discussed in our previous works.<|reference_end|> | arxiv | @article{chernyak2009planar,
title={Planar Graphical Models which are Easy},
author={Vladimir Y. Chernyak (Wayne State) and Michael Chertkov (LANL)},
journal={arXiv preprint arXiv:0902.0320},
year={2009},
doi={10.1088/1742-5468/2010/11/P11007},
number={LA-UR 09-00533},
archivePrefix={arXiv},
eprint={0902.0320},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT math-ph math.IT math.MP}
} | chernyak2009planar |
arxiv-6262 | 0902.0322 | Malware Detection using Attribute-Automata to parse Abstract Behavioral Descriptions | <|reference_start|>Most behavioral detectors of malware remain specific to a given language and platform, mostly PE executables for Windows. The objective of this paper is to define a generic approach for behavioral detection based on two layers respectively responsible for abstraction and detection. The first abstraction layer remains specific to a platform and a language. This first layer interprets the collected instructions, API calls and arguments and classifies these operations as well as the involved objects according to their purpose in the malware lifecycle. The second detection layer remains generic and is totally interoperable between the different abstraction components. This layer relies on parallel automata parsing attribute-grammars where semantic rules are used for object typing (object classification) and object binding (data-flow). To feed detection and to experiment with our approach we have developed two different abstraction components: one processing system call traces from native code and one processing the VBScript interpreted language. The different experiments have provided promising detection rates, in particular for script files (89%), with almost no false positives. In the case of process traces, the detection rate remains significant (51%) but could be increased by more sophisticated collection tools.<|reference_end|> | arxiv | @article{jacob2009malware,
title={Malware Detection using Attribute-Automata to parse Abstract Behavioral
Descriptions},
author={Gregoire Jacob, Herve Debar and Eric Filiol},
journal={arXiv preprint arXiv:0902.0322},
year={2009},
archivePrefix={arXiv},
eprint={0902.0322},
primaryClass={cs.CR}
} | jacob2009malware |
arxiv-6263 | 0902.0337 | Stability and Delay of Zero-Forcing SDMA with Limited Feedback | <|reference_start|>Stability and Delay of Zero-Forcing SDMA with Limited Feedback: This paper addresses the stability and queueing delay of Space Division Multiple Access (SDMA) systems with bursty traffic, where zero-forcing beamforming enables simultaneous transmission to multiple mobiles. Computing beamforming vectors relies on quantized channel state information (CSI) feedback (limited feedback) from mobiles. Define the stability region for SDMA as the set of multiuser packet-arrival rates for which the steady-state queue lengths are finite. Given perfect CSI feedback and equal power allocation over scheduled queues, the stability region is proved to be a convex polytope having the derived vertices. For any set of arrival rates in the stability region, multiuser queues are shown to be stabilized by a joint queue-and-beamforming control policy that maximizes the departure-rate-weighted sum of queue lengths. The stability region for limited feedback is found to be the perfect-CSI region multiplied by one minus a small factor. The required number of feedback bits per mobile is proved to scale logarithmically with the inverse of the above factor as well as linearly with the number of transmit antennas minus one. The effects of limited feedback on queueing delay are also quantified. For Poisson arrival processes, CSI quantization errors are shown to multiply average queueing delay by a factor larger than one. This factor can be controlled by adjusting the number of feedback bits per mobile following the derived relationship. For general arrival processes, CSI errors are found to increase Kingman's bound on the tail probability of the instantaneous delay by one plus a small factor. The required number of feedback bits per mobile is shown to scale logarithmically with this factor.<|reference_end|> | arxiv | @article{huang2009stability,
title={Stability and Delay of Zero-Forcing SDMA with Limited Feedback},
author={Kaibin Huang and Vincent K. N. Lau},
journal={arXiv preprint arXiv:0902.0337},
year={2009},
archivePrefix={arXiv},
eprint={0902.0337},
primaryClass={cs.IT math.IT}
} | huang2009stability |
arxiv-6264 | 0902.0353 | Non-monotone submodular maximization under matroid and knapsack constraints | <|reference_start|>Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. For the problem of maximizing a non-monotone submodular function, Feige, Mirrokni, and Vondr\'ak recently developed a $2\over 5$-approximation algorithm \cite{FMV07}; however, their algorithms do not handle side constraints. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for {\em non-monotone} submodular functions. In particular, for any constant $k$, we present a $({1\over k+2+{1\over k}+\epsilon})$-approximation for the submodular maximization problem under $k$ matroid constraints, and a $({1\over 5}-\epsilon)$-approximation algorithm for this problem subject to $k$ knapsack constraints ($\epsilon>0$ is any constant). We improve the approximation guarantee of our algorithm to ${1\over k+1+{1\over k-1}+\epsilon}$ for $k\ge 2$ partition matroid constraints. This idea also gives a $({1\over k+\epsilon})$-approximation for maximizing a {\em monotone} submodular function subject to $k\ge 2$ partition matroids, which improves over the previously best known guarantee of $\frac{1}{k+1}$.<|reference_end|> | arxiv | @article{lee2009non-monotone,
title={Non-monotone submodular maximization under matroid and knapsack
constraints},
author={Jon Lee, Vahab Mirrokni, Viswanath Nagarajan, Maxim Sviridenko},
journal={arXiv preprint arXiv:0902.0353},
year={2009},
archivePrefix={arXiv},
eprint={0902.0353},
primaryClass={cs.CC cs.DS}
} | lee2009non-monotone |
arxiv-6265 | 0902.0354 | Optimum Power and Rate Allocation for Coded V-BLAST | <|reference_start|>Optimum Power and Rate Allocation for Coded V-BLAST: An analytical framework for minimizing the outage probability of a coded spatial multiplexing system while keeping the rate close to the capacity is developed. Based on this framework, specific strategies of optimum power and rate allocation for the coded V-BLAST architecture are obtained and its performance is analyzed. A fractional waterfilling algorithm, which is shown to optimize both the capacity and the outage probability of the coded V-BLAST, is proposed. Compact, closed-form expressions for the optimum allocation of the average power are given. The uniform allocation of average power is shown to be near optimum at moderate to high SNR for the coded V-BLAST with the average rate allocation (when per-stream rates are set to match the per-stream capacity). The results reported also apply to multiuser detection and channel equalization relying on successive interference cancelation.<|reference_end|> | arxiv | @article{kostina2009optimum,
title={Optimum Power and Rate Allocation for Coded V-BLAST},
author={Victoria Kostina, Sergey Loyka},
journal={arXiv preprint arXiv:0902.0354},
year={2009},
archivePrefix={arXiv},
eprint={0902.0354},
primaryClass={cs.IT math.IT}
} | kostina2009optimum |
arxiv-6266 | 0902.0382 | On the complexity of Nash dynamics and Sink Equilibria | <|reference_start|>Studying Nash dynamics is an important approach for analyzing the outcome of games with repeated selfish behavior of self-interested agents. Sink equilibria have been introduced by Goemans, Mirrokni, and Vetta for studying social cost on Nash dynamics over pure strategies in games. However, they do not address the complexity of sink equilibria in these games. Recently, Fabrikant and Papadimitriou initiated the study of the complexity of Nash dynamics in two classes of games. In order to completely understand the complexity of Nash dynamics in a variety of games, we study the following three questions for various games: (i) given a state in a game, can we verify if this state is in a sink equilibrium or not? (ii) given an instance of a game, can we verify if there exists any sink equilibrium other than pure Nash equilibria? and (iii) given an instance of a game, can we verify if there exists a pure Nash equilibrium (i.e., a sink equilibrium with one state)? In this paper, we almost answer all of the above questions for a variety of classes of games with succinct representation, including anonymous games, player-specific and weighted congestion games, valid-utility games, and two-sided market games. In particular, for most of these problems, we show that (i) it is PSPACE-complete to verify if a given state is in a sink equilibrium, (ii) it is NP-hard to verify if there exists a pure Nash equilibrium in the game or not, (iii) it is PSPACE-complete to verify if there exists any sink equilibrium other than pure Nash equilibria. To solve these problems, we illustrate general techniques that could be used to answer similar questions in other classes of games.<|reference_end|> | arxiv | @article{mirrokni2009on,
title={On the complexity of Nash dynamics and Sink Equilibria},
author={Vahab Mirrokni and Alexander Skopalik},
journal={arXiv preprint arXiv:0902.0382},
year={2009},
archivePrefix={arXiv},
eprint={0902.0382},
primaryClass={cs.GT cs.CC}
} | mirrokni2009on |
arxiv-6267 | 0902.0392 | Tree Exploration for Bayesian RL Exploration | <|reference_start|>Research in reinforcement learning has produced algorithms for optimal decision making under uncertainty that fall within two main types. The first employs a Bayesian framework, where optimality improves with increased computational time. This is because the resulting planning task takes the form of a dynamic programming problem on a belief tree with an infinite number of states. The second type employs relatively simple algorithms which are shown to suffer small regret within a distribution-free framework. This paper presents a lower bound and a high probability upper bound on the optimal value function for the nodes in the Bayesian belief tree, which are analogous to similar bounds in POMDPs. The bounds are then used to create more efficient strategies for exploring the tree. The resulting algorithms are compared with the distribution-free algorithm UCB1, as well as a simpler baseline algorithm, on multi-armed bandit problems.<|reference_end|> | arxiv | @article{dimitrakakis2009tree,
title={Tree Exploration for Bayesian RL Exploration},
author={Christos Dimitrakakis},
journal={arXiv preprint arXiv:0902.0392},
year={2009},
number={IAS-08-04},
archivePrefix={arXiv},
eprint={0902.0392},
primaryClass={stat.ML cs.LG}
} | dimitrakakis2009tree |
arxiv-6268 | 0902.0417 | Decoding Network Codes by Message Passing | <|reference_start|>Decoding Network Codes by Message Passing: In this paper, we show how to construct a factor graph from a network code. This provides a systematic framework for decoding using message passing algorithms. The proposed message passing decoder exploits knowledge of the underlying communications network topology to simplify decoding. For uniquely decodeable linear network codes on networks with error-free links, only the message supports (rather than the message values themselves) are required to be passed. This proposed simplified support message algorithm is an instance of the sum-product algorithm. Our message-passing framework provides a basis for the design of network codes and control of network topology with a view toward quantifiable complexity reduction in the sink terminals.<|reference_end|> | arxiv | @article{salmond2009decoding,
title={Decoding Network Codes by Message Passing},
author={Daniel Salmond, Alex Grant, Terence Chan and Ian Grivell},
journal={arXiv preprint arXiv:0902.0417},
year={2009},
archivePrefix={arXiv},
eprint={0902.0417},
primaryClass={cs.IT math.IT}
} | salmond2009decoding |
arxiv-6269 | 0902.0458 | On the Applicability of Combinatorial Designs to Key Predistribution for Wireless Sensor Networks | <|reference_start|>On the Applicability of Combinatorial Designs to Key Predistribution for Wireless Sensor Networks: The constraints of lightweight distributed computing environments such as wireless sensor networks lend themselves to the use of symmetric cryptography to provide security services. The lack of central infrastructure after deployment of such networks requires the necessary symmetric keys to be predistributed to participating nodes. The rich mathematical structure of combinatorial designs has resulted in the proposal of several key predistribution schemes for wireless sensor networks based on designs. We review and examine the appropriateness of combinatorial designs as a tool for building key predistribution schemes suitable for such environments.<|reference_end|> | arxiv | @article{martin2009on,
title={On the Applicability of Combinatorial Designs to Key Predistribution for
Wireless Sensor Networks},
author={Keith M. Martin},
journal={arXiv preprint arXiv:0902.0458},
year={2009},
archivePrefix={arXiv},
eprint={0902.0458},
primaryClass={cs.CR cs.DM}
} | martin2009on |
arxiv-6270 | 0902.0465 | AxialGen: A Research Prototype for Automatically Generating the Axial Map | <|reference_start|>AxialGen is a research prototype for automatically generating the axial map, which consists of the least number of the longest visibility lines (or axial lines) for representing individual linearly stretched parts of the open space of an urban environment. Open space is the space between closed spaces such as buildings and street blocks. This paper aims to provide an accessible guide to the software AxialGen and the underlying concepts and ideas. We concentrate on the explanation and illustration of the key concept of bucket: its definition, formation, and how it is used in generating the axial map. Keywords: Bucket, visibility, medial axes, axial lines, isovists, axial map<|reference_end|> | arxiv | @article{jiang2009axialgen:,
title={AxialGen: A Research Prototype for Automatically Generating the Axial
Map},
author={Bin Jiang and Xintao Liu},
journal={Proceedings of CUPUM 2009, the 11th International Conference on
Computers in Urban Planning and Urban Management, Hong Kong, 16-18 June 2009},
year={2009},
archivePrefix={arXiv},
eprint={0902.0465},
primaryClass={cs.RO cs.CG}
} | jiang2009axialgen: |
arxiv-6271 | 0902.0469 | Formalization of malware through process calculi | <|reference_start|>Since the seminal work from F. Cohen in the eighties, abstract virology has seen the emergence of successive viral models, all based on Turing-equivalent formalisms. But considering recent malware such as rootkits or k-ary codes, these viral models only partially cover these evolved threats. The problem is that Turing-equivalent models do not support interactive computations. New models have thus appeared, offering support for these evolved malware, but losing the unified approach in the way. This article provides a basis for a unified malware model founded on process algebras and in particular the Join-Calculus. In terms of expressiveness, the new model supports the fundamental definitions based on self-replication and adds support for interactions, concurrency and non-termination, allowing the definition of more complex behaviors. Evolved malware such as rootkits can now be thoroughly modeled. In terms of detection and prevention, the fundamental results of undecidability and isolation still hold. However, the process-based model has made it possible to establish new results: identification of fragments of the Join-Calculus where malware detection becomes decidable, formal definition of the non-infection property, and approximate solutions to restrict malware propagation.<|reference_end|> | arxiv | @article{jacob2009formalization,
title={Formalization of malware through process calculi},
author={Gregoire Jacob, Eric Filiol and Herve Debar},
journal={arXiv preprint arXiv:0902.0469},
year={2009},
archivePrefix={arXiv},
eprint={0902.0469},
primaryClass={cs.CR}
} | jacob2009formalization |
arxiv-6272 | 0902.0514 | Graphical Reasoning in Compact Closed Categories for Quantum Computation | <|reference_start|>Graphical Reasoning in Compact Closed Categories for Quantum Computation: Compact closed categories provide a foundational formalism for a variety of important domains, including quantum computation. These categories have a natural visualisation as a form of graphs. We present a formalism for equational reasoning about such graphs and develop this into a generic proof system with a fixed logical kernel for equational reasoning about compact closed categories. Automating this reasoning process is motivated by the slow and error prone nature of manual graph manipulation. A salient feature of our system is that it provides a formal and declarative account of derived results that can include `ellipses'-style notation. We illustrate the framework by instantiating it for a graphical language of quantum computation and show how this can be used to perform symbolic computation.<|reference_end|> | arxiv | @article{dixon2009graphical,
title={Graphical Reasoning in Compact Closed Categories for Quantum Computation},
author={Lucas Dixon and Ross Duncan},
journal={arXiv preprint arXiv:0902.0514},
year={2009},
archivePrefix={arXiv},
eprint={0902.0514},
primaryClass={cs.SC cs.AI}
} | dixon2009graphical |
arxiv-6273 | 0902.0524 | An Optimal Multi-Unit Combinatorial Procurement Auction with Single Minded Bidders | <|reference_start|>An Optimal Multi-Unit Combinatorial Procurement Auction with Single Minded Bidders: The current art in optimal combinatorial auctions is limited to handling the case of single units of multiple items, with each bidder bidding on exactly one bundle (single minded bidders). This paper extends the current art by proposing an optimal auction for procuring multiple units of multiple items when the bidders are single minded. The auction minimizes the cost of procurement while satisfying Bayesian incentive compatibility and interim individual rationality. Under appropriate regularity conditions, this optimal auction also satisfies dominant strategy incentive compatibility.<|reference_end|> | arxiv | @article{gujar2009an,
title={An Optimal Multi-Unit Combinatorial Procurement Auction with Single
Minded Bidders},
author={Sujit Gujar and Y Narahari},
journal={arXiv preprint arXiv:0902.0524},
year={2009},
archivePrefix={arXiv},
eprint={0902.0524},
primaryClass={cs.GT}
} | gujar2009an |
arxiv-6274 | 0902.0558 | Analysis of bandwidth measurement methodologies over WLAN systems | <|reference_start|>Analysis of bandwidth measurement methodologies over WLAN systems: WLAN devices have become a fundamental component of today's network deployments. However, even though traditional networking applications run mostly unchanged over wireless links, the actual interaction between these applications and the dynamics of wireless transmissions is not yet fully understood. Important examples of such applications are bandwidth estimation tools. This area has become a mature research topic with well-developed results. Unfortunately, recent studies have shown that the application of these results to WLAN links is not straightforward. The main reason for this is that the assumptions made when developing bandwidth measurement tools no longer hold in the presence of wireless links (e.g. non-FIFO scheduling). This paper builds on these observations and its main goal is to analyze the interaction between probe packets and WLAN transmissions in bandwidth estimation processes. The paper proposes an analytical model that better accounts for the particularities of WLAN links. The model is validated through extensive experimentation and simulation and reveals that (1) the distribution of the delay to transmit probing packets is not the same for the whole probing sequence, which biases the measurement process, and (2) existing tools and techniques point at the achievable throughput rather than the available bandwidth or the capacity, as previously assumed.<|reference_end|> | arxiv | @article{portoles-comeras2009analysis,
title={Analysis of bandwidth measurement methodologies over WLAN systems},
author={Marc Portoles-Comeras and Albert Cabellos-Aparicio and Josep
  Mangues-Bafalluy and Jordi Domingo-Pascual},
journal={arXiv preprint arXiv:0902.0558},
year={2009},
archivePrefix={arXiv},
eprint={0902.0558},
primaryClass={cs.NI cs.PF}
} | portoles-comeras2009analysis |
arxiv-6275 | 0902.0562 | A Unified Perspective on Parity- and Syndrome-Based Binary Data Compression Using Off-the-Shelf Turbo Codecs | <|reference_start|>A Unified Perspective on Parity- and Syndrome-Based Binary Data Compression Using Off-the-Shelf Turbo Codecs: We consider the problem of compressing memoryless binary data with or without side information at the decoder. We review the parity- and the syndrome-based approaches and discuss their theoretical limits, assuming that there exists a virtual binary symmetric channel between the source and the side information, and that the source is not necessarily uniformly distributed. We take a factor-graph-based approach in order to show how to take full advantage of the readily available iterative decoding procedures when turbo codes are employed, in either a parity- or a syndrome-based fashion. We end up obtaining a unified decoder formulation that holds both for error-free and for error-prone encoder-to-decoder transmission over generic channels. To support the theoretical results, the different compression systems analyzed in the paper are also experimentally tested. They are compared against several different approaches proposed in the literature and shown to be competitive in a variety of cases.<|reference_end|> | arxiv | @article{cappellari2009a,
title={A Unified Perspective on Parity- and Syndrome-Based Binary Data
Compression Using Off-the-Shelf Turbo Codecs},
author={Lorenzo Cappellari and Andrea De Giusti},
journal={arXiv preprint arXiv:0902.0562},
year={2009},
archivePrefix={arXiv},
eprint={0902.0562},
primaryClass={cs.IT math.IT}
} | cappellari2009a |
arxiv-6276 | 0902.0606 | Beyond Zipf's law: Modeling the structure of human language | <|reference_start|>Beyond Zipf's law: Modeling the structure of human language: Human language, the most powerful communication system in history, is closely associated with cognition. Written text is one of the fundamental manifestations of language, and the study of its universal regularities can give clues about how our brains process information and how we, as a society, organize and share it. Still, only classical patterns such as Zipf's law have been explored in depth. In contrast, other basic properties like the existence of bursts of rare words in specific documents, the topical organization of collections, or the sublinear growth of vocabulary size with the length of a document, have only been studied one by one and mainly applying heuristic methodologies rather than basic principles and general mechanisms. As a consequence, there is a lack of understanding of linguistic processes as complex emergent phenomena. Beyond Zipf's law for word frequencies, here we focus on Heaps' law, burstiness, and the topicality of document collections, which encode correlations within and across documents absent in random null models. We introduce and validate a generative model that explains the simultaneous emergence of all these patterns from simple rules. As a result, we find a connection between the bursty nature of rare words and the topical organization of texts and identify dynamic word ranking and memory across documents as key mechanisms explaining the non trivial organization of written text. Our research can have broad implications and practical applications in computer science, cognitive science, and linguistics.<|reference_end|> | arxiv | @article{serrano2009beyond,
title={Beyond Zipf's law: Modeling the structure of human language},
author={M. Angeles Serrano and Alessandro Flammini and Filippo Menczer},
journal={arXiv preprint arXiv:0902.0606},
year={2009},
archivePrefix={arXiv},
eprint={0902.0606},
primaryClass={cs.CL physics.soc-ph}
} | serrano2009beyond |
arxiv-6277 | 0902.0620 | Degrees of Guaranteed Envy-Freeness in Finite Bounded Cake-Cutting Protocols | <|reference_start|>Degrees of Guaranteed Envy-Freeness in Finite Bounded Cake-Cutting Protocols: Cake-cutting protocols aim at dividing a ``cake'' (i.e., a divisible resource) and assigning the resulting portions to several players in a way that each of the players feels to have received a ``fair'' amount of the cake. An important notion of fairness is envy-freeness: No player wishes to switch the portion of the cake received with another player's portion. Despite intense efforts in the past, it is still an open question whether there is a \emph{finite bounded} envy-free cake-cutting protocol for an arbitrary number of players, and even for four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how good a cake-cutting protocol can approximate the ideal of envy-freeness while keeping the protocol finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n \geq 3 of players, and show that this protocol has a DGEF of 1 + \lceil (n^2)/2 \rceil. This is the currently best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We will make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols.<|reference_end|> | arxiv | @article{lindner2009degrees,
title={Degrees of Guaranteed Envy-Freeness in Finite Bounded Cake-Cutting
Protocols},
author={Claudia Lindner and Joerg Rothe},
journal={arXiv preprint arXiv:0902.0620},
year={2009},
archivePrefix={arXiv},
eprint={0902.0620},
primaryClass={cs.GT}
} | lindner2009degrees |
arxiv-6278 | 0902.0657 | Efficient implementation of linear programming decoding | <|reference_start|>Efficient implementation of linear programming decoding: While linear programming (LP) decoding provides more flexibility for finite-length performance analysis than iterative message-passing (IMP) decoding, it is computationally more complex to implement in its original form, due to both the large size of the relaxed LP problem, and the inefficiency of using general-purpose LP solvers. This paper explores ideas for fast LP decoding of low-density parity-check (LDPC) codes. We first prove, by modifying the previously reported Adaptive LP decoding scheme to allow removal of unnecessary constraints, that LP decoding can be performed by solving a number of LP problems that contain at most one linear constraint derived from each of the parity-check constraints. By exploiting this property, we study a sparse interior-point implementation for solving this sequence of linear programs. Since the most complex part of each iteration of the interior-point algorithm is the solution of a (usually ill-conditioned) system of linear equations for finding the step direction, we propose a preconditioning algorithm to facilitate iterative solution of such systems. The proposed preconditioning algorithm is similar to the encoding procedure of LDPC codes, and we demonstrate its effectiveness via both analytical methods and computer simulation results.<|reference_end|> | arxiv | @article{taghavi2009efficient,
title={Efficient implementation of linear programming decoding},
author={Mohammad H. Taghavi and Amin Shokrollahi and Paul H. Siegel},
journal={arXiv preprint arXiv:0902.0657},
year={2009},
archivePrefix={arXiv},
eprint={0902.0657},
primaryClass={cs.IT math.IT}
} | taghavi2009efficient |
arxiv-6279 | 0902.0668 | Application of the Weil representation: diagonalization of the discrete Fourier transform | <|reference_start|>Application of the Weil representation: diagonalization of the discrete Fourier transform: We survey a new application of the Weil representation to construct a canonical basis of eigenvectors for the discrete Fourier transform (DFT). The transition matrix from the standard basis to the canonical basis defines a novel transform which we call the discrete oscillator transform (DOT for short). In addition, we describe a fast algorithm for computing the DOT in certain cases.<|reference_end|> | arxiv | @article{gurevich2009application,
title={Application of the Weil representation: diagonalization of the discrete
Fourier transform},
author={Shamgar Gurevich (UC Berkeley) and Ronny Hadani (U of Chicago)},
journal={arXiv preprint arXiv:0902.0668},
year={2009},
archivePrefix={arXiv},
eprint={0902.0668},
primaryClass={cs.IT cs.DM math.IT math.RT}
} | gurevich2009application |
arxiv-6280 | 0902.0673 | Optimal profiles in variable speed flows | <|reference_start|>Optimal profiles in variable speed flows: Where a 2D problem of optimal profile in variable speed flow is resolved in a class of convex Bezier curves, using symbolic and numerical computations.<|reference_end|> | arxiv | @article{argentini2009optimal,
title={Optimal profiles in variable speed flows},
author={Gianluca Argentini},
journal={arXiv preprint arXiv:0902.0673},
year={2009},
archivePrefix={arXiv},
eprint={0902.0673},
primaryClass={math.HO cs.CE math.OC physics.flu-dyn}
} | argentini2009optimal |
arxiv-6281 | 0902.0744 | Embedding Data within Knowledge Spaces | <|reference_start|>Embedding Data within Knowledge Spaces: The promise of e-Science will only be realized when data is discoverable, accessible, and comprehensible within distributed teams, across disciplines, and over the long-term--without reliance on out-of-band (non-digital) means. We have developed the open-source Tupelo semantic content management framework and are employing it to manage a wide range of e-Science entities (including data, documents, workflows, people, and projects) and a broad range of metadata (including provenance, social networks, geospatial relationships, temporal relations, and domain descriptions). Tupelo couples the use of global identifiers and resource description framework (RDF) statements with an aggregatable content repository model to provide a unified space for securely managing distributed heterogeneous content and relationships.<|reference_end|> | arxiv | @article{myers2009embedding,
title={Embedding Data within Knowledge Spaces},
author={James D. Myers and Joe Futrelle and Jeff Gaynor and Joel Plutchak and
  Peter Bajcsy and Jason Kastner and Kailash Kotwani and Jong Sung Lee and
  Luigi Marini and Rob Kooper and Robert E. McGrath and Terry McLaren and
  Alejandro Rodriguez and Yong Liu (National Center for Supercomputing
  Applications, University of Illinois at Urbana-Champaign)},
journal={arXiv preprint arXiv:0902.0744},
year={2009},
archivePrefix={arXiv},
eprint={0902.0744},
primaryClass={cs.AI cs.HC cs.IR}
} | myers2009embedding |
arxiv-6282 | 0902.0746 | Interference and Congestion Aware Gradient Broadcasting Routing for Wireless Sensor Networks | <|reference_start|>Interference and Congestion Aware Gradient Broadcasting Routing for Wireless Sensor Networks: This paper addresses the problem of reliable transmission of data through a sensor network. We focus on networks rapidly deployed in harsh environments. For these networks, important design requirements are fast data transmission and rapid network setup, as well as minimized energy consumption for increased network lifetime. We propose a novel broadcasting solution that accounts for the interference impact and the congestion level of the channel, in order to improve robustness, energy consumption and delay performance, compared to a benchmark routing protocol, the GRAB algorithm. Three solutions are proposed: P-GRAB, a probabilistic routing algorithm for interference mitigation, U-GRAB, a utility-based algorithm that adjusts to real-time congestion and UP-GRAB, a combination of P-GRAB and U-GRAB. It is shown that P-GRAB provides the best performance for geometry-aware networks while the U-GRAB approach is the best option for unreliable and unstable networks.<|reference_end|> | arxiv | @article{jaffrès-runser2009interference,
title={Interference and Congestion Aware Gradient Broadcasting Routing for
Wireless Sensor Networks},
author={Katia Jaffrès-Runser and Cristina Comaniciu and Jean-Marie Gorce},
journal={arXiv preprint arXiv:0902.0746},
year={2009},
archivePrefix={arXiv},
eprint={0902.0746},
primaryClass={cs.NI}
} | jaffrès-runser2009interference |
arxiv-6283 | 0902.0755 | A Simple Extraction Procedure for Bibliographical Author Field | <|reference_start|>A Simple Extraction Procedure for Bibliographical Author Field: A procedure for bibliographic author metadata extraction from scholarly texts is presented. The author segments are identified based on capitalization and line break patterns. Two main author layout templates, which can retrieve from a varied set of title pages, are provided. Additionally, several disambiguating rules are described.<|reference_end|> | arxiv | @article{constans2009a,
title={A Simple Extraction Procedure for Bibliographical Author Field},
author={Pere Constans},
journal={arXiv preprint arXiv:0902.0755},
year={2009},
archivePrefix={arXiv},
eprint={0902.0755},
primaryClass={cs.DL}
} | constans2009a |
arxiv-6284 | 0902.0763 | Genetic algorithm based optimization and post optimality analysis of multi-pass face milling | <|reference_start|>Genetic algorithm based optimization and post optimality analysis of multi-pass face milling: This paper presents an optimization technique for the multi-pass face milling process. Genetic algorithm (GA) is used to obtain the optimum cutting parameters by minimizing the unit production cost for a given amount of material removal. Cutting speed, feed and depth of cut for the finish and rough passes are the cutting parameters. An equal depth of cut for roughing passes has been considered. A lookup table containing the feasible combinations of depth of cut in finish and rough passes is generated so as to reduce the number of variables by one. The resulting mixed integer nonlinear optimization problem is solved in a single step using GA. The entire technique is demonstrated in a case study. Post optimality analysis of the example problem is done to develop a strategy for optimizing without running GA again for different values of total depth of cut.<|reference_end|> | arxiv | @article{saha2009genetic,
title={Genetic algorithm based optimization and post optimality analysis of
multi-pass face milling},
author={Sourabh Saha},
journal={arXiv preprint arXiv:0902.0763},
year={2009},
archivePrefix={arXiv},
eprint={0902.0763},
primaryClass={cs.CE}
} | saha2009genetic |
arxiv-6285 | 0902.0782 | A Multiobjective Optimization Framework for Routing in Wireless Ad Hoc Networks | <|reference_start|>A Multiobjective Optimization Framework for Routing in Wireless Ad Hoc Networks: Wireless ad hoc networks are seldom characterized by one single performance metric, yet the current literature lacks a flexible framework to assist in characterizing the design tradeoffs in such networks. In this work, we address this problem by proposing a new modeling framework for routing in ad hoc networks, which used in conjunction with metaheuristic multiobjective search algorithms, will result in a better understanding of network behavior and performance when multiple criteria are relevant. Our approach is to take a holistic view of the network that captures the cross-interactions among interference management techniques implemented at various layers of the protocol stack. The resulting framework is a complex multiobjective optimization problem that can be efficiently solved through existing multiobjective search techniques. In this contribution, we present the Pareto optimal sets for an example sensor network when delay, robustness and energy are considered. The aim of this paper is to present the framework and hence for conciseness purposes, the multiobjective optimization search is not developed herein.<|reference_end|> | arxiv | @article{jaffrès-runser2009a,
title={A Multiobjective Optimization Framework for Routing in Wireless Ad Hoc
Networks},
author={Katia Jaffrès-Runser and Cristina Comaniciu and Jean-Marie Gorce},
journal={IEEE International Symposium on Modeling and Optimization in
  Mobile, Ad Hoc, and Wireless Networks (WiOpt) 2010},
year={2009},
archivePrefix={arXiv},
eprint={0902.0782},
primaryClass={cs.NI cs.PF}
} | jaffrès-runser2009a |
arxiv-6286 | 0902.0789 | The series limit of sum_k 1/[k log k (log log k)^2] | <|reference_start|>The series limit of sum_k 1/[k log k (log log k)^2]: The slowly converging series sum_{k=3}^infinity 1/[k * log k * (log log k)^a] is evaluated to 38.4067680928 at a=2. After some initial terms, the infinite tail of the sum is replaced by the integral of the associated interpolating function, which is available in simple analytic form. Biases that originate from the difference between the smooth area under the function and the corresponding Riemann sum are corrected by standard means. The cases a=3 and a=4 are computed in the same manner.<|reference_end|> | arxiv | @article{mathar2009the,
title={The series limit of sum_k 1/[k log k (log log k)^2]},
author={Richard J. Mathar},
journal={arXiv preprint arXiv:0902.0789},
year={2009},
archivePrefix={arXiv},
eprint={0902.0789},
primaryClass={math.NA cs.NA}
} | mathar2009the |
arxiv-6287 | 0902.0798 | Alleviating Media Bias Through Intelligent Agent Blogging | <|reference_start|>Alleviating Media Bias Through Intelligent Agent Blogging: Consumers of mass media must have a comprehensive, balanced and plural selection of news to get an unbiased perspective; but achieving this goal can be very challenging, laborious and time-consuming. The development of news stories over time, their (in)consistency, and the differing levels of coverage across media outlets are challenges that a conscientious reader has to overcome in order to alleviate bias. In this paper we present an intelligent agent framework currently facilitating analysis of the main sources of on-line news in El Salvador. We show how prior tools of text analysis and Web 2.0 technologies can be combined with minimal manual intervention to help individuals in their rational decision process, while holding media outlets accountable for their work.<|reference_end|> | arxiv | @article{diaz-aviles2009alleviating,
title={Alleviating Media Bias Through Intelligent Agent Blogging},
author={Ernesto Diaz-Aviles},
journal={arXiv preprint arXiv:0902.0798},
year={2009},
archivePrefix={arXiv},
eprint={0902.0798},
primaryClass={cs.AI}
} | diaz-aviles2009alleviating |
arxiv-6288 | 0902.0822 | Bootstrapped Oblivious Transfer and Secure Two-Party Function Computation | <|reference_start|>Bootstrapped Oblivious Transfer and Secure Two-Party Function Computation: We propose an information theoretic framework for the secure two-party function computation (SFC) problem and introduce the notion of SFC capacity. We study and extend string oblivious transfer (OT) to sample-wise OT. We propose an efficient, perfectly private OT protocol utilizing the binary erasure channel or source. We also propose the bootstrap string OT protocol which provides disjoint (weakened) privacy while achieving a multiplicative increase in rate, thus trading off security for rate. Finally, leveraging our OT protocol, we construct a protocol for SFC and establish a general lower bound on SFC capacity of the binary erasure channel and source.<|reference_end|> | arxiv | @article{wang2009bootstrapped,
title={Bootstrapped Oblivious Transfer and Secure Two-Party Function
Computation},
author={Ye Wang and Prakash Ishwar},
journal={arXiv preprint arXiv:0902.0822},
year={2009},
archivePrefix={arXiv},
eprint={0902.0822},
primaryClass={cs.CR cs.IT math.IT}
} | wang2009bootstrapped |
arxiv-6289 | 0902.0828 | Finding Exact Minimal Polynomial by Approximations | <|reference_start|>Finding Exact Minimal Polynomial by Approximations: We present a new algorithm for reconstructing an exact algebraic number from its approximate value using an improved parameterized integer relation construction method. Our result is consistent with the existence of error control when obtaining an exact rational number from its approximation. The algorithm is applicable to finding the exact minimal polynomial from an approximate root. This also enables us to provide an efficient method of converting the rational approximation representation to the minimal polynomial representation, and to devise a simple algorithm to factor multivariate polynomials with rational coefficients. Compared with other methods, this method has the advantage of highly efficient numerical computation. The experimental results show that the method is more efficient than \emph{identify} in \emph{Maple} 11 for obtaining an exact algebraic number from its approximation. In this paper, we show in full how to obtain exact results from approximate numerical computations.<|reference_end|> | arxiv | @article{qin2009finding,
title={Finding Exact Minimal Polynomial by Approximations},
author={Xiaolin Qin and Yong Feng and Jingwei Chen and Jingzhong Zhang},
journal={arXiv preprint arXiv:0902.0828},
year={2009},
archivePrefix={arXiv},
eprint={0902.0828},
primaryClass={cs.CC cs.SC}
} | qin2009finding |
arxiv-6290 | 0902.0838 | The Ergodic Capacity of Phase-Fading Interference Networks | <|reference_start|>The Ergodic Capacity of Phase-Fading Interference Networks: We identify the role of equal strength interference links as bottlenecks on the ergodic sum capacity of a $K$ user phase-fading interference network, i.e., an interference network where the fading process is restricted primarily to independent and uniform phase variations while the channel magnitudes are held fixed across time. It is shown that even though there are $K(K-1)$ cross-links, only about $K/2$ disjoint and equal strength interference links suffice to determine the capacity of the network regardless of the strengths of the rest of the cross channels. This scenario is called a \emph{minimal bottleneck state}. It is shown that ergodic interference alignment is capacity optimal for a network in a minimal bottleneck state. The results are applied to large networks. It is shown that large networks are close to bottleneck states with a high probability, so that ergodic interference alignment is close to optimal for large networks. Limitations of the notion of bottleneck states are also highlighted for channels where both the phase and the magnitudes vary with time. It is shown through an example that for these channels, joint coding across different bottleneck states makes it possible to circumvent the capacity bottlenecks.<|reference_end|> | arxiv | @article{jafar2009the,
title={The Ergodic Capacity of Phase-Fading Interference Networks},
author={Syed A. Jafar},
journal={IEEE Transactions on Information Theory, Vol. 57, No. 12, Pages:
7685-7694, December 2011},
year={2009},
doi={10.1109/TIT.2011.2169110},
archivePrefix={arXiv},
eprint={0902.0838},
primaryClass={cs.IT math.IT}
} | jafar2009the |
arxiv-6291 | 0902.0850 | Matrix Graph Grammars and Monotone Complex Logics | <|reference_start|>Matrix Graph Grammars and Monotone Complex Logics: Graph transformation is concerned with the manipulation of graphs by means of rules. Graph grammars have been traditionally studied using techniques from category theory. In previous work, we introduced Matrix Graph Grammars (MGGs) as a purely algebraic approach for the study of graph grammars and graph dynamics, based on the representation of graphs by means of their adjacency matrices. MGGs have been successfully applied to problems such as applicability of rule sequences, sequentialization and reachability, providing new analysis techniques and generalizing and improving previous results. Our next objective is to generalize MGGs in order to approach computational complexity theory and "static" properties of graphs out of the "dynamics" of certain grammars. In the present work, we start building bridges between MGGs and complexity by introducing what we call "Monotone Complex Logic", which allows establishing a (bijective) link between MGGs and complex analysis. We use this logic to recast the formulation and basic building blocks of MGGs in terms of more appropriate geometric and analytic concepts (scalar products, norms, distances). MGG rules can also be interpreted - via operators - as complex numbers. Interestingly, the subset they define can be characterized as the Sierpinski gasket.<|reference_end|> | arxiv | @article{velasco2009matrix,
title={Matrix Graph Grammars and Monotone Complex Logics},
author={Pedro Pablo Perez Velasco and Juan de Lara},
journal={arXiv preprint arXiv:0902.0850},
year={2009},
archivePrefix={arXiv},
eprint={0902.0850},
primaryClass={cs.DM}
} | velasco2009matrix |
arxiv-6292 | 0902.0892 | A Unified Framework for Linear-Programming Based Communication Receivers | <|reference_start|>A Unified Framework for Linear-Programming Based Communication Receivers: It is shown that a large class of communication systems which admit a sum-product algorithm (SPA) based receiver also admit a corresponding linear-programming (LP) based receiver. The two receivers have a relationship defined by the local structure of the underlying graphical model, and are inhibited by the same phenomenon, which we call 'pseudoconfigurations'. This concept is a generalization of the concept of 'pseudocodewords' for linear codes. It is proved that the LP receiver has the 'maximum likelihood certificate' property, and that the receiver output is the lowest cost pseudoconfiguration. Equivalence of graph-cover pseudoconfigurations and linear-programming pseudoconfigurations is also proved. A concept of 'system pseudodistance' is defined which generalizes the existing concept of pseudodistance for binary and nonbinary linear codes. It is demonstrated how the LP design technique may be applied to the problem of joint equalization and decoding of coded transmissions over a frequency selective channel, and a simulation-based analysis of the error events of the resulting LP receiver is also provided. For this particular application, the proposed LP receiver is shown to be competitive with other receivers, and to be capable of outperforming turbo equalization in bit and frame error rate performance.<|reference_end|> | arxiv | @article{flanagan2009a,
title={A Unified Framework for Linear-Programming Based Communication Receivers},
author={Mark F. Flanagan},
journal={arXiv preprint arXiv:0902.0892},
year={2009},
doi={10.1109/TCOMM.2011.100411.100417},
archivePrefix={arXiv},
eprint={0902.0892},
primaryClass={cs.IT math.IT}
} | flanagan2009a |
arxiv-6293 | 0902.0899 | Comparative concept similarity over Minspaces: Axiomatisation and Tableaux Calculus | <|reference_start|>Comparative concept similarity over Minspaces: Axiomatisation and Tableaux Calculus: We study the logic of comparative concept similarity $\CSL$ introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev to capture a form of qualitative similarity comparison. In this logic we can formulate assertions of the form "objects A are more similar to B than to C". The semantics of this logic is defined on structures equipped with distance functions evaluating the similarity degree of objects. We consider here the particular case of the semantics induced by \emph{minspaces}, the latter being distance spaces where the minimum of a set of distances always exists. It turns out that the semantics over arbitrary minspaces can be equivalently specified in terms of preferential structures, typical of conditional logics. We first give a direct axiomatisation of this logic over Minspaces. We next define a decision procedure in the form of a tableaux calculus. Both the calculus and the axiomatisation take advantage of the reformulation of the semantics in terms of preferential structures.<|reference_end|> | arxiv | @article{alenda2009comparative,
title={Comparative concept similarity over Minspaces: Axiomatisation and
Tableaux Calculus},
author={Régis Alenda (LSIS) and Nicola Olivetti (LSIS) and Camilla Schwind (LIF)},
journal={arXiv preprint arXiv:0902.0899},
year={2009},
archivePrefix={arXiv},
eprint={0902.0899},
primaryClass={cs.AI}
} | alenda2009comparative |
arxiv-6294 | 0902.0901 | MicroSim: Modeling the Swedish Population | <|reference_start|>MicroSim: Modeling the Swedish Population: This article presents a unique, large-scale and spatially explicit microsimulation model that uses official anonymized register data collected from all individuals living in Sweden. Individuals are connected to households and workplaces and represent crucial links in the Swedish social contact network. This enables significant policy experiments in the domain of epidemic outbreaks. Development of the model started in 2004 at the Swedish Institute for Infectious Disease Control (SMI) in Solna, Sweden, with the goal of creating a tool for testing the effects of intervention policies. These interventions include mass vaccination, targeted vaccination, isolation and social distancing. The model was initially designed for simulating smallpox outbreaks. In 2006, it was modified to support simulations of pandemic influenza. All nine million members of the Swedish population are represented in the model. This article is a technical description of the simulation model: the input data, the simulation engine and the basic object types.<|reference_end|> | arxiv | @article{brouwers2009microsim:,
title={MicroSim: Modeling the Swedish Population},
  author={Lisa Brouwers, Martin Camitz, Baki Cakici, Kalle M\"akil\"a, Paul
  Saretok},
journal={arXiv preprint arXiv:0902.0901},
year={2009},
archivePrefix={arXiv},
eprint={0902.0901},
primaryClass={cs.OH}
} | brouwers2009microsim: |
arxiv-6295 | 0902.0919 | Multiple time-delays system modeling and control for router management | <|reference_start|>Multiple time-delays system modeling and control for router management: This paper investigates the overload problem of a single congested router in TCP (Transmission Control Protocol) networks. To cope with the congestion phenomenon, we design a feedback control based on a multiple time-delays model of the set TCP/AQM (Active Queue Management). Indeed, using robust control tools, especially in the quadratic separation framework, the TCP/AQM model is rewritten as an intercon- nected system and a structured state feedback is constructed to stabilize the network variables. Finally, we illustrate the proposed methodology with a numerical example and simulations using NS-2 simulator.<|reference_end|> | arxiv | @article{ariba2009multiple,
title={Multiple time-delays system modeling and control for router management},
  author={Yassine Ariba (LAAS), Fr\'ed\'eric Gouaisbaut (LAAS), Yann Labit
(LAAS)},
journal={arXiv preprint arXiv:0902.0919},
year={2009},
archivePrefix={arXiv},
eprint={0902.0919},
primaryClass={cs.NI}
} | ariba2009multiple |
arxiv-6296 | 0902.0920 | Design and performance evaluation of a state-space based AQM | <|reference_start|>Design and performance evaluation of a state-space based AQM: Recent research has shown the link between congestion control in communication networks and feedback control system. In this paper, the design of an active queue management (AQM) which can be viewed as a controller, is considered. Based on a state space representation of a linearized fluid flow model of TCP, the AQM design is converted to a state feedback synthesis problem for time delay systems. Finally, an example extracted from the literature and simulations via a network simulator NS (under cross traffic conditions) support our study.<|reference_end|> | arxiv | @article{ariba2009design,
title={Design and performance evaluation of a state-space based AQM},
  author={Yassine Ariba (LAAS), Yann Labit (LAAS), Fr\'ed\'eric Gouaisbaut
(LAAS)},
journal={International Conference on Communication Theory, Reliability, and
Quality of Service, Bucharest : Roumanie (2008)},
year={2009},
doi={10.1109/CTRQ.2008.15},
archivePrefix={arXiv},
eprint={0902.0920},
primaryClass={cs.NI}
} | ariba2009design |
arxiv-6297 | 0902.0922 | On Designing Lyapunov-Krasovskii Based AQM for Routers Supporting TCP Flows | <|reference_start|>On Designing Lyapunov-Krasovskii Based AQM for Routers Supporting TCP Flows: For the last few years, we assist to a growing interest of designing AQM (Active Queue Management) using control theory. In this paper, we focus on the synthesis of an AQM based on the Lyapunov theory for time delay systems. With the help of a recently developed Lyapunov-Krasovskii functional and using a state space representation of a linearized fluid model of TCP, two robust AQMs stabilizing the TCP model are constructed. Notice that our results are constructive and the synthesis problem is reduced to a convex optimization scheme expressed in terms of linear matrix inequalities (LMIs). Finally, an example extracted from the literature and simulations via {\it NS simulator} support our study.<|reference_end|> | arxiv | @article{labit2009on,
title={On Designing Lyapunov-Krasovskii Based AQM for Routers Supporting TCP
Flows},
  author={Yann Labit (LAAS), Yassine Ariba (LAAS), Fr\'ed\'eric Gouaisbaut
(LAAS)},
journal={46th IEEE Conference on Decision and Control, New Orleans :
\'Etats-Unis d'Am\'erique (2007)},
year={2009},
doi={10.1109/CDC.2007.4434673},
archivePrefix={arXiv},
eprint={0902.0922},
primaryClass={cs.NI}
} | labit2009on |
arxiv-6298 | 0902.0924 | Towards a Theory of Requirements Elicitation: Acceptability Condition for the Relative Validity of Requirements | <|reference_start|>Towards a Theory of Requirements Elicitation: Acceptability Condition for the Relative Validity of Requirements: A requirements engineering artifact is valid relative to the stakeholders of the system-to-be if they agree on the content of that artifact. Checking relative validity involves a discussion between the stakeholders and the requirements engineer. This paper proposes (i) a language for the representation of information exchanged in a discussion about the relative validity of an artifact; (ii) the acceptability condition, which, when it verifies in a discussion captured in the proposed language, signals that the relative validity holds for the discussed artifact and for the participants in the discussion; and (iii) reasoning procedures to automatically check the acceptability condition in a discussions captured by the proposed language.<|reference_end|> | arxiv | @article{jureta2009towards,
title={Towards a Theory of Requirements Elicitation: Acceptability Condition
for the Relative Validity of Requirements},
author={Ivan Jureta, John Mylopoulos, Stephane Faulkner},
journal={arXiv preprint arXiv:0902.0924},
year={2009},
archivePrefix={arXiv},
eprint={0902.0924},
primaryClass={cs.SE}
} | jureta2009towards |
arxiv-6299 | 0902.0926 | Robust control tools for traffic monitoring in TCP/AQM networks | <|reference_start|>Robust control tools for traffic monitoring in TCP/AQM networks: Several studies have considered control theory tools for traffic control in communication networks, as for example the congestion control issue in IP (Internet Protocol) routers. In this paper, we propose to design a linear observer for time-delay systems to address the traffic monitoring issue in TCP/AQM (Transmission Control Protocol/Active Queue Management) networks. Due to several propagation delays and the queueing delay, the set TCP/AQM is modeled as a multiple delayed system of a particular form. Hence, appropriate robust control tools as quadratic separation are adopted to construct a delay dependent observer for TCP flows estimation. Note that, the developed mechanism enables also the anomaly detection issue for a class of DoS (Denial of Service) attacks. At last, simulations via the network simulator NS-2 and an emulation experiment validate the proposed methodology.<|reference_end|> | arxiv | @article{ariba2009robust,
title={Robust control tools for traffic monitoring in TCP/AQM networks},
  author={Yassine Ariba (LAAS), Fr\'ed\'eric Gouaisbaut (LAAS), Sandy Rahme
(LAAS), Yann Labit (LAAS)},
journal={arXiv preprint arXiv:0902.0926},
year={2009},
archivePrefix={arXiv},
eprint={0902.0926},
primaryClass={cs.NI}
} | ariba2009robust |
arxiv-6300 | 0902.0947 | On the Gaussian MAC with Imperfect Feedback | <|reference_start|>On the Gaussian MAC with Imperfect Feedback: New achievable rate regions are derived for the two-user additive white Gaussian multiple-access channel with noisy feedback. The regions exhibit the following two properties. Irrespective of the (finite) Gaussian feedback-noise variances, the regions include rate points that lie outside the no-feedback capacity region, and when the feedback-noise variances tend to 0 the regions converge to the perfect-feedback capacity region. The new achievable regions also apply to the partial-feedback setting where one of the transmitters has a noisy feedback link and the other transmitter has no feedback at all. Again, irrespective of the (finite) noise variance on the feedback link, the regions include rate points that lie outside the no-feedback capacity region. Moreover, in the case of perfect partial feedback, i.e., where the only feedback link is noise-free, for certain channel parameters the new regions include rate points that lie outside the Cover-Leung region. This answers in the negative the question posed by van der Meulen as to whether the Cover-Leung region equals the capacity region of the Gaussian multiple-access channel with perfect partial feedback. Finally, we propose new achievable regions also for a setting where the receiver is cognizant of the realizations of the noise sequences on the feedback links.<|reference_end|> | arxiv | @article{lapidoth2009on,
title={On the Gaussian MAC with Imperfect Feedback},
author={Amos Lapidoth and Michele A. Wigger},
journal={arXiv preprint arXiv:0902.0947},
year={2009},
archivePrefix={arXiv},
eprint={0902.0947},
primaryClass={cs.IT math.IT}
} | lapidoth2009on |