Dataset schema (columns and value-length statistics):
corpus_id: string, 7-12 chars
paper_id: string, 9-16 chars
title: string, 1-261 chars
abstract: string, 70-4.02k chars
source: string, 1 class
bibtex: string, 208-20.9k chars
citation_key: string, 6-100 chars
arxiv-6701
0903.1952
Statistical Eigenmode Transmission over Jointly-Correlated MIMO Channels
<|reference_start|>Statistical Eigenmode Transmission over Jointly-Correlated MIMO Channels: We investigate MIMO eigenmode transmission using statistical channel state information at the transmitter. We consider a general jointly-correlated MIMO channel model, which does not require separable spatial correlations at the transmitter and receiver. For this model, we first derive a closed-form tight upper bound for the ergodic capacity, which reveals a simple and interesting relationship in terms of the matrix permanent of the eigenmode channel coupling matrix and embraces many existing results in the literature as special cases. Based on this closed-form and tractable upper bound expression, we then employ convex optimization techniques to develop low-complexity power allocation solutions involving only the channel statistics. Necessary and sufficient optimality conditions are derived, from which we develop an iterative water-filling algorithm with guaranteed convergence. Simulations demonstrate the tightness of the capacity upper bound and the near-optimal performance of the proposed low-complexity transmitter optimization approach.<|reference_end|>
arxiv
@article{gao2009statistical, title={Statistical Eigenmode Transmission over Jointly-Correlated MIMO Channels}, author={Xiqi Gao, Bin Jiang, Xiao Li, Alex B. Gershman, Matthew R. McKay}, journal={arXiv preprint arXiv:0903.1952}, year={2009}, doi={10.1109/TIT.2009.2023737}, archivePrefix={arXiv}, eprint={0903.1952}, primaryClass={cs.IT math.IT} }
gao2009statistical
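The iterative water-filling algorithm mentioned in the abstract above is a standard convex-optimization building block. As a point of reference, here is a minimal sketch of classical water-filling over parallel eigenmodes with known, fixed gains; it illustrates only the power-allocation structure, not the authors' statistics-based variant.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate power over parallel channels to maximize sum log(1 + g*p).

    gains: effective channel gains g_i (fixed here, not statistical CSI).
    Solves p_i = max(0, mu - 1/g_i) with sum(p_i) = total_power by
    bisection on the water level mu.
    """
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / g.min()  # hi guarantees overshoot
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

p = water_filling([2.0, 1.0, 0.25], total_power=1.0)
print(p, p.sum())  # stronger eigenmodes receive more power
```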
arxiv-6702
0903.1953
Laconic schema mappings: computing core universal solutions by means of SQL queries
<|reference_start|>Laconic schema mappings: computing core universal solutions by means of SQL queries: We present a new method for computing core universal solutions in data exchange settings specified by source-to-target dependencies, by means of SQL queries. Unlike previously known algorithms, which are recursive in nature, our method can be implemented directly on top of any DBMS. Our method is based on the new notion of a laconic schema mapping. A laconic schema mapping is a schema mapping for which the canonical universal solution is the core universal solution. We give a procedure by which every schema mapping specified by FO s-t tgds can be turned into a laconic schema mapping specified by FO s-t tgds that may refer to a linear order on the domain of the source instance. We show that our results are optimal, in the sense that the linear order is necessary and the method cannot be extended to schema mappings involving target constraints.<|reference_end|>
arxiv
@article{cate2009laconic, title={Laconic schema mappings: computing core universal solutions by means of SQL queries}, author={Balder ten Cate, Laura Chiticariu, Phokion Kolaitis and Wang-Chiew Tan}, journal={arXiv preprint arXiv:0903.1953}, year={2009}, archivePrefix={arXiv}, eprint={0903.1953}, primaryClass={cs.DB} }
cate2009laconic
arxiv-6703
0903.1967
Network error correction for unit-delay, memory-free networks using convolutional codes
<|reference_start|>Network error correction for unit-delay, memory-free networks using convolutional codes: A single source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source based on the network code in order to correct network-errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval which depends on the convolutional code selected. Bounds on this interval and the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network error correcting codes (CNECCs) designed for the unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.<|reference_end|>
arxiv
@article{prasad2009network, title={Network error correction for unit-delay, memory-free networks using convolutional codes}, author={K. Prasad, B. Sundar Rajan}, journal={arXiv preprint arXiv:0903.1967}, year={2009}, archivePrefix={arXiv}, eprint={0903.1967}, primaryClass={cs.IT math.IT} }
prasad2009network
arxiv-6704
0903.1972
On Competing Wireless Service Providers
<|reference_start|>On Competing Wireless Service Providers: We consider a situation where wireless service providers compete for heterogeneous wireless users. The users differ in their willingness to pay as well as in their individual channel gains. We prove existence and uniqueness of the Nash equilibrium for the competition of two service providers, for a generic channel model. Interestingly, the competition of two providers leads to a globally optimal outcome. We extend some of the results to the case where more than two providers are competing. Finally, we provide numerical examples that illustrate the effects of various parameters on the Nash equilibrium.<|reference_end|>
arxiv
@article{gajic2009on, title={On Competing Wireless Service Providers}, author={Vojislav Gajic (1), Jianwei Huang (2), Bixio Rimoldi (1) ((1) EPFL, (2) CUHK)}, journal={arXiv preprint arXiv:0903.1972}, year={2009}, archivePrefix={arXiv}, eprint={0903.1972}, primaryClass={cs.IT cs.GT math.IT} }
gajic2009on
arxiv-6705
0903.2015
Deposition and Extension Approach to Find Longest Common Subsequence for Multiple Sequences
<|reference_start|>Deposition and Extension Approach to Find Longest Common Subsequence for Multiple Sequences: The problem of finding the longest common subsequence (LCS) for a set of sequences is a very interesting and challenging problem in computer science. This problem is NP-complete, but because of its importance, many heuristic algorithms have been proposed, such as the Long Run algorithm and the Expansion algorithm. However, the performance of many current heuristic algorithms deteriorates fast when the number of sequences and the sequence length increase. In this paper, we propose a post-process heuristic algorithm for the LCS problem, the Deposition and Extension algorithm (DEA). This algorithm first generates a common subsequence by a process of sequence deposition, and then extends this common subsequence. The algorithm is proven to generate Common Subsequences (CSs) with guaranteed lengths. The experiments show that the results of the DEA algorithm are better than those of the Long Run and Expansion algorithms, especially on many long sequences. The algorithm also has superior efficiency both in time and space.<|reference_end|>
arxiv
@article{ning2009deposition, title={Deposition and Extension Approach to Find Longest Common Subsequence for Multiple Sequences}, author={Kang Ning}, journal={arXiv preprint arXiv:0903.2015}, year={2009}, archivePrefix={arXiv}, eprint={0903.2015}, primaryClass={cs.DS cs.DM math.CO} }
ning2009deposition
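For context on the multi-sequence LCS problem above, a common baseline is to fold the classic two-sequence dynamic-programming LCS over the whole set; the result is a valid common subsequence of all inputs, though with no optimality guarantee. This is a sketch of that baseline, not the paper's DEA algorithm.

```python
def lcs_pair(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS of two strings."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n  # backtrack to recover one LCS
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

def common_subsequence(seqs):
    """Fold pairwise LCS over all sequences: a heuristic, not the true multi-LCS."""
    cs = seqs[0]
    for s in seqs[1:]:
        cs = lcs_pair(cs, s)
    return cs

print(common_subsequence(["GATTACA", "GTCAGA", "ATACAG"]))
```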
arxiv-6706
0903.2016
Proof of a Conjecture on the Sequence of Exceptional Numbers, Classifying Cyclic Codes and APN Functions
<|reference_start|>Proof of a Conjecture on the Sequence of Exceptional Numbers, Classifying Cyclic Codes and APN Functions: We prove a conjecture that classifies exceptional numbers. This conjecture arises in two different ways, from cryptography and from coding theory. An odd integer $t\geq 3$ is said to be exceptional if $f(x)=x^t$ is APN (Almost Perfect Nonlinear) over $\mathbb{F}_{2^n}$ for infinitely many values of $n$. Equivalently, $t$ is exceptional if the binary cyclic code of length $2^n-1$ with two zeros $\omega, \omega^t$ has minimum distance 5 for infinitely many values of $n$. The conjecture we prove states that every exceptional number has the form $2^i+1$ or $4^i-2^i+1$.<|reference_end|>
arxiv
@article{hernando2009proof, title={Proof of a Conjecture on the Sequence of Exceptional Numbers, Classifying Cyclic Codes and APN Functions}, author={Fernando Hernando and Gary McGuire}, journal={Journal of Algebra, Volume 343, Issue 1, October 2011, Pages 78-92}, year={2009}, doi={10.1016/j.jalgebra.2011.06.019}, archivePrefix={arXiv}, eprint={0903.2016}, primaryClass={cs.IT math.AG math.IT} }
hernando2009proof
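The classification proven in the paper above is concrete enough to enumerate directly: the exceptional exponents are exactly those of the form 2^i+1 and 4^i-2^i+1 (often called Gold and Kasami exponents in the APN literature). A small sketch listing the odd exponents of each family:

```python
def gold(i):
    return 2**i + 1

def kasami(i):
    return 4**i - 2**i + 1

# Odd exponents t >= 3 of the two exceptional families, below a bound.
bound = 2000
golds = sorted(g for g in (gold(i) for i in range(1, 12))
               if g % 2 == 1 and 3 <= g < bound)
kasamis = sorted(k for k in (kasami(i) for i in range(1, 8))
                 if k % 2 == 1 and 3 <= k < bound)
print("Gold:  ", golds)    # 3, 5, 9, 17, 33, ...
print("Kasami:", kasamis)  # 3, 13, 57, 241, 993
```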
arxiv-6707
0903.2071
Notes on Recent Approaches Concerning the Kirchhoff-Law-Johnson-Noise-based Secure Key Exchange
<|reference_start|>Notes on Recent Approaches Concerning the Kirchhoff-Law-Johnson-Noise-based Secure Key Exchange: We critically analyze the results and claims in [Physics Letters A 373 (2009) 901-904]. We show that the strong security leak that appeared in the simulations is only an artifact and not caused by "multiple reflections". Since no wave modes exist at a cable length of 5% of the shortest wavelength of the signal, no wave is present to be reflected. In the high wave impedance limit, the conditions used in the simulations are heavily unphysical (requiring cable diameters up to 28000 times greater than the measured size of the known universe) and the results are modeling artifacts due to the unphysical values. At the low cable impedance limit, the observed artifacts are due to violating the recommended (and tested) conditions by neglecting the cable capacitance restrictions and using an about 100 times longer cable than recommended, without a cable capacitance compensation arrangement. We implement and analyze the general circuitry of Liu's circulator and confirm that such systems are conceptually secure against passive attacks. We introduce an asymmetric, more robust version without a feedback loop. Then we crack all these systems by an active attack: a circulator-based man-in-the-middle attack. Finally, we analyze the proposed method to increase security by dropping only high-risk bits. We point out the differences between different types of high-risk bits and show the shortcomings of this strategy for some simple key exchange protocols.<|reference_end|>
arxiv
@article{kish2009notes, title={Notes on Recent Approaches Concerning the Kirchhoff-Law-Johnson-Noise-based Secure Key Exchange}, author={Laszlo B. Kish and Tamas Horvath}, journal={Physics Letters A 373 (2009) 2858-2868}, year={2009}, doi={10.1016/j.physleta.2009.05.077}, archivePrefix={arXiv}, eprint={0903.2071}, primaryClass={physics.gen-ph cs.CR physics.class-ph} }
kish2009notes
arxiv-6708
0903.2100
Partitions versus sets : a case of duality
<|reference_start|>Partitions versus sets : a case of duality: In a recent paper, Amini et al. introduce a general framework to prove duality theorems between special decompositions and their dual combinatorial objects. They thus unify all known ad-hoc proofs in one single theorem. While this unification process is definitely valuable, their main theorem remains quite technical and does not give real insight into why some decompositions admit dual objects and why others do not. The goal of this paper is both to generalise this framework a little and to give an enlightening simple proof of its central theorem.<|reference_end|>
arxiv
@article{lyaudet2009partitions, title={Partitions versus sets : a case of duality}, author={Laurent Lyaudet (LIP), Fr\'ed\'eric Mazoit (LaBRI), Stephan Thomasse (LIRMM)}, journal={European Journal of Combinatorics (2009) 1-7}, year={2009}, doi={10.1016/j.ejc.2009.09.004}, archivePrefix={arXiv}, eprint={0903.2100}, primaryClass={cs.DM} }
lyaudet2009partitions
arxiv-6709
0903.2101
Combinatorial Deformations of Algebras: Twisting and Perturbations
<|reference_start|>Combinatorial Deformations of Algebras: Twisting and Perturbations: The framework used to prove the multiplicative law deformation of the algebra of Feynman-Bender diagrams is a \textit{twisted shifted dual law} (in fact, twice). We give here a clear interpretation of its two parameters. The crossing parameter is a deformation of the tensor structure, whereas the superposition parameter is a perturbation of the shuffle coproduct of Hoffman type which, in turn, can be interpreted as the diagonal restriction of a superproduct. Here, we systematically detail these constructions.<|reference_end|>
arxiv
@article{duchamp2009combinatorial, title={Combinatorial Deformations of Algebras: Twisting and Perturbations}, author={G\'erard Henry Edmond Duchamp (LIPN), Christophe Tollu (LIPN), K. A. Penson (LPTMC), Gleb Koshevoy (CEMI)}, journal={arXiv preprint arXiv:0903.2101}, year={2009}, archivePrefix={arXiv}, eprint={0903.2101}, primaryClass={cs.SC math-ph math.CO math.MP} }
duchamp2009combinatorial
arxiv-6710
0903.2108
A new universal cellular automaton on the ternary heptagrid
<|reference_start|>A new universal cellular automaton on the ternary heptagrid: In this paper, we construct a new weakly universal cellular automaton on the ternary heptagrid. The previous result, obtained by the same author and Y. Song, required six states. This time, the number of states is four. This is the best result to date for cellular automata in the hyperbolic plane.<|reference_end|>
arxiv
@article{margenstern2009a, title={A new universal cellular automaton on the ternary heptagrid}, author={Maurice Margenstern}, journal={arXiv preprint arXiv:0903.2108}, year={2009}, archivePrefix={arXiv}, eprint={0903.2108}, primaryClass={cs.FL cs.CG} }
margenstern2009a
arxiv-6711
0903.2119
Adaptive Mesh Approach for Predicting Algorithm Behavior with Application to Visibility Culling in Computer Graphics
<|reference_start|>Adaptive Mesh Approach for Predicting Algorithm Behavior with Application to Visibility Culling in Computer Graphics: We propose a concise approximate description, and a method for efficiently obtaining this description, via adaptive random sampling of the performance (running time, memory consumption, or any other profileable numerical quantity) of a given algorithm on some low-dimensional rectangular grid of inputs. The formal correctness is proven under reasonable assumptions on the algorithm under consideration; and the approach's practical benefit is demonstrated by predicting for which observer positions and viewing directions an occlusion culling algorithm yields a net performance benefit or loss compared to a simple brute force renderer.<|reference_end|>
arxiv
@article{fischer2009adaptive, title={Adaptive Mesh Approach for Predicting Algorithm Behavior with Application to Visibility Culling in Computer Graphics}, author={Matthias Fischer, Claudius J\"ahn and Martin Ziegler}, journal={arXiv preprint arXiv:0903.2119}, year={2009}, archivePrefix={arXiv}, eprint={0903.2119}, primaryClass={cs.PF cs.GR} }
fischer2009adaptive
arxiv-6712
0903.2134
Analysis of a Bloom Filter Algorithm via the Supermarket Model
<|reference_start|>Analysis of a Bloom Filter Algorithm via the Supermarket Model: This paper deals with the problem of identifying elephants in Internet traffic. The aim is to analyze a new adaptive algorithm based on a Bloom filter. This algorithm uses a so-called min-rule which can be described as in the supermarket model. This model consists of joining the shortest queue among d queues selected at random from a large number m of queues. In case of equality, one of the shortest queues is chosen at random. An analysis of a simplified model gives insight into the error generated by the algorithm on the estimation of the number of elephants. The main conclusion is that, as m gets large, there is a deterministic limit for the empirical distribution of the filter counters. Limit theorems are proved and the limit is identified. It depends on key parameters. The condition for the algorithm to perform well is discussed. Theoretical results are validated by experiments on a traffic trace from France Telecom and by simulations.<|reference_end|>
arxiv
@article{chabchoub2009analysis, title={Analysis of a Bloom Filter Algorithm via the Supermarket Model}, author={Yousra Chabchoub, Christine Fricker and Hanene Mohamed}, journal={arXiv preprint arXiv:0903.2134}, year={2009}, archivePrefix={arXiv}, eprint={0903.2134}, primaryClass={cs.DM cs.NI} }
chabchoub2009analysis
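The min-rule described in the abstract above is the same idea as the "power of d choices" and the conservative-update counting filter. Below is a minimal sketch of a counting filter with min-increment updates, assuming simple SHA-based hash choices; it illustrates the rule being analyzed, not the paper's exact elephant-detection algorithm.

```python
import hashlib

class MinRuleCountingFilter:
    """m counters; each key hashes to d of them; only the minimal
    counters are incremented (the 'min-rule' / conservative update)."""

    def __init__(self, m=1024, d=3):
        self.m, self.d = m, d
        self.counters = [0] * m

    def _slots(self, key):
        h = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(h[4*i:4*i+4], "big") % self.m
                for i in range(self.d)]

    def add(self, key):
        slots = self._slots(key)
        low = min(self.counters[s] for s in slots)
        for s in slots:
            if self.counters[s] == low:   # increment only the minima
                self.counters[s] += 1

    def estimate(self, key):
        return min(self.counters[s] for s in self._slots(key))

f = MinRuleCountingFilter()
for _ in range(50):
    f.add("elephant-flow")
f.add("mouse-flow")
print(f.estimate("elephant-flow"), f.estimate("mouse-flow"))  # ~50 vs ~1
```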
arxiv-6713
0903.2158
Supernodal Analysis Revisited
<|reference_start|>Supernodal Analysis Revisited: In this paper we show how to extend the known algorithm of nodal analysis in such a way that, in the case of circuits without nullors and controlled sources (but allowing for both independent current and voltage sources), the system of nodal equations describing the circuit is partitioned into one part, where the nodal variables are explicitly given as linear combinations of the voltage sources and the voltages of certain reference nodes, and another, which contains the node variables of these reference nodes only and which moreover can be read off directly from the given circuit. We need neither preparatory graph transformations, nor additional current variables (as in MNA). Thus this algorithm is more accessible to students, and consequently more suitable for classroom presentations.<|reference_end|>
arxiv
@article{gerbracht2009supernodal, title={Supernodal Analysis Revisited}, author={Eberhard H.-A. Gerbracht}, journal={SMACD'04 Proceedings of the International Workshop on Symbolic Methods and Applications in Circuit Design, Wroclaw, Poland, 23-24 September 2004; pp. 113-116}, year={2009}, archivePrefix={arXiv}, eprint={0903.2158}, primaryClass={cs.SC cs.CE cs.DM} }
gerbracht2009supernodal
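For readers who want to experiment with the setting of the abstract above, conventional nodal analysis reduces to solving the linear system G v = i for the node voltages. A tiny numerical example of that conventional method (not the paper's supernodal partitioning):

```python
import numpy as np

# Circuit: 1 A current source into node 1; R1 = 1 ohm between nodes 1 and 2;
# R2 = 2 ohm from node 2 to ground. Conductance matrix G, injection vector i.
G = np.array([[1.0, -1.0],          # node 1: 1/R1, coupling to node 2
              [-1.0, 1.0 + 0.5]])   # node 2: 1/R1 + 1/R2
i = np.array([1.0, 0.0])            # 1 A injected at node 1

v = np.linalg.solve(G, i)
print(v)  # node voltages [3, 2]: 2 V across R2, plus 1 V across R1
```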
arxiv-6714
0903.2168
Better Termination for Prolog with Constraints
<|reference_start|>Better Termination for Prolog with Constraints: Termination properties of actual Prolog systems with constraints are fragile and difficult to analyse. The lack of the occurs-check, moded and overloaded arithmetical evaluation via is/2 and the occasional nontermination of finite domain constraints are all sources for invalidating termination results obtained by current termination analysers that rely on idealized assumptions. In this paper, we present solutions to address these problems on the level of the underlying Prolog system. Improved unification modes meet the requirements of norm based analysers by offering dynamic occurs-check detection. A generalized finite domain solver overcomes the shortcomings of conventional arithmetic without significant runtime overhead. The solver offers unbounded domains, yet propagation always terminates. Our work improves Prolog's termination and makes Prolog a more reliable target for termination and type analysis. It is part of SWI-Prolog since version 5.6.50.<|reference_end|>
arxiv
@article{triska2009better, title={Better Termination for Prolog with Constraints}, author={Markus Triska and Ulrich Neumerkel and Jan Wielemaker}, journal={arXiv preprint arXiv:0903.2168}, year={2009}, number={WLPE/2008/02}, archivePrefix={arXiv}, eprint={0903.2168}, primaryClass={cs.PL cs.SE} }
triska2009better
arxiv-6715
0903.2171
Role-Based Access Controls
<|reference_start|>Role-Based Access Controls: While Mandatory Access Controls (MAC) are appropriate for multilevel secure military applications, Discretionary Access Controls (DAC) are often perceived as meeting the security processing needs of industry and civilian government. This paper argues that reliance on DAC as the principal method of access control is unfounded and inappropriate for many commercial and civilian government organizations. The paper describes a type of non-discretionary access control - role-based access control (RBAC) - that is more central to the secure processing needs of non-military systems than DAC.<|reference_end|>
arxiv
@article{ferraiolo2009role-based, title={Role-Based Access Controls}, author={David F. Ferraiolo and D. Richard Kuhn}, journal={15th National Computer Security Conference, Baltimore, MD. October 13-16, 1992}, year={2009}, archivePrefix={arXiv}, eprint={0903.2171}, primaryClass={cs.CR} }
ferraiolo2009role-based
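Operationally, the RBAC model argued for above is simple: permissions attach to roles, and users acquire permissions only through role membership, never directly. A minimal sketch with hypothetical role and permission names (not taken from the paper):

```python
# Minimal RBAC sketch: users -> roles -> permissions.
ROLE_PERMS = {
    "teller":     {"read_account", "post_deposit"},
    "supervisor": {"read_account", "post_deposit", "approve_overdraft"},
}
USER_ROLES = {
    "alice": {"teller"},
    "bob":   {"supervisor"},
}

def check_access(user, permission):
    """Access is granted only via some role the user holds (no per-user grants)."""
    return any(permission in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

print(check_access("alice", "approve_overdraft"))  # False
print(check_access("bob", "approve_overdraft"))    # True
```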
arxiv-6716
0903.2174
Game theory and the frequency selective interference channel - A tutorial
<|reference_start|>Game theory and the frequency selective interference channel - A tutorial: This paper provides a tutorial overview of game theoretic techniques used for communication over frequency selective interference channels. We discuss both competitive and cooperative techniques. Keywords: Game theory, competitive games, cooperative games, Nash Equilibrium, Nash bargaining solution, Generalized Nash games, Spectrum optimization, distributed coordination, interference channel, multiple access channel, iterative water-filling.<|reference_end|>
arxiv
@article{leshem2009game, title={Game theory and the frequency selective interference channel - A tutorial}, author={Amir Leshem and Ephi Zehavi}, journal={IEEE Signal Processing Magazine. Special issue on applications of game theory in signal processing and communications. Volume 26, Issue 4, pages 28-40. Sep. 2009}, year={2009}, doi={10.1109/MSP.2009.933372}, archivePrefix={arXiv}, eprint={0903.2174}, primaryClass={cs.IT cs.GT math.IT} }
leshem2009game
arxiv-6717
0903.2177
On the (semi)lattices induced by continuous reducibilities
<|reference_start|>On the (semi)lattices induced by continuous reducibilities: Continuous reducibilities are a proven tool in computable analysis, and have applications in other fields such as constructive mathematics or reverse mathematics. We study the order-theoretic properties of several variants of the two most important definitions, and especially introduce suprema for them. The suprema are shown to commute with several characteristic numbers.<|reference_end|>
arxiv
@article{pauly2009on, title={On the (semi)lattices induced by continuous reducibilities}, author={Arno Pauly}, journal={Mathematical Logic Quarterly, 56(5): 488--502, 2010}, year={2009}, doi={10.1002/malq.200910104}, archivePrefix={arXiv}, eprint={0903.2177}, primaryClass={cs.LO} }
pauly2009on
arxiv-6718
0903.2188
Rfuzzy framework
<|reference_start|>Rfuzzy framework: Fuzzy reasoning is a very productive research field that during the last years has provided a number of theoretical approaches and practical implementation prototypes. Nevertheless, the classical implementations, like Fril, are not adapted to the latest formal approaches, like multi-adjoint logic semantics. Some promising implementations, like Fuzzy Prolog, are so general that the regular user/programmer does not feel comfortable, because either the representation of fuzzy concepts is complex or the results are difficult to interpret. In this paper we present a modern framework, Rfuzzy, that models multi-adjoint logic. It provides extensions such as default values (to represent missing information, even partial default values) and typed variables. Rfuzzy represents the truth value of predicates through facts, rules and functions. Rfuzzy answers queries with direct results (instead of constraints) and is easy to use for anyone who wants to represent a problem using fuzzy reasoning in a simple way (by using the classical representation with real numbers).<|reference_end|>
arxiv
@article{ceruelo2009rfuzzy, title={Rfuzzy framework}, author={Victor Pablos Ceruelo and Susana Munoz-Hernandez and Hannes Strass}, journal={arXiv preprint arXiv:0903.2188}, year={2009}, number={WLPE/2008/01}, archivePrefix={arXiv}, eprint={0903.2188}, primaryClass={cs.PL cs.LO} }
ceruelo2009rfuzzy
arxiv-6719
0903.2199
On the Generation of Test Data for Prolog by Partial Evaluation
<|reference_start|>On the Generation of Test Data for Prolog by Partial Evaluation: In recent work, we have proposed an approach to Test Data Generation (TDG) of imperative bytecode by partial evaluation (PE) of CLP which consists of two phases: (1) the bytecode program is first transformed into an equivalent CLP program by means of interpretive compilation by PE, (2) a second PE is performed in order to supervise the generation of test-cases by execution of the CLP decompiled program. The main advantages of TDG by PE include flexibility to handle new coverage criteria, the possibility of obtaining test-case generators, and simplicity of implementation. In principle, the approach can be directly applied to TDG of any imperative language. However, when one tries to apply it to a declarative language like Prolog, the main difficulty we found is the generation of test-cases which cover the more complex control flow of Prolog. Essentially, the problem is that an intrinsic feature of PE is that it only computes non-failing derivations, while in TDG for Prolog it is essential to generate test-cases associated with failing computations. Basically, we propose to transform the original Prolog program into an equivalent Prolog program with explicit failure by partially evaluating a Prolog interpreter which captures failing derivations w.r.t. the input program. Another issue that we discuss in the paper is that, while in the case of bytecode the underlying constraint domain only manipulates integers, in Prolog it should properly handle the symbolic data manipulated by the program. The resulting scheme is of interest for bringing the advantages which are inherent in TDG by PE to the field of logic programming.<|reference_end|>
arxiv
@article{gomez-zamalloa2009on, title={On the Generation of Test Data for Prolog by Partial Evaluation}, author={Miguel Gomez-Zamalloa, Elvira Albert, German Puebla}, journal={arXiv preprint arXiv:0903.2199}, year={2009}, number={WLPE/2008/06}, archivePrefix={arXiv}, eprint={0903.2199}, primaryClass={cs.PL cs.SE} }
gomez-zamalloa2009on
arxiv-6720
0903.2202
Improving Size-Change Analysis in Offline Partial Evaluation
<|reference_start|>Improving Size-Change Analysis in Offline Partial Evaluation: Some recent approaches for scalable offline partial evaluation of logic programs include a size-change analysis for ensuring both so-called local and global termination. In this work, inspired by experimental evaluation, we introduce several improvements that may increase the accuracy of the analysis and, thus, the quality of the associated specialized programs. We aim to achieve this while maintaining the same complexity and scalability as the recent works.<|reference_end|>
arxiv
@article{leuschel2009improving, title={Improving Size-Change Analysis in Offline Partial Evaluation}, author={Michael Leuschel and Salvador Tamarit and German Vidal}, journal={arXiv preprint arXiv:0903.2202}, year={2009}, number={WLPE/2008/07}, archivePrefix={arXiv}, eprint={0903.2202}, primaryClass={cs.PL} }
leuschel2009improving
arxiv-6721
0903.2203
Achievable Error Exponents for Channel with Side Information - Erasure and List Decoding
<|reference_start|>Achievable Error Exponents for Channel with Side Information - Erasure and List Decoding: We consider a decoder with an erasure option and a variable size list decoder for channels with non-causal side information at the transmitter. First, universally achievable error exponents are offered for decoding with an erasure option using a parameterized decoder in the spirit of Csisz\'{a}r and K\"{o}rner's decoder. Then, the proposed decoding rule is generalized by extending the range of its parameters to allow variable size list decoding. This extension gives a unified treatment for erasure/list decoding. Exponential bounds on the probability of list error and the average number of incorrect messages on the list are given. Relations to Forney's and Csisz\'{a}r and K\"{o}rner's decoders for the discrete memoryless channel are discussed. These results are obtained by exploring a random binning code with conditionally constant composition codewords proposed by Moulin and Wang, but with a different decoding rule.<|reference_end|>
arxiv
@article{sabbag2009achievable, title={Achievable Error Exponents for Channel with Side Information - Erasure and List Decoding}, author={Erez Sabbag and Neri Merhav}, journal={arXiv preprint arXiv:0903.2203}, year={2009}, archivePrefix={arXiv}, eprint={0903.2203}, primaryClass={cs.IT math.IT} }
sabbag2009achievable
arxiv-6722
0903.2205
A Lightweight Combination of Semantics for Non-deterministic Functions
<|reference_start|>A Lightweight Combination of Semantics for Non-deterministic Functions: The use of non-deterministic functions is a distinctive feature of modern functional logic languages. The semantics commonly adopted is call-time choice, a notion that at the operational level is related to the sharing mechanism of lazy evaluation in functional languages. However, there are situations where run-time choice, closer to ordinary rewriting, is more appropriate. In this paper we propose an extension of existing call-time choice based languages, to provide support for run-time choice in localized parts of a program. The extension is remarkably simple at three relevant levels: syntax, formal operational calculi and implementation, which is based on the system Toy.<|reference_end|>
arxiv
@article{lopez-fraguas2009a, title={A Lightweight Combination of Semantics for Non-deterministic Functions}, author={Francisco Javier Lopez-Fraguas and Juan Rodriguez-Hortala and Jaime Sanchez-Hernandez}, journal={arXiv preprint arXiv:0903.2205}, year={2009}, number={WLPE/2008/08}, archivePrefix={arXiv}, eprint={0903.2205}, primaryClass={cs.PL} }
lopez-fraguas2009a
arxiv-6723
0903.2207
Prolog Visualization System Using Logichart Diagrams
<|reference_start|>Prolog Visualization System Using Logichart Diagrams: We have developed a Prolog visualization system that is intended to support Prolog programming education. The system uses Logichart diagrams to visualize Prolog programs. The Logichart diagram is designed to visualize the Prolog execution flow intelligibly and to enable users to easily correlate the Prolog clauses with parts of the diagram. The system has the following functions. (1) It visually traces Prolog execution (goal calling, success, and failure) on the Logichart diagram. (2) Dynamic change in a Prolog program by calling extra-logical predicates, such as `assertz' and `retract', is visualized in real time. (3) Variable substitution processes are displayed in a text widget in real time.<|reference_end|>
arxiv
@article{adachi2009prolog, title={Prolog Visualization System Using Logichart Diagrams}, author={Yoshihiro Adachi}, journal={arXiv preprint arXiv:0903.2207}, year={2009}, number={WLPE/2008/03}, archivePrefix={arXiv}, eprint={0903.2207}, primaryClass={cs.PL cs.HC cs.SE} }
adachi2009prolog
arxiv-6724
0903.2226
On the achievable diversity-multiplexing tradeoff in interference channels
<|reference_start|>On the achievable diversity-multiplexing tradeoff in interference channels: We analyze two-user single-antenna fading interference channels with perfect receive channel state information (CSI) and no transmit CSI. For the case of very strong interference, we prove that decoding interference while treating the intended signal as noise, subtracting the result out, and then decoding the desired signal, a process known as "stripping", achieves the diversity-multiplexing tradeoff (DMT) outer bound derived in Akuiyibo and Leveque, Int. Zurich Seminar on Commun., 2008. The proof is constructive in the sense that it provides corresponding code design criteria for DMT optimality. For general interference levels, we compute the DMT of a fixed-power-split Han and Kobayashi type superposition coding scheme, provide design criteria for the corresponding superposition codes, and find that this scheme is DMT-optimal for certain multiplexing rates.<|reference_end|>
arxiv
@article{akçaba2009on, title={On the achievable diversity-multiplexing tradeoff in interference channels}, author={Cemal Ak\c{c}aba and Helmut B\"olcskei}, journal={arXiv preprint arXiv:0903.2226}, year={2009}, archivePrefix={arXiv}, eprint={0903.2226}, primaryClass={cs.IT math.IT} }
akçaba2009on
arxiv-6725
0903.2232
On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing
<|reference_start|>On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing: This paper considers the performance of $(j,k)$-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the $q$-ary symmetric channel ($q$-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the $q$-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly-sparse signals. A DE based approach is used to analyze the CS systems with randomized-reconstruction guarantees. This leads to the result that strictly-sparse signals can be reconstructed efficiently with high-probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.<|reference_end|>
arxiv
@article{zhang2009on, title={On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing}, author={Fan Zhang and Henry D. Pfister}, journal={arXiv preprint arXiv:0903.2232}, year={2009}, archivePrefix={arXiv}, eprint={0903.2232}, primaryClass={cs.IT math.IT} }
zhang2009on
arxiv-6726
0903.2243
Pragmatic Information Rates, Generalizations of the Kelly Criterion, and Financial Market Efficiency
<|reference_start|>Pragmatic Information Rates, Generalizations of the Kelly Criterion, and Financial Market Efficiency: This paper is part of an ongoing investigation of "pragmatic information", defined in Weinberger (2002) as "the amount of information actually used in making a decision". Because a study of information rates led to the Noiseless and Noisy Coding Theorems, two of the most important results of Shannon's theory, we begin the paper by defining a pragmatic information rate, showing that all of the relevant limits make sense, and interpreting them as the improvement in compression obtained from using the correct distribution of transmitted symbols. The first of two applications of the theory extends the information theoretic analysis of the Kelly Criterion, and its generalization, the horse race, to a series of races where the stochastic process of winning horses, payoffs, and strategies depend on some stationary process, including, but not limited to, the history of previous races. If the bettor is receiving messages (side information) about the probability distribution of winners, the doubling rate of the bettor's winnings is bounded by the pragmatic information of the messages. A second application is to the question of market efficiency. An efficient market is, by definition, a market in which the pragmatic information of the "tradable past" with respect to current prices is zero. Under this definition, markets whose returns are characterized by a GARCH(1,1) process cannot be efficient. Finally, a pragmatic informational analogue to Shannon's Noisy Coding Theorem suggests that a cause of market inefficiency is that the underlying fundamentals are changing so fast that the price discovery mechanism simply cannot keep up. This may happen most readily in the run-up to a financial bubble, where investors' willful ignorance degrades the information processing capabilities of the market.<|reference_end|>
arxiv
@article{weinberger2009pragmatic, title={Pragmatic Information Rates, Generalizations of the Kelly Criterion, and Financial Market Efficiency}, author={Edward D. Weinberger}, journal={arXiv preprint arXiv:0903.2243}, year={2009}, archivePrefix={arXiv}, eprint={0903.2243}, primaryClass={cs.IT math.IT q-fin.PM q-fin.TR} }
weinberger2009pragmatic
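For orientation on the horse-race setting above: betting fractions b on outcomes with probabilities p and odds o yield the doubling rate W(b) = sum_i p_i log2(b_i o_i), maximized by proportional betting b = p. This sketch shows only that classical baseline computation; the paper's pragmatic-information generalization is not reproduced here.

```python
import numpy as np

def doubling_rate(p, b, odds):
    """W(b,p) = sum_i p_i * log2(b_i * o_i): exponential growth rate of wealth."""
    p, b, odds = map(np.asarray, (p, b, odds))
    return float(np.sum(p * np.log2(b * odds)))

p = np.array([0.5, 0.3, 0.2])      # true win probabilities
odds = np.array([2.0, 4.0, 5.0])   # o_i-for-1 payoffs

kelly = p                           # proportional betting maximizes W
naive = np.array([1/3, 1/3, 1/3])
print(doubling_rate(p, kelly, odds))  # beats any other full-investment strategy
print(doubling_rate(p, naive, odds))
```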
arxiv-6727
0903.2251
Constraint solving for high-level WCET analysis
<|reference_start|>Constraint solving for high-level WCET analysis: The safety of our day-to-day life depends crucially on the correct functioning of embedded software systems which control more and more technical devices. Many of these software systems are time-critical. Hence, computations must not only be correct, but must also be issued in a timely fashion. Worst case execution time (WCET) analysis is concerned with computing tight upper bounds for the execution time of a system in order to provide formal guarantees for its proper timing behaviour. Central to this is computing safe and tight bounds for loops and recursion depths. In this paper, we highlight the TuBound approach to this challenge, at whose heart is a constraint-logic-based approach to loop analysis.<|reference_end|>
arxiv
@article{prantl2009constraint, title={Constraint solving for high-level WCET analysis}, author={Adrian Prantl and Jens Knoop and Markus Schordan and Markus Triska}, journal={arXiv preprint arXiv:0903.2251}, year={2009}, number={WLPE/2008/05}, archivePrefix={arXiv}, eprint={0903.2251}, primaryClass={cs.PL cs.LO} }
prantl2009constraint
arxiv-6728
0903.2252
A Semantics-Aware Editing Environment for Prolog in Eclipse
<|reference_start|>A Semantics-Aware Editing Environment for Prolog in Eclipse: In this paper we present a Prolog plugin for Eclipse, based upon BE4, that provides many features such as semantics-aware syntax highlighting, an outline view, error marking, content assist, hover information, documentation generation, and quick fixes. The plugin makes use of a Java parser for full Prolog with an integrated Prolog engine, and can be extended with further semantic analyses, e.g., based on abstract interpretation.<|reference_end|>
arxiv
@article{bendisposto2009a, title={A Semantics-Aware Editing Environment for Prolog in Eclipse}, author={Jens Bendisposto and Ian Endrijautzki and Michael Leuschel and David Schneider}, journal={arXiv preprint arXiv:0903.2252}, year={2009}, number={WLPE/2008/04}, archivePrefix={arXiv}, eprint={0903.2252}, primaryClass={cs.PL cs.HC cs.SE} }
bendisposto2009a
arxiv-6729
0903.2265
An Absolute 2-Approximation Algorithm for Two-Dimensional Bin Packing
<|reference_start|>An Absolute 2-Approximation Algorithm for Two-Dimensional Bin Packing: We consider the problem of packing rectangles into bins that are unit squares, where the goal is to minimize the number of bins used. All rectangles have to be packed non-overlapping and orthogonal, i.e., axis-parallel. We present an algorithm for this problem with an absolute worst-case ratio of 2, which is optimal provided P != NP.<|reference_end|>
arxiv
@article{harren2009an, title={An Absolute 2-Approximation Algorithm for Two-Dimensional Bin Packing}, author={Rolf Harren, Rob van Stee}, journal={arXiv preprint arXiv:0903.2265}, year={2009}, archivePrefix={arXiv}, eprint={0903.2265}, primaryClass={cs.DS cs.CC} }
harren2009an
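The paper's 2-approximation above is involved; for contrast, the kind of simple baseline it improves on is shelf packing in the first-fit-decreasing-height style: sort rectangles by height and fill shelves left to right, opening a new bin when nothing fits. This is a hedged sketch of that baseline only, with no approximation guarantee claimed, and it is not the paper's algorithm.

```python
def shelf_pack(rects):
    """FFDH-style shelf packing into unit-square bins.

    rects: list of (w, h) with 0 < w, h <= 1. Returns the number of bins used.
    Baseline heuristic only -- no worst-case ratio is claimed here.
    """
    bins = []  # each bin: list of shelves [shelf_height, used_width]
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for shelves in bins:
            for shelf in shelves:  # try an existing shelf (enough height/width)
                if h <= shelf[0] and shelf[1] + w <= 1.0:
                    shelf[1] += w
                    break
            else:
                # no shelf fit: try opening a new shelf in this bin
                if sum(s[0] for s in shelves) + h <= 1.0:
                    shelves.append([h, w])
                else:
                    continue  # this bin is full, try the next one
            break  # rectangle placed
        else:
            bins.append([[h, w]])  # open a new bin with one shelf
    return len(bins)

print(shelf_pack([(0.6, 0.6), (0.6, 0.6), (0.3, 0.3), (0.3, 0.3)]))  # 2 bins
```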
arxiv-6730
0903.2272
A Novel Approach for Compression of Images Captured using Bayer Color Filter Arrays
<|reference_start|>A Novel Approach for Compression of Images Captured using Bayer Color Filter Arrays: We propose a new approach for image compression in digital cameras, where the goal is to achieve better quality at a given rate by using the characteristics of a Bayer color filter array. Most digital cameras produce color images by using a single CCD plate, so that each pixel in an image has only one color component and therefore an interpolation method is needed to produce a full color image. After the image processing stage, in order to reduce the memory requirements of the camera, a lossless or lossy compression stage often follows. But in this scheme, before decreasing redundancy through compression, redundancy is increased in an interpolation stage. In order to avoid increasing the redundancy before compression, we propose algorithms for image compression in which the order of the compression and interpolation stages is reversed. We introduce image transform algorithms, since non-interpolated images cannot be directly compressed with general image coders. The simulation results show that our algorithm outperforms conventional methods with various color interpolation methods in a wide range of compression ratios. Our proposed algorithm provides not only better quality but also lower encoding complexity because the amount of luminance data used is only half of that in conventional methods.<|reference_end|>
arxiv
@article{lee2009a, title={A Novel Approach for Compression of Images Captured using Bayer Color Filter Arrays}, author={Sang-Yong Lee and Antonio Ortega}, journal={arXiv preprint arXiv:0903.2272}, year={2009}, archivePrefix={arXiv}, eprint={0903.2272}, primaryClass={cs.MM} }
lee2009a
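The key move in the abstract above, compressing before interpolating, starts by separating the Bayer mosaic into its subsampled color planes. A minimal sketch of that separation step, assuming an RGGB layout; the paper's transform and coder are not reproduced.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic into four quarter-resolution planes.

    Compressing these planes directly avoids inflating the data with
    interpolated values that compression would then have to undo.
    """
    r  = raw[0::2, 0::2]   # red samples
    g1 = raw[0::2, 1::2]   # green samples on red rows
    g2 = raw[1::2, 0::2]   # green samples on blue rows
    b  = raw[1::2, 1::2]   # blue samples
    return r, g1, g2, b

raw = np.arange(16, dtype=np.uint8).reshape(4, 4)  # stand-in sensor data
planes = split_bayer_rggb(raw)
print([p.shape for p in planes])  # four 2x2 planes from one 4x4 mosaic
```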
arxiv-6731
0903.2278
Manipulating Scrip Systems: Sybils and Collusion
<|reference_start|>Manipulating Scrip Systems: Sybils and Collusion: Game-theoretic analyses of distributed and peer-to-peer systems typically use the Nash equilibrium solution concept, but this explicitly excludes the possibility of strategic behavior involving more than one agent. We examine the effects of two types of strategic behavior involving more than one agent, sybils and collusion, in the context of scrip systems where agents provide each other with service in exchange for scrip. Sybils make an agent more likely to be chosen to provide service, which generally makes it harder for agents without sybils to earn money and decreases social welfare. Surprisingly, in certain circumstances it is possible for sybils to make all agents better off. While collusion is generally bad, in the context of scrip systems it actually tends to make all agents better off, not merely those who collude. These results also provide insight into the effects of allowing agents to advertise and loan money. While many extensions of Nash equilibrium have been proposed that address collusion and other issues relevant to distributed and peer-to-peer systems, our results show that none of them adequately address the issues raised by sybils and collusion in scrip systems.<|reference_end|>
arxiv
@article{kash2009manipulating, title={Manipulating Scrip Systems: Sybils and Collusion}, author={Ian A. Kash, Eric J. Friedman, Joseph Y. Halpern}, journal={arXiv preprint arXiv:0903.2278}, year={2009}, doi={10.1007/978-3-642-03821-1_4}, archivePrefix={arXiv}, eprint={0903.2278}, primaryClass={cs.GT} }
kash2009manipulating
arxiv-6732
0903.2282
Multiagent Learning in Large Anonymous Games
<|reference_start|>Multiagent Learning in Large Anonymous Games: In large systems, it is important for agents to learn to act effectively, but sophisticated multi-agent learning algorithms generally do not scale. An alternative approach is to find restricted classes of games where simple, efficient algorithms converge. It is shown that stage learning efficiently converges to Nash equilibria in large anonymous games if best-reply dynamics converge. Two features are identified that improve convergence. First, rather than making learning more difficult, more agents are actually beneficial in many settings. Second, providing agents with statistical information about the behavior of others can significantly reduce the number of observations needed.<|reference_end|>
arxiv
@article{kash2009multiagent, title={Multiagent Learning in Large Anonymous Games}, author={Ian A. Kash, Eric J. Friedman, Joseph Y. Halpern}, journal={arXiv preprint arXiv:0903.2282}, year={2009}, archivePrefix={arXiv}, eprint={0903.2282}, primaryClass={cs.MA cs.GT cs.LG} }
kash2009multiagent
arxiv-6733
0903.2299
Differential Contrastive Divergence
<|reference_start|>Differential Contrastive Divergence: This paper has been retracted.<|reference_end|>
arxiv
@article{mcallester2009differential, title={Differential Contrastive Divergence}, author={David McAllester}, journal={arXiv preprint arXiv:0903.2299}, year={2009}, archivePrefix={arXiv}, eprint={0903.2299}, primaryClass={cs.LG} }
mcallester2009differential
arxiv-6734
0903.2310
Analysis of the Relationships among Longest Common Subsequences, Shortest Common Supersequences and Patterns and its application on Pattern Discovery in Biological Sequences
<|reference_start|>Analysis of the Relationships among Longest Common Subsequences, Shortest Common Supersequences and Patterns and its application on Pattern Discovery in Biological Sequences: For a set of multiple sequences, their patterns, Longest Common Subsequences (LCS) and Shortest Common Supersequences (SCS) represent different aspects of these sequences' profile, and they can all be used for biological sequence comparison and analysis. Revealing the relationship between the patterns and the LCS and SCS might provide us with a deeper view of the patterns of biological sequences, in turn leading to better understanding of them. However, there has been no careful examination of the relationship between patterns, LCS and SCS. In this paper, we have analyzed their relation, and given some lemmas. Based on their relations, a set of algorithms called the PALS (PAtterns by Lcs and Scs) algorithms is proposed to discover patterns in a set of biological sequences. These algorithms first generate the results for LCS and SCS of sequences by heuristics, and subsequently derive patterns from these results. Experiments show that the PALS algorithms perform well (both in efficiency and in accuracy) on a variety of sequences. The PALS approach also provides us with a solution for transforming between the heuristic results of SCS and LCS.<|reference_end|>
arxiv
@article{ning2009analysis, title={Analysis of the Relationships among Longest Common Subsequences, Shortest Common Supersequences and Patterns and its application on Pattern Discovery in Biological Sequences}, author={Kang Ning, Hoong Kee Ng, Hon Wai Leong}, journal={arXiv preprint arXiv:0903.2310}, year={2009}, doi={10.1504/IJDMB.2011.045413}, archivePrefix={arXiv}, eprint={0903.2310}, primaryClass={cs.DS cs.DM cs.IR cs.OH q-bio.QM} }
ning2009analysis
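For two sequences, the LCS/SCS relationship studied above is exact: |SCS(a,b)| = |a| + |b| - |LCS(a,b)|, and an SCS can be recovered from the LCS table by emitting the off-LCS characters of both strings. A sketch of the two-sequence case (the paper targets the much harder multi-sequence setting):

```python
def scs_pair(a, b):
    """Shortest common supersequence of two strings via the LCS recurrence:
    |SCS(a,b)| = |a| + |b| - |LCS(a,b)|."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = |LCS(a[:i], b[:j])|
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # backtrack, emitting shared characters once and all others as-is
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return "".join(reversed(out))

s = scs_pair("ABCBDAB", "BDCABA")
print(s, len(s))  # length = 7 + 6 - |LCS| = 13 - 4 = 9
```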
arxiv-6735
0903.2315
Design and Analysis of E2RC Codes
<|reference_start|>Design and Analysis of E2RC Codes: We consider the design and analysis of the efficiently-encodable rate-compatible ($E^2RC$) irregular LDPC codes proposed in previous work. In this work we introduce semi-structured $E^2RC$-like codes and protograph $E^2RC$ codes. EXIT chart based methods are developed for the design of semi-structured $E^2RC$-like codes that allow us to determine near-optimal degree distributions for the systematic part of the code while taking into account the structure of the deterministic parity part, thus resolving one of the open issues in the original construction. We develop a fast EXIT function computation method that does not rely on Monte-Carlo simulations and can be used in other scenarios as well. Our approach allows us to jointly optimize code performance across the range of rates under puncturing. We then consider protograph $E^2RC$ codes (that have a protograph representation) and propose rules for designing a family of rate-compatible punctured protographs with low thresholds. For both the semi-structured and protograph $E^2RC$ families we obtain codes whose gap to capacity is at most 0.3 dB across the range of rates when the maximum variable node degree is twenty.<|reference_end|>
arxiv
@article{shi2009design, title={Design and Analysis of E2RC Codes}, author={Cuizhu Shi and Aditya Ramamoorthy}, journal={arXiv preprint arXiv:0903.2315}, year={2009}, archivePrefix={arXiv}, eprint={0903.2315}, primaryClass={cs.IT cs.DM math.IT} }
shi2009design
arxiv-6736
0903.2352
A Mean Field Approach for Optimization in Particles Systems and Applications
<|reference_start|>A Mean Field Approach for Optimization in Particles Systems and Applications: This paper investigates the limit behavior of Markov Decision Processes (MDPs) made of independent particles evolving in a common environment, when the number of particles goes to infinity. In the finite horizon case or with a discounted cost and an infinite horizon, we show that when the number of particles becomes large, the optimal cost of the system converges almost surely to the optimal cost of a discrete deterministic system (the "optimal mean field"). Convergence also holds for optimal policies. We further provide insights on the speed of convergence by proving several central limit theorems for the cost and the state of the Markov decision process, with explicit formulas for the variance of the limit Gaussian laws. Then, our framework is applied to a brokering problem in grid computing. The optimal policy for the limit deterministic system is computed explicitly. Several simulations with growing numbers of processors are reported. They compare the performance of the optimal policy of the limit system used in the finite case with classical policies (such as Join the Shortest Queue) by measuring its asymptotic gain as well as the threshold above which it starts outperforming classical policies.<|reference_end|>
arxiv
@article{gast2009a, title={A Mean Field Approach for Optimization in Particles Systems and Applications}, author={Nicolas Gast (INRIA Rh\^one-Alpes / LIG laboratoire d'Informatique de Grenoble), Bruno Gaujal (INRIA Rh\^one-Alpes / LIG laboratoire d'Informatique de Grenoble)}, journal={arXiv preprint arXiv:0903.2352}, year={2009}, number={RR-6877}, archivePrefix={arXiv}, eprint={0903.2352}, primaryClass={math.PR cs.NI cs.PF} }
gast2009a
arxiv-6737
0903.2353
Relations, Constraints and Abstractions: Using the Tools of Logic Programming in the Security Industry
<|reference_start|>Relations, Constraints and Abstractions: Using the Tools of Logic Programming in the Security Industry: Logic programming is sometimes described as relational programming: a paradigm in which the programmer specifies and composes n-ary relations using systems of constraints. An advanced logic programming environment will provide tools that abstract these relations to transform, optimise, or even verify the correctness of a logic program. This talk will show that these concepts, namely relations, constraints and abstractions, turn out to also be important in the reverse engineering process that underpins the discovery of bugs within the security industry.<|reference_end|>
arxiv
@article{king2009relations, title={Relations, Constraints and Abstractions: Using the Tools of Logic Programming in the Security Industry}, author={Andy King}, journal={arXiv preprint arXiv:0903.2353}, year={2009}, number={WLPE/2008/00}, archivePrefix={arXiv}, eprint={0903.2353}, primaryClass={cs.PL} }
king2009relations
arxiv-6738
0903.2361
Adaptive Observers and Parameter Estimation for a Class of Systems Nonlinear in the Parameters
<|reference_start|>Adaptive Observers and Parameter Estimation for a Class of Systems Nonlinear in the Parameters: We consider the problem of asymptotic reconstruction of the state and parameter values in systems of ordinary differential equations. A solution to this problem is proposed for a class of systems in which the unknowns are allowed to be nonlinearly parameterized functions of state and time. Reconstruction of state and parameter values is based on the concepts of weakly attracting sets and non-uniform convergence and is subject to persistency of excitation conditions. In the absence of nonlinear parametrization, the resulting observers reduce to standard estimation schemes. In this respect, the proposed method constitutes a generalization of the conventional canonical adaptive observer design.<|reference_end|>
arxiv
@article{tyukin2009adaptive, title={Adaptive Observers and Parameter Estimation for a Class of Systems Nonlinear in the Parameters}, author={Ivan Y. Tyukin, Erik Steur, Henk Nijmeijer, and Cees van Leeuwen}, journal={arXiv preprint arXiv:0903.2361}, year={2009}, doi={10.1016/j.automatica.2013.05.008}, archivePrefix={arXiv}, eprint={0903.2361}, primaryClass={math.OC cs.SY math.DS q-bio.QM} }
tyukin2009adaptive
arxiv-6739
0903.2382
Infinite words without palindrome
<|reference_start|>Infinite words without palindrome: We show that there exists a uniformly recurrent infinite word whose set of factors is closed under reversal and which has only finitely many palindromic factors.<|reference_end|>
arxiv
@article{berstel2009infinite, title={Infinite words without palindrome}, author={Jean Berstel, Luc Boasson, Olivier Carton, Isabelle Fagnot}, journal={arXiv preprint arXiv:0903.2382}, year={2009}, archivePrefix={arXiv}, eprint={0903.2382}, primaryClass={cs.DM} }
berstel2009infinite
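The properties in the abstract above can be probed on finite prefixes: enumerate all factors and count the palindromic ones. A small testing harness, using a Thue-Morse prefix purely as stand-in data (it is uniformly recurrent, but unlike the paper's word it has infinitely many palindromic factors):

```python
def factors(w):
    """All distinct nonempty factors (contiguous substrings) of w."""
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)}

def palindromic_factors(w):
    return sorted(f for f in factors(w) if f == f[::-1])

# Build a Thue-Morse prefix by repeatedly appending the bitwise complement.
tm = "0"
for _ in range(7):
    tm += "".join("1" if c == "0" else "0" for c in tm)

for n in (8, 32, 128):
    print(n, len(palindromic_factors(tm[:n])))  # counts keep growing here
```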
arxiv-6740
0903.2410
On tiered small jump operators
<|reference_start|>On tiered small jump operators: Predicative analysis of recursion schemes is a method to characterize complexity classes like the class FPTIME of polynomial time computable functions. This analysis comes from the work of Bellantoni and Cook, and of Leivant, on data tiering. Here, we refine predicative analysis by using a ramified Ackermann construction of a non-primitive recursive function. We obtain a hierarchy of functions which characterizes exactly the functions computable in O(n^k) time on a register machine model of computation. For this, we introduce a strict ramification principle. Then, we show how to diagonalize in order to obtain an exponential function and to jump outside deterministic polynomial time. Lastly, we suggest a dependently typed lambda-calculus to represent this construction.<|reference_end|>
arxiv
@article{marion2009on, title={On tiered small jump operators}, author={Jean-Yves Marion}, journal={Logical Methods in Computer Science, Volume 5, Issue 1 (March 31, 2009) lmcs:1146}, year={2009}, doi={10.2168/LMCS-5(1:7)2009}, archivePrefix={arXiv}, eprint={0903.2410}, primaryClass={cs.CC cs.LO} }
marion2009on
arxiv-6741
0903.2426
Relay Selection and Power Allocation in Cooperative Cellular Networks
<|reference_start|>Relay Selection and Power Allocation in Cooperative Cellular Networks: We consider a system with a single base station communicating with multiple users over orthogonal channels while being assisted by multiple relays. Several recent works have suggested that, in such a scenario, selection, i.e., a single relay helping the source, is the best relaying option in terms of the resulting complexity and overhead. However, in a multiuser setting, optimal relay assignment is a combinatorial problem. In this paper, we formulate a related convex optimization problem that provides an extremely tight upper bound on performance and show that selection is, almost always, inherent in the solution. We also provide a heuristic to find a close-to-optimal relay assignment and power allocation across users supported by a single relay. Simulation results using realistic channel models demonstrate the efficacy of the proposed schemes, but also raise the question as to whether the gains from relaying are worth the additional costs.<|reference_end|>
arxiv
@article{kadloor2009relay, title={Relay Selection and Power Allocation in Cooperative Cellular Networks}, author={Sachin Kadloor and Raviraj Adve}, journal={arXiv preprint arXiv:0903.2426}, year={2009}, archivePrefix={arXiv}, eprint={0903.2426}, primaryClass={cs.IT math.IT} }
kadloor2009relay
arxiv-6742
0903.2429
Statistical mechanics of budget-constrained auctions
<|reference_start|>Statistical mechanics of budget-constrained auctions: Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.<|reference_end|>
arxiv
@article{altarelli2009statistical, title={Statistical mechanics of budget-constrained auctions}, author={F. Altarelli, A. Braunstein, J. Realpe-Gomez, R. Zecchina}, journal={JSTAT 2009;2009:P07002 (27pp)}, year={2009}, doi={10.1088/1742-5468/2009/07/P07002}, archivePrefix={arXiv}, eprint={0903.2429}, primaryClass={cs.GT cond-mat.stat-mech physics.soc-ph} }
altarelli2009statistical
arxiv-6743
0903.2445
Qualitative Logics and Equivalences for Probabilistic Systems
<|reference_start|>Qualitative Logics and Equivalences for Probabilistic Systems: We investigate logics and equivalence relations that capture the qualitative behavior of Markov Decision Processes (MDPs). We present Qualitative Randomized CTL (QRCTL): formulas of this logic can express the fact that certain temporal properties hold over all paths, or with probability 0 or 1, but they do not distinguish among intermediate probability values. We present a symbolic, polynomial time model-checking algorithm for QRCTL on MDPs. The logic QRCTL induces an equivalence relation over states of an MDP that we call qualitative equivalence: informally, two states are qualitatively equivalent if the sets of formulas that hold with probability 0 or 1 at the two states are the same. We show that for finite alternating MDPs, where nondeterministic and probabilistic choices occur in different states, qualitative equivalence coincides with alternating bisimulation, and can thus be computed via efficient partition-refinement algorithms. On the other hand, in non-alternating MDPs the equivalence relations cannot be computed via partition-refinement algorithms, but rather, they require non-local computation. Finally, we consider QRCTL*, that extends QRCTL with nested temporal operators in the same manner in which CTL* extends CTL. We show that QRCTL and QRCTL* induce the same qualitative equivalence on alternating MDPs, while on non-alternating MDPs, the equivalence arising from QRCTL* can be strictly finer. We also provide a full characterization of the relation between qualitative equivalence, bisimulation, and alternating bisimulation, according to whether the MDPs are finite, and to whether their transition relations are finitely-branching.<|reference_end|>
arxiv
@article{chatterjee2009qualitative, title={Qualitative Logics and Equivalences for Probabilistic Systems}, author={Krishnendu Chatterjee, Luca de Alfaro, Marco Faella and Axel Legay}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (May 4, 2009) lmcs:1082}, year={2009}, doi={10.2168/LMCS-5(2:7)2009}, archivePrefix={arXiv}, eprint={0903.2445}, primaryClass={cs.LO} }
chatterjee2009qualitative
arxiv-6744
0903.2448
Positive Logic with Adjoint Modalities: Proof Theory, Semantics and Reasoning about Information
<|reference_start|>Positive Logic with Adjoint Modalities: Proof Theory, Semantics and Reasoning about Information: We consider a simple modal logic whose non-modal part has conjunction and disjunction as connectives and whose modalities come in adjoint pairs, but are not in general closure operators. Despite absence of negation and implication, and of axioms corresponding to the characteristic axioms of (e.g.) T, S4 and S5, such logics are useful, as shown in previous work by Baltag, Coecke and the first author, for encoding and reasoning about information and misinformation in multi-agent systems. For such a logic we present an algebraic semantics, using lattices with agent-indexed families of adjoint pairs of operators, and a cut-free sequent calculus. The calculus exploits operators on sequents, in the style of "nested" or "tree-sequent" calculi; cut-admissibility is shown by constructive syntactic methods. The applicability of the logic is illustrated by reasoning about the muddy children puzzle, for which the calculus is augmented with extra rules to express the facts of the muddy children scenario.<|reference_end|>
arxiv
@article{sadrzadeh2009positive, title={Positive Logic with Adjoint Modalities: Proof Theory, Semantics and Reasoning about Information}, author={Mehrnoosh Sadrzadeh and Roy Dyckhoff}, journal={arXiv preprint arXiv:0903.2448}, year={2009}, archivePrefix={arXiv}, eprint={0903.2448}, primaryClass={cs.LO cs.MA} }
sadrzadeh2009positive
arxiv-6745
0903.2471
Cooperative Multiplexing: Toward Higher Spectral Efficiency in Multi-antenna Relay Networks
<|reference_start|>Cooperative Multiplexing: Toward Higher Spectral Efficiency in Multi-antenna Relay Networks: Previous work on cooperative communications has concentrated primarily on the diversity benefits of such techniques. This paper, instead, considers the multiplexing benefits of cooperative communications. First, a new interpretation on the fundamental tradeoff between the transmission rate and outage probability in multi-antenna relay networks is given. It follows that multiplexing gains can be obtained at any finite SNR, in full-duplex multi-antenna relay networks. Thus relaying can offer not only stronger link reliability, but also higher spectral efficiency. Specifically, the decode-and-forward protocol is applied and networks that have one source, one destination, and multiple relays are considered. A receive power gain at the relays, which captures the network large scale fading characteristics, is also considered. It is shown that this power gain can significantly affect the system diversity-multiplexing tradeoff for any finite SNR value. Several relaying protocols are proposed and are shown to offer nearly the same outage probability as if the transmit antennas at the source and the relay(s) were co-located, given certain SNR and receive power gains at the relays. Thus a higher multiplexing gain than that of the direct link can be obtained if the destination has more antennas than the source. Much of the analysis in the paper is valid for arbitrary channel fading statistics. These results point to a view of relay networks as a means for providing higher spectral efficiency, rather than only link reliability.<|reference_end|>
arxiv
@article{yijia2009cooperative, title={Cooperative Multiplexing: Toward Higher Spectral Efficiency in Multi-antenna Relay Networks}, author={Yijia (Richard) Fan, Chao Wang, H. Vincent Poor, John S. Thompson}, journal={arXiv preprint arXiv:0903.2471}, year={2009}, archivePrefix={arXiv}, eprint={0903.2471}, primaryClass={cs.IT math.IT} }
yijia2009cooperative
arxiv-6746
0903.2499
Parking functions, labeled trees and DCJ sorting scenarios
<|reference_start|>Parking functions, labeled trees and DCJ sorting scenarios: In genome rearrangement theory, one of the elusive questions raised in recent years is the enumeration of rearrangement scenarios between two genomes. This problem is related to the uniform generation of rearrangement scenarios, and the derivation of tests of statistical significance of the properties of these scenarios. Here we give an exact formula for the number of double-cut-and-join (DCJ) rearrangement scenarios of co-tailed genomes. We also construct effective bijections between the set of scenarios that sort a cycle and well studied combinatorial objects such as parking functions and labeled trees.<|reference_end|>
arxiv
@article{ouangraoua2009parking, title={Parking functions, labeled trees and DCJ sorting scenarios}, author={Aida Ouangraoua, Anne Bergeron}, journal={arXiv preprint arXiv:0903.2499}, year={2009}, doi={10.1007/978-3-642-04744-2_3}, archivePrefix={arXiv}, eprint={0903.2499}, primaryClass={cs.DM} }
ouangraoua2009parking
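For readers unfamiliar with the combinatorial object in the bijection above, here is the standard membership test for parking functions (a minimal sketch of the textbook definition; the authors' bijection with DCJ scenarios is not reproduced):

```python
from itertools import product

def is_parking_function(seq):
    """A sequence (a_1, ..., a_n) with 1 <= a_i <= n is a parking
    function iff, after sorting, the i-th smallest value is <= i."""
    s = sorted(seq)
    return all(a <= i + 1 for i, a in enumerate(s))

assert is_parking_function([1, 1, 2])      # classic example
assert not is_parking_function([2, 3, 3])  # two cars both need a spot >= 3

# sanity check of the known count (n+1)^(n-1): 16 parking functions for n=3
assert sum(is_parking_function(p) for p in product(range(1, 4), repeat=3)) == 16
```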
arxiv-6747
0903.2507
The Fibonacci dimension of a graph
<|reference_start|>The Fibonacci dimension of a graph: The Fibonacci dimension fdim(G) of a graph G is introduced as the smallest integer f such that G admits an isometric embedding into Gamma_f, the f-dimensional Fibonacci cube. We give bounds on the Fibonacci dimension of a graph in terms of the isometric and lattice dimension, provide a combinatorial characterization of the Fibonacci dimension using properties of an associated graph, and establish the Fibonacci dimension for certain families of graphs. From the algorithmic point of view we prove that it is NP-complete to decide if fdim(G) equals the isometric dimension of G, and that it is also NP-hard to approximate fdim(G) within (741/740)-epsilon. We also give a (3/2)-approximation algorithm for fdim(G) in the general case and a (1+epsilon)-approximation algorithm for simplex graphs.<|reference_end|>
arxiv
@article{cabello2009the, title={The Fibonacci dimension of a graph}, author={Sergio Cabello, David Eppstein, and Sandi Klavzar}, journal={arXiv preprint arXiv:0903.2507}, year={2009}, number={IMFM Preprint 1084}, archivePrefix={arXiv}, eprint={0903.2507}, primaryClass={math.CO cs.DS} }
cabello2009the
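A small helper showing the target object of the embeddings above: the Fibonacci cube Gamma_f. The construction below is the standard definition (vertices are f-bit strings without two consecutive 1s, edges join strings at Hamming distance 1); it is illustrative only and not an algorithm from the paper:

```python
from itertools import product

def fibonacci_cube(f):
    # vertices: f-bit strings with no two consecutive 1s
    verts = [v for v in product((0, 1), repeat=f)
             if not any(a == b == 1 for a, b in zip(v, v[1:]))]
    # edges: pairs of vertices at Hamming distance exactly 1
    edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return verts, edges

# |V(Gamma_f)| is the Fibonacci number F_{f+2}: 2, 3, 5, 8, 13, ...
for f, expect in zip(range(1, 6), [2, 3, 5, 8, 13]):
    assert len(fibonacci_cube(f)[0]) == expect
```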
arxiv-6748
0903.2516
Effect of Degree Distribution on Evolutionary Search
<|reference_start|>Effect of Degree Distribution on Evolutionary Search: This paper introduces a method to generate hierarchically modular networks with prescribed node degree list and proposes a metric to measure network modularity based on the notion of edge distance. The generated networks are used as test problems to explore the effect of modularity and degree distribution on evolutionary algorithm performance. Results from the experiments (i) confirm a previous finding that modularity increases the performance advantage of genetic algorithms over hill climbers, and (ii) support a new conjecture that test problems with modularized constraint networks having heavy-tailed right-skewed degree distributions are more easily solved than test problems with modularized constraint networks having bell-shaped normal degree distributions.<|reference_end|>
arxiv
@article{khor2009effect, title={Effect of Degree Distribution on Evolutionary Search}, author={Susan Khor}, journal={arXiv preprint arXiv:0903.2516}, year={2009}, archivePrefix={arXiv}, eprint={0903.2516}, primaryClass={cs.NE} }
khor2009effect
arxiv-6749
0903.2525
CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services
<|reference_start|>CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services: Cloud computing focuses on delivery of reliable, secure, fault-tolerant, sustainable, and scalable infrastructures for hosting Internet-based application services. These applications have different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: a new generalized and extensible simulation framework that enables seamless modelling, simulation, and experimentation of emerging Cloud computing infrastructures and management services. The simulation framework has the following novel features: (i) support for modelling and instantiation of large-scale Cloud computing infrastructure, including data centers, on a single physical computing node and Java virtual machine; (ii) a self-contained platform for modelling data centers, service brokers, scheduling, and allocation policies; (iii) availability of a virtualization engine, which aids in creation and management of multiple, independent, and co-hosted virtualized services on a data center node; and (iv) flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.<|reference_end|>
arxiv
@article{calheiros2009cloudsim:, title={CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services}, author={Rodrigo N. Calheiros, Rajiv Ranjan, Cesar A. F. De Rose, and Rajkumar Buyya}, journal={arXiv preprint arXiv:0903.2525}, year={2009}, number={Technical Report, GRIDS-TR-2009-1, Grid Computing and Distributed Systems Laboratory, The University of Melbourne, Australia, March 13, 2009}, archivePrefix={arXiv}, eprint={0903.2525}, primaryClass={cs.DC cs.OS cs.SE} }
calheiros2009cloudsim:
arxiv-6750
0903.2528
Airport Gate Assignment: A Hybrid Model and Implementation
<|reference_start|>Airport Gate Assignment: A Hybrid Model and Implementation: With the rapid development of airlines, airports today have become much busier and more complicated than in previous days. During airlines' daily operations, assigning the available gates to arriving aircraft based on the fixed schedule is a very important issue, which motivates researchers to study and solve Airport Gate Assignment Problems (AGAP) with all kinds of state-of-the-art combinatorial optimization techniques. In this paper, we study the AGAP and propose a novel hybrid mathematical model based on the methods of constraint programming and 0-1 mixed-integer programming. With the objective to minimize the number of gate conflicts of any two adjacent aircraft assigned to the same gate, we build a mathematical model with logical constraints and binary constraints. For practical considerations, a potential objective of the model is also to minimize the number of gates that airlines must lease or purchase in order to run their business smoothly. We implement the model in the Optimization Programming Language (OPL) and carry out empirical studies with data obtained from the online timetable of Continental Airlines at Houston George Bush Intercontinental Airport (IAH), which demonstrate that our model can provide efficient evaluation criteria for airline companies to estimate the efficiency of their current gate assignments.<|reference_end|>
arxiv
@article{li2009airport, title={Airport Gate Assignment: A Hybrid Model and Implementation}, author={Chendong Li}, journal={arXiv preprint arXiv:0903.2528}, year={2009}, archivePrefix={arXiv}, eprint={0903.2528}, primaryClass={cs.AI cs.OH} }
li2009airport
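A sketch of a 0-1 gate-assignment model in the spirit of the abstract above, written with PuLP rather than OPL. The flight data, conflict constraints, and gate-leasing objective are illustrative assumptions, not the authors' exact model:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

flights = {"F1": (0, 4), "F2": (2, 6), "F3": (5, 9)}  # (arrival, departure) -- toy data
gates = ["G1", "G2"]

prob = LpProblem("gate_assignment", LpMinimize)
x = {(f, g): LpVariable(f"x_{f}_{g}", cat=LpBinary) for f in flights for g in gates}
y = {g: LpVariable(f"y_{g}", cat=LpBinary) for g in gates}  # gate leased?

prob += lpSum(y[g] for g in gates)  # lease as few gates as possible

for f in flights:                   # every flight gets exactly one gate
    prob += lpSum(x[f, g] for g in gates) == 1
for g in gates:
    for f1 in flights:
        for f2 in flights:
            # time-overlapping flights may not share a gate
            if f1 < f2 and flights[f1][1] > flights[f2][0] and flights[f2][1] > flights[f1][0]:
                prob += x[f1, g] + x[f2, g] <= 1
    for f in flights:
        prob += x[f, g] <= y[g]     # only assign to leased gates

prob.solve()
print({f: next(g for g in gates if x[f, g].value() == 1) for f in flights})
```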
arxiv-6751
0903.2543
Multi-Agent Crisis Response systems - Design Requirements and Analysis of Current Systems
<|reference_start|>Multi-Agent Crisis Response systems - Design Requirements and Analysis of Current Systems: Crisis response is a critical area of research, with encouraging progress in the past few years. The aim of this research is to contribute to building a future crisis environment where software agents, robots, responders, crisis managers, and crisis organizations interact to provide advice, protection and aid. This paper discusses the requirements of the crisis response domain and provides an analysis of five crisis response systems, namely: DrillSim [2], DEFACTO [15], ALADDIN [1], RoboCup Rescue [18], and FireGrid [3]. The analysis covers system architecture and methodology. In addition, we identify features and limitations of the systems based on the crisis response domain requirements.<|reference_end|>
arxiv
@article{khalil2009multi-agent, title={Multi-Agent Crisis Response systems - Design Requirements and Analysis of Current Systems}, author={Khaled M. Khalil, M. Abdel-Aziz, Taymour T. Nazmy, Abdel-Badeeh M. Salem}, journal={arXiv preprint arXiv:0903.2543}, year={2009}, archivePrefix={arXiv}, eprint={0903.2543}, primaryClass={cs.MA} }
khalil2009multi-agent
arxiv-6752
0903.2544
To Click or not to Click? The Role of Contextualized and User-Centric Web Snippets
<|reference_start|>To Click or not to Click? The Role of Contextualized and User-Centric Web Snippets: When searching the web, ambiguous queries often return far more results than a user can inspect. Text snippets, extracted from the retrieved pages, are an indicator of the pages' usefulness to the query intention and can be used to focus the scope of search results. In this paper, we propose a novel method for automatically extracting web page snippets that are highly relevant to the query intention and expressive of the pages' entire content. We show that the usage of semantics, as a basis for focused retrieval, produces high quality text snippet suggestions. The snippets delivered by our method are significantly better in terms of retrieval performance compared to those derived using the pages' statistical content. Furthermore, our study suggests that semantically-driven snippet generation can also be used to augment traditional passage retrieval algorithms based on word overlap or statistical weights, since they typically differ in coverage and produce different results. User clicks on the query relevant snippets can be used to refine the query results and promote the most comprehensive among the relevant documents.<|reference_end|>
arxiv
@article{zotos2009to, title={To Click or not to Click? The Role of Contextualized and User-Centric Web Snippets}, author={N. Zotos, P. Tzekou, G. Tsatsaronis, L. Kozanidis, S. Stamou, I. Varlamis}, journal={SIGIR 2007 Workshop on Focused Retrieval}, year={2009}, archivePrefix={arXiv}, eprint={0903.2544}, primaryClass={cs.IR} }
zotos2009to
arxiv-6753
0903.2554
Bottom-up rewriting for words and terms
<|reference_start|>Bottom-up rewriting for words and terms: For the whole class of linear term rewriting systems, we define \emph{bottom-up rewriting} which is a restriction of the usual notion of rewriting. We show that bottom-up rewriting effectively inverse-preserves recognizability and analyze the complexity of the underlying construction. The Bottom-Up class (BU) is, by definition, the set of linear systems for which every derivation can be replaced by a bottom-up derivation. Membership in BU turns out to be undecidable; we are thus led to define more restricted classes: the classes $SBU(k)$, $k \in \mathbb{N}$, of Strongly Bottom-Up(k) systems, for which we show that membership is decidable. We define the class of Strongly Bottom-Up systems by $SBU = \bigcup_{k \in \mathbb{N}} SBU(k)$. We give a polynomial sufficient condition for a system to be in $SBU$. The class SBU contains (strictly) several classes of systems which were already known to inverse preserve recognizability: the inverse left-basic semi-Thue systems (viewed as unary term rewriting systems), the linear growing term rewriting systems, the inverse Linear-Finite-Path-Ordering systems.<|reference_end|>
arxiv
@article{durand2009bottom-up, title={Bottom-up rewriting for words and terms}, author={Irene Durand and Geraud Senizergues}, journal={arXiv preprint arXiv:0903.2554}, year={2009}, archivePrefix={arXiv}, eprint={0903.2554}, primaryClass={cs.FL} }
durand2009bottom-up
arxiv-6754
0903.2584
Curvature and temperature of complex networks
<|reference_start|>Curvature and temperature of complex networks: We show that heterogeneous degree distributions in observed scale-free topologies of complex networks can emerge as a consequence of the exponential expansion of hidden hyperbolic space. Fermi-Dirac statistics provides a physical interpretation of hyperbolic distances as energies of links. The hidden space curvature affects the heterogeneity of the degree distribution, while clustering is a function of temperature. We embed the Internet into the hyperbolic plane, and find a remarkable congruency between the embedding and our hyperbolic model. Besides proving our model realistic, this embedding may be used for routing with only local information, which holds significant promise for improving the performance of Internet routing.<|reference_end|>
arxiv
@article{krioukov2009curvature, title={Curvature and temperature of complex networks}, author={Dmitri Krioukov, Fragkiskos Papadopoulos, Amin Vahdat, Marian Boguna}, journal={Phys. Rev. E 80, 035101(R) (2009)}, year={2009}, doi={10.1103/PhysRevE.80.035101}, archivePrefix={arXiv}, eprint={0903.2584}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.NI physics.soc-ph} }
krioukov2009curvature
arxiv-6755
0903.2598
Generating Hierarchically Modular Networks via Link Switching
<|reference_start|>Generating Hierarchically Modular Networks via Link Switching: This paper introduces a method to generate hierarchically modular networks with prescribed node degree list by link switching. Unlike many existing network generating models, our method does not use link probabilities to achieve modularity. Instead, it utilizes a user-specified topology to determine relatedness between pairs of nodes in terms of edge distances and links are switched to increase edge distances. To measure the modular-ness of a network as a whole, a new metric called Q2 is proposed. Comparisons are made between the Q [15] and Q2 measures. We also comment on the effect of our modularization method on other network characteristics such as clustering, hierarchy, average path length, small-worldness, degree correlation and centrality. An application of this method is reported elsewhere [12]. Briefly, the generated networks are used as test problems to explore the effect of modularity and degree distribution on evolutionary search algorithms.<|reference_end|>
arxiv
@article{khor2009generating, title={Generating Hierarchically Modular Networks via Link Switching}, author={Susan Khor}, journal={arXiv preprint arXiv:0903.2598}, year={2009}, archivePrefix={arXiv}, eprint={0903.2598}, primaryClass={cs.OH} }
khor2009generating
arxiv-6756
0903.2641
Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis
<|reference_start|>Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis: We show how the Equation-Free approach for multi-scale computations can be exploited to systematically study the dynamics of neural interactions on a random regular connected graph under a pairwise representation perspective. Using an individual-based microscopic simulator as a black box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the necessity of obtaining explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.<|reference_end|>
arxiv
@article{spiliotis2009multiscale, title={Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis}, author={Konstantinos G. Spiliotis and Constantinos I. Siettos}, journal={Int. J. Bifurcation and Chaos 20 (1) 121-134 (2010)}, year={2009}, doi={10.1142/S0218127410025442}, archivePrefix={arXiv}, eprint={0903.2641}, primaryClass={cs.CE cs.NA q-bio.NC} }
spiliotis2009multiscale
arxiv-6757
0903.2653
Capacity region of the deterministic multi-pair bi-directional relay network
<|reference_start|>Capacity region of the deterministic multi-pair bi-directional relay network: In this paper we study the capacity region of the multi-pair bidirectional (or two-way) wireless relay network, in which a relay node facilitates the communication between multiple pairs of users. This network is a generalization of the well known bidirectional relay channel, where we have only one pair of users. We examine this problem in the context of the deterministic channel interaction model, which eliminates the channel noise and allows us to focus on the interaction between signals. We characterize the capacity region of this network when the relay is operating at either full-duplex mode or half-duplex mode (with non adaptive listen-transmit scheduling). In both cases we show that the cut-set upper bound is tight and, quite interestingly, the capacity region is achieved by a simple equation-forwarding strategy.<|reference_end|>
arxiv
@article{avestimehr2009capacity, title={Capacity region of the deterministic multi-pair bi-directional relay network}, author={Salman Avestimehr, Amin Khajehnejad, Aydin Sezgin, Babak Hassibi}, journal={arXiv preprint arXiv:0903.2653}, year={2009}, doi={10.1109/ITWNIT.2009.5158541}, archivePrefix={arXiv}, eprint={0903.2653}, primaryClass={cs.IT math.IT} }
avestimehr2009capacity
arxiv-6758
0903.2675
Construction and Covering Properties of Constant-Dimension Codes
<|reference_start|>Construction and Covering Properties of Constant-Dimension Codes: Constant-dimension codes (CDCs) have been investigated for noncoherent error correction in random network coding. The maximum cardinality of CDCs with given minimum distance and how to construct optimal CDCs are both open problems, although CDCs obtained by lifting Gabidulin codes, referred to as KK codes, are nearly optimal. In this paper, we first construct a new class of CDCs based on KK codes, referred to as augmented KK codes, whose cardinalities are greater than previously proposed CDCs. We then propose a low-complexity decoding algorithm for our augmented KK codes using that for KK codes. Our decoding algorithm corrects more errors than a bounded subspace distance decoder by taking advantage of the structure of our augmented KK codes. In the rest of the paper we investigate the covering properties of CDCs. We first derive bounds on the minimum cardinality of a CDC with a given covering radius and then determine the asymptotic behavior of this quantity. Moreover, we show that liftings of rank metric codes have the highest possible covering radius, and hence liftings of rank metric codes are not optimal packing CDCs. Finally, we construct good covering CDCs by permuting liftings of rank metric codes.<|reference_end|>
arxiv
@article{gadouleau2009construction, title={Construction and Covering Properties of Constant-Dimension Codes}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:0903.2675}, year={2009}, archivePrefix={arXiv}, eprint={0903.2675}, primaryClass={cs.IT math.IT} }
gadouleau2009construction
arxiv-6759
0903.2679
Valuations and Metrics on Partially Ordered Sets
<|reference_start|>Valuations and Metrics on Partially Ordered Sets: We extend the definitions of upper and lower valuations on partially ordered sets, and consider the metrics they induce, in particular the metrics available (or not) based on the logarithms of such valuations. Motivating applications in computational linguistics and computational biology are indicated.<|reference_end|>
arxiv
@article{orum2009valuations, title={Valuations and Metrics on Partially Ordered Sets}, author={Chris Orum and Cliff A Joslyn}, journal={arXiv preprint arXiv:0903.2679}, year={2009}, number={PNWD-SA-8513}, archivePrefix={arXiv}, eprint={0903.2679}, primaryClass={math.CO cs.IT math.IT} }
orum2009valuations
arxiv-6760
0903.2682
Privacy in Location Based Services: Primitives Toward the Solution
<|reference_start|>Privacy in Location Based Services: Primitives Toward the Solution: Location based services (LBS) are one of the most promising and innovative directions of convergence technologies, resulting from the emergence of several fields including database systems, mobile communication, Internet technology, and positioning systems. Although initiated as early as the mid-1990s, it is only recently that LBS has received systematic, profound research interest due to its commercial and technological impact. As LBS is related to the user's location, which can be used to trace the user's activities, a strong privacy concern has been raised. To preserve the user's location privacy, several notable works have been introduced, though many challenges still await solutions. This paper presents a survey of LBS systems, considering both localization technologies and the models and architectures that guarantee privacy. We also overview cryptographic primitives that could be used to preserve LBS privacy, followed by fruitful research directions mainly concerned with the privacy issue.<|reference_end|>
arxiv
@article{mohaisen2009privacy, title={Privacy in Location Based Services: Primitives Toward the Solution}, author={Abedelaziz Mohaisen, Dowon Hong, and DaeHun Nyang}, journal={arXiv preprint arXiv:0903.2682}, year={2009}, doi={10.1109/NCM.2008.137}, archivePrefix={arXiv}, eprint={0903.2682}, primaryClass={cs.CR} }
mohaisen2009privacy
arxiv-6761
0903.2693
A Pseudo DNA Cryptography Method
<|reference_start|>A Pseudo DNA Cryptography Method: DNA cryptography is a new and very promising direction in cryptography research. DNA can be used in cryptography for storing and transmitting information, as well as for computation. Although in its primitive stage, DNA cryptography is shown to be very effective. Currently, several DNA computing algorithms have been proposed for quite a few cryptography, cryptanalysis and steganography problems, and they are very powerful in these areas. However, the use of DNA as a means of cryptography so far has high-tech lab requirements and computational limitations, as well as labor-intensive extrapolation. These make the efficient use of DNA cryptography difficult in the security world at present. Therefore, more theoretical analysis should be performed before its real applications. In this project, we do not intend to utilize real DNA to perform the cryptography process; rather, we introduce a new cryptography method based on the central dogma of molecular biology. Since this method simulates some critical processes in the central dogma, it is a pseudo DNA cryptography method. The theoretical analysis and experiments show this method to be efficient in computation, storage and transmission; and it is very powerful against certain attacks. Thus, this method can be of many uses in cryptography, such as an enhancement in security and speed over other cryptography methods. There are also extensions and variations to this method, which have enhanced security, effectiveness and applicability.<|reference_end|>
arxiv
@article{ning2009a, title={A Pseudo DNA Cryptography Method}, author={Kang Ning}, journal={arXiv preprint arXiv:0903.2693}, year={2009}, doi={10.1016/j.compeleceng.2012.02.007}, archivePrefix={arXiv}, eprint={0903.2693}, primaryClass={cs.CR cs.DM} }
ning2009a
arxiv-6762
0903.2695
Dynamic Multi-Vehicle Routing with Multiple Classes of Demands
<|reference_start|>Dynamic Multi-Vehicle Routing with Multiple Classes of Demands: In this paper we study a dynamic vehicle routing problem in which there are multiple vehicles and multiple classes of demands. Demands of each class arrive in the environment randomly over time and require a random amount of on-site service that is characteristic of the class. To service a demand, one of the vehicles must travel to the demand location and remain there for the required on-site service time. The quality of service provided to each class is given by the expected delay between the arrival of a demand in the class, and that demand's service completion. The goal is to design a routing policy for the service vehicles which minimizes a convex combination of the delays for each class. First, we provide a lower bound on the achievable values of the convex combination of delays. Then, we propose a novel routing policy and analyze its performance under heavy load conditions (i.e., when the fraction of time the service vehicles spend performing on-site service approaches one). The policy performs within a constant factor of the lower bound (and thus the optimal), where the constant depends only on the number of classes, and is independent of the number of vehicles, the arrival rates of demands, the on-site service times, and the convex combination coefficients.<|reference_end|>
arxiv
@article{pavone2009dynamic, title={Dynamic Multi-Vehicle Routing with Multiple Classes of Demands}, author={Marco Pavone, Stephen L. Smith, Francesco Bullo, Emilio Frazzoli}, journal={arXiv preprint arXiv:0903.2695}, year={2009}, archivePrefix={arXiv}, eprint={0903.2695}, primaryClass={cs.RO} }
pavone2009dynamic
arxiv-6763
0903.2711
Performance Assessment of MIMO-BICM Demodulators based on System Capacity
<|reference_start|>Performance Assessment of MIMO-BICM Demodulators based on System Capacity: We provide a comprehensive performance comparison of soft-output and hard-output demodulators in the context of non-iterative multiple-input multiple-output bit-interleaved coded modulation (MIMO-BICM). Coded bit error rate (BER), widely used in literature for demodulator comparison, has the drawback of depending strongly on the error correcting code being used. This motivates us to propose a code-independent performance measure in terms of system capacity, i.e., mutual information of the equivalent modulation channel that comprises modulator, wireless channel, and demodulator. We present extensive numerical results for ergodic and quasi-static fading channels under perfect and imperfect channel state information. These results reveal that the performance ranking of MIMO demodulators is rate-dependent. Furthermore, they provide new insights regarding MIMO-BICM system design, i.e., the choice of antenna configuration, symbol constellation, and demodulator for a given target rate.<|reference_end|>
arxiv
@article{fertl2009performance, title={Performance Assessment of MIMO-BICM Demodulators based on System Capacity}, author={Peter Fertl, Joakim Jalden, Gerald Matz}, journal={arXiv preprint arXiv:0903.2711}, year={2009}, archivePrefix={arXiv}, eprint={0903.2711}, primaryClass={cs.IT math.IT} }
fertl2009performance
arxiv-6764
0903.2742
On Hadwiger's Number of a graph with partial information
<|reference_start|>On Hadwiger's Number of a graph with partial information: We investigate the possibility of proving upper bounds on Hadwiger's number of a graph with partial information, mirroring several known upper bounds for the chromatic number. For each such bound we determine whether the corresponding bound for Hadwiger's number holds. Our results suggest that the "locality" of an inequality accounts for the existence of such an extension.<|reference_end|>
arxiv
@article{istrate2009on, title={On Hadwiger's Number of a graph with partial information}, author={Gabriel Istrate}, journal={arXiv preprint arXiv:0903.2742}, year={2009}, archivePrefix={arXiv}, eprint={0903.2742}, primaryClass={cs.DM} }
istrate2009on
arxiv-6765
0903.2749
The Perfect Binary One-Error-Correcting Codes of Length 15: Part II--Properties
<|reference_start|>The Perfect Binary One-Error-Correcting Codes of Length 15: Part II--Properties: A complete classification of the perfect binary one-error-correcting codes of length 15 as well as their extensions of length 16 was recently carried out in [P. R. J. Östergård and O. Pottonen, "The perfect binary one-error-correcting codes of length 15: Part I--Classification," IEEE Trans. Inform. Theory, vol. 55, pp. 4657-4660, 2009]. In the current accompanying work, the classified codes are studied in great detail, and their main properties are tabulated. The results include the fact that 33 of the 80 Steiner triple systems of order 15 occur in such codes. Further understanding is gained on full-rank codes via switching, as it turns out that all but two full-rank codes can be obtained through a series of such transformations from the Hamming code. Other topics studied include (non)systematic codes, embedded one-error-correcting codes, and defining sets of codes. A classification of certain mixed perfect codes is also obtained.<|reference_end|>
arxiv
@article{östergård2009the, title={The Perfect Binary One-Error-Correcting Codes of Length 15: Part II--Properties}, author={Patric R. J. Östergård, Olli Pottonen, Kevin T. Phelps}, journal={IEEE Trans. Inform. Theory, vol. 56, pp. 2571-2582, 2010}, year={2009}, doi={10.1109/TIT.2010.2046197}, archivePrefix={arXiv}, eprint={0903.2749}, primaryClass={cs.IT math.IT} }
östergård2009the
arxiv-6766
0903.2774
Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing
<|reference_start|>Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing: We consider the application of compressed sensing (CS) to the estimation of doubly selective channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). By exploiting sparsity in the delay-Doppler domain, CS-based channel estimation allows for an increase in spectral efficiency through a reduction of the number of pilot symbols. For combating leakage effects that limit the delay-Doppler sparsity, we propose a sparsity-enhancing basis expansion and a method for optimizing the basis with or without prior statistical information about the channel. We also present an alternative CS-based channel estimator for (potentially) strongly time-frequency dispersive channels, which is capable of estimating the "off-diagonal" channel coefficients characterizing intersymbol and intercarrier interference (ISI/ICI). For this estimator, we propose a basis construction combining Fourier (exponential) and prolate spheroidal sequences. Simulation results assess the performance gains achieved by the proposed sparsity-enhancing processing techniques and by explicit estimation of ISI/ICI channel coefficients.<|reference_end|>
arxiv
@article{tauboeck2009compressive, title={Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing}, author={Georg Tauboeck, Franz Hlawatsch, Daniel Eiwen, Holger Rauhut}, journal={IEEE J. Sel. Top. Sig. Process., vol. 4, no. 2, pp. 255-271, April 2010}, year={2009}, doi={10.1109/JSTSP.2010.2042410}, archivePrefix={arXiv}, eprint={0903.2774}, primaryClass={cs.IT math.IT} }
tauboeck2009compressive
arxiv-6767
0903.2791
On the Hamming weight of Repeated Root Cyclic and Negacyclic Codes over Galois Rings
<|reference_start|>On the Hamming weight of Repeated Root Cyclic and Negacyclic Codes over Galois Rings: Repeated root Cyclic and Negacyclic codes over Galois rings have been studied much less than their simple root counterparts. This situation is beginning to change. For example, repeated root codes of length $p^s$, where $p$ is the characteristic of the alphabet ring, have been studied under some additional hypotheses. In each one of those cases, the ambient space for the codes has turned out to be a chain ring. In this paper, all remaining cases of cyclic and negacyclic codes of length $p^s$ over a Galois ring alphabet are considered. In these cases the ambient space is a local ring with simple socle but not a chain ring. Nonetheless, by reducing the problem to one dealing with uniserial subambients, a method for computing the Hamming distance of these codes is provided.<|reference_end|>
arxiv
@article{lopez-permouth2009on, title={On the Hamming weight of Repeated Root Cyclic and Negacyclic Codes over Galois Rings}, author={Sergio Lopez-Permouth and Steve Szabo}, journal={arXiv preprint arXiv:0903.2791}, year={2009}, archivePrefix={arXiv}, eprint={0903.2791}, primaryClass={cs.IT math.IT} }
lopez-permouth2009on
arxiv-6768
0903.2792
Thermodynamics of Information Retrieval
<|reference_start|>Thermodynamics of Information Retrieval: In this work, we suggest a parameterized statistical model (the gamma distribution) for the frequency of word occurrences in long strings of English text and use this model to build a corresponding thermodynamic picture by constructing the partition function. We then use our partition function to compute thermodynamic quantities such as the free energy and the specific heat. In this approach, the parameters of the word frequency model vary from word to word so that each word has a different corresponding thermodynamics and we suggest that differences in the specific heat reflect differences in how the words are used in language, differentiating keywords from common and function words. Finally, we apply our thermodynamic picture to the problem of retrieval of texts based on keywords and suggest some advantages over traditional information retrieval methods.<|reference_end|>
arxiv
@article{koroutchev2009thermodynamics, title={Thermodynamics of Information Retrieval}, author={Kostadin Koroutchev, Jian Shen, Elka Koroutcheva and Manuel Cebrian}, journal={arXiv preprint arXiv:0903.2792}, year={2009}, archivePrefix={arXiv}, eprint={0903.2792}, primaryClass={cs.IT cs.CL cs.SI math.IT} }
koroutchev2009thermodynamics
arxiv-6769
0903.2816
A Note on Preconditioning by Low-Stretch Spanning Trees
<|reference_start|>A Note on Preconditioning by Low-Stretch Spanning Trees: Boman and Hendrickson observed that one can solve linear systems in Laplacian matrices in time $O(m^{3/2 + o(1)} \ln (1/\epsilon))$ by preconditioning with the Laplacian of a low-stretch spanning tree. By examining the distribution of eigenvalues of the preconditioned linear system, we prove that the preconditioned conjugate gradient will actually solve the linear system in time $\tilde{O}(m^{4/3} \ln (1/\epsilon))$.<|reference_end|>
arxiv
@article{spielman2009a, title={A Note on Preconditioning by Low-Stretch Spanning Trees}, author={Daniel A Spielman, Jaeoh Woo}, journal={arXiv preprint arXiv:0903.2816}, year={2009}, archivePrefix={arXiv}, eprint={0903.2816}, primaryClass={cs.NA cs.DS} }
spielman2009a
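For context, below is a generic preconditioned conjugate gradient loop, the solver whose iteration count the note above bounds. This is a minimal sketch: solve_M stands in for applying the inverse of the preconditioner (in the paper, a low-stretch spanning-tree Laplacian; the demo substitutes a simple diagonal preconditioner, which is an illustrative assumption):

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, maxit=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_M(r)           # apply preconditioner inverse
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = solve_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# quick check with a Jacobi (diagonal) stand-in for the tree preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
assert np.allclose(A @ x, b)
```

With a tree preconditioner, each solve_M call can be done in O(m) time by elimination on the tree; the note's bound comes from how the eigenvalues of the preconditioned system are distributed.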
arxiv-6770
0903.2820
Cooperative Transmission in a Wireless Relay Network based on Flow Management
<|reference_start|>Cooperative Transmission in a Wireless Relay Network based on Flow Management: In this paper, a cooperative transmission design for a general multi-node half-duplex wireless relay network is presented. It is assumed that the nodes operate in half-duplex mode and that channel information is available at the nodes. The proposed design involves solving a convex flow optimization problem on a graph that models the relay network. A much simpler generalized-link selection protocol based on the above design is also presented. Both the proposed flow-optimized protocol and the generalized-link selection protocol are shown to achieve the optimal diversity-multiplexing tradeoff (DMT) for the relay network. Moreover, simulation results are presented to quantify the gap between the performances of the proposed protocols and that of a max-flow-min-cut type bound, in terms of outage probability.<|reference_end|>
arxiv
@article{chatterjee2009cooperative, title={Cooperative Transmission in a Wireless Relay Network based on Flow Management}, author={Debdeep Chatterjee, Tan F. Wong, and Tat M. Lok}, journal={arXiv preprint arXiv:0903.2820}, year={2009}, archivePrefix={arXiv}, eprint={0903.2820}, primaryClass={cs.IT math.IT} }
chatterjee2009cooperative
arxiv-6771
0903.2825
On the Computational Complexity of Satisfiability Solving for String Theories
<|reference_start|>On the Computational Complexity of Satisfiability Solving for String Theories: Satisfiability solvers are increasingly playing a key role in software verification, with particularly effective use in the analysis of security vulnerabilities. String processing is a key part of many software applications, such as browsers and web servers. These applications are susceptible to attacks through malicious data received over network. Automated tools for analyzing the security of such applications, thus need to reason about strings. For efficiency reasons, it is desirable to have a solver that treats strings as first-class types. In this paper, we present some theories of strings that are useful in a software security context and analyze the computational complexity of the presented theories. We use this complexity analysis to motivate a byte-blast approach which employs a Boolean encoding of the string constraints to a corresponding Boolean satisfiability problem.<|reference_end|>
arxiv
@article{jha2009on, title={On the Computational Complexity of Satisfiability Solving for String Theories}, author={Susmit Jha, Sanjit A. Seshia and Rhishikesh Limaye}, journal={arXiv preprint arXiv:0903.2825}, year={2009}, archivePrefix={arXiv}, eprint={0903.2825}, primaryClass={cs.CC cs.LO cs.PL} }
jha2009on
arxiv-6772
0903.2851
A parameter-free hedging algorithm
<|reference_start|>A parameter-free hedging algorithm: We study the problem of decision-theoretic online learning (DTOL). Motivated by practical applications, we focus on DTOL when the number of actions is very large. Previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. In this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for DTOL. We introduce a new notion of regret, which is more natural for applications with a large number of actions. We show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret.<|reference_end|>
arxiv
@article{chaudhuri2009a, title={A parameter-free hedging algorithm}, author={Kamalika Chaudhuri, Yoav Freund, Daniel Hsu}, journal={arXiv preprint arXiv:0903.2851}, year={2009}, archivePrefix={arXiv}, eprint={0903.2851}, primaryClass={cs.LG cs.AI} }
chaudhuri2009a
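For context, the baseline this paper improves on is the classic Hedge (exponential-weights) update, whose learning rate eta must be tuned. A minimal sketch of that tuned baseline follows; it is not the paper's parameter-free algorithm, whose update rule is different and more involved:

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Classic Hedge. loss_matrix: T x N array of per-round losses in [0, 1]."""
    T, N = loss_matrix.shape
    log_w = np.zeros(N)                 # log-weights avoid numerical underflow
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                    # current distribution over actions
        total += p @ loss_matrix[t]     # expected loss this round
        log_w -= eta * loss_matrix[t]   # multiplicative-weights update
    return total - loss_matrix.sum(axis=0).min()  # regret vs best action

rng = np.random.default_rng(0)
losses = rng.random((500, 10))
print(hedge(losses, eta=np.sqrt(8 * np.log(10) / 500)))  # standard tuning of eta
```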
arxiv-6773
0903.2862
Tracking using explanation-based modeling
<|reference_start|>Tracking using explanation-based modeling: We study the tracking problem, namely, estimating the hidden state of an object over time, from unreliable and noisy measurements. The standard framework for the tracking problem is the generative framework, which is the basis of solutions such as the Bayesian algorithm and its approximation, the particle filters. However, the problem with these solutions is that they are very sensitive to model mismatches. In this paper, motivated by online learning, we introduce a new framework -- an "explanatory" framework -- for tracking. We provide an efficient tracking algorithm for this framework. We provide experimental results comparing our algorithm to the Bayesian algorithm on simulated data. Our experiments show that when there are slight model mismatches, our algorithm vastly outperforms the Bayesian algorithm.<|reference_end|>
arxiv
@article{chaudhuri2009tracking, title={Tracking using explanation-based modeling}, author={Kamalika Chaudhuri, Yoav Freund, Daniel Hsu}, journal={arXiv preprint arXiv:0903.2862}, year={2009}, archivePrefix={arXiv}, eprint={0903.2862}, primaryClass={cs.LG cs.AI cs.CV} }
chaudhuri2009tracking
arxiv-6774
0903.2870
On $p$-adic Classification
<|reference_start|>On $p$-adic Classification: A $p$-adic modification of the split-LBG classification method is presented in which first clusterings and then cluster centers are computed which locally minimise an energy function. The outcome for a fixed dataset is independent of the prime number $p$ with finitely many exceptions. The methods are applied to the construction of $p$-adic classifiers in the context of learning.<|reference_end|>
arxiv
@article{bradley2009on, title={On $p$-adic Classification}, author={Patrick Erik Bradley}, journal={p-Adic Numbers, Ultrametric Analysis, and Applications, Vol. 1, No. 4 (2009), 271-285}, year={2009}, doi={10.1134/S2070046609040013}, archivePrefix={arXiv}, eprint={0903.2870}, primaryClass={cs.LG} }
bradley2009on
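Background sketch for the record above: the p-adic valuation and the ultrametric it induces on the integers, which is the distance underlying p-adic clustering. This is the standard definition, not the paper's split-LBG modification:

```python
def v_p(n, p):
    """Largest k with p**k dividing n (the p-adic valuation); infinity at 0."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def d_p(m, n, p):
    """p-adic distance |m - n|_p = p**(-v_p(m - n))."""
    return 0.0 if m == n else p ** (-v_p(m - n, p))

# ultrametric (strong triangle) inequality: d(x,z) <= max(d(x,y), d(y,z))
x, y, z, p = 7, 34, 16, 3
assert d_p(x, z, p) <= max(d_p(x, y, p), d_p(y, z, p))
```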
arxiv-6775
0903.2890
Kalman Filtering with Intermittent Observations: Weak Convergence to a Stationary Distribution
<|reference_start|>Kalman Filtering with Intermittent Observations: Weak Convergence to a Stationary Distribution: The paper studies the asymptotic behavior of Random Algebraic Riccati Equations (RARE) arising in Kalman filtering when the arrival of the observations is described by a Bernoulli i.i.d. process. We model the RARE as an order-preserving, strongly sublinear random dynamical system (RDS). Under a sufficient condition, stochastic boundedness, and using a limit-set dichotomy result for order-preserving, strongly sublinear RDS, we establish the asymptotic properties of the RARE: the sequence of random prediction error covariance matrices converges weakly to a unique invariant distribution, whose support exhibits fractal behavior. In particular, this weak convergence holds under broad conditions and even when the observations arrival rate is below the critical probability for mean stability. We apply the weak-Feller property of the Markov process governing the RARE to characterize the support of the limiting invariant distribution as the topological closure of a countable set of points, which, in general, is not dense in the set of positive semi-definite matrices. We use the explicit characterization of the support of the invariant distribution and the almost sure ergodicity of the sample paths to easily compute the moments of the invariant distribution. A one dimensional example illustrates that the support is a fractured subset of the non-negative reals with self-similarity properties.<|reference_end|>
arxiv
@article{kar2009kalman, title={Kalman Filtering with Intermittent Observations: Weak Convergence to a Stationary Distribution}, author={Soummya Kar, Bruno Sinopoli, and Jose M. F. Moura}, journal={arXiv preprint arXiv:0903.2890}, year={2009}, archivePrefix={arXiv}, eprint={0903.2890}, primaryClass={cs.IT cs.LG math.IT math.ST stat.TH} }
kar2009kalman
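The random algebraic Riccati equation studied above is easy to simulate. A minimal sketch with an illustrative unstable scalar system (the matrices and arrival rate are assumptions, not taken from the paper); the empirical moments approximate those of the limiting invariant distribution whose existence the paper establishes:

```python
import numpy as np

A = np.array([[1.2]])   # unstable scalar system (critical rate 1 - 1/1.2**2 ~ 0.31)
C = np.array([[1.0]])
Q = np.array([[1.0]])   # process noise covariance
R = np.array([[1.0]])   # measurement noise covariance
lam = 0.6               # Bernoulli packet-arrival probability

rng = np.random.default_rng(1)
P = np.array([[1.0]])
samples = []
for k in range(20000):
    gamma = rng.random() < lam        # did the observation arrive?
    APA = A @ P @ A.T + Q
    if gamma:                         # RARE update only when data arrives
        S = C @ P @ C.T + R
        P = APA - A @ P @ C.T @ np.linalg.inv(S) @ C @ P @ A.T
    else:
        P = APA
    samples.append(P[0, 0])

# empirical moments of the (weakly) limiting invariant distribution
print(np.mean(samples[1000:]), np.var(samples[1000:]))
```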
arxiv-6776
0903.2904
A decidable policy language for history-based transaction monitoring
<|reference_start|>A decidable policy language for history-based transaction monitoring: Online trading invariably involves dealings between strangers, so it is important for one party to be able to judge objectively the trustworthiness of the other. In such a setting, the decision to trust a user may sensibly be based on that user's past behaviour. We introduce a specification language based on linear temporal logic for expressing a policy for categorising the behaviour patterns of a user depending on its transaction history. We also present an algorithm for checking whether the transaction history obeys the stated policy. To be useful in a real setting, such a language should allow one to express realistic policies which may involve parameter quantification and quantitative or statistical patterns. We introduce several extensions of linear temporal logic to cater for such needs: a restricted form of universal and existential quantification; arbitrary computable functions and relations in the term language; and a "counting" quantifier for counting how many times a formula holds in the past. We then show that model checking a transaction history against a policy, which we call the history-based transaction monitoring problem, is PSPACE-complete in the size of the policy formula and the length of the history. The problem becomes decidable in polynomial time when the policies are fixed. We also consider the problem of transaction monitoring in the case where not all the parameters of actions are observable. We formulate two such "partial observability" monitoring problems, and show their decidability under certain restrictions.<|reference_end|>
arxiv
@article{bauer2009a, title={A decidable policy language for history-based transaction monitoring}, author={Andreas Bauer, Rajeev Gore, Alwen Tiu}, journal={arXiv preprint arXiv:0903.2904}, year={2009}, doi={10.1007/978-3-642-03466-4_6}, archivePrefix={arXiv}, eprint={0903.2904}, primaryClass={cs.LO cs.CR} }
bauer2009a
arxiv-6777
0903.2908
Communities of solutions in single solution clusters of a random K-Satisfiability formula
<|reference_start|>Communities of solutions in single solution clusters of a random K-Satisfiability formula: The solution space of a K-satisfiability (K-SAT) formula is a collection of solution clusters, each of which contains all the solutions that are mutually reachable through a sequence of single-spin flips. Knowledge of the statistical property of solution clusters is valuable for a complete understanding of the solution space structure and the computational complexity of the random K-SAT problem. This paper explores single solution clusters of random 3- and 4-SAT formulas through unbiased and biased random walk processes and the replica-symmetric cavity method of statistical physics. We find that the giant connected component of the solution space has already formed many different communities when the constraint density of the formula is still lower than the solution space clustering transition point. Solutions of the same community are more similar with each other and more densely connected with each other than with the other solutions. The entropy density of a solution community is calculated using belief propagation and is found to be different for different communities of the same cluster. When the constraint density is beyond the clustering transition point, the same behavior is observed for the solution clusters reached by several stochastic search algorithms. Taken together, the results of this work suggest a refined picture of the evolution of the solution space structure of the random K-SAT problem; they may also be helpful for designing new heuristic algorithms.<|reference_end|>
arxiv
@article{zhou2009communities, title={Communities of solutions in single solution clusters of a random K-Satisfiability formula}, author={Haijun Zhou and Hui Ma}, journal={Phys. Rev. E 80, 066108 (2009)}, year={2009}, doi={10.1103/PhysRevE.80.066108}, archivePrefix={arXiv}, eprint={0903.2908}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC} }
zhou2009communities
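The cluster structure described above can be illustrated at toy scale. A minimal brute-force sketch (the formula size, clause density, and seed are illustrative assumptions, far below the regime the paper studies): enumerate all solutions of a small random 3-SAT instance and group them into single-spin-flip connected clusters.

```python
from itertools import product, combinations
import random

random.seed(3)
n, m = 12, 24                              # variables, clauses (alpha = 2)
clauses = [tuple(random.choice([v + 1, -(v + 1)])
                 for v in random.sample(range(n), 3)) for _ in range(m)]

def satisfies(assign, clauses):
    return all(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
               for cl in clauses)

solutions = [a for a in product((False, True), repeat=n)
             if satisfies(a, clauses)]

# a cluster is a connected component under Hamming-distance-1 moves (union-find)
parent = list(range(len(solutions)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i, j in combinations(range(len(solutions)), 2):
    if sum(a != b for a, b in zip(solutions[i], solutions[j])) == 1:
        parent[find(i)] = find(j)

clusters = len({find(i) for i in range(len(solutions))})
print(len(solutions), "solutions in", clusters, "cluster(s)")
```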
arxiv-6778
0903.2914
A process calculus with finitary comprehended terms
<|reference_start|>A process calculus with finitary comprehended terms: We introduce the notion of an ACP process algebra and the notion of a meadow enriched ACP process algebra. The former notion originates from the models of the axiom system ACP. The latter notion is a simple generalization of the former notion to processes in which data are involved, the mathematical structure of data being a meadow. Moreover, for all associative operators from the signature of meadow enriched ACP process algebras that are not of an auxiliary nature, we introduce variable-binding operators as generalizations. These variable-binding operators, which give rise to comprehended terms, have the property that they can always be eliminated. Thus, we obtain a process calculus whose terms can be interpreted in all meadow enriched ACP process algebras. Use of the variable-binding operators can have a major impact on the size of terms.<|reference_end|>
arxiv
@article{bergstra2009a, title={A process calculus with finitary comprehended terms}, author={J. A. Bergstra, C. A. Middelburg}, journal={Theory of Computing Systems, 53(4):645--668, 2013}, year={2009}, doi={10.1007/s00224-013-9468-x}, archivePrefix={arXiv}, eprint={0903.2914}, primaryClass={cs.LO math.RA} }
bergstra2009a
arxiv-6779
0903.2923
On uncertainty principles in the finite dimensional setting
<|reference_start|>On uncertainty principles in the finite dimensional setting: The aim of this paper is to prove an uncertainty principle for the representation of a vector in two bases. Our result extends previously known qualitative uncertainty principles into quantitative estimates. We then show how to transfer this result to the discrete version of the Short Time Fourier Transform. An application to trigonometric polynomials is also given.<|reference_end|>
arxiv
@article{ghobber2009on, title={On uncertainty principles in the finite dimensional setting}, author={Saifallah Ghobber (MAPMO), Philippe Jaming (MAPMO, IMB)}, journal={Linear Algebra and its Applications 435(4) (2011), 751--768}, year={2009}, doi={10.1016/j.laa.2011.01.038}, archivePrefix={arXiv}, eprint={0903.2923}, primaryClass={math.CA cs.IT math.IT} }
ghobber2009on
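The qualitative principle that the paper above turns into quantitative estimates can be checked numerically in the DFT case: for x in C^N, |supp(x)| * |supp(x_hat)| >= N (the Donoho-Stark bound). A small sketch with illustrative test vectors, not taken from the paper:

```python
import numpy as np

def support_size(v, tol=1e-9):
    return int(np.sum(np.abs(v) > tol))

N = 12
rng = np.random.default_rng(0)
for _ in range(200):
    x = np.zeros(N, dtype=complex)
    idx = rng.choice(N, size=rng.integers(1, N + 1), replace=False)
    x[idx] = rng.normal(size=len(idx)) + 1j * rng.normal(size=len(idx))
    xh = np.fft.fft(x)
    assert support_size(x) * support_size(xh) >= N  # Donoho-Stark

# equality case: the indicator of a subgroup (here, the even indices)
comb = np.zeros(N)
comb[::2] = 1.0                                   # |supp| = 6
assert support_size(np.fft.fft(comb)) * 6 == N    # FFT of a comb is a comb
```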
arxiv-6780
0903.2966
Introducing Hierarchy in Energy Games
<|reference_start|>Introducing Hierarchy in Energy Games: In this work we introduce hierarchy in wireless networks that can be modeled by a decentralized multiple access channel and for which energy-efficiency is the main performance index. In these networks users are free to choose their power control strategy to selfishly maximize their energy-efficiency. Specifically, we introduce hierarchy in two different ways: 1. Assuming single-user decoding at the receiver, we investigate a Stackelberg formulation of the game where one user is the leader whereas the other users are assumed to be able to react to the leader's decisions; 2. Assuming neither leader nor followers among the users, we introduce hierarchy by assuming successive interference cancellation at the receiver. It is shown that introducing a certain degree of hierarchy in non-cooperative power control games not only improves the individual energy efficiency of all the users but can also be a way of ensuring the existence of a non-saturated equilibrium and reaching a desired trade-off between the global network performance at the equilibrium and the requested amount of signaling. In this respect, the way of measuring the global performance of an energy-efficient network is shown to be a critical issue.<|reference_end|>
arxiv
@article{lasaulce2009introducing, title={Introducing Hierarchy in Energy Games}, author={S. Lasaulce, Y. Hayel, R. El Azouzi, and M. Debbah}, journal={arXiv preprint arXiv:0903.2966}, year={2009}, doi={10.1109/TWC.2009.081443}, archivePrefix={arXiv}, eprint={0903.2966}, primaryClass={cs.GT} }
lasaulce2009introducing
arxiv-6781
0903.2972
Optimistic Simulated Exploration as an Incentive for Real Exploration
<|reference_start|>Optimistic Simulated Exploration as an Incentive for Real Exploration: Many reinforcement learning exploration techniques are overly optimistic and try to explore every state. Such exploration is impossible in environments with an unlimited number of states. I propose to use simulated exploration with an optimistic model to discover promising paths for real exploration. This reduces the need for real exploration.<|reference_end|>
arxiv
@article{danihelka2009optimistic, title={Optimistic Simulated Exploration as an Incentive for Real Exploration}, author={Ivo Danihelka}, journal={POSTER 2009}, year={2009}, archivePrefix={arXiv}, eprint={0903.2972}, primaryClass={cs.LG cs.AI} }
danihelka2009optimistic
arxiv-6782
0903.2999
Human Activity in the Web
<|reference_start|>Human Activity in the Web: The recent information technology revolution has enabled the analysis and processing of large-scale datasets describing human activities. The main source of data is the Web, where people generally spend a significant part of their day. Here we study three large datasets containing information about human Web activities in different contexts. We study inter-event and waiting time statistics in detail. In both cases, the number of pairs of subsequent operations separated by tau units of time decays as a power law as tau increases. We use non-parametric statistical tests to estimate how reliably global distributions describe the activity patterns of single users. Global inter-event time probability distributions are not representative of the behavior of single users: the shape of a single user's inter-event distribution is strongly influenced by the total number of operations performed by that user, and the distribution of the total number of operations performed across users is heterogeneous. A universal behavior can nevertheless be found by suppressing the intrinsic dependence of the global probability distribution on user activity. This suppression can be performed by simply dividing the inter-event times by their average values. In contrast, waiting time probability distributions seem to be independent of user activity, and global probability distributions are able to significantly represent the replying activity patterns of single users.<|reference_end|>
arxiv
@article{radicchi2009human, title={Human Activity in the Web}, author={Filippo Radicchi}, journal={Phys. Rev. E 80, 026118 (2009)}, year={2009}, doi={10.1103/PhysRevE.80.026118}, archivePrefix={arXiv}, eprint={0903.2999}, primaryClass={physics.soc-ph cond-mat.stat-mech cs.HC} }
radicchi2009human
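A minimal sketch of the rescaling step described in the abstract above (arXiv:0903.2999), where dividing each user's inter-event times by their average suppresses the dependence on user activity; numpy is assumed, and the function name and sample data are illustrative rather than taken from the paper.

import numpy as np

def rescaled_inter_event_times(event_times):
    # Sort the timestamps, take successive differences, and divide by the
    # user's mean inter-event time, making users with different activity
    # levels comparable on one dimensionless scale.
    tau = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return tau / tau.mean()

print(rescaled_inter_event_times([0.0, 1.0, 3.0, 10.0]))  # [0.3 0.6 2.1]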
arxiv-6783
0903.3000
A Robust Ranging Scheme for OFDMA-Based Networks
<|reference_start|>A Robust Ranging Scheme for OFDMA-Based Networks: Uplink synchronization in orthogonal frequency-division multiple-access (OFDMA) systems is a challenging task. In IEEE 802.16-based networks, users that intend to establish a communication link with the base station must go through a synchronization procedure called Initial Ranging (IR). Existing IR schemes aim at estimating the timing offsets and power levels of ranging subscriber stations (RSSs) without considering possible frequency misalignments between the received uplink signals and the base station local reference. In this work, we present a novel IR scheme for OFDMA systems where carrier frequency offsets, timing errors and power levels are estimated for all RSSs in a decoupled fashion. The proposed frequency estimator is based on a subspace decomposition approach, while timing recovery is accomplished by measuring the phase shift between the users' channel responses over adjacent subcarriers. Computer simulations are employed to assess the effectiveness of the proposed solution and to make comparisons with existing alternatives.<|reference_end|>
arxiv
@article{morelli2009a, title={A Robust Ranging Scheme for OFDMA-Based Networks}, author={Michele Morelli, Luca Sanguinetti, H. Vincent Poor}, journal={arXiv preprint arXiv:0903.3000}, year={2009}, archivePrefix={arXiv}, eprint={0903.3000}, primaryClass={cs.IT math.IT} }
morelli2009a
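The timing-recovery principle invoked in the abstract above (arXiv:0903.3000), measuring the phase shift of the channel response between adjacent subcarriers, can be illustrated with a toy numpy sketch; the pure-delay channel and the parameter values are assumptions for illustration, not the paper's estimator.

import numpy as np

N, delay = 64, 5                          # FFT size, true delay in samples
k = np.arange(N)
H = np.exp(-2j * np.pi * k * delay / N)   # frequency response of a pure delay

# The average phase increment between adjacent subcarriers reveals the delay.
phase_step = np.angle(np.sum(H[1:] * np.conj(H[:-1])))
delay_estimate = -phase_step * N / (2 * np.pi)
print(round(delay_estimate, 3))           # 5.0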
arxiv-6784
0903.3004
Decoding of MDP Convolutional Codes over the Erasure Channel
<|reference_start|>Decoding of MDP Convolutional Codes over the Erasure Channel: This paper studies the decoding capabilities of maximum distance profile (MDP) convolutional codes over the erasure channel and compares them with the decoding capabilities of MDS block codes over the same channel. The erasure channel involving large alphabets is an important practical channel model when studying packet transmissions over a network, e.g., the Internet.<|reference_end|>
arxiv
@article{tomás2009decoding, title={Decoding of MDP Convolutional Codes over the Erasure Channel}, author={Virtudes Tomás, Joachim Rosenthal and Roxana Smarandache}, journal={arXiv preprint arXiv:0903.3004}, year={2009}, archivePrefix={arXiv}, eprint={0903.3004}, primaryClass={cs.IT math.IT} }
tomás2009decoding
arxiv-6785
0903.3024
A Vector Generalization of Costa's Entropy-Power Inequality with Applications
<|reference_start|>A Vector Generalization of Costa's Entropy-Power Inequality with Applications: This paper considers an entropy-power inequality (EPI) of Costa and presents a natural vector generalization with a real positive semidefinite matrix parameter. This new inequality is proved using a perturbation approach via a fundamental relationship between the derivative of mutual information and the minimum mean-square error (MMSE) estimate in linear vector Gaussian channels. As an application, a new extremal entropy inequality is derived from the generalized Costa EPI and then used to establish the secrecy capacity regions of the degraded vector Gaussian broadcast channel with layered confidential messages.<|reference_end|>
arxiv
@article{liu2009a, title={A Vector Generalization of Costa's Entropy-Power Inequality with Applications}, author={Ruoheng Liu, Tie Liu, H. Vincent Poor and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0903.3024}, year={2009}, archivePrefix={arXiv}, eprint={0903.3024}, primaryClass={cs.IT math.IT} }
liu2009a
arxiv-6786
0903.3072
Spatial Skyline Queries: An Efficient Geometric Algorithm
<|reference_start|>Spatial Skyline Queries: An Efficient Geometric Algorithm: As more data-intensive applications emerge, advanced retrieval semantics, such as ranking or skylines, have attracted attention. Geographic information systems are such an application with massive spatial data. Our goal is to efficiently support skyline queries over massive spatial data. To achieve this goal, we first observe that the best known algorithm VS2, despite its claim, may fail to deliver correct results. In contrast, we present a simple and efficient algorithm that computes the correct results. To validate the effectiveness and efficiency of our algorithm, we provide an extensive empirical comparison of our algorithm and VS2 in several aspects.<|reference_end|>
arxiv
@article{son2009spatial, title={Spatial Skyline Queries: An Efficient Geometric Algorithm}, author={Wanbin Son, Mu-Woong Lee, Hee-Kap Ahn, Seung-won Hwang}, journal={arXiv preprint arXiv:0903.3072}, year={2009}, archivePrefix={arXiv}, eprint={0903.3072}, primaryClass={cs.DB cs.CG} }
son2009spatial
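To make the notion of spatial dominance concrete for the abstract above (arXiv:0903.3072), here is a naive quadratic-time sketch: a point is dominated if some other point is at least as close to every query point and strictly closer to one. This brute-force baseline is illustrative only and is not the paper's geometric algorithm.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dominates(p, q, queries):
    # p spatially dominates q if p is at least as close to every query
    # point and strictly closer to at least one of them.
    dp = [dist(p, v) for v in queries]
    dq = [dist(q, v) for v in queries]
    return all(a <= b for a, b in zip(dp, dq)) and any(a < b for a, b in zip(dp, dq))

def spatial_skyline(points, queries):
    return [q for q in points
            if not any(dominates(p, q, queries) for p in points if p != q)]

pts = [(0, 0), (2, 2), (5, 5)]
print(spatial_skyline(pts, queries=[(1, 1), (3, 1)]))  # [(2, 2)]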
arxiv-6787
0903.3080
A Unified Theory of Time-Frequency Reassignment
<|reference_start|>A Unified Theory of Time-Frequency Reassignment: Time-frequency representations such as the spectrogram are commonly used to analyze signals having a time-varying distribution of spectral energy, but the spectrogram is constrained by an unfortunate tradeoff between resolution in time and frequency. A method of achieving high-resolution spectral representations has been independently introduced by several parties. The technique has been variously named reassignment and remapping, but while the implementations have differed in details, they are all based on the same theoretical and mathematical foundation. In this work, we present a brief history of work on the method we will call the method of time-frequency reassignment, and present a unified mathematical description of the technique and its derivation. We will focus on the development of time-frequency reassignment in the context of the spectrogram, and conclude with a discussion of some current applications of the reassigned spectrogram.<|reference_end|>
arxiv
@article{fitz2009a, title={A Unified Theory of Time-Frequency Reassignment}, author={Kelly R. Fitz and Sean A. Fulop}, journal={arXiv preprint arXiv:0903.3080}, year={2009}, archivePrefix={arXiv}, eprint={0903.3080}, primaryClass={cs.SD} }
fitz2009a
arxiv-6788
0903.3096
The Secrecy Capacity Region of the Gaussian MIMO Multi-receiver Wiretap Channel
<|reference_start|>The Secrecy Capacity Region of the Gaussian MIMO Multi-receiver Wiretap Channel: In this paper, we consider the Gaussian multiple-input multiple-output (MIMO) multi-receiver wiretap channel in which a transmitter wants to have confidential communication with an arbitrary number of users in the presence of an external eavesdropper. We derive the secrecy capacity region of this channel for the most general case. We first show that even for the single-input single-output (SISO) case, existing converse techniques for the Gaussian scalar broadcast channel cannot be extended to this secrecy context, to emphasize the need for a new proof technique. Our new proof technique makes use of the relationships between the minimum-mean-square-error and the mutual information, and equivalently, the relationships between the Fisher information and the differential entropy. Using the intuition gained from the converse proof of the SISO channel, we first prove the secrecy capacity region of the degraded MIMO channel, in which all receivers have the same number of antennas, and the noise covariance matrices can be arranged according to a positive semi-definite order. We then generalize this result to the aligned case, in which all receivers have the same number of antennas, however there is no order among the noise covariance matrices. We accomplish this task by using the channel enhancement technique. Finally, we find the secrecy capacity region of the general MIMO channel by using some limiting arguments on the secrecy capacity region of the aligned MIMO channel. We show that the capacity achieving coding scheme is a variant of dirty-paper coding with Gaussian signals.<|reference_end|>
arxiv
@article{ekrem2009the, title={The Secrecy Capacity Region of the Gaussian MIMO Multi-receiver Wiretap Channel}, author={Ersen Ekrem and Sennur Ulukus}, journal={arXiv preprint arXiv:0903.3096}, year={2009}, archivePrefix={arXiv}, eprint={0903.3096}, primaryClass={cs.IT math.IT} }
ekrem2009the
arxiv-6789
0903.3100
Time Allocation of a Set of Radars in a Multitarget Environment
<|reference_start|>Time Allocation of a Set of Radars in a Multitarget Environment: The question tackled here is the time allocation of radars in a multitarget environment. At a given time, radars can only observe a limited part of the space; it is therefore necessary to move their axes over time in order to explore the overall space facing them. Such sensors are used to detect, locate and identify targets in their surrounding aerial space. In this paper we focus on the detection scheme when several targets need to be detected by a set of delocalized radars. This work is based on modelling radar detection performance in terms of probability of detection and on optimizing a criterion based on detection probabilities. This optimization leads to the derivation of allocation strategies and is carried out for several contexts and several hypotheses about the targets' locations.<|reference_end|>
arxiv
@article{duflos2009time, title={Time Allocation of a Set of Radars in a Multitarget Environment}, author={Emmanuel Duflos (INRIA Futurs), Marie De Vilmorin (LGI2A), Philippe Vanheeghe (INRIA Futurs)}, journal={FUSION 2007 (2007)}, year={2009}, archivePrefix={arXiv}, eprint={0903.3100}, primaryClass={math.OC cs.NI math.PR} }
duflos2009time
arxiv-6790
0903.3103
Efficiently Learning a Detection Cascade with Sparse Eigenvectors
<|reference_start|>Efficiently Learning a Detection Cascade with Sparse Eigenvectors: In this work, we first show that feature selection methods other than boosting can also be used for training an efficient object detector. In particular, we introduce Greedy Sparse Linear Discriminant Analysis (GSLDA) \cite{Moghaddam2007Fast} for its conceptual simplicity and computational efficiency; and slightly better detection performance is achieved compared with \cite{Viola2004Robust}. Moreover, we propose a new technique, termed Boosted Greedy Sparse Linear Discriminant Analysis (BGSLDA), to efficiently train a detection cascade. BGSLDA exploits the sample re-weighting property of boosting and the class-separability criterion of GSLDA.<|reference_end|>
arxiv
@article{shen2009efficiently, title={Efficiently Learning a Detection Cascade with Sparse Eigenvectors}, author={Chunhua Shen, Sakrapee Paisitkriangkrai, and Jian Zhang}, journal={arXiv preprint arXiv:0903.3103}, year={2009}, archivePrefix={arXiv}, eprint={0903.3103}, primaryClass={cs.MM cs.AI cs.LG} }
shen2009efficiently
arxiv-6791
0903.3106
Stabilizing Maximal Independent Set in Unidirectional Networks is Hard
<|reference_start|>Stabilizing Maximal Independent Set in Unidirectional Networks is Hard: A distributed algorithm is self-stabilizing if after faults and attacks hit the system and place it in some arbitrary global state, the system recovers from this catastrophic situation without external intervention in finite time. In this paper, we consider the problem of constructing self-stabilizingly a \emph{maximal independent set} in uniform unidirectional networks of arbitrary shape. On the negative side, we present evidence that in uniform networks, \emph{deterministic} self-stabilization of this problem is \emph{impossible}. Also, the \emph{silence} property (\emph{i.e.} having communication fixed from some point in every execution) is impossible to guarantee, either for deterministic or for probabilistic variants of protocols. On the positive side, we present a deterministic protocol for networks with arbitrary unidirectional networks with unique identifiers that exhibits polynomial space and time complexity in asynchronous scheduling. We complement the study with probabilistic protocols for the uniform case: the first probabilistic protocol requires infinite memory but copes with asynchronous scheduling, while the second probabilistic protocol has polynomial space complexity but can only handle synchronous scheduling. Both probabilistic solutions have expected polynomial time complexity.<|reference_end|>
arxiv
@article{masuzawa2009stabilizing, title={Stabilizing Maximal Independent Set in Unidirectional Networks is Hard}, author={Toshimitsu Masuzawa, Sébastien Tixeuil (LIP6)}, journal={arXiv preprint arXiv:0903.3106}, year={2009}, number={RR-6880}, archivePrefix={arXiv}, eprint={0903.3106}, primaryClass={cs.DS cs.CC cs.DC cs.NI cs.PF} }
masuzawa2009stabilizing
arxiv-6792
0903.3114
Markov Random Field Segmentation of Brain MR Images
<|reference_start|>Markov Random Field Segmentation of Brain MR Images: We describe a fully-automatic 3D-segmentation technique for brain MR images. Using Markov random fields the segmentation algorithm captures three important MR features, i.e. non-parametric distributions of tissue intensities, neighborhood correlations and signal inhomogeneities. Detailed simulations and real MR images demonstrate the performance of the segmentation algorithm. The impact of noise, inhomogeneity, smoothing and structure thickness is analyzed quantitatively. Even single echo MR images are well classified into gray matter, white matter, cerebrospinal fluid, scalp-bone and background. A simulated annealing and an iterated conditional modes implementation are presented. Keywords: Magnetic Resonance Imaging, Segmentation, Markov Random Fields<|reference_end|>
arxiv
@article{held2009markov, title={Markov Random Field Segmentation of Brain MR Images}, author={Karsten Held, Elena Rota Kops, Bernd J. Krause, William M. Wells III, Ron Kikinis, Hans-Wilhelm Mueller-Gaertner}, journal={IEEE Trans. Med. Imag. vol. 16, p. 878 (1997)}, year={2009}, doi={10.1109/42.650883}, archivePrefix={arXiv}, eprint={0903.3114}, primaryClass={cs.CV cond-mat.stat-mech physics.data-an physics.med-ph} }
held2009markov
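A minimal sketch of Iterated Conditional Modes (ICM), one of the two inference schemes mentioned in the abstract above (arXiv:0903.3114), on a toy Potts-style energy with a squared-intensity data term; the energy, neighborhood, and parameters are illustrative assumptions, not the paper's MR-specific model with non-parametric intensities and inhomogeneity handling.

import numpy as np

def icm(image, class_means, beta=1.0, sweeps=5):
    # Greedy MRF labeling: each pixel takes the label minimizing a data
    # cost (squared distance to the class mean) plus beta times the
    # number of disagreeing 4-neighbors.
    labels = np.abs(image[..., None] - np.asarray(class_means)).argmin(-1)
    H, W = image.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                costs = []
                for l in range(len(class_means)):
                    data = (image[i, j] - class_means[l]) ** 2
                    smooth = sum(labels[a, b] != l
                                 for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                                 if 0 <= a < H and 0 <= b < W)
                    costs.append(data + beta * smooth)
                labels[i, j] = int(np.argmin(costs))
    return labels

rng = np.random.default_rng(0)
noisy = (rng.random((8, 8)) > 0.5) + 0.3 * rng.standard_normal((8, 8))
print(icm(noisy, class_means=[0.0, 1.0]))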
arxiv-6793
0903.3126
A Generic Framework for Reasoning about Dynamic Networks of Infinite-State Processes
<|reference_start|>A Generic Framework for Reasoning about Dynamic Networks of Infinite-State Processes: We propose a framework for reasoning about unbounded dynamic networks of infinite-state processes. We propose Constrained Petri Nets (CPN) as generic models for these networks. They can be seen as Petri nets where tokens (representing occurrences of processes) are colored by values over some potentially infinite data domain such as integers, reals, etc. Furthermore, we define a logic, called CML (colored markings logic), for the description of CPN configurations. CML is a first-order logic over tokens allowing to reason about their locations and their colors. Both CPNs and CML are parametrized by a color logic allowing to express constraints on the colors (data) associated with tokens. We investigate the decidability of the satisfiability problem of CML and its applications in the verification of CPNs. We identify a fragment of CML for which the satisfiability problem is decidable (whenever it is the case for the underlying color logic), and which is closed under the computations of post and pre images for CPNs. These results can be used for several kinds of analysis such as invariance checking, pre-post condition reasoning, and bounded reachability analysis.<|reference_end|>
arxiv
@article{bouajjani2009a, title={A Generic Framework for Reasoning about Dynamic Networks of Infinite-State Processes}, author={Ahmed Bouajjani, Cezara Dragoi, Constantin Enea, Yan Jurski, Mihaela Sighireanu}, journal={Logical Methods in Computer Science, Volume 5, Issue 2 (April 22, 2009) lmcs:991}, year={2009}, doi={10.2168/LMCS-5(2:3)2009}, archivePrefix={arXiv}, eprint={0903.3126}, primaryClass={cs.LO} }
bouajjani2009a
arxiv-6794
0903.3127
Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference
<|reference_start|>Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference: In this paper we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. We generalize the Belief Propagation (BP) sum-product and max-product algorithms and the tree-reweighted (TRW) sum- and max-product algorithms (TRBP), and introduce a new set of convergent algorithms based on "convex-free-energy" and Linear-Programming (LP) relaxation as a zero-temperature limit of a convex-free-energy. The main idea of this work arises from taking a general perspective on the existing BP and TRBP algorithms while observing that they all are reductions from the basic optimization formula of $f + \sum_i h_i$ where the function $f$ is an extended-valued, strictly convex but non-smooth and the functions $h_i$ are extended-valued functions (not necessarily convex). We use tools from convex duality to present the "primal-dual ascent" algorithm which is an extension of the Bregman successive projection scheme and is designed to handle optimization of the general type $f + \sum_i h_i$. Mapping the fractional-free-energy variational principle to this framework introduces the "norm-product" message-passing. Special cases include sum-product and max-product (BP algorithms) and the TRBP algorithms. When the fractional-free-energy is set to be convex (convex-free-energy), the norm-product is globally convergent for estimating marginal probabilities and for approximating the LP-relaxation. We also introduce another branch of the norm-product, the "convex-max-product". The convex-max-product is convergent (unlike max-product) and aims at solving the LP-relaxation.<|reference_end|>
arxiv
@article{hazan2009norm-product, title={Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference}, author={Tamir Hazan and Amnon Shashua}, journal={arXiv preprint arXiv:0903.3127}, year={2009}, archivePrefix={arXiv}, eprint={0903.3127}, primaryClass={cs.AI cs.IT math.IT} }
hazan2009norm-product
arxiv-6795
0903.3131
Matrix Completion With Noise
<|reference_start|>Matrix Completion With Noise: On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.<|reference_end|>
arxiv
@article{candes2009matrix, title={Matrix Completion With Noise}, author={Emmanuel J. Candes, Yaniv Plan}, journal={arXiv preprint arXiv:0903.3131}, year={2009}, archivePrefix={arXiv}, eprint={0903.3131}, primaryClass={cs.IT math.IT} }
candes2009matrix
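A small sketch of the convex program the abstract above surveys (arXiv:0903.3131): nuclear-norm minimization subject to a data-fidelity constraint on the observed, noisy entries. The cvxpy and numpy libraries are assumed, and the problem size, sampling rate, and tolerance are illustrative choices rather than the paper's settings.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target

mask = (rng.random((n, n)) < 0.5).astype(float)   # which entries are observed
noise = 0.01 * rng.standard_normal((n, n))
observed = mask * (M + noise)

X = cp.Variable((n, n))
tol = 0.02 * np.sqrt(mask.sum())                  # proportional to the noise level
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [cp.norm(cp.multiply(mask, X) - observed, "fro") <= tol])
problem.solve()
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))  # small relative error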
arxiv-6796
0903.3163
Smart Antenna Based Broadband communication in Intelligent Transportation system
<|reference_start|>Smart Antenna Based Broadband communication in Intelligent Transportation system: This paper presents a review of the development of Intelligent Transportation Systems (ITS) worldwide and the use of smart antennas in ITS. This review also discusses common problems in ITS and proposes solutions to these problems using smart antennas.<|reference_end|>
arxiv
@article{dhar2009smart, title={Smart Antenna Based Broadband communication in Intelligent Transportation system}, author={Sourav Dhar, Debdattta Kandar, Tanushree Bose and Rabindranath Bera}, journal={arXiv preprint arXiv:0903.3163}, year={2009}, archivePrefix={arXiv}, eprint={0903.3163}, primaryClass={cs.NI} }
dhar2009smart
arxiv-6797
0903.3165
Automated Vehicle Location (AVL) Using Global Positioning System (GPS)
<|reference_start|>Automated Vehicle Location (AVL) Using Global Positioning System (GPS): This is a review paper. It describes how DGPS is helpful for lane detection and collision avoidance.<|reference_end|>
arxiv
@article{dutta2009automated, title={Automated Vehicle Location (AVL) Using Global Positioning System (GPS)}, author={Victor Dutta, R. Bera, Sourav Dhar, Jaydeep Chakravorty, Nishant Bagehel}, journal={arXiv preprint arXiv:0903.3165}, year={2009}, archivePrefix={arXiv}, eprint={0903.3165}, primaryClass={cs.NI} }
dutta2009automated
arxiv-6798
0903.3182
Finding matching initial states for equivalent NLFSRs in the Fibonacci and the Galois configurations
<|reference_start|>Finding matching initial states for equivalent NLFSRs in the Fibonacci and the Galois configurations: In this paper, a mapping between initial states of the Fibonacci and the Galois configurations of NLFSRs is established. We show how to choose initial states for the two configurations so that the resulting output sequences are equivalent.<|reference_end|>
arxiv
@article{dubrova2009finding, title={Finding matching initial states for equivalent NLFSRs in the Fibonacci and the Galois configurations}, author={Elena Dubrova}, journal={arXiv preprint arXiv:0903.3182}, year={2009}, archivePrefix={arXiv}, eprint={0903.3182}, primaryClass={cs.CR} }
dubrova2009finding
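For context on the two configurations compared in the abstract above (arXiv:0903.3182), below is a toy Fibonacci-configuration NLFSR step in Python: all stages shift, and a single nonlinear feedback value enters the last stage (in the Galois configuration, by contrast, several stages may be updated at once). The feedback function and initial state are arbitrary examples, not taken from the paper.

def fibonacci_nlfsr_step(state):
    # state[0] is the output stage; the feedback bit enters at the end.
    x = state
    feedback = x[0] ^ x[1] ^ (x[2] & x[3])   # toy nonlinear feedback function
    return x[1:] + [feedback]

s = [1, 0, 0, 1]
out = []
for _ in range(8):
    out.append(s[0])
    s = fibonacci_nlfsr_step(s)
print(out)  # [1, 0, 0, 1, 1, 1, 0, 0]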
arxiv-6799
0903.3198
TR02: State dependent oracle masks for improved dynamical features
<|reference_start|>TR02: State dependent oracle masks for improved dynamical features: Using the AURORA-2 digit recognition task, we show that recognition accuracies obtained with classical, SNR-based oracle masks can be substantially improved by using a state-dependent mask estimation technique.<|reference_end|>
arxiv
@article{gemmeke2009tr02:, title={TR02: State dependent oracle masks for improved dynamical features}, author={J. F. Gemmeke and B. Cranen}, journal={arXiv preprint arXiv:0903.3198}, year={2009}, archivePrefix={arXiv}, eprint={0903.3198}, primaryClass={cs.SD} }
gemmeke2009tr02:
arxiv-6800
0903.3204
On Generalized Minimum Distance Decoding Thresholds for the AWGN Channel
<|reference_start|>On Generalized Minimum Distance Decoding Thresholds for the AWGN Channel: We consider the Additive White Gaussian Noise channel with Binary Phase Shift Keying modulation. Our aim is to enable an algebraic hard-decision Bounded Minimum Distance decoder for a binary block code to exploit soft information obtained from the demodulator. This idea goes back to Forney and is based on treating received symbols with low reliability as erasures. This erasing at the decoder is done using a threshold: each received symbol whose reliability falls below the threshold is erased. Depending on the target overall complexity of the decoder, this pseudo-soft-decision decoding can be extended from one threshold T to z>1 thresholds T_1<...<T_z for erasing the received symbols with lowest reliability. The resulting technique is widely known as Generalized Minimum Distance decoding. In this paper we provide a means for explicit determination of the optimal threshold locations in terms of minimal decoding error probability. We do this for the single-threshold case and the general case of z>1 thresholds, starting with a geometric interpretation of the optimal threshold location problem and using an approach from Zyablov.<|reference_end|>
arxiv
@article{senger2009on, title={On Generalized Minimum Distance Decoding Thresholds for the AWGN Channel}, author={Christian Senger, Vladimir Sidorenko, Victor Zyablov}, journal={arXiv preprint arXiv:0903.3204}, year={2009}, archivePrefix={arXiv}, eprint={0903.3204}, primaryClass={cs.IT math.IT} }
senger2009on
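A minimal sketch of the single-threshold erasing step described in the abstract above (arXiv:0903.3204), for BPSK soft values over the AWGN channel; numpy is assumed, and the threshold value is an arbitrary placeholder rather than the optimal location derived in the paper.

import numpy as np

def erase_below_threshold(y, threshold):
    # Hard decisions come from the sign of the matched-filter output; any
    # position whose reliability |y_i| falls below the threshold is
    # flagged as an erasure for the bounded-minimum-distance decoder.
    hard = (y < 0).astype(int)
    erasures = np.abs(y) < threshold
    return hard, erasures

y = np.array([1.2, -0.05, -0.9, 0.3, -1.7])
bits, erased = erase_below_threshold(y, threshold=0.4)
print(bits)    # [0 1 1 0 1]
print(erased)  # [False  True False  True False]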