corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-5901 | 0901.0170 | Pedestrian Traffic: on the Quickest Path | <|reference_start|>Pedestrian Traffic: on the Quickest Path: When a large group of pedestrians moves around a corner, most pedestrians do not follow the shortest path, which is to stay as close as possible to the inner wall, but try to minimize the travel time. To do so, they accept moving along a longer path with some distance to the corner, to avoid large densities and by this succeed in maintaining a comparatively high speed. In many models of pedestrian dynamics the basic rule of motion is often either "move as far as possible toward the destination" or - reformulated - "of all coordinates accessible in this time step move to the one with the smallest distance to the destination". On top of this rule, modifications are placed to make the motion more realistic. These modifications usually focus on local behavior and neglect long-ranged effects. Compared to real pedestrians, this leads to agents in a simulation valuing the shortest path a lot better than the quickest. So, in a situation such as the movement of a large crowd around a corner, one needs an additional element in a model of pedestrian dynamics that makes the agents deviate from the rule of the shortest path. In this work it is shown how this can be achieved by using a flood fill dynamic potential field method, where during the filling process the value of a field cell is not increased by 1, but by a larger value, if it is occupied by an agent. This idea may be an obvious one; however, the tricky part - and therefore in a strict sense the contribution of this work - is a) to minimize unrealistic artifacts, as naive flood fill metrics deviate considerably from the Euclidean metric and in this respect yield large errors, b) to do this with limited computational effort, and c) to keep agents' movement at very low densities unaltered.<|reference_end|> | arxiv | @article{kretz2009pedestrian,
title={Pedestrian Traffic: on the Quickest Path},
author={Tobias Kretz},
journal={J. Stat. Mech. (2009)},
year={2009},
doi={10.1088/1742-5468/2009/03/P03012},
number={P03012},
archivePrefix={arXiv},
eprint={0901.0170},
primaryClass={physics.soc-ph cs.MA physics.comp-ph}
} | kretz2009pedestrian |
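A minimal sketch of the flood-fill dynamic potential field idea from the arxiv-5901 abstract above, assuming a 4-connected grid where 0 marks a free cell; the `penalty` value and all names are illustrative, and the paper's actual contributions (correcting the flood-fill metric toward the Euclidean one, bounding the computational effort, leaving low-density motion unaltered) are deliberately not reproduced:

```python
import heapq

def dynamic_potential_field(grid, exits, agents, penalty=10.0):
    """Flood fill outward from the exit cells (Dijkstra on the grid).
    Stepping onto an agent-occupied cell costs `penalty` instead of 1,
    so congested routes get a higher potential and are avoided."""
    rows, cols = len(grid), len(grid[0])
    occupied = set(agents)
    dist = {e: 0.0 for e in exits}
    heap = [(0.0, e) for e in exits]
    while heap:
        d, cell = heapq.heappop(heap)
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = penalty if (nr, nc) in occupied else 1.0
                if d + step < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + step
                    heapq.heappush(heap, (d + step, (nr, nc)))
    return dist  # agents then step to the neighbor with the lowest value

# toy use: 5x5 empty room, exit at the top-left corner, two agents
field = dynamic_potential_field([[0] * 5 for _ in range(5)],
                                exits=[(0, 0)], agents=[(1, 1), (1, 2)])
```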
arxiv-5902 | 0901.0179 | Techniques for Distributed Reachability Analysis with Partial Order and Symmetry based Reductions | <|reference_start|>Techniques for Distributed Reachability Analysis with Partial Order and Symmetry based Reductions: In this work we propose techniques for efficient reachability analysis of the state space (e.g., detection of bad states) using a combination of partial order and symmetry based reductions in a distributed setting. The proposed techniques are focused towards explicit state space enumeration based model-checkers like SPIN. We consider variants for both depth-first as well as breadth-first based generation of the reduced state graphs on-the-fly.<|reference_end|> | arxiv | @article{misra2009techniques,
title={Techniques for Distributed Reachability Analysis with Partial Order and
Symmetry based Reductions},
author={Janardan Misra and Suman Roy},
journal={arXiv preprint arXiv:0901.0179},
year={2009},
archivePrefix={arXiv},
eprint={0901.0179},
primaryClass={cs.DC cs.SE}
} | misra2009techniques |
arxiv-5903 | 0901.0205 | On Allocating Goods to Maximize Fairness | <|reference_start|>On Allocating Goods to Maximize Fairness: Given a set of $m$ agents and a set of $n$ items, where agent $A$ has utility $u_{A,i}$ for item $i$, our goal is to allocate items to agents to maximize fairness. Specifically, the utility of an agent is the sum of its utilities for items it receives, and we seek to maximize the minimum utility of any agent. While this problem has received much attention recently, its approximability has not been well-understood thus far: the best known approximation algorithm achieves an $\tilde{O}(\sqrt{m})$-approximation, and in contrast, the best known hardness of approximation stands at 2. Our main result is an approximation algorithm that achieves an $\tilde{O}(n^{\eps})$ approximation for any $\eps=\Omega(\log\log n/\log n)$ in time $n^{O(1/\eps)}$. In particular, we obtain poly-logarithmic approximation in quasi-polynomial time, and for any constant $\eps > 0$, we obtain $O(n^{\eps})$ approximation in polynomial time. An interesting aspect of our algorithm is that we use as a building block a linear program whose integrality gap is $\Omega(\sqrt m)$. We bypass this obstacle by iteratively using the solutions produced by the LP to construct new instances with significantly smaller integrality gaps, eventually obtaining the desired approximation. We also investigate the special case of the problem where every item has a non-zero utility for at most two agents. We show that even in this restricted setting the problem is hard to approximate up to any factor better than 2, and show a factor $(2+\eps)$-approximation algorithm running in time $poly(n,1/\eps)$ for any $\eps>0$. This special case can be cast as a graph edge orientation problem, and our algorithm can be viewed as a generalization of Eulerian orientations to weighted graphs.<|reference_end|> | arxiv | @article{chakrabarty2009on,
title={On Allocating Goods to Maximize Fairness},
author={Deeparnab Chakrabarty, Julia Chuzhoy, Sanjeev Khanna},
journal={arXiv preprint arXiv:0901.0205},
year={2009},
archivePrefix={arXiv},
eprint={0901.0205},
primaryClass={cs.DS}
} | chakrabarty2009on |
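To make the arxiv-5903 objective concrete, here is a brute-force reference implementation of the max-min allocation value for toy instances; the paper's approximation algorithm is not reproduced, and the $m^n$ enumeration and sample utilities are illustrative assumptions only:

```python
from itertools import product

def max_min_allocation(u):
    """Exhaustive max-min fair allocation: u[a][i] is agent a's utility
    for item i; tries all m**n assignments, so only for toy instances."""
    m, n = len(u), len(u[0])
    best_val, best = -1, None
    for assign in product(range(m), repeat=n):  # item i goes to assign[i]
        totals = [0] * m
        for item, agent in enumerate(assign):
            totals[agent] += u[agent][item]
        if min(totals) > best_val:
            best_val, best = min(totals), assign
    return best_val, best

val, assign = max_min_allocation([[5, 1, 3], [2, 4, 4]])
print(val, assign)  # 5 (0, 1, 1): agent 0 gets item 0, agent 1 the rest
```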
arxiv-5904 | 0901.0213 | Filtering Microarray Correlations by Statistical Literature Analysis Yields Potential Hypotheses for Lactation Research | <|reference_start|>Filtering Microarray Correlations by Statistical Literature Analysis Yields Potential Hypotheses for Lactation Research: Our results demonstrated that a previously reported protein name co-occurrence method (5-mention PubGene), which was not based on a hypothesis-testing framework, is generally statistically more significant than the 99th percentile of the Poisson distribution-based method of calculating co-occurrence. It agrees with previous methods that use natural language processing to extract protein-protein interactions from text, as more than 96% of the interactions found by natural language processing methods overlap with the results from the 5-mention PubGene method. However, less than 2% of the gene co-expressions analyzed by microarray were found from direct co-occurrence or interaction information extraction from the literature. At the same time, combining microarray and literature analyses, we derive a novel set of 7 potential functional protein-protein interactions that had not been previously described in the literature.<|reference_end|> | arxiv | @article{ling2009filtering,
title={Filtering Microarray Correlations by Statistical Literature Analysis
Yields Potential Hypotheses for Lactation Research},
author={Maurice HT Ling, Christophe Lefevre, Kevin R. Nicholas},
journal={Ling, MHT, Lefevre, C, Nicholas, KR. 2008. Filtering Microarray
Correlations by Statistical Literature Analysis Yields Potential Hypotheses
for Lactation Research. The Python Papers 3(3): 4},
year={2009},
archivePrefix={arXiv},
eprint={0901.0213},
primaryClass={cs.DL cs.DB}
} | ling2009filtering |
arxiv-5905 | 0901.0220 | Comments on "Broadcast Channels with Arbitrarily Correlated Sources" | <|reference_start|>Comments on "Broadcast Channels with Arbitrarily Correlated Sources": The Marton-Gelfand-Pinsker inner bound on the capacity region of broadcast channels was extended by Han-Costa to include arbitrarily correlated sources where the capacity region is replaced by an admissible source region. The main arguments of Han-Costa are correct but unfortunately the authors overlooked an inequality in their derivation. The corrected region is presented and the absence of the omitted inequality is shown to sometimes admit sources that are not admissible.<|reference_end|> | arxiv | @article{kramer2009comments,
title={Comments on "Broadcast Channels with Arbitrarily Correlated Sources"},
author={Gerhard Kramer and Chandra Nair},
journal={arXiv preprint arXiv:0901.0220},
year={2009},
archivePrefix={arXiv},
eprint={0901.0220},
primaryClass={cs.IT math.IT}
} | kramer2009comments |
arxiv-5906 | 0901.0222 | Dynamic Muscle Fatigue Evaluation in Virtual Working Environment | <|reference_start|>Dynamic Muscle Fatigue Evaluation in Virtual Working Environment: Musculoskeletal disorder (MSD) is one of the major health problems in mechanical work, especially in manual handling jobs. Muscle fatigue is believed to be the main reason for MSD. Posture analysis techniques have been used to expose the MSD risks of the work, but most of the conventional methods are only suitable for static posture analysis. Meanwhile, the subjective influences from the inspectors can result in differences in the risk assessment. Another disadvantage is that the evaluation has to take place in the workshop, so it is impossible to avoid some design defects before data collection in the field environment, and it is time consuming. In order to enhance the efficiency of ergonomic MSD risk evaluation and avoid subjective influences, we develop a new muscle fatigue model and a new fatigue index to evaluate human muscle fatigue during manual handling jobs in this paper. Our new fatigue model is closely related to the muscle load during the working procedure so that it can be used to evaluate the dynamic working process. This muscle fatigue model is mathematically validated and it is to be further experimentally validated and integrated into a virtual working environment to evaluate the muscle fatigue and predict the MSD risks quickly and objectively.<|reference_end|> | arxiv | @article{ma2009dynamic,
title={Dynamic Muscle Fatigue Evaluation in Virtual Working Environment},
author={Liang Ma (IRCCyN), Damien Chablat (IRCCyN), Fouad Bennis (IRCCyN), Wei
Zhang (DIE)},
journal={International Journal of Industrial Ergonomics 39, 1 (2009)
211-220},
year={2009},
doi={10.1016/j.ergon.2008.04.004},
archivePrefix={arXiv},
eprint={0901.0222},
primaryClass={cs.RO}
} | ma2009dynamic |
arxiv-5907 | 0901.0252 | MIMO decoding based on stochastic reconstruction from multiple projections | <|reference_start|>MIMO decoding based on stochastic reconstruction from multiple projections: Least squares (LS) fitting is one of the most fundamental techniques in science and engineering. It is used to estimate parameters from multiple noisy observations. In many problems the parameters are known a-priori to be bounded integer valued, or they come from a finite set of values on an arbitrary finite lattice. In this case finding the closest vector becomes an NP-hard problem. In this paper we propose a novel algorithm, the Tomographic Least Squares Decoder (TLSD), that not only solves the integer least squares (ILS) problem better than other sub-optimal techniques, but is also capable of providing the a-posteriori probability distribution for each element in the solution vector. The algorithm is based on reconstruction of the vector from multiple two-dimensional projections. The projections are carefully chosen to provide low computational complexity. Unlike other iterative techniques, such as belief propagation, the proposed algorithm has guaranteed convergence. We also provide simulated experiments comparing the algorithm to other sub-optimal algorithms.<|reference_end|> | arxiv | @article{leshem2009mimo,
title={MIMO decoding based on stochastic reconstruction from multiple
projections},
author={Amir Leshem and Jacob Goldberger},
journal={arXiv preprint arXiv:0901.0252},
year={2009},
archivePrefix={arXiv},
eprint={0901.0252},
primaryClass={cs.IT cs.LG math.IT}
} | leshem2009mimo |
arxiv-5908 | 0901.0269 | Random Linear Network Coding For Time Division Duplexing: Energy Analysis | <|reference_start|>Random Linear Network Coding For Time Division Duplexing: Energy Analysis: We study the energy performance of random linear network coding for time division duplexing channels. We assume a packet erasure channel with nodes that cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receiver to acknowledge the number of degrees of freedom, if any, that are required to decode correctly the information. Our analysis shows that, in terms of mean energy consumed, there is an optimal number of coded data packets to send before stopping to listen. This number depends on the energy needed to transmit each coded packet and the acknowledgment (ACK), probabilities of packet and ACK erasure, and the number of degrees of freedom that the receiver requires to decode the data. We show that its energy performance is superior to that of a full-duplex system. We also study the performance of our scheme when the number of coded packets is chosen to minimize the mean time to complete transmission as in [1]. Energy performance under this optimization criterion is found to be close to optimal, thus providing a good trade-off between energy and time required to complete transmissions.<|reference_end|> | arxiv | @article{lucani2009random,
title={Random Linear Network Coding For Time Division Duplexing: Energy
Analysis},
author={Daniel E. Lucani, Milica Stojanovic, Muriel M\'edard},
journal={arXiv preprint arXiv:0901.0269},
year={2009},
archivePrefix={arXiv},
eprint={0901.0269},
primaryClass={cs.IT math.IT}
} | lucani2009random |
arxiv-5909 | 0901.0275 | Physical-Layer Security: Combining Error Control Coding and Cryptography | <|reference_start|>Physical-Layer Security: Combining Error Control Coding and Cryptography: In this paper we consider tandem error control coding and cryptography in the setting of the {\em wiretap channel} due to Wyner. In a typical communications system a cryptographic application is run at a layer above the physical layer and assumes the channel is error free. However, in any real application the channels for friendly users and passive eavesdroppers are not error free and Wyner's wiretap model addresses this scenario. Using this model, we show the security of a common cryptographic primitive, i.e. a keystream generator based on linear feedback shift registers (LFSR), can be strengthened by exploiting properties of the physical layer. A passive eavesdropper can be made to experience greater difficulty in cracking an LFSR-based cryptographic system insomuch that the computational complexity of discovering the secret key increases by orders of magnitude, or is altogether infeasible. This result is shown for two fast correlation attacks originally presented by Meier and Staffelbach, in the context of channel errors due to the wiretap channel model.<|reference_end|> | arxiv | @article{harrison2009physical-layer,
title={Physical-Layer Security: Combining Error Control Coding and Cryptography},
author={Willie K Harrison and Steven W. McLaughlin},
journal={arXiv preprint arXiv:0901.0275},
year={2009},
doi={10.1109/ICC.2009.5199337},
archivePrefix={arXiv},
eprint={0901.0275},
primaryClass={cs.IT cs.CR math.IT}
} | harrison2009physical-layer |
arxiv-5910 | 0901.0290 | Offline Algorithmic Techniques for Several Content Delivery Problems in Some Restricted Types of Distributed Systems | <|reference_start|>Offline Algorithmic Techniques for Several Content Delivery Problems in Some Restricted Types of Distributed Systems: In this paper we consider several content delivery problems (broadcast and multicast, in particular) in some restricted types of distributed systems (e.g. optical Grids and wireless sensor networks with tree-like topologies). For each problem we provide efficient algorithmic techniques for computing optimal content delivery strategies. The techniques we present are offline, which means that they can be used only when full information is available and the problem parameters do not fluctuate too much.<|reference_end|> | arxiv | @article{andreica2009offline,
title={Offline Algorithmic Techniques for Several Content Delivery Problems in
Some Restricted Types of Distributed Systems},
author={Mugurel Ionut Andreica, Nicolae Tapus},
journal={Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 65-72, Bucharest, Romania, 2008. (ISSN:
2065-0701)},
year={2009},
archivePrefix={arXiv},
eprint={0901.0290},
primaryClass={cs.DS cs.NI}
} | andreica2009offline |
arxiv-5911 | 0901.0291 | An Algorithm for File Transfer Scheduling in Grid Environments | <|reference_start|>An Algorithm for File Transfer Scheduling in Grid Environments: This paper addresses the data transfer scheduling problem for Grid environments, presenting a centralized scheduler developed with dynamic and adaptive features. The algorithm offers a reservation system for user transfer requests that allocates them transfer times and bandwidth, according to the network topology and the constraints the user specified for the requests. This paper presents the projects related to the data transfer field, the design of the framework for which the scheduler was built, the main features of the scheduler, the steps for transfer requests rescheduling and two tests that illustrate the system's behavior for different types of transfer requests.<|reference_end|> | arxiv | @article{carpen-amarie2009an,
title={An Algorithm for File Transfer Scheduling in Grid Environments},
author={Alexandra Carpen-Amarie, Mugurel Ionut Andreica, Valentin Cristea},
journal={Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 33-40, Bucharest, Romania, 2008. (ISSN:
2065-0701)},
year={2009},
archivePrefix={arXiv},
eprint={0901.0291},
primaryClass={cs.NI cs.DC cs.DS}
} | carpen-amarie2009an |
arxiv-5912 | 0901.0296 | Experience versus Talent Shapes the Structure of the Web | <|reference_start|>Experience versus Talent Shapes the Structure of the Web: We use sequential large-scale crawl data to empirically investigate and validate the dynamics that underlie the evolution of the structure of the web. We find that the overall structure of the web is defined by an intricate interplay between experience or entitlement of the pages (as measured by the number of inbound hyperlinks a page already has), inherent talent or fitness of the pages (as measured by the likelihood that someone visiting the page would give a hyperlink to it), and the continual high rates of birth and death of pages on the web. We find that the web is conservative in judging talent and the overall fitness distribution is exponential, showing low variability. The small variance in talent, however, is enough to lead to experience distributions with high variance: The preferential attachment mechanism amplifies these small biases and leads to heavy-tailed power-law (PL) inbound degree distributions over all pages, as well as over pages that are of the same age. The balancing act between experience and talent on the web allows newly introduced pages with novel and interesting content to grow quickly and surpass older pages. In this regard, it is much like what we observe in high-mobility and meritocratic societies: People with entitlement continue to have access to the best resources, but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders. Finally, we show that the fitness estimates have potential practical applications in ranking query results.<|reference_end|> | arxiv | @article{kong2009experience,
title={Experience versus Talent Shapes the Structure of the Web},
author={Joseph S. Kong, Nima Sarshar, Vwani P. Roychowdhury},
journal={Proceedings of the National Academy of Sciences (PNAS), Vol. 105,
Pages 13724-13729, 2008},
year={2009},
doi={10.1073/pnas.0805921105},
archivePrefix={arXiv},
eprint={0901.0296},
primaryClass={cs.CY cs.IR physics.soc-ph}
} | kong2009experience |
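A hedged toy simulation of the interplay described in the arxiv-5912 abstract: new pages attach to existing ones with probability proportional to fitness ("talent") times in-degree ("experience"), with exponentially distributed fitness as the abstract reports; all parameters are assumptions, not the paper's fitted model:

```python
import random

def grow_web(steps, mean_fitness=1.0, links_per_page=3, seed=1):
    """Toy talent-vs-experience growth: each new page sends links to
    existing pages with probability proportional to fitness * in-degree."""
    rng = random.Random(seed)
    fitness = [rng.expovariate(1.0 / mean_fitness)]  # exponential "talent"
    indeg = [1]                                      # smoothed "experience"
    for _ in range(steps):
        weights = [f * k for f, k in zip(fitness, indeg)]
        total = sum(weights)
        for _ in range(links_per_page):
            r, acc = rng.random() * total, 0.0
            for page, w in enumerate(weights):
                acc += w
                if acc >= r:
                    indeg[page] += 1
                    break
        fitness.append(rng.expovariate(1.0 / mean_fitness))
        indeg.append(1)
    return indeg  # heavy-tailed despite the thin-tailed fitness

degrees = grow_web(2000)
```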
arxiv-5913 | 0901.0317 | Design of a P System based Artificial Graph Chemistry | <|reference_start|>Design of a P System based Artificial Graph Chemistry: Artificial Chemistries (ACs) are symbolic chemical metaphors for the exploration of Artificial Life, with specific focus on the origin of life. In this work we define a P system based artificial graph chemistry to understand the principles leading to the evolution of life-like structures in an AC setup and to develop a unified framework to characterize and classify symbolic artificial chemistries by devising appropriate formalism to capture semantic and organizational information. An extension of P systems is considered by associating probabilities with the rules, providing the topological framework for the evolution of a labeled undirected graph based molecular reaction semantics.<|reference_end|> | arxiv | @article{misra2009design,
title={Design of a P System based Artificial Graph Chemistry},
author={Janardan Misra},
journal={arXiv preprint arXiv:0901.0317},
year={2009},
archivePrefix={arXiv},
eprint={0901.0317},
primaryClass={cs.NE cs.AI}
} | misra2009design |
arxiv-5914 | 0901.0318 | Thoughts on an Unified Framework for Artificial Chemistries | <|reference_start|>Thoughts on an Unified Framework for Artificial Chemistries: Artificial Chemistries (ACs) are symbolic chemical metaphors for the exploration of Artificial Life, with specific focus on the problem of biogenesis or the origin of life. This paper presents the authors' thoughts towards defining a unified framework to characterize and classify symbolic artificial chemistries by devising appropriate formalism to capture semantic and organizational information. We identify three basic high-level abstractions in the initial proposal for this framework, viz., information, computation, and communication. We present an analysis of two important notions of information, namely, Shannon's Entropy and Algorithmic Information, and discuss inductive and deductive approaches for defining the framework.<|reference_end|> | arxiv | @article{misra2009thoughts,
title={Thoughts on an Unified Framework for Artificial Chemistries},
author={Janardan Misra},
journal={arXiv preprint arXiv:0901.0318},
year={2009},
archivePrefix={arXiv},
eprint={0901.0318},
primaryClass={cs.AI cs.IT math.IT nlin.AO}
} | misra2009thoughts |
arxiv-5915 | 0901.0339 | Resolution-based Query Answering for Semantic Access to Relational Databases: A Research Note | <|reference_start|>Resolution-based Query Answering for Semantic Access to Relational Databases: A Research Note: We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.<|reference_end|> | arxiv | @article{riazanov2009resolution-based,
title={Resolution-based Query Answering for Semantic Access to Relational
Databases: A Research Note},
author={Alexandre Riazanov},
journal={arXiv preprint arXiv:0901.0339},
year={2009},
archivePrefix={arXiv},
eprint={0901.0339},
primaryClass={cs.LO cs.DB}
} | riazanov2009resolution-based |
arxiv-5916 | 0901.0349 | Protecting infrastructure networks from cost-based attacks | <|reference_start|>Protecting infrastructure networks from cost-based attacks: It has been known that heterogeneous networks are vulnerable to the intentional removal of a small fraction of highly connected or loaded nodes, which implies that, to protect a network effectively, a few important nodes should be allocated with more defense resources than the others. However, if too many resources are allocated to the few important nodes, the numerous less-important nodes will be less protected, and these, when attacked all together, are still capable of causing devastating damage. A natural question therefore is how to efficiently distribute the limited defense resources among the network nodes such that the network damage is minimized whatever attack strategy the attacker may take. In this paper, taking into account the factor of attack cost, we revisit the problem of network security and search for efficient network defense against the cost-based attacks. The study shows that, for a general complex network, there will exist an optimal distribution of the defense resources, with which the network is well protected from cost-based attacks. Furthermore, it is found that the configuration of the optimal defense is dependent on the network parameters. Specifically, a network that has a larger size, sparser connections and a more heterogeneous structure will benefit more from the defense optimization.<|reference_end|> | arxiv | @article{wang2009protecting,
title={Protecting infrastructure networks from cost-based attacks},
author={Xingang Wang, Shuguang Guan, and Choy Heng Lai},
journal={arXiv preprint arXiv:0901.0349},
year={2009},
doi={10.1088/1367-2630/11/3/033006},
archivePrefix={arXiv},
eprint={0901.0349},
primaryClass={cs.NI}
} | wang2009protecting |
arxiv-5917 | 0901.0355 | Promotion of cooperation on networks? The myopic best response case | <|reference_start|>Promotion of cooperation on networks? The myopic best response case: We address the issue of the effects of considering a network of contacts on the emergence of cooperation on social dilemmas under myopic best response dynamics. We begin by summarizing the main features observed under less intellectually demanding dynamics, pointing out their most relevant general characteristics. Subsequently we focus on the new framework of best response. By means of an extensive numerical simulation program we show that, contrary to the rest of dynamics considered so far, best response is largely unaffected by the underlying network, which implies that, in most cases, no promotion of cooperation is found with this dynamics. We do find, however, nontrivial results differing from the well-mixed population in the case of coordination games on lattices, which we explain in terms of the formation of spatial clusters and the conditions for their advancement, subsequently discussing their relevance to other networks.<|reference_end|> | arxiv | @article{roca2009promotion,
title={Promotion of cooperation on networks? The myopic best response case},
author={Carlos P. Roca, Jose A. Cuesta and Angel Sanchez},
journal={arXiv preprint arXiv:0901.0355},
year={2009},
doi={10.1140/epjb/e2009-00189-0},
archivePrefix={arXiv},
eprint={0901.0355},
primaryClass={q-bio.PE cs.GT physics.soc-ph}
} | roca2009promotion |
arxiv-5918 | 0901.0358 | Weighted Naive Bayes Model for Semi-Structured Document Categorization | <|reference_start|>Weighted Naive Bayes Model for Semi-Structured Document Categorization: The aim of this paper is the supervised classification of semi-structured data. A formal model based on Bayesian classification is developed while addressing the integration of the document structure into classification tasks. We define what we call the structural context of occurrence for unstructured data, and we derive a recursive formulation in which parameters are used to weight the contribution of each structural element relative to the others. A simplified version of this formal model is implemented to carry out textual document classification experiments. First results show, for an ad hoc weighting strategy, that the structural context of word occurrences has a significant impact on classification results compared to the performance of a simple multinomial naive Bayes classifier. The proposed implementation competes on the Reuters-21578 data with the SVM classifier, whether or not it is combined with the splitting of structural components. These results encourage exploring the learning of acceptable weighting strategies for this model, in particular boosting strategies.<|reference_end|> | arxiv | @article{marteau2009weighted,
title={Weighted Naive Bayes Model for Semi-Structured Document Categorization},
author={Pierre-Fran\c{c}ois Marteau (VALORIA), Gildas M\'enier (VALORIA),
Eugen Popovici (VALORIA)},
journal={1st International Conference on Multidisciplinary Information
Sciences and Technologies InSciT2006, Merida : Espagne (2006)},
year={2009},
archivePrefix={arXiv},
eprint={0901.0358},
primaryClass={cs.IR}
} | marteau2009weighted |
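One simple reading of the weighted model in arxiv-5918, sketched under assumptions: a multinomial naive Bayes in which each word occurrence is scaled by a weight attached to its structural context. The paper's recursive formulation and learned weights are not reproduced; the weight map and Laplace smoothing here are illustrative choices:

```python
import math
from collections import Counter

def train_wnb(docs, labels, weight):
    """docs: list of [(word, context), ...]; each occurrence contributes
    weight[context] pseudo-counts to its class's word distribution."""
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, c in zip(docs, labels):
        for word, ctx in doc:
            counts[c][word] += weight.get(ctx, 1.0)
    vocab = {w for cnt in counts.values() for w in cnt}
    return prior, counts, vocab

def classify_wnb(doc, prior, counts, vocab, weight):
    def log_score(c):
        total = sum(counts[c].values()) + len(vocab)  # Laplace smoothing
        return math.log(prior[c]) + sum(
            weight.get(ctx, 1.0) * math.log((counts[c][word] + 1) / total)
            for word, ctx in doc)
    return max(prior, key=log_score)

w = {"title": 3.0, "body": 1.0}  # structural contexts weighted unequally
model = train_wnb([[("wheat", "title"), ("price", "body")],
                   [("coach", "title"), ("wheat", "body")]],
                  ["grain", "sport"], w)
print(classify_wnb([("wheat", "title")], *model, w))  # -> "grain"
```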
arxiv-5919 | 0901.0373 | Highly Undecidable Problems For Infinite Computations | <|reference_start|>Highly Undecidable Problems For Infinite Computations: We show that many classical decision problems about 1-counter omega-languages, context free omega-languages, or infinitary rational relations, are $\Pi_2^1$-complete, hence located at the second level of the analytical hierarchy, and "highly undecidable". In particular, the universality problem, the inclusion problem, the equivalence problem, the determinizability problem, the complementability problem, and the unambiguity problem are all $\Pi_2^1$-complete for context-free omega-languages or for infinitary rational relations. Topological and arithmetical properties of 1-counter omega-languages, context free omega-languages, or infinitary rational relations, are also highly undecidable. These very surprising results provide the first examples of highly undecidable problems about the behaviour of very simple finite machines like 1-counter automata or 2-tape automata.<|reference_end|> | arxiv | @article{finkel2009highly,
title={Highly Undecidable Problems For Infinite Computations},
author={Olivier Finkel (ELM, Lip)},
journal={RAIRO - Theoretical Informatics and Applications 43, 2 (2009)
339-364},
year={2009},
archivePrefix={arXiv},
eprint={0901.0373},
primaryClass={cs.LO cs.CC math.LO}
} | finkel2009highly |
arxiv-5920 | 0901.0401 | From Physics to Economics: An Econometric Example Using Maximum Relative Entropy | <|reference_start|>From Physics to Economics: An Econometric Example Using Maximum Relative Entropy: Econophysics, is based on the premise that some ideas and methods from physics can be applied to economic situations. We intend to show in this paper how a physics concept such as entropy can be applied to an economic problem. In so doing, we demonstrate how information in the form of observable data and moment constraints are introduced into the method of Maximum relative Entropy (MrE). A general example of updating with data and moments is shown. Two specific econometric examples are solved in detail which can then be used as templates for real world problems. A numerical example is compared to a large deviation solution which illustrates some of the advantages of the MrE method.<|reference_end|> | arxiv | @article{giffin2009from,
title={From Physics to Economics: An Econometric Example Using Maximum Relative
Entropy},
author={Adom Giffin},
journal={Physica A 388 (2009), pp. 1610-1620},
year={2009},
doi={10.1016/j.physa.2008.12.066},
archivePrefix={arXiv},
eprint={0901.0401},
primaryClass={q-fin.ST cs.IT math.IT physics.data-an physics.pop-ph stat.CO stat.ME}
} | giffin2009from |
arxiv-5921 | 0901.0492 | Transmission Capacities for Overlaid Wireless Ad Hoc Networks with Outage Constraints | <|reference_start|>Transmission Capacities for Overlaid Wireless Ad Hoc Networks with Outage Constraints: We study the transmission capacities of two coexisting wireless networks (a primary network vs. a secondary network) that operate in the same geographic region and share the same spectrum. We define transmission capacity as the product of the density of transmissions, the transmission rate, and the successful transmission probability (1 minus the outage probability). The primary (PR) network has a higher priority to access the spectrum without particular considerations for the secondary (SR) network, while the SR network limits its interference to the PR network by carefully controlling the density of its transmitters. Assuming that the nodes are distributed according to Poisson point processes and the two networks use different transmission ranges, we quantify the transmission capacities for both of these two networks and discuss their tradeoff based on asymptotic analyses. Our results show that if the PR network permits a small increase of its outage probability, the sum transmission capacity of the two networks (i.e., the overall spectrum efficiency per unit area) will be boosted significantly over that of a single network.<|reference_end|> | arxiv | @article{yin2009transmission,
title={Transmission Capacities for Overlaid Wireless Ad Hoc Networks with
Outage Constraints},
author={Changchuan Yin, Long Gao, Tie Liu, and Shuguang Cui},
journal={arXiv preprint arXiv:0901.0492},
year={2009},
doi={10.1109/ICC.2009.5199539},
archivePrefix={arXiv},
eprint={0901.0492},
primaryClass={cs.IT math.IT}
} | yin2009transmission |
arxiv-5922 | 0901.0498 | Towards the characterization of individual users through Web analytics | <|reference_start|>Towards the characterization of individual users through Web analytics: We perform an analysis of the way individual users navigate in the Web. We focus primarily on the temporal patterns with which they return to a given page. The return probability as a function of time, as well as the distribution of time intervals between consecutive visits, is measured and found to be independent of the level of activity of single users. The results indicate a rich variety of individual behaviors and seem to preclude the possibility of defining a characteristic frequency for each user in his/her visits to a single site.<|reference_end|> | arxiv | @article{goncalves2009towards,
title={Towards the characterization of individual users through Web analytics},
author={Bruno Goncalves and Jose J. Ramasco},
journal={Complex Sciences, 2247-2254 (2009)},
year={2009},
doi={10.1007/978-3-642-02469-6_102},
archivePrefix={arXiv},
eprint={0901.0498},
primaryClass={cs.HC cs.CY physics.soc-ph}
} | goncalves2009towards |
arxiv-5923 | 0901.0501 | Interprocedural Dataflow Analysis over Weight Domains with Infinite Descending Chains | <|reference_start|>Interprocedural Dataflow Analysis over Weight Domains with Infinite Descending Chains: We study generalized fixed-point equations over idempotent semirings and provide an efficient algorithm for the detection whether a sequence of Kleene's iterations stabilizes after a finite number of steps. Previously known approaches considered only bounded semirings where there are no infinite descending chains. The main novelty of our work is that we deal with semirings without the boundedness restriction. Our study is motivated by several applications from interprocedural dataflow analysis. We demonstrate how the reachability problem for weighted pushdown automata can be reduced to solving equations in the framework mentioned above and we describe a few applications to demonstrate its usability.<|reference_end|> | arxiv | @article{kühnrich2009interprocedural,
title={Interprocedural Dataflow Analysis over Weight Domains with Infinite
Descending Chains},
author={Morten K"uhnrich, Stefan Schwoon, Jiv{r}'i Srba, Stefan Kiefer},
journal={arXiv preprint arXiv:0901.0501},
year={2009},
archivePrefix={arXiv},
eprint={0901.0501},
primaryClass={cs.DS}
} | kühnrich2009interprocedural |
arxiv-5924 | 0901.0521 | On Multipath Fading Channels at High SNR | <|reference_start|>On Multipath Fading Channels at High SNR: This work studies the capacity of multipath fading channels. A noncoherent channel model is considered, where neither the transmitter nor the receiver is cognizant of the realization of the path gains, but both are cognizant of their statistics. It is shown that if the delay spread is large in the sense that the variances of the path gains decay exponentially or slower, then capacity is bounded in the signal-to-noise ratio (SNR). For such channels, capacity does not tend to infinity as the SNR tends to infinity. In contrast, if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the SNR. It is further demonstrated that if the number of paths is finite, then at high SNR capacity grows double-logarithmically with the SNR, and the capacity pre-loglog, defined as the limiting ratio of capacity to log(log(SNR)) as SNR tends to infinity, is 1 irrespective of the number of paths.<|reference_end|> | arxiv | @article{koch2009on,
title={On Multipath Fading Channels at High SNR},
author={Tobias Koch and Amos Lapidoth},
journal={arXiv preprint arXiv:0901.0521},
year={2009},
archivePrefix={arXiv},
eprint={0901.0521},
primaryClass={cs.IT math.IT}
} | koch2009on |
arxiv-5925 | 0901.0529 | Measures for classification and detection in steganalysis | <|reference_start|>Measures for classification and detection in steganalysis: Still and multi-media images are subject to transformations for compression, steganographic embedding and digital watermarking. In a major program of activities we are engaged in the modeling, design and analysis of digital content. Statistical and pattern classification techniques should be combined with understanding of run length, transform coding techniques, and also encryption techniques.<|reference_end|> | arxiv | @article{gujar2009measures,
title={Measures for classification and detection in steganalysis},
author={Sujit Gujar, C E Veni Madhavan},
journal={arXiv preprint arXiv:0901.0529},
year={2009},
archivePrefix={arXiv},
eprint={0901.0529},
primaryClass={cs.OH cs.CR}
} | gujar2009measures |
arxiv-5926 | 0901.0536 | Polar Codes: Characterization of Exponent, Bounds, and Constructions | <|reference_start|>Polar Codes: Characterization of Exponent, Bounds, and Constructions: Polar codes were recently introduced by Ar\i kan. They achieve the capacity of arbitrary symmetric binary-input discrete memoryless channels under a low complexity successive cancellation decoding strategy. The original polar code construction is closely related to the recursive construction of Reed-Muller codes and is based on the $2 \times 2$ matrix $\bigl[ \begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix} \bigr]$. It was shown by Ar\i kan and Telatar that this construction achieves an error exponent of $\frac12$, i.e., that for sufficiently large blocklengths the error probability decays exponentially in the square root of the length. It was already mentioned by Ar\i kan that in principle larger matrices can be used to construct polar codes. A fundamental question then is to see whether there exist matrices with exponent exceeding $\frac12$. We first show that any $\ell \times \ell$ matrix none of whose column permutations is upper triangular polarizes symmetric channels. We then characterize the exponent of a given square matrix and derive upper and lower bounds on achievable exponents. Using these bounds we show that there are no matrices of size less than 15 with exponents exceeding $\frac12$. Further, we give a general construction based on BCH codes which for large $n$ achieves exponents arbitrarily close to 1 and which exceeds $\frac12$ for size 16.<|reference_end|> | arxiv | @article{korada2009polar,
title={Polar Codes: Characterization of Exponent, Bounds, and Constructions},
author={Satish Babu Korada, Eren Sasoglu, Rudiger Urbanke},
journal={arXiv preprint arXiv:0901.0536},
year={2009},
archivePrefix={arXiv},
eprint={0901.0536},
primaryClass={cs.IT math.IT}
} | korada2009polar |
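A standard, self-contained illustration of the polarization phenomenon underlying arxiv-5926, for the original $2 \times 2$ kernel only (the paper's large-kernel and BCH constructions are not reproduced): on a BEC with erasure probability $z$, one polarization step yields bit-channels with Bhattacharyya parameters $2z - z^2$ and $z^2$:

```python
def polarize_bec(eps, levels):
    """Bhattacharyya parameters of the 2**levels synthetic bit-channels
    from recursive use of the kernel [[1,0],[1,1]] on a BEC(eps)."""
    zs = [eps]
    for _ in range(levels):
        zs = [z_new for z in zs for z_new in (2 * z - z * z, z * z)]
    return zs

zs = polarize_bec(0.5, 10)  # 1024 synthetic bit-channels
good = sum(z < 1e-3 for z in zs) / len(zs)
print(f"fraction of nearly noiseless channels: {good:.2f}")  # tends to 1 - eps
```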
arxiv-5927 | 0901.0541 | Linear Transformations and Restricted Isometry Property | <|reference_start|>Linear Transformations and Restricted Isometry Property: The Restricted Isometry Property (RIP) introduced by Cand\'es and Tao is a fundamental property in compressed sensing theory. It says that if a sampling matrix satisfies the RIP of certain order proportional to the sparsity of the signal, then the original signal can be reconstructed even if the sampling matrix provides a sample vector which is much smaller in size than the original signal. This short note addresses the problem of how a linear transformation will affect the RIP. This problem arises from the consideration of extending the sensing matrix and the use of compressed sensing in different bases. As an application, the result is applied to the redundant dictionary setting in compressed sensing.<|reference_end|> | arxiv | @article{ying2009linear,
title={Linear Transformations and Restricted Isometry Property},
author={Leslie Ying and Yi Ming Zou},
journal={arXiv preprint arXiv:0901.0541},
year={2009},
archivePrefix={arXiv},
eprint={0901.0541},
primaryClass={cs.IT math.IT}
} | ying2009linear |
arxiv-5928 | 0901.0573 | Asymptotic stability and capacity results for a broad family of power adjustment rules: Expanded discussion | <|reference_start|>Asymptotic stability and capacity results for a broad family of power adjustment rules: Expanded discussion: In any wireless communication environment in which a transmitter creates interference to the others, a system of non-linear equations arises. Its form (for 2 terminals) is p1=g1(p2;a1) and p2=g2(p1;a2), with p1, p2 power levels; a1, a2 quality-of-service (QoS) targets; and g1, g2 functions akin to "interference functions" in Yates (JSAC, 13(7):1341-1348, 1995). Two fundamental questions are: (1) does the system have a solution?; and if so, (2) what is it?. (Yates, 1995) shows that IF the system has a solution, AND the "interference functions" satisfy some simple properties, a "greedy" power adjustment process will always converge to a solution. We show that, if the power-adjustment functions have similar properties to those of (Yates, 1995), and satisfy a condition of the simple form gi(1,1,...,1)<1, then the system has a unique solution that can be found iteratively. As examples, feasibility conditions for macro-diversity and multiple-connection receptions are given. Informally speaking, we complement (Yates, 1995) by adding the feasibility condition it lacked. Our analysis is based on norm concepts, and the Banach's contraction-mapping principle.<|reference_end|> | arxiv | @article{rodriguez2009asymptotic,
title={Asymptotic stability and capacity results for a broad family of power
adjustment rules: Expanded discussion},
author={Virgilio Rodriguez and Rudolf Mathar and Anke Schmeink},
journal={arXiv preprint arXiv:0901.0573},
year={2009},
archivePrefix={arXiv},
eprint={0901.0573},
primaryClass={cs.IT math.FA math.IT}
} | rodriguez2009asymptotic |
arxiv-5929 | 0901.0585 | A Poissonian explanation for heavy-tails in e-mail communication | <|reference_start|>A Poissonian explanation for heavy-tails in e-mail communication: Patterns of deliberate human activity and behavior are of utmost importance in areas as diverse as disease spread, resource allocation, and emergency response. Because of its widespread availability and use, e-mail correspondence provides an attractive proxy for studying human activity. Recently, it was reported that the probability density for the inter-event time $\tau$ between consecutively sent e-mails decays asymptotically as $\tau^{-\alpha}$, with $\alpha \approx 1$. The slower than exponential decay of the inter-event time distribution suggests that deliberate human activity is inherently non-Poissonian. Here, we demonstrate that the approximate power-law scaling of the inter-event time distribution is a consequence of circadian and weekly cycles of human activity. We propose a cascading non-homogeneous Poisson process which explicitly integrates these periodic patterns in activity with an individual's tendency to continue participating in an activity. Using standard statistical techniques, we show that our model is consistent with the empirical data. Our findings may also provide insight into the origins of heavy-tailed distributions in other complex systems.<|reference_end|> | arxiv | @article{malmgren2009a,
title={A Poissonian explanation for heavy-tails in e-mail communication},
author={R. Dean Malmgren, Daniel B. Stouffer, Adilson E. Motter, Luis A.N.
Amaral},
journal={PNAS 105(47): 18153-18158 (2008)},
year={2009},
doi={10.1073/pnas.0800332105},
archivePrefix={arXiv},
eprint={0901.0585},
primaryClass={physics.soc-ph cs.CY physics.data-an}
} | malmgren2009a |
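A hedged sketch of the periodic ingredient of the arxiv-5929 model: sampling send times from a non-homogeneous Poisson process with a circadian rate via Lewis-Shedler thinning. The rate profile is an assumption, and the paper's cascading component (continued activity within a session) is omitted:

```python
import math, random

def nhpp_times(days, rate, seed=7):
    """Lewis-Shedler thinning: sample a candidate stream at constant rate
    lam_max, keep each point with probability rate(t) / lam_max."""
    rng = random.Random(seed)
    lam_max = max(rate(h / 10.0) for h in range(241))  # bound over a 24h grid
    t, horizon, times = 0.0, 24.0 * days, []
    while True:
        t += rng.expovariate(lam_max)
        if t >= horizon:
            return times
        if rng.random() < rate(t % 24.0) / lam_max:
            times.append(t)

# illustrative circadian profile (events/hour): quiet nights, busy afternoons
rate = lambda h: 0.05 + 1.5 * math.exp(-((h - 14.0) ** 2) / 8.0)
times = nhpp_times(days=30, rate=rate)
gaps = [b - a for a, b in zip(times, times[1:])]  # inter-event times
```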
arxiv-5930 | 0901.0595 | Capacity regions of two new classes of 2-receiver broadcast channels | <|reference_start|>Capacity regions of two new classes of 2-receiver broadcast channels: Motivated by a simple broadcast channel, we generalize the notions of a less noisy receiver and a more capable receiver to an essentially less noisy receiver and an essentially more capable receiver respectively. We establish the capacity regions of these classes by borrowing on existing techniques to obtain the characterization of the capacity region for certain new and interesting classes of broadcast channels. We also establish the relationships between the new classes and the existing classes.<|reference_end|> | arxiv | @article{nair2009capacity,
title={Capacity regions of two new classes of 2-receiver broadcast channels},
author={Chandra Nair},
journal={arXiv preprint arXiv:0901.0595},
year={2009},
archivePrefix={arXiv},
eprint={0901.0595},
primaryClass={cs.IT math.IT}
} | nair2009capacity |
arxiv-5931 | 0901.0597 | On the Optimal Convergence Probability of Univariate Estimation of Distribution Algorithms | <|reference_start|>On the Optimal Convergence Probability of Univariate Estimation of Distribution Algorithms: In this paper, we obtain bounds on the probability of convergence to the optimal solution for the compact Genetic Algorithm (cGA) and the Population Based Incremental Learning (PBIL). We also give a sufficient condition for convergence of these algorithms to the optimal solution and compute a range of possible values of the parameters of these algorithms for which they converge to the optimal solution with a confidence level.<|reference_end|> | arxiv | @article{rastegar2009on,
title={On the Optimal Convergence Probability of Univariate Estimation of
Distribution Algorithms},
author={Reza Rastegar},
journal={arXiv preprint arXiv:0901.0597},
year={2009},
archivePrefix={arXiv},
eprint={0901.0597},
primaryClass={cs.NE cs.AI}
} | rastegar2009on |
arxiv-5932 | 0901.0598 | A Step Forward in Studying the Compact Genetic Algorithm | <|reference_start|>A Step Forward in Studying the Compact Genetic Algorithm: The compact Genetic Algorithm (cGA) is an Estimation of Distribution Algorithm that generates offspring population according to the estimated probabilistic model of the parent population instead of using traditional recombination and mutation operators. The cGA only needs a small amount of memory; therefore, it may be quite useful in memory-constrained applications. This paper introduces a theoretical framework for studying the cGA from the convergence point of view in which, we model the cGA by a Markov process and approximate its behavior using an Ordinary Differential Equation (ODE). Then, we prove that the corresponding ODE converges to local optima and stays there. Consequently, we conclude that the cGA will converge to the local optima of the function to be optimized.<|reference_end|> | arxiv | @article{rastegar2009a,
title={A Step Forward in Studying the Compact Genetic Algorithm},
author={Reza Rastegar, Arash Hariri},
journal={Evolutionary Computation (2006),Vol 14, No 3, 277-290},
year={2009},
archivePrefix={arXiv},
eprint={0901.0598},
primaryClass={cs.NE cs.AI}
} | rastegar2009a |
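The compact GA analyzed in arxiv-5931 and arxiv-5932 is short enough to state in full; this sketch follows the standard update (sample two individuals from a probability vector, shift each probability toward the winner by 1/pop_size), with illustrative parameters and a OneMax fitness:

```python
import random

def cga(fitness, n_bits, pop_size=100, iters=20000, seed=3):
    """Compact GA: a single probability vector replaces the population.
    Sample two individuals, shift each probability by 1/pop_size toward
    the winner wherever the two individuals disagree."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(iters):
        a = [rng.random() < pi for pi in p]
        b = [rng.random() < pi for pi in p]
        if fitness(b) > fitness(a):
            a, b = b, a  # make `a` the winner
        for i in range(n_bits):
            if a[i] != b[i]:
                p[i] += (1.0 if a[i] else -1.0) / pop_size
                p[i] = min(1.0, max(0.0, p[i]))
    return p

p = cga(sum, n_bits=32)  # OneMax: fitness is the number of ones
print(sum(pi > 0.9 for pi in p), "of 32 probabilities near 1")
```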
arxiv-5933 | 0901.0608 | Multicasting correlated multi-source to multi-sink over a network | <|reference_start|>Multicasting correlated multi-source to multi-sink over a network: The problem of network coding with multicast of a single source to multisink was first studied by Ahlswede, Cai, Li and Yeung in 2000, in which they established the celebrated max-flow min-cut theorem on non-physical information flow over a network of independent channels. On the other hand, in 1980, Han studied the case with correlated multisource and a single sink from the viewpoint of polymatroidal functions, in which a necessary and sufficient condition was demonstrated for reliable transmission over the network. This paper presents an attempt to unify both cases, which leads to establishing a necessary and sufficient condition for reliable transmission over a network multicasting correlated multisource to multisink. Here, the problem of separation of source coding and channel coding is also discussed.<|reference_end|> | arxiv | @article{han2009multicasting,
title={Multicasting correlated multi-source to multi-sink over a network},
author={Te Sun Han},
journal={arXiv preprint arXiv:0901.0608},
year={2009},
archivePrefix={arXiv},
eprint={0901.0608},
primaryClass={cs.IT math.IT}
} | han2009multicasting |
arxiv-5934 | 0901.0633 | Optimal control as a graphical model inference problem | <|reference_start|>Optimal control as a graphical model inference problem: We reformulate a class of non-linear stochastic optimal control problems introduced by Todorov (2007) as a Kullback-Leibler (KL) minimization problem. As a result, the optimal control computation reduces to an inference computation and approximate inference methods can be applied to efficiently compute approximate optimal controls. We show how this KL control theory contains the path integral control method as a special case. We provide an example of a block stacking task and a multi-agent cooperative game where we demonstrate how approximate inference can be successfully applied to instances that are too complex for exact computation. We discuss the relation of the KL control approach to other inference approaches to control.<|reference_end|> | arxiv | @article{kappen2009optimal,
title={Optimal control as a graphical model inference problem},
author={B. Kappen, V. Gomez, M. Opper},
journal={arXiv preprint arXiv:0901.0633},
year={2009},
doi={10.1007/s10994-012-5278-7},
archivePrefix={arXiv},
eprint={0901.0633},
primaryClass={math.OC cs.SY}
} | kappen2009optimal |
arxiv-5935 | 0901.0643 | An Information Theoretic Analysis of Single Transceiver Passive RFID Networks | <|reference_start|>An Information Theoretic Analysis of Single Transceiver Passive RFID Networks: In this paper, we study single transceiver passive RFID networks by modeling the underlying physical system as a special cascade of a certain broadcast channel (BCC) and a multiple access channel (MAC), using a "nested codebook" structure in between. The particular application differentiates this communication setup from an ordinary cascade of a BCC and a MAC, and requires certain structures such as "nested codebooks", impurity channels or additional power constraints. We investigate this problem both for discrete alphabets, where we characterize the achievable rate region, as well as for continuous alphabets with additive Gaussian noise, where we provide the capacity region. Hence, we establish the maximal achievable error free communication rates for this particular problem which constitutes the fundamental limit that is achievable by any TDMA based RFID protocol and the achievable rate region for any RFID protocol for the case of continuous alphabets under additive Gaussian noise.<|reference_end|> | arxiv | @article{altug2009an,
title={An Information Theoretic Analysis of Single Transceiver Passive RFID
Networks},
author={Yucel Altug, S. Serdar Kozat, M. Kivanc Mihcak},
journal={arXiv preprint arXiv:0901.0643},
year={2009},
archivePrefix={arXiv},
eprint={0901.0643},
primaryClass={cs.IT math.IT}
} | altug2009an |
arxiv-5936 | 0901.0702 | Multidimensional Flash Codes | <|reference_start|>Multidimensional Flash Codes: Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different levels corresponding to the number of electrons it contains. Increasing the cell level is easy; however, reducing a cell level forces all the other cells in the same block to be erased. This erasing operation is undesirable and therefore has to be used as infrequently as possible. We consider the problem of designing codes for this purpose, where k bits are stored using a block of n cells with q levels each. The goal is to maximize the number of bit writes before an erase operation is required. We present an efficient construction of codes that can store an arbitrary number of bits. Our construction can be viewed as an extension to multiple dimensions of the earlier work of Jiang and Bruck, where single-dimensional codes that can store only 2 bits were proposed.<|reference_end|> | arxiv | @article{yaakobi2009multidimensional,
title={Multidimensional Flash Codes},
author={Eitan Yaakobi, Alexander Vardy, Paul H. Siegel, and Jack K. Wolf},
journal={arXiv preprint arXiv:0901.0702},
year={2009},
archivePrefix={arXiv},
eprint={0901.0702},
primaryClass={cs.IT math.IT}
} | yaakobi2009multidimensional |
arxiv-5937 | 0901.0733 | Contextual hypotheses and semantics of logic programs | <|reference_start|>Contextual hypotheses and semantics of logic programs: Logic programming has developed as a rich field, built over a logical substratum whose main constituent is a nonclassical form of negation, sometimes coexisting with classical negation. The field has seen the advent of a number of alternative semantics, with Kripke-Kleene semantics, the well-founded semantics, the stable model semantics, and the answer-set semantics standing out as the most successful. We show that all aforementioned semantics are particular cases of a generic semantics, in a framework where classical negation is the unique form of negation and where the literals in the bodies of the rules can be `marked' to indicate that they can be the targets of hypotheses. A particular semantics then amounts to choosing a particular marking scheme and choosing a particular set of hypotheses. When a literal belongs to the chosen set of hypotheses, all marked occurrences of that literal in the body of a rule are assumed to be true, whereas the occurrences of that literal that have not been marked in the body of the rule are to be derived in order to contribute to the firing of the rule. Hence the notion of hypothetical reasoning that is presented in this framework is not based on making global assumptions, but more subtly on making local, contextual assumptions, taking effect as indicated by the chosen marking scheme on the basis of the chosen set of hypotheses. Our approach offers a unified view on the various semantics proposed in logic programming, classical in that only classical negation is used, and links the semantics of logic programs to mechanisms that endow rule-based systems with the power to harness hypothetical reasoning.<|reference_end|> | arxiv | @article{martin2009contextual,
title={Contextual hypotheses and semantics of logic programs},
author={\'Eric A. Martin},
journal={arXiv preprint arXiv:0901.0733},
year={2009},
archivePrefix={arXiv},
eprint={0901.0733},
primaryClass={cs.LO cs.AI}
} | martin2009contextual |
arxiv-5938 | 0901.0734 | SPARLS: A Low Complexity Recursive $\mathcalL_1$-Regularized Least Squares Algorithm | <|reference_start|>SPARLS: A Low Complexity Recursive $\mathcalL_1$-Regularized Least Squares Algorithm: We develop a Recursive $\mathcal{L}_1$-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization type algorithm. Simulation studies in the context of channel estimation, employing multi-path wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely-used Recursive Least Squares (RLS) algorithm, in terms of both mean squared error (MSE) and computational complexity.<|reference_end|> | arxiv | @article{babadi2009sparls:,
title={SPARLS: A Low Complexity Recursive $\mathcal{L}_1$-Regularized Least
Squares Algorithm},
author={Behtash Babadi, Nicholas Kalouptsidis and Vahid Tarokh},
journal={arXiv preprint arXiv:0901.0734},
year={2009},
archivePrefix={arXiv},
eprint={0901.0734},
primaryClass={cs.IT math.IT}
} | babadi2009sparls: |
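The exact low-complexity EM recursion of SPARLS (arxiv-5938) is not reproduced here; as a hedged stand-in, the sketch below solves the same $\mathcal{L}_1$-regularized least-squares objective by plain iterative soft thresholding (a proximal-gradient update), with illustrative problem sizes:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_least_squares(A, y, lam, iters=500):
    """Minimize 0.5*||y - A w||^2 + lam*||w||_1 by iterative soft
    thresholding; step size 1/L, with L the gradient's Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = soft(w + A.T @ (y - A @ w) / L, lam / L)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))        # underdetermined: sparse tap vector
w_true = np.zeros(200)
w_true[rng.choice(200, size=5, replace=False)] = 1.0
y = A @ w_true + 0.01 * rng.standard_normal(80)
w_hat = l1_least_squares(A, y, lam=0.1)   # recovers the 5-sparse support
```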
arxiv-5939 | 0901.0749 | Quantized Compressive Sensing | <|reference_start|>Quantized Compressive Sensing: We study the average distortion introduced by scalar, vector, and entropy coded quantization of compressive sensing (CS) measurements. The asymptotic behavior of the underlying quantization schemes is either quantified exactly or characterized via bounds. We adapt two benchmark CS reconstruction algorithms to accommodate quantization errors, and empirically demonstrate that these methods significantly reduce the reconstruction distortion when compared to standard CS techniques.<|reference_end|> | arxiv | @article{dai2009quantized,
title={Quantized Compressive Sensing},
author={Wei Dai, Hoa Vinh Pham, and Olgica Milenkovic},
journal={arXiv preprint arXiv:0901.0749},
year={2009},
archivePrefix={arXiv},
eprint={0901.0749},
primaryClass={cs.IT math.IT}
} | dai2009quantized |
arxiv-5940 | 0901.0753 | Distributed Preemption Decisions: Probabilistic Graphical Model, Algorithm and Near-Optimality | <|reference_start|>Distributed Preemption Decisions: Probabilistic Graphical Model, Algorithm and Near-Optimality: Cooperative decision making is a vision of future network management and control. Distributed connection preemption is an important example where nodes can make intelligent decisions on allocating resources and controlling traffic flows for multi-class service networks. A challenge is that nodal decisions are spatially dependent as traffic flows trespass multiple nodes in a network. Hence the performance-complexity trade-off becomes important, i.e., how accurate decisions are versus how much information is exchanged among nodes. Connection preemption is known to be NP-complete. Centralized preemption is optimal but computationally intractable. Decentralized preemption is computationally efficient but may result in a poor performance. This work investigates distributed preemption where nodes decide whether and which flows to preempt using only local information exchange with neighbors. We develop, based on the probabilistic graphical models, a near-optimal distributed algorithm. The algorithm is used by each node to make collectively near-optimal preemption decisions. We study trade-offs between near-optimal performance and complexity that corresponds to the amount of information-exchange of the distributed algorithm. The algorithm is validated by both analysis and simulation.<|reference_end|> | arxiv | @article{jeon2009distributed,
title={Distributed Preemption Decisions: Probabilistic Graphical Model,
Algorithm and Near-Optimality},
author={Sung-eok Jeon and Chuanyi Ji},
journal={arXiv preprint arXiv:0901.0753},
year={2009},
archivePrefix={arXiv},
eprint={0901.0753},
primaryClass={cs.LG}
} | jeon2009distributed |
arxiv-5941 | 0901.0760 | A Theoretical Analysis of Joint Manifolds | <|reference_start|>A Theoretical Analysis of Joint Manifolds: The emergence of low-cost sensor architectures for diverse modalities has made it possible to deploy sensor arrays that capture a single event from a large number of vantage points and using multiple modalities. In many scenarios, these sensors acquire very high-dimensional data such as audio signals, images, and video. To cope with such high-dimensional data, we typically rely on low-dimensional models. Manifold models provide a particularly powerful model that captures the structure of high-dimensional data when it is governed by a low-dimensional set of parameters. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that simple algorithms can exploit the joint manifold structure to improve their performance on standard signal processing applications. Additionally, recent results concerning dimensionality reduction for manifolds enable us to formulate a network-scalable data compression scheme that uses random projections of the sensed data. This scheme efficiently fuses the data from all sensors through the addition of such projections, regardless of the data modalities and dimensions.<|reference_end|> | arxiv | @article{davenport2009a,
title={A Theoretical Analysis of Joint Manifolds},
author={Mark A. Davenport, Chinmay Hegde, Marco F. Duarte, and Richard G.
Baraniuk},
journal={arXiv preprint arXiv:0901.0760},
year={2009},
number={TREE0901, Department of Electrical and Computer Engineering, Rice
University},
archivePrefix={arXiv},
eprint={0901.0760},
primaryClass={cs.LG cs.CV}
} | davenport2009a |
arxiv-5942 | 0901.0763 | Distributed Power Allocation in Multi-User Multi-Channel Relay Networks | <|reference_start|>Distributed Power Allocation in Multi-User Multi-Channel Relay Networks: This paper has been withdrawn by the authors as they feel it inappropriate to publish this paper for the time being.<|reference_end|> | arxiv | @article{ren2009distributed,
title={Distributed Power Allocation in Multi-User Multi-Channel Relay Networks},
author={Shaolei Ren and Mihaela van der Schaar},
journal={arXiv preprint arXiv:0901.0763},
year={2009},
archivePrefix={arXiv},
eprint={0901.0763},
primaryClass={cs.IT math.IT}
} | ren2009distributed |
arxiv-5943 | 0901.0786 | Approximate inference on planar graphs using Loop Calculus and Belief Propagation | <|reference_start|>Approximate inference on planar graphs using Loop Calculus and Belief Propagation: We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006) allows one to express the exact partition function of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in (Chertkov et al., 2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze the performance of the algorithm for the partition function approximation for models with binary variables and pairwise interactions on grids and other planar graphs. We study in detail both the loop series and the equivalent Pfaffian series and show that the first term of the Pfaffian series for the general, intractable planar model can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.<|reference_end|> | arxiv | @article{gómez2009approximate,
title={Approximate inference on planar graphs using Loop Calculus and Belief
Propagation},
author={V. Gómez, H. J. Kappen, M. Chertkov},
journal={arXiv preprint arXiv:0901.0786},
year={2009},
archivePrefix={arXiv},
eprint={0901.0786},
primaryClass={cs.AI}
} | gómez2009approximate |
arxiv-5944 | 0901.0824 | A Characterization of Max-Min SIR-Balanced Power Allocation with Applications | <|reference_start|>A Characterization of Max-Min SIR-Balanced Power Allocation with Applications: We consider a power-controlled wireless network with an established network topology in which the communication links (transmitter-receiver pairs) are corrupted by the co-channel interference and background noise. We have fairly general power constraints since the vector of transmit powers is confined to belong to an arbitrary convex polytope. The interference is completely determined by a so-called gain matrix. Assuming irreducibility of this gain matrix, we provide an elegant characterization of the max-min SIR-balanced power allocation under such general power constraints. This characterization gives rise to two types of algorithms for computing the max-min SIR-balanced power allocation. One of the algorithms is a utility-based power control algorithm to maximize a weighted sum of the utilities of the link SIRs. Our results show how to choose the weight vector and utility function so that the utility-based solution is equal to the solution of the max-min SIR-balancing problem. The algorithm is not amenable to distributed implementation as the weights are global variables. In order to mitigate the problem of computing the weight vector in distributed wireless networks, we point out a saddle point characterization of the Perron root of some extended gain matrices and discuss how this characterization can be used in the design of algorithms in which each link iteratively updates its weight vector in parallel to the power control recursion. Finally, the paper provides a basis for the development of distributed power control and beamforming algorithms to find a global solution of the max-min SIR-balancing problem.<|reference_end|> | arxiv | @article{stańczak2009a,
title={A Characterization of Max-Min SIR-Balanced Power Allocation with
Applications},
author={Sławomir Stańczak and Michał Kaliszan and Nicholas Bambos and
Marcin Wiczanowski},
journal={arXiv preprint arXiv:0901.0824},
year={2009},
archivePrefix={arXiv},
eprint={0901.0824},
primaryClass={cs.IT math.IT}
} | stańczak2009a |
arxiv-5945 | 0901.0825 | A new muscle fatigue and recovery model and its ergonomics application in human simulation | <|reference_start|>A new muscle fatigue and recovery model and its ergonomics application in human simulation: Although automatic techniques have been employed in manufacturing industries to increase productivity and efficiency, there are still many manual handling jobs, especially in assembly and maintenance. In these jobs, musculoskeletal disorders (MSDs) are one of the major health problems due to overload and cumulative physical fatigue. In combination with conventional posture analysis techniques, digital human modelling and simulation (DHM) techniques have been developed and commercialized to evaluate the potential physical exposures. However, those ergonomics analysis tools are mainly based on posture analysis techniques, and there is still no fatigue index available in the commercial software to evaluate physical fatigue easily and quickly. In this paper, a new muscle fatigue and recovery model is proposed and extended to evaluate joint fatigue level in manual handling jobs. A special application case is described and analyzed using digital human simulation techniques.<|reference_end|> | arxiv | @article{ma2009a,
title={A new muscle fatigue and recovery model and its ergonomics application
in human simulation},
author={Liang Ma (IRCCyN), Damien Chablat (IRCCyN), Fouad Bennis (IRCCyN), Wei
Zhang (DIE), François Guillaume (EADS)},
journal={arXiv preprint arXiv:0901.0825},
year={2009},
archivePrefix={arXiv},
eprint={0901.0825},
primaryClass={cs.RO}
} | ma2009a |
arxiv-5946 | 0901.0834 | Simple Channel Coding Bounds | <|reference_start|>Simple Channel Coding Bounds: New channel coding converse and achievability bounds are derived for a single use of an arbitrary channel. Both bounds are expressed using a quantity called the "smooth 0-divergence", which is a generalization of Renyi's divergence of order 0. The bounds are also studied in the limit of large block-lengths. In particular, they combine to give a general capacity formula which is equivalent to the one derived by Verdu and Han.<|reference_end|> | arxiv | @article{wang2009simple,
title={Simple Channel Coding Bounds},
author={Ligong Wang, Roger Colbeck and Renato Renner},
journal={arXiv preprint arXiv:0901.0834},
year={2009},
archivePrefix={arXiv},
eprint={0901.0834},
primaryClass={cs.IT math.IT}
} | wang2009simple |
arxiv-5947 | 0901.0858 | Weighted Well-Covered Graphs without Cycles of Length 4, 5, 6 and 7 | <|reference_start|>Weighted Well-Covered Graphs without Cycles of Length 4, 5, 6 and 7: A graph is well-covered if every maximal independent set has the same cardinality. The recognition problem of well-covered graphs is known to be co-NP-complete. Let w be a weight function defined on the vertices of G. Then G is w-well-covered if all maximal independent sets of G are of the same weight. The set of weight functions w for which a graph is w-well-covered is a vector space. We prove that finding the vector space of weight functions under which an input graph is w-well-covered can be done in polynomial time, if the input graph does not contain cycles of length 4, 5, 6 and 7.<|reference_end|> | arxiv | @article{levit2009weighted,
title={Weighted Well-Covered Graphs without Cycles of Length 4, 5, 6 and 7},
author={Vadim E. Levit and David Tankus},
journal={arXiv preprint arXiv:0901.0858},
year={2009},
archivePrefix={arXiv},
eprint={0901.0858},
primaryClass={cs.DM cs.CC}
} | levit2009weighted |
arxiv-5948 | 0901.0869 | On the Complexity of Deciding Call-by-Need | <|reference_start|>On the Complexity of Deciding Call-by-Need: In a recent paper we introduced a new framework for the study of call by need computations to normal form and root-stable form in term rewriting. Using elementary tree automata techniques and ground tree transducers we obtained simple decidability proofs for classes of rewrite systems that are much larger than earlier classes defined using the complicated sequentiality concept. In this paper we show that we can do without ground tree transducers in order to arrive at decidability proofs that are phrased in direct tree automata constructions. This allows us to derive better complexity bounds.<|reference_end|> | arxiv | @article{durand2009on,
title={On the Complexity of Deciding Call-by-Need},
author={Irène Durand (LaBRI), Aart Middeldorp},
journal={arXiv preprint arXiv:0901.0869},
year={2009},
archivePrefix={arXiv},
eprint={0901.0869},
primaryClass={cs.LO cs.PL}
} | durand2009on |
arxiv-5949 | 0901.0886 | Developments in ROOT I/O and trees | <|reference_start|>Developments in ROOT I/O and trees: For the last several months the main focus of development in the ROOT I/O package has been code consolidation and performance improvements. Access to remote files is affected both by bandwidth and latency. We introduced a pre-fetch mechanism to minimize the number of transactions between client and server and hence reduce the effect of latency. We will review the implementation and how well it works in different conditions (gain of an order of magnitude for remote file access). We will also review new utilities, including a faster implementation of TTree cloning (gain of an order of magnitude), a generic mechanism for object references, and a new entry list mechanism tuned both for small and large numbers of selections. In addition to reducing the coupling with the core module and becoming its own library (libRIO) (as part of the general restructuring of the ROOT libraries), the I/O package has been enhanced in the areas of XML and SQL support, thread safety, schema evolution, TTreeFormula, and many other areas. We will also discuss various ways in which ROOT will be able to benefit from multi-core architectures to improve I/O performance.<|reference_end|> | arxiv | @article{brun2009developments,
title={Developments in ROOT I/O and trees},
author={R. Brun (CERN), P. Canal (Fermilab), M. Frank (CERN), A. Kreshuk
(CERN), S. Linev (Darmstadt, GSI), P. Russo (Fermilab), F. Rademakers (CERN)},
journal={J.Phys.Conf.Ser.119:042006,2008},
year={2009},
doi={10.1088/1742-6596/119/4/042006},
archivePrefix={arXiv},
eprint={0901.0886},
primaryClass={cs.OH}
} | brun2009developments |
arxiv-5950 | 0901.0911 | Fault Attacks on RSA Public Keys: Left-To-Right Implementations are also Vulnerable | <|reference_start|>Fault Attacks on RSA Public Keys: Left-To-Right Implementations are also Vulnerable: Following fault-injection attacks on RSA and the corresponding countermeasures, recent works have addressed the need to protect RSA public elements against fault attacks. We provide here an extension of a recent attack based on corrupting the public modulus. The difficulty of decomposing the "Left-To-Right" exponentiation into partial multiplications is overcome by modifying the public modulus to a number with known factorization. This fault model is justified here by a complete study of faulty prime numbers with a fixed size. The good success rate of this attack combined with its practicability raises the question of using faults for changing algebraic properties of finite field based cryptosystems.<|reference_end|> | arxiv | @article{berzati2009fault,
title={Fault Attacks on RSA Public Keys: Left-To-Right Implementations are also
Vulnerable},
author={Alexandre Berzati (LETI, PRISM), Cécile Canovas (LETI),
Jean-Guillaume Dumas (LJK), Louis Goubin (PRISM)},
journal={RSA Conference 2009, Cryptographers' Track, San Francisco : United
States (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.0911},
primaryClass={cs.CR}
} | berzati2009fault |
arxiv-5951 | 0901.0930 | An \Omega(n log n) lower bound for computing the sum of even-ranked elements | <|reference_start|>An \Omega(n log n) lower bound for computing the sum of even-ranked elements: Given a sequence A of 2n real numbers, the Even-Rank-Sum problem asks for the sum of the n values that are at the even positions in the sorted order of the elements in A. We prove that, in the algebraic computation-tree model, this problem has time complexity \Theta(n log n). This solves an open problem posed by Michael Shamos at the Canadian Conference on Computational Geometry in 2008.<|reference_end|> | arxiv | @article{mörig2009an,
title={An \Omega(n log n) lower bound for computing the sum of even-ranked
elements},
author={Marc Mörig, Dieter Rautenbach, Michiel Smid, Jan Tusch},
journal={arXiv preprint arXiv:0901.0930},
year={2009},
archivePrefix={arXiv},
eprint={0901.0930},
primaryClass={cs.DS}
} | mörig2009an |
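For the entry above, the O(n log n) upper bound is the easy half of the Θ(n log n) result: sort, then add every second element. A minimal reference implementation follows, assuming "even positions" means ranks 2, 4, ... in 1-based sorted order, which matches the abstract; the lower bound is the paper's actual contribution and is not reflected here.

```python
def even_rank_sum(a):
    """Sum of the n values at even ranks (2nd, 4th, ...) in the sorted
    order of a sequence of 2n reals; O(n log n), dominated by sorting."""
    s = sorted(a)
    return sum(s[1::2])          # 0-based indices 1, 3, ... = even ranks

assert even_rank_sum([4.0, 1.0, 3.0, 2.0]) == 2.0 + 4.0   # ranks 2 and 4
```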
arxiv-5952 | 0901.0948 | A New Universal Random-Coding Bound for Average Probability Error Exponent for Multiple-Access Channels | <|reference_start|>A New Universal Random-Coding Bound for Average Probability Error Exponent for Multiple-Access Channels: In this work, a new upper bound for average error probability of a two-user discrete memoryless (DM) multiple-access channel (MAC) is derived. This bound can be universally obtained for all discrete memoryless MACs with given input and output alphabets. This is the first bound of this type that explicitly uses the method of expurgation. It is shown that the exponent of this bound is greater than or equal to those of previously known bounds.<|reference_end|> | arxiv | @article{nazari2009a,
title={A New Universal Random-Coding Bound for Average Probability Error
Exponent for Multiple-Access Channels},
author={Ali Nazari, Achilleas Anastasopoulos and S. Sandeep Pradhan},
journal={arXiv preprint arXiv:0901.0948},
year={2009},
archivePrefix={arXiv},
eprint={0901.0948},
primaryClass={cs.IT math.IT}
} | nazari2009a |
arxiv-5953 | 0901.1043 | The Symmetries of the $\pi$-metric | <|reference_start|>The Symmetries of the $\pi$-metric: Let V be an n-dimensional vector space over a finite field F_q. We consider on V the $\pi$-metric recently introduced by K. Feng, L. Xu and F. J. Hickernell. In this short note we give a complete description of the group of symmetries of V under the $\pi$-metric.<|reference_end|> | arxiv | @article{alves2009the,
title={The Symmetries of the $\pi$-metric},
author={Marcelo Muniz S. Alves and Luciano Panek},
journal={arXiv preprint arXiv:0901.1043},
year={2009},
archivePrefix={arXiv},
eprint={0901.1043},
primaryClass={cs.IT cs.DM math.CO math.IT math.MG}
} | alves2009the |
arxiv-5954 | 0901.1062 | Identification with Encrypted Biometric Data | <|reference_start|>Identification with Encrypted Biometric Data: Biometrics make human identification possible with a sample of a biometric trait and an associated database. Classical identification techniques lead to privacy concerns. This paper introduces a new method to identify someone using his biometrics in an encrypted way. Our construction combines Bloom Filters with Storage and Locality-Sensitive Hashing. We apply this error-tolerant scheme, in a Hamming space, to achieve biometric identification in an efficient way. This is the first non-trivial identification scheme dealing with fuzziness and encrypted data.<|reference_end|> | arxiv | @article{bringer2009identification,
title={Identification with Encrypted Biometric Data},
author={Julien Bringer, Hervé Chabanne and Bruno Kindarji},
journal={arXiv preprint arXiv:0901.1062},
year={2009},
archivePrefix={arXiv},
eprint={0901.1062},
primaryClass={cs.CR}
} | bringer2009identification |
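As background for the entry above: the classical Locality-Sensitive Hashing family for the Hamming metric is bit sampling. The sketch below shows only that ingredient, on plaintext bit vectors; the Bloom-filter storage and the encryption layer of the actual scheme are omitted, and the table and sample sizes are illustrative assumptions.

```python
import random

def make_bit_sampler(dim, k, seed):
    """Classical LSH for the Hamming metric: sample k fixed coordinates."""
    idx = random.Random(seed).sample(range(dim), k)
    return lambda v: tuple(v[i] for i in idx)

def build_index(templates, dim, k=16, tables=20):
    hashers = [make_bit_sampler(dim, k, s) for s in range(tables)]
    index = [{} for _ in hashers]
    for ident, v in templates.items():
        for h, table in zip(hashers, index):
            table.setdefault(h(v), set()).add(ident)
    return hashers, index

def identify(sample, hashers, index):
    """Candidate identities: vectors close in Hamming distance collide
    with the noisy sample in at least one table with high probability."""
    cands = set()
    for h, table in zip(hashers, index):
        cands |= table.get(h(sample), set())
    return cands
```

Tolerance to fuzziness comes from using many tables: a template within small Hamming distance of the query is unlikely to differ on all sampled coordinate sets at once.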
arxiv-5955 | 0901.1084 | When do nonlinear filters achieve maximal accuracy? | <|reference_start|>When do nonlinear filters achieve maximal accuracy?: The nonlinear filter for an ergodic signal observed in white noise is said to achieve maximal accuracy if the stationary filtering error vanishes as the signal to noise ratio diverges. We give a general characterization of the maximal accuracy property in terms of various systems theoretic notions. When the signal state space is a finite set explicit necessary and sufficient conditions are obtained, while the linear Gaussian case reduces to a classic result of Kwakernaak and Sivan (1972).<|reference_end|> | arxiv | @article{van handel2009when,
title={When do nonlinear filters achieve maximal accuracy?},
author={Ramon van Handel},
journal={SIAM J. Control Optim. 48, 3151-3168 (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.1084},
primaryClass={math.PR cs.IT math.IT}
} | van handel2009when |
arxiv-5956 | 0901.1095 | FAIR: Fuzzy-based Aggregation providing In-network Resilience for real-time Wireless Sensor Networks | <|reference_start|>FAIR: Fuzzy-based Aggregation providing In-network Resilience for real-time Wireless Sensor Networks: This work introduces FAIR, a novel framework for Fuzzy-based Aggregation providing In-network Resilience for Wireless Sensor Networks. FAIR addresses the possibility of malicious aggregator nodes manipulating data. It provides data-integrity based on a trust level of the WSN response and it tolerates link or node failures. Compared to available solutions, it offers a general aggregation model and makes the trust level visible to the querier. We classify the proposed approach as complementary to protocols ensuring resilience against sensor leaf nodes providing faulty data. Thanks to our flexible resilient framework and due to the use of Fuzzy Inference Schemes, we achieve promising results within a short design cycle.<|reference_end|> | arxiv | @article{de cristofaro2009fair:,
title={FAIR: Fuzzy-based Aggregation providing In-network Resilience for
real-time Wireless Sensor Networks},
author={Emiliano De Cristofaro, Jens-Matthias Bohli, Dirk Westhoff},
journal={arXiv preprint arXiv:0901.1095},
year={2009},
archivePrefix={arXiv},
eprint={0901.1095},
primaryClass={cs.CR}
} | de cristofaro2009fair: |
arxiv-5957 | 0901.1123 | A High Dynamic Range 3-Moduli-Set with Efficient Reverse Converter | <|reference_start|>A High Dynamic Range 3-Moduli-Set with Efficient Reverse Converter: Residue Number System (RNS) is a valuable tool for fast and parallel arithmetic. It has a wide application in digital signal processing, fault tolerant systems, etc. In this work, we introduce the 3-moduli set {2^n, 2^{2n}-1, 2^{2n}+1} and propose its residue to binary converter using the Chinese Remainder Theorem. We present its simple hardware implementation that mainly includes one Carry Save Adder (CSA) and a Modular Adder (MA). We compare the performance and area utilization of our reverse converter to the reverse converters of the moduli sets {2^n-1, 2^n, 2^n+1, 2^{2n}+1} and {2^n-1, 2^n, 2^n+1, 2^n-2^{(n+1)/2}+1, 2^n+2^{(n+1)/2}+1} that have the same dynamic range, and we demonstrate that our architecture is better in terms of performance and area utilization. Also, we show that our reverse converter is faster than the reverse converter of {2^n-1, 2^n, 2^n+1} for dynamic ranges like 8-bit, 16-bit, 32-bit and 64-bit; however, it requires more area.<|reference_end|> | arxiv | @article{hariri2009a,
title={A High Dynamic Range 3-Moduli-Set with Efficient Reverse Converter},
author={Arash Hariri, K. Navi, Reza Rastegar},
journal={Computers & Mathematics with Applications (2008), Vol 55, No 4,
660-668},
year={2009},
archivePrefix={arXiv},
eprint={0901.1123},
primaryClass={cs.AR cs.DC}
} | hariri2009a |
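Functionally, any reverse (residue-to-binary) converter computes a Chinese Remainder reconstruction. Below is a plain-software CRT reference model for the moduli set {2^n, 2^{2n}-1, 2^{2n}+1} from the entry above; it is a functional sketch only and does not reflect the paper's CSA/modular-adder hardware architecture.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder reconstruction for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(Mi, -1, m): modular inverse
    return x % M

n = 8
moduli = [2**n, 2**(2*n) - 1, 2**(2*n) + 1]    # pairwise coprime
X = 123456789                                  # within the 2^n*(2^{4n}-1) range
assert crt([X % m for m in moduli], moduli) == X
```

Coprimality holds because 2^{2n}-1 and 2^{2n}+1 are odd numbers differing by 2, and both are coprime to the power of two, giving the dynamic range 2^n(2^{4n}-1).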
arxiv-5958 | 0901.1140 | On Profit-Maximizing Pricing for the Highway and Tollbooth Problems | <|reference_start|>On Profit-Maximizing Pricing for the Highway and Tollbooth Problems: In the \emph{tollbooth problem}, we are given a tree $\bT=(V,E)$ with $n$ edges, and a set of $m$ customers, each of whom is interested in purchasing a path on the tree. Each customer has a fixed budget, and the objective is to price the edges of $\bT$ such that the total revenue made by selling the paths to the customers that can afford them is maximized. An important special case of this problem, known as the \emph{highway problem}, is when $\bT$ is restricted to be a line. For the tollbooth problem, we present a randomized $O(\log n)$-approximation, improving on the current best $O(\log m)$-approximation. We also study a special case of the tollbooth problem, when all the paths that customers are interested in purchasing go towards a fixed root of $\bT$. In this case, we present an algorithm that returns a $(1-\epsilon)$-approximation, for any $\epsilon > 0$, and runs in quasi-polynomial time. On the other hand, we rule out the existence of an FPTAS by showing that even for the line case, the problem is strongly NP-hard. Finally, we show that in the \emph{coupon model}, when we allow some items to be priced below zero to improve the overall profit, the problem becomes even APX-hard.<|reference_end|> | arxiv | @article{elbassioni2009on,
title={On Profit-Maximizing Pricing for the Highway and Tollbooth Problems},
author={Khaled Elbassioni, Rajiv Raman, Saurabh Ray, René Sitters},
journal={arXiv preprint arXiv:0901.1140},
year={2009},
doi={10.1007/978-3-642-04645-2_25},
archivePrefix={arXiv},
eprint={0901.1140},
primaryClass={cs.DS cs.GT}
} | elbassioni2009on |
arxiv-5959 | 0901.1144 | Bayesian Inference Based on Stationary Fokker-Planck Sampling | <|reference_start|>Bayesian Inference Based on Stationary Fokker-Planck Sampling: A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the Stationary Fokker--Planck (SFP) approach to sample from the posterior density. Stationary Fokker--Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. By the analytical marginals efficient learning methods in the context of Artificial Neural Networks are outlined. Off--line and incremental Bayesian inference and Maximum Likelihood Estimation from the posterior is performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump over large low--probability regions without the need for careful tuning of any step size parameter. In fact, the SFP method requires only a small set of meaningful parameters which can be selected following clear, problem--independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.<|reference_end|> | arxiv | @article{berrones2009bayesian,
title={Bayesian Inference Based on Stationary Fokker-Planck Sampling},
author={Arturo Berrones},
journal={arXiv preprint arXiv:0901.1144},
year={2009},
archivePrefix={arXiv},
eprint={0901.1144},
primaryClass={cond-mat.dis-nn cs.NE physics.data-an}
} | berrones2009bayesian |
arxiv-5960 | 0901.1152 | A nonclassical symbolic theory of working memory, mental computations, and mental set | <|reference_start|>A nonclassical symbolic theory of working memory, mental computations, and mental set: The paper tackles four basic questions associated with human brain as a learning system. How can the brain learn to (1) mentally simulate different external memory aids, (2) perform, in principle, any mental computations using imaginary memory aids, (3) recall the real sensory and motor events and synthesize a combinatorial number of imaginary events, (4) dynamically change its mental set to match a combinatorial number of contexts? We propose a uniform answer to (1)-(4) based on the general postulate that the human neocortex processes symbolic information in a "nonclassical" way. Instead of manipulating symbols in a read/write memory, as the classical symbolic systems do, it manipulates the states of dynamical memory representing different temporary attributes of immovable symbolic structures stored in a long-term memory. The approach is formalized as the concept of E-machine. Intuitively, an E-machine is a system that deals mainly with characteristic functions representing subsets of memory pointers rather than the pointers themselves. This nonclassical symbolic paradigm is Turing universal, and, unlike the classical one, is efficiently implementable in homogeneous neural networks with temporal modulation topologically resembling that of the neocortex.<|reference_end|> | arxiv | @article{eliashberg2009a,
title={A nonclassical symbolic theory of working memory, mental computations,
and mental set},
author={Victor Eliashberg},
journal={arXiv preprint arXiv:0901.1152},
year={2009},
archivePrefix={arXiv},
eprint={0901.1152},
primaryClass={cs.AI cs.NE}
} | eliashberg2009a |
arxiv-5961 | 0901.1155 | Balanced allocation: Memory performance tradeoffs | <|reference_start|>Balanced allocation: Memory performance tradeoffs: Suppose we sequentially put $n$ balls into $n$ bins. If we put each ball into a random bin then the heaviest bin will contain ${\sim}\log n/\log\log n$ balls with high probability. However, Azar, Broder, Karlin and Upfal [SIAM J. Comput. 29 (1999) 180--200] showed that if each time we choose two bins at random and put the ball in the least loaded bin among the two, then the heaviest bin will contain only ${\sim}\log\log n$ balls with high probability. How much memory do we need to implement this scheme? We need roughly $\log\log\log n$ bits per bin, and $n\log\log\log n$ bits in total. Let us assume now that we have limited amount of memory. For each ball, we are given two random bins and we have to put the ball into one of them. Our goal is to minimize the load of the heaviest bin. We prove that if we have $n^{1-\delta}$ bits then the heaviest bin will contain at least $\Omega(\delta\log n/\log\log n)$ balls with high probability. The bound is tight in the communication complexity model.<|reference_end|> | arxiv | @article{benjamini2009balanced,
title={Balanced allocation: Memory performance tradeoffs},
author={Itai Benjamini, Yury Makarychev},
journal={Annals of Applied Probability 2012, Vol. 22, No. 4, 1642-1649},
year={2009},
doi={10.1214/11-AAP804},
number={IMS-AAP-AAP804},
archivePrefix={arXiv},
eprint={0901.1155},
primaryClass={cs.DS cs.DM math.PR}
} | benjamini2009balanced |
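The one-choice versus two-choice gap quoted in the entry above is easy to reproduce empirically. The simulation below implements the unrestricted-memory two-choice scheme of Azar et al.; the paper's memory-limited setting and its lower bound are not simulated, and the bin count is an arbitrary choice.

```python
import random

def max_load(n, choices, seed=0):
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):                   # throw n balls into n bins
        picks = [rng.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda b: bins[b])   # least loaded pick
        bins[best] += 1
    return max(bins)

n = 100_000
print("one choice :", max_load(n, 1))   # grows like log n / log log n
print("two choices:", max_load(n, 2))   # grows like log log n
```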
arxiv-5962 | 0901.1162 | Folded Algebraic Geometric Codes From Galois Extensions | <|reference_start|>Folded Algebraic Geometric Codes From Galois Extensions: We describe a new class of list decodable codes based on Galois extensions of function fields and present a list decoding algorithm. These codes are obtained as a result of folding the set of rational places of a function field using certain elements (automorphisms) from the Galois group of the extension. This work is an extension of Folded Reed Solomon codes to the setting of Algebraic Geometric codes. We describe two constructions based on this framework depending on if the order of the automorphism used to fold the code is large or small compared to the block length. When the automorphism is of large order, the codes have polynomially bounded list size in the worst case. This construction gives codes of rate $R$ over an alphabet of size independent of block length that can correct a fraction of $1-R-\epsilon$ errors subject to the existence of asymptotically good towers of function fields with large automorphisms. The second construction addresses the case when the order of the element used to fold is small compared to the block length. In this case a heuristic analysis shows that for a random received word, the expected list size and the running time of the decoding algorithm are bounded by a polynomial in the block length. When applied to the Garcia-Stichtenoth tower, this yields codes of rate $R$ over an alphabet of size $(\frac{1}{\epsilon^2})^{O(\frac{1}{\epsilon})}$, that can correct a fraction of $1-R-\epsilon$ errors.<|reference_end|> | arxiv | @article{huang2009folded,
title={Folded Algebraic Geometric Codes From Galois Extensions},
author={Ming-Deh Huang and Anand Kumar Narayanan},
journal={arXiv preprint arXiv:0901.1162},
year={2009},
archivePrefix={arXiv},
eprint={0901.1162},
primaryClass={cs.IT math.IT}
} | huang2009folded |
arxiv-5963 | 0901.1181 | Fault Masking By Probabilistic Voting | <|reference_start|>Fault Masking By Probabilistic Voting: In this study, we introduce a probabilistic voter that takes symbol probabilities into account in the decision process, in addition to majority consensus. The conventional majority voter is independent of the functionality of the redundant modules, whereas the proposed probabilistic voter is designed according to the functionality of the redundant module. We tested the probabilistic voter for 3 and 5 redundant modules with random transient errors inserted on the wires; simulation results show that Multi-Modular Redundancy (M-MR) with Probabilistic Voting (PV) achieves better availability than the conventional majority voter.<|reference_end|> | arxiv | @article{alagoz2009fault,
title={Fault Masking By Probabilistic Voting},
author={B. Baykant Alagoz},
journal={OncuBilim Algorithm And Systems Labs. Vol.09, Art.No:01,(2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.1181},
primaryClass={cs.OH}
} | alagoz2009fault |
arxiv-5964 | 0901.1230 | Logical Algorithms meets CHR: A meta-complexity result for Constraint Handling Rules with rule priorities | <|reference_start|>Logical Algorithms meets CHR: A meta-complexity result for Constraint Handling Rules with rule priorities: This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation schema from LA to CHR-rp: CHR with rule priorities, and show that the meta-complexity theorem for LA can be applied to a subset of CHR-rp via inverse translation. Inspired by the high-level implementation proposal for Logical Algorithm by Ganzinger and McAllester and based on a new scheduling algorithm, we propose an alternative implementation for CHR-rp that gives strong complexity guarantees and results in a new and accurate meta-complexity theorem for CHR-rp. It is furthermore shown that the translation from Logical Algorithms to CHR-rp combined with the new CHR-rp implementation, satisfies the required complexity for the Logical Algorithms meta-complexity result to hold.<|reference_end|> | arxiv | @article{de koninck2009logical,
title={Logical Algorithms meets CHR: A meta-complexity result for Constraint
Handling Rules with rule priorities},
author={Leslie De Koninck},
journal={arXiv preprint arXiv:0901.1230},
year={2009},
archivePrefix={arXiv},
eprint={0901.1230},
primaryClass={cs.PL cs.AI cs.CC}
} | de koninck2009logical |
arxiv-5965 | 0901.1244 | Constructions of Quasi-Twisted Two-Weight Codes | <|reference_start|>Constructions of Quasi-Twisted Two-Weight Codes: A code is said to be two-weight if its non-zero codewords have only two different weights, w1 and w2. Two-weight codes are closely related to strongly regular graphs. In this paper, it is shown that a constacyclic code of composite length can be put in quasi-twisted form. Based on this transformation, a new construction method for quasi-twisted (QT) two-weight codes is presented. A large number of QT two-weight codes are found, and some new codes are also constructed.<|reference_end|> | arxiv | @article{chen2009constructions,
title={Constructions of Quasi-Twisted Two-Weight Codes},
author={Eric Z. Chen},
journal={arXiv preprint arXiv:0901.1244},
year={2009},
archivePrefix={arXiv},
eprint={0901.1244},
primaryClass={cs.IT math.IT}
} | chen2009constructions |
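For small parameters, the two-weight property from the entry above can be verified by brute force: enumerate all non-zero codewords of a generator matrix and collect their weights. The [5,2] example code below is an illustrative choice, not one of the paper's QT constructions.

```python
from itertools import product

def weight_distribution(G):
    """Weights of all non-zero codewords of the binary code generated by G
    (brute force: exponential in the dimension k, so small codes only)."""
    k = len(G)
    weights = set()
    for msg in product([0, 1], repeat=k):
        if any(msg):
            cw = [sum(m * g for m, g in zip(msg, col)) % 2
                  for col in zip(*G)]
            weights.add(sum(cw))
    return weights

G = [[1, 0, 1, 1, 0],      # an illustrative [5,2] binary code
     [0, 1, 1, 0, 1]]
ws = weight_distribution(G)
print(ws, "two-weight:", len(ws) == 2)   # {3, 4} -> two-weight
```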
arxiv-5966 | 0901.1257 | An Internet-based Audience Response System for the Improvement of Teaching | <|reference_start|>An Internet-based Audience Response System for the Improvement of Teaching: We have developed an Internet-based audience response system (called ARSBO). In this way we combine the advantages of common audience response systems using handheld devices with the easy and cheap access to the Internet. Evaluations of audience response systems in the literature have shown their success: they encourage student participation and provide immediate feedback on answers to the whole group, which the teacher can use for evaluation. However, commercial systems are relatively expensive and the number of students in such a teaching-learning scenario is limited. ARSBO solves these problems. Using the Internet (e.g. in computer rooms or by wireless Internet access) there are no special costs and the number of participating students is not limited. ARSBO is very easy to use for students, as well as for constructing new questions with possible answers and for visualizing statistical results for the questions.<|reference_end|> | arxiv | @article{luetticke2009an,
title={An Internet-based Audience Response System for the Improvement of
Teaching},
author={Rainer Luetticke, Ridvan Cinar},
journal={arXiv preprint arXiv:0901.1257},
year={2009},
archivePrefix={arXiv},
eprint={0901.1257},
primaryClass={cs.CY cs.NI}
} | luetticke2009an |
arxiv-5967 | 0901.1287 | Infinite families of recursive formulas generating power moments of Kloosterman sums: O^- (2n, 2^r) case | <|reference_start|>Infinite families of recursive formulas generating power moments of Kloosterman sums: O^- (2n, 2^r) case: In this paper, we construct eight infinite families of binary linear codes associated with double cosets with respect to a certain maximal parabolic subgroup of the special orthogonal group $SO^-(2n,2^r)$. Then we obtain four infinite families of recursive formulas for the power moments of Kloosterman sums and four such families for 2-dimensional Kloosterman sums in terms of the frequencies of weights in the codes. This is done via the Pless power moment identity and by utilizing the explicit expressions of exponential sums over those double cosets related to the evaluations of "Gauss sums" for the orthogonal groups $O^-(2n,2^r)$.<|reference_end|> | arxiv | @article{kim2009infinite,
title={Infinite families of recursive formulas generating power moments of
Kloosterman sums: O^- (2n, 2^r) case},
author={Dae San Kim},
journal={arXiv preprint arXiv:0901.1287},
year={2009},
archivePrefix={arXiv},
eprint={0901.1287},
primaryClass={math.NT cs.IT math.IT}
} | kim2009infinite |
arxiv-5968 | 0901.1288 | Power-Controlled Feedback and Training for Two-way MIMO Channels | <|reference_start|>Power-Controlled Feedback and Training for Two-way MIMO Channels: Most communication systems use some form of feedback, often related to channel state information. The common models used in analyses assume perfect channel state information at the receiver and/or noiseless state feedback links. However, in practical systems, neither is the channel estimate known perfectly at the receiver nor is the feedback link perfect. In this paper, we study the achievable diversity multiplexing tradeoff using i.i.d. Gaussian codebooks, considering the errors in training the receiver and the errors in the feedback link for FDD systems, where the forward and the feedback are independent MIMO channels. Our key result is that the maximum diversity order with one bit of feedback information is identical to systems with more feedback bits. Thus, asymptotically in $\mathsf{SNR}$, more than one bit of feedback does not improve the system performance at constant rates. Furthermore, the one-bit diversity-multiplexing performance is identical to the system which has perfect channel state information at the receiver along with a noiseless feedback link. This achievability uses novel concepts of power controlled feedback and training, which naturally surface when we consider imperfect channel estimation and noisy feedback links. In the process of evaluating the proposed training and feedback protocols, we find an asymptotic expression for the joint probability of the $\mathsf{SNR}$ exponents of eigenvalues of the actual channel and the estimated channel, which may be of independent interest.<|reference_end|> | arxiv | @article{aggarwal2009power-controlled,
title={Power-Controlled Feedback and Training for Two-way MIMO Channels},
author={Vaneet Aggarwal and Ashutosh Sabharwal},
journal={IEEE Transactions on Information Theory, vol.56, no.7,
pp.3310,3331, July 2010},
year={2009},
doi={10.1109/TIT.2010.2048472},
archivePrefix={arXiv},
eprint={0901.1288},
primaryClass={cs.IT math.IT}
} | aggarwal2009power-controlled |
arxiv-5969 | 0901.1289 | N-norm and N-conorm in Neutrosophic Logic and Set, and the Neutrosophic Topologies | <|reference_start|>N-norm and N-conorm in Neutrosophic Logic and Set, and the Neutrosophic Topologies: In this paper we present the N-norms/N-conorms in neutrosophic logic and set as extensions of T-norms/T-conorms in fuzzy logic and set. Also, as an extension of the Intuitionistic Fuzzy Topology we present the Neutrosophic Topologies.<|reference_end|> | arxiv | @article{smarandache2009n-norm,
title={N-norm and N-conorm in Neutrosophic Logic and Set, and the Neutrosophic
Topologies},
author={Florentin Smarandache},
journal={In author's book A Unifying Field in Logics: Neutrosophic Logic;
Neutrosophic Set, Neutrosophic Probability and Statistics (fourth edition),
2005; Review of the Air Force Academy, No. 1 (14), pp. 05-11, 2009.},
year={2009},
archivePrefix={arXiv},
eprint={0901.1289},
primaryClass={cs.AI}
} | smarandache2009n-norm |
arxiv-5970 | 0901.1307 | Using Graphics Processors for Parallelizing Hash-based Data Carving | <|reference_start|>Using Graphics Processors for Parallelizing Hash-based Data Carving: The ability to detect fragments of deleted image files and to reconstruct these image files from all available fragments on disk is a key activity in the field of digital forensics. Although reconstruction of image files from the file fragments on disk can be accomplished by simply comparing the content of sectors on disk with the content of known files, this brute-force approach can be time consuming. This paper presents results from research into the use of Graphics Processing Units (GPUs) in detecting specific image file byte patterns in disk clusters. A unique identifying pattern for each disk sector is compared against patterns in known images. A pattern match indicates the potential presence of an image and flags the disk sector for further in-depth examination to confirm the match. The GPU-based implementation outperforms the software implementation by a significant margin.<|reference_end|> | arxiv | @article{collange2009using,
title={Using Graphics Processors for Parallelizing Hash-based Data Carving},
author={Sylvain Collange (ELIAUS), Yoginder Dandass (CSE), Marc Daumas
(ELIAUS), David Defour (ELIAUS)},
journal={42nd Hawaii International Conference on System Sciences, Waikoloa
: United States (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0901.1307},
primaryClass={cs.DC}
} | collange2009using |
arxiv-5971 | 0901.1312 | Novel Architectures and Algorithms for Delay Reduction in Back-pressure Scheduling and Routing | <|reference_start|>Novel Architectures and Algorithms for Delay Reduction in Back-pressure Scheduling and Routing: The back-pressure algorithm is a well-known throughput-optimal algorithm. However, its delay performance may be quite poor even when the traffic load is not close to network capacity due to the following two reasons. First, each node has to maintain a separate queue for each commodity in the network, and only one queue is served at a time. Second, the back-pressure routing algorithm may route some packets along very long routes. In this paper, we present solutions to address both of the above issues, and hence, improve the delay performance of the back-pressure algorithm. One of the suggested solutions also decreases the complexity of the queueing data structures to be maintained at each node.<|reference_end|> | arxiv | @article{bui2009novel,
title={Novel Architectures and Algorithms for Delay Reduction in Back-pressure
Scheduling and Routing},
author={Loc Bui, R. Srikant, Alexander Stolyar},
journal={arXiv preprint arXiv:0901.1312},
year={2009},
archivePrefix={arXiv},
eprint={0901.1312},
primaryClass={cs.NI}
} | bui2009novel |
arxiv-5972 | 0901.1315 | Stochastic Volatility Models Including Open, Close, High and Low Prices | <|reference_start|>Stochastic Volatility Models Including Open, Close, High and Low Prices: Mounting empirical evidence suggests that the observed extreme prices within a trading period can provide valuable information about the volatility of the process within that period. In this paper we define a class of stochastic volatility models that uses opening and closing prices along with the minimum and maximum prices within a trading period to infer the dynamics underlying the volatility process of asset prices, and compare it with similar models that have been previously presented in the literature. The paper also discusses sequential Monte Carlo algorithms to fit this class of models and illustrates its features using both a simulation study and data from the S&P 500 index.<|reference_end|> | arxiv | @article{rodriguez2009stochastic,
title={Stochastic Volatility Models Including Open, Close, High and Low Prices},
author={Abel Rodriguez and Henryk Gzyl and German Molina and Enrique ter Horst},
journal={arXiv preprint arXiv:0901.1315},
year={2009},
archivePrefix={arXiv},
eprint={0901.1315},
primaryClass={q-fin.ST cs.CE cs.NA}
} | rodriguez2009stochastic |
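The premise of the entry above — that intra-period extremes carry volatility information — is visible already in the classical range-based (Parkinson) variance estimator, which uses only high/low prices. The quick check below against simulated driftless log-price paths is standard background, not the authors' model; the tick count and volatility level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, steps, days = 0.2, 390, 250     # per-period vol 0.2, 390 ticks/period
incr = sigma / np.sqrt(steps) * rng.standard_normal((days, steps))
logp = np.cumsum(incr, axis=1)         # driftless log-price paths

high, low, close = logp.max(axis=1), logp.min(axis=1), logp[:, -1]

park = np.mean((high - low) ** 2) / (4 * np.log(2))   # Parkinson (range-based)
c2c = np.mean(close ** 2)                             # close-to-close
print("true var:", sigma**2, " parkinson:", park, " close-to-close:", c2c)
```

Up to discretization error, both estimates land near sigma^2, with the range-based one noticeably less noisy across runs, which is exactly why the extremes are worth modeling.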
arxiv-5973 | 0901.1322 | A Generalized Carpenter's Rule Theorem for Self-Touching Linkages | <|reference_start|>A Generalized Carpenter's Rule Theorem for Self-Touching Linkages: The Carpenter's Rule Theorem states that any chain linkage in the plane can be folded continuously between any two configurations while preserving the bar lengths and without the bars crossing. However, this theorem applies only to strictly simple configurations, where bars intersect only at their common endpoints. We generalize the theorem to self-touching configurations, where bars can touch but not properly cross. At the heart of our proof is a new definition of self-touching configurations of planar linkages, based on an annotated configuration space and limits of nontouching configurations. We show that this definition is equivalent to the previously proposed definition of self-touching configurations, which is based on a combinatorial description of overlapping features. Using our new definition, we prove the generalized Carpenter's Rule Theorem using a topological argument. We believe that our topological methodology provides a powerful tool for manipulating many kinds of self-touching objects, such as 3D hinged assemblies of polygons and rigid origami. In particular, we show how to apply our methodology to extend to self-touching configurations universal reconfigurability results for open chains with slender polygonal adornments, and single-vertex rigid origami with convex cones.<|reference_end|> | arxiv | @article{abbott2009a,
title={A Generalized Carpenter's Rule Theorem for Self-Touching Linkages},
author={Timothy G. Abbott, Erik D. Demaine, and Blaise Gassend},
journal={arXiv preprint arXiv:0901.1322},
year={2009},
archivePrefix={arXiv},
eprint={0901.1322},
primaryClass={cs.CG cs.FL}
} | abbott2009a |
arxiv-5974 | 0901.1397 | Avoiding Squares and Overlaps Over the Natural Numbers | <|reference_start|>Avoiding Squares and Overlaps Over the Natural Numbers: We consider avoiding squares and overlaps over the natural numbers, using a greedy algorithm that chooses the least possible integer at each step; the word generated is lexicographically least among all such infinite words. In the case of avoiding squares, the word is 01020103..., the familiar ruler function, and is generated by iterating a uniform morphism. The case of overlaps is more challenging. We give an explicitly-defined morphism phi : N* -> N* that generates the lexicographically least infinite overlap-free word by iteration. Furthermore, we show that for all h,k in N with h <= k, the word phi^{k-h}(h) is the lexicographically least overlap-free word starting with the letter h and ending with the letter k, and give some of its symmetry properties.<|reference_end|> | arxiv | @article{guay-paquet2009avoiding,
title={Avoiding Squares and Overlaps Over the Natural Numbers},
author={Mathieu Guay-Paquet, Jeffrey Shallit},
journal={arXiv preprint arXiv:0901.1397},
year={2009},
archivePrefix={arXiv},
eprint={0901.1397},
primaryClass={math.CO cs.FL}
} | guay-paquet2009avoiding |
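The greedy rule described in the entry above is directly executable for the square-avoiding case: append the least letter whose addition creates no squared suffix. The sketch below reproduces the ruler-function prefix 0,1,0,2,0,1,0,3,...; the overlap-free case would only change the forbidden-suffix test.

```python
def has_square_suffix(w):
    """A square is a non-empty block repeated twice; appending a letter can
    only create a square ending at the last position, so test suffixes."""
    n = len(w)
    return any(w[n - 2*k : n - k] == w[n - k:] for k in range(1, n // 2 + 1))

def greedy_squarefree(length):
    w = []
    for _ in range(length):
        c = 0
        while True:                 # least letter that keeps w square-free
            w.append(c)
            if not has_square_suffix(w):
                break
            w.pop()
            c += 1
    return w

print(greedy_squarefree(16))   # 0,1,0,2,0,1,0,3,... (the ruler function)
```

The loop always terminates because a letter not yet used in w can never complete a square.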
arxiv-5975 | 0901.1407 | Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption | <|reference_start|>Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption: Energy efficient watermarking preserves the watermark energy after a linear attack as much as possible. We consider in this letter non-stationary signal models and derive conditions for energy efficient watermarking under a random vector model without the WSS assumption. We find that the covariance matrix of the energy efficient watermark should be proportional to the host covariance matrix to best resist the optimal linear removal attacks. For WSS processes our result reduces to the well known power spectrum condition. An intuitive geometric interpretation of the results is also discussed, which in turn provides a simpler proof of the main results.<|reference_end|> | arxiv | @article{yan2009condition,
title={Condition for Energy Efficient Watermarking with Random Vector Model
without WSS Assumption},
author={Bin Yan, Zheming Lu and Yinjing Guo},
journal={arXiv preprint arXiv:0901.1407},
year={2009},
archivePrefix={arXiv},
eprint={0901.1407},
primaryClass={cs.MM cs.CR}
} | yan2009condition |
arxiv-5976 | 0901.1408 | A Message-Passing Approach for Joint Channel Estimation, Interference Mitigation and Decoding | <|reference_start|>A Message-Passing Approach for Joint Channel Estimation, Interference Mitigation and Decoding: Channel uncertainty and co-channel interference are two major challenges in the design of wireless systems such as future generation cellular networks. This paper studies receiver design for a wireless channel model with both time-varying Rayleigh fading and strong co-channel interference of similar form to the desired signal. It is assumed that the channel coefficients of the desired signal can be estimated through the use of pilots, whereas no pilot for the interference signal is available, as is the case in many practical wireless systems. Because the interference process is non-Gaussian, treating it as Gaussian noise often leads to unacceptable performance. In order to exploit the statistics of the interference and correlated fading in time, an iterative message-passing architecture is proposed for joint channel estimation, interference mitigation and decoding. Each message takes the form of a mixture of Gaussian densities where the number of components is limited so that the overall complexity of the receiver is constant per symbol regardless of the frame and code lengths. Simulation of both coded and uncoded systems shows that the receiver performs significantly better than conventional receivers with linear channel estimation, and is robust with respect to mismatch in the assumed fading model.<|reference_end|> | arxiv | @article{zhu2009a,
title={A Message-Passing Approach for Joint Channel Estimation, Interference
Mitigation and Decoding},
author={Yan Zhu, Dongning Guo and Michael L. Honig},
journal={arXiv preprint arXiv:0901.1408},
year={2009},
archivePrefix={arXiv},
eprint={0901.1408},
primaryClass={cs.IT math.IT}
} | zhu2009a |
arxiv-5977 | 0901.1413 | Bitslicing and the Method of Four Russians Over Larger Finite Fields | <|reference_start|>Bitslicing and the Method of Four Russians Over Larger Finite Fields: We present a method of computing with matrices over very small finite fields of size larger than 2. Specifically, we show how the Method of Four Russians can be efficiently adapted to these larger fields, and introduce a row-wise matrix compression scheme that both reduces memory requirements and allows one to vectorize element operations. We also present timings which confirm the efficiency of these methods and exceed the speed of the fastest implementations the authors are aware of.<|reference_end|> | arxiv | @article{boothby2009bitslicing,
title={Bitslicing and the Method of Four Russians Over Larger Finite Fields},
author={Tomas J. Boothby, Robert W. Bradshaw},
journal={arXiv preprint arXiv:0901.1413},
year={2009},
archivePrefix={arXiv},
eprint={0901.1413},
primaryClass={cs.MS}
} | boothby2009bitslicing |
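For the GF(2) base case that the entry above generalizes, the Method of Four Russians replaces bit-by-bit products with table lookups: all 2^k XOR-combinations of each block of k rows of B are precomputed once. The sketch below packs matrix rows into Python integers to convey that structure; the paper's actual contribution — bitsliced, row-compressed arithmetic over slightly larger fields — is not shown, and the block size k=8 is an arbitrary choice.

```python
def m4rm_gf2(A, B, m, n, k=8):
    """C = A*B over GF(2), rows packed as ints (bit j of A[i] = entry (i,j)).
    Method of Four Russians: for each block of k rows of B, precompute all
    2^k XOR-combinations once, then multiply by table lookups."""
    C = [0] * m
    for blk in range(0, n, k):
        kk = min(k, n - blk)
        rows = [B[blk + t] for t in range(kk)]
        table = [0] * (1 << kk)
        for s in range(1, 1 << kk):
            low = s & -s                         # lowest set bit of s
            table[s] = table[s ^ low] ^ rows[low.bit_length() - 1]
        mask = (1 << kk) - 1
        for i in range(m):
            C[i] ^= table[(A[i] >> blk) & mask]  # one lookup per block
    return C

A = [0b011, 0b101, 0b110]
I = [0b001, 0b010, 0b100]             # 3x3 identity in this packing
assert m4rm_gf2(A, I, m=3, n=3) == A  # multiplying by the identity returns A
```

Each table costs 2^k XORs but then serves every row of A, which is where the asymptotic log-factor saving comes from.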
arxiv-5978 | 0901.1427 | An Online Multi-unit Auction with Improved Competitive Ratio | <|reference_start|>An Online Multi-unit Auction with Improved Competitive Ratio: We improve the best known competitive ratio (from 1/4 to 1/2), for the online multi-unit allocation problem, where the objective is to maximize the single-price revenue. Moreover, the competitive ratio of our algorithm tends to 1, as the bid-profile tends to ``smoothen''. This algorithm is used as a subroutine in designing truthful auctions for the same setting: the allocation has to be done online, while the payments can be decided at the end of the day. Earlier, a reduction from the auction design problem to the allocation problem was known only for the unit-demand case. We give a reduction for the general case when the bidders have decreasing marginal utilities. The problem is inspired by sponsored search auctions.<|reference_end|> | arxiv | @article{chakraborty2009an,
title={An Online Multi-unit Auction with Improved Competitive Ratio},
author={Sourav Chakraborty, Nikhil Devanur},
journal={arXiv preprint arXiv:0901.1427},
year={2009},
archivePrefix={arXiv},
eprint={0901.1427},
primaryClass={cs.GT cs.CC cs.DM cs.DS}
} | chakraborty2009an |
arxiv-5979 | 0901.1444 | Algebraic gossip on Arbitrary Networks | <|reference_start|>Algebraic gossip on Arbitrary Networks: Consider a network of nodes where each node has a message to communicate to all other nodes. For this communication problem, we analyze a gossip-based protocol where coded messages are exchanged. This problem was studied by Aoyama and Shah, where a bound on the dissemination time based on the spectral properties of the underlying communication graph is provided. Our contribution is a uniform bound that holds for arbitrary networks.<|reference_end|> | arxiv | @article{vasudevan2009algebraic,
title={Algebraic gossip on Arbitrary Networks},
author={Dinkar Vasudevan and Shrinivas Kudekar},
journal={arXiv preprint arXiv:0901.1444},
year={2009},
archivePrefix={arXiv},
eprint={0901.1444},
primaryClass={cs.IT math.IT}
} | vasudevan2009algebraic |
arxiv-5980 | 0901.1462 | A Fully Equivalent Global Pressure Formulation for Three-Phase Compressible Flow | <|reference_start|>A Fully Equivalent Global Pressure Formulation for Three-Phase Compressible Flow: We introduce a new global pressure formulation for immiscible three-phase compressible flows in porous media which is fully equivalent to the original equations, unlike the one introduced in \cite{CJ86}. In this formulation, the total volumetric flow of the three fluids and the global pressure follow a classical Darcy law, which simplifies the resolution of the pressure equation. However, this global pressure formulation exists only for Total Differential (TD) three-phase data, which depend only on two functions of saturations and global pressure: the global capillary pressure and the global mobility. Hence we introduce a class of interpolation which constructs such TD-three-phase data from any set of three two-phase data (for each pair of fluids) which satisfy a TD-compatibility condition.<|reference_end|> | arxiv | @article{chavent2009a,
title={A Fully Equivalent Global Pressure Formulation for Three-Phase
Compressible Flow},
author={Guy Chavent (INRIA Rocquencourt, Ceremade)},
journal={arXiv preprint arXiv:0901.1462},
year={2009},
number={RR-6788},
archivePrefix={arXiv},
eprint={0901.1462},
primaryClass={cs.NA math.AP physics.class-ph}
} | chavent2009a |
arxiv-5981 | 0901.1473 | Communication over Individual Channels | <|reference_start|>Communication over Individual Channels: We consider the problem of communicating over a channel for which no mathematical model is specified. We present achievable rates as a function of the channel input and output known a-posteriori for discrete and continuous channels, as well as a rate-adaptive scheme employing feedback which achieves these rates asymptotically without prior knowledge of the channel behavior.<|reference_end|> | arxiv | @article{lomnitz2009communication,
title={Communication over Individual Channels},
author={Yuval Lomnitz and Meir Feder},
journal={IEEE Trans. Information Theory, vol. 57, no. 11, pp. 7333--7358,
Nov. 2011},
year={2009},
doi={10.1109/TIT.2011.2169130},
archivePrefix={arXiv},
eprint={0901.1473},
primaryClass={cs.IT math.IT}
} | lomnitz2009communication |
arxiv-5982 | 0901.1479 | Exploiting the Path Propagation Time Differences in Multipath Transmission with FEC | <|reference_start|>Exploiting the Path Propagation Time Differences in Multipath Transmission with FEC: We consider the transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses - the predominant type of failures in today's Internet. An effective and well-studied solution to this problem is to protect the data by a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often significantly differ, typically by 10-100ms. We propose to exploit these differences by appropriate packet scheduling that we call `Spread'. We evaluate our solution with a precise, analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms the state-of-the-art solutions. It typically achieves a two- to five-fold improvement (reduction) in the effective loss rate. Conversely, at the same effective loss rate, Spread significantly decreases the observed delays and helps fight delay jitter.<|reference_end|> | arxiv | @article{kurant2009exploiting,
title={Exploiting the Path Propagation Time Differences in Multipath
Transmission with FEC},
author={Maciej Kurant},
journal={arXiv preprint arXiv:0901.1479},
year={2009},
archivePrefix={arXiv},
eprint={0901.1479},
primaryClass={cs.NI}
} | kurant2009exploiting |
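The premise that spreading an FEC block over several paths mitigates bursty losses is easy to probe numerically. The toy below uses a two-state Gilbert loss model and plain round-robin striping (not the paper's delay-aware Spread scheduler, whose details are not in the abstract) and reports the fraction of (n, k) FEC blocks that cannot be decoded.

```python
import numpy as np

rng = np.random.default_rng(1)

def gilbert_losses(n_pkts, p_gb=0.02, p_bg=0.4):
    """Two-state Gilbert model: packets sent in the Bad state are lost.
    Each call starts in the Good state (a simplification of this toy)."""
    lost = np.zeros(n_pkts, dtype=bool)
    bad = False
    for t in range(n_pkts):
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)
        lost[t] = bad
    return lost

def effective_loss(n, k, n_paths, n_blocks=5000):
    """Fraction of (n, k) FEC blocks with more than n - k lost packets,
    the block being striped round-robin over n_paths independent paths."""
    fails = 0
    per_path = n // n_paths            # assume n_paths divides n
    for _ in range(n_blocks):
        lost = sum(int(gilbert_losses(per_path).sum())
                   for _ in range(n_paths))
        fails += lost > n - k
    return fails / n_blocks

for paths in (1, 2, 4):
    print(paths, "path(s):", effective_loss(n=20, k=16, n_paths=paths))
```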
arxiv-5983 | 0901.1492 | An information inequality for the BSSC channel | <|reference_start|>An information inequality for the BSSC channel: We establish an information-theoretic inequality concerning the binary skew-symmetric broadcast channel that was conjectured by one of the authors. This inequality helps quantify the gap between the sum rates given by the inner and outer bounds for the binary skew-symmetric broadcast channel.<|reference_end|> | arxiv | @article{jog2009an,
title={An information inequality for the BSSC channel},
author={Varun Jog and Chandra Nair},
journal={arXiv preprint arXiv:0901.1492},
year={2009},
archivePrefix={arXiv},
eprint={0901.1492},
primaryClass={cs.IT math.IT}
} | jog2009an |
arxiv-5984 | 0901.1503 | A Greedy Omnidirectional Relay Scheme | <|reference_start|>A Greedy Omnidirectional Relay Scheme: A greedy omnidirectional relay scheme is developed, and the corresponding achievable rate region is obtained for the all-source all-cast problem. The discussions are first based on the general discrete memoryless channel model, and then applied to the additive white Gaussian noise (AWGN) models, with both full-duplex and half-duplex modes.<|reference_end|> | arxiv | @article{xie2009a,
title={A Greedy Omnidirectional Relay Scheme},
author={Liang-Liang Xie},
journal={arXiv preprint arXiv:0901.1503},
year={2009},
archivePrefix={arXiv},
eprint={0901.1503},
primaryClass={cs.IT math.IT}
} | xie2009a |
arxiv-5985 | 0901.1563 | Fast Algorithms for Max Independent Set in Graphs of Small Average Degree | <|reference_start|>Fast Algorithms for Max Independent Set in Graphs of Small Average Degree: Max Independent Set (MIS) is a paradigmatic problem in theoretical computer science and numerous studies tackle its resolution by exact algorithms with non-trivial worst-case complexity. The best such complexity is, to our knowledge, the $O^*(1.1889^n)$ algorithm claimed by J.M. Robson (T.R. 1251-01, LaBRI, Univ. Bordeaux I, 2001) in his unpublished technical report. We also quote the $O^*(1.2210^n)$ algorithm by Fomin et al. (in Proc. SODA'06, pages 18-25, 2006), which is the best published result on MIS. In this paper we address MIS in (connected) graphs with "small" average degree, more precisely with average degree at most 3, 4, 5 and 6. For graphs of average degree at most 3, the best known bound is the recent $O^*(1.0977^n)$ bound by N. Bourgeois et al. (in Proc. IWPEC'08, pages 55-65, 2008). Here we improve this result down to $O^*(1.0854^n)$ by proposing finer and more powerful reduction rules. We then propose a generic method showing how an improvement of the worst-case complexity for MIS in graphs of average degree $d$ entails an improvement for any graph of average degree greater than $d$ and, based upon it, we tackle MIS in graphs of average degree 4, 5 and 6. For MIS in graphs with average degree 4, we provide an upper complexity bound of $O^*(1.1571^n)$ that outperforms the best known bound of $O^*(1.1713^n)$ by R. Beigel (Proc. SODA'99, pages 856-857, 1999). For MIS in graphs of average degree at most 5 and 6, we provide bounds of $O^*(1.1969^n)$ and $O^*(1.2149^n)$, respectively, that improve upon the corresponding bounds of $O^*(1.2023^n)$ and $O^*(1.2172^n)$ in graphs of maximum degree 5 and 6 of (Fomin et al., 2006).<|reference_end|> | arxiv | @article{bourgeois2009fast,
title={Fast Algorithms for Max Independent Set in Graphs of Small Average
Degree},
author={Nicolas Bourgeois and Bruno Escoffier and Vangelis Th. Paschos and Johan M. M. van Rooij},
journal={arXiv preprint arXiv:0901.1563},
year={2009},
archivePrefix={arXiv},
eprint={0901.1563},
primaryClass={cs.DM cs.DS}
} | bourgeois2009fast |
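As a concrete baseline for the branch-and-reduce approach the abstract refers to, here is a minimal exact MIS solver using only the two classic reductions (degree-0 and degree-1 vertices). The paper's bounds come from much finer reduction rules and an average-degree bootstrapping argument that this sketch does not attempt.

```python
def mis_size(adj):
    """Exact maximum independent set size by branch and reduce.
    adj: dict mapping each vertex to an iterable of its neighbours."""
    def remove(g, vs):
        return {u: ns - vs for u, ns in g.items() if u not in vs}

    def solve(g):
        if not g:
            return 0
        # reduction rule: a vertex of degree <= 1 belongs to some optimum,
        # so take it and delete its closed neighbourhood
        for v, ns in g.items():
            if len(ns) <= 1:
                return 1 + solve(remove(g, {v} | ns))
        # branch on a maximum-degree vertex v: either v is out, or v is in
        v = max(g, key=lambda u: len(g[u]))
        return max(solve(remove(g, {v})),
                   1 + solve(remove(g, {v} | g[v])))

    return solve({v: set(ns) for v, ns in adj.items()})

# 5-cycle: its maximum independent set has size 2
print(mis_size({0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}))
```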
arxiv-5986 | 0901.1582 | Parallelizing the XSTAR Photoionization Code | <|reference_start|>Parallelizing the XSTAR Photoionization Code: We describe two means by which XSTAR, a code which computes physical conditions and emission spectra of photoionized gases, has been parallelized. The first is pvm_xstar, a wrapper which can be used in place of the serial xstar2xspec script to foster concurrent execution of the XSTAR command line application on independent sets of parameters. The second is PModel, a plugin for the Interactive Spectral Interpretation System (ISIS) which allows arbitrary components of a broad range of astrophysical models to be distributed across processors during fitting and confidence limits calculations, by scientists with little training in parallel programming. Plugging the XSTAR family of analytic models into PModel enables multiple ionization states (e.g., of a complex absorber/emitter) to be computed simultaneously, alleviating the often prohibitive expense of the traditional serial approach. Initial performance results indicate that these methods substantially enlarge the problem space to which XSTAR may be applied within practical timeframes.<|reference_end|> | arxiv | @article{noble2009parallelizing,
title={Parallelizing the XSTAR Photoionization Code},
author={Michael S. Noble and Li Ji and Andrew Young and Julia Lee},
journal={arXiv preprint arXiv:0901.1582},
year={2009},
archivePrefix={arXiv},
eprint={0901.1582},
primaryClass={astro-ph.IM cs.DC}
} | noble2009parallelizing |
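The abstract describes two real tools (pvm_xstar and the ISIS PModel plugin) without showing their interfaces, so the snippet below is only a generic illustration of the same embarrassingly parallel pattern: fanning a command-line code over an independent parameter grid. The executable name and key=value flags are hypothetical placeholders, not the actual XSTAR interface.

```python
import itertools
import subprocess
from multiprocessing import Pool

# hypothetical parameter grid; a real photoionization grid is richer
GRID = {"temperature": [1e4, 1e5, 1e6], "density": [1e10, 1e12]}

def run_one(params):
    # "xstar" and the key=value arguments below are placeholders,
    # not the actual XSTAR command-line interface
    args = ["xstar"] + [f"{k}={v}" for k, v in params.items()]
    try:
        rc = subprocess.run(args, capture_output=True).returncode
    except FileNotFoundError:      # placeholder binary is not installed
        rc = None
    return params, rc

if __name__ == "__main__":
    combos = [dict(zip(GRID, vals))
              for vals in itertools.product(*GRID.values())]
    with Pool() as pool:           # one worker process per CPU core
        for params, rc in pool.imap_unordered(run_one, combos):
            print(params, "-> exit", rc)
```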
arxiv-5987 | 0901.1610 | Towards a Framework for Observing Artificial Evolutionary Systems | <|reference_start|>Towards a Framework for Observing Artificial Evolutionary Systems: Establishing the emergence of evolutionary behavior as a defining characteristic of 'life' is a major step in Artificial Life (ALife) studies. We present here an abstract formal framework for this aim, based upon the notion of high-level observations made on the ALife model at hand during its simulations. An observation process is defined as a computable transformation from the underlying dynamic structure of the model universe to a tuple consisting of abstract components needed to establish the evolutionary processes in the model. Starting by defining the entities and their evolutionary relationships observed during the simulations of the model, the framework prescribes a series of definitions, followed by the axioms (conditions) that must be met in order to establish the level of evolutionary behavior in the model. The examples of Cellular-Automata-based Langton loops and Lambda-calculus-based Algorithmic Chemistry are used to illustrate the framework. Generic design suggestions for ALife research are also drawn based upon the framework design and case study analysis.<|reference_end|> | arxiv | @article{misra2009towards,
title={Towards a Framework for Observing Artificial Evolutionary Systems},
author={Janardan Misra},
journal={arXiv preprint arXiv:0901.1610},
year={2009},
archivePrefix={arXiv},
eprint={0901.1610},
primaryClass={cs.NE cs.MA}
} | misra2009towards |
arxiv-5988 | 0901.1629 | Adaptive threshold-based decision for efficient hybrid deflection and retransmission scheme in OBS networks | <|reference_start|>Adaptive threshold-based decision for efficient hybrid deflection and retransmission scheme in OBS networks: Burst contention is a well-known challenging problem in Optical Burst Switching (OBS) networks. Deflection routing is used to resolve contention. Burst retransmission is used to reduce the Burst Loss Ratio (BLR) by retransmitting dropped bursts. Previous works show that combining deflection and retransmission outperforms both pure deflection and pure retransmission approaches. This paper proposes a new Adaptive Hybrid Deflection and Retransmission (AHDR) approach that dynamically combines deflection and retransmission approaches based on network conditions such as BLR and link utilization. Network Simulator 2 (ns-2) is used to simulate the proposed approach on different network topologies. Simulation results show that the proposed approach outperforms static approaches in terms of BLR by using an adaptive decision threshold.<|reference_end|> | arxiv | @article{levesque2009adaptive,
title={Adaptive threshold-based decision for efficient hybrid deflection and
retransmission scheme in OBS networks},
author={Martin Levesque and Halima Elbiaze and Wael Hosny Fouad Aly},
journal={arXiv preprint arXiv:0901.1629},
year={2009},
archivePrefix={arXiv},
eprint={0901.1629},
primaryClass={cs.NI}
} | levesque2009adaptive |
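The abstract specifies only that the deflect-or-retransmit choice is driven by BLR and link utilization through an adaptive threshold. The sketch below is one plausible reading of such a rule; the threshold update law is assumed, not taken from the paper.

```python
def contention_action(link_util, threshold):
    """Resolve a burst contention: deflect while alternate links are
    lightly loaded; otherwise drop now and retransmit from the ingress."""
    return "deflect" if link_util < threshold else "retransmit"

def adapt_threshold(threshold, measured_blr, target_blr=0.01, step=0.05):
    """Assumed additive adaptation: deflect more aggressively while the
    measured loss exceeds the target, less aggressively otherwise."""
    threshold += step if measured_blr > target_blr else -step
    return min(1.0, max(0.0, threshold))

threshold = 0.5
for blr, util in [(0.02, 0.3), (0.02, 0.7), (0.005, 0.7)]:
    threshold = adapt_threshold(threshold, blr)
    print(blr, util, contention_action(util, threshold), round(threshold, 2))
```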
arxiv-5989 | 0901.1655 | Multishot Codes for Network Coding: Bounds and a Multilevel Construction | <|reference_start|>Multishot Codes for Network Coding: Bounds and a Multilevel Construction: The subspace channel was introduced by Koetter and Kschischang as an adequate model for the communication channel from the source node to a sink node of a multicast network that performs random linear network coding. So far, attention has been given to one-shot subspace codes, that is, codes that use the subspace channel only once. In contrast, this paper explores the idea of using the subspace channel more than once and investigates the so called multishot subspace codes. We present definitions for the problem, a motivating example, lower and upper bounds for the size of codes, and a multilevel construction of codes based on block-coded modulation.<|reference_end|> | arxiv | @article{nobrega2009multishot,
title={Multishot Codes for Network Coding: Bounds and a Multilevel Construction},
author={Roberto W. Nobrega and Bartolomeu F. Uchoa-Filho},
journal={arXiv preprint arXiv:0901.1655},
year={2009},
doi={10.1109/ISIT.2009.5205750},
archivePrefix={arXiv},
eprint={0901.1655},
primaryClass={cs.IT math.IT}
} | nobrega2009multishot |
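For concreteness, the subspace metric underlying such codes can be computed from generator matrices over GF(2) via the standard identity d(U, V) = dim(U + V) - dim(U ∩ V) = 2 dim(U + V) - dim U - dim V; summing it per shot, as done below, is this sketch's reading of the multishot extension.

```python
import numpy as np

def gf2_rank(m):
    """Row rank over GF(2); m is a 0/1 matrix."""
    m = np.array(m, dtype=np.uint8) % 2
    r = 0
    for c in range(m.shape[1]):
        piv = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if piv is None:
            continue
        m[[r, piv]] = m[[piv, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] ^= m[r]
        r += 1
    return r

def subspace_dist(U, V):
    """d(U, V) = 2 dim(U + V) - dim U - dim V; rows span the subspaces."""
    return 2 * gf2_rank(np.vstack([U, V])) - gf2_rank(U) - gf2_rank(V)

def multishot_dist(Us, Vs):
    """Distance between two equal-length sequences of subspaces."""
    return sum(subspace_dist(U, V) for U, V in zip(Us, Vs))

U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(subspace_dist(U, V))   # 2: the spaces share only a 1-dim intersection
```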
arxiv-5990 | 0901.1683 | New Bounds for Binary and Ternary Overloaded CDMA | <|reference_start|>New Bounds for Binary and Ternary Overloaded CDMA: In this paper, we study binary and ternary matrices, used in CDMA applications, that are injective on binary or ternary user vectors. In other words, in the absence of additive noise, the interference of overloaded CDMA can be removed completely. Some new algorithms are proposed for constructing such matrices. Also, using an information-theoretic approach, we conjecture the extent to which such CDMA matrix codes exist. For the overloaded case, we also show that some of the codes derived from our algorithms perform better than the binary Welch Bound Equality codes; the decoding is ML but of low complexity.<|reference_end|> | arxiv | @article{dashmiz2009new,
title={New Bounds for Binary and Ternary Overloaded CDMA},
author={Sh. Dashmiz and P. Pad and F. Marvasti},
journal={arXiv preprint arXiv:0901.1683},
year={2009},
archivePrefix={arXiv},
eprint={0901.1683},
primaryClass={cs.IT cs.DM math.CO math.IT}
} | dashmiz2009new |
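Injectivity on binary user vectors, the property targeted by these constructions, is finitely checkable for small signature matrices: C is injective on {0,1}^n exactly when no nonzero difference vector d in {-1,0,1}^n lies in its kernel. A brute-force checker (my own sketch, exponential in n, for small cases only):

```python
import itertools
import numpy as np

def injective_on_binary(C):
    """True iff x -> C @ x is one-to-one on {0,1}^n inputs.
    Uses: C x = C y  <=>  C (x - y) = 0 with x - y in {-1,0,1}^n."""
    C = np.asarray(C)
    for d in itertools.product((-1, 0, 1), repeat=C.shape[1]):
        if any(d) and not np.any(C @ np.array(d)):
            return False
    return True

# 2x2 antipodal Hadamard signatures: injective (but not overloaded)
print(injective_on_binary([[1, 1], [1, -1]]))          # True
# an overloaded attempt (2 chips, 3 users) that fails:
# the difference d = (1, -1, 0) lies in the kernel
print(injective_on_binary([[1, 1, 1], [1, 1, -1]]))    # False
```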
arxiv-5991 | 0901.1684 | A rigorous analysis of the cavity equations for the minimum spanning tree | <|reference_start|>A rigorous analysis of the cavity equations for the minimum spanning tree: We analyze a new general representation for the Minimum Weight Steiner Tree (MST) problem which translates the topological connectivity constraint into a set of local conditions that can be analyzed by the so-called cavity equation techniques. For the limiting case of the spanning tree, we prove that the fixed point of the algorithm arising from the cavity equations leads to the global optimum.<|reference_end|> | arxiv | @article{bayati2009a,
title={A rigorous analysis of the cavity equations for the minimum spanning
tree},
author={M. Bayati and A. Braunstein and R. Zecchina},
journal={J. Math. Phys. 49, 125206 (2008)},
year={2009},
doi={10.1063/1.2982805},
archivePrefix={arXiv},
eprint={0901.1684},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.DS}
} | bayati2009a |
arxiv-5992 | 0901.1694 | Degrees of Freedom of a Communication Channel: Using Generalised Singular Values | <|reference_start|>Degrees of Freedom of a Communication Channel: Using Generalised Singular Values: A fundamental problem in any communication system is: given a communication channel between a transmitter and a receiver, how many "independent" signals can be exchanged between them? Arbitrary communication channels that can be described by linear compact channel operators mapping between normed spaces are examined in this paper. The (well-known) notions of degrees of freedom at level $\epsilon$ and essential dimension of such channels are developed in this general setting. We argue that the degrees of freedom at level $\epsilon$ and the essential dimension fundamentally limit the number of independent signals that can be exchanged between the transmitter and the receiver. We also generalise the concept of singular values of compact operators to be applicable to compact operators defined on arbitrary normed spaces which do not necessarily carry a Hilbert space structure. We show how these generalised singular values can be used to calculate the degrees of freedom at level $\epsilon$ and the essential dimension of compact operators that describe communication channels. We describe physically realistic channels that require such general channel models.<|reference_end|> | arxiv | @article{somaraju2009degrees,
title={Degrees of Freedom of a Communication Channel: Using Generalised
Singular Values},
author={Ram Somaraju and Jochen Trumpf},
journal={arXiv preprint arXiv:0901.1694},
year={2009},
archivePrefix={arXiv},
eprint={0901.1694},
primaryClass={cs.IT math.IT}
} | somaraju2009degrees |
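In the familiar Hilbert-space setting, the degrees of freedom at level eps is simply the number of singular values exceeding eps; the paper's contribution is generalising this to operators between normed spaces, where the ordinary SVD is unavailable. The sketch below illustrates only the Hilbert-space case, with a made-up decaying-spectrum operator.

```python
import numpy as np

def degrees_of_freedom(K, eps):
    """Count the singular values of the discretised channel operator K
    that exceed the level eps."""
    s = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(s > eps))

# toy compact-like operator: orthonormal modes with a decaying spectrum
n = 64
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
K = U @ np.diag(np.exp(-0.5 * np.arange(n))) @ U.T

for eps in (1e-1, 1e-3, 1e-6):
    print(f"eps={eps:g}: {degrees_of_freedom(K, eps)} degrees of freedom")
```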
arxiv-5993 | 0901.1695 | On the Degrees-of-Freedom of the K-User Gaussian Interference Channel | <|reference_start|>On the Degrees-of-Freedom of the K-User Gaussian Interference Channel: The degrees-of-freedom of a K-user Gaussian interference channel (GIFC) has been defined to be the multiple of (1/2)log_2(P) at which the maximum sum of achievable rates grows with increasing P. In this paper, we establish that the degrees-of-freedom of three or more user, real, scalar GIFCs, viewed as a function of the channel coefficients, is discontinuous at points where all of the coefficients are non-zero rational numbers. More specifically, for all K>2, we find a class of K-user GIFCs that is dense in the GIFC parameter space for which K/2 degrees-of-freedom are exactly achievable, and we show that the degrees-of-freedom for any GIFC with non-zero rational coefficients is strictly smaller than K/2. These results are proved using new connections with number theory and additive combinatorics.<|reference_end|> | arxiv | @article{etkin2009on,
title={On the Degrees-of-Freedom of the K-User Gaussian Interference Channel},
author={Raul Etkin and Erik Ordentlich},
journal={arXiv preprint arXiv:0901.1695},
year={2009},
archivePrefix={arXiv},
eprint={0901.1695},
primaryClass={cs.IT math.IT}
} | etkin2009on |
arxiv-5994 | 0901.1696 | Rectangular Full Packed Format for Cholesky's Algorithm: Factorization, Solution and Inversion | <|reference_start|>Rectangular Full Packed Format for Cholesky's Algorithm: Factorization, Solution and Inversion: We describe a new data format for storing triangular, symmetric, and Hermitian matrices called RFPF (Rectangular Full Packed Format). The standard two-dimensional arrays of Fortran and C (also known as full format) that are used to represent triangular and symmetric matrices waste nearly half of the storage space but provide high performance via the use of Level 3 BLAS. Standard packed format arrays fully utilize storage (array space) but provide low performance as there is no Level 3 packed BLAS. We combine the good features of packed and full storage using RFPF to obtain high performance by using Level 3 BLAS, since RFPF is a standard full format representation. Also, RFPF requires exactly the same minimal storage as packed format. Each LAPACK full and/or packed triangular, symmetric, and Hermitian routine becomes a single new RFPF routine based on eight possible data layouts of RFPF. This new RFPF routine usually consists of two calls to the corresponding LAPACK full format routine and two calls to Level 3 BLAS routines. This means {\it no} new software is required. As examples, we present LAPACK routines for Cholesky factorization, Cholesky solution and Cholesky inverse computation in RFPF to illustrate this new work and to describe its performance on several commonly used computer platforms. Performance of LAPACK full routines using RFPF versus LAPACK full routines using standard format for both serial and SMP parallel processing is about the same while using half the storage. Performance gains over vendor and/or reference packed routines range from parity up to a factor of 43 in serial and up to a factor of 97 in SMP parallel execution when vendor LAPACK full routines are used with RFPF.<|reference_end|> | arxiv | @article{gustavson2009rectangular,
title={Rectangular Full Packed Format for Cholesky's Algorithm: Factorization,
Solution and Inversion},
author={Fred G. Gustavson and Jerzy Wasniewski and Jack J. Dongarra and Julien Langou},
journal={arXiv preprint arXiv:0901.1696},
year={2009},
archivePrefix={arXiv},
eprint={0901.1696},
primaryClass={cs.MS cs.DS}
} | gustavson2009rectangular |
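The storage argument is easy to make concrete: for even order n, the n(n+1)/2 entries of a lower triangle fit exactly into an (n+1) x (n/2) rectangle once the trailing triangle is folded, transposed, into the otherwise unused corner. The pack/unpack pair below demonstrates this counting; it follows the spirit of RFPF but is not the exact LAPACK layout (odd n, transposed variants, and upper storage follow different conventions).

```python
import numpy as np

def pack_rfp(L):
    """Pack the lower triangle of an even-order matrix into an
    (n+1) x (n/2) full 2-D array: packed-format storage with
    full-format addressing, so Level 3 BLAS can operate on it."""
    n = L.shape[0]
    k = n // 2
    rect = np.zeros((n + 1, k))
    for j in range(k):                 # leading columns stored as-is
        rect[j + 1:, j] = L[j:, j]
    for a in range(k):                 # trailing triangle: folded, transposed
        for b in range(a + 1):
            rect[b, a] = L[k + a, k + b]
    return rect

def unpack_rfp(rect):
    n = rect.shape[0] - 1
    k = n // 2
    L = np.zeros((n, n))
    for j in range(k):
        L[j:, j] = rect[j + 1:, j]
    for a in range(k):
        for b in range(a + 1):
            L[k + a, k + b] = rect[b, a]
    return L

n = 6
A = np.tril(np.arange(1.0, n * n + 1).reshape(n, n))
assert np.allclose(unpack_rfp(pack_rfp(A)), A)     # lossless round trip
print(pack_rfp(A).shape, "holds", n * (n + 1) // 2, "entries for order", n)
```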
arxiv-5995 | 0901.1703 | Pilot Contamination and Precoding in Multi-Cell TDD Systems | <|reference_start|>Pilot Contamination and Precoding in Multi-Cell TDD Systems: This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. For precoding, channel state information (CSI) is essential at the base stations. A popular technique for obtaining this CSI in time division duplex (TDD) systems is uplink training by utilizing the reciprocity of the wireless medium. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being linear, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.<|reference_end|> | arxiv | @article{jose2009pilot,
title={Pilot Contamination and Precoding in Multi-Cell TDD Systems},
author={Jubin Jose and Alexei Ashikhmin and Thomas L. Marzetta and Sriram Vishwanath},
journal={arXiv preprint arXiv:0901.1703},
year={2009},
archivePrefix={arXiv},
eprint={0901.1703},
primaryClass={cs.IT math.IT}
} | jose2009pilot |
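The core effect is visible in a few lines: when two cells reuse the same training sequence, the correlation-based channel estimate at a base station converges to the sum of the desired and interfering channels rather than the desired channel alone. A single-user-per-cell toy (all parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
M, tau, snr = 8, 4, 100.0          # antennas, pilot length, training SNR
s = np.ones(tau) / np.sqrt(tau)    # the SAME pilot reused in both cells

def cn(*shape):
    """Circularly-symmetric complex Gaussian samples, unit variance."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

h_own, h_int = cn(M), cn(M)        # desired and interfering user channels
W = cn(M, tau)                     # unit-variance receiver noise

Y = np.sqrt(snr) * np.outer(h_own + h_int, s) + W
h_hat = Y @ s / np.sqrt(snr)       # least-squares pilot correlation

# the estimate is contaminated: it is close to the SUM, far from h_own
print("distance to own channel:     ", np.linalg.norm(h_hat - h_own))
print("distance to contaminated sum:", np.linalg.norm(h_hat - (h_own + h_int)))
```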
arxiv-5996 | 0901.1705 | Rate-Distortion with Side-Information at Many Decoders | <|reference_start|>Rate-Distortion with Side-Information at Many Decoders: We present a new inner bound for the rate region of the $t$-stage successive-refinement problem with side-information. We also present a new upper bound for the rate-distortion function for lossy-source coding with multiple decoders and side-information. Characterising this rate-distortion function is a long-standing open problem, and it is widely believed that the tightest upper bound is provided by Theorem 2 of Heegard and Berger's paper "Rate Distortion when Side Information may be Absent", \emph{IEEE Trans. Inform. Theory}, 1985. We give a counterexample to Heegard and Berger's result.<|reference_end|> | arxiv | @article{timo2009rate-distortion,
title={Rate-Distortion with Side-Information at Many Decoders},
author={Roy Timo and Terence Chan and Alexander Grant},
journal={arXiv preprint arXiv:0901.1705},
year={2009},
doi={10.1109/TIT.2011.2158472},
archivePrefix={arXiv},
eprint={0901.1705},
primaryClass={cs.IT math.IT}
} | timo2009rate-distortion |
arxiv-5997 | 0901.1708 | A statistical mechanical interpretation of instantaneous codes | <|reference_start|>A statistical mechanical interpretation of instantaneous codes: In this paper we develop a statistical mechanical interpretation of the noiseless source coding scheme based on an absolutely optimal instantaneous code. Notions from statistical mechanics such as statistical mechanical entropy, temperature, and thermal equilibrium are translated into the context of noiseless source coding. In particular, it is discovered that temperature 1 corresponds to the average codeword length of an instantaneous code in this statistical mechanical interpretation of the noiseless source coding scheme. This correspondence is also verified by an investigation using the box-counting dimension. Using the notion of temperature and statistical mechanical arguments, some information-theoretic relations can be derived in a manner that appeals to intuition.<|reference_end|> | arxiv | @article{tadaki2009a,
title={A statistical mechanical interpretation of instantaneous codes},
author={Kohtaro Tadaki},
journal={arXiv preprint arXiv:0901.1708},
year={2009},
doi={10.1109/ISIT.2007.4557499},
archivePrefix={arXiv},
eprint={0901.1708},
primaryClass={cs.IT math.IT}
} | tadaki2009a |
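A rough gloss of the analogy can be checked numerically: for an absolutely optimal instantaneous code (Kraft sum exactly 1, source probabilities p_i = 2^{-l_i}), the partition-function analogue Z(T) = sum_i 2^{-l_i/T} equals 1 at T = 1, where the average codeword length coincides with the source entropy. The identification of quantities below is this sketch's simplification, not the paper's precise definitions.

```python
import numpy as np

lengths = np.array([1, 2, 3, 3])      # codeword lengths of an absolutely
p = 2.0 ** (-lengths)                 # optimal code: Kraft sum is exactly 1
assert np.isclose(p.sum(), 1.0)       # and the source has p_i = 2^{-l_i}

def Z(T):
    """Partition-function analogue: codewords as states, length as energy."""
    return np.sum(2.0 ** (-lengths / T))

avg_len = float(np.sum(p * lengths))          # the 'temperature 1' quantity
entropy = float(-np.sum(p * np.log2(p)))
print(avg_len, entropy, Z(1.0))               # 1.75, 1.75, 1.0
```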
arxiv-5998 | 0901.1732 | Feedback Communication over Individual Channels | <|reference_start|>Feedback Communication over Individual Channels: We consider the problem of communicating over a channel for which no mathematical model is specified. We present achievable rates as a function of the channel input and output sequences known a-posteriori for discrete and continuous channels. Furthermore we present a rate-adaptive scheme employing feedback which achieves these rates asymptotically without prior knowledge of the channel behavior.<|reference_end|> | arxiv | @article{lomnitz2009feedback,
title={Feedback Communication over Individual Channels},
author={Yuval Lomnitz and Meir Feder},
journal={arXiv preprint arXiv:0901.1732},
year={2009},
archivePrefix={arXiv},
eprint={0901.1732},
primaryClass={cs.IT math.IT}
} | lomnitz2009feedback |
arxiv-5999 | 0901.1737 | Power Adaptive Feedback Communication over an Additive Individual Noise Sequence Channel | <|reference_start|>Power Adaptive Feedback Communication over an Additive Individual Noise Sequence Channel: We consider a real-valued additive channel with an individual unknown noise sequence. We present a simple sequential communication scheme based on the celebrated Schalkwijk-Kailath scheme, which varies the transmit power according to the power of the noise sequence, so that asymptotically the relation between the SNR and the rate matches the Gaussian channel capacity 1/2 log(1+SNR) for almost every noise sequence.<|reference_end|> | arxiv | @article{lomnitz2009power,
title={Power Adaptive Feedback Communication over an Additive Individual Noise
Sequence Channel},
author={Yuval Lomnitz and Meir Feder},
journal={arXiv preprint arXiv:0901.1737},
year={2009},
archivePrefix={arXiv},
eprint={0901.1737},
primaryClass={cs.IT math.IT}
} | lomnitz2009power |
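For reference, the plain Schalkwijk-Kailath recursion over a known unit-variance AWGN channel looks as follows; the estimation-error variance shrinks by a factor (1 + SNR) per channel use, which is where the 1/2 log(1+SNR) rate comes from. The paper's addition, adapting the transmit scaling to the empirical power of an arbitrary individual noise sequence, is not attempted in this fixed-statistics toy.

```python
import numpy as np

rng = np.random.default_rng(4)
P, n_rounds, n_trials = 1.0, 15, 20000

theta = rng.normal(size=n_trials)    # unit-variance message points
theta_hat = np.zeros(n_trials)       # receiver's running estimate
sigma2 = 1.0                         # current error variance (known to all)

for _ in range(n_rounds):
    err = theta_hat - theta          # transmitter learns this via feedback
    x = np.sqrt(P / sigma2) * err    # scale the error to transmit power P
    y = x + rng.normal(size=n_trials)                  # unit-variance AWGN
    theta_hat -= np.sqrt(P * sigma2) / (P + 1.0) * y   # LMMSE correction
    sigma2 /= 1.0 + P                # error variance falls by (1 + SNR)

print(np.var(theta - theta_hat), sigma2)   # empirical matches predicted
```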
arxiv-6000 | 0901.1753 | A Channel Coding Perspective of Recommendation Systems | <|reference_start|>A Channel Coding Perspective of Recommendation Systems: Motivated by recommendation systems, we consider the problem of estimating block-constant binary matrices (of size $m \times n$) from sparse and noisy observations. The observations are obtained from the underlying block-constant matrix after unknown row and column permutations, erasures, and errors. We derive upper and lower bounds on the achievable probability of error. For fixed erasure and error probability, we show that there exists a constant $C_1$ such that if the cluster sizes are less than $C_1 \ln(mn)$, then for any algorithm the probability of error approaches one as $m, n \to \infty$. On the other hand, we show that a simple polynomial-time algorithm gives probability of error diminishing to zero provided the cluster sizes are greater than $C_2 \ln(mn)$ for a suitable constant $C_2$.<|reference_end|> | arxiv | @article{aditya2009a,
title={A Channel Coding Perspective of Recommendation Systems},
author={S. T. Aditya and Onkar Dabeer and Bikash Kumar Dey},
journal={arXiv preprint arXiv:0901.1753},
year={2009},
archivePrefix={arXiv},
eprint={0901.1753},
primaryClass={cs.IT math.IT}
} | aditya2009a |
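The observation model in the abstract (block-constant matrix, hidden row and column permutations, then erasures and bit flips) is short to write down. The generator below is a sketch for experimentation; cluster geometry and noise levels are assumed values:

```python
import numpy as np

def observe(m, n, cluster, p_erase=0.9, p_err=0.05, seed=5):
    """Sample the observation model: a block-constant 0/1 matrix, unknown
    row/column permutations, sparse bit errors, and heavy erasures.
    Returns (truth X, observation Y) with -1 marking erased entries.
    Assumes cluster divides both m and n."""
    rng = np.random.default_rng(seed)
    blocks = rng.integers(0, 2, size=(m // cluster, n // cluster))
    X = np.kron(blocks, np.ones((cluster, cluster), dtype=int))
    X = X[rng.permutation(m)][:, rng.permutation(n)]      # hidden shuffles
    Y = np.where(rng.random(X.shape) < p_err, 1 - X, X)   # bit flips
    Y[rng.random(X.shape) < p_erase] = -1                 # erasures
    return X, Y

X, Y = observe(64, 64, cluster=8)
seen = Y != -1
print("observed fraction:", seen.mean(),
      " error rate among observed:", (Y[seen] != X[seen]).mean())
```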