corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses: 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100)
---|---|---|---|---|---|---
arxiv-2401 | 0801.3046 | A model for reactive porous transport during re-wetting of hardened concrete | <|reference_start|>A model for reactive porous transport during re-wetting of hardened concrete: A mathematical model is developed that captures the transport of liquid water in hardened concrete, as well as the chemical reactions that occur between the imbibed water and the residual calcium silicate compounds residing in the porous concrete matrix. The main hypothesis in this model is that the reaction product -- calcium silicate hydrate gel -- clogs the pores within the concrete thereby hindering water transport. Numerical simulations are employed to determine the sensitivity of the model solution to changes in various physical parameters, and compare to experimental results available in the literature.<|reference_end|> | arxiv | @article{chapwanya2008a,
title={A model for reactive porous transport during re-wetting of hardened
concrete},
author={Michael Chapwanya, Wentao Liu and John M. Stockie},
journal={Journal of Engineering Mathematics, 65(1):53-73, 2009},
year={2008},
doi={10.1007/s10665-009-9268-0},
archivePrefix={arXiv},
eprint={0801.3046},
primaryClass={cs.CE physics.flu-dyn}
} | chapwanya2008a |
arxiv-2402 | 0801.3048 | Human Heuristics for Autonomous Agents | <|reference_start|>Human Heuristics for Autonomous Agents: We investigate the problem of autonomous agents processing pieces of information that may be corrupted (tainted). Agents have the option of contacting a central database for a reliable check of the status of the message, but this procedure is costly and therefore should be used with parsimony. Agents have to evaluate the risk of being infected, and decide if and when communicating partners are affordable. Trustability is implemented as a personal (one-to-one) record of past contacts among agents, and as a mean-field monitoring of the level of message corruption. Moreover, this information is slowly forgotten in time, so that at the end everybody is checked against the database. We explore the behavior of a homogeneous system in the case of a fixed pool of spreaders of corrupted messages, and in the case of spontaneous appearance of corrupted messages.<|reference_end|> | arxiv | @article{bagnoli2008human,
title={Human Heuristics for Autonomous Agents},
author={Franco Bagnoli, Andrea Guazzini, Pietro Li\'o},
journal={P. Li\'o et al. editors, BIOWIRE 2007, LNCS 5151, pages 340-351,
Springer-Verlag Berlin Heidelberg 2008},
year={2008},
doi={10.1007/978-3-540-92191-2_30},
archivePrefix={arXiv},
eprint={0801.3048},
primaryClass={cs.MA cs.HC cs.NI}
} | bagnoli2008human |
arxiv-2403 | 0801.3049 | Spatial-Spectral Joint Detection for Wideband Spectrum Sensing in Cognitive Radio Networks | <|reference_start|>Spatial-Spectral Joint Detection for Wideband Spectrum Sensing in Cognitive Radio Networks: Spectrum sensing is an essential functionality that enables cognitive radios to detect spectral holes and opportunistically use under-utilized frequency bands without causing harmful interference to primary networks. Since individual cognitive radios might not be able to reliably detect weak primary signals due to channel fading/shadowing, this paper proposes a cooperative wideband spectrum sensing scheme, referred to as spatial-spectral joint detection, which is based on a linear combination of the local statistics from spatially distributed multiple cognitive radios. The cooperative sensing problem is formulated into an optimization problem, for which suboptimal but efficient solutions can be obtained through mathematical transformation under practical conditions.<|reference_end|> | arxiv | @article{quan2008spatial-spectral,
title={Spatial-Spectral Joint Detection for Wideband Spectrum Sensing in
Cognitive Radio Networks},
author={Zhi Quan, Shuguang Cui, Ali H. Sayed, and H. Vincent Poor},
journal={arXiv preprint arXiv:0801.3049},
year={2008},
doi={10.1109/TSP.2008.2008540},
archivePrefix={arXiv},
eprint={0801.3049},
primaryClass={cs.IT math.IT}
} | quan2008spatial-spectral |
arxiv-2404 | 0801.3065 | Cut Elimination for a Logic with Generic Judgments and Induction | <|reference_start|>Cut Elimination for a Logic with Generic Judgments and Induction: This paper presents a cut-elimination proof for the logic $LG^\omega$, which is an extension of a proof system for encoding generic judgments, the logic $\FOLDNb$ of Miller and Tiu, with an induction principle. The logic $LG^\omega$, just as $\FOLDNb$, features extensions of first-order intuitionistic logic with fixed points and a ``generic quantifier'', $\nabla$, which is used to reason about the dynamics of bindings in object systems encoded in the logic. A previous attempt to extend $\FOLDNb$ with an induction principle has been unsuccessful in modeling some behaviours of bindings in inductive specifications. It turns out that this problem can be solved by relaxing some restrictions on $\nabla$, in particular by adding the axiom $B \equiv \nabla x. B$, where $x$ is not free in $B$. We show that by adopting the equivariance principle, the presentation of the extended logic can be much simplified. This paper contains the technical proofs for the results stated in \cite{tiu07entcs}; readers are encouraged to consult \cite{tiu07entcs} for motivations and examples for $LG^\omega.$<|reference_end|> | arxiv | @article{tiu2008cut,
title={Cut Elimination for a Logic with Generic Judgments and Induction},
author={Alwen Tiu},
journal={arXiv preprint arXiv:0801.3065},
year={2008},
archivePrefix={arXiv},
eprint={0801.3065},
primaryClass={cs.LO}
} | tiu2008cut |
arxiv-2405 | 0801.3073 | Large Deviations Analysis for the Detection of 2D Hidden Gauss-Markov Random Fields Using Sensor Networks | <|reference_start|>Large Deviations Analysis for the Detection of 2D Hidden Gauss-Markov Random Fields Using Sensor Networks: The detection of hidden two-dimensional Gauss-Markov random fields using sensor networks is considered. Under a conditional autoregressive model, the error exponent for the Neyman-Pearson detector satisfying a fixed level constraint is obtained using the large deviations principle. For a symmetric first order autoregressive model, the error exponent is given explicitly in terms of the SNR and an edge dependence factor (field correlation). The behavior of the error exponent as a function of correlation strength is seen to divide into two regions depending on the value of the SNR. At high SNR, uncorrelated observations maximize the error exponent for a given SNR, whereas there is non-zero optimal correlation at low SNR. Based on the error exponent, the energy efficiency (defined as the ratio of the total information gathered to the total energy required) of an ad hoc sensor network for detection is examined for two sensor deployment models: an infinite area model and an infinite density model. For a fixed sensor density, the energy efficiency diminishes to zero at rate O(area^{-1/2}) as the area is increased. On the other hand, non-zero efficiency is possible for increasing density depending on the behavior of the physical correlation as a function of the link length.<|reference_end|> | arxiv | @article{sung2008large,
title={Large Deviations Analysis for the Detection of 2D Hidden Gauss-Markov
Random Fields Using Sensor Networks},
author={Youngchul Sung, H. Vincent Poor and Heejung Yu},
journal={arXiv preprint arXiv:0801.3073},
year={2008},
doi={10.1109/ICASSP.2008.4518504},
archivePrefix={arXiv},
eprint={0801.3073},
primaryClass={cs.IT math.IT}
} | sung2008large |
arxiv-2406 | 0801.3097 | Auction-based Resource Allocation for Multi-relay Asynchronous Cooperative Networks | <|reference_start|>Auction-based Resource Allocation for Multi-relay Asynchronous Cooperative Networks: Resource allocation is considered for cooperative transmissions in multiple-relay wireless networks. Two auction mechanisms, SNR auctions and power auctions, are proposed to distributively coordinate the allocation of power among multiple relays. In the SNR auction, a user chooses the relay with the lowest weighted price. In the power auction, a user may choose to use multiple relays simultaneously, depending on the network topology and the relays' prices. Sufficient conditions for the existence (in both auctions) and uniqueness (in the SNR auction) of the Nash equilibrium are given. The fairness of the SNR auction and efficiency of the power auction are further discussed. It is also proven that users can achieve the unique Nash equilibrium distributively via best response updates in a completely asynchronous manner.<|reference_end|> | arxiv | @article{huang2008auction-based,
title={Auction-based Resource Allocation for Multi-relay Asynchronous
Cooperative Networks},
author={Jianwei Huang, Zhu Han, Mung Chiang, H. Vincent Poor},
journal={arXiv preprint arXiv:0801.3097},
year={2008},
doi={10.1109/ICASSP.2008.4518870},
archivePrefix={arXiv},
eprint={0801.3097},
primaryClass={cs.IT math.IT}
} | huang2008auction-based |
arxiv-2407 | 0801.3102 | Balancing transparency, efficiency and security in pervasive systems | <|reference_start|>Balancing transparency, efficiency and security in pervasive systems: This chapter will survey pervasive computing with a look at how its constraint for transparency affects issues of resource management and security. The goal of pervasive computing is to render computing transparent, such that computing resources are ubiquitously offered to the user and services are proactively performed for a user without his or her intervention. The task of integrating computing infrastructure into everyday life without making it excessively invasive brings about tradeoffs between flexibility and robustness, efficiency and effectiveness, as well as autonomy and reliability. As the feasibility of ubiquitous computing and its real potential for mass applications are still a matter of controversy, this chapter will look into the underlying issues of resource management and authentication to discover how these can be handled in a least invasive fashion. The discussion will be closed by an overview of the solutions proposed by current pervasive computing efforts, both in the area of generic platforms and for dedicated applications such as pervasive education and healthcare.<|reference_end|> | arxiv | @article{wenstrom2008balancing,
title={Balancing transparency, efficiency and security in pervasive systems},
author={Mark Wenstrom, Eloisa Bentivegna and Ali Hurson (Pennsylvania State
University)},
journal={arXiv preprint arXiv:0801.3102},
year={2008},
archivePrefix={arXiv},
eprint={0801.3102},
primaryClass={cs.HC cs.IR}
} | wenstrom2008balancing |
arxiv-2408 | 0801.3111 | Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on NK Landscapes | <|reference_start|>Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on NK Landscapes: This study analyzes performance of several genetic and evolutionary algorithms on randomly generated NK fitness landscapes with various values of n and k. A large number of NK problem instances are first generated for each n and k, and the global optimum of each instance is obtained using the branch-and-bound algorithm. Next, the hierarchical Bayesian optimization algorithm (hBOA), the univariate marginal distribution algorithm (UMDA), and the simple genetic algorithm (GA) with uniform and two-point crossover operators are applied to all generated instances. Performance of all algorithms is then analyzed and compared, and the results are discussed.<|reference_end|> | arxiv | @article{pelikan2008analysis,
title={Analysis of Estimation of Distribution Algorithms and Genetic Algorithms
on NK Landscapes},
author={Martin Pelikan},
journal={Proceedings of the Genetic and Evolutionary Computation Conference
(GECCO-2008), ACM Press, 1033-1040},
year={2008},
number={MEDAL Report No. 2008001},
archivePrefix={arXiv},
eprint={0801.3111},
primaryClass={cs.NE cs.AI}
} | pelikan2008analysis |
arxiv-2409 | 0801.3112 | The Two User Gaussian Compound Interference Channel | <|reference_start|>The Two User Gaussian Compound Interference Channel: We introduce the two user finite state compound Gaussian interference channel and characterize its capacity region to within one bit. The main contributions involve both novel inner and outer bounds. The inner bound is multilevel superposition coding, but the decoding of the levels is opportunistic, depending on the channel state. The genie aided outer bound is motivated by the typical error events of the achievable scheme.<|reference_end|> | arxiv | @article{raja2008the,
title={The Two User Gaussian Compound Interference Channel},
author={Adnan Raja, Vinod M. Prabhakaran, and Pramod Viswanath},
journal={arXiv preprint arXiv:0801.3112},
year={2008},
archivePrefix={arXiv},
eprint={0801.3112},
primaryClass={cs.IT math.IT}
} | raja2008the |
arxiv-2410 | 0801.3113 | iBOA: The Incremental Bayesian Optimization Algorithm | <|reference_start|>iBOA: The Incremental Bayesian Optimization Algorithm: This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.<|reference_end|> | arxiv | @article{pelikan2008iboa:,
title={iBOA: The Incremental Bayesian Optimization Algorithm},
author={Martin Pelikan, Kumara Sastry, and David E. Goldberg},
journal={Proceedings of the Genetic and Evolutionary Computation Conference
(GECCO-2008), ACM Press, 455-462},
year={2008},
number={MEDAL Report No. 2008002},
archivePrefix={arXiv},
eprint={0801.3113},
primaryClass={cs.NE cs.AI}
} | pelikan2008iboa: |
arxiv-2411 | 0801.3114 | Thinking is Bad: Implications of Human Error Research for Spreadsheet Research and Practice | <|reference_start|>Thinking is Bad: Implications of Human Error Research for Spreadsheet Research and Practice: In the spreadsheet error community, both academics and practitioners generally have ignored the rich findings produced by a century of human error research. These findings can suggest ways to reduce errors; we can then test these suggestions empirically. In addition, research on human error seems to suggest that several common prescriptions and expectations for reducing errors are likely to be incorrect. Among the key conclusions from human error research are that thinking is bad, that spreadsheets are not the cause of spreadsheet errors, and that reducing errors is extremely difficult.<|reference_end|> | arxiv | @article{panko2008thinking,
title={Thinking is Bad: Implications of Human Error Research for Spreadsheet
Research and Practice},
author={Raymond R. Panko},
journal={Proc. European Spreadsheet Risks Int. Grp. 2007 69-80 ISBN
978-905617-58-6},
year={2008},
archivePrefix={arXiv},
eprint={0801.3114},
primaryClass={cs.HC}
} | panko2008thinking |
arxiv-2412 | 0801.3116 | Enterprise Spreadsheet Management: A Necessary Good | <|reference_start|>Enterprise Spreadsheet Management: A Necessary Good: This paper presents the arguments and supporting business metrics for Enterprise Spreadsheet Management to be seen as a necessary good. These arguments are divided into a summary of external business drivers that make it necessary and the good that may be delivered to business spreadsheet users involved in repetitive manual processes.<|reference_end|> | arxiv | @article{baxter2008enterprise,
title={Enterprise Spreadsheet Management: A Necessary Good},
author={Ralph Baxter},
journal={Proc. European Spreadsheet Risks Int. Grp. 2007 7-13 ISBN
978-905617-58-6},
year={2008},
archivePrefix={arXiv},
eprint={0801.3116},
primaryClass={cs.CY}
} | baxter2008enterprise |
arxiv-2413 | 0801.3117 | A hierarchy of behavioral equivalences in the $\pi$-calculus with noisy channels | <|reference_start|>A hierarchy of behavioral equivalences in the $\pi$-calculus with noisy channels: The $\pi$-calculus is a process algebra where agents interact by sending communication links to each other via noiseless communication channels. Taking into account the reality of noisy channels, an extension of the $\pi$-calculus, called the $\pi_N$-calculus, has been introduced recently. In this paper, we present an early transitional semantics of the $\pi_N$-calculus, which is not a directly translated version of the late semantics of $\pi_N$, and then extend six kinds of behavioral equivalences consisting of reduction bisimilarity, barbed bisimilarity, barbed equivalence, barbed congruence, bisimilarity, and full bisimilarity into the $\pi_N$-calculus. Such behavioral equivalences are cast in a hierarchy, which is helpful to verify behavioral equivalence of two agents. In particular, we show that due to the noisy nature of channels, the coincidence of bisimilarity and barbed equivalence, as well as the coincidence of full bisimilarity and barbed congruence, in the $\pi$-calculus does not hold in $\pi_N$.<|reference_end|> | arxiv | @article{cao2008a,
title={A hierarchy of behavioral equivalences in the $\pi$-calculus with noisy
channels},
author={Yongzhi Cao},
journal={Comput. J., vol. 53, no. 1, pp. 3-20, 2010},
year={2008},
archivePrefix={arXiv},
eprint={0801.3117},
primaryClass={cs.LO}
} | cao2008a |
arxiv-2414 | 0801.3118 | Spreadsheet Hell | <|reference_start|>Spreadsheet Hell: This management paper looks at the real world issues faced by practitioners managing spreadsheets through the production phase of their life cycle. It draws on the commercial experience of several developers working with large corporations, either as employees or consultants or contractors. It provides commercial examples of some of the practicalities involved with spreadsheet use around the enterprise.<|reference_end|> | arxiv | @article{murphy2008spreadsheet,
title={Spreadsheet Hell},
author={Simon Murphy},
journal={Proc. European Spreadsheet Risks Int. Grp. 2007 15-20 ISBN
978-905617-58-6},
year={2008},
archivePrefix={arXiv},
eprint={0801.3118},
primaryClass={cs.CY}
} | murphy2008spreadsheet |
arxiv-2415 | 0801.3119 | Categorisation of Spreadsheet Use within Organisations, Incorporating Risk: A Progress Report | <|reference_start|>Categorisation of Spreadsheet Use within Organisations, Incorporating Risk: A Progress Report: There has been a significant amount of research into spreadsheets over the last two decades. Errors in spreadsheets are well documented. Once used mainly for simple functions such as logging, tracking and totalling information, spreadsheets with enhanced formulas are being used for complex calculative models. There are many software packages and tools which assist in detecting errors within spreadsheets. There has been very little evidence of investigation into the spreadsheet risks associated with the main stream operations within an organisation. This study is a part of the investigation into the means of mitigating risks associated with spreadsheet use within organisations. In this paper the authors present and analyse three proposed models for categorisation of spreadsheet use and the level of risks involved. The models are analysed in the light of current knowledge and the general risks associated with organisations.<|reference_end|> | arxiv | @article{madahar2008categorisation,
title={Categorisation of Spreadsheet Use within Organisations, Incorporating
Risk: A Progress Report},
author={Mukul Madahar, Pat Cleary, David Ball},
journal={Proc. European Spreadsheet Risks Int. Grp. 2007 37-45 ISBN
978-905617-58-6},
year={2008},
archivePrefix={arXiv},
eprint={0801.3119},
primaryClass={cs.CY cs.HC}
} | madahar2008categorisation |
arxiv-2416 | 0801.3147 | From k-SAT to k-CSP: Two Generalized Algorithms | <|reference_start|>From k-SAT to k-CSP: Two Generalized Algorithms: Constraint satisfaction problems (CSPs) model many important intractable NP-hard problems such as the propositional satisfiability problem (SAT). Algorithms with non-trivial upper bounds on running time for restricted SAT with bounded clause length k (k-SAT) can be classified into three styles: DPLL-like, PPSZ-like and Local Search, with local search algorithms having already been generalized to CSP with bounded constraint arity k (k-CSP). We generalize a DPLL-like algorithm in its simplest form and a PPSZ-like algorithm from k-SAT to k-CSP. As far as we know, this is the first attempt to use a PPSZ-like strategy to solve k-CSP, and previously little work has focused on DPLL-like or PPSZ-like strategies for k-CSP.<|reference_end|> | arxiv | @article{li2008from,
title={From k-SAT to k-CSP: Two Generalized Algorithms},
author={Liang Li, Xin Li, Tian Liu, Ke Xu},
journal={arXiv preprint arXiv:0801.3147},
year={2008},
archivePrefix={arXiv},
eprint={0801.3147},
primaryClass={cs.DS cs.AI cs.CC}
} | li2008from |
arxiv-2417 | 0801.3199 | Descent methods for Nonnegative Matrix Factorization | <|reference_start|>Descent methods for Nonnegative Matrix Factorization: In this paper, we present several descent methods that can be applied to nonnegative matrix factorization and we analyze a recently developed fast block coordinate method called Rank-one Residue Iteration (RRI). We also give a comparison of these different methods and show that the new block coordinate method has better properties in terms of approximation error and complexity. By interpreting this method as a rank-one approximation of the residue matrix, we prove that it \emph{converges} and also extend it to the nonnegative tensor factorization and introduce some variants of the method by imposing some additional controllable constraints such as: sparsity, discreteness and smoothness.<|reference_end|> | arxiv | @article{ho2008descent,
title={Descent methods for Nonnegative Matrix Factorization},
author={Ngoc-Diep Ho (1), Paul Van Dooren (1) and Vincent D. Blondel (1) ((1)
Universit\'e catholique de Louvain, Belgium)},
journal={arXiv preprint arXiv:0801.3199},
year={2008},
number={2007.057},
archivePrefix={arXiv},
eprint={0801.3199},
primaryClass={cs.NA cs.IR math.OC}
} | ho2008descent |
arxiv-2418 | 0801.3209 | A Pyramidal Evolutionary Algorithm with Different Inter-Agent Partnering Strategies for Scheduling Problems | <|reference_start|>A Pyramidal Evolutionary Algorithm with Different Inter-Agent Partnering Strategies for Scheduling Problems: This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the agents on solution quality are examined for two multiple-choice optimisation problems. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub-) fitness measurements.<|reference_end|> | arxiv | @article{aickelin2008a,
title={A Pyramidal Evolutionary Algorithm with Different Inter-Agent Partnering
Strategies for Scheduling Problems},
author={Uwe Aickelin},
journal={Proceedings of the Genetic and Evolutionary Computation Conference
(GECCO 2001), late-breaking papers volume, pp 1-8, San Francisco, USA},
year={2008},
archivePrefix={arXiv},
eprint={0801.3209},
primaryClass={cs.NE cs.CE}
} | aickelin2008a |
arxiv-2419 | 0801.3239 | Online-concordance "Perekhresni stezhky" ("The Cross-Paths"), a novel by Ivan Franko | <|reference_start|>Online-concordance "Perekhresni stezhky" ("The Cross-Paths"), a novel by Ivan Franko: In the article, theoretical principles and practical realization for the compilation of the concordance to "Perekhresni stezhky" ("The Cross-Paths"), a novel by Ivan Franko, are described. Two forms for the context presentation are proposed. The electronic version of this lexicographic work is available online.<|reference_end|> | arxiv | @article{buk2008online-concordance,
title={Online-concordance "Perekhresni stezhky" ("The Cross-Paths"), a novel by
Ivan Franko},
author={Solomiya Buk, Andrij Rovenchak},
journal={Ivan Franko: Spirit, Science, Thought, Will (Proceedings of the
International Scientific Congress dedicated to the 150th anniversary (Lviv,
27 September -- 1 October 2006), Lviv University Press, Vol. 2, pp. 203-211,
2010)},
year={2008},
archivePrefix={arXiv},
eprint={0801.3239},
primaryClass={cs.CL cs.DL}
} | buk2008online-concordance |
arxiv-2420 | 0801.3249 | Complex Eigenvalues for Binary Subdivision Schemes | <|reference_start|>Complex Eigenvalues for Binary Subdivision Schemes: Convergence properties of binary stationary subdivision schemes for curves have been analyzed using the techniques of z-transforms and eigenanalysis. Eigenanalysis provides a way to determine derivative continuity at specific points based on the eigenvalues of a finite matrix. None of the well-known subdivision schemes for curves have complex eigenvalues. We prove when a convergent scheme with palindromic mask can have complex eigenvalues and that a lower limit for the size of the mask exists in this case. We find a scheme with complex eigenvalues achieving this lower bound. Furthermore we investigate this scheme numerically and explain from a geometric viewpoint why such a scheme has not yet been used in computer-aided geometric design.<|reference_end|> | arxiv | @article{kuehn2008complex,
title={Complex Eigenvalues for Binary Subdivision Schemes},
author={Christian Kuehn},
journal={arXiv preprint arXiv:0801.3249},
year={2008},
archivePrefix={arXiv},
eprint={0801.3249},
primaryClass={cs.GR cs.NA}
} | kuehn2008complex |
arxiv-2421 | 0801.3272 | Nonregenerative MIMO Relaying with Optimal Transmit Antenna Selection | <|reference_start|>Nonregenerative MIMO Relaying with Optimal Transmit Antenna Selection: We derive optimal SNR-based transmit antenna selection rules at the source and relay for the nonregenerative half duplex MIMO relay channel. While antenna selection is a suboptimal form of beamforming, it has the advantage that the optimization is tractable and can be implemented with only a few bits of feedback from the destination to the source and relay. We compare the bit error rate of optimal antenna selection at both the source and relay to other proposed beamforming techniques and propose methods for performing the necessary limited feedback.<|reference_end|> | arxiv | @article{peters2008nonregenerative,
title={Nonregenerative MIMO Relaying with Optimal Transmit Antenna Selection},
author={Steven W. Peters and Robert W. Heath Jr},
journal={arXiv preprint arXiv:0801.3272},
year={2008},
doi={10.1109/LSP.2008.921466},
archivePrefix={arXiv},
eprint={0801.3272},
primaryClass={cs.IT math.IT}
} | peters2008nonregenerative |
arxiv-2422 | 0801.3289 | Optimal Medium Access Control in Cognitive Radios: A Sequential Design Approach | <|reference_start|>Optimal Medium Access Control in Cognitive Radios: A Sequential Design Approach: The design of medium access control protocols for a cognitive user wishing to opportunistically exploit frequency bands within parts of the radio spectrum having multiple bands is considered. In the scenario under consideration, the availability probability of each channel is unknown a priori to the cognitive user. Hence efficient medium access strategies must strike a balance between exploring the availability of channels and exploiting the opportunities identified thus far. Using a sequential design approach, an optimal medium access strategy is derived. To avoid the prohibitive computational complexity of this optimal strategy, a low complexity asymptotically optimal strategy is also developed. The proposed strategy does not require any prior statistical knowledge about the traffic pattern on the different channels.<|reference_end|> | arxiv | @article{lai2008optimal,
title={Optimal Medium Access Control in Cognitive Radios: A Sequential Design
Approach},
author={Lifeng Lai, Hesham El Gamal, Hai Jiang and H. Vincent Poor},
journal={arXiv preprint arXiv:0801.3289},
year={2008},
doi={10.1109/ICASSP.2008.4518049},
archivePrefix={arXiv},
eprint={0801.3289},
primaryClass={cs.IT cs.NI math.IT}
} | lai2008optimal |
arxiv-2423 | 0801.3331 | Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases | <|reference_start|>Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases: The Hermite-Korkine-Zolotarev reduction plays a central role in strong lattice reduction algorithms. By building upon a technique introduced by Ajtai, we show the existence of Hermite-Korkine-Zolotarev reduced bases that are arguably least reduced. We prove that for such bases, Kannan's algorithm solving the shortest lattice vector problem requires $d^{\frac{d}{2e}(1+o(1))}$ bit operations in dimension $d$. This matches the best complexity upper bound known for this algorithm. These bases also provide lower bounds on Schnorr's constants $\alpha_d$ and $\beta_d$ that are essentially equal to the best upper bounds. Finally, we also show the existence of particularly bad bases for Schnorr's hierarchy of reductions.<|reference_end|> | arxiv | @article{hanrot2008worst-case,
title={Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases},
author={Guillaume Hanrot (INRIA Lorraine - LORIA), Damien Stehl\'e (INRIA
Rh\^one-Alpes)},
journal={arXiv preprint arXiv:0801.3331},
year={2008},
archivePrefix={arXiv},
eprint={0801.3331},
primaryClass={math.NT cs.CC cs.CR}
} | hanrot2008worst-case |
arxiv-2424 | 0801.3408 | On the expressive power of permanents and perfect matchings of matrices of bounded pathwidth/cliquewidth | <|reference_start|>On the expressive power of permanents and perfect matchings of matrices of bounded pathwidth/cliquewidth: Some 25 years ago Valiant introduced an algebraic model of computation in order to study the complexity of evaluating families of polynomials. The theory was introduced along with the complexity classes VP and VNP which are analogues of the classical classes P and NP. Families of polynomials that are difficult to evaluate (that is, VNP-complete) include the permanent and hamiltonian polynomials. In a previous paper the authors together with P. Koiran studied the expressive power of permanent and hamiltonian polynomials of matrices of bounded treewidth, as well as the expressive power of perfect matchings of planar graphs. It was established that the permanent and hamiltonian polynomials of matrices of bounded treewidth are equivalent to arithmetic formulas. Also, the sum of weights of perfect matchings of planar graphs was shown to be equivalent to (weakly) skew circuits. In this paper we continue the research in the direction described above, and study the expressive power of permanents, hamiltonians and perfect matchings of matrices that have bounded pathwidth or bounded cliquewidth. In particular, we prove that permanents, hamiltonians and perfect matchings of matrices that have bounded pathwidth express exactly arithmetic formulas. This is an improvement of our previous result for matrices of bounded treewidth. Also, for matrices of bounded weighted cliquewidth we show membership in VP for these polynomials.<|reference_end|> | arxiv | @article{flarup2008on,
title={On the expressive power of permanents and perfect matchings of matrices
of bounded pathwidth/cliquewidth},
author={Uffe Flarup (IMADA), Laurent Lyaudet (LIP)},
journal={arXiv preprint arXiv:0801.3408},
year={2008},
archivePrefix={arXiv},
eprint={0801.3408},
primaryClass={cs.DM}
} | flarup2008on |
arxiv-2425 | 0801.3511 | Deterministic Design of Low-Density Parity-Check Codes for Binary Erasure Channels | <|reference_start|>Deterministic Design of Low-Density Parity-Check Codes for Binary Erasure Channels: We propose a deterministic method to design irregular Low-Density Parity-Check (LDPC) codes for binary erasure channels (BEC). Compared to the existing methods, which are based on the application of asymptotic analysis tools such as density evolution or Extrinsic Information Transfer (EXIT) charts in an optimization process, the proposed method is much simpler and faster. Through a number of examples, we demonstrate that the codes designed by the proposed method perform very closely to the best codes designed by optimization. An important property of the proposed designs is the flexibility to select the number of constituent variable node degrees P. The proposed designs include existing deterministic designs as a special case with P = N-1, where N is the maximum variable node degree. Compared to the existing deterministic designs, for a given rate and a given d > 0, the designed ensembles can have a threshold in the d-neighborhood of the capacity upper bound with smaller values of P and N. They can also achieve the capacity of the BEC as N, and correspondingly P and the maximum check node degree, tend to infinity.<|reference_end|> | arxiv | @article{saeedi2008deterministic,
title={Deterministic Design of Low-Density Parity-Check Codes for Binary
Erasure Channels},
author={Hamid Saeedi and Amir H. Banihashemi},
journal={arXiv preprint arXiv:0801.3511},
year={2008},
archivePrefix={arXiv},
eprint={0801.3511},
primaryClass={cs.IT math.IT}
} | saeedi2008deterministic |
arxiv-2426 | 0801.3521 | Capacity of Sparse Wideband Channels with Partial Channel Feedback | <|reference_start|>Capacity of Sparse Wideband Channels with Partial Channel Feedback: This paper studies the ergodic capacity of wideband multipath channels with limited feedback. Our work builds on recent results that have established the possibility of significant capacity gains in the wideband/low-SNR regime when there is perfect channel state information (CSI) at the transmitter. Furthermore, the perfect CSI benchmark gain can be obtained with the feedback of just one bit per channel coefficient. However, the input signals used in these methods are peaky, that is, they have large peak-to-average power ratios. Signal peakiness is related to channel coherence and many recent measurement campaigns show that, in contrast to previous assumptions, wideband channels exhibit a sparse multipath structure that naturally leads to coherence in time and frequency. In this work, we first show that even an instantaneous power constraint is sufficient to achieve the benchmark gain when perfect CSI is available at the receiver. In the more realistic non-coherent setting, we study the performance of a training-based signaling scheme. We show that multipath sparsity can be leveraged to achieve the benchmark gain under both average as well as instantaneous power constraints as long as the channel coherence scales at a sufficiently fast rate with signal space dimensions. We also present rules of thumb on choosing signaling parameters as a function of the channel parameters so that the full benefits of sparsity can be realized.<|reference_end|> | arxiv | @article{hariharan2008capacity,
title={Capacity of Sparse Wideband Channels with Partial Channel Feedback},
author={Gautham Hariharan, Vasanthan Raghavan, Akbar M. Sayeed},
journal={arXiv preprint arXiv:0801.3521},
year={2008},
archivePrefix={arXiv},
eprint={0801.3521},
primaryClass={cs.IT math.IT}
} | hariharan2008capacity |
arxiv-2427 | 0801.3526 | Quantized Multimode Precoding in Spatially Correlated Multi-Antenna Channels | <|reference_start|>Quantized Multimode Precoding in Spatially Correlated Multi-Antenna Channels: Multimode precoding, where the number of independent data-streams is adapted optimally, can be used to maximize the achievable throughput in multi-antenna communication systems. Motivated by standardization efforts embraced by the industry, the focus of this work is on systematic precoder design with realistic assumptions on the spatial correlation, channel state information (CSI) at the transmitter and the receiver, and implementation complexity. For spatial correlation of the channel matrix, we assume a general channel model, based on physical principles, that has been verified by many recent measurement campaigns. We also assume a coherent receiver and knowledge of the spatial statistics at the transmitter along with the presence of an ideal, low-rate feedback link from the receiver to the transmitter. The reverse link is used for codebook-index feedback and the goal of this work is to construct precoder codebooks, adaptable in response to the statistical information, such that the achievable throughput is significantly enhanced over that of a fixed, non-adaptive, i.i.d. codebook design. We illustrate how a codebook of semiunitary precoder matrices localized around some fixed center on the Grassmann manifold can be skewed in response to the spatial correlation via low-complexity maps that can rotate and scale submanifolds on the Grassmann manifold. The skewed codebook in combination with a low-complexity statistical power allocation scheme is then shown to bridge the gap in performance between a perfect CSI benchmark and an i.i.d. codebook design.<|reference_end|> | arxiv | @article{raghavan2008quantized,
title={Quantized Multimode Precoding in Spatially Correlated Multi-Antenna
Channels},
author={Vasanthan Raghavan, Venu Veeravalli, Akbar Sayeed},
journal={arXiv preprint arXiv:0801.3526},
year={2008},
doi={10.1109/TSP.2008.2005748},
archivePrefix={arXiv},
eprint={0801.3526},
primaryClass={cs.IT math.IT}
} | raghavan2008quantized |
arxiv-2428 | 0801.3539 | On the Effects of Idiotypic Interactions for Recommendation Communities in Artificial Immune Systems | <|reference_start|>On the Effects of Idiotypic Interactions for Recommendation Communities in Artificial Immune Systems: It has previously been shown that a recommender based on immune system idiotypic principles can outperform one based on correlation alone. This paper reports the results of work in progress, where we undertake some investigations into the nature of this beneficial effect. The initial findings are that the immune system recommender tends to produce different neighbourhoods, and that the superior performance of this recommender is due partly to the different neighbourhoods, and partly to the way that the idiotypic effect is used to weight each neighbour's recommendations.<|reference_end|> | arxiv | @article{cayzer2008on,
title={On the Effects of Idiotypic Interactions for Recommendation Communities
in Artificial Immune Systems},
author={Steve Cayzer and Uwe Aickelin},
journal={Proceedings of the 1st International Conference on Artificial
Immune Systems (ICARIS 2002), pp 154-160, Canterbury, UK, 2002},
year={2008},
archivePrefix={arXiv},
eprint={0801.3539},
primaryClass={cs.NE cs.AI}
} | cayzer2008on |
arxiv-2429 | 0801.3547 | A Recommender System based on the Immune Network | <|reference_start|>A Recommender System based on the Immune Network: The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen - antibody interaction for matching and antibody - antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.<|reference_end|> | arxiv | @article{cazyer2008a,
title={A Recommender System based on the Immune Network},
author={Steve Cayzer and Uwe Aickelin},
journal={Proceedings of the IEEE Congress on Evolutionary Computation (CEC
2002), pp 807-813, Honolulu, USA, 2002},
year={2008},
archivePrefix={arXiv},
eprint={0801.3547},
primaryClass={cs.NE cs.AI}
} | cazyer2008a |
arxiv-2430 | 0801.3549 | The Danger Theory and Its Application to Artificial Immune Systems | <|reference_start|>The Danger Theory and Its Application to Artificial Immune Systems: Over the last decade, a new idea challenging the classical self-non-self viewpoint has become popular amongst immunologists. It is called the Danger Theory. In this conceptual paper, we look at this theory from the perspective of Artificial Immune System practitioners. An overview of the Danger Theory is presented with particular emphasis on analogies in the Artificial Immune Systems world. A number of potential application areas are then used to provide a framing for a critical assessment of the concept, and its relevance for Artificial Immune Systems.<|reference_end|> | arxiv | @article{aickelin2008the,
title={The Danger Theory and Its Application to Artificial Immune Systems},
author={Uwe Aickelin and Steve Cayzer},
journal={Proceedings of the 1st International Conference on Artificial
Immune Systems (ICARIS 2002), pp 141-148, Canterbury, UK, 2002},
year={2008},
archivePrefix={arXiv},
eprint={0801.3549},
primaryClass={cs.NE cs.AI cs.CR}
} | aickelin2008the |
arxiv-2431 | 0801.3550 | Partnering Strategies for Fitness Evaluation in a Pyramidal Evolutionary Algorithm | <|reference_start|>Partnering Strategies for Fitness Evaluation in a Pyramidal Evolutionary Algorithm: This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.<|reference_end|> | arxiv | @article{aickelin2008partnering,
title={Partnering Strategies for Fitness Evaluation in a Pyramidal Evolutionary
Algorithm},
author={Uwe Aickelin and Larry Bull},
journal={Proceedings of the Genetic and Evolutionary Computation Conference
(GECCO 2002), pp 263-270, New York, USA, 2002},
year={2008},
archivePrefix={arXiv},
eprint={0801.3550},
primaryClass={cs.NE cs.AI}
} | aickelin2008partnering |
arxiv-2432 | 0801.3581 | Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean Spanners | <|reference_start|>Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean Spanners: We show that for every $n$-point metric space $M$ there exists a spanning tree $T$ with unweighted diameter $O(\log n)$ and weight $\omega(T) = O(\log n) \cdot \omega(MST(M))$. Moreover, there is a designated point $rt$ such that for every point $v$, $dist_T(rt,v) \le (1+\epsilon) \cdot dist_M(rt,v)$, for an arbitrarily small constant $\epsilon > 0$. We extend this result, and provide a tradeoff between unweighted diameter and weight, and prove that this tradeoff is \emph{tight up to constant factors} in the entire range of parameters. These results enable us to settle a long-standing open question in Computational Geometry. In STOC'95 Arya et al. devised a construction of Euclidean Spanners with unweighted diameter $O(\log n)$ and weight $O(\log n) \cdot \omega(MST(M))$. Ten years later in SODA'05 Agarwal et al. showed that this result is tight up to a factor of $O(\log \log n)$. We close this gap and show that the result of Arya et al. is tight up to constant factors.<|reference_end|> | arxiv | @article{dinitz2008shallow,
title={Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean
Spanners},
author={Yefim Dinitz, Michael Elkin, Shay Solomon},
journal={arXiv preprint arXiv:0801.3581},
year={2008},
archivePrefix={arXiv},
eprint={0801.3581},
primaryClass={cs.CG cs.DS}
} | dinitz2008shallow
arxiv-2433 | 0801.3624 | Multiparty Communication Complexity of Disjointness | <|reference_start|>Multiparty Communication Complexity of Disjointness: We obtain a lower bound of n^Omega(1) on the k-party randomized communication complexity of the Disjointness function in the `Number on the Forehead' model of multiparty communication when k is a constant. For k=o(loglog n), the bounds remain super-polylogarithmic i.e. (log n)^omega(1). The previous best lower bound for three players until recently was Omega(log n). Our bound separates the communication complexity classes NP^{CC}_k and BPP^{CC}_k for k=o(loglog n). Furthermore, by the results of Beame, Pitassi and Segerlind \cite{BPS07}, our bound implies proof size lower bounds for tree-like, degree k-1 threshold systems and superpolynomial size lower bounds for Lovasz-Schrijver proofs. Sherstov \cite{She07b} recently developed a novel technique to obtain lower bounds on two-party communication using the approximate polynomial degree of boolean functions. We obtain our results by extending his technique to the multi-party setting using ideas from Chattopadhyay \cite{Cha07}. A similar bound for Disjointness has been recently and independently obtained by Lee and Shraibman.<|reference_end|> | arxiv | @article{chattopadhyay2008multiparty,
title={Multiparty Communication Complexity of Disjointness},
author={Arkadev Chattopadhyay and Anil Ada},
journal={arXiv preprint arXiv:0801.3624},
year={2008},
archivePrefix={arXiv},
eprint={0801.3624},
primaryClass={cs.CC}
} | chattopadhyay2008multiparty |
arxiv-2434 | 0801.3640 | Energy Efficiency in Multi-Hop CDMA Networks: a Game Theoretic Analysis Considering Operating Costs | <|reference_start|>Energy Efficiency in Multi-Hop CDMA Networks: a Game Theoretic Analysis Considering Operating Costs: A game-theoretic analysis is used to study the effects of receiver choice and transmit power on the energy efficiency of multi-hop networks in which the nodes communicate using Direct-Sequence Code Division Multiple Access (DS-CDMA). A Nash equilibrium of the game in which the network nodes can choose their receivers as well as their transmit powers to maximize the total number of bits they transmit per unit of energy spent (including both transmit and operating energy) is derived. The energy efficiencies resulting from the use of different linear multiuser receivers in this context are compared for the non-cooperative game. Significant gains in energy efficiency are observed when multiuser receivers, particularly the linear minimum mean-square error (MMSE) receiver, are used instead of conventional matched filter receivers.<|reference_end|> | arxiv | @article{betz2008energy,
title={Energy Efficiency in Multi-Hop CDMA Networks: a Game Theoretic Analysis
Considering Operating Costs},
author={Sharon Betz and H. Vincent Poor},
journal={arXiv preprint arXiv:0801.3640},
year={2008},
doi={10.1109/TSP.2008.929118},
archivePrefix={arXiv},
eprint={0801.3640},
primaryClass={cs.IT math.IT}
} | betz2008energy |
arxiv-2435 | 0801.3642 | Information Rates of Minimal Non-Matroid-Related Access Structures | <|reference_start|>Information Rates of Minimal Non-Matroid-Related Access Structures: In a secret sharing scheme, shares of a secret are distributed to participants in such a way that only certain predetermined sets of participants are qualified to reconstruct the secret. An access structure on a set of participants specifies which sets are to be qualified. The information rate of an access structure is a bound on how efficient a secret sharing scheme for that access structure can be. Marti-Farre and Padro showed that all access structures with information rate greater than two-thirds are matroid-related, and Stinson showed that four of the minor-minimal, non-matroid-related access structures have information rate exactly two-thirds. By a result of Seymour, there are infinitely many remaining minor-minimal, non-matroid-related access structures. In this paper we find the exact information rates for all such structures.<|reference_end|> | arxiv | @article{metcalf-burton2008information,
title={Information Rates of Minimal Non-Matroid-Related Access Structures},
author={Jessica Ruth Metcalf-Burton},
journal={arXiv preprint arXiv:0801.3642},
year={2008},
archivePrefix={arXiv},
eprint={0801.3642},
primaryClass={cs.CR math.CO}
} | metcalf-burton2008information |
arxiv-2436 | 0801.3654 | A path following algorithm for the graph matching problem | <|reference_start|>A path following algorithm for the graph matching problem: We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows us to easily integrate the information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four datasets: simulated graphs, QAPLib, retina vessel images and handwritten Chinese characters. In all cases, the results are competitive with the state-of-the-art.<|reference_end|> | arxiv | @article{zaslavskiy2008a,
title={A path following algorithm for the graph matching problem},
author={Mikhail Zaslavskiy, Francis Bach, and Jean-Philippe Vert},
journal={arXiv preprint arXiv:0801.3654},
year={2008},
archivePrefix={arXiv},
eprint={0801.3654},
primaryClass={cs.CV cs.DM}
} | zaslavskiy2008a |
arxiv-2437 | 0801.3669 | Merkle's Key Agreement Protocol is Optimal: An $O(n^2)$ Attack on any Key Agreement from Random Oracles | <|reference_start|>Merkle's Key Agreement Protocol is Optimal: An $O(n^2)$ Attack on any Key Agreement from Random Oracles: We prove that every key agreement protocol in the random oracle model in which the honest users make at most $n$ queries to the oracle can be broken by an adversary who makes $O(n^2)$ queries to the oracle. This improves on the previous $\widetilde{\Omega}(n^6)$ query attack given by Impagliazzo and Rudich (STOC '89) and resolves an open question posed by them. Our bound is optimal up to a constant factor since Merkle proposed a key agreement protocol in 1974 that can be easily implemented with $n$ queries to a random oracle and cannot be broken by any adversary who asks $o(n^2)$ queries.<|reference_end|> | arxiv | @article{barak2008merkle's,
title={Merkle's Key Agreement Protocol is Optimal: An $O(n^2)$ Attack on any
Key Agreement from Random Oracles},
author={Boaz Barak, Mohammad Mahmoody},
journal={arXiv preprint arXiv:0801.3669},
year={2008},
archivePrefix={arXiv},
eprint={0801.3669},
primaryClass={cs.CC}
} | barak2008merkle's |
arxiv-2438 | 0801.3678 | Regulation and the Integrity of Spreadsheets in the Information Supply Chain | <|reference_start|>Regulation and the Integrity of Spreadsheets in the Information Supply Chain: Spreadsheets provide many of the key links between information systems, closing the gap between business needs and the capability of central systems. Recent regulations have brought these vulnerable parts of information supply chains into focus. The risk they present to the organisation depends on the role that they fulfil, with generic differences between their use as modeling tools and as operational applications. Four sections of the Sarbanes-Oxley Act (SOX) are particularly relevant to the use of spreadsheets. Compliance with each of these sections is dependent on maintaining the integrity of those spreadsheets acting as operational applications. This can be achieved manually but at high cost. There are a range of commercially available off-the-shelf solutions that can reduce this cost. These may be divided into those that assist in the debugging of logic and more recently the arrival of solutions that monitor the change and user activity taking place in business-critical spreadsheets. ClusterSeven provides one of these monitoring solutions, highlighting areas of operational risk whilst also establishing a database of information to deliver new business intelligence.<|reference_end|> | arxiv | @article{baxter2008regulation,
title={Regulation and the Integrity of Spreadsheets in the Information Supply
Chain},
author={Ralph Baxter},
journal={Proc. European Spreadsheet Risks Int. Grp. 2005 95-101
ISBN:1-902724-16-X},
year={2008},
archivePrefix={arXiv},
eprint={0801.3678},
primaryClass={cs.CY cs.CR}
} | baxter2008regulation |
arxiv-2439 | 0801.3680 | Lower Bounds on Signatures from Symmetric Primitives | <|reference_start|>Lower Bounds on Signatures from Symmetric Primitives: We show that every construction of one-time signature schemes from a random oracle achieves black-box security at most $2^{(1+o(1))q}$, where $q$ is the total number of oracle queries asked by the key generation, signing, and verification algorithms. That is, any such scheme can be broken with probability close to $1$ by a (computationally unbounded) adversary making $2^{(1+o(1))q}$ queries to the oracle. This is tight up to a constant factor in the number of queries, since a simple modification of Lamport's one-time signatures (Lamport '79) achieves $2^{(0.812-o(1))q}$ black-box security using $q$ queries to the oracle. Our result extends (with a loss of a constant factor in the number of queries) also to the random permutation and ideal-cipher oracles. Since the symmetric primitives (e.g. block ciphers, hash functions, and message authentication codes) can be constructed by a constant number of queries to the mentioned oracles, as corollary we get lower bounds on the efficiency of signature schemes from symmetric primitives when the construction is black-box. This can be taken as evidence of an inherent efficiency gap between signature schemes and symmetric primitives.<|reference_end|> | arxiv | @article{barak2008lower,
title={Lower Bounds on Signatures from Symmetric Primitives},
author={Boaz Barak, Mohammad Mahmoody},
journal={arXiv preprint arXiv:0801.3680},
year={2008},
archivePrefix={arXiv},
eprint={0801.3680},
primaryClass={cs.CC cs.CR}
} | barak2008lower |
arxiv-2440 | 0801.3690 | Ensuring Spreadsheet Integrity with Model Master | <|reference_start|>Ensuring Spreadsheet Integrity with Model Master: We have developed the Model Master (MM) language for describing spreadsheets, and tools for converting MM programs to and from spreadsheets. The MM decompiler translates a spreadsheet into an MM program which gives a concise summary of its calculations, layout, and styling. This is valuable when trying to understand spreadsheets one has not seen before, and when checking for errors. The MM compiler goes the other way, translating an MM program into a spreadsheet. This makes possible a new style of development, in which spreadsheets are generated from textual specifications. This can reduce error rates compared to working directly with the raw spreadsheet, and gives important facilities for code reuse. MM programs also offer advantages over Excel files for the interchange of spreadsheets.<|reference_end|> | arxiv | @article{paine2008ensuring,
title={Ensuring Spreadsheet Integrity with Model Master},
author={Jocelyn Paine},
journal={Proc. European Spreadsheet Risks Int. Grp. 2001 17-38 ISBN:1 86166
179 7},
year={2008},
archivePrefix={arXiv},
eprint={0801.3690},
primaryClass={cs.PL cs.HC}
} | paine2008ensuring |
arxiv-2441 | 0801.3697 | The mathematics of Septoku | <|reference_start|>The mathematics of Septoku: Septoku is a Sudoku variant invented by Bruce Oberg, played on a hexagonal grid of 37 cells. We show that up to rotations, reflections, and symbol permutations, there are only six valid Septoku boards. In order to have a unique solution, we show that the minimum number of given values is six. We generalize the puzzle to other board shapes, and devise a puzzle on a star-shaped board with 73 cells with six givens which has a unique solution. We show how this puzzle relates to the unsolved Hadwiger-Nelson problem in combinatorial geometry.<|reference_end|> | arxiv | @article{bell2008the,
title={The mathematics of Septoku},
author={George I. Bell},
journal={arXiv preprint arXiv:0801.3697},
year={2008},
archivePrefix={arXiv},
eprint={0801.3697},
primaryClass={math.CO cs.DM math.GM}
} | bell2008the |
arxiv-2442 | 0801.3702 | Joint source and channel coding for MIMO systems: Is it better to be robust or quick? | <|reference_start|>Joint source and channel coding for MIMO systems: Is it better to be robust or quick?: We develop a framework to optimize the tradeoff between diversity, multiplexing, and delay in MIMO systems to minimize end-to-end distortion. We first focus on the diversity-multiplexing tradeoff in MIMO systems, and develop analytical results to minimize distortion of a vector quantizer concatenated with a space-time MIMO channel code. In the high SNR regime we obtain a closed-form expression for the end-to-end distortion as a function of the optimal point on the diversity-multiplexing tradeoff curve. For large but finite SNR we find this optimal point via convex optimization. We then consider MIMO systems using ARQ retransmission to provide additional diversity at the expense of delay. For sources without a delay constraint, distortion is minimized by maximizing the ARQ window size. This results in an ARQ-enhanced multiplexing-diversity tradeoff region, with distortion minimized over this region in the same manner as without ARQ. Under a source delay constraint the problem formulation changes to account for delay distortion associated with random message arrival and random ARQ completion times. We use a dynamic programming formulation to capture the channel diversity-multiplexing tradeoff at finite SNR as well as the random arrival and retransmission dynamics; we solve for the optimal multiplexing-diversity-delay tradeoff to minimize end-to-end distortion associated with the source encoder, channel, and ARQ retransmissions. Our results show that a delay-sensitive system should adapt its operating point on the diversity-multiplexing-delay tradeoff region to the system dynamics. We provide numerical results that demonstrate significant performance gains of this adaptive policy over a static allocation of diversity/multiplexing in the channel code and a static ARQ window size.<|reference_end|> | arxiv | @article{holliday2008joint,
title={Joint source and channel coding for MIMO systems: Is it better to be
robust or quick?},
author={Tim Holliday, Andrea J. Goldsmith, and H. Vincent Poor},
journal={IEEE Transactions on Information Theory, Vol. 54, No. 4, April
2008},
year={2008},
doi={10.1109/TIT.2008.917725},
archivePrefix={arXiv},
eprint={0801.3702},
primaryClass={cs.IT math.IT}
} | holliday2008joint |
arxiv-2443 | 0801.3703 | On minimality of convolutional ring encoders | <|reference_start|>On minimality of convolutional ring encoders: Convolutional codes are considered with code sequences modelled as semi-infinite Laurent series. It is well known that a convolutional code C over a finite group G has a minimal trellis representation that can be derived from code sequences. It is also well known that, for the case that G is a finite field, any polynomial encoder of C can be algebraically manipulated to yield a minimal polynomial encoder whose controller canonical realization is a minimal trellis. In this paper we seek to extend this result to the finite ring case G = Z_{p^r} by introducing a so-called "p-encoder". We show how to manipulate a polynomial encoding of a noncatastrophic convolutional code over Z_{p^r} to produce a particular type of p-encoder ("minimal p-encoder") whose controller canonical realization is a minimal trellis with nonlinear features. The minimum number of trellis states is then expressed as p^gamma, where gamma is the sum of the row degrees of the minimal p-encoder. In particular, we show that any convolutional code over Z_{p^r} admits a delay-free p-encoder which implies the novel result that delay-freeness is not a property of the code but of the encoder, just as in the field case. We conjecture that a similar result holds with respect to catastrophicity, i.e., any catastrophic convolutional code over Z_{p^r} admits a noncatastrophic p-encoder.<|reference_end|> | arxiv | @article{kuijper2008on,
title={On minimality of convolutional ring encoders},
author={Margreta Kuijper and Raquel Pinto},
journal={IEEE Trans. Information Theory, Vol. 55, No. 11, pp. 4890-4897,
November 2009},
year={2008},
archivePrefix={arXiv},
eprint={0801.3703},
primaryClass={cs.IT math.IT}
} | kuijper2008on |
arxiv-2444 | 0801.3710 | Picking up the Pieces: Self-Healing in Reconfigurable Networks | <|reference_start|>Picking up the Pieces: Self-Healing in Reconfigurable Networks: We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower-bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs that show that DASH performs well in practice, and that it significantly outperforms naive algorithms in reducing maximum degree increase. We also present empirical results on performance of our algorithms and a new heuristic with regard to stretch (increase in shortest path lengths).<|reference_end|> | arxiv | @article{saia2008picking,
title={Picking up the Pieces: Self-Healing in Reconfigurable Networks},
author={Jared Saia, Amitabh Trehan},
journal={arXiv preprint arXiv:0801.3710},
year={2008},
doi={10.1109/IPDPS.2008.4536326},
archivePrefix={arXiv},
eprint={0801.3710},
primaryClass={cs.DS cs.DC cs.NI}
} | saia2008picking |
arxiv-2445 | 0801.3711 | 3D-Ultrasound probe calibration for computer-guided diagnosis and therapy | <|reference_start|>3D-Ultrasound probe calibration for computer-guided diagnosis and therapy: With the emergence of swept-volume ultrasound (US) probes, precise and almost real-time US volume imaging has become available. This offers many new opportunities for computer guided diagnosis and therapy, 3-D images containing significantly more information than 2-D slices. However, computer guidance often requires knowledge about the exact position of US voxels relative to a tracking reference, which can only be achieved through probe calibration. In this paper we present a 3-D US probe calibration system based on a membrane phantom. The calibration matrix is retrieved by detection of a membrane plane in a dozen US acquisitions of the phantom. Plane detection is robustly performed with the 2-D Hough transformation. The feature extraction process is fully automated, calibration requires about 20 minutes and the calibration system can be used in a clinical context. The precision of the system was evaluated to a root mean square (RMS) distance error of 1.15mm and to an RMS angular error of 0.61 degrees. The point reconstruction accuracy was evaluated to 0.9mm and the angular reconstruction accuracy to 1.79 degrees.<|reference_end|> | arxiv | @article{baumann20083d-ultrasound,
title={3D-Ultrasound probe calibration for computer-guided diagnosis and
therapy},
author={Michael Baumann (TIMC), Vincent Daanen (TIMC), Antoine Leroy (TIMC),
Jocelyne Troccaz (TIMC)},
journal={In Proceedings of CVAMIA'06 - 2nd International Workshop on Computer Vision Approaches to Medical Image Analysis, Graz, Austria (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0801.3711},
primaryClass={cs.OH}
} | baumann20083d-ultrasound |
arxiv-2446 | 0801.3714 | 5-cycles and the Petersen graph | <|reference_start|>5-cycles and the Petersen graph: We show that if G is a connected bridgeless cubic graph whose every 2-factor is comprised of cycles of length five then G is the Petersen graph.<|reference_end|> | arxiv | @article{devos20085-cycles,
title={5-cycles and the Petersen graph},
author={Matt DeVos, Vahan V. Mkrtchyan, Samvel S. Petrosyan},
journal={arXiv preprint arXiv:0801.3714},
year={2008},
archivePrefix={arXiv},
eprint={0801.3714},
primaryClass={cs.DM}
} | devos20085-cycles |
arxiv-2447 | 0801.3715 | Modular Compilation of a Synchronous Language | <|reference_start|>Modular Compilation of a Synchronous Language: Synchronous languages rely on formal methods to ease the development of applications in an efficient and reusable way. Formal methods have been advocated as a means of increasing the reliability of systems, especially those which are safety or business critical. It is still difficult to develop automatic specification and verification tools due to limitations like state explosion, undecidability, etc. In this work, we design a new specification model based on a reactive synchronous approach. Then, we benefit from a formal framework well suited to perform compilation and formal validation of systems. In practice, we design and implement a special purpose language (LE) and its two semantics: the behavioral semantics helps us to define a program by the set of its behaviors and avoid ambiguousness in programs' interpretation; the execution equational semantics allows the modular compilation of programs into software and hardware targets (C code, VHDL code, FPGA synthesis, observers). Our approach is pertinent considering the two main requirements of critical realistic applications: the modular compilation allows us to deal with large systems, the model-based approach provides us with formal validation.<|reference_end|> | arxiv | @article{ressouche2008modular,
title={Modular Compilation of a Synchronous Language},
author={Annie Ressouche, Daniel Gaff\'e (LEAT), Val\'erie Roy},
journal={arXiv preprint arXiv:0801.3715},
year={2008},
archivePrefix={arXiv},
eprint={0801.3715},
primaryClass={cs.PL cs.LO}
} | ressouche2008modular |
arxiv-2448 | 0801.3773 | Graph-Based Classification of Self-Dual Additive Codes over Finite Fields | <|reference_start|>Graph-Based Classification of Self-Dual Additive Codes over Finite Fields: Quantum stabilizer states over GF(m) can be represented as self-dual additive codes over GF(m^2). These codes can be represented as weighted graphs, and orbits of graphs under the generalized local complementation operation correspond to equivalence classes of codes. We have previously used this fact to classify self-dual additive codes over GF(4). In this paper we classify self-dual additive codes over GF(9), GF(16), and GF(25). Assuming that the classical MDS conjecture holds, we are able to classify all self-dual additive MDS codes over GF(9) by using an extension technique. We prove that the minimum distance of a self-dual additive code is related to the minimum vertex degree in the associated graph orbit. Circulant graph codes are introduced, and a computer search reveals that this set contains many strong codes. We show that some of these codes have highly regular graph representations.<|reference_end|> | arxiv | @article{danielsen2008graph-based,
title={Graph-Based Classification of Self-Dual Additive Codes over Finite
Fields},
author={Lars Eirik Danielsen},
journal={Adv. Math. Commun. 3(4), pp. 329-348, 2009},
year={2008},
doi={10.3934/amc.2009.3.329},
archivePrefix={arXiv},
eprint={0801.3773},
primaryClass={cs.IT math.CO math.IT quant-ph}
} | danielsen2008graph-based |
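
The orbit computation behind the classification above is driven by local complementation. Below is a minimal Python sketch of the plain GF(4) case (complementing the neighborhood of a vertex; the paper's generalized operation for GF(m^2) additionally reweights edges), with the graph stored as a dict of neighbor sets:

    from itertools import combinations

    def local_complement(adj, v):
        # Return a copy of the graph with the neighborhood of v complemented:
        # every edge between two neighbors of v is toggled.
        new = {u: set(nbrs) for u, nbrs in adj.items()}
        for u, w in combinations(sorted(adj[v]), 2):
            if w in new[u]:
                new[u].discard(w); new[w].discard(u)
            else:
                new[u].add(w); new[w].add(u)
        return new

    def lc_orbit(adj):
        # Breadth-first closure of a labeled graph under local complementation.
        key = lambda a: tuple(sorted((u, tuple(sorted(nb))) for u, nb in a.items()))
        seen, queue = {key(adj)}, [adj]
        while queue:
            g = queue.pop()
            for v in g:
                h = local_complement(g, v)
                if key(h) not in seen:
                    seen.add(key(h)); queue.append(h)
        return seen

    c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}  # the 5-cycle
    print(len(lc_orbit(c5)))

Enumerating such orbits, with graph isomorphism folded in, is how equivalence classes of codes are counted in the classification.
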
arxiv-2449 | 0801.3790 | Characterization of the Vertices and Extreme Directions of the Negative Cycles Polyhedron and Hardness of Generating Vertices of 0/1-Polyhedra | <|reference_start|>Characterization of the Vertices and Extreme Directions of the Negative Cycles Polyhedron and Hardness of Generating Vertices of 0/1-Polyhedra: Given a graph $G=(V,E)$ and a weight function on the edges $w:E\mapsto\mathbb{R}$, we consider the polyhedron $P(G,w)$ of negative-weight flows on $G$, and get a complete characterization of the vertices and extreme directions of $P(G,w)$. As a corollary, we show that, unless $P=NP$, there is no output polynomial-time algorithm to generate all the vertices of a 0/1-polyhedron. This strengthens the NP-hardness result of Khachiyan et al. (2006) for non 0/1-polyhedra, and comes in contrast with the polynomiality of vertex enumeration for 0/1-polytopes [Bussieck and L\"ubbecke (1998)].<|reference_end|> | arxiv | @article{boros2008characterization,
title={Characterization of the Vertices and Extreme Directions of the Negative
Cycles Polyhedron and Hardness of Generating Vertices of 0/1-Polyhedra},
author={Endre Boros, Khaled Elbassioni, Vladimir Gurvich, Hans Raj Tiwary},
journal={arXiv preprint arXiv:0801.3790},
year={2008},
archivePrefix={arXiv},
eprint={0801.3790},
primaryClass={cs.CC cs.DM}
} | boros2008characterization |
arxiv-2450 | 0801.3802 | Dichotomy Results for Fixed-Point Existence Problems for Boolean Dynamical Systems | <|reference_start|>Dichotomy Results for Fixed-Point Existence Problems for Boolean Dynamical Systems: A complete classification of the computational complexity of the fixed-point existence problem for boolean dynamical systems, i.e., finite discrete dynamical systems over the domain {0, 1}, is presented. For function classes F and graph classes G, an (F, G)-system is a boolean dynamical system such that all local transition functions lie in F and the underlying graph lies in G. Let F be a class of boolean functions which is closed under composition and let G be a class of graphs which is closed under taking minors. The following dichotomy theorems are shown: (1) If F contains the self-dual functions and G contains the planar graphs then the fixed-point existence problem for (F, G)-systems with local transition function given by truth-tables is NP-complete; otherwise, it is decidable in polynomial time. (2) If F contains the self-dual functions and G contains the graphs having vertex covers of size one then the fixed-point existence problem for (F, G)-systems with local transition function given by formulas or circuits is NP-complete; otherwise, it is decidable in polynomial time.<|reference_end|> | arxiv | @article{kosub2008dichotomy,
title={Dichotomy Results for Fixed-Point Existence Problems for Boolean
Dynamical Systems},
author={Sven Kosub},
journal={Mathematics in Computer Science, 1(3):487-505, 2008, special issue
on Modeling and Analysis of Complex Systems},
year={2008},
number={TUM-I0701, Institut fuer Informatik, Technische Universitaet
Muenchen},
archivePrefix={arXiv},
eprint={0801.3802},
primaryClass={cs.CC cond-mat.dis-nn cs.DM nlin.AO nlin.CG}
} | kosub2008dichotomy |
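
For intuition, the fixed-point existence problem asks whether some global state x satisfies F(x) = x; for a handful of nodes it can be settled by brute force (the dichotomy above concerns the asymptotic complexity of exactly this question). A small Python sketch with a hypothetical three-node system:

    from itertools import product

    def fixed_points(n, neighbors, local_fns):
        # Enumerate all global states x in {0,1}^n with F(x) = x, where node i
        # updates via local_fns[i] applied to the values of neighbors[i].
        fps = []
        for x in product((0, 1), repeat=n):
            y = tuple(local_fns[i](tuple(x[j] for j in neighbors[i]))
                      for i in range(n))
            if y == x:
                fps.append(x)
        return fps

    # Hypothetical example: a directed 3-cycle of inverters. Negation is a
    # self-dual function, the function class driving the hardness side above.
    nbrs = [(2,), (0,), (1,)]
    fns = [lambda v: 1 - v[0]] * 3
    print(fixed_points(3, nbrs, fns))  # [] -- an odd inverter ring has no fixed point
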
arxiv-2451 | 0801.3817 | Robustness Evaluation of Two CCG, a PCFG and a Link Grammar Parsers | <|reference_start|>Robustness Evaluation of Two CCG, a PCFG and a Link Grammar Parsers: Robustness in a parser refers to an ability to deal with exceptional phenomena. A parser is robust if it deals with phenomena outside its normal range of inputs. This paper reports on a series of robustness evaluations of state-of-the-art parsers in which we concentrated on one aspect of robustness: its ability to parse sentences containing misspelled words. We propose two measures for robustness evaluation based on a comparison of a parser's output for grammatical input sentences and their noisy counterparts. In this paper, we use these measures to compare the overall robustness of the four evaluated parsers, and we present an analysis of the decline in parser performance with increasing error levels. Our results indicate that performance typically declines tens of percentage units when parsers are presented with texts containing misspellings. When it was tested on our purpose-built test set of 443 sentences, the best parser in the experiment (C&C parser) was able to return exactly the same parse tree for the grammatical and ungrammatical sentences for 60.8%, 34.0% and 14.9% of the sentences with one, two or three misspelled words respectively.<|reference_end|> | arxiv | @article{kakkonen2008robustness,
title={Robustness Evaluation of Two CCG, a PCFG and a Link Grammar Parsers},
author={Tuomo Kakkonen},
journal={Proceedings of the 3rd Language & Technology Conference: Human
Language Technologies as a Challenge for Computer Science and Linguistics.
Poznan, Poland, 2007},
year={2008},
archivePrefix={arXiv},
eprint={0801.3817},
primaryClass={cs.CL}
} | kakkonen2008robustness |
arxiv-2452 | 0801.3837 | Universal Fingerprinting: Capacity and Random-Coding Exponents | <|reference_start|>Universal Fingerprinting: Capacity and Random-Coding Exponents: This paper studies fingerprinting (traitor tracing) games in which the number of colluders and the collusion channel are unknown. The fingerprints are embedded into host sequences representing signals to be protected and provide the receiver with the capability to trace back pirated copies to the colluders. The colluders and the fingerprint embedder are subject to signal fidelity constraints. Our problem setup unifies the signal-distortion and Boneh-Shaw formulations of fingerprinting. The fundamental tradeoffs between fingerprint codelength, number of users, number of colluders, fidelity constraints, and decoding reliability are then determined. Several bounds on fingerprinting capacity have been presented in recent literature. This paper derives exact capacity formulas and presents a new randomized fingerprinting scheme with the following properties: (1) the encoder and receiver assume a nominal coalition size but do not need to know the actual coalition size and the collusion channel; (2) a tunable parameter $\Delta$ trades off false-positive and false-negative error exponents; (3) the receiver provides a reliability metric for its decision; and (4) the scheme is capacity-achieving when the false-positive exponent $\Delta$ tends to zero and the nominal coalition size coincides with the actual coalition size. A fundamental component of the new scheme is the use of a "time-sharing" randomized sequence. The decoder is a maximum penalized mutual information decoder, where the significance of each candidate coalition is assessed relative to a threshold, and the penalty is proportional to the coalition size. A much simpler "threshold decoder" that satisfies properties (1)-(3) above but not (4) is also given.<|reference_end|> | arxiv | @article{moulin2008universal,
title={Universal Fingerprinting: Capacity and Random-Coding Exponents},
author={Pierre Moulin},
journal={arXiv preprint arXiv:0801.3837},
year={2008},
archivePrefix={arXiv},
eprint={0801.3837},
primaryClass={cs.IT math.IT}
} | moulin2008universal |
arxiv-2453 | 0801.3841 | Analysis of Prime Reciprocal Sequences in Base 10 | <|reference_start|>Analysis of Prime Reciprocal Sequences in Base 10: Prime reciprocals have applications in coding and cryptography and for generation of random sequences. This paper investigates the structural redundancy of prime reciprocals in base 10 in a manner that parallels an earlier study for binary prime reciprocals. Several different kinds of structural relationships amongst the digits in reciprocal sequences are classified with respect to the digit in the least significant place of the prime. It is also shown that the frequency of digit 0 exceeds that of every other digit when the entire set of prime reciprocal sequences is considered.<|reference_end|> | arxiv | @article{gangasani2008analysis,
title={Analysis of Prime Reciprocal Sequences in Base 10},
author={Sumanth Kumar Reddy Gangasani},
journal={arXiv preprint arXiv:0801.3841},
year={2008},
archivePrefix={arXiv},
eprint={0801.3841},
primaryClass={cs.CR}
} | gangasani2008analysis |
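
The digit statistics discussed above come straight out of long division; a minimal Python sketch generates the repeating period of 1/p in base 10 and tallies digit frequencies (the primes below are chosen for illustration):

    from collections import Counter

    def reciprocal_period(p, base=10):
        # Digits of the repeating period of 1/p (p prime, p coprime to base).
        digits, r = [], 1
        while True:
            r *= base
            digits.append(r // p)
            r %= p
            if r == 1:  # remainder returned to its start: period complete
                return digits

    for p in (7, 17, 19, 23):
        d = reciprocal_period(p)
        print(p, ''.join(map(str, d)), Counter(d).most_common(3))

Tabulating such counters over many primes is one way to observe the reported excess of the digit 0 across the full set of reciprocal sequences.
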
arxiv-2454 | 0801.3853 | Comparison of Spreadsheets with other Development Tools (limitations, solutions, workarounds and alternatives) | <|reference_start|>Comparison of Spreadsheets with other Development Tools (limitations, solutions, workarounds and alternatives): The spreadsheet paradigm has some unique risks and challenges that are not present in more traditional development technologies. Many of the recent advances in other branches of software development have bypassed spreadsheets and spreadsheet developers. This paper compares spreadsheets and spreadsheet development to more traditional platforms such as databases and procedural languages. It also considers the fundamental danger introduced in the transition from paper spreadsheets to electronic. Suggestions are made to manage the risks and work around the limitations.<|reference_end|> | arxiv | @article{murphy2008comparison,
title={Comparison of Spreadsheets with other Development Tools (limitations,
solutions, workarounds and alternatives)},
author={Simon Murphy},
journal={Proc. European Spreadsheet Risks Int. Grp. 2005, pp. 201-208, ISBN 1-902724-16-X},
year={2008},
archivePrefix={arXiv},
eprint={0801.3853},
primaryClass={cs.SE cs.CY}
} | murphy2008comparison |
arxiv-2455 | 0801.3864 | Between conjecture and memento: shaping a collective emotional perception of the future | <|reference_start|>Between conjecture and memento: shaping a collective emotional perception of the future: Large scale surveys of public mood are costly and often impractical to perform. However, the web is awash with material indicative of public mood such as blogs, emails, and web queries. Inexpensive content analysis on such extensive corpora can be used to assess public mood fluctuations. The work presented here is concerned with the analysis of the public mood towards the future. Using an extension of the Profile of Mood States questionnaire, we have extracted mood indicators from 10,741 emails submitted in 2006 to futureme.org, a web service that allows its users to send themselves emails to be delivered at a later date. Our results indicate long-term optimism toward the future, but medium-term apprehension and confusion.<|reference_end|> | arxiv | @article{pepe2008between,
title={Between conjecture and memento: shaping a collective emotional
perception of the future},
author={Alberto Pepe and Johan Bollen},
journal={arXiv preprint arXiv:0801.3864},
year={2008},
archivePrefix={arXiv},
eprint={0801.3864},
primaryClass={cs.CL cs.GL}
} | pepe2008between |
arxiv-2456 | 0801.3871 | On the Scaling Window of Model RB | <|reference_start|>On the Scaling Window of Model RB: This paper analyzes the scaling window of a random CSP model (i.e. model RB) for which we can identify the threshold points exactly, denoted by $r_{cr}$ or $p_{cr}$. For this model, we establish the scaling window $W(n,\delta)=(r_{-}(n,\delta), r_{+}(n,\delta))$ such that the probability of a random instance being satisfiable is greater than $1-\delta$ for $r<r_{-}(n,\delta)$ and is less than $\delta$ for $r>r_{+}(n,\delta)$. Specifically, we obtain the following result $$W(n,\delta)=(r_{cr}-\Theta(\frac{1}{n^{1-\epsilon}\ln n}), \ r_{cr}+\Theta(\frac{1}{n\ln n})),$$ where $0\leq\epsilon<1$ is a constant. A similar result with respect to the other parameter $p$ is also obtained. Since the instances generated by model RB have been shown to be hard at the threshold, this is the first attempt, as far as we know, to analyze the scaling window of such a model with hard instances.<|reference_end|> | arxiv | @article{zhao2008on,
title={On the Scaling Window of Model RB},
author={Chunyan Zhao, Ke Xu, Zhiming Zheng},
journal={arXiv preprint arXiv:0801.3871},
year={2008},
archivePrefix={arXiv},
eprint={0801.3871},
primaryClass={cs.CC cond-mat.stat-mech cs.AI}
} | zhao2008on |
arxiv-2457 | 0801.3875 | Towards a Real-Time Data Driven Wildland Fire Model | <|reference_start|>Towards a Real-Time Data Driven Wildland Fire Model: A wildland fire model based on semi-empirical relations for the spread rate of a surface fire and post-frontal heat release is coupled with the Weather Research and Forecasting atmospheric model (WRF). The propagation of the fire front is implemented by a level set method. Data is assimilated by a morphing ensemble Kalman filter, which provides amplitude as well as position corrections. Thermal images of a fire will provide the observations and will be compared to a synthetic image from the model state.<|reference_end|> | arxiv | @article{mandel2008towards,
title={Towards a Real-Time Data Driven Wildland Fire Model},
author={Jan Mandel, Jonathan D. Beezley, Soham Chakraborty, Janice L. Coen,
Craig C. Douglas, Anthony Vodacek, Zhen Wang},
journal={IEEE International Symposium on Parallel and Distributed
Processing, 2008 (IPDPS 2008), pp. 1-5},
year={2008},
doi={10.1109/IPDPS.2008.4536414},
number={UCD CCM Report 265},
archivePrefix={arXiv},
eprint={0801.3875},
primaryClass={physics.ao-ph cs.CE}
} | mandel2008towards |
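
The level-set ingredient mentioned above advances the fire front as the zero set of a function psi obeying psi_t + R|grad psi| = 0. A minimal first-order Python sketch on a uniform grid with a constant spread rate R (Godunov upwinding; in the actual model R is coupled to WRF winds and fuel, and the grid handling is far richer):

    import numpy as np

    def level_set_step(psi, R, dx, dt):
        # One explicit step of psi_t + R * |grad psi| = 0 for R >= 0.
        dxm = (psi - np.roll(psi, 1, 0)) / dx   # backward difference, x
        dxp = (np.roll(psi, -1, 0) - psi) / dx  # forward difference, x
        dym = (psi - np.roll(psi, 1, 1)) / dx
        dyp = (np.roll(psi, -1, 1) - psi) / dx
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        return psi - dt * R * grad

    n, dx = 128, 1.0
    y, x = np.mgrid[0:n, 0:n] * dx
    psi = np.sqrt((x - 64) ** 2 + (y - 64) ** 2) - 5.0  # ignition: small circle
    for _ in range(100):
        psi = level_set_step(psi, R=0.5, dx=dx, dt=0.5)
    print(int((psi < 0).sum()))  # burned cells: the front has expanded outward
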
arxiv-2458 | 0801.3878 | Hash Property and Coding Theorems for Sparse Matrices and Maximum-Likelihood Coding | <|reference_start|>Hash Property and Coding Theorems for Sparse Matrices and Maximum-Likelihood Coding: The aim of this paper is to prove the achievability of several coding problems by using sparse matrices (the maximum column weight grows logarithmically in the block length) and maximum-likelihood (ML) coding. These problems are the Slepian-Wolf problem, the Gel'fand-Pinsker problem, the Wyner-Ziv problem, and the One-helps-one problem (source coding with partial side information at the decoder). To this end, the notion of a hash property for an ensemble of functions is introduced and it is proved that an ensemble of $q$-ary sparse matrices satisfies the hash property. Based on this property, it is proved that codes using sparse matrices and maximum-likelihood (ML) coding can achieve the optimal rate.<|reference_end|> | arxiv | @article{muramatsu2008hash,
title={Hash Property and Coding Theorems for Sparse Matrices and
Maximum-Likelihood Coding},
author={Jun Muramatsu and Shigeki Miyake},
journal={IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2143-2167, May 2010; Corrections: vol. 56, no. 9, p. 4762, Sep. 2010},
year={2008},
archivePrefix={arXiv},
eprint={0801.3878},
primaryClass={cs.IT math.IT}
} | muramatsu2008hash |
arxiv-2459 | 0801.3880 | Spectral efficiency and optimal medium access control of random access systems over large random spreading CDMA | <|reference_start|>Spectral efficiency and optimal medium access control of random access systems over large random spreading CDMA: This paper analyzes the spectral efficiency as a function of medium access control (MAC) for large random spreading CDMA random access systems that employ a linear receiver. It is shown that located at higher than the physical layer, MAC along with spreading and power allocation can effectively perform spectral efficiency maximization and near-far mitigation.<|reference_end|> | arxiv | @article{sun2008spectral,
title={Spectral efficiency and optimal medium access control of random access
systems over large random spreading CDMA},
author={Yi Sun},
journal={arXiv preprint arXiv:0801.3880},
year={2008},
doi={10.1109/TCOMM.2009.05.07044},
archivePrefix={arXiv},
eprint={0801.3880},
primaryClass={cs.IT math.IT}
} | sun2008spectral |
arxiv-2460 | 0801.3908 | Encoding changing country codes for the Semantic Web with ISO 3166 and SKOS | <|reference_start|>Encoding changing country codes for the Semantic Web with ISO 3166 and SKOS: This paper shows how authority files can be encoded for the Semantic Web with the Simple Knowledge Organisation System (SKOS). In particular the application of SKOS for encoding the structure, management, and utilization of country codes as defined in ISO 3166 is demonstrated. The proposed encoding gives a use case for SKOS that includes features that have only been discussed little so far, such as multiple notations, nested concept schemes, changes by versioning.<|reference_end|> | arxiv | @article{voss2008encoding,
title={Encoding changing country codes for the Semantic Web with ISO 3166 and
SKOS},
author={Jakob Voss},
journal={arXiv preprint arXiv:0801.3908},
year={2008},
archivePrefix={arXiv},
eprint={0801.3908},
primaryClass={cs.IR}
} | voss2008encoding |
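
To make the encoding concrete, here is a minimal rdflib sketch in Python: two ISO 3166 codes as SKOS concepts, one current with multiple notations and one withdrawn (the base URI and the history-note modeling are illustrative assumptions, not the paper's exact scheme):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    ISO = Namespace("http://example.org/iso3166/")  # illustrative base URI
    g = Graph()
    g.bind("skos", SKOS)

    g.add((ISO.scheme, RDF.type, SKOS.ConceptScheme))

    g.add((ISO.DE, RDF.type, SKOS.Concept))
    g.add((ISO.DE, SKOS.inScheme, ISO.scheme))
    g.add((ISO.DE, SKOS.notation, Literal("DE")))   # alpha-2 code
    g.add((ISO.DE, SKOS.notation, Literal("DEU")))  # alpha-3 code: multiple notations
    g.add((ISO.DE, SKOS.prefLabel, Literal("Germany", lang="en")))

    # A code that changed over time, kept as a withdrawn concept.
    g.add((ISO.DD, RDF.type, SKOS.Concept))
    g.add((ISO.DD, SKOS.inScheme, ISO.scheme))
    g.add((ISO.DD, SKOS.notation, Literal("DD")))
    g.add((ISO.DD, SKOS.prefLabel, Literal("German Democratic Republic", lang="en")))
    g.add((ISO.DD, SKOS.historyNote, Literal("Withdrawn from ISO 3166-1 in 1990")))

    print(g.serialize(format="turtle"))
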
arxiv-2461 | 0801.3912 | On the Continuity Set of an omega Rational Function | <|reference_start|>On the Continuity Set of an omega Rational Function: In this paper, we study the continuity of rational functions realized by B\"uchi finite state transducers. It has been shown by Prieur that it can be decided whether such a function is continuous. We prove here that surprisingly, it cannot be decided whether such a function F has at least one point of continuity and that its continuity set C(F) cannot be computed. In the case of a synchronous rational function, we show that its continuity set is rational and that it can be computed. Furthermore we prove that any rational Pi^0_2-subset of X^omega for some alphabet X is the continuity set C(F) of an omega-rational synchronous function F defined on X^omega.<|reference_end|> | arxiv | @article{carton2008on,
title={On the Continuity Set of an omega Rational Function},
author={Olivier Carton (LIAFA), Olivier Finkel (LIP), Pierre Simonnet (SPE)},
journal={Theoretical Informatics and Applications (1), 42 (2008) 183-196},
year={2008},
archivePrefix={arXiv},
eprint={0801.3912},
primaryClass={cs.CC cs.LO}
} | carton2008on |
arxiv-2462 | 0801.3924 | Increased security through open source | <|reference_start|>Increased security through open source: In this paper we discuss the impact of open source on both the security and transparency of a software system. We focus on the more technical aspects of this issue, combining and extending arguments developed over the years. We stress that our discussion of the problem only applies to software for general purpose computing systems. For embedded systems, where the software usually cannot easily be patched or upgraded, different considerations may apply.<|reference_end|> | arxiv | @article{hoepman2008increased,
title={Increased security through open source},
author={Jaap-Henk Hoepman, Bart Jacobs},
journal={Communications of the ACM, 50(1):79-83, 2007},
year={2008},
archivePrefix={arXiv},
eprint={0801.3924},
primaryClass={cs.CR cs.CY cs.SE}
} | hoepman2008increased |
arxiv-2463 | 0801.3926 | On the Weight Distribution of the Extended Quadratic Residue Code of Prime 137 | <|reference_start|>On the Weight Distribution of the Extended Quadratic Residue Code of Prime 137: The Hamming weight enumerator function of the formally self-dual even, binary extended quadratic residue code of prime p = 8m + 1 is given by Gleason's theorem for singly-even codes. Using this theorem, the Hamming weight distribution of the extended quadratic residue code is completely determined once the numbers of codewords of Hamming weight j, denoted A_j, for 0 <= j <= 2m, are known. The smallest prime for which the Hamming weight distribution of the corresponding extended quadratic residue code is unknown is 137. It is shown in this paper that, for p=137, A_2m = A_34 may be obtained without the need for exhaustive codeword enumeration. After the remaining A_j required by Gleason's theorem are computed and independently verified using their congruences, the Hamming weight distributions of the binary augmented and extended quadratic residue codes of prime 137 are derived.<|reference_end|> | arxiv | @article{tjhai2008on,
title={On the Weight Distribution of the Extended Quadratic Residue Code of
Prime 137},
author={C. Tjhai, M. Tomlinson, M. Ambroze and M. Ahmed},
journal={arXiv preprint arXiv:0801.3926},
year={2008},
archivePrefix={arXiv},
eprint={0801.3926},
primaryClass={cs.IT cs.DM math.IT}
} | tjhai2008on |
arxiv-2464 | 0801.3930 | Crossing Borders: Security and Privacy Issues of the European e-Passport | <|reference_start|>Crossing Borders: Security and Privacy Issues of the European e-Passport: The first generation of European e-passports will be issued in 2006. We discuss how borders are crossed regarding the security and privacy erosion of the proposed schemes, and show which borders need to be crossed to improve the security and the privacy protection of the next generation of e-passports. In particular we discuss attacks on Basic Access Control due to the low entropy of the data from which the access keys are derived, we sketch the European proposals for Extended Access Control and the weaknesses in that scheme, and show how fundamentally different design decisions can make e-passports more secure.<|reference_end|> | arxiv | @article{hoepman2008crossing,
title={Crossing Borders: Security and Privacy Issues of the European e-Passport},
author={Jaap-Henk Hoepman, Engelbert Hubbers, Bart Jacobs, Martijn Oostdijk,
Ronny Wichers Schreur},
journal={1st Int. Workshop on Security, LNCS 4266, pages 152-167, Kyoto,
Japan, October 23-24 2006},
year={2008},
archivePrefix={arXiv},
eprint={0801.3930},
primaryClass={cs.CR cs.CY}
} | hoepman2008crossing |
arxiv-2465 | 0801.3965 | Framework for 3D TransRectal Ultrasound | <|reference_start|>Framework for 3D TransRectal Ultrasound: Prostate biopsies are mainly performed under 2D TransRectal UltraSound (TRUS) control by sampling the prostate according to a predefined pattern. In case of first biopsies, this pattern follows a random systematic plan. Sometimes, repeat biopsies can be needed to target regions unsampled by previous biopsies or resample critical regions (for example in case of cancer expectant management or previous prostatic intraepithelial neoplasia findings). From a clinical point of view, it could be useful to control the 3D spatial distribution of these biopsies inside the prostate. Modern 3D-TRUS probes allow acquiring high-quality volumes of the prostate in a few seconds. We developed a framework to track the prostate in 3D TRUS images. This means that if one acquires a reference volume at the beginning of the session and another during each biopsy, it is possible to determine the relationship between the prostate in the reference and the other volumes by aligning images. We used this tool to evaluate the ability of a single operator (a young urologist assistant professor) to perform a pattern of 12 biopsies under 2D TRUS guidance.<|reference_end|> | arxiv | @article{mozer2008framework,
title={Framework for 3D TransRectal Ultrasound},
author={Pierre Mozer (TIMC), Michael Baumann (TIMC), G. Chevreau (TIMC),
Vincent Daanen (TIMC), Alexandre Moreau-Gaudry (TIMC, CHU-Grenoble CIC),
Jocelyne Troccaz (TIMC)},
journal={Johns Hopkins University "Prostate Day", Baltimore, United States (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0801.3965},
primaryClass={cs.OH}
} | mozer2008framework |
arxiv-2466 | 0801.3971 | A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem | <|reference_start|>A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem: A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs (genetic algorithms) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated, i.e. in our case, a new rule string is obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.<|reference_end|> | arxiv | @article{li2008a,
title={A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem},
author={Jingpeng Li and Uwe Aickelin},
journal={Proceedings of the IEEE Congress on Evolutionary Computation (CEC
2003), pp 2149-2156, Canberra, Australia, 2003},
year={2008},
archivePrefix={arXiv},
eprint={0801.3971},
primaryClass={cs.NE cs.CE}
} | li2008a |
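
The estimate-and-sample loop in the abstract can be sketched compactly if the network structure is fixed to a chain over the nurses' rule choices (the real algorithm also learns the structure; the fitness function and sizes below are placeholders):

    import random

    N_NURSES, N_RULES = 10, 4            # placeholder problem size

    def fitness(rules):                  # placeholder objective
        return -sum(rules)

    def learn_chain_cpts(population):
        # Estimate P(r_0) and P(r_i | r_{i-1}) from the promising half of the
        # population, with add-one smoothing (chain-structured Bayesian network).
        top = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
        marg = [1 + sum(s[0] == r for s in top) for r in range(N_RULES)]
        cond = [[[1] * N_RULES for _ in range(N_RULES)] for _ in range(N_NURSES - 1)]
        for s in top:
            for i in range(1, N_NURSES):
                cond[i - 1][s[i - 1]][s[i]] += 1
        return marg, cond

    def sample(marg, cond):
        # Draw a new rule string variable by variable from the learned network.
        s = [random.choices(range(N_RULES), weights=marg)[0]]
        for i in range(1, N_NURSES):
            s.append(random.choices(range(N_RULES), weights=cond[i - 1][s[-1]])[0])
        return s

    pop = [[random.randrange(N_RULES) for _ in range(N_NURSES)] for _ in range(60)]
    for _ in range(30):                  # estimate, sample, select
        marg, cond = learn_chain_cpts(pop)
        pop = sorted(pop + [sample(marg, cond) for _ in range(60)],
                     key=fitness, reverse=True)[:60]
    print(fitness(pop[0]), pop[0])
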
arxiv-2467 | 0801.3982 | Pseudo-Random Bit Generation based on 2D chaotic maps of logistic type and its Applications in Chaotic Cryptography | <|reference_start|>Pseudo-Random Bit Generation based on 2D chaotic maps of logistic type and its Applications in Chaotic Cryptography: Pseudo-Random Bit Generation (PRBG) is required in many aspects of cryptography as well as in other applications of modern security engineering. In this work, PRBG based on 2D symmetrical chaotic mappings of logistic type is considered. The sequences generated with a chaotic PRBG of this type are statistically tested and the computational effectiveness of the generators is estimated. Considering this PRBG valid for cryptography, the size of the available key space is also calculated. Different cryptographic applications can be suited to this PRBG, a stream cipher probably being the most immediate of them.<|reference_end|> | arxiv | @article{pellicer-lostao2008pseudo-random,
title={Pseudo-Random Bit Generation based on 2D chaotic maps of logistic type
and its Applications in Chaotic Cryptography},
author={C. Pellicer-Lostao and R. Lopez-Ruiz},
journal={arXiv preprint arXiv:0801.3982},
year={2008},
archivePrefix={arXiv},
eprint={0801.3982},
primaryClass={nlin.CD cs.CR physics.comp-ph}
} | pellicer-lostao2008pseudo-random |
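
As a concrete illustration, one symmetric 2D logistic-type coupling from the chaotic-maps literature can be discretized by thresholding an orbit; the specific map, parameter value and discretization rule in this Python sketch are illustrative and may differ from the authors' choices (the seed pair and lam play the role of the key):

    def chaotic_bits(x, y, lam=1.19, n_bits=1024, burn_in=500):
        # Symmetric 2D coupled logistic-type map, iterated past a transient;
        # lam is kept in the chaotic window so orbits stay in the unit square.
        bits = []
        for i in range(burn_in + n_bits):
            x, y = lam * (3 * y + 1) * x * (1 - x), lam * (3 * x + 1) * y * (1 - y)
            if i >= burn_in:
                bits.append(1 if x > 0.5 else 0)  # threshold discretization
        return bits

    b = chaotic_bits(0.41, 0.22)
    print(sum(b) / len(b))  # balance check: should hover near 0.5
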
arxiv-2468 | 0801.3983 | New Upper Bounds on Sizes of Permutation Arrays | <|reference_start|>New Upper Bounds on Sizes of Permutation Arrays: A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. New upper bounds on $P(n,d)$ are given. For constants $\alpha,\beta$ satisfying certain conditions, whenever $d=\beta n^{\alpha}$, the new upper bounds are asymptotically better than the previous ones.<|reference_end|> | arxiv | @article{yang2008new,
title={New Upper Bounds on Sizes of Permutation Arrays},
author={Lizhen Yang, Ling Dong, Kefei Chen},
journal={arXiv preprint arXiv:0801.3983},
year={2008},
archivePrefix={arXiv},
eprint={0801.3983},
primaryClass={cs.IT math.IT}
} | yang2008new |
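
Concretely, the Hamming distance between two permutations counts the positions where they disagree, and greedy search yields quick (if non-optimal) lower-bound witnesses for tiny parameters; a small Python sketch:

    from itertools import permutations

    def hamming(p, q):
        return sum(a != b for a, b in zip(p, q))

    def greedy_pa(n, d):
        # Greedily build an (n, d) permutation array in lexicographic order;
        # its size is a lower bound on P(n, d), not necessarily P(n, d) itself.
        code = []
        for p in permutations(range(n)):
            if all(hamming(p, q) >= d for q in code):
                code.append(p)
        return code

    print(len(greedy_pa(4, 3)), len(greedy_pa(5, 4)))
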
arxiv-2469 | 0801.3985 | Cobweb posets - Recent Results | <|reference_start|>Cobweb posets - Recent Results: Cobweb posets uniquely represented by directed acyclic graphs are such a generalization of the Fibonacci tree that allows joint combinatorial interpretation for all of them under an admissibility condition. This interpretation was derived in the source papers ([6,7] and references therein to the first author). [7,6,8] include natural enquiries to be reported on here. The purpose of this presentation is to report on the progress in solving computational problems which are quite easily formulated for the new class of directed acyclic graphs interpreted as Hasse diagrams. The problems posed there and not yet all solved completely are of crucial importance for the vast class of new partially ordered sets with joint combinatorial interpretation. These so-called cobweb posets are relatives of the Fibonacci tree and are labeled by specific number sequences - the natural numbers sequence and the Fibonacci sequence included. The cobweb posets might be identified with a chain of di-bicliques, i.e., by definition, a chain of complete bipartite one-direction digraphs [6]. Any chain of relations is therefore obtainable from the cobweb poset chain of complete relations via deleting arcs in di-bicliques of the complete relations chain. In particular we respond to one of those problems [1].<|reference_end|> | arxiv | @article{kwasniewski2008cobweb,
title={Cobweb posets - Recent Results},
author={A. Krzysztof Kwasniewski, M. Dziemianczuk},
journal={Adv. Stud. Contemp. Math. volume 16 (2), 2008 (April) pp. 197-218},
year={2008},
archivePrefix={arXiv},
eprint={0801.3985},
primaryClass={math.CO cs.DM}
} | kwasniewski2008cobweb |
arxiv-2470 | 0801.3986 | New Lower Bounds on Sizes of Permutation Arrays | <|reference_start|>New Lower Bounds on Sizes of Permutation Arrays: A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. This correspondence focuses on the lower bound on $P(n,d)$. First we give three improvements over the Gilbert-Varshamov lower bounds on $P(n,d)$ by applying the graph-theoretic framework presented by Jiang and Vardy. Next we show two further improved bounds by considering intersections of the covered balls. Finally some new lower bounds for certain values of $n$ and $d$ are given.<|reference_end|> | arxiv | @article{yang2008new,
title={New Lower Bounds on Sizes of Permutation Arrays},
author={Lizhen Yang, Kefei Chen, Luo Yuan},
journal={arXiv preprint arXiv:0801.3986},
year={2008},
archivePrefix={arXiv},
eprint={0801.3986},
primaryClass={cs.IT math.IT}
} | yang2008new |
arxiv-2471 | 0801.3987 | New Constructions of Permutation Arrays | <|reference_start|>New Constructions of Permutation Arrays: A permutation array (permutation code, PA) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. In this correspondence, we present two constructions of PA from fractional polynomials over finite fields, and a construction of $(n,d)$ PA from a permutation group with degree $n$ and minimal degree $d$. All these new constructions produce some new lower bounds for PAs.<|reference_end|> | arxiv | @article{yang2008new,
title={New Constructions of Permutation Arrays},
author={Lizhen Yang, Kefei Chen, Luo Yuan},
journal={arXiv preprint arXiv:0801.3987},
year={2008},
archivePrefix={arXiv},
eprint={0801.3987},
primaryClass={cs.IT math.IT}
} | yang2008new |
arxiv-2472 | 0801.4013 | Spanners of Additively Weighted Point Sets | <|reference_start|>Spanners of Additively Weighted Point Sets: We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs $(p,r)$ where $p$ is a point in the plane and $r$ is a real number. The distance between two points $(p_i,r_i)$ and $(p_j,r_j)$ is defined as $|p_ip_j|-r_i-r_j$. We show that in the case where all $r_i$ are positive numbers and $|p_ip_j|\geq r_i+r_j$ for all $i,j$ (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a $(1+\epsilon)$-spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.<|reference_end|> | arxiv | @article{bose2008spanners,
title={Spanners of Additively Weighted Point Sets},
author={Prosenjit Bose and Paz Carmi and Mathieu Couture},
journal={arXiv preprint arXiv:0801.4013},
year={2008},
archivePrefix={arXiv},
eprint={0801.4013},
primaryClass={cs.CG}
} | bose2008spanners |
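
Below is a Python sketch that evaluates the spanning ratio of a candidate edge set under the additively weighted distance (Dijkstra is valid because the non-intersection assumption keeps all edge weights non-negative; the construction of the Yao-variant edges themselves is omitted, and the point data is illustrative):

    import heapq, math

    def wdist(a, b):
        (pa, ra), (pb, rb) = a, b
        return math.dist(pa, pb) - ra - rb  # additively weighted distance

    def spanning_ratio(points, edges):
        # max over pairs of (graph shortest path) / (weighted distance).
        n = len(points)
        adj = [[] for _ in range(n)]
        for i, j in edges:
            w = wdist(points[i], points[j])
            adj[i].append((j, w))
            adj[j].append((i, w))
        worst = 1.0
        for s in range(n):
            dist = [math.inf] * n
            dist[s] = 0.0
            pq = [(0.0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue
                for v, w in adj[u]:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(pq, (dist[v], v))
            worst = max(worst, max(dist[t] / wdist(points[s], points[t])
                                   for t in range(n) if t != s))
        return worst

    # Four non-intersecting unit disks; dropping the longest edge forces a detour.
    pts = [((0, 0), 1.0), ((10, 0), 1.0), ((5, 8), 1.0), ((14, 7), 1.0)]
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # complete graph minus (0, 3)
    print(spanning_ratio(pts, edges))  # about 1.03 for this instance
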
arxiv-2473 | 0801.4019 | A Class of Convex Polyhedra with Few Edge Unfoldings | <|reference_start|>A Class of Convex Polyhedra with Few Edge Unfoldings: We construct a sequence of convex polyhedra on n vertices with the property that, as n -> infinity, the fraction of its edge unfoldings that avoid overlap approaches 0, and so the fraction that overlap approaches 1. Nevertheless, each does have (several) nonoverlapping edge unfoldings.<|reference_end|> | arxiv | @article{benton2008a,
title={A Class of Convex Polyhedra with Few Edge Unfoldings},
author={Alex Benton and Joseph O'Rourke},
journal={arXiv preprint arXiv:0801.4019},
year={2008},
number={Smith Computer Science 088},
archivePrefix={arXiv},
eprint={0801.4019},
primaryClass={cs.CG}
} | benton2008a |
arxiv-2474 | 0801.4024 | Set-based complexity and biological information | <|reference_start|>Set-based complexity and biological information: It is not obvious what fraction of all the potential information residing in the molecules and structures of living systems is significant or meaningful to the system. Sets of random sequences or identically repeated sequences, for example, would be expected to contribute little or no useful information to a cell. This issue of quantitation of information is important since the ebb and flow of biologically significant information is essential to our quantitative understanding of biological function and evolution. Motivated specifically by these problems of biological information, we propose here a class of measures to quantify the contextual nature of the information in sets of objects, based on Kolmogorov's intrinsic complexity. Such measures discount both random and redundant information and are inherent in that they do not require a defined state space to quantify the information. The maximization of this new measure, which can be formulated in terms of the universal information distance, appears to have several useful and interesting properties, some of which we illustrate with examples.<|reference_end|> | arxiv | @article{galas2008set-based,
title={Set-based complexity and biological information},
author={David J. Galas, Matti Nykter, Gregory W. Carter, Nathan D. Price, Ilya
Shmulevich},
journal={arXiv preprint arXiv:0801.4024},
year={2008},
archivePrefix={arXiv},
eprint={0801.4024},
primaryClass={cs.IT cs.CC math.IT q-bio.QM}
} | galas2008set-based |
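
Kolmogorov complexity is uncomputable, so measures in this family are estimated in practice with compressors. The normalized compression distance is the standard computable stand-in for the universal information distance invoked above (a generic proxy, not necessarily the authors' exact instrument); a minimal Python sketch:

    import random, zlib

    def c(s):
        return len(zlib.compress(s, 9))  # compressed length approximates K(s)

    def ncd(x, y):
        # Normalized compression distance, approximating
        # max(K(x|y), K(y|x)) / max(K(x), K(y)).
        cx, cy = c(x), c(y)
        return (c(x + y) - min(cx, cy)) / max(cx, cy)

    rng = random.Random(1)
    rep = b"ACGT" * 200                                  # redundant, low complexity
    rnd = bytes(rng.getrandbits(8) for _ in range(800))  # random, high complexity
    print(ncd(rep, rep[200:] + rep[:200]), ncd(rep, rnd))

A set-based measure built on such distances discounts both extremes: sets of near-duplicates and sets of mutually random strings both score low.
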
arxiv-2475 | 0801.4048 | High Performance Cooperative Transmission Protocols Based on Multiuser Detection and Network Coding | <|reference_start|>High Performance Cooperative Transmission Protocols Based on Multiuser Detection and Network Coding: Cooperative transmission is an emerging communication technique that takes advantage of the broadcast nature of wireless channels. However, due to low spectral efficiency and the requirement of orthogonal channels, its potential for use in future wireless networks is limited. In this paper, by making use of multiuser detection (MUD) and network coding, cooperative transmission protocols with high spectral efficiency, diversity order, and coding gain are developed. Compared with the traditional cooperative transmission protocols with single-user detection, in which the diversity gain is only for one source user, the proposed MUD cooperative transmission protocols have the merit that the improvement of one user's link can also benefit the other users. In addition, using MUD at the relay provides an environment in which network coding can be employed. The coding gain and high diversity order can be obtained by fully utilizing the link between the relay and the destination. From the analysis and simulation results, it is seen that the proposed protocols achieve higher diversity gain, better asymptotic efficiency, and lower bit error rate, compared to traditional MUD schemes and to existing cooperative transmission protocols. From the simulation results, the performance of the proposed scheme is near optimal as the performance gap is 0.12 dB for average bit error rate (BER) 10^{-6} and 1.04 dB for average BER 10^{-3}, compared to two performance upper bounds.<|reference_end|> | arxiv | @article{han2008high,
title={High Performance Cooperative Transmission Protocols Based on Multiuser
Detection and Network Coding},
author={Zhu Han, Xin Zhang, H. Vincent Poor},
journal={arXiv preprint arXiv:0801.4048},
year={2008},
doi={10.1109/TWC.2009.070181},
archivePrefix={arXiv},
eprint={0801.4048},
primaryClass={cs.IT math.IT}
} | han2008high |
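
The network-coding step at the relay has a simple bit-level core: once multiuser detection has recovered both users' packets, the relay forwards their XOR and each destination cancels the packet it already knows. A Python toy of just that step (channel coding and detection are left out):

    import os

    def xor(a, b):
        return bytes(u ^ v for u, v in zip(a, b))

    pkt_a, pkt_b = os.urandom(16), os.urandom(16)  # decoded packets of users A, B

    relay_out = xor(pkt_a, pkt_b)  # one coded transmission instead of two

    assert xor(relay_out, pkt_a) == pkt_b  # destination knowing A recovers B
    assert xor(relay_out, pkt_b) == pkt_a  # and vice versa
    print("both packets recovered from a single coded relay transmission")
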
arxiv-2476 | 0801.4054 | Bounded Mean-Delay Throughput and Non-Starvation Conditions in Aloha Network | <|reference_start|>Bounded Mean-Delay Throughput and Non-Starvation Conditions in Aloha Network: This paper considers the requirements to ensure bounded mean queuing delay and non-starvation in a slotted Aloha network operating the exponential backoff protocol. It is well-known that the maximum possible throughput of a slotted Aloha system with a large number of nodes is 1/e = 0.3679. Indeed, a saturation throughput of 1/e can be achieved with an exponential backoff factor of r = e/(e-1) = 1.5820. The binary backoff factor of r = 2 is assumed in the majority of prior work, and in many practical multiple-access networks such as the Ethernet and WiFi. For slotted Aloha, the saturation throughput 0.3466 for r = 2 is reasonably close to the maximum of 1/e, and one could hardly raise objection to adopting r = 2 in the system. However, this paper shows that if mean queuing delay is to be bounded, then the sustainable throughput when r = 2 is only 0.2158, a drastic 41% drop from 1/e. Fortunately, the optimal setting of r = 1.3757 under the bounded mean-delay requirement allows us to achieve sustainable throughput of 0.3545, a penalty of only less than 4% relative to 1/e. A general conclusion is that the value of r may significantly affect the queuing delay performance. Besides analyzing mean queuing delay, this paper also delves into the phenomenon of starvation, wherein some nodes are deprived of service for an extended period of time while other nodes hog the system. Specifically, we propose a quantitative definition for starvation and show that the conditions to guarantee bounded mean delay and non-starved operation are one and the same, thus uniting these two notions. Finally, we show that when mean delay is large and starvation occurs, the performance results obtained from simulation experiments may not converge. A quantitative discussion of this issue is provided in this paper.<|reference_end|> | arxiv | @article{liew2008bounded,
title={Bounded Mean-Delay Throughput and Non-Starvation Conditions in Aloha
Network},
author={Soung Chang Liew, Ying Jun Zhang, Da Rui Chen},
journal={arXiv preprint arXiv:0801.4054},
year={2008},
archivePrefix={arXiv},
eprint={0801.4054},
primaryClass={cs.NI}
} | liew2008bounded |
arxiv-2477 | 0801.4061 | The optimal assignment kernel is not positive definite | <|reference_start|>The optimal assignment kernel is not positive definite: We prove that the optimal assignment kernel, proposed recently as an attempt to embed labeled graphs and more generally tuples of basic data to a Hilbert space, is in fact not always positive definite.<|reference_end|> | arxiv | @article{vert2008the,
title={The optimal assignment kernel is not positive definite},
author={Jean-Philippe Vert (CB)},
journal={arXiv preprint arXiv:0801.4061},
year={2008},
archivePrefix={arXiv},
eprint={0801.4061},
primaryClass={cs.LG}
} | vert2008the |
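
The claim can be probed numerically: build the optimal-assignment Gram matrix for a few sets and inspect its spectrum; a strictly negative eigenvalue on some input certifies non-positive-definiteness (the base kernel and data in this Python sketch are illustrative, and not every input exhibits the failure):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def base_k(x, y, gamma=0.5):
        return np.exp(-gamma * (x - y) ** 2)  # Gaussian base kernel on scalars

    def assignment_kernel(X, Y):
        # Optimal assignment kernel: best one-to-one matching of set elements.
        M = base_k(np.asarray(X)[:, None], np.asarray(Y)[None, :])
        r, c = linear_sum_assignment(-M)      # Hungarian step, maximizing similarity
        return M[r, c].sum()

    sets = [[0.0, 1.0], [0.1, 2.0], [1.5, 3.0], [0.0, 3.1]]
    G = np.array([[assignment_kernel(a, b) for b in sets] for a in sets])
    print(np.linalg.eigvalsh(G).min())  # negative on some inputs => not PSD
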
arxiv-2478 | 0801.4079 | An equivalence preserving transformation from the Fibonacci to the Galois NLFSRs | <|reference_start|>An equivalence preserving transformation from the Fibonacci to the Galois NLFSRs: Conventional Non-Linear Feedback Shift Registers (NLFSRs) use the Fibonacci configuration in which the value of the first bit is updated according to some non-linear feedback function of previous values of other bits, and each remaining bit repeats the value of its previous bit. We show how to transform the feedback function of a Fibonacci NLFSR into several smaller feedback functions of individual bits. Such a transformation reduces the propagation time, thus increasing the speed of pseudo-random sequence generation. The practical significance of the presented technique is that it makes it possible to increase the keystream generation speed of any Fibonacci NLFSR-based stream cipher with no penalty in area.<|reference_end|> | arxiv | @article{dubrova2008an,
title={An equivalence preserving transformation from the Fibonacci to the
Galois NLFSRs},
author={Elena Dubrova (Royal Institute of Technology)},
journal={arXiv preprint arXiv:0801.4079},
year={2008},
archivePrefix={arXiv},
eprint={0801.4079},
primaryClass={cs.CR}
} | dubrova2008an |
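
For reference, a Fibonacci NLFSR fits in a few lines of Python (the feedback function below is a hypothetical example); the paper's transformation replaces the single wide feedback with small per-bit updates in the Galois style, which is what shortens the critical path:

    def fibonacci_nlfsr(state, feedback, steps):
        # Fibonacci configuration: only the last bit gets the non-linear update,
        # every other bit just shifts. state[0] is the output end.
        out = []
        for _ in range(steps):
            out.append(state[0])
            state = state[1:] + [feedback(state)]
        return out

    # Hypothetical non-linear feedback: x0 + x1 + x2*x3 over GF(2).
    f = lambda s: s[0] ^ s[1] ^ (s[2] & s[3])
    print(fibonacci_nlfsr([1, 0, 0, 1], f, 16))
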
arxiv-2479 | 0801.4082 | On Reliability of Dynamic Addressing Routing Protocols in Mobile Ad Hoc Networks | <|reference_start|>On Reliability of Dynamic Addressing Routing Protocols in Mobile Ad Hoc Networks: In this paper, a reliability analysis is carried out to establish a performance comparison between two recently proposed proactive routing algorithms. These protocols are able to scale in ad hoc and sensor networks by resorting to dynamic addressing, coping with the topology variability that is typical of ad hoc and sensor networks. Numerical simulations are also carried out to corroborate the results of the analysis.<|reference_end|> | arxiv | @article{caleffi2008on,
title={On Reliability of Dynamic Addressing Routing Protocols in Mobile Ad Hoc
Networks},
author={Marcello Caleffi, Giancarlo Ferraiuolo, Luigi Paura},
journal={arXiv preprint arXiv:0801.4082},
year={2008},
archivePrefix={arXiv},
eprint={0801.4082},
primaryClass={cs.NI cs.DC}
} | caleffi2008on |
arxiv-2480 | 0801.4105 | Quantified Propositional Logspace Reasoning | <|reference_start|>Quantified Propositional Logspace Reasoning: In this paper, we develop a quantified propositional proof system that corresponds to logarithmic-space reasoning. We begin by defining a class SigmaCNF(2) of quantified formulas that can be evaluated in log space. Then our new proof system GL^* is defined as G_1^* with cuts restricted to SigmaCNF(2) formulas and no cut formula that is not quantifier free contains a free variable that does not appear in the final formula. To show that GL^* is strong enough to capture log space reasoning, we translate theorems of VL into a family of tautologies that have polynomial-size GL^* proofs. VL is a theory of bounded arithmetic that is known to correspond to logarithmic-space reasoning. To do the translation, we find an appropriate axiomatization of VL, and put VL proofs into a new normal form. To show that GL^* is not too strong, we prove the soundness of GL^* in such a way that it can be formalized in VL. This is done by giving a logarithmic-space algorithm that witnesses GL^* proofs.<|reference_end|> | arxiv | @article{perron2008quantified,
title={Quantified Propositional Logspace Reasoning},
author={Steven Perron (University of Toronto)},
journal={arXiv preprint arXiv:0801.4105},
year={2008},
archivePrefix={arXiv},
eprint={0801.4105},
primaryClass={cs.LO cs.CC}
} | perron2008quantified |
arxiv-2481 | 0801.4119 | Strategic Alert Throttling for Intrusion Detection Systems | <|reference_start|>Strategic Alert Throttling for Intrusion Detection Systems: Network intrusion detection systems are themselves becoming targets of attackers. Alert flood attacks may be used to conceal malicious activity by hiding it among a deluge of false alerts sent by the attacker. Although these types of attacks are very hard to stop completely, our aim is to present techniques that improve alert throughput and capacity to such an extent that the resources required to successfully mount the attack become prohibitive. The key idea presented is to combine a token bucket filter with a real-time correlation algorithm. The proposed algorithm throttles alert output from the IDS when an attack is detected. The attack graph used in the correlation algorithm ensures that alerts crucial to forming strategies are not discarded by throttling.<|reference_end|> | arxiv | @article{tedesco2008strategic,
title={Strategic Alert Throttling for Intrusion Detection Systems},
author={Gianni Tedesco and Uwe Aickelin},
journal={4th WSEAS International Conference on Information Security (WSEAS
2005), Tenerife, Spain, 2005},
year={2008},
archivePrefix={arXiv},
eprint={0801.4119},
primaryClass={cs.NE cs.CR}
} | tedesco2008strategic |
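A minimal sketch of the throttling scheme described in the abstract above, assuming a standard token-bucket filter in front of the alert stream; the `is_crucial` predicate stands in for the attack-graph correlation check, and the rate and capacity values are illustrative rather than taken from the paper.

```python
import time

class TokenBucket:
    """Standard token-bucket filter: each forwarded alert consumes one
    token; tokens refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate_per_s: float, capacity: float):
        self.rate = rate_per_s
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def throttled(alerts, bucket, is_crucial):
    """Yield alerts that either pass the bucket or are crucial to
    attack-strategy formation (those are never throttled)."""
    for alert in alerts:
        if is_crucial(alert) or bucket.allow():
            yield alert

bucket = TokenBucket(rate_per_s=100.0, capacity=500.0)
flood = [{"sig": "noise"}] * 10_000 + [{"sig": "exploit", "crucial": True}]
kept = list(throttled(flood, bucket, lambda a: a.get("crucial", False)))
print(len(kept))  # roughly the burst capacity, plus the crucial alert
```

During an alert flood the bucket empties and ordinary alerts are dropped, while alerts the correlator marks as crucial bypass the filter, matching the design goal stated in the abstract.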
arxiv-2482 | 0801.4129 | Scaling Laws and Techniques in Decentralized Processing of Interfered Gaussian Channels | <|reference_start|>Scaling Laws and Techniques in Decentralized Processing of Interfered Gaussian Channels: The scaling laws of the achievable communication rates and the corresponding upper bounds of distributed reception in the presence of an interfering signal are investigated. The scheme includes one transmitter communicating to a remote destination via two relays, which forward messages to the remote destination through reliable links with finite capacities. The relays receive the transmission along with some unknown interference. We focus on three common settings for distributed reception, wherein the scaling laws of the capacity (the pre-log as the power of the transmitter and the interference are taken to infinity) are completely characterized. It is shown in most cases that in order to overcome the interference, a definite amount of information about the interference needs to be forwarded to the destination along with the desired message. It is exemplified in one scenario that the cut-set upper bound is strictly loose. The results are derived using the cut-set along with a new bounding technique, which relies on multi-letter expressions. Furthermore, lattices are found to be a useful communication technique in this setting, and are used to characterize the scaling laws of achievable rates.<|reference_end|> | arxiv | @article{sanderovich2008scaling,
title={Scaling Laws and Techniques in Decentralized Processing of Interfered
Gaussian Channels},
author={Amichai Sanderovich, Michael Peleg and Shlomo Shamai},
journal={arXiv preprint arXiv:0801.4129},
year={2008},
archivePrefix={arXiv},
eprint={0801.4129},
primaryClass={cs.IT math.IT}
} | sanderovich2008scaling |
arxiv-2483 | 0801.4130 | Solving Min-Max Problems with Applications to Games | <|reference_start|>Solving Min-Max Problems with Applications to Games: We refine existing general network optimization techniques, give new characterizations for the class of problems to which they can be applied, and show that they can also be used to solve various two-player games in almost linear time. Among these is a new variant of the network interdiction problem, where the interdictor wants to destroy high-capacity paths from the source to the destination using a vertex-wise limited budget of arc removals. We also show that replacing the limit average in mean payoff games by the maximum weight results in a class of games amenable to these techniques.<|reference_end|> | arxiv | @article{andersson2008solving,
title={Solving Min-Max Problems with Applications to Games},
author={Daniel Andersson},
journal={arXiv preprint arXiv:0801.4130},
year={2008},
archivePrefix={arXiv},
eprint={0801.4130},
primaryClass={cs.GT cs.DS}
} | andersson2008solving |
arxiv-2484 | 0801.4150 | e-Science perspectives in Venezuela | <|reference_start|>e-Science perspectives in Venezuela: We describe the e-Science strategy in Venezuela, in particular initiatives by the Centro Nacional de Calculo Cientifico Universidad de Los Andes (CECALCULA), Merida, the Universidad de Los Andes (ULA), Merida, and the Instituto Venezolano de Investigaciones Cientificas (IVIC), Caracas. We present the plans for the Venezuelan Academic Grid and the current status of Grid ULA supported by Internet2. We show different web-based scientific applications that are being developed in quantum chemistry, atomic physics, structural damage analysis, biomedicine and bioclimate within the framework of the E-Infrastructure shared between Europe and Latin America (EELA)<|reference_end|> | arxiv | @article{diaz2008e-science,
title={e-Science perspectives in Venezuela},
author={G. Diaz, J. Florez-Lopez, V. Hamar, H. Hoeger, C. Mendoza, Z. Mendez,
L. A. Nunez, N. Ruiz, R. Torrens, M. Uzcategui},
journal={Proceedings of the Third EELA Conference, R. Gavela, B. Marechal,
R. Barbera, L.N. Ciuffo, R. Mayo. (Editors), CIEMAT, Madrid, Spain (2007), pp
131-139},
year={2008},
archivePrefix={arXiv},
eprint={0801.4150},
primaryClass={cs.DC}
} | diaz2008e-science |
arxiv-2485 | 0801.4158 | Measuring the Dynamical State of the Internet: Large Scale Network Tomography via the ETOMIC Infrastructure | <|reference_start|>Measuring the Dynamical State of the Internet: Large Scale Network Tomography via the ETOMIC Infrastructure: In this paper we show how to go beyond the study of the topological properties of the Internet, by measuring its dynamical state using special active probing techniques and the methods of network tomography. We demonstrate this approach by measuring the key state parameters of Internet paths, the characteristics of queueing delay, in a part of the European Internet. In the paper we describe in detail the ETOMIC measurement platform that was used to conduct the experiments, and the applied method of queueing delay tomography. The main results of the paper are maps showing various spatial structure in the characteristics of queueing delay corresponding to the resolved part of the European Internet. These maps reveal that the average queueing delay of network segments spans more than two orders of magnitude, and that the distribution of this quantity is very well fitted by the log-normal distribution.<|reference_end|> | arxiv | @article{simon2008measuring,
title={Measuring the Dynamical State of the Internet: Large Scale Network
Tomography via the ETOMIC Infrastructure},
author={Gabor Simon, Jozsef Steger, Peter Haga, Istvan Csabai, Gabor Vattay},
journal={arXiv preprint arXiv:0801.4158},
year={2008},
archivePrefix={arXiv},
eprint={0801.4158},
primaryClass={physics.data-an cs.NI}
} | simon2008measuring |
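The headline empirical claim above, that per-segment average queueing delays span two orders of magnitude and are well fitted by a log-normal distribution, is straightforward to check on a vector of delay estimates. A minimal sketch; the synthetic data and the SciPy-based fit are illustrative assumptions, not the ETOMIC tomography procedure itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for per-segment average queueing delays (seconds),
# spanning roughly two orders of magnitude
delays = rng.lognormal(mean=-7.0, sigma=1.2, size=500)

# fit a log-normal with the location parameter pinned at zero
shape, loc, scale = stats.lognorm.fit(delays, floc=0)

# Kolmogorov-Smirnov goodness of fit against the fitted distribution
ks_stat, p_value = stats.kstest(delays, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.2f}, median={scale:.2e} s, KS p-value={p_value:.3f}")
```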
arxiv-2486 | 0801.4190 | Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep | <|reference_start|>Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep: We introduce a new phylogenetic reconstruction algorithm which, unlike most previous rigorous inference techniques, does not rely on assumptions regarding the branch lengths or the depth of the tree. The algorithm returns a forest which is guaranteed to contain all edges that are: 1) sufficiently long and 2) sufficiently close to the leaves. How much of the true tree is recovered depends on the sequence length provided. The algorithm is distance-based and runs in polynomial time.<|reference_end|> | arxiv | @article{daskalakis2008phylogenies,
title={Phylogenies without Branch Bounds: Contracting the Short, Pruning the
Deep},
author={Constantinos Daskalakis, Elchanan Mossel, Sebastien Roch},
journal={arXiv preprint arXiv:0801.4190},
year={2008},
archivePrefix={arXiv},
eprint={0801.4190},
primaryClass={q-bio.PE cs.CE cs.DS math.PR math.ST stat.TH}
} | daskalakis2008phylogenies |
arxiv-2487 | 0801.4194 | A statistical mechanical interpretation of algorithmic information theory | <|reference_start|>A statistical mechanical interpretation of algorithmic information theory: We develop a statistical mechanical interpretation of algorithmic information theory by introducing the notion of thermodynamic quantities, such as free energy, energy, statistical mechanical entropy, and specific heat, into algorithmic information theory. We investigate the properties of these quantities by means of program-size complexity from the point of view of algorithmic randomness. It is then discovered that, in the interpretation, the temperature plays the role of the compression rate of the values of all these thermodynamic quantities, which include the temperature itself. Reflecting this self-referential nature of the compression rate of the temperature, we obtain fixed point theorems on compression rate.<|reference_end|> | arxiv | @article{tadaki2008a,
title={A statistical mechanical interpretation of algorithmic information
theory},
author={Kohtaro Tadaki},
journal={arXiv preprint arXiv:0801.4194},
year={2008},
archivePrefix={arXiv},
eprint={0801.4194},
primaryClass={cs.IT cs.CC math.IT math.PR quant-ph}
} | tadaki2008a |
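For orientation, the thermodynamic quantities above are built on a temperature-deformed partition function over the halting programs of an optimal prefix-free machine $U$, with $|p|$ the program length. The free-energy line below is an assumption by analogy with $F = -kT \ln Z$ and should be checked against the paper:

```latex
Z(T) \;=\; \sum_{p \,\in\, \mathrm{dom}\,U} 2^{-|p|/T},
\qquad
F(T) \;=\; -\,T \log_2 Z(T)
```

At $T = 1$, $Z(1)$ reduces to Chaitin's halting probability $\Omega$.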
arxiv-2488 | 0801.4198 | Microscopic Analysis for Decoupling Principle of Linear Vector Channel | <|reference_start|>Microscopic Analysis for Decoupling Principle of Linear Vector Channel: This paper studies the decoupling principle of a linear vector channel, which is an extension of CDMA and MIMO channels. We show that the scalar-channel characterization obtained via the decoupling principle is valid not only for collections of a large number of elements of the input vector, as discussed in previous studies, but also for individual elements of the input vector, i.e., the linear vector channel for individual elements of the channel input vector is decomposed into a bank of independent scalar Gaussian channels in the large-system limit, where the dimensions of the channel input and output are both sent to infinity while their ratio is kept fixed.<|reference_end|> | arxiv | @article{nakamura2008microscopic,
title={Microscopic Analysis for Decoupling Principle of Linear Vector Channel},
author={Kazutaka Nakamura, Toshiyuki Tanaka},
journal={arXiv preprint arXiv:0801.4198},
year={2008},
archivePrefix={arXiv},
eprint={0801.4198},
primaryClass={cs.IT math.IT}
} | nakamura2008microscopic |
arxiv-2489 | 0801.4230 | Quantum entanglement analysis based on abstract interpretation | <|reference_start|>Quantum entanglement analysis based on abstract interpretation: Entanglement is a nonlocal property of quantum states which has no classical counterpart and plays a decisive role in quantum information theory. Several protocols, like teleportation, are based on quantum entangled states. Moreover, any quantum algorithm which does not create entanglement can be efficiently simulated on a classical computer. The exact role of entanglement is nevertheless not well understood. Since an exact analysis of entanglement evolution induces an exponential slowdown, we consider approximate analyses based on the framework of abstract interpretation. In this paper, a concrete quantum semantics based on superoperators is associated with a simple quantum programming language. The representation of entanglement, i.e., the design of the abstract domain, is a key issue. A representation of entanglement as a partition of the memory is chosen. An abstract semantics is introduced, and the soundness of the approximation is proven.<|reference_end|> | arxiv | @article{perdrix2008quantum,
title={Quantum entanglement analysis based on abstract interpretation},
author={Simon Perdrix},
journal={Proc. of 15th International Static Analysis Symposium (SAS 2008).
LNCS 5079, pp 270-282},
year={2008},
doi={10.1007/978-3-540-69166-2_18},
archivePrefix={arXiv},
eprint={0801.4230},
primaryClass={cs.LO cs.PL quant-ph}
} | perdrix2008quantum |
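The partition-based abstract domain mentioned above can be sketched in a few lines, under the simplifying assumption that every multi-qubit gate merges the partition blocks of its operands; the paper's actual abstract semantics is derived from the superoperator semantics and is more refined than this over-approximation.

```python
class EntanglementPartition:
    """Abstract state: a partition of qubit names. Qubits in the same
    block *may* be entangled; qubits in different blocks are guaranteed
    separable (a sound over-approximation)."""

    def __init__(self, qubits):
        self.blocks = [{q} for q in qubits]

    def _block_of(self, q):
        return next(b for b in self.blocks if q in b)

    def apply_gate(self, *qubits):
        if len(qubits) <= 1:
            return  # single-qubit gates cannot create entanglement
        merged = set()
        for q in qubits:
            merged |= self._block_of(q)
        self.blocks = [b for b in self.blocks if not (b & merged)]
        self.blocks.append(merged)

state = EntanglementPartition(["a", "b", "c"])
state.apply_gate("a")        # e.g. a Hadamard: partition unchanged
state.apply_gate("a", "b")   # e.g. a CNOT: blocks of a and b merge
print(state.blocks)          # [{'c'}, {'a', 'b'}]: c is provably separable
```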
arxiv-2490 | 0801.4238 | Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems | <|reference_start|>Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems: We study scheduling problems motivated by recently developed techniques for microprocessor thermal management at the operating systems level. The general scenario can be described as follows. The microprocessor's temperature is controlled by the hardware thermal management system that continuously monitors the chip temperature and automatically reduces the processor's speed as soon as the thermal threshold is exceeded. Some tasks are more CPU-intensive than others and thus generate more heat during execution. The cooling system operates non-stop, reducing (at an exponential rate) the deviation of the processor's temperature from the ambient temperature. As a result, the processor's temperature, and thus the performance as well, depends on the order of the task execution. Given a variety of possible underlying architectures, models for cooling and for hardware thermal management, as well as types of tasks, this scenario gives rise to a plethora of interesting and previously unstudied scheduling problems. We focus on scheduling real-time jobs in a simplified model for cooling and thermal management. A collection of unit-length jobs is given, each job specified by its release time, deadline and heat contribution. If, at some time step, the temperature of the system is t and the processor executes a job with heat contribution h, then the temperature at the next step is (t+h)/2. The temperature cannot exceed the given thermal threshold T. The objective is to maximize the throughput, that is, the number of tasks that meet their deadlines. We prove that, in the offline case, computing the optimum schedule is NP-hard, even if all jobs are released at the same time. In the online case, we show a 2-competitive deterministic algorithm and a matching lower bound.<|reference_end|> | arxiv | @article{chrobak2008algorithms,
title={Algorithms for Temperature-Aware Task Scheduling in Microprocessor
Systems},
author={Marek Chrobak, Christoph Durr, Mathilde Hurand and Julien Robert},
journal={arXiv preprint arXiv:0801.4238},
year={2008},
archivePrefix={arXiv},
eprint={0801.4238},
primaryClass={cs.DS}
} | chrobak2008algorithms |
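The model above is concrete enough to simulate directly: unit jobs, threshold T, and the rule that running a job with heat contribution h at temperature t yields temperature (t+h)/2 (idling corresponds to h = 0). A minimal sketch with a greedy earliest-deadline rule; this tie-breaking is an illustrative choice, not the paper's 2-competitive algorithm.

```python
def online_throughput(jobs, T, horizon):
    """jobs: (release, deadline, heat) unit-length jobs. Each step,
    run the pending job with the earliest deadline whose execution
    keeps the temperature at or below the threshold T."""
    temp, done = 0.0, 0
    pending = sorted(jobs, key=lambda j: j[1])  # order by deadline
    for step in range(horizon):
        runnable = [j for j in pending
                    if j[0] <= step < j[1] and (temp + j[2]) / 2 <= T]
        if runnable:
            job = runnable[0]
            pending.remove(job)
            temp = (temp + job[2]) / 2  # execute: heat added, then cooled
            done += 1
        else:
            temp /= 2                   # idle step: pure cooling (h = 0)
    return done

# three jobs (release, deadline, heat) against threshold T = 1
print(online_throughput([(0, 2, 1.8), (0, 3, 0.4), (1, 3, 1.0)],
                        T=1.0, horizon=3))  # all three meet their deadlines
```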
arxiv-2491 | 0801.4268 | Protecting Spreadsheets Against Fraud | <|reference_start|>Protecting Spreadsheets Against Fraud: Previous research on spreadsheet risks has predominantly focussed on errors inadvertently introduced by spreadsheet writers, i.e., on the end-user aspects of spreadsheet development. When analyzing a faulty spreadsheet, one might not be able to determine whether a particular error (fault) has been made by mistake or with fraudulent intentions. However, the fences protecting against fraudulent errors have to be different from those shielding against inadvertent mistakes. Faults resulting from errors committed inadvertently can be prevented ab initio by tools that notify the spreadsheet writer about potential problems, whereas faults that are introduced on purpose have to be discovered by auditors without the cooperation of their originators. Even worse, some spreadsheet writers will do their best to conceal fraudulent parts of their spreadsheets from auditors. In this paper we survey the available means for fraud protection by contrasting approaches suitable for spreadsheets with those known from fraud protection for conventional software.<|reference_end|> | arxiv | @article{mittermeir2008protecting,
title={Protecting Spreadsheets Against Fraud},
author={Roland T. Mittermeir, Markus Clermont, Karin Hodnigg},
journal={Proc. European Spreadsheet Risks Int. Grp. 2005 69-80
ISBN:1-902724-16-X},
year={2008},
archivePrefix={arXiv},
eprint={0801.4268},
primaryClass={cs.CY cs.CR}
} | mittermeir2008protecting |
arxiv-2492 | 0801.4274 | Computational Models of Spreadsheet Development: Basis for Educational Approaches | <|reference_start|>Computational Models of Spreadsheet Development: Basis for Educational Approaches: Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues arising from moving cells, copy-paste operations, or recursion.<|reference_end|> | arxiv | @article{hodnigg2008computational,
title={Computational Models of Spreadsheet Development: Basis for Educational
Approaches},
author={Karin Hodnigg, Markus Clermont, Roland T. Mittermeir},
journal={Proc. European Spreadsheet Risks Int. Grp. 2004 153-168 ISBN 1
902724 94 1},
year={2008},
archivePrefix={arXiv},
eprint={0801.4274},
primaryClass={cs.HC cs.SE}
} | hodnigg2008computational |
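One of the computational issues named above, how relative cell references are rewritten when a formula is copied, can be made concrete in a few lines. This is a generic sketch of the usual relative-addressing rule, not the semantics of any particular spreadsheet system compared in the paper; only single-letter columns and fully relative references are handled.

```python
import re

def shift_refs(formula: str, d_col: int, d_row: int) -> str:
    """Rewrite fully relative A1-style references when a formula is
    copied d_col columns right and d_row rows down; anything anchored
    with $ (absolute or mixed references) is left untouched."""
    def shift(m):
        col, row = m.group(1), int(m.group(2))
        return f"{chr(ord(col) + d_col)}{row + d_row}"
    return re.sub(r"(?<!\$)\b([A-Z])(?!\$)(\d+)", shift, formula)

print(shift_refs("=A1+$B$2", 1, 1))  # '=B2+$B$2'
```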
arxiv-2493 | 0801.4280 | Spreadsheet Debugging | <|reference_start|>Spreadsheet Debugging: Spreadsheet programs, artifacts developed by non-programmers, are used for a variety of important tasks and decisions. Yet a significant proportion of them have severe quality problems. To address this issue, our previous work presented an interval-based testing methodology for spreadsheets. Interval-based testing rests on the observation that spreadsheets are mainly used for numerical computations. It also incorporates ideas from symbolic testing and interval analysis. This paper addresses the issue of efficiently debugging spreadsheets. Based on the interval-based testing methodology, this paper presents a technique for tracing faults in spreadsheet programs. The fault tracing technique proposed uses the dataflow information and cell marks to identify the most influential faulty cell(s) for a given formula cell containing a propagated fault.<|reference_end|> | arxiv | @article{ayalew2008spreadsheet,
title={Spreadsheet Debugging},
author={Yirsaw Ayalew, Roland Mittermeir},
journal={Proc. European Spreadsheet Risks Int. Grp. 2003 67-79 ISBN 1 86166
199 1},
year={2008},
archivePrefix={arXiv},
eprint={0801.4280},
primaryClass={cs.SE cs.PL}
} | ayalew2008spreadsheet |
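The interval-based idea underlying the fault tracing above can be illustrated generically: propagate user-supplied intervals for input cells through a formula's dataflow, and flag a formula cell whose computed interval escapes its expected interval, pointing the trace at that cell and its precedents. A toy sketch with only addition and multiplication, not the paper's actual algorithm.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [a * b for a in (self.lo, self.hi)
                    for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))

# intervals the spreadsheet user states for input cells A1 and B1
A1, B1 = Interval(0, 10), Interval(1, 2)
C1_expected = Interval(0, 12)

computed = A1 + B1 * B1   # dataflow of the formula C1 = A1 + B1*B1
ok = C1_expected.lo <= computed.lo and computed.hi <= C1_expected.hi
print(computed.lo, computed.hi,
      "ok" if ok else "flag C1 and its precedent cells")
```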
arxiv-2494 | 0801.4287 | Movie Recommendation Systems Using An Artificial Immune System | <|reference_start|>Movie Recommendation Systems Using An Artificial Immune System: We apply Artificial Immune System (AIS) technology to Collaborative Filtering (CF) to build a movie recommendation system. Two different AIS affinity measure algorithms, Kendall tau and Weighted Kappa, are used to calculate the correlation coefficients for this movie recommendation system. Our tests indicate that Weighted Kappa is more suitable than Kendall tau for movie recommendation problems.<|reference_end|> | arxiv | @article{chen2008movie,
title={Movie Recommendation Systems Using An Artificial Immune System},
author={Qi Chen and Uwe Aickelin},
journal={6th International Conference in Adaptive Computing in Design and
Manufacture (ACDM 2004), Bristol, UK, 2004},
year={2008},
archivePrefix={arXiv},
eprint={0801.4287},
primaryClass={cs.NE cs.AI}
} | chen2008movie |
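Of the two affinity measures compared above, Kendall tau is the easier to sketch: the affinity between two users is the rank correlation of their ratings over the movies both have rated. A minimal sketch using scipy.stats.kendalltau; the toy ratings are illustrative.

```python
from scipy.stats import kendalltau

def affinity(ratings_a, ratings_b):
    """Kendall tau rank correlation over co-rated movies, in [-1, 1]."""
    common = sorted(set(ratings_a) & set(ratings_b))
    if len(common) < 2:
        return 0.0
    tau, _ = kendalltau([ratings_a[m] for m in common],
                        [ratings_b[m] for m in common])
    return 0.0 if tau != tau else tau  # guard against NaN on all-tied lists

alice = {"Alien": 5, "Brazil": 3, "Casablanca": 4}
bob = {"Alien": 4, "Brazil": 2, "Casablanca": 5, "Dune": 1}
print(affinity(alice, bob))  # 0.333...: weakly concordant tastes
```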
arxiv-2495 | 0801.4292 | Exact Feasibility Tests for Real-Time Scheduling of Periodic Tasks upon Multiprocessor Platforms | <|reference_start|>Exact Feasibility Tests for Real-Time Scheduling of Periodic Tasks upon Multiprocessor Platforms: In this paper we study the global scheduling of periodic task systems upon multiprocessor platforms. We first show two very general properties which are well-known for uniprocessor platforms and which remain valid for multiprocessor platforms: (i) under a few, not overly restrictive assumptions, we show that feasible schedules of periodic task systems are periodic from some point on, with a period equal to the least common multiple of the task periods, and (ii) for the specific case of synchronous periodic task systems, we show that feasible schedules repeat from the origin. We then present our main result: we characterize, for task-level fixed-priority schedulers and for asynchronous constrained- or arbitrary-deadline periodic task models, upper bounds on the first time instant where the schedule repeats. We show that job-level fixed-priority schedulers are predictable upon unrelated multiprocessor platforms. For task-level fixed-priority schedulers, based on the upper bounds and the predictability property, we provide exact feasibility tests for asynchronous constrained- or arbitrary-deadline periodic task sets. Finally, for the job-level fixed-priority EDF scheduler, for which such an upper bound remains unknown, we provide an exact feasibility test as well.<|reference_end|> | arxiv | @article{cucu2008exact,
title={Exact Feasibility Tests for Real-Time Scheduling of Periodic Tasks upon
Multiprocessor Platforms},
author={Liliana Cucu and Joël Goossens},
journal={arXiv preprint arXiv:0801.4292},
year={2008},
archivePrefix={arXiv},
eprint={0801.4292},
primaryClass={cs.OS}
} | cucu2008exact |
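Property (i) above is what makes exact, simulation-based feasibility tests finite: for a synchronous task set, a feasible schedule provably repeats after the least common multiple of the task periods (the hyperperiod), so checking deadlines over that window suffices. A minimal sketch with illustrative periods:

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    """LCM of the task periods: the window after which a feasible
    synchronous schedule repeats, bounding any exact simulation."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods, 1)

print(hyperperiod([4, 6, 10]))  # 60: simulate [0, 60) and check deadlines
```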
arxiv-2496 | 0801.4305 | Risk-Seeking versus Risk-Avoiding Investments in Noisy Periodic Environments | <|reference_start|>Risk-Seeking versus Risk-Avoiding Investments in Noisy Periodic Environments: We study the performance of various agent strategies in an artificial investment scenario. Agents are equipped with a budget, $x(t)$, and at each time step invest a particular fraction, $q(t)$, of their budget. The return on investment (RoI), $r(t)$, is characterized by a periodic function with different types and levels of noise. Risk-avoiding agents choose their fraction $q(t)$ proportional to the expected positive RoI, while risk-seeking agents always choose a maximum value $q_{max}$ if they predict the RoI to be positive ("everything on red"). In addition to these different strategies, agents have different capabilities to predict the future $r(t)$, depending on their internal complexity. Here, we compare 'zero-intelligence' agents using technical analysis (such as moving least squares) with agents using reinforcement learning or genetic algorithms to predict $r(t)$. The performance of agents is measured by their average budget growth after a certain number of time steps. We present results of extensive computer simulations, which show that, for our given artificial environment, (i) the risk-seeking strategy outperforms the risk-avoiding one, and (ii) the genetic algorithm is able to find this optimal strategy itself, and thus outperforms the other prediction approaches considered.<|reference_end|> | arxiv | @article{barrientos2008risk-seeking,
title={Risk-Seeking versus Risk-Avoiding Investments in Noisy Periodic
Environments},
author={J. Emeterio Navarro Barrientos, Frank E. Walter, Frank Schweitzer},
journal={International Journal of Modern Physics C vol. 19, no. 6 (2008)
971-994},
year={2008},
doi={10.1142/S0129183108012662},
archivePrefix={arXiv},
eprint={0801.4305},
primaryClass={q-fin.PM cs.CE physics.soc-ph}
} | barrientos2008risk-seeking |
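The two strategies above can be sketched side by side under two stated assumptions: a multiplicative budget update x(t+1) = x(t)(1 + q(t) r(t)), which the abstract does not spell out, and a predictor that sees the noiseless periodic component of r(t). The noisy-sine RoI and all parameter values are illustrative.

```python
import math
import random

def roi(t, noise=0.2):
    return 0.5 * math.sin(2 * math.pi * t / 50) + random.gauss(0, noise)

def run(strategy, steps=1000, q_max=1.0, seed=1):
    random.seed(seed)
    x = 1.0
    for t in range(steps):
        r = roi(t)
        predicted = 0.5 * math.sin(2 * math.pi * t / 50)  # noiseless part
        if strategy == "risk_seeking":
            q = q_max if predicted > 0 else 0.0   # "everything on red"
        else:
            # risk-avoiding: stake proportional to expected positive RoI
            q = max(0.0, min(q_max, predicted))
        x *= 1 + q * r  # assumed multiplicative budget update
    return x

print(run("risk_seeking"), run("risk_avoiding"))
```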
arxiv-2497 | 0801.4307 | On Affinity Measures for Artificial Immune System Movie Recommenders | <|reference_start|>On Affinity Measures for Artificial Immune System Movie Recommenders: We combine Artificial Immune Systems (AIS) technology with Collaborative Filtering (CF) and use it to build a movie recommendation system. We already know that Artificial Immune Systems work well as movie recommenders from previous work by Cayzer and Aickelin [3, 4, 5]. Here our aim is to investigate the effect of different affinity measure algorithms for the AIS. Two different affinity measures, Kendall's Tau and Weighted Kappa, are used to calculate the correlation coefficients for the movie recommender. We compare the results with those published previously and show that Weighted Kappa is more suitable than others for movie problems. We also show that AIS are generally robust movie recommenders and that, as long as a suitable affinity measure is chosen, results are good.<|reference_end|> | arxiv | @article{aickelin2008on,
title={On Affinity Measures for Artificial Immune System Movie Recommenders},
author={Uwe Aickelin and Qi Chen},
journal={Proceedings of the 5th International Conference on Recent Advances
in Soft Computing (RASC 2004), Nottingham, UK},
year={2008},
archivePrefix={arXiv},
eprint={0801.4307},
primaryClass={cs.NE cs.AI cs.CY}
} | aickelin2008on |
arxiv-2498 | 0801.4312 | Investigating Artificial Immune Systems For Job Shop Rescheduling In Changing Environments | <|reference_start|>Investigating Artificial Immune Systems For Job Shop Rescheduling In Changing Environments: An artificial immune system can be used to generate schedules in changing environments, and its schedules have been proven more robust than those developed using a genetic algorithm. Good schedules can be produced, especially when the number of antigens is increased. However, an increase in the range of the antigens somewhat degrades the fitness of the immune system. In this research, we try to improve the system's results by rescheduling the same problem with the same method, while at the same time maintaining the robustness of the schedules.<|reference_end|> | arxiv | @article{aickelin2008investigating,
title={Investigating Artificial Immune Systems For Job Shop Rescheduling In
Changing Environments},
author={Uwe Aickelin, Edmund Burke and Aniza Din},
journal={6th International Conference in Adaptive Computing in Design and
Manufacture (ACDM 2004), Bristol, UK, 2004},
year={2008},
archivePrefix={arXiv},
eprint={0801.4312},
primaryClass={cs.NE cs.CE}
} | aickelin2008investigating |
arxiv-2499 | 0801.4314 | Artificial Immune Systems (AIS) - A New Paradigm for Heuristic Decision Making | <|reference_start|>Artificial Immune Systems (AIS) - A New Paradigm for Heuristic Decision Making: Over the last few years, more and more heuristic decision making techniques have been inspired by nature, e.g. evolutionary algorithms, ant colony optimisation and simulated annealing. More recently, a novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems (AIS). This immune system inspired technique has already been useful in solving some computational problems. In this keynote, we will very briefly describe the immune system metaphors that are relevant to AIS. We will then give some illustrative real-world problems suitable for AIS use and show a step-by-step algorithm walkthrough. A comparison of AIS to other well-known algorithms and areas for future work will round this keynote off. It should be noted that as AIS is still a young and evolving field, there is not yet a fixed algorithm template and hence actual implementations might differ somewhat from the examples given here.<|reference_end|> | arxiv | @article{aickelin2008artificial,
title={Artificial Immune Systems (AIS) - A New Paradigm for Heuristic Decision
Making},
author={Uwe Aickelin},
journal={Invited Keynote Talk, Annual Operational Research Conference 46,
York, UK, 2004},
year={2008},
archivePrefix={arXiv},
eprint={0801.4314},
primaryClass={cs.NE cs.AI}
} | aickelin2008artificial |
arxiv-2500 | 0801.4355 | TER: A Robot for Remote Ultrasonic Examination: Experimental Evaluations | <|reference_start|>TER: A Robot for Remote Ultrasonic Examination: Experimental Evaluations: This chapter: (i) motivates the clinical use of robotic tele-echography; (ii) introduces the TER system; and (iii) describes the technical and clinical evaluations performed with TER.<|reference_end|> | arxiv | @article{banihachemi2008ter:,
title={TER: A Robot for Remote Ultrasonic Examination: Experimental Evaluations},
author={Jean-Jacques Banihachemi (TIMC), Eric Boidard (TIMC), Jean-Luc Bosson
(TIMC, CHU-Grenoble CIC), Luc Bressollette, Ivan Bricault (TIMC, CHU-Grenoble
radio), Philippe Cinquin (TIMC), Gilbert Ferretti (CHU-Grenoble radio), Maud
Marchal (TIMC), Thomas Martinelli (CHU-Grenoble radio), Alexandre
Moreau-Gaudry (CHU-Grenoble CIC), Franck Pelissier, Christian Roux, Dominique
Saragaglia, Pierre Thorel, Jocelyne Troccaz (TIMC), Adriana Vilchis (TIMC)},
journal={Telesurgery, Springer Verlag (Ed.) (2008) 91-99},
year={2008},
archivePrefix={arXiv},
eprint={0801.4355},
primaryClass={cs.OH cs.RO}
} | banihachemi2008ter: |