corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-675701 | cs/0702145 | Designing a Resource Broker for Heterogeneous Grids | <|reference_start|>Designing a Resource Broker for Heterogeneous Grids: Grids provide uniform access to aggregations of heterogeneous resources and services such as computers, networks and storage owned by multiple organizations. However, such a dynamic environment poses many challenges for application composition and deployment. In this paper, we present the design of the Gridbus Grid resource broker that allows users to create applications and specify different objectives through different interfaces without having to deal with the complexity of Grid infrastructure. We present the unique requirements that motivated our design and discuss how these provide flexibility in extending the functionality of the broker to support different low-level middlewares and user interfaces. We evaluate the broker with different job profiles and Grid middleware and conclude with the lessons learnt from our development experience.<|reference_end|> | arxiv | @article{venugopal2007designing,
title={Designing a Resource Broker for Heterogeneous Grids},
author={Srikumar Venugopal, Krishna Nadiminti, Hussein Gibbins and Rajkumar
Buyya},
journal={arXiv preprint arXiv:cs/0702145},
year={2007},
number={GRIDS-TR-2007-2},
archivePrefix={arXiv},
eprint={cs/0702145},
primaryClass={cs.DC cs.SE}
} | venugopal2007designing |
arxiv-675702 | cs/0702146 | A Local Tree Structure is NOT Sufficient for the Local Optimality of Message-Passing Decoding in Low Density Parity Check Codes | <|reference_start|>A Local Tree Structure is NOT Sufficient for the Local Optimality of Message-Passing Decoding in Low Density Parity Check Codes: We address the problem: `Is a local tree structure sufficient for the local optimality of the message-passing algorithm in low density parity check codes?' It is shown that the answer is negative. Using this observation, we pinpoint a flaw in the proof of Theorem 1 in the paper `The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding' by Thomas J. Richardson and R\"udiger L. Urbanke \cite{RUCapacity}. We further provide a new proof of that theorem based on a different argument.<|reference_end|> | arxiv | @article{xu2007a,
title={A Local Tree Structure is NOT Sufficient for the Local Optimality of
Message-Passing Decoding in Low Density Parity Check Codes},
author={Weiyu Xu},
journal={arXiv preprint arXiv:cs/0702146},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702146},
primaryClass={cs.IT math.IT}
} | xu2007a |
arxiv-675703 | cs/0702147 | On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes | <|reference_start|>On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: Since the classical work of Berlekamp, McEliece and van Tilborg, it is well known that the problem of exact maximum-likelihood (ML) decoding of general linear codes is NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good error correcting codes--expander codes, a special case of low density parity check (LDPC) codes--over binary symmetric channels (BSCs) is possible with an expected polynomial complexity. More precisely, for any bit-flipping probability, $p$, in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes, whose error probability decays exponentially in coding length $n$, for which ML decoding is feasible in expected polynomial time. Furthermore, as $p$ approaches zero, this rate region approaches the channel capacity region. The result is based on the existence of polynomial-time suboptimal decoding algorithms that provide an ML certificate and the ability to compute the probability that the suboptimal decoder yields the ML solution. One such ML certificate decoder is the LP decoder of Feldman; we also propose a more efficient $O(n^2)$ algorithm based on the work of Sipser and Spielman and the Ford-Fulkerson algorithm. The results can be extended to AWGN channels and suggest that it may be feasible to eliminate the error floor phenomenon associated with message-passing decoding of LDPC codes in the high SNR regime. Finally, we observe that the argument of Berlekamp, McEliece and van Tilborg can be used to show that ML decoding of the considered class of codes constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard; thus giving an interesting contrast between the worst-case and expected complexities.<|reference_end|> | arxiv | @article{xu2007on,
title={On the Complexity of Exact Maximum-Likelihood Decoding for
Asymptotically Good Low Density Parity Check Codes},
author={Weiyu Xu, Babak Hassibi},
journal={arXiv preprint arXiv:cs/0702147},
year={2007},
doi={10.1109/ITW.2007.4313065},
archivePrefix={arXiv},
eprint={cs/0702147},
primaryClass={cs.IT math.IT}
} | xu2007on |
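The $O(n^2)$ decoder mentioned in the abstract above builds on Sipser-Spielman style bit flipping. For orientation, here is a minimal sketch of the generic sequential bit-flipping decoder for a code with parity-check matrix `H`; it is the textbook routine, not the authors' ML-certificate procedure (which additionally involves a Ford-Fulkerson based computation), and the variable names are illustrative.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=10_000):
    """Sequential bit flipping over GF(2): while some bit sits in more
    unsatisfied than satisfied parity checks, flip one such bit.
    H: (m, n) 0/1 parity-check matrix; y: length-n 0/1 received word.
    Returns (codeword estimate, True if all checks are satisfied)."""
    x = y.copy()
    deg = H.sum(axis=0)                      # number of checks per bit
    for _ in range(max_iters):
        unsat_checks = H @ x % 2             # 1 where a parity check fails
        unsat_per_bit = H.T @ unsat_checks   # failing checks touching each bit
        candidates = np.flatnonzero(2 * unsat_per_bit > deg)
        if candidates.size == 0:
            break
        x[candidates[0]] ^= 1                # one flip per round (sequential)
    return x, not (H @ x % 2).any()
```

On a sufficiently good expander, each flip strictly reduces the number of unsatisfied checks, which is what bounds the running time.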
arxiv-675704 | cs/0702148 | Linking Microscopic and Macroscopic Models for Evolution: Markov Chain Network Training and Conservation Law Approximations | <|reference_start|>Linking Microscopic and Macroscopic Models for Evolution: Markov Chain Network Training and Conservation Law Approximations: In this paper, a general framework for the analysis of a connection between the training of artificial neural networks via the dynamics of Markov chains and the approximation of conservation law equations is proposed. This framework allows us to demonstrate an intrinsic link between microscopic and macroscopic models for evolution via the concept of perturbed generalized dynamic systems. The main result is exemplified with a number of illustrative examples where efficient numerical approximations follow directly from network-based computational models, viewed here as Markov chain approximations. Finally, stability and consistency conditions of such computational models are discussed.<|reference_end|> | arxiv | @article{melnik2007linking,
title={Linking Microscopic and Macroscopic Models for Evolution: Markov Chain
Network Training and Conservation Law Approximations},
author={Roderick V.N. Melnik},
journal={Markov Chain network training and conservation law approximations:
Linking microscopic and macroscopic models for evolution, Melnik, R.V.N.,
Applied Mathematics and Computation, 199 (1), 315--333, 2008},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702148},
primaryClass={cs.CE cs.IT cs.NA cs.NE math.IT}
} | melnik2007linking |
arxiv-675705 | cs/0702149 | Coupling Control and Human-Centered Automation in Mathematical Models of Complex Systems | <|reference_start|>Coupling Control and Human-Centered Automation in Mathematical Models of Complex Systems: In this paper we analyze mathematically how human factors can be effectively incorporated into the analysis and control of complex systems. As an example, we focus our discussion around one of the key problems in the Intelligent Transportation Systems (ITS) theory and practice, the problem of speed control, considered here as a decision making process with limited information available. The problem is cast mathematically in the general framework of control problems and is treated in the context of dynamically changing environments where control is coupled to human-centered automation. Since in this case control might not be limited to a small number of control settings, as it is often assumed in the control literature, serious difficulties arise in the solution of this problem. We demonstrate that the problem can be reduced to a set of Hamilton-Jacobi-Bellman equations where human factors are incorporated via estimations of the system Hamiltonian. In the ITS context, these estimations can be obtained with the use of on-board equipment like sensors/receivers/actuators, in-vehicle communication devices, etc. The proposed methodology provides a way to integrate human factor into the solving process of the models for other complex dynamic systems.<|reference_end|> | arxiv | @article{melnik2007coupling,
title={Coupling Control and Human-Centered Automation in Mathematical Models of
Complex Systems},
author={Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702149},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702149},
primaryClass={cs.CE cs.AI cs.HC cs.IT math.IT}
} | melnik2007coupling |
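For readers unfamiliar with the reduction invoked in this abstract: a Hamilton-Jacobi-Bellman equation for a value function $V$ has the standard form below, written here in generic notation that is assumed rather than taken from the paper. In the authors' setting, human factors enter through estimates of the Hamiltonian, i.e. of the bracketed minimum.

```latex
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u \in U} \Big\{ L(x,u,t) + \nabla_x V(x,t) \cdot f(x,u,t) \Big\},
\qquad V(x,T) = \Phi(x),
```

where $\dot{x} = f(x,u,t)$ is the controlled dynamics (e.g. vehicle speed in the ITS example), $L$ is the running cost, and $\Phi$ is the terminal cost.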
arxiv-675706 | cs/0702150 | A note on rate-distortion functions for nonstationary Gaussian autoregressive processes | <|reference_start|>A note on rate-distortion functions for nonstationary Gaussian autoregressive processes: Source coding theorems and Shannon rate-distortion functions were studied for the discrete-time Wiener process by Berger and generalized to nonstationary Gaussian autoregressive processes by Gray and by Hashimoto and Arimoto. Hashimoto and Arimoto provided an example apparently contradicting the methods used in Gray, implied that Gray's rate-distortion evaluation was not correct in the nonstationary case, and derived a new formula that agreed with previous results for the stationary case and held in the nonstationary case. In this correspondence it is shown that the rate-distortion formulas of Gray and Hashimoto and Arimoto are in fact consistent and that the example of Hashimoto and Arimoto does not form a counterexample to the methods or results of the earlier paper. Their results do provide an alternative, but equivalent, formula for the rate-distortion function in the nonstationary case and they provide a concrete example that the classic Kolmogorov formula differs from the autoregressive formula when the autoregressive source is not stationary. Some observations are offered on the different versions of the Toeplitz asymptotic eigenvalue distribution theorem used in the two papers to emphasize how a slight modification of the classic theorem avoids the problems with certain singularities.<|reference_end|> | arxiv | @article{gray2007a,
title={A note on rate-distortion functions for nonstationary Gaussian
autoregressive processes},
author={Robert M. Gray and Takeshi Hashimoto},
journal={arXiv preprint arXiv:cs/0702150},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702150},
primaryClass={cs.IT math.IT}
} | gray2007a |
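The "classic Kolmogorov formula" contrasted here is, in its standard stationary form, the reverse water-filling parametrization of the rate-distortion function of a Gaussian source with power spectral density $S(\omega)$ (quoted from the standard literature for reference, not from this correspondence):

```latex
R(D_\theta) = \frac{1}{4\pi} \int_{-\pi}^{\pi}
  \max\!\Big(0,\ \log \frac{S(\omega)}{\theta}\Big)\, d\omega,
\qquad
D_\theta = \frac{1}{2\pi} \int_{-\pi}^{\pi} \min\big(\theta,\ S(\omega)\big)\, d\omega .
```

The correspondence's point is that this formula and the autoregressive formula agree for stationary sources but part ways when the autoregressive source is nonstationary.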
arxiv-675707 | cs/0702151 | Succinct Sampling on Streams | <|reference_start|>Succinct Sampling on Streams: A streaming model is one where data items arrive over a long period of time, either one item at a time or in bursts. Typical tasks include computing various statistics over a sliding window of some fixed time-horizon. What makes the streaming model interesting is that as time progresses, old items expire and new ones arrive. One of the simplest and central tasks in this model is sampling. That is, the task of maintaining up to $k$ uniformly distributed items from a current time-window as old items expire and new ones arrive. We call sampling algorithms {\bf succinct} if they use provably optimal (up to constant factors) {\bf worst-case} memory to maintain $k$ items (either with or without replacement). We stress that in many applications structures that have {\em expected} succinct representation as time progresses are not sufficient, as small probability events eventually happen with probability 1. Thus, in this paper we ask the following question: are Succinct Sampling on Streams (or $S^3$-algorithms) possible, and if so for what models? Perhaps somewhat surprisingly, we show that $S^3$-algorithms are possible for {\em all} variants of the problem mentioned above, i.e. both with and without replacement and both for one-at-a-time and bursty arrival models. Finally, we use $S^3$ algorithms to solve various problems in the sliding windows model, including frequency moments, counting triangles, entropy and density estimations. For these problems we present the \emph{first} solutions with provable worst-case memory guarantees.<|reference_end|> | arxiv | @article{braverman2007succinct,
title={Succinct Sampling on Streams},
author={Vladimir Braverman, Rafail Ostrovsky, Carlo Zaniolo},
journal={arXiv preprint arXiv:cs/0702151},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702151},
primaryClass={cs.DS}
} | braverman2007succinct |
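For orientation, the classical primitive underlying this line of work is reservoir sampling, which maintains $k$ uniform samples without replacement from an insert-only stream in $O(k)$ words. A minimal sketch follows; note that it does not handle the expiring items of the sliding-window model, which is exactly the difficulty the $S^3$-algorithms of this paper address.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep k uniform samples without replacement from an insert-only
    stream, using O(k) words (classic reservoir sampling)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)         # fill the reservoir first
        else:
            j = rng.randrange(i + 1)    # item survives with prob. k/(i+1)
            if j < k:
                sample[j] = item
    return sample
```

For example, `reservoir_sample(range(10**6), 5)` returns 5 items, with every size-5 subset equally likely.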
arxiv-675708 | cs/0702152 | A Simplified Suspension Calculus and its Relationship to Other Explicit Substitution Calculi | <|reference_start|>A Simplified Suspension Calculus and its Relationship to Other Explicit Substitution Calculi: This paper concerns the explicit treatment of substitutions in the lambda calculus. One of its contributions is the simplification and rationalization of the suspension calculus that embodies such a treatment. The earlier version of this calculus provides a cumbersome encoding of substitution composition, an operation that is important to the efficient realization of reduction. This encoding is simplified here, resulting in a treatment that is easy to use directly in applications. The rationalization consists of the elimination of a practically inconsequential flexibility in the unravelling of substitutions that has the inadvertent side effect of losing contextual information in terms; the modified calculus now has a structure that naturally supports logical analyses, such as ones related to the assignment of types, over lambda terms. The overall calculus is shown to have pleasing theoretical properties such as a strongly terminating sub-calculus for substitution and confluence even in the presence of term meta variables that are accorded a grafting interpretation. Another contribution of the paper is the identification of a broad set of properties that are desirable for explicit substitution calculi to support and a classification of a variety of proposed systems based on these. The suspension calculus is used as a tool in this study. In particular, mappings are described between it and the other calculi towards understanding the characteristics of the latter.<|reference_end|> | arxiv | @article{gacek2007a,
title={A Simplified Suspension Calculus and its Relationship to Other Explicit
Substitution Calculi},
author={Andrew Gacek and Gopalan Nadathur},
journal={arXiv preprint arXiv:cs/0702152},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702152},
primaryClass={cs.LO}
} | gacek2007a |
arxiv-675709 | cs/0702153 | Games on the Sperner Triangle | <|reference_start|>Games on the Sperner Triangle: We create a new two-player game on the Sperner Triangle based on Sperner's lemma. Our game has simple rules and several desirable properties. First, the game is always certain to have a winner. Second, like many other interesting games such as Hex and Geography, we prove that deciding whether one can win our game is a PSPACE-complete problem. Third, there is an elegant balance in the game such that neither the first nor the second player always has a decisive advantage. We provide a web-based version of the game, playable at: http://cs-people.bu.edu/paithan/spernerGame/ . In addition we propose other games, also based on fixed-point theorems.<|reference_end|> | arxiv | @article{burke2007games,
title={Games on the Sperner Triangle},
author={Kyle Burke and Shang-Hua Teng},
journal={arXiv preprint arXiv:cs/0702153},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702153},
primaryClass={cs.GT cs.CC}
} | burke2007games |
arxiv-675710 | cs/0702154 | On the Capacity of the Single Source Multiple Relay Single Destination Mesh Network | <|reference_start|>On the Capacity of the Single Source Multiple Relay Single Destination Mesh Network: In this paper, we derive the information theoretic capacity of a special class of mesh networks. A mesh network is a heterogeneous wireless network in which the transmission among power limited nodes is assisted by powerful relays, which use the same wireless medium. We investigate the mesh network when there is one source, one destination, and multiple relays, which we call the single source multiple relay single destination (SSMRSD) mesh network. We derive the asymptotic capacity of the SSMRSD mesh network when the relay powers grow to infinity. Our approach is as follows. We first look at an upper bound on the information theoretic capacity of these networks in a Gaussian setting. We then show that this bound is achievable asymptotically using the compress-and-forward strategy for the multiple relay channel. We also perform numerical computations for the case when the relays have finite powers. We observe that even when the relay power is only a few times larger than the source power, the compress-and-forward rate gets close to the capacity. The results indicate the value of cooperation in wireless mesh networks. The capacity characterization quantifies how the relays can cooperate, using the compress-and-forward strategy, to either conserve node energy or to increase transmission rate.<|reference_end|> | arxiv | @article{ong2007on,
title={On the Capacity of the Single Source Multiple Relay Single Destination
Mesh Network},
author={Lawrence Ong and Mehul Motani},
journal={Elsevier Ad Hoc Networks: Special Issue on Wireless Mesh Networks,
Vol. 5, No. 6, pp. 786-800, Aug. 2007.},
year={2007},
doi={10.1016/j.adhoc.2006.12.006},
archivePrefix={arXiv},
eprint={cs/0702154},
primaryClass={cs.IT cs.NI math.IT}
} | ong2007on |
arxiv-675711 | cs/0702155 | On a characterization of cellular automata in tilings of the hyperbolic plane | <|reference_start|>On a characterization of cellular automata in tilings of the hyperbolic plane: In this paper, we look at the extension of Hedlund's characterization of cellular automata to the case of cellular automata in the hyperbolic plane. This requires an additional condition. The new theorem is proved with full details in the case of the pentagrid and in the case of the ternary heptagrid, with enough indications to show that it also holds on the grids $\{p,q\}$ of the hyperbolic plane.<|reference_end|> | arxiv | @article{margenstern2007on,
title={On a characterization of cellular automata in tilings of the hyperbolic
plane},
author={Maurice Margenstern},
journal={International Journal of Foundations of Computer Science, volume
19,(5), (2008), 1235-1257},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702155},
primaryClass={cs.DM cs.CG}
} | margenstern2007on |
arxiv-675712 | cs/0702156 | Analysis of Steiner subtrees of Random Trees for Traceroute Algorithms | <|reference_start|>Analysis of Steiner subtrees of Random Trees for Traceroute Algorithms: We consider in this paper the problem of discovering, via a traceroute algorithm, the topology of a network, whose graph is spanned by an infinite branching process. A subset of nodes is selected according to some criterion. As a measure of efficiency of the algorithm, the Steiner distance of the selected nodes, i.e. the size of the spanning sub-tree of these nodes, is investigated. For the selection of nodes, two criteria are considered: a node is randomly selected with a probability that is either independent of the depth of the node (uniform model) or, in the depth-biased model, exponentially decaying with respect to its depth. The limiting behavior of the size of the discovered subtree is investigated for both models.<|reference_end|> | arxiv | @article{guillemin2007analysis,
title={Analysis of Steiner subtrees of Random Trees for Traceroute Algorithms},
author={Fabrice Guillemin, Philippe Robert (INRIA Rocquencourt)},
journal={Random Structures and Algorithms, 35(2):194-215, September 2009},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702156},
primaryClass={cs.NI cs.DS}
} | guillemin2007analysis |
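In the traceroute setting of this paper, the subtree discovered from the source is the union of the root-to-node paths of the selected nodes, so its size is easy to measure on a concrete tree. A small sketch over parent-pointer trees (assuming, as a convention, that the root itself is counted):

```python
def discovered_subtree_size(parent, selected):
    """Number of nodes in the union of root-to-node paths for the
    selected nodes of a rooted tree -- the subtree a traceroute-style
    exploration from the root reveals. `parent` maps each node to its
    parent, with parent[root] = None."""
    seen = set()
    for v in selected:
        while v is not None and v not in seen:
            seen.add(v)                 # walk up until we merge with a
            v = parent[v]               # previously discovered path
    return len(seen)
```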
arxiv-675713 | cs/0702157 | In Search of Simplicity: A Self-Organizing Multi-Source Multicast Overlay | <|reference_start|>In Search of Simplicity: A Self-Organizing Multi-Source Multicast Overlay: Multicast communication primitives have broad utility as building blocks for distributed applications. The challenge is to create and maintain the distributed structures that support these primitives while accounting for volatile end nodes and variable network characteristics. Most solutions proposed to date rely on complex algorithms or global information, thus limiting the scale of deployments and acceptance outside the academic realm. This article introduces a low-complexity, self-organizing solution for maintaining multicast trees, that we refer to as UMM (Unstructured Multi-source Multicast). UMM uses traditional distributed systems techniques: layering, soft-state, and passive data collection to adapt to the dynamics of the physical network and maintain data dissemination trees. The result is a simple, adaptive system with lower overheads than more complex alternatives. We have implemented UMM and evaluated it on a 100-node PlanetLab testbed and on up to 1024-node emulated ModelNet networks. Extensive experimental evaluations demonstrate UMM's low overhead, efficient network usage compared to alternative solutions, and ability to quickly adapt to network changes and to recover from failures.<|reference_end|> | arxiv | @article{ripeanu2007in,
title={In Search of Simplicity: A Self-Organizing Multi-Source Multicast
Overlay},
author={Matei Ripeanu, Adriana Iamnitchi, Ian Foster, Anne Rogers},
journal={arXiv preprint arXiv:cs/0702157},
year={2007},
number={DSL-TR-2007-02},
archivePrefix={arXiv},
eprint={cs/0702157},
primaryClass={cs.DC cs.NI cs.PF}
} | ripeanu2007in |
arxiv-675714 | cs/0702158 | Joint Design and Separation Principle for Opportunistic Spectrum Access in the Presence of Sensing Errors | <|reference_start|>Joint Design and Separation Principle for Opportunistic Spectrum Access in the Presence of Sensing Errors: We address the design of opportunistic spectrum access (OSA) strategies that allow secondary users to independently search for and exploit instantaneous spectrum availability. Integrated in the joint design are three basic components: a spectrum sensor that identifies spectrum opportunities, a sensing strategy that determines which channels in the spectrum to sense, and an access strategy that decides whether to access based on imperfect sensing outcomes. We formulate the joint PHY-MAC design of OSA as a constrained partially observable Markov decision process (POMDP). Constrained POMDPs generally require randomized policies to achieve optimality, which are often intractable. By exploiting the rich structure of the underlying problem, we establish a separation principle for the joint design of OSA. This separation principle reveals the optimality of myopic policies for the design of the spectrum sensor and the access strategy, leading to closed-form optimal solutions. Furthermore, decoupling the design of the sensing strategy from that of the spectrum sensor and the access strategy, the separation principle reduces the constrained POMDP to an unconstrained one, which admits deterministic optimal policies. Numerical examples are provided to study the design tradeoffs, the interaction between the spectrum sensor and the sensing and access strategies, and the robustness of the ensuing design to model mismatch.<|reference_end|> | arxiv | @article{chen2007joint,
title={Joint Design and Separation Principle for Opportunistic Spectrum Access
in the Presence of Sensing Errors},
author={Yunxia Chen, Qing Zhao and Ananthram Swami},
journal={arXiv preprint arXiv:cs/0702158},
year={2007},
doi={10.1109/TIT.2008.920248},
archivePrefix={arXiv},
eprint={cs/0702158},
primaryClass={cs.NI}
} | chen2007joint |
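The POMDP formulation tracks, for each channel, a belief that the channel is idle, corrected by imperfect sensing outcomes and propagated through the channel's occupancy Markov chain. A generic single-channel sketch of such an update (the parameter names are illustrative, not the paper's notation):

```python
def next_belief(omega, said_idle, alpha, beta, eps, delta):
    """One-step belief update for opportunistic spectrum access.
    omega: prior P(channel idle in this slot).
    Sensor errors: P(say busy | idle) = eps (false alarm),
                   P(say idle | busy) = delta (miss detection).
    Occupancy chain: P(idle -> idle) = alpha, P(busy -> idle) = beta.
    Returns P(channel idle in the next slot)."""
    if said_idle:   # Bayes correction with the sensing outcome
        post = omega * (1 - eps) / (omega * (1 - eps) + (1 - omega) * delta)
    else:
        post = omega * eps / (omega * eps + (1 - omega) * (1 - delta))
    return post * alpha + (1 - post) * beta   # Markov prediction
```

The separation principle of the paper then says, roughly, that the spectrum sensor and the access decision can be chosen myopically on top of such beliefs, leaving only the sensing strategy to an unconstrained POMDP.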
arxiv-675715 | cs/0702159 | Perfect Hashing for Data Management Applications | <|reference_start|>Perfect Hashing for Data Management Applications: Perfect hash functions can potentially be used to compress data in connection with a variety of data management tasks. Though there has been considerable work on how to construct good perfect hash functions, there is a gap between theory and practice among all previous methods on minimal perfect hashing. On one side, there are good theoretical results without experimentally proven practicality for large key sets. On the other side, there are algorithms with theoretically analyzed time and space usage that assume that truly random hash functions are available for free, which is an unrealistic assumption. In this paper we attempt to bridge this gap between theory and practice, using a number of techniques from the literature to obtain a novel scheme that is theoretically well-understood and at the same time achieves an order-of-magnitude increase in performance compared to previous ``practical'' methods. This improvement comes from a combination of a novel, theoretically optimal perfect hashing scheme that greatly simplifies previous methods, and the fact that our algorithm is designed to make good use of the memory hierarchy. We demonstrate the scalability of our algorithm by considering a set of over one billion URLs from the World Wide Web of average length 64, for which we construct a minimal perfect hash function on a commodity PC in a little more than 1 hour. Our scheme produces minimal perfect hash functions using slightly more than 3 bits per key. For perfect hash functions in the range $\{0,...,2n-1\}$ the space usage drops to just over 2 bits per key (i.e., one bit more than optimal for representing the key). This is significantly below what has been achieved previously for very large values of $n$.<|reference_end|> | arxiv | @article{botelho2007perfect,
title={Perfect Hashing for Data Management Applications},
author={Fabiano C. Botelho, Rasmus Pagh, Nivio Ziviani},
journal={arXiv preprint arXiv:cs/0702159},
year={2007},
number={RT.DCC.002/2007},
archivePrefix={arXiv},
eprint={cs/0702159},
primaryClass={cs.DS cs.DB}
} | botelho2007perfect |
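The paper's algorithm is not reproduced here, but the flavor of hash-and-peel perfect hashing in this family (the MWHC/BDZ line) can be sketched: map each key to three vertices of a hypergraph, peel the hypergraph, then assign a 2-bit value per vertex so that every key selects its own private vertex. The sketch below is didactic, with made-up hash seeding; it yields a perfect (not minimal) hash into roughly $1.23n$ slots, and minimality over $\{0,\dots,n-1\}$ would additionally require a rank structure over the used slots.

```python
import hashlib

def _h(key, seed, r):
    """Hash a string key into [0, r); a stand-in hash family."""
    d = hashlib.blake2b(key.encode(), salt=seed.to_bytes(16, 'little')).digest()
    return int.from_bytes(d[:8], 'little') % r

def build_phf(keys, gamma=1.23, max_tries=100):
    """Perfect hash of `keys` into [0, 3r), 3r ~ gamma * n, by
    3-hypergraph peeling with a value in {0,1,2} per vertex."""
    n = len(keys)
    r = max(int(gamma * n / 3) + 1, 2)
    for attempt in range(max_tries):
        base = 3 * attempt
        # Three disjoint vertex ranges keep the 3 vertices per key distinct.
        edges = [tuple(j * r + _h(k, base + j, r) for j in range(3))
                 for k in keys]
        incident = {}
        for idx, e in enumerate(edges):
            for v in e:
                incident.setdefault(v, set()).add(idx)
        stack = []
        queue = [v for v, s in incident.items() if len(s) == 1]
        while queue:
            v = queue.pop()
            if len(incident[v]) != 1:
                continue
            (idx,) = incident[v]
            stack.append((idx, v))          # v is edge idx's "free" vertex
            for u in edges[idx]:
                incident[u].discard(idx)
                if len(incident[u]) == 1:
                    queue.append(u)
        if len(stack) < n:
            continue                        # peeling got stuck: retry seeds
        g = [0] * (3 * r)                   # 2 bits per vertex in practice
        for idx, v in reversed(stack):      # reverse peel order
            e = edges[idx]
            g[v] = (e.index(v) - sum(g[u] for u in e if u != v)) % 3
        def phf(key, g=g, base=base, r=r):
            e = tuple(j * r + _h(key, base + j, r) for j in range(3))
            return e[(g[e[0]] + g[e[1]] + g[e[2]]) % 3]
        return phf
    raise RuntimeError("peeling kept failing; raise gamma or max_tries")
```

Each key's slot is the vertex that was its private degree-1 vertex during peeling, which is what makes the function collision-free.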
arxiv-675716 | cs/0702160 | A Quantifier-Free String Theory for ALOGTIME Reasoning | <|reference_start|>A Quantifier-Free String Theory for ALOGTIME Reasoning: The main contribution of this work is the definition of a quantifier-free string theory T_1 suitable for formalizing ALOGTIME reasoning. After describing L_1 -- a new, simple, algebraic characterization of the complexity class ALOGTIME based on strings instead of numbers -- the theory T_1 is defined (based on L_1), and a detailed formal development of T_1 is given. Then, theorems of T_1 are shown to translate into families of propositional tautologies that have uniform polysize Frege proofs, T_1 is shown to prove the soundness of a particular Frege system F, and F is shown to provably p-simulate any proof system whose soundness can be proved in T_1. Finally, T_1 is compared with other theories for ALOGTIME reasoning in the literature. To our knowledge, this is the first formal theory for ALOGTIME reasoning whose basic objects are strings instead of numbers, and the first quantifier-free theory formalizing ALOGTIME reasoning in which a direct proof of the soundness of some Frege system has been given (in the case of first-order theories, such a proof was first given by Arai for his theory AID). Also, the polysize Frege proofs we give for the propositional translations of theorems of T_1 are considerably simpler than those for other theories, and so is our proof of the soundness of a particular F-system in T_1. Together with the simplicity of T_1's recursion schemes, axioms, and rules these facts suggest that T_1 is one of the most natural theories available for ALOGTIME reasoning.<|reference_end|> | arxiv | @article{pitt2007a,
title={A Quantifier-Free String Theory for ALOGTIME Reasoning},
author={Fran\c{c}ois Pitt},
journal={arXiv preprint arXiv:cs/0702160},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702160},
primaryClass={cs.CC}
} | pitt2007a |
arxiv-675717 | cs/0702161 | Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions | <|reference_start|>Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions: An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly-secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order marginals of the covertext and stegotext distributions. Furthermore, no loss occurs if the covertext distribution is uniform and the distortion metric is cyclically symmetric; steganographic capacity is then achieved by randomized linear codes. Our framework may also be useful for developing computationally secure steganographic systems that have near-optimal communication performance.<|reference_end|> | arxiv | @article{wang2007perfectly,
title={Perfectly Secure Steganography: Capacity, Error Exponents, and Code
Constructions},
author={Ying Wang, Pierre Moulin},
journal={arXiv preprint arXiv:cs/0702161},
year={2007},
doi={10.1109/TIT.2008.921684},
archivePrefix={arXiv},
eprint={cs/0702161},
primaryClass={cs.IT cs.CR math.IT}
} | wang2007perfectly |
arxiv-675718 | cs/0702162 | Distributed Power Allocation with Rate Constraints in Gaussian Parallel Interference Channels | <|reference_start|>Distributed Power Allocation with Rate Constraints in Gaussian Parallel Interference Channels: This paper considers the minimization of transmit power in Gaussian parallel interference channels, subject to a rate constraint for each user. To derive decentralized solutions that do not require any cooperation among the users, we formulate this power control problem as a (generalized) Nash equilibrium game. We obtain sufficient conditions that guarantee the existence and nonemptiness of the solution set to our problem. Then, to compute the solutions of the game, we propose two distributed algorithms based on the single user waterfilling solution: The \emph{sequential} and the \emph{simultaneous} iterative waterfilling algorithms, wherein the users update their own strategies sequentially and simultaneously, respectively. We derive a unified set of sufficient conditions that guarantee the uniqueness of the solution and global convergence of both algorithms. Our results are applicable to all practical distributed multipoint-to-multipoint interference systems, either wired or wireless, where a quality of service in terms of information rate must be guaranteed for each link.<|reference_end|> | arxiv | @article{pang2007distributed,
title={Distributed Power Allocation with Rate Constraints in Gaussian Parallel
Interference Channels},
author={Jong-Shi Pang, Gesualdo Scutari, Francisco Facchinei, and Chaoxiong
Wang},
journal={arXiv preprint arXiv:cs/0702162},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702162},
primaryClass={cs.IT cs.GT math.IT}
} | pang2007distributed |
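The single-user best response that both proposed algorithms iterate is a waterfilling solution: to meet a rate target with minimum power on parallel Gaussian subchannels, the KKT conditions give $p_k = (\mu - n_k)^+$, with the water level $\mu$ raised until the rate constraint holds with equality, where $n_k$ is the interference-plus-noise on subchannel $k$ normalized by the user's own channel gain. A sketch of this best response and of the sequential iteration, in simplified notation that is not the paper's:

```python
import numpy as np

def min_power_waterfill(rate_target, n_eff):
    """Minimize sum(p) s.t. sum_k log2(1 + p_k / n_eff[k]) >= rate_target.
    Solution: p_k = max(mu - n_eff[k], 0); find mu by bisection."""
    def rate(mu):
        return np.log2(1.0 + np.maximum(mu - n_eff, 0.0) / n_eff).sum()
    lo, hi = n_eff.min(), 2.0 * n_eff.max()
    while rate(hi) < rate_target:           # grow until feasible
        hi *= 2.0
    for _ in range(80):                     # bisect the water level
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rate(mid) < rate_target else (lo, mid)
    return np.maximum(hi - n_eff, 0.0)

def sequential_iwfa(G, noise, rates, n_iters=100):
    """Sequential iterative waterfilling: users take turns playing the
    min-power best response, treating the others' signals as noise.
    G[i, j, k]: gain from transmitter j to receiver i on subchannel k."""
    N, K = noise.shape
    p = np.zeros((N, K))
    for _ in range(n_iters):
        for i in range(N):
            interf = noise[i] + sum(G[i, j] * p[j] for j in range(N) if j != i)
            p[i] = min_power_waterfill(rates[i], interf / G[i, i])
    return p
```

Under the paper's sufficient conditions this iteration converges to the unique generalized Nash equilibrium; the simultaneous variant simply updates all users from the same previous iterate.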
arxiv-675719 | cs/0702163 | First Passage Time for Multivariate Jump-diffusion Stochastic Models With Applications in Finance | <|reference_start|>First Passage Time for Multivariate Jump-diffusion Stochastic Models With Applications in Finance: The ``first passage time'' (FPT) problem is an important problem with a wide range of applications in mathematics, physics, biology and finance. Mathematically, such a problem can be reduced to estimating the probability that a (stochastic) process first reaches a critical level or threshold. While in other areas of applications the FPT problem can often be solved analytically, in finance we usually have to resort to the application of numerical procedures, in particular when we deal with jump-diffusion stochastic processes (JDP). In this paper, we develop a Monte-Carlo-based methodology for the solution of the FPT problem in the context of a multivariate jump-diffusion stochastic process. The developed methodology is tested with different parameters; the simulation results indicate that it is much more efficient than the conventional Monte Carlo method. It is an efficient tool for further practical applications, such as analyzing default correlation and predicting barrier options in finance.<|reference_end|> | arxiv | @article{zhang2007first,
title={First Passage Time for Multivariate Jump-diffusion Stochastic Models
With Applications in Finance},
author={Di Zhang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702163},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702163},
primaryClass={cs.CE cs.NA}
} | zhang2007first |
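As a baseline for what this line of work accelerates, the conventional uniform-step Monte-Carlo estimate of a first passage probability for a one-dimensional jump-diffusion $dX_t = \mu\,dt + \sigma\,dW_t + dJ_t$ (compound Poisson jumps with normal marks) can be sketched as follows; parameter names are illustrative, and the paper's contribution is precisely to improve on this brute-force scheme.

```python
import numpy as np

def fpt_probability(x0, barrier, mu, sigma, lam, jmean, jstd,
                    horizon=1.0, dt=1e-3, n_paths=20_000, seed=0):
    """Estimate P(X hits `barrier` from above before `horizon`) for
    dX = mu dt + sigma dW + dJ, J compound Poisson with rate lam and
    Normal(jmean, jstd) jump sizes. Plain Euler scheme, uniform steps."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(int(horizon / dt)):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        nj = rng.poisson(lam * dt, n_paths)
        # The sum of nj iid normal marks is Normal(nj*jmean, sqrt(nj)*jstd).
        jumps = np.where(nj > 0,
                         rng.normal(nj * jmean,
                                    jstd * np.sqrt(np.maximum(nj, 1))),
                         0.0)
        x = np.where(hit, x, x + mu * dt + sigma * dw + jumps)
        hit |= x <= barrier
    return hit.mean()
```

In a structural credit-risk reading, `barrier` is the default boundary and the returned value estimates the default probability over the horizon.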
arxiv-675720 | cs/0702164 | Monte-Carlo Simulations of the First Passage Time for Multivariate Jump-Diffusion Processes in Financial Applications | <|reference_start|>Monte-Carlo Simulations of the First Passage Time for Multivariate Jump-Diffusion Processes in Financial Applications: Many problems in finance require information on the first passage time (FPT) of a stochastic process. Mathematically, such problems are often reduced to the evaluation of the probability density of the time for such a process to cross a certain level, a boundary, or to enter a certain region. While in other areas of applications the FPT problem can often be solved analytically, in finance we usually have to resort to the application of numerical procedures, in particular when we deal with jump-diffusion stochastic processes (JDP). In this paper, we propose a Monte-Carlo-based methodology for the solution of the first passage time problem in the context of multivariate (and correlated) jump-diffusion processes. The developed technique provides an efficient tool for a number of applications, including credit risk and option pricing. We demonstrate its applicability to the analysis of the default rates and default correlations of several different, but correlated, firms via a set of empirical data.<|reference_end|> | arxiv | @article{zhang2007monte-carlo,
title={Monte-Carlo Simulations of the First Passage Time for Multivariate
Jump-Diffusion Processes in Financial Applications},
author={Di Zhang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702164},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702164},
primaryClass={cs.CE cs.NA}
} | zhang2007monte-carlo |
arxiv-675721 | cs/0702165 | Efficient estimation of default correlation for multivariate jump-diffusion processes | <|reference_start|>Efficient estimation of default correlation for multivariate jump-diffusion processes: Evaluation of default correlation is an important task in credit risk analysis. In many practical situations, it concerns the joint defaults of several correlated firms, a task that is reducible to a first passage time (FPT) problem. This task represents a great challenge for jump-diffusion processes (JDP), for which, except in very basic cases, there are no analytical solutions. In this contribution, we generalize our previous fast Monte-Carlo method (non-correlated jump-diffusion cases) to multivariate (and correlated) jump-diffusion processes. This generalization allows us, among other things, to evaluate the default events of several correlated assets based on a set of empirical data. The developed technique is an efficient tool for a number of other applications, including credit risk and option pricing.<|reference_end|> | arxiv | @article{zhang2007efficient,
title={Efficient estimation of default correlation for multivariate
jump-diffusion processes},
author={Di Zhang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702165},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702165},
primaryClass={cs.CE cs.NA}
} | zhang2007efficient |
arxiv-675722 | cs/0702166 | Solving Stochastic Differential Equations with Jump-Diffusion Efficiently: Applications to FPT Problems in Credit Risk | <|reference_start|>Solving Stochastic Differential Equations with Jump-Diffusion Efficiently: Applications to FPT Problems in Credit Risk: The first passage time (FPT) problem is ubiquitous in many applications. In finance, we often have to deal with stochastic processes with jump-diffusion, so that the FPT problem is reducible to a stochastic differential equation with jump-diffusion. While the application of the conventional Monte-Carlo procedure is possible for the solution of the resulting model, it becomes computationally inefficient, which severely restricts its applicability in many practically interesting cases. In this contribution, we focus on the development of efficient Monte-Carlo-based computational procedures for solving the FPT problem under multivariate (and correlated) jump-diffusion processes. We also discuss the implementation of the developed Monte-Carlo-based technique for multivariate jump-diffusion processes driven by several compound Poisson shocks. Finally, we demonstrate the application of the developed methodologies for analyzing the default rates and default correlations of differently rated firms via historical data.<|reference_end|> | arxiv | @article{zhang2007solving,
title={Solving Stochastic Differential Equations with Jump-Diffusion
Efficiently: Applications to FPT Problems in Credit Risk},
author={Di Zhang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702166},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702166},
primaryClass={cs.CE cs.NA}
} | zhang2007solving |
arxiv-675723 | cs/0702167 | Finite Volume Analysis of Nonlinear Thermo-mechanical Dynamics of Shape Memory Alloys | <|reference_start|>Finite Volume Analysis of Nonlinear Thermo-mechanical Dynamics of Shape Memory Alloys: In this paper, the finite volume method is developed to analyze coupled dynamic problems of nonlinear thermoelasticity. The major focus is given to the description of martensitic phase transformations essential in the modelling of shape memory alloys. Computational experiments are carried out to study the thermo-mechanical wave interactions in a shape memory alloy rod and a patch. Both mechanically and thermally induced phase transformations, as well as hysteresis effects, in a one-dimensional structure are successfully simulated with the developed methodology. In the two-dimensional case, the main focus is given to square-to-rectangular transformations, and examples of martensitic combinations under different mechanical loadings are provided.<|reference_end|> | arxiv | @article{wang2007finite,
title={Finite Volume Analysis of Nonlinear Thermo-mechanical Dynamics of Shape
Memory Alloys},
author={Linxiang X. Wang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702167},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702167},
primaryClass={cs.CE cs.NA}
} | wang2007finite |
arxiv-675724 | cs/0702168 | Simulation of Phase Combinations in Shape Memory Alloys Patches by Hybrid Optimization Methods | <|reference_start|>Simulation of Phase Combinations in Shape Memory Alloys Patches by Hybrid Optimization Methods: In this paper, phase combinations among martensitic variants in shape memory alloy patches and bars are simulated by a hybrid optimization methodology. The mathematical model is based on the Landau theory of phase transformations. Each stable phase is associated with a local minimum of the free energy function, and the phase combinations are simulated by minimizing the bulk energy. At low temperature, the free energy function has double potential wells, leading to non-convexity of the optimization problem. The methodology proposed in the present paper is based on an initial estimate of the global solution by a genetic algorithm, followed by a quasi-Newton procedure to locally refine the optimum. By combining the local and global search algorithms, the phase combinations are successfully simulated. Numerical experiments are presented for the phase combinations in an SMA patch under several typical mechanical loadings.<|reference_end|> | arxiv | @article{wang2007simulation,
title={Simulation of Phase Combinations in Shape Memory Alloys Patches by
Hybrid Optimization Methods},
author={Linxiang X. Wang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702168},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702168},
primaryClass={cs.CE cs.NA}
} | wang2007simulation |
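The hybrid strategy can be illustrated on a one-dimensional double-well Landau-type energy: an evolutionary global search locates the right basin, and a quasi-Newton step polishes the minimizer. The sketch below uses SciPy's differential evolution as a stand-in for the paper's genetic algorithm, on an illustrative energy whose coefficients are not the paper's:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def landau_energy(x, theta=-1.0):
    """Illustrative 2-4-6 Landau free energy in the strain e: for
    theta < 0 (below the transition temperature) it has two wells."""
    e = float(np.asarray(x).ravel()[0])
    return theta * e**2 - e**4 + 0.3 * e**6

# Stage 1: GA-like global search over the admissible strain range.
coarse = differential_evolution(landau_energy, bounds=[(-3.0, 3.0)], seed=1)

# Stage 2: quasi-Newton refinement started from the global candidate.
refined = minimize(landau_energy, coarse.x, method='BFGS')
print(refined.x, refined.fun)   # a minimizer in one of the two wells
```

The same two-stage pattern extends to the patch problems of the paper, where the decision variable is a discretized strain field rather than a scalar.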
arxiv-675725 | cs/0702169 | Bistable Biorders: A Sequential Domain Theory | <|reference_start|>Bistable Biorders: A Sequential Domain Theory: We give a simple order-theoretic construction of a Cartesian closed category of sequential functions. It is based on bistable biorders, which are sets with a partial order -- the extensional order -- and a bistable coherence, which captures equivalence of program behaviour, up to permutation of top (error) and bottom (divergence). We show that monotone and bistable functions (which are required to preserve bistably bounded meets and joins) are strongly sequential, and use this fact to prove universality results for the bistable biorder semantics of the simply-typed lambda-calculus (with atomic constants), and an extension with arithmetic and recursion. We also construct a bistable model of SPCF, a higher-order functional programming language with non-local control. We use our universality result for the lambda-calculus to show that the semantics of SPCF is fully abstract. We then establish a direct correspondence between bistable functions and sequential algorithms by showing that sequential data structures give rise to bistable biorders, and that each bistable function between such biorders is computed by a sequential algorithm.<|reference_end|> | arxiv | @article{laird2007bistable,
title={Bistable Biorders: A Sequential Domain Theory},
author={James Laird},
journal={Logical Methods in Computer Science, Volume 3, Issue 2 (May 15,
2007) lmcs:2222},
year={2007},
doi={10.2168/LMCS-3(2:5)2007},
archivePrefix={arXiv},
eprint={cs/0702169},
primaryClass={cs.PL cs.LO}
} | laird2007bistable |
arxiv-675726 | cs/0702170 | Generic Global Constraints based on MDDs | <|reference_start|>Generic Global Constraints based on MDDs: Constraint Programming (CP) has been successfully applied to both constraint satisfaction and constraint optimization problems. A wide variety of specialized global constraints provide critical assistance in achieving a good model that can take advantage of the structure of the problem in the search for a solution. However, a key outstanding issue is the representation of 'ad-hoc' constraints that do not have an inherent combinatorial nature, and hence are not modeled well using narrowly specialized global constraints. We attempt to address this issue by considering a hybrid of search and compilation. Specifically we suggest the use of Reduced Ordered Multi-Valued Decision Diagrams (ROMDDs) as the supporting data structure for a generic global constraint. We give an algorithm for maintaining generalized arc consistency (GAC) on this constraint that amortizes the cost of the GAC computation over a root-to-leaf path in the search tree without requiring asymptotically more space than used for the MDD. Furthermore we present an approach for incrementally maintaining the reduced property of the MDD during the search, and show how this can be used for providing domain entailment detection. Finally we discuss how to apply our approach to other similar data structures such as AOMDDs and Case DAGs. The technique used can be seen as an extension of the GAC algorithm for the regular language constraint on finite length input.<|reference_end|> | arxiv | @article{tiedemann2007generic,
title={Generic Global Constraints based on MDDs},
author={Peter Tiedemann, Henrik Reif Andersen, Rasmus Pagh},
journal={arXiv preprint arXiv:cs/0702170},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702170},
primaryClass={cs.AI}
} | tiedemann2007generic |
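The core pruning step behind such an MDD-backed constraint can be sketched as two reachability sweeps over the layered decision diagram: drop every edge that lies on no root-to-terminal path, then shrink each variable's domain to the values still carried by some surviving edge, which is exactly GAC. The sketch below omits the paper's incremental maintenance of reducedness and its amortization over the search tree.

```python
def mdd_filter(layers, domains):
    """One pruning pass for an MDD-based global constraint.
    layers[i]: list of edges (src, val, dst) for variable i; the root
    node is 'root' and the accepting terminal is 'true'.
    Returns the filtered edge layers and the GAC domains."""
    n = len(layers)
    reach = [set() for _ in range(n + 1)]
    reach[0].add('root')
    for i, edges in enumerate(layers):              # forward sweep
        for src, val, dst in edges:
            if src in reach[i] and val in domains[i]:
                reach[i + 1].add(dst)
    alive = {'true'} & reach[n]
    new_layers, new_domains = [], []
    for i in range(n - 1, -1, -1):                  # backward sweep
        kept, prev_alive, values = [], set(), set()
        for src, val, dst in layers[i]:
            if src in reach[i] and dst in alive and val in domains[i]:
                kept.append((src, val, dst))
                prev_alive.add(src)
                values.add(val)
        new_layers.append(kept)
        new_domains.append(values)
        alive = prev_alive
    return new_layers[::-1], new_domains[::-1]
```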
arxiv-675727 | cs/0702171 | How Overlap Determines the Macronuclear Genes in Ciliates | <|reference_start|>How Overlap Determines the Macronuclear Genes in Ciliates: Formal models for gene assembly in ciliates have been developed, in particular the string pointer reduction system (SPRS) and the graph pointer reduction system (GPRS). The reduction graph is a valuable tool within the SPRS, revealing much information about how gene assembly is performed for a given gene. The GPRS is more abstract than the SPRS and not all information present in the SPRS is retained in the GPRS. As a consequence the reduction graph cannot be defined for the GPRS in general, but we show that it can be defined (in an equivalent manner as defined for the SPRS) if we restrict ourselves to so-called realistic overlap graphs. Fortunately, only these graphs correspond to genes occurring in nature. Defining the reduction graph within the GPRS allows one to carry over several results within the SPRS that rely on the reduction graph.<|reference_end|> | arxiv | @article{brijder2007how,
title={How Overlap Determines the Macronuclear Genes in Ciliates},
author={Robert Brijder, Hendrik Jan Hoogeboom, Grzegorz Rozenberg},
journal={arXiv preprint arXiv:cs/0702171},
year={2007},
number={LIACS Technical Report 2007-02},
archivePrefix={arXiv},
eprint={cs/0702171},
primaryClass={cs.LO}
} | brijder2007how |
arxiv-675728 | cs/0702172 | Numerical Model For Vibration Damping Resulting From the First Order Phase Transformations | <|reference_start|>Numerical Model For Vibration Damping Resulting From the First Order Phase Transformations: A numerical model is constructed for modelling macroscale damping effects induced by the first order martensite phase transformations in a shape memory alloy rod. The model is constructed on the basis of the modified Landau-Ginzburg theory that couples nonlinear mechanical and thermal fields. The free energy function for the model is constructed as a double well function at low temperature, such that the external energy can be absorbed during the phase transformation and converted into thermal form. The Chebyshev spectral methods are employed together with backward differentiation for the numerical analysis of the problem. Computational experiments performed for different vibration energies demonstrate the importance of taking into account damping effects induced by phase transformations.<|reference_end|> | arxiv | @article{wang2007numerical,
title={Numerical Model For Vibration Damping Resulting From the First Order
Phase Transformations},
author={Linxiang X. Wang and Roderick V.N. Melnik},
journal={arXiv preprint arXiv:cs/0702172},
year={2007},
archivePrefix={arXiv},
eprint={cs/0702172},
primaryClass={cs.CE cs.NA}
} | wang2007numerical |
arxiv-675729 | cs/0703001 | Embedding Graphs into the Extended Grid | <|reference_start|>Embedding Graphs into the Extended Grid: Let $G=(V,E)$ be an arbitrary undirected source graph to be embedded in a target graph $EM$, the extended grid with vertices on integer grid points and edges to nearest and next-nearest neighbours. We present an algorithm showing how to embed $G$ into $EM$ in both time and space $O(|V|^2)$ using the new notions of islands and bridges. An island is a connected subgraph in the target graph which is mapped from exactly one vertex in the source graph while a bridge is an edge between two islands which is mapped from exactly one edge in the source graph. This work is motivated by real industrial applications in the field of quantum computing and a need to efficiently embed source graphs in the extended grid.<|reference_end|> | arxiv | @article{coury2007embedding,
title={Embedding Graphs into the Extended Grid},
author={Michael D. Coury},
journal={arXiv preprint arXiv:cs/0703001},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703001},
primaryClass={cs.DM cs.DS}
} | coury2007embedding |
arxiv-675730 | cs/0703002 | Integral Biomathics: A Post-Newtonian View into the Logos of Bios (On the New Meaning, Relations and Principles of Life in Science) | <|reference_start|>Integral Biomathics: A Post-Newtonian View into the Logos of Bios (On the New Meaning, Relations and Principles of Life in Science): This work is an attempt at a state-of-the-art survey of the natural and life sciences, with the goal of defining the scope and addressing the central questions of an original research program. It is focused on the phenomena of emergence, adaptive dynamics and evolution of self-assembling, self-organizing, self-maintaining and self-replicating biosynthetic systems viewed from a newly-arranged perspective and understanding of computation and communication in living nature.<|reference_end|> | arxiv | @article{simeonov2007integral,
title={Integral Biomathics: A Post-Newtonian View into the Logos of Bios (On
the New Meaning, Relations and Principles of Life in Science)},
author={Plamen L. Simeonov},
journal={Progress in Biophysics and Molecular Biology, Vol. 102, Issues
2-3, 2010, pp. 85-121},
year={2007},
doi={10.1016/j.pbiomolbio.2010.01.005},
archivePrefix={arXiv},
eprint={cs/0703002},
primaryClass={cs.NE cs.CC}
} | simeonov2007integral |
arxiv-675731 | cs/0703003 | Functions to Support Input and Output of Intervals | <|reference_start|>Functions to Support Input and Output of Intervals: Interval arithmetic is hardly feasible without directed rounding as provided, for example, by the IEEE floating-point standard. Equally essential for interval methods is directed rounding for conversion between the external decimal and internal binary numerals. This is not provided by the standard I/O libraries. Conversion algorithms exist that guarantee identity upon conversion followed by its inverse. Although it may be possible to adapt these algorithms for use in decimal interval I/O, we argue that outward rounding in radix conversion is computationally a simpler problem than guaranteeing identity. Hence it is preferable to develop decimal interval I/O ab initio, which is what we do in this paper.<|reference_end|> | arxiv | @article{van emden2007functions,
title={Functions to Support Input and Output of Intervals},
author={M.H. van Emden, B. Moa, and S.C. Somosan},
journal={arXiv preprint arXiv:cs/0703003},
year={2007},
number={DCS-311-IR},
archivePrefix={arXiv},
eprint={cs/0703003},
primaryClass={cs.NA}
} | van emden2007functions |
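The specification of outward rounding for decimal-to-binary conversion is easy to state with exact rational arithmetic: convert with round-to-nearest, compare exactly, and widen by one ulp on whichever side was rounded inward. A Python sketch of the specification follows (the paper itself develops the conversion ab initio rather than post-correcting a rounded result):

```python
import math
from fractions import Fraction

def outward_interval(decimal_str):
    """Tightest pair of doubles (lo, hi) with lo <= value <= hi, where
    `value` is the exact rational value of the decimal numeral."""
    exact = Fraction(decimal_str)       # exact value of the numeral
    f = float(exact)                    # round-to-nearest conversion
    if Fraction(f) == exact:
        return (f, f)                   # representable: degenerate interval
    if Fraction(f) < exact:
        return (f, math.nextafter(f, math.inf))
    return (math.nextafter(f, -math.inf), f)

print(outward_interval("0.1"))  # encloses 1/10 between adjacent doubles
```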
arxiv-675732 | cs/0703004 | Accelerating Socio-Technological Evolution: from ephemeralization and stigmergy to the global brain | <|reference_start|>Accelerating Socio-Technological Evolution: from ephemeralization and stigmergy to the global brain: Evolution is presented as a trial-and-error process that produces a progressive accumulation of knowledge. At the level of technology, this leads to ephemeralization, i.e. ever increasing productivity, or decreasing of the friction that normally dissipates resources. As a result, flows of matter, energy and information circulate ever more easily across the planet. This global connectivity increases the interactions between agents, and thus the possibilities for conflict. However, evolutionary progress also reduces social friction, via the creation of institutions. The emergence of such "mediators" is facilitated by stigmergy: the unintended collaboration between agents resulting from their actions on a shared environment. The Internet is a near ideal medium for stigmergic interaction. Quantitative stigmergy allows the web to learn from the activities of its users, thus becoming ever better at helping them to answer their queries. Qualitative stigmergy stimulates agents to collectively develop novel knowledge. Both mechanisms have direct analogues in the functioning of the human brain. This leads us to envision the future, super-intelligent web as a "global brain" for humanity. The feedback between social and technological advances leads to an extreme acceleration of innovation. An extrapolation of the corresponding hyperbolic growth model would forecast a singularity around 2040. This can be interpreted as the evolutionary transition to the Global Brain regime.<|reference_end|> | arxiv | @article{heylighen2007accelerating,
title={Accelerating Socio-Technological Evolution: from ephemeralization and
stigmergy to the global brain},
author={Francis Heylighen},
journal={arXiv preprint arXiv:cs/0703004},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703004},
primaryClass={cs.CY cs.NI}
} | heylighen2007accelerating |
arxiv-675733 | cs/0703005 | State Amplification | <|reference_start|>State Amplification: We consider the problem of transmitting data at rate R over a state dependent channel p(y|x,s) with the state information available at the sender and at the same time conveying the information about the channel state itself to the receiver. The amount of state information that can be learned at the receiver is captured by the mutual information I(S^n; Y^n) between the state sequence S^n and the channel output Y^n. The optimal tradeoff is characterized between the information transmission rate R and the state uncertainty reduction rate \Delta, when the state information is either causally or noncausally available at the sender. This result is closely related and in a sense dual to a recent study by Merhav and Shamai, which solves the problem of masking the state information from the receiver rather than conveying it.<|reference_end|> | arxiv | @article{kim2007state,
title={State Amplification},
author={Young-Han Kim, Arak Sutivong, and Thomas M. Cover},
journal={arXiv preprint arXiv:cs/0703005},
year={2007},
doi={10.1109/TIT.2008.920242},
archivePrefix={arXiv},
eprint={cs/0703005},
primaryClass={cs.IT math.IT}
} | kim2007state |
arxiv-675734 | cs/0703006 | XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem | <|reference_start|>XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem: The DIMACS 32-bit parity problem is a satisfiability (SAT) problem that is hard to solve. So far, EqSatz by Li is the only solver which can solve this problem. However, this solver is very slow: it is reported to have spent 11855 seconds solving a par32-5 instance on a Macintosh G3 300 MHz. The paper introduces a new solver, XORSAT, which splits the original problem into two parts, a structured part and a random part, and then solves them separately with WalkSAT and an XOR equation solver. Based on our empirical observations, XORSAT is surprisingly fast, approximately 1000 times faster than EqSatz. For a par32-5 instance, XORSAT took 2.9 seconds, while EqSatz took 2844 seconds on an Intel Pentium IV 2.66 GHz CPU. We believe that this method, which is significantly different from traditional methods, is also useful beyond this domain.<|reference_end|> | arxiv | @article{chen2007xorsat:,
title={XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem},
author={Jing-Chao Chen},
journal={arXiv preprint arXiv:cs/0703006},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703006},
primaryClass={cs.DS}
} | chen2007xorsat: |
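The XOR half of the split is classical: parity constraints form a linear system over GF(2), solvable in polynomial time by Gaussian elimination, in contrast to the NP-hard CNF part handed to WalkSAT. A minimal bitmask sketch of such a solver (not XORSAT's actual code):

```python
def solve_xor_system(equations, n_vars):
    """Gaussian elimination over GF(2).
    Each equation is (mask, rhs): bit i of `mask` set means x_i occurs,
    and the constraint is (XOR of those x_i) == rhs.
    Returns a satisfying assignment as a 0/1 list (free variables set
    to 0), or None if the system is inconsistent."""
    rows = [[m, b] for m, b in equations]
    pivots, r = [], 0
    for col in range(n_vars):
        sel = next((i for i in range(r, len(rows))
                    if rows[i][0] >> col & 1), None)
        if sel is None:
            continue
        rows[r], rows[sel] = rows[sel], rows[r]
        for i in range(len(rows)):          # eliminate col everywhere else
            if i != r and rows[i][0] >> col & 1:
                rows[i][0] ^= rows[r][0]
                rows[i][1] ^= rows[r][1]
        pivots.append((col, r))
        r += 1
    if any(b for _, b in rows[r:]):         # leftover "0 == 1" rows
        return None
    x = [0] * n_vars
    for col, i in pivots:                   # free variables stay 0
        x[col] = rows[i][1]
    return x

# x0 ^ x1 = 1 and x1 ^ x2 = 0  ->  one solution is [1, 0, 0]
print(solve_xor_system([(0b011, 1), (0b110, 0)], 3))
```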
arxiv-675735 | cs/0703007 | Intensional properties of polygraphs | <|reference_start|>Intensional properties of polygraphs: We present polygraphic programs, a subclass of Albert Burroni's polygraphs, as a computational model, showing how these objects can be seen as first-order functional programs. We prove that the model is Turing complete. We use polygraphic interpretations, a termination proof method introduced by the second author, to characterize polygraphic programs that compute in polynomial time. We conclude with a characterization of polynomial time functions and non-deterministic polynomial time functions.<|reference_end|> | arxiv | @article{bonfante2007intensional,
title={Intensional properties of polygraphs},
author={Guillaume Bonfante, Yves Guiraud},
journal={Electronic Notes in Theoretical Computer Science 203(1):65-77
(2008)},
year={2007},
doi={10.1016/j.entcs.2008.03.034},
archivePrefix={arXiv},
eprint={cs/0703007},
primaryClass={cs.LO cs.CC math.CT}
} | bonfante2007intensional |
arxiv-675736 | cs/0703008 | Strategies in object-oriented design | <|reference_start|>Strategies in object-oriented design: This paper presents a study aiming to analyse the design strategies of experts in object-oriented programming. We report an experiment conducted with four experts. Each subject solved three problems. Our results show that three strategies may be used in program design according to the solution structure. An object-centred strategy and a function-centred strategy are used when the solution has a hierarchical structure with vertical communication between objects. In this case, the plan which guides the design activity is declarative. A procedure-centred strategy is used when the solution has a flat structure with horizontal communication between objects. In this case, the plan which guides the design activity is procedural. These results are discussed in relation with results on design strategies in procedural design. Furthermore, our results provide insight into the knowledge structures of experts in object-oriented design. To conclude, we point out limitations of this study and discuss implications of our results for Human-Computer Interaction systems, in particular for systems assisting experts in their design activity.<|reference_end|> | arxiv | @article{chatel2007strategies,
title={Strategies in object-oriented design},
  author={Sophie Chatel, Fran\c{c}oise D\'etienne},
journal={Acta Psychologica 91 (1996) 245-269},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703008},
primaryClass={cs.HC}
} | chatel2007strategies |
arxiv-675737 | cs/0703009 | A Methodological Framework for Socio-Cognitive Analyses of Collaborative Design of Open Source Software | <|reference_start|>A Methodological Framework for Socio-Cognitive Analyses of Collaborative Design of Open Source Software: Open Source Software (OSS) development challenges traditional software engineering practices. In particular, OSS projects are managed by a large number of volunteers, working freely on the tasks they choose to undertake. OSS projects also rarely rely on explicit system-level design, or on project plans or schedules. Moreover, OSS developers work in arbitrary locations and collaborate almost exclusively over the Internet, using simple tools such as email and software code tracking databases (e.g. CVS). All the characteristics above make OSS development akin to weaving a tapestry of heterogeneous components. The OSS design process relies on various types of actors: people with prescribed roles, but also elements coming from a variety of information spaces (such as email and software code). The objective of our research is to understand the specific hybrid weaving accomplished by the actors of this distributed, collective design process. This, in turn, challenges traditional methodologies used to understand distributed software engineering: OSS development is simply too "fibrous" to lend itself well to analysis under a single methodological lens. In this paper, we describe the methodological framework we articulated to analyze collaborative design in the Open Source world. Our framework focuses on the links between the heterogeneous components of a project's hybrid network. We combine ethnography, text mining, and socio-technical network analysis and visualization to understand OSS development in its totality. This way, we are able to simultaneously consider the social, technical, and cognitive aspects of OSS development. We describe our methodology in detail, and discuss its implications for future research on distributed collective practices.<|reference_end|> | arxiv | @article{sack2007a,
title={A Methodological Framework for Socio-Cognitive Analyses of Collaborative
Design of Open Source Software},
  author={Warren Sack, Fran\c{c}oise D\'etienne (INRIA Rocquencourt), Nicholas
Ducheneaut, Jean-Marie Burkhardt (LEI), Dilan Mahendran, Flore Barcellini
(INRIA Rocquencourt, EIFFEL)},
journal={Computer Supported Cooperative Work (CSCW), the Journal of
Collaborative Computing 15, 2-3 (2006) 229-250},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703009},
primaryClass={cs.HC}
} | sack2007a |
arxiv-675738 | cs/0703010 | An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem | <|reference_start|>An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem: We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a {\em ($\lambda_f$,$\lambda_c$)-approximation algorithm} if the solution it produces has total cost at most $\lambda_f \cdot F^* + \lambda_c \cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the $(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a (1.6774,1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve $(\gamma_f, 1+2e^{-\gamma_f})$ established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11,1.7764)-approximation algorithm proposed by Jain et al., and later analyzed by Mahdian et al., we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.<|reference_end|> | arxiv | @article{byrka2007an,
title={An optimal bifactor approximation algorithm for the metric uncapacitated
facility location problem},
author={Jaroslaw Byrka and Karen Aardal},
journal={arXiv preprint arXiv:cs/0703010},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703010},
primaryClass={cs.DS}
} | byrka2007an |
arxiv-675739 | cs/0703011 | Can we Compute the Similarity Between Surfaces? | <|reference_start|>Can we Compute the Similarity Between Surfaces?: A suitable measure for the similarity of shapes represented by parameterized curves or surfaces is the Fr\'echet distance. Whereas efficient algorithms are known for computing the Fr\'echet distance of polygonal curves, the same problem for triangulated surfaces is NP-hard. Furthermore, it remained open whether it is computable at all. Here, using a discrete approximation we show that it is {\em upper semi-computable}, i.e., there is a non-halting Turing machine which produces a monotone decreasing sequence of rationals converging to the result. It follows that the decision problem, whether the Fr\'echet distance of two given surfaces lies below some specified value, is recursively enumerable. Furthermore, we show that a relaxed version of the problem, the computation of the {\em weak Fr\'echet distance} can be solved in polynomial time. For this, we give a computable characterization of the weak Fr\'echet distance in a geometric data structure called the {\em free space diagram}.<|reference_end|> | arxiv | @article{alt2007can,
title={Can we Compute the Similarity Between Surfaces?},
author={Helmut Alt, Maike Buchin},
journal={arXiv preprint arXiv:cs/0703011},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703011},
primaryClass={cs.CG cs.CC}
} | alt2007can |
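The free space diagram mentioned above is easiest to picture in the curve setting that the surface result generalizes. Purely as background — this is the textbook dynamic program for the discrete Fréchet distance of polygonal curves, not the surface algorithm of the paper:

```python
import math

# Classic O(nm) dynamic program for the *discrete* Frechet distance of two
# polygonal curves P and Q (lists of points). The DP grid plays the role
# of the free space diagram; included for intuition only.
def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    d = lambda a, b: math.dist(a, b)
    ca = [[0.0] * m for _ in range(n)]
    ca[0][0] = d(P[0], Q[0])
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = math.inf
            if i > 0:
                best = min(best, ca[i - 1][j])      # advance on P only
            if j > 0:
                best = min(best, ca[i][j - 1])      # advance on Q only
            if i > 0 and j > 0:
                best = min(best, ca[i - 1][j - 1])  # advance on both
            ca[i][j] = max(best, d(P[i], Q[j]))
    return ca[n - 1][m - 1]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0 for these parallel chains
```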
arxiv-675740 | cs/0703012 | Pre-Requirement Specification Traceability: Bridging the Complexity Gap through Capabilities | <|reference_start|>Pre-Requirement Specification Traceability: Bridging the Complexity Gap through Capabilities: Pre-Requirement Specification traceability is the activity of capturing relations between requirements and their sources, in particular user needs. Requirements are formal technical specifications in the solution space; needs are natural language expressions codifying user expectations in the problem space. Current traceability techniques are challenged by the complexity gap that results from the disparity between the spaces, and thereby, often neglect traceability to and from requirements. We identify the existence of an intermediary region -- the transition space -- which structures the progression from needs to requirements. More specifically, our approach to developing change-tolerant systems, termed Capabilities Engineering, identifies highly cohesive, minimally coupled, optimized functional abstractions called Capabilities in the transition space. These Capabilities link the problem and solution spaces through directives (entities derived from user needs). Directives connect the problem and transition spaces; Capabilities link the transition and solution spaces. Furthermore, the process of Capabilities Engineering addresses specific traceability challenges. It supports the evolution of traces, provides semantic and structural information about dependencies, incorporates human factors, generates traceability relations with negligible overhead, and thereby, fosters pre-Requirement Specification traceability.<|reference_end|> | arxiv | @article{ravichandar2007pre-requirement,
title={Pre-Requirement Specification Traceability: Bridging the Complexity Gap
through Capabilities},
  author={Ramya Ravichandar, James D. Arthur, Manuel P\'erez-Qui\~nones},
journal={arXiv preprint arXiv:cs/0703012},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703012},
primaryClass={cs.SE}
} | ravichandar2007pre-requirement |
arxiv-675741 | cs/0703013 | NLC-2 graph recognition and isomorphism | <|reference_start|>NLC-2 graph recognition and isomorphism: NLC-width is a variant of clique-width with many applications in graph algorithmics. This paper is devoted to graphs of NLC-width two. After giving new structural properties of the class, we propose an $O(n^2 m)$-time algorithm, improving Johansson's algorithm \cite{Johansson00}. Moreover, our algorithm is simple to understand. The above properties and algorithm allow us to propose a robust $O(n^2 m)$-time isomorphism algorithm for NLC-2 graphs. As far as we know, it is the first polynomial-time algorithm.<|reference_end|> | arxiv | @article{limouzy2007nlc-2,
title={NLC-2 graph recognition and isomorphism},
  author={Vincent Limouzy (LIAFA), Fabien De Montgolfier (LIAFA), Micha\"el Rao
(LIAFA)},
  journal={In Lecture Notes in Computer Science - Graph-Theoretic Concepts
in Computer Science 33rd International Workshop, WG 2007, Dornburg, Germany,
  June 21-23, 2007, Dornburg, Germany (2007)},
year={2007},
doi={10.1007/978-3-540-74839-7_9},
archivePrefix={arXiv},
eprint={cs/0703013},
primaryClass={cs.DS}
} | limouzy2007nlc-2 |
arxiv-675742 | cs/0703014 | Asymptotic Capacity Bounds for Wireless Networks with Non-Uniform Traffic | <|reference_start|>Asymptotic Capacity Bounds for Wireless Networks with Non-Uniform Traffic: We develop bounds on the capacity of wireless networks when the traffic is non-uniform, i.e., not all nodes are required to receive and send similar volumes of traffic. Our results are asymptotic, i.e., they hold with probability going to unity as the number of nodes goes to infinity. We study \emph{(i)} asymmetric networks, where the numbers of sources and destinations of traffic are unequal, \emph{(ii)} multicast networks, in which each created packet has multiple destinations, \emph{(iii)} cluster networks, that consist of clients and a limited number of cluster heads, and each client wants to communicate with any of the cluster heads, and \emph{(iv)} hybrid networks, in which the nodes are supported by a limited infrastructure. Our findings quantify the fundamental capabilities of these wireless networks to handle traffic bottlenecks, and point to correct design principles that achieve the capacity without resorting to overly complicated protocols.<|reference_end|> | arxiv | @article{toumpis2007asymptotic,
title={Asymptotic Capacity Bounds for Wireless Networks with Non-Uniform
Traffic},
author={Stavros Toumpis},
journal={Asymptotic Capacity Bounds for Wireless Networks with Non-Uniform
Traffic Patterns, S. Toumpis, IEEE Trans. Wireless Comm., vol. 7, no. 6, pp.
2231-2242, June 2008},
year={2007},
doi={10.1109/TWC.2008.061010},
archivePrefix={arXiv},
eprint={cs/0703014},
primaryClass={cs.NI}
} | toumpis2007asymptotic |
arxiv-675743 | cs/0703015 | Graph representation of context-free grammars | <|reference_start|>Graph representation of context-free grammars: In modern mathematics, graphs figure as one of the better-investigated classes of mathematical objects. Various properties of graphs, as well as graph-processing algorithms, can be useful if graphs of a certain kind are used as denotations for CF-grammars. Furthermore, graphs are well adapted to various extensions (one such extension being attributes).<|reference_end|> | arxiv | @article{shkotin2007graph,
title={Graph representation of context-free grammars},
author={Alex Shkotin},
journal={arXiv preprint arXiv:cs/0703015},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703015},
primaryClass={cs.LO}
} | shkotin2007graph |
arxiv-675744 | cs/0703016 | Outage Probability of Multiple-Input Single-Output (MISO) Systems with Delayed Feedback | <|reference_start|>Outage Probability of Multiple-Input Single-Output (MISO) Systems with Delayed Feedback: We investigate the effect of feedback delay on the outage probability of multiple-input single-output (MISO) fading channels. Channel state information at the transmitter (CSIT) is a delayed version of the channel state information available at the receiver (CSIR). We consider two cases of CSIR: (a) perfect CSIR and (b) CSI estimated at the receiver using training symbols. With perfect CSIR, under a short-term power constraint, we determine: (a) the outage probability for beamforming with imperfect CSIT (BF-IC) analytically, and (b) the optimal spatial power allocation (OSPA) scheme that minimizes outage numerically. Results show that, for delayed CSIT, BF-IC is close to optimal for low SNR and uniform spatial power allocation (USPA) is close to optimal at high SNR. Similarly, under a long-term power constraint, we show that BF-IC is close to optimal for low SNR and USPA is close to optimal at high SNR. With imperfect CSIR, we obtain an upper bound on the outage probability with USPA and BF-IC. Results show that the loss in performance due to imperfection in CSIR is not significant, if the training power is chosen appropriately.<|reference_end|> | arxiv | @article{annpureddy2007outage,
title={Outage Probability of Multiple-Input Single-Output (MISO) Systems with
Delayed Feedback},
author={Venkata Sreekanta Annpureddy, Devdutt V. Marathe, T. R. Ramya and
Srikrishna Bhashyam},
journal={arXiv preprint arXiv:cs/0703016},
year={2007},
doi={10.1109/TCOMM.2009.02.0700152},
archivePrefix={arXiv},
eprint={cs/0703016},
primaryClass={cs.IT math.IT}
} | annpureddy2007outage |
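The qualitative conclusion — beamforming on delayed CSIT wins at low SNR, uniform spatial power allocation at high SNR — is easy to probe numerically. The Monte Carlo sketch below uses a Gauss-Markov staleness model h = ρ·h_delayed + √(1−ρ²)·e; the correlation ρ, rate target, and SNR are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, snr_db, R, rho, trials = 4, 10.0, 2.0, 0.9, 200_000
snr = 10 ** (snr_db / 10)

def cn(*shape):  # i.i.d. CN(0,1) samples
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_old = cn(trials, M)                                     # delayed CSI at the Tx
h = rho * h_old + np.sqrt(1 - rho**2) * cn(trials, M)     # true channel at Rx time

w = h_old / np.linalg.norm(h_old, axis=1, keepdims=True)  # beamform on stale CSI
rate_bf = np.log2(1 + snr * np.abs(np.sum(np.conj(w) * h, axis=1)) ** 2)
rate_us = np.log2(1 + snr * np.sum(np.abs(h) ** 2, axis=1) / M)  # uniform power

print("P_out, BF on delayed CSIT:", np.mean(rate_bf < R))
print("P_out, uniform allocation:", np.mean(rate_us < R))
```

Sweeping `snr_db` up and down reproduces the crossover the abstract describes.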
arxiv-675745 | cs/0703017 | Performance Bounds for Bi-Directional Coded Cooperation Protocols | <|reference_start|>Performance Bounds for Bi-Directional Coded Cooperation Protocols: In coded bi-directional cooperation, two nodes wish to exchange messages over a shared half-duplex channel with the help of a relay. In this paper, we derive performance bounds for this problem for each of three protocols. The first protocol is a two-phase protocol where both users simultaneously transmit during the first phase and the relay alone transmits during the second. In this protocol, our bounds are tight and a multiple-access channel transmission from the two users to the relay followed by a coded broadcast-type transmission from the relay to the users achieves all points in the two-phase capacity region. The second protocol considers sequential transmissions from the two users followed by a transmission from the relay, while the third protocol is a hybrid of the first two protocols and has four phases. In the latter two protocols the inner and outer bounds are not identical, and differ in a manner similar to the inner and outer bounds of Cover's relay channel. Numerical evaluation shows that at least in some cases of interest our bounds do not differ significantly. Finally, in the Gaussian case with path loss, we derive achievable rates and compare the relative merits of each protocol in various regimes. This case is of interest in cellular systems. Surprisingly, we find that in some cases the achievable rate region of the four-phase protocol contains points that are outside the outer bounds of the other protocols.<|reference_end|> | arxiv | @article{kim2007performance,
title={Performance Bounds for Bi-Directional Coded Cooperation Protocols},
author={Sang Joon Kim, Patrick Mitran, Vahid Tarokh},
journal={arXiv preprint arXiv:cs/0703017},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703017},
primaryClass={cs.IT math.IT}
} | kim2007performance |
arxiv-675746 | cs/0703018 | A Coding Theoretic Study on MLL proof nets | <|reference_start|>A Coding Theoretic Study on MLL proof nets: Coding theory is very useful for real world applications. A notable example is digital television. Basically, coding theory studies ways of detecting and/or correcting data that may be true or false. Moreover, coding theory is an area of mathematics, in which there is an interplay between many branches of mathematics, e.g., abstract algebra, combinatorics, discrete geometry, information theory, etc. In this paper we propose a novel approach for analyzing proof nets of Multiplicative Linear Logic (MLL) by coding theory. We define families of proof structures and introduce a metric space for each family. In each family, 1. an MLL proof net is a true code element; 2. a proof structure that is not an MLL proof net is a false (or corrupted) code element. The definition of our metrics reflects the duality of the multiplicative connectives elegantly. In this paper we show that in this framework one-error detection is possible but one-error correction is not. Our proof of the impossibility of one-error correction is interesting in the sense that a proof theoretical property is proved using a graph theoretical argument. In addition, we show that affine logic and MLL + MIX are not appropriate for this framework. That explains why MLL is better than such similar logics.<|reference_end|> | arxiv | @article{matsuoka2007a,
title={A Coding Theoretic Study on MLL proof nets},
author={Satoshi Matsuoka},
journal={arXiv preprint arXiv:cs/0703018},
year={2007},
doi={10.1017/S0960129511000582},
archivePrefix={arXiv},
eprint={cs/0703018},
primaryClass={cs.LO cs.DM}
} | matsuoka2007a |
arxiv-675747 | cs/0703019 | The Stackelberg Minimum Spanning Tree Game | <|reference_start|>The Stackelberg Minimum Spanning Tree Game: We consider a one-round two-player network pricing game, the Stackelberg Minimum Spanning Tree game or StackMST. The game is played on a graph (representing a network), whose edges are colored either red or blue, and where the red edges have a given fixed cost (representing the competitor's prices). The first player chooses an assignment of prices to the blue edges, and the second player then buys the cheapest possible minimum spanning tree, using any combination of red and blue edges. The goal of the first player is to maximize the total price of purchased blue edges. This game is the minimum spanning tree analog of the well-studied Stackelberg shortest-path game. We analyze the complexity and approximability of the first player's best strategy in StackMST. In particular, we prove that the problem is APX-hard even if there are only two different red costs, and give an approximation algorithm whose approximation ratio is at most $\min \{k,1+\ln b,1+\ln W\}$, where $k$ is the number of distinct red costs, $b$ is the number of blue edges, and $W$ is the maximum ratio between red costs. We also give a natural integer linear programming formulation of the problem, and show that the integrality gap of the fractional relaxation asymptotically matches the approximation guarantee of our algorithm.<|reference_end|> | arxiv | @article{cardinal2007the,
title={The Stackelberg Minimum Spanning Tree Game},
  author={Jean Cardinal, Erik D. Demaine, Samuel Fiorini, Gwena\"el Joret,
Stefan Langerman, Ilan Newman, Oren Weimann},
journal={Algorithmica, vol. 59, no. 2, pp. 129--144, 2011},
year={2007},
doi={10.1007/s00453-009-9299-y},
archivePrefix={arXiv},
eprint={cs/0703019},
primaryClass={cs.GT cs.DS}
} | cardinal2007the |
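The min{k, 1+ln b, 1+ln W} guarantee mentioned above is achieved by a "single price" strategy: for each distinct red cost c, price every blue edge at c, let the follower buy an MST, and keep the most profitable outcome. A hedged sketch follows; the tie-breaking rule (blue preferred at equal weight) and the assumption that the red edges alone keep the graph connected are the usual conventions in this line of work, and the function names are mine:

```python
def kruskal_blue_revenue(n, red, blue, price):
    # red: list of (u, v, cost); blue: list of (u, v), all priced at `price`.
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted([(price, 0, u, v) for u, v in blue] +
                   [(c, 1, u, v) for u, v, c in red])  # 0 < 1: blue wins ties
    revenue = 0
    for c, is_red, u, v in edges:      # Kruskal's MST, tracking blue revenue
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            if not is_red:
                revenue += c
    return revenue

def single_price_stackmst(n, red, blue):
    return max(kruskal_blue_revenue(n, red, blue, c)
               for c in {c for _, _, c in red})

red = [(0, 1, 3), (1, 2, 5)]   # red edges with fixed competitor costs
blue = [(0, 2)]                # the leader prices this edge
print(single_price_stackmst(3, red, blue))  # -> 5: price the blue edge at 5
```

On this tiny triangle, any blue price above 5 makes the follower fall back to the red edge, so 5 is in fact the leader's optimum.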
arxiv-675748 | cs/0703020 | Counting preimages of TCP reordering patterns | <|reference_start|>Counting preimages of TCP reordering patterns: Packet reordering is an important property of network traffic that should be captured by analytical models of the Transmission Control Protocol (TCP). We study a combinatorial problem motivated by RESTORED, a TCP modeling methodology that incorporates information about packet dynamics. A significant component of this model is a many-to-one mapping B that transforms sequences of packet IDs into buffer sequences, in a manner that is compatible with TCP semantics. We show that the following hold: 1. There exists a linear time algorithm that, given a buffer sequence W of length n, decides whether there exists a permutation A of 1,2,..., n such that $A\in B^{-1}(W)$ (and constructs such a permutation, when it exists). 2. The problem of counting the number of permutations in $B^{-1}(W)$ has a polynomial time algorithm. We also show how to extend these results to sequences of IDs that contain repeated packets.<|reference_end|> | arxiv | @article{hansson2007counting,
title={Counting preimages of TCP reordering patterns},
author={Anders Hansson, Gabriel Istrate},
journal={arXiv preprint arXiv:cs/0703020},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703020},
primaryClass={cs.DS cs.DM math.CO}
} | hansson2007counting |
arxiv-675749 | cs/0703021 | Addressing Components' Evolvement and Execution Behavior to Measure Component-Based Software Reliability | <|reference_start|>Addressing Components' Evolvement and Execution Behavior to Measure Component-Based Software Reliability: Software reliability is an important quality attribute, often evaluated as either a function of time or of system structures. The goal of this study is to have this metric cover both for component-based software, because its reliability strongly depends on the quality of constituent components and their interactions. To achieve this, we apply a convolution modeling approach, based on components' execution behavior, to integrate their individual reliability evolvement and simultaneously address failure fixes in the time domain. Modeling at the component level can be more economical to accommodate software evolution, because the reliability metric can be evaluated by reusing the quality measures of unaffected components and adapting only to the affected ones to save cost. The adaptation capability also supports the incremental software development processes that constantly add in new components over time. Experiments were conducted to discuss the usefulness of this approach.<|reference_end|> | arxiv | @article{wang2007addressing,
title={Addressing Components' Evolvement and Execution Behavior to Measure
Component-Based Software Reliability},
author={Wen-Li Wang and Mei-Huei Tang},
journal={arXiv preprint arXiv:cs/0703021},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703021},
primaryClass={cs.SE}
} | wang2007addressing |
arxiv-675750 | cs/0703022 | Rate of Channel Hardening of Antenna Selection Diversity Schemes and Its Implication on Scheduling | <|reference_start|>Rate of Channel Hardening of Antenna Selection Diversity Schemes and Its Implication on Scheduling: For a multiple antenna system, we compute the asymptotic distribution of antenna selection gain when the transmitter selects the transmit antenna with the strongest channel. We use this to asymptotically estimate the underlying channel capacity distributions, and demonstrate that unlike multiple-input/multiple-output (MIMO) systems, the channel for antenna selection systems hardens at a slower rate, and thus a significant multiuser scheduling gain can exist - O(1/ log m) for channel selection as opposed to O(1/ sqrt{m}) for MIMO, where m is the number of transmit antennas. Additionally, even without this scheduling gain, it is demonstrated that transmit antenna selection systems outperform open loop MIMO systems in low signal-to-interference-plus-noise ratio (SINR) regimes, particularly for a small number of receive antennas. This may have some implications on wireless system design, because most of the users in modern wireless systems have low SINRs<|reference_end|> | arxiv | @article{bai2007rate,
title={Rate of Channel Hardening of Antenna Selection Diversity Schemes and Its
Implication on Scheduling},
author={Dongwoon Bai, Patrick Mitran, Saeed S. Ghassemzadeh, Robert R. Miller,
Vahid Tarokh},
journal={arXiv preprint arXiv:cs/0703022},
year={2007},
doi={10.1109/TIT.2009.2027529},
archivePrefix={arXiv},
eprint={cs/0703022},
primaryClass={cs.IT math.IT}
} | bai2007rate |
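A quick simulation makes the "slower hardening" claim concrete: as m grows, the spread of the capacity distribution shrinks much faster for an open-loop scheme than for antenna selection. This is a numerical sanity check under illustrative parameters, not the paper's analysis:

```python
import numpy as np

# Compare the spread (std) of capacity for (a) transmit antenna selection
# and (b) an open-loop scheme using all m antennas with equal power, over
# i.i.d. Rayleigh fading (|h_i|^2 exponential). Selection keeps a larger
# spread as m grows -- room for multiuser scheduling gain.
rng = np.random.default_rng(1)
snr, trials = 10.0, 100_000
for m in (2, 4, 8, 16, 32):
    g = rng.exponential(size=(trials, m))          # channel power gains
    c_sel = np.log2(1 + snr * g.max(axis=1))       # pick strongest antenna
    c_ol = np.log2(1 + snr * g.sum(axis=1) / m)    # open loop, equal power
    print(f"m={m:2d}  std(selection)={c_sel.std():.3f}  "
          f"std(open loop)={c_ol.std():.3f}")
```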
arxiv-675751 | cs/0703023 | Computing a Minimum-Dilation Spanning Tree is NP-hard | <|reference_start|>Computing a Minimum-Dilation Spanning Tree is NP-hard: In a geometric network G = (S, E), the graph distance between two vertices u, v in S is the length of the shortest path in G connecting u to v. The dilation of G is the maximum factor by which the graph distance of a pair of vertices differs from their Euclidean distance. We show that given a set S of n points with integer coordinates in the plane and a rational dilation delta > 1, it is NP-hard to determine whether a spanning tree of S with dilation at most delta exists.<|reference_end|> | arxiv | @article{cheong2007computing,
title={Computing a Minimum-Dilation Spanning Tree is NP-hard},
author={Otfried Cheong, Herman Haverkort, Mira Lee},
journal={arXiv preprint arXiv:cs/0703023},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703023},
primaryClass={cs.CG}
} | cheong2007computing |
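For intuition about the quantity being minimized, here is a brute-force evaluator of the dilation of a given spanning tree over a small point set (illustration only; the paper's point is that finding the minimizing tree is NP-hard):

```python
import itertools, math

# Dilation of a spanning tree T of a point set: the maximum, over all point
# pairs, of (length of the tree path) / (Euclidean distance). Floyd-Warshall
# is overkill for a tree but keeps this sketch short.
def tree_dilation(points, tree_edges):
    n = len(points)
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v in tree_edges:
        d[u][v] = d[v][u] = math.dist(points[u], points[v])
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(d[i][j] / math.dist(points[i], points[j])
               for i, j in itertools.combinations(range(n), 2))

pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
star = [(0, 3), (1, 3), (2, 3)]        # star centered at the top point
print(tree_dilation(pts, star))        # about 2.414 for this tree
```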
arxiv-675752 | cs/0703024 | Algorithmic Information Theory: a brief non-technical guide to the field | <|reference_start|>Algorithmic Information Theory: a brief non-technical guide to the field: This article is a brief guide to the field of algorithmic information theory (AIT), its underlying philosophy, and the most important concepts. AIT arises by mixing information theory and computation theory to obtain an objective and absolute notion of information in an individual object, and in so doing gives rise to an objective and robust notion of randomness of individual objects. This is in contrast to classical information theory that is based on random variables and communication, and has no bearing on information and randomness of individual objects. After a brief overview, the major subfields, applications, history, and a map of the field are presented.<|reference_end|> | arxiv | @article{hutter2007algorithmic,
title={Algorithmic Information Theory: a brief non-technical guide to the field},
author={Marcus Hutter},
journal={Scholarpedia, 2:3 (2007) page 2519},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703024},
primaryClass={cs.IT cs.CC math.IT}
} | hutter2007algorithmic |
arxiv-675753 | cs/0703025 | LIBOPT - An environment for testing solvers on heterogeneous collections of problems - Version 1.0 | <|reference_start|>LIBOPT - An environment for testing solvers on heterogeneous collections of problems - Version 1.0: The Libopt environment is both a methodology and a set of tools that can be used for testing, comparing, and profiling solvers on problems belonging to various collections. These collections can be heterogeneous in the sense that their problems can have common features that differ from one collection to the other. Libopt brings a unified view on this composite world by offering, for example, the possibility to run any solver on any problem compatible with it, using the same Unix/Linux command. The environment also provides tools for comparing the results obtained by solvers on a specified set of problems. Most of the scripts going with the Libopt environment have been written in Perl.<|reference_end|> | arxiv | @article{gilbert2007libopt,
title={LIBOPT - An environment for testing solvers on heterogeneous collections
of problems - Version 1.0},
author={Jean Charles Gilbert (INRIA Rocquencourt), Xavier Jonsson},
journal={arXiv preprint arXiv:cs/0703025},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703025},
primaryClass={cs.MS cs.NA math.OC}
} | gilbert2007libopt |
arxiv-675754 | cs/0703026 | Formal proof for delayed finite field arithmetic using floating point operators | <|reference_start|>Formal proof for delayed finite field arithmetic using floating point operators: Formal proof checkers such as Coq are capable of validating proofs of correctness of algorithms for finite field arithmetic, but they require extensive training from potential users. The delayed solution of a triangular system over a finite field mixes operations on integers and operations on floating point numbers. We focus in this report on verifying proof obligations stating that no round-off error occurred on any of the floating point operations. We use a tool named Gappa, which can be learned in a matter of minutes, to generate proofs related to floating point arithmetic and hide the technicalities of formal proof checkers. We found that three facilities are missing from existing tools. The first one is the ability to use in Gappa new lemmas that cannot be easily expressed as rewriting rules. We coined the second one ``variable interchange'' as it would be required to validate loop interchanges. The third facility handles massive loop unrolling and argument instantiation by generating traces of execution for a large number of cases. We hope that these facilities may sometime in the future be integrated into mainstream code validation.<|reference_end|> | arxiv | @article{boldo2007formal,
title={Formal proof for delayed finite field arithmetic using floating point
operators},
author={Sylvie Boldo (INRIA Futurs), Marc Daumas (ELIAUS), Pascal Giorgi
(LIRMM)},
journal={arXiv preprint arXiv:cs/0703026},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703026},
primaryClass={cs.SC}
} | boldo2007formal |
arxiv-675755 | cs/0703027 | Interroger un corpus par le sens | <|reference_start|>Interroger un corpus par le sens: In textual knowledge management, statistical methods prevail. Nonetheless, some difficulties cannot be overcome by these methodologies. I propose a symbolic approach using a complete textual analysis to identify which analysis level can improve the answers provided by a system. The approach identifies word senses and relations between words and generates as many rephrasings as possible. Using synonyms and derivatives, the system provides new utterances without changing the original meaning of the sentences. In this way, a piece of information can be retrieved whatever the wording of the question or answer may be.<|reference_end|> | arxiv | @article{jacquemin2007interroger,
title={Interroger un corpus par le sens},
author={Bernard Jacquemin (ISC)},
journal={Dans "Mots, termes et contextes", Actes des septi\`emes Journ\'ees
scientifiques du r\'eseau de chercheurs Lexicologie, Terminologie, Traduction
- Bruxelles : Belgique (2005)},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703027},
primaryClass={cs.CL cs.IR}
} | jacquemin2007interroger |
arxiv-675756 | cs/0703028 | Graphic processors to speed-up simulations for the design of high performance solar receptors | <|reference_start|>Graphic processors to speed-up simulations for the design of high performance solar receptors: Graphics Processing Units (GPUs) are now powerful and flexible systems adapted and used for other purposes than graphics calculations (General Purpose computation on GPU -- GPGPU). We present here a prototype to be integrated into simulation codes that estimate temperature, velocity and pressure to design next generations of solar receptors. Such codes will delegate to our contribution on GPUs the computation of heat transfers due to radiation. We use Monte-Carlo line-by-line ray-tracing through finite volumes. This means data-parallel arithmetic transformations on large data structures. Our prototype is inspired by the source code of GPUBench. Our measurements on two recent graphics cards (Nvidia 7800GTX and ATI RX1800XL) show speed-ups higher than 400 compared to CPU implementations, while leaving most CPU computing resources available. As there were some questions pending about the accuracy of the operators implemented in GPUs, we start this report with a survey and some contributed tests on the various floating point units available on GPUs.<|reference_end|> | arxiv | @article{collange2007graphic,
title={Graphic processors to speed-up simulations for the design of high
performance solar receptors},
author={Sylvain Collange (LP2A), Marc Daumas (LP2A, LIRMM), David Defour
(LP2A)},
journal={arXiv preprint arXiv:cs/0703028},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703028},
primaryClass={cs.DC physics.class-ph}
} | collange2007graphic |
arxiv-675757 | cs/0703029 | FPRAS for computing a lower bound for weighted matching polynomial of graphs | <|reference_start|>FPRAS for computing a lower bound for weighted matching polynomial of graphs: We give a fully polynomial randomized approximation scheme to compute a lower bound for the matching polynomial of any weighted graph at a positive argument. For the matching polynomial of complete bipartite graphs with bounded weights these lower bounds are asymptotically optimal.<|reference_end|> | arxiv | @article{friedland2007fpras,
title={FPRAS for computing a lower bound for weighted matching polynomial of
graphs},
author={Shmuel Friedland},
journal={arXiv preprint arXiv:cs/0703029},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703029},
primaryClass={cs.CC cs.DM}
} | friedland2007fpras |
arxiv-675758 | cs/0703030 | An Efficient Local Approach to Convexity Testing of Piecewise-Linear Hypersurfaces | <|reference_start|>An Efficient Local Approach to Convexity Testing of Piecewise-Linear Hypersurfaces: We show that a closed piecewise-linear hypersurface immersed in $R^n$ ($n\ge 3$) is the boundary of a convex body if and only if every point in the interior of each $(n-3)$-face has a neighborhood that lies on the boundary of some convex body; no assumptions about the hypersurface's topology are needed. We derive this criterion from our generalization of Van Heijenoort's (1952) theorem on locally convex hypersurfaces in $R^n$ to spherical spaces. We also give an easy-to-implement convexity testing algorithm, which is based on our criterion. For $R^3$ the number of arithmetic operations used by the algorithm is at most linear in the number of vertices, while in general it is at most linear in the number of incidences between the $(n-2)$-faces and $(n-3)$-faces. When the dimension $n$ is not fixed and only ring arithmetic is allowed, the algorithm still remains polynomial. Our method works in more general situations than the convexity verification algorithms developed by Mehlhorn et al. (1996) and Devillers et al. (1998) -- for example, our method does not require the input surface to be orientable, nor it requires the input data to include normal vectors to the facets that are oriented "in a coherent way". For $R^3$ the complexity of our algorithm is the same as that of previous algorithms; for higher dimensions there seems to be no clear winner, but our approach is the only one that easily handles inputs in which the facet normals are not known to be coherently oriented or are not given at all. Furthermore, our method can be extended to piecewise-polynomial surfaces of small degree.<|reference_end|> | arxiv | @article{rybnikov2007an,
title={An Efficient Local Approach to Convexity Testing of Piecewise-Linear
Hypersurfaces},
author={Konstantin Rybnikov},
journal={arXiv preprint arXiv:cs/0703030},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703030},
primaryClass={cs.CG}
} | rybnikov2007an |
arxiv-675759 | cs/0703031 | Sampling Eulerian orientations of triangular lattice graphs | <|reference_start|>Sampling Eulerian orientations of triangular lattice graphs: We consider the problem of sampling from the uniform distribution on the set of Eulerian orientations of subgraphs of the triangular lattice. Although it is known that this can be achieved in polynomial time for any graph, the algorithm studied here is more natural in the context of planar Eulerian graphs. We analyse the mixing time of a Markov chain on the Eulerian orientations of a planar graph which moves between orientations by reversing the edges of directed faces. Using path coupling and the comparison method we obtain a polynomial upper bound on the mixing time of this chain for any solid subgraph of the triangular lattice. By considering the conductance of the chain we show that there exist subgraphs with holes for which the chain will always take an exponential amount of time to converge. Finally, as an additional justification for studying a Markov chain on the set of Eulerian orientations of planar graphs, we show that the problem of counting Eulerian orientations remains #P-complete when restricted to planar graphs. A preliminary version of this work appeared as an extended abstract in the 2nd Algorithms and Complexity in Durham workshop.<|reference_end|> | arxiv | @article{creed2007sampling,
title={Sampling Eulerian orientations of triangular lattice graphs},
author={Paidi Creed},
journal={arXiv preprint arXiv:cs/0703031},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703031},
primaryClass={cs.DM cs.DS}
} | creed2007sampling |
arxiv-675760 | cs/0703032 | An $L (1/3 + \epsilon)$ Algorithm for the Discrete Logarithm Problem for Low Degree Curves | <|reference_start|>An $L (1/3 + \epsilon)$ Algorithm for the Discrete Logarithm Problem for Low Degree Curves: The discrete logarithm problem in Jacobians of curves of high genus $g$ over finite fields $\FF_q$ is known to be computable with subexponential complexity $L_{q^g}(1/2, O(1))$. We present an algorithm for a family of plane curves whose degrees in $X$ and $Y$ are low with respect to the curve genus, and suitably unbalanced. The finite base fields are arbitrary, but their sizes should not grow too fast compared to the genus. For this family, the group structure can be computed in subexponential time of $L_{q^g}(1/3, O(1))$, and a discrete logarithm computation takes subexponential time of $L_{q^g}(1/3+\epsilon, o(1))$ for any positive $\epsilon$. These runtime bounds rely on heuristics similar to the ones used in the number field sieve or the function field sieve algorithms.<|reference_end|> | arxiv | @article{enge2007an,
title={An $L (1/3 + \epsilon)$ Algorithm for the Discrete Logarithm Problem for
Low Degree Curves},
author={Andreas Enge (INRIA FUTURS, INRIA Futurs), Pierrick Gaudry (INRIA
Lorraine - LORIA)},
journal={Dans Eurocrypt 2007 (2007)},
year={2007},
doi={10.1007/978-3-540-72540-4_22},
archivePrefix={arXiv},
eprint={cs/0703032},
primaryClass={cs.CR math.AG}
} | enge2007an |
arxiv-675761 | cs/0703033 | Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching | <|reference_start|>Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching: In a way similar to the string-to-string correction problem, we address time series similarity in the light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of "edit operations" needed to transform one time series into another. To define the "edit operations" we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call Time Warp Edit Distance (TWED). TWED is slightly different in form from Dynamic Time Warping, Longest Common Subsequence or Edit Distance with Real Penalty algorithms. In particular, it highlights a parameter which drives a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a metric potentially useful in time series retrieval applications since it could benefit from the triangle inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to relate the matching of time series in down-sampled representation spaces to the matching in the original space. The empirical quality of the TWED distance is evaluated on a simple classification task. Compared to Edit Distance, Dynamic Time Warping, Longest Common Subsequence and Edit Distance with Real Penalty, TWED has proven to be quite effective on the considered experimental task.<|reference_end|> | arxiv | @article{marteau2007time,
title={Time Warp Edit Distance with Stiffness Adjustment for Time Series
Matching},
  author={Pierre-Fran\c{c}ois Marteau (VALORIA)},
journal={IEEE Transaction on Pattern Analysis and Machine Intelligence 31,
2 (2009) 306-318},
year={2007},
doi={10.1109/TPAMI.2008.76},
archivePrefix={arXiv},
eprint={cs/0703033},
primaryClass={cs.IR}
} | marteau2007time |
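For one-dimensional series, the TWED recurrence (delete-in-A, delete-in-B, or match, with the stiffness parameter nu multiplying elapsed-time gaps and a constant penalty lambda on deletions) can be transcribed directly into a small dynamic program. The zero-padding at time 0 and the parameter defaults below follow my reading of the recurrence; treat this as a sketch rather than the reference implementation:

```python
import numpy as np

def twed(a, ta, b, tb, nu=0.001, lam=1.0):
    # a, b: 1-D value sequences; ta, tb: increasing time stamps.
    # nu >= 0 is the stiffness along the time axis; lam >= 0 penalizes
    # deletions. Both series are padded with a sample 0 at time 0.
    a = np.concatenate(([0.0], np.asarray(a, float)))
    b = np.concatenate(([0.0], np.asarray(b, float)))
    ta = np.concatenate(([0.0], np.asarray(ta, float)))
    tb = np.concatenate(([0.0], np.asarray(tb, float)))
    n, m = len(a), len(b)
    D = np.full((n, m), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n):
        for j in range(1, m):
            del_a = D[i-1, j] + abs(a[i] - a[i-1]) + nu * (ta[i] - ta[i-1]) + lam
            del_b = D[i, j-1] + abs(b[j] - b[j-1]) + nu * (tb[j] - tb[j-1]) + lam
            match = (D[i-1, j-1] + abs(a[i] - b[j]) + abs(a[i-1] - b[j-1])
                     + nu * (abs(ta[i] - tb[j]) + abs(ta[i-1] - tb[j-1])))
            D[i, j] = min(del_a, del_b, match)
    return D[n-1, m-1]

# Identical series give distance 0; a shifted copy is charged via nu and lam.
print(twed([1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]))   # 0.0
print(twed([1, 2, 3], [1, 2, 3], [2, 3, 4], [1, 2, 3]))   # > 0
```

Setting nu large pushes the measure toward a rigid, Euclidean-like comparison; nu near 0 makes it more warp-tolerant, which is the stiffness adjustment of the title.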
arxiv-675762 | cs/0703034 | Nanoscale Communication with Brownian Motion | <|reference_start|>Nanoscale Communication with Brownian Motion: In this paper, the problem of communicating using chemical messages propagating using Brownian motion, rather than electromagnetic messages propagating as waves in free space or along a wire, is considered. This problem is motivated by nanotechnological and biotechnological applications, where the energy cost of electromagnetic communication might be prohibitive. Models are given for communication using particles that propagate with Brownian motion, and achievable capacity results are given. Under conservative assumptions, it is shown that rates exceeding one bit per particle are achievable.<|reference_end|> | arxiv | @article{eckford2007nanoscale,
title={Nanoscale Communication with Brownian Motion},
author={Andrew W. Eckford},
journal={arXiv preprint arXiv:cs/0703034},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703034},
primaryClass={cs.IT math.IT}
} | eckford2007nanoscale |
arxiv-675763 | cs/0703035 | On the Distortion SNR Exponent of Some Layered Transmission Schemes | <|reference_start|>On the Distortion SNR Exponent of Some Layered Transmission Schemes: We consider the problem of joint source-channel coding for transmitting K samples of a complex Gaussian source over T = bK uses of a block-fading multiple input multiple output (MIMO) channel with M transmit and N receive antennas. We consider the case when we are allowed to code over L blocks. The channel gain is assumed to be constant over a block and channel gains for different blocks are assumed to be independent. The performance measure of interest is the rate of decay of the expected mean squared error with the signal-to-noise ratio (SNR), called the distortion SNR exponent. We first show that using a broadcast strategy of Gunduz and Erkip, but with a different power and rate allocation policy, the optimal distortion SNR exponent can be achieved for bandwidth efficiencies 0 < b < (|N-M|+1)/min(M,N). This is the first time the optimal exponent is characterized for 1/min(M,N) < b < (|N-M |+ 1)/ min(M, N). Also, for b > MNL^2, we show that the broadcast scheme achieves the optimal exponent of MNL. Special cases of this result have been derived for the L=1 case and for M=N=1 by Gunduz and Erkip. We then propose a digital layered transmission scheme that uses both time layering and superposition. This includes many previously known schemes as special cases. The proposed scheme is at least as good as the currently best known schemes for the entire range of bandwidth efficiencies, whereas at least for some M, N, and b, it is strictly better than the currently best known schemes.<|reference_end|> | arxiv | @article{bhattad2007on,
title={On the Distortion SNR Exponent of Some Layered Transmission Schemes},
author={Kapil Bhattad, Krishna R. Narayanan, Giuseppe Caire},
journal={arXiv preprint arXiv:cs/0703035},
year={2007},
doi={10.1109/TIT.2008.924703},
archivePrefix={arXiv},
eprint={cs/0703035},
primaryClass={cs.IT math.IT}
} | bhattad2007on |
arxiv-675764 | cs/0703036 | Constructions of Grassmannian Simplices | <|reference_start|>Constructions of Grassmannian Simplices: In this article an explicit method (relying on representation theory) to construct packings in Grassmannian space is presented. Infinite families of configurations having only one non-trivial set of principal angles are found using 2-transitive groups. These packings are proved to reach the simplex bound and are therefore optimal w.r.t. the chordal distance. The construction is illustrated by an example on the symmetric group. Then some natural extensions and consequences of this situation are given.<|reference_end|> | arxiv | @article{creignou2007constructions,
title={Constructions of Grassmannian Simplices},
author={Jean Creignou (IMB)},
journal={arXiv preprint arXiv:cs/0703036},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703036},
primaryClass={cs.IT math.IT}
} | creignou2007constructions |
arxiv-675765 | cs/0703037 | Constructing Optimal Highways | <|reference_start|>Constructing Optimal Highways: For two points $p$ and $q$ in the plane, a straight line $h$, called a highway, and a real $v>1$, we define the \emph{travel time} (also known as the \emph{city distance}) from $p$ and $q$ to be the time needed to traverse a quickest path from $p$ to $q$, where the distance is measured with speed $v$ on $h$ and with speed 1 in the underlying metric elsewhere. Given a set $S$ of $n$ points in the plane and a highway speed $v$, we consider the problem of finding a \emph{highway} that minimizes the maximum travel time over all pairs of points in $S$. If the orientation of the highway is fixed, the optimal highway can be computed in linear time, both for the $L_1$- and the Euclidean metric as the underlying metric. If arbitrary orientations are allowed, then the optimal highway can be computed in $O(n^{2} \log n)$ time. We also consider the problem of computing an optimal pair of highways, one being horizontal, one vertical.<|reference_end|> | arxiv | @article{ahn2007constructing,
title={Constructing Optimal Highways},
author={Hee-Kap Ahn and Helmut Alt and Tetsuo Asano and Sang Won Bae and Peter
Brass and Otfried Cheong and Christian Knauer and Hyeon-Suk Na and Chan-Su
Shin and Alexander Wolff},
journal={International Journal of Foundations of Computer Science
20(2009):3-23},
year={2007},
doi={10.1142/S0129054109006425},
archivePrefix={arXiv},
eprint={cs/0703037},
primaryClass={cs.CG}
} | ahn2007constructing |
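The travel-time metric itself is simple to evaluate for a fixed highway, say the x-axis with speed v: walk to an entry point, ride along the highway, walk off, or skip the highway entirely. A numeric sketch of that objective follows (the paper's contribution is the combinatorial optimization over highways, not this evaluation; the function name and parameters are mine):

```python
import math
from scipy.optimize import minimize

# City distance between p and q for the highway y = 0 with speed v > 1:
# min over entry (a, 0) and exit (b, 0) of walk + ride + walk, compared
# against walking directly. Nelder-Mead handles the |a - b| kink fine.
def travel_time(p, q, v):
    def t(x):
        a, b = x
        return math.dist(p, (a, 0)) + abs(a - b) / v + math.dist((b, 0), q)
    best = minimize(t, x0=[p[0], q[0]], method="Nelder-Mead").fun
    return min(math.dist(p, q), best)

print(travel_time((0, 1), (10, 1), v=3))  # about 5.22, vs 10.0 on foot
```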
arxiv-675766 | cs/0703038 | Delay and Throughput Optimal Scheduling for OFDM Broadcast Channels | <|reference_start|>Delay and Throughput Optimal Scheduling for OFDM Broadcast Channels: In this paper a scheduling policy is presented which minimizes the average delay of the users. The scheduling scheme is investigated both by analysis and simulations carried out in the context of Orthogonal Frequency Division Multiplexing (OFDM) broadcast channels (BC). First the delay optimality is obtained for a static scenario providing solutions for specific subproblems, then the analysis is carried over to the dynamic scheme. Furthermore auxiliary tools are given for proving throughput optimality. Finally simulations show the superior performance of the presented scheme.<|reference_end|> | arxiv | @article{zhou2007delay,
title={Delay and Throughput Optimal Scheduling for OFDM Broadcast Channels},
author={Chan Zhou and Gerhard Wunder},
  journal={published in Proc. IEEE Globecom 2007},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703038},
primaryClass={cs.IT math.IT}
} | zhou2007delay |
arxiv-675767 | cs/0703039 | Transforming structures by set interpretations | <|reference_start|>Transforming structures by set interpretations: We consider a new kind of interpretation over relational structures: finite sets interpretations. Those interpretations are defined by weak monadic second-order (WMSO) formulas with free set variables. They transform a given structure into a structure with a domain consisting of finite sets of elements of the original structure. The definition of these interpretations directly implies that they send structures with a decidable WMSO theory to structures with a decidable first-order theory. In this paper, we investigate the expressive power of such interpretations applied to infinite deterministic trees. The results can be used in the study of automatic and tree-automatic structures.<|reference_end|> | arxiv | @article{colcombet2007transforming,
title={Transforming structures by set interpretations},
  author={Thomas Colcombet, Christof L\"oding},
journal={Logical Methods in Computer Science, Volume 3, Issue 2 (May 4,
2007) lmcs:2221},
year={2007},
doi={10.2168/LMCS-3(2:4)2007},
archivePrefix={arXiv},
eprint={cs/0703039},
primaryClass={cs.LO}
} | colcombet2007transforming |
arxiv-675768 | cs/0703040 | Why the Standard Data Processing should be changed | <|reference_start|>Why the Standard Data Processing should be changed: The basic statistical methods of data representation have not changed since their emergence. Their simplicity was dictated by the intricacies of computation in the pre-computer era. It turns out that this approach is not the only one possible now that fast computers are available. The method suggested here significantly improves the reliability of data processing and of the graphical representation of the data. In this paper we show problems of standard data processing which can lead to incorrect results. A method solving these problems is proposed, based on a modification of the data representation. The method was implemented in a computer program Consensus5. The program's performance is illustrated through varied examples.<|reference_end|> | arxiv | @article{bakman2007why,
title={Why the Standard Data Processing should be changed},
author={Yefim Bakman (Tel-Aviv University)},
journal={arXiv preprint arXiv:cs/0703040},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703040},
primaryClass={cs.MS}
} | bakman2007why |
arxiv-675769 | cs/0703041 | Comparing Architectures of Mobile Applications | <|reference_start|>Comparing Architectures of Mobile Applications: This article describes various advantages and disadvantages of SMS, WAP, J2ME and Windows CE technologies in designing mobile applications. In defining the architecture of any software application it is important to get the best trade-off between the platform's capabilities and the design requirements. Achieving an optimal software design is even more important with mobile applications, where all computing resources are limited. Therefore, it is important to have a comparative analysis of all relevant contemporary approaches to designing mobile applications. As always, the choice between these technologies is determined by application requirements and system capabilities.<|reference_end|> | arxiv | @article{fertalj2007comparing,
title={Comparing Architectures of Mobile Applications},
author={Kresimir Fertalj, Marko Horvat},
journal={WSEAS Trans. on COMMUNICATIONS, Issue 4, Volume 3, October 2004,
pp. 946-952},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703041},
primaryClass={cs.OH}
} | fertalj2007comparing |
arxiv-675770 | cs/0703042 | Recommender System for Online Dating Service | <|reference_start|>Recommender System for Online Dating Service: Users of online dating sites are facing information overload that requires them to manually construct queries and browse huge amount of matching user profiles. This becomes even more problematic for multimedia profiles. Although matchmaking is frequently cited as a typical application for recommender systems, there is a surprising lack of work published in this area. In this paper we describe a recommender system we implemented and perform a quantitative comparison of two collaborative filtering (CF) and two global algorithms. Results show that collaborative filtering recommenders significantly outperform global algorithms that are currently used by dating sites. A blind experiment with real users also confirmed that users prefer CF based recommendations to global popularity recommendations. Recommender systems show a great potential for online dating where they could improve the value of the service to users and improve monetization of the service.<|reference_end|> | arxiv | @article{brozovsky2007recommender,
title={Recommender System for Online Dating Service},
author={Lukas Brozovsky, Vaclav Petricek},
journal={arXiv preprint arXiv:cs/0703042},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703042},
primaryClass={cs.IR cs.SE}
} | brozovsky2007recommender |
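For readers unfamiliar with the CF baseline being compared against global popularity, a minimal user-based variant looks like the following. The mean-centering and Pearson-style weighting are generic textbook choices, not necessarily the exact algorithms benchmarked in the paper:

```python
import numpy as np

# Predict a user's unknown rating as a similarity-weighted average of other
# users' mean-centered ratings. R holds ratings with np.nan for missing.
def predict(R, user, item):
    mask = ~np.isnan(R)
    means = np.array([R[u][mask[u]].mean() for u in range(R.shape[0])])
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == user or not mask[v, item]:
            continue
        common = mask[user] & mask[v]          # items both users rated
        if common.sum() < 2:
            continue
        a = R[user, common] - means[user]
        b = R[v, common] - means[v]
        norm = np.linalg.norm(a) * np.linalg.norm(b)
        if norm == 0:
            continue
        sim = float(a @ b) / norm              # Pearson-style similarity
        num += sim * (R[v, item] - means[v])
        den += abs(sim)
    return means[user] + (num / den if den else 0.0)

nan = np.nan
R = np.array([[5, 4, nan, 1],
              [4, 5, 2, nan],
              [1, 2, 5, 4],
              [nan, 1, 4, 5]], dtype=float)
print(predict(R, user=0, item=2))  # predicts a low rating for this user
```

A global-popularity baseline, by contrast, would return the item's mean rating for every user, which is what the paper reports CF outperforming.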
arxiv-675771 | cs/0703043 | A Comparison of On-Line Computer Science Citation Databases | <|reference_start|>A Comparison of On-Line Computer Science Citation Databases: This paper examines the difference and similarities between the two on-line computer science citation databases DBLP and CiteSeer. The database entries in DBLP are inserted manually while the CiteSeer entries are obtained autonomously via a crawl of the Web and automatic processing of user submissions. CiteSeer's autonomous citation database can be considered a form of self-selected on-line survey. It is important to understand the limitations of such databases, particularly when citation information is used to assess the performance of authors, institutions and funding bodies. We show that the CiteSeer database contains considerably fewer single author papers. This bias can be modeled by an exponential process with intuitive explanation. The model permits us to predict that the DBLP database covers approximately 24% of the entire literature of Computer Science. CiteSeer is also biased against low-cited papers. Despite their difference, both databases exhibit similar and significantly different citation distributions compared with previous analysis of the Physics community. In both databases, we also observe that the number of authors per paper has been increasing over time.<|reference_end|> | arxiv | @article{petricek2007a,
title={A Comparison of On-Line Computer Science Citation Databases},
author={Vaclav Petricek, Ingemar J. Cox, Hui Han, Isaac G. Councill, C. Lee
Giles},
journal={arXiv preprint arXiv:cs/0703043},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703043},
primaryClass={cs.DL}
} | petricek2007a |
arxiv-675772 | cs/0703044 | BrlAPI: Simple, Portable, Concurrent, Application-level Control of Braille Terminals | <|reference_start|>BrlAPI: Simple, Portable, Concurrent, Application-level Control of Braille Terminals: Screen readers can drive braille devices for allowing visually impaired users to access computer environments, by providing them the same information as sighted users. But in some cases, this view is not easy to use on a braille device. In such cases, it would be much more useful to let applications provide their own braille feedback, specially adapted to visually impaired users. Such applications would then need the ability to output braille; however, allowing both screen readers and applications to access a wide range of braille devices is not a trivial task. We present an abstraction layer that applications may use to communicate with braille devices. They do not need to deal with the specificities of each device, but can do so if necessary. We show how several applications can communicate with one braille device concurrently, with BrlAPI making sensible choices about which application eventually gets access to the device. The description of a widely used implementation of BrlAPI is included.<|reference_end|> | arxiv | @article{thibault2007brlapi:,
title={BrlAPI: Simple, Portable, Concurrent, Application-level Control of
Braille Terminals},
author={Samuel Thibault (INRIA Futurs), Sebastien Hinderer (INRIA Lorraine -
LORIA)},
  journal={In International Conference on Information and Communication
Technology Accessibility (ICTA) (2007)},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703044},
primaryClass={cs.HC}
} | thibault2007brlapi: |
arxiv-675773 | cs/0703045 | Performance Bounds on Sparse Representations Using Redundant Frames | <|reference_start|>Performance Bounds on Sparse Representations Using Redundant Frames: We consider approximations of signals by the elements of a frame in a complex vector space of dimension $N$ and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal $\mathbf{r}$ given that such representations exist. In this case, we explicitly construct a frame, referred to as the Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using $O(N^2)$ operations, as long as the number of non-zero coefficients in the sparse representation of $\mathbf{r}$ is $\epsilon N$ for some $0 \le \epsilon \le 0.5$, thus improving on a result of Candes and Tao \cite{Candes-Tao}. We also show that $\epsilon \le 0.5$ cannot be relaxed without violating uniqueness. The noisy sparse representation problem is to find sparse representations of a signal $\mathbf{r}$ satisfying a distortion criterion. In this case, we establish a lower bound on the trade-off between the sparsity of the representation, the underlying distortion and the redundancy of any given frame.<|reference_end|> | arxiv | @article{akçakaya2007performance,
title={Performance Bounds on Sparse Representations Using Redundant Frames},
  author={Mehmet Ak\c{c}akaya and Vahid Tarokh},
journal={arXiv preprint arXiv:cs/0703045},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703045},
primaryClass={cs.IT math.IT}
} | akçakaya2007performance |
arxiv-675774 | cs/0703046 | Optimal Power Allocation for Distributed Detection over MIMO Channels in Wireless Sensor Networks | <|reference_start|>Optimal Power Allocation for Distributed Detection over MIMO Channels in Wireless Sensor Networks: In distributed detection systems with wireless sensor networks, the communication between sensors and a fusion center is not perfect due to interference and limited transmitter power at the sensors to combat noise at the fusion center's receiver. The problem of optimizing detection performance with such imperfect communication brings a new challenge to distributed detection. In this paper, sensors are assumed to have independent but nonidentically distributed observations, and a multi-input/multi-output (MIMO) channel model is included to account for imperfect communication between the sensors and the fusion center. The J-divergence between the distributions of the detection statistic under different hypotheses is used as a performance criterion in order to provide a tractable analysis. Optimizing the performance (in terms of the J-divergence) with individual and total transmitter power constraints on the sensors is studied, and the corresponding power allocation scheme is provided. It is interesting to see that the proposed power allocation is a tradeoff between two factors, the communication channel quality and the local decision quality. For the case with orthogonal channels under certain conditions, the power allocation can be solved by a weighted water-filling algorithm. Simulations show that, to achieve the same performance, the proposed power allocation in certain cases only consumes as little as 25 percent of the total power used by an equal power allocation scheme.<|reference_end|> | arxiv | @article{zhang2007optimal,
title={Optimal Power Allocation for Distributed Detection over MIMO Channels in
Wireless Sensor Networks},
author={Xin Zhang, H. Vincent Poor and Mung Chiang},
journal={arXiv preprint arXiv:cs/0703046},
year={2007},
doi={10.1109/TSP.2008.924639},
archivePrefix={arXiv},
eprint={cs/0703046},
primaryClass={cs.IT math.IT}
} | zhang2007optimal |
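The power allocation above reduces, for orthogonal channels, to a weighted water-filling. A minimal sketch of the classical (unweighted) water-filling over orthogonal channels follows; the gains and power budget are made-up examples, and this is the generic algorithm family, not the paper's J-divergence-weighted variant.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Maximize sum_i log(1 + g_i p_i) s.t. sum_i p_i = total_power, p_i >= 0."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:                       # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

print(water_filling([2.0, 1.0, 0.25], total_power=1.0))   # ~[0.75, 0.25, 0.]
```

Channels whose inverse gain lies above the water level receive no power, which mirrors the tradeoff noted in the abstract: poor channels (or poor local decisions) are cut off entirely.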
arxiv-675775 | cs/0703047 | Precoding for the AWGN Channel with Discrete Interference | <|reference_start|>Precoding for the AWGN Channel with Discrete Interference: $M$-ary signal transmission over AWGN channel with additive $Q$-ary interference where the sequence of i.i.d. interference symbols is known causally at the transmitter is considered. Shannon's theorem for channels with side information at the transmitter is used to formulate the capacity of the channel. It is shown that by using at most $MQ-Q+1$ out of $M^Q$ input symbols of the \emph{associated} channel, the capacity is achievable. For the special case where the Gaussian noise power is zero, a sufficient condition, which is independent of interference, is given for the capacity to be $\log_2 M$ bits per channel use. The problem of maximization of the transmission rate under the constraint that the channel input given any current interference symbol is uniformly distributed over the channel input alphabet is investigated. For this setting, the general structure of a communication system with optimal precoding is proposed. The extension of the proposed precoding scheme to continuous channel input alphabet is also investigated.<|reference_end|> | arxiv | @article{farmanbar2007precoding,
title={Precoding for the AWGN Channel with Discrete Interference},
author={Hamid Farmanbar and Amir K. Khandani},
journal={arXiv preprint arXiv:cs/0703047},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703047},
primaryClass={cs.IT math.IT}
} | farmanbar2007precoding |
arxiv-675776 | cs/0703048 | Path Loss Models Based on Stochastic Rays | <|reference_start|>Path Loss Models Based on Stochastic Rays: In this paper, two-dimensional percolation lattices are applied to describe wireless propagation environment, and stochastic rays are employed to model the trajectories of radio waves. We first derive the probability that a stochastic ray undergoes certain number of collisions at a specific spatial location. Three classes of stochastic rays with different constraint conditions are considered: stochastic rays of random walks, and generic stochastic rays with two different anomalous levels. Subsequently, we obtain the closed-form formulation of mean received power of radio waves under non line-of-sight conditions for each class of stochastic ray. Specifically, the determination of model parameters and the effects of lattice structures on the path loss are investigated. The theoretical results are validated by comparison with experimental data.<|reference_end|> | arxiv | @article{hu2007path,
title={Path Loss Models Based on Stochastic Rays},
author={Luoquan Hu, Han Yu, Yifan Chen},
journal={arXiv preprint arXiv:cs/0703048},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703048},
primaryClass={cs.IT math.IT}
} | hu2007path |
arxiv-675777 | cs/0703049 | Algorithm of Segment-Syllabic Synthesis in Speech Recognition Problem | <|reference_start|>Algorithm of Segment-Syllabic Synthesis in Speech Recognition Problem: Speech recognition based on the syllable segment is discussed in this paper. The principal search methods in the space of states for the speech recognition problem via segment-syllabic parameter-trajectory synthesis are investigated. Recognition is realized as a comparison of the parameter trajectories of chosen speech units on the sections of the segmented speech. Some experimental results are given and discussed.<|reference_end|> | arxiv | @article{karpov2007algorithm,
title={Algorithm of Segment-Syllabic Synthesis in Speech Recognition Problem},
author={Oleg N. Karpov, Olga A. Savenkova},
journal={arXiv preprint arXiv:cs/0703049},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703049},
primaryClass={cs.SD cs.CL}
} | karpov2007algorithm |
arxiv-675778 | cs/0703050 | On The Capacity Deficit of Mobile Wireless Ad Hoc Networks: A Rate Distortion Formulation | <|reference_start|>On The Capacity Deficit of Mobile Wireless Ad Hoc Networks: A Rate Distortion Formulation: Overheads incurred by routing protocols diminish the capacity available for relaying useful data in a mobile wireless ad hoc network. Discovering lower bounds on the amount of protocol overhead incurred for routing data packets is important for the development of efficient routing protocols, and for characterizing the actual (effective) capacity available for network users. This paper presents an information-theoretic framework for characterizing the minimum routing overheads of geographic routing in a network with mobile nodes. Specifically, the minimum overhead problem is formulated as a rate-distortion problem. The formulation may be applied to networks with arbitrary traffic arrival and location service schemes. Lower bounds are derived for the minimum overheads incurred for maintaining the location of destination nodes and consistent neighborhood information in terms of node mobility and the packet arrival process. This leads to a characterization of the deficit caused by the routing overheads on the overall transport capacity.<|reference_end|> | arxiv | @article{bisnik2007on,
title={On The Capacity Deficit of Mobile Wireless Ad Hoc Networks: A Rate
Distortion Formulation},
author={Nabhendra Bisnik, Alhussein A. Abouzeid},
journal={arXiv preprint arXiv:cs/0703050},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703050},
primaryClass={cs.IT math.IT}
} | bisnik2007on |
arxiv-675779 | cs/0703051 | An ExpTime Procedure for Description Logic $\mathcalALCQI$ (Draft) | <|reference_start|>An ExpTime Procedure for Description Logic $\mathcalALCQI$ (Draft): A worst-case ExpTime tableau-based decision procedure is outlined for the satisfiability problem in $\mathcal{ALCQI}$ w.r.t. general axioms.<|reference_end|> | arxiv | @article{ding2007an,
title={An ExpTime Procedure for Description Logic $\mathcal{ALCQI}$ (Draft)},
author={Yu Ding},
journal={arXiv preprint arXiv:cs/0703051},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703051},
primaryClass={cs.LO}
} | ding2007an |
arxiv-675780 | cs/0703052 | On the densest MIMO lattices from cyclic division algebras | <|reference_start|>On the densest MIMO lattices from cyclic division algebras: It is shown why the discriminant of a maximal order within a cyclic division algebra must be minimized in order to get the densest possible matrix lattices with a prescribed nonvanishing minimum determinant. Using results from class field theory a lower bound to the minimum discriminant of a maximal order with a given center and index (= the number of Tx/Rx antennas) is derived. Also numerous examples of division algebras achieving our bound are given. E.g. we construct a matrix lattice with QAM coefficients that has 2.5 times as many codewords as the celebrated Golden code of the same minimum determinant. We describe a general algorithm due to Ivanyos and Ronyai for finding maximal orders within a cyclic division algebra and discuss our enhancements to this algorithm. We also consider general methods for finding cyclic division algebras of a prescribed index achieving our lower bound.<|reference_end|> | arxiv | @article{hollanti2007on,
title={On the densest MIMO lattices from cyclic division algebras},
author={C. Hollanti, J. Lahtonen, K. Ranto, R. Vehkalahti},
journal={IEEE Trans. Inf. Theory, vol. 55(8), Aug. 2009, pp. 3751-3780},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703052},
primaryClass={cs.IT math.IT}
} | hollanti2007on |
arxiv-675781 | cs/0703053 | Extraction of cartographic objects in high resolution satellite images for object model generation | <|reference_start|>Extraction of cartographic objects in high resolution satellite images for object model generation: The aim of this study is to detect man-made cartographic objects in high-resolution satellite images. New generation satellites offer a sub-metric spatial resolution, in which it is possible (and necessary) to develop methods at object level rather than at pixel level, and to exploit structural features of objects. With this aim, a method to generate structural object models from manually segmented images has been developed. To generate the model from non-segmented images, extraction of the objects from the sample images is required. A hybrid method of extraction (both in terms of input sources and segmentation algorithms) is proposed: A region based segmentation is applied on a 10 meter resolution multi-spectral image. The result is used as marker in a "marker-controlled watershed method using edges" on a 2.5 meter resolution panchromatic image. Very promising results have been obtained even on images where the limits of the target objects are not apparent.<|reference_end|> | arxiv | @article{erus2007extraction,
title={Extraction of cartographic objects in high resolution satellite images
for object model generation},
author={Guray Erus (CRIP5), Nicolas Lom\'enie (CRIP5)},
journal={4th Workshop on Pattern Recognition in Remote Sensing in
conjunction with ICPR2006 (08/2006) 00-00},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703053},
primaryClass={cs.CV}
} | erus2007extraction |
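A minimal sketch of the marker-controlled watershed step this abstract describes, assuming scikit-image is available; a synthetic image and hand-placed seeds stand in for the panchromatic band and the coarse multi-spectral segmentation used as markers in the paper.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
image = np.zeros((80, 80))
image[20:40, 20:40] = 1.0                      # a bright square "building"
image += 0.05 * rng.standard_normal(image.shape)

edges = sobel(image)                           # gradient magnitude as the relief
markers = np.zeros(image.shape, dtype=int)
markers[5, 5] = 1                              # background seed
markers[30, 30] = 2                            # object seed (from the coarse band)

labels = watershed(edges, markers)             # flood from the seeds along edges
print((labels == 2).sum())                     # roughly the square's 400 pixels
```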
arxiv-675782 | cs/0703054 | Linear time algorithms for Clobber | <|reference_start|>Linear time algorithms for Clobber: We prove that the single-player game clobber is solvable in linear time when played on a line or on a cycle. For this purpose, we show that this game is equivalent to an optimization problem on a set of words defined by seven classes of forbidden patterns. We also prove that, playing on the cycle, it is always possible to remove at least 2n/3 pawns, and we give a configuration for which it is not possible to do better, answering questions recently asked by Faria et al.<|reference_end|> | arxiv | @article{blondel2007linear,
title={Linear time algorithms for Clobber},
author={Vincent D. Blondel, Julien M. Hendrickx and Raphael M. Jungers},
journal={arXiv preprint arXiv:cs/0703054},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703054},
primaryClass={cs.GT}
} | blondel2007linear |
arxiv-675783 | cs/0703055 | Support and Quantile Tubes | <|reference_start|>Support and Quantile Tubes: This correspondence studies an estimator of the conditional support of a distribution underlying a set of i.i.d. observations. The relation with mutual information is shown via an extension of Fano's theorem in combination with a generalization bound based on a compression argument. Extensions to estimating the conditional quantile interval, and statistical guarantees on the minimal convex hull are given.<|reference_end|> | arxiv | @article{pelckmans2007support,
title={Support and Quantile Tubes},
author={Kristiaan Pelckmans, Jos De Brabanter, Johan A.K. Suykens, Bart De
Moor},
journal={arXiv preprint arXiv:cs/0703055},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703055},
primaryClass={cs.IT cs.LG math.IT}
} | pelckmans2007support |
arxiv-675784 | cs/0703056 | Unassuming View-Size Estimation Techniques in OLAP | <|reference_start|>Unassuming View-Size Estimation Techniques in OLAP: Even if storage was infinite, a data warehouse could not materialize all possible views due to the running time and update requirements. Therefore, it is necessary to estimate quickly, accurately, and reliably the size of views. Many available techniques make particular statistical assumptions and their error can be quite large. Unassuming techniques exist, but typically assume we have independent hashing for which there is no known practical implementation. We adapt an unassuming estimator due to Gibbons and Tirthapura: its theoretical bounds do not make unpractical assumptions. We compare this technique experimentally with stochastic probabilistic counting, LogLog probabilistic counting, and multifractal statistical models. Our experiments show that we can reliably and accurately (within 10%, 19 times out of 20) estimate view sizes over large data sets (1.5 GB) within minutes, using almost no memory. However, only Gibbons-Tirthapura provides universally tight estimates irrespective of the size of the view. For large views, probabilistic counting has a small edge in accuracy, whereas the competitive sampling-based method (multifractal) we tested is an order of magnitude faster but can sometimes provide poor estimates (relative error of 100%). In our tests, LogLog probabilistic counting is not competitive. Experimental validation on the US Census 1990 data set and on the Transaction Processing Performance (TPC H) data set is provided.<|reference_end|> | arxiv | @article{aouiche2007unasssuming,
title={Unassuming View-Size Estimation Techniques in OLAP},
author={Kamel Aouiche and Daniel Lemire},
journal={arXiv preprint arXiv:cs/0703056},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703056},
primaryClass={cs.DB cs.PF}
} | aouiche2007unasssuming |
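A minimal sketch of a Gibbons-Tirthapura-style distinct-count (view-size) estimator of the kind adapted in this abstract: keep a hash sample, halving the sampling rate whenever the buffer overflows. The SHA-1 hash and buffer size are illustrative choices, not the paper's configuration.

```python
import hashlib

def estimate_distinct(stream, buffer_size=256):
    depth, sample = 0, set()
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        if h % (1 << depth) == 0:              # survives the current sampling level
            sample.add(h)
            while len(sample) > buffer_size:   # overflow: halve the sampling rate
                depth += 1
                sample = {v for v in sample if v % (1 << depth) == 0}
    return len(sample) * (1 << depth)

# A "view" with 100,000 distinct keys, fed as a stream of repeated rows.
rows = (i % 100_000 for i in range(500_000))
print(estimate_distinct(rows))                 # close to 100000 w.h.p.
```

The estimate uses memory bounded by the buffer size regardless of the view size, which is what makes such sketches practical for the data sets mentioned above.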
arxiv-675785 | cs/0703057 | Doppler Resilient Waveforms with Perfect Autocorrelation | <|reference_start|>Doppler Resilient Waveforms with Perfect Autocorrelation: We describe a method of constructing a sequence of phase coded waveforms with perfect autocorrelation in the presence of Doppler shift. The constituent waveforms are Golay complementary pairs which have perfect autocorrelation at zero Doppler but are sensitive to nonzero Doppler shifts. We extend this construction to multiple dimensions, in particular to radar polarimetry, where the two dimensions are realized by orthogonal polarizations. Here we determine a sequence of two-by-two Alamouti matrices where the entries involve Golay pairs and for which the sum of the matrix-valued ambiguity functions vanish at small Doppler shifts. The Prouhet-Thue-Morse sequence plays a key role in the construction of Doppler resilient sequences of Golay pairs.<|reference_end|> | arxiv | @article{pezeshki2007doppler,
title={Doppler Resilient Waveforms with Perfect Autocorrelation},
author={Ali Pezeshki, A. Robert Calderbank, William Moran, and Stephen D.
Howard},
journal={arXiv preprint arXiv:cs/0703057},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703057},
primaryClass={cs.IT math.IT}
} | pezeshki2007doppler |
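The Golay complementary-pair property this construction builds on is easy to verify numerically: the aperiodic autocorrelations of the pair sum to a delta. A short sketch using the standard recursive construction (the pair length is illustrative).

```python
import numpy as np

def golay_pair(m):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):                         # length doubles at each step
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):                                  # aperiodic autocorrelation
    return np.correlate(x, x, mode="full")

a, b = golay_pair(6)                           # a length-64 pair
s = acorr(a) + acorr(b)
print(np.flatnonzero(np.abs(s) > 1e-9))        # only the zero-lag tap: [63]
print(s[len(a) - 1])                           # 2N = 128.0
```

Under a Doppler shift the sidelobes no longer cancel, which is the sensitivity the Prouhet-Thue-Morse sequencing of transmissions is designed to suppress.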
arxiv-675786 | cs/0703058 | A Comparison of Five Probabilistic View-Size Estimation Techniques in OLAP | <|reference_start|>A Comparison of Five Probabilistic View-Size Estimation Techniques in OLAP: A data warehouse cannot materialize all possible views, hence we must estimate quickly, accurately, and reliably the size of views to determine the best candidates for materialization. Many available techniques for view-size estimation make particular statistical assumptions and their error can be large. Comparatively, unassuming probabilistic techniques are slower, but they estimate accurately and reliably very large view sizes using little memory. We compare five unassuming hashing-based view-size estimation techniques including Stochastic Probabilistic Counting and LogLog Probabilistic Counting. Our experiments show that only Generalized Counting, Gibbons-Tirthapura, and Adaptive Counting provide universally tight estimates irrespective of the size of the view; of those, only Adaptive Counting remains constantly fast as we increase the memory budget.<|reference_end|> | arxiv | @article{aouiche2007a,
title={A Comparison of Five Probabilistic View-Size Estimation Techniques in
OLAP},
author={Kamel Aouiche and Daniel Lemire},
journal={arXiv preprint arXiv:cs/0703058},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703058},
primaryClass={cs.DB cs.PF}
} | aouiche2007a |
arxiv-675787 | cs/0703059 | Geometry and the complexity of matrix multiplication | <|reference_start|>Geometry and the complexity of matrix multiplication: We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i.) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii.) to motivate researchers to work on these questions, and (iii.) to point out relations with more general problems in geometry. The key geometric objects for our study are the secant varieties of Segre varieties. We explain how these varieties are also useful for algebraic statistics, the study of phylogenetic invariants, and quantum computing.<|reference_end|> | arxiv | @article{landsberg2007geometry,
title={Geometry and the complexity of matrix multiplication},
author={J.M. Landsberg},
journal={arXiv preprint arXiv:cs/0703059},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703059},
primaryClass={cs.CC math.AG math.RT}
} | landsberg2007geometry |
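Strassen's seven-multiplication scheme for 2x2 matrices, i.e. the rank-7 decomposition of the matrix multiplication tensor, is the prototypical object behind the complexity questions surveyed here; a minimal sketch:

```python
import numpy as np

def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)          # seven products instead of eight
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
print(np.allclose(strassen_2x2(A, B), A @ B))   # True
```

Applied recursively on blocks, the seven products give the O(n^{log2 7}) bound; whether the tensor's rank (and hence the exponent) can be pushed lower is exactly the geometric question the survey addresses.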
arxiv-675788 | cs/0703060 | Redesigning Decision Matrix Method with an indeterminacy-based inference process | <|reference_start|>Redesigning Decision Matrix Method with an indeterminacy-based inference process: For academics and practitioners concerned with computers, business and mathematics, one central issue is supporting decision makers. In this paper, we propose a generalization of Decision Matrix Method (DMM), using Neutrosophic logic. It emerges as an alternative to the existing logics and it represents a mathematical model of uncertainty and indeterminacy. This paper proposes the Neutrosophic Decision Matrix Method as a more realistic tool for decision making. In addition, a de-neutrosophication process is included.<|reference_end|> | arxiv | @article{salmeron2007redesigning,
title={Redesigning Decision Matrix Method with an indeterminacy-based inference
process},
author={Jose L. Salmeron, Florentin Smarandache},
journal={A short version published in Advances in Fuzzy Sets and Systems,
Vol. 1(2), 263-271, 2006},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703060},
primaryClass={cs.AI}
} | salmeron2007redesigning |
arxiv-675789 | cs/0703061 | Coding for Errors and Erasures in Random Network Coding | <|reference_start|>Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.<|reference_end|> | arxiv | @article{koetter2007coding,
title={Coding for Errors and Erasures in Random Network Coding},
author={Ralf Koetter and Frank Kschischang},
journal={arXiv preprint arXiv:cs/0703061},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703061},
primaryClass={cs.IT cs.NI math.IT}
} | koetter2007coding |
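The metric on subspaces used in this abstract is d(U, V) = dim(U + V) - dim(U ∩ V). A minimal sketch computing it over GF(2) by Gaussian elimination, with basis packets as matrix rows; the example spaces are illustrative.

```python
import numpy as np

def rank_gf2(M):
    M = (np.array(M, dtype=np.uint8) & 1)
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # move the pivot row up
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                   # eliminate over GF(2)
        r += 1
    return r

def subspace_distance(U, V):
    union = rank_gf2(np.vstack([U, V]))        # dim(U + V)
    return 2 * union - rank_gf2(U) - rank_gf2(V)

U = [[1, 0, 0, 1], [0, 1, 0, 0]]               # basis packets as rows
V = [[1, 0, 0, 1], [0, 0, 1, 1]]
print(subspace_distance(U, V))                 # 2: the spaces share a 1-dim line
```

Since random linear network coding preserves the span of the injected packets, a receiver gathering U close to the transmitted V in this metric can decode by minimum distance, as described above.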
arxiv-675790 | cs/0703062 | Bandit Algorithms for Tree Search | <|reference_start|>Bandit Algorithms for Tree Search: Bandit based methods for tree search have recently gained popularity when applied to huge trees, e.g. in the game of go (Gelly et al., 2006). The UCT algorithm (Kocsis and Szepesvari, 2006), a tree search method based on Upper Confidence Bounds (UCB) (Auer et al., 2002), is believed to adapt locally to the effective smoothness of the tree. However, we show that UCT is too ``optimistic'' in some cases, leading to a regret O(exp(exp(D))) where D is the depth of the tree. We propose alternative bandit algorithms for tree search. First, a modification of UCT using a confidence sequence that scales exponentially with the horizon depth is proven to have a regret O(2^D \sqrt{n}), but does not adapt to possible smoothness in the tree. We then analyze Flat-UCB performed on the leaves and provide a finite regret bound with high probability. Then, we introduce a UCB-based Bandit Algorithm for Smooth Trees which takes into account actual smoothness of the rewards for performing efficient ``cuts'' of sub-optimal branches with high confidence. Finally, we present an incremental tree search version which applies when the full tree is too big (possibly infinite) to be entirely represented and show that with high probability, essentially only the optimal branches are indefinitely developed. We illustrate these methods on a global optimization problem of a Lipschitz function, given noisy data.<|reference_end|> | arxiv | @article{coquelin2007bandit,
title={Bandit Algorithms for Tree Search},
author={Pierre-Arnaud Coquelin (CMAP), R\'emi Munos (INRIA Futurs)},
journal={arXiv preprint arXiv:cs/0703062},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703062},
primaryClass={cs.LG}
} | coquelin2007bandit |
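A minimal sketch of the UCB1 rule (Auer et al., 2002) that UCT and the variants above apply at each tree node; the Bernoulli arm means and horizon are illustrative.

```python
import math, random

def ucb1(means, horizon=10_000):
    n, s = [0] * len(means), [0.0] * len(means)
    for t in range(1, horizon + 1):
        if t <= len(means):               # initialization: play each arm once
            arm = t - 1
        else:                             # empirical mean plus exploration bonus
            arm = max(range(len(means)),
                      key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        s[arm] += float(random.random() < means[arm])   # Bernoulli reward
        n[arm] += 1
    return n

random.seed(0)
print(ucb1([0.3, 0.5, 0.7]))              # pull counts concentrate on the 0.7 arm
```

UCT runs this rule recursively, treating each child node as an arm; the paper's point is that the resulting optimism can compound badly with depth, motivating the modified confidence sequences.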
arxiv-675791 | cs/0703063 | Convergence and Optimal Buffer Sizing for Window Based AIMD Congestion Control | <|reference_start|>Convergence and Optimal Buffer Sizing for Window Based AIMD Congestion Control: We study the interaction between the AIMD (Additive Increase Multiplicative Decrease) congestion control and a bottleneck router with Drop Tail buffer. We consider the problem in the framework of deterministic hybrid models. First, we show that the hybrid model of the interaction between the AIMD congestion control and bottleneck router always converges to a cyclic behavior. We characterize the cycles. Necessary and sufficient conditions for the absence of multiple jumps of congestion window in the same cycle are obtained. Then, we propose an analytical framework for the optimal choice of the router buffer size. We formulate the problem of the optimal router buffer size as a multi-criteria optimization problem, in which the Lagrange function corresponds to a linear combination of the average goodput and the average delay in the queue. The solution to the optimization problem provides further evidence that the buffer size should be reduced in the presence of traffic aggregation. Our analytical results are confirmed by simulations performed with Simulink and the NS simulator.<|reference_end|> | arxiv | @article{avrachenkov2007convergence,
title={Convergence and Optimal Buffer Sizing for Window Based AIMD Congestion
Control},
author={Konstantin Avrachenkov (INRIA Sophia Antipolis), Urtzi Ayesta (LAAS),
Alexei Piunovskiy},
journal={arXiv preprint arXiv:cs/0703063},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703063},
primaryClass={cs.NI}
} | avrachenkov2007convergence |
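A toy discrete-time sketch of one AIMD window feeding a Drop Tail buffer, illustrating the sawtooth cycles the hybrid model converges to; the fluid update and all parameters are illustrative simplifications, not the paper's model.

```python
def aimd_droptail(capacity=10.0, buffer_size=20.0, alpha=1.0, beta=0.5, steps=60):
    w, q, trace = 1.0, 0.0, []
    for _ in range(steps):
        q = max(q + w - capacity, 0.0)    # queue fed by the window, drained at capacity
        if q > buffer_size:               # Drop Tail overflow: multiplicative decrease
            w *= beta
            q = buffer_size
        else:                             # additive increase per round trip
            w += alpha
        trace.append((w, q))
    return trace

for w, q in aimd_droptail()[-10:]:
    print(f"window={w:6.2f}  queue={q:6.2f}")
```

The window ramps up linearly, overflows the buffer, is cut by the factor beta, and repeats; the buffer-sizing question is then where to place the overflow point to trade queueing delay against goodput.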
arxiv-675792 | cs/0703064 | Automatic Structures: Richness and Limitations | <|reference_start|>Automatic Structures: Richness and Limitations: We study the existence of automatic presentations for various algebraic structures. An automatic presentation of a structure is a description of the universe of the structure by a regular set of words, and the interpretation of the relations by synchronised automata. Our first topic concerns characterising classes of automatic structures. We supply a characterisation of the automatic Boolean algebras, and it is proven that the free Abelian group of infinite rank, as well as certain Fraisse limits, do not have automatic presentations. In particular, the countably infinite random graph and the random partial order do not have automatic presentations. Furthermore, no infinite integral domain is automatic. Our second topic is the isomorphism problem. We prove that the complexity of the isomorphism problem for the class of all automatic structures is $\Sigma_1^1$-complete.<|reference_end|> | arxiv | @article{khoussainov2007automatic,
title={Automatic Structures: Richness and Limitations},
author={Bakhadyr Khoussainov, Andre Nies, Sasha Rubin and Frank Stephan},
journal={Logical Methods in Computer Science, Volume 3, Issue 2 (April 26,
2007) lmcs:2219},
year={2007},
doi={10.2168/LMCS-3(2:2)2007},
archivePrefix={arXiv},
eprint={cs/0703064},
primaryClass={cs.DM cs.LO}
} | khoussainov2007automatic |
arxiv-675793 | cs/0703065 | Satisfying assignments of Random Boolean CSP: Clusters and Overlaps | <|reference_start|>Satisfying assignments of Random Boolean CSP: Clusters and Overlaps: The distribution of overlaps of solutions of a random CSP is an indicator of the overall geometry of its solution space. For random $k$-SAT, nonrigorous methods from Statistical Physics support the validity of the ``one step replica symmetry breaking'' approach. Some of these predictions were rigorously confirmed in \cite{cond-mat/0504070/prl} \cite{cond-mat/0506053}. There it is proved that the overlap distribution of random $k$-SAT, $k\geq 9$, has discontinuous support. Furthermore, Achlioptas and Ricci-Tersenghi proved that, for random $k$-SAT, $k\geq 8$, and constraint densities close enough to the phase transition there exists an exponential number of clusters of satisfying assignments; moreover, the distance between satisfying assignments in different clusters is linear. We aim to understand the structural properties of random CSP that lead to solution clustering. To this end, we prove two results on the cluster structure of solutions for binary CSP under the random model from Molloy (STOC 2002): 1. For all constraint sets $S$ (described explicitly in Creignou and Daude (2004), Istrate (2005)) s.t. $SAT(S)$ has a sharp threshold and all $q\in (0,1]$, $q$-overlap-$SAT(S)$ has a sharp threshold (i.e. the first step of the approach in Mora et al. works in all nontrivial cases). 2. For any constraint density value $c<1$, the set of solutions of a random instance of 2-SAT forms, w.h.p., a single cluster. Also, for any $q\in (0,1]$, such an instance has w.h.p. two satisfying assignments of overlap $\sim q$. Thus, as expected from Statistical Physics predictions, the second step of the approach in Mora et al. fails for 2-SAT.<|reference_end|> | arxiv | @article{istrate2007satisfying,
title={Satisfying assignments of Random Boolean CSP: Clusters and Overlaps},
author={Gabriel Istrate},
journal={arXiv preprint arXiv:cs/0703065},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703065},
primaryClass={cs.DM cond-mat.dis-nn cs.CC}
} | istrate2007satisfying |
arxiv-675794 | cs/0703066 | Discriminating and Identifying Codes in the Binary Hamming Space | <|reference_start|>Discriminating and Identifying Codes in the Binary Hamming Space: Let $F^n$ be the binary $n$-cube, or binary Hamming space of dimension $n$, endowed with the Hamming distance, and ${\cal E}^n$ (respectively, ${\cal O}^n$) the set of vectors with even (respectively, odd) weight. For $r\geq 1$ and $x\in F^n$, we denote by $B_r(x)$ the ball of radius $r$ and centre $x$. A code $C\subseteq F^n$ is said to be $r$-identifying if the sets $B_r(x) \cap C$, $x\in F^n$, are all nonempty and distinct. A code $C\subseteq {\cal E}^n$ is said to be $r$-discriminating if the sets $B_r(x) \cap C$, $x\in {\cal O}^n$, are all nonempty and distinct. We show that the two definitions, which were given for general graphs, are equivalent in the case of the Hamming space, in the following sense: for any odd $r$, there is a bijection between the set of $r$-identifying codes in $F^n$ and the set of $r$-discriminating codes in $F^{n+1}$. We then extend previous studies on constructive upper bounds for the minimum cardinalities of identifying codes in the Hamming space.<|reference_end|> | arxiv | @article{cohen2007discriminating,
title={Discriminating and Identifying Codes in the Binary Hamming Space},
author={I. Charon, G. Cohen, O. Hudry, A. Lobstein},
journal={arXiv preprint arXiv:cs/0703066},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703066},
primaryClass={cs.DM}
} | cohen2007discriminating |
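A brute-force sketch of the r-identifying property defined in this abstract (the sets B_r(x) ∩ C must be nonempty and pairwise distinct over all x in F^n), feasible only for tiny n; vertices are encoded as integers and the example code is illustrative.

```python
def is_r_identifying(code, n, r):
    def trace(x):                          # B_r(x) ∩ C, as a frozenset
        return frozenset(c for c in code if bin(x ^ c).count("1") <= r)
    traces = [trace(x) for x in range(1 << n)]
    return all(traces) and len(set(traces)) == len(traces)

# The whole cube F^3 is itself a 1-identifying code of F^3.
print(is_r_identifying(set(range(8)), n=3, r=1))   # True
```

The constructive bounds in the paper aim at codes far smaller than the whole cube while keeping all these ball traces distinct.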
arxiv-675795 | cs/0703067 | Target assignment for robotic networks: asymptotic performance under limited communication | <|reference_start|>Target assignment for robotic networks: asymptotic performance under limited communication: We are given an equal number of mobile robotic agents, and distinct target locations. Each agent has simple integrator dynamics, a limited communication range, and knowledge of the position of every target. We address the problem of designing a distributed algorithm that allows the group of agents to divide the targets among themselves and, simultaneously, leads each agent to reach its unique target. We do not require connectivity of the communication graph at any time. We introduce a novel assignment-based algorithm with the following features: initial assignments and robot motions follow a greedy rule, and distributed refinements of the assignment exploit an implicit circular ordering of the targets. We prove correctness of the algorithm, and give worst-case asymptotic bounds on the time to complete the assignment as the environment grows with the number of agents. We show that among a certain class of distributed algorithms, our algorithm is asymptotically optimal. The analysis utilizes results on the Euclidean traveling salesperson problem.<|reference_end|> | arxiv | @article{smith2007target,
title={Target assignment for robotic networks: asymptotic performance under
limited communication},
author={Stephen L. Smith, Francesco Bullo},
journal={arXiv preprint arXiv:cs/0703067},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703067},
primaryClass={cs.RO}
} | smith2007target |
arxiv-675796 | cs/0703068 | Option Valuation using Fourier Space Time Stepping | <|reference_start|>Option Valuation using Fourier Space Time Stepping: It is well known that the Black-Scholes-Merton model suffers from several deficiencies. Jump-diffusion and Levy models have been widely used to partially alleviate some of the biases inherent in this classical model. Unfortunately, the resulting pricing problem requires solving a more difficult partial integro-differential equation (PIDE) and although several approaches for solving the PIDE have been suggested in the literature, none are entirely satisfactory. All treat the integral and diffusive terms asymmetrically and are difficult to extend to higher dimensions. We present a new, efficient algorithm, based on transform methods, which symmetrically treats the diffusive and integral terms, is applicable to a wide class of path-dependent options (such as Bermudan, barrier, and shout options) and options on multiple assets, and naturally extends to regime-switching Levy models. We present a concise study of the precision and convergence properties of our algorithm for several classes of options and Levy models and demonstrate that the algorithm is second-order in space and first-order in time for path-dependent options.<|reference_end|> | arxiv | @article{jackson2007option,
title={Option Valuation using Fourier Space Time Stepping},
author={Kenneth R. Jackson, Sebastian Jaimungal, Vladimir Surkov},
journal={arXiv preprint arXiv:cs/0703068},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703068},
primaryClass={cs.CE}
} | jackson2007option |
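A minimal sketch of a single Fourier-space time step for a European call under Black-Scholes dynamics (the simplest Levy case): propagate the payoff in frequency space via the characteristic exponent, then discount. Grid and market parameters are illustrative, and path-dependent options would interleave such steps with early-exercise or barrier constraints.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
N, L = 2**12, 10.0                             # grid size and log-price half-width
x = np.linspace(-L, L, N, endpoint=False)      # grid in x = log(S / S0)
dx = x[1] - x[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # frequencies matching np.fft

# Characteristic exponent of the log-price under Black-Scholes dynamics.
psi = 1j * (r - 0.5 * sigma**2) * omega - 0.5 * sigma**2 * omega**2

payoff = np.maximum(S0 * np.exp(x) - K, 0.0)   # call payoff at maturity

# One step: V(0) = e^{-rT} IFFT[ FFT[V(T)] * exp(psi * T) ], read off at x = 0.
V0 = np.exp(-r * T) * np.real(np.fft.ifft(np.fft.fft(payoff) * np.exp(psi * T)))
print(V0[N // 2])                              # ~10.45, the Black-Scholes price
```

The diffusive and integral parts of a general Levy exponent enter psi on exactly the same footing, which is the symmetry the abstract emphasizes.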
arxiv-675797 | cs/0703069 | Portlet Wrappers using JavaScript | <|reference_start|>Portlet Wrappers using JavaScript: In this paper we extend the classical portal (with static portlets) design with HTML DOM Web clipping on the client browser using dynamic JavaScript portlets: the portal server supplies the usernames/passwords for all services through HTTPS and the client browser retrieves Web pages and cuts/selects/changes the desired parts using paths (XPath) in the Web page structure. This operation brings along a set of advantages: dynamic wrapping of existing legacy websites in the client browser, the reloading of only the changed portlets instead of the whole portal, low bandwidth on the server, the elimination of URL-link re-writing in the portal, and, last but not least, support for Java applets in portlets by putting the login cookies on the client browser. Our solution is compliant with the JSR168 Portlet Specification, allowing portability across all vendor platforms.<|reference_end|> | arxiv | @article{fodor2007portlet,
title={Portlet Wrappers using JavaScript},
author={Paul Fodor},
journal={arXiv preprint arXiv:cs/0703069},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703069},
primaryClass={cs.SE}
} | fodor2007portlet |
arxiv-675798 | cs/0703070 | Flexible Audio Streams | <|reference_start|>Flexible Audio Streams: Tremendous research effort was invested in audio browsers and machine learning techniques to decode the structure of Web pages in order to put them into an audio format. In this paper, we present a simpler and more efficient solution for the creation of an audio browser of VoiceXML generated from RSS/Atom stream feeds. We developed a multimodal (audio and graphical) portal application that offers RSS/Atom feeds. By using our system, the user can interact using voice or graphical commands; listen to and watch digital content, such as news, blog feeds, and podcasts; and even access email and personal schedules. The portal system permits the use of security credentials (user/password authentication) to collect secure RSS/Atom streams in the multimodal browser and connect the user to specific personal services. A series of experiments has been conducted to evaluate the performance of the RSS reader and navigator. Our system is extremely beneficial for a wide range of applications, from interfaces for visually impaired users to browsers for mobile telephone interfaces.<|reference_end|> | arxiv | @article{fodor2007flexible,
title={Flexible Audio Streams},
author={Paul Fodor},
journal={arXiv preprint arXiv:cs/0703070},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703070},
primaryClass={cs.HC}
} | fodor2007flexible |
arxiv-675799 | cs/0703071 | Automatic Annotation of XHTML Pages with Audio Components | <|reference_start|>Automatic Annotation of XHTML Pages with Audio Components: In this paper we present Deiush, a multimodal system for browsing hypertext Web documents. The Deiush system is based on our novel approach to automatically annotate hypertext Web documents (i.e. XHTML pages) with browsable audio components. It combines two key technologies: (1) middleware that automatically segments Web documents through structural and semantic analysis and annotates them with audio components, transforming them into the XHTML+VoiceXML format to represent multimodal dialog; and (2) the Opera browser, an already standardized browser that we adopt as an interface to the XHTML+VoiceXML output of the annotation. This paper describes the annotation technology of Deiush and presents an initial system evaluation.<|reference_end|> | arxiv | @article{fodor2007automatic,
title={Automatic Annotation of XHTML Pages with Audio Components},
author={Paul Fodor},
journal={arXiv preprint arXiv:cs/0703071},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703071},
primaryClass={cs.OH}
} | fodor2007automatic |
arxiv-675800 | cs/0703072 | Domain Directed Dialogs for Decision Processes | <|reference_start|>Domain Directed Dialogs for Decision Processes: The search for a standardized optimum way to communicate using natural language dialog has involved a lot of research. However, due to the diversity of communication domains, we think that this is extremely difficult to achieve and different dialogue management techniques should be applied for different situations. Our work presents the basis of a communication mechanism that supports decision processes, is based on decision trees, and minimizes the number of steps (turn-takes) in the dialogue. The initial dialog workflow is automatically generated and the user's interaction with the system can also change the decision tree and create new dialog paths with optimized cost. The decision tree represents the chronological ordering of the actions (via the parent-child relationship) and uses an object frame to represent the information state (capturing the notion of context). This paper presents our framework, the formalism for interaction and dialogue, and an evaluation of the system compared to relevant dialog planning frameworks (i.e. finite state diagrams, frame-based, information state and planning-based dialogue systems).<|reference_end|> | arxiv | @article{fodor2007domain,
title={Domain Directed Dialogs for Decision Processes},
author={Paul Fodor},
journal={arXiv preprint arXiv:cs/0703072},
year={2007},
archivePrefix={arXiv},
eprint={cs/0703072},
primaryClass={cs.OH}
} | fodor2007domain |