Column         Type     Length / classes
corpus_id      string   7-12
paper_id       string   9-16
title          string   1-261
abstract       string   70-4.02k
source         string   1 distinct value (arxiv)
bibtex         string   208-20.9k
citation_key   string   6-100
arxiv-675401
cs/0701045
Polygon Convexity: Another O(n) Test
<|reference_start|>Polygon Convexity: Another O(n) Test: An n-gon is defined as a sequence P=(V_0,...,V_{n-1}) of n points on the plane. An n-gon P is said to be convex if the boundary of the convex hull of the set {V_0,...,V_{n-1}} of the vertices of P coincides with the union of the edges [V_0,V_1],...,[V_{n-1},V_0]; if, in addition, no three vertices of P are collinear, then P is called strictly convex. We prove that an n-gon P with n \ge 3 is strictly convex if and only if a cyclic shift of the sequence (\alpha_0,...,\alpha_{n-1}) \in [0,2\pi)^n of the angles between the x-axis and the vectors V_1-V_0,...,V_0-V_{n-1} is strictly monotone. A ``non-strict'' version of this result is also proved.<|reference_end|>
arxiv
@article{pinelis2007polygon, title={Polygon Convexity: Another O(n) Test}, author={Iosif Pinelis}, journal={arXiv preprint arXiv:cs/0701045}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701045}, primaryClass={cs.CG cs.DS} }
pinelis2007polygon
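The angle-monotonicity criterion above translates directly into an O(n) test. Below is a minimal Python sketch (our own illustration, not the author's code): a cyclic sequence admits a strictly increasing cyclic shift exactly when it has no equal consecutive values and exactly one cyclic descent; the strictly decreasing case (clockwise traversal) corresponds to exactly one cyclic ascent.

```python
import math

def is_strictly_convex(pts):
    """O(n) strict-convexity test via the criterion above: the n-gon is
    strictly convex iff some cyclic shift of its sequence of edge angles
    in [0, 2*pi) is strictly monotone."""
    n = len(pts)
    if n < 3:
        return False
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % n]
        if (x0, y0) == (x1, y1):
            return False  # repeated vertex: degenerate edge
        angles.append(math.atan2(y1 - y0, x1 - x0) % (2.0 * math.pi))
    # No strictly monotone shift exists if two consecutive angles coincide.
    if any(angles[(i + 1) % n] == angles[i] for i in range(n)):
        return False
    descents = sum(1 for i in range(n) if angles[(i + 1) % n] < angles[i])
    # One descent: counterclockwise convex; n-1 descents: clockwise convex.
    return descents == 1 or descents == n - 1

print(is_strictly_convex([(0, 0), (1, 0), (1, 1), (0, 1)]))          # True
print(is_strictly_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # False
```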
arxiv-675402
cs/0701046
Cooperation Between Stations in Wireless Networks
<|reference_start|>Cooperation Between Stations in Wireless Networks: In a wireless network, mobile nodes (MNs) repeatedly perform tasks such as layer 2 (L2) handoff, layer 3 (L3) handoff and authentication. These tasks are critical, particularly for real-time applications such as VoIP. We propose a novel approach, namely Cooperative Roaming (CR), in which MNs can collaborate with each other and share useful information about the network in which they move. We show how we can achieve seamless L2 and L3 handoffs regardless of the authentication mechanism used and without any changes to either the infrastructure or the protocol. In particular, we provide a working implementation of CR and show how, with CR, MNs can achieve a total L2+L3 handoff time of less than 16 ms in an open network and of about 21 ms in an IEEE 802.11i network. We consider behaviors typical of IEEE 802.11 networks, although many of the concepts and problems addressed here apply to any kind of mobile network.<|reference_end|>
arxiv
@article{forte2007cooperation, title={Cooperation Between Stations in Wireless Networks}, author={Andrea G. Forte, Henning Schulzrinne}, journal={arXiv preprint arXiv:cs/0701046}, year={2007}, number={cucs-044-06}, archivePrefix={arXiv}, eprint={cs/0701046}, primaryClass={cs.NI} }
forte2007cooperation
arxiv-675403
cs/0701047
On vocabulary size of grammar-based codes
<|reference_start|>On vocabulary size of grammar-based codes: We discuss inequalities holding between the vocabulary size, i.e., the number of distinct nonterminal symbols in a grammar-based compression for a string, and the excess length of the respective universal code, i.e., the code-based analog of algorithmic mutual information. The aim is to strengthen inequalities that were discussed in a weaker form in linguistics but shed some light on the redundancy of efficiently computable codes. The main contribution of the paper is a construction of universal grammar-based codes for which the excess lengths can be bounded easily.<|reference_end|>
arxiv
@article{debowski2007on, title={On vocabulary size of grammar-based codes}, author={Lukasz Debowski}, journal={2007 IEEE International Symposium on Information Theory, 91-95}, year={2007}, doi={10.1109/ISIT.2007.4557209}, archivePrefix={arXiv}, eprint={cs/0701047}, primaryClass={cs.IT cs.CL math.IT} }
debowski2007on
arxiv-675404
cs/0701048
Energy Conscious Interactive Communication for Sensor Networks
<|reference_start|>Energy Conscious Interactive Communication for Sensor Networks: In this work, we are concerned with maximizing the lifetime of a cluster of sensors engaged in single-hop communication with a base-station. In a data-gathering network, the spatio-temporal correlation in sensor data induces data-redundancy. Also, interaction between two communicating parties is well-known to reduce communication complexity. This paper proposes a formalism that exploits these two opportunities to reduce the number of bits transmitted by a sensor node in a cluster, hence enhancing its lifetime. We argue that our approach has several inherent advantages in scenarios where the sensor nodes are acutely energy and computing-power constrained but the base-station is not. This provides us with an opportunity to develop communication protocols where most of the computing and communication is done by the base-station. The proposed framework casts the sensor nodes and base-station communication problem as the problem of multiple informants with correlated information communicating with a recipient, and attempts to extend extant work on interactive communication between an informant-recipient pair to such scenarios. Our work makes four major contributions. Firstly, we explicitly show that in such scenarios interaction can help in reducing the communication complexity. Secondly, we show that the order in which the informants communicate with the recipient may determine the communication complexity. Thirdly, we provide the framework to compute the $m$-message communication complexity in such scenarios. Lastly, we prove that in a typical sensor network scenario, the proposed formalism significantly reduces the communication and computational complexities.<|reference_end|>
arxiv
@article{agnihotri2007energy, title={Energy Conscious Interactive Communication for Sensor Networks}, author={Samar Agnihotri and Pavan Nuggehalli}, journal={arXiv preprint arXiv:cs/0701048}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701048}, primaryClass={cs.IT math.IT} }
agnihotri2007energy
arxiv-675405
cs/0701049
On the Complexity of a Derivative Chess Problem
<|reference_start|>On the Complexity of a Derivative Chess Problem: We introduce QUEENS, a derivative chess problem based on the classical n-queens problem. We prove that QUEENS is NP-complete with respect to polynomial-time reductions.<|reference_end|>
arxiv
@article{martin2007on, title={On the Complexity of a Derivative Chess Problem}, author={Barnaby Martin}, journal={arXiv preprint arXiv:cs/0701049}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701049}, primaryClass={cs.CC} }
martin2007on
arxiv-675406
cs/0701050
A Simple Proof of the Entropy-Power Inequality via Properties of Mutual Information
<|reference_start|>A Simple Proof of the Entropy-Power Inequality via Properties of Mutual Information: While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, Shannon's entropy power inequality (EPI) seems to be an exception: available information theoretic proofs of the EPI hinge on integral representations of differential entropy using either Fisher's information (FI) or minimum mean-square error (MMSE). In this paper, we first present a unified view of proofs via FI and MMSE, showing that they are essentially dual versions of the same proof, and then fill the gap by providing a new, simple proof of the EPI, which is solely based on the properties of mutual information and sidesteps both FI and MMSE representations.<|reference_end|>
arxiv
@article{rioul2007a, title={A Simple Proof of the Entropy-Power Inequality via Properties of Mutual Information}, author={Olivier Rioul}, journal={arXiv preprint arXiv:cs/0701050}, year={2007}, doi={10.1109/ISIT.2007.4557202}, archivePrefix={arXiv}, eprint={cs/0701050}, primaryClass={cs.IT math.IT} }
rioul2007a
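For reference, the entropy power inequality that this abstract refers to states that, for independent random vectors $X$ and $Y$ in $\mathbb{R}^n$ possessing densities,

```latex
\[
  N(X+Y) \;\ge\; N(X) + N(Y),
  \qquad\text{where}\quad
  N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n}
\]
```

and $h(\cdot)$ denotes differential entropy; equality holds iff $X$ and $Y$ are Gaussian with proportional covariance matrices.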
arxiv-675407
cs/0701051
Coding, Scheduling, and Cooperation in Wireless Sensor Networks
<|reference_start|>Coding, Scheduling, and Cooperation in Wireless Sensor Networks: We consider a single-hop data gathering sensor cluster consisting of a set of sensors that need to transmit data periodically to a base-station. We are interested in maximizing the lifetime of this network. Even though the setting of our problem is very simple, it turns out that the solution is far from easy. The complexity arises from several competing system-level opportunities available to reduce the energy consumed in radio transmission. First, sensor data is spatially and temporally correlated. Recent advances in distributed source-coding allow us to take advantage of these correlations to reduce the number of transmitted bits, with concomitant savings in energy. Second, it is also well-known that channel-coding can be used to reduce transmission energy by increasing transmission time. Finally, sensor nodes are cooperative, unlike nodes in an ad hoc network that are often modeled as competitive, allowing us to take full advantage of the first two opportunities for the purpose of maximizing cluster lifetime. In this paper, we pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraint of successful data collection and limited energy supply at each node. By introducing the notion of instantaneous decoding, we are able to simplify this optimization problem into a joint scheduling and time allocation problem. We show that even with this considerable simplification, the problem remains NP-hard. We provide some algorithms, heuristics and insight for various scenarios. Our chief contribution is to illustrate both the challenges and gains provided by joint source-channel coding and scheduling.<|reference_end|>
arxiv
@article{agnihotri2007coding, title={Coding, Scheduling, and Cooperation in Wireless Sensor Networks}, author={Samar Agnihotri and Pavan Nuggehalli}, journal={arXiv preprint arXiv:cs/0701051}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701051}, primaryClass={cs.IT math.IT} }
agnihotri2007coding
arxiv-675408
cs/0701052
Time Series Forecasting: Obtaining Long Term Trends with Self-Organizing Maps
<|reference_start|>Time Series Forecasting: Obtaining Long Term Trends with Self-Organizing Maps: Kohonen self-organising maps are a well-known classification tool, commonly used in a wide variety of problems, but with limited applications in the time series forecasting context. In this paper, we propose a forecasting method specifically designed for multi-dimensional long-term trend prediction, with a double application of the Kohonen algorithm. Practical applications of the method are also presented.<|reference_end|>
arxiv
@article{simon2007time, title={Time Series Forecasting: Obtaining Long Term Trends with Self-Organizing Maps}, author={Geoffroy Simon (DICE-MLG), Amaury Lendasse (DICE-MLG), Marie Cottrell (SAMOS, Matisse), Jean-Claude Fort (SAMOS, Matisse), Michel Verleysen (SAMOS, Matisse, Dice-MLG)}, journal={Pattern Recognition Letter 26 n0; 12 (05/2005) 1795-1808}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701052}, primaryClass={cs.LG math.ST stat.TH} }
simon2007time
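As background for the record above, the following is a minimal one-dimensional Kohonen SOM training loop in Python. It shows only the basic building block; the paper's method applies the Kohonen algorithm twice, which is not reproduced here, and all parameter values are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Basic 1-D Kohonen self-organizing map: pull the best-matching unit
    and its neighbors toward each sample, shrinking the learning rate and
    neighborhood width over time."""
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist.astype(float) ** 2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Toy usage: quantize 2-D samples into an ordered chain of 10 prototypes.
data = np.random.default_rng(1).standard_normal((200, 2))
print(train_som(data).shape)  # (10, 2)
```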
arxiv-675409
cs/0701053
A Case For Amplify-Forward Relaying in the Block-Fading Multi-Access Channel
<|reference_start|>A Case For Amplify-Forward Relaying in the Block-Fading Multi-Access Channel: This paper demonstrates the significant gains that multi-access users can achieve from sharing a single amplify-forward relay in slow fading environments. The proposed protocol, namely the multi-access relay amplify-forward, allows for a low-complexity relay and achieves the optimal diversity-multiplexing trade-off at high multiplexing gains. Analysis of the protocol reveals that it uniformly dominates the compress-forward strategy and further outperforms the dynamic decode-forward protocol at high multiplexing gains. An interesting feature of the proposed protocol is that, at high multiplexing gains, it resembles a multiple-input single-output system, and at low multiplexing gains, it provides each user with the same diversity-multiplexing trade-off as if there is no contention for the relay from the other users.<|reference_end|>
arxiv
@article{chen2007a, title={A Case For Amplify-Forward Relaying in the Block-Fading Multi-Access Channel}, author={Deqiang Chen, Kambiz Azarian and J. Nicholas Laneman}, journal={arXiv preprint arXiv:cs/0701053}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701053}, primaryClass={cs.IT math.IT} }
chen2007a
arxiv-675410
cs/0701054
Nearly-Exponential Size Lower Bounds for Symbolic Quantifier Elimination Algorithms and OBDD-Based Proofs of Unsatisfiability
<|reference_start|>Nearly-Exponential Size Lower Bounds for Symbolic Quantifier Elimination Algorithms and OBDD-Based Proofs of Unsatisfiability: We demonstrate a family of propositional formulas in conjunctive normal form so that a formula of size $N$ requires size $2^{\Omega(\sqrt[7]{N/\log N})}$ to refute using the tree-like OBDD refutation system of Atserias, Kolaitis and Vardi with respect to all variable orderings. All known symbolic quantifier elimination algorithms for satisfiability generate tree-like proofs when run on unsatisfiable CNFs, so this lower bound applies to the run-times of these algorithms. Furthermore, the lower bound generalizes earlier results on OBDD-based proofs of unsatisfiability in that it applies for all variable orderings, it applies when the clauses are processed according to an arbitrary schedule, and it applies when variables are eliminated via quantification.<|reference_end|>
arxiv
@article{segerlind2007nearly-exponential, title={Nearly-Exponential Size Lower Bounds for Symbolic Quantifier Elimination Algorithms and OBDD-Based Proofs of Unsatisfiability}, author={Nathan Segerlind}, journal={arXiv preprint arXiv:cs/0701054}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701054}, primaryClass={cs.CC cs.LO} }
segerlind2007nearly-exponential
arxiv-675411
cs/0701055
Bounds on Space-Time-Frequency Dimensionality
<|reference_start|>Bounds on Space-Time-Frequency Dimensionality: We bound the number of electromagnetic signals which may be observed over a frequency range $2W$ for a time $T$ within a region of space enclosed by a radius $R$. Our result implies that broadband fields in space cannot be arbitrarily complex: there is a finite amount of information which may be extracted from a region of space via electromagnetic radiation. Three-dimensional space allows a trade-off between large carrier frequency and bandwidth. We demonstrate applications in super-resolution and broadband communication.<|reference_end|>
arxiv
@article{hanlen2007bounds, title={Bounds on Space-Time-Frequency Dimensionality}, author={Leif Hanlen, Thushara Abhayapala}, journal={arXiv preprint arXiv:cs/0701055}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701055}, primaryClass={cs.IT math.IT} }
hanlen2007bounds
arxiv-675412
cs/0701056
Space-Time-Frequency Degrees of Freedom: Fundamental Limits for Spatial Information
<|reference_start|>Space-Time-Frequency Degrees of Freedom: Fundamental Limits for Spatial Information: We bound the number of electromagnetic signals which may be observed over a frequency range $[F-W,F+W]$ and a time interval $[0,T]$ within a sphere of radius $R$. We show that such constrained signals may be represented by a series expansion whose terms decay exponentially to zero beyond a threshold. Our result implies there is a finite amount of information which may be extracted from a region of space via electromagnetic radiation.<|reference_end|>
arxiv
@article{abhayapala2007space-time-frequency, title={Space-Time-Frequency Degrees of Freedom: Fundamental Limits for Spatial Information}, author={Leif Hanlen, Thushara Abhayapala}, journal={arXiv preprint arXiv:cs/0701056}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701056}, primaryClass={cs.IT math.IT} }
abhayapala2007space-time-frequency
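For orientation, the classical time-bandwidth counterpart of the two dimensionality results above is the Slepian-Pollak count of significant degrees of freedom for a signal of bandwidth $W$ observed over a time interval of length $T$:

```latex
\[
  N \;\approx\; 2WT + 1
\]
```

The two papers above extend this kind of counting to the spatial domain, where the radius $R$ of the observation region enters the bound.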
arxiv-675413
cs/0701057
Cooperative Optimization for Energy Minimization: A Case Study of Stereo Matching
<|reference_start|>Cooperative Optimization for Energy Minimization: A Case Study of Stereo Matching: Oftentimes, individuals working together as a team can solve hard problems beyond the capability of any individual in the team. Cooperative optimization is a newly proposed general method for attacking hard optimization problems, inspired by cooperation principles in team play. It has an established theoretical foundation and has demonstrated outstanding performance in solving real-world optimization problems. Under some general settings, a cooperative optimization algorithm has a unique equilibrium and converges to it at an exponential rate, regardless of initial conditions and insensitive to perturbations. It also possesses a number of global optimality conditions for identifying global optima so that it can terminate its search process efficiently. This paper offers a general description of cooperative optimization, addresses a number of design issues, and presents a case study to demonstrate its power.<|reference_end|>
arxiv
@article{huang2007cooperative, title={Cooperative Optimization for Energy Minimization: A Case Study of Stereo Matching}, author={Xiaofei Huang}, journal={arXiv preprint arXiv:cs/0701057}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701057}, primaryClass={cs.CV cs.AI} }
huang2007cooperative
arxiv-675414
cs/0701058
Precoding in Multiple-Antenna Broadcast Systems with a Probabilistic Viewpoint
<|reference_start|>Precoding in Multiple-Antenna Broadcast Systems with a Probabilistic Viewpoint: In this paper, we investigate the minimum average transmit energy that can be obtained in multiple-antenna broadcast systems with the channel inversion technique. The achievable gain can be significantly higher than the conventional gains reported for methods like the perturbation technique of Peel et al. In order to obtain this gain, we introduce a Selective Mapping (SLM) technique (based on random coding arguments). We propose to implement the SLM method by using nested lattice codes in a trellis precoding framework.<|reference_end|>
arxiv
@article{mobasher2007precoding, title={Precoding in Multiple-Antenna Broadcast Systems with a Probabilistic Viewpoint}, author={Amin Mobasher and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0701058}, year={2007}, number={UW-E&CE#2007-02}, archivePrefix={arXiv}, eprint={cs/0701058}, primaryClass={cs.IT math.IT} }
mobasher2007precoding
arxiv-675415
cs/0701059
Enhancing Sensor Network Lifetime Using Interactive Communication
<|reference_start|>Enhancing Sensor Network Lifetime Using Interactive Communication: We are concerned with maximizing the lifetime of a data-gathering wireless sensor network consisting of set of nodes directly communicating with a base-station. We model this scenario as the m-message interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). With this framework, we show that m-message interactive communication can indeed enhance network lifetime. Both worst-case and average-case performances are considered.<|reference_end|>
arxiv
@article{agnihotri2007enhancing, title={Enhancing Sensor Network Lifetime Using Interactive Communication}, author={Samar Agnihotri and Pavan Nuggehalli}, journal={arXiv preprint arXiv:cs/0701059}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701059}, primaryClass={cs.IT math.IT} }
agnihotri2007enhancing
arxiv-675416
cs/0701060
Duadic Group Algebra Codes
<|reference_start|>Duadic Group Algebra Codes: Duadic group algebra codes are a generalization of quadratic residue codes. This paper settles an open problem raised by Zhu concerning the existence of duadic group algebra codes. These codes can be used to construct degenerate quantum stabilizer codes that have the nice feature that many errors of small weight do not need error correction; this fact is illustrated by an example.<|reference_end|>
arxiv
@article{aly2007duadic, title={Duadic Group Algebra Codes}, author={Salah A. Aly, Andreas Klappenecker, Pradeep Kiran Sarvepalli}, journal={arXiv preprint arXiv:cs/0701060}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701060}, primaryClass={cs.IT math.IT quant-ph} }
aly2007duadic
arxiv-675417
cs/0701061
Conjugate Gradient Projection Approach for Multi-Antenna Gaussian Broadcast Channels
<|reference_start|>Conjugate Gradient Projection Approach for Multi-Antenna Gaussian Broadcast Channels: It has been shown recently that dirty-paper coding is the optimal strategy for maximizing the sum rate of multiple-input multiple-output Gaussian broadcast channels (MIMO BC). Moreover, by channel duality, the nonconvex MIMO BC sum rate problem can be transformed into the convex dual MIMO multiple-access channel (MIMO MAC) problem with a sum power constraint. In this paper, we design an efficient algorithm based on conjugate gradient projection (CGP) to solve the MIMO BC maximum sum rate problem. Our proposed CGP algorithm solves the dual sum power MAC problem by utilizing the powerful concept of Hessian conjugacy. We also develop a rigorous algorithm to solve the projection problem. We show that CGP enjoys provable convergence, nice scalability, and great efficiency for large MIMO BC systems.<|reference_end|>
arxiv
@article{liu2007conjugate, title={Conjugate Gradient Projection Approach for Multi-Antenna Gaussian Broadcast Channels}, author={Jia Liu, Y. Thomas Hou, and Hanif D. Sherali}, journal={arXiv preprint arXiv:cs/0701061}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701061}, primaryClass={cs.IT math.IT} }
liu2007conjugate
arxiv-675418
cs/0701062
Network Coding over a Noisy Relay: A Belief Propagation Approach
<|reference_start|>Network Coding over a Noisy Relay: A Belief Propagation Approach: In recent years, network coding has been investigated as a method to obtain improvements in wireless networks. A typical assumption of previous work is that relay nodes performing network coding can decode the messages from sources perfectly. On a simple relay network, we design a scheme to obtain network coding gain even when the relay node cannot perfectly decode its received messages. In our scheme, the operation at the relay node resembles message passing in belief propagation, sending the log-likelihood ratio (LLR) of the network-coded message to the destination. Simulation results demonstrate the gain obtained over different channel conditions. The goal of this paper is not to give a theoretical result, but to point to a possible interaction of network coding with user cooperation in a noisy scenario. The extrinsic information transfer (EXIT) chart is shown to be a useful engineering tool to analyze the performance of joint channel coding and network coding in the network.<|reference_end|>
arxiv
@article{yang2007network, title={Network Coding over a Noisy Relay: A Belief Propagation Approach}, author={Sichao Yang and Ralf Koetter}, journal={arXiv preprint arXiv:cs/0701062}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701062}, primaryClass={cs.IT math.IT} }
yang2007network
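The relay in this scheme forwards the LLR of the network-coded (XOR) message. A standard way to compute that LLR from the LLRs of the constituent bits is the "boxplus" check-node rule of belief propagation; the sketch below is our illustration of that rule, not code from the paper.

```python
import math

def llr_xor(l1, l2):
    """LLR of the XOR of two independent bits, with L = log P(b=0)/P(b=1):
    tanh of half the output LLR is the product of the inputs' tanh(L/2)."""
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

# A confident 0 XOR a confident 1 gives a fairly confident 1, while an
# unreliable input (LLR near 0) drags the combined LLR toward 0.
print(llr_xor(4.0, -4.0))  # approx -3.3
print(llr_xor(4.0, 0.1))   # approx 0.096
```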
arxiv-675419
cs/0701063
Hierarchical Decoupling Principle of a MIMO-CDMA Channel in Asymptotic Limits
<|reference_start|>Hierarchical Decoupling Principle of a MIMO-CDMA Channel in Asymptotic Limits: We analyze an uplink of a fast flat fading MIMO-CDMA channel in the case where the data symbol vector for each user follows an arbitrary distribution. The spectral efficiency of the channel with CSI at the receiver is evaluated analytically with the replica method. The main result is that the hierarchical decoupling principle holds in the MIMO-CDMA channel, i.e., the MIMO-CDMA channel is decoupled into a bank of single-user MIMO channels in the many-user limit, and each single-user MIMO channel is further decoupled into a bank of scalar Gaussian channels in the many-antenna limit for a fading model with a limited number of scatterers.<|reference_end|>
arxiv
@article{takeuchi2007hierarchical, title={Hierarchical Decoupling Principle of a MIMO-CDMA Channel in Asymptotic Limits}, author={Keigo Takeuchi, Toshiyuki Tanaka}, journal={arXiv preprint arXiv:cs/0701063}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701063}, primaryClass={cs.IT math.IT} }
takeuchi2007hierarchical
arxiv-675420
cs/0701064
Causing Communication Closure: Safe Program Composition with Reliable Non-FIFO Channels
<|reference_start|>Causing Communication Closure: Safe Program Composition with Reliable Non-FIFO Channels: A semantic framework for analyzing safe composition of distributed programs is presented. Its applicability is illustrated by a study of program composition when communication is reliable but not necessarily FIFO. In this model, special care must be taken to ensure that messages do not accidentally overtake one another in the composed program. We show that barriers do not exist in this model. Indeed, no program that sends or receives messages can automatically be composed with arbitrary programs without jeopardizing their intended behavior. Safety of composition becomes context-sensitive and new tools are needed for ensuring it. A notion of \emph{sealing} is defined: if a program $P$ is immediately followed by a program $Q$ that seals $P$, then $P$ will be communication-closed, that is, it will execute as if it runs in isolation. The investigation of sealing in this model reveals a novel connection between Lamport causality and safe composition. A characterization of sealable programs is given, as well as efficient algorithms for testing if $Q$ seals $P$ and for constructing a seal for a significant class of programs. It is shown that every sealable program that is open to interference on $O(n^2)$ channels can be sealed using $O(n)$ messages.<|reference_end|>
arxiv
@article{engelhardt2007causing, title={Causing Communication Closure: Safe Program Composition with Reliable Non-FIFO Channels}, author={Kai Engelhardt and Yoram Moses}, journal={arXiv preprint arXiv:cs/0701064}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701064}, primaryClass={cs.DC} }
engelhardt2007causing
arxiv-675421
cs/0701065
Can Punctured Rate-1/2 Turbo Codes Achieve a Lower Error Floor than their Rate-1/3 Parent Codes?
<|reference_start|>Can Punctured Rate-1/2 Turbo Codes Achieve a Lower Error Floor than their Rate-1/3 Parent Codes?: In this paper we concentrate on rate-1/3 systematic parallel concatenated convolutional codes and their rate-1/2 punctured child codes. Assuming maximum-likelihood decoding over an additive white Gaussian noise channel, we demonstrate that a rate-1/2 non-systematic child code can exhibit a lower error floor than that of its rate-1/3 parent code, if a particular condition is met. However, assuming iterative decoding, convergence of the non-systematic code towards low bit-error rates is problematic. To alleviate this problem, we propose rate-1/2 partially-systematic codes that can still achieve a lower error floor than that of their rate-1/3 parent codes. Results obtained from extrinsic information transfer charts and simulations support our conclusion.<|reference_end|>
arxiv
@article{chatzigeorgiou2007can, title={Can Punctured Rate-1/2 Turbo Codes Achieve a Lower Error Floor than their Rate-1/3 Parent Codes?}, author={I. Chatzigeorgiou, M. R. D. Rodrigues, I. J. Wassell, R. Carrasco}, journal={arXiv preprint arXiv:cs/0701065}, year={2007}, doi={10.1109/ITW2.2006.323763}, archivePrefix={arXiv}, eprint={cs/0701065}, primaryClass={cs.IT math.IT} }
chatzigeorgiou2007can
arxiv-675422
cs/0701066
Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization
<|reference_start|>Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization: In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We name this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and, more recently, for non-binary LDPC codes, both families exhibit drawbacks, for different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-binary parts in their factor graph representation. The class of Hybrid LDPC codes is obviously larger than existing types of codes, which gives more degrees of freedom to find good codes where existing codes reach their limits. We give two examples where Hybrid LDPC codes show their advantage.<|reference_end|>
arxiv
@article{sassatelli2007non-binary, title={Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization}, author={Lucile Sassatelli and David Declercq}, journal={IEEE 2006 Information Theory Workshop, Chengdu, China, Oct.2006, in proceedings}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701066}, primaryClass={cs.IT math.IT} }
sassatelli2007non-binary
arxiv-675423
cs/0701067
On Four-group ML Decodable Distributed Space Time Codes for Cooperative Communication
<|reference_start|>On Four-group ML Decodable Distributed Space Time Codes for Cooperative Communication: A construction of a new family of distributed space time codes (DSTCs) having full diversity and low Maximum Likelihood (ML) decoding complexity is provided for the two phase based cooperative diversity protocols of Jing-Hassibi and the recently proposed Generalized Non-orthogonal Amplify and Forward (GNAF) protocol of Rajan et al. The salient feature of the proposed DSTCs is that they satisfy the extra constraints imposed by the protocols and are also four-group ML decodable, which leads to a significant reduction in ML decoding complexity compared to all existing DSTC constructions. Moreover, these codes have a uniform distribution of power among the relays as well as in time. Also, simulation results indicate that these codes perform better than the only known DSTC with the same rate and decoding complexity, namely the Coordinate Interleaved Orthogonal Design (CIOD). Furthermore, they perform very close to DSTCs from field extensions which have the same rate but higher decoding complexity.<|reference_end|>
arxiv
@article{rajan2007on, title={On Four-group ML Decodable Distributed Space Time Codes for Cooperative Communication}, author={G. Susinder Rajan, Anshoo Tandon and B. Sundar Rajan}, journal={arXiv preprint arXiv:cs/0701067}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701067}, primaryClass={cs.IT math.IT} }
rajan2007on
arxiv-675424
cs/0701068
Distributed Space-Time Codes for Cooperative Networks with Partial CSI
<|reference_start|>Distributed Space-Time Codes for Cooperative Networks with Partial CSI: Design criteria and full-diversity Distributed Space Time Codes (DSTCs) for the two phase transmission based cooperative diversity protocol of Jing-Hassibi and the Generalized Nonorthogonal Amplify and Forward (GNAF) protocol are reported, when the relay nodes are assumed to have knowledge of the phase component of the source-to-relay channel gains. It is shown that under this partial channel state information (CSI), several well-known space-time codes for the colocated MIMO (Multiple Input Multiple Output) channel become amenable for use as DSTCs. In particular, the well-known complex orthogonal designs, generalized coordinate interleaved orthogonal designs (GCIODs) and unitary weight single symbol decodable (UW-SSD) codes are shown to satisfy the required design constraints for DSTCs. Exploiting the relaxed code design constraints, we propose DSTCs obtained from Clifford Algebras which have low ML decoding complexity.<|reference_end|>
arxiv
@article{rajan2007distributed, title={Distributed Space-Time Codes for Cooperative Networks with Partial CSI}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:cs/0701068}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701068}, primaryClass={cs.IT math.IT} }
rajan2007distributed
arxiv-675425
cs/0701069
Finding low-weight polynomial multiples using discrete logarithm
<|reference_start|>Finding low-weight polynomial multiples using discrete logarithm: Finding low-weight multiples of a binary polynomial is a difficult problem arising in the context of stream cipher cryptanalysis. The classical algorithm to solve this problem is based on a time-memory trade-off. We present an improvement to this approach using discrete logarithms rather than a direct representation of the involved polynomials. This gives an algorithm which improves the theoretical complexity, and is also very flexible in practice.<|reference_end|>
arxiv
@article{didier2007finding, title={Finding low-weight polynomial multiples using discrete logarithm}, author={Frédéric Didier (INRIA Rocquencourt), Yann Laigle-Chapuy (INRIA Rocquencourt)}, journal={In IEEE International Symposium on Information Theory - ISIT'07 (2007)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701069}, primaryClass={cs.CR} }
didier2007finding
arxiv-675426
cs/0701070
On formulas for decoding binary cyclic codes
<|reference_start|>On formulas for decoding binary cyclic codes: We address the problem of the algebraic decoding of any cyclic code up to the true minimum distance. For this, we use the classical formulation of the problem, which is to find the error locator polynomial in terms of the syndromes of the received word. This is usually done with the Berlekamp-Massey algorithm in the case of BCH codes and related codes, but for the general case, there is no generic algorithm to decode cyclic codes. Even in the case of the quadratic residue codes, which are good codes with a very strong algebraic structure, there is no available general decoding algorithm. For this particular case of quadratic residue codes, several authors have worked out, by hand, formulas for the coefficients of the locator polynomial in terms of the syndromes, using the Newton identities. This work has to be done for each particular quadratic residue code, and becomes more and more difficult as the length grows. Furthermore, it is error-prone. We propose to automate these computations, using elimination theory and Gröbner bases. We prove that, by computing appropriate Gröbner bases, one automatically recovers formulas for the coefficients of the locator polynomial, in terms of the syndromes.<|reference_end|>
arxiv
@article{augot2007on, title={On formulas for decoding binary cyclic codes}, author={Daniel Augot, Magali Bardet (LITIS), Jean-Charles Faugère (LIP6)}, journal={IEEE International Symposium on Information Theory, 2007 (ISIT 2007) (2007) 2646-2650}, year={2007}, doi={10.1109/ISIT.2007.4557618}, archivePrefix={arXiv}, eprint={cs/0701070}, primaryClass={cs.IT math.IT} }
augot2007on
arxiv-675427
cs/0701071
A bounded-degree network formation game
<|reference_start|>A bounded-degree network formation game: Motivated by applications in peer-to-peer and overlay networks, we define and study the \emph{Bounded Degree Network Formation} (BDNF) game. In an $(n,k)$-BDNF game, we are given $n$ nodes, a bound $k$ on the out-degree of each node, and a weight $w_{vu}$ for each ordered pair $(v,u)$ representing the traffic rate from node $v$ to node $u$. Each node $v$ uses up to $k$ directed links to connect to other nodes with an objective to minimize its average distance, using weights $w_{vu}$, to all other destinations. We study the existence of pure Nash equilibria for $(n,k)$-BDNF games. We show that if the weights are arbitrary, then a pure Nash wiring may not exist. Furthermore, it is NP-hard to determine whether a pure Nash wiring exists for a given $(n,k)$-BDNF instance. A major focus of this paper is on uniform $(n,k)$-BDNF games, in which all weights are 1. We describe how to construct a pure Nash equilibrium wiring given any $n$ and $k$, and establish that in all pure Nash wirings the cost of individual nodes cannot differ by more than a factor of nearly 2, whereas the diameter cannot exceed $O(\sqrt{n \log_k n})$. We also analyze best-response walks on the configuration space defined by the uniform game, and show that starting from any initial configuration, strong connectivity is reached within $\Theta(n^2)$ rounds. Convergence to a pure Nash equilibrium, however, is not guaranteed. We present simulation results that suggest that loop-free best-response walks always exist, but may not be polynomially bounded. We also study a special family of \emph{regular} wirings, the class of Abelian Cayley graphs, in which all nodes imitate the same wiring pattern, and show that if $n$ is sufficiently large, no such regular wiring can be a pure Nash equilibrium.<|reference_end|>
arxiv
@article{laoutaris2007a, title={A bounded-degree network formation game}, author={Nikolaos Laoutaris, Rajmohan Rajaraman, Ravi Sundaram, Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0701071}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701071}, primaryClass={cs.GT} }
laoutaris2007a
arxiv-675428
cs/0701072
Tagging, Folksonomy & Co - Renaissance of Manual Indexing?
<|reference_start|>Tagging, Folksonomy & Co - Renaissance of Manual Indexing?: This paper gives an overview of current trends in manual indexing on the Web. Along with a general rise of user-generated content, there are more and more tagging systems that allow users to annotate digital resources with tags (keywords) and share their annotations with other users. Tagging is frequently seen in contrast to traditional knowledge organization systems, or as something completely new. This paper shows that tagging is better seen as a popular form of manual indexing on the Web. The difference between controlled and free indexing blurs when sufficient feedback mechanisms are in place. A revised typology of tagging systems is presented that includes different user roles and knowledge organization systems with hierarchical relationships and vocabulary control. A detailed bibliography of current research in collaborative tagging is included.<|reference_end|>
arxiv
@article{voss2007tagging, title={Tagging, Folksonomy & Co - Renaissance of Manual Indexing?}, author={Jakob Voss}, journal={arXiv preprint arXiv:cs/0701072}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701072}, primaryClass={cs.IR} }
voss2007tagging
arxiv-675429
cs/0701073
A decision procedure for linear "big O" equations
<|reference_start|>A decision procedure for linear "big O" equations: Let $F$ be the set of functions from an infinite set, $S$, to an ordered ring, $R$. For $f$, $g$, and $h$ in $F$, the assertion $f = g + O(h)$ means that for some constant $C$, $|f(x) - g(x)| \leq C |h(x)|$ for every $x$ in $S$. Let $L$ be the first-order language with variables ranging over such functions, symbols for $0, +, -, \min, \max$, and absolute value, and a ternary relation $f = g + O(h)$. We show that the set of quantifier-free formulas in this language that are valid in the intended class of interpretations is decidable, and does not depend on the underlying set, $S$, or the ordered ring, $R$. If $R$ is a subfield of the real numbers, we can add a constant 1 function, as well as multiplication by constants from any computable subfield. We obtain further decidability results for certain situations in which one adds symbols denoting the elements of a fixed sequence of functions of strictly increasing rates of growth.<|reference_end|>
arxiv
@article{avigad2007a, title={A decision procedure for linear "big O" equations}, author={Jeremy Avigad and Kevin Donnelly}, journal={arXiv preprint arXiv:cs/0701073}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701073}, primaryClass={cs.LO} }
avigad2007a
arxiv-675430
cs/0701074
On the robustness of the h-index
<|reference_start|>On the robustness of the h-index: The h-index (Hirsch, 2005) is robust, remaining relatively unaffected by errors in the long tails of the citation-rank distribution, such as typographic errors that short-change frequently-cited papers and create bogus additional records. This robustness, and the ease with which h-indices can be verified, support the use of a Hirsch-type index over alternatives such as the journal impact factor. These merits of the h-index apply both to individuals and to journals.<|reference_end|>
arxiv
@article{vanclay2007on, title={On the robustness of the h-index}, author={Jerome K Vanclay}, journal={Journal of the American Society for Information Science and Technology 58(10):1547-1550 (2007)}, year={2007}, doi={10.1002/asi.20616}, archivePrefix={arXiv}, eprint={cs/0701074}, primaryClass={cs.DL} }
vanclay2007on
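The robustness claim is easy to check numerically, since the h-index depends only on whether the h-th ranked paper retains at least h citations. A minimal sketch (the function is our illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 18, 12, 9, 6, 4, 3, 2, 1, 1]
print(h_index(papers))             # 5
# A typographic error that zeroes out a tail paper leaves h unchanged:
print(h_index(papers[:-1] + [0]))  # still 5
```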
arxiv-675431
cs/0701075
Open-architecture Implementation of Fragment Molecular Orbital Method for Peta-scale Computing
<|reference_start|>Open-architecture Implementation of Fragment Molecular Orbital Method for Peta-scale Computing: We present our perspective and goals on high-performance computing for nanoscience in accordance with the global trend toward "peta-scale computing." After reviewing our results obtained through the grid-enabled version of the fragment molecular orbital method (FMO) on the grid testbed by the Japanese Grid Project, National Research Grid Initiative (NAREGI), we show that FMO is one of the best candidates for peta-scale applications by predicting its effective performance in peta-scale computers. Finally, we introduce our new project constructing a peta-scale application in an open-architecture implementation of FMO in order to realize both goals of high performance in peta-scale computers and extensibility to multiphysics simulations.<|reference_end|>
arxiv
@article{takami2007open-architecture, title={Open-architecture Implementation of Fragment Molecular Orbital Method for Peta-scale Computing}, author={Toshiya Takami, Jun Maki, Jun-ichi Ooba, Yuichi Inadomi, Hiroaki Honda, Taizo Kobayashi, Rie Nogita, and Mutsumi Aoyagi}, journal={arXiv preprint arXiv:cs/0701075}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701075}, primaryClass={cs.DC physics.comp-ph} }
takami2007open-architecture
arxiv-675432
cs/0701076
Time-complexity semantics for feasible affine recursions (extended abstract)
<|reference_start|>Time-complexity semantics for feasible affine recursions (extended abstract): The authors' ATR programming formalism is a version of call-by-value PCF under a complexity-theoretically motivated type system. ATR programs run in type-2 polynomial-time and all standard type-2 basic feasible functionals are ATR-definable (ATR types are confined to levels 0, 1, and 2). A limitation of the original version of ATR is that the only directly expressible recursions are tail-recursions. Here we extend ATR so that a broad range of affine recursions are directly expressible. In particular, the revised ATR can fairly naturally express the classic insertion- and selection-sort algorithms, thus overcoming a sticking point of most prior implicit-complexity-based formalisms. The paper's main work is in extending and simplifying the original time-complexity semantics for ATR to develop a set of tools for extracting and solving the higher-type recurrences arising from feasible affine recursions.<|reference_end|>
arxiv
@article{danner2007time-complexity, title={Time-complexity semantics for feasible affine recursions (extended abstract)}, author={Norman Danner and James S. Royer}, journal={arXiv preprint arXiv:cs/0701076}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701076}, primaryClass={cs.LO} }
danner2007time-complexity
arxiv-675433
cs/0701077
Asynchronous Distributed Searchlight Scheduling
<|reference_start|>Asynchronous Distributed Searchlight Scheduling: This paper develops and compares two simple asynchronous distributed searchlight scheduling algorithms for multiple robotic agents in nonconvex polygonal environments. A searchlight is a ray, emitted by an agent, that cannot penetrate the boundary of the environment. A point is detected by a searchlight if and only if the point is on the ray at some instant. Targets are points which can move continuously with unbounded speed. The objective of the proposed algorithms is for the agents to coordinate the slewing (rotation about a point) of their searchlights in a distributed manner, i.e., using only local sensing and limited communication, such that any target will necessarily be detected in finite time. The first algorithm we develop, called the DOWSS (Distributed One Way Sweep Strategy), is a distributed version of a known algorithm described originally in 1990 by Sugihara et al. \cite{KS-IS-MY:90}, but it can be very slow in clearing the entire environment because only one searchlight may slew at a time. In an effort to reduce the time to clear the environment, we develop a second algorithm, called the PTSS (Parallel Tree Sweep Strategy), in which searchlights sweep in parallel if guards are placed according to an environment partition belonging to a class we call PTSS partitions. Finally, we discuss how DOWSS and PTSS could be combined with deployment, or extended to environments with holes.<|reference_end|>
arxiv
@article{obermeyer2007asynchronous, title={Asynchronous Distributed Searchlight Scheduling}, author={Karl J. Obermeyer, Anurag Ganguli, Francesco Bullo}, journal={arXiv preprint arXiv:cs/0701077}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701077}, primaryClass={cs.MA cs.RO} }
obermeyer2007asynchronous
arxiv-675434
cs/0701078
Low SNR Capacity of Fading Channels -- MIMO and Delay Spread
<|reference_start|>Low SNR Capacity of Fading Channels -- MIMO and Delay Spread: Discrete-time Rayleigh fading multiple-input multiple-output (MIMO) channels are considered, with no channel state information at the transmitter and receiver. The fading is assumed to be correlated in time and independent from antenna to antenna. Peak and average transmit power constraints are imposed, either on the sum over antennas, or on each individual antenna. In both cases, an upper bound and an asymptotic lower bound, as the signal-to-noise ratio approaches zero, on the channel capacity are presented. The limit of normalized capacity is identified under the sum power constraints, and, for a subclass of channels, for individual power constraints. These results carry over to a SISO channel with delay spread (i.e. frequency selective fading).<|reference_end|>
arxiv
@article{sethuraman2007low, title={Low SNR Capacity of Fading Channels -- MIMO and Delay Spread}, author={Vignesh Sethuraman, Ligong Wang, Bruce Hajek, Amos Lapidoth}, journal={arXiv preprint arXiv:cs/0701078}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701078}, primaryClass={cs.IT math.IT} }
sethuraman2007low
arxiv-675435
cs/0701079
Practical Binary Adaptive Block Coder
<|reference_start|>Practical Binary Adaptive Block Coder: This paper describes the design of a low-complexity algorithm for adaptive encoding/decoding of binary sequences produced by memoryless sources. The algorithm implements universal block codes constructed for a set of contexts identified by the number of non-zero bits among the preceding bits of a sequence. We derive a precise formula for the asymptotic redundancy of such codes, which refines the previous well-known estimate by Krichevsky and Trofimov, and provide experimental verification of this result. In our experimental study we also compare our implementation with existing binary adaptive encoders, such as the JBIG Q-coder and the MPEG AVC (ITU-T H.264) CABAC algorithm.<|reference_end|>
arxiv
@article{reznik2007practical, title={Practical Binary Adaptive Block Coder}, author={Yuriy A. Reznik}, journal={arXiv preprint arXiv:cs/0701079}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701079}, primaryClass={cs.IT cs.DS math.IT} }
reznik2007practical
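For context, the Krichevsky-Trofimov estimator whose redundancy estimate this paper refines assigns, after observing n0 zeros and n1 ones, probability (n1 + 1/2) / (n0 + n1 + 1) to the next bit being one; the ideal code length is the negative log of the total assigned probability. A minimal sketch (our illustration):

```python
import math

def kt_code_length(bits):
    """Ideal code length in bits under the Krichevsky-Trofimov sequential
    estimator, P(next = 1) = (n1 + 1/2) / (n0 + n1 + 1); bits are 0/1 ints."""
    length, n0, n1 = 0.0, 0, 0
    for b in bits:
        p_one = (n1 + 0.5) / (n0 + n1 + 1.0)
        length -= math.log2(p_one if b else 1.0 - p_one)
        n1 += b
        n0 += 1 - b
    return length

# Per-bit length approaches the source entropy; the overall redundancy is
# on the order of (1/2) log2 n bits, the classical estimate the paper refines.
bits = [1] * 200 + [0] * 800
print(kt_code_length(bits) / len(bits))  # close to H(0.2) ~ 0.72
```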
arxiv-675436
cs/0701080
Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes
<|reference_start|>Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes: A common problem in sequential-type decoding is that, at signal-to-noise ratios (SNRs) below the one corresponding to the cutoff rate, the average decoding complexity per information bit and the required stack size grow rapidly with the information length. In order to alleviate this problem in the maximum-likelihood sequential decoding algorithm (MLSDA), we propose to directly eliminate any top path whose end node lies $\Delta$ trellis levels before the farthest node among all nodes expanded thus far by the sequential search. Following a random-coding argument, we analyze the early-elimination window $\Delta$ that results in negligible performance degradation for the MLSDA. Our analytical results indicate that the required early-elimination window for negligible performance degradation is just twice the constraint length for rate one-half convolutional codes. For rate one-third convolutional codes, the required early-elimination window even reduces to the constraint length. The suggested theoretical thresholds almost coincide with the simulation results. As a consequence of the small early-elimination window required for near-maximum-likelihood performance, the MLSDA with the early-elimination modification avoids considerable computational burden, as well as memory requirements, by directly eliminating a large number of top paths, which makes it very suitable for applications that dictate a low-complexity software implementation with near-maximum-likelihood performance.<|reference_end|>
arxiv
@article{shieh2007analysis, title={Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes}, author={Shin-Lin Shieh, Po-Ning Chen, and Yunghsiang S. Han}, journal={arXiv preprint arXiv:cs/0701080}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701080}, primaryClass={cs.IT math.IT} }
shieh2007analysis
arxiv-675437
cs/0701081
Fingerprinting Logic Programs
<|reference_start|>Fingerprinting Logic Programs: In this work we present work in progress on detecting duplicated functionality in logic programs. Eliminating duplicated functionality has recently become prominent in the context of refactoring. We describe a quantitative approach that allows one to measure the ``similarity'' between two predicate definitions. Moreover, we show how to compute a so-called ``fingerprint'' for every predicate. Fingerprints capture those characteristics of the predicate that are significant when searching for duplicated functionality. Since reasoning on fingerprints is much easier than reasoning on predicate definitions, comparing fingerprints is a promising direction for automated detection of code duplication in logic programs.<|reference_end|>
arxiv
@article{serebrenik2007fingerprinting, title={Fingerprinting Logic Programs}, author={Alexander Serebrenik and Wim Vanhoof}, journal={arXiv preprint arXiv:cs/0701081}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701081}, primaryClass={cs.PL cs.SE} }
serebrenik2007fingerprinting
arxiv-675438
cs/0701082
Recurrence with affine level mappings is P-time decidable for CLP(R)
<|reference_start|>Recurrence with affine level mappings is P-time decidable for CLP(R): In this paper we introduce a class of constraint logic programs such that their termination can be proved by using affine level mappings. We show that membership to this class is decidable in polynomial time.<|reference_end|>
arxiv
@article{mesnard2007recurrence, title={Recurrence with affine level mappings is P-time decidable for CLP(R)}, author={Fred Mesnard, Alexander Serebrenik}, journal={arXiv preprint arXiv:cs/0701082}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701082}, primaryClass={cs.PL cs.LO} }
mesnard2007recurrence
arxiv-675439
cs/0701083
A Backtracking-Based Algorithm for Computing Hypertree-Decompositions
<|reference_start|>A Backtracking-Based Algorithm for Computing Hypertree-Decompositions: Hypertree decompositions of hypergraphs are a generalization of tree decompositions of graphs. The corresponding hypertree-width is a measure for the cyclicity and therefore tractability of the encoded computation problem. Many NP-hard decision and computation problems are known to be tractable on instances whose structure corresponds to hypergraphs of bounded hypertree-width. Intuitively, the smaller the hypertree-width, the faster the computation problem can be solved. In this paper, we present the new backtracking-based algorithm det-k-decomp for computing hypertree decompositions of small width. Our benchmark evaluations have shown that det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree decomposition algorithm so far. Even compared to the best heuristic algorithm, we obtained competitive results as long as the hypergraphs are not too large.<|reference_end|>
arxiv
@article{gottlob2007a, title={A Backtracking-Based Algorithm for Computing Hypertree-Decompositions}, author={Georg Gottlob, Marko Samer}, journal={ACM Journal of Experimental Algorithmics (JEA) 13(1):1.1-1.19, 2008.}, year={2007}, doi={10.1145/1412228.1412229}, archivePrefix={arXiv}, eprint={cs/0701083}, primaryClass={cs.DS cs.AI} }
gottlob2007a
arxiv-675440
cs/0701084
Pseudo-codeword Landscape
<|reference_start|>Pseudo-codeword Landscape: We discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by means of Linear Programming (LP) at moderate and large Signal-to-Noise Ratios (SNRs). Utilizing a combination of the previously introduced pseudo-codeword-search method and a new "dendro" trick, which allows us to reduce the complexity of LP decoding, we analyze the dependence of the Frame-Error Rate (FER) on the SNR. Under Maximum-A-Posteriori (MAP) decoding, the dendro-code, having only checks with connectivity degree three, performs identically to its original code with high-connectivity checks. For a number of popular LDPC codes performing over the Additive White Gaussian Noise (AWGN) channel, we found that either an error-floor sets in at a relatively low SNR, or otherwise a transient asymptote, characterized by a faster decay of FER with increasing SNR, precedes the error-floor asymptote. We explain these regimes in terms of the pseudo-codeword spectra of the codes.<|reference_end|>
arxiv
@article{chertkov2007pseudo-codeword, title={Pseudo-codeword Landscape}, author={Michael Chertkov (Los Alamos) and Mikhail Stepanov (UA, Tucson)}, journal={arXiv preprint arXiv:cs/0701084}, year={2007}, number={LA-UR# 07-0144}, archivePrefix={arXiv}, eprint={cs/0701084}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
chertkov2007pseudo-codeword
arxiv-675441
cs/0701085
Variations on the Fibonacci Universal Code
<|reference_start|>Variations on the Fibonacci Universal Code: This note presents variations on the Fibonacci universal code (also called the Gopala-Hemachandra code) that can have applications in source coding as well as in cryptography.<|reference_end|>
arxiv
@article{thomas2007variations, title={Variations on the Fibonacci Universal Code}, author={James Harold Thomas}, journal={arXiv preprint arXiv:cs/0701085}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701085}, primaryClass={cs.IT cs.CR math.IT} }
thomas2007variations
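For background, the base Fibonacci universal code writes a positive integer as a sum of non-consecutive Fibonacci numbers (1, 2, 3, 5, 8, ...) and appends a final 1, so every codeword ends in "11"; the variations in the note build on this code. A minimal encoder sketch (our illustration):

```python
def fibonacci_encode(n):
    """Classic Fibonacci (Zeckendorf) code: bit i marks whether the i-th
    Fibonacci number (1, 2, 3, 5, ...) is used in the sum for n; a final
    '1' is appended so every codeword ends in '11'."""
    assert n >= 1
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs and fibs[-1] > n:
        fibs.pop()
    bits = ['0'] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):  # greedy, largest first
        if fibs[i] <= remainder:
            bits[i] = '1'
            remainder -= fibs[i]
    return ''.join(bits) + '1'

for k in range(1, 7):
    print(k, fibonacci_encode(k))
# 1 -> 11, 2 -> 011, 3 -> 0011, 4 -> 1011, 5 -> 00011, 6 -> 10011
```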
arxiv-675442
cs/0701086
Loop Calculus and Belief Propagation for q-ary Alphabet: Loop Tower
<|reference_start|>Loop Calculus and Belief Propagation for q-ary Alphabet: Loop Tower: The Loop Calculus introduced in [Chertkov, Chernyak '06] constitutes a new theoretical tool that explicitly expresses the symbol Maximum-A-Posteriori (MAP) solution of a general statistical inference problem via a solution of the Belief Propagation (BP) equations. This finding brought a new significance to the BP concept, which in the past was thought of as just a loop-free approximation. In this paper we continue the discussion of the Loop Calculus. We introduce an invariant formulation which allows us to generalize the Loop Calculus approach to a q-ary alphabet.<|reference_end|>
arxiv
@article{chernyak2007loop, title={Loop Calculus and Belief Propagation for q-ary Alphabet: Loop Tower}, author={Vladimir Y. Chernyak (Wayne State) and Michael Chertkov (Los Alamos)}, journal={arXiv preprint arXiv:cs/0701086}, year={2007}, number={LA-UR# 07-0149}, archivePrefix={arXiv}, eprint={cs/0701086}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
chernyak2007loop
arxiv-675443
cs/0701087
Artificiality in Social Sciences
<|reference_start|>Artificiality in Social Sciences: This text provides an introduction to the modern approach to artificiality and simulation in the social sciences. It presents the relationship between complexity and artificiality, before introducing the field of artificial societies, which has greatly benefited from the fast increase in computing power, giving social sciences formalization and experimentation tools previously owned by the "hard" sciences alone. It shows that, as "a new way of doing social sciences", artificial societies should undoubtedly contribute to a renewed approach to the study of sociality and should play a significant part in the elaboration of original theories of social phenomena.<|reference_end|>
arxiv
@article{rennard2007artificiality, title={Artificiality in Social Sciences}, author={Jean-Philippe Rennard}, journal={Rennard, J.-P., Artificiality in Social Sciences, in Rennard, J.-P. (Ed.), Handbook of Research on Nature Inspired Computing for Economics and Management, p.1-15, IGR, 2006}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701087}, primaryClass={cs.MA} }
rennard2007artificiality
arxiv-675444
cs/0701088
A Theory and Calculus for Reasoning about Sequential Behavior
<|reference_start|>A Theory and Calculus for Reasoning about Sequential Behavior: Basic results in combinatorial mathematics provide the foundation for a theory and calculus for reasoning about sequential behavior. A key concept of the theory is a generalization of a Boolean implicant which deals with statements of the form: "a sequence of Boolean expressions alpha is an implicant of a set of sequences of Boolean expressions A". This notion of a generalized implicant takes on special significance when each of the sequences in the set A describes a disallowed pattern of behavior. That is because a disallowed sequence of Boolean expressions represents a logical/temporal dependency, and because the implicants of a set of disallowed Boolean sequences A are themselves disallowed and represent precisely those dependencies that follow as a logical consequence from the dependencies represented by A. The main result of the theory is a necessary and sufficient condition for a sequence of Boolean expressions to be an implicant of a regular set of sequences of Boolean expressions. This result is the foundation for two new proof methods. Sequential resolution is a generalization of Boolean resolution which allows new logical/temporal dependencies to be inferred from existing dependencies. Normalization starts with a model (system) and a set of logical/temporal dependencies and determines which of those dependencies are satisfied by the model.<|reference_end|>
arxiv
@article{furtek2007a, title={A Theory and Calculus for Reasoning about Sequential Behavior}, author={Frederick Furtek}, journal={arXiv preprint arXiv:cs/0701088}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701088}, primaryClass={cs.LO cs.DM} }
furtek2007a
arxiv-675445
cs/0701089
Constructive Dimension and Turing Degrees
<|reference_start|>Constructive Dimension and Turing Degrees: This paper examines the constructive Hausdorff and packing dimensions of Turing degrees. The main result is that every infinite sequence S with constructive Hausdorff dimension dim_H(S) and constructive packing dimension dim_P(S) is Turing equivalent to a sequence R with dim_H(R) >= (dim_H(S) / dim_P(S)) - epsilon, for arbitrary epsilon > 0. Furthermore, if dim_P(S) > 0, then dim_P(R) >= 1 - epsilon. The reduction thus serves as a *randomness extractor* that increases the algorithmic randomness of S, as measured by constructive dimension. A number of applications of this result shed new light on the constructive dimensions of Turing degrees. A lower bound of dim_H(S) / dim_P(S) is shown to hold for the Turing degree of any sequence S. A new proof is given of a previously-known zero-one law for the constructive packing dimension of Turing degrees. It is also shown that, for any regular sequence S (that is, dim_H(S) = dim_P(S)) such that dim_H(S) > 0, the Turing degree of S has constructive Hausdorff and packing dimension equal to 1. Finally, it is shown that no single Turing reduction can be a universal constructive Hausdorff dimension extractor, and that bounded Turing reductions cannot extract constructive Hausdorff dimension. We also exhibit sequences on which weak truth-table and bounded Turing reductions differ in their ability to extract dimension.<|reference_end|>
arxiv
@article{bienvenu2007constructive, title={Constructive Dimension and Turing Degrees}, author={Laurent Bienvenu, David Doty, Frank Stephan}, journal={arXiv preprint arXiv:cs/0701089}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701089}, primaryClass={cs.CC cs.IT math.IT} }
bienvenu2007constructive
arxiv-675446
cs/0701090
Ergodic Capacity of Discrete- and Continuous-Time, Frequency-Selective Rayleigh Fading Channels with Correlated Scattering
<|reference_start|>Ergodic Capacity of Discrete- and Continuous-Time, Frequency-Selective Rayleigh Fading Channels with Correlated Scattering: We study the ergodic capacity of a frequency-selective Rayleigh fading channel with correlated scattering, which finds application in the area of UWB. Under an average power constraint, we consider a single-user, single-antenna transmission. Coherent reception is assumed with full CSI at the receiver and no CSI at the transmitter. We distinguish between a continuous- and a discrete-time channel, modeled either as random process or random vector with generic covariance. As a practically relevant example, we examine an exponentially attenuated Ornstein-Uhlenbeck process in detail. Finally, we give numerical results, discuss the relation between the continuous- and the discrete-time channel model and show the significant impact of correlated scattering.<|reference_end|>
arxiv
@article{mittelbach2007ergodic, title={Ergodic Capacity of Discrete- and Continuous-Time, Frequency-Selective Rayleigh Fading Channels with Correlated Scattering}, author={Martin Mittelbach, Christian Mueller and Konrad Schubert (Dresden University of Technology)}, journal={arXiv preprint arXiv:cs/0701090}, year={2007}, doi={10.1109/GLOCOM.2007.632}, archivePrefix={arXiv}, eprint={cs/0701090}, primaryClass={cs.IT math.IT} }
mittelbach2007ergodic
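The preceding abstract treats a frequency-selective channel with correlated scattering, whose exact evaluation needs the covariance model of the paper. As a far simpler, hedged illustration of what a coherent ergodic capacity with CSI at the receiver only looks like numerically, the following sketch Monte Carlo estimates E[log2(1 + SNR |h|^2)] for flat Rayleigh fading; it is not the paper's channel model.

```python
import numpy as np

def ergodic_capacity_flat_rayleigh(snr_db, trials=200_000, seed=0):
    """Monte Carlo estimate of E[log2(1 + SNR * |h|^2)] with |h| Rayleigh."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
    return float(np.mean(np.log2(1.0 + snr * np.abs(h) ** 2)))

print(ergodic_capacity_flat_rayleigh(10.0))  # about 2.90 bit/channel use at 10 dB
```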
arxiv-675447
cs/0701091
Iterative LDPC decoding using neighborhood reliabilities
<|reference_start|>Iterative LDPC decoding using neighborhood reliabilities: In this paper we study the impact of the processing order of the nodes of a bipartite graph on the performance of iterative message-passing decoding. To this end, we introduce the concept of neighborhood reliabilities of the graph's nodes. Node reliabilities are calculated at each iteration and are then used to obtain a processing order within a serial or serial/parallel scheduling. The basic idea is that by processing the most reliable data first, the decoder is reinforced before processing the less reliable data. Using neighborhood reliabilities, the Min-Sum decoder of LDPC codes approaches the performance of the Sum-Product decoder.<|reference_end|>
arxiv
@article{savin2007iterative, title={Iterative LDPC decoding using neighborhood reliabilities}, author={Valentin Savin}, journal={arXiv preprint arXiv:cs/0701091}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701091}, primaryClass={cs.IT math.IT} }
savin2007iterative
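A hedged sketch of the scheduling idea in the abstract above: serial min-sum decoding in which check nodes are processed in decreasing order of a neighborhood reliability. The concrete reliability used here (the sum of posterior |LLR|s over a check's neighborhood) and the check-serial schedule are our assumptions for illustration; the paper defines its own reliabilities and considers serial and serial/parallel schedules.

```python
import numpy as np

def minsum_decode(H, llr_ch, iters=20):
    """Check-serial min-sum; checks with the most reliable neighborhoods first."""
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]   # assumes check degree >= 2
    c2v = {(i, j): 0.0 for i in range(m) for j in rows[i]}
    post = np.asarray(llr_ch, dtype=float).copy()     # running posterior LLRs
    for _ in range(iters):
        rel = [sum(abs(post[j]) for j in rows[i]) for i in range(m)]
        for i in sorted(range(m), key=lambda c: -rel[c]):
            ext = {j: post[j] - c2v[i, j] for j in rows[i]}     # variable-to-check
            for j in rows[i]:
                others = [ext[k] for k in rows[i] if k != j]
                new = float(np.prod(np.sign(others))) * min(abs(v) for v in others)
                post[j] = ext[j] + new                          # refresh posterior
                c2v[i, j] = new
        hard = (post < 0).astype(int)
        if not ((H @ hard) % 2).any():                          # valid codeword
            break
    return hard

H = np.array([[1, 1, 0, 1, 0, 0],     # toy parity-check matrix, not a real LDPC code
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(minsum_decode(H, [2.1, -0.3, 1.9, 1.2, 0.4, 2.3]))
```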
arxiv-675448
cs/0701092
The Multiplexing Gain of MIMO X-Channels with Partial Transmit Side-Information
<|reference_start|>The Multiplexing Gain of MIMO X-Channels with Partial Transmit Side-Information: In this paper, we obtain the scaling laws of the sum-rate capacity of a MIMO X-channel (a channel with two independent transmitters and two independent receivers, with messages from each transmitter to each receiver) at high signal-to-noise ratio (SNR). The X-channel has sparked recent interest in the context of cooperative networks and it encompasses the interference, multiple access, and broadcast channels as special cases. Here, we consider the case with partially cooperative transmitters in which only partial and asymmetric side-information is available at one of the transmitters. It is proved that when there are M antennas at all four nodes, the sum-rate scales like 2Mlog(SNR), which is in sharp contrast to [\lfloor 4M/3 \rfloor,4M/3]log(SNR) for non-cooperative X-channels \cite{maddah-ali,jafar_degrees}. This further proves that, in terms of sum-rate scaling at high SNR, partial side-information at one of the transmitters and full side-information at both transmitters are equivalent in the MIMO X-channel.<|reference_end|>
arxiv
@article{devroye2007the, title={The Multiplexing Gain of MIMO X-Channels with Partial Transmit Side-Information}, author={Natasha Devroye, Masoud Sharif}, journal={arXiv preprint arXiv:cs/0701092}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701092}, primaryClass={cs.IT math.IT} }
devroye2007the
arxiv-675449
cs/0701093
Throughput Scaling Laws for Wireless Networks with Fading Channels
<|reference_start|>Throughput Scaling Laws for Wireless Networks with Fading Channels: A network of $n$ wireless communication links is considered. Fading is assumed to be the dominant factor affecting the strength of the channels between nodes. The objective is to analyze the achievable throughput of the network when power allocation is allowed. By proposing a decentralized on-off power allocation strategy, a lower bound on the achievable throughput is obtained for a general fading model. In particular, under Rayleigh fading conditions the achieved sum-rate is of order $\log n$, which is, by a constant factor, larger than what is obtained with a centralized scheme in the work of Gowaikar et al. Similar to most of previous works on large networks, the proposed scheme assigns a vanishingly small rate for each link. However, it is shown that by allowing the sum-rate to decrease by a factor $\alpha<1$, this scheme is capable of providing non-zero rate-per-links of order $\Theta(1)$. To obtain larger non-zero rate-per-links, the proposed scheme is modified to a centralized version. It turns out that for the same number of active links the centralized scheme achieves a much larger rate-per-link. Moreover, at large values of rate-per-link, it achieves a sum-rate close to $\log n$, i.e., the maximum achieved by the decentralized scheme.<|reference_end|>
arxiv
@article{ebrahimi2007throughput, title={Throughput Scaling Laws for Wireless Networks with Fading Channels}, author={Masoud Ebrahimi, Mohammad Maddah-Ali, and Amir Khandani}, journal={arXiv preprint arXiv:cs/0701093}, year={2007}, doi={10.1109/TIT.2007.907518}, archivePrefix={arXiv}, eprint={cs/0701093}, primaryClass={cs.IT math.IT} }
ebrahimi2007throughput
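The decentralized strategy described above is simple enough to simulate directly. The sketch below activates a link if and only if its own direct gain (exponentially distributed under Rayleigh fading) exceeds a threshold, then evaluates the resulting sum-rate with interference from the other active links. The threshold used here is a rough placeholder; the order-optimal choice achieving the log n scaling is derived in the paper.

```python
import numpy as np

def onoff_sum_rate(n, threshold, noise=1.0, seed=0):
    """On-off scheme: link i transmits at full power iff its direct gain > threshold."""
    rng = np.random.default_rng(seed)
    direct = rng.exponential(1.0, n)          # |h_ii|^2 under Rayleigh fading
    active = np.flatnonzero(direct > threshold)
    k = len(active)
    cross = rng.exponential(1.0, (k, k))      # cross gains among active links only
    np.fill_diagonal(cross, 0.0)
    rate = sum(np.log2(1.0 + direct[i] / (noise + cross[:, a].sum()))
               for a, i in enumerate(active))
    return rate, k

for n in (100, 1_000, 10_000):
    r, k = onoff_sum_rate(n, threshold=np.log(n / np.log(n)))  # placeholder threshold
    print(f"n={n:6d}  active={k:4d}  sum-rate={r:7.2f}  log2(n)={np.log2(n):5.2f}")
```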
arxiv-675450
cs/0701094
Maximizing the Probability of Delivery of Multipoint Relay Broadcast Protocol in Wireless Ad Hoc Networks with a Realistic Physical Layer
<|reference_start|>Maximizing the Probability of Delivery of Multipoint Relay Broadcast Protocol in Wireless Ad Hoc Networks with a Realistic Physical Layer: It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model is better suited to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from the unicast case. Therefore, broadcast protocols must be adapted in order to remain efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new replacement heuristics and report their performance, which demonstrates that they are better suited to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still considered uncovered while their coverage level does not exceed a given coverage threshold, so many neighbors may be selected to cover the same 2-hop neighbors.<|reference_end|>
arxiv
@article{ingelrest2007maximizing, title={Maximizing the Probability of Delivery of Multipoint Relay Broadcast Protocol in Wireless Ad Hoc Networks with a Realistic Physical Layer}, author={Fran\c{c}ois Ingelrest (LIFL, INRIA Futurs), David Simplot-Ryl (LIFL, INRIA Futurs)}, journal={Proceedings of the 2nd International Conference on Mobile Ad-hoc and Sensor Networks (MSN 2006) (2006) 143}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701094}, primaryClass={cs.NI} }
ingelrest2007maximizing
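The first heuristic described in the abstract lends itself to a compact greedy sketch: repeatedly select the neighbor whose probability of correct reception multiplied by its number of still-uncovered 2-hop nodes is largest. The data layout and names below are illustrative, not taken from the paper's implementation.

```python
def select_relays(p, cov, two_hop):
    """Greedy relay (MPR) selection maximizing p(reception) * new 2-hop coverage."""
    uncovered, relays = set(two_hop), []
    while uncovered:
        best = max(cov, key=lambda u: p[u] * len(cov[u] & uncovered))
        if not cov[best] & uncovered:
            break                        # remaining 2-hop nodes are unreachable
        relays.append(best)
        uncovered -= cov[best]
    return relays

p = {'u1': 0.9, 'u2': 0.6, 'u3': 0.8}                          # P(correct reception)
cov = {'u1': {'a', 'b'}, 'u2': {'b', 'c', 'd'}, 'u3': {'d'}}   # 2-hop nodes covered
print(select_relays(p, cov, two_hop={'a', 'b', 'c', 'd'}))     # ['u1', 'u2']
```

The second and third heuristics of the paper would replace the coverage count by average reception probabilities toward the covered 2-hop nodes, and by a threshold on the coverage level, respectively.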
arxiv-675451
cs/0701095
Propositional theories are strongly equivalent to logic programs
<|reference_start|>Propositional theories are strongly equivalent to logic programs: This paper presents a property of propositional theories under the answer sets semantics (called Equilibrium Logic for this general syntax): any theory can always be reexpressed as a strongly equivalent disjunctive logic program, possibly with negation in the head. We provide two different proofs for this result: one involving a syntactic transformation, and one that constructs a program starting from the countermodels of the theory in the intermediate logic of here-and-there.<|reference_end|>
arxiv
@article{cabalar2007propositional, title={Propositional theories are strongly equivalent to logic programs}, author={Pedro Cabalar and Paolo Ferraris}, journal={arXiv preprint arXiv:cs/0701095}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701095}, primaryClass={cs.AI cs.LO} }
cabalar2007propositional
arxiv-675452
cs/0701096
About the domino problem in the hyperbolic plane, a new solution
<|reference_start|>About the domino problem in the hyperbolic plane, a new solution: In this paper we improve the approach of a previous paper about the domino problem in the hyperbolic plane, see arXiv.cs.CG/0603093. This time, we prove that the general problem of the hyperbolic plane with à la Wang tiles is undecidable.<|reference_end|>
arxiv
@article{maurice2007about, title={About the domino problem in the hyperbolic plane, a new solution}, author={Margenstern Maurice}, journal={arXiv preprint arXiv:cs/0701096}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701096}, primaryClass={cs.CG} }
maurice2007about
arxiv-675453
cs/0701097
MacWilliams Identity for the Rank Metric
<|reference_start|>MacWilliams Identity for the Rank Metric: This paper investigates the relationship between the rank weight distribution of a linear code and that of its dual code. The main result of this paper is that, similar to the MacWilliams identity for the Hamming metric, the rank weight distribution of any linear code can be expressed as an analytical expression of that of its dual code. Remarkably, our new identity has a similar form to the MacWilliams identity for the Hamming metric. Our new identity provides a significant analytical tool to the rank weight distribution analysis of linear codes. We use a linear space based approach in the proof for our new identity, and adapt this approach to provide an alternative proof of the MacWilliams identity for the Hamming metric. Finally, we determine the relationship between moments of the rank distribution of a linear code and those of its dual code, and provide an alternative derivation of the rank weight distribution of maximum rank distance codes.<|reference_end|>
arxiv
@article{gadouleau2007macwilliams, title={MacWilliams Identity for the Rank Metric}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:cs/0701097}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701097}, primaryClass={cs.IT math.IT} }
gadouleau2007macwilliams
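The paper's identity lives in the rank metric; as a concrete reminder of the Hamming-metric MacWilliams identity it parallels, the sketch below enumerates a small binary code, brute-forces its dual, and checks W_{C^perp}(x, y) = |C|^{-1} W_C(x + y, x - y) at a few sample points.

```python
from itertools import product

G = [(1, 0, 1, 1), (0, 1, 0, 1)]                  # generator of a binary [4,2] code
n = len(G[0])

def span(rows):
    words = set()
    for coeffs in product((0, 1), repeat=len(rows)):
        words.add(tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                        for i in range(n)))
    return words

C = span(G)
C_dual = {v for v in product((0, 1), repeat=n)
          if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

def W(code, x, y):                                # Hamming weight enumerator at (x, y)
    return sum(x ** (n - sum(c)) * y ** sum(c) for c in code)

for x, y in [(1, 0), (2, 1), (3, 2)]:
    assert len(C) * W(C_dual, x, y) == W(C, x + y, x - y)
print("Hamming-metric MacWilliams identity verified at the sample points")
```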
arxiv-675454
cs/0701098
Packing and Covering Properties of Rank Metric Codes
<|reference_start|>Packing and Covering Properties of Rank Metric Codes: This paper investigates packing and covering properties of codes with the rank metric. First, we investigate packing properties of rank metric codes. Then, we study sphere covering properties of rank metric codes, derive bounds on their parameters, and investigate their asymptotic covering properties.<|reference_end|>
arxiv
@article{gadouleau2007packing, title={Packing and Covering Properties of Rank Metric Codes}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:cs/0701098}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701098}, primaryClass={cs.IT math.IT} }
gadouleau2007packing
arxiv-675455
cs/0701099
On the Feedback Capacity of Power Constrained Gaussian Noise Channels with Memory
<|reference_start|>On the Feedback Capacity of Power Constrained Gaussian Noise Channels with Memory: For a stationary additive Gaussian-noise channel with a rational noise power spectrum of a finite-order $L$, we derive two new results for the feedback capacity under an average channel input power constraint. First, we show that a very simple feedback-dependent Gauss-Markov source achieves the feedback capacity, and that Kalman-Bucy filtering is optimal for processing the feedback. Based on these results, we develop a new method for optimizing the channel inputs for achieving the Cover-Pombra block-length-$n$ feedback capacity by using a dynamic programming approach that decomposes the computation into $n$ sequentially identical optimization problems where each stage involves optimizing $O(L^2)$ variables. Second, we derive the explicit maximal information rate for stationary feedback-dependent sources. In general, evaluating the maximal information rate for stationary sources requires solving only a few equations by simple non-linear programming. For first-order autoregressive and/or moving average (ARMA) noise channels, this optimization admits a closed form maximal information rate formula. The maximal information rate for stationary sources is a lower bound on the feedback capacity, and it equals the feedback capacity if the long-standing conjecture, that stationary sources achieve the feedback capacity, holds.<|reference_end|>
arxiv
@article{yang2007on, title={On the Feedback Capacity of Power Constrained Gaussian Noise Channels with Memory}, author={Shaohua Yang, Aleksandar Kavcic, Sekhar Tatikonda}, journal={arXiv preprint arXiv:cs/0701099}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701099}, primaryClass={cs.IT math.IT} }
yang2007on
arxiv-675456
cs/0701100
Delayed Feedback Capacity of Stationary Sources over Linear Gaussian Noise Channels
<|reference_start|>Delayed Feedback Capacity of Stationary Sources over Linear Gaussian Noise Channels: We consider a linear Gaussian noise channel used with delayed feedback. The channel noise is assumed to be an ARMA (autoregressive and/or moving average) process. We reformulate the Gaussian noise channel into an intersymbol interference channel with white noise, and show that the delayed-feedback of the original channel is equivalent to the instantaneous-feedback of the derived channel. By generalizing results previously developed for Gaussian channels with instantaneous feedback and applying them to the derived intersymbol interference channel, we show that conditioned on the delayed feedback, a conditional Gauss-Markov source achieves the feedback capacity and its Markov memory length is determined by the noise spectral order and the feedback delay. A Kalman-Bucy filter is shown to be optimal for processing the feedback. The maximal information rate for stationary sources is derived in terms of the channel input power constraint and the steady state solution of the Riccati equation of the Kalman-Bucy filter used in the feedback loop.<|reference_end|>
arxiv
@article{yang2007delayed, title={Delayed Feedback Capacity of Stationary Sources over Linear Gaussian Noise Channels}, author={Shaohua Yang, Aleksandar Kavcic}, journal={arXiv preprint arXiv:cs/0701100}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701100}, primaryClass={cs.IT math.IT} }
yang2007delayed
arxiv-675457
cs/0701101
Citation advantage of Open Access articles likely explained by quality differential and media effects
<|reference_start|>Citation advantage of Open Access articles likely explained by quality differential and media effects: In a study of articles published in the Proceedings of the National Academy of Sciences, Gunther Eysenbach discovered a significant citation advantage for those articles made freely available upon publication (Eysenbach 2006). While the author attempted to control for confounding factors that may have explained the citation differential, the study was unable to control for characteristics of the article that may have led some authors to pay the additional page charges ($1,000) for immediate OA status. OA articles published in PNAS were more than twice as likely to be featured on the front cover of the journal (3.3% vs. 1.4%), nearly twice as likely to be picked up by the media (15% vs. 8%), and, when covered, reached on average nearly twice as many news outlets as subscription-based articles (4.2 vs. 2.6). The citation advantage of Open Access articles in PNAS may thus likely be explained by a quality differential and the amplification of media effects.<|reference_end|>
arxiv
@article{davis2007citation, title={Citation advantage of Open Access articles likely explained by quality differential and media effects}, author={Philip M. Davis}, journal={arXiv preprint arXiv:cs/0701101}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701101}, primaryClass={cs.DL} }
davis2007citation
arxiv-675458
cs/0701102
Coding Solutions for the Secure Biometric Storage Problem
<|reference_start|>Coding Solutions for the Secure Biometric Storage Problem: The paper studies the problem of securely storing biometric passwords, such as fingerprints and irises. With the help of coding theory, Juels and Wattenberg derived in 1999 a scheme where similar input strings will be accepted as the same biometric. At the same time, nothing can be learned from the stored data. They called their scheme a "fuzzy commitment scheme". In this paper we revisit the solution of Juels and Wattenberg and provide answers to two important questions: What type of error-correcting codes should be used, and what happens if biometric templates are not uniformly distributed, i.e. the biometric data come with redundancy. Answering the first question leads us to the search for low-rate, large-minimum-distance error-correcting codes which come with efficient decoding algorithms up to the designed distance. In order to answer the second question we relate the required rate to a quantity connected to the "entropy" of the string, trying to estimate a sort of "capacity", if we want to see a flavor of the converse of Shannon's noisy coding theorem. Finally we deal with side-problems arising in a practical implementation and we propose a possible solution to the main one that seems to have so far prevented real-life applications of the fuzzy scheme, as far as we know.<|reference_end|>
arxiv
@article{schipani2007coding, title={Coding Solutions for the Secure Biometric Storage Problem}, author={Davide Schipani and Joachim Rosenthal}, journal={arXiv preprint arXiv:cs/0701102}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701102}, primaryClass={cs.IT cs.CR math.IT} }
schipani2007coding
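The Juels-Wattenberg fuzzy commitment scheme revisited in this record can be sketched in a few lines: commit to a random codeword c by storing (hash(c), w XOR c); a verification template w' close to w lets the decoder recover c and match the hash. The toy rate-1/3 repetition code below stands in for the low-rate, large-distance codes the paper actually argues for, and the parameter choices are ours.

```python
import hashlib
import secrets

def rep_encode(bits):                    # toy ECC: repeat every bit three times
    return [b for b in bits for _ in range(3)]

def rep_decode(bits):                    # majority vote per 3-bit block
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sha(bits):
    return hashlib.sha256(bytes(bits)).hexdigest()

def commit(w):                           # w: enrolled biometric template (bits)
    k = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    c = rep_encode(k)                    # random codeword
    return sha(c), xor(w, c)             # store only (hash(c), offset)

def open_commitment(commitment, w_prime):
    h, offset = commitment
    c_noisy = xor(w_prime, offset)       # equals c XOR (w XOR w')
    c_hat = rep_encode(rep_decode(c_noisy))
    return sha(c_hat) == h

w = [1, 0, 1, 1, 0, 1, 0, 0, 1]
com = commit(w)
w_noisy = w.copy(); w_noisy[4] ^= 1      # one bit flips at verification time
print(open_commitment(com, w), open_commitment(com, w_noisy))   # True True
```

One bit error per 3-bit block is corrected; two errors in the same block defeat the toy code, which is exactly why the choice of code and its decoding radius matter.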
arxiv-675459
cs/0701103
Analysis and design of raptor codes for joint decoding using Information Content evolution
<|reference_start|>Analysis and design of raptor codes for joint decoding using Information Content evolution: In this paper, we present an analysis of the convergence of raptor codes under joint decoding over the binary-input additive white Gaussian noise channel (BIAWGNC), and derive an optimization method. We use Information Content evolution under a Gaussian approximation, and focus on a new decoding scheme that proves to be more efficient: the joint decoding of the two code components of the raptor code. In our general model, the classical tandem decoding scheme appears as a special case, and thus the design of LT codes is also possible.<|reference_end|>
arxiv
@article{venkiah2007analysis, title={Analysis and design of raptor codes for joint decoding using Information Content evolution}, author={Auguste Venkiah, Charly Poulliat, David Declercq}, journal={arXiv preprint arXiv:cs/0701103}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701103}, primaryClass={cs.IT math.IT} }
venkiah2007analysis
arxiv-675460
cs/0701104
Why is a new Journal of Informetrics needed?
<|reference_start|>Why is a new Journal of Informetrics needed?: In our study we analysed 3,889 records indexed in the Library and Information Science Abstracts (LISA) database in the research field of informetrics. We show the core journals of the field via a Bradford (power law) distribution and corroborate, on the basis of the restricted LISA data set, that it was the appropriate time to found a new specialized journal dedicated to informetrics. According to Bradford's Law of scattering (a purely quantitative calculation), Egghe's Journal of Informetrics (JOI), whose first issue appeared in 2007, comes most probably at the right time.<|reference_end|>
arxiv
@article{mayr2007why, title={Why is a new Journal of Informetrics needed?}, author={Philipp Mayr, Walther Umstaetter}, journal={arXiv preprint arXiv:cs/0701104}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701104}, primaryClass={cs.DL cs.DB} }
mayr2007why
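As a small illustration of the Bradford-style analysis mentioned in the abstract, the sketch below sorts journals by productivity and splits them into three zones of roughly equal article yield; Bradford's law predicts the journal counts per zone to grow roughly geometrically (about 1 : k : k^2). The counts are invented, not the LISA data.

```python
def bradford_zones(counts, zones=3):
    """Journals per zone when zones carry (roughly) equal numbers of articles."""
    counts = sorted(counts, reverse=True)
    total, out, acc, cur = sum(counts), [], 0, []
    for c in counts:
        cur.append(c)
        acc += c
        if len(out) < zones - 1 and acc >= total * (len(out) + 1) / zones:
            out.append(cur)
            cur = []
    out.append(cur)
    return [len(z) for z in out]

toy = [120, 80, 40, 20, 15, 10, 8, 6, 5, 4, 3, 3, 2, 2, 2]
print(bradford_zones(toy))   # e.g. [1, 2, 12]: few core journals, a long tail
```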
arxiv-675461
cs/0701105
A Delta Debugger for ILP Query Execution
<|reference_start|>A Delta Debugger for ILP Query Execution: Because query execution is the most crucial part of Inductive Logic Programming (ILP) algorithms, a lot of effort is invested in developing faster execution mechanisms. These execution mechanisms typically have a low-level implementation, making them hard to debug. Moreover, other factors such as the complexity of the problems handled by ILP algorithms and the size of the code base of ILP data mining systems make debugging at this level a very difficult job. In this work, we present the trace-based debugging approach currently used in the development of new execution mechanisms in hipP, the engine underlying the ACE Data Mining system. This debugger uses the delta debugging algorithm to automatically reduce the total time needed to expose bugs in ILP execution, thus making the manual debugging step much lighter.<|reference_end|>
arxiv
@article{troncon2007a, title={A Delta Debugger for ILP Query Execution}, author={Remko Troncon, Gerda Janssens}, journal={arXiv preprint arXiv:cs/0701105}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701105}, primaryClass={cs.PL cs.LG} }
troncon2007a
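The delta debugging algorithm named in the abstract is well documented (Zeller's ddmin); a compact variant that repeatedly tries to continue with the complement of one chunk is sketched below. The toy failure predicate is ours; in the paper's setting the test would re-run the ILP query execution.

```python
def ddmin(failing_input, test):
    """Shrink `failing_input` (a list) while `test` still reports 'FAIL'."""
    inp, n = list(failing_input), 2
    while len(inp) >= 2:
        chunk = len(inp) // n
        subsets = [inp[i:i + chunk] for i in range(0, len(inp), chunk)]
        reduced = False
        for i in range(len(subsets)):
            complement = sum(subsets[:i] + subsets[i + 1:], [])
            if test(complement) == 'FAIL':
                inp, n = complement, max(n - 1, 2)   # keep the smaller failing input
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break                                # finest granularity reached
            n = min(2 * n, len(inp))                 # refine and retry
    return inp

trigger = lambda xs: 'FAIL' if {3, 7} <= set(xs) else 'PASS'   # toy "bug"
print(ddmin(list(range(10)), trigger))                         # [3, 7]
```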
arxiv-675462
cs/0701106
On using Tracer Driver for External Dynamic Process Observation
<|reference_start|>On using Tracer Driver for External Dynamic Process Observation: We are interested here in the observation of dynamic processes through the traces they leave or are made to produce. We consider that it should be possible to make several observations simultaneously, using a large variety of independently developed analyzers. For this purpose, we introduce the original notion of "full trace" to capture the idea that a process can be instrumented in such a way that it may broadcast all information which could ever be requested by any kind of observer. Each analyzer can then find in the full trace the data elements it needs. This approach uses what has been called a "tracer driver", which completes the tracer and drives it to answer the requests of the analyzers. A tracer driver allows the flow of information to be restricted and makes this approach tractable. On the other hand, the potential size of a full trace seems to make the idea of a full trace unrealistic. In this work we explore the consequences of this notion in terms of potential efficiency, by analyzing the respective workloads of the (full) tracer and many different analyzers, all of which may run in truly parallel environments. To illustrate this study, we use the example of the observation of the resolution of constraint systems (proof-tree, search-tree and propagation) using sophisticated visualization tools, as developed in the project OADymPPaC (2001-2004). The processes considered here are computer programs, but we believe the approach can be extended to many other kinds of processes.<|reference_end|>
arxiv
@article{deransart2007on, title={On using Tracer Driver for External Dynamic Process Observation}, author={Pierre Deransart}, journal={arXiv preprint arXiv:cs/0701106}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701106}, primaryClass={cs.PL} }
deransart2007on
arxiv-675463
cs/0701107
JavaTA: A Logic-based Debugger for Java
<|reference_start|>JavaTA: A Logic-based Debugger for Java: This paper presents a logic-based approach to debugging Java programs. In contrast with traditional debugging, we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of debugging object-oriented programs.<|reference_end|>
arxiv
@article{girgis2007javata:, title={JavaTA: A Logic-based Debugger for Java}, author={Hani Girgis, Bharat Jayaraman}, journal={arXiv preprint arXiv:cs/0701107}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701107}, primaryClass={cs.PL} }
girgis2007javata:
arxiv-675464
cs/0701108
Towards Execution Time Estimation for Logic Programs via Static Analysis and Profiling
<|reference_start|>Towards Execution Time Estimation for Logic Programs via Static Analysis and Profiling: Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent from the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is specially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures in the given platform. The approach has been implemented and integrated in the CiaoPP system.<|reference_end|>
arxiv
@article{mera2007towards, title={Towards Execution Time Estimation for Logic Programs via Static Analysis and Profiling}, author={Edison Mera, Pedro Lopez-Garcia, German Puebla, Manuel Carro, Manuel Hermenegildo}, journal={arXiv preprint arXiv:cs/0701108}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701108}, primaryClass={cs.PL} }
mera2007towards
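The calibration step described above (one-time profiling that turns platform-independent cost bounds into execution times) can be pictured as a linear fit: measured times are regressed on the counts of low-level operations to obtain a per-operation cost for the platform. All numbers below are invented for illustration; the actual CiaoPP calibration procedure and cost events differ.

```python
import numpy as np

# Hypothetical profiling runs: per-benchmark counts of low-level operations
# (e.g. unifications, calls, arithmetic) and measured times in microseconds.
counts = np.array([[120.0,  30.0, 10.0],
                   [400.0, 100.0, 25.0],
                   [ 80.0,  20.0, 40.0],
                   [950.0, 300.0, 60.0]])
times = np.array([510.0, 1700.0, 420.0, 4100.0])

cost_per_op, *_ = np.linalg.lstsq(counts, times, rcond=None)
print(cost_per_op)                     # platform-specific cost of each operation

# At analysis time, statically inferred counts then yield time bound estimates:
predicted = counts @ cost_per_op
print(predicted)
```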
arxiv-675465
cs/0701109
ExSched: Solving Constraint Satisfaction Problems with the Spreadsheet Paradigm
<|reference_start|>ExSched: Solving Constraint Satisfaction Problems with the Spreadsheet Paradigm: We report on the development of a general tool called ExSched, implemented as a plug-in for Microsoft Excel, for solving a class of constraint satisfaction problems. The traditional spreadsheet paradigm is based on attaching arithmetic expressions to individual cells and then evaluating them. The ExSched interface generalizes the spreadsheet paradigm to allow finite domain constraints to be attached to the individual cells that are then solved to get a solution. This extension provides a user-friendly interface for solving constraint satisfaction problems that can be modeled as 2D tables, such as scheduling problems, timetabling problems, product configuration, etc. ExSched can be regarded as a spreadsheet interface to CLP(FD) that hides the syntactic and semantic complexity of CLP(FD) and enables novice users to solve many scheduling and timetabling problems interactively.<|reference_end|>
arxiv
@article{chitnis2007exsched:, title={ExSched: Solving Constraint Satisfaction Problems with the Spreadsheet Paradigm}, author={Siddharth Chitnis, Madhu Yennamani, Gopal Gupta}, journal={arXiv preprint arXiv:cs/0701109}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701109}, primaryClass={cs.PL} }
chitnis2007exsched:
arxiv-675466
cs/0701110
A Web-based Tool Combining Different Type Analyses
<|reference_start|>A Web-based Tool Combining Different Type Analyses: There are various kinds of type analysis of logic programs. These include for example inference of types that describe an over-approximation of the success set of a program, inference of well-typings, and abstractions based on given types. Analyses can be descriptive or prescriptive or a mixture of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program clauses, and to highlight ill-typed program constructs, empty types or other type anomalies. Secondly the tool allows combination of the various styles of analysis. For example, a descriptive regular type can be automatically inferred for a given program, and then that type can be used to generate the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type.<|reference_end|>
arxiv
@article{henriksen2007a, title={A Web-based Tool Combining Different Type Analyses}, author={Kim Henriksen, John Gallagher}, journal={arXiv preprint arXiv:cs/0701110}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701110}, primaryClass={cs.PL} }
henriksen2007a
arxiv-675467
cs/0701111
Some Issues on Incremental Abstraction-Carrying Code
<|reference_start|>Some Issues on Incremental Abstraction-Carrying Code: Abstraction-Carrying Code (ACC) has recently been proposed as a framework for proof-carrying code (PCC) in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate and its generation (and validation) is carried out automatically by a fixed-point analyzer. Existing approaches for PCC are developed under the assumption that the consumer reads and validates the entire program w.r.t. the full certificate at once, in a non incremental way. In this abstract, we overview the main issues on incremental ACC. In particular, in the context of logic programming, we discuss both the generation of incremental certificates and the design of an incremental checking algorithm for untrusted updates of a (trusted) program, i.e., when a producer provides a modified version of a previously validated program. By update, we refer to any arbitrary change on a program, i.e., the extension of the program with new predicates, the deletion of existing predicates and the replacement of existing predicates by new versions for them. We also discuss how each kind of update affects the incremental extension in terms of accuracy and correctness.<|reference_end|>
arxiv
@article{albert2007some, title={Some Issues on Incremental Abstraction-Carrying Code}, author={Elvira Albert, Puri Arenas, German Puebla}, journal={arXiv preprint arXiv:cs/0701111}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701111}, primaryClass={cs.PL} }
albert2007some
arxiv-675468
cs/0701112
(l,s)-Extension of Linear Codes
<|reference_start|>(l,s)-Extension of Linear Codes: We construct new linear codes with high minimum distance d. In at least 12 cases these codes improve the minimum distance of the previously known best linear codes for fixed parameters n,k. Among these new codes there is an optimal ternary [88,8,54] code. We develop an algorithm, which starts with already good codes C, i.e. codes with high minimum distance d for given length n and dimension k over the field GF(q). The algorithm is based on the newly defined (l,s)-extension. This is a generalization of the well-known method of adding a parity bit in the case of a binary linear code of odd minimum weight. (l,s)-extension tries to extend the generator matrix of C by adding l columns with the property that at least s of the l letters added to each of the codewords of minimum weight in C are different from 0. If one finds such columns the minimum distance of the extended code is d+s provided that the second smallest weight in C was at least d+s. The question whether such columns exist can be settled using a Diophantine system of equations.<|reference_end|>
arxiv
@article{kohnert2007(l,s)-extension, title={(l,s)-Extension of Linear Codes}, author={Axel Kohnert}, journal={arXiv preprint arXiv:cs/0701112}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701112}, primaryClass={cs.IT math.CO math.IT} }
kohnert2007(l,s)-extension
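The familiar special case mentioned in the abstract (adding a parity bit to a binary code of odd minimum weight) is easy to check by brute force, and shows the flavor of a (1,1)-extension: one added column that is nonzero on every minimum-weight codeword. The tiny code below is our own example.

```python
from itertools import product
import numpy as np

def min_distance(G):
    k, _ = G.shape
    return min(int(((np.array(m) @ G) % 2).sum())
               for m in product((0, 1), repeat=k) if any(m))

G = np.array([[1, 0, 1, 1, 0],        # binary [5,2] code with odd d = 3
              [0, 1, 1, 0, 1]])
parity = G.sum(axis=1) % 2            # per-row weight parity; extends linearly,
G_ext = np.column_stack([G, parity])  # so the new column is 1 on odd-weight words
print(min_distance(G), min_distance(G_ext))   # 3 4
```

As the abstract notes, the gain d -> d+s also needs the second-smallest weight of the original code to be at least d+s; here the codeword weights 3, 3, 4 all become 4.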
arxiv-675469
cs/0701113
On factorisation forests
<|reference_start|>On factorisation forests: The theorem of factorisation forests shows the existence of nested factorisations -- à la Ramsey -- for finite words. This theorem has important applications in semigroup theory, and beyond. The purpose of this paper is to illustrate the importance of this approach in the context of automata over infinite words and trees. We extend the theorem of factorisation forests in two directions: we show that it is still valid for any word indexed by a linear ordering; and we show that it admits a deterministic variant for words indexed by well-orderings. A byproduct of this work is also an improvement on the known bounds for the original result. We apply the first variant to give a simplified proof of the closure under complementation of rational sets of words indexed by countable scattered linear orderings. We apply the second variant in the analysis of monadic second-order logic over trees, yielding new results on monadic interpretations over trees. Among its consequences are new characterisations of prefix-recognizable structures and of the Caucal hierarchy.<|reference_end|>
arxiv
@article{colcombet2007on, title={On factorisation forests}, author={Thomas Colcombet (GALION)}, journal={arXiv preprint arXiv:cs/0701113}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701113}, primaryClass={cs.LO} }
colcombet2007on
arxiv-675470
cs/0701114
The Problem of Determining Functional Dependencies between Attributes of a Relation Scheme in the Relational Data Model
<|reference_start|>The Problem of Determining Functional Dependencies between Attributes of a Relation Scheme in the Relational Data Model: An alternative definition is given of the concept of functional dependence among the attributes of a relation scheme in the relational model; this definition is stated in terms of set theory. A theorem establishing the equivalence of the two definitions is proved, and on the basis of this theorem an algorithm is built for the search for functional dependencies among the attributes. The algorithm is illustrated by a concrete example.<|reference_end|>
arxiv
@article{vega-paez2007the, title={The Problem of Determining Functional Dependencies between Attributes of a Relation Scheme in the Relational Data Model}, author={Ignacio Vega-Paez, Georgina G. Pulido, Jose Angel Ortega}, journal={International Journal of Multidisciplinary Sciences and Engineering, Vol. 2, No. 5, 2011, 1-4}, year={2007}, number={IBP-Memo 2006 07, Sep 2006}, archivePrefix={arXiv}, eprint={cs/0701114}, primaryClass={cs.DB cs.DS} }
vega-paez2007the
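The core test behind the paper's set-theoretic definition, whether a functional dependency X -> Y holds in a given relation instance, is a one-pass check: equal projections on X must force equal projections on Y. A minimal sketch with invented data:

```python
def fd_holds(relation, X, Y):
    """True iff the functional dependency X -> Y holds in `relation` (list of dicts)."""
    seen = {}
    for row in relation:
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if seen.setdefault(key, val) != val:
            return False                 # same X-value, different Y-values
    return True

r = [{'emp': 1, 'dept': 'A', 'mgr': 'x'},
     {'emp': 2, 'dept': 'A', 'mgr': 'x'},
     {'emp': 3, 'dept': 'B', 'mgr': 'y'}]
print(fd_holds(r, ['dept'], ['mgr']))    # True:  dept -> mgr
print(fd_holds(r, ['mgr'], ['emp']))     # False: mgr does not determine emp
```

A naive search for all dependencies would run this check over candidate attribute subsets; the paper's algorithm organizes that search via its equivalence theorem.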
arxiv-675471
cs/0701115
Browser-based distributed evolutionary computation: performance and scaling behavior
<|reference_start|>Browser-based distributed evolutionary computation: performance and scaling behavior: The challenge of ad-hoc computing is to find a way of taking advantage of spare cycles efficiently, taking into account all the capabilities of the devices and the interconnections available to them. In this paper we explore distributed evolutionary computation based on the Ruby on Rails framework, which overlays a Model-View-Controller on evolutionary computation. It allows anybody with a web browser (that is, mostly everybody connected to the Internet) to participate in an evolutionary computation experiment. Using a straightforward farming model, we consider different factors, such as the size of the population used. We are mostly interested in how they impact performance, but also in the scaling behavior when a non-trivial number of computers is applied to the problem. Experiments show the impact of different packet sizes on performance, as well as a quite limited scaling behavior due to the characteristics of the server. Several solutions to that problem are proposed.<|reference_end|>
arxiv
@article{merelo2007browser-based, title={Browser-based distributed evolutionary computation: performance and scaling behavior}, author={J.J. Merelo, Antonio Mora-Garcia, J.L.J. Laredo, Juan Lupion, Fernando Tricas}, journal={arXiv preprint arXiv:cs/0701115}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701115}, primaryClass={cs.DC cs.NE} }
merelo2007browser-based
arxiv-675472
cs/0701116
The Impact of CSI and Power Allocation on Relay Channel Capacity and Cooperation Strategies
<|reference_start|>The Impact of CSI and Power Allocation on Relay Channel Capacity and Cooperation Strategies: Capacity gains from transmitter and receiver cooperation are compared in a relay network where the cooperating nodes are close together. Under quasi-static phase fading, when all nodes have equal average transmit power along with full channel state information (CSI), it is shown that transmitter cooperation outperforms receiver cooperation, whereas the opposite is true when power is optimally allocated among the cooperating nodes but only CSI at the receiver (CSIR) is available. When the nodes have equal power with CSIR only, cooperative schemes are shown to offer no capacity improvement over non-cooperation under the same network power constraint. When the system is under optimal power allocation with full CSI, the decode-and-forward transmitter cooperation rate is close to its cut-set capacity upper bound, and outperforms compress-and-forward receiver cooperation. Under fast Rayleigh fading in the high SNR regime, similar conclusions follow. Cooperative systems provide resilience to fading in channel magnitudes; however, capacity becomes more sensitive to power allocation, and the cooperating nodes need to be closer together for the decode-and-forward scheme to be capacity-achieving. Moreover, to realize capacity improvement, full CSI is necessary in transmitter cooperation, while in receiver cooperation optimal power allocation is essential.<|reference_end|>
arxiv
@article{ng2007the, title={The Impact of CSI and Power Allocation on Relay Channel Capacity and Cooperation Strategies}, author={Chris T. K. Ng and Andrea J. Goldsmith}, journal={IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 5380-5389, Dec. 2008}, year={2007}, doi={10.1109/T-WC.2008.071185}, archivePrefix={arXiv}, eprint={cs/0701116}, primaryClass={cs.IT math.IT} }
ng2007the
arxiv-675473
cs/0701117
Maximum Entropy in the framework of Algebraic Statistics: A First Step
<|reference_start|>Maximum Entropy in the framework of Algebraic Statistics: A First Step: Algebraic statistics is a recently evolving field, where one treats statistical models as algebraic objects and thereby uses tools from computational commutative algebra and algebraic geometry in the analysis and computation of statistical models. In this approach, the calculation of the parameters of statistical models amounts to solving sets of polynomial equations in several variables, for which one can use the celebrated Gröbner basis theory. Owing to the important role of information theory in statistics, this paper, as a first step, explores the possibility of describing maximum and minimum entropy (ME) models in the framework of algebraic statistics. We show that ME-models are toric models (a class of algebraic statistical models) when the constraint functions (that provide the information about the underlying random variable) are integer-valued functions, and that the set of statistical models that results from ME-methods is indeed an affine variety.<|reference_end|>
arxiv
@article{dukkipati2007maximum, title={Maximum Entropy in the framework of Algebraic Statistics: A First Step}, author={Ambedkar Dukkipati}, journal={arXiv preprint arXiv:cs/0701117}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701117}, primaryClass={cs.IT cs.SC math.IT} }
dukkipati2007maximum
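The link asserted in the abstract can be made concrete: with an integer-valued constraint function f, the maximum-entropy distribution subject to E[f] = c has the Gibbs form p_x proportional to exp(lambda f(x)) = t^{f(x)} with t = e^lambda, i.e. a monomial (toric) parametrization. The sketch below solves the classical Jaynes die example by bisecting on lambda; it illustrates the ME computation, not the paper's Gröbner-basis machinery.

```python
import math

def max_entropy(f, c, lo=-10.0, hi=10.0, tol=1e-10):
    """Maximum-entropy distribution on {0, ..., len(f)-1} subject to E[f] = c."""
    def mean(lam):
        w = [math.exp(lam * fx) for fx in f]
        return sum(fx * wx for fx, wx in zip(f, w)) / sum(w)
    while hi - lo > tol:                 # E[f] is increasing in lambda: bisect
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if mean(mid) < c else (lo, mid)
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * fx) for fx in f]
    z = sum(w)
    return [wx / z for wx in w]

# a die whose average roll is constrained to 4.5; f(x) = face value
p = max_entropy([1, 2, 3, 4, 5, 6], 4.5)
print([round(q, 4) for q in p])          # skewed toward high faces, as expected
```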
arxiv-675474
cs/0701118
Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless Interference Channels
<|reference_start|>Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless Interference Channels: A $K$-user memoryless interference channel is considered where each receiver sequentially decodes the data of a subset of transmitters before it decodes the data of the designated transmitter. Therefore, the data rate of each transmitter depends on (i) the subset of receivers which decode the data of that transmitter, (ii) the decoding order, employed at each of these receivers. In this paper, a greedy algorithm is developed to find the users which are decoded at each receiver and the corresponding decoding order such that the minimum rate of the users is maximized. It is proven that the proposed algorithm is optimal.<|reference_end|>
arxiv
@article{maddah-ali2007optimal, title={Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless Interference Channels}, author={Mohammad Ali Maddah-Ali, Hajar Mahdavi-Doost, and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0701118}, year={2007}, doi={10.1109/ISIT.2007.4557653}, archivePrefix={arXiv}, eprint={cs/0701118}, primaryClass={cs.IT math.IT} }
maddah-ali2007optimal
arxiv-675475
cs/0701119
The framework for simulation of dynamics of mechanical aggregates
<|reference_start|>The framework for simulation of dynamics of mechanical aggregates: A framework for the simulation of the dynamics of mechanical aggregates has been developed. This framework enables us to build a model of an aggregate from models of its parts. The framework is part of a universal framework for science and engineering.<|reference_end|>
arxiv
@article{ivankov2007the, title={The framework for simulation of dynamics of mechanical aggregates}, author={Petr R. Ivankov, Nikolay P. Ivankov}, journal={arXiv preprint arXiv:cs/0701119}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701119}, primaryClass={cs.CE} }
ivankov2007the
arxiv-675476
cs/0701120
Algorithmic Complexity Bounds on Future Prediction Errors
<|reference_start|>Algorithmic Complexity Bounds on Future Prediction Errors: We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume we are at a time $t>1$ and already observed $x=x_1...x_t$. We bound the future prediction performance on $x_{t+1}x_{t+2}...$ by a new variant of algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.<|reference_end|>
arxiv
@article{chernov2007algorithmic, title={Algorithmic Complexity Bounds on Future Prediction Errors}, author={A. Chernov and M. Hutter and J. Schmidhuber}, journal={Information and Computation, Vol.205,Nr.2 (2007) 242-261}, year={2007}, doi={10.1016/j.ic.2006.10.004}, archivePrefix={arXiv}, eprint={cs/0701120}, primaryClass={cs.LG cs.AI cs.IT math.IT} }
chernov2007algorithmic
arxiv-675477
cs/0701121
Signature Sequence of Intersection Curve of Two Quadrics for Exact Morphological Classification
<|reference_start|>Signature Sequence of Intersection Curve of Two Quadrics for Exact Morphological Classification: We present an efficient method for classifying the morphology of the intersection curve of two quadrics (QSIC) in PR3, 3D real projective space; here, the term morphology is used in a broad sense to mean the shape, topological, and algebraic properties of a QSIC, including singularity, reducibility, the number of connected components, and the degree of each irreducible component, etc. There are in total 35 different QSIC morphologies with non-degenerate quadric pencils. For each of these 35 QSIC morphologies, through a detailed study of the eigenvalue curve and the index function jump we establish a characterizing algebraic condition expressed in terms of the Segre characteristics and the signature sequence of a quadric pencil. We show how to compute a signature sequence with rational arithmetic so as to determine the morphology of the intersection curve of any two given quadrics. Two immediate applications of our results are the robust topological classification of QSIC in computing B-rep surface representation in solid modeling and the derivation of algebraic conditions for collision detection of quadric primitives.<|reference_end|>
arxiv
@article{tu2007signature, title={Signature Sequence of Intersection Curve of Two Quadrics for Exact Morphological Classification}, author={Changhe Tu (Shandong University), Wenping Wang (University of Hong Kong), Bernard Mourrain (INRIA Sophia Antipolis), Jiaye Wang (Shandong University)}, journal={arXiv preprint arXiv:cs/0701121}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701121}, primaryClass={cs.CG cs.SC} }
tu2007signature
arxiv-675478
cs/0701122
Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems
<|reference_start|>Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems: Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted.<|reference_end|>
arxiv
@article{bagnara2007applications, title={Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems}, author={Roberto Bagnara, Patricia M. Hill, Enea Zaffanella}, journal={arXiv preprint arXiv:cs/0701122}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701122}, primaryClass={cs.CG cs.MS} }
bagnara2007applications
arxiv-675479
cs/0701123
Feasible Depth
<|reference_start|>Feasible Depth: This paper introduces two complexity-theoretic formulations of Bennett's logical depth: finite-state depth and polynomial-time depth. It is shown that for both formulations, trivial and random infinite sequences are shallow, and a slow growth law holds, implying that deep sequences cannot be created easily from shallow sequences. Furthermore, the E analogue of the halting language is shown to be polynomial-time deep, by proving a more general result: every language to which a nonnegligible subset of E can be reduced in uniform exponential time is polynomial-time deep.<|reference_end|>
arxiv
@article{doty2007feasible, title={Feasible Depth}, author={David Doty and Philippe Moser}, journal={arXiv preprint arXiv:cs/0701123}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701123}, primaryClass={cs.CC cs.IT math.IT} }
doty2007feasible
arxiv-675480
cs/0701124
Group Secret Key Generation Algorithms
<|reference_start|>Group Secret Key Generation Algorithms: We consider a pair-wise independent network where every pair of terminals in the network observes a common pair-wise source that is independent of all the sources accessible to the other pairs. We propose a method for secret key agreement in such a network that is based on well-established point-to-point techniques and repeated application of the one-time pad. Three specific problems are investigated. 1) Each terminal's observations are correlated only with the observations of a central terminal. All these terminals wish to generate a common secret key. 2) In a pair-wise independent network, two designated terminals wish to generate a secret key with the help of other terminals. 3) All the terminals in a pair-wise independent network wish to generate a common secret key. A separate protocol for each of these problems is proposed. Furthermore, we show that the protocols for the first two problems are optimal and the protocol for the third problem is efficient, in terms of the resulting secret key rates.<|reference_end|>
arxiv
@article{ye2007group, title={Group Secret Key Generation Algorithms}, author={Chunxuan Ye and Alex Reznik}, journal={arXiv preprint arXiv:cs/0701124}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701124}, primaryClass={cs.IT cs.CR math.IT} }
ye2007group
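The structural idea behind the first protocol in the abstract (a star network where every terminal already shares a pairwise key with the center) reduces to repeated one-time padding: the center picks the group key and publicly announces its pad under each pairwise key. The sketch below shows only this mechanics; the paper's contribution is proving the resulting secret key rates optimal.

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Pairwise secret keys, assumed already established by point-to-point agreement.
pairwise = {t: secrets.token_bytes(16) for t in ('A', 'B', 'C')}

# The center draws the group key and broadcasts, over a public channel, its
# one-time-pad encryption under each terminal's pairwise key.
group_key = secrets.token_bytes(16)
public = {t: xor(group_key, k) for t, k in pairwise.items()}

# Each terminal strips its own pad; an eavesdropper seeing `public` alone
# learns nothing about the group key (one-time pad secrecy).
assert all(xor(public[t], pairwise[t]) == group_key for t in pairwise)
print("all terminals share the group key")
```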
arxiv-675481
cs/0701125
Universal Algorithmic Intelligence: A mathematical top->down approach
<|reference_start|>Universal Algorithmic Intelligence: A mathematical top->down approach: Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameter-free theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline how the AIXI model can formally solve a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXItl that is still effectively more intelligent than any other time t and length l bounded agent. The computation time of AIXItl is of the order t x 2^l. The discussion includes formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.<|reference_end|>
arxiv
@article{hutter2007universal, title={Universal Algorithmic Intelligence: A mathematical top->down approach}, author={Marcus Hutter}, journal={In Artificial General Intelligence, Springer (2007) 227-290}, year={2007}, number={IDSIA-01-03}, archivePrefix={arXiv}, eprint={cs/0701125}, primaryClass={cs.AI cs.LG} }
hutter2007universal
arxiv-675482
cs/0701126
Optimal Throughput-Diversity-Delay Tradeoff in MIMO ARQ Block-Fading Channels
<|reference_start|>Optimal Throughput-Diversity-Delay Tradeoff in MIMO ARQ Block-Fading Channels: In this paper, we consider an automatic-repeat-request (ARQ) retransmission protocol signaling over a block-fading multiple-input, multiple-output (MIMO) channel. Unlike previous work, we allow for multiple fading blocks within each transmission (ARQ round), and we constrain the transmitter to fixed rate codes constructed over complex signal constellations. In particular, we examine the general case of average input-power-constrained constellations as well as the practically important case of finite discrete constellations. This scenario is a suitable model for practical wireless communications systems employing orthogonal frequency division multiplexing techniques over a MIMO ARQ channel. Two cases of fading dynamics are considered, namely short-term static fading where channel fading gains change randomly for each ARQ round, and long-term static fading where channel fading gains remain constant over all ARQ rounds pertaining to a given message. As our main result, we prove that for the block-fading MIMO ARQ channel with discrete input signal constellation satisfying a short-term power constraint, the optimal signal-to-noise ratio (SNR) exponent is given by a modified Singleton bound, relating all the system parameters. To demonstrate the practical significance of the theoretical analysis, we present numerical results showing that practical Singleton-bound-achieving maximum distance separable codes achieve the optimal SNR exponent.<|reference_end|>
arxiv
@article{chuang2007optimal, title={Optimal Throughput-Diversity-Delay Tradeoff in MIMO ARQ Block-Fading Channels}, author={Allen Chuang, Albert Guillen i Fabregas, Lars K. Rasmussen, and Iain B. Collings}, journal={arXiv preprint arXiv:cs/0701126}, year={2007}, doi={10.1109/TIT.2008.928264}, archivePrefix={arXiv}, eprint={cs/0701126}, primaryClass={cs.IT math.IT} }
chuang2007optimal
arxiv-675483
cs/0701127
A novel set of rotationally and translationally invariant features for images based on the non-commutative bispectrum
<|reference_start|>A novel set of rotationally and translationally invariant features for images based on the non-commutative bispectrum: We propose a new set of rotationally and translationally invariant features for image or pattern recognition and classification. The new features are cubic polynomials in the pixel intensities and provide a richer representation of the original image than most existing systems of invariants. Our construction is based on the generalization of the concept of bispectrum to the three-dimensional rotation group SO(3), and a projection of the image onto the sphere.<|reference_end|>
arxiv
@article{kondor2007a, title={A novel set of rotationally and translationally invariant features for images based on the non-commutative bispectrum}, author={Risi Kondor}, journal={arXiv preprint arXiv:cs/0701127}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701127}, primaryClass={cs.CV cs.AI} }
kondor2007a
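For intuition about the invariance claimed above, the classical commutative bispectrum that the paper generalizes to SO(3) is already translation-invariant for 1-D signals. A minimal numpy sketch (a toy illustration of the underlying principle, not the paper's construction):

import numpy as np

def bispectrum(x):
    # Classical 1-D bispectrum: b(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)).
    f = np.fft.fft(x)
    k = np.arange(len(x))
    return f[:, None] * f[None, :] * np.conj(f[(k[:, None] + k[None, :]) % len(x)])

x = np.random.rand(16)
print(np.allclose(bispectrum(x), bispectrum(np.roll(x, 5))))  # True

Invariance holds because the phase factors picked up by F(k1) and F(k2) under a cyclic shift cancel exactly against the conjugated F(k1+k2) term.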
arxiv-675484
cs/0701128
Interference Automata
<|reference_start|>Interference Automata: We propose a computing model, the Two-Way Optical Interference Automata (2OIA), that makes use of the phenomenon of optical interference. We introduce this model to investigate the increase in power, in terms of language recognition, of a classical Deterministic Finite Automaton (DFA) when endowed with the facility of optical interference. The question is in the spirit of Two-Way Finite Automata With Quantum and Classical States (2QCFA) [A. Ambainis and J. Watrous, Two-way Finite Automata With Quantum and Classical States, Theoretical Computer Science, 287 (1), 299-311, (2002)] wherein the classical DFA is augmented with a quantum component of constant size. We test the power of 2OIA against the languages mentioned in the above paper. We give efficient 2OIA algorithms to recognize languages for which 2QCFA machines have been shown to exist, as well as languages whose status vis-a-vis 2QCFA has been posed as an open question. Finally, we show the existence of a language that cannot be recognized by a 2OIA but can be recognized by an $O(n^3)$ space Turing machine.<|reference_end|>
arxiv
@article{rao2007interference, title={Interference Automata}, author={M. V. Panduranga Rao}, journal={arXiv preprint arXiv:cs/0701128}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701128}, primaryClass={cs.CC} }
rao2007interference
arxiv-675485
cs/0701129
Space-time codes with controllable ML decoding complexity for any number of transmit antennas
<|reference_start|>Space-time codes with controllable ML decoding complexity for any number of transmit antennas: We construct a class of linear space-time block codes for any number of transmit antennas that have controllable ML decoding complexity with a maximum rate of 1 symbol per channel use. The decoding complexity for $M$ transmit antennas can be varied from ML decoding of $2^{\lceil \log_2M \rceil -1}$ symbols together to single symbol ML decoding. For ML decoding of $2^{\lceil \log_2M \rceil - n}$ ($n=1,2,...$) symbols together, a diversity of $\min(M,2^{\lceil \log_2M \rceil-n+1})$ can be achieved. Numerical results show that the performance of the constructed code when $2^{\lceil \log_2M \rceil-1}$ symbols are decoded together is quite close to the performance of ideal rate-1 orthogonal codes (that are non-existent for more than 2 transmit antennas).<|reference_end|>
arxiv
@article{sharma2007space-time, title={Space-time codes with controllable ML decoding complexity for any number of transmit antennas}, author={Naresh Sharma and Pavan R. Pinnamraju and Constantinos B. Papadias}, journal={arXiv preprint arXiv:cs/0701129}, year={2007}, doi={10.1109/ISIT.2007.4557660}, archivePrefix={arXiv}, eprint={cs/0701129}, primaryClass={cs.IT math.IT} }
sharma2007space-time
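The complexity/diversity trade-off stated in the abstract can be tabulated directly. A small sketch (the two formulas are copied from the abstract; the clamp at zero is our guard for large n):

import math

def ml_tradeoff(M, n):
    # With M transmit antennas and complexity level n = 1, 2, ...,
    # 2^(ceil(log2 M) - n) symbols are ML-decoded jointly, achieving
    # diversity min(M, 2^(ceil(log2 M) - n + 1)).
    c = math.ceil(math.log2(M))
    jointly_decoded = 2 ** max(c - n, 0)  # clamps at single-symbol decoding
    diversity = min(M, 2 ** (c - n + 1))
    return jointly_decoded, diversity

for n in (1, 2, 3):
    print(n, ml_tradeoff(8, n))  # M = 8 antennas: (4, 8), (2, 4), (1, 2)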
arxiv-675486
cs/0701130
On the Correlation of Geographic and Network Proximity at Internet Edges and its Implications for Mobile Unicast and Multicast Routing
<|reference_start|>On the Correlation of Geographic and Network Proximity at Internet Edges and its Implications for Mobile Unicast and Multicast Routing: Significant effort has been invested recently to accelerate handover operations in a next generation mobile Internet. Corresponding work on developing efficient mobile multicast management is emerging. Both problems simultaneously expose routing complexity between subsequent points of attachment as a characteristic parameter for handover performance in access networks. As continuous mobility handovers necessarily occur between access routers located in geographic vicinity, this paper investigates the hypothesis that geographically adjacent edge networks attain reduced network distances compared to arbitrary Internet nodes. We therefore evaluate and analyze edge distance distributions in various regions for IP ranges clustered by their geographic location, such as a city. We use traceroute to collect the packet forwarding path and the round-trip time of each intermediate node, deriving scan-wise an upper bound of the node distances. Results from different scanning origins are compared to obtain the best estimate of the network distance of each pair. Our results are compared with a corresponding analysis of CAIDA Skitter data, overall leading to fairly stable, reproducible edge distance distributions. As a first conclusion on the expected impact on handover performance measures, our results indicate a general optimum for handover anticipation time in 802.11 networks of 25 ms.<|reference_end|>
arxiv
@article{schmidt2007on, title={On the Correlation of Geographic and Network Proximity at Internet Edges and its Implications for Mobile Unicast and Multicast Routing}, author={Thomas C. Schmidt and Matthias W\"ahlisch and Ying Zhang}, journal={in Proceedings of the IEEE ICN'07, IEEE Computer Society Press, Washington, DC, USA, April 2007}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701130}, primaryClass={cs.NI cs.AR cs.PF} }
schmidt2007on
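The RTT-based distance bounds mentioned above rest on a standard propagation argument: a packet cannot have traveled farther than light in fibre allows within half the round-trip time. A minimal sketch (the fibre propagation constant is a common rule of thumb, assumed here rather than taken from the paper):

def distance_upper_bound_km(rtt_ms):
    # Light in fibre covers roughly 200 km per millisecond (about 2/3 c),
    # so the one-way distance is bounded by (rtt / 2) * 200 km/ms.
    KM_PER_MS_IN_FIBRE = 200.0
    return (rtt_ms / 2.0) * KM_PER_MS_IN_FIBRE

print(distance_upper_bound_km(5.0))  # a 5 ms RTT bounds the hop at 500 km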
arxiv-675487
cs/0701131
Effective Beam Width of Directional Antennas in Wireless Ad Hoc Networks
<|reference_start|>Effective Beam Width of Directional Antennas in Wireless Ad Hoc Networks: It is known at a qualitative level that directional antennas can be used to boost the capacity of wireless ad hoc networks. Lacking is a measure to quantify this advantage and to compare directional antennas of different footprint patterns. This paper introduces the concept of the effective beam width (and the effective null width as its dual counterpart) as a measure which quantitatively captures the capacity-boosting capability of directional antennas. Beam width is commonly defined to be the directional angle spread within which the main-lobe beam power is above a certain threshold. In contrast, our effective beam width definition lumps the effects of the (i) antenna pattern, (ii) active-node distribution, and (iii) channel characteristics, on network capacity into a single quantitative measure. We investigate the mathematical properties of the effective beam width and show how the convenience afforded by these properties can be used to analyze the effectiveness of complex directional antenna patterns in boosting network capacity, with fading and multi-user interference taken into account. In particular, we derive the extent to which network capacity can be scaled with the use of phased array antennas. We show that a phased array antenna with N elements can boost transport capacity of an Aloha-like network by a factor of order N^1.620.<|reference_end|>
arxiv
@article{zhang2007effective, title={Effective Beam Width of Directional Antennas in Wireless Ad Hoc Networks}, author={Jialiang Zhang and Soung Chang Liew}, journal={arXiv preprint arXiv:cs/0701131}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701131}, primaryClass={cs.IT math.IT} }
zhang2007effective
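The headline scaling result of the abstract is straightforward to evaluate. A one-function sketch (the exponent 1.620 is the value stated in the abstract; the constant factor is unspecified there and omitted here):

def transport_capacity_boost(n_elements):
    # Order-of-growth factor by which an N-element phased array boosts the
    # transport capacity of an Aloha-like network, per the abstract.
    return n_elements ** 1.620

for n in (2, 4, 8, 16):
    print(n, round(transport_capacity_boost(n), 1))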
arxiv-675488
cs/0701132
Certifying controls and systems software
<|reference_start|>Certifying controls and systems software: Software system certification presents many challenges, including the necessity of certifying the system at the functional-requirements, code, and binary levels, the need to chase down run-time errors, and the need to prove timing properties of the eventual, compiled system. This paper illustrates possible approaches for certifying code that arises from control systems requirements as far as stability properties are concerned. The relative simplicity of the certification process should encourage the development of systematic procedures for certifying control system codes for more complex environments.<|reference_end|>
arxiv
@article{feron2007certifying, title={Certifying controls and systems software}, author={Eric Feron (School of Aerospace Engineering, Georgia Institute of Technology) and Mardavij Roozbehani (Department of Aeronautics and Astronautics, Massachusetts Institute of Technology)}, journal={arXiv preprint arXiv:cs/0701132}, year={2007}, number={LIDS # 2745}, archivePrefix={arXiv}, eprint={cs/0701132}, primaryClass={cs.SE} }
feron2007certifying
arxiv-675489
cs/0701133
The Case for Redundant Arrays of Internet Links (RAIL)
<|reference_start|>The Case for Redundant Arrays of Internet Links (RAIL): It is well known that wide-area networks today face several performance and reliability problems. In this work, we propose to solve these problems by connecting two or more local-area networks together via a Redundant Array of Internet Links (or RAIL) and by proactively replicating each packet over these links. In that sense, RAIL is for networks what RAID (Redundant Array of Inexpensive Disks) was for disks. In this paper, we describe the RAIL approach, present our prototype (called the RAILedge), and evaluate its performance. First, we demonstrate that using multiple Internet links significantly improves the end-to-end performance in terms of network-level as well as application-level metrics for Voice-over-IP and TCP. Second, we show that a delay padding mechanism is needed to complement RAIL when there is significant delay disparity between the paths. Third, we show that two paths provide most of the benefit, if carefully managed. Finally, we discuss a RAIL-network architecture, where RAILedges make use of path redundancy, route control and application-specific mechanisms, to improve WAN performance.<|reference_end|>
arxiv
@article{markopoulou2007the, title={The Case for Redundant Arrays of Internet Links (RAIL)}, author={Athina Markopoulou and David Cheriton}, journal={arXiv preprint arXiv:cs/0701133}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701133}, primaryClass={cs.NI} }
markopoulou2007the
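RAIL's core mechanism, proactively replicating each packet over two or more access links and suppressing duplicates at the far end, can be sketched in a few lines of Python (the addresses and the 4-byte sequence-number header are illustrative assumptions, not the RAILedge implementation):

import socket
import struct

LINKS = [("192.0.2.1", 5001), ("198.51.100.1", 5001)]  # two hypothetical links

def rail_send(sock, seq, payload):
    # Replicate every packet over all links; the sequence number lets the
    # receiving RAILedge discard the slower copy.
    pkt = struct.pack("!I", seq) + payload
    for addr in LINKS:
        sock.sendto(pkt, addr)

def rail_recv(sock, seen):
    data, _ = sock.recvfrom(2048)
    seq = struct.unpack("!I", data[:4])[0]
    if seq in seen:
        return None        # duplicate arriving on the second link
    seen.add(seq)
    return data[4:]

The abstract's observation that two paths already provide most of the benefit suggests that, in practice, LINKS rarely needs more than two entries.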
arxiv-675490
cs/0701134
Byzantine Fault Tolerance for Nondeterministic Applications
<|reference_start|>Byzantine Fault Tolerance for Nondeterministic Applications: All practical applications contain some degree of nondeterminism. When such applications are replicated to achieve Byzantine fault tolerance (BFT), their nondeterministic operations must be controlled to ensure replica consistency. To the best of our knowledge, only the most simplistic types of replica nondeterminism have been dealt with. Furthermore, a systematic approach to handling common types of nondeterminism has been lacking. In this paper, we propose a classification of common types of replica nondeterminism with respect to the requirement of achieving Byzantine fault tolerance, and describe the design and implementation of the core mechanisms necessary to handle such nondeterminism within a Byzantine fault tolerance framework.<|reference_end|>
arxiv
@article{zhao2007byzantine, title={Byzantine Fault Tolerance for Nondeterministic Applications}, author={Wenbing Zhao}, journal={arXiv preprint arXiv:cs/0701134}, year={2007}, doi={10.1109/DASC.2007.11}, archivePrefix={arXiv}, eprint={cs/0701134}, primaryClass={cs.DC} }
zhao2007byzantine
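One of the simplest mechanisms in this space, in the spirit of what the paper systematizes, is record-and-replay: the primary executes the nondeterministic operation once and propagates the result so that backups replay it. A minimal sketch (class and method names are hypothetical):

import time

class Replica:
    def __init__(self, is_primary):
        self.is_primary = is_primary

    def timestamp(self, proposed=None):
        # The primary draws the nondeterministic value once and piggybacks
        # it on the ordered request; backups replay it instead of re-drawing.
        return time.time() if self.is_primary else proposed

primary, backup = Replica(True), Replica(False)
t = primary.timestamp()
assert backup.timestamp(proposed=t) == t  # replicas stay consistent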
arxiv-675491
cs/0701135
Complex networks and human language
<|reference_start|>Complex networks and human language: This paper describes how human languages can be studied in light of recent developments in network theory. There are two directions of exploration. One is to study networks existing in the language system. Various lexical networks can be built based on different relationships between words, whether semantic or syntactic. Recent studies have shown that these lexical networks exhibit small-world and scale-free features. The other direction of exploration is to study networks of language users (i.e. social networks of people in the linguistic community), and their role in language evolution. Social networks also show small-world and scale-free features, which cannot be captured by random or regular network models. In the past, computational models of language change and language emergence have often assumed a population to have a random or regular structure, and there has been little discussion of how network structures may affect the dynamics. In the second part of the paper, a series of simulation models of the diffusion of linguistic innovation are used to illustrate the importance of choosing realistic conditions of population structure for modeling language change. Four types of social networks are compared, which exhibit two categories of diffusion dynamics. While the question of which type of network is most appropriate for modeling remains open, we give some preliminary suggestions for choosing the type of social network for modeling.<|reference_end|>
arxiv
@article{ke2007complex, title={Complex networks and human language}, author={Jinyun KE}, journal={arXiv preprint arXiv:cs/0701135}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701135}, primaryClass={cs.CL} }
ke2007complex
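A toy version of the diffusion simulations discussed above: an innovation spreads over a network, with each node's adoption probability growing with the adopting fraction of its neighborhood (the threshold, parameters, and use of networkx are illustrative choices, not the paper's models):

import random
import networkx as nx

def diffuse(g, p_adopt=0.3, seeds=2, steps=50):
    adopted = set(random.sample(list(g.nodes), seeds))
    for _ in range(steps):
        for v in g.nodes:
            if v not in adopted:
                k = sum(1 for u in g.neighbors(v) if u in adopted)
                if k and random.random() < p_adopt * k / g.degree(v):
                    adopted.add(v)
    return len(adopted) / g.number_of_nodes()

# Regular ring lattice vs. a small-world rewiring of the same lattice.
ring = nx.watts_strogatz_graph(200, 6, 0.0)
small_world = nx.watts_strogatz_graph(200, 6, 0.1)
print(diffuse(ring), diffuse(small_world))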
arxiv-675492
cs/0701136
Citation Advantage For OA Self-Archiving Is Independent of Journal Impact Factor, Article Age, and Number of Co-Authors
<|reference_start|>Citation Advantage For OA Self-Archiving Is Independent of Journal Impact Factor, Article Age, and Number of Co-Authors: Eysenbach has suggested that the OA (Green) self-archiving advantage might just be an artifact of potential uncontrolled confounding factors such as article age (older articles may be both more cited and more likely to be self-archived), number of authors (articles with more authors might be more cited and more self-archived), subject matter (the subjects that are cited more, self-archive more), country (same thing), citation counts of authors, etc. Chawki Hajjem (doctoral candidate, UQaM) had already shown that the OA advantage was present in all cases when articles were analysed separately by age, subject matter or country. He has now done a multiple regression analysis jointly testing (1) article age, (2) journal impact factor, (3) number of authors, and (4) OA self-archiving as separate factors for 442,750 articles in 576 (biomedical) journals across 11 years, and has shown that each of the four factors contributes an independent, statistically significant increment to the citation counts. The OA self-archiving advantage remains a robust, independent factor. Having successfully responded to his challenge, we now challenge Eysenbach to demonstrate -- by testing a sufficiently broad and representative sample of journals at all levels of the journal quality, visibility and prestige hierarchy -- that his finding of a citation advantage for Gold OA (articles published OA on the high-profile website of the only journal he tested, PNAS) over Green OA articles in the same journal (self-archived on the author's website) was not just an artifact of having tested only one very high-profile journal.<|reference_end|>
arxiv
@article{hajjem2007citation, title={Citation Advantage For OA Self-Archiving Is Independent of Journal Impact Factor, Article Age, and Number of Co-Authors}, author={Chawki Hajjem and Stevan Harnad}, journal={arXiv preprint arXiv:cs/0701136}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701136}, primaryClass={cs.IR cs.DL} }
hajjem2007citation
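The four-factor regression design described in the abstract is easy to reproduce on synthetic data. A sketch with statsmodels (the data generation is entirely invented; only the model structure mirrors the four factors named above):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(1, 12, n)          # article age in years
impact = rng.lognormal(0.5, 0.5, n)   # journal impact factor
authors = rng.integers(1, 10, n)      # number of co-authors
oa = rng.integers(0, 2, n)            # 1 if self-archived (OA)
# Synthetic citation counts with an independent OA increment built in.
cites = 1.5 * age + 2.0 * impact + 0.5 * authors + 3.0 * oa + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([age, impact, authors, oa]))
print(sm.OLS(cites, X).fit().summary())  # each factor comes out significant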
arxiv-675493
cs/0701137
The Open Access Citation Advantage: Quality Advantage Or Quality Bias?
<|reference_start|>The Open Access Citation Advantage: Quality Advantage Or Quality Bias?: Many studies have now reported the positive correlation between Open Access (OA) self-archiving and citation counts ("OA Advantage," OAA). But does this OAA occur because authors are more likely to self-selectively self-archive articles that are more likely to be cited (self-selection "Quality Bias," QB), or because articles that are self-archived are more likely to be cited ("Quality Advantage," QA)? The probable answer is both. Three studies [by (i) Kurtz and co-workers in astrophysics, (ii) Moed in condensed matter physics, and (iii) Davis & Fromerth in mathematics] had reported the OAA to be due to QB [plus Early Advantage, EA, from self-archiving the preprint before publication, in (i) and (ii)] rather than QA. These three fields, however, (1) have less of a postprint access problem than most other fields, and fields (i) and (ii) also happen to be among the minority of fields that (2) make heavy use of prepublication preprints. Chawki Hajjem has now analyzed preliminary evidence based on over 100,000 articles from multiple fields, comparing self-selected self-archiving with mandated self-archiving to estimate the contributions of QB and QA to the OAA. Both factors contribute, and the contribution of QA is greater.<|reference_end|>
arxiv
@article{hajjem2007the, title={The Open Access Citation Advantage: Quality Advantage Or Quality Bias?}, author={Chawki Hajjem and Stevan Harnad}, journal={arXiv preprint arXiv:cs/0701137}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701137}, primaryClass={cs.IR cs.DL} }
hajjem2007the
arxiv-675494
cs/0701138
Real-Time Model-Checking: Parameters everywhere
<|reference_start|>Real-Time Model-Checking: Parameters everywhere: In this paper, we study the model-checking and parameter synthesis problems of the logic TCTL over discrete-timed automata where parameters are allowed both in the model (timed automaton) and in the property (temporal formula). Our results are as follows. On the negative side, we show that the model-checking problem of TCTL extended with parameters is undecidable over discrete-timed automata with only one parametric clock. The undecidability result needs equality in the logic. On the positive side, we show that the model-checking and the parameter synthesis problems become decidable for a fragment of the logic where equality is not allowed. Our method is based on automata-theoretic principles and on an extension that expresses durations of runs in timed automata using Presburger arithmetic.<|reference_end|>
arxiv
@article{bruyere2007real-time, title={Real-Time Model-Checking: Parameters everywhere}, author={Veronique Bruyere and Jean-Francois Raskin}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February 27, 2007) lmcs:2229}, year={2007}, doi={10.2168/LMCS-3(1:7)2007}, archivePrefix={arXiv}, eprint={cs/0701138}, primaryClass={cs.LO} }
bruyere2007real-time
arxiv-675495
cs/0701139
Time and the Prisoner's Dilemma
<|reference_start|>Time and the Prisoner's Dilemma: This paper examines the integration of computational complexity into game theoretic models. The example focused on is the Prisoner's Dilemma, repeated for a finite length of time. We show that a minimal bound on the players' computational ability is sufficient to enable cooperative behavior. In addition, a variant of the repeated Prisoner's Dilemma game is suggested, in which players have the choice of opting out. This modification enriches the game and suggests dominance of cooperative strategies. Competitive analysis is suggested as a tool for investigating sub-optimal (but computationally tractable) strategies and game theoretic models in general. Using competitive analysis, it is shown that for bounded players, a sub-optimal strategy might be the optimal choice, given resource limitations.<|reference_end|>
arxiv
@article{mor2007time, title={Time and the Prisoner's Dilemma}, author={Yishay Mor and Jeffrey S. Rosenschein}, journal={arXiv preprint arXiv:cs/0701139}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701139}, primaryClass={cs.GT cs.AI} }
mor2007time
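For experimentation, a minimal finitely repeated Prisoner's Dilemma with the conventional payoff matrix (the payoffs and the two strategies are textbook choices, not taken from the paper):

# Row player's and column player's payoffs for each action pair (C/D).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return history[-1][1] if history else "C"  # copy opponent's last move

def always_defect(history):
    return "D"

def play(s1, s2, rounds=20):
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h1), s2(h2)
        r1, r2 = PAYOFF[(a, b)]
        p1, p2 = p1 + r1, p2 + r2
        h1.append((a, b))
        h2.append((b, a))
    return p1, p2

print(play(tit_for_tat, always_defect))  # (19, 24): exploited once, then mutual defection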
arxiv-675496
cs/0701140
Predicate Abstraction with Under-approximation Refinement
<|reference_start|>Predicate Abstraction with Under-approximation Refinement: We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs.<|reference_end|>
arxiv
@article{pasareanu2007predicate, title={Predicate Abstraction with Under-approximation Refinement}, author={Corina S. Pasareanu and Radek Pelanek and Willem Visser}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February 26, 2007) lmcs:2227}, year={2007}, doi={10.2168/LMCS-3(1:5)2007}, archivePrefix={arXiv}, eprint={cs/0701140}, primaryClass={cs.GT} }
pasareanu2007predicate
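The exploration loop described above, executing concrete transitions while storing only abstract versions of the visited states, can be sketched as follows (the toy system, predicates, and error condition are stand-ins, not the paper's benchmarks):

def abstract_of(state, predicates):
    # A concrete state is remembered only as its predicate truth vector.
    return tuple(p(state) for p in predicates)

def explore(init, successors, predicates, is_error):
    # Deduplicate by abstraction: any error found is feasible by
    # construction, since only concrete transitions are executed.
    visited = {abstract_of(init, predicates)}
    frontier = [init]
    while frontier:
        s = frontier.pop()
        if is_error(s):
            return s            # genuine counterexample
        for t in successors(s):
            a = abstract_of(t, predicates)
            if a not in visited:
                visited.add(a)
                frontier.append(t)
    return None                 # no error found at this precision

# Toy system: an integer counter with +1/-1 transitions.
preds = [lambda s: s > 0, lambda s: s > 10]
print(explore(0, lambda s: [s + 1, s - 1], preds, lambda s: s == 12))

With these two coarse predicates the search stops after two abstract states and misses the error at 12; detecting such precision loss and generating new predicates is exactly the refinement step the paper automates with a theorem prover.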
arxiv-675497
cs/0701141
The Fundamental Theorems of Interval Analysis
<|reference_start|>The Fundamental Theorems of Interval Analysis: Expressions are not functions. Confusing the two concepts or failing to define the function that is computed by an expression weakens the rigour of interval arithmetic. We give such a definition and continue with the required re-statements and proofs of the fundamental theorems of interval arithmetic and interval analysis. Revision Feb. 10, 2009: added reference to and acknowledgement of P. Taylor.<|reference_end|>
arxiv
@article{vanemden2007the, title={The Fundamental Theorems of Interval Analysis}, author={M.H. van Emden and B. Moa}, journal={arXiv preprint arXiv:cs/0701141}, year={2007}, number={DCS-316-IR}, archivePrefix={arXiv}, eprint={cs/0701141}, primaryClass={cs.NA} }
vanemden2007the
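The paper's thesis, that expressions are not functions, is visible in the very first interval computations one tries. A minimal interval class (a sketch for illustration, not the paper's formalism):

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Interval subtraction cannot know that both operands might denote
        # the same variable, so it over-approximates.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
# The expression "x - x" evaluates to [-1, 1], although the function
# f(x) = x - x is identically zero: the expression is not the function.
print(x - x)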
arxiv-675498
cs/0701142
Knowledge State Algorithms: Randomization with Limited Information
<|reference_start|>Knowledge State Algorithms: Randomization with Limited Information: We introduce the concept of knowledge states; many well-known algorithms can be viewed as knowledge state algorithms. The knowledge state approach can be used to construct competitive randomized online algorithms and study the tradeoff between competitiveness and memory. A knowledge state simply states conditional obligations of an adversary, by fixing a work function, and gives a distribution for the algorithm. When a knowledge state algorithm receives a request, it then calculates one or more "subsequent" knowledge states, together with a probability of transition to each. The algorithm then uses randomization to select one of those subsequents to be the new knowledge state. We apply the method to the paging problem. We present optimally competitive algorithms for paging for the cases where the cache sizes are k=2 and k=3. These algorithms use only a very limited number of bookmarks.<|reference_end|>
arxiv
@article{bein2007knowledge, title={Knowledge State Algorithms: Randomization with Limited Information}, author={Wolfgang Bein and Lawrence L. Larmore and R\"udiger Reischuk}, journal={arXiv preprint arXiv:cs/0701142}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701142}, primaryClass={cs.DS} }
bein2007knowledge
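The generic step of a knowledge state algorithm, as described in the abstract, computes candidate subsequent knowledge states with transition probabilities and randomizes among them. A skeletal sketch (the transition function is a hypothetical stand-in for the algorithm-specific computation):

import random

def knowledge_state_step(state, request, transition):
    # 'transition' maps (knowledge state, request) to candidate subsequent
    # knowledge states and their probabilities; picking one at random is
    # where the algorithm's limited randomization enters.
    subsequents, probs = transition(state, request)
    return random.choices(subsequents, weights=probs, k=1)[0]

# Degenerate demo transition with a single deterministic subsequent.
print(knowledge_state_step("K0", "page_7", lambda s, r: ([s + "|" + r], [1.0])))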
arxiv-675499
cs/0701143
Dirac Notation, Fock Space and Riemann Metric Tensor in Information Retrieval Models
<|reference_start|>Dirac Notation, Fock Space and Riemann Metric Tensor in Information Retrieval Models: Using Dirac notation as a powerful tool, we investigate the three classical Information Retrieval (IR) models and some of their extensions. We show that almost all such models can be described by vectors in Occupation Number Representations (ONR) of Fock spaces with various specifications on, e.g., occupation number, inner product or term-term interactions. As important case studies, the Concept Fock Space (CFS) is introduced for the Boolean model; the basic formulas for the Singular Value Decomposition (SVD) of the Latent Semantic Indexing (LSI) model are manipulated in terms of Dirac notation. And, based on SVD, a Riemannian metric tensor is introduced, which not only can be used to calculate the relevance of documents to a query, but also may be used to measure the closeness of documents in data clustering.<|reference_end|>
arxiv
@article{wang2007dirac, title={Dirac Notation, Fock Space and Riemann Metric Tensor in Information Retrieval Models}, author={Xing M. Wang}, journal={arXiv preprint arXiv:cs/0701143}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701143}, primaryClass={cs.IR math-ph math.MP} }
wang2007dirac
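In its simplest vector-space reading, the relevance computation sketched above reduces to inner products of term-frequency vectors, <q|d> in bra-ket notation. A small numpy sketch (the documents and query are invented for illustration):

import numpy as np

# Rows are documents, columns are terms ("occupation numbers" = term counts).
D = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 0.0]])
q = np.array([1.0, 0.0, 1.0])  # query vector |q>

# Relevance of each document as the cosine-normalized inner product <q|d>.
scores = D @ q / (np.linalg.norm(D, axis=1) * np.linalg.norm(q))
print(scores.argsort()[::-1])  # document ranking, best match first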
arxiv-675500
cs/0701144
Trusted Ticket Systems and Applications
<|reference_start|>Trusted Ticket Systems and Applications: Trusted Computing is a foundational security technology that may well be ubiquitous in a few years in personal computers and mobile devices alike. Despite its neutrality with respect to applications, it has raised some privacy concerns. We show that trusted computing can be applied for service access control in a manner protecting users' privacy. We construct a ticket system -- a concept which is at the heart of Identity Management -- relying solely on the capabilities of the trusted platform module and the standards specified by the Trusted Computing Group. Two examples show how it can be used for pseudonymous and protected service access.<|reference_end|>
arxiv
@article{kuntze2007trusted, title={Trusted Ticket Systems and Applications}, author={Nicolai Kuntze and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0701144}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701144}, primaryClass={cs.CR} }
kuntze2007trusted