corpus_id       stringlengths   7 – 12
paper_id        stringlengths   9 – 16
title           stringlengths   1 – 261
abstract        stringlengths   70 – 4.02k
source          stringclasses   1 value
bibtex          stringlengths   208 – 20.9k
citation_key    stringlengths   6 – 100
arxiv-673601
cs/0512027
The Physical Foundation of Human Mind and a New Theory of Investment
<|reference_start|>The Physical Foundation of Human Mind and a New Theory of Investment: This paper consists of two parts. In the first part, we develop a new information theory in which it is no coincidence that information and physical entropy share the same mathematical formula: information is an adaptation of the mind that aids the search for resources. We then show that psychological patterns either reflect the constraints of physical laws or are evolutionary adaptations to process information efficiently and to increase the chance of survival in the environment of our evolutionary past. In the second part, we demonstrate that the new information theory provides the foundation for understanding market behavior. One fundamental result of the information theory is that information is costly; in general, information of higher value is more costly. Another fundamental result is that the amount of information one can receive is the amount of information generated minus the equivocation. The level of equivocation, which is the measure of information asymmetry, is determined by the correlation between the source of information and the receiver of information. In general, how much information one can receive depends on the background knowledge of the receiver. Differences in the cost investors are willing to pay for information and differences in background knowledge about a particular piece of information cause the heterogeneity in information processing by the investing public, which is the main reason for the price and volume patterns observed in the market. Many assumptions in some of the recent models in behavioral finance can be derived naturally from this theory.<|reference_end|>
arxiv
@article{chen2005the, title={The Physical Foundation of Human Mind and a New Theory of Investment}, author={Jing Chen}, journal={arXiv preprint arXiv:cs/0512027}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512027}, primaryClass={cs.IT math.IT} }
chen2005the
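A note on the equivocation claim in the abstract above: read in standard Shannon terms (the paper's own formalism may differ in its details), it is the classical identity

    I(X;Y) = H(X) - H(X \mid Y),

where $H(X)$ is the information generated at the source and the equivocation $H(X\mid Y)$ is the receiver's residual uncertainty about $X$ given the received signal $Y$. A receiver whose background knowledge correlates strongly with the source has low equivocation and so recovers more of $H(X)$.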
arxiv-673602
cs/0512028
Approximately universal optimality over several dynamic and non-dynamic cooperative diversity schemes for wireless networks
<|reference_start|>Approximately universal optimality over several dynamic and non-dynamic cooperative diversity schemes for wireless networks: In this work we explicitly provide the first cooperative diversity schemes for wireless relay networks that are optimal with respect to the Zheng-Tse diversity-multiplexing gain (D-MG) tradeoff. The schemes are based on variants of perfect space-time codes and are optimal for any number of users and all statistically symmetric (and, in some cases, asymmetric) fading distributions. We deduce that, with respect to the D-MG tradeoff, channel knowledge at the intermediate relays and infinite delay are unnecessary. We also show that the non-dynamic selection decode-and-forward, non-dynamic amplify-and-forward, non-dynamic receive-and-forward, dynamic amplify-and-forward and dynamic receive-and-forward cooperative diversity strategies all allow for exactly the same D-MG optimal performance.<|reference_end|>
arxiv
@article{elia2005approximately, title={Approximately universal optimality over several dynamic and non-dynamic cooperative diversity schemes for wireless networks}, author={Petros Elia and P. Vijay Kumar}, journal={arXiv preprint arXiv:cs/0512028}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512028}, primaryClass={cs.IT math.IT} }
elia2005approximately
arxiv-673603
cs/0512029
New model for rigorous analysis of LT-codes
<|reference_start|>New model for rigorous analysis of LT-codes: We present a new model for LT codes which simplifies the analysis of the error probability of decoding by belief propagation. For any given degree distribution, we provide the first rigorous expression for the limiting error probability as the length of the code goes to infinity, via recent results in random hypergraphs [Darling-Norris 2005]. For a code of finite length, we provide an algorithm for computing the probability of error of the decoder. This algorithm improves on that of [Karp-Luby-Shokrollahi 2004] by a linear factor.<|reference_end|>
arxiv
@article{maneva2005new, title={New model for rigorous analysis of LT-codes}, author={Elitza N. Maneva and Amin Shokrollahi}, journal={arXiv preprint arXiv:cs/0512029}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512029}, primaryClass={cs.IT math.IT} }
maneva2005new
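The belief-propagation decoder analyzed above reduces, for LT decoding on the erasure channel, to the well-known peeling procedure: repeatedly resolve a check of residual degree one and substitute the recovered source symbol everywhere. A minimal generic sketch in Python (illustrative only, not code from the paper; the check representation is an assumption):

    def lt_peel(checks, k):
        """Peeling (belief-propagation) decoder for LT codes over GF(2).

        checks: list of (neighbors, value) pairs, where neighbors is a set
        of source-symbol indices and value is the XOR of those symbols.
        Returns the k recovered source symbols, or None if decoding stalls."""
        checks = [(set(nbrs), val) for nbrs, val in checks]
        symbols = [None] * k
        ripple = [i for i, (nbrs, _) in enumerate(checks) if len(nbrs) == 1]
        while ripple:
            nbrs, val = checks[ripple.pop()]
            if len(nbrs) != 1:
                continue                  # this check was consumed already
            (s,) = nbrs
            symbols[s] = val              # a degree-1 check reveals symbol s
            for j, (n2, v2) in enumerate(checks):
                if s in n2:               # substitute s into every check
                    n2.discard(s)
                    checks[j] = (n2, v2 ^ val)
                    if len(n2) == 1:
                        ripple.append(j)
        return symbols if None not in symbols else None

    # Two source bits (1, 0) and checks x0 = 1, x0 XOR x1 = 1:
    # lt_peel([({0}, 1), ({0, 1}, 1)], 2) -> [1, 0]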
arxiv-673604
cs/0512030
Uncertainty Principles for Signal Concentrations
<|reference_start|>Uncertainty Principles for Signal Concentrations: Uncertainty principles for the concentration of signals into truncated subspaces are considered. The ``classic'' uncertainty principle is explored as a special case of a more general operator framework, and the time-bandwidth concentration problem is shown to be a similar special case. A spatial concentration of radio signals example is provided, and it is shown that an uncertainty principle exists for the concentration of single-frequency signals on regions in space. We show that the uncertainty is related to the volumes of the spatial regions.<|reference_end|>
arxiv
@article{somaraju2005uncertainty, title={Uncertainty Principles for Signal Concentrations}, author={Ram Somaraju, Leif W. Hanlen}, journal={arXiv preprint arXiv:cs/0512030}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512030}, primaryClass={cs.IT math.IT} }
somaraju2005uncertainty
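For reference, the ``classic'' principle that the abstract above treats as a special case is, in one standard normalization (stated here as background, not as the paper's operator framework):

    \sigma_t \, \sigma_\omega \ge \frac{1}{2}, \qquad
    \sigma_t^2 = \int t^2 \, |f(t)|^2 \, dt, \quad
    \sigma_\omega^2 = \frac{1}{2\pi} \int \omega^2 \, |\hat f(\omega)|^2 \, d\omega,

for a unit-energy signal $f$ centered at the origin in both time and frequency. The time-bandwidth concentration problem replaces these second moments with energy fractions captured by truncation operators.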
arxiv-673605
cs/0512031
Alternating Timed Automata
<|reference_start|>Alternating Timed Automata: A notion of alternating timed automata is proposed. It is shown that such automata with only one clock have a decidable emptiness problem over finite words. This gives a new class of timed languages which is closed under boolean operations and which has an effective presentation. We prove that the complexity of the emptiness problem for alternating timed automata with one clock is non-primitive recursive. The proof also gives the same lower bound for the universality problem for nondeterministic timed automata with one clock. We investigate an extension of the model with epsilon-transitions and prove that emptiness is undecidable. Over infinite words, we show undecidability of the universality problem.<|reference_end|>
arxiv
@article{lasota2005alternating, title={Alternating Timed Automata}, author={Slawomir Lasota and Igor Walukiewicz}, journal={arXiv preprint arXiv:cs/0512031}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512031}, primaryClass={cs.LO} }
lasota2005alternating
arxiv-673606
cs/0512032
A Software Framework for Vehicle-Infrastructure Cooperative Applications
<|reference_start|>A Software Framework for Vehicle-Infrastructure Cooperative Applications: A growing category of vehicle-infrastructure cooperative (VIC) applications requires telematics software components distributed between an infrastructure-based management center and a number of vehicles. This article presents an approach based on a software framework, focusing on a Telematic Management System (TMS), a component suite designed to run inside an infrastructure-based operations center and, in some cases, to interact with legacy systems such as Advanced Traffic Management Systems or Vehicle Relationship Management. The TMS framework provides support for modular, flexible prototyping and implementation of VIC applications. This work has received the support of the European Commission in the context of the projects REACT and CyberCars.<|reference_end|>
arxiv
@article{bengochea2005a, title={A Software Framework for Vehicle-Infrastructure Cooperative Applications}, author={Sebastián Bengochea (INRIA Rocquencourt), Angel Talamona (INRIA Rocquencourt), Michel Parent (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:cs/0512032}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512032}, primaryClass={cs.IR} }
bengochea2005a
arxiv-673607
cs/0512033
Bootstrapping the Long Tail in Peer to Peer Systems
<|reference_start|>Bootstrapping the Long Tail in Peer to Peer Systems: We describe an efficient incentive mechanism for P2P systems that generates a wide diversity of content offerings while responding adaptively to customer demand. Files are served and paid for through a parimutuel market similar to that commonly used for betting in horse races. An analysis of the performance of such a system shows that there exists an equilibrium with a long tail in the distribution of content offerings, which guarantees the real time provision of any content regardless of its popularity.<|reference_end|>
arxiv
@article{huberman2005bootstrapping, title={Bootstrapping the Long Tail in Peer to Peer Systems}, author={Bernardo A. Huberman and Fang Wu}, journal={arXiv preprint arXiv:cs/0512033}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512033}, primaryClass={cs.NI cs.CY physics.soc-ph} }
huberman2005bootstrapping
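The parimutuel mechanism at the heart of the abstract above is simple enough to state in code: all stakes are pooled, and the backers of the realized outcome split the pool pro rata. A minimal generic sketch (the function name and data layout are illustrative assumptions, not the paper's API):

    def parimutuel_payout(stakes, winner, rake=0.0):
        """Split the betting pool among backers of the winning outcome.

        stakes: dict mapping outcome -> {bettor: amount staked}.
        Returns a dict mapping each winning bettor to a payout."""
        pool = sum(a for bets in stakes.values() for a in bets.values())
        pool *= 1.0 - rake                      # operator's cut, if any
        winning_bets = stakes[winner]
        on_winner = sum(winning_bets.values())
        return {b: pool * a / on_winner for b, a in winning_bets.items()}

    # parimutuel_payout({"A": {"x": 10, "y": 30}, "B": {"z": 60}}, "A")
    # -> {"x": 25.0, "y": 75.0}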
arxiv-673608
cs/0512034
Ensuring Trust in One Time Exchanges: Solving the QoS Problem
<|reference_start|>Ensuring Trust in One Time Exchanges: Solving the QoS Problem: We describe a pricing structure for the provision of IT services that ensures trust without requiring repeated interactions between service providers and users. It does so by offering a pricing structure that elicits truthful reporting of quality of service (QoS) by providers while making them profitable. This mechanism also induces truth-telling on the part of users reserving the service.<|reference_end|>
arxiv
@article{huberman2005ensuring, title={Ensuring Trust in One Time Exchanges: Solving the QoS Problem}, author={Bernardo A. Huberman, Fang Wu and Li Zhang}, journal={arXiv preprint arXiv:cs/0512034}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512034}, primaryClass={cs.GT physics.soc-ph} }
huberman2005ensuring
arxiv-673609
cs/0512035
Semidefinite programming and arithmetic circuit evaluation
<|reference_start|>Semidefinite programming and arithmetic circuit evaluation: A rational number can be naturally presented by an arithmetic computation (AC): a sequence of elementary arithmetic operations starting from a fixed constant, say 1. The asymptotic complexity issues of such a representation are studied, e.g., in the framework of algebraic complexity theory over an arbitrary field. Here we study a related problem: the complexity of performing arithmetic operations and computing elementary predicates, e.g. ``='' or ``>'', on rational numbers given by an AC. First, we prove that an AC can be efficiently simulated by exact semidefinite programming (SDP). Second, we give a BPP-algorithm for the equality predicate. Third, we put the ``>''-predicate into the complexity class PSPACE. We conjecture that the ``>''-predicate is hard to compute. This conjecture, if true, would clarify the complexity status of exact SDP - a well-known open problem in the field of mathematical programming.<|reference_end|>
arxiv
@article{tarasov2005semidefinite, title={Semidefinite programming and arithmetic circuit evaluation}, author={Sergey P. Tarasov, Mikhail N. Vyalyi}, journal={arXiv preprint arXiv:cs/0512035}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512035}, primaryClass={cs.CC} }
tarasov2005semidefinite
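To make the representation above concrete, here is a minimal sketch of evaluating an arithmetic computation exactly (illustrative only; the triple-based program format is an assumption). Repeated squaring shows why the represented values, and hence comparisons on them, are nontrivial:

    from fractions import Fraction

    def eval_ac(program):
        """Evaluate an arithmetic computation (AC): a straight-line program
        whose register 0 holds the constant 1 and whose each step applies
        one elementary operation to two earlier registers."""
        regs = [Fraction(1)]
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        for op, i, j in program:
            regs.append(ops[op](regs[i], regs[j]))
        return regs[-1]

    # Repeated squaring: 1+1 = 2, 2*2 = 4, 4*4 = 16, ...; a length-n program
    # can denote numbers of magnitude 2^(2^(n-1)), which is why "=" and ">"
    # on AC-represented rationals are computationally nontrivial.
    # eval_ac([('+', 0, 0), ('*', 1, 1), ('*', 2, 2)]) == Fraction(16)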
arxiv-673610
cs/0512036
A System of Interaction and Structure II: The Need for Deep Inference
<|reference_start|>A System of Interaction and Structure II: The Need for Deep Inference: This paper studies properties of the logic BV, which is an extension of multiplicative linear logic (MLL) with a self-dual non-commutative operator. BV is presented in the calculus of structures, a proof-theoretic formalism that supports deep inference, in which inference rules can be applied anywhere inside logical expressions. The use of deep inference results in a simple logical system for MLL extended with the self-dual non-commutative operator, which to date is not known to be expressible in the sequent calculus. In this paper, deep inference is shown to be crucial for the logic BV: any restriction on the ``depth'' of the inference rules of BV would result in a strictly less expressive logical system.<|reference_end|>
arxiv
@article{tiu2005a, title={A System of Interaction and Structure II: The Need for Deep Inference}, author={Alwen Tiu}, journal={Logical Methods in Computer Science, Volume 2, Issue 2 (April 3, 2006) lmcs:2252}, year={2005}, doi={10.2168/LMCS-2(2:4)2006}, archivePrefix={arXiv}, eprint={cs/0512036}, primaryClass={cs.LO} }
tiu2005a
arxiv-673611
cs/0512037
Evolving Stochastic Learning Algorithm Based on Tsallis Entropic Index
<|reference_start|>Evolving Stochastic Learning Algorithm Based on Tsallis Entropic Index: In this paper, inspired by our previous algorithm based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than with the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.<|reference_end|>
arxiv
@article{anastasiadis2005evolving, title={Evolving Stochastic Learning Algorithm Based on Tsallis Entropic Index}, author={Aristoklis D. Anastasiadis, and George D. Magoulas}, journal={arXiv preprint arXiv:cs/0512037}, year={2005}, doi={10.1140/epjb/e2006-00137-6}, archivePrefix={arXiv}, eprint={cs/0512037}, primaryClass={cs.NE cs.AI} }
anastasiadis2005evolving
arxiv-673612
cs/0512038
Capacity of Differential versus Non-Differential Unitary Space-Time Modulation for MIMO channels
<|reference_start|>Capacity of Differential versus Non-Differential Unitary Space-Time Modulation for MIMO channels: Differential Unitary Space-Time Modulation (DUSTM) and its earlier nondifferential counterpart, USTM, permit high-throughput MIMO communication entirely without the possession of channel state information (CSI) by either the transmitter or the receiver. For an isotropically random unitary input we obtain the exact closed-form expression for the probability density of the DUSTM received signal, which permits the straightforward Monte Carlo evaluation of its mutual information. We compare the performance of DUSTM and USTM through both numerical computations of mutual information and the analysis of low- and high-SNR asymptotic expressions. In our comparisons the symbol durations of the equivalent unitary space-time signals are both equal to T, as are the numbers of receive antennas N. For DUSTM the number of transmit antennas is constrained by the scheme to be M = T/2, while USTM has no such constraint. If DUSTM and USTM utilize the same number of transmit antennas, then at high SNRs the normalized mutual informations of the differential and the nondifferential schemes expressed in bits/sec/Hz are asymptotically equal, with the differential scheme performing somewhat better, while at low SNRs the normalized mutual information of DUSTM is asymptotically twice that of USTM. If, instead, USTM utilizes the optimum number of transmit antennas, then USTM can outperform DUSTM at sufficiently low SNRs.<|reference_end|>
arxiv
@article{moustakas2005capacity, title={Capacity of Differential versus Non-Differential Unitary Space-Time Modulation for MIMO channels}, author={Aris L. Moustakas, Steven H. Simon and Thomas L. Marzetta}, journal={arXiv preprint arXiv:cs/0512038}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512038}, primaryClass={cs.IT cond-mat.stat-mech math-ph math.IT math.MP} }
moustakas2005capacity
arxiv-673613
cs/0512039
An algorithm for the k-error linear complexity of a sequence with period $2p^n$ over GF(q)
<|reference_start|>An algorithm for the k-error linear complexity of a sequence with period $2p^n$ over GF(q): The union cost is used to present an efficient algorithm for computing the k-error linear complexity of a sequence with period $2p^n$ over GF(q), where p and q are odd primes, and q is a primitive root modulo $p^2$.<|reference_end|>
arxiv
@article{zhou2005an, title={An algorithm for the k-error linear complexity of a sequence with period $2p^n$ over GF(q)}, author={Jianqin Zhou, Xirong Xu}, journal={arXiv preprint arXiv:cs/0512039}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512039}, primaryClass={cs.CR} }
zhou2005an
arxiv-673614
cs/0512040
A fast algorithm for determining the linear complexity of periodic sequences
<|reference_start|>A fast algorithm for determining the linear complexity of periodic sequences: A fast algorithm is presented for determining the linear complexity and the minimal polynomial of periodic sequences over GF(q) with period $q^n p^m$, where p is a prime and q is a prime that is a primitive root modulo $p^2$. The algorithm presented here generalizes both the algorithm in [4], where the period of a sequence over GF(q) is $p^m$, and the algorithm in [5], where the period of a binary sequence is $2^n p^m$. When m=0, the algorithm simplifies to the generalized Games-Chan algorithm.<|reference_end|>
arxiv
@article{zhou2005a, title={A fast algorithm for determining the linear complexity of periodic sequences}, author={Jianqin Zhou}, journal={arXiv preprint arXiv:cs/0512040}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512040}, primaryClass={cs.CR} }
zhou2005a
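For context on the two records above: the classical Games-Chan algorithm, which the generalized versions cited there extend, computes the linear complexity of a binary sequence of period $2^n$ by repeated halving. A standard textbook sketch (not the paper's generalized algorithm):

    def games_chan(s):
        """Linear complexity of a binary sequence with period 2^n
        (Games-Chan).  s: one full period as a list of 0/1 bits."""
        assert len(s) & (len(s) - 1) == 0, "period must be a power of two"
        c = 0
        while len(s) > 1:
            half = len(s) // 2
            left, right = s[:half], s[half:]
            b = [l ^ r for l, r in zip(left, right)]
            if any(b):
                c += half   # complexity gains the half-length
                s = b
            else:
                s = left    # the halves agree: period halves, recurse
        return c + s[0]     # constant 1 has complexity 1, constant 0 has 0

    # games_chan([1, 0, 1, 0]) -> 2 (the alternating sequence needs a
    # length-2 LFSR: s_t = s_{t-2})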
arxiv-673615
cs/0512041
Generalized partially bent functions
<|reference_start|>Generalized partially bent functions: Based on the definition of generalized partially bent functions and using the theory of linear transformations, the relationship among generalized partially bent functions over the ring $Z_N$, generalized bent functions over the ring $Z_N$ and affine functions is discussed. When N is a prime number, it is proved that a generalized partially bent function can be decomposed as the sum of a generalized bent function and an affine function. The result obtained here generalizes the main works concerning partially bent functions by Claude Carlet in [1].<|reference_end|>
arxiv
@article{zhou2005generalized, title={Generalized partially bent functions}, author={Jianqin Zhou}, journal={arXiv preprint arXiv:cs/0512041}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512041}, primaryClass={cs.CR} }
zhou2005generalized
arxiv-673616
cs/0512042
Points on Computable Curves
<|reference_start|>Points on Computable Curves: The ``analyst's traveling salesman theorem'' of geometric measure theory characterizes those subsets of Euclidean space that are contained in curves of finite length. This result, proven for the plane by Jones (1990) and extended to higher-dimensional Euclidean spaces by Okikiolu (1991), says that a bounded set $K$ is contained in some curve of finite length if and only if a certain ``square beta sum'', involving the ``width of $K$'' in each element of an infinite system of overlapping ``tiles'' of descending size, is finite. In this paper we characterize those {\it points} of Euclidean space that lie on {\it computable} curves of finite length by formulating and proving a computable extension of the analyst's traveling salesman theorem. Our extension says that a point in Euclidean space lies on some computable curve of finite length if and only if it is ``permitted'' by some computable ``Jones constriction''. A Jones constriction here is an explicit assignment of a rational cylinder to each of the above-mentioned tiles in such a way that, when the radius of the cylinder corresponding to a tile is used in place of the ``width of $K$'' in each tile, the square beta sum is finite. A point is permitted by a Jones constriction if it is contained in the cylinder assigned to each tile containing the point. The main part of our proof is the construction of a computable curve of finite length traversing all the points permitted by a given Jones constriction. Our construction uses the main ideas of Jones's ``farthest insertion'' construction, but our algorithm for computing the curve must work exclusively with the Jones constriction itself, because it has no direct access to the (typically uncomputable) points permitted by the Jones constriction.<|reference_end|>
arxiv
@article{gu2005points, title={Points on Computable Curves}, author={Xiaoyang Gu, Jack H. Lutz, Elvira Mayordomo}, journal={arXiv preprint arXiv:cs/0512042}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512042}, primaryClass={cs.CC cs.CG} }
gu2005points
arxiv-673617
cs/0512043
Random Walks with Anti-Correlated Steps
<|reference_start|>Random Walks with Anti-Correlated Steps: We conjecture that the expected value of random walks with anti-correlated steps is exactly 1. We support this conjecture with two plausibility arguments and experimental data. The experimental analysis includes the computation of the expected values of random walks for up to 22 steps. The result shows the expected value asymptotically converging to 1.<|reference_end|>
arxiv
@article{wagner2005random, title={Random Walks with Anti-Correlated Steps}, author={Dirk Wagner, John Noga}, journal={arXiv preprint arXiv:cs/0512043}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512043}, primaryClass={cs.DM cs.PF} }
wagner2005random
arxiv-673618
cs/0512044
Computation of the Ramsey Number $R(W_5,K_5)$
<|reference_start|>Computation of the Ramsey Number $R(W_5,K_5)$: We determine the value of the Ramsey number $R(W_5,K_5)$ to be 27, where $W_5 = K_1 + C_4$ is the 4-spoked wheel of order 5. This solves one of the four remaining open cases in the tables given in 1989 by George R. T. Hendry, which included the Ramsey numbers $R(G,H)$ for all pairs of graphs $G$ and $H$ having five vertices, except for seven entries. In addition, we show that there exists a critical Ramsey graph for $W_5$ versus $K_5$ that is unique up to isomorphism. Our results are based on computer algorithms.<|reference_end|>
arxiv
@article{stinehour2005computation, title={Computation of the Ramsey Number $R(W_5,K_5)$}, author={Joshua Stinehour, Stanisław Radziszowski and Kung-Kuen Tse}, journal={Bulletin of the Institute of Combinatorics and Its Applications, 47 (2006) 53-57}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512044}, primaryClass={cs.DM} }
stinehour2005computation
arxiv-673619
cs/0512045
Branch-and-Prune Search Strategies for Numerical Constraint Solving
<|reference_start|>Branch-and-Prune Search Strategies for Numerical Constraint Solving: When solving numerical constraints such as nonlinear equations and inequalities, solvers often exploit pruning techniques, which remove redundant value combinations from the domains of variables, at pruning steps. To find the complete solution set, most of these solvers alternate the pruning steps with branching steps, which split each problem into subproblems. This forms the so-called branch-and-prune framework, well known among the approaches for solving numerical constraints. The basic branch-and-prune search strategy that uses domain bisections in place of the branching steps is called the bisection search. In general, the bisection search works well in case (i), where the solutions are isolated, but it can be improved further in case (ii), where there are continua of solutions (this often occurs when inequalities are involved). In this paper, we propose a new branch-and-prune search strategy, along with several variants, which not only yields better branching decisions in the latter case but also works as well as the bisection search in the former case. These new search algorithms enable us to employ various pruning techniques in the construction of inner and outer approximations of the solution set. Our experiments show that these algorithms speed up the solving process, often by one order of magnitude or more, when solving problems with continua of solutions, while keeping the same performance as the bisection search when the solutions are isolated.<|reference_end|>
arxiv
@article{vu2005branch-and-prune, title={Branch-and-Prune Search Strategies for Numerical Constraint Solving}, author={Xuan-Ha Vu, Marius-Calin Silaghi, Djamila Sam-Haroud and Boi Faltings}, journal={arXiv preprint arXiv:cs/0512045}, year={2005}, number={LIA-REPORT-2006-007}, archivePrefix={arXiv}, eprint={cs/0512045}, primaryClass={cs.AI} }
vu2005branch-and-prune
arxiv-673620
cs/0512046
A polynomial algorithm for the k-cluster problem on interval graphs
<|reference_start|>A polynomial algorithm for the k-cluster problem on interval graphs: This paper deals with the problem of finding, for a given graph and a given natural number k, a subgraph of k nodes with a maximum number of edges. This problem is known as the k-cluster problem and it is NP-hard on general graphs as well as on chordal graphs. In this paper, it is shown that the k-cluster problem is solvable in polynomial time on interval graphs. In particular, we present two polynomial time algorithms for the class of proper interval graphs and the class of general interval graphs, respectively. Both algorithms are based on a matrix representation for interval graphs. In contrast to representations used in most of the previous work, this matrix representation does not make use of the maximal cliques in the investigated graph.<|reference_end|>
arxiv
@article{mertzios2005a, title={A polynomial algorithm for the k-cluster problem on interval graphs}, author={George B. Mertzios}, journal={arXiv preprint arXiv:cs/0512046}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512046}, primaryClass={cs.DS} }
mertzios2005a
arxiv-673621
cs/0512047
Processing Uncertainty and Indeterminacy in Information Systems success mapping
<|reference_start|>Processing Uncertainty and Indeterminacy in Information Systems success mapping: IS success is a complex concept, and its evaluation is complicated, unstructured and not readily quantifiable. Numerous scientific publications address the issue of success in the IS field, as well as in other fields, but little effort has been devoted to processing indeterminacy and uncertainty in success research. This paper presents a formal method for mapping success using the Neutrosophic Success Map, an emerging tool for processing indeterminacy and uncertainty in success research. EIS success has been analyzed using this tool.<|reference_end|>
arxiv
@article{salmeron2005processing, title={Processing Uncertainty and Indeterminacy in Information Systems success mapping}, author={Jose L. Salmeron, Florentin Smarandache}, journal={arXiv preprint arXiv:cs/0512047}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512047}, primaryClass={cs.AI} }
salmeron2005processing
arxiv-673622
cs/0512048
Spatial Precoder Design for Space-Time Coded MIMO Systems: Based on Fixed Parameters of MIMO Channels
<|reference_start|>Spatial Precoder Design for Space-Time Coded MIMO Systems: Based on Fixed Parameters of MIMO Channels: In this paper, we introduce the novel use of linear spatial precoding based on fixed and known parameters of multiple-input multiple-output (MIMO) channels to improve the performance of space-time coded MIMO systems. We derive linear spatial precoding schemes for both coherent (channel known at the receiver) and non-coherent (channel unknown at the receiver) space-time coded MIMO systems. Antenna spacing and antenna placement (geometry) are considered as fixed parameters of MIMO channels, which are readily known at the transmitter. These precoding schemes exploit the antenna placement information at both ends of the MIMO channel to ameliorate the effect of non-ideal antenna placement on the performance of space-time coded systems. In these schemes, the precoder is fixed for given transmit and receive antenna configurations, and the transmitter does not require any feedback of channel state information (partial or full) from the receiver. Closed-form solutions for both precoding schemes are presented for systems with up to three receive antennas. A generalized method is proposed for more than three receive antennas. We use coherent space-time block codes (STBC) and differential space-time block codes to analyze the performance of the proposed precoding schemes. Simulation results show that at low SNRs, both precoders give significant performance improvement over a non-precoded system for small antenna aperture sizes.<|reference_end|>
arxiv
@article{lamahewa2005spatial, title={Spatial Precoder Design for Space-Time Coded MIMO Systems: Based on Fixed Parameters of MIMO Channels}, author={Tharaka A. Lamahewa, Rodney A. Kennedy, Thushara D. Abhayapala, Van K. Nguyen}, journal={arXiv preprint arXiv:cs/0512048}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512048}, primaryClass={cs.IT math.IT} }
lamahewa2005spatial
arxiv-673623
cs/0512049
Mastermind is NP-Complete
<|reference_start|>Mastermind is NP-Complete: In this paper we show that the Mastermind Satisfiability Problem (MSP) is NP-complete. Mastermind is a popular game which can be turned into a logical puzzle, the Mastermind Satisfiability Problem, in a similar spirit to the Minesweeper puzzle. By proving that MSP is NP-complete, we reveal the intrinsic computational property that makes it challenging and interesting. This serves as an addition to our knowledge about a host of other puzzles, such as Minesweeper, Mah-Jongg, and the 15-puzzle.<|reference_end|>
arxiv
@article{stuckman2005mastermind, title={Mastermind is NP-Complete}, author={Jeff Stuckman and Guo-Qiang Zhang}, journal={arXiv preprint arXiv:cs/0512049}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512049}, primaryClass={cs.CC cs.DM} }
stuckman2005mastermind
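The decision problem above asks whether any secret code is consistent with a given set of (guess, feedback) pairs. A brute-force sketch makes the statement concrete (illustrative only; exponential in the code length, as one expects for an NP-complete problem):

    from collections import Counter
    from itertools import product

    def score(secret, guess):
        """Mastermind feedback: (black, white) peg counts."""
        black = sum(s == g for s, g in zip(secret, guess))
        # whites: color matches regardless of position, minus exact matches
        common = sum((Counter(secret) & Counter(guess)).values())
        return black, common - black

    def msp_satisfiable(colors, n, constraints):
        """Mastermind Satisfiability Problem by exhaustive search: does some
        length-n code over `colors` agree with every (guess, feedback)
        pair in constraints?"""
        return any(
            all(score(c, g) == fb for g, fb in constraints)
            for c in product(colors, repeat=n)
        )

    # msp_satisfiable("ABC", 2, [(("A", "B"), (1, 0))]) -> True
    # (e.g. the code ("A", "C") gives one black peg and no white pegs)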
arxiv-673624
cs/0512050
Preference Learning in Terminology Extraction: A ROC-based approach
<|reference_start|>Preference Learning in Terminology Extraction: A ROC-based approach: A key data preparation step in Text Mining, Term Extraction selects the terms, or collocations of words, attached to specific concepts. In this paper, the task of extracting relevant collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as relevant/irrelevant. The candidate terms are described along 13 standard statistical criteria. From these examples, an evolutionary learning algorithm termed Roger, based on the optimization of the Area under the ROC curve criterion, extracts an order on the candidate terms. The robustness of the approach is demonstrated on two real-world applications, considering different domains (biology and human resources) and different languages (English and French).<|reference_end|>
arxiv
@article{azé2005preference, title={Preference Learning in Terminology Extraction: A ROC-based approach}, author={Jérôme Azé (LRI), Mathieu Roche (LRI), Yves Kodratoff (LRI), Michèle Sebag (LRI)}, journal={Proceedings of Applied Stochastic Models and Data Analysis (2005) 209-219}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512050}, primaryClass={cs.LG} }
azé2005preference
arxiv-673625
cs/0512051
Existence of finite test-sets for k-power-freeness of uniform morphisms
<|reference_start|>Existence of finite test-sets for k-power-freeness of uniform morphisms: A challenging problem is to find an algorithm to decide whether a morphism is k-power-free. We provide such an algorithm for uniform morphisms when k >= 3, showing that in this case, contrary to the general case, there exist finite test-sets for k-power-freeness.<|reference_end|>
arxiv
@article{richomme2005existence, title={Existence of finite test-sets for k-power-freeness of uniform morphisms}, author={Gwénaël Richomme (LaRIA), Francis Wlazinski (LaRIA)}, journal={arXiv preprint arXiv:cs/0512051}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512051}, primaryClass={cs.DM} }
richomme2005existence
arxiv-673626
cs/0512052
High-Throughput SNP Genotyping by SBE/SBH
<|reference_start|>High-Throughput SNP Genotyping by SBE/SBH: Despite much progress over the past decade, current Single Nucleotide Polymorphism (SNP) genotyping technologies still offer an insufficient degree of multiplexing when required to handle user-selected sets of SNPs. In this paper we propose a new genotyping assay architecture combining multiplexed solution-phase single-base extension (SBE) reactions with sequencing by hybridization (SBH) using universal DNA arrays such as all $k$-mer arrays. In addition to PCR amplification of genomic DNA, SNP genotyping using SBE/SBH assays involves the following steps: (1) Synthesizing primers complementing the genomic sequence immediately preceding SNPs of interest; (2) Hybridizing these primers with the genomic DNA; (3) Extending each primer by a single base using polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent dyes; and finally (4) Hybridizing extended primers to a universal DNA array and determining the identity of the bases that extend each primer by hybridization pattern analysis. Our contributions include a study of multiplexing algorithms for SBE/SBH genotyping assays and preliminary experimental results showing the achievable tradeoffs between the number of array probes and primer length on one hand and the number of SNPs that can be assayed simultaneously on the other. Simulation results on datasets both randomly generated and extracted from the NCBI dbSNP database suggest that the SBE/SBH architecture provides a flexible and cost-effective alternative to genotyping assays currently used in the industry, enabling genotyping of up to hundreds of thousands of user-specified SNPs per assay.<|reference_end|>
arxiv
@article{mandoiu2005high-throughput, title={High-Throughput SNP Genotyping by SBE/SBH}, author={Ion I. Mandoiu and Claudia Prajescu}, journal={arXiv preprint arXiv:cs/0512052}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512052}, primaryClass={cs.DS q-bio.GN} }
mandoiu2005high-throughput
arxiv-673627
cs/0512053
Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets
<|reference_start|>Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets: We establish a relationship between the online mistake-bound model of learning and resource-bounded dimension. This connection is combined with the Winnow algorithm to obtain new results about the density of hard sets under adaptive reductions. This improves previous work of Fu (1995) and Lutz and Zhao (2000), and solves one of Lutz and Mayordomo's "Twelve Problems in Resource-Bounded Measure" (1999).<|reference_end|>
arxiv
@article{hitchcock2005online, title={Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets}, author={John M. Hitchcock}, journal={arXiv preprint arXiv:cs/0512053}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512053}, primaryClass={cs.CC cs.LG} }
hitchcock2005online
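Winnow, the algorithm the result above combines with resource-bounded dimension, is short enough to state. A standard textbook sketch for learning monotone disjunctions (generic, not the paper's instantiation): weights are updated multiplicatively, and only on mistakes.

    def winnow(examples, n, alpha=2.0):
        """Winnow: online learning of monotone disjunctions over n Boolean
        attributes, with multiplicative weight updates on mistakes only."""
        w = [1.0] * n            # one positive weight per attribute
        theta = float(n)         # fixed classification threshold
        mistakes = 0
        for x, y in examples:    # x: 0/1 tuple of length n, y in {0, 1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if pred != y:
                mistakes += 1
                for i in range(n):
                    if x[i]:     # update only attributes active in x:
                        # promote on a false negative, demote otherwise
                        w[i] = w[i] * alpha if y == 1 else w[i] / alpha
        return mistakes

    # For a k-literal target disjunction, Winnow makes O(k log n) mistakes;
    # this logarithmic dependence on n is what the dimension argument exploits.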
arxiv-673628
cs/0512054
Irreducible Frequent Patterns in Transactional Databases
<|reference_start|>Irreducible Frequent Patterns in Transactional Databases: Irreducible frequent patterns (IFPs) are introduced for transactional databases. An IFP is a frequent pattern (FP), (x1, x2, ..., xn), whose probability, P(x1, x2, ..., xn), cannot be represented as a product of the probabilities of two (or more) other FPs of smaller length. We have developed an algorithm for searching for IFPs in transactional databases. We argue that IFPs represent useful tools for characterizing transactional databases and may have important applications to bio-systems, including the immune system, and for improving vaccination strategies. The effectiveness of the IFP approach has been illustrated in an application to a classification problem.<|reference_end|>
arxiv
@article{berman2005irreducible, title={Irreducible Frequent Patterns in Transactional Databases}, author={Gennady P. Berman (Los Alamos National Laboratory, T-13), Vyacheslav N. Gorshkov (Los Alamos National Laboratory, Center for Nonlinear Studies), Xidi Wang (Citigroup, Sao Paulo, Brasil)}, journal={arXiv preprint arXiv:cs/0512054}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512054}, primaryClass={cs.DS cs.DB} }
berman2005irreducible
arxiv-673629
cs/0512055
Termination Analysis of General Logic Programs for Moded Queries: A Dynamic Approach
<|reference_start|>Termination Analysis of General Logic Programs for Moded Queries: A Dynamic Approach: The termination problem of a logic program can be addressed in either a static or a dynamic way. A static approach performs termination analysis at compile time, while a dynamic approach characterizes and tests termination of a logic program by applying a loop checking technique. In this paper, we present a novel dynamic approach to termination analysis for general logic programs with moded queries. We address several interesting questions, including how to formulate an SLDNF-derivation for a moded query, how to characterize an infinite SLDNF-derivation with a moded query, and how to apply a loop checking mechanism to cut infinite SLDNF-derivations for the purpose of termination analysis. The proposed approach is very powerful and useful. It can be used (1) to test if a logic program terminates for a given concrete or moded query, (2) to test if a logic program terminates for all concrete or moded queries, and (3) to find all (most general) concrete/moded queries that are most likely terminating (or non-terminating).<|reference_end|>
arxiv
@article{shen2005termination, title={Termination Analysis of General Logic Programs for Moded Queries: A Dynamic Approach}, author={Yi-Dong Shen and Danny De Schreye}, journal={arXiv preprint arXiv:cs/0512055}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512055}, primaryClass={cs.LO cs.PL} }
shen2005termination
arxiv-673630
cs/0512056
PURRS: Towards Computer Algebra Support for Fully Automatic Worst-Case Complexity Analysis
<|reference_start|>PURRS: Towards Computer Algebra Support for Fully Automatic Worst-Case Complexity Analysis: Fully automatic worst-case complexity analysis has a number of applications in computer-assisted program manipulation. A classical and powerful approach to complexity analysis consists in formally deriving, from the program syntax, a set of constraints expressing bounds on the resources required by the program, which are then solved, possibly applying safe approximations. In several interesting cases, these constraints take the form of recurrence relations. While techniques for solving recurrences are known and implemented in several computer algebra systems, these do not completely fulfill the needs of fully automatic complexity analysis: they deal only with a somewhat restricted class of recurrence relations, sometimes require user intervention, or are restricted to the computation of exact solutions that are often so complex as to be unmanageable, and thus useless in practice. In this paper we briefly describe PURRS, a system and software library aimed at providing all the computer algebra services needed by applications performing or exploiting the results of worst-case complexity analyses. The capabilities of the system are illustrated by means of examples derived from the analysis of programs written in a domain-specific functional programming language for real-time embedded systems.<|reference_end|>
arxiv
@article{bagnara2005purrs:, title={PURRS: Towards Computer Algebra Support for Fully Automatic Worst-Case Complexity Analysis}, author={Roberto Bagnara, Andrea Pescetti, Alessandro Zaccagnini, Enea Zaffanella (University of Parma)}, journal={arXiv preprint arXiv:cs/0512056}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512056}, primaryClass={cs.MS cs.CC} }
bagnara2005purrs:
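As a taste of the computer-algebra service involved, here is the kind of closed-form recurrence solving described above, sketched with the general-purpose SymPy library rather than PURRS itself (a Python illustration, not the PURRS API):

    from sympy import Function, rsolve, symbols

    n = symbols('n', integer=True)
    y = Function('y')

    # A cost recurrence typical of worst-case analysis:
    # y(n) = 2*y(n-1) + 1 with y(0) = 0 (towers-of-Hanoi style).
    closed_form = rsolve(y(n) - 2*y(n - 1) - 1, y(n), {y(0): 0})
    print(closed_form)  # 2**n - 1: an exact and still manageable closed form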
arxiv-673631
cs/0512057
Resource Control for Synchronous Cooperative Threads
<|reference_start|>Resource Control for Synchronous Cooperative Threads: We develop new methods to statically bound the resources needed for the execution of systems of concurrent, interactive threads. Our study is concerned with a \emph{synchronous} model of interaction based on cooperative threads whose execution proceeds in synchronous rounds called instants. Our contribution is a system of compositional static analyses to guarantee that each instant terminates and to bound the size of the values computed by the system as a function of the size of its parameters at the beginning of the instant. Our method generalises an approach designed for first-order functional languages that relies on a combination of standard termination techniques for term rewriting systems and an analysis of the size of the computed values based on the notion of quasi-interpretation. We show that these two methods can be combined to obtain an explicit polynomial bound on the resources needed for the execution of the system during an instant. As a second contribution, we introduce a virtual machine and a related bytecode, thus producing a precise description of the resources needed for the execution of a system. In this context, we present a suitable control flow analysis that allows the static analyses for resource control to be formulated at the bytecode level.<|reference_end|>
arxiv
@article{amadio2005resource, title={Resource Control for Synchronous Cooperative Threads}, author={Roberto Amadio (PPS), Silvano Dal Zilio (LIF)}, journal={Journal of Theoretical Computer Science (TCS) 358 (15/08/2006) 229-254}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512057}, primaryClass={cs.PL} }
amadio2005resource
arxiv-673632
cs/0512058
Reactive concurrent programming revisited
<|reference_start|>Reactive concurrent programming revisited: In this note we revisit the so-called reactive programming style, which evolves from the synchronous programming model of the Esterel language by weakening the assumption that the absence of an event can be detected instantaneously. We review some research directions that have been explored since the emergence of the reactive model ten years ago. We shall also outline some questions that remain to be investigated.<|reference_end|>
arxiv
@article{amadio2005reactive, title={Reactive concurrent programming revisited}, author={Roberto Amadio (PPS), Gerard Boudol, Ilaria Castellani, Frederic Boussinot}, journal={Workshop on Process Algebra (29/09/2006) 49-60}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512058}, primaryClass={cs.PL} }
amadio2005reactive
arxiv-673633
cs/0512059
Competing with wild prediction rules
<|reference_start|>Competing with wild prediction rules: We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a "regret term" of O(N^(-1/2)). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^(-1/p)), where p is in [2,infty) and p-2 reflects the degree to which the benchmark class fails to be a Hilbert space.<|reference_end|>
arxiv
@article{vovk2005competing, title={Competing with wild prediction rules}, author={Vladimir Vovk}, journal={arXiv preprint arXiv:cs/0512059}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512059}, primaryClass={cs.LG} }
vovk2005competing
arxiv-673634
cs/0512060
Distributed Navigation Algorithms for Sensor Networks
<|reference_start|>Distributed Navigation Algorithms for Sensor Networks: We propose efficient distributed algorithms to aid navigation of a user through a geographic area covered by sensors. The sensors sense the level of danger at their locations and we use this information to find a safe path for the user through the sensor field. Traditional distributed navigation algorithms rely upon flooding the whole network with packets to find an optimal safe path. To reduce the communication expense, we introduce the concept of a skeleton graph which is a sparse subset of the true sensor network communication graph. Using skeleton graphs we show that it is possible to find approximate safe paths with much lower communication cost. We give tight theoretical guarantees on the quality of our approximation and by simulation, show the effectiveness of our algorithms in realistic sensor network situations.<|reference_end|>
arxiv
@article{buragohain2005distributed, title={Distributed Navigation Algorithms for Sensor Networks}, author={Chiranjeeb Buragohain, Divyakant Agrawal, Subhash Suri}, journal={arXiv preprint arXiv:cs/0512060}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512060}, primaryClass={cs.NI cs.DC cs.DS} }
buragohain2005distributed
arxiv-673635
cs/0512061
Matching Subsequences in Trees
<|reference_start|>Matching Subsequences in Trees: Given two rooted, labeled trees $P$ and $T$ the tree path subsequence problem is to determine which paths in $P$ are subsequences of which paths in $T$. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new algorithms improving the previously best known time and space bounds.<|reference_end|>
arxiv
@article{bille2005matching, title={Matching Subsequences in Trees}, author={Philip Bille and Inge Li Goertz}, journal={arXiv preprint arXiv:cs/0512061}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512061}, primaryClass={cs.DS} }
bille2005matching
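The per-path primitive behind the query above is the ordinary subsequence test, decidable by a greedy linear-time scan; the naive tree algorithm applies it to every pair of root-to-leaf paths, which is the blow-up the paper's algorithms improve on. A minimal sketch (illustrative only):

    def is_subsequence(p, t):
        """Greedy test: is sequence p a subsequence of sequence t?
        Consumes t left to right, matching each symbol of p in turn."""
        it = iter(t)
        return all(symbol in it for symbol in p)

    # is_subsequence("ab", "xaybz") -> True
    # is_subsequence("ba", "xaybz") -> False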
arxiv-673636
cs/0512062
Evolino for recurrent support vector machines
<|reference_start|>Evolino for recurrent support vector machines: Traditional Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They do not have an internal state necessary to deal with sequences involving arbitrary long-term dependencies. Here we introduce a new class of recurrent, truly sequential SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-based outputs (Evoke), an instance of the recent Evolino class of methods. Evoke evolves recurrent neural networks to detect and represent temporal dependencies while using quadratic programming/support vector regression to produce precise outputs. Evoke is the first SVM-based mechanism learning to classify a context-sensitive language. It also outperforms recent state-of-the-art gradient-based recurrent neural networks (RNNs) on various time series prediction tasks.<|reference_end|>
arxiv
@article{schmidhuber2005evolino, title={Evolino for recurrent support vector machines}, author={Juergen Schmidhuber, Matteo Gagliolo, Daan Wierstra, Faustino Gomez}, journal={arXiv preprint arXiv:cs/0512062}, year={2005}, number={IDSIA-19-05 version 2.0}, archivePrefix={arXiv}, eprint={cs/0512062}, primaryClass={cs.NE} }
schmidhuber2005evolino
arxiv-673637
cs/0512063
Complex Random Vectors and ICA Models: Identifiability, Uniqueness and Separability
<|reference_start|>Complex Random Vectors and ICA Models: Identifiability, Uniqueness and Separability: In this paper the conditions for identifiability, separability and uniqueness of linear complex valued independent component analysis (ICA) models are established. These results extend the well-known conditions for solving real-valued ICA problems to complex-valued models. Relevant properties of complex random vectors are described in order to extend the Darmois-Skitovich theorem for complex-valued models. This theorem is used to construct a proof of a theorem for each of the above ICA model concepts. Both circular and noncircular complex random vectors are covered. Examples clarifying the above concepts are presented.<|reference_end|>
arxiv
@article{eriksson2005complex, title={Complex Random Vectors and ICA Models: Identifiability, Uniqueness and Separability}, author={Jan Eriksson and Visa Koivunen}, journal={Information Theory, IEEE Transactions on, vol. 52, no. 3, pp. 1017-1029, March 2006}, year={2005}, doi={10.1109/TIT.2005.864440}, archivePrefix={arXiv}, eprint={cs/0512063}, primaryClass={cs.IT cs.CE cs.IR cs.LG math.IT} }
eriksson2005complex
arxiv-673638
cs/0512064
Computing shortest non-trivial cycles on orientable surfaces of bounded genus in almost linear time
<|reference_start|>Computing shortest non-trivial cycles on orientable surfaces of bounded genus in almost linear time: We present an algorithm that computes a shortest non-contractible and a shortest non-separating cycle on an orientable combinatorial surface of bounded genus in O(n \log n) time, where n denotes the complexity of the surface. This solves a central open problem in computational topology, improving upon the current-best O(n^{3/2})-time algorithm by Cabello and Mohar (ESA 2005). Our algorithm uses universal-cover constructions to find short cycles and makes extensive use of existing tools from the field.<|reference_end|>
arxiv
@article{kutz2005computing, title={Computing shortest non-trivial cycles on orientable surfaces of bounded genus in almost linear time}, author={Martin Kutz}, journal={arXiv preprint arXiv:cs/0512064}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512064}, primaryClass={cs.CG} }
kutz2005computing
arxiv-673639
cs/0512065
Tradeoffs in Metaprogramming
<|reference_start|>Tradeoffs in Metaprogramming: The design of metaprogramming languages requires appreciation of the tradeoffs that exist between important language characteristics such as safety properties, expressive power, and succinctness. Unfortunately, such tradeoffs are little understood, a situation we try to correct by embarking on a study of metaprogramming language tradeoffs using tools from computability theory. Safety properties of metaprograms are in general undecidable; for example, the property that a metaprogram always halts and produces a type-correct instance is $\Pi^0_2$-complete. Although such safety properties are undecidable, they may sometimes be captured by a restricted language, a notion we adapt from complexity theory. We give some sufficient conditions and negative results on when languages capturing properties can exist: there can be no languages capturing total correctness for metaprograms, and no `functional' safety properties above $\Sigma^0_3$ can be captured. We prove that translating a metaprogram from a general-purpose to a restricted metaprogramming language capturing a property is tantamount to proving that property for the metaprogram. Surprisingly, when one shifts perspective from programming to metaprogramming, the corresponding safety questions do not become substantially harder -- there is no `jump' of Turing degree for typical safety properties.<|reference_end|>
arxiv
@article{veldhuizen2005tradeoffs, title={Tradeoffs in Metaprogramming}, author={Todd L. Veldhuizen}, journal={arXiv preprint arXiv:cs/0512065}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512065}, primaryClass={cs.PL} }
veldhuizen2005tradeoffs
arxiv-673640
cs/0512066
On the Asymptotic Weight and Stopping Set Distribution of Regular LDPC Ensembles
<|reference_start|>On the Asymptotic Weight and Stopping Set Distribution of Regular LDPC Ensembles: We estimate the variance of the weight and stopping set distributions of regular LDPC ensembles. Using this estimate and the second moment method, we obtain bounds on the probability that a randomly chosen code from a regular LDPC ensemble has its weight distribution and stopping set distribution close to the respective ensemble averages. We are able to show that a large fraction of the total number of codes have their weight and stopping set distributions close to the averages.<|reference_end|>
arxiv
@article{rathi2005on, title={On the Asymptotic Weight and Stopping Set Distribution of Regular LDPC Ensembles}, author={Vishwambhar Rathi}, journal={arXiv preprint arXiv:cs/0512066}, year={2005}, doi={10.1109/TIT.2006.880065}, archivePrefix={arXiv}, eprint={cs/0512066}, primaryClass={cs.IT math.IT} }
rathi2005on
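For background, the second moment method invoked above rests on the standard Cauchy-Schwarz consequence, for a nonnegative random variable X (here, e.g., the number of codewords or stopping sets of a given size in a random code from the ensemble):

    \Pr[X > 0] \;\ge\; \frac{(\mathbb{E}[X])^2}{\mathbb{E}[X^2]}
              \;=\; \frac{(\mathbb{E}[X])^2}{(\mathbb{E}[X])^2 + \operatorname{Var}(X)},

so a sufficiently small variance estimate forces X to be positive, and close to its mean, for most codes in the ensemble.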
arxiv-673641
cs/0512067
Solving Partial Order Constraints for LPO Termination
<|reference_start|>Solving Partial Order Constraints for LPO Termination: This paper introduces a new kind of propositional encoding for reasoning about partial orders. The symbols in an unspecified partial order are viewed as variables which take integer values and are interpreted as indices in the order. For a partial order statement on n symbols, each index is represented in $\log_2 n$ propositional variables, and partial order constraints between symbols are modeled on the bit representations. We illustrate the application of our approach by using it to determine LPO termination for term rewrite systems. Experimental results are unequivocal, indicating orders of magnitude speedups in comparison with current implementations for LPO termination. The proposed encoding is general and relevant to other applications which involve propositional reasoning about partial orders.<|reference_end|>
arxiv
@article{codish2005solving, title={Solving Partial Order Constraints for LPO Termination}, author={Michael Codish, Vitaly Lagoon, and Peter J. Stuckey}, journal={Journal of Satisfiability, Boolean Modeling and Computation, 5:193-215: 2008}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512067}, primaryClass={cs.PL cs.LO cs.SC} }
codish2005solving
arxiv-673642
cs/0512068
Dynamic Web File Format Transformations with Grace
<|reference_start|>Dynamic Web File Format Transformations with Grace: Web accessible content stored in obscure, unpopular or obsolete formats represents a significant problem for digital preservation. The file formats that encode web content represent the implicit and explicit choices of web site maintainers at a particular point in time. Older file formats that have fallen out of favor are obviously a problem, but so are new file formats that have not yet been fully supported by browsers. Often browsers use plug-in software for displaying old and new formats, but plug-ins can be difficult to find, install and replicate across all environments that one may use. We introduce Grace, an http proxy server that transparently converts browser-incompatible and obsolete web content into web content that a browser is able to display without the use of plug-ins. Grace is configurable on a per user basis and can be expanded to provide an array of conversion services. We illustrate how the Grace prototype transforms several image formats (XBM, PNG with various alpha channels, and JPEG 2000) so they are viewable in Internet Explorer.<|reference_end|>
arxiv
@article{swaney2005dynamic, title={Dynamic Web File Format Transformations with Grace}, author={Daniel S. Swaney, Frank McCown and Michael L. Nelson}, journal={5th International Web Archiving Workshop and Digital Preservation (IWAW'05). September 22-23, 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512068}, primaryClass={cs.DL} }
swaney2005dynamic
arxiv-673643
cs/0512069
Reconstructing Websites for the Lazy Webmaster
<|reference_start|>Reconstructing Websites for the Lazy Webmaster: Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, "lazy" webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of "lazy preservation": digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick's window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches.<|reference_end|>
arxiv
@article{mccown2005reconstructing, title={Reconstructing Websites for the Lazy Webmaster}, author={Frank McCown, Joan A. Smith, Michael L. Nelson, Johan Bollen}, journal={arXiv preprint arXiv:cs/0512069}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512069}, primaryClass={cs.IR cs.CY} }
mccown2005reconstructing
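Warrick itself crawls the Internet Archive plus the Google, MSN, and Yahoo caches; a much smaller stand-in for its first step can be sketched against the Wayback Machine's public availability endpoint (the URL and JSON shape assumed here match the API as commonly documented, not Warrick's own code):

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp=None):
    """Ask the Wayback Machine for the snapshot closest to `timestamp`.

    Returns the snapshot URL, or None if the page was never archived.
    """
    query = {"url": url}
    if timestamp:                       # e.g. "20051201" for December 2005
        query["timestamp"] = timestamp
    api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(query)
    with urllib.request.urlopen(api, timeout=30) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Example (requires network access):
# print(closest_snapshot("example.com", "20060101"))
```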
arxiv-673644
cs/0512070
Incremental and Transitive Discrete Rotations
<|reference_start|>Incremental and Transitive Discrete Rotations: A discrete rotation algorithm can be apprehended as a parametric application $f_\alpha$ from $\mathbb{Z}[i]$ to $\mathbb{Z}[i]$, whose resulting permutation ``looks like'' the map induced by a Euclidean rotation. For this kind of algorithm, to be incremental means to compute successively all the intermediate rotated copies of an image for angles in-between 0 and a destination angle. The discretized rotation consists in the composition of a Euclidean rotation with a discretization; the aim of this article is to describe an algorithm which computes incrementally a discretized rotation. The suggested method uses only integer arithmetic and does not compute any sine nor any cosine. More precisely, its design relies on the analysis of the discretized rotation as a step function: the precise description of the discontinuities turns out to be the key ingredient that makes the resulting procedure optimally fast and exact. A complete description of the incremental rotation process is provided; this result may also be useful in the specification of a consistent set of definitions for discrete geometry.<|reference_end|>
arxiv
@article{nouvel2005incremental, title={Incremental and Transitive Discrete Rotations}, author={Bertrand Nouvel (LIP), Eric Remila (LIP)}, journal={arXiv preprint arXiv:cs/0512070}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512070}, primaryClass={cs.DM cs.GR} }
nouvel2005incremental
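For reference, the map the paper computes incrementally is the composition of a Euclidean rotation with rounding to the nearest Gaussian integer. A floating-point sketch of that reference semantics follows; the paper's actual contribution, the integer-only incremental algorithm, is not reproduced here:

```python
import math

def discretized_rotation(point, alpha):
    """Euclidean rotation by alpha followed by rounding to Z[i].

    This float version only fixes the reference semantics; the paper shows
    how to compute the same map incrementally with integer arithmetic alone.
    """
    x, y = point
    c, s = math.cos(alpha), math.sin(alpha)
    return (round(x * c - y * s), round(x * s + y * c))

# The discretized map is generally not injective: distinct points can collide.
pts = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
images = [discretized_rotation(p, math.pi / 6) for p in pts]
print(len(pts), "points,", len(set(images)), "distinct images")
```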
arxiv-673645
cs/0512071
"Going back to our roots": second generation biocomputing
<|reference_start|>"Going back to our roots": second generation biocomputing: Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward.<|reference_end|>
arxiv
@article{timmis2005"going, title={"Going back to our roots": second generation biocomputing}, author={Jon Timmis, Martyn Amos, Wolfgang Banzhaf and Andy Tyrrell}, journal={arXiv preprint arXiv:cs/0512071}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512071}, primaryClass={cs.AI cs.NE} }
timmis2005"going
arxiv-673646
cs/0512072
Computations with one and two real algebraic numbers
<|reference_start|>Computations with one and two real algebraic numbers: We present algorithmic and complexity results concerning computations with one and two real algebraic numbers, as well as real solving of univariate polynomials and bivariate polynomial systems with integer coefficients using Sturm-Habicht sequences. Our main results, in the univariate case, concern the problems of real root isolation (Th. 19) and simultaneous inequalities (Cor. 26) and, in the bivariate case, the problems of system real solving (Th. 42), sign evaluation (Th. 37) and simultaneous inequalities (Cor. 43).<|reference_end|>
arxiv
@article{emiris2005computations, title={Computations with one and two real algebraic numbers}, author={Ioannis Z. Emiris and Elias P. Tsigaridas}, journal={arXiv preprint arXiv:cs/0512072}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512072}, primaryClass={cs.SC cs.MS} }
emiris2005computations
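The univariate machinery behind such results is sign-variation counting along a polynomial remainder sequence. Below is a plain-Python sketch with exact rational arithmetic, using a classical Sturm chain rather than the Sturm-Habicht sequences the paper actually employs, and assuming a squarefree nonconstant input polynomial:

```python
from fractions import Fraction

def trim(p):                        # drop leading zero coefficients
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_rem(a, b):                 # remainder of a modulo b, MSB-first coeffs
    a, b = trim(list(a)), trim(list(b))
    while len(a) >= len(b) and a != [0]:
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a = trim(a[1:]) or [Fraction(0)]   # leading coefficient is now zero
    return a

def sturm_chain(p):                 # p assumed squarefree and nonconstant
    d = len(p) - 1
    chain = [p, [c * (d - i) for i, c in enumerate(p[:-1])]]   # p and p'
    while True:
        r = poly_rem(chain[-2], chain[-1])
        if r == [0]:
            return chain
        chain.append([-c for c in r])

def variations(chain, x):           # sign changes of the chain evaluated at x
    signs = []
    for p in chain:
        v = Fraction(0)
        for c in p:
            v = v * x + c           # Horner evaluation
        if v != 0:
            signs.append(v > 0)
    return sum(s != t for s, t in zip(signs, signs[1:]))

def roots_in(p, a, b):              # number of distinct real roots in (a, b]
    chain = sturm_chain([Fraction(c) for c in p])
    return variations(chain, Fraction(a)) - variations(chain, Fraction(b))

print(roots_in([1, 0, -3, 1], -2, 2))   # x^3 - 3x + 1 has three roots there
```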
arxiv-673647
cs/0512073
Schwerdtfeger-Fillmore-Springer-Cnops Construction Implemented in GiNaC
<|reference_start|>Schwerdtfeger-Fillmore-Springer-Cnops Construction Implemented in GiNaC: This paper presents an implementation of the Schwerdtfeger-Fillmore-Springer-Cnops construction (SFSCc) along with illustrations of its usage. SFSCc linearises the linear-fraction action of the Moebius group in R^n. This has clear advantages in several theoretical and applied fields including engineering. Our implementation is based on the Clifford algebra capabilities of the GiNaC computer algebra system (http://www.ginac.de/), which were described in cs.MS/0410044. The core of this realisation of SFSCc is done for an arbitrary dimension of R^n with a metric given by an arbitrary bilinear form. We also present a subclass for two dimensional cycles (i.e. circles, parabolas and hyperbolas), which adds some 2D specific routines including a visualisation to PostScript files through the MetaPost (http://www.tug.org/metapost.html) or Asymptote (http://asymptote.sourceforge.net/) packages. This software is the backbone of many results published in math.CV/0512416 and we use its applications there for demonstration. The library can be ported (with various levels of required changes) to other CAS with Clifford algebra capabilities similar to GiNaC. There is an ISO image of a Live Debian DVD attached to this paper as an auxiliary file; a copy is stored on Google Drive as well.<|reference_end|>
arxiv
@article{kisil2005schwerdtfeger-fillmore-springer-cnops, title={Schwerdtfeger-Fillmore-Springer-Cnops Construction Implemented in GiNaC}, author={Vladimir V. Kisil}, journal={Adv. Appl. Clifford Algebr. v.17 (2007), no.1, 59-70}, year={2005}, doi={10.1007/s00006-006-0017-4}, number={LEEDS-MATH-PURE-2005-29}, archivePrefix={arXiv}, eprint={cs/0512073}, primaryClass={cs.MS cs.CG cs.SC} }
kisil2005schwerdtfeger-fillmore-springer-cnops
arxiv-673648
cs/0512074
Analytical Bounds on Maximum-Likelihood Decoded Linear Codes with Applications to Turbo-Like Codes: An Overview
<|reference_start|>Analytical Bounds on Maximum-Likelihood Decoded Linear Codes with Applications to Turbo-Like Codes: An Overview: Upper and lower bounds on the error probability of linear codes under maximum-likelihood (ML) decoding are briefly surveyed and applied to ensembles of codes on graphs. For upper bounds, focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we address bounds based on de Caen's inequality and their improvements, sphere-packing bounds, and information-theoretic bounds on the bit error probability of codes defined on graphs. A comprehensive overview is provided in a monograph by the authors which is currently in preparation.<|reference_end|>
arxiv
@article{sason2005analytical, title={Analytical Bounds on Maximum-Likelihood Decoded Linear Codes with Applications to Turbo-Like Codes: An Overview}, author={Igal Sason and Shlomo Shamai}, journal={arXiv preprint arXiv:cs/0512074}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512074}, primaryClass={cs.IT math.IT} }
sason2005analytical
arxiv-673649
cs/0512075
Performance versus Complexity Per Iteration for Low-Density Parity-Check Codes: An Information-Theoretic Approach
<|reference_start|>Performance versus Complexity Per Iteration for Low-Density Parity-Check Codes: An Information-Theoretic Approach: The paper is focused on the tradeoff between performance and decoding complexity per iteration for LDPC codes in terms of their gap (in rate) to capacity. The study of this tradeoff is done via information-theoretic bounds which also enable one to get an indication of the sub-optimality of message-passing iterative decoding algorithms (as compared to optimal ML decoding). The bounds are generalized for parallel channels, and are applied to ensembles of punctured LDPC codes where both intentional and random puncturing are addressed. This work suggests an improvement in the tightness of some information-theoretic bounds which were previously derived by Burshtein et al. and by Sason and Urbanke.<|reference_end|>
arxiv
@article{sason2005performance, title={Performance versus Complexity Per Iteration for Low-Density Parity-Check Codes: An Information-Theoretic Approach}, author={Igal Sason and Gil Wiechman}, journal={arXiv preprint arXiv:cs/0512075}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512075}, primaryClass={cs.IT math.IT} }
sason2005performance
arxiv-673650
cs/0512076
On Achievable Rates and Complexity of LDPC Codes for Parallel Channels: Information-Theoretic Bounds and Applications
<|reference_start|>On Achievable Rates and Complexity of LDPC Codes for Parallel Channels: Information-Theoretic Bounds and Applications: The paper presents bounds on the achievable rates and the decoding complexity of low-density parity-check (LDPC) codes. It is assumed that the communication of these codes takes place over statistically independent parallel channels where these channels are memoryless, binary-input and output-symmetric (MBIOS). The bounds are applied to punctured LDPC codes. A diagram concludes our discussion by showing interconnections between the theorems in this paper and some previously reported results.<|reference_end|>
arxiv
@article{sason2005on, title={On Achievable Rates and Complexity of LDPC Codes for Parallel Channels: Information-Theoretic Bounds and Applications}, author={Igal Sason and Gil Wiechman}, journal={arXiv preprint arXiv:cs/0512076}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512076}, primaryClass={cs.IT math.IT} }
sason2005on
arxiv-673651
cs/0512077
Flat Holonomies on Automata Networks
<|reference_start|>Flat Holonomies on Automata Networks: We consider asynchronous networks of identical finite (independent of network's size or topology) automata. Our automata drive any network from any initial configuration of states to a coherent one, in which it can efficiently carry out any computation implementable on synchronous, properly initialized networks of the same size. A useful data structure on such networks is a partial orientation of its edges. It needs to be flat, i.e. have null holonomy (no excess of up or down edges in any cycle). It also needs to be centered, i.e. have a unique node with no down edges. There are (interdependent) self-stabilizing asynchronous finite automata protocols assuring flat centered orientation. Such protocols may vary in assorted efficiency parameters and it is desirable to have each replaceable with any alternative, responsible for a simple limited task. We describe an efficient reduction of any computational task to any such set of protocols compliant with our interface conditions.<|reference_end|>
arxiv
@article{itkis2005flat, title={Flat Holonomies on Automata Networks}, author={Gene Itkis, Leonid A. Levin}, journal={arXiv preprint arXiv:cs/0512077}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512077}, primaryClass={cs.DC cs.DM} }
itkis2005flat
arxiv-673652
cs/0512078
Graph-Cover Decoding and Finite-Length Analysis of Message-Passing Iterative Decoding of LDPC Codes
<|reference_start|>Graph-Cover Decoding and Finite-Length Analysis of Message-Passing Iterative Decoding of LDPC Codes: The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding the so-called fundamental polytope. Therefore, we give some characterizations of this polytope and explain its relation to earlier concepts that were introduced to understand the behavior of message-passing iterative decoding for finite-length codes.<|reference_end|>
arxiv
@article{vontobel2005graph-cover, title={Graph-Cover Decoding and Finite-Length Analysis of Message-Passing Iterative Decoding of LDPC Codes}, author={Pascal O. Vontobel and Ralf Koetter}, journal={arXiv preprint arXiv:cs/0512078}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512078}, primaryClass={cs.IT math.IT} }
vontobel2005graph-cover
arxiv-673653
cs/0512079
An invariant bayesian model selection principle for gaussian data in a sparse representation
<|reference_start|>An invariant bayesian model selection principle for gaussian data in a sparse representation: We develop a code length principle which is invariant to the choice of parameterization on the model distributions. An invariant approximation formula for easy computation of the marginal distribution is provided for gaussian likelihood models. We provide invariant estimators of the model parameters and formulate conditions under which these estimators are essentially a posteriori unbiased for gaussian models. An upper bound on the coarseness of discretization on the model parameters is deduced. We introduce a discrimination measure between probability distributions and use it to construct probability distributions on model classes. The total code length is shown to equal the NML code length of Rissanen to within an additive constant when choosing Jeffreys prior distribution on the model parameters together with a particular choice of prior distribution on the model classes. Our model selection principle is applied to a gaussian estimation problem for data in a wavelet representation and its performance is tested and compared to alternative wavelet-based estimation methods in numerical experiments.<|reference_end|>
arxiv
@article{fossgaard2005an, title={An invariant bayesian model selection principle for gaussian data in a sparse representation}, author={Eirik Fossgaard}, journal={arXiv preprint arXiv:cs/0512079}, year={2005}, number={82-92461-43-4}, archivePrefix={arXiv}, eprint={cs/0512079}, primaryClass={cs.IT math.IT} }
fossgaard2005an
arxiv-673654
cs/0512080
EqRank: Theme Evolution in Citation Graphs
<|reference_start|>EqRank: Theme Evolution in Citation Graphs: Time evolution of the classification scheme generated by the EqRank algorithm is studied with hep-th citation graph as an example. Intuitive expectations about evolution of an adequate classification scheme for a growing set of objects are formulated. Evolution compliant with these expectations is called natural. It is demonstrated that EqRank yields a naturally evolving classification scheme. We conclude that EqRank can be used as a means to detect new scientific themes, and to track their development.<|reference_end|>
arxiv
@article{pivovarov2005eqrank:, title={EqRank: Theme Evolution in Citation Graphs}, author={G. B. Pivovarov and S. E. Trunov}, journal={arXiv preprint arXiv:cs/0512080}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512080}, primaryClass={cs.DS cs.DL} }
pivovarov2005eqrank:
arxiv-673655
cs/0512081
De Dictionariis Dynamicis Pauco Spatio Utentibus
<|reference_start|>De Dictionariis Dynamicis Pauco Spatio Utentibus: We develop dynamic dictionaries on the word RAM that use asymptotically optimal space, up to constant factors, subject to insertions and deletions, and subject to supporting perfect-hashing queries and/or membership queries, each operation in constant time with high probability. When supporting only membership queries, we attain the optimal space bound of Theta(n lg(u/n)) bits, where n and u are the sizes of the dictionary and the universe, respectively. Previous dictionaries either did not achieve this space bound or had time bounds that were only expected and amortized. When supporting perfect-hashing queries, the optimal space bound depends on the range {1,2,...,n+t} of hashcodes allowed as output. We prove that the optimal space bound is Theta(n lglg(u/n) + n lg(n/(t+1))) bits when supporting only perfect-hashing queries, and it is Theta(n lg(u/n) + n lg(n/(t+1))) bits when also supporting membership queries. All upper bounds are new, as is the Omega(n lg(n/(t+1))) lower bound.<|reference_end|>
arxiv
@article{demaine2005de, title={De Dictionariis Dynamicis Pauco Spatio Utentibus}, author={Erik D. Demaine, Friedhelm Meyer auf der Heide, Rasmus Pagh and Mihai Patrascu}, journal={arXiv preprint arXiv:cs/0512081}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512081}, primaryClass={cs.DS} }
demaine2005de
arxiv-673656
cs/0512082
A Fixpoint Semantics of Event Systems with and without Fairness Assumptions
<|reference_start|>A Fixpoint Semantics of Event Systems with and without Fairness Assumptions: We present a fixpoint semantics of event systems. The semantics is presented in a general framework without concerns of fairness. Soundness and completeness of rules for deriving "leads-to" properties are proved in this general framework. The general framework is instantiated to minimal progress and weak fairness assumptions and similar results are obtained. We show the power of these results by deriving sufficient conditions for "leads-to" under minimal progress proving soundness of proof obligations without reasoning over state-traces.<|reference_end|>
arxiv
@article{barradas2005a, title={A Fixpoint Semantics of Event Systems with and without Fairness Assumptions}, author={Hector Ruiz Barradas (LSR - IMAG), Didier Bert (LSR - IMAG)}, journal={arXiv preprint arXiv:cs/0512082}, year={2005}, number={RR 1081-L LSR 21}, archivePrefix={arXiv}, eprint={cs/0512082}, primaryClass={cs.LO} }
barradas2005a
arxiv-673657
cs/0512083
New directions in mechanism design
<|reference_start|>New directions in mechanism design: Mechanism design uses the tools of economics and game theory to design rules of interaction for economic transactions that will, in principle, yield some desired outcome. In the last few years this field has received much interest from researchers in computer science, especially with the Internet developing as a platform for communications and connections among enormous numbers of computers and humans. Arguably the most positive result in mechanism design is truthfulness, and there is only one general truthful mechanism so far: the generalized Vickrey-Clarke-Groves (VCG) mechanism. But the VCG mechanism has one shortcoming: it implements truthfulness at the cost of decreasing the revenue of the mechanism (e.g., Ning Chen and Hong Zhu [1999]). We introduce three new characteristics of mechanisms: partly truthful, critical, and consistent, and introduce a new mechanism, the X mechanism, that satisfies all three. Like the VCG mechanism, the X mechanism also generalizes the Vickrey auction and is consistent with the Vickrey auction in many ways, but the extension method used in the X mechanism differs from that in the VCG mechanism. This paper demonstrates that the X mechanism is better than the VCG mechanism at optimizing the utility of the mechanism, which is the original intention of mechanism design. So partly truthful, critical, and consistent are at least as important as truthful in mechanism design, and they go beyond truthful in many situations. As a result, we conclude that partly truthful, critical, and consistent are three new directions in mechanism design.<|reference_end|>
arxiv
@article{meng2005new, title={New directions in mechanism design}, author={Jiangtao Meng}, journal={arXiv preprint arXiv:cs/0512083}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512083}, primaryClass={cs.GT} }
meng2005new
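Since the abstract above is anchored to the Vickrey auction and its VCG generalization, a one-item second-price auction may help fix ideas; this is the textbook rule, not the paper's proposed X mechanism:

```python
def vickrey_auction(bids):
    """Single-item second-price sealed-bid (Vickrey) auction.

    The highest bidder wins and pays the second-highest bid, which is what
    makes truthful bidding a dominant strategy.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

print(vickrey_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```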
arxiv-673658
cs/0512084
Understanding physics from interconnected data
<|reference_start|>Understanding physics from interconnected data: Metal melting on release after explosion is a physical system far from equilibrium. A complete physical model of this system does not exist, because many interrelated effects have to be considered. General methodology needs to be developed so as to describe and understand the physical phenomena involved. The high noise of the data, the motion blur of images, the high degree of uncertainty due to the different types of sensors, and the information entangled and hidden inside the noisy images make reasoning about the physical processes very difficult. Major problems include proper information extraction and the problem of reconstruction, as well as prediction of the missing data. In this paper, several techniques addressing the first problem are given, building the basis for tackling the second problem.<|reference_end|>
arxiv
@article{sakhanenko2005understanding, title={Understanding physics from interconnected data}, author={Nikita Sakhanenko, Hanna Makaruk}, journal={arXiv preprint arXiv:cs/0512084}, year={2005}, number={LA-UR-05-5921}, archivePrefix={arXiv}, eprint={cs/0512084}, primaryClass={cs.CV} }
sakhanenko2005understanding
arxiv-673659
cs/0512085
Analyzing and Visualizing the Semantic Coverage of Wikipedia and Its Authors
<|reference_start|>Analyzing and Visualizing the Semantic Coverage of Wikipedia and Its Authors: This paper presents a novel analysis and visualization of English Wikipedia data. Our specific interest is the analysis of basic statistics, the identification of the semantic structure and age of the categories in this free online encyclopedia, and the content coverage of its highly productive authors. The paper starts with an introduction of Wikipedia and a review of related work. We then introduce a suite of measures and approaches to analyze and map the semantic structure of Wikipedia. The results show that co-occurrences of categories within individual articles have a power-law distribution, and when mapped reveal the nicely clustered semantic structure of Wikipedia. The results also reveal the content coverage of the article's authors, although the roles these authors play are as varied as the authors themselves. We conclude with a discussion of major results and planned future work.<|reference_end|>
arxiv
@article{holloway2005analyzing, title={Analyzing and Visualizing the Semantic Coverage of Wikipedia and Its Authors}, author={Todd Holloway, Miran Bozicevic, Katy B"orner}, journal={arXiv preprint arXiv:cs/0512085}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512085}, primaryClass={cs.IR} }
holloway2005analyzing
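The category co-occurrence measure at the center of the preceding analysis reduces to pair counting over articles; a sketch on a toy corpus (the input format is an assumption, and a real run would use the full category assignments of an English Wikipedia dump):

```python
from collections import Counter
from itertools import combinations

def category_cooccurrence(articles):
    """Count how often two categories are assigned to the same article."""
    cooc = Counter()
    for cats in articles:
        for c1, c2 in combinations(sorted(set(cats)), 2):
            cooc[(c1, c2)] += 1
    return cooc

articles = [
    {"Physics", "History of science"},
    {"Physics", "Mathematics"},
    {"Physics", "Mathematics", "History of science"},
]
for pair, n in category_cooccurrence(articles).most_common():
    print(pair, n)
```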
arxiv-673660
cs/0512086
On the Axiomatisation of Boolean Categories with and without Medial
<|reference_start|>On the Axiomatisation of Boolean Categories with and without Medial: The term ``Boolean category'' should be used for describing an object that is to categories what a Boolean algebra is to posets. More specifically, a Boolean category should provide the abstract algebraic structure underlying the proofs in Boolean Logic, in the same sense as a Cartesian closed category captures the proofs in intuitionistic logic and a *-autonomous category captures the proofs in linear logic. However, recent work has shown that there is no canonical axiomatisation of a Boolean category. In this work, we will see a series (with increasing strength) of possible such axiomatisations, all based on the notion of *-autonomous category. We will particularly focus on the medial map, which has its origin in an inference rule in KS, a cut-free deductive system for Boolean logic in the calculus of structures. Finally, we will present a category of proof nets as a particularly well-behaved example of a Boolean category.<|reference_end|>
arxiv
@article{strassburger2005on, title={On the Axiomatisation of Boolean Categories with and without Medial}, author={Lutz Strassburger}, journal={arXiv preprint arXiv:cs/0512086}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512086}, primaryClass={cs.LO} }
strassburger2005on
arxiv-673661
cs/0512087
Fundamental Limits and Scaling Behavior of Cooperative Multicasting in Wireless Networks
<|reference_start|>Fundamental Limits and Scaling Behavior of Cooperative Multicasting in Wireless Networks: A framework is developed for analyzing capacity gains from user cooperation in slow fading wireless networks when the number of nodes (network size) is large. The framework is illustrated for the case of a simple multipath-rich Rayleigh fading channel model. Both unicasting (one source and one destination) and multicasting (one source and several destinations) scenarios are considered. We introduce a meaningful notion of Shannon capacity for such systems, evaluate this capacity as a function of signal-to-noise ratio (SNR), and develop a simple two-phase cooperative network protocol that achieves it. We observe that the resulting capacity is the same for both unicasting and multicasting, but show that the network size required to achieve any target error probability is smaller for unicasting than for multicasting. Finally, we introduce the notion of a network ``scaling exponent'' to quantify the rate of decay of error probability with network size as a function of the targeted fraction of the capacity. This exponent provides additional insights to system designers by enabling a finer grain comparison of candidate cooperative transmission protocols in even moderately sized networks.<|reference_end|>
arxiv
@article{khisti2005fundamental, title={Fundamental Limits and Scaling Behavior of Cooperative Multicasting in Wireless Networks}, author={Ashish Khisti, Uri Erez, Gregory Wornell}, journal={arXiv preprint arXiv:cs/0512087}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512087}, primaryClass={cs.IT cs.NI math.IT} }
khisti2005fundamental
arxiv-673662
cs/0512088
Analysis of loss networks with routing
<|reference_start|>Analysis of loss networks with routing: This paper analyzes stochastic networks consisting of finite capacity nodes with different classes of requests which move according to some routing policy. The Markov processes describing these networks do not, in general, have reversibility properties, so the explicit expression of their invariant distribution is not known. Kelly's limiting regime is considered: the arrival rates of calls as well as the capacities of the nodes are proportional to a factor going to infinity. It is proved that, in the limit, the associated rescaled Markov process converges to a deterministic dynamical system with a unique equilibrium point characterized by a nonstandard fixed point equation.<|reference_end|>
arxiv
@article{antunes2005analysis, title={Analysis of loss networks with routing}, author={Nelson Antunes, Christine Fricker, Philippe Robert, Danielle Tibi}, journal={Annals of Applied Probability 16, 4 (2006) 2007-2026}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512088}, primaryClass={cs.NI math.PR} }
antunes2005analysis
arxiv-673663
cs/0512089
On The Effectiveness of Kolmogorov Complexity Estimation to Discriminate Semantic Types
<|reference_start|>On The Effectiveness of Kolmogorov Complexity Estimation to Discriminate Semantic Types: We present progress on the experimental validation of a fundamental and universally applicable vulnerability analysis framework that is capable of identifying new types of vulnerabilities before attackers innovate attacks. This new framework proactively identifies system components that are vulnerable based upon their Kolmogorov Complexity estimates and it facilitates prediction of previously unknown vulnerabilities that are likely to be exploited by future attack methods. A tool that utilizes a growing library of complexity estimators is presented. This work is an incremental step towards validation of the concept of complexity-based vulnerability analysis. In particular, results indicate that data types (semantic types) can be identified by estimates of their complexity. Thus, a map of complexity can identify suspicious types, such as executable data embedded within passive data types, without resorting to predefined headers, signatures, or other limiting a priori information.<|reference_end|>
arxiv
@article{bush2005on, title={On The Effectiveness of Kolmogorov Complexity Estimation to Discriminate Semantic Types}, author={Stephen F. Bush, Todd Hughes}, journal={SFI Workshop: Resilient and Adaptive Defense of Computing Networks 2003, Santa Fe Institute, Santa Fe, NM, Nov 5-6, 2003}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512089}, primaryClass={cs.NI cs.CR} }
bush2005on
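Kolmogorov complexity is uncomputable, so any tool like the one described above must rely on estimators. One common stand-in, not necessarily among those in the authors' estimator library, is the compression ratio:

```python
import random
import zlib

def complexity_estimate(data: bytes) -> float:
    """Compressed size over original size: lower means more regular data."""
    return len(zlib.compress(data, 9)) / max(1, len(data))

text = b"the quick brown fox jumps over the lazy dog " * 50
rand = bytes(random.getrandbits(8) for _ in range(len(text)))
print("text  :", round(complexity_estimate(text), 3))   # low: highly regular
print("random:", round(complexity_estimate(rand), 3))   # near 1: incompressible
```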
arxiv-673664
cs/0512090
Collaborative tagging as a tripartite network
<|reference_start|>Collaborative tagging as a tripartite network: We describe online collaborative communities by tripartite networks, the nodes being persons, items and tags. We introduce projection methods in order to uncover the structures of the networks, i.e. communities of users, genre families... To do so, we focus on the correlations between the nodes, depending on their profiles, and use percolation techniques that consist in removing less correlated links and observing the shaping of disconnected islands. The structuring of the network is visualised by using a tree representation. The notion of diversity in the system is also discussed.<|reference_end|>
arxiv
@article{lambiotte2005collaborative, title={Collaborative tagging as a tripartite network}, author={R. Lambiotte and M. Ausloos}, journal={Lecture Notes in Computer Science, 3993 (2006) 1114 - 1117}, year={2005}, doi={10.1007/11758532_152}, archivePrefix={arXiv}, eprint={cs/0512090}, primaryClass={cs.DS cs.DL} }
lambiotte2005collaborative
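The projection-and-percolation step from the abstract above can be sketched as: weight user pairs by the cosine similarity of their tag sets, drop links below a threshold, and read off the surviving islands. The similarity measure here is a simplified stand-in for the correlation measure used in the paper:

```python
from math import sqrt

def project_users(tagging):
    """tagging: {user: set of tags} -> weighted user-user links."""
    users = list(tagging)
    links = {}
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            shared = len(tagging[u] & tagging[v])
            if shared:
                links[(u, v)] = shared / sqrt(len(tagging[u]) * len(tagging[v]))
    return links

def percolate(users, links, threshold):
    """Keep links with weight >= threshold; return the connected islands."""
    adj = {u: set() for u in users}
    for (u, v), w in links.items():
        if w >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    seen, islands = set(), []
    for u in adj:
        if u not in seen:
            stack, comp = [u], set()
            while stack:
                x = stack.pop()
                if x not in comp:
                    comp.add(x)
                    stack.extend(adj[x])
            seen |= comp
            islands.append(comp)
    return islands

tagging = {"ann": {"jazz", "piano"}, "bob": {"jazz", "guitar"},
           "eve": {"metal", "guitar"}, "joe": {"metal"}}
links = project_users(tagging)
for t in (0.2, 0.6):                 # raising the threshold shatters the network
    print(t, percolate(tagging, links, t))
```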
arxiv-673665
cs/0512091
Data Structures for Halfplane Proximity Queries and Incremental Voronoi Diagrams
<|reference_start|>Data Structures for Halfplane Proximity Queries and Incremental Voronoi Diagrams: We consider preprocessing a set $S$ of $n$ points in convex position in the plane into a data structure supporting queries of the following form: given a point $q$ and a directed line $\ell$ in the plane, report the point of $S$ that is farthest from (or, alternatively, nearest to) the point $q$ among all points to the left of line $\ell$. We present two data structures for this problem. The first data structure uses $O(n^{1+\varepsilon})$ space and preprocessing time, and answers queries in $O(2^{1/\varepsilon} \log n)$ time, for any $0 < \varepsilon < 1$. The second data structure uses $O(n \log^3 n)$ space and polynomial preprocessing time, and answers queries in $O(\log n)$ time. These are the first solutions to the problem with $O(\log n)$ query time and $o(n^2)$ space. The second data structure uses a new representation of nearest- and farthest-point Voronoi diagrams of points in convex position. This representation supports the insertion of new points in clockwise order using only $O(\log n)$ amortized pointer changes, in addition to $O(\log n)$-time point-location queries, even though every such update may make $\Theta(n)$ combinatorial changes to the Voronoi diagram. This data structure is the first demonstration that deterministically and incrementally constructed Voronoi diagrams can be maintained in $o(n)$ amortized pointer changes per operation while keeping $O(\log n)$-time point-location queries.<|reference_end|>
arxiv
@article{aronov2005data, title={Data Structures for Halfplane Proximity Queries and Incremental Voronoi Diagrams}, author={Boris Aronov, Prosenjit Bose, Erik D. Demaine, Joachim Gudmundsson, John Iacono, Stefan Langerman, Michiel Smid}, journal={arXiv preprint arXiv:cs/0512091}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512091}, primaryClass={cs.CG cs.DS} }
aronov2005data
arxiv-673666
cs/0512092
The Limits of Motion Prediction Support for Ad hoc Wireless Network Performance
<|reference_start|>The Limits of Motion Prediction Support for Ad hoc Wireless Network Performance: A fundamental understanding of the gain provided by motion prediction in wireless ad hoc routing is currently lacking. This paper examines benefits in routing obtainable via prediction. A theoretical best-case non-predictive routing model is quantified in terms of both message overhead and update time. This best-case model of existing routing performance is compared with predictive routing. Several specific instances of predictive improvements in routing are examined. The primary contribution of this paper is quantification of the predictive gain for wireless ad hoc routing.<|reference_end|>
arxiv
@article{bush2005the, title={The Limits of Motion Prediction Support for Ad hoc Wireless Network Performance}, author={Stephen F. Bush and Nathan Smith}, journal={The 2005 International Conference on Wireless Networks (ICWN-05) Monte Carlo Resort, Las Vegas, Nevada, USA, June 27-30, 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512092}, primaryClass={cs.NI} }
bush2005the
arxiv-673667
cs/0512093
Construction of Turbo Code Interleavers from 3-Regular Hamiltonian Graphs
<|reference_start|>Construction of Turbo Code Interleavers from 3-Regular Hamiltonian Graphs: In this letter we present a new construction of interleavers for turbo codes from 3-regular Hamiltonian graphs. The interleavers can be generated using a few parameters, which can be selected in such a way that the girth of the interleaver graph (IG) becomes large, inducing a high summary distance. The size of the search space for these parameters is derived. The proposed interleavers themselves work as their de-interleavers.<|reference_end|>
arxiv
@article{mazumdar2005construction, title={Construction of Turbo Code Interleavers from 3-Regular Hamiltonian Graphs}, author={Arya Mazumdar, A K Chaturvedi, Adrish Banerjee}, journal={IEEE Communications Letters, pp. 284-286, Vol. 10, Issue 4, April, 2006.}, year={2005}, doi={10.1109/LCOMM.2006.04013}, archivePrefix={arXiv}, eprint={cs/0512093}, primaryClass={cs.IT math.IT} }
mazumdar2005construction
arxiv-673668
cs/0512094
Low-Energy Sensor Network Time Synchronization as an Emergent Property
<|reference_start|>Low-Energy Sensor Network Time Synchronization as an Emergent Property: The primary contribution of this work is to examine the energy efficiency of pulse coupled oscillation for time synchronization in a realistic wireless network environment and to explore the impact of mobility on convergence rate. Energy coupled oscillation is susceptible to interference; this approach uses reception and decoding of short packet bursts to eliminate this problem. The energy efficiency of a commonly used timestamp broadcast algorithm is compared and contrasted with pulse-coupled oscillation. The emergent pulse coupled oscillation technique shows greater energy efficiency as well as robustness with mobility. A proportion of the sensors may be integrated with GPS receivers in order to obtain a master clock time.<|reference_end|>
arxiv
@article{bush2005low-energy, title={Low-Energy Sensor Network Time Synchronization as an Emergent Property}, author={Stephen F. Bush}, journal={arXiv preprint arXiv:cs/0512094}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512094}, primaryClass={cs.NI} }
bush2005low-energy
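A toy event-driven simulation of pulse-coupled oscillators in the Mirollo-Strogatz style shows the emergent synchronization the paper builds on; the coupling rule and all constants below are illustrative assumptions, and radio effects such as interference and packet decoding are not modeled:

```python
import random

def cycles_to_sync(n=20, eps=0.1, seed=1, max_cycles=1000, tol=1e-9):
    """Event-driven pulse-coupled oscillators, Mirollo-Strogatz style.

    Phases ramp toward the threshold 1; a firing node resets to 0 and nudges
    every listener upward by eps (a crude stand-in for receiving a short
    packet burst). Returns the number of firing cycles until every node
    fires in one cascade, or None if that never happens.
    """
    rng = random.Random(seed)
    phase = [rng.random() for _ in range(n)]
    for cycle in range(1, max_cycles + 1):
        dt = 1.0 - max(phase)                  # advance time to the next firing
        phase = [p + dt for p in phase]
        fired = {i for i, p in enumerate(phase) if p >= 1.0 - tol}
        pending = len(fired)                   # pulses not yet delivered
        while pending:                         # absorb chain reactions
            k, pending = pending, 0
            for i in range(n):
                if i not in fired:
                    phase[i] += k * eps
                    if phase[i] >= 1.0 - tol:
                        fired.add(i)
                        pending += 1
        if len(fired) == n:
            return cycle                       # everyone fired together
        for i in fired:
            phase[i] = 0.0
    return None

print(cycles_to_sync())
```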
arxiv-673669
cs/0512095
The Internet AS-Level Topology: Three Data Sources and One Definitive Metric
<|reference_start|>The Internet AS-Level Topology: Three Data Sources and One Definitive Metric: We calculate an extensive set of characteristics for Internet AS topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. Among the widely considered metrics, we find that the joint degree distribution appears to fundamentally characterize Internet AS topologies as well as narrowly define values for other important metrics. We discuss the interplay between the specifics of the three data collection mechanisms and the resulting topology views. In particular, we show how the data collection peculiarities explain differences in the resulting joint degree distributions of the respective topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement should enable researchers to validate their models against real data and to make more informed selection of topology data sources for their specific needs.<|reference_end|>
arxiv
@article{mahadevan2005the, title={The Internet AS-Level Topology: Three Data Sources and One Definitive Metric}, author={Priya Mahadevan, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker, Xenofontas Dimitropoulos, kc claffy, Amin Vahdat}, journal={ACM SIGCOMM Computer Communication Review (CCR), v.36, n.1, p.17-26, 2006}, year={2005}, doi={10.1145/1111322.1111328}, archivePrefix={arXiv}, eprint={cs/0512095}, primaryClass={cs.NI physics.soc-ph} }
mahadevan2005the
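The joint degree distribution the preceding abstract singles out is cheap to compute from an edge list; a sketch, assuming an undirected simple graph:

```python
from collections import Counter

def joint_degree_distribution(edges):
    """P(k1, k2): fraction of edges joining nodes of degrees k1 <= k2.

    This is the metric the paper finds to narrowly constrain the other
    topology measures.
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    counts = Counter(tuple(sorted((degree[u], degree[v]))) for u, v in edges)
    m = len(edges)
    return {pair: c / m for pair, c in counts.items()}

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)]
print(joint_degree_distribution(edges))
```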
arxiv-673670
cs/0512096
Book review "The Haskell Road to Logic, Maths and Programming"
<|reference_start|>Book review "The Haskell Road to Logic, Maths and Programming": The textbook by Doets and van Eijck puts the Haskell programming language systematically to work for presenting a major piece of logic and mathematics. The reader is taken through chapters on basic logic, proof recipes, sets and lists, relations and functions, recursion and co-recursion, the number systems, polynomials and power series, ending with Cantor's infinities. The book uses Haskell for the executable and strongly typed manifestation of various mathematical notions at the level of declarative programming. The book adopts a systematic but relaxed mathematical style (definition, example, exercise, ...); the text is very pleasant to read due to a small amount of anecdotal information, and due to the fact that definitions are fluently integrated in the running text. An important goal of the book is to get the reader acquainted with reasoning about programs.<|reference_end|>
arxiv
@article{laemmel2005book, title={Book review "The Haskell Road to Logic, Maths and Programming"}, author={Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0512096}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512096}, primaryClass={cs.PL cs.LO} }
laemmel2005book
arxiv-673671
cs/0512097
Gaussian Channels with Feedback: Optimality, Fundamental Limitations, and Connections of Communication, Estimation, and Control
<|reference_start|>Gaussian Channels with Feedback: Optimality, Fundamental Limitations, and Connections of Communication, Estimation, and Control: Gaussian channels with memory and with noiseless feedback have been widely studied in the information theory literature. However, a coding scheme to achieve the feedback capacity is not available. In this paper, a coding scheme is proposed to achieve the feedback capacity for Gaussian channels. The coding scheme essentially implements the celebrated Kalman filter algorithm, and is equivalent to an estimation system over the same channel without feedback. It reveals that the achievable information rate of the feedback communication system can be alternatively given by the decay rate of the Cramer-Rao bound of the associated estimation system. Thus, combined with the control theoretic characterizations of feedback communication (proposed by Elia), this implies that the fundamental limitations in feedback communication, estimation, and control coincide. This leads to a unifying perspective that integrates information, estimation, and control. We also establish the optimality of the Kalman filtering in the sense of information transmission, a supplement to the optimality of Kalman filtering in the sense of information processing proposed by Mitter and Newton. In addition, the proposed coding scheme generalizes the Schalkwijk-Kailath codes and reduces the coding complexity and coding delay. The construction of the coding scheme amounts to solving a finite-dimensional optimization problem. A simplification to the optimal stationary input distribution developed by Yang, Kavcic, and Tatikonda is also obtained. The results are verified in a numerical example.<|reference_end|>
arxiv
@article{liu2005gaussian, title={Gaussian Channels with Feedback: Optimality, Fundamental Limitations, and Connections of Communication, Estimation, and Control}, author={Jialing Liu and Nicola Elia}, journal={arXiv preprint arXiv:cs/0512097}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512097}, primaryClass={cs.IT math.IT} }
liu2005gaussian
arxiv-673672
cs/0512098
Mathematical models of the complex surfaces in simulation and visualization systems
<|reference_start|>Mathematical models of the complex surfaces in simulation and visualization systems: Modeling, simulation and visualization of three-dimension complex bodies widely use mathematical model of curves and surfaces. The most important curves and surfaces for these purposes are curves and surfaces in Hermite and Bezier forms, splines and NURBS. Article is devoted to survey this way to use geometrical data in various computer graphics systems and adjacent fields.<|reference_end|>
arxiv
@article{paukov2005mathematical, title={Mathematical models of the complex surfaces in simulation and visualization systems}, author={Dmitry P. Paukov}, journal={arXiv preprint arXiv:cs/0512098}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512098}, primaryClass={cs.GR cs.CG} }
paukov2005mathematical
arxiv-673673
cs/0512099
Mathematical Models in Schema Theory
<|reference_start|>Mathematical Models in Schema Theory: In this paper, a mathematical schema theory is developed. This theory has three roots: brain theory schemas, grid automata, and block-shemas. In Section 2 of this paper, elements of the theory of grid automata necessary for the mathematical schema theory are presented. In Section 3, elements of brain theory necessary for the mathematical schema theory are presented. In Section 4, other types of schemas are considered. In Section 5, the mathematical schema theory is developed. The achieved level of schema representation allows one to model by mathematical tools virtually any type of schemas considered before, including schemas in neurophisiology, psychology, computer science, Internet technology, databases, logic, and mathematics.<|reference_end|>
arxiv
@article{burgin2005mathematical, title={Mathematical Models in Schema Theory}, author={Mark Burgin}, journal={arXiv preprint arXiv:cs/0512099}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512099}, primaryClass={cs.AI} }
burgin2005mathematical
arxiv-673674
cs/0512100
The logic of interactive Turing reduction
<|reference_start|>The logic of interactive Turing reduction: The paper gives a soundness and completeness proof for the implicative fragment of intuitionistic calculus with respect to the semantics of computability logic, which understands intuitionistic implication as interactive algorithmic reduction. This concept -- more precisely, the associated concept of reducibility -- is a generalization of Turing reducibility from the traditional, input/output sorts of problems to computational tasks of arbitrary degrees of interactivity. See http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on computability logic.<|reference_end|>
arxiv
@article{japaridze2005the, title={The logic of interactive Turing reduction}, author={Giorgi Japaridze}, journal={Journal of Symbolic Logic 72 (2007), pp. 243-276}, year={2005}, doi={10.2178/jsl/1174668394}, archivePrefix={arXiv}, eprint={cs/0512100}, primaryClass={cs.LO cs.AI math.LO} }
japaridze2005the
arxiv-673675
cs/0512101
On the Complexity of finding Stopping Distance in Tanner Graphs
<|reference_start|>On the Complexity of finding Stopping Distance in Tanner Graphs: Two decision problems related to the computation of stopping sets in Tanner graphs are shown to be NP-complete. NP-hardness of the problem of computing the stopping distance of a Tanner graph follows as a consequence.<|reference_end|>
arxiv
@article{krishnan2005on, title={On the Complexity of finding Stopping Distance in Tanner Graphs}, author={K. Murali Krishnan, Priti Shankar}, journal={IEEE Trans. Info. Theory, 53(6), 2007, pp. 2278-2280.}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512101}, primaryClass={cs.IT cs.CC math.IT} }
krishnan2005on
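Consistent with the NP-hardness results above, the straightforward way to compute the stopping distance is exhaustive search over subsets of variable nodes. A sketch, with the Tanner graph given as a dense 0/1 parity-check matrix:

```python
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set iff no check node has exactly one neighbor in S."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def stopping_distance(H):
    """Size of the smallest nonempty stopping set (exponential-time search,
    as the NP-completeness results lead one to expect)."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None

H = [[1, 1, 0, 1, 0, 0],   # a small example parity-check matrix
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
print(stopping_distance(H))   # 3: e.g. columns {0, 1, 2} hit every check twice
```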
arxiv-673676
cs/0512102
Statistical Parameters of the Novel "Perekhresni stezhky" ("The Cross-Paths") by Ivan Franko
<|reference_start|>Statistical Parameters of the Novel "Perekhresni stezhky" ("The Cross-Paths") by Ivan Franko: In the paper, a comprehensive statistical characterization of a Ukrainian novel is given for the first time. The distribution of word-forms with respect to their size is studied. The linguistic laws of Zipf-Mandelbrot and Altmann-Menzerath are analyzed.<|reference_end|>
arxiv
@article{buk2005statistical, title={Statistical Parameters of the Novel "Perekhresni stezhky" ("The Cross-Paths") by Ivan Franko}, author={Solomija Buk and Andrij Rovenchak}, journal={Quantitative Linguistics 62: Exact methods in the study of language and text: dedicated to Professor Gabriel Altmann on the occasion of his 75th birthday / Ed. by P. Grzybek and R. Kohler (Berlin; New York: de Gruyter), 39-48 (2007)}, year={2005}, doi={10.1515/9783110894219.39}, archivePrefix={arXiv}, eprint={cs/0512102}, primaryClass={cs.CL} }
buk2005statistical
arxiv-673677
cs/0512103
The Fibonacci Sequence Mod m
<|reference_start|>The Fibonacci Sequence Mod m: This paper proposes a computational method for obtaining the length of the cycle that arises from the Fibonacci series taken mod m (some number) and mod p (some prime number).<|reference_end|>
arxiv
@article{mello2005the, title={The Fibonacci Sequence Mod m}, author={Louis Mello}, journal={arXiv preprint arXiv:cs/0512103}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512103}, primaryClass={cs.OH} }
mello2005the
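A direct implementation of the computation the preceding abstract describes, using the classical bound that the period of the Fibonacci sequence mod m (the Pisano period) is at most 6m; the paper's own method may differ:

```python
def pisano_period(m):
    """Length of the cycle of the Fibonacci sequence taken mod m (m >= 2)."""
    a, b = 0, 1                        # (F_0, F_1)
    for k in range(1, 6 * m + 1):      # pi(m) <= 6m, so the loop always finds it
        a, b = b, (a + b) % m
        if (a, b) == (0, 1):           # the starting pair recurs: cycle closed
            return k
    raise AssertionError("unreachable for m >= 2")

print(pisano_period(10))   # 60: last digits of Fibonacci numbers repeat every 60
print(pisano_period(7))    # 16
```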
arxiv-673678
cs/0512104
Reversible CAM Processor Modeled After Quantum Computer Behavior
<|reference_start|>Reversible CAM Processor Modeled After Quantum Computer Behavior: Proposed below is a reversible digital computer modeled after the natural behavior of a quantum system. Using approaches usually reserved for idealized quantum computers, the Reversible CAM, or State Vector Parallel (RSVP) processor can easily find keywords in an unstructured database (that is, it can solve a needle in a haystack problem). The RSVP processor efficiently solves a SAT (Satisfiability of Boolean Formulae) problem; also it can aid in the solution of a GP (Global Properties of Truth Table) problem. The power delay product of the RSVP processor is exponentially lower than that of a standard CAM programmed to perform similar operations.<|reference_end|>
arxiv
@article{burger2005reversible, title={Reversible CAM Processor Modeled After Quantum Computer Behavior}, author={John Robert Burger}, journal={arXiv preprint arXiv:cs/0512104}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512104}, primaryClass={cs.AR quant-ph} }
burger2005reversible
arxiv-673679
cs/0512105
A study of the edge-switching Markov-chain method for the generation of random graphs
<|reference_start|>A study of the edge-switching Markov-chain method for the generation of random graphs: We study the problem of generating connected random graphs with no self-loops or multiple edges and that, in addition, have a given degree sequence. The generation method we focus on is the edge-switching Markov-chain method, whose functioning depends on a parameter w related to the method's core operation of an edge switch. We analyze two existing heuristics for adjusting w during the generation of a graph and show that they result in a Markov chain whose stationary distribution is uniform, thus ensuring that generation occurs uniformly at random. We also introduce a novel w-adjusting heuristic which, even though it does not always lead to a Markov chain, is still guaranteed to converge to the uniform distribution under relatively mild conditions. We report on extensive computer experiments comparing the three heuristics' performance at generating random graphs whose node degrees are distributed as power laws.<|reference_end|>
arxiv
@article{stauffer2005a, title={A study of the edge-switching Markov-chain method for the generation of random graphs}, author={Alexandre O. Stauffer, Valmir C. Barbosa}, journal={arXiv preprint arXiv:cs/0512105}, year={2005}, archivePrefix={arXiv}, eprint={cs/0512105}, primaryClass={cs.DM} }
stauffer2005a
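A plain version of the chain's core operation, the degree-preserving double edge switch with rejection of self-loops, multi-edges, and disconnecting moves; the parameter w and the adjustment heuristics that the paper analyzes for uniformity are deliberately not reproduced:

```python
import random
from collections import deque

def connected(adj):
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

def edge_switch_chain(edges, steps, rng=random):
    """Run `steps` attempted double edge switches on a simple connected graph."""
    edges = [tuple(e) for e in edges]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for _ in range(steps):
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if rng.random() < 0.5:          # choose one of the two rewirings
            c, d = d, c
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue                    # would create a self-loop or multi-edge
        for (u, v) in ((a, b), (c, d)): # apply the switch tentatively
            adj[u].discard(v); adj[v].discard(u)
        for (u, v) in ((a, d), (c, b)):
            adj[u].add(v); adj[v].add(u)
        if connected(adj):
            edges[i], edges[j] = (a, d), (c, b)
        else:                           # roll back a disconnecting switch
            for (u, v) in ((a, d), (c, b)):
                adj[u].discard(v); adj[v].discard(u)
            for (u, v) in ((a, b), (c, d)):
                adj[u].add(v); adj[v].add(u)
    return edges

ring = [(i, (i + 1) % 8) for i in range(8)]      # degree-2 ring on 8 nodes
print(edge_switch_chain(ring, 200, random.Random(0)))
```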
arxiv-673680
cs/0601001
Truecluster: robust scalable clustering with model selection
<|reference_start|>Truecluster: robust scalable clustering with model selection: Data-based classification is fundamental to most branches of science. While recent years have brought enormous progress in various areas of statistical computing and clustering, some general challenges in clustering remain: model selection, robustness, and scalability to large datasets. We consider the important problem of deciding on the optimal number of clusters, given an arbitrary definition of space and clusteriness. We show how to construct a cluster information criterion that allows objective model selection. Differing from other approaches, our truecluster method does not require specific assumptions about underlying distributions, dissimilarity definitions or cluster models. Truecluster puts arbitrary clustering algorithms into a generic unified (sampling-based) statistical framework. It is scalable to big datasets and provides robust cluster assignments and case-wise diagnostics. Truecluster will make clustering more objective, allows for automation, and will save time and costs. Free R software is available.<|reference_end|>
arxiv
@article{oehlschlägel2006truecluster:, title={Truecluster: robust scalable clustering with model selection}, author={Jens Oehlschl"agel}, journal={arXiv preprint arXiv:cs/0601001}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601001}, primaryClass={cs.AI} }
oehlschlägel2006truecluster:
arxiv-673681
cs/0601002
Minimum-weight triangulation is NP-hard
<|reference_start|>Minimum-weight triangulation is NP-hard: A triangulation of a planar point set S is a maximal plane straight-line graph with vertex set S. In the minimum-weight triangulation (MWT) problem, we are looking for a triangulation of a given point set that minimizes the sum of the edge lengths. We prove that the decision version of this problem is NP-hard. We use a reduction from PLANAR-1-IN-3-SAT. The correct working of the gadgets is established with computer assistance, using dynamic programming on polygonal faces, as well as the beta-skeleton heuristic to certify that certain edges belong to the minimum-weight triangulation.<|reference_end|>
arxiv
@article{mulzer2006minimum-weight, title={Minimum-weight triangulation is NP-hard}, author={Wolfgang Mulzer and Guenter Rote}, journal={Journal of the ACM, 55, no. 2 (May 2008), Article 11, 29 pp.}, year={2006}, doi={10.1145/1346330.1346336}, number={B 05-23 (revised)}, archivePrefix={arXiv}, eprint={cs/0601002}, primaryClass={cs.CG cs.CC} }
mulzer2006minimum-weight
arxiv-673682
cs/0601003
Incremental copying garbage collection for WAM-based Prolog systems
<|reference_start|>Incremental copying garbage collection for WAM-based Prolog systems: The design and implementation of an incremental copying heap garbage collector for WAM-based Prolog systems is presented. Its heap layout consists of a number of equal-sized blocks. Other changes to the standard WAM allow these blocks to be garbage collected independently. The independent collection of heap blocks forms the basis of an incremental collecting algorithm which employs copying without marking (contrary to the more frequently used mark&copy or mark&slide algorithms in the context of Prolog). Compared to standard semi-space copying collectors, this approach to heap garbage collection lowers in many cases the memory usage and reduces pause times. The algorithm also allows for a wide variety of garbage collection policies including generational ones. The algorithm is implemented and evaluated in the context of hProlog.<|reference_end|>
arxiv
@article{vandeginste2006incremental, title={Incremental copying garbage collection for WAM-based Prolog systems}, author={Ruben Vandeginste, Bart Demoen}, journal={arXiv preprint arXiv:cs/0601003}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601003}, primaryClass={cs.PL} }
vandeginste2006incremental
arxiv-673683
cs/0601004
Integration of navigation and action selection functionalities in a computational model of cortico-basal ganglia-thalamo-cortical loops
<|reference_start|>Integration of navigation and action selection functionalities in a computational model of cortico-basal ganglia-thalamo-cortical loops: This article describes a biomimetic control architecture affording an animat both action selection and navigation functionalities. It satisfies the survival constraint of an artificial metabolism and supports several complementary navigation strategies. It builds upon an action selection model based on the basal ganglia of the vertebrate brain, using two interconnected cortico-basal ganglia-thalamo-cortical loops: a ventral one concerned with appetitive actions and a dorsal one dedicated to consummatory actions. The performances of the resulting model are evaluated in simulation. The experiments assess the prolonged survival permitted by the use of high level navigation strategies and the complementarity of navigation strategies in dynamic environments. The correctness of the behavioral choices in situations of antagonistic or synergetic internal states are also tested. Finally, the modelling choices are discussed with regard to their biomimetic plausibility, while the experimental results are estimated in terms of animat adaptivity.<|reference_end|>
arxiv
@article{girard2006integration, title={Integration of navigation and action selection functionalities in a computational model of cortico-basal ganglia-thalamo-cortical loops}, author={Beno^it Girard (LIP6, LPPA), David Filliat, Jean-Arcady Meyer (LIP6), Alain Berthoz (LPPA), Agn`es Guillot (LIP6)}, journal={Adaptive Behavior 13 (2005) 115-130}, year={2006}, doi={10.1177/105971230501300204}, archivePrefix={arXiv}, eprint={cs/0601004}, primaryClass={cs.AI cs.RO} }
girard2006integration
arxiv-673684
cs/0601005
Analyzing language development from a network approach
<|reference_start|>Analyzing language development from a network approach: In this paper we propose some new measures of language development using network analysis, an approach inspired by the recent surge of interest in network studies of many real-world systems. Children's and care-takers' speech data from a longitudinal study are represented as a series of networks, word forms being taken as nodes and collocation of words as links. Measures of the properties of the networks, such as size, connectivity, hub and authority analyses, etc., allow us to make quantitative comparisons so as to reveal different paths of development. For example, the asynchrony of development in network size and average degree suggests that children cannot simply be classified as early talkers or late talkers by one or two measures. Children follow different paths in a multi-dimensional space. They may develop faster in one dimension but slower in another dimension. The network approach requires little preprocessing of words and little analysis of sentence structures, and the characteristics of words and their usage emerge from the network and are independent of any grammatical presumptions. We show that the change of the two articles "the" and "a" in their roles as important nodes in the network reflects the progress of children's syntactic development: the two articles often start in children's networks as hubs and later shift to authorities, while they are constantly authorities in the adult's networks. The network analyses provide a new approach to study language development, and at the same time language development also presents a rich area for network theories to explore.<|reference_end|>
arxiv
@article{ke2006analyzing, title={Analyzing language development from a network approach}, author={J-Y Ke and Y. Yao}, journal={arXiv preprint arXiv:cs/0601005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601005}, primaryClass={cs.CL} }
ke2006analyzing
arxiv-673685
cs/0601006
On the Joint Source-Channel Coding Error Exponent for Discrete Memoryless Systems: Computation and Comparison with Separate Coding
<|reference_start|>On the Joint Source-Channel Coding Error Exponent for Discrete Memoryless Systems: Computation and Comparison with Separate Coding: We investigate the computation of Csiszar's bounds for the joint source-channel coding (JSCC) error exponent, E_J, of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source-channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_T <= E_J <= 2E_T. We establish conditions for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source-channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise channels and Rayleigh fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure.<|reference_end|>
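For orientation, the objects being compared can be written in their standard textbook forms; the notation below (source error exponent e(R,Q), channel random-coding exponent E_r(R,W), channel reliability function E(R,W)) is an assumption about conventions, not a quotation from the paper.

```latex
% Csiszar's random-coding lower bound on the JSCC exponent, and the usual
% structure of the tandem exponent (rate-matching details omitted):
E_J \;\ge\; \min_{R} \bigl[\, e(R,Q) + E_r(R,W) \,\bigr],
\qquad
E_T \;=\; \max_{R} \,\min\bigl\{\, e(R,Q),\; E(R,W) \,\bigr\}.
% The paper's comparison  E_T \le E_J \le 2 E_T  is stated in these terms.
```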
arxiv
@article{zhong2006on, title={On the Joint Source-Channel Coding Error Exponent for Discrete Memoryless Systems: Computation and Comparison with Separate Coding}, author={Y. Zhong, F. Alajaji and L. L. Campbell}, journal={arXiv preprint arXiv:cs/0601006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601006}, primaryClass={cs.IT math.IT} }
zhong2006on
arxiv-673686
cs/0601007
The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link Part I: scalar systems
<|reference_start|>The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link Part I: scalar systems: We review how Shannon's classical notion of capacity is not enough to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system. While classical capacity is not enough, another sense of capacity (parametrized by reliability) called ``anytime capacity'' is shown to be necessary for the stabilization of an unstable process. The required rate is given by the log of the unstable system gain and the required reliability comes from the sense of stability desired. A consequence of this necessity result is a sequential generalization of the Schalkwijk/Kailath scheme for communication over the AWGN channel with feedback. In cases of sufficiently rich information patterns between the encoder and decoder, adequate anytime capacity is also shown to be sufficient for there to exist a stabilizing controller. These sufficiency results are then generalized to cases with noisy observations, delayed control actions, and without any explicit feedback between the observer and the controller. Both necessary and sufficient conditions are extended to continuous time systems as well. We close with comments discussing a hierarchy of difficulty for communication problems and how these results establish where stabilization problems sit in that hierarchy.<|reference_end|>
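A compact rendering of the scalar necessity condition as it is usually quoted; the exact pairing of moment sense and anytime reliability is defined in the paper, so read the following as a hedged summary rather than a verbatim theorem.

```latex
% Scalar plant  x_{t+1} = \lambda x_t + u_t + w_t  with  |\lambda| > 1.
% Keeping the \eta-th moment of the state bounded across a noisy channel
% requires anytime capacity, evaluated at anytime reliability
% \alpha = \eta \log_2 |\lambda|, exceeding the log of the unstable gain:
C_{\mathrm{anytime}}\bigl(\eta \log_2 |\lambda|\bigr) \;>\; \log_2 |\lambda| .
```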
arxiv
@article{sahai2006the, title={The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link Part I: scalar systems}, author={Anant Sahai and Sanjoy Mitter}, journal={arXiv preprint arXiv:cs/0601007}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601007}, primaryClass={cs.IT math.IT} }
sahai2006the
arxiv-673687
cs/0601008
A Hierarchical Analysis of Propositional Temporal Logic Based on Intervals
<|reference_start|>A Hierarchical Analysis of Propositional Temporal Logic Based on Intervals: We present a hierarchical framework for analysing propositional linear-time temporal logic (PTL) to obtain standard results such as a small model property, decision procedures and axiomatic completeness. Both finite time and infinite time are considered and one consequent benefit of the framework is the ability to systematically reduce infinite-time reasoning to finite-time reasoning. The treatment of PTL with both the operator Until and past time naturally reduces to that for PTL without either one. Our method utilises a low-level normal form for PTL called a "transition configuration". In addition, we employ reasoning about intervals of time. Besides being hierarchical and interval-based, the approach differs from other analyses of PTL typically based on sets of formulas and sequences of such sets. Instead we describe models using time intervals represented as finite and infinite sequences of states. The analysis relates larger intervals with smaller ones. Steps involved are expressed in Propositional Interval Temporal Logic (PITL) which is better suited than PTL for sequentially combining and decomposing formulas. Consequently, we can articulate issues in PTL model construction of equal relevance in more conventional analyses but normally only considered at the metalevel. We also describe a decision procedure based on Binary Decision Diagrams.<|reference_end|>
arxiv
@article{moszkowski2006a, title={A Hierarchical Analysis of Propositional Temporal Logic Based on Intervals}, author={Ben Moszkowski}, journal={arXiv preprint arXiv:cs/0601008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601008}, primaryClass={cs.LO} }
moszkowski2006a
arxiv-673688
cs/0601009
Gaussian Fading is the Worst Fading
<|reference_start|>Gaussian Fading is the Worst Fading: The capacity of peak-power limited, single-antenna, non-coherent, flat-fading channels with memory is considered. The emphasis is on the capacity pre-log, i.e., on the limiting ratio of channel capacity to the logarithm of the signal-to-noise ratio (SNR), as the SNR tends to infinity. It is shown that, among all stationary and ergodic fading processes of a given spectral distribution function whose law has no mass point at zero, the Gaussian process gives rise to the smallest pre-log.<|reference_end|>
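The pre-log mentioned here has a one-line definition, written out below from the abstract's own description (the symbol names are assumptions):

```latex
% Capacity pre-log: the limiting ratio of capacity to log-SNR,
\Pi \;=\; \lim_{\mathrm{SNR}\to\infty} \frac{C(\mathrm{SNR})}{\log \mathrm{SNR}} .
% The theorem: among stationary ergodic fading laws with a given spectral
% distribution function and no mass point at zero, Gaussian fading minimizes \Pi.
```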
arxiv
@article{koch2006gaussian, title={Gaussian Fading is the Worst Fading}, author={Tobias Koch, Amos Lapidoth}, journal={arXiv preprint arXiv:cs/0601009}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601009}, primaryClass={cs.IT math.IT} }
koch2006gaussian
arxiv-673689
cs/0601010
Multi-Map Orbit Hopping Chaotic Stream Cipher
<|reference_start|>Multi-Map Orbit Hopping Chaotic Stream Cipher: In this paper we propose a multi-map orbit hopping chaotic stream cipher that combines the idea of the spread spectrum mechanism for secure digital communications with the fundamental chaos characteristics of mixing, unpredictability, and extreme sensitivity to initial conditions. The design, the key and subkeys, and the detailed implementation of the system are addressed. A variable number of well-studied chaotic maps form a map bank. The key determines how the system hops between multiple orbits, as well as the number of maps, the number of orbits for each map, and the number of sample points for each orbit. A detailed example is provided.<|reference_end|>
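The orbit-hopping mechanism is simple to sketch. The following toy keystream generator uses a two-map bank (logistic and tent maps), a naive key-to-orbit schedule, and low-byte sampling; every one of those choices is an illustrative assumption, and the sketch is emphatically not a secure cipher or the paper's actual design.

```python
# Toy sketch of a multi-map orbit-hopping keystream generator.
# NOT cryptographically secure; illustrates the hopping idea only.
def logistic(x):
    return 3.99 * x * (1.0 - x)

def tent(x):
    return 1.999 * min(x, 1.0 - x)

def keystream(key_bytes, length, samples_per_orbit=16):
    maps = [logistic, tent]                      # the map bank
    # Derive per-orbit [map index, state] pairs from the key and burn in.
    orbits = []
    for b in key_bytes:
        m, x = b % len(maps), (b + 1) / 257.0
        for _ in range(64):                      # discard the transient
            x = maps[m](x)
        orbits.append([m, x])
    out = []
    i = 0
    while len(out) < length:
        m, x = orbits[i % len(orbits)]           # hop to the next orbit
        for _ in range(samples_per_orbit):       # sample this orbit
            x = maps[m](x)
            out.append(int(x * 256) & 0xFF)
        orbits[i % len(orbits)][1] = x           # persist the orbit state
        i += 1
    return bytes(out[:length])

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
```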
arxiv
@article{zhang2006multi-map, title={Multi-Map Orbit Hopping Chaotic Stream Cipher}, author={Xiaowen Zhang, Li Shu, Ke Tang}, journal={arXiv preprint arXiv:cs/0601010}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601010}, primaryClass={cs.CR} }
zhang2006multi-map
arxiv-673690
cs/0601011
Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative Type metrics
<|reference_start|>Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative Type metrics: We study various SDP formulations for {\sc Vertex Cover} by adding different constraints to the standard formulation. We show that {\sc Vertex Cover} cannot be approximated better than $2-o(1)$ even when we add the so-called pentagonal inequality constraints to the standard SDP formulation, en route answering an open question of Karakostas~\cite{Karakostas}. We further show the surprising fact that by strengthening the SDP with the (intractable) requirement that the metric interpretation of the solution be an $\ell_1$ metric, we get an exact relaxation (integrality gap 1), whereas if the solution is merely arbitrarily close to being $\ell_1$ embeddable, the integrality gap may be as big as $2-o(1)$. Finally, inspired by the above findings, we use ideas from the integrality gap construction of Charikar \cite{Char02} to provide a family of simple examples of negative type metrics that cannot be embedded into $\ell_1$ with distortion better than $8/7-\eps$. To this end we prove a new isoperimetric inequality for the hypercube.<|reference_end|>
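For reference, the standard SDP relaxation that the abstract strengthens is usually written as follows; this formulation is taken from the general literature and is assumed, not quoted, to match the paper's starting point.

```latex
% Standard SDP relaxation of Vertex Cover on G=(V,E): unit vectors
% v_0, v_1, ..., v_n, with v_0 marking the "in the cover" direction.
\min \ \sum_{i \in V} \frac{1 + \langle v_0, v_i \rangle}{2}
\quad \text{s.t.} \quad
(v_i - v_0) \cdot (v_j - v_0) = 0 \ \ \forall (i,j) \in E,
\qquad \lVert v_i \rVert = 1 \ \ \forall i.
% Valid inequalities are then layered on top, e.g. the triangle constraints
% (v_i - v_k)\cdot(v_j - v_k) \ge 0 and, in the paper, pentagonal inequalities.
```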
arxiv
@article{hatami2006integrality, title={Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative Type metrics}, author={Hamed Hatami and Avner Magen and Vangelis Markakis}, journal={arXiv preprint arXiv:cs/0601011}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601011}, primaryClass={cs.DS cs.DM math.MG} }
hatami2006integrality
arxiv-673691
cs/0601012
Product Multicommodity Flow in Wireless Networks
<|reference_start|>Product Multicommodity Flow in Wireless Networks: We provide a tight approximate characterization of the $n$-dimensional product multicommodity flow (PMF) region for a wireless network of $n$ nodes. Separate characterizations in terms of the spectral properties of appropriate network graphs are obtained in both an information theoretic sense and for a combinatorial interference model (e.g., Protocol model). These provide an inner approximation to the $n^2$ dimensional capacity region. These results answer the following questions which arise naturally from previous work: (a) What is the significance of $1/\sqrt{n}$ in the scaling laws for the Protocol interference model obtained by Gupta and Kumar (2000)? (b) Can we obtain a tight approximation to the "maximum supportable flow" for node distributions more general than the geometric random distribution, traffic models other than randomly chosen source-destination pairs, and under very general assumptions on the channel fading model? We first establish that the random source-destination model is essentially a one-dimensional approximation to the capacity region, and a special case of product multi-commodity flow. Building on previous results, for a combinatorial interference model given by a network and a conflict graph, we relate the product multicommodity flow to the spectral properties of the underlying graphs resulting in computational upper and lower bounds. For the more interesting random fading model with additive white Gaussian noise (AWGN), we show that the scaling laws for PMF can again be tightly characterized by the spectral properties of appropriately defined graphs. As an implication, we obtain computationally efficient upper and lower bounds on the PMF for any wireless network with a guaranteed approximation factor.<|reference_end|>
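The product multicommodity flow at the center of this characterization has a short standard definition, assumed here to match the paper's:

```latex
% Product multicommodity flow: given node weights \pi_u with \sum_u \pi_u = 1,
% a PMF of value f routes, between every ordered pair of nodes (u,v),
d_{uv} \;=\; f \,\pi_u \pi_v
% units of flow. The PMF region is described by the largest feasible f, which
% the paper bounds above and below via spectral (conductance-type) properties
% of the underlying network and conflict graphs.
```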
arxiv
@article{madan2006product, title={Product Multicommodity Flow in Wireless Networks}, author={Ritesh Madan, Devavrat Shah, and Olivier Leveque}, journal={arXiv preprint arXiv:cs/0601012}, year={2006}, doi={10.1109/TIT.2008.917663}, archivePrefix={arXiv}, eprint={cs/0601012}, primaryClass={cs.IT math.IT} }
madan2006product
arxiv-673692
cs/0601013
Forward slicing of functional logic programs by partial evaluation
<|reference_start|>Forward slicing of functional logic programs by partial evaluation: Program slicing has been mainly studied in the context of imperative languages, where it has been applied to a wide variety of software engineering tasks, like program understanding, maintenance, debugging, testing, code reuse, etc. This work introduces the first forward slicing technique for declarative multi-paradigm programs which integrate features from functional and logic programming. Basically, given a program and a slicing criterion (a function call in our setting), the computed forward slice contains those parts of the original program which are reachable from the slicing criterion. Our approach to program slicing is based on an extension of (online) partial evaluation. Therefore, it provides a simple way to develop program slicing tools from existing partial evaluators and helps to clarify the relation between both methodologies. A slicing tool for the multi-paradigm language Curry, which demonstrates the usefulness of our approach, has been implemented in Curry itself.<|reference_end|>
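At its core, the computed forward slice is a reachability closure from the slicing criterion; the sketch below shows only that skeleton over a toy call graph (the names and the dict representation are assumptions), while the actual technique refines reachability through partial evaluation.

```python
# Skeleton of forward slicing as reachability: starting from the slicing
# criterion (a function call), keep everything reachable from it.
from collections import deque

def forward_slice(call_graph: dict[str, set[str]], criterion: str) -> set[str]:
    """Return all functions reachable from the slicing criterion."""
    slice_, queue = {criterion}, deque([criterion])
    while queue:
        f = queue.popleft()
        for g in call_graph.get(f, set()):
            if g not in slice_:
                slice_.add(g)
                queue.append(g)
    return slice_

program = {"main": {"parse", "eval"}, "eval": {"lookup"}, "parse": set(),
           "lookup": set(), "unused": {"helper"}, "helper": set()}
print(forward_slice(program, "eval"))  # {'eval', 'lookup'} -- 'unused' is dropped
```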
arxiv
@article{silva2006forward, title={Forward slicing of functional logic programs by partial evaluation}, author={Josep Silva and Germán Vidal}, journal={arXiv preprint arXiv:cs/0601013}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601013}, primaryClass={cs.PL cs.LO} }
silva2006forward
arxiv-673693
cs/0601014
Probabilistic bisimilarities between quantum processes
<|reference_start|>Probabilistic bisimilarities between quantum processes: Modeling and reasoning about concurrent quantum systems is very important both for distributed quantum computing and for quantum protocol verification. As a consequence, a general framework formally describing communication and concurrency in complex quantum systems is necessary. For this purpose, we propose a model, qCCS, which is a natural quantum extension of classical value-passing CCS with the input and output of quantum states, and with unitary transformations and measurements on quantum systems. The operational semantics of qCCS is given in terms of probabilistic labeled transition systems. This semantics differs in many respects from the proposals in the literature, in order to describe the input and output of quantum systems that are possibly correlated with other components. Based on this operational semantics, we introduce the notions of strong probabilistic bisimilarity and weak probabilistic bisimilarity between quantum processes and discuss some of their properties, such as congruence under various combinators.<|reference_end|>
arxiv
@article{feng2006probabilistic, title={Probabilistic bisimilarities between quantum processes}, author={Yuan Feng, Runyao Duan, Zhengfeng Ji, and Mingsheng Ying}, journal={Information and Computation 2007, 205:1608-1639}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601014}, primaryClass={cs.LO quant-ph} }
feng2006probabilistic
arxiv-673694
cs/0601015
Perturbation Analysis of a Variable M/M/1 Queue: A Probabilistic Approach
<|reference_start|>Perturbation Analysis of a Variable M/M/1 Queue: A Probabilistic Approach: Motivated by the coexistence of elastic and unresponsive traffic on the transmission links of telecommunication networks, we study in this paper the impact of a small perturbation of the server rate on the busy period of an M/M/1 queue. The perturbation depends upon an independent stationary process (X(t)) and is quantified by means of a parameter \eps \ll 1. We specifically compute the first two terms of the power series expansion in \eps of the mean value of the busy period duration. This allows us to study the validity of the Reduced Service Rate (RSR) approximation, which consists in comparing the perturbed M/M/1 queue with the M/M/1 queue whose service rate is constant and equal to the mean value of the perturbation. For the first term of the expansion, the two systems are equivalent. For the second term, the situation is more complex, and it is shown that the correlations of the environment process (X(t)) play a key role.<|reference_end|>
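In symbols, the quantity computed is the ε-expansion of the mean busy period; the notation below is an assumption chosen to make the statement concrete, not a formula copied from the paper.

```latex
% With the server rate perturbed by the stationary process (X(t)), scaled by
% \eps, the paper computes the terms B_1 and B_2 of the expansion
\mathbb{E}[B_\eps] \;=\; B_0 \;+\; \eps\, B_1 \;+\; \eps^2 B_2 \;+\; o(\eps^2).
% RSR approximation: replace the perturbed rate by its mean. The expansions
% agree at order \eps, while the order-\eps^2 term involves the correlation
% structure of (X(t)).
```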
arxiv
@article{antunes2006perturbation, title={Perturbation Analysis of a Variable M/M/1 Queue: A Probabilistic Approach}, author={Nelson Antunes, Christine Fricker, Fabrice Guillemin (FT R&D), Philippe Robert}, journal={Advances in Applied Probability 38, 1 (2006) 263-283}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601015}, primaryClass={cs.NI} }
antunes2006perturbation
arxiv-673695
cs/0601016
Integration of streaming services and TCP data transmission in the Internet
<|reference_start|>Integration of streaming services and TCP data transmission in the Internet: We study in this paper the integration of elastic and streaming traffic on the same link in an IP network. We are specifically interested in the computation of the mean bit rate obtained by a data transfer. For this purpose, we consider that the bit rate offered by streaming traffic is low, of the order of magnitude of a small parameter \eps \ll 1, and related to an auxiliary stationary Markovian process (X(t)). Under the assumption that data transfers are exponentially distributed, arrive according to a Poisson process, and share the available bandwidth according to the ideal processor sharing discipline, we derive the mean bit rate of a data transfer as a power series expansion in \eps. Since the system can be described by means of an M/M/1 queue with a time-varying server rate, which depends upon the parameter \eps and the process (X(t)), the key issue is to compute an expansion of the area swept under the occupation process of this queue in a busy period. We obtain closed-form expressions for the power series expansion in \eps of the mean bit rate, which allow us to verify the validity of the so-called reduced service rate approximation at first order. The second-order term yields more insight into the negative impact of the variability of streaming flows.<|reference_end|>
arxiv
@article{antunes2006integration, title={Integration of streaming services and TCP data transmission in the Internet}, author={Nelson Antunes, Christine Fricker, Fabrice Guillemin (FT R&D), Philippe Robert}, journal={Performance Evaluation 62, 1-4 (2006) 263-277}, year={2006}, doi={10.1016/j.peva.2005.07.006}, archivePrefix={arXiv}, eprint={cs/0601016}, primaryClass={cs.NI} }
antunes2006integration
arxiv-673696
cs/0601017
Weighted Norms of Ambiguity Functions and Wigner Distributions
<|reference_start|>Weighted Norms of Ambiguity Functions and Wigner Distributions: In this article new bounds on weighted p-norms of ambiguity functions and Wigner functions are derived. Such norms occur frequently in several areas of physics and engineering. In pulse optimization for Weyl-Heisenberg signaling over wide-sense stationary uncorrelated scattering channels, for example, a key step is to find the optimal waveforms for given scattering statistics, a problem also well known in radar and sonar waveform optimization. The same situation arises in quantum information processing and optical communication when optimizing pure quantum states for communication over bosonic quantum channels, i.e., finding the optimal channel input states that maximize the pure-state channel fidelity. Due to the non-convex nature of this problem, the optimum and the maximizers themselves are in general difficult to find, both numerically and analytically. Upper bounds on the achievable performance are therefore important, and this contribution provides them. Based on a result due to E. Lieb, the main theorem states a new upper bound that is independent of the waveforms and becomes tight only for Gaussian weights and waveforms. A discussion of this particularly important case, which tightens recent results on Gaussian quantum fidelity and coherent states, is given. Another bound is presented for the case where scattering is determined only by some arbitrary region in phase space.<|reference_end|>
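Since the article builds on Lieb's inequality, it may help to recall the ambiguity function and the unweighted bound; the convention below is one of several in use, so treat it as an assumption.

```latex
% One common convention for the (auto-)ambiguity function of a waveform g:
A_g(\tau,\nu) \;=\; \int g(t)\, \overline{g(t-\tau)}\, e^{-2\pi i \nu t}\, dt .
% Lieb's bound: for p > 2 and \|g\|_2 = 1,
\iint \lvert A_g(\tau,\nu) \rvert^{p}\, d\tau\, d\nu \;\le\; \frac{2}{p},
% with equality exactly for Gaussian waveforms; the article derives
% waveform-independent bounds for weighted versions of this integral.
```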
arxiv
@article{jung2006weighted, title={Weighted Norms of Ambiguity Functions and Wigner Distributions}, author={Peter Jung}, journal={arXiv preprint arXiv:cs/0601017}, year={2006}, doi={10.1109/ISIT.2006.262122}, archivePrefix={arXiv}, eprint={cs/0601017}, primaryClass={cs.IT math.IT quant-ph} }
jung2006weighted
arxiv-673697
cs/0601018
A comparison between two logical formalisms for rewriting
<|reference_start|>A comparison between two logical formalisms for rewriting: Meseguer's rewriting logic and the rewriting logic CRWL are two well-known approaches to rewriting as logical deduction that, despite some clear similarities, were designed with different objectives. Here we study the relationships between them, both at a syntactic and at a semantic level. Even though it is not possible to establish an entailment system map between them, both can be naturally simulated in each other. Semantically, there is no embedding between the corresponding institutions. Along the way, the notions of entailment and satisfaction in Meseguer's rewriting logic are generalized. We also use the syntactic results to prove reflective properties of CRWL.<|reference_end|>
arxiv
@article{palomino2006a, title={A comparison between two logical formalisms for rewriting}, author={Miguel Palomino}, journal={arXiv preprint arXiv:cs/0601018}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601018}, primaryClass={cs.LO} }
palomino2006a
arxiv-673698
cs/0601019
Canonical Abstract Syntax Trees
<|reference_start|>Canonical Abstract Syntax Trees: This paper presents Gom, a language for describing abstract syntax trees and generating a Java implementation for those trees. Gom includes features allowing the user to specify and modify the interface of the data structure. These features provide in particular the capability to maintain the internal representation of data in canonical form with respect to a rewrite system. This explicitly guarantees that the client program only manipulates normal forms for this rewrite system, a feature which is only implicitly used in many implementations.<|reference_end|>
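Gom emits Java, but the normalizing-constructor idea it describes can be sketched in a few lines of Python; the rewrite system below (unit elimination and re-association for a toy Plus) and all names are hypothetical, not Gom's generated code.

```python
# Minimal analogue of a Gom-style normalizing constructor: the public
# constructor rewrites to canonical form, so clients only ever hold normal
# forms of the (hypothetical) system  Plus(0,x) -> x,  Plus(x,0) -> x,
# Plus(Plus(x,y),z) -> Plus(x,Plus(y,z)).
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Plus:
    left: "Expr"
    right: "Expr"

Expr = Lit | Plus

def plus(left: Expr, right: Expr) -> Expr:
    """Smart constructor: returns the normal form of Plus(left, right)."""
    if isinstance(left, Lit) and left.value == 0:    # Plus(0, x) -> x
        return right
    if isinstance(right, Lit) and right.value == 0:  # Plus(x, 0) -> x
        return left
    if isinstance(left, Plus):                       # re-associate to the right
        return plus(left.left, plus(left.right, right))
    return Plus(left, right)

# Client code builds terms only through plus(), so every term it manipulates
# is guaranteed to be in canonical form:
e = plus(plus(Lit(0), Lit(1)), plus(Lit(2), Lit(0)))
assert e == Plus(Lit(1), Lit(2))
```

The design point mirrors the abstract: the raw constructor is never exposed, so the canonical-form invariant cannot be violated by client code.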
arxiv
@article{reilles2006canonical, title={Canonical Abstract Syntax Trees}, author={Antoine Reilles (INRIA Lorraine - LORIA)}, journal={In Workshop on Rewriting Techniques and Applications (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601019}, primaryClass={cs.PL} }
reilles2006canonical
arxiv-673699
cs/0601020
The Evolution of Cyberinsurance
<|reference_start|>The Evolution of Cyberinsurance: Cyberinsurance is a powerful tool to align market incentives toward improving Internet security. We trace the evolution of cyberinsurance from traditional insurance policies to early cyber-risk insurance policies to current comprehensive cyberinsurance products. We find that increasing Internet security risk in combination with the need for compliance with recent corporate legislation has contributed significantly to the demand for cyberinsurance. Cyberinsurance policies have become more comprehensive as insurers better understand the risk landscape and specific business needs. More specifically, cyberinsurers are addressing what used to be considered insurmountable problems (e.g., adverse selection/asymmetric information, moral hazard, etc.) that could lead to a failure of this market solution. Although some implementation issues remain, we suggest the future development of cyberinsurance will resolve these issues as evidenced by insurance solutions in other risk domains.<|reference_end|>
arxiv
@article{majuca2006the, title={The Evolution of Cyberinsurance}, author={Ruperto P. Majuca, William Yurcik, Jay P. Kesan}, journal={arXiv preprint arXiv:cs/0601020}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601020}, primaryClass={cs.CR cs.CY} }
majuca2006the
arxiv-673700
cs/0601021
Lighting Control using Pressure-Sensitive Touchpads
<|reference_start|>Lighting Control using Pressure-Sensitive Touchpads: We introduce a novel approach to control physical lighting parameters by means of a pressure-sensitive touchpad. The two-dimensional area of the touchpad is subdivided into 5 virtual sliders, each controlling the intensity of a color (red, green, blue, yellow, and white). The physical interaction methodology is modeled directly after ubiquitous mechanical sliders and dimmers which tend to be used for intensity/volume control. Our abstraction to a pressure-sensitive touchpad provides advantages and introduces additional benefits over such existing devices.<|reference_end|>
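The mapping from touch events to the five virtual sliders can be made concrete; in the sketch below the event fields, the normalized-coordinate convention, and the send_to_dimmer stub are all assumptions about an interface the paper does not specify.

```python
# Sketch of the interaction model: the touchpad's x-axis is split into 5
# virtual sliders (one per color channel); the y-coordinate of a touch sets
# that channel's intensity.
CHANNELS = ["red", "green", "blue", "yellow", "white"]
intensity = {c: 0.0 for c in CHANNELS}           # 0.0 = off, 1.0 = full

def on_touch(x: float, y: float, pressure: float, threshold: float = 0.1):
    """Handle one touch sample with normalized coordinates in [0, 1]."""
    if pressure < threshold:                     # ignore light accidental touches
        return
    slider = min(int(x * len(CHANNELS)), len(CHANNELS) - 1)
    intensity[CHANNELS[slider]] = 1.0 - y        # top of pad = full intensity
    send_to_dimmer(CHANNELS[slider], intensity[CHANNELS[slider]])

def send_to_dimmer(channel: str, level: float):
    # Placeholder for the hardware interface (e.g., a DMX or serial write).
    print(f"{channel}: {level:.2f}")
```

Using pressure as a gate preserves the feel of a mechanical slider: resting a finger on the pad does nothing until the user presses with intent.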
arxiv
@article{haubold2006lighting, title={Lighting Control using Pressure-Sensitive Touchpads}, author={Alexander Haubold}, journal={arXiv preprint arXiv:cs/0601021}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601021}, primaryClass={cs.HC} }
haubold2006lighting