corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-673701 | cs/0601022 | On the Fading Number of Multiple-Input Single-Output Fading Channels with Memory | <|reference_start|>On the Fading Number of Multiple-Input Single-Output Fading Channels with Memory: We derive new upper and lower bounds on the fading number of multiple-input single-output (MISO) fading channels of general (not necessarily Gaussian) regular law with spatial and temporal memory. The fading number is the second term, after the double-logarithmic term, of the high signal-to-noise ratio (SNR) expansion of channel capacity. In case of an isotropically distributed fading vector it is proven that the upper and lower bounds coincide, i.e., the general MISO fading number with memory is known precisely. The upper and lower bounds show that a type of beam-forming is asymptotically optimal.<|reference_end|> | arxiv | @article{moser2006on,
title={On the Fading Number of Multiple-Input Single-Output Fading Channels
with Memory},
author={Stefan M. Moser},
journal={arXiv preprint arXiv:cs/0601022},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601022},
primaryClass={cs.IT math.IT}
} | moser2006on |
arxiv-673702 | cs/0601023 | Efficient Convergent Maximum Likelihood Decoding on Tail-Biting Trellises | <|reference_start|>Efficient Convergent Maximum Likelihood Decoding on Tail-Biting Trellises: An algorithm for exact maximum likelihood (ML) decoding on tail-biting trellises is presented, which exhibits very good average case behavior. An approximate variant is proposed, whose simulated performance is observed to be virtually indistinguishable from the exact one at all values of signal-to-noise ratio, and which effectively performs computations equivalent to at most two rounds on the tail-biting trellis. The approximate algorithm is analyzed, and the conditions under which its output is different from the ML output are deduced. The results of simulations on an AWGN channel for the exact and approximate algorithms on the 16-state tail-biting trellis for the (24,12) Extended Golay Code, and tail-biting trellises for two rate 1/2 convolutional codes with memories of 4 and 6 respectively, are reported. An advantage of our algorithms is that they do not suffer from the effects of limit cycles or the presence of pseudocodewords.<|reference_end|> | arxiv | @article{shankar2006efficient,
title={Efficient Convergent Maximum Likelihood Decoding on Tail-Biting
Trellises},
author={Priti Shankar, P.N.A. Kumar, K. Sasidharan, B.S. Rajan, A.S. Madhu},
journal={arXiv preprint arXiv:cs/0601023},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601023},
primaryClass={cs.IT math.IT}
} | shankar2006efficient |
arxiv-673703 | cs/0601024 | Further Results on the Distinctness of Decimations of l-sequences | <|reference_start|>Further Results on the Distinctness of Decimations of l-sequences: Let $\underline{a}$ be an \textit{l}-sequence generated by a feedback-with-carry shift register with connection integer $q=p^{e}$, where $p$ is an odd prime and $e\geq 1$. Goresky and Klapper conjectured that when $p^{e}\notin \{5,9,11,13\}$, all decimations of $\underline{a}$ are cyclically distinct. When $e=1$ and $p>13$, they showed that the set of distinct decimations is large and, in some cases, all decimations are distinct. In this article, we further show that when $e\geq 2$ and $p^{e}\neq 9$, all decimations of $\underline{a}$ are also cyclically distinct.<|reference_end|> | arxiv | @article{xu2006further,
title={Further Results on the Distinctness of Decimations of l-sequences},
author={Hong Xu and Wen-Feng Qi},
journal={arXiv preprint arXiv:cs/0601024},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601024},
primaryClass={cs.CR}
} | xu2006further |
arxiv-673704 | cs/0601025 | Prop-Based Haptic Interaction with Co-location and Immersion: an Automotive Application | <|reference_start|>Prop-Based Haptic Interaction with Co-location and Immersion: an Automotive Application: Most research on 3D user interfaces aims at providing only a single sensory modality. One challenge is to integrate several sensory modalities into a seamless system while preserving each modality's immersion and performance factors. This paper concerns manipulation tasks and proposes a visuo-haptic system integrating immersive visualization, tactile force and tactile feedback with co-location. An industrial application is presented.<|reference_end|> | arxiv | @article{ortega2006prop-based,
title={Prop-Based Haptic Interaction with Co-location and Immersion: an
Automotive Application},
author={Michael Ortega (INRIA Rhône-Alpes), Sabine Coquillart (INRIA
Rhône-Alpes)},
journal={In HAVE 2005 - IEEE International Workshop on Haptic Audio
Visual Environments and their Applications},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601025},
primaryClass={cs.HC}
} | ortega2006prop-based |
arxiv-673705 | cs/0601026 | Algebraic Structures and Algorithms for Matching and Matroid Problems (Preliminary Version) | <|reference_start|>Algebraic Structures and Algorithms for Matching and Matroid Problems (Preliminary Version): Basic path-matchings, introduced by Cunningham and Geelen (FOCS 1996), are a common generalization of matroid intersection and non-bipartite matching. The main results of this paper are a new algebraic characterization of basic path-matching problems and an algorithm for constructing basic path-matchings in O(n^w) time, where n is the number of vertices and w is the exponent for matrix multiplication. Our algorithms are randomized, and our approach assumes that the given matroids are linear and can be represented over the same field. Our main results have interesting consequences for several special cases of path-matching problems. For matroid intersection, we obtain an algorithm with running time O(nr^(w-1))=O(nr^1.38), where the matroids have n elements and rank r. This improves the long-standing bound of O(nr^1.62) due to Gabow and Xu (FOCS 1989). Also, we obtain a simple, purely algebraic algorithm for non-bipartite matching with running time O(n^w). This resolves the central open problem of Mucha and Sankowski (FOCS 2004).<|reference_end|> | arxiv | @article{harvey2006algebraic,
title={Algebraic Structures and Algorithms for Matching and Matroid Problems
(Preliminary Version)},
author={Nicholas J. A. Harvey},
journal={arXiv preprint arXiv:cs/0601026},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601026},
primaryClass={cs.DS cs.DM}
} | harvey2006algebraic |
arxiv-673706 | cs/0601027 | Quasiperiodic Sturmian words and morphisms | <|reference_start|>Quasiperiodic Sturmian words and morphisms: We characterize all quasiperiodic Sturmian words: a Sturmian word is not quasiperiodic if and only if it is a Lyndon word. Moreover, we study links between Sturmian morphisms and quasiperiodicity.<|reference_end|> | arxiv | @article{levé2006quasiperiodic,
title={Quasiperiodic Sturmian words and morphisms},
author={Florence Levé (LaRIA), Gwénaël Richomme (LaRIA)},
journal={arXiv preprint arXiv:cs/0601027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601027},
primaryClass={cs.DM}
} | levé2006quasiperiodic |
arxiv-673707 | cs/0601028 | Superimposed Coded and Uncoded Transmissions of a Gaussian Source over the Gaussian Channel | <|reference_start|>Superimposed Coded and Uncoded Transmissions of a Gaussian Source over the Gaussian Channel: We propose to send a Gaussian source over an average-power limited additive white Gaussian noise channel by transmitting a linear combination of the source sequence and the result of its quantization using a high dimensional Gaussian vector quantizer. We show that, irrespective of the rate of the vector quantizer (assumed to be fixed and smaller than the channel's capacity), this transmission scheme is asymptotically optimal (as the quantizer's dimension tends to infinity) under the mean squared-error fidelity criterion. This generalizes the classical result of Goblick about the optimality of scaled uncoded transmission, which corresponds to choosing the rate of the vector quantizer as zero, and the classical source-channel separation approach, which corresponds to choosing the rate of the vector quantizer arbitrarily close to the capacity of the channel.<|reference_end|> | arxiv | @article{bross2006superimposed,
title={Superimposed Coded and Uncoded Transmissions of a Gaussian Source over
the Gaussian Channel},
author={Shraga Bross, Amos Lapidoth, Stephan Tinguely},
journal={arXiv preprint arXiv:cs/0601028},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601028},
primaryClass={cs.IT math.IT}
} | bross2006superimposed |
arxiv-673708 | cs/0601029 | Sending a Bi-Variate Gaussian Source over a Gaussian MAC | <|reference_start|>Sending a Bi-Variate Gaussian Source over a Gaussian MAC: We consider a problem where a memoryless bi-variate Gaussian source is to be transmitted over an additive white Gaussian multiple-access channel with two transmitting terminals and one receiving terminal. The first transmitter only sees the first source component and the second transmitter only sees the second source component. We are interested in the pair of mean squared-error distortions at which the receiving terminal can reproduce each of the source components. It is demonstrated that in the symmetric case, below a certain signal-to-noise ratio (SNR) threshold, which is determined by the source correlation, uncoded communication is optimal. For SNRs above this threshold we present outer and inner bounds on the achievable distortions.<|reference_end|> | arxiv | @article{lapidoth2006sending,
title={Sending a Bi-Variate Gaussian Source over a Gaussian MAC},
author={Amos Lapidoth, Stephan Tinguely},
journal={arXiv preprint arXiv:cs/0601029},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601029},
primaryClass={cs.IT math.IT}
} | lapidoth2006sending |
arxiv-673709 | cs/0601030 | Journal Status | <|reference_start|>Journal Status: The status of an actor in a social context is commonly defined in terms of two factors: the total number of endorsements the actor receives from other actors and the prestige of the endorsing actors. These two factors indicate the distinction between popularity and expert appreciation of the actor, respectively. We refer to the former as popularity and to the latter as prestige. These notions of popularity and prestige also apply to the domain of scholarly assessment. The ISI Impact Factor (ISI IF) is defined as the mean number of citations a journal receives over a 2 year period. By merely counting the amount of citations and disregarding the prestige of the citing journals, the ISI IF is a metric of popularity, not of prestige. We demonstrate how a weighted version of the popular PageRank algorithm can be used to obtain a metric that reflects prestige. We contrast the rankings of journals according to their ISI IF and their weighted PageRank, and we provide an analysis that reveals both significant overlaps and differences. Furthermore, we introduce the Y-factor which is a simple combination of both the ISI IF and the weighted PageRank, and find that the resulting journal rankings correspond well to a general understanding of journal status.<|reference_end|> | arxiv | @article{bollen2006journal,
title={Journal Status},
author={Johan Bollen, Marko A. Rodriguez and Herbert Van de Sompel},
journal={Scientometrics, volume 69, number 3, pp. 669-687, 2006},
year={2006},
doi={10.1007/s11192-006-0176-z},
number={LA-UR-05-6466},
archivePrefix={arXiv},
eprint={cs/0601030},
primaryClass={cs.DL cs.CY}
} | bollen2006journal |
arxiv-673710 | cs/0601031 | Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal Planning | <|reference_start|>Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal Planning: An original approach, termed Divide-and-Evolve, is proposed to hybridize Evolutionary Algorithms (EAs) with Operational Research (OR) methods in the domain of Temporal Planning Problems (TPPs). Whereas standard Memetic Algorithms use local search methods to improve the evolutionary solutions, and thus fail when the local method stops working on the complete problem, the Divide-and-Evolve approach splits the problem at hand into several, hopefully easier, sub-problems, and can thus globally solve problems that are intractable when directly fed into deterministic OR algorithms. But the most prominent advantage of the Divide-and-Evolve approach is that it immediately opens up an avenue for multi-objective optimization, even though the OR method that is used is single-objective. A proof of concept on the standard (single-objective) Zeno transportation benchmark is given, and a small original multi-objective benchmark is proposed in the same Zeno framework to assess the multi-objective capabilities of the proposed methodology, a breakthrough in Temporal Planning.<|reference_end|> | arxiv | @article{schoenauer2006divide-and-evolve:,
title={Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal
Planning},
author={Marc Schoenauer (INRIA Futurs), Pierre Savéant (TRT), Vincent Vidal
(CRIL)},
journal={In EvoCOP 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601031},
primaryClass={cs.AI}
} | schoenauer2006divide-and-evolve: |
arxiv-673711 | cs/0601032 | Efficient Open World Reasoning for Planning | <|reference_start|>Efficient Open World Reasoning for Planning: We consider the problem of reasoning and planning with incomplete knowledge and deterministic actions. We introduce a knowledge representation scheme called PSIPLAN that can effectively represent incompleteness of an agent's knowledge while allowing for sound, complete and tractable entailment in domains where the set of all objects is either unknown or infinite. We present a procedure for state update resulting from taking an action in PSIPLAN that is correct, complete and has only polynomial complexity. State update is performed without considering the set of all possible worlds corresponding to the knowledge state. As a result, planning with PSIPLAN is done without direct manipulation of possible worlds. PSIPLAN representation underlies the PSIPOP planning algorithm that handles quantified goals with or without exceptions that no other domain independent planner has been shown to achieve. PSIPLAN has been implemented in Common Lisp and used in an application on planning in a collaborative interface.<|reference_end|> | arxiv | @article{babaian2006efficient,
title={Efficient Open World Reasoning for Planning},
author={Tamara Babaian and James G. Schmolze},
journal={Logical Methods in Computer Science, Volume 2, Issue 3 (September
26, 2006) lmcs:2247},
year={2006},
doi={10.2168/LMCS-2(3:5)2006},
archivePrefix={arXiv},
eprint={cs/0601032},
primaryClass={cs.AI cs.LO}
} | babaian2006efficient |
arxiv-673712 | cs/0601033 | The density of iterated crossing points and a gap result for triangulations of finite point sets | <|reference_start|>The density of iterated crossing points and a gap result for triangulations of finite point sets: Consider a plane graph G, drawn with straight lines. For every pair a,b of vertices of G, we compare the shortest-path distance between a and b in G (with Euclidean edge lengths) to their actual distance in the plane. The worst-case ratio of these two values, for all pairs of points, is called the dilation of G. All finite plane graphs of dilation 1 have been classified. They are closely related to the following iterative procedure. For a given point set P in R^2, we connect every pair of points in P by a line segment and then add to P all those points where two such line segments cross. Repeating this process infinitely often, yields a limit point set P*. The main result of this paper is the following gap theorem: For any finite point set P in the plane for which the above P* is infinite, there exists a threshold t > 1 such that P is not contained in the vertex set of any finite plane graph of dilation at most t. We construct a concrete point set Q such that any planar graph that contains this set amongst its vertices must have a dilation larger than 1.0000047.<|reference_end|> | arxiv | @article{klein2006the,
title={The density of iterated crossing points and a gap result for
triangulations of finite point sets},
author={Rolf Klein and Martin Kutz},
journal={arXiv preprint arXiv:cs/0601033},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601033},
primaryClass={cs.CG}
} | klein2006the |
arxiv-673713 | cs/0601034 | Using First-Order Logic to Reason about Policies | <|reference_start|>Using First-Order Logic to Reason about Policies: A policy describes the conditions under which an action is permitted or forbidden. We show that a fragment of (multi-sorted) first-order logic can be used to represent and reason about policies. Because we use first-order logic, policies have a clear syntax and semantics. We show that further restricting the fragment results in a language that is still quite expressive yet is also tractable. More precisely, questions about entailment, such as `May Alice access the file?', can be answered in time that is a low-order polynomial (indeed, almost linear in some cases), as can questions about the consistency of policy sets.<|reference_end|> | arxiv | @article{halpern2006using,
title={Using First-Order Logic to Reason about Policies},
author={Joseph Y. Halpern and Vicky Weissman},
journal={arXiv preprint arXiv:cs/0601034},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601034},
primaryClass={cs.LO cs.CR}
} | halpern2006using |
arxiv-673714 | cs/0601035 | Deductive Object Programming | <|reference_start|>Deductive Object Programming: We propose some slight additions to O-O languages to implement the necessary features for using Deductive Object Programming (DOP). This way of programming, based upon the manipulation of the Production Tree of the Objects of Interest, results in making these Objects Persistent and in sensibly lowering the code complexity.<|reference_end|> | arxiv | @article{colonna2006deductive,
title={Deductive Object Programming},
author={Francois Colonna (LCT)},
journal={arXiv preprint arXiv:cs/0601035},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601035},
primaryClass={cs.SE}
} | colonna2006deductive |
arxiv-673715 | cs/0601036 | On the complexity of computing the capacity of codes that avoid forbidden difference patterns | <|reference_start|>On the complexity of computing the capacity of codes that avoid forbidden difference patterns: We consider questions related to the computation of the capacity of codes that avoid forbidden difference patterns. The maximal number of $n$-bit sequences whose pairwise differences do not contain some given forbidden difference patterns increases exponentially with $n$. The exponent is the capacity of the forbidden patterns, which is given by the logarithm of the joint spectral radius of a set of matrices constructed from the forbidden difference patterns. We provide a new family of bounds that allows for the approximation, in exponential time, of the capacity with arbitrarily high degree of accuracy. We also provide a polynomial time algorithm for the problem of determining if the capacity of a set is positive, but we prove that the same problem becomes NP-hard when the sets of forbidden patterns are defined over an extended set of symbols. Finally, we prove the existence of extremal norms for the sets of matrices arising in the capacity computation. This result makes it possible to apply a specific (even though non-polynomial) approximation algorithm. We illustrate this fact by computing exactly the capacity of codes that were only known approximately.<|reference_end|> | arxiv | @article{blondel2006on,
title={On the complexity of computing the capacity of codes that avoid
forbidden difference patterns},
author={Vincent D. Blondel, Raphael Jungers and Vladimir Protasov},
journal={arXiv preprint arXiv:cs/0601036},
year={2006},
doi={10.1109/TIT.2006.883615},
archivePrefix={arXiv},
eprint={cs/0601036},
primaryClass={cs.IT math.IT}
} | blondel2006on |
arxiv-673716 | cs/0601037 | Constraint-based verification of abstract models of multithreaded programs | <|reference_start|>Constraint-based verification of abstract models of multithreaded programs: We present a technique for the automated verification of abstract models of multithreaded programs providing fresh name generation, name mobility, and unbounded control. As high level specification language we adopt here an extension of communication finite-state machines with local variables ranging over an infinite name domain, called TDL programs. Communication machines have been proved very effective for representing communication protocols as well as for representing abstractions of multithreaded software. The verification method that we propose is based on the encoding of TDL programs into a low level language based on multiset rewriting and constraints that can be viewed as an extension of Petri Nets. By means of this encoding, the symbolic verification procedure developed for the low level language in our previous work can now be applied to TDL programs. Furthermore, the encoding allows us to isolate a decidable class of verification problems for TDL programs that still provide fresh name generation, name mobility, and unbounded control. Our syntactic restrictions are in fact defined on the internal structure of threads: In order to obtain a complete and terminating method, threads are only allowed to have at most one local variable (ranging over an infinite domain of names).<|reference_end|> | arxiv | @article{delzanno2006constraint-based,
title={Constraint-based verification of abstract models of multithreaded
programs},
author={Giorgio Delzanno},
journal={arXiv preprint arXiv:cs/0601037},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601037},
primaryClass={cs.CL cs.PL}
} | delzanno2006constraint-based |
arxiv-673717 | cs/0601038 | Constraint-based automatic verification of abstract models of multithreaded programs | <|reference_start|>Constraint-based automatic verification of abstract models of multithreaded programs: We present a technique for the automated verification of abstract models of multithreaded programs providing fresh name generation, name mobility, and unbounded control. As high level specification language we adopt here an extension of communication finite-state machines with local variables ranging over an infinite name domain, called TDL programs. Communication machines have been proved very effective for representing communication protocols as well as for representing abstractions of multithreaded software. The verification method that we propose is based on the encoding of TDL programs into a low level language based on multiset rewriting and constraints that can be viewed as an extension of Petri Nets. By means of this encoding, the symbolic verification procedure developed for the low level language in our previous work can now be applied to TDL programs. Furthermore, the encoding allows us to isolate a decidable class of verification problems for TDL programs that still provide fresh name generation, name mobility, and unbounded control. Our syntactic restrictions are in fact defined on the internal structure of threads: In order to obtain a complete and terminating method, threads are only allowed to have at most one local variable (ranging over an infinite domain of names).<|reference_end|> | arxiv | @article{delzanno2006constraint-based,
title={Constraint-based automatic verification of abstract models of
multithreaded programs},
author={Giorgio Delzanno},
journal={arXiv preprint arXiv:cs/0601038},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601038},
primaryClass={cs.LO cs.PL}
} | delzanno2006constraint-based |
arxiv-673718 | cs/0601039 | Removing Redundant Arguments Automatically | <|reference_start|>Removing Redundant Arguments Automatically: The application of automatic transformation processes during the formal development and optimization of programs can introduce encumbrances in the generated code that programmers usually (or presumably) do not write. An example is the introduction of redundant arguments in the functions defined in the program. Redundancy of a parameter means that replacing it by any expression does not change the result. In this work, we provide methods for the analysis and elimination of redundant arguments in term rewriting systems as a model for the programs that can be written in more sophisticated languages. On the basis of the uselessness of redundant arguments, we also propose an erasure procedure which may avoid wasteful computations while still preserving the semantics (under ascertained conditions). A prototype implementation of these methods has been undertaken, which demonstrates the practicality of our approach.<|reference_end|> | arxiv | @article{alpuente2006removing,
title={Removing Redundant Arguments Automatically},
author={Maria Alpuente, Santiago Escobar, Salvador Lucas},
journal={arXiv preprint arXiv:cs/0601039},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601039},
primaryClass={cs.PL}
} | alpuente2006removing |
arxiv-673719 | cs/0601040 | New Technologies for Sustainable Urban Transport in Europe | <|reference_start|>New Technologies for Sustainable Urban Transport in Europe: In the past few years, the European Commission has financed several projects to examine how new technologies could improve the sustainability of European cities. These technologies concern new public transportation modes such as guided buses to form high capacity networks similar to light rail but at a lower cost and better flexibility, PRT (Personal Rapid Transit) and cybercars (small urban vehicles with fully automatic driving capabilities to be used in carsharing mode, mostly as a complement to mass transport). They also concern private vehicles with technologies which could improve the efficiency of the vehicles as well as their safety (Intelligent Speed Adaptation, Adaptive Cruise Control, Stop&Go, Lane Keeping,...) and how these new vehicles can complement mass transport in the form of car-sharing services.<|reference_end|> | arxiv | @article{parent2006new,
title={New Technologies for Sustainable Urban Transport in Europe},
author={Michel Parent (INRIA Rocquencourt)},
journal={Transportation Research Board 85th Annual Meeting (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601040},
primaryClass={cs.RO}
} | parent2006new |
arxiv-673720 | cs/0601041 | Oblivious channels | <|reference_start|>Oblivious channels: Let C = {x_1,...,x_N} \subset {0,1}^n be an [n,N] binary error correcting code (not necessarily linear). Let e \in {0,1}^n be an error vector. A codeword x in C is said to be "disturbed" by the error e if the closest codeword to x + e is no longer x. Let A_e be the subset of codewords in C that are disturbed by e. In this work we study the size of A_e in random codes C (i.e. codes in which each codeword x_i is chosen uniformly and independently at random from {0,1}^n). Using recent results of Vu [Random Structures and Algorithms 20(3)] on the concentration of non-Lipschitz functions, we show that |A_e| is strongly concentrated for a wide range of values of N and ||e||. We apply this result in the study of communication channels we refer to as "oblivious". Roughly speaking, a channel W(y|x) is said to be oblivious if the error distribution imposed by the channel is independent of the transmitted codeword x. For example, the well studied Binary Symmetric Channel is an oblivious channel. In this work, we define oblivious and partially oblivious channels and present lower bounds on their capacity. The oblivious channels we define have connections to Arbitrarily Varying Channels with state constraints.<|reference_end|> | arxiv | @article{langberg2006oblivious,
title={Oblivious channels},
author={Michael Langberg},
journal={arXiv preprint arXiv:cs/0601041},
year={2006},
doi={10.1109/ISIT.2006.261560},
archivePrefix={arXiv},
eprint={cs/0601041},
primaryClass={cs.IT math.IT}
} | langberg2006oblivious |
arxiv-673721 | cs/0601042 | LPAR-05 Workshop: Empirically Successful Automated Reasoning in Higher-Order Logic (ESHOL) | <|reference_start|>LPAR-05 Workshop: Empirically Successful Automated Reasoning in Higher-Order Logic (ESHOL): This workshop brings together practitioners and researchers who are involved in the everyday aspects of logical systems based on higher-order logic. We hope to create a friendly and highly interactive setting for discussions around the following four topics: implementation and development of proof assistants based on any notion of impredicativity; automated theorem proving tools for higher-order logic reasoning systems; logical framework technology for the representation of proofs in higher-order logic; and formal digital libraries for storing, maintaining and querying databases of proofs. We envision attendees who are interested in fostering the development and visibility of reasoning systems for higher-order logics. We are particularly interested in a discussion on the development of a higher-order version of the TPTP and in comparisons of the practical strengths of automated higher-order reasoning systems. Additionally, the workshop includes system demonstrations. ESHOL is the successor of the ESCAR and ESFOR workshops held at CADE 2005 and IJCAR 2004.<|reference_end|> | arxiv | @article{benzmueller2006lpar-05,
title={LPAR-05 Workshop: Empirically Successful Automated Reasoning in
Higher-Order Logic (ESHOL)},
author={Christoph Benzmueller, John Harrison, and Carsten Schuermann (Eds.)},
journal={arXiv preprint arXiv:cs/0601042},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601042},
primaryClass={cs.AI cs.LO}
} | benzmueller2006lpar-05 |
arxiv-673722 | cs/0601043 | Combining Relational Algebra, SQL, Constraint Modelling, and Local Search | <|reference_start|>Combining Relational Algebra, SQL, Constraint Modelling, and Local Search: The goal of this paper is to provide a strong integration between constraint modelling and relational DBMSs. To this end we propose extensions of standard query languages such as relational algebra and SQL, by adding constraint modelling capabilities to them. In particular, we propose non-deterministic extensions of both languages, which are specially suited for combinatorial problems. Non-determinism is introduced by means of a guessing operator, which declares a set of relations to have an arbitrary extension. This new operator results in languages with higher expressive power, able to express all problems in the complexity class NP. Some syntactical restrictions which make data complexity polynomial are shown. The effectiveness of both extensions is demonstrated by means of several examples. The current implementation, written in Java using local search techniques, is described. To appear in Theory and Practice of Logic Programming (TPLP)<|reference_end|> | arxiv | @article{cadoli2006combining,
title={Combining Relational Algebra, SQL, Constraint Modelling, and Local
Search},
author={Marco Cadoli and Toni Mancini},
journal={arXiv preprint arXiv:cs/0601043},
year={2006},
doi={10.1017/S1471068406002857},
archivePrefix={arXiv},
eprint={cs/0601043},
primaryClass={cs.AI cs.LO}
} | cadoli2006combining |
arxiv-673723 | cs/0601044 | Genetic Programming, Validation Sets, and Parsimony Pressure | <|reference_start|>Genetic Programming, Validation Sets, and Parsimony Pressure: Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be viewed as a learning task, with the inference of models from a limited number of samples. This paper is an investigation of two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three-data-sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that while the accuracy on the test sets is preserved, with less variance compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced.<|reference_end|> | arxiv | @article{gagné2006genetic,
title={Genetic Programming, Validation Sets, and Parsimony Pressure},
author={Christian Gagné (INRIA Futurs, LVSN, IIS), Marc Schoenauer (INRIA
Futurs), Marc Parizeau (LVSN), Marco Tomassini (IIS)},
journal={In EuroGP 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601044},
primaryClass={cs.LG}
} | gagné2006genetic |
arxiv-673724 | cs/0601045 | PageRank without hyperlinks: Structural re-ranking using links induced by language models | <|reference_start|>PageRank without hyperlinks: Structural re-ranking using links induced by language models: Inspired by the PageRank and HITS (hubs and authorities) algorithms for Web search, we propose a structural re-ranking approach to ad hoc information retrieval: we reorder the documents in an initially retrieved set by exploiting asymmetric relationships between them. Specifically, we consider generation links, which indicate that the language model induced from one document assigns high probability to the text of another; in doing so, we take care to prevent bias against long documents. We study a number of re-ranking criteria based on measures of centrality in the graphs formed by generation links, and show that integrating centrality into standard language-model-based retrieval is quite effective at improving precision at top ranks.<|reference_end|> | arxiv | @article{kurland2006pagerank,
title={PageRank without hyperlinks: Structural re-ranking using links induced
by language models},
author={Oren Kurland and Lillian Lee},
journal={Proceedings of SIGIR 2005, pp. 306--313},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601045},
primaryClass={cs.IR cs.CL}
} | kurland2006pagerank |
arxiv-673725 | cs/0601046 | Better than the real thing? Iterative pseudo-query processing using cluster-based language models | <|reference_start|>Better than the real thing? Iterative pseudo-query processing using cluster-based language models: We present a novel approach to pseudo-feedback-based ad hoc retrieval that uses language models induced from both documents and clusters. First, we treat the pseudo-feedback documents produced in response to the original query as a set of pseudo-queries that themselves can serve as input to the retrieval process. Observing that the documents returned in response to the pseudo-queries can then act as pseudo-queries for subsequent rounds, we arrive at a formulation of pseudo-query-based retrieval as an iterative process. Experiments show that several concrete instantiations of this idea, when applied in conjunction with techniques designed to heighten precision, yield performance results rivaling those of a number of previously-proposed algorithms, including the standard language-modeling approach. The use of cluster-based language models is a key contributing factor to our algorithms' success.<|reference_end|> | arxiv | @article{kurland2006better,
title={Better than the real thing? Iterative pseudo-query processing using
cluster-based language models},
author={Oren Kurland, Lillian Lee, Carmel Domshlak},
journal={Proceedings of SIGIR 2005, pp. 19--26},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601046},
primaryClass={cs.IR cs.CL}
} | kurland2006better |
arxiv-673726 | cs/0601047 | Automatic Detection of Trends in Dynamical Text: An Evolutionary Approach | <|reference_start|>Automatic Detection of Trends in Dynamical Text: An Evolutionary Approach: This paper presents an evolutionary algorithm for modeling the arrival dates of document streams, a document stream being any time-stamped collection of documents, such as newscasts, e-mails, IRC conversations, scientific journal archives and weblog postings. This algorithm assigns frequencies (number of document arrivals per time unit) to time intervals so that it produces an optimal fit to the data. The optimization is a trade-off between accurately fitting the data and avoiding too many frequency changes; this way the analysis is able to find fits which ignore the noise. Classical dynamic programming algorithms are limited by memory and efficiency requirements, which can be a problem when dealing with long streams. This suggests exploring alternative search methods which allow for some degree of uncertainty to achieve tractability. Experiments have shown that the designed evolutionary algorithm is able to reach the same solution quality as those classical dynamic programming algorithms in a shorter time. We have also explored different probabilistic models to optimize the fitting of the date streams, and applied these algorithms to infer whether a new arrival increases or decreases {\em interest} in the topic the document stream is about.<|reference_end|> | arxiv | @article{araujo2006automatic,
title={Automatic Detection of Trends in Dynamical Text: An Evolutionary
Approach},
author={Lourdes Araujo and Juan J. Merelo},
journal={arXiv preprint arXiv:cs/0601047},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601047},
primaryClass={cs.IR cs.NE}
} | araujo2006automatic |
arxiv-673727 | cs/0601048 | Permutation Polynomial Interleavers: An Algebraic-Geometric Perspective | <|reference_start|>Permutation Polynomial Interleavers: An Algebraic-Geometric Perspective: An interleaver is a critical component for the channel coding performance of turbo codes. Algebraic constructions are important because they admit analytical designs and simple, practical hardware implementation. The spread factor of an interleaver is a common measure for turbo coding applications. Maximum-spread interleavers are interleavers whose spread factors achieve the upper bound. An infinite sequence of quadratic permutation polynomials over integer rings that generate maximum-spread interleavers is presented. New properties of permutation polynomial interleavers are investigated from an algebraic-geometric perspective resulting in a new non-linearity metric for interleavers. A new interleaver metric that is a function of both the non-linearity metric and the spread factor is proposed. It is numerically demonstrated that the spread factor has a diminishing importance with the block length. A table of good interleavers for a variety of interleaver lengths according to the new metric is listed. Extensive computer simulation results with impressive frame error rates confirm the efficacy of the new metric. Further, when tail-biting constituent codes are used, the resulting turbo codes are quasi-cyclic.<|reference_end|> | arxiv | @article{takeshita2006permutation,
title={Permutation Polynomial Interleavers: An Algebraic-Geometric Perspective},
author={Oscar Y. Takeshita},
journal={arXiv preprint arXiv:cs/0601048},
year={2006},
doi={10.1109/TIT.2007.896870},
archivePrefix={arXiv},
eprint={cs/0601048},
primaryClass={cs.IT cs.DM math.IT}
} | takeshita2006permutation |
arxiv-673728 | cs/0601049 | Undeniable Signature Schemes Using Braid Groups | <|reference_start|>Undeniable Signature Schemes Using Braid Groups: Artin's braid groups have been recently suggested as a new source for public-key cryptography. In this paper we propose the first undeniable signature schemes using the conjugacy problem and the decomposition problem in the braid groups which are believed to be hard problems.<|reference_end|> | arxiv | @article{thomas2006undeniable,
title={Undeniable Signature Schemes Using Braid Groups},
author={Tony Thomas, Arbind Kumar Lal},
journal={arXiv preprint arXiv:cs/0601049},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601049},
primaryClass={cs.CR}
} | thomas2006undeniable |
arxiv-673729 | cs/0601050 | Computing Fibonacci numbers on a Turing Machine | <|reference_start|>Computing Fibonacci numbers on a Turing Machine: A Turing machine that computes Fibonacci numbers is described.<|reference_end|> | arxiv | @article{vinokur2006computing,
title={Computing Fibonacci numbers on a Turing Machine},
author={Alex Vinokur},
journal={arXiv preprint arXiv:cs/0601050},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601050},
primaryClass={cs.DM}
} | vinokur2006computing |
arxiv-673730 | cs/0601051 | A Constructive Semantic Characterization of Aggregates in ASP | <|reference_start|>A Constructive Semantic Characterization of Aggregates in ASP: This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator $\Phi^{aggr}_P$ for aggregate programs, independently proposed by Pelov et al. This operator allows us to closely tie the computational complexity of the answer set checking and answer set existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|> | arxiv | @article{son2006a,
title={A Constructive Semantic Characterization of Aggregates in ASP},
author={Tran Cao Son and Enrico Pontelli},
journal={arXiv preprint arXiv:cs/0601051},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601051},
primaryClass={cs.AI cs.LO cs.PL cs.SC}
} | son2006a |
arxiv-673731 | cs/0601052 | Artificial and Biological Intelligence | <|reference_start|>Artificial and Biological Intelligence: This article considers evidence from physical and biological sciences to show machines are deficient compared to biological systems at incorporating intelligence. Machines fall short on two counts: firstly, unlike brains, machines do not self-organize in a recursive manner; secondly, machines are based on classical logic, whereas Nature's intelligence may depend on quantum mechanics.<|reference_end|> | arxiv | @article{kak2006artificial,
title={Artificial and Biological Intelligence},
author={Subhash Kak},
journal={ACM Ubiquity, vol. 6, number 42, 2005, pp. 1-20},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601052},
primaryClass={cs.AI}
} | kak2006artificial |
arxiv-673732 | cs/0601053 | Wavefront Propagation and Fuzzy Based Autonomous Navigation | <|reference_start|>Wavefront Propagation and Fuzzy Based Autonomous Navigation: Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path. Obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to autonomously navigate to its destination. This paper presents the approach and results in implementing an autonomous navigation system for an indoor mobile robot. The system developed is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and also to cater for dynamic environments.<|reference_end|> | arxiv | @article{al-jumaily2006wavefront,
title={Wavefront Propagation and Fuzzy Based Autonomous Navigation},
author={Adel Al-Jumaily & Cindy Leung},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601053},
primaryClass={cs.RO}
} | al-jumaily2006wavefront |
arxiv-673733 | cs/0601054 | Control of a Lightweight Flexible Robotic Arm Using Sliding Modes | <|reference_start|>Control of a Lightweight Flexible Robotic Arm Using Sliding Modes: This paper presents a robust control scheme for flexible link robotic manipulators, which is based on considering the flexible mechanical structure as a system with slow (rigid) and fast (flexible) modes that can be controlled separately. The rigid dynamics is controlled by means of a robust sliding-mode approach with well-established stability properties while an LQR optimal design is adopted for the flexible dynamics. Experimental results show that this composite approach achieves good closed loop tracking properties both for the rigid and the flexible dynamics.<|reference_end|> | arxiv | @article{etxebarria2006control,
title={Control of a Lightweight Flexible Robotic Arm Using Sliding Modes},
author={Victor Etxebarria, Arantza Sanz & Ibone Lizarraga},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601054},
primaryClass={cs.RO}
} | etxebarria2006control |
arxiv-673734 | cs/0601055 | A Hybrid Three Layer Architecture for Fire Agent Management in Rescue Simulation Environment | <|reference_start|>A Hybrid Three Layer Architecture for Fire Agent Management in Rescue Simulation Environment: This paper presents a new architecture called FAIS for implementing intelligent agents cooperating in a special Multi Agent environment, namely the RoboCup Rescue Simulation System. This is a layered architecture which is customized for solving the fire extinguishing problem. Structural decision making algorithms are combined with heuristic ones in this model, so it's a hybrid architecture.<|reference_end|> | arxiv | @article{geramifard2006a,
title={A Hybrid Three Layer Architecture for Fire Agent Management in Rescue
Simulation Environment},
author={Alborz Geramifard, Peyman Nayeri, Reza Zamani-Nasab & Jafar Habibi},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601055},
primaryClass={cs.RO}
} | geramifard2006a |
arxiv-673735 | cs/0601056 | Dynamic Balance Control of Multi-arm Free-Floating Space Robots | <|reference_start|>Dynamic Balance Control of Multi-arm Free-Floating Space Robots: This paper investigates the problem of the dynamic balance control of a multi-arm free-floating space robot while capturing an active object in close proximity. The position and orientation of the space base will be affected during the operation of the space manipulator because of the dynamic coupling between the manipulator and the space base. This dynamic coupling is a unique characteristic of space robot systems. Such a disturbance will produce a serious impact between the manipulator hand and the object. To ensure reliable and precise operation, we propose to develop a space robot system consisting of two arms, with one arm (mission arm) for accomplishing the capture mission, and the other one (balance arm) compensating for the disturbance of the base. We present the coordinated control concept for balancing the attitude of the base using the balance arm. The mission arm can move along the given trajectory to approach and capture the target without considering the disturbance from the coupling of the base. We establish a relationship between the motions of the two arms that realizes zero reaction on the base. The simulation studies verified the validity and efficiency of the proposed control method.<|reference_end|> | arxiv | @article{huang2006dynamic,
title={Dynamic Balance Control of Multi-arm Free-Floating Space Robots},
author={Panfeng Huang, Yangsheng Xu & Bin Liang},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601056},
primaryClass={cs.RO}
} | huang2006dynamic |
arxiv-673736 | cs/0601057 | Robust Motion Control for Mobile Manipulator Using Resolved Acceleration and Proportional-Integral Active Force Control | <|reference_start|>Robust Motion Control for Mobile Manipulator Using Resolved Acceleration and Proportional-Integral Active Force Control: A resolved acceleration control (RAC) and proportional-integral active force control (PIAFC) scheme is proposed as an approach for the robust motion control of a mobile manipulator (MM) comprising a differentially driven wheeled mobile platform with a two-link planar arm mounted on top of the platform. The study emphasizes the integrated kinematic and dynamic control strategy in which the RAC is used to manipulate the kinematic component while the PIAFC is implemented to compensate for the dynamic effects including the bounded known/unknown disturbances and uncertainties. The effectiveness and robustness of the proposed scheme are investigated through a rigorous simulation study and later complemented with experimental results obtained through a number of experiments performed on a fully developed working prototype in a laboratory environment. A number of disturbances in the form of vibratory and impact forces are deliberately introduced into the system to evaluate the system performances. The investigation clearly demonstrates the extreme robustness feature of the proposed control scheme compared to other systems considered in the study.<|reference_end|> | arxiv | @article{mailah2006robust,
title={Robust Motion Control for Mobile Manipulator Using Resolved Acceleration
and Proportional-Integral Active Force Control},
author={Musa Mailah, Endra Pitowarno & Hishamuddin Jamaluddin},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601057},
primaryClass={cs.RO}
} | mailah2006robust |
arxiv-673737 | cs/0601058 | CAGD - Computer Aided Gripper Design for a Flexible Gripping System | <|reference_start|>CAGD - Computer Aided Gripper Design for a Flexible Gripping System: This paper is a summary of the recently accomplished research work on flexible gripping systems. The goal is to develop a gripper which can be used for a great number of geometrically variant workpieces. The economic aspect is of particular importance during the whole development. The high flexibility of the gripper is obtained by three principles used in parallel. These are human and computer based analysis of the gripping object as well as mechanical adaptation of the gripper to the object with the help of servo motors. The focus is on the gripping of free-form surfaces with a suction cup.<|reference_end|> | arxiv | @article{sdahl2006cagd,
title={CAGD - Computer Aided Gripper Design for a Flexible Gripping System},
author={Michael Sdahl & Bernd Kuhlenkoetter},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601058},
primaryClass={cs.RO}
} | sdahl2006cagd |
arxiv-673738 | cs/0601059 | A Descriptive Model of Robot Team and the Dynamic Evolution of Robot Team Cooperation | <|reference_start|>A Descriptive Model of Robot Team and the Dynamic Evolution of Robot Team Cooperation: At present, research on robot team cooperation is still in a qualitative analysis phase and lacks a descriptive model that can quantitatively describe the dynamic evolution of team cooperative relationships under constantly changing task demands in the multi-robot field. This paper first gives a static, whole-team description of the organization model HWROM of a robot team; then, drawing on Markov processes and Bayes' theorem, it dynamically describes the building of team cooperative relationships. Finally, from the cooperative entity layer, the ability layer and the relation layer, we study team formation and cooperative mechanisms, and discuss how to optimize the sets of related actions during the evolution. The dynamic evolution model of robot teams and of cooperative relationships between robot teams proposed and described in this paper can not only characterize the robot team as a whole, but also depict the dynamic evolution process quantitatively. Based on this model, users can also predict the cooperative relationships and actions of a robot team encountering new demands.<|reference_end|> | arxiv | @article{li2006a,
title={A Descriptive Model of Robot Team and the Dynamic Evolution of Robot
Team Cooperation},
author={Shu-qin Li, Lan Shuai, Xian-yi Cheng, Zhen-min Tang & Jing-yu Yang},
journal={International Journal of Advanced Robotics Systems, Vol.2No2.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601059},
primaryClass={cs.RO}
} | li2006a |
arxiv-673739 | cs/0601060 | Robot Swarms in an Uncertain World: Controllable Adaptability | <|reference_start|>Robot Swarms in an Uncertain World: Controllable Adaptability: There is a belief that complexity and chaos are essential for adaptability. But life deals with complexity every moment, without the chaos that engineers fear so, by invoking goal-directed behaviour. Goals can be programmed. That is why living organisms give us hope to achieve adaptability in robots. In this paper a method for the description of a goal-directed, or programmed, behaviour, interacting with uncertainty of environment, is described. We suggest reducing the structural (goals, intentions) and stochastic components (probability to realise the goal) of individual behaviour to random variables with nominal values to apply probabilistic approach. This allowed us to use a Normalized Entropy Index to detect the system state by estimating the contribution of each agent to the group behaviour. The number of possible group states is 27. We argue that adaptation has a limited number of possible paths between these 27 states. Paths and states can be programmed so that after adjustment to any particular case of task and conditions, adaptability will never involve chaos. We suggest the application of the model to operation of robots or other devices in remote and/or dangerous places.<|reference_end|> | arxiv | @article{bogatyreva2006robot,
title={Robot Swarms in an Uncertain World: Controllable Adaptability},
author={Olga Bogatyreva & Alexandr Shillerov},
journal={International Journal of Advanced Robotics Systems, Vol.2No3.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601060},
primaryClass={cs.RO}
} | bogatyreva2006robot |
arxiv-673740 | cs/0601061 | Modular Adaptive System Based on a Multi-Stage Neural Structure for Recognition of 2D Objects of Discontinuous Production | <|reference_start|>Modular Adaptive System Based on a Multi-Stage Neural Structure for Recognition of 2D Objects of Discontinuous Production: This is a presentation of a new system for invariant recognition of 2D objects with overlapping classes that cannot be effectively recognized with the traditional methods. The translation, scale and partial rotation invariant contour object description is transformed into a DCT spectrum space. The obtained frequency spectra are decomposed into frequency bands in order to feed different BPG neural nets (NNs). The NNs are structured in three stages - filtering and full rotation invariance; partial recognition; general classification. The designed multi-stage BPG Neural Structure shows very good accuracy and flexibility when tested with 2D objects used in discontinuous production. The achieved speed and the opportunity for easy restructuring and reprogramming of the system make it suitable for application in different applied systems for real-time work.<|reference_end|> | arxiv | @article{topalova2006modular,
title={Modular Adaptive System Based on a Multi-Stage Neural Structure for
Recognition of 2D Objects of Discontinuous Production},
author={I. Topalova},
journal={International Journal of Advanced Robotics Systems, Vol.2No1.
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601061},
primaryClass={cs.RO}
} | topalova2006modular |
arxiv-673741 | cs/0601062 | Study of Self-Organization Model of Multiple Mobile Robot | <|reference_start|>Study of Self-Organization Model of Multiple Mobile Robot: A good organization model of multiple mobile robots should be able to improve the efficiency of the system, reduce the complication of robot interactions, and lower the difficulty of computation. From the sociological aspects of topology, structure and organization, this paper studies the organization formation and running mechanism of multiple mobile robots in dynamic, complicated and unknown environments. It presents and describes in detail a Hierarchical-Web Recursive Organization Model (HWROM) and its forming algorithm. It defines the robot society leader, robotic team leader and individual robot as the same structure within a unified framework and describes the organization model by a recursive structure. The model uses a task-oriented and top-down method to dynamically build and maintain structures and organization. It uses market-based techniques to assign tasks, form teams and allocate resources in dynamic environments. The model holds several characteristics: self-organization, dynamism, conciseness, commonness and robustness.<|reference_end|> | arxiv | @article{xian-yi2006study,
title={Study of Self-Organization Model of Multiple Mobile Robot},
author={Ceng Xian-yi, Li Shu-qin and Xia De-shen},
journal={International Journal of Advanced Robotics Systems, Vol. 2, No. 3
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601062},
primaryClass={cs.RO}
} | xian-yi2006study |
arxiv-673742 | cs/0601063 | Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search | <|reference_start|>Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search: Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm and Pattern Search as a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.<|reference_end|> | arxiv | @article{ata2006optimal,
title={Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators
using Generalized Pattern Search},
author={Atef A. Ata and Thi Rein Myo},
journal={International Journal of Advanced Robotics Systems, Vol. 2, No. 3
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601063},
primaryClass={cs.RO}
} | ata2006optimal |
arxiv-673743 | cs/0601064 | Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation | <|reference_start|>Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation: This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. ...<|reference_end|> | arxiv | @article{kia2006robotics,
title={Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking
and Navigation},
author={Chua Kia and Mohd Rizal Arshad},
journal={International Journal of Advanced Robotics Systems, Vol. 2, No. 3
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601064},
primaryClass={cs.RO}
} | kia2006robotics |
arxiv-673744 | cs/0601065 | New Intelligent Transmission Concept for Hybrid Mobile Robot Speed Control | <|reference_start|>New Intelligent Transmission Concept for Hybrid Mobile Robot Speed Control: This paper presents a new concept for mobile robot speed control using a two-degree-of-freedom gear transmission. The developed intelligent speed controller utilizes a gearbox which comprises an epicyclic gear train with two inputs, one coupled with the engine shaft and another with the shaft of a variable-speed DC motor. The net output speed is a combination of the two input speeds and is governed by the transmission ratio of the planetary gear train. This new approach eliminates the use of a torque converter, which is otherwise an indispensable part of all available automatic transmissions, thereby reducing the power loss that occurs in the box during fluid coupling. By gradually varying the speed of the DC motor, a stepless transmission is achieved. The other advantages of the developed controller are pulling over and reversing the vehicle, implemented by intelligent mixing of the DC motor and engine speeds. This approach eliminates the traditional braking system from the vehicle design. The use of two power sources, an IC engine and a battery-driven DC motor, follows the modern idea of hybrid vehicles. The new mobile robot speed controller is capable of driving the vehicle even in the extreme case of IC engine failure, for example, due to gas depletion.<|reference_end|> | arxiv | @article{mir-nasiri2006new,
title={New Intelligent Transmission Concept for Hybrid Mobile Robot Speed
Control},
author={Nazim Mir-Nasiri and Sulaiman Hussaini},
journal={International Journal of Advanced Robotics Systems, Vol. 2, No. 3
(2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601065},
primaryClass={cs.RO}
} | mir-nasiri2006new |
arxiv-673745 | cs/0601066 | On the Existence of Universally Decodable Matrices | <|reference_start|>On the Existence of Universally Decodable Matrices: Universally decodable matrices (UDMs) can be used for coding purposes when transmitting over slow fading channels. These matrices are parameterized by positive integers $L$ and $N$ and a prime power $q$. The main result of this paper is that the simple condition $L \leq q+1$ is both necessary and sufficient for $(L,N,q)$-UDMs to exist. The existence proof is constructive and yields a coding scheme that is equivalent to a class of codes that was proposed by Rosenbloom and Tsfasman. Our work resolves an open problem posed recently in the literature.<|reference_end|> | arxiv | @article{ganesan2006on,
title={On the Existence of Universally Decodable Matrices},
author={Ashwin Ganesan, Pascal O. Vontobel},
journal={arXiv preprint arXiv:cs/0601066},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601066},
primaryClass={cs.IT cs.DM math.IT}
} | ganesan2006on |
arxiv-673746 | cs/0601067 | Design of Rate-Compatible Serially Concatenated Convolutional Codes | <|reference_start|>Design of Rate-Compatible Serially Concatenated Convolutional Codes: Recently a powerful class of rate-compatible serially concatenated convolutional codes (SCCCs) has been proposed based on minimizing analytical upper bounds on the error probability in the error floor region. Here this class of codes is further investigated by combining analytical upper bounds with extrinsic information transfer chart analysis. Following this approach, we construct a family of rate-compatible SCCCs with good performance in both the error floor and the waterfall regions over a broad range of code rates.<|reference_end|> | arxiv | @article{amat2006design,
title={Design of Rate-Compatible Serially Concatenated Convolutional Codes},
author={Alexandre Graell i Amat, Fredrik Brannstrom, and Lars K. Rasmussen},
journal={arXiv preprint arXiv:cs/0601067},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601067},
primaryClass={cs.IT math.IT}
} | amat2006design |
arxiv-673747 | cs/0601068 | Checkbochs: Use Hardware to Check Software | <|reference_start|>Checkbochs: Use Hardware to Check Software: In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as `plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user/kernel pointer checks, and race-conditions. On implementing these checks, we were able to uncover previously-unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework, and found numerous bugs.<|reference_end|> | arxiv | @article{bansal2006checkbochs:,
title={Checkbochs: Use Hardware to Check Software},
author={Sorav Bansal},
journal={arXiv preprint arXiv:cs/0601068},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601068},
primaryClass={cs.OS cs.CR}
} | bansal2006checkbochs: |
arxiv-673748 | cs/0601069 | Fast Enumeration of Combinatorial Objects | <|reference_start|>Fast Enumeration of Combinatorial Objects: The problem of ranking can be described as follows. We have a set of combinatorial objects $S$, such as, say, the k-subsets of n things, and we can imagine that they have been arranged in some list, say lexicographically, and we want to have a fast method for obtaining the rank of a given object in the list. This problem is widely known in Combinatorial Analysis, Computer Science and Information Theory. Ranking is closely connected with the hashing problem, especially with perfect hashing, and with the generation of random combinatorial objects. In Information Theory the ranking problem is closely connected with so-called enumerative encoding, which may be described as follows: there is a set of words $S$ and an enumerative code has to one-to-one encode every $s \in S$ by a binary word $code(s)$. The length of $code(s)$ must be the same for all $s \in S$. Clearly, $|code (s)|\geq \log |S|$. (Here and below, $\log x=\log_{2}x$.) The suggested method allows an exponential increase in the speed of encoding and decoding for all the combinatorial enumeration problems considered, including the enumeration of permutations, compositions and others.<|reference_end|> | arxiv | @article{ryabko2006fast,
title={Fast Enumeration of Combinatorial Objects},
author={Boris Ryabko},
journal={published in Discrete Math.and Applications, v.10, n2, 1998},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601069},
primaryClass={cs.CC cs.DM}
} | ryabko2006fast |
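The ranking idea described in the abstract above can be made concrete for k-subsets via the combinatorial number system. The following is a minimal illustrative sketch only — it is not the paper's fast method, whose contribution is an exponential speed-up over this kind of direct computation — and the colex listing order is an assumption for the example.

```python
from math import comb

def rank(subset):
    """Rank of a k-subset of {0,1,...} in the combinatorial number system.

    For a sorted subset c_1 < c_2 < ... < c_k, the rank is
    sum_i C(c_i, i), which enumerates k-subsets in colex order.
    """
    return sum(comb(c, i) for i, c in enumerate(sorted(subset), start=1))

def unrank(r, k):
    """Inverse of rank: recover the k-subset with rank r."""
    subset = []
    for i in range(k, 0, -1):
        c = i - 1                      # smallest c with comb(c, i) = 0
        while comb(c + 1, i) <= r:
            c += 1                     # largest c with comb(c, i) <= r
        subset.append(c)
        r -= comb(c, i)
    return sorted(subset)

if __name__ == "__main__":
    # Round-trip check over all 3-subsets of {0,...,6}.
    from itertools import combinations
    subs = sorted(combinations(range(7), 3), key=rank)
    assert all(unrank(rank(s), 3) == list(s) for s in subs)
    assert [rank(s) for s in subs] == list(range(comb(7, 3)))
    print(subs[:5])
```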
arxiv-673749 | cs/0601070 | Instanton analysis of Low-Density-Parity-Check codes in the error-floor regime | <|reference_start|>Instanton analysis of Low-Density-Parity-Check codes in the error-floor regime: In this paper we develop the instanton method introduced in [1], [2], [3] to quantitatively analyze the performance of Low-Density Parity-Check (LDPC) codes decoded iteratively in the so-called error-floor regime. We discuss statistical properties of the numerical instanton-amoeba scheme, focusing on a detailed analysis and comparison of two regular LDPC codes: Tanner's (155, 64, 20) and Margulis' (672, 336, 16) codes. In the regime of moderate values of the signal-to-noise ratio we critically compare results of the instanton-amoeba evaluations against standard Monte-Carlo calculations of the Frame-Error-Rate.<|reference_end|> | arxiv | @article{stepanov2006instanton,
title={Instanton analysis of Low-Density-Parity-Check codes in the error-floor
regime},
author={M.G. Stepanov, M. Chertkov},
journal={arXiv preprint arXiv:cs/0601070},
year={2006},
number={LA-UR-06-0126},
archivePrefix={arXiv},
eprint={cs/0601070},
primaryClass={cs.IT cond-mat.dis-nn math.IT}
} | stepanov2006instanton |
arxiv-673750 | cs/0601071 | Constraint Functional Logic Programming over Finite Domains | <|reference_start|>Constraint Functional Logic Programming over Finite Domains: In this paper, we present our proposal for Constraint Functional Logic Programming over Finite Domains (CFLP(FD)) with a lazy functional logic programming language which seamlessly embodies finite domain (FD) constraints. This proposal increases the expressiveness and power of constraint logic programming over finite domains (CLP(FD)) by combining functional and relational notation, curried expressions, higher-order functions, patterns, partial applications, non-determinism, lazy evaluation, logical variables, types, domain variables, constraint composition, and finite domain constraints. We describe the syntax of the language, its type discipline, and its declarative and operational semantics. We also describe TOY(FD), an implementation of CFLP(FD), and a comparison of our approach with CLP(FD) from a programming point of view, showing the new features we introduce. Finally, we show a performance analysis which demonstrates that our implementation is competitive with respect to existing CLP(FD) systems and that it clearly outperforms the closest approach to CFLP(FD).<|reference_end|> | arxiv | @article{fernandez2006constraint,
title={Constraint Functional Logic Programming over Finite Domains},
author={Antonio J. Fernandez, Teresa Hortala-Gonzalez, Fernando Saenz-Perez
and Rafael del Vado-Virseda},
journal={arXiv preprint arXiv:cs/0601071},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601071},
primaryClass={cs.PL}
} | fernandez2006constraint |
arxiv-673751 | cs/0601072 | Fast Frequent Querying with Lazy Control Flow Compilation | <|reference_start|>Fast Frequent Querying with Lazy Control Flow Compilation: Control flow compilation is a hybrid between classical WAM compilation and meta-call, limited to the compilation of non-recursive clause bodies. This approach is used successfully for the execution of dynamically generated queries in an inductive logic programming (ILP) setting. Control flow compilation reduces compilation times by up to an order of magnitude, without slowing down execution. A lazy variant of control flow compilation is also presented. By compiling code by need, it removes the overhead of compiling unreached code (a frequent phenomenon in practical ILP settings), and thus reduces the size of the compiled code. Both dynamic compilation approaches have been implemented and were combined with query packs, an efficient ILP execution mechanism. It turns out that locality of data and code is important for performance. The experiments reported in the paper show that lazy control flow compilation is superior in both artificial and real-life settings.<|reference_end|> | arxiv | @article{tronçon2006fast,
title={Fast Frequent Querying with Lazy Control Flow Compilation},
author={Remko Tronçon, Gerda Janssens, Bart Demoen, Henk Vandecasteele},
journal={arXiv preprint arXiv:cs/0601072},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601072},
primaryClass={cs.PL cs.AI cs.SE}
} | tronçon2006fast |
arxiv-673752 | cs/0601073 | A Theory of Routing for Large-Scale Wireless Ad-Hoc Networks | <|reference_start|>A Theory of Routing for Large-Scale Wireless Ad-Hoc Networks: In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances from any source to any destination in the network, hence we are able to deduce a length parameter that is unique for each routing strategy. This parameter, defined as the {\it effective radius}, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical Large-Scale Wireless Ad-Hoc Networks: 1) We obtain the distribution of the lengths of all the paths in a network for any given routing strategy, 2) We are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, $N$, increases to infinity, 3) For any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as $\Theta(\sqrt{N})$ bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks.<|reference_end|> | arxiv | @article{caamaño2006a,
title={A Theory of Routing for Large-Scale Wireless Ad-Hoc Networks},
author={Antonio J. Caamaño, Juan J. Vinagre, Mark Wilby and Javier Ramos},
journal={arXiv preprint arXiv:cs/0601073},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601073},
primaryClass={cs.IT cs.NI math.IT}
} | caamaño2006a |
arxiv-673753 | cs/0601074 | Joint universal lossy coding and identification of iid vector sources | <|reference_start|>Joint universal lossy coding and identification of iid vector sources: The problem of joint universal source coding and modeling, addressed by Rissanen in the context of lossless codes, is generalized to fixed-rate lossy coding of continuous-alphabet memoryless sources. We show that, for bounded distortion measures, any compactly parametrized family of i.i.d. real vector sources with absolutely continuous marginals (satisfying appropriate smoothness and Vapnik--Chervonenkis learnability conditions) admits a joint scheme for universal lossy block coding and parameter estimation, and give nonasymptotic estimates of convergence rates for distortion redundancies and variational distances between the active source and the estimated source. We also present explicit examples of parametric sources admitting such joint universal compression and modeling schemes.<|reference_end|> | arxiv | @article{raginsky2006joint,
title={Joint universal lossy coding and identification of i.i.d. vector sources},
author={Maxim Raginsky},
journal={arXiv preprint arXiv:cs/0601074},
year={2006},
doi={10.1109/ISIT.2006.261782},
archivePrefix={arXiv},
eprint={cs/0601074},
primaryClass={cs.IT cs.LG math.IT}
} | raginsky2006joint |
arxiv-673754 | cs/0601075 | On Universally Decodable Matrices for Space-Time Coding | <|reference_start|>On Universally Decodable Matrices for Space-Time Coding: The notion of universally decodable matrices (UDMs) was recently introduced by Tavildar and Viswanath while studying slow fading channels. It turns out that the problem of constructing UDMs is tightly connected to the problem of constructing maximum distance separable (MDS) codes. In this paper, we first study the properties of UDMs in general and then we discuss an explicit construction of a class of UDMs, a construction which can be seen as an extension of Reed-Solomon codes. In fact, we show that this extension is, in a sense to be made more precise later on, unique. Moreover, the structure of this class of UDMs allows us to answer some open conjectures by Tavildar, Viswanath, and Doshi in the positive, and it also allows us to formulate an efficient decoding algorithm for this class of UDMs. It turns out that our construction yields a coding scheme that is essentially equivalent to a class of codes that was proposed by Rosenbloom and Tsfasman. Moreover, we point out connections to so-called repeated-root cyclic codes.<|reference_end|> | arxiv | @article{vontobel2006on,
title={On Universally Decodable Matrices for Space-Time Coding},
author={Pascal O. Vontobel, Ashwin Ganesan},
journal={arXiv preprint arXiv:cs/0601075},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601075},
primaryClass={cs.IT cs.DM math.IT}
} | vontobel2006on |
arxiv-673755 | cs/0601076 | Learning by Test-infecting Symmetric Ciphers | <|reference_start|>Learning by Test-infecting Symmetric Ciphers: We describe a novel way in which students can learn cipher systems without much supervision. In this work we focus on learning symmetric ciphers by altering them using the agile development approach. Two agile approaches, eXtreme Programming (XP) and the closely related Test-Driven Development (TDD), are mentioned or discussed. To facilitate this development we experiment with an approach that is based on refactoring, with JUnit serving as the automatic testing framework. In this work we exemplify our learning approach by test-infecting the Vernam cipher, an aged but still widely used stream cipher. One can replace the cipher with another symmetric cipher with the same behavior. Software testing is briefly described. Just-in-time introduction to object-oriented programming (OOP), exemplified using Java, is advocated. Refactoring exercises, as argued, are kept strategically simple so that they do not become intensive class redesign exercises. The use of free or open-source tools and frameworks is mentioned.<|reference_end|> | arxiv | @article{ooi2006learning,
title={Learning by Test-infecting Symmetric Ciphers},
author={K. S. Ooi},
journal={arXiv preprint arXiv:cs/0601076},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601076},
primaryClass={cs.CR}
} | ooi2006learning |
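The test-infection exercise described above can be sketched in miniature. The paper works in Java with JUnit; the rough Python/unittest analogue below only mirrors the idea of wrapping a Vernam (XOR) cipher in automated round-trip tests, and all names are illustrative.

```python
import os
import unittest

def vernam(data: bytes, key: bytes) -> bytes:
    """Vernam (XOR) stream cipher: encryption and decryption coincide."""
    if len(key) != len(data):
        raise ValueError("Vernam requires a key as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

class TestVernam(unittest.TestCase):
    def test_round_trip(self):
        # Decrypting the ciphertext with the same key restores the message.
        msg = b"attack at dawn"
        key = os.urandom(len(msg))
        self.assertEqual(vernam(vernam(msg, key), key), msg)

    def test_key_length_enforced(self):
        with self.assertRaises(ValueError):
            vernam(b"short", b"longer key")

if __name__ == "__main__":
    unittest.main()
```

A refactoring exercise in the spirit of the abstract would then swap `vernam` for another symmetric cipher with the same interface and re-run the same tests.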
arxiv-673756 | cs/0601077 | IDBE - An Intelligent Dictionary Based Encoding Algorithm for Text Data Compression for High Speed Data Transmission Over Internet | <|reference_start|>IDBE - An Intelligent Dictionary Based Encoding Algorithm for Text Data Compression for High Speed Data Transmission Over Internet: Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Data compression offers an attractive approach to reducing communication costs by using available bandwidth effectively. Over the last decade there has been an unprecedented explosion in the amount of digital data transmitted via the Internet, representing text, images, video, sound, computer programs, etc. With this trend expected to continue, it makes sense to pursue research on developing algorithms that can most effectively use available network bandwidth by maximally compressing data. This research paper is focused on addressing this problem of lossless compression of text files. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. However, none of these methods has been able to reach the theoretical best-case compression ratio consistently, which suggests that better algorithms may be possible. One approach for trying to attain better compression ratios is to develop new compression algorithms. An alternative approach, however, is to develop intelligent, reversible transformations that can be applied to a source text to improve an existing, or backend, algorithm's ability to compress. The latter strategy has been explored here.<|reference_end|> | arxiv | @article{mohan2006idbe,
title={IDBE - An Intelligent Dictionary Based Encoding Algorithm for Text Data
Compression for High Speed Data Transmission Over Internet},
author={B.S. Shajee Mohan, V.K. Govindan},
journal={arXiv preprint arXiv:cs/0601077},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601077},
primaryClass={cs.IT math.IT}
} | mohan2006idbe |
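The IDBE transform itself is not reproduced here; the sketch below only illustrates the general "reversible pre-transform plus backend compressor" pipeline the abstract describes, using a toy word dictionary in front of zlib. The tokenization and serialization choices are assumptions, not the IDBE algorithm.

```python
import json
import re
import zlib

def transform(text: str):
    """Reversible toy pre-transform: replace tokens by dictionary indices."""
    tokens = re.findall(r"\w+|\W+", text)          # words and separators
    dictionary = sorted(set(tokens))
    index = {tok: i for i, tok in enumerate(dictionary)}
    return dictionary, [index[t] for t in tokens]

def inverse(dictionary, indices):
    """Undo the pre-transform exactly (losslessness is the key property)."""
    return "".join(dictionary[i] for i in indices)

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog. " * 200
    dictionary, indices = transform(text)
    assert inverse(dictionary, indices) == text     # lossless round trip
    raw = zlib.compress(text.encode(), 9)
    pre = zlib.compress(json.dumps([dictionary, indices]).encode(), 9)
    print(f"backend only: {len(raw)} bytes, with pre-transform: {len(pre)} bytes")
```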
arxiv-673757 | cs/0601078 | Exploring high performance distributed file storage using LDPC codes | <|reference_start|>Exploring high performance distributed file storage using LDPC codes: We explore the feasibility of implementing a reliable, high performance, distributed storage system on a commodity computing cluster. Files are distributed across storage nodes using erasure coding with small Low-Density Parity-Check (LDPC) codes which provide high reliability while keeping the storage and performance overhead small. We present performance measurements done on a prototype system comprising 50 nodes which are self-organised using a peer-to-peer overlay.<|reference_end|> | arxiv | @article{gaidioz2006exploring,
title={Exploring high performance distributed file storage using LDPC codes},
author={Benjamin Gaidioz, Birger Koblitz, Nuno Santos},
journal={arXiv preprint arXiv:cs/0601078},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601078},
primaryClass={cs.DC}
} | gaidioz2006exploring |
arxiv-673758 | cs/0601079 | SCRUB-PA: A Multi-Level Multi-Dimensional Anonymization Tool for Process Accounting | <|reference_start|>SCRUB-PA: A Multi-Level Multi-Dimensional Anonymization Tool for Process Accounting: In the UNIX/Linux environment the kernel can log every command process created by every user using process accounting. This data has many potential uses, including the investigation of security incidents. However, process accounting data is also sensitive since it contains private user information. Consequently, security system administrators have been hindered from sharing these logs. Given that many interesting security applications could use process accounting data, it would be useful to have a tool that could protect private user information in the logs. For this reason we introduce SCRUB-PA, a tool that uses multi-level multi-dimensional anonymization on process accounting log files in order to provide different levels of privacy protection. It is our goal that SCRUB-PA will promote the sharing of process accounting logs while preserving privacy.<|reference_end|> | arxiv | @article{luo2006scrub-pa:,
title={SCRUB-PA: A Multi-Level Multi-Dimensional Anonymization Tool for Process
Accounting},
author={Katherine Luo, Yifan Li, Charis Ermopoulos, William Yurcik, Adam
Slagell},
journal={arXiv preprint arXiv:cs/0601079},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601079},
primaryClass={cs.CR}
} | luo2006scrub-pa: |
arxiv-673759 | cs/0601080 | On Measure Theoretic definitions of Generalized Information Measures and Maximum Entropy Prescriptions | <|reference_start|>On Measure Theoretic definitions of Generalized Information Measures and Maximum Entropy Prescriptions: Though Shannon entropy of a probability measure $P$, defined as $-\int_{X} \frac{\mathrm{d}P}{\mathrm{d}\mu} \ln \frac{\mathrm{d}P}{\mathrm{d}\mu} \,\mathrm{d}\mu$ on a measure space $(X, \mathfrak{M},\mu)$, does not qualify itself as an information measure (it is not a natural extension of the discrete case), maximum entropy (ME) prescriptions in the measure-theoretic case are consistent with those of the discrete case. In this paper, we study the measure-theoretic definitions of generalized information measures and discuss the ME prescriptions. We present two results in this regard: (i) we prove that, as in the case of classical relative-entropy, the measure-theoretic definitions of generalized relative-entropies, R\'{e}nyi and Tsallis, are natural extensions of their respective discrete cases, (ii) we show that ME prescriptions of measure-theoretic Tsallis entropy are consistent with the discrete case.<|reference_end|> | arxiv | @article{dukkipati2006on,
title={On Measure Theoretic definitions of Generalized Information Measures and
Maximum Entropy Prescriptions},
author={Ambedkar Dukkipati, M Narasimha Murty and Shalabh Bhatnagar},
journal={arXiv preprint arXiv:cs/0601080},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601080},
primaryClass={cs.IT math.IT}
} | dukkipati2006on |
arxiv-673760 | cs/0601081 | An O(1) Solution to the Prefix Sum Problem on a Specialized Memory Architecture | <|reference_start|>An O(1) Solution to the Prefix Sum Problem on a Specialized Memory Architecture: In this paper we study the Prefix Sum problem introduced by Fredman. We show that it is possible to perform both update and retrieval in O(1) time simultaneously under a memory model in which individual bits may be shared by several words. We also show that two variants (generalizations) of the problem can be solved optimally in $\Theta(\lg N)$ time under the comparison based model of computation.<|reference_end|> | arxiv | @article{brodnik2006an,
title={An O(1) Solution to the Prefix Sum Problem on a Specialized Memory
Architecture},
author={Andrej Brodnik, Johan Karlsson, J. Ian Munro, Andreas Nilsson},
journal={arXiv preprint arXiv:cs/0601081},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601081},
primaryClass={cs.DS cs.CC cs.IR}
} | brodnik2006an |
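For contrast with the O(1) specialized-memory result above: the classical comparison-model solution meeting the Θ(lg N) bound is a Fenwick (binary indexed) tree. A standard sketch, unrelated to the paper's shared-bit memory architecture:

```python
class FenwickTree:
    """Prefix sums with O(log N) update and retrieve (comparison model)."""

    def __init__(self, n: int):
        self.n = n
        self.tree = [0] * (n + 1)        # 1-indexed internally

    def update(self, i: int, delta: int) -> None:
        """Add delta to element i (0-indexed)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)                # move to the next covering interval

    def retrieve(self, i: int) -> int:
        """Return the sum of elements 0..i inclusive."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)                # strip the lowest set bit
        return s

if __name__ == "__main__":
    ft = FenwickTree(8)
    for idx, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
        ft.update(idx, v)
    assert ft.retrieve(3) == 9 and ft.retrieve(7) == 31
```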
arxiv-673761 | cs/0601082 | Search in Complex Networks : a New Method of Naming | <|reference_start|>Search in Complex Networks : a New Method of Naming: We suggest a method for routing when the source does not possess full information about the shortest path to the destination. The method is particularly useful for scale-free networks, and exploits their unique characteristics. By assigning new (short) names to nodes (aka labelling) we are able to significantly reduce the memory requirement at the routers, yet we succeed in routing with high probability through paths very close in distance to the shortest ones.<|reference_end|> | arxiv | @article{carmi2006search,
title={Search in Complex Networks : a New Method of Naming},
author={Shai Carmi, Reuven Cohen and Danny Dolev},
journal={Europhys. Lett., 74 (6), pp. 1102-1108 (2006)},
year={2006},
doi={10.1209/epl/i2006-10049-1},
archivePrefix={arXiv},
eprint={cs/0601082},
primaryClass={cs.NI cond-mat.dis-nn}
} | carmi2006search |
arxiv-673762 | cs/0601083 | Multilevel Coding for Channels with Non-uniform Inputs and Rateless Transmission over the BSC | <|reference_start|>Multilevel Coding for Channels with Non-uniform Inputs and Rateless Transmission over the BSC: We consider coding schemes for channels with non-uniform inputs (NUI), where standard linear block codes can not be applied directly. We show that multilevel coding (MLC) with a set of linear codes and a deterministic mapper can achieve the information rate of the channel with NUI. The mapper, however, does not have to be one-to-one. As an application of the proposed MLC scheme, we present a rateless transmission scheme over the binary symmetric channel (BSC).<|reference_end|> | arxiv | @article{jiang2006multilevel,
title={Multilevel Coding for Channels with Non-uniform Inputs and Rateless
Transmission over the BSC},
author={Jing Jiang and Krishna R. Narayanan},
journal={arXiv preprint arXiv:cs/0601083},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601083},
primaryClass={cs.IT math.IT}
} | jiang2006multilevel |
arxiv-673763 | cs/0601084 | Randomized Fast Design of Short DNA Words | <|reference_start|>Randomized Fast Design of Short DNA Words: We consider the problem of efficiently designing sets (codes) of equal-length DNA strings (words) that satisfy certain combinatorial constraints. This problem has numerous motivations including DNA computing and DNA self-assembly. Previous work has extended results from coding theory to obtain bounds on code size for new biologically motivated constraints and has applied heuristic local search and genetic algorithm techniques for code design. This paper proposes a natural optimization formulation of the DNA code design problem in which the goal is to design n strings that satisfy a given set of constraints while minimizing the length of the strings. For multiple sets of constraints, we provide high-probability algorithms that run in time polynomial in n and any given constraint parameters, and output strings of length within a constant factor of the optimal. To the best of our knowledge, this work is the first to consider this type of optimization problem in the context of DNA code design.<|reference_end|> | arxiv | @article{kao2006randomized,
title={Randomized Fast Design of Short DNA Words},
author={Ming-Yang Kao, Manan Sanghi, Robert Schweller},
journal={Proceedings of the 32nd International Colloquium on Automata,
Languages and Programming (ICALP 2005), Lisboa, Portugal, July 11-15, 2005,
pp. 1275-1286},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601084},
primaryClass={cs.DS}
} | kao2006randomized |
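The design problem above can be made concrete with a brute-force checker and greedy builder for two constraints typically imposed on DNA codes (pairwise Hamming distance and GC content). This is an illustrative sketch only — the paper's contribution is randomized constructions with provably near-optimal word length, not this exhaustive search — and the particular constraint values are assumptions.

```python
from itertools import product

def hamming(u: str, v: str) -> int:
    return sum(a != b for a, b in zip(u, v))

def valid_code(words, d: int, gc: int) -> bool:
    """Check pairwise Hamming distance >= d and exact GC content gc."""
    if any(w.count("G") + w.count("C") != gc for w in words):
        return False
    return all(hamming(u, v) >= d
               for i, u in enumerate(words) for v in words[i + 1:])

def greedy_code(length: int, d: int, gc: int, n: int):
    """Greedily collect up to n words over {A,C,G,T} meeting the constraints."""
    code = []
    for cand in ("".join(w) for w in product("ACGT", repeat=length)):
        if cand.count("G") + cand.count("C") != gc:
            continue
        if all(hamming(cand, w) >= d for w in code):
            code.append(cand)
            if len(code) == n:
                break
    return code

if __name__ == "__main__":
    code = greedy_code(length=6, d=3, gc=3, n=8)
    assert valid_code(code, d=3, gc=3)
    print(code)
```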
arxiv-673764 | cs/0601085 | A Formal Foundation for ODRL | <|reference_start|>A Formal Foundation for ODRL: ODRL is a popular XML-based language for stating the conditions under which resources can be accessed legitimately. The language is described in English and, as a result, agreements written in ODRL are open to interpretation. To address this problem, we propose a formal semantics for a representative fragment of the language. We use this semantics to determine precisely when a permission is implied by a set of ODRL statements and show that answering such questions is a decidable NP-hard problem. Finally, we define a tractable fragment of ODRL that is also fairly expressive.<|reference_end|> | arxiv | @article{pucella2006a,
title={A Formal Foundation for ODRL},
author={Riccardo Pucella and Vicky Weissman},
journal={arXiv preprint arXiv:cs/0601085},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601085},
primaryClass={cs.LO cs.CR}
} | pucella2006a |
arxiv-673765 | cs/0601086 | Comments on Beckmann's Uniform Reducts | <|reference_start|>Comments on Beckmann's Uniform Reducts: Arnold Beckmann defined the uniform reduct of a propositional proof system f to be the set of those bounded arithmetical formulas whose propositional translations have polynomial size f-proofs. We prove that the uniform reduct of f + Extended Frege consists of all true bounded arithmetical formulas iff f + Extended Frege simulates every proof system.<|reference_end|> | arxiv | @article{cook2006comments,
title={Comments on Beckmann's Uniform Reducts},
author={Stephen Cook},
journal={arXiv preprint arXiv:cs/0601086},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601086},
primaryClass={cs.CC}
} | cook2006comments |
arxiv-673766 | cs/0601087 | Processing of Test Matrices with Guessing Correction | <|reference_start|>Processing of Test Matrices with Guessing Correction: It is suggested to insert into the test matrix 1s for correct responses, 0s for response refusals, and negative corrective elements for incorrect responses. With the classical test theory approach, test scores of examinees and items are calculated traditionally as sums of matrix elements, organized in rows and columns. Correlation coefficients are estimated using correction coefficients. In the item response theory approach, examinee and item logits are estimated using the maximum likelihood method and the probabilities of all matrix elements.<|reference_end|> | arxiv | @article{victor2006processing,
title={Processing of Test Matrices with Guessing Correction},
author={Kromer Victor},
journal={arXiv preprint arXiv:cs/0601087},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601087},
primaryClass={cs.LG}
} | victor2006processing |
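A small numerical illustration of the classical-test-theory part of the scheme above. The corrective value -1/(m-1) for an m-choice item is the textbook correction-for-guessing weight and is an assumption here, since the abstract leaves the corrective elements unspecified.

```python
import numpy as np

# Rows = examinees, columns = items (all items assumed 5-choice here).
# 1 = correct, 0 = refusal, c = corrective element for an incorrect response.
m = 5                       # number of response options per item (assumed)
c = -1.0 / (m - 1)          # classical correction-for-guessing weight

responses = np.array([      # 'C' correct, 'W' wrong, 'R' refusal
    list("CCWRC"),
    list("CWWCR"),
    list("RRCCC"),
])
matrix = np.select([responses == "C", responses == "W"], [1.0, c], default=0.0)

examinee_scores = matrix.sum(axis=1)   # row sums, as in the abstract
item_scores = matrix.sum(axis=0)       # column sums
print("examinee scores:", examinee_scores)
print("item scores:", item_scores)
```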
arxiv-673767 | cs/0601088 | An Algorithm for Constructing All Families of Codes of Arbitrary Requirement in an OCDMA System | <|reference_start|>An Algorithm for Constructing All Families of Codes of Arbitrary Requirement in an OCDMA System: A novel code construction algorithm is presented to find all the possible code families for code reconfiguration in an OCDMA system. The algorithm is developed through searching all the complete subgraphs of a constructed graph. The proposed algorithm is flexible and practical for constructing optical orthogonal codes (OOCs) of arbitrary requirement. Simulation results show that one should choose an appropriate code length in order to obtain a sufficient number of code families for code reconfiguration at reasonable cost.<|reference_end|> | arxiv | @article{lu2006an,
title={An Algorithm for Constructing All Families of Codes of Arbitrary
Requirement in an OCDMA System},
author={Xiang Lu, Jiajia Chen and Sailing He},
journal={arXiv preprint arXiv:cs/0601088},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601088},
primaryClass={cs.IT math.IT}
} | lu2006an |
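The abstract's strategy — enumerate complete subgraphs (cliques) of a constructed compatibility graph — can be sketched for tiny parameters. The correlation thresholds and code parameters below are assumed for illustration, and networkx's maximal-clique enumeration stands in for the paper's search procedure.

```python
from itertools import combinations
import networkx as nx

def correlation(x, y, n):
    """Maximum periodic cross-correlation of two support sets over Z_n."""
    return max(len(x & {(a + t) % n for a in y}) for t in range(n))

def auto_ok(x, n, lam):
    """Out-of-phase autocorrelation of support set x bounded by lam."""
    return all(len(x & {(a + t) % n for a in x}) <= lam for t in range(1, n))

n, w, lam = 13, 3, 1                      # length, weight, threshold (assumed)
words = [frozenset(s) for s in combinations(range(n), w)
         if auto_ok(frozenset(s), n, lam)]

G = nx.Graph()
G.add_nodes_from(words)
G.add_edges_from((x, y) for x, y in combinations(words, 2)
                 if correlation(x, y, n) <= lam)

# Every maximal clique of G is a candidate OOC family.
families = list(nx.find_cliques(G))
print(f"{len(words)} admissible words, {len(families)} maximal families")
print("largest family size:", max(map(len, families)))
```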
arxiv-673768 | cs/0601089 | Distributed Kernel Regression: An Algorithm for Training Collaboratively | <|reference_start|>Distributed Kernel Regression: An Algorithm for Training Collaboratively: This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.<|reference_end|> | arxiv | @article{predd2006distributed,
title={Distributed Kernel Regression: An Algorithm for Training Collaboratively},
author={Joel B. Predd, Sanjeev R. Kulkarni, and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0601089},
year={2006},
doi={10.1109/ITW.2006.1633840},
archivePrefix={arXiv},
eprint={cs/0601089},
primaryClass={cs.LG cs.AI cs.DC cs.IT math.IT}
} | predd2006distributed |
arxiv-673769 | cs/0601090 | Improved Nearly-MDS Expander Codes | <|reference_start|>Improved Nearly-MDS Expander Codes: A construction of expander codes is presented with the following three properties: (i) the codes lie close to the Singleton bound, (ii) they can be encoded in time complexity that is linear in their code length, and (iii) they have a linear-time bounded-distance decoder. By using a version of the decoder that corrects also erasures, the codes can replace MDS outer codes in concatenated constructions, thus resulting in linear-time encodable and decodable codes that approach the Zyablov bound or the capacity of memoryless channels. The presented construction improves on an earlier result by Guruswami and Indyk in that any rate and relative minimum distance that lies below the Singleton bound is attainable for a significantly smaller alphabet size.<|reference_end|> | arxiv | @article{roth2006improved,
title={Improved Nearly-MDS Expander Codes},
author={Ron M. Roth, Vitaly Skachek},
journal={arXiv preprint arXiv:cs/0601090},
year={2006},
doi={10.1109/TIT.2006.878232},
archivePrefix={arXiv},
eprint={cs/0601090},
primaryClass={cs.IT math.IT}
} | roth2006improved |
arxiv-673770 | cs/0601091 | Communication Over MIMO Broadcast Channels Using Lattice-Basis Reduction | <|reference_start|>Communication Over MIMO Broadcast Channels Using Lattice-Basis Reduction: A simple scheme for communication over MIMO broadcast channels is introduced which adopts the lattice reduction technique to improve the naive channel inversion method. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. Simulation results show that the proposed scheme performs well, and as compared to the more complex methods (such as the perturbation method) has a negligible loss. Moreover, the proposed method is extended to the case of different rates for different users. The asymptotic behavior of the symbol error rate of the proposed method and the perturbation technique, and also the outage probability for the case of fixed-rate users is analyzed. It is shown that the proposed method, based on LLL lattice reduction, achieves the optimum asymptotic slope of symbol-error-rate (called the precoding diversity). Also, the outage probability for the case of fixed sum-rate is analyzed.<|reference_end|> | arxiv | @article{taherzadeh2006communication,
title={Communication Over MIMO Broadcast Channels Using Lattice-Basis Reduction},
author={Mahmoud Taherzadeh, Amin Mobasher, and Amir K. Khandani},
journal={arXiv preprint arXiv:cs/0601091},
year={2006},
doi={10.1109/TIT.2007.909095},
archivePrefix={arXiv},
eprint={cs/0601091},
primaryClass={cs.IT math.IT}
} | taherzadeh2006communication |
arxiv-673771 | cs/0601092 | LLL Reduction Achieves the Receive Diversity in MIMO Decoding | <|reference_start|>LLL Reduction Achieves the Receive Diversity in MIMO Decoding: Diversity order is an important measure for the performance of communication systems over MIMO fading channels. In this paper, we prove that in MIMO multiple access systems (or MIMO point-to-point systems with V-BLAST transmission), lattice-reduction-aided decoding achieves the maximum receive diversity (which is equal to the number of receive antennas). Also, we prove that the naive lattice decoding (which discards the out-of-region decoded points) achieves the maximum diversity.<|reference_end|> | arxiv | @article{taherzadeh2006lll,
title={LLL Reduction Achieves the Receive Diversity in MIMO Decoding},
author={Mahmoud Taherzadeh, Amin Mobasher, and Amir K. Khandani},
journal={arXiv preprint arXiv:cs/0601092},
year={2006},
doi={10.1109/TIT.2007.909169},
archivePrefix={arXiv},
eprint={cs/0601092},
primaryClass={cs.IT math.IT}
} | taherzadeh2006lll |
arxiv-673772 | cs/0601093 | Stability of Scheduled Multi-access Communication over Quasi-static Flat Fading Channels with Random Coding and Joint Maximum Likelihood Decoding | <|reference_start|>Stability of Scheduled Multi-access Communication over Quasi-static Flat Fading Channels with Random Coding and Joint Maximum Likelihood Decoding: We consider stability of scheduled multiaccess message communication with random coding and joint maximum-likelihood decoding of messages. The framework we consider here models both the random message arrivals and the subsequent reliable communication by suitably combining techniques from queueing theory and information theory. The number of messages that may be scheduled for simultaneous transmission is limited to a given maximum value, and the channels from transmitters to receiver are quasi-static, flat, and have independent fades. Requests for message transmissions are assumed to arrive according to an i.i.d. arrival process. Then, (i) we derive an outer bound to the region of message arrival rate vectors achievable by the class of stationary scheduling policies, (ii) we show, for any message arrival rate vector that satisfies the outer bound, that there exists a stationary state-independent policy that results in a stable system for the corresponding message arrival process, and (iii) in the limit of large message lengths, we show that the stability region of message nat arrival rate vectors has an information-theoretic capacity region interpretation.<|reference_end|> | arxiv | @article{sayee2006stability,
title={Stability of Scheduled Multi-access Communication over Quasi-static Flat
Fading Channels with Random Coding and Joint Maximum Likelihood Decoding},
author={KCV Kalyanarama Sesha Sayee and Utpal Mukherji},
journal={arXiv preprint arXiv:cs/0601093},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601093},
primaryClass={cs.IT math.IT}
} | sayee2006stability |
arxiv-673773 | cs/0601094 | Stability of Scheduled Message Communication over Degraded Broadcast Channels | <|reference_start|>Stability of Scheduled Message Communication over Degraded Broadcast Channels: We consider scheduled message communication over a discrete memoryless degraded broadcast channel. The framework we consider here models both the random message arrivals and the subsequent reliable communication by suitably combining techniques from queueing theory and information theory. The channel from the transmitter to each of the receivers is quasi-static, flat, and with independent fades across the receivers. Requests for message transmissions are assumed to arrive according to an i.i.d. arrival process. Then, (i) we derive an outer bound to the region of message arrival vectors achievable by the class of stationary scheduling policies, (ii) we show, for any message arrival vector that satisfies the outer bound, that there exists a stationary ``state-independent'' policy that results in a stable system for the corresponding message arrival process, and (iii) under two asymptotic regimes, we show that the stability region of nat arrival rate vectors has an information-theoretic capacity region interpretation.<|reference_end|> | arxiv | @article{sayee2006stability,
title={Stability of Scheduled Message Communication over Degraded Broadcast
Channels},
author={KCV Kalyanarama Sesha Sayee and Utpal Mukherji},
journal={arXiv preprint arXiv:cs/0601094},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601094},
primaryClass={cs.IT math.IT}
} | sayee2006stability |
arxiv-673774 | cs/0601095 | On the Weight Enumerator and the Maximum Likelihood Performance of Linear Product Codes | <|reference_start|>On the Weight Enumerator and the Maximum Likelihood Performance of Linear Product Codes: Product codes are widely used in data-storage, optical and wireless applications. Their analytical performance evaluation usually relies on the truncated union bound, which provides a low error rate approximation based on the minimum distance term only. In fact, the complete weight enumerator of most product codes remains unknown. In this paper, concatenated representations are introduced and applied to compute the complete average enumerators of arbitrary product codes over a field Fq. The split weight enumerators of some important constituent codes (Hamming, Reed-Solomon) are studied and used in the analysis. The average binary weight enumerators of Reed Solomon product codes are also derived. Numerical results showing the enumerator behavior are presented. By using the complete enumerators, Poltyrev bounds on the maximum likelihood performance, holding at both high and low error rates, are finally shown and compared against truncated union bounds and simulation results.<|reference_end|> | arxiv | @article{el-khamy2006on,
title={On the Weight Enumerator and the Maximum Likelihood Performance of
Linear Product Codes},
author={Mostafa El-Khamy and Roberto Garello},
journal={arXiv preprint arXiv:cs/0601095},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601095},
primaryClass={cs.IT math.IT}
} | el-khamy2006on |
arxiv-673775 | cs/0601096 | On timed automata with input-determined guards | <|reference_start|>On timed automata with input-determined guards: We consider a general notion of timed automata with input-determined guards and show that they admit a robust logical framework along the lines of [D'Souza03], in terms of a monadic second order logic characterisation and an expressively complete timed temporal logic. We then generalize these automata using the notion of recursive operators introduced by Henzinger, Raskin, and Schobbens, and show that they admit a similar logical framework. These results hold in the ``pointwise'' semantics. We finally use this framework to show that the real-time logic MITL of Alur et al. is expressively complete with respect to an MSO corresponding to an appropriate input-determined operator.<|reference_end|> | arxiv | @article{d'souza2006on,
title={On timed automata with input-determined guards},
author={Deepak D'Souza, Nicolas Tabareau (PPS)},
journal={arXiv preprint arXiv:cs/0601096},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601096},
primaryClass={cs.LO}
} | d'souza2006on |
arxiv-673776 | cs/0601097 | Compression Scheme for Faster and Secure Data Transmission Over Internet | <|reference_start|>Compression Scheme for Faster and Secure Data Transmission Over Internet: Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Data compression offers an attractive approach to reducing communication costs by using available bandwidth effectively. Over the last decade there has been an unprecedented explosion in the amount of digital data transmitted via the Internet, representing text, images, video, sound, computer programs, etc. With this trend expected to continue, it makes sense to pursue research on developing algorithms that can most effectively use available network bandwidth by maximally compressing data. It is also important to consider the security aspects of the data being transmitted while compressing it, as most of the text data transmitted over the Internet is highly vulnerable to a multitude of attacks. This paper is focused on addressing this problem of lossless compression of text files with added security.<|reference_end|> | arxiv | @article{shajeemohan2006compression,
title={Compression Scheme for Faster and Secure Data Transmission Over Internet},
author={B.S. Shajeemohan, Dr. V.K. Govindan},
journal={arXiv preprint arXiv:cs/0601097},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601097},
primaryClass={cs.PF cs.DC}
} | shajeemohan2006compression |
arxiv-673777 | cs/0601098 | Energy Efficiency and Delay Quality-of-Service in Wireless Networks | <|reference_start|>Energy Efficiency and Delay Quality-of-Service in Wireless Networks: The energy-delay tradeoffs in wireless networks are studied using a game-theoretic framework. A multi-class multiple-access network is considered in which users choose their transmit powers, and possibly transmission rates, in a distributed manner to maximize their own utilities while satisfying their delay quality-of-service (QoS) requirements. The utility function considered here measures the number of reliable bits transmitted per Joule of energy consumed and is particularly useful for energy-constrained networks. The Nash equilibrium solution for the proposed non-cooperative game is presented and closed-form expressions for the users' utilities at equilibrium are obtained. Based on this, the losses in energy efficiency and network capacity due to presence of delay-sensitive users are quantified. The analysis is extended to the scenario where the QoS requirements include both the average source rate and a bound on the average total delay (including queuing delay). It is shown that the incoming traffic rate and the delay constraint of a user translate into a "size" for the user, which is an indication of the amount of resources consumed by the user. Using this framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are also quantified.<|reference_end|> | arxiv | @article{meshkati2006energy,
title={Energy Efficiency and Delay Quality-of-Service in Wireless Networks},
author={Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz and Radu V. Balan},
journal={arXiv preprint arXiv:cs/0601098},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601098},
primaryClass={cs.IT math.IT}
} | meshkati2006energy |
arxiv-673778 | cs/0601099 | Adaptive Linear Programming Decoding | <|reference_start|>Adaptive Linear Programming Decoding: Detectability of failures of linear programming (LP) decoding and its potential for improvement by adding new constraints motivate the use of an adaptive approach in selecting the constraints for the LP problem. In this paper, we make a first step in studying this method, and show that it can significantly reduce the complexity of the problem, which was originally exponential in the maximum check-node degree. We further show that adaptively adding new constraints, e.g. by combining parity checks, can provide large gains in the performance.<|reference_end|> | arxiv | @article{n.2006adaptive,
title={Adaptive Linear Programming Decoding},
author={Mohammad H. Taghavi N. and Paul H. Siegel},
journal={arXiv preprint arXiv:cs/0601099},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601099},
primaryClass={cs.IT math.IT}
} | n.2006adaptive |
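A compact sketch of the adaptive idea above, assuming the standard LP decoding relaxation with odd-set parity inequalities and the greedy most-violated-cut search; scipy's general-purpose LP solver stands in for a dedicated one, and the toy code and cost vector are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def most_violated_cut(x, check, eps=1e-6):
    """Return (coeffs, rhs) of the most violated odd-set inequality
    for one parity check, or None if no such inequality is violated."""
    nbrs = list(np.flatnonzero(check))
    V = {i for i in nbrs if x[i] > 0.5}
    if len(V) % 2 == 0:                      # odd-set inequalities need |V| odd
        j = min(nbrs, key=lambda i: abs(x[i] - 0.5))
        V ^= {j}
    lhs = sum(x[i] for i in V) - sum(x[i] for i in nbrs if i not in V)
    if lhs <= len(V) - 1 + eps:
        return None
    a = np.zeros(len(x))
    for i in nbrs:
        a[i] = 1.0 if i in V else -1.0
    return a, len(V) - 1

def adaptive_lp_decode(H, gamma, max_iters=100):
    """Iteratively solve the LP, adding violated parity cuts as needed."""
    n = H.shape[1]
    A, b = [], []
    for _ in range(max_iters):
        res = linprog(gamma, A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        cuts = [c for c in (most_violated_cut(x, row) for row in H)
                if c is not None]
        if not cuts:
            return x                         # no violated inequality remains
        for a, rhs in cuts:
            A.append(a)
            b.append(rhs)
    return x

if __name__ == "__main__":
    H = np.array([[1, 1, 1, 0, 0, 0],        # toy parity-check matrix
                  [0, 0, 1, 1, 1, 0],
                  [1, 0, 0, 0, 1, 1]])
    gamma = np.array([-1.2, 0.8, 0.7, -0.9, 1.1, 0.6])  # channel LLR costs
    print(adaptive_lp_decode(H, gamma))
```

The point of the adaptive scheme is visible in the loop: constraints are generated only when violated, so the LP never has to carry the full exponential family of parity inequalities at once.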
arxiv-673779 | cs/0601100 | Pseudorandomness and Combinatorial Constructions | <|reference_start|>Pseudorandomness and Combinatorial Constructions: In combinatorics, the probabilistic method is a very powerful tool to prove the existence of combinatorial objects with interesting and useful properties. Explicit constructions of objects with such properties are often very difficult, or unknown. In computer science, probabilistic algorithms are sometimes simpler and more efficient than the best known deterministic algorithms for the same problem. Despite this evidence for the power of random choices, the computational theory of pseudorandomness shows that, under certain complexity-theoretic assumptions, every probabilistic algorithm has an efficient deterministic simulation and a large class of applications of the probabilistic method can be converted into explicit constructions. In this survey paper we describe connections between the conditional ``derandomization'' results of the computational theory of pseudorandomness and unconditional explicit constructions of certain combinatorial objects such as error-correcting codes and ``randomness extractors.''<|reference_end|> | arxiv | @article{trevisan2006pseudorandomness,
title={Pseudorandomness and Combinatorial Constructions},
author={Luca Trevisan},
journal={arXiv preprint arXiv:cs/0601100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601100},
primaryClass={cs.CC math.CO}
} | trevisan2006pseudorandomness |
arxiv-673780 | cs/0601101 | The topology of covert conflict | <|reference_start|>The topology of covert conflict: Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation. Albert, Jeong and Barabasi famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks. We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies. We show, first, that naive defences don't work against vertex-order attack; second, that defences based on simple redundancy don't work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences. Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters. They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them. Finally, we draw some conclusions and present possible directions for future research.<|reference_end|> | arxiv | @article{nagaraja2006the,
title={The topology of covert conflict},
author={Shishir Nagaraja},
journal={arXiv preprint arXiv:cs/0601101},
year={2006},
number={UCAM-CL-TR-637},
archivePrefix={arXiv},
eprint={cs/0601101},
primaryClass={cs.NI cs.GT}
} | nagaraja2006the |
arxiv-673781 | cs/0601102 | Geometric symmetry in the quadratic Fisher discriminant operating on image pixels | <|reference_start|>Geometric symmetry in the quadratic Fisher discriminant operating on image pixels: This article examines the design of Quadratic Fisher Discriminants (QFDs) that operate directly on image pixels, when image ensembles are taken to comprise all rotated and reflected versions of distinct sample images. A procedure based on group theory is devised to identify and discard QFD coefficients made redundant by symmetry, for arbitrary sampling lattices. This procedure introduces the concept of a degeneracy matrix. Tensor representations are established for the square lattice point group (8-fold symmetry) and hexagonal lattice point group (12-fold symmetry). The analysis is largely applicable to the symmetrisation of any quadratic filter, and generalises to higher order polynomial (Volterra) filters. Experiments on square lattice sampled synthetic aperture radar (SAR) imagery verify that symmetrisation of QFDs can improve their generalisation and discrimination ability.<|reference_end|> | arxiv | @article{caprari2006geometric,
title={Geometric symmetry in the quadratic Fisher discriminant operating on
image pixels},
author={Robert S. Caprari},
journal={IEEE Transactions on Information Theory 52(4), April 2006, pp.
1780-1788},
year={2006},
doi={10.1109/TIT.2006.871581},
archivePrefix={arXiv},
eprint={cs/0601102},
primaryClass={cs.IT cs.CV math.IT}
} | caprari2006geometric |
arxiv-673782 | cs/0601103 | Google Web APIs - an Instrument for Webometric Analyses? | <|reference_start|>Google Web APIs - an Instrument for Webometric Analyses?: This paper introduces Google Web APIs (Google APIs) as an instrument and playground for webometric studies. Several examples of Google API implementations are given. Our examples show that this Google Web Service can be used successfully for informetric Internet-based studies, albeit with some restrictions.<|reference_end|> | arxiv | @article{mayr2006google,
title={Google Web APIs - an Instrument for Webometric Analyses?},
author={Philipp Mayr, Fabio Tosques},
journal={arXiv preprint arXiv:cs/0601103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601103},
primaryClass={cs.IR}
} | mayr2006google |
arxiv-673783 | cs/0601104 | The complexity of class polynomial computation via floating point approximations | <|reference_start|>The complexity of class polynomial computation via floating point approximations: We analyse the complexity of computing class polynomials, that are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest one of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. It runs in time $O (|D| \log^5 |D| \log \log |D|) = O (|D|^{1 + \epsilon}) = O (h^{2 + \epsilon})$ for any $\epsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the class polynomial. Another fast algorithm uses multipoint evaluation techniques known from symbolic computation; its asymptotic complexity is worse by a factor of $\log |D|$. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary-quadratic order and on a rigorously proven upper bound for the height of class polynomials.<|reference_end|> | arxiv | @article{enge2006the,
title={The complexity of class polynomial computation via floating point
approximations},
author={Andreas Enge (INRIA Futurs, Lix)},
journal={Mathematics of Computation 78, 266 (2009) 1089-1107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601104},
primaryClass={cs.NA cs.SC math.NT}
} | enge2006the |
arxiv-673784 | cs/0601105 | The Perceptron Algorithm: Image and Signal Decomposition, Compression, and Analysis by Iterative Gaussian Blurring | <|reference_start|>The Perceptron Algorithm: Image and Signal Decomposition, Compression, and Analysis by Iterative Gaussian Blurring: A novel algorithm for tunable compression to within the precision of reproduction targets, or storage, is proposed. The new algorithm, termed the `Perceptron Algorithm', utilises simple existing concepts in a novel way, has multiple immediate commercial applications, and opens up a multitude of fronts in computational science and technology. The aims of this paper are to present the concepts underlying the algorithm, to report observations from its application to example cases, and to identify potential areas of application, such as: image compression by orders of magnitude; signal compression, including sound; multilayered, detailed image analysis; pattern recognition and matching, and rapid database searching (e.g. face recognition); motion analysis; and biomedical applications, e.g. in MRI and CAT scan image analysis and compression; as well as hints on the link between these ideas and the way biological memory might work, leading to new points of view in neural computation. Commercial applications of immediate interest include the compression of images at the source (e.g. photographic equipment, scanners, satellite imaging systems), DVD film compression, acceleration of pay-per-view downloads, and many others identified in the conclusion and future work section.<|reference_end|> | arxiv | @article{vassiliadis2006the,
title={The Perceptron Algorithm: Image and Signal Decomposition, Compression,
and Analysis by Iterative Gaussian Blurring},
author={Vassilios S. Vassiliadis},
journal={arXiv preprint arXiv:cs/0601105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601105},
primaryClass={cs.CV}
} | vassiliadis2006the |
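The abstract above keeps the mechanism at a high level, so the following Python sketch is only one plausible reading of decomposition by iterative Gaussian blurring (akin to a Laplacian pyramid), with a hypothetical sigma schedule: each pass stores the detail lost at that blur scale, and the image is exactly recoverable by summing the layers.

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_decompose(image, sigmas=(1, 2, 4, 8)):
    """Split an image into detail layers via successive Gaussian blurs."""
    layers, current = [], image.astype(float)
    for sigma in sigmas:
        blurred = gaussian_filter(current, sigma)
        layers.append(current - blurred)  # detail lost at this blur scale
        current = blurred
    layers.append(current)                # coarsest residual ("base" layer)
    return layers

def reconstruct(layers):
    """Exact reconstruction: the sum of all detail layers plus the base."""
    return np.sum(layers, axis=0)

img = np.random.rand(64, 64)
assert np.allclose(reconstruct(blur_decompose(img)), img)

Tunable compression would then quantize or discard layers according to the reproduction target; that step is omitted here.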
arxiv-673785 | cs/0601106 | The `Face on Mars': a photographic approach for the search of signs of past civilizations from a macroscopic point of view, factoring long-term erosion in image reconstruction | <|reference_start|>The `Face on Mars': a photographic approach for the search of signs of past civilizations from a macroscopic point of view, factoring long-term erosion in image reconstruction: This short article presents an alternative view of high-resolution imaging from various sources, with the aim of discovering potential sites of archaeological importance, or sites that exhibit `anomalies' such that they may merit closer inspection and analysis. It is conjectured, and to a certain extent demonstrated here, that it is possible for advanced civilizations to factor erosion by natural processes into a large-scale design, so that its main features are preserved even with the passage of millions of years. Alternatively viewed, even without such intent embedded in a design left for posterity, it is possible that a gigantic construction may naturally decay in such a way that even cataclysmic (massive) events leave sufficient information intact with the passage of time, provided one changes the point of view from high-resolution images to enhanced, blurred renderings of the sites in question.<|reference_end|> | arxiv | @article{vassiliadis2006the,
title={The `Face on Mars': a photographic approach for the search of signs of
past civilizations from a macroscopic point of view, factoring long-term
erosion in image reconstruction},
author={Vassilios S. Vassiliadis},
journal={arXiv preprint arXiv:cs/0601106},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601106},
primaryClass={cs.CV}
} | vassiliadis2006the |
arxiv-673786 | cs/0601107 | Structure of Optimal Input Covariance Matrices for MIMO Systems with Covariance Feedback under General Correlated Fading | <|reference_start|>Structure of Optimal Input Covariance Matrices for MIMO Systems with Covariance Feedback under General Correlated Fading: We describe the structure of optimal input covariance matrices for a single-user multiple-input/multiple-output (MIMO) communication system with covariance feedback under general correlated fading. Our approach is based on the novel concept of the right commutant and recovers previously derived results for Kronecker product models. Conditions are derived which allow a significant simplification of the optimization problem.<|reference_end|> | arxiv | @article{bjelakovic2006structure,
title={Structure of Optimal Input Covariance Matrices for MIMO Systems with
Covariance Feedback under General Correlated Fading},
author={Igor Bjelakovic, Holger Boche},
journal={Proc. of the 2006 IEEE International Symposium on Information
Theory, ISIT 2006 Seattle, pp. 1041-1045},
year={2006},
doi={10.1109/ISIT.2006.261886},
archivePrefix={arXiv},
eprint={cs/0601107},
primaryClass={cs.IT math.IT}
} | bjelakovic2006structure |
arxiv-673787 | cs/0601108 | Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous Optimization of Speed and Memory | <|reference_start|>Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous Optimization of Speed and Memory: Lexical constraints on the input of speech and on-line handwriting systems improve the performance of such systems. A significant gain in speed can be achieved by integrating into a digraph structure the different hidden Markov models (HMMs) corresponding to the words of the relevant lexicon. This integration avoids redundant computations by sharing intermediate results between HMMs corresponding to different words of the lexicon. In this paper, we introduce a token-passing method to simultaneously compute the a posteriori probabilities of all the words of the lexicon. The coding scheme that we introduce for the tokens is optimal in the information-theoretic sense: the tokens use the minimum possible number of bits. Overall, we simultaneously optimize the execution speed and the memory requirements of the recognition system.<|reference_end|> | arxiv | @article{lifchitz2006fast,
title={Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous
Optimization of Speed and Memory},
author={Alain Lifchitz, Frederic Maire and Dominique Revuz},
journal={arXiv preprint arXiv:cs/0601108},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601108},
primaryClass={cs.CV cs.AI cs.DS}
} | lifchitz2006fast |
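As a rough illustration of the shared-computation idea behind lexically constrained decoding, the Python sketch below passes tokens over a prefix trie so that words sharing a prefix share per-frame score computations. It is not the paper's HMM integration or its bit-optimal token coding; in particular, the one-symbol-per-frame scoring function is a stand-in assumption.

def build_trie(lexicon):
    trie = {}
    for word in lexicon:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word  # end-of-word marker
    return trie

def best_word(frames, trie, score):
    """Token passing: each reached trie node keeps its best cumulative score."""
    tokens = [(trie, 0.0)]
    for frame in frames:
        next_tokens = {}
        for node, logp in tokens:
            for ch, child in node.items():
                if ch == "$":
                    continue
                cand = logp + score(frame, ch)
                if id(child) not in next_tokens or cand > next_tokens[id(child)][1]:
                    next_tokens[id(child)] = (child, cand)
        tokens = list(next_tokens.values())
    ends = [(node["$"], logp) for node, logp in tokens if "$" in node]
    return max(ends, key=lambda e: e[1]) if ends else None

trie = build_trie(["cat", "car", "dog"])
print(best_word("cat", trie, lambda f, c: 0.0 if f == c else -1.0))  # ('cat', 0.0)

Because "cat" and "car" share the node for the prefix "ca", its score is computed once per frame, which is the source of the speed gain the abstract describes.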
arxiv-673788 | cs/0601109 | Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data | <|reference_start|>Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data: Constraint Programming (CP) has proved an effective paradigm to model and solve difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising due to incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions.<|reference_end|> | arxiv | @article{yorke-smith2006certainty,
title={Certainty Closure: Reliable Constraint Reasoning with Incomplete or
Erroneous Data},
author={Neil Yorke-Smith and Carmen Gervet},
journal={ACM Transactions on Computational Logic, volume 10, number 1,
article 3, 2009},
year={2006},
doi={10.1145/1459010.1459013},
archivePrefix={arXiv},
eprint={cs/0601109},
primaryClass={cs.AI}
} | yorke-smith2006certainty |
arxiv-673789 | cs/0601110 | Mutual Information Games in Multi-user Channels with Correlated Jamming | <|reference_start|>Mutual Information Games in Multi-user Channels with Correlated Jamming: We investigate the behavior of two users and one jammer in an AWGN channel with and without fading when they participate in a non-cooperative zero-sum game, with the channel's input/output mutual information as the objective function. We assume that the jammer can eavesdrop on the channel and can use the information obtained to perform correlated jamming. Under various assumptions on the channel characteristics, and the extent of information available at the users and the jammer, we show the existence or non-existence of a simultaneously optimal set of strategies for the users and the jammer. In all the cases where the channel is non-fading, we show that the game has a solution, and the optimal strategies are Gaussian signalling for the users and linear jamming for the jammer. In fading channels, we envision each player's strategy as a power allocation function over the channel states, together with the signalling strategies at each channel state. We reduce the game solution to a set of power allocation functions for the players and show that when the jammer is uncorrelated, the game has a solution, but when the jammer is correlated, a set of simultaneously optimal power allocation functions for the users and the jammer does not always exist. In this case, we characterize the max-min user power allocation strategies and the corresponding jammer power allocation strategy.<|reference_end|> | arxiv | @article{shafiee2006mutual,
title={Mutual Information Games in Multi-user Channels with Correlated Jamming},
author={Shabnam Shafiee, Sennur Ulukus},
journal={arXiv preprint arXiv:cs/0601110},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601110},
primaryClass={cs.IT math.IT}
} | shafiee2006mutual |
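For context, in the classical single-user, non-fading baseline of this game, with a power-P Gaussian-signalling user, an uncorrelated power-J jammer, and receiver noise of variance \sigma^2, Gaussian signalling and Gaussian jamming form a saddle point with value

\[
\max_{p_X}\,\min_{p_J}\; I(X;Y) \;=\; \frac{1}{2}\log\!\left(1 + \frac{P}{\sigma^{2} + J}\right),
\]

which is the standard benchmark against which the correlated-jamming and fading results of the paper can be read.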
arxiv-673790 | cs/0601111 | Localization in Wireless Sensor Grids | <|reference_start|>Localization in Wireless Sensor Grids: This work reports experiences with using radio ranging to position sensors in a grid topology. The implementation is simple, efficient, and could be practically distributed. The paper describes an implementation and experimental results based on RSSI distance estimation. Novel techniques such as fuzzy membership functions and table lookup are used to obtain more accurate results and to simplify the computation. An accuracy of 86% is achieved in the experiment in spite of inaccurate RSSI distance estimates with errors of up to 60%.<|reference_end|> | arxiv | @article{zhang2006localization,
title={Localization in Wireless Sensor Grids},
author={Chen Zhang, Ted Herman},
journal={arXiv preprint arXiv:cs/0601111},
year={2006},
number={TR01-06},
archivePrefix={arXiv},
eprint={cs/0601111},
primaryClass={cs.DC}
} | zhang2006localization |
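Since the abstract names RSSI ranging, table lookup, and fuzzy membership functions without giving the details, the Python sketch below shows one way those pieces can combine; the calibration table and the triangular membership width are hypothetical values, not the paper's.

RSSI_TABLE = [(-40, 1.0), (-55, 2.0), (-65, 4.0), (-75, 8.0)]  # (dBm, meters)

def fuzzy_distance(rssi_dbm, width=8.0):
    """Weighted centroid of table distances under triangular memberships."""
    num = den = 0.0
    for ref_rssi, dist in RSSI_TABLE:
        mu = max(0.0, 1.0 - abs(rssi_dbm - ref_rssi) / width)  # membership
        num += mu * dist
        den += mu
    return num / den if den > 0 else None

print(fuzzy_distance(-60.0))  # 3.0, blending the 2 m and 4 m table entries

Blending neighbouring table entries in this way softens the impact of the large per-reading RSSI errors the experiment reports.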
arxiv-673791 | cs/0601112 | Complexity of the Guarded Two-Variable Fragment with Counting Quantifiers | <|reference_start|>Complexity of the Guarded Two-Variable Fragment with Counting Quantifiers: We show that the finite satisfiability problem for the guarded two-variable fragment with counting quantifiers is in EXPTIME. The method employed also yields a simple proof of a result recently obtained by Y. Kazakov, that the satisfiability problem for the guarded two-variable fragment with counting quantifiers is in EXPTIME.<|reference_end|> | arxiv | @article{pratt-hartmann2006complexity,
title={Complexity of the Guarded Two-Variable Fragment with Counting
Quantifiers},
author={Ian Pratt-Hartmann},
journal={Journal of Logic and Computation, 17(1), 2007, pp. 133--155},
year={2006},
doi={10.1093/logcom/exl034},
archivePrefix={arXiv},
eprint={cs/0601112},
primaryClass={cs.LO cs.CC}
} | pratt-hartmann2006complexity |
arxiv-673792 | cs/0601113 | An Efficient Pseudo-Codeword Search Algorithm for Linear Programming Decoding of LDPC Codes | <|reference_start|>An Efficient Pseudo-Codeword Search Algorithm for Linear Programming Decoding of LDPC Codes: In Linear Programming (LP) decoding of a Low-Density Parity-Check (LDPC) code one minimizes a linear functional, with coefficients related to log-likelihood ratios, over a relaxation of the polytope spanned by the codewords \cite{03FWK}. In order to quantify LP decoding, and thus to describe the performance of the error-correction scheme at moderate and large signal-to-noise ratios (SNR), it is important to study the relaxed polytope and to better understand its vertices, the so-called pseudo-codewords, especially those which are neighbors of the zero codeword. In this manuscript we propose a technique to heuristically create a list of these neighbors and their distances. Our pseudo-codeword-search algorithm starts by randomly choosing the initial configuration of the noise. The configuration is modified through a sequence of discrete steps, each consisting of two sub-steps. First, one applies an LP decoder to the noise configuration, deriving a pseudo-codeword. Second, one finds a configuration of the noise equidistant from the pseudo-codeword and the zero codeword. The resulting noise configuration is used as the input for the next step. The iterations converge rapidly to a pseudo-codeword neighboring the zero codeword. Repeated many times, this procedure is characterized by the distribution function (frequency spectrum) of the pseudo-codeword effective distance. The effective distance of the coding scheme is approximated by the shortest-distance pseudo-codeword in the spectrum. The efficiency of the procedure is demonstrated on the Tanner $[155,64,20]$ code and the Margulis $p=7$ and $p=11$ codes (672 and 2640 bits long, respectively) operating over an Additive White Gaussian Noise (AWGN) channel.<|reference_end|> | arxiv | @article{chertkov2006an,
title={An Efficient Pseudo-Codeword Search Algorithm for Linear Programming
Decoding of LDPC Codes},
author={Michael Chertkov and Mikhail G. Stepanov},
journal={arXiv preprint arXiv:cs/0601113},
year={2006},
number={LA-UR-06-0124/06-6751},
archivePrefix={arXiv},
eprint={cs/0601113},
primaryClass={cs.IT cond-mat.dis-nn math.IT}
} | chertkov2006an |
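The two sub-steps lend themselves to a compact skeleton. In the Python sketch below, lp_decode stands in for an actual LP decoder over the relaxed polytope, BPSK maps bit 0 to +1, and the equidistant noise configuration is taken as the Euclidean midpoint between the signal-space images of the zero codeword and the pseudo-codeword, one natural point on the bisecting hyperplane; none of these concrete choices should be read as the authors' exact construction.

import numpy as np

def pseudo_codeword_search(lp_decode, n, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    zero_signal = np.ones(n)                    # BPSK image of the zero codeword
    y = zero_signal + rng.normal(0.0, 1.0, n)   # random initial noise configuration
    for _ in range(steps):
        p = lp_decode(y)                        # sub-step 1: LP decoding
        p_signal = 1.0 - 2.0 * np.asarray(p)    # map [0,1]-valued coordinates to signals
        y = 0.5 * (zero_signal + p_signal)      # sub-step 2: equidistant point
    return p, float(np.linalg.norm(zero_signal - p_signal))

Repeating the search from many random seeds and histogramming the returned distances gives the frequency spectrum described in the abstract.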
arxiv-673793 | cs/0601114 | Efficient Query Answering over Conceptual Schemas of Relational Databases : Technical Report | <|reference_start|>Efficient Query Answering over Conceptual Schemas of Relational Databases : Technical Report: We develop a query answering system in which the core idea is query answering by rewriting. For this purpose we extend the DL DL-Lite with the ability to support n-ary relations, obtaining the DL DLR-Lite, which is still polynomial in the size of the data. We devise a flexible way of mapping the conceptual level to the relational level, which provides users with an SQL-like query language over the conceptual schema. The rewriting technique adds value to conventional query answering techniques, allowing users to formulate simpler queries, with the ability to infer additional information that was not stated explicitly in the user query. The formalization of the conceptual schema and the developed reasoning technique allow checking for consistency between the database and the conceptual schema, thus improving the trustworthiness of the information system.<|reference_end|> | arxiv | @article{simkus2006efficient,
title={Efficient Query Answering over Conceptual Schemas of Relational
Databases : Technical Report},
author={Mantas Simkus, Evaldas Taroza, Lina Lubyte, Daniel Trivellato, Zivile
Norkunaite},
journal={arXiv preprint arXiv:cs/0601114},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601114},
primaryClass={cs.DB cs.LO}
} | simkus2006efficient |
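A minimal flavour of the rewriting idea, restricted to atomic concept inclusions (a tiny fragment of what DLR-Lite supports), can be given in a few lines of Python; the axioms here are invented examples.

def rewrite(concept, subclass_axioms):
    """All concept names whose instances must answer a query atom over `concept`."""
    result, frontier = {concept}, [concept]
    while frontier:
        c = frontier.pop()
        for sub, sup in subclass_axioms:
            if sup == c and sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

axioms = [("Professor", "Teacher"), ("Lecturer", "Teacher")]
print(rewrite("Teacher", axioms))  # {'Teacher', 'Professor', 'Lecturer'}

The rewritten union of queries is then evaluated directly over the relational database, which is what keeps query answering polynomial in the size of the data.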
arxiv-673794 | cs/0601115 | Decision Making with Side Information and Unbounded Loss Functions | <|reference_start|>Decision Making with Side Information and Unbounded Loss Functions: We consider the problem of decision-making with side information and unbounded loss functions. Inspired by the probably approximately correct (PAC) learning model, we use a slightly different model that incorporates the notion of side information in a more generic form, making it applicable to a broader class of applications, including parameter estimation and system identification. We address sufficient conditions for consistent decision-making with exponential convergence behavior. In this regard, besides a certain condition on the growth function of the class of loss functions, it suffices that the class of loss functions be dominated by a measurable function whose exponential Orlicz expectation is uniformly bounded over the probabilistic model. The decay exponent, decay constant, and sample complexity are discussed. Example applications to the method of moments, maximum likelihood estimation, and system identification are also illustrated.<|reference_end|> | arxiv | @article{fozunbal2006decision,
title={Decision Making with Side Information and Unbounded Loss Functions},
author={Majid Fozunbal and Ton Kalker},
journal={arXiv preprint arXiv:cs/0601115},
year={2006},
number={HPL-2006-17},
archivePrefix={arXiv},
eprint={cs/0601115},
primaryClass={cs.LG cs.IT math.IT}
} | fozunbal2006decision |
arxiv-673795 | cs/0601116 | A unifying framework for seed sensitivity and its application to subset seeds | <|reference_start|>A unifying framework for seed sensitivity and its application to subset seeds: We propose a general approach to compute the seed sensitivity, that can be applied to different definitions of seeds. It treats separately three components of the seed sensitivity problem -- a set of target alignments, an associated probability distribution, and a seed model -- that are specified by distinct finite automata. The approach is then applied to a new concept of subset seeds for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach, and can then be used in similarity search producing better results than ordinary spaced seeds.<|reference_end|> | arxiv | @article{kucherov2006a,
title={A unifying framework for seed sensitivity and its application to subset
seeds},
author={Gregory Kucherov (LIFL), Laurent No'e (LIFL), Mihkail Roytberg (LIFL)},
journal={Journal of Bioinformatics and Computational Biology 4 (2006) 2, pp
553--569},
year={2006},
doi={10.1142/S0219720006001977},
archivePrefix={arXiv},
eprint={cs/0601116},
primaryClass={cs.DS q-bio.QM}
} | kucherov2006a |
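As a toy instance of the framework's three components (fixed-length target alignments, an i.i.d. Bernoulli probability model, and a spaced-seed hit criterion), the Python sketch below computes sensitivity by dynamic programming over a naive suffix-based automaton whose state is the last span-1 alignment bits; the paper's automaton constructions are far more compact, and the '#'/'-' seed notation is an assumption.

from functools import lru_cache

def sensitivity(seed, length, p):
    """P(seed hits a random 0/1 alignment of `length` bits; '#' = must-match)."""
    span = len(seed)
    care = [i for i, c in enumerate(seed) if c == "#"]

    @lru_cache(maxsize=None)
    def f(remaining, suffix):
        # P(at least one hit among the remaining positions), given that
        # `suffix` holds the last span-1 generated alignment bits
        if remaining == 0:
            return 0.0
        total = 0.0
        for bit, pr in ((1, p), (0, 1.0 - p)):
            window = suffix + (bit,)
            if len(window) >= span and all(window[-span:][i] == 1 for i in care):
                total += pr  # a hit ends here: success whatever follows
            else:
                total += pr * f(remaining - 1, window[-(span - 1):] if span > 1 else ())
        return total

    return f(length, ())

print(sensitivity("###", 3, 0.9))    # contiguous seed: 0.9 ** 3
print(sensitivity("##-#", 64, 0.7))  # a spaced seed over a longer alignment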
arxiv-673796 | cs/0601117 | Finding Cliques of a Graph using Prime Numbers | <|reference_start|>Finding Cliques of a Graph using Prime Numbers: This paper proposes a new algorithm for finding maximal cliques in simple undirected graphs using the theory of prime numbers. A novel approach using prime numbers is employed to find cliques, and the paper ends with a discussion of the algorithm.<|reference_end|> | arxiv | @article{kulkarni2006finding,
title={Finding Cliques of a Graph using Prime Numbers},
author={Dhananjay D. Kulkarni, Shekhar Verma, Prashant},
journal={arXiv preprint arXiv:cs/0601117},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601117},
primaryClass={cs.DS}
} | kulkarni2006finding |
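The abstract does not reveal the construction, so the Python sketch below is only a guess at the general flavour of prime-number encodings, not the paper's algorithm: each vertex is labelled with a distinct prime, a vertex subset is represented by the product of its labels, and membership is tested by divisibility.

from itertools import combinations

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # one distinct prime per vertex

def encode(subset):
    """Represent a set of vertex indices as a single integer."""
    code = 1
    for v in subset:
        code *= PRIMES[v]
    return code

def members(code):
    """Recover the vertex set by trial division against the labels."""
    return [v for v, prm in enumerate(PRIMES) if code % prm == 0]

def is_clique(code, adj):
    """Check pairwise adjacency of the encoded vertex set."""
    vs = members(code)
    return all(adj[u][v] for u, v in combinations(vs, 2))

adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # a triangle
print(is_clique(encode({0, 1, 2}), adj))  # True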
arxiv-673797 | cs/0601118 | A Formal Architecture-Centric Model-Driven Approach for the Automatic Generation of Grid Applications | <|reference_start|>A Formal Architecture-Centric Model-Driven Approach for the Automatic Generation of Grid Applications: This paper discusses the concept of model-driven software engineering applied to the Grid application domain. As an extension to this concept, the approach described here attempts to combine the formal architecture-centric and model-driven paradigms. It is commonly recognized that Grid systems have seldom been designed using formal techniques, although past experience has shown the advantages of such techniques. This paper advocates a formal engineering approach to Grid system development in an effort to contribute to the rigorous development of Grid software architectures. This approach addresses quality of service and cross-platform development by applying the model-driven paradigm to a formal architecture-centric engineering method. This combination benefits from the descriptive power of formal semantics in addition to model-based transformations. The resulting novel combined concept promotes the re-use of design models and facilitates developments in Grid computing.<|reference_end|> | arxiv | @article{manset2006a,
title={A Formal Architecture-Centric Model-Driven Approach for the Automatic
Generation of Grid Applications},
author={David Manset, Herve Verjus, Richard McClatchey, Flavio Oquendo},
journal={arXiv preprint arXiv:cs/0601118},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601118},
primaryClass={cs.SE}
} | manset2006a |
arxiv-673798 | cs/0601119 | Engineering Conceptual Data Models from Domain Ontologies: A Critical Evaluation | <|reference_start|>Engineering Conceptual Data Models from Domain Ontologies: A Critical Evaluation: This paper studies the differences and similarities between domain ontologies and conceptual data models, and the role that ontologies can play in establishing conceptual data models during the process of information systems development. A mapping algorithm has been proposed and embedded in a special-purpose Transformation Engine to generate a conceptual data model from a given domain ontology. Both quantitative and qualitative methods have been adopted to critically evaluate this new approach. In addition, this paper focuses on evaluating the quality of the generated conceptual data model elements using the Bunge-Wand-Weber and OntoClean ontologies. The results of this evaluation indicate that the generated conceptual data model provides a high degree of accuracy in identifying the substantial domain entities, along with their attributes and relationships, as derived from the consensual semantics of domain knowledge. The results are encouraging and support the potential role that this approach can play in the process of information system development.<|reference_end|> | arxiv | @article{el-ghalayini2006engineering,
title={Engineering Conceptual Data Models from Domain Ontologies: A Critical
Evaluation},
author={Haya El-Ghalayini, Mohammed Odeh, & Richard McClatchey},
journal={arXiv preprint arXiv:cs/0601119},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601119},
primaryClass={cs.SE}
} | el-ghalayini2006engineering |
arxiv-673799 | cs/0601120 | On The Minimum Mean-Square Estimation Error of the Normalized Sum of Independent Narrowband Waves in the Gaussian Channel | <|reference_start|>On The Minimum Mean-Square Estimation Error of the Normalized Sum of Independent Narrowband Waves in the Gaussian Channel: The minimum mean-square error of the estimation of a signal observed at the output of an additive white Gaussian noise (WGN) channel is analyzed. It is assumed that the channel input signal is composed of a (normalized) sum of N narrowband, mutually independent waves. It is shown that as N goes to infinity, for any fixed signal-energy-to-noise-energy ratio (no matter how large), both the causal minimum mean-square error (CMMSE) and the non-causal minimum mean-square error (MMSE) converge to the signal energy at a rate proportional to 1/N.<|reference_end|> | arxiv | @article{binia2006on,
title={On The Minimum Mean-Square Estimation Error of the Normalized Sum of
Independent Narrowband Waves in the Gaussian Channel},
author={Jacob Binia},
journal={arXiv preprint arXiv:cs/0601120},
year={2006},
archivePrefix={arXiv},
eprint={cs/0601120},
primaryClass={cs.IT math.IT}
} | binia2006on |
arxiv-673800 | cs/0601121 | A Multi-Relational Network to Support the Scholarly Communication Process | <|reference_start|>A Multi-Relational Network to Support the Scholarly Communication Process: The general purpose of the scholarly communication process is to support the creation and dissemination of ideas within the scientific community. At a finer granularity, there exist multiple stages which, when confronted by a member of the community, have different requirements and therefore different solutions. In order to take a researcher's idea from an initial inspiration to a community resource, the scholarly communication infrastructure may be required to 1) provide a scientist with initial seed ideas; 2) form a team of well-suited collaborators; 3) locate the most appropriate venue in which to publish the formalized idea; 4) determine the most appropriate peers to review the manuscript; and 5) disseminate the end product to the most interested members of the community. Through the various delineations of this process, the requirements of each stage are tied solely to the multi-functional resources of the community: its researchers, its journals, and its manuscripts. It is within the collection of these resources and their inherent relationships that the solutions to scholarly communication are to be found. This paper describes an associative network composed of multiple scholarly artifacts that can be used as a medium for supporting the scholarly communication process.<|reference_end|> | arxiv | @article{rodriguez2006a,
title={A Multi-Relational Network to Support the Scholarly Communication
Process},
author={Marko A. Rodriguez},
journal={International Journal of Public Information Systems, volume 2007,
issue 1, pp. 13-29},
year={2006},
number={LA-UR-06-2416},
archivePrefix={arXiv},
eprint={cs/0601121},
primaryClass={cs.DL cs.AI cs.IR}
} | rodriguez2006a |