corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses 1) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100) |
---|---|---|---|---|---|---|
arxiv-674101 | cs/0604056 | A Short Note on The Volume of Hypersphere | <|reference_start|>A Short Note on The Volume of Hypersphere: In this note, a new method for deriving the volume of the hypersphere is proposed using probability theory. The explicit expression for the multiple convolution of the probability density functions we need is very complicated, but we do not require its whole explicit expression here. Only a part of the information is needed, and this fact makes it possible to derive the general expression for the volume of the hypersphere. We also comment on the paradox of the hypersphere which was introduced by R. W. Hamming.<|reference_end|> | arxiv | @article{ham2006a,
title={A Short Note on The Volume of Hypersphere},
author={Woonchul Ham, Kemin Zhou},
journal={arXiv preprint arXiv:cs/0604056},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604056},
primaryClass={cs.IT math.IT}
} | ham2006a |
arxiv-674102 | cs/0604057 | A New Fault-Tolerant M-network and its Analysis | <|reference_start|>A New Fault-Tolerant M-network and its Analysis: This paper introduces a new class of efficient interconnection networks, called M-graphs, for large multi-processor systems. The concept of M-matrices and M-graphs is an extension of Mn-matrices and Mn-graphs. We analyze these M-graphs with regard to their suitability for large multi-processor systems. An (p,N) M-graph consists of N nodes, where p is the degree of each node. The topology is found to have many attractive features, prominent among them the capability of maximal fault-tolerance, high density, and constant diameter. These combinatorial structures exhibit properties such as symmetry and an inter-relation between the nodes and the degree of the graph, which can be utilized for interconnection networks. However, many properties of these mathematical and graphical structures remain unexplored, and the aim of this paper is to study and analyze some of the properties of these M-graphs and to explore their applications in networks and multi-processor systems.<|reference_end|> | arxiv | @article{mohan2006a,
title={A New Fault-Tolerant M-network and its Analysis},
author={R.N.Mohan and P.T.Kulkarni},
journal={arXiv preprint arXiv:cs/0604057},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604057},
primaryClass={cs.IT math.IT}
} | mohan2006a |
arxiv-674103 | cs/0604058 | Solving Classical String Problems on Compressed Texts | <|reference_start|>Solving Classical String Problems on Compressed Texts: Here we study the complexity of string problems as a function of the size of a program that generates the input. We consider straight-line programs (SLP), since all algorithms on SLP-generated strings can be applied to processing LZ-compressed texts. The main result is a new algorithm for pattern matching when both a text T and a pattern P are presented by SLPs (the so-called fully compressed pattern matching problem). We show how to find the first occurrence, count all occurrences, and check whether any given position is an occurrence or not, in time O(n^2m), where m and n are the sizes of the straight-line programs generating P and T, respectively. We then present polynomial algorithms for computing the fingerprint table and a compressed representation of all covers (for the first time) and for finding the periods of a given compressed string (our algorithm is faster than previously known ones). On the other hand, we show that computing the Hamming distance between two SLP-generated strings is NP- and coNP-hard.<|reference_end|> | arxiv | @article{lifshits2006solving,
title={Solving Classical String Problems on Compressed Texts},
author={Yury Lifshits},
journal={arXiv preprint arXiv:cs/0604058},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604058},
primaryClass={cs.DS cs.CC}
} | lifshits2006solving |
arxiv-674104 | cs/0604059 | Inner and Outer Rounding of Boolean Operations on Lattice Polygonal Regions | <|reference_start|>Inner and Outer Rounding of Boolean Operations on Lattice Polygonal Regions: Robustness problems due to substituting exact computation on real numbers with rounded floating-point arithmetic are often an obstacle to obtaining practical implementations of geometric algorithms. While the adoption of the "exact computation paradigm" [Yap and Dube] gives a satisfactory solution to this kind of problem for purely combinatorial algorithms, it does not solve in practice the case of algorithms that cascade the construction of new geometric objects. In this report, we consider the problem of rounding the intersection of two polygonal regions onto the integer lattice with inclusion properties. Namely, given two polygonal regions A and B having their vertices on the integer lattice, the inner and outer rounding modes construct two polygonal regions with integer vertices which are respectively included in and contain the true intersection. We also prove interesting results on the Hausdorff distance, the size, and the convexity of these polygonal regions.<|reference_end|> | arxiv | @article{devillers2006inner,
title={Inner and Outer Rounding of Boolean Operations on Lattice Polygonal
Regions},
author={Olivier Devillers (INRIA Sophia Antipolis), Philippe Guigue (INRIA
Sophia Antipolis)},
journal={arXiv preprint arXiv:cs/0604059},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604059},
primaryClass={cs.CG}
} | devillers2006inner |
arxiv-674105 | cs/0604060 | Polynomial Time Nondimensionalisation of Ordinary Differential Equations via their Lie Point Symmetries | <|reference_start|>Polynomial Time Nondimensionalisation of Ordinary Differential Equations via their Lie Point Symmetries: Lie group theory states that knowledge of an $m$-parameter solvable group of symmetries of a system of ordinary differential equations allows one to reduce the number of equations by $m$. We apply this principle by finding dilatations and translations that are Lie point symmetries of the considered ordinary differential system. By rewriting the original problem in a set of coordinates invariant under these symmetries, one can reduce the number of parameters involved. This process is classically called nondimensionalisation in dimensional analysis. We present an algorithm based on this standpoint and show that its arithmetic complexity is polynomial in the size of the input.<|reference_end|> | arxiv | @article{hubert2006polynomial,
title={Polynomial Time Nondimensionalisation of Ordinary Differential Equations
via their Lie Point Symmetries},
author={{\'E}velyne Hubert (INRIA Sophia Antipolis), Alexandre Sedoglavic (INRIA
Futurs, LIFL)},
journal={arXiv preprint arXiv:cs/0604060},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604060},
primaryClass={cs.SC}
} | hubert2006polynomial |
arxiv-674106 | cs/0604061 | Effect of E-printing on Citation Rates in Astronomy and Physics | <|reference_start|>Effect of E-printing on Citation Rates in Astronomy and Physics: In this report we examine the change in citation behavior since the introduction of the arXiv e-print repository (Ginsparg, 2001). It has been observed that papers that initially appear as arXiv e-prints get cited more than papers that do not (Lawrence, 2001; Brody et al., 2004; Schwarz & Kennicutt, 2004; Kurtz et al., 2005a, Metcalfe, 2005). Using the citation statistics from the NASA-Smithsonian Astrophysics Data System (ADS; Kurtz et al., 1993, 2000), we confirm the findings from other studies, we examine the average citation rate to e-printed papers in the Astrophysical Journal, and we show that for a number of major astronomy and physics journals the most important papers are submitted to the arXiv e-print repository first.<|reference_end|> | arxiv | @article{henneken2006effect,
title={Effect of E-printing on Citation Rates in Astronomy and Physics},
author={Edwin A. Henneken, Michael J. Kurtz, Guenther Eichhorn, Alberto
Accomazzi, Carolyn Grant, Donna Thompson, Stephen S. Murray},
journal={arXiv preprint arXiv:cs/0604061},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604061},
primaryClass={cs.DL astro-ph}
} | henneken2006effect |
arxiv-674107 | cs/0604062 | Biologically Inspired Hierarchical Model for Feature Extraction and Localization | <|reference_start|>Biologically Inspired Hierarchical Model for Feature Extraction and Localization: Feature extraction and matching are among the central problems of computer vision. It is inefficient to search for features over all locations and scales. Neurophysiological evidence shows that, to locate objects in a digital image, the human visual system employs visual attention to a specific object while ignoring others. The brain also has a mechanism to search from coarse to fine. In this paper, we present a feature extractor and an associated hierarchical searching model to simulate such processes. With the hierarchical representation of the object, coarse scanning is done through matching at the larger scale and precise localization is conducted through matching at the smaller scale. Experimental results demonstrate the effectiveness and efficiency of the proposed model in localizing features.<|reference_end|> | arxiv | @article{wu2006biologically,
title={Biologically Inspired Hierarchical Model for Feature Extraction and
Localization},
author={Liang Wu},
journal={arXiv preprint arXiv:cs/0604062},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604062},
primaryClass={cs.CV}
} | wu2006biologically |
arxiv-674108 | cs/0604063 | Golden Space-Time Trellis Coded Modulation | <|reference_start|>Golden Space-Time Trellis Coded Modulation: In this paper, we present a concatenated coding scheme for a high-rate $2\times 2$ multiple-input multiple-output (MIMO) system over slow fading channels. The inner code is the Golden code \cite{Golden05} and the outer code is a trellis code. Set partitioning of the Golden code is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding. In order to compute the branch metrics, a lattice sphere decoder is used. The general framework for code optimization is given. The performance of the proposed concatenated scheme is evaluated by simulation. It is shown that the proposed scheme achieves significant performance gains over the uncoded Golden code.<|reference_end|> | arxiv | @article{hong2006golden,
title={Golden Space-Time Trellis Coded Modulation},
author={Yi Hong, Emanuele Viterbo, and Jean-Claude Belfiore},
journal={arXiv preprint arXiv:cs/0604063},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604063},
primaryClass={cs.IT math.IT}
} | hong2006golden |
arxiv-674109 | cs/0604064 | Quantum Fuzzy Sets: Blending Fuzzy Set Theory and Quantum Computation | <|reference_start|>Quantum Fuzzy Sets: Blending Fuzzy Set Theory and Quantum Computation: In this article we investigate a way in which quantum computing can be used to extend the class of fuzzy sets. The core idea is to see states of a quantum register as characteristic functions of quantum fuzzy subsets of a given set. As the real unit interval is embedded in the Bloch sphere, every fuzzy set is automatically a quantum fuzzy set. However, a generic quantum fuzzy set can be seen as a (possibly entangled) superposition of many fuzzy sets at once, offering new opportunities for modeling uncertainty. After introducing the main framework of quantum fuzzy set theory, we analyze the standard operations of fuzzification and defuzzification from our viewpoint. We conclude this preliminary paper with a list of possible applications of quantum fuzzy sets to pattern recognition, as well as future directions of pure research in quantum fuzzy set theory.<|reference_end|> | arxiv | @article{mannucci2006quantum,
title={Quantum Fuzzy Sets: Blending Fuzzy Set Theory and Quantum Computation},
author={Mirco A. Mannucci},
journal={arXiv preprint arXiv:cs/0604064},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604064},
primaryClass={cs.LO cs.AI}
} | mannucci2006quantum |
arxiv-674110 | cs/0604065 | Unifying two Graph Decompositions with Modular Decomposition | <|reference_start|>Unifying two Graph Decompositions with Modular Decomposition: We introduce umodules, a generalisation of the notion of graph module. The theory we develop captures, among others, undirected graphs, tournaments, digraphs, and $2$-structures. We show that, under some axioms, a unique decomposition tree exists for umodules. Polynomial-time algorithms are provided for the non-trivial umodule test, maximal umodule computation, and decomposition tree computation when the tree exists. Our results unify many known decompositions, such as the modular and bi-join decompositions of graphs, and yield a new decomposition of tournaments.<|reference_end|> | arxiv | @article{bui-xuan2006unifying,
title={Unifying two Graph Decompositions with Modular Decomposition},
author={Binh-Minh Bui-Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA)},
journal={In Lecture Notes in Computer Science - International Symposium
on Algorithms and Computation (ISAAC), Sendai, Japan (2007)},
year={2006},
doi={10.1007/978-3-540-77120-3},
archivePrefix={arXiv},
eprint={cs/0604065},
primaryClass={cs.DS}
} | bui-xuan2006unifying |
arxiv-674111 | cs/0604066 | Univariate polynomial real root isolation: Continued Fractions revisited | <|reference_start|>Univariate polynomial real root isolation: Continued Fractions revisited: We present algorithmic, complexity and implementation results concerning real root isolation of integer univariate polynomials using the continued fraction expansion of real algebraic numbers. One motivation is to explain the method's good performance in practice. We improve the previously known bound by a factor of $d \tau$, where $d$ is the polynomial degree and $\tau$ bounds the coefficient bitsize, thus matching the current record complexity for real root isolation by exact methods. Namely, the complexity bound is $\sOB(d^4 \tau^2)$ using the standard bound on the expected bitsize of the integers in the continued fraction expansion. We show how to compute the multiplicities within the same complexity and extend the algorithm to non square-free polynomials. Finally, we present an efficient open-source \texttt{C++} implementation in the algebraic library \synaps, and illustrate its efficiency as compared to other available software. We use polynomials with coefficient bitsize up to 8000 and degree up to 1000.<|reference_end|> | arxiv | @article{tsigaridas2006univariate,
title={Univariate polynomial real root isolation: Continued Fractions revisited},
author={Elias P. Tsigaridas and Ioannis Z. Emiris},
journal={arXiv preprint arXiv:cs/0604066},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604066},
primaryClass={cs.SC cs.CC cs.MS}
} | tsigaridas2006univariate |
arxiv-674112 | cs/0604067 | Certain t-partite graphs | <|reference_start|>Certain t-partite graphs: By making use of the generalized concept of orthogonality in Latin squares, certain t-partite graphs have been constructed, and a suggestion for a network system and some applications have been made.<|reference_end|> | arxiv | @article{mohan2006certain,
title={Certain t-partite graphs},
author={R.N.Mohan, Moon Ho Lee, Subhash Pokrel},
journal={arXiv preprint arXiv:cs/0604067},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604067},
primaryClass={cs.DM}
} | mohan2006certain |
arxiv-674113 | cs/0604068 | Unbiased Matrix Rounding | <|reference_start|>Unbiased Matrix Rounding: We show several ways to round a real matrix to an integer one such that the rounding errors in all rows and columns, as well as in the whole matrix, are less than one. This is a classical problem with applications in many fields, in particular statistics. We improve earlier solutions of different authors in two ways. First, for rounding matrices of size $m \times n$, we improve on the previous runtime of $O((m n)^2)$. Second, our roundings also have a rounding error of less than one in all initial intervals of rows and columns. Consequently, arbitrary intervals have an error of at most two. This is particularly useful in the statistics application of controlled rounding. The same result can be obtained via (dependent) randomized rounding. This has the additional advantage that the rounding is unbiased, that is, for all entries $y_{ij}$ of our rounding, we have $E(y_{ij}) = x_{ij}$, where $x_{ij}$ is the corresponding entry of the input matrix.<|reference_end|> | arxiv | @article{doerr2006unbiased,
title={Unbiased Matrix Rounding},
author={Benjamin Doerr and Tobias Friedrich and Christian Klein and Ralf
Osbild},
journal={arXiv preprint arXiv:cs/0604068},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604068},
primaryClass={cs.DS cs.DM}
} | doerr2006unbiased |
arxiv-674114 | cs/0604069 | Universal decoding with an erasure option | <|reference_start|>Universal decoding with an erasure option: Motivated by applications of rateless coding, decision feedback, and ARQ, we study the problem of universal decoding for unknown channels in the presence of an erasure option. Specifically, we harness the competitive minimax methodology developed in earlier studies in order to derive a universal version of Forney's classical erasure/list decoder, which, in the erasure case, optimally trades off between the probability of erasure and the probability of undetected error. The proposed universal erasure decoder guarantees universal achievability of a certain fraction $\xi$ of the optimum error exponents of these probabilities (in a sense to be made precise in the sequel). A single-letter expression for $\xi$, which depends solely on the coding rate and the threshold, is provided. The example of the binary symmetric channel is studied in full detail, and some conclusions are drawn.<|reference_end|> | arxiv | @article{merhav2006universal,
title={Universal decoding with an erasure option},
author={Neri Merhav and Meir Feder},
journal={arXiv preprint arXiv:cs/0604069},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604069},
primaryClass={cs.IT math.IT}
} | merhav2006universal |
arxiv-674115 | cs/0604070 | Retraction and Generalized Extension of Computing with Words | <|reference_start|>Retraction and Generalized Extension of Computing with Words: Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a formal model of computing with values. Motivated by Zadeh's paradigm of computing with words rather than numbers, Ying proposed a kind of fuzzy automata, whose input alphabet consists of all fuzzy subsets of a set of symbols, as a formal model of computing with all words. In this paper, we introduce a somewhat general formal model of computing with (some special) words. The new features of the model are that the input alphabet only comprises some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy transition function can be specified arbitrarily. By employing the methodology of fuzzy control, we establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling fuzzy inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with words. Some algebraic properties of retractions and generalized extensions are addressed as well.<|reference_end|> | arxiv | @article{cao2006retraction,
title={Retraction and Generalized Extension of Computing with Words},
author={Yongzhi Cao, Mingsheng Ying, and Guoqing Chen},
journal={IEEE Transactions on Fuzzy Systems, vol. 15(6): 1238-1250, Dec.
2007},
year={2006},
doi={10.1109/TED.2007.893191},
archivePrefix={arXiv},
eprint={cs/0604070},
primaryClass={cs.AI}
} | cao2006retraction |
arxiv-674116 | cs/0604071 | Distributed Metadata with the AMGA Metadata Catalog | <|reference_start|>Distributed Metadata with the AMGA Metadata Catalog: Catalog Services play a vital role in Data Grids by allowing users and applications to discover and locate the data needed. On large Data Grids, with hundreds of geographically distributed sites, centralized Catalog Services do not provide the required scalability, performance or fault-tolerance. In this article, we start by presenting and discussing the general requirements on Grid Catalogs of applications being developed by the EGEE user community. This provides the motivation for the second part of the article, where we present the replication and distribution mechanisms we have designed and implemented in the AMGA Metadata Catalog, which is part of the gLite software stack being developed for the EGEE project. Implementing these mechanisms in the catalog itself has the advantages of not requiring any special support from the relational database back-end, of being database independent, and of allowing the mechanisms to be tailored to the specific requirements and characteristics of Metadata Catalogs.<|reference_end|> | arxiv | @article{santos2006distributed,
title={Distributed Metadata with the AMGA Metadata Catalog},
author={Nuno Santos, Birger Koblitz},
journal={arXiv preprint arXiv:cs/0604071},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604071},
primaryClass={cs.DC cs.DB}
} | santos2006distributed |
arxiv-674117 | cs/0604072 | Complexity and Philosophy | <|reference_start|>Complexity and Philosophy: The science of complexity is based on a new way of thinking that stands in sharp contrast to the philosophy underlying Newtonian science, which is based on reductionism, determinism, and objective knowledge. This paper reviews the historical development of this new world view, focusing on its philosophical foundations. Determinism was challenged by quantum mechanics and chaos theory. Systems theory replaced reductionism by a scientifically based holism. Cybernetics and postmodern social science showed that knowledge is intrinsically subjective. These developments are being integrated under the header of "complexity science". Its central paradigm is the multi-agent system. Agents are intrinsically subjective and uncertain about their environment and future, but out of their local interactions, a global organization emerges. Although different philosophers, and in particular the postmodernists, have voiced similar ideas, the paradigm of complexity still needs to be fully assimilated by philosophy. This will throw a new light on old philosophical issues such as relativism, ethics and the role of the subject.<|reference_end|> | arxiv | @article{heylighen2006complexity,
title={Complexity and Philosophy},
author={Francis Heylighen, Paul Cilliers, and Carlos Gershenson},
journal={In Bogg, J. and R. Geyer (eds.) Complexity, Science and Society.
Radcliffe Publishing, Oxford. 2007.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604072},
primaryClass={cs.CC cond-mat.other}
} | heylighen2006complexity |
arxiv-674118 | cs/0604073 | Octave-GTK: A GTK binding for GNU Octave | <|reference_start|>Octave-GTK: A GTK binding for GNU Octave: This paper discusses the problems faced with interoperability between two programming languages, in the context of GNU Octave and the GTK API written in C, in order to provide the GTK API on Octave. Octave-GTK is the fusion of two different APIs: one exported by GNU Octave [a scientific computing tool] and the other by GTK [a GUI toolkit]; this enables one to use GTK primitives within GNU Octave to build graphical front ends, while at the same time using the Octave engine for its number-crunching power. This paper illustrates our implementation of the binding logic and shows the results extended to various other libraries using the same base code generator. Also shown are methods of code generation, binding automation, and the niche we plan to fill given the absence of a GUI in Octave. The advantages, feasibility, and problems faced in the process are also discussed.<|reference_end|> | arxiv | @article{annamalai2006octave-gtk:,
title={Octave-GTK: A GTK binding for GNU Octave},
author={Muthiah Annamalai, Hemant Kumar, Leela Velusamy},
journal={arXiv preprint arXiv:cs/0604073},
year={2006},
number={Octave2006/02},
archivePrefix={arXiv},
eprint={cs/0604073},
primaryClass={cs.SE}
} | annamalai2006octave-gtk: |
arxiv-674119 | cs/0604074 | Information and multiaccess interference in a complexity-constrained vector channel | <|reference_start|>Information and multiaccess interference in a complexity-constrained vector channel: Rodrigo de Miguel et al 2007 J. Phys. A: Math. Theor. 40 5241-5260: A noisy vector channel operating under a strict complexity constraint at the receiver is introduced. According to this constraint, detected bits, obtained by performing hard decisions directly on the channel's matched filter output, must be the same as the transmitted binary inputs. An asymptotic analysis is carried out using mathematical tools imported from the study of neural networks, and it is shown that, under a bounded noise assumption, such complexity-constrained channel exhibits a non-trivial Shannon-theoretic capacity. It is found that performance relies on rigorous interference-based multiuser cooperation at the transmitter and that this cooperation is best served when all transmitters use the same amplitude.<|reference_end|> | arxiv | @article{de miguel2006information,
title={Information and multiaccess interference in a complexity-constrained
vector channel},
author={Rodrigo de Miguel, Ori Shental, Ralf R. Muller and Ido Kanter},
journal={arXiv preprint arXiv:cs/0604074},
year={2006},
doi={10.1088/1751-8113/40/20/002},
archivePrefix={arXiv},
eprint={cs/0604074},
primaryClass={cs.IT math.IT}
} | de miguel2006information |
arxiv-674120 | cs/0604075 | Naming Games in Spatially-Embedded Random Networks | <|reference_start|>Naming Games in Spatially-Embedded Random Networks: We investigate a prototypical agent-based model, the Naming Game, on random geometric networks. The Naming Game is a minimal model, employing local communications that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the Naming Games on random geometric graphs, local communications being local broadcasts, serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially-embedded autonomous agents. We also present results for the case when a small density of long-range communication links are added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement.<|reference_end|> | arxiv | @article{lu2006naming,
title={Naming Games in Spatially-Embedded Random Networks},
author={Qiming Lu, G. Korniss, and Boleslaw K. Szymanski},
journal={Proceedings of the 2006 American Association for Artificial
Intelligence Fall Symposium Series, Interaction and Emergent Phenomena in
Societies of Agents (AAAI Press, Menlo Park, CA 2006) pp. 148-155},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604075},
primaryClass={cs.MA cond-mat.stat-mech cs.AI}
} | lu2006naming |
arxiv-674121 | cs/0604076 | Semantically Correct Query Answers in the Presence of Null Values | <|reference_start|>Semantically Correct Query Answers in the Presence of Null Values: For several reasons a database may not satisfy a given set of integrity constraints (ICs), but most likely most of the information in it is still consistent with those ICs and can be retrieved when queries are answered. Consistent answers to queries with respect to a set of ICs have been characterized as answers that can be obtained from every possible minimally repaired consistent version of the original database. In this paper we consider databases that contain null values and are also repaired, if necessary, using null values. For this purpose, we first propose a precise semantics for IC satisfaction in a database with null values that is compatible with the way null values are treated in commercial database management systems. Next, a precise notion of repair is introduced that privileges the introduction of null values when repairing foreign key constraints, in such a way that these new values do not create an infinite cycle of new inconsistencies. Finally, we analyze how to specify this kind of repair of a database that contains null values using disjunctive logic programs with stable model semantics.<|reference_end|> | arxiv | @article{bravo2006semantically,
title={Semantically Correct Query Answers in the Presence of Null Values},
author={Loreto Bravo and Leopoldo Bertossi},
journal={arXiv preprint arXiv:cs/0604076},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604076},
primaryClass={cs.DB}
} | bravo2006semantically |
arxiv-674122 | cs/0604077 | Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic Gaussian CEO Problem | <|reference_start|>Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic Gaussian CEO Problem: We introduce a distributed source coding scheme called successive Wyner-Ziv coding. We show that any point in the rate region of the quadratic Gaussian CEO problem can be achieved via the successive Wyner-Ziv coding. The concept of successive refinement in the single source coding is generalized to the distributed source coding scenario, which we refer to as distributed successive refinement. For the quadratic Gaussian CEO problem, we establish a necessary and sufficient condition for distributed successive refinement, where the successive Wyner-Ziv coding scheme plays an important role.<|reference_end|> | arxiv | @article{chen2006successive,
title={Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic
Gaussian CEO Problem},
author={Jun Chen, Toby Berger},
journal={arXiv preprint arXiv:cs/0604077},
year={2006},
doi={10.1109/TIT.2008.917687},
archivePrefix={arXiv},
eprint={cs/0604077},
primaryClass={cs.IT math.IT}
} | chen2006successive |
arxiv-674123 | cs/0604078 | The emergence of knowledge exchange: an agent-based model of a software market | <|reference_start|>The emergence of knowledge exchange: an agent-based model of a software market: We investigate knowledge exchange among commercial organisations, the rationale behind it and its effects on the market. Knowledge exchange is known to be beneficial for industry, but in order to explain it, authors have used high level concepts like network effects, reputation and trust. We attempt to formalise a plausible and elegant explanation of how and why companies adopt information exchange and why it benefits the market as a whole when this happens. This explanation is based on a multi-agent model that simulates a market of software providers. Even though the model does not include any high-level concepts, information exchange naturally emerges during simulations as a successful profitable behaviour. The conclusions reached by this agent-based analysis are twofold: (1) A straightforward set of assumptions is enough to give rise to exchange in a software market. (2) Knowledge exchange is shown to increase the efficiency of the market.<|reference_end|> | arxiv | @article{chli2006the,
title={The emergence of knowledge exchange: an agent-based model of a software
market},
author={Maria Chli, Philippe De Wilde},
journal={arXiv preprint arXiv:cs/0604078},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604078},
primaryClass={cs.MA cs.CE}
} | chli2006the |
arxiv-674124 | cs/0604079 | Polynomial Constraint Satisfaction, Graph Bisection, and the Ising Partition Function | <|reference_start|>Polynomial Constraint Satisfaction, Graph Bisection, and the Ising Partition Function: We introduce a problem class we call Polynomial Constraint Satisfaction Problems, or PCSP. Where the usual CSPs from computer science and optimization have real-valued score functions, and partition functions from physics have monomials, PCSP has scores that are arbitrary multivariate formal polynomials, or indeed take values in an arbitrary ring. Although PCSP is much more general than CSP, remarkably, all (exact, exponential-time) algorithms we know of for 2-CSP (where each score depends on at most 2 variables) extend to 2-PCSP, at the expense of just a polynomial factor in running time. Specifically, we extend the reduction-based algorithm of Scott and Sorkin; the specialization of that approach to sparse random instances, where the algorithm runs in polynomial expected time; dynamic-programming algorithms based on tree decompositions; and the split-and-list matrix-multiplication algorithm of Williams. This gives the first polynomial-space exact algorithm more efficient than exhaustive enumeration for the well-studied problems of finding a minimum bisection of a graph, and calculating the partition function of an Ising model, and the most efficient algorithm known for certain instances of Maximum Independent Set. Furthermore, PCSP solves both optimization and counting versions of a wide range of problems, including all CSPs, and thus enables samplers including uniform sampling of optimal solutions and Gibbs sampling of all solutions.<|reference_end|> | arxiv | @article{scott2006polynomial,
title={Polynomial Constraint Satisfaction, Graph Bisection, and the Ising
Partition Function},
author={Alexander D. Scott and Gregory B. Sorkin},
journal={ACM Transactions on Algorithms, 5(4):45:1-27, October 2009.},
year={2006},
doi={10.1145/1597036.1597049},
archivePrefix={arXiv},
eprint={cs/0604079},
primaryClass={cs.DM}
} | scott2006polynomial |
arxiv-674125 | cs/0604080 | Linear-programming design and analysis of fast algorithms for Max 2-Sat and Max 2-CSP | <|reference_start|>Linear-programming design and analysis of fast algorithms for Max 2-Sat and Max 2-CSP: The class $(r,2)$-CSP, or simply Max 2-CSP, consists of constraint satisfaction problems with at most two $r$-valued variables per clause. For instances with $n$ variables and $m$ binary clauses, we present an $O(n r^{5+19m/100})$-time algorithm which is the fastest polynomial-space algorithm for many problems in the class, including Max Cut. The method also proves a treewidth bound $\tw(G) \leq (13/75+o(1))m$, which gives a faster Max 2-CSP algorithm that uses exponential space: running in time $\Ostar{2^{(13/75+o(1))m}}$, this is the fastest for most problems in Max 2-CSP. Parametrizing in terms of $n$ rather than $m$, for graphs of average degree $d$ we give a simple algorithm with running time $\Ostar{2^{(1-\frac{2}{d+1})n}}$, the fastest polynomial-space algorithm known. In combination with ``Polynomial CSPs'' introduced in a companion paper, these algorithms also allow (with an additional polynomial-factor overhead in space and time) counting and sampling, and the solution of problems like Max Bisection that escape the usual CSP framework. Linear programming is key to the design as well as the analysis of the algorithms.<|reference_end|> | arxiv | @article{scott2006linear-programming,
title={Linear-programming design and analysis of fast algorithms for Max 2-Sat
and Max 2-CSP},
author={Alexander D. Scott and Gregory B. Sorkin},
journal={Discrete Optimization, 4(3-4): 260-287, 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604080},
primaryClass={cs.DM}
} | scott2006linear-programming |
arxiv-674126 | cs/0604081 | Event Systems and Access Control | <|reference_start|>Event Systems and Access Control: We consider the interpretations of notions of access control (permissions, interdictions, obligations, and user rights) as run-time properties of information systems specified as event systems with fairness. We give proof rules for verifying that an access control policy is enforced in a system, and consider preservation of access control by refinement of event systems. In particular, refinement of user rights is non-trivial; we propose to combine low-level user rights and system obligations to implement high-level user rights.<|reference_end|> | arxiv | @article{méry2006event,
title={Event Systems and Access Control},
author={Dominique M{\'e}ry (INRIA Lorraine - LORIA), Stephan Merz (INRIA
Lorraine - LORIA)},
journal={arXiv preprint arXiv:cs/0604081},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604081},
primaryClass={cs.LO cs.CR}
} | méry2006event |
arxiv-674127 | cs/0604082 | Energy-Efficient Power and Rate Control with QoS Constraints: A Game-Theoretic Approach | <|reference_start|>Energy-Efficient Power and Rate Control with QoS Constraints: A Game-Theoretic Approach: A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility and at the same time satisfy its QoS requirements. The user's QoS constraints are specified in terms of the average source rate and average delay. The utility function considered here measures energy efficiency and the delay includes both transmission and queueing delays. The Nash equilibrium solution for the proposed non-cooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user. Using this framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are also studied.<|reference_end|> | arxiv | @article{meshkati2006energy-efficient,
title={Energy-Efficient Power and Rate Control with QoS Constraints: A
Game-Theoretic Approach},
author={Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz and Radu V. Balan},
journal={arXiv preprint arXiv:cs/0604082},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604082},
primaryClass={cs.IT math.IT}
} | meshkati2006energy-efficient |
arxiv-674128 | cs/0604083 | Optimum Asymptotic Multiuser Efficiency of Pseudo-Orthogonal Randomly Spread CDMA | <|reference_start|>Optimum Asymptotic Multiuser Efficiency of Pseudo-Orthogonal Randomly Spread CDMA: A $K$-user pseudo-orthogonal (PO) randomly spread CDMA system, equivalent to transmission over a subset of $K'\leq K$ single-user Gaussian channels, is introduced. The high signal-to-noise ratio performance of the PO-CDMA is analyzed by rigorously deriving its asymptotic multiuser efficiency (AME) in the large system limit. Interestingly, the $K'$-optimized PO-CDMA transceiver scheme yields an AME which is practically equal to 1 for system loads smaller than 0.1 and lower bounded by 1/4 for increasing loads. As opposed to the vanishing efficiency of linear multiuser detectors, the derived efficiency is comparable to the ultimate CDMA efficiency achieved for the intractable optimal multiuser detector.<|reference_end|> | arxiv | @article{shental2006optimum,
title={Optimum Asymptotic Multiuser Efficiency of Pseudo-Orthogonal Randomly
Spread CDMA},
author={Ori Shental and Ido Kanter},
journal={arXiv preprint arXiv:cs/0604083},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604083},
primaryClass={cs.IT math.IT}
} | shental2006optimum |
arxiv-674129 | cs/0604084 | A Recursive Method for Determining the One-Dimensional Submodules of Laurent-Ore Modules | <|reference_start|>A Recursive Method for Determining the One-Dimensional Submodules of Laurent-Ore Modules: We present a method for determining the one-dimensional submodules of a Laurent-Ore module. The method is based on a correspondence between hyperexponential solutions of associated systems and one-dimensional submodules. The hyperexponential solutions are computed recursively by solving a sequence of first-order ordinary matrix equations. As the recursion proceeds, the matrix equations will have constant coefficients with respect to the operators that have been considered.<|reference_end|> | arxiv | @article{li2006a,
title={A Recursive Method for Determining the One-Dimensional Submodules of
Laurent-Ore Modules},
author={Ziming Li, Michael F. Singer, Min Wu, Dabin Zheng},
journal={arXiv preprint arXiv:cs/0604084},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604084},
primaryClass={cs.SC math.CA}
} | li2006a |
arxiv-674130 | cs/0604085 | Information in Quantum Description and Gate Implementation | <|reference_start|>Information in Quantum Description and Gate Implementation: This note uses Kak's observer-reference model of quantum information, in which it is shown that qubits carry information that is $\sqrt{n}/\ln n$ times classical information, where $n$ is the number of components in the measurement system, to analyze information processing in quantum gates. The obverse side of this exponential nature of quantum information is that the computational complexity of implementing unconditionally reliable quantum gates is also exponential.<|reference_end|> | arxiv | @article{krishnan2006information,
title={Information in Quantum Description and Gate Implementation},
author={Gayathre Krishnan},
journal={arXiv preprint arXiv:cs/0604085},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604085},
primaryClass={cs.IT math.IT}
} | krishnan2006information |
arxiv-674131 | cs/0604086 | A Knowledge-Based Approach for Selecting Information Sources | <|reference_start|>A Knowledge-Based Approach for Selecting Information Sources: Through the Internet and the World-Wide Web, a vast number of information sources has become available, which offer information on various subjects by different providers, often in heterogeneous formats. This calls for tools and methods for building an advanced information-processing infrastructure. One issue in this area is the selection of suitable information sources in query answering. In this paper, we present a knowledge-based approach to this problem, in the setting where one among a set of information sources (prototypically, data repositories) should be selected for evaluating a user query. We use extended logic programs (ELPs) to represent rich descriptions of the information sources, an underlying domain theory, and user queries in a formal query language (here, XML-QL, but other languages can be handled as well). Moreover, we use ELPs for declarative query analysis and generation of a query description. Central to our approach are declarative source-selection programs, for which we define syntax and semantics. Due to the structured nature of the considered data items, the semantics of such programs must carefully respect implicit context information in source-selection rules, and furthermore combine it with possible user preferences. A prototype implementation of our approach has been realized exploiting the DLV KR system and its plp front-end for prioritized ELPs. We describe a representative example involving specific movie databases, and report about experimental results.<|reference_end|> | arxiv | @article{eiter2006a,
title={A Knowledge-Based Approach for Selecting Information Sources},
author={Thomas Eiter, Michael Fink, and Hans Tompits},
journal={arXiv preprint arXiv:cs/0604086},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604086},
primaryClass={cs.AI}
} | eiter2006a |
arxiv-674132 | cs/0604087 | Probabilistic Automata for Computing with Words | <|reference_start|>Probabilistic Automata for Computing with Words: Usually, probabilistic automata and probabilistic grammars have crisp symbols as inputs, which can be viewed as the formal models of computing with values. In this paper, we first introduce probabilistic automata and probabilistic grammars for computing with (some special) words in a probabilistic framework, where the words are interpreted as probabilistic distributions or possibility distributions over a set of crisp symbols. By probabilistic conditioning, we then establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling arbitrary inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with some special words. To compare the transition probabilities of two near inputs, we also examine some analytical properties of the transition probability functions of generalized extensions. Moreover, the retractions and the generalized extensions are shown to be equivalence-preserving. Finally, we clarify some relationships among the retractions, the generalized extensions, and the extensions studied recently by Qiu and Wang.<|reference_end|> | arxiv | @article{cao2006probabilistic,
title={Probabilistic Automata for Computing with Words},
author={Yongzhi Cao, Lirong Xia and Mingsheng Ying},
journal={arXiv preprint arXiv:cs/0604087},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604087},
primaryClass={cs.AI cs.CL}
} | cao2006probabilistic |
arxiv-674133 | cs/0604088 | How to Run Mathematica Batch-files in Background ? | <|reference_start|>How to Run Mathematica Batch-files in Background ?: Mathematica is a versatile tool for doing numeric and symbolic computations, and it has widespread applications in all branches of science. Mathematica is designed with complete consistency at every stage, which gives it multilevel capability and helps advanced usage evolve naturally. Mathematica functions work for any precision of numbers, and one can easily compute with symbols and represent results graphically to get the best answer. Mathematica is a robust software system that can be used on any popular operating system, and it can communicate with external programs by using proper MathLink commands. Sometimes it is quite desirable to run jobs that can take a considerable amount of time to finish in the background of a computer, as this allows us to work on other tasks while keeping the jobs running. Most of us are very familiar with running jobs in the background for programs written in languages like C, C++, F77, F90, F95, etc. But the way of running jobs written in a Mathematica notebook in the background is quite different from the conventional method. In this article, we explore how to create a Mathematica batch-file from a Mathematica notebook and run it in the background. Here we concentrate our study only on the Unix version, but one can run Mathematica programs in the background under the Windows version as well by using a proper Mathematica batch-file.<|reference_end|> | arxiv | @article{maiti2006how,
title={How to Run Mathematica Batch-files in Background ?},
author={Santanu K. Maiti},
journal={arXiv preprint arXiv:cs/0604088},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604088},
primaryClass={cs.MS}
} | maiti2006how |
arxiv-674134 | cs/0604089 | Evolutionary Socioeconomics: a Schumpeterian Computer Simulation | <|reference_start|>Evolutionary Socioeconomics: a Schumpeterian Computer Simulation: The following note contains a computer simulation concerning the struggle between two companies: the first one is "the biggest zaibatsu of all", while the second one is "small, fast, ruthless". The model is based on a neo-Schumpeterian framework operating in a Darwinian evolutionary environment. After running the program a large number of times, two characteristics stand out: -- There is always a winner which takes it all, while the loser disappears. -- The key to success is the ability to employ efficiently the technological innovations. The topic of the present paper is strictly related with the content of the following notes: Michele Tucci, Evolution and Gravitation: a Computer Simulation of a Non-Walrasian Equilibrium Model; Michele Tucci, Oligopolistic Competition in an Evolutionary Environment: a Computer Simulation. The texts can be downloaded respectively at the following addresses: http://arxiv.org/abs/cs.CY/0209017 http://arxiv.org/abs/cs.CY/0501037 These references include some preliminary considerations regarding the comparison between the evolutionary and the gravitational paradigms and the evaluation of approaches belonging to rival schools of economic thought.<|reference_end|> | arxiv | @article{tucci2006evolutionary,
title={Evolutionary Socioeconomics: a Schumpeterian Computer Simulation},
author={Michele Tucci},
journal={arXiv preprint arXiv:cs/0604089},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604089},
primaryClass={cs.GT cs.CY}
} | tucci2006evolutionary |
arxiv-674135 | cs/0604090 | Simplicial models of social aggregation I | <|reference_start|>Simplicial models of social aggregation I: This paper presents the foundational ideas for a new way of modeling social aggregation. Traditional approaches have been using network theory, and the theory of random networks. Under that paradigm, every social agent is represented by a node, and every social interaction is represented by a segment connecting two nodes. Early work in family interactions, as well as more recent work in the study of terrorist organizations, shows that network modeling may be insufficient to describe the complexity of human social structures. Specifically, network theory does not seem to have enough flexibility to represent higher order aggregations, where several agents interact as a group, rather than as a collection of pairs. The model we present here uses a well established mathematical theory, the theory of simplicial complexes, to address this complex issue prevalent in interpersonal and intergroup communication. The theory enables us to provide a richer graphical representation of social interactions, and to determine quantitative mechanisms to describe the robustness of a social structure. We also propose a methodology to create random simplicial complexes, with the purpose of providing a new method to simulate computationally the creation and disgregation of social structures. Finally, we propose several measures which could be taken and observed in order to describe and study an actual social aggregation occurring in interpersonal and intergroup contexts.<|reference_end|> | arxiv | @article{mannucci2006simplicial,
title={Simplicial models of social aggregation I},
author={Mirco A. Mannucci, Lisa Sparks, Daniele C. Struppa},
journal={arXiv preprint arXiv:cs/0604090},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604090},
primaryClass={cs.CE}
} | mannucci2006simplicial |
arxiv-674136 | cs/0604091 | Robust Distributed Source Coding | <|reference_start|>Robust Distributed Source Coding: We consider a distributed source coding system in which several observations are communicated to the decoder using limited transmission rate. The observations must be separately coded. We introduce a robust distributed coding scheme which flexibly trades off between system robustness and compression efficiency. The optimality of this coding scheme is proved for various special cases.<|reference_end|> | arxiv | @article{chen2006robust,
title={Robust Distributed Source Coding},
author={Jun Chen, Toby Berger},
journal={arXiv preprint arXiv:cs/0604091},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604091},
primaryClass={cs.IT math.IT}
} | chen2006robust |
arxiv-674137 | cs/0604092 | Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio UWB Systems | <|reference_start|>Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio UWB Systems: The problem of choosing the optimal multipath components to be employed at a minimum mean square error (MMSE) selective Rake receiver is considered for an impulse radio ultra-wideband system. First, the optimal finger selection problem is formulated as an integer programming problem with a non-convex objective function. Then, the objective function is approximated by a convex function and the integer programming problem is solved by means of constraint relaxation techniques. The proposed algorithms are suboptimal due to the approximate objective function and the constraint relaxation steps. However, they perform better than the conventional finger selection algorithm, which is suboptimal since it ignores the correlation between multipath components, and they can get quite close to the optimal scheme that cannot be implemented in practice due to its complexity. In addition to the convex relaxation techniques, a genetic algorithm (GA) based approach is proposed, which does not need any approximations or integer relaxations. This iterative algorithm is based on the direct evaluation of the objective function, and can achieve near-optimal performance with a reasonable number of iterations. Simulation results are presented to compare the performance of the proposed finger selection algorithms with that of the conventional and the optimal schemes.<|reference_end|> | arxiv | @article{gezici2006optimal,
title={Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake
Receivers in Impulse Radio UWB Systems},
author={Sinan Gezici, Mung Chiang, H. Vincent Poor, and Hisashi Kobayashi},
journal={arXiv preprint arXiv:cs/0604092},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604092},
primaryClass={cs.IT math.IT}
} | gezici2006optimal |
arxiv-674138 | cs/0604093 | Perfect Space Time Block Codes | <|reference_start|>Perfect Space Time Block Codes: In this paper, we introduce the notion of perfect space-time block codes (STBC). These codes have full rate, full diversity, non-vanishing constant minimum determinant for increasing spectral efficiency, uniform average transmitted energy per antenna and good shaping. We present algebraic constructions of perfect STBCs for 2, 3, 4 and 6 antennas.<|reference_end|> | arxiv | @article{oggier2006perfect,
title={Perfect Space Time Block Codes},
author={F. Oggier, G. Rekaya-Ben Othman, J.-C. Belfiore and E. Viterbo},
journal={arXiv preprint arXiv:cs/0604093},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604093},
primaryClass={cs.IT math.IT}
} | oggier2006perfect |
arxiv-674139 | cs/0604094 | A Fast and Accurate Nonlinear Spectral Method for Image Recognition and Registration | <|reference_start|>A Fast and Accurate Nonlinear Spectral Method for Image Recognition and Registration: This article addresses the problem of two- and higher dimensional pattern matching, i.e. the identification of instances of a template within a larger signal space, which is a form of registration. Unlike traditional correlation, we aim at obtaining more selective matchings by considering more strict comparisons of gray-level intensity. In order to achieve fast matching, a nonlinear thresholded version of the fast Fourier transform is applied to a gray-level decomposition of the original 2D image. The potential of the method is substantiated with respect to real data involving the selective identification of neuronal cell bodies in gray-level images.<|reference_end|> | arxiv | @article{costa2006a,
title={A Fast and Accurate Nonlinear Spectral Method for Image Recognition and
Registration},
author={Luciano da Fontoura Costa and Erik Bollt},
journal={Appl. Phys. Lett. 89, 174102 (2006)},
year={2006},
doi={10.1063/1.2358325},
archivePrefix={arXiv},
eprint={cs/0604094},
primaryClass={cs.DC cond-mat.stat-mech cs.CG cs.CV}
} | costa2006a |
arxiv-674140 | cs/0604095 | Fixed-Parameter Complexity of Minimum Profile Problems | <|reference_start|>Fixed-Parameter Complexity of Minimum Profile Problems: Let $G=(V,E)$ be a graph. An ordering of $G$ is a bijection $\alpha: V\to \{1,2,..., |V|\}.$ For a vertex $v$ in $G$, its closed neighborhood is $N[v]=\{u\in V: uv\in E\}\cup \{v\}.$ The profile of an ordering $\alpha$ of $G$ is $\prf_{\alpha}(G)=\sum_{v\in V}(\alpha(v)-\min\{\alpha(u): u\in N[v]\}).$ The profile $\prf(G)$ of $G$ is the minimum of $\prf_{\alpha}(G)$ over all orderings $\alpha$ of $G$. It is well-known that $\prf(G)$ is the minimum number of edges in an interval graph $H$ that contains $G$ as a subgraph. Since $|V|-1$ is a tight lower bound for the profile of connected graphs $G=(V,E)$, the parametrization above the guaranteed value $|V|-1$ is of particular interest. We show that deciding whether the profile of a connected graph $G=(V,E)$ is at most $|V|-1+k$ is fixed-parameter tractable with respect to the parameter $k$. We achieve this result by reduction to a problem kernel of linear size.<|reference_end|> | arxiv | @article{gutin2006fixed-parameter,
title={Fixed-Parameter Complexity of Minimum Profile Problems},
author={Gregory Gutin, Stefan Szeider, Anders Yeo},
journal={arXiv preprint arXiv:cs/0604095},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604095},
primaryClass={cs.DS cs.DM}
} | gutin2006fixed-parameter |
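
An illustrative sketch (not part of the record above and not the paper's fixed-parameter algorithm): the profile of an ordering can be computed directly from the definition in the abstract. The graph encoding and function name below are our own.

```python
# Sketch: compute prf_alpha(G) exactly as defined in the abstract above.
def profile(adj, alpha):
    # adj: vertex -> set of neighbours; alpha: vertex -> position in {1, ..., |V|}
    return sum(alpha[v] - min(alpha[u] for u in adj[v] | {v}) for v in adj)

# The path a-b-c under its natural ordering has profile 2 = |V| - 1,
# matching the tight lower bound for connected graphs mentioned above.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(profile(path, {'a': 1, 'b': 2, 'c': 3}))  # -> 2
```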
arxiv-674141 | cs/0604096 | Polynomial-time algorithms for coding across multiple unicasts | <|reference_start|>Polynomial-time algorithms for coding across multiple unicasts: We consider the problem of network coding across multiple unicasts. We give, for wired and wireless networks, efficient polynomial time algorithms for finding optimal network codes within the class of network codes restricted to XOR coding between pairs of flows.<|reference_end|> | arxiv | @article{ho2006polynomial-time,
title={Polynomial-time algorithms for coding across multiple unicasts},
author={Tracey Ho},
journal={arXiv preprint arXiv:cs/0604096},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604096},
primaryClass={cs.NI}
} | ho2006polynomial-time |
arxiv-674142 | cs/0604097 | Approximation algorithms for wavelet transform coding of data streams | <|reference_start|>Approximation algorithms for wavelet transform coding of data streams: This paper addresses the problem of finding a B-term wavelet representation of a given discrete function $f \in \real^n$ whose distance from f is minimized. The problem is well understood when we seek to minimize the Euclidean distance between f and its representation. The first known algorithms for finding provably approximate representations minimizing general $\ell_p$ distances (including $\ell_\infty$) under a wide variety of compactly supported wavelet bases are presented in this paper. For the Haar basis, a polynomial time approximation scheme is demonstrated. These algorithms are applicable in the one-pass sublinear-space data stream model of computation. They generalize naturally to multiple dimensions and weighted norms. A universal representation that provides a provable approximation guarantee under all p-norms simultaneously; and the first approximation algorithms for bit-budget versions of the problem, known as adaptive quantization, are also presented. Further, it is shown that the algorithms presented here can be used to select a basis from a tree-structured dictionary of bases and find a B-term representation of the given function that provably approximates its best dictionary-basis representation.<|reference_end|> | arxiv | @article{guha2006approximation,
title={Approximation algorithms for wavelet transform coding of data streams},
author={Sudipto Guha and Boulos Harb},
journal={arXiv preprint arXiv:cs/0604097},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604097},
primaryClass={cs.DS}
} | guha2006approximation |
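
A hedged baseline sketch (not the paper's algorithm): for the Euclidean error, the classical optimal B-term Haar representation simply keeps the B largest-magnitude coefficients of the orthonormal Haar transform; the paper's contribution is the much harder general l_p and bit-budget setting, where this greedy rule is no longer optimal. The code below (assuming a power-of-two signal length; all names are ours) only illustrates that classical l_2 baseline.

```python
import numpy as np

def haar(x):
    # Orthonormal Haar transform of a length-2^k signal.
    x, coeffs = np.asarray(x, dtype=float), []
    while len(x) > 1:
        coeffs.append((x[0::2] - x[1::2]) / np.sqrt(2.0))  # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)             # running averages
    coeffs.append(x)                                       # overall scaling coefficient
    return np.concatenate(coeffs[::-1])

def inverse_haar(c):
    c = np.asarray(c, dtype=float)
    x, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(x)]
        new = np.empty(2 * len(x))
        new[0::2] = (x + d) / np.sqrt(2.0)
        new[1::2] = (x - d) / np.sqrt(2.0)
        x, pos = new, pos + len(d)
    return x

def best_l2_B_term(x, B):
    # Keep the B largest-magnitude Haar coefficients (l2-optimal only).
    c = haar(x)
    sparse = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-B:]
    sparse[keep] = c[keep]
    return inverse_haar(sparse)
```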
arxiv-674143 | cs/0604098 | Achievable Rates for the Multiple Access Channel with Feedback and Correlated Sources | <|reference_start|>Achievable Rates for the Multiple Access Channel with Feedback and Correlated Sources: In this paper, we investigate achievable rates on the multiple access channel with feedback and correlated sources (MACFCS). The motivation for studying the MACFCS stems from the fact that in a sensor network, sensors collect and transmit correlated data to a common sink. We derive two achievable rate regions for the three-node MACFCS.<|reference_end|> | arxiv | @article{ong2006achievable,
title={Achievable Rates for the Multiple Access Channel with Feedback and
Correlated Sources},
author={Lawrence Ong, Mehul Motani},
journal={Proceedings of the 43rd Annual Allerton Conference on
Communication, Control, and Computing, Allerton House, the University of
Illinois, Sept 28-30 2005.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604098},
primaryClass={cs.IT math.IT}
} | ong2006achievable |
arxiv-674144 | cs/0604099 | Myopic Coding in Wireless Networks | <|reference_start|>Myopic Coding in Wireless Networks: We investigate the achievable rate of data transmission from sources to sinks through a multiple-relay network. We study achievable rates for omniscient coding, in which all nodes are considered in the coding design at each node. We find that, when maximizing the achievable rate, not all nodes need to ``cooperate'' with all other nodes in terms of coding and decoding. This leads us to suggest a constrained network, whereby each node only considers a few neighboring nodes during encoding and decoding. We term this myopic coding and calculate achievable rates for myopic coding. We show by examples that, when nodes transmit at low SNR, these rates are close to that achievable by omniscient coding, when the network is unconstrained . This suggests that a myopic view of the network might be as good as a global view. In addition, myopic coding has the practical advantage of being more robust to topology changes. It also mitigates the high computational complexity and large buffer/memory requirements of omniscient coding schemes.<|reference_end|> | arxiv | @article{ong2006myopic,
title={Myopic Coding in Wireless Networks},
author={Lawrence Ong, Mehul Motani},
journal={Proceedings of the 39th Conference on Information Sciences and
Systems (CISS 2005), John Hopkins University, Baltimore, MD, March 16-18
2005.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604099},
primaryClass={cs.IT math.IT}
} | ong2006myopic |
arxiv-674145 | cs/0604100 | Protocols for Kak's Cubic Cipher and Diffie-Hellman Based Asymmetric Oblivious Key Exchange | <|reference_start|>Protocols for Kak's Cubic Cipher and Diffie-Hellman Based Asymmetric Oblivious Key Exchange: This paper presents protocols for Kak's cubic transformation and proposes a modification to Diffie-Hellman key exchange protocol in order to achieve asymmetric oblivious exchange of keys.<|reference_end|> | arxiv | @article{parakh2006protocols,
title={Protocols for Kak's Cubic Cipher and Diffie-Hellman Based Asymmetric
Oblivious Key Exchange},
author={Abhishek Parakh},
journal={arXiv preprint arXiv:cs/0604100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604100},
primaryClass={cs.CR}
} | parakh2006protocols |
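
A toy illustration (not from the paper): the baseline that the proposed asymmetric oblivious key exchange builds on is textbook Diffie-Hellman, sketched below with the classic toy parameters (p, g) = (23, 5); these numbers are illustrative only and far too small for real security.

```python
import secrets

p, g = 23, 5                       # toy public parameters (not secure)
a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice -> Bob
B = pow(g, b, p)                   # Bob -> Alice

# Both sides derive the same shared key g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
```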
arxiv-674146 | cs/0604101 | Fast computation of power series solutions of systems of differential equations | <|reference_start|>Fast computation of power series solutions of systems of differential equations: We propose new algorithms for the computation of the first N terms of a vector (resp. a basis) of power series solutions of a linear system of differential equations at an ordinary point, using a number of arithmetic operations which is quasi-linear with respect to N. Similar results are also given in the non-linear case. This extends previous results obtained by Brent and Kung for scalar differential equations of order one and two.<|reference_end|> | arxiv | @article{bostan2006fast,
title={Fast computation of power series solutions of systems of differential
equations},
author={Alin Bostan (INRIA Rocquencourt), Fr'ed'eric Chyzak (INRIA
Rocquencourt), Franc{c}ois Ollivier (LIX), Bruno Salvy (INRIA Rocquencourt),
'Eric Schost (LIX), Alexandre Sedoglavic (LIFL)},
journal={Dans 2007 ACM-SIAM Symposium on Discrete Algorithms (2007)
1012--1021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604101},
primaryClass={cs.SC}
} | bostan2006fast |
arxiv-674147 | cs/0604102 | HCI and Educational Metrics as Tools for VLE Evaluation | <|reference_start|>HCI and Educational Metrics as Tools for VLE Evaluation: The general set of HCI and Educational principles is considered and a classification system is constructed. A frequency analysis of principles is used to obtain the most significant set. Metrics are devised to provide objective measures of these principles and a consistent testing regime is devised. These principles are used to analyse Blackboard and Moodle.<|reference_end|> | arxiv | @article{hinze-hoare2006hci,
title={HCI and Educational Metrics as Tools for VLE Evaluation},
author={Vita Hinze-Hoare},
journal={arXiv preprint arXiv:cs/0604102},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604102},
primaryClass={cs.HC cs.LG}
} | hinze-hoare2006hci |
arxiv-674148 | cs/0604103 | Further Evaluation of VLEs using HCI and Educational Metrics | <|reference_start|>Further Evaluation of VLEs using HCI and Educational Metrics: Under consideration are the general set of Human-Computer Interaction (HCI) and Educational principles from prominent authors in the field and the construction of a system for evaluating Virtual Learning Environments (VLEs) with respect to the application of these HCI and Educational Principles. A frequency analysis of principles is used to obtain the most significant set. Metrics are devised to provide objective measures of these principles and a consistent testing regime is introduced. These principles are used to analyse the University VLE Blackboard. An open source VLE is also constructed with similar content to Blackboard courses so that a systematic comparison can be made. HCI and Educational metrics are determined for each VLE.<|reference_end|> | arxiv | @article{hinze-hoare2006further,
title={Further Evaluation of VLEs using HCI and Educational Metrics},
author={Vita Hinze-Hoare},
journal={arXiv preprint arXiv:cs/0604103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604103},
primaryClass={cs.HC}
} | hinze-hoare2006further |
arxiv-674149 | cs/0604104 | On the Shannon Covers of Certain Irreducible Constrained Systems of Finite Type | <|reference_start|>On the Shannon Covers of Certain Irreducible Constrained Systems of Finite Type: A construction of Crochemore, Mignosi and Restivo in the automata theory literature gives a presentation of a finite-type constrained system (FTCS) that is deterministic and has a relatively small number of states. This construction is thus a good starting point for determining the minimal deterministic presentation, known as the Shannon cover, of an FTCS. We analyze in detail the Crochemore-Mignosi-Restivo (CMR) construction in the case when the list of forbidden words defining the FTCS is of size at most two. We show that if the FTCS is irreducible, then an irreducible presentation for the system can be easily obtained by deleting a prescribed few states from the CMR presentation. By studying the follower sets of the states in this irreducible presentation, we are able to explicitly determine the Shannon cover in some cases. In particular, our results show that the CMR construction directly yields the Shannon cover in the case of an irreducible FTCS with exactly one forbidden word, but this is not in general the case for FTCS's with two forbidden words.<|reference_end|> | arxiv | @article{manada2006on,
title={On the Shannon Covers of Certain Irreducible Constrained Systems of
Finite Type},
author={Akiko Manada and Navin Kashyap},
journal={arXiv preprint arXiv:cs/0604104},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604104},
primaryClass={cs.IT cs.DM math.IT}
} | manada2006on |
arxiv-674150 | cs/0604105 | Jumps: Enhancing hop-count positioning in sensor networks using multiple coordinates | <|reference_start|>Jumps: Enhancing hop-count positioning in sensor networks using multiple coordinates: Positioning systems in self-organizing networks generally rely on measurements such as delay and received signal strength, which may be difficult to obtain and often require dedicated equipment. An alternative to such approaches is to use simple connectivity information, that is, the presence or absence of a link between any pair of nodes, and to extend it to hop-counts, in order to obtain an approximate coordinate system. Such an approximation is sufficient for a large number of applications, such as routing. In this paper, we propose Jumps, a positioning system for those self-organizing networks in which other types of (exact) positioning systems cannot be used or are deemed to be too costly. Jumps builds a multiple coordinate system based solely on nodes neighborhood knowledge. Jumps is interesting in the context of wireless sensor networks, as it neither requires additional embedded equipment nor relies on any nodes capabilities. While other approaches use only three hop-count measurements to infer the position of a node, Jumps uses an arbitrary number. We observe that an increase in the number of measurements leads to an improvement in the localization process, without requiring a high dense environment. We show through simulations that Jumps, when compared with existing approaches, reduces the number of nodes sharing the same coordinates, which paves the way for functions such as position-based routing.<|reference_end|> | arxiv | @article{benbadis2006jumps:,
title={Jumps: Enhancing hop-count positioning in sensor networks using multiple
coordinates},
author={Farid Benbadis, Jean-Jacques Puig, Marcelo Dias de Amorim, Claude
Chaudet, Timur Friedman, David Simplot-Ryl},
journal={arXiv preprint arXiv:cs/0604105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604105},
primaryClass={cs.NI}
} | benbadis2006jumps: |
arxiv-674151 | cs/0604106 | Bounded expected delay in arithmetic coding | <|reference_start|>Bounded expected delay in arithmetic coding: We address the problem of delay in an arithmetic coding system. Due to the nature of the arithmetic coding process, source sequences causing arbitrarily large encoding or decoding delays exist. This phenomenon raises the question of just how large the expected input-to-output delay in these systems is, i.e., once a source sequence has been encoded, what is the expected number of source letters that should be further encoded to allow full decoding of that sequence. In this paper, we derive several new upper bounds on the expected delay for a memoryless source, which improve upon a known bound due to Gallager. The bounds provided are uniform in the sense of being independent of the sequence's history. In addition, we give a sufficient condition for a source to admit a bounded expected delay, which holds for a stationary ergodic Markov source of any order.<|reference_end|> | arxiv | @article{shayevitz2006bounded,
title={Bounded expected delay in arithmetic coding},
author={Ofer Shayevitz, Ram Zamir and Meir Feder},
journal={arXiv preprint arXiv:cs/0604106},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604106},
primaryClass={cs.IT math.IT}
} | shayevitz2006bounded |
arxiv-674152 | cs/0604107 | Cognitive Radio: An Information-Theoretic Perspective | <|reference_start|>Cognitive Radio: An Information-Theoretic Perspective: Cognitive radios have been proposed as a means to implement efficient reuse of the licensed spectrum. The key feature of a cognitive radio is its ability to recognize the primary (licensed) user and adapt its communication strategy to minimize the interference that it generates. We consider a communication scenario in which the primary and the cognitive user wish to communicate to different receivers, subject to mutual interference. Modeling the cognitive radio as a transmitter with side-information about the primary transmission, we characterize the largest rate at which the cognitive radio can reliably communicate under the constraint that (i) no interference is created for the primary user, and (ii) the primary encoder-decoder pair is oblivious to the presence of the cognitive radio.<|reference_end|> | arxiv | @article{jovicic2006cognitive,
title={Cognitive Radio: An Information-Theoretic Perspective},
author={Aleksandar Jovicic and Pramod Viswanath},
journal={arXiv preprint arXiv:cs/0604107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604107},
primaryClass={cs.IT math.IT}
} | jovicic2006cognitive |
arxiv-674153 | cs/0604108 | An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees | <|reference_start|>An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees: The relationship between two important problems in tree pattern matching, the largest common subtree and the smallest common supertree problems, is established by means of simple constructions, which allow one to obtain a largest common subtree of two trees from a smallest common supertree of them, and vice versa. These constructions are the same for isomorphic, homeomorphic, topological, and minor embeddings, they take only time linear in the size of the trees, and they turn out to have a clear algebraic meaning.<|reference_end|> | arxiv | @article{rossello2006an,
title={An Algebraic View of the Relation between Largest Common Subtrees and
Smallest Common Supertrees},
author={Francesc Rossello, Gabriel Valiente},
journal={arXiv preprint arXiv:cs/0604108},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604108},
primaryClass={cs.DS cs.DM math.CT}
} | rossello2006an |
arxiv-674154 | cs/0604109 | CMS Software Distribution on the LCG and OSG Grids | <|reference_start|>CMS Software Distribution on the LCG and OSG Grids: The efficient exploitation of worldwide distributed storage and computing resources available in the grids requires a robust, transparent and fast deployment of experiment-specific software. The approach followed by the CMS experiment at CERN in order to enable Monte-Carlo simulations, data analysis and software development in an international collaboration is presented. The current status and future improvement plans are described.<|reference_end|> | arxiv | @article{rabbertz2006cms,
title={CMS Software Distribution on the LCG and OSG Grids},
author={K. Rabbertz (1), M. Thomas (2), S. Ashby (3), M. Corvo (3 and 4), S.
Argir`o (3 and 5), N. Darmenov (3 and 6), R. Darwish (7), D. Evans (7), B.
Holzman (7), N. Ratnikova (7), S. Muzaffar (8), A. Nowack (9), T. Wildish
(10), B. Kim (11), J. Weng (1 and 3), V. B"uge (1 and 12) (for the CMS
Collaboration) ((1) University of Karlsruhe, (2) CALTECH, (3) CERN, (4) INFN
Padova, (5) INFN-CNAF, (6) INRNE Sofia, (7) FERMILAB, (8) Northeastern
University, (9) RWTH Aachen, (10) Princeton University, (11) University of
Florida, (12) FZ Karlsruhe)},
journal={arXiv preprint arXiv:cs/0604109},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604109},
primaryClass={cs.DC}
} | rabbertz2006cms |
arxiv-674155 | cs/0604110 | Modeling and Mathematical Analysis of Swarms of Microscopic Robots | <|reference_start|>Modeling and Mathematical Analysis of Swarms of Microscopic Robots: The biologically-inspired swarm paradigm is being used to design self-organizing systems of locally interacting artificial agents. A major difficulty in designing swarms with desired characteristics is understanding the causal relation between individual agent and collective behaviors. Mathematical analysis of swarm dynamics can address this difficulty to gain insight into system design. This paper proposes a framework for mathematical modeling of swarms of microscopic robots that may one day be useful in medical applications. While such devices do not yet exist, the modeling approach can be helpful in identifying various design trade-offs for the robots and be a useful guide for their eventual fabrication. Specifically, we examine microscopic robots that reside in a fluid, for example, a bloodstream, and are able to detect and respond to different chemicals. We present the general mathematical model of a scenario in which robots locate a chemical source. We solve the scenario in one-dimension and show how results can be used to evaluate certain design decisions.<|reference_end|> | arxiv | @article{galstyan2006modeling,
title={Modeling and Mathematical Analysis of Swarms of Microscopic Robots},
author={Aram Galstyan, Tad Hogg, Kristina Lerman},
journal={arXiv preprint arXiv:cs/0604110},
year={2006},
doi={10.1109/SIS.2005.1501623},
archivePrefix={arXiv},
eprint={cs/0604110},
primaryClass={cs.MA cs.RO}
} | galstyan2006modeling |
arxiv-674156 | cs/0604111 | Analysis of Dynamic Task Allocation in Multi-Robot Systems | <|reference_start|>Analysis of Dynamic Task Allocation in Multi-Robot Systems: Dynamic task allocation is an essential requirement for multi-robot systems operating in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of the collective behavior. Specifically, we analyze the effect that the number of observations and the choice of the decision function have on the performance of the system. The mathematical models are validated in a multi-robot multi-foraging scenario. The model's predictions agree very closely with experimental results from sensor-based simulations.<|reference_end|> | arxiv | @article{lerman2006analysis,
title={Analysis of Dynamic Task Allocation in Multi-Robot Systems},
author={Kristina Lerman, Chris Jones, Aram Galstyan and Maja J Mataric},
journal={arXiv preprint arXiv:cs/0604111},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604111},
primaryClass={cs.RO cs.MA}
} | lerman2006analysis |
arxiv-674157 | cs/0604112 | Designing a Multi-petabyte Database for LSST | <|reference_start|>Designing a Multi-petabyte Database for LSST: The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.<|reference_end|> | arxiv | @article{becla2006designing,
title={Designing a Multi-petabyte Database for LSST},
author={Jacek Becla, Andrew Hanushevsky, Sergei Nikolaev, Ghaleb Abdulla, Alex
Szalay, Maria Nieto-Santisteban, Ani Thakar, Jim Gray},
journal={arXiv preprint arXiv:cs/0604112},
year={2006},
doi={10.1117/12.671721},
archivePrefix={arXiv},
eprint={cs/0604112},
primaryClass={cs.DB cs.DL}
} | becla2006designing |
arxiv-674158 | cs/0604113 | One-in-Two-Matching Problem is NP-complete | <|reference_start|>One-in-Two-Matching Problem is NP-complete: The 2-dimensional Matching Problem, which requires finding a matching of left- to right-vertices in a balanced $2n$-vertex bipartite graph, is a well-known polynomial problem, while various variants, like the 3-dimensional analogue (3DM, with triangles on a tripartite graph), or the Hamiltonian Circuit Problem (HC, a restriction to ``unicyclic'' matchings), are among the main examples of NP-hard problems, since the first Karp reduction series of 1972. The same holds for the weighted variants of these problems, the Linear Assignment Problem being polynomial, and the Numerical 3-Dimensional Matching and Travelling Salesman Problem being NP-complete. In this paper we show that a small modification of the 2-dimensional Matching and Assignment Problems, in which for each $i \leq n/2$ it is required that either $\pi(2i-1)=2i-1$ or $\pi(2i)=2i$, is an NP-complete problem. The proof is by linear reduction from SAT (or NAE-SAT), with the size $n$ of the Matching Problem being four times the number of edges in the factor graph representation of the Boolean problem. As a corollary, in combination with the simple linear reduction of One-in-Two Matching to 3-Dimensional Matching, we show that SAT can be linearly reduced to 3DM, while the original Karp reduction was only cubic.<|reference_end|> | arxiv | @article{caracciolo2006one-in-two-matching,
title={One-in-Two-Matching Problem is NP-complete},
author={Sergio Caracciolo, Davide Fichera, Andrea Sportiello},
journal={arXiv preprint arXiv:cs/0604113},
year={2006},
archivePrefix={arXiv},
eprint={cs/0604113},
primaryClass={cs.CC}
} | caracciolo2006one-in-two-matching |
arxiv-674159 | cs/0605001 | On Multistage Successive Refinement for Wyner-Ziv Source Coding with Degraded Side Informations | <|reference_start|>On Multistage Successive Refinement for Wyner-Ziv Source Coding with Degraded Side Informations: We provide a complete characterization of the rate-distortion region for the multistage successive refinement of the Wyner-Ziv source coding problem with degraded side informations at the decoder. Necessary and sufficient conditions for a source to be successively refinable along a distortion vector are subsequently derived. A source-channel separation theorem is provided when the descriptions are sent over independent channels for the multistage case. Furthermore, we introduce the notion of generalized successive refinability with multiple degraded side informations. This notion captures whether progressive encoding to satisfy multiple distortion constraints for different side informations is as good as encoding without progressive requirement. Necessary and sufficient conditions for generalized successive refinability are given. It is shown that the following two sources are generalized successively refinable: (1) the Gaussian source with degraded Gaussian side informations, (2) the doubly symmetric binary source when the worse side information is a constant. Thus for both cases, the failure of being successively refinable is only due to the inherent uncertainty on which side information will occur at the decoder, but not the progressive encoding requirement.<|reference_end|> | arxiv | @article{tian2006on,
title={On Multistage Successive Refinement for Wyner-Ziv Source Coding with
Degraded Side Informations},
author={Chao Tian and Suhas Diggavi},
journal={arXiv preprint arXiv:cs/0605001},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605001},
primaryClass={cs.IT math.IT}
} | tian2006on |
arxiv-674160 | cs/0605002 | A Hybrid Quantum Encoding Algorithm of Vector Quantization for Image Compression | <|reference_start|>A Hybrid Quantum Encoding Algorithm of Vector Quantization for Image Compression: Many classical encoding algorithms for Vector Quantization (VQ) in image compression that can obtain the globally optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with probability of success near 100% has been proposed, which performs approximately 45 sqrt(N) operations. In this paper, a hybrid quantum VQ encoding algorithm combining the classical method and the quantum algorithm is presented. The number of operations it requires is less than sqrt(N) for most images, and it is more efficient than the pure quantum algorithm. Key Words: Vector Quantization, Grover's Algorithm, Image Compression, Quantum Algorithm<|reference_end|> | arxiv | @article{pang2006a,
title={A Hybrid Quantum Encoding Algorithm of Vector Quantization for Image
Compression},
author={Chao-Yang Pang, Zheng-Wei Zhou, and Guang-Can Guo},
journal={arXiv preprint arXiv:cs/0605002},
year={2006},
doi={10.1088/1009-1963/15/12/044},
archivePrefix={arXiv},
eprint={cs/0605002},
primaryClass={cs.MM cs.DS}
} | pang2006a |
arxiv-674161 | cs/0605003 | A New Cryptosystem Based On Hidden Order Groups | <|reference_start|>A New Cryptosystem Based On Hidden Order Groups: Let $G_1$ be a cyclic multiplicative group of order $n$. It is known that the Diffie-Hellman problem is random self-reducible in $G_1$ with respect to a fixed generator $g$ if $\phi(n)$ is known. That is, given $g, g^x\in G_1$ and having oracle access to a `Diffie-Hellman Problem' solver with fixed generator $g$, it is possible to compute $g^{1/x} \in G_1$ in polynomial time (see theorem 3.2). On the other hand, it is not known if such a reduction exists when $\phi(n)$ is unknown (see conjecture 3.1). We exploit this ``gap'' to construct a cryptosystem based on hidden order groups and present a practical implementation of a novel cryptographic primitive called an \emph{Oracle Strong Associative One-Way Function} (O-SAOWF). O-SAOWFs have applications in multiparty protocols. We demonstrate this by presenting a key agreement protocol for dynamic ad-hoc groups.<|reference_end|> | arxiv | @article{saxena2006a,
title={A New Cryptosystem Based On Hidden Order Groups},
author={Amitabh Saxena and Ben Soh},
journal={arXiv preprint arXiv:cs/0605003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605003},
primaryClass={cs.CR cs.CC}
} | saxena2006a |
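
A hedged sketch of the reduction the abstract alludes to (theorem 3.2), under the assumption gcd(x, n) = 1; it is not quoted from the paper.

```latex
% With a fixed-generator oracle DH_g(g^a, g^b) = g^{ab}, square-and-multiply in
% the exponent yields g^{x^k} for any k with O(log k) oracle calls. If
% gcd(x, n) = 1 and \phi(n) is known, Euler's theorem gives
% x^{\phi(n)} \equiv 1 \pmod{n}, hence
\[
  g^{1/x} \;=\; g^{\,x^{\phi(n)-1}},
\]
% computable with O(\log \phi(n)) oracle calls. No analogous reduction is known
% when \phi(n) is hidden, which is the gap the proposed cryptosystem exploits.
```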
arxiv-674162 | cs/0605004 | Novel Reversible Multiplier Architecture Using Reversible TSG Gate | <|reference_start|>Novel Reversible Multiplier Architecture Using Reversible TSG Gate: In the recent years, reversible logic has emerged as a promising technology having its applications in low power CMOS, quantum computing, nanotechnology, and optical computing. The classical set of gates such as AND, OR, and EXOR are not reversible. Recently a 4 * 4 reversible gate called TSG is proposed. The most significant aspect of the proposed gate is that it can work singly as a reversible full adder, that is reversible full adder can now be implemented with a single gate only. This paper proposes a NXN reversible multiplier using TSG gate. It is based on two concepts. The partial products can be generated in parallel with a delay of d using Fredkin gates and thereafter the addition can be reduced to log2N steps by using reversible parallel adder designed from TSG gates. Similar multiplier architecture in conventional arithmetic (using conventional logic) has been reported in existing literature, but the proposed one in this paper is totally based on reversible logic and reversible cells as its building block. A 4x4 architecture of the proposed reversible multiplier is also designed. It is demonstrated that the proposed multiplier architecture using the TSG gate is much better and optimized, compared to its existing counterparts in literature; in terms of number of reversible gates and garbage outputs. Thus, this paper provides the initial threshold to building of more complex system which can execute more complicated operations using reversible logic.<|reference_end|> | arxiv | @article{thapliyal2006novel,
title={Novel Reversible Multiplier Architecture Using Reversible TSG Gate},
author={Himanshu Thapliyal and M.B Srinivas},
journal={arXiv preprint arXiv:cs/0605004},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605004},
primaryClass={cs.AR}
} | thapliyal2006novel |
arxiv-674163 | cs/0605005 | The Discrete Memoryless Multiple Access Channel with Confidential Messages | <|reference_start|>The Discrete Memoryless Multiple Access Channel with Confidential Messages: A multiple-access channel is considered in which messages from one encoder are confidential. Confidential messages are to be transmitted with perfect secrecy, as measured by equivocation at the other encoder. The upper bounds and the achievable rates for this communication situation are determined.<|reference_end|> | arxiv | @article{liu2006the,
title={The Discrete Memoryless Multiple Access Channel with Confidential
Messages},
author={Ruoheng Liu, Ivana Maric, Roy D. Yates, and Predrag Spasojevic},
journal={arXiv preprint arXiv:cs/0605005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605005},
primaryClass={cs.IT math.IT}
} | liu2006the |
arxiv-674164 | cs/0605006 | An Information-Spectrum Approach to Multiterminal Rate-Distortion Theory | <|reference_start|>An Information-Spectrum Approach to Multiterminal Rate-Distortion Theory: An information-spectrum approach is applied to solve the multiterminal source coding problem for correlated general sources, where sources may be nonstationary and/or nonergodic, and the distortion measure is arbitrary and may be nonadditive. A general formula for the rate-distortion region of the multiterminal source coding problem with the maximum distortion criterion under fixed-length coding is shown in this correspondence.<|reference_end|> | arxiv | @article{yang2006an,
title={An Information-Spectrum Approach to Multiterminal Rate-Distortion Theory},
author={Shengtian Yang, Peiliang Qiu},
journal={arXiv preprint arXiv:cs/0605006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605006},
primaryClass={cs.IT math.IT}
} | yang2006an |
arxiv-674165 | cs/0605007 | Systematic Topology Analysis and Generation Using Degree Correlations | <|reference_start|>Systematic Topology Analysis and Generation Using Degree Correlations: We present a new, systematic approach for analyzing network topologies. We first introduce the dK-series of probability distributions specifying all degree correlations within d-sized subgraphs of a given graph G. Increasing values of d capture progressively more properties of G at the cost of more complex representation of the probability distribution. Using this series, we can quantitatively measure the distance between two graphs and construct random graphs that accurately reproduce virtually all metrics proposed in the literature. The nature of the dK-series implies that it will also capture any future metrics that may be proposed. Using our approach, we construct graphs for d=0,1,2,3 and demonstrate that these graphs reproduce, with increasing accuracy, important properties of measured and modeled Internet topologies. We find that the d=2 case is sufficient for most practical purposes, while d=3 essentially reconstructs the Internet AS- and router-level topologies exactly. We hope that a systematic method to analyze and synthesize topologies offers a significant improvement to the set of tools available to network topology and protocol researchers.<|reference_end|> | arxiv | @article{mahadevan2006systematic,
title={Systematic Topology Analysis and Generation Using Degree Correlations},
author={Priya Mahadevan, Dmitri Krioukov, Kevin Fall, Amin Vahdat},
journal={SIGCOMM 2006 (ACM SIGCOMM Computer Communication Review (CCR),
v.36, n.4, p.135-146, 2006)},
year={2006},
doi={10.1145/1151659.1159930},
archivePrefix={arXiv},
eprint={cs/0605007},
primaryClass={cs.NI cond-mat.stat-mech physics.soc-ph}
} | mahadevan2006systematic |
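
An illustrative sketch (function names are ours, not the paper's): the two lowest non-trivial members of the dK-series of a simple undirected graph are the degree distribution (d = 1) and the joint degree distribution (d = 2), which can be read off an edge list as follows.

```python
from collections import Counter

def dk1_dk2(edges):
    # d = 1: degree -> number of nodes; d = 2: (degree j, degree k) -> number of edges.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dk1 = Counter(deg.values())
    dk2 = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)
    return dk1, dk2

# Triangle a-b-c with a pendant edge c-d:
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd')]
dk1, dk2 = dk1_dk2(edges)
# dk1 contains {1: 1, 2: 2, 3: 1}; dk2 contains {(2, 2): 1, (2, 3): 2, (1, 3): 1}.
```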
arxiv-674166 | cs/0605008 | The complexity of acyclic conjunctive queries revisited | <|reference_start|>The complexity of acyclic conjunctive queries revisited: In this paper, we consider first-order logic over unary functions and study the complexity of the evaluation problem for conjunctive queries described by such kind of formulas. A natural notion of query acyclicity for this language is introduced and we study the complexity of a large number of variants or generalizations of acyclic query problems in that context (Boolean or not Boolean, with or without inequalities, comparisons, etc...). Our main results show that all those problems are \textit{fixed-parameter linear} i.e. they can be evaluated in time $f(|Q|).|\textbf{db}|.|Q(\textbf{db})|$ where $|Q|$ is the size of the query $Q$, $|\textbf{db}|$ the database size, $|Q(\textbf{db})|$ is the size of the output and $f$ is some function whose value depends on the specific variant of the query problem (in some cases, $f$ is the identity function). Our results have two kinds of consequences. First, they can be easily translated in the relational (i.e., classical) setting. Previously known bounds for some query problems are improved and new tractable cases are then exhibited. Among others, as an immediate corollary, we improve a result of \~\cite{PapadimitriouY-99} by showing that any (relational) acyclic conjunctive query with inequalities can be evaluated in time $f(|Q|).|\textbf{db}|.|Q(\textbf{db})|$. A second consequence of our method is that it provides a very natural descriptive approach to the complexity of well-known algorithmic problems. A number of examples (such as acyclic subgraph problems, multidimensional matching, etc...) are considered for which new insights of their complexity are given.<|reference_end|> | arxiv | @article{durand2006the,
title={The complexity of acyclic conjunctive queries revisited},
author={Arnaud Durand (ELM), Etienne Grandjean (GREYC)},
journal={arXiv preprint arXiv:cs/0605008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605008},
primaryClass={cs.LO}
} | durand2006the |
arxiv-674167 | cs/0605009 | On the Foundations of Universal Sequence Prediction | <|reference_start|>On the Foundations of Universal Sequence Prediction: Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: Fast convergence and strong bounds, and in contrast to most classical continuous prior densities has no zero p(oste)rior problem, i.e. can confirm universal hypotheses, is reparametrization and regrouping invariant, and avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments.<|reference_end|> | arxiv | @article{hutter2006on,
title={On the Foundations of Universal Sequence Prediction},
author={Marcus Hutter},
journal={Proc. 3rd Annual Conference on Theory and Applications of Models
of Computation (TAMC 2006) pages 408-420},
year={2006},
number={IDSIA-03-06},
archivePrefix={arXiv},
eprint={cs/0605009},
primaryClass={cs.LG cs.IT math.IT math.ST stat.TH}
} | hutter2006on |
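
For orientation, the standard textbook form of Solomonoff's universal prior (recalled here, not quoted from the paper), with U a universal monotone Turing machine and \ell(p) the length of program p:

```latex
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
  \qquad
  M(x_t \mid x_{<t}) \;=\; \frac{M(x_{<t} x_t)}{M(x_{<t})},
\]
% where U(p) = x* means that program p outputs a sequence beginning with x.
```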
arxiv-674168 | cs/0605010 | Complementary Set Matrices Satisfying a Column Correlation Constraint | <|reference_start|>Complementary Set Matrices Satisfying a Column Correlation Constraint: Motivated by the problem of reducing the peak to average power ratio (PAPR) of transmitted signals, we consider a design of complementary set matrices whose column sequences satisfy a correlation constraint. The design algorithm recursively builds a collection of $2^{t+1}$ mutually orthogonal (MO) complementary set matrices starting from a companion pair of sequences. We relate correlation properties of column sequences to those of the companion pair and illustrate how to select an appropriate companion pair to ensure that a given column correlation constraint is satisfied. For $t=0$, companion pair properties directly determine matrix column correlation properties. For $t\geq 1$, reducing correlation merits of the companion pair may lead to improved column correlation properties. However, further decrease of the maximum out-of-phase aperiodic autocorrelation of column sequences is not possible once the companion pair correlation merit is less than a threshold determined by $t$. We also reveal a design of the companion pair which leads to complementary set matrices with Golay column sequences. Exhaustive search for companion pairs satisfying a column correlation constraint is infeasible for medium and long sequences. We instead search for two shorter length sequences by minimizing a cost function in terms of their autocorrelation and crosscorrelation merits. Furthermore, an improved cost function which helps in reducing the maximum out-of-phase column correlation is derived based on the properties of the companion pair. By exploiting the well-known Welch bound, sufficient conditions for the existence of companion pairs which satisfy a set of column correlation constraints are also given.<|reference_end|> | arxiv | @article{wu2006complementary,
title={Complementary Set Matrices Satisfying a Column Correlation Constraint},
author={Di Wu and Predrag Spasojevic},
journal={arXiv preprint arXiv:cs/0605010},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605010},
primaryClass={cs.IT math.IT}
} | wu2006complementary |
arxiv-674169 | cs/0605011 | A Characterization of the Degree Sequences of 2-Trees | <|reference_start|>A Characterization of the Degree Sequences of 2-Trees: A graph G is a 2-tree if G=K_3, or G has a vertex v of degree 2, whose neighbours are adjacent, and G\v is a 2-tree. A characterization of the degree sequences of 2-trees is given. This characterization yields a linear-time algorithm for recognizing and realizing degree sequences of 2-trees.<|reference_end|> | arxiv | @article{bose2006a,
title={A Characterization of the Degree Sequences of 2-Trees},
author={Prosenjit Bose, Vida Dujmovi'c, Danny Krizanc, Stefan Langerman, Pat
Morin, David R. Wood, Stefanie Wuhrer},
journal={The Journal of Graph Theory, 58(3):191-209, 2008},
year={2006},
doi={10.1002/jgt.20302},
archivePrefix={arXiv},
eprint={cs/0605011},
primaryClass={cs.DM math.CO}
} | bose2006a |
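
A naive sketch derived from the recursive definition in the abstract (it is not the paper's linear-time degree-sequence algorithm): a 2-tree can be recognized by repeatedly stripping a degree-2 vertex whose two neighbours are adjacent.

```python
def is_2tree(adj):
    # adj: vertex -> set of neighbours, for a simple undirected graph.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    while len(adj) > 3:
        removable = None
        for v, nbrs in adj.items():
            if len(nbrs) == 2:
                a, b = nbrs
                if a in adj[b]:            # the two neighbours are adjacent
                    removable = v
                    break
        if removable is None:
            return False
        for u in adj.pop(removable):
            adj[u].discard(removable)
    return len(adj) == 3 and all(len(n) == 2 for n in adj.values())

# K_4 minus an edge is a 2-tree:
print(is_2tree({1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}))  # -> True
```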
arxiv-674170 | cs/0605012 | Perspective alignment in spatial language | <|reference_start|>Perspective alignment in spatial language: It is well known that perspective alignment plays a major role in the planning and interpretation of spatial language. In order to understand the role of perspective alignment and the cognitive processes involved, we have made precise complete cognitive models of situated embodied agents that self-organise a communication system for dialoging about the position and movement of real world objects in their immediate surroundings. We show in a series of robotic experiments which cognitive mechanisms are necessary and sufficient to achieve successful spatial language and why and how perspective alignment can take place, either implicitly or based on explicit marking.<|reference_end|> | arxiv | @article{steels2006perspective,
title={Perspective alignment in spatial language},
author={L. Steels, M. Loetzsch},
journal={arXiv preprint arXiv:cs/0605012},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605012},
primaryClass={cs.AI}
} | steels2006perspective |
arxiv-674171 | cs/0605013 | Geometric representation of graphs in low dimension | <|reference_start|>Geometric representation of graphs in low dimension: We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in $1.5 (\Delta + 2) \ln n$ dimensions, where $\Delta$ is the maximum degree of G. We also show that the boxicity satisfies $\mathrm{boxi}(G) \le (\Delta + 2) \ln n$ for any graph G. Our bound is tight up to a factor of $\ln n$. We also show that our randomized algorithm can be derandomized to get a polynomial time deterministic algorithm. Though our general upper bound is in terms of maximum degree $\Delta$, we show that for almost all graphs on n vertices, the boxicity is upper bounded by $c\cdot(d_{av} + 1) \ln n$ where $d_{av}$ is the average degree and c is a small constant. Also, we show that for any graph G, $\mathrm{boxi}(G) \le \sqrt{8 n d_{av} \ln n}$, which is tight up to a factor of $b \sqrt{\ln n}$ for a constant b.<|reference_end|> | arxiv | @article{chandran2006geometric,
title={Geometric representation of graphs in low dimension},
author={L. Sunil Chandran and Mathew C Francis and Naveen Sivadasan},
journal={arXiv preprint arXiv:cs/0605013},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605013},
primaryClass={cs.DM cs.DS}
} | chandran2006geometric |
arxiv-674172 | cs/0605014 | Generalized Multiple Access Channels with Confidential Messages | <|reference_start|>Generalized Multiple Access Channels with Confidential Messages: A discrete memoryless generalized multiple access channel (GMAC) with confidential messages is studied, where two users attempt to transmit common information to a destination and each user also has private (confidential) information intended for the destination. The two users are allowed to receive channel outputs, and hence may obtain the confidential information sent by each other from channel outputs they receive. However, each user views the other user as a wire-tapper, and wishes to keep its confidential information as secret as possible from the other user. The level of secrecy of the confidential information is measured by the equivocation rate, i.e., the entropy rate of the confidential information conditioned on channel outputs at the wire-tapper. The performance measure of interest for the GMAC with confidential messages is the rate-equivocation tuple that includes the common rate, two private rates and two equivocation rates as components. The set that includes all these achievable rate-equivocation tuples is referred to as the capacity-equivocation region. The GMAC with one confidential message set is first studied, where only one user (user 1) has private (confidential) information for the destination. Inner and outer bounds on the capacity-equivocation region are derived, and the capacity-equivocation are established for some classes of channels including the Gaussian GMAC. Furthermore, the secrecy capacity region is established, which is the set of all achievable rates with user 2 being perfectly ignorant of confidential messages of user 1. For the GMAC with two confidential message sets, where both users have confidential messages for the destination, an inner bound on the capacity-equivocation region is obtained.<|reference_end|> | arxiv | @article{liang2006generalized,
title={Generalized Multiple Access Channels with Confidential Messages},
author={Yingbin Liang and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0605014},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605014},
primaryClass={cs.IT math.IT}
} | liang2006generalized |
arxiv-674173 | cs/0605015 | Collaborative Tagging and Semiotic Dynamics | <|reference_start|>Collaborative Tagging and Semiotic Dynamics: Collaborative tagging has been quickly gaining ground because of its ability to recruit the activity of web users into effectively organizing and sharing vast amounts of information. Here we collect data from a popular system and investigate the statistical properties of tag co-occurrence. We introduce a stochastic model of user behavior embodying two main aspects of collaborative tagging: (i) a frequency-bias mechanism related to the idea that users are exposed to each other's tagging activity; (ii) a notion of memory - or aging of resources - in the form of a heavy-tailed access to the past state of the system. Remarkably, our simple modeling is able to account quantitatively for the observed experimental features, with a surprisingly high accuracy. This points in the direction of a universal behavior of users, who - despite the complexity of their own cognitive processes and the uncoordinated and selfish nature of their tagging activity - appear to follow simple activity patterns.<|reference_end|> | arxiv | @article{cattuto2006collaborative,
title={Collaborative Tagging and Semiotic Dynamics},
author={Ciro Cattuto, Vittorio Loreto, Luciano Pietronero},
journal={PNAS 104, 1461 (2007)},
year={2006},
doi={10.1073/pnas.0610487104},
archivePrefix={arXiv},
eprint={cs/0605015},
primaryClass={cs.CY cs.DL physics.data-an physics.soc-ph}
} | cattuto2006collaborative |
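
A toy reading of the model's two ingredients (the kernel form and the parameters p and tau below are our own assumptions, not the paper's): each new post either invents a tag or copies one from the past, with a fat-tailed preference for recent occurrences.

```python
import random

def simulate(steps, p=0.1, tau=20.0):
    # stream[i] is the tag used at time i; copying reinforces popular tags
    # (frequency bias), and the 1/(age + tau) weights give a long memory.
    stream, next_tag = [0], 1
    for _ in range(steps):
        if random.random() < p:            # invent a brand-new tag
            stream.append(next_tag)
            next_tag += 1
        else:                              # copy a past occurrence
            weights = [1.0 / (len(stream) - i + tau) for i in range(len(stream))]
            stream.append(random.choices(stream, weights=weights)[0])
    return stream
```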
arxiv-674174 | cs/0605016 | Cooperative Relay Broadcast Channels | <|reference_start|>Cooperative Relay Broadcast Channels: The capacity regions are investigated for two relay broadcast channels (RBCs), where relay links are incorporated into standard two-user broadcast channels to support user cooperation. In the first channel, the Partially Cooperative Relay Broadcast Channel, only one user in the system can act as a relay and transmit to the other user through a relay link. An achievable rate region is derived based on the relay using the decode-and-forward scheme. An outer bound on the capacity region is derived and is shown to be tighter than the cut-set bound. For the special case where the Partially Cooperative RBC is degraded, the achievable rate region is shown to be tight and provides the capacity region. Gaussian Partially Cooperative RBCs and Partially Cooperative RBCs with feedback are further studied. In the second channel model being studied in the paper, the Fully Cooperative Relay Broadcast Channel, both users can act as relay nodes and transmit to each other through relay links. This is a more general model than the Partially Cooperative RBC. All the results for Partially Cooperative RBCs are correspondingly generalized to the Fully Cooperative RBCs. It is further shown that the AWGN Fully Cooperative RBC has a larger achievable rate region than the AWGN Partially Cooperative RBC. The results illustrate that relaying and user cooperation are powerful techniques in improving the capacity of broadcast channels.<|reference_end|> | arxiv | @article{liang2006cooperative,
title={Cooperative Relay Broadcast Channels},
author={Yingbin Liang and Venugopal V. Veeravalli},
journal={arXiv preprint arXiv:cs/0605016},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605016},
primaryClass={cs.IT math.IT}
} | liang2006cooperative |
arxiv-674175 | cs/0605017 | Reasoning and Planning with Sensing Actions, Incomplete Information, and Static Causal Laws using Answer Set Programming | <|reference_start|>Reasoning and Planning with Sensing Actions, Incomplete Information, and Static Causal Laws using Answer Set Programming: We extend the 0-approximation of sensing actions and incomplete information in [Son and Baral 2000] to action theories with static causal laws and prove its soundness with respect to the possible world semantics. We also show that the conditional planning problem with respect to this approximation is NP-complete. We then present an answer set programming based conditional planner, called ASCP, that is capable of generating both conformant plans and conditional plans in the presence of sensing actions, incomplete information about the initial state, and static causal laws. We prove the correctness of our implementation and argue that our planner is sound and complete with respect to the proposed approximation. Finally, we present experimental results comparing ASCP to other planners.<|reference_end|> | arxiv | @article{tu2006reasoning,
title={Reasoning and Planning with Sensing Actions, Incomplete Information, and
Static Causal Laws using Answer Set Programming},
author={Phan Huy Tu, Tran Cao Son, and Chitta Baral},
journal={arXiv preprint arXiv:cs/0605017},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605017},
primaryClass={cs.AI}
} | tu2006reasoning |
arxiv-674176 | cs/0605018 | Computational Modeling in Applied Problems: collected papers on econometrics, operations research, game theory and simulation | <|reference_start|>Computational Modeling in Applied Problems: collected papers on econometrics, operations research, game theory and simulation: Computational models pervade all branches of the exact sciences and have in recent times also begun to prove immensely useful in some of the traditionally 'soft' sciences like ecology, sociology and politics. This volume is a collection of a few cutting-edge research papers on the application of a variety of computational models and tools in the analysis, interpretation and solution of vexing real-world problems and issues in economics, management, ecology and global politics by some prolific researchers in the field.<|reference_end|> | arxiv | @article{smarandache2006computational,
title={Computational Modeling in Applied Problems: collected papers on
econometrics, operations research, game theory and simulation},
author={Florentin Smarandache, Sukanto Bhattacharya, Mohammad Khoshnevisan,
Housila P. Singh, Rajesh Singh, F. Kaymram, S. Malakar, Jose L. Salmeron},
journal={Hexis, 2006.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605018},
primaryClass={cs.OH}
} | smarandache2006computational |
arxiv-674177 | cs/0605019 | The Distribution of Patterns in Random Trees | <|reference_start|>The Distribution of Patterns in Random Trees: Let $T_n$ denote the set of unrooted labeled trees of size $n$ and let $M$ be a particular (finite, unlabeled) tree. Assuming that every tree of $T_n$ is equally likely, it is shown that the limiting distribution as $n$ goes to infinity of the number of occurrences of $M$ as an induced subtree is asymptotically normal with mean value and variance asymptotically equivalent to $\mu n$ and $\sigma^2 n$, respectively, where the constants $\mu>0$ and $\sigma\ge 0$ are computable.<|reference_end|> | arxiv | @article{chyzak2006the,
title={The Distribution of Patterns in Random Trees},
author={Fr'ed'eric Chyzak (INRIA Rocquencourt), Michael Drmota, Thomas
Klausner, Gerard Kok},
journal={arXiv preprint arXiv:cs/0605019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605019},
primaryClass={cs.DM math.CO}
} | chyzak2006the |
arxiv-674178 | cs/0605020 | Applied MVC Patterns. A pattern language | <|reference_start|>Applied MVC Patterns. A pattern language: How can one get the advantages of the MVC model without making applications unnecessarily complex? A full-featured MVC implementation sits at the top end of the ladder of complexity. The other end is meant for simple cases that do not call for such complex designs but still need the advantages of MVC patterns, such as the ability to change the look-and-feel. This paper presents patterns of MVC implementation that help to benefit from the paradigm while keeping the right balance between flexibility and implementation complexity.<|reference_end|> | arxiv | @article{alpaev2006applied,
title={Applied MVC Patterns. A pattern language},
author={Sergey Alpaev},
journal={arXiv preprint arXiv:cs/0605020},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605020},
primaryClass={cs.SE}
} | alpaev2006applied |
arxiv-674179 | cs/0605021 | SAT Techniques for Lexicographic Path Orders | <|reference_start|>SAT Techniques for Lexicographic Path Orders: This seminar report is concerned with expressing LPO-termination of term rewrite systems as a satisfiability problem in propositional logic. After relevant algorithms are explained, experimental results are reported.<|reference_end|> | arxiv | @article{zankl2006sat,
title={SAT Techniques for Lexicographic Path Orders},
author={Harald Zankl},
journal={arXiv preprint arXiv:cs/0605021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605021},
primaryClass={cs.SC}
} | zankl2006sat |
arxiv-674180 | cs/0605022 | Toward a Collection-based Metadata Maintenance Model | <|reference_start|>Toward a Collection-based Metadata Maintenance Model: In this paper, the authors identify key entities and relationships in the operational management of metadata catalogs that describe digital collections, and they draft a data model to support the administration of metadata maintenance for collections. Further, they consider this proposed model in light of other data schemes to which it relates and discuss the implications of the model for library metadata maintenance operations.<|reference_end|> | arxiv | @article{kurth2006toward,
title={Toward a Collection-based Metadata Maintenance Model},
author={Martin Kurth, Jim LeBlanc},
journal={arXiv preprint arXiv:cs/0605022},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605022},
primaryClass={cs.DL}
} | kurth2006toward |
arxiv-674181 | cs/0605023 | The Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy Constraints | <|reference_start|>The Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy Constraints: We consider the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define a suitable security measure for this multi-access environment. We derive an outer bound for the rate region such that secrecy to some pre-determined degree can be maintained. We also find, using Gaussian codebooks, an achievable secrecy region. Gaussian codewords are shown to achieve the sum capacity outer bound, and the achievable region coincides with the outer bound for Gaussian codewords, giving the capacity region when inputs are constrained to be Gaussian. We present numerical results showing the new rate region and compare it with that of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints.<|reference_end|> | arxiv | @article{tekin2006the,
title={The Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy
Constraints},
author={Ender Tekin and Aylin Yener},
journal={arXiv preprint arXiv:cs/0605023},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605023},
primaryClass={cs.IT cs.CR math.IT}
} | tekin2006the |
arxiv-674182 | cs/0605024 | A Formal Measure of Machine Intelligence | <|reference_start|>A Formal Measure of Machine Intelligence: A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this measure formally captures the concept of machine intelligence in the broadest reasonable sense.<|reference_end|> | arxiv | @article{legg2006a,
title={A Formal Measure of Machine Intelligence},
author={Shane Legg and Marcus Hutter},
journal={Proc. 15th Annual Machine Learning Conference of {B}elgium and The
Netherlands (Benelearn 2006) pages 73-80},
year={2006},
number={IDSIA-10-06},
archivePrefix={arXiv},
eprint={cs/0605024},
primaryClass={cs.AI cs.LG}
} | legg2006a |
arxiv-674183 | cs/0605025 | Face Recognition using Principal Component Analysis and Log-Gabor Filters | <|reference_start|>Face Recognition using Principal Component Analysis and Log-Gabor Filters: In this article we propose a novel face recognition method based on Principal Component Analysis (PCA) and Log-Gabor filters. The main advantages of the proposed method are its simple implementation, training, and very high recognition accuracy. For recognition experiments we used 5151 face images of 1311 persons from different sets of the FERET and AR databases that allow to analyze how recognition accuracy is affected by the change of facial expressions, illumination, and aging. Recognition experiments with the FERET database (containing photographs of 1196 persons) showed that our method can achieve maximal 97-98% first one recognition rate and 0.3-0.4% Equal Error Rate. The experiments also showed that the accuracy of our method is less affected by eye location errors and used image normalization method than of traditional PCA -based recognition method.<|reference_end|> | arxiv | @article{perlibakas2006face,
title={Face Recognition using Principal Component Analysis and Log-Gabor
Filters},
author={Vytautas Perlibakas},
journal={arXiv preprint arXiv:cs/0605025},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605025},
primaryClass={cs.CV}
} | perlibakas2006face |
arxiv-674184 | cs/0605026 | Strongly Almost Periodic Sequences under Finite Automata Mappings | <|reference_start|>Strongly Almost Periodic Sequences under Finite Automata Mappings: The notion of almost periodicity nontrivially generalizes the notion of periodicity. Strongly almost periodic sequences (=uniformly recurrent infinite words) first appeared in the field of symbolic dynamics, but then turned out to be interesting in connection with computer science. The paper studies the class of eventually strongly almost periodic sequences (i. e., becoming strongly almost periodic after deleting some prefix). We prove that the property of eventual strong almost periodicity is preserved under the mappings done by finite automata and finite transducers. The class of almost periodic sequences includes the class of eventually strongly almost periodic sequences. We prove this inclusion to be strict.<|reference_end|> | arxiv | @article{pritykin2006strongly,
title={Strongly Almost Periodic Sequences under Finite Automata Mappings},
author={Yuri Pritykin},
journal={arXiv preprint arXiv:cs/0605026},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605026},
primaryClass={cs.DM}
} | pritykin2006strongly |
arxiv-674185 | cs/0605027 | Recognition of expression variant faces using masked log-Gabor features and Principal Component Analysis | <|reference_start|>Recognition of expression variant faces using masked log-Gabor features and Principal Component Analysis: In this article we propose a method for the recognition of faces with different facial expressions. For recognition we extract feature vectors by using log-Gabor filters of multiple orientations and scales. Using sliding window algorithm and variances -based masking these features are extracted at image regions that are less affected by the changes of facial expressions. Extracted features are passed to the Principal Component Analysis (PCA) -based recognition method. The results of face recognition experiments using expression variant faces showed that the proposed method could achieve higher recognition accuracy than many other methods. For development and testing we used facial images from the AR and FERET databases. Using facial photographs of more than one thousand persons from the FERET database the proposed method achieved 96.6-98.9% first one recognition rate and 0.2-0.6% Equal Error Rate (EER).<|reference_end|> | arxiv | @article{perlibakas2006recognition,
title={Recognition of expression variant faces using masked log-Gabor features
and Principal Component Analysis},
author={Vytautas Perlibakas},
journal={arXiv preprint arXiv:cs/0605027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605027},
primaryClass={cs.CV}
} | perlibakas2006recognition |
arxiv-674186 | cs/0605028 | The Gaussian Multiple Access Wire-Tap Channel | <|reference_start|>The Gaussian Multiple Access Wire-Tap Channel: We consider the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define suitable security measures for this multi-access environment. Using codebooks generated randomly according to a Gaussian distribution, achievable secrecy rate regions are identified using superposition coding and TDMA coding schemes. An upper bound for the secrecy sum-rate is derived, and our coding schemes are shown to achieve the sum capacity. Numerical results showing the new rate region are presented and compared with the capacity region of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints, quantifying the price paid for secrecy.<|reference_end|> | arxiv | @article{tekin2006the,
title={The Gaussian Multiple Access Wire-Tap Channel},
author={Ender Tekin and Aylin Yener},
journal={arXiv preprint arXiv:cs/0605028},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605028},
primaryClass={cs.IT cs.CR math.IT}
} | tekin2006the |
arxiv-674187 | cs/0605029 | Spanners for Geometric Intersection Graphs | <|reference_start|>Spanners for Geometric Intersection Graphs: Efficient algorithms are presented for constructing spanners in geometric intersection graphs. For a unit ball graph in R^k, a (1+\epsilon)-spanner is obtained using efficient partitioning of the space into hypercubes and solving bichromatic closest pair problems. The spanner construction has almost equivalent complexity to the construction of Euclidean minimum spanning trees. The results are extended to arbitrary ball graphs with a sub-quadratic running time. For unit ball graphs, the spanners have a small separator decomposition which can be used to obtain efficient algorithms for approximating proximity problems like diameter and distance queries. The results on compressed quadtrees, geometric graph separators, and diameter approximation might be of independent interest.<|reference_end|> | arxiv | @article{furer2006spanners,
title={Spanners for Geometric Intersection Graphs},
author={Martin Furer, Shiva Prasad Kasiviswanathan},
journal={Journal of Computational Geometry 3(1) (2012) 31-64},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605029},
primaryClass={cs.CG}
} | furer2006spanners |
arxiv-674188 | cs/0605030 | A Delay Analysis of Maximal Matching Switching with Speedup | <|reference_start|>A Delay Analysis of Maximal Matching Switching with Speedup: In this paper we analyze the average queue backlog in a combined input-output queued switch using a maximal size matching scheduling algorithm. We compare this average backlog to the average backlog achieved by an optimal switch. We model the cell arrival process as independent and identically distributed between time slots and uniformly distributed among input and output ports. For switches with many input and output ports, the backlog associated with maximal size matching with speedup 3 is no more than 10/3 times the backlog associated with an optimal switch. Moreover, this performance ratio rapidly approaches 2 as speedup increases.<|reference_end|> | arxiv | @article{cogill2006a,
title={A Delay Analysis of Maximal Matching Switching with Speedup},
author={Randy Cogill and Sanjay Lall},
journal={arXiv preprint arXiv:cs/0605030},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605030},
primaryClass={cs.NI cs.PF}
} | cogill2006a |
arxiv-674189 | cs/0605031 | On the Design of Agent-Based Systems using UML and Extensions | <|reference_start|>On the Design of Agent-Based Systems using UML and Extensions: The Unified Software Development Process (USDP) and UML have now been generally accepted as the standard methodology and modeling language for developing Object-Oriented Systems. Although Agent-based Systems introduce new issues, we consider that USDP and UML can be used in an extended manner for modeling Agent-based Systems. The paper presents a methodology for designing agent-based systems and the specific models expressed in a UML-based notation corresponding to each phase of the software development process. UML was extended using the provided mechanism: stereotypes. Therefore, this approach can be managed with any CASE tool supporting UML. A Case Study, the development of a specific agent-based Student Evaluation System (SAS), is presented.<|reference_end|> | arxiv | @article{dinsoreanu2006on,
title={On the Design of Agent-Based Systems using UML and Extensions},
author={Mihaela Dinsoreanu, Ioan Salomie, Kalman Pusztai},
journal={arXiv preprint arXiv:cs/0605031},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605031},
primaryClass={cs.AI cs.MA cs.SE}
} | dinsoreanu2006on |
arxiv-674190 | cs/0605032 | A framework of reusable structures for mobile agent development | <|reference_start|>A framework of reusable structures for mobile agent development: Mobile agents research is clearly aiming towards imposing agent based development as the next generation of tools for writing software. This paper comes with its own contribution to this global goal by introducing a novel unifying framework meant to bring simplicity and interoperability to and among agent platforms as we know them today. In addition to this, we also introduce a set of agent behaviors which, although tailored for and from the area of virtual learning environments, are none the less generic enough to be used for rapid, simple, useful and reliable agent deployment. The paper also presents an illustrative case study brought forward to prove the feasibility of our design.<|reference_end|> | arxiv | @article{marian2006a,
title={A framework of reusable structures for mobile agent development},
author={Tudor Marian, Bogdan Dumitriu, Mihaela Dinsoreanu, Ioan Salomie},
journal={arXiv preprint arXiv:cs/0605032},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605032},
primaryClass={cs.MA cs.AI cs.SE}
} | marian2006a |
arxiv-674191 | cs/0605033 | Mobile Agent Based Solutions for Knowledge Assessment in elearning Environments | <|reference_start|>Mobile Agent Based Solutions for Knowledge Assessment in elearning Environments: E-learning is nowadays one of the most interesting of the "e- " domains available through the Internet. The main problem to create a Web-based, virtual environment is to model the traditional domain and to implement the model using the most suitable technologies. We analyzed the distance learning domain and investigated the possibility to implement some e-learning services using mobile agent technologies. This paper presents a model of the Student Assessment Service (SAS) and an agent-based framework developed to be used for implementing specific applications. A specific Student Assessment application that relies on the framework was developed.<|reference_end|> | arxiv | @article{dinsoreanu2006mobile,
title={Mobile Agent Based Solutions for Knowledge Assessment in elearning
Environments},
author={Mihaela Dinsoreanu, Cristian Godja, Claudiu Anghel, Ioan Salomie, Tom
Coffey},
journal={arXiv preprint arXiv:cs/0605033},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605033},
primaryClass={cs.MA cs.AI cs.SE}
} | dinsoreanu2006mobile |
arxiv-674192 | cs/0605034 | Peer to Peer Networks for Defense Against Internet Worms | <|reference_start|>Peer to Peer Networks for Defense Against Internet Worms: Internet worms, which spread in computer networks without human mediation, pose a severe threat to computer systems today. The rate of propagation of worms has been measured to be extremely high and they can infect a large fraction of their potential hosts in a short time. We study two different methods of patch dissemination to combat the spread of worms. We first show that using a fixed number of patch servers performs woefully inadequately against Internet worms. We then show that by exploiting the exponential data dissemination capability of P2P systems, the spread of worms can be halted very effectively. We compare the two methods by using fluid models to compute two quantities of interest: the time taken to effectively combat the progress of the worm and the maximum number of infected hosts. We validate our models using Internet measurements and simulations.<|reference_end|> | arxiv | @article{shakkottai2006peer,
title={Peer to Peer Networks for Defense Against Internet Worms},
author={Srinivas Shakkottai, R. Srikant},
journal={arXiv preprint arXiv:cs/0605034},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605034},
primaryClass={cs.CR cs.AR cs.NI}
} | shakkottai2006peer |
arxiv-674193 | cs/0605035 | Query Chains: Learning to Rank from Implicit Feedback | <|reference_start|>Query Chains: Learning to Rank from Implicit Feedback: This paper presents a novel approach for using clickthrough data to learn ranked retrieval functions for web search results. We observe that users searching the web often perform a sequence, or chain, of queries with a similar information need. Using query chains, we generate new types of preference judgments from search engine logs, thus taking advantage of user intelligence in reformulating queries. To validate our method we perform a controlled user study comparing generated preference judgments to explicit relevance judgments. We also implemented a real-world search engine to test our approach, using a modified ranking SVM to learn an improved ranking function from preference data. Our results demonstrate significant improvements in the ranking given by the search engine. The learned rankings outperform both a static ranking function, as well as one trained without considering query chains.<|reference_end|> | arxiv | @article{radlinski2006query,
title={Query Chains: Learning to Rank from Implicit Feedback},
author={Filip Radlinski and Thorsten Joachims},
journal={Proceedings of the ACM Conference on Knowledge Discovery and Data
Mining (KDD), ACM, 2005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605035},
primaryClass={cs.LG cs.IR}
} | radlinski2006query |
arxiv-674194 | cs/0605036 | Evaluating the Robustness of Learning from Implicit Feedback | <|reference_start|>Evaluating the Robustness of Learning from Implicit Feedback: This paper evaluates the robustness of learning from implicit feedback in web search. In particular, we create a model of user behavior by drawing upon user studies in laboratory and real-world settings. The model is used to understand the effect of user behavior on the performance of a learning algorithm for ranked retrieval. We explore a wide range of possible user behaviors and find that learning from implicit feedback can be surprisingly robust. This complements previous results that demonstrated our algorithm's effectiveness in a real-world search engine application.<|reference_end|> | arxiv | @article{radlinski2006evaluating,
title={Evaluating the Robustness of Learning from Implicit Feedback},
author={Filip Radlinski and Thorsten Joachims},
journal={arXiv preprint arXiv:cs/0605036},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605036},
primaryClass={cs.LG cs.IR}
} | radlinski2006evaluating |
arxiv-674195 | cs/0605037 | Minimally Invasive Randomization for Collecting Unbiased Preferences from Clickthrough Logs | <|reference_start|>Minimally Invasive Randomization for Collecting Unbiased Preferences from Clickthrough Logs: Clickthrough data is a particularly inexpensive and plentiful resource to obtain implicit relevance feedback for improving and personalizing search engines. However, it is well known that the probability of a user clicking on a result is strongly biased toward documents presented higher in the result set irrespective of relevance. We introduce a simple method to modify the presentation of search results that provably gives relevance judgments that are unaffected by presentation bias under reasonable assumptions. We validate this property of the training data in interactive real world experiments. Finally, we show that using these unbiased relevance judgments learning methods can be guaranteed to converge to an ideal ranking given sufficient data.<|reference_end|> | arxiv | @article{radlinski2006minimally,
title={Minimally Invasive Randomization for Collecting Unbiased Preferences
from Clickthrough Logs},
author={Filip Radlinski and Thorsten Joachims},
journal={arXiv preprint arXiv:cs/0605037},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605037},
primaryClass={cs.IR cs.LG}
} | radlinski2006minimally |
arxiv-674196 | cs/0605038 | An Unfolding-Based Semantics for Logic Programming with Aggregates | <|reference_start|>An Unfolding-Based Semantics for Logic Programming with Aggregates: The paper presents two equivalent definitions of answer sets for logic programs with aggregates. These definitions build on the notion of unfolding of aggregates, and they are aimed at creating methodologies to translate logic programs with aggregates to normal logic programs or positive programs, whose answer set semantics can be used to define the semantics of the original programs. The first definition provides an alternative view of the semantics for logic programming with aggregates described by Pelov et al. The second definition is similar to the traditional answer set definition for normal logic programs, in that, given a logic program with aggregates and an interpretation, the unfolding process produces a positive program. The paper shows how this definition can be extended to consider aggregates in the head of the rules. The proposed views of logic programming with aggregates are simple and coincide with the ultimate stable model semantics, and with other semantic characterizations for large classes of programs (e.g., programs with monotone aggregates and programs that are aggregate-stratified). Moreover, it can be directly employed to support an implementation using available answer set solvers. The paper describes a system, called ASP^A, that is capable of computing answer sets of programs with arbitrary (e.g., recursively defined) aggregates.<|reference_end|> | arxiv | @article{son2006an,
title={An Unfolding-Based Semantics for Logic Programming with Aggregates},
author={Tran Cao Son, Enrico Pontelli, Islam Elkabani},
journal={arXiv preprint arXiv:cs/0605038},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605038},
primaryClass={cs.SE cs.AI}
} | son2006an |
arxiv-674197 | cs/0605039 | Fast and Generalized Polynomial Time Memory Consistency Verification | <|reference_start|>Fast and Generalized Polynomial Time Memory Consistency Verification: The problem of verifying multi-threaded execution against the memory consistency model of a processor is known to be an NP hard problem. However polynomial time algorithms exist that detect almost all failures in such execution. These are often used in practice for microprocessor verification. We present a low complexity and fully parallelized algorithm to check program execution against the processor consistency model. In addition our algorithm is general enough to support a number of consistency models without any degradation in performance. An implementation of this algorithm is currently used in practice to verify processors in the post silicon stage for multiple architectures.<|reference_end|> | arxiv | @article{roy2006fast,
title={Fast and Generalized Polynomial Time Memory Consistency Verification},
author={Amitabha Roy, Stephan Zeisset, Charles J. Fleckenstein, John C. Huang},
journal={arXiv preprint arXiv:cs/0605039},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605039},
primaryClass={cs.AR cs.LO cs.PF}
} | roy2006fast |
arxiv-674198 | cs/0605040 | General Discounting versus Average Reward | <|reference_start|>General Discounting versus Average Reward: Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to infinity (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that asymptotically U for m->infinity and V for k->infinity are equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then existence of the limit of V implies that the limit of U exists.<|reference_end|> | arxiv | @article{hutter2006general,
title={General Discounting versus Average Reward},
author={Marcus Hutter},
journal={Proc. 17th International Conf. on Algorithmic Learning Theory (ALT
2006) pages 244-258},
year={2006},
number={IDSIA-11-06},
archivePrefix={arXiv},
eprint={cs/0605040},
primaryClass={cs.LG}
} | hutter2006general |
arxiv-674199 | cs/0605041 | Asymptotically Optimal Multiple-access Communication via Distributed Rate Splitting | <|reference_start|>Asymptotically Optimal Multiple-access Communication via Distributed Rate Splitting: We consider the multiple-access communication problem in a distributed setting for both the additive white Gaussian noise channel and the discrete memoryless channel. We propose a scheme called Distributed Rate Splitting to achieve the optimal rates allowed by information theory in a distributed manner. In this scheme, each real user creates a number of virtual users via a power/rate splitting mechanism in the M-user Gaussian channel or via a random switching mechanism in the M-user discrete memoryless channel. At the receiver, all virtual users are successively decoded. Compared with other multiple-access techniques, Distributed Rate Splitting can be implemented with lower complexity and less coordination. Furthermore, in a symmetric setting, we show that the rate tuple achieved by this scheme converges to the maximum equal rate point allowed by the information-theoretic bound as the number of virtual users per real user tends to infinity. When the capacity regions are asymmetric, we show that a point on the dominant face can be achieved asymptotically. Finally, when there is an unequal number of virtual users per real user, we show that differential user rate requirements can be accommodated in a distributed fashion.<|reference_end|> | arxiv | @article{cao2006asymptotically,
title={Asymptotically Optimal Multiple-access Communication via Distributed
Rate Splitting},
author={Jian Cao, Edmund M. Yeh},
journal={arXiv preprint arXiv:cs/0605041},
year={2006},
doi={10.1109/TIT.2006.887497},
archivePrefix={arXiv},
eprint={cs/0605041},
primaryClass={cs.IT math.IT}
} | cao2006asymptotically |
arxiv-674200 | cs/0605042 | Throughput Optimal Distributed Control of Stochastic Wireless Networks | <|reference_start|>Throughput Optimal Distributed Control of Stochastic Wireless Networks: This paper has been withdrawn by the author due to the need for further revision.<|reference_end|> | arxiv | @article{xi2006throughput,
title={Throughput Optimal Distributed Control of Stochastic Wireless Networks},
author={Yufang Xi, Edmund M. Yeh},
journal={arXiv preprint arXiv:cs/0605042},
year={2006},
archivePrefix={arXiv},
eprint={cs/0605042},
primaryClass={cs.NI}
} | xi2006throughput |