corpus_id: string (length 7-12)
paper_id: string (length 9-16)
title: string (length 1-261)
abstract: string (length 70-4.02k)
source: string (1 class)
bibtex: string (length 208-20.9k)
citation_key: string (length 6-100)
arxiv-674201
cs/0605043
Continuations, proofs and tests
<|reference_start|>Continuations, proofs and tests: Continuation Passing Style (CPS) is one of the most important issues in the field of functional programming languages, and the quest for a primitive notion of types for continuations is still open. Starting from the notion of ``test'' proposed by Girard, we develop a notion of test for intuitionistic logic. We give a complete deductive system for tests and show that it is well suited to dealing with ``continuations''. In particular, the proposed system makes it possible to work with Call-by-Value and Call-by-Name translations in a uniform way.<|reference_end|>
arxiv
@article{guerrini2006continuations, title={Continuations, proofs and tests}, author={Stefano Guerrini and Andrea Masini}, journal={arXiv preprint arXiv:cs/0605043}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605043}, primaryClass={cs.LO cs.PL} }
guerrini2006continuations
arxiv-674202
cs/0605044
Linear Shift-Register Synthesis for Multiple Sequences of Varying Length
<|reference_start|>Linear Shift-Register Synthesis for Multiple Sequences of Varying Length: The problem of finding the shortest linear shift-register capable of generating t finite length sequences over some field F is considered. A similar problem was already addressed by Feng and Tzeng, who presented an iterative algorithm for solving this multi-sequence shift-register synthesis problem, which can be considered a generalization of the well known Berlekamp-Massey algorithm. The Feng-Tzeng algorithm indeed works if all t sequences have the same length. This paper focuses on multi-sequence shift-register synthesis for generating sequences of varying length. It is shown that the Feng-Tzeng algorithm does not always give the correct solution in this case. A modified algorithm, which overcomes this problem, is proposed and formally proved.<|reference_end|>
arxiv
@article{schmidt2006linear, title={Linear Shift-Register Synthesis for Multiple Sequences of Varying Length}, author={Georg Schmidt and Vladimir R. Sidorenko}, journal={arXiv preprint arXiv:cs/0605044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605044}, primaryClass={cs.IT math.IT} }
schmidt2006linear
arxiv-674203
cs/0605045
On Orthogonalities in Matrices
<|reference_start|>On Orthogonalities in Matrices: In this paper we discuss the different possible orthogonalities in matrices, namely orthogonal, quasi-orthogonal, semi-orthogonal and non-orthogonal matrices, including completely positive matrices, giving some of their constructions and studying some of their properties.<|reference_end|>
arxiv
@article{mohan2006on, title={On Orthogonalities in Matrices}, author={R. N. Mohan}, journal={arXiv preprint arXiv:cs/0605045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605045}, primaryClass={cs.DM} }
mohan2006on
arxiv-674204
cs/0605046
Patterns of iid Sequences and Their Entropy - Part I: General Bounds
<|reference_start|>Patterns of iid Sequences and Their Entropy - Part I: General Bounds: Tight bounds on the block entropy of patterns of sequences generated by independent and identically distributed (i.i.d.) sources are derived. A pattern of a sequence is a sequence of integer indices with each index representing the order of first occurrence of the respective symbol in the original sequence. Since a pattern is the result of data processing on the original sequence, its entropy cannot be larger. Bounds derived here describe the pattern entropy as function of the original i.i.d. source entropy, the alphabet size, the symbol probabilities, and their arrangement in the probability space. Matching upper and lower bounds derived provide a useful tool for very accurate approximations of pattern block entropies for various distributions, and for assessing the decrease of the pattern entropy from that of the original i.i.d. sequence.<|reference_end|>
arxiv
@article{shamir2006patterns, title={Patterns of i.i.d. Sequences and Their Entropy - Part I: General Bounds}, author={Gil I. Shamir}, journal={arXiv preprint arXiv:cs/0605046}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605046}, primaryClass={cs.IT math.IT} }
shamir2006patterns
arxiv-674205
cs/0605047
Generalized Entropy Power Inequalities and Monotonicity Properties of Information
<|reference_start|>Generalized Entropy Power Inequalities and Monotonicity Properties of Information: New families of Fisher information and entropy power inequalities for sums of independent random variables are presented. These inequalities relate the information in the sum of $n$ independent random variables to the information contained in sums over subsets of the random variables, for an arbitrary collection of subsets. As a consequence, a simple proof of the monotonicity of information in central limit theorems is obtained, both in the setting of i.i.d. summands as well as in the more general setting of independent summands with variance-standardized sums.<|reference_end|>
arxiv
@article{madiman2006generalized, title={Generalized Entropy Power Inequalities and Monotonicity Properties of Information}, author={Mokshay Madiman and Andrew Barron}, journal={IEEE Transactions on Information Theory, Vol. 53(7), pp. 2317-2329, July 2007}, year={2006}, doi={10.1109/TIT.2007.899484}, archivePrefix={arXiv}, eprint={cs/0605047}, primaryClass={cs.IT math.IT math.PR math.ST stat.TH} }
madiman2006generalized
arxiv-674206
cs/0605048
On Learning Thresholds of Parities and Unions of Rectangles in Random Walk Models
<|reference_start|>On Learning Thresholds of Parities and Unions of Rectangles in Random Walk Models: In a recent breakthrough, [Bshouty et al., 2005] obtained the first passive-learning algorithm for DNFs under the uniform distribution. They showed that DNFs are learnable in the Random Walk and Noise Sensitivity models. We extend their results in several directions. We first show that thresholds of parities, a natural class encompassing DNFs, cannot be learned efficiently in the Noise Sensitivity model using only statistical queries. In contrast, we show that a cyclic version of the Random Walk model allows efficient learning of polynomially weighted thresholds of parities. We also extend the algorithm of Bshouty et al. to the case of Unions of Rectangles, a natural generalization of DNFs to $\{0,...,b-1\}^n$.<|reference_end|>
arxiv
@article{roch2006on, title={On Learning Thresholds of Parities and Unions of Rectangles in Random Walk Models}, author={S. Roch}, journal={arXiv preprint arXiv:cs/0605048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605048}, primaryClass={cs.LG cs.CC math.PR} }
roch2006on
arxiv-674207
cs/0605049
On fractionally linear functions over a finite field
<|reference_start|>On fractionally linear functions over a finite field: In this note, by considering fractionally linear functions over a finite field and consequently developing an abstract sequence, we study some of its properties.<|reference_end|>
arxiv
@article{siddlenikov2006on, title={On fractionally linear functions over a finite field}, author={V. M. Siddlenikov and R. N. Mohan and Moon Ho Lee}, journal={arXiv preprint arXiv:cs/0605049}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605049}, primaryClass={cs.DM} }
siddlenikov2006on
arxiv-674208
cs/0605050
A Polynomial Time Nilpotence Test for Galois Groups and Related Results
<|reference_start|>A Polynomial Time Nilpotence Test for Galois Groups and Related Results: We give a deterministic polynomial-time algorithm to check whether the Galois group $\Gal{f}$ of an input polynomial $f(X) \in \Q[X]$ is nilpotent: the running time is polynomial in $\size{f}$. Also, we generalize the Landau-Miller solvability test to an algorithm that tests if $\Gal{f}$ is in $\Gamma_d$: this algorithm runs in time polynomial in $\size{f}$ and $n^d$ and, moreover, if $\Gal{f}\in\Gamma_d$ it computes all the prime factors of $# \Gal{f}$.<|reference_end|>
arxiv
@article{arvind2006a, title={A Polynomial Time Nilpotence Test for Galois Groups and Related Results}, author={V. Arvind and Piyush P Kurur}, journal={arXiv preprint arXiv:cs/0605050}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605050}, primaryClass={cs.CC cs.DS} }
arvind2006a
arxiv-674209
cs/0605051
A General Method for Finding Low Error Rates of LDPC Codes
<|reference_start|>A General Method for Finding Low Error Rates of LDPC Codes: This paper outlines a three-step procedure for determining the low bit error rate performance curve of a wide class of LDPC codes of moderate length. The traditional method to estimate code performance in the higher SNR region is to use a sum of the contributions of the most dominant error events to the probability of error. These dominant error events will be both code and decoder dependent, consisting of low-weight codewords as well as non-codeword events if ML decoding is not used. For even moderate length codes, it is not feasible to find all of these dominant error events with a brute force search. The proposed method provides a convenient way to evaluate very low bit error rate performance of an LDPC code without requiring knowledge of the complete error event weight spectrum or resorting to a Monte Carlo simulation. This new method can be applied to various types of decoding such as the full belief propagation version of the message passing algorithm or the commonly used min-sum approximation to belief propagation. The proposed method allows one to efficiently see error performance at bit error rates that were previously out of reach of Monte Carlo methods. This result will provide a solid foundation for the analysis and design of LDPC codes and decoders that are required to provide a guaranteed very low bit error rate performance at certain SNRs.<|reference_end|>
arxiv
@article{cole2006a, title={A General Method for Finding Low Error Rates of LDPC Codes}, author={Chad A. Cole and Stephen G. Wilson and Eric K. Hall and Thomas R. Giallorenzi}, journal={arXiv preprint arXiv:cs/0605051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605051}, primaryClass={cs.IT math.IT} }
cole2006a
arxiv-674210
cs/0605052
Node-Based Optimal Power Control, Routing, and Congestion Control in Wireless Networks
<|reference_start|>Node-Based Optimal Power Control, Routing, and Congestion Control in Wireless Networks: We present a unified analytical framework within which power control, rate allocation, routing, and congestion control for wireless networks can be optimized in a coherent and integrated manner. We consider a multi-commodity flow model with an interference-limited physical-layer scheme in which power control and routing variables are chosen to minimize the sum of convex link costs reflecting, for instance, queuing delay. Distributed network algorithms where joint power control and routing are performed on a node-by-node basis are presented. We show that with appropriately chosen parameters, these algorithms iteratively converge to the global optimum from any initial point with finite cost. Next, we study refinements of the algorithms for more accurate link capacity models, and extend the results to wireless networks where the physical-layer achievable rate region is given by an arbitrary convex set, and the link costs are strictly quasiconvex. Finally, we demonstrate that congestion control can be seamlessly incorporated into our framework, so that algorithms developed for power control and routing can naturally be extended to optimize user input rates.<|reference_end|>
arxiv
@article{xi2006node-based, title={Node-Based Optimal Power Control, Routing, and Congestion Control in Wireless Networks}, author={Yufang Xi and Edmund M. Yeh}, journal={arXiv preprint arXiv:cs/0605052}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605052}, primaryClass={cs.NI} }
xi2006node-based
arxiv-674211
cs/0605053
Gridscape II: A Customisable and Pluggable Grid Monitoring Portal and its Integration with Google Maps
<|reference_start|>Gridscape II: A Customisable and Pluggable Grid Monitoring Portal and its Integration with Google Maps: Grid computing has emerged as an effective means of facilitating the sharing of distributed heterogeneous resources, enabling collaboration in large scale environments. However, the nature of Grid systems, coupled with the overabundance and fragmentation of information, makes it difficult to monitor resources, services, and computations in order to plan and make decisions. In this paper we present Gridscape II, a customisable portal component that can be used on its own or plugged in to complement existing Grid portals. Gridscape II manages the gathering of information from arbitrary, heterogeneous and distributed sources and presents them together seamlessly within a single interface. It also leverages the Google Maps API in order to provide a highly interactive user interface. Gridscape II is simple and easy to use, providing a solution to those users who do not wish to invest heavily in developing their own monitoring portal from scratch, and also for those users who want something that is easy to customise and extend for their specific needs.<|reference_end|>
arxiv
@article{gibbins2006gridscape, title={Gridscape II: A Customisable and Pluggable Grid Monitoring Portal and its Integration with Google Maps}, author={Hussein Gibbins and Rajkumar Buyya}, journal={arXiv preprint arXiv:cs/0605053}, year={2006}, number={Technical Report, GRIDS-TR-2006-8, Grid Computing and Distributed Systems Laboratory, The University of Melbourne, Australia, May 12, 2006}, archivePrefix={arXiv}, eprint={cs/0605053}, primaryClass={cs.DC} }
gibbins2006gridscape
arxiv-674212
cs/0605054
From Proof Nets to the Free *-Autonomous Category
<|reference_start|>From Proof Nets to the Free *-Autonomous Category: In the first part of this paper we present a theory of proof nets for full multiplicative linear logic, including the two units. It naturally extends the well-known theory of unit-free multiplicative proof nets. A linking is no longer a set of axiom links but a tree in which the axiom links are subtrees. These trees will be identified according to an equivalence relation based on a simple form of graph rewriting. We show the standard results of sequentialization and strong normalization of cut elimination. In the second part of the paper we show that the identifications enforced on proofs are such that the class of two-conclusion proof nets defines the free *-autonomous category.<|reference_end|>
arxiv
@article{lamarche2006from, title={From Proof Nets to the Free *-Autonomous Category}, author={Francois Lamarche and Lutz Strassburger}, journal={Logical Methods in Computer Science, Volume 2, Issue 4 (October 5, 2006) lmcs:2239}, year={2006}, doi={10.2168/LMCS-2(4:3)2006}, archivePrefix={arXiv}, eprint={cs/0605054}, primaryClass={cs.LO} }
lamarche2006from
arxiv-674213
cs/0605055
Approximate Discrete Probability Distribution Representation using a Multi-Resolution Binary Tree
<|reference_start|>Approximate Discrete Probability Distribution Representation using a Multi-Resolution Binary Tree: Computing and storing probabilities is a hard problem as soon as one has to deal with complex distributions over multiple random variables. The problem of efficient representation of probability distributions is central, in terms of computational efficiency, in the field of probabilistic reasoning. The main problem arises when dealing with joint probability distributions over a set of random variables: they are always represented using huge probability arrays. In this paper, a new method based on a binary-tree representation is introduced in order to store very large joint distributions efficiently. Our approach approximates any multidimensional joint distribution using an adaptive discretization of the space, under the assumption that the lower the probability mass of a particular region of feature space, the larger the discretization step. This assumption leads to a representation that is highly optimized in terms of time and memory. The other advantages of our approach are the ability to dynamically refine the distribution whenever needed, leading to a more accurate representation of the probability distribution, and an anytime representation of the distribution.<|reference_end|>
arxiv
@article{bellot2006approximate, title={Approximate Discrete Probability Distribution Representation using a Multi-Resolution Binary Tree}, author={David Bellot (INRIA Rh\^one-Alpes / Gravir-Imag) and Pierre Bessiere (INRIA Rh\^one-Alpes / Gravir-Imag)}, journal={International Conference on Tools with Artificial Intelligence (ICTAI 2003), Sacramento, 2003}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605055}, primaryClass={cs.AI} }
bellot2006approximate
arxiv-674214
cs/0605056
Utility Computing and Global Grids
<|reference_start|>Utility Computing and Global Grids: This chapter focuses on the use of Grid technologies to achieve utility computing. An overview of how Grids can support utility computing is first presented through the architecture of Utility Grids. Then, utility-based resource allocation is described in detail at each level of the architecture. Finally, some industrial solutions for utility computing are discussed.<|reference_end|>
arxiv
@article{yeo2006utility, title={Utility Computing and Global Grids}, author={Chee Shin Yeo and Marcos Dias de Assuncao and Jia Yu and Anthony Sulistio and Srikumar Venugopal and Martin Placek and Rajkumar Buyya}, journal={arXiv preprint arXiv:cs/0605056}, year={2006}, number={Technical Report, GRIDS-TR-2006-7, Grid Computing and Distributed Systems Laboratory, The University of Melbourne, Australia, April 13, 2006}, archivePrefix={arXiv}, eprint={cs/0605056}, primaryClass={cs.DC} }
yeo2006utility
arxiv-674215
cs/0605057
SLA-Based Coordinated Superscheduling Scheme and Performance for Computational Grids
<|reference_start|>SLA-Based Coordinated Superscheduling Scheme and Performance for Computational Grids: The Service Level Agreement (SLA) based grid superscheduling approach promotes coordinated resource sharing. Superscheduling is facilitated between administratively and topologically distributed grid sites by grid schedulers such as resource brokers. In this work, we present a market-based SLA coordination mechanism. We base our SLA model on the well-known contract net protocol. The key advantages of our approach are that it allows: (i) resource owners to have a finer degree of control over resource allocation than was previously possible through traditional mechanisms; and (ii) superschedulers to bid for SLA contracts in the contract net, with a focus on completing the job within the user-specified deadline. In this work, we use simulation to show the effectiveness of our proposed approach.<|reference_end|>
arxiv
@article{ranjan2006sla-based, title={SLA-Based Coordinated Superscheduling Scheme and Performance for Computational Grids}, author={Rajiv Ranjan and Aaron Harwood and Rajkumar Buyya}, journal={In Proceedings of the 8th IEEE International Conference on Cluster Computing (Cluster 2006), IEEE Computer Society Press, September 27 - 30, 2006, Barcelona, Spain.}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605057}, primaryClass={cs.DC} }
ranjan2006sla-based
arxiv-674216
cs/0605058
A Monadic, Functional Implementation of Real Numbers
<|reference_start|>A Monadic, Functional Implementation of Real Numbers: Large scale real number computation is an essential ingredient in several modern mathematical proofs. Because such lengthy computations cannot be verified by hand, some mathematicians want to use software proof assistants to verify the correctness of these proofs. This paper develops a new implementation of the constructive real numbers and elementary functions for such proofs by using the monad properties of the completion operation on metric spaces. Bishop and Bridges's notion of regular sequences is generalized to what I call regular functions, which form the completion of any metric space. Using the monad operations, continuous functions on length spaces (a common subclass of metric spaces) are created by lifting continuous functions on the original space. A prototype Haskell implementation has been created. I believe that this approach yields a real number library that is reasonably efficient for computation, and still simple enough to easily verify its correctness.<|reference_end|>
arxiv
@article{o'connor2006a, title={A Monadic, Functional Implementation of Real Numbers}, author={Russell O'Connor}, journal={Russell O'Connor: A monadic, functional implementation of real numbers. Mathematical Structures in Computer Science 17(1): 129-159 (2007)}, year={2006}, doi={10.1017/S0960129506005871}, archivePrefix={arXiv}, eprint={cs/0605058}, primaryClass={cs.NA cs.MS} }
o'connor2006a
arxiv-674217
cs/0605059
Ontological Representations of Software Patterns
<|reference_start|>Ontological Representations of Software Patterns: This paper is based on and advocates the trend in software engineering of extending the use of software patterns as a means of structuring solutions to software development problems (be they motivated by best practice or by company interests and policies). The paper argues that, on the one hand, this development requires tools for the automatic organisation, retrieval and explanation of software patterns, and that, on the other hand, the existence of such tools will itself facilitate the further development and employment of patterns in the software development process. The paper analyses existing pattern representations and concludes that they are inadequate for the kind of automation intended here. Adopting a standpoint similar to that taken in the semantic web, the paper proposes that feasible solutions can be built on the basis of ontological representations.<|reference_end|>
arxiv
@article{rosengard2006ontological, title={Ontological Representations of Software Patterns}, author={Jean-Marc Rosengard and Marian Ursu}, journal={Proceedings of KES'04, Lecture Notes in Computer Science, vol. 3215, pp. 31-37, Springer-Verlag, 2004}, year={2006}, doi={10.1007/b100916}, archivePrefix={arXiv}, eprint={cs/0605059}, primaryClass={cs.SE cs.AI} }
rosengard2006ontological
arxiv-674218
cs/0605060
A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters
<|reference_start|>A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters: Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing. Central to management of resources is the effectiveness of resource allocation as it determines the overall utility of the system. The current approaches to superscheduling in a grid environment are non-coordinated since application level schedulers or brokers make scheduling decisions independently of the others in the system. Clearly, this can exacerbate the load sharing and utilization problems of distributed resources due to suboptimal schedules that are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called \emph{Grid-Federation}, allows the transparent use of resources from the federation when local resources are insufficient to meet its users' requirements. The use of computational economy methodology in coordinating resource allocation not only facilitates the QoS based scheduling, but also enhances utility delivered by resources.<|reference_end|>
arxiv
@article{ranjan2006a, title={A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters}, author={Rajiv Ranjan and Aaron Harwood and Rajkumar Buyya}, journal={In Proceedings of the 7th IEEE International Conference on Cluster Computing (Cluster 2005), IEEE Computer Society Press, September 27 - 30, 2005, Boston, Massachusetts, USA.}, year={2006}, doi={10.1109/CLUSTR.2005.347038}, archivePrefix={arXiv}, eprint={cs/0605060}, primaryClass={cs.DC} }
ranjan2006a
arxiv-674219
cs/0605061
An Internet Framework to Bring Coherence between WAP and HTTP Ensuring Better Mobile Internet Security
<|reference_start|>An Internet Framework to Bring Coherence between WAP and HTTP Ensuring Better Mobile Internet Security: To bring coherence between the Wireless Application Protocol (WAP) and the Hyper Text Transfer Protocol (HTTP), in this paper we propose an enhanced Internet framework, which incorporates a new markup language and a browser compatible with both access protocols. The markup language enables Hyper Text Markup Language (HTML) and Wireless Markup Language (WML) content to co-exist in a single source file, while the browser can handle content compliant with both HTTP and WAP. The proposed framework also bridges the security gap present in the existing mobile Internet framework. Keywords: WAP, WML, HTTP, HTML, browser, parser, wireless devices.<|reference_end|>
arxiv
@article{pathan2006an, title={An Internet Framework to Bring Coherence between WAP and HTTP Ensuring Better Mobile Internet Security}, author={Al-Mukaddim Khan Pathan and Md. Abdul Mottalib and Minhaz Fahim Zibran}, journal={arXiv preprint arXiv:cs/0605061}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605061}, primaryClass={cs.NI} }
pathan2006an
arxiv-674220
cs/0605062
QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data
<|reference_start|>QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data: Conventional IP routing protocols are not suitable for multimedia applications, which have very stringent Quality-of-Service (QoS) demands and require a connection-oriented service. For multimedia applications, the router is expected to forward each packet according to the packet's demands, and it is necessary to find a path that satisfies the specific demands of a particular application. To address these issues, in this paper we present a QoS aware IP routing protocol in which a router stores information about the QoS parameters and routes packets accordingly. Keywords: IP Routing Protocol, Quality of Service (QoS) parameter, QoSIP, Selective Flooding.<|reference_end|>
arxiv
@article{pathan2006qosip:, title={QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data}, author={Al-Mukaddim Khan Pathan and Md. Golam Shagadul Amin Talukder}, journal={arXiv preprint arXiv:cs/0605062}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605062}, primaryClass={cs.NI} }
pathan2006qosip:
arxiv-674221
cs/0605063
An Electronic Payment System to Ensure Cost Effectiveness with Easy Security Incorporation for the Developing Countries
<|reference_start|>An Electronic Payment System to Ensure Cost Effectiveness with Easy Security Incorporation for the Developing Countries: With the rapid growth of Information and Communication Technology, electronic commerce now acts as a new means of carrying out business transactions through electronic media such as the Internet. To avoid the complexities associated with digital cash and electronic cash, consumers and vendors are looking to credit card payments on the Internet as one possible time-tested alternative. This gave rise to on-line payment processing using third-party verification, which is in most cases not suitable for the developing countries because of the excessive costs of establishing and maintaining an on-line third-party processor. As a remedy, in this paper we propose a framework for easy security incorporation in credit card based electronic payment systems without the use of an on-line third-party processor, which tends to be low cost and effective for the developing countries.<|reference_end|>
arxiv
@article{anik2006an, title={An Electronic Payment System to Ensure Cost Effectiveness with Easy Security Incorporation for the Developing Countries}, author={Asif Ahmed Anik and Al-Mukaddim Khan Pathan}, journal={arXiv preprint arXiv:cs/0605063}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605063}, primaryClass={cs.OH} }
anik2006an
arxiv-674222
cs/0605064
Modal Logics of Topological Relations
<|reference_start|>Modal Logics of Topological Relations: Logical formalisms for reasoning about relations between spatial regions play a fundamental role in geographical information systems, spatial and constraint databases, and spatial reasoning in AI. In analogy with Halpern and Shoham's modal logic of time intervals based on the Allen relations, we introduce a family of modal logics equipped with eight modal operators that are interpreted by the Egenhofer-Franzosa (or RCC8) relations between regions in topological spaces such as the real plane. We investigate the expressive power and computational complexity of logics obtained in this way. It turns out that our modal logics have the same expressive power as the two-variable fragment of first-order logic, but are exponentially less succinct. The complexity ranges from (undecidable and) recursively enumerable to highly undecidable, where the recursively enumerable logics are obtained by considering substructures of structures induced by topological spaces. As our undecidability results also capture logics based on the real line, they improve upon undecidability results for interval temporal logics by Halpern and Shoham. We also analyze modal logics based on the five RCC5 relations, with similar results regarding the expressive power, but weaker results regarding the complexity.<|reference_end|>
arxiv
@article{lutz2006modal, title={Modal Logics of Topological Relations}, author={Carsten Lutz and Frank Wolter}, journal={Logical Methods in Computer Science, Volume 2, Issue 2 (June 22, 2006) lmcs:2253}, year={2006}, doi={10.2168/LMCS-2(2:5)2006}, archivePrefix={arXiv}, eprint={cs/0605064}, primaryClass={cs.LO cs.AI cs.CC} }
lutz2006modal
arxiv-674223
cs/0605065
On the possible Computational Power of the Human Mind
<|reference_start|>On the possible Computational Power of the Human Mind: The aim of this paper is to address the question: Can an artificial neural network (ANN) model be used as a possible characterization of the power of the human mind? We will discuss what might be the relationship between such a model and its natural counterpart. A possible characterization of the different power capabilities of the mind is suggested in terms of the information contained (in its computational complexity) or achievable by it. Such characterization takes advantage of recent results based on natural neural networks (NNN) and the computational power of arbitrary artificial neural networks (ANN). The possible acceptance of neural networks as the model of the human mind's operation makes the aforementioned quite relevant.<|reference_end|>
arxiv
@article{zenil2006on, title={On the possible Computational Power of the Human Mind}, author={Hector Zenil and Francisco Hernandez-Quiroz}, journal={arXiv preprint arXiv:cs/0605065}, year={2006}, doi={10.1142/9789812707420_0020}, archivePrefix={arXiv}, eprint={cs/0605065}, primaryClass={cs.NE cs.AI cs.CC} }
zenil2006on
arxiv-674224
cs/0605066
A Chaotic Cipher Mmohocc and Its Randomness Evaluation
<|reference_start|>A Chaotic Cipher Mmohocc and Its Randomness Evaluation: After a brief introduction to a new chaotic stream cipher, Mmohocc, which utilizes the fundamental chaos characteristics of mixing, unpredictability, and sensitivity to initial conditions, we conducted statistical randomness tests on the keystreams generated by the cipher. Two of the most stringent batteries of randomness tests, namely the NIST Suite and the Diehard Suite, were performed. The results showed that the keystreams successfully passed all the statistical tests. We conclude that Mmohocc can generate high-quality pseudorandom numbers from a statistical point of view.<|reference_end|>
arxiv
@article{zhang2006a, title={A Chaotic Cipher Mmohocc and Its Randomness Evaluation}, author={Xiaowen Zhang and Ke Tang and Li Shu}, journal={arXiv preprint arXiv:cs/0605066}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605066}, primaryClass={cs.CR} }
zhang2006a
arxiv-674225
cs/0605067
Efficient Operation of Coded Packet Networks
<|reference_start|>Efficient Operation of Coded Packet Networks: A fundamental problem faced in the design of almost all packet networks is that of efficient operation--of reliably communicating given messages among nodes at minimum cost in resource usage. We present a solution to the efficient operation problem for coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. Such networks are in contrast to conventional, routed packet networks, where outgoing packets are restricted to being copies of received packets and where reliability is provided by the use of retransmissions. This thesis introduces four considerations to coded packet networks: 1. efficiency, 2. the lack of synchronization in packet networks, 3. the possibility of broadcast links, and 4. packet loss. We take these considerations and give a prescription for operation that is novel and general, yet simple, useful, and extensible.<|reference_end|>
arxiv
@article{lun2006efficient, title={Efficient Operation of Coded Packet Networks}, author={Desmond S. Lun}, journal={Ph.D. dissertation, Massachusetts Institute of Technology, June 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605067}, primaryClass={cs.IT cs.NI math.IT} }
lun2006efficient
arxiv-674226
cs/0605068
Low Complexity Algorithms for Linear Recurrences
<|reference_start|>Low Complexity Algorithms for Linear Recurrences: We consider two kinds of problems: the computation of polynomial and rational solutions of linear recurrences with coefficients that are polynomials with integer coefficients; indefinite and definite summation of sequences that are hypergeometric over the rational numbers. The algorithms for these tasks all involve as an intermediate quantity an integer $N$ (dispersion or root of an indicial polynomial) that is potentially exponential in the bit size of their input. Previous algorithms have a bit complexity that is at least quadratic in $N$. We revisit them and propose variants that exploit the structure of solutions and avoid expanding polynomials of degree $N$. We give two algorithms: a probabilistic one that detects the existence or absence of nonzero polynomial and rational solutions in $O(\sqrt{N}\log^{2}N)$ bit operations; a deterministic one that computes a compact representation of the solution in $O(N\log^{3}N)$ bit operations. Similar speed-ups are obtained in indefinite and definite hypergeometric summation. We describe the results of an implementation.<|reference_end|>
arxiv
@article{bostan2006low, title={Low Complexity Algorithms for Linear Recurrences}, author={Alin Bostan (INRIA Rocquencourt) and Fr\'ed\'eric Chyzak (INRIA Rocquencourt) and Bruno Salvy (INRIA Rocquencourt) and Thomas Cluzeau (INRIA Sophia Antipolis)}, journal={ISSAC'06, pages 31--38, ACM Press, 2006}, year={2006}, doi={10.1145/1145768.1145781}, archivePrefix={arXiv}, eprint={cs/0605068}, primaryClass={cs.SC} }
bostan2006low
arxiv-674227
cs/0605069
Parallel vs Sequential Belief Propagation Decoding of LDPC Codes over GF(q) and Markov Sources
<|reference_start|>Parallel vs Sequential Belief Propagation Decoding of LDPC Codes over GF(q) and Markov Sources: A sequential updating scheme (SUS) for belief propagation (BP) decoding of LDPC codes over Galois fields, $GF(q)$, and correlated Markov sources is proposed, and compared with the standard parallel updating scheme (PUS). A thorough experimental study of various transmission settings indicates that the convergence rate, in iterations, of the BP algorithm (and subsequently its complexity) for the SUS is about one half of that for the PUS, independent of the finite field size $q$. Moreover, this 1/2 factor appears regardless of the correlations of the source and the channel's noise model, while the error correction performance remains unchanged. These results may imply on the 'universality' of the one half convergence speed-up of SUS decoding.<|reference_end|>
arxiv
@article{yacov2006parallel, title={Parallel vs. Sequential Belief Propagation Decoding of LDPC Codes over GF(q) and Markov Sources}, author={Nadav Yacov and Hadar Efraim and Haggai Kfir and Ido Kanter and Ori Shental}, journal={arXiv preprint arXiv:cs/0605069}, year={2006}, doi={10.1016/j.physa.2006.12.009}, archivePrefix={arXiv}, eprint={cs/0605069}, primaryClass={cs.IT math.IT} }
yacov2006parallel
arxiv-674228
cs/0605070
Curve Shortening and the Rendezvous Problem for Mobile Autonomous Robots
<|reference_start|>Curve Shortening and the Rendezvous Problem for Mobile Autonomous Robots: If a smooth, closed, and embedded curve is deformed along its normal vector field at a rate proportional to its curvature, it shrinks to a circular point. This curve evolution is called Euclidean curve shortening and the result is known as the Gage-Hamilton-Grayson Theorem. Motivated by the rendezvous problem for mobile autonomous robots, we address the problem of creating a polygon shortening flow. A linear scheme is proposed that exhibits several analogues to Euclidean curve shortening: The polygon shrinks to an elliptical point, convex polygons remain convex, and the perimeter of the polygon is monotonically decreasing.<|reference_end|>
arxiv
@article{smith2006curve, title={Curve Shortening and the Rendezvous Problem for Mobile Autonomous Robots}, author={Stephen L. Smith and Mireille E. Broucke and Bruce A. Francis}, journal={arXiv preprint arXiv:cs/0605070}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605070}, primaryClass={cs.RO cs.MA} }
smith2006curve
arxiv-674229
cs/0605071
On the Capacity of Interference Channels with Degraded Message sets
<|reference_start|>On the Capacity of Interference Channels with Degraded Message sets: This paper is motivated by a sensor network on a correlated field where nearby sensors share information, and can thus assist rather than interfere with one another. A special class of two-user Gaussian interference channels (IFCs) is considered where one of the two transmitters knows both the messages to be conveyed to the two receivers (called the IFC with degraded message sets). Both achievability and converse arguments are provided for this scenario for a class of discrete memoryless channels with weak interference. For the case of the Gaussian weak interference channel with degraded message sets, optimality of Gaussian inputs is also shown, resulting in the capacity region of this channel.<|reference_end|>
arxiv
@article{wu2006on, title={On the Capacity of Interference Channels with Degraded Message sets}, author={Wei Wu and Sriram Vishwanath and Ari Arapostathis}, journal={arXiv preprint arXiv:cs/0605071}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605071}, primaryClass={cs.IT math.IT} }
wu2006on
arxiv-674230
cs/0605072
On the Capacity of Gaussian Weak Interference Channels with Degraded Message sets
<|reference_start|>On the Capacity of Gaussian Weak Interference Channels with Degraded Message sets: This paper is motivated by a sensor network on a correlated field where nearby sensors share information, and can thus assist rather than interfere with one another. We consider a special class of two-user Gaussian interference channels (IFCs) where one of the two transmitters knows both the messages to be conveyed to the two receivers. Both achievability and converse arguments are provided for a channel with Gaussian inputs and Gaussian noise when the interference is weaker than the direct link (a so called weak IFC). In general, this region serves as an outer bound on the capacity of weak IFCs with no shared knowledge between transmitters.<|reference_end|>
arxiv
@article{wu2006on, title={On the Capacity of Gaussian Weak Interference Channels with Degraded Message sets}, author={Wei Wu and Sriram Vishwanath and Ari Arapostathis}, journal={arXiv preprint arXiv:cs/0605072}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605072}, primaryClass={cs.IT math.IT} }
wu2006on
arxiv-674231
cs/0605073
Analytic Properties and Covariance Functions of a New Class of Generalized Gibbs Random Fields
<|reference_start|>Analytic Properties and Covariance Functions of a New Class of Generalized Gibbs Random Fields: Spartan Spatial Random Fields (SSRFs) are generalized Gibbs random fields, equipped with a coarse-graining kernel that acts as a low-pass filter for the fluctuations. SSRFs are defined by means of physically motivated spatial interactions and a small set of free parameters (interaction couplings). This paper focuses on the FGC-SSRF model, which is defined on the Euclidean space $\mathbb{R}^{d}$ by means of interactions proportional to the squares of the field realizations, as well as their gradient and curvature. The permissibility criteria of FGC-SSRFs are extended by considering the impact of a finite-bandwidth kernel. It is proved that the FGC-SSRFs are almost surely differentiable in the case of finite bandwidth. Asymptotic explicit expressions for the Spartan covariance function are derived for $d=1$ and $d=3$; both known and new covariance functions are obtained depending on the value of the FGC-SSRF shape parameter. Nonlinear dependence of the covariance integral scale on the FGC-SSRF characteristic length is established, and it is shown that the relation becomes linear asymptotically. The results presented in this paper are useful in random field parameter inference, as well as in spatial interpolation of irregularly-spaced samples.<|reference_end|>
arxiv
@article{hristopulos2006analytic, title={Analytic Properties and Covariance Functions of a New Class of Generalized Gibbs Random Fields}, author={Dionissios T. Hristopulos and Samuel Elogne}, journal={IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4667-4679, 2007}, year={2006}, doi={10.1109/TIT.2007.909163}, archivePrefix={arXiv}, eprint={cs/0605073}, primaryClass={cs.IT cs.CE math.IT} }
hristopulos2006analytic
arxiv-674232
cs/0605074
SAT Solving for Argument Filterings
<|reference_start|>SAT Solving for Argument Filterings: This paper introduces a propositional encoding for lexicographic path orders in connection with dependency pairs. This facilitates the application of SAT solvers for termination analysis of term rewrite systems based on the dependency pair method. We address two main inter-related issues and encode them as satisfiability problems of propositional formulas that can be efficiently handled by SAT solving: (1) the combined search for a lexicographic path order together with an \emph{argument filtering} to orient a set of inequalities; and (2) how the choice of the argument filtering influences the set of inequalities that have to be oriented. We have implemented our contributions in the termination prover AProVE. Extensive experiments show that by our encoding and the application of SAT solvers one obtains speedups in orders of magnitude as well as increased termination proving power.<|reference_end|>
arxiv
@article{codish2006sat, title={SAT Solving for Argument Filterings}, author={Michael Codish (1) and Peter Schneider-Kamp (2) and Vitaly Lagoon (3) and Ren\'e Thiemann (2) and J\"urgen Giesl (2) ((1) Department of Computer Science, Ben-Gurion University, Israel (2) LuFG Informatik 2, RWTH Aachen, Germany (3) Department of Computer Science and Software Engineering, University of Melbourne, Australia)}, journal={arXiv preprint arXiv:cs/0605074}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605074}, primaryClass={cs.LO} }
codish2006sat
arxiv-674233
cs/0605075
On the Capacity and Mutual Information of Memoryless Noncoherent Rayleigh-Fading Channels
<|reference_start|>On the Capacity and Mutual Information of Memoryless Noncoherent Rayleigh-Fading Channels: The memoryless noncoherent single-input single-output (SISO) Rayleigh-fading channel is considered. Closed-form expressions for the mutual information between the output and the input of this channel when the input magnitude distribution is discrete and restricted to having two mass points are derived, and it is subsequently shown how these expressions can be used to obtain closed-form expressions for the capacity of this channel for signal to noise ratio (SNR) values of up to approximately 0 dB, and a tight capacity lower bound for SNR values between 0 dB and 10 dB. The expressions for the channel capacity and its lower bound are given as functions of a parameter which can be obtained via numerical root-finding algorithms.<|reference_end|>
arxiv
@article{deryhove2006on, title={On the Capacity and Mutual Information of Memoryless Noncoherent Rayleigh-Fading Channels}, author={Sebastien de la Kethulle de Ryhove and Ninoslav Marina and Geir E. Oien}, journal={arXiv preprint arXiv:cs/0605075}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605075}, primaryClass={cs.IT math.IT} }
deryhove2006on
arxiv-674234
cs/0605076
Numeration-automatic sequences
<|reference_start|>Numeration-automatic sequences: We present a base class of automata that induce a numeration system, and we give an algorithm that yields the n-th word in the language of the automaton when the expansion of n in the induced numeration system is fed to the automaton. Furthermore, we give some algorithms for reverse reading of this expansion and a way to combine automata into other automata having the same properties.<|reference_end|>
arxiv
@article{laros2006numeration-automatic, title={Numeration-automatic sequences}, author={J. F. J. Laros}, journal={arXiv preprint arXiv:cs/0605076}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605076}, primaryClass={cs.CL cs.DM} }
laros2006numeration-automatic
arxiv-674235
cs/0605077
Universal Filtering via Hidden Markov Modeling
<|reference_start|>Universal Filtering via Hidden Markov Modeling: The problem of discrete universal filtering, in which the components of a discrete signal emitted by an unknown source and corrupted by a known DMC are to be causally estimated, is considered. A family of filters are derived, and are shown to be universally asymptotically optimal in the sense of achieving the optimum filtering performance when the clean signal is stationary, ergodic, and satisfies an additional mild positivity condition. Our schemes are comprised of approximating the noisy signal using a hidden Markov process (HMP) via maximum-likelihood (ML) estimation, followed by the use of the forward recursions for HMP state estimation. It is shown that as the data length increases, and as the number of states in the HMP approximation increases, our family of filters attain the performance of the optimal distribution-dependent filter.<|reference_end|>
arxiv
@article{moon2006universal, title={Universal Filtering via Hidden Markov Modeling}, author={Taesup Moon and Tsachy Weissman}, journal={arXiv preprint arXiv:cs/0605077}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605077}, primaryClass={cs.IT math.IT} }
moon2006universal
arxiv-674236
cs/0605078
The Complexity of Mean Flow Time Scheduling Problems with Release Times
<|reference_start|>The Complexity of Mean Flow Time Scheduling Problems with Release Times: We study the problem of preemptive scheduling n jobs with given release times on m identical parallel machines. The objective is to minimize the average flow time. We show that when all jobs have equal processing times then the problem can be solved in polynomial time using linear programming. Our algorithm can also be applied to the open-shop problem with release times and unit processing times. For the general case (when processing times are arbitrary), we show that the problem is unary NP-hard.<|reference_end|>
arxiv
@article{baptiste2006the, title={The Complexity of Mean Flow Time Scheduling Problems with Release Times}, author={Philippe Baptiste and Peter Brucker and Marek Chrobak and Christoph Durr and Svetlana A. Kravchenko and Francis Sourd}, journal={arXiv preprint arXiv:cs/0605078}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605078}, primaryClass={cs.DS} }
baptiste2006the
arxiv-674237
cs/0605079
On the Capacity of Fading MIMO Broadcast Channels with Imperfect Transmitter Side-Information
<|reference_start|>On the Capacity of Fading MIMO Broadcast Channels with Imperfect Transmitter Side-Information: A fading broadcast channel is considered where the transmitter employs two antennas and each of the two receivers employs a single receive antenna. It is demonstrated that even if the realization of the fading is precisely known to the receivers, the high signal-to-noise ratio (SNR) throughput is greatly reduced if, rather than knowing the fading realization \emph{precisely}, the transmitter only knows the fading realization \emph{approximately}. The results are general and are not limited to memoryless Gaussian fading.<|reference_end|>
arxiv
@article{lapidoth2006on, title={On the Capacity of Fading MIMO Broadcast Channels with Imperfect Transmitter Side-Information}, author={Amos Lapidoth and Shlomo Shamai and Michele Wigger}, journal={arXiv preprint arXiv:cs/0605079}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605079}, primaryClass={cs.IT math.IT} }
lapidoth2006on
arxiv-674238
cs/0605080
A Locating-First Approach for Scalable Overlay Multicast
<|reference_start|>A Locating-First Approach for Scalable Overlay Multicast: Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications.<|reference_end|>
arxiv
@article{kaafar2006a, title={A Locating-First Approach for Scalable Overlay Multicast}, author={Mohamed Ali Dali Kaafar (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes) and Thierry Turletti (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes) and Walid Dabbous (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes)}, journal={IEEE IWQoS 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605080}, primaryClass={cs.NI} }
kaafar2006a
arxiv-674239
cs/0605081
Caract\'eristiques arithm\'etiques des processeurs graphiques
<|reference_start|>Caract\'eristiques arithm\'etiques des processeurs graphiques: Graphics processing units (GPUs) are now powerful and flexible processors. The latest generations of GPUs contain programmable vertex-shader and pixel-shader units supporting floating-point operations on 8, 16, or 32 bits. The 32-bit floating-point representation corresponds to the single precision of the IEEE standard for floating-point arithmetic (IEEE-754). GPUs are well suited to applications with a high degree of data parallelism. However, they are still little used outside graphics computations (General Purpose computation on GPU -- GPGPU). One reason for this state of affairs is the poverty of the technical documentation provided by the manufacturers (ATI and Nvidia), particularly regarding the implementation of the various arithmetic operators embedded in the different processing units. Yet this information is essential for estimating and controlling rounding errors, or for applying reduction or compensation techniques in order to work in double, quadruple, or arbitrarily extended precision. In this article we propose a set of programs that uncover the main characteristics of GPUs with respect to floating-point arithmetic. We give the results obtained on two recent graphics cards: the Nvidia 7800GTX and the ATI RX1800XL.<|reference_end|>
arxiv
@article{daumas2006caracteristiques, title={Caract\'{e}ristiques arithm\'{e}tiques des processeurs graphiques}, author={Marc Daumas (LP2A, LIRMM) and Guillaume Da Gra\c{c}a (LP2A) and David Defour (LP2A)}, journal={arXiv preprint arXiv:cs/0605081}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605081}, primaryClass={cs.MS} }
daumas2006caracteristiques
arxiv-674240
cs/0605082
Efficient algorithm for computing the Euler-Poincar\'e characteristic of a semi-algebraic set defined by few quadratic inequalities
<|reference_start|>Efficient algorithm for computing the Euler-Poincar\'e characteristic of a semi-algebraic set defined by few quadratic inequalities: We present an algorithm which takes as input a closed semi-algebraic set, $S \subset \R^k$, defined by \[ P_1 \leq 0, ..., P_\ell \leq 0, P_i \in \R[X_1,...,X_k], \deg(P_i) \leq 2, \] and computes the Euler-Poincar\'e characteristic of $S$. The complexity of the algorithm is $k^{O(\ell)}$.<|reference_end|>
arxiv
@article{basu2006efficient, title={Efficient algorithm for computing the Euler-Poincar\'e characteristic of a semi-algebraic set defined by few quadratic inequalities}, author={Saugata Basu}, journal={arXiv preprint arXiv:cs/0605082}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605082}, primaryClass={cs.SC cs.CG} }
basu2006efficient
arxiv-674241
cs/0605083
Classical Authentication Aided Three-Stage Quantum Protocol
<|reference_start|>Classical Authentication Aided Three-Stage Quantum Protocol: This paper modifies Kak's three-stage protocol so that it can guarantee secure transmission of information. Although avoiding man-in-the-middle attack is our primary objective in the introduction of classical authentication inside the three-stage protocol, we also benefit from the inherent advantages of the chosen classical authentication protocol. We have tried to implement ideas like key distribution center, session key, time-stamp, and nonce, within the quantum cryptography protocol.<|reference_end|>
arxiv
@article{basuchowdhuri2006classical, title={Classical Authentication Aided Three-Stage Quantum Protocol}, author={Partha Basuchowdhuri}, journal={arXiv preprint arXiv:cs/0605083}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605083}, primaryClass={cs.CR} }
basuchowdhuri2006classical
arxiv-674242
cs/0605084
The Generalized Multiple Access Channel with Confidential Messages
<|reference_start|>The Generalized Multiple Access Channel with Confidential Messages: A discrete memoryless generalized multiple access channel (GMAC) with confidential messages is studied, where two users attempt to transmit common information to a destination and each user also has private (confidential) information intended for the destination. This channel generalizes the multiple access channel (MAC) in that the two users also receive channel outputs. It is assumed that each user views the other user as a wire-tapper, and wishes to keep its confidential information as secret as possible from the other user. The level of secrecy of the confidential information is measured by the equivocation rate. The performance measure of interest is the rate-equivocation tuple that includes the common rate, two private rates and two equivocation rates as components. The set that includes all achievable rate-equivocation tuples is referred to as the capacity-equivocation region. For the GMAC with one confidential message set, where only one user (user 1) has private (confidential) information for the destination, inner and outer bounds on the capacity-equivocation region are derived. The secrecy capacity region is established, which is the set of all achievable rates with user 2 being perfectly ignorant of confidential messages of user 1. Furthermore, the capacity-equivocation region and the secrecy capacity region are established for the degraded GMAC with one confidential message set. For the GMAC with two confidential message sets, where both users have confidential messages for the destination, inner bounds on the capacity-equivocation region and the secrecy capacity region are obtained.<|reference_end|>
arxiv
@article{liang2006the, title={The Generalized Multiple Access Channel with Confidential Messages}, author={Yingbin Liang and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0605084}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605084}, primaryClass={cs.IT math.IT} }
liang2006the
arxiv-674243
cs/0605085
A Scalable Algorithm for Minimal Unsatisfiable Core Extraction
<|reference_start|>A Scalable Algorithm for Minimal Unsatisfiable Core Extraction: We propose a new algorithm for minimal unsatisfiable core extraction, based on a deeper exploration of resolution-refutation properties. We provide experimental results on formal verification benchmarks confirming that our algorithm finds smaller cores than suboptimal algorithms; and that it runs faster than those algorithms that guarantee minimality of the core.<|reference_end|>
arxiv
@article{dershowitz2006a, title={A Scalable Algorithm for Minimal Unsatisfiable Core Extraction}, author={Nachum Dershowitz and Ziyad Hanna and Alexander Nadel}, journal={Proceedings of the 9th International Conference Theory and Applications of Satisfiability Testing (SAT 2006), Lecture Notes in Computer Science, volume 4121, Springer-Verlag, Berlin, pp. 36-41}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605085}, primaryClass={cs.LO} }
dershowitz2006a
arxiv-674244
cs/0605086
Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels
<|reference_start|>Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels: Assuming iterative decoding for binary erasure channels (BECs), a novel tree-based technique for upper bounding the bit error rates (BERs) of arbitrary, finite low-density parity-check (LDPC) codes is provided and the resulting bound can be evaluated for all operating erasure probabilities, including both the waterfall and the error floor regions. This upper bound can also be viewed as a narrowing search of stopping sets, which is an approach different from the stopping set enumeration used for lower bounding the error floor. When combined with optimal leaf-finding modules, this upper bound is guaranteed to be tight in terms of the asymptotic order. The Boolean framework proposed herein further admits a composite search for even tighter results. For comparison, a refinement of the algorithm is capable of exhausting all stopping sets of size <14 for irregular LDPC codes of length n=500, which requires approximately 1.67*10^25 trials if a brute force approach is taken. These experiments indicate that this upper bound can be used both as an analytical tool and as a deterministic worst-performance (error floor) guarantee, the latter of which is crucial to optimizing LDPC codes for extremely low BER applications, e.g., optical/satellite communications.<|reference_end|>
arxiv
@article{wang2006upper, title={Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels}, author={Chih-Chun Wang (1) and Sanjeev R. Kulkarni (2) and H. Vincent Poor (2) ((1) Purdue University, (2) Princeton University)}, journal={arXiv preprint arXiv:cs/0605086}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605086}, primaryClass={cs.IT math.IT} }
wang2006upper
arxiv-674245
cs/0605087
Error Exponents and Cutoff Rate for Noncoherent Rician Fading Channels
<|reference_start|>Error Exponents and Cutoff Rate for Noncoherent Rician Fading Channels: In this paper, random coding error exponents and cutoff rate are studied for noncoherent Rician fading channels, where neither the receiver nor the transmitter has channel side information. First, it is assumed that the input is subject only to an average power constraint. In this case, a lower bound to the random coding error exponent is considered and the optimal input achieving this lower bound is shown to have a discrete amplitude and uniform phase. If the input is subject to both average and peak power constraints, it is proven that the optimal input achieving the random coding error exponent has again a discrete nature. Finally, the cutoff rate is analyzed, and the optimality of the single-mass input amplitude distribution in the low-power regime is discussed.<|reference_end|>
arxiv
@article{gursoy2006error, title={Error Exponents and Cutoff Rate for Noncoherent Rician Fading Channels}, author={Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:cs/0605087}, year={2006}, doi={10.1109/ICC.2006.254944}, archivePrefix={arXiv}, eprint={cs/0605087}, primaryClass={cs.IT math.IT} }
gursoy2006error
arxiv-674246
cs/0605088
TARMAC: Traffic-Analysis Resilient MAC Protocol for Multi-Hop Wireless Networks
<|reference_start|>TARMAC: Traffic-Analysis Resilient MAC Protocol for Multi-Hop Wireless Networks: Traffic analysis in Multi-hop Wireless Networks can expose the structure of the network allowing attackers to focus their efforts on critical nodes. For example, jamming the only data sink in a sensor network can cripple the network. We propose a new communication protocol that is part of the MAC layer, but resides conceptually between the routing layer and MAC, that is resilient to traffic analysis. Each node broadcasts the data that it has to transmit according to a fixed transmission schedule that is independent of the traffic being generated, making the network immune to time correlation analysis. The transmission pattern is identical, with the exception of a possible time shift, at all nodes, removing spatial correlation of transmissions to network structure. Data for all neighbors resides in the same encrypted packet. Each neighbor then decides which subset of the data in a packet to forward onwards using a routing protocol whose details are orthogonal to the proposed scheme. We analyze the basic scheme, exploring the tradeoffs in terms of frequency of transmission and packet size. We also explore adaptive and time changing patterns and analyze their performance under a number of representative scenarios.<|reference_end|>
arxiv
@article{liu2006tarmac:, title={TARMAC: Traffic-Analysis Resilient MAC Protocol for Multi-Hop Wireless Networks}, author={Ke Liu and Adnan Majeed and Nael B. Abu-Ghazaleh}, journal={arXiv preprint arXiv:cs/0605088}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605088}, primaryClass={cs.NI cs.CR} }
liu2006tarmac:
arxiv-674247
cs/0605089
Aligned Virtual Coordinates for Greedy Routing in WSNs
<|reference_start|>Aligned Virtual Coordinates for Greedy Routing in WSNs: Geographic routing provides relatively good performance at a much lower overhead than conventional routing protocols such as AODV. However, the performance of these protocols is impacted by physical voids and localization errors. Accordingly, virtual coordinate systems (VCS) were proposed as an alternative approach that is resilient to localization errors and that naturally routes around physical voids. However, we show that VCS is vulnerable to different forms of the void problem and the performance of greedy routing on VCS is worse than that of geographic forwarding. We show that these anomalies are due to the integral nature of VCS, which causes quantization noise in the estimate of connectivity and node location. We propose an aligned virtual coordinate system (AVCS) on which the success of greedy routing can be significantly improved. With our approach, and for the first time, we show that greedy routing on VCS out-performs that on physical coordinate systems even in the absence of localization errors. We compare AVCS against some of the most popular geographical routing protocols both on physical coordinate systems and virtual coordinate systems and show that AVCS significantly improves performance over the best known solutions.<|reference_end|>
arxiv
@article{liu2006aligned, title={Aligned Virtual Coordinates for Greedy Routing in WSNs}, author={Ke Liu and Nael Abu-Ghazaleh}, journal={MASS 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605089}, primaryClass={cs.NI} }
liu2006aligned
arxiv-674248
cs/0605090
Mathematica: A System of Computer Programs
<|reference_start|>Mathematica: A System of Computer Programs: Starting from the basic level of Mathematica, here we illustrate how to use a Mathematica notebook and write a program in the notebook. Next, we investigate in detail the way of linking external programs with Mathematica, the so-called mathlink operation. Using this technique we can run very tedious jobs quite efficiently, and the operations become extremely fast. Sometimes it is quite desirable to run jobs, which can take a considerable amount of time to finish, in the background of a computer; this allows us to work on other tasks while keeping the jobs running. The way of running jobs written in a Mathematica notebook in the background is quite different from the conventional methods, i.e., the techniques for programs written in other languages like C, C++, F77, F90, F95, etc. To illustrate it, in the present article we study how to create a Mathematica batch-file from a Mathematica notebook and run it in the background. Finally, we explore the most significant issue of this article. Here we describe the basic ideas for parallelizing a Mathematica program by sharing its independent parts among all other remote computers available in the network. By parallelizing, we can perform large computational operations within a very short period of time, and therefore the efficiency of the numerical work can be achieved. Parallel computation supports any version of Mathematica, and it also works significantly well even if different versions of Mathematica are installed on different computers. All the operations studied in this article run under any supported operating system like Unix, Windows, Macintosh, etc. For the sake of our illustrations, here we concentrate all the discussions only on the Unix-based operating system.<|reference_end|>
arxiv
@article{maiti2006mathematica:, title={Mathematica: A System of Computer Programs}, author={Santanu K. Maiti}, journal={arXiv preprint arXiv:cs/0605090}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605090}, primaryClass={cs.MS cs.PL} }
maiti2006mathematica:
arxiv-674249
cs/0605091
Low-density constructions can achieve the Wyner-Ziv and Gelfand-Pinsker bounds
<|reference_start|>Low-density constructions can achieve the Wyner-Ziv and Gelfand-Pinsker bounds: We describe and analyze sparse graphical code constructions for the problems of source coding with decoder side information (the Wyner-Ziv problem), and channel coding with encoder side information (the Gelfand-Pinsker problem). Our approach relies on a combination of low-density parity check (LDPC) codes and low-density generator matrix (LDGM) codes, and produces sparse constructions that are simultaneously good as both source and channel codes. In particular, we prove that under maximum likelihood encoding/decoding, there exist low-density codes (i.e., with finite degrees) from our constructions that can saturate both the Wyner-Ziv and Gelfand-Pinsker bounds.<|reference_end|>
arxiv
@article{martinian2006low-density, title={Low-density constructions can achieve the Wyner-Ziv and Gelfand-Pinsker bounds}, author={Emin Martinian, Martin J. Wainwright}, journal={arXiv preprint arXiv:cs/0605091}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605091}, primaryClass={cs.IT math.IT} }
martinian2006low-density
arxiv-674250
cs/0605092
The Multiple Access Channel with Feedback and Correlated Sources
<|reference_start|>The Multiple Access Channel with Feedback and Correlated Sources: In this paper, we investigate communication strategies for the multiple access channel with feedback and correlated sources (MACFCS). The MACFCS models a wireless sensor network scenario in which sensors distributed throughout an arbitrary random field collect correlated measurements and transmit them to a common sink. We derive achievable rate regions for the three-node MACFCS. First, we study the strategy when source coding and channel coding are combined, which we term full decoding at sources. Second, we look at several strategies when source coding and channel coding are separated, which we term full decoding at destination. From numerical computations on Gaussian channels, we see that different strategies perform better under certain source correlations and channel setups.<|reference_end|>
arxiv
@article{ong2006the, title={The Multiple Access Channel with Feedback and Correlated Sources}, author={Lawrence Ong and Mehul Motani}, journal={Proceedings of the 2006 IEEE International Symposium on Information Theory (ISIT 2006), The Westin Seattle, Seattle, WA, pp. 2129-2133, Jul. 9-14 2006.}, year={2006}, doi={10.1109/ISIT.2006.261927}, archivePrefix={arXiv}, eprint={cs/0605092}, primaryClass={cs.IT math.IT} }
ong2006the
arxiv-674251
cs/0605093
The Capacity of the Single Source Multiple Relay Single Destination Mesh Network
<|reference_start|>The Capacity of the Single Source Multiple Relay Single Destination Mesh Network: In this paper, we derive the capacity of a special class of mesh networks. A mesh network is defined as a heterogeneous wireless network in which the transmission among power limited nodes is assisted by powerful relays, which use the same wireless medium. We find the capacity of the mesh network when there is one source, one destination, and multiple relays. We call this channel the single source multiple relay single destination (SSMRSD) mesh network. Our approach is as follows. We first look at an upper bound on the information theoretic capacity of these networks in the Gaussian setting. We then show that the bound is achievable asymptotically using the compress-forward strategy for the multiple relay channel. Theoretically, the results indicate the value of cooperation and the utility of carefully deployed relays in wireless ad-hoc and sensor networks. The capacity characterization quantifies how the relays can be used to either conserve node energy or to increase transmission rate.<|reference_end|>
arxiv
@article{ong2006the, title={The Capacity of the Single Source Multiple Relay Single Destination Mesh Network}, author={Lawrence Ong and Mehul Motani}, journal={Proceedings of the 2006 IEEE International Symposium on Information Theory (ISIT 2006), The Westin Seattle, Seattle, WA, pp. 1673-1677, Jul. 9-14 2006.}, year={2006}, doi={10.1109/ISIT.2006.261639}, archivePrefix={arXiv}, eprint={cs/0605093}, primaryClass={cs.IT math.IT} }
ong2006the
arxiv-674252
cs/0605094
Proof Search in Hajek's Basic Logic
<|reference_start|>Proof Search in Hajek's Basic Logic: We introduce a proof system for Hajek's logic BL based on a relational hypersequents framework. We prove that the rules of our logical calculus, called RHBL, are sound and invertible with respect to any valuation of BL into a suitable algebra, called omega[0,1]. Refining the notion of reduction tree that arises naturally from RHBL, we obtain a decision algorithm for BL provability whose running time upper bound is 2^O(n), where n is the number of connectives of the input formula. Moreover, if a formula is unprovable, we exploit the constructiveness of a polynomial-time algorithm for leaf validity to provide a procedure for building countermodels in omega[0,1]. Finally, since the size of the reduction tree branches is O(n^3), we can describe a polynomial-time verification algorithm for BL unprovability.<|reference_end|>
arxiv
@article{bova2006proof, title={Proof Search in Hajek's Basic Logic}, author={S. Bova (1) and F. Montagna (1) ((1) University of Siena, Italy)}, journal={arXiv preprint arXiv:cs/0605094}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605094}, primaryClass={cs.LO cs.CC} }
bova2006proof
arxiv-674253
cs/0605095
Single-Symbol-Decodable Differential Space-Time Modulation Based on QO-STBC
<|reference_start|>Single-Symbol-Decodable Differential Space-Time Modulation Based on QO-STBC: We present a novel differential space-time modulation (DSTM) scheme that is single-symbol decodable and can provide full transmit diversity. It is the first known single-symbol-decodable DSTM scheme not based on Orthogonal STBC (O-STBC), and it is constructed based on the recently proposed Minimum-Decoding-Complexity Quasi-Orthogonal Space-Time Block Code (MDC-QOSTBC). We derive the code design criteria and present a systematic methodology to find the solution sets. The proposed DSTM scheme can provide a higher code rate than DSTM schemes based on O-STBC. Its decoding complexity is also considerably lower than that of DSTM schemes based on Sp(2) and double-symbol-decodable QO-STBC, with negligible or slight trade-off in decoding error probability performance.<|reference_end|>
arxiv
@article{yuen2006single-symbol-decodable, title={Single-Symbol-Decodable Differential Space-Time Modulation Based on QO-STBC}, author={Chau Yuen, Yong Liang Guan, Tjeng Thiang Tjhung}, journal={arXiv preprint arXiv:cs/0605095}, year={2006}, doi={10.1109/TWC.2006.256950}, archivePrefix={arXiv}, eprint={cs/0605095}, primaryClass={cs.IT math.IT} }
yuen2006single-symbol-decodable
arxiv-674254
cs/0605096
Circle Formation of Weak Robots and Lyndon Words
<|reference_start|>Circle Formation of Weak Robots and Lyndon Words: A Lyndon word is a non-empty word strictly smaller in the lexicographic order than any of its suffixes, except itself and the empty word. In this paper, we show how Lyndon words can be used in the distributed control of a set of n weak mobile robots. By weak, we mean that the robots are anonymous, memoryless, without any common sense of direction, and unable to communicate in an other way than observation. An efficient and simple deterministic protocol to form a regular n-gon is presented and proven for n prime.<|reference_end|>
arxiv
@article{dieudonné2006circle, title={Circle Formation of Weak Robots and Lyndon Words}, author={Yoann Dieudonné (LaRIA), Franck Petit (LaRIA)}, journal={arXiv preprint arXiv:cs/0605096}, year={2006}, number={LaRIA-2006-05}, archivePrefix={arXiv}, eprint={cs/0605096}, primaryClass={cs.DC cs.RO} }
dieudonné2006circle
arxiv-674255
cs/0605097
A Generalized Two-Phase Analysis of Knowledge Flows in Security Protocols
<|reference_start|>A Generalized Two-Phase Analysis of Knowledge Flows in Security Protocols: We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria.<|reference_end|>
arxiv
@article{van dijk2006a, title={A Generalized Two-Phase Analysis of Knowledge Flows in Security Protocols}, author={Marten van Dijk, Emina Torlak, Blaise Gassend, and Srinivas Devadas}, journal={arXiv preprint arXiv:cs/0605097}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605097}, primaryClass={cs.CR} }
van dijk2006a
arxiv-674256
cs/0605098
Energy Efficiency in Multi-hop CDMA Networks: A Game Theoretic Analysis
<|reference_start|>Energy Efficiency in Multi-hop CDMA Networks: A Game Theoretic Analysis: A game-theoretic analysis is used to study the effects of receiver choice on the energy efficiency of multi-hop networks in which the nodes communicate using Direct-Sequence Code Division Multiple Access (DS-CDMA). A Nash equilibrium of the game in which the network nodes can choose their receivers as well as their transmit powers to maximize the total number of bits they transmit per unit of energy is derived. The energy efficiencies resulting from the use of different linear multiuser receivers in this context are compared, looking at both the non-cooperative game and the Pareto optimal solution. For analytical ease, particular attention is paid to asymptotically large networks. Significant gains in energy efficiency are observed when multiuser receivers, particularly the linear minimum mean-square error (MMSE) receiver, are used instead of conventional matched filter receivers.<|reference_end|>
arxiv
@article{betz2006energy, title={Energy Efficiency in Multi-hop CDMA Networks: A Game Theoretic Analysis}, author={Sharon Betz and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0605098}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605098}, primaryClass={cs.IT math.IT} }
betz2006energy
arxiv-674257
cs/0605099
Alphabetic Coding with Exponential Costs
<|reference_start|>Alphabetic Coding with Exponential Costs: An alphabetic binary tree formulation applies to problems in which an outcome needs to be determined via alphabetically ordered search prior to the termination of some window of opportunity. Rather than finding a decision tree minimizing $\sum_{i=1}^n w(i) l(i)$, this variant involves minimizing $\log_a \sum_{i=1}^n w(i) a^{l(i)}$ for a given $a \in (0,1)$. This note introduces a dynamic programming algorithm that finds the optimal solution in polynomial time and space, and shows that methods traditionally used to improve the speed of optimizations in related problems, such as the Hu-Tucker procedure, fail for this problem. This note thus also introduces two approximation algorithms which can find a suboptimal solution in linear time (for one) or $O(n \log n)$ time (for the other), with associated coding redundancy bounds.<|reference_end|>
arxiv
@article{baer2006alphabetic, title={Alphabetic Coding with Exponential Costs}, author={Michael B. Baer}, journal={arXiv preprint arXiv:cs/0605099}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605099}, primaryClass={cs.IT cs.DS math.IT} }
baer2006alphabetic
arxiv-674258
cs/0605100
Network Inference from Co-Occurrences
<|reference_start|>Network Inference from Co-Occurrences: The recovery of network structure from experimental data is a basic and fundamental problem. Unfortunately, experimental data often do not directly reveal structure due to inherent limitations such as imprecision in timing or other observation mechanisms. We consider the problem of inferring network structure in the form of a directed graph from co-occurrence observations. Each observation arises from a transmission made over the network and indicates which vertices carry the transmission without explicitly conveying their order in the path. Without order information, there are an exponential number of feasible graphs which agree with the observed data equally well. Yet, the basic physical principles underlying most networks strongly suggest that all feasible graphs are not equally likely. In particular, vertices that co-occur in many observations are probably closely connected. Previous approaches to this problem are based on ad hoc heuristics. We model the experimental observations as independent realizations of a random walk on the underlying graph, subjected to a random permutation which accounts for the lack of order information. Treating the permutations as missing data, we derive an exact expectation-maximization (EM) algorithm for estimating the random walk parameters. For long transmission paths the exact E-step may be computationally intractable, so we also describe an efficient Monte Carlo EM (MCEM) algorithm and derive conditions which ensure convergence of the MCEM algorithm with high probability. Simulations and experiments with Internet measurements demonstrate the promise of this approach.<|reference_end|>
arxiv
@article{rabbat2006network, title={Network Inference from Co-Occurrences}, author={Michael Rabbat, Mario Figueiredo, and Robert Nowak}, journal={arXiv preprint arXiv:cs/0605100}, year={2006}, doi={10.1109/TIT.2008.926315}, archivePrefix={arXiv}, eprint={cs/0605100}, primaryClass={cs.IT math.IT} }
rabbat2006network
arxiv-674259
cs/0605101
Modeling the Dynamics of Social Networks
<|reference_start|>Modeling the Dynamics of Social Networks: Modeling the human dynamics responsible for the formation and evolution of the so-called social networks - structures comprised of individuals or organizations that indicate the connectivities existing in a community - is a topic recently attracting significant research interest. It has been claimed that these dynamics are scale-free in many practically important cases, such as impersonal and personal communication, auctioning in a market, accessing sites on the WWW, etc., and that human response times thus conform to the power law. While a certain amount of progress has recently been achieved in predicting the general response rate of a human population, existing formal theories of human behavior can hardly be found satisfactory to accommodate and comprehensively explain the scaling observed in social networks. In the presented study, a novel system-theoretic modeling approach is proposed and successfully applied to determine important characteristics of a communication network and to analyze consumer behavior on the WWW.<|reference_end|>
arxiv
@article{kryssanov2006modeling, title={Modeling the Dynamics of Social Networks}, author={Victor V. Kryssanov, Frank J. Rinaldo, Evgeny L. Kuleshov, Hitoshi Ogawa}, journal={arXiv preprint arXiv:cs/0605101}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605101}, primaryClass={cs.CY cs.CE cs.CL cs.HC cs.NI physics.data-an} }
kryssanov2006modeling
arxiv-674260
cs/0605102
Restricted Strip Covering and the Sensor Cover Problem
<|reference_start|>Restricted Strip Covering and the Sensor Cover Problem: Given a set of objects with durations (jobs) that cover a base region, can we schedule the jobs to maximize the duration the original region remains covered? We call this problem the sensor cover problem. This problem arises in the context of covering a region with sensors. For example, suppose you wish to monitor activity along a fence by sensors placed at various fixed locations. Each sensor has a range and limited battery life. The problem is to schedule when to turn on the sensors so that the fence is fully monitored for as long as possible. This one dimensional problem involves intervals on the real line. Associating a duration to each yields a set of rectangles in space and time, each specified by a pair of fixed horizontal endpoints and a height. The objective is to assign a position to each rectangle to maximize the height at which the spanning interval is fully covered. We call this one dimensional problem restricted strip covering. If we replace the covering constraint by a packing constraint, the problem is identical to dynamic storage allocation, a scheduling problem that is a restricted case of the strip packing problem. We show that the restricted strip covering problem is NP-hard and present an O(log log n)-approximation algorithm. We present better approximations or exact algorithms for some special cases. For the uniform-duration case of restricted strip covering we give a polynomial-time, exact algorithm but prove that the uniform-duration case for higher-dimensional regions is NP-hard. Finally, we consider regions that are arbitrary sets, and we present an O(log n)-approximation algorithm.<|reference_end|>
arxiv
@article{buchsbaum2006restricted, title={Restricted Strip Covering and the Sensor Cover Problem}, author={Adam L. Buchsbaum, Alon Efrat, Shaili Jain, Suresh Venkatasubramanian and Ke Yi}, journal={arXiv preprint arXiv:cs/0605102}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605102}, primaryClass={cs.DS cs.CG} }
buchsbaum2006restricted
arxiv-674261
cs/0605103
A Better Alternative to Piecewise Linear Time Series Segmentation
<|reference_start|>A Better Alternative to Piecewise Linear Time Series Segmentation: Time series are difficult to monitor, summarize and predict. Segmentation organizes time series into few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval varies (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.<|reference_end|>
arxiv
@article{lemire2006a, title={A Better Alternative to Piecewise Linear Time Series Segmentation}, author={Daniel Lemire}, journal={arXiv preprint arXiv:cs/0605103}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605103}, primaryClass={cs.DB cs.CV} }
lemire2006a
arxiv-674262
cs/0605104
Parsing Transformative LR(1) Languages
<|reference_start|>Parsing Transformative LR(1) Languages: We consider, as a means of making programming languages more flexible and powerful, a parsing algorithm in which the parser may freely modify the grammar while parsing. We are particularly interested in a modification of the canonical LR(1) parsing algorithm in which, after the reduction of certain productions, we examine the source sentence seen so far to determine the grammar to use to continue parsing. A naive modification of the canonical LR(1) parsing algorithm along these lines cannot be guaranteed to halt; as a result, we develop a test which examines the grammar as it changes, stopping the parse if the grammar changes in a way that would invalidate earlier assumptions made by the parser. With this test in hand, we can develop our parsing algorithm and prove that it is correct. That being done, we turn to earlier, related work; the idea of programming languages which can be extended to include new syntactic constructs has existed almost as long as the idea of high-level programming languages. Early efforts to construct such a programming language were hampered by an immature theory of formal languages. More recent efforts to construct transformative languages relied either on an inefficient chain of source-to-source translators; or they have a defect, present in our naive parsing algorithm, in that they cannot be known to halt. The present algorithm does not have these undesirable properties, and as such, it should prove a useful foundation for a new kind of programming language.<|reference_end|>
arxiv
@article{hegerle2006parsing, title={Parsing Transformative LR(1) Languages}, author={Blake Hegerle}, journal={arXiv preprint arXiv:cs/0605104}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605104}, primaryClass={cs.PL} }
hegerle2006parsing
arxiv-674263
cs/0605105
An outer bound to the capacity region of the broadcast channel
<|reference_start|>An outer bound to the capacity region of the broadcast channel: An outer bound to the capacity region of the two-receiver discrete memoryless broadcast channel is given. The outer bound is tight for all cases where the capacity region is known. When specialized to the case of no common information, this outer bound is contained in the Korner-Marton outer bound. This containment is shown to be strict for the binary skew-symmetric broadcast channel. Thus, this outer bound is in general tighter than all other known outer bounds on the discrete memoryless broadcast channel.<|reference_end|>
arxiv
@article{nair2006an, title={An outer bound to the capacity region of the broadcast channel}, author={Chandra Nair, Abbas El Gamal}, journal={arXiv preprint arXiv:cs/0605105}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605105}, primaryClass={cs.IT math.IT} }
nair2006an
arxiv-674264
cs/0605106
Supervisory Control of Fuzzy Discrete Event Systems: A Formal Approach
<|reference_start|>Supervisory Control of Fuzzy Discrete Event Systems: A Formal Approach: Fuzzy discrete event systems (DESs) were proposed recently by Lin and Ying [19], which may better cope with the real-world problems with fuzziness, impreciseness, and subjectivity such as those in biomedicine. As a continuation of [19], in this paper we further develop fuzzy DESs by dealing with supervisory control of fuzzy DESs. More specifically, (i) we reformulate the parallel composition of crisp DESs, and then define the parallel composition of fuzzy DESs that is equivalent to that in [19]; max-product and max-min automata for modeling fuzzy DESs are considered; (ii) we deal with a number of fundamental problems regarding supervisory control of fuzzy DESs, particularly demonstrate the controllability theorem and nonblocking controllability theorem of fuzzy DESs, and thus present the conditions for the existence of supervisors in fuzzy DESs; (iii) we analyze the complexity of presenting a uniform criterion to test the fuzzy controllability condition of fuzzy DESs modeled by max-product automata; in particular, we present in detail a general computing method for checking whether or not the fuzzy controllability condition holds, if max-min automata are used to model fuzzy DESs, and by means of this method we can search for all possible fuzzy states reachable from the initial fuzzy state in max-min automata; also, we introduce the fuzzy $n$-controllability condition for some practical problems; (iv) a number of examples serving to illustrate the applications of the derived results and methods are described; some basic properties related to supervisory control of fuzzy DESs are investigated. To conclude, some related issues are raised for further consideration.<|reference_end|>
arxiv
@article{qiu2006supervisory, title={Supervisory Control of Fuzzy Discrete Event Systems: A Formal Approach}, author={Daowen Qiu}, journal={IEEE Trans. SMC-Part B, VOL.35, NO. 1, February 2005, pp. 72-88}, year={2006}, doi={10.1109/TSMCB.2004.840457}, archivePrefix={arXiv}, eprint={cs/0605106}, primaryClass={cs.LO cs.AI} }
qiu2006supervisory
arxiv-674265
cs/0605107
Fuzzy Discrete Event Systems under Fuzzy Observability and a test-algorithm
<|reference_start|>Fuzzy Discrete Event Systems under Fuzzy Observability and a test-algorithm: In order to more effectively cope with the real-world problems of vagueness, impreciseness, and subjectivity, fuzzy discrete event systems (FDESs) were proposed recently. Notably, FDESs have been applied to biomedical control for HIV/AIDS treatment planning and sensory information processing for robotic control. Qiu, Cao, and Ying independently developed the supervisory control theory of FDESs. We note that the controllability of events in Qiu's work is fuzzy but the observability of events is crisp, and the observability of events in Cao and Ying's work is also crisp, although the controllability is not completely crisp since controllable events can be disabled to any degree. Motivated by the necessity to consider the situation in which events may be observed or controlled with some membership degrees, in this paper we establish the supervisory control theory of FDESs with partial observations, in which both the observability and controllability of events are fuzzy instead. We formalize the notions of the fuzzy controllability condition and fuzzy observability condition, and the Controllability and Observability Theorem of FDESs is set up in a more generic framework. In particular, we present a detailed computing flow to verify whether the controllability and observability conditions hold. Thus, this result can decide the existence of supervisors. Also, we use this computing method to check the existence of supervisors in the Controllability and Observability Theorem of classical discrete event systems (DESs), which is a new method different from the classical case. A number of examples are elaborated on to illustrate the presented results.<|reference_end|>
arxiv
@article{qiu2006fuzzy, title={Fuzzy Discrete Event Systems under Fuzzy Observability and a test-algorithm}, author={Daowen Qiu, Fuchun Liu}, journal={IEEE Transactions on Fuzzy Systems, 2009, 17 (3): 578-589.}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605107}, primaryClass={cs.LO} }
qiu2006fuzzy
arxiv-674266
cs/0605108
Diagnosability of Fuzzy Discrete Event Systems
<|reference_start|>Diagnosability of Fuzzy Discrete Event Systems: In order to more effectively cope with the real-world problems of vagueness, fuzzy discrete event systems (FDESs) were proposed recently, and the supervisory control theory of FDESs was developed. In view of the importance of failure diagnosis, in this paper, we present an approach to failure diagnosis in the framework of FDESs. More specifically: (1) We formalize the definition of diagnosability for FDESs, in which the observable set and failure set of events are fuzzy, that is, each event has a certain degree to which it is observable and unobservable, and each event may possess a different possibility of failure occurring. (2) Through the construction of observability-based diagnosers of FDESs, we investigate some of their basic properties. In particular, we present a necessary and sufficient condition for diagnosability of FDESs. (3) Some examples serving to illuminate the applications of the diagnosability of FDESs are described. To conclude, some related issues are raised for further consideration.<|reference_end|>
arxiv
@article{liu2006diagnosability, title={Diagnosability of Fuzzy Discrete Event Systems}, author={Fuchun Liu, Daowen Qiu, Hongyan Xing, and Zhujun Fan}, journal={arXiv preprint arXiv:cs/0605108}, year={2006}, doi={10.1007/978-3-540-74205-0_73}, archivePrefix={arXiv}, eprint={cs/0605108}, primaryClass={cs.AI} }
liu2006diagnosability
arxiv-674267
cs/0605109
Knowledge Flow Analysis for Security Protocols
<|reference_start|>Knowledge Flow Analysis for Security Protocols: Knowledge flow analysis offers a simple and flexible way to find flaws in security protocols. A protocol is described by a collection of rules constraining the propagation of knowledge amongst principals. Because this characterization corresponds closely to informal descriptions of protocols, it allows a succinct and natural formalization; because it abstracts away message ordering, and handles communications between principals and applications of cryptographic primitives uniformly, it is readily represented in a standard logic. A generic framework in the Alloy modelling language is presented, and instantiated for two standard protocols, and a new key management scheme.<|reference_end|>
arxiv
@article{torlak2006knowledge, title={Knowledge Flow Analysis for Security Protocols}, author={Emina Torlak, Marten van Dijk, Blaise Gassend, Daniel Jackson, and Srinivas Devadas}, journal={arXiv preprint arXiv:cs/0605109}, year={2006}, number={MIT-CSAIL-TR-2005-066}, archivePrefix={arXiv}, eprint={cs/0605109}, primaryClass={cs.CR cs.SE} }
torlak2006knowledge
arxiv-674268
cs/0605110
Mapping the Bid Behavior of Conference Referees
<|reference_start|>Mapping the Bid Behavior of Conference Referees: The peer-review process, in its present form, has been repeatedly criticized. Of the many critiques, ranging from publication delays to referee bias, this paper will focus specifically on the issue of how submitted manuscripts are distributed to qualified referees. Unqualified referees, without the proper knowledge of a manuscript's domain, may reject a perfectly valid study or, potentially more damaging, unknowingly accept a faulty or fraudulent result. In this paper, referee competence is analyzed with respect to referee bid data collected from the 2005 Joint Conference on Digital Libraries (JCDL). The analysis of referee bid behavior validates the intuition that referees bid on conference submissions with regard to the subject domain of the submission. Unfortunately, this relationship is not strong and therefore suggests that there exist other factors beyond subject domain that may be influencing referees to bid for particular submissions.<|reference_end|>
arxiv
@article{rodriguez2006mapping, title={Mapping the Bid Behavior of Conference Referees}, author={Marko A. Rodriguez, Johan Bollen, Herbert Van de Sompel}, journal={Journal of Informetrics, volume 1, number 1, pp. 62-82, ISSN: 1751-1577, January 2007}, year={2006}, doi={10.1016/j.joi.2006.09.006}, number={LA-UR-06-0749}, archivePrefix={arXiv}, eprint={cs/0605110}, primaryClass={cs.DL cs.CY} }
rodriguez2006mapping
arxiv-674269
cs/0605111
A Metadata Registry from Vocabularies UP: The NSDL Registry Project
<|reference_start|>A Metadata Registry from Vocabularies UP: The NSDL Registry Project: The NSDL Metadata Registry is designed to provide humans and machines with the means to discover, create, access and manage metadata schemes, schemas, application profiles, crosswalks and concept mappings. This paper describes the general goals and architecture of the NSDL Metadata Registry as well as issues encountered during the first year of the project's implementation.<|reference_end|>
arxiv
@article{hillmann2006a, title={A Metadata Registry from Vocabularies UP: The NSDL Registry Project}, author={Diane I. Hillmann (1), Stuart A. Sutton (2), Jon Phipps (1), Ryan Laundry (2) ((1) Cornell University, (2) University of Washington)}, journal={arXiv preprint arXiv:cs/0605111}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605111}, primaryClass={cs.DL} }
hillmann2006a
arxiv-674270
cs/0605112
An Algorithm to Determine Peer-Reviewers
<|reference_start|>An Algorithm to Determine Peer-Reviewers: The peer-review process is the most widely accepted certification mechanism for officially accepting the written results of researchers within the scientific community. An essential component of peer-review is the identification of competent referees to review a submitted manuscript. This article presents an algorithm to automatically determine the most appropriate reviewers for a manuscript by way of a co-authorship network data structure and a relative-rank particle-swarm algorithm. This approach is novel in that it is not limited to a pre-selected set of referees, is computationally efficient, requires no human intervention, and, in some instances, can automatically identify conflict of interest situations. A useful application of this algorithm would be to open commentary peer-review systems because it provides a weighting for each referee with respect to their expertise in the domain of a manuscript. The algorithm is validated using referee bid data from the 2005 Joint Conference on Digital Libraries.<|reference_end|>
arxiv
@article{rodriguez2006an, title={An Algorithm to Determine Peer-Reviewers}, author={Marko A. Rodriguez, Johan Bollen}, journal={Conference on Information and Knowledge Management (CIKM), ACM, pages 319-328, (October 2008)}, year={2006}, doi={10.1145/1458082.1458127}, number={LA-UR-06-2261}, archivePrefix={arXiv}, eprint={cs/0605112}, primaryClass={cs.DL cs.AI cs.DS} }
rodriguez2006an
arxiv-674271
cs/0605113
An Architecture for the Aggregation and Analysis of Scholarly Usage Data
<|reference_start|>An Architecture for the Aggregation and Analysis of Scholarly Usage Data: Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service. This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20 month period.<|reference_end|>
arxiv
@article{bollen2006an, title={An Architecture for the Aggregation and Analysis of Scholarly Usage Data}, author={Johan Bollen and Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0605113}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605113}, primaryClass={cs.DL} }
bollen2006an
arxiv-674272
cs/0605114
Oblivious Transfer using Elliptic Curves
<|reference_start|>Oblivious Transfer using Elliptic Curves: This paper proposes an algorithm for oblivious transfer using elliptic curves. Also, we present its application to chosen one-out-of-two oblivious transfer.<|reference_end|>
arxiv
@article{parakh2006oblivious, title={Oblivious Transfer using Elliptic Curves}, author={Abhishek Parakh}, journal={Cryptologia, Volume 31, Issue 2 April 2007, pages 125 - 132}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605114}, primaryClass={cs.CR} }
parakh2006oblivious
arxiv-674273
cs/0605115
Key Distillation and the Secret-Bit Fraction
<|reference_start|>Key Distillation and the Secret-Bit Fraction: We consider distillation of secret bits from partially secret noisy correlations P_ABE, shared between two honest parties and an eavesdropper. The most studied distillation scenario consists of joint operations on a large number of copies of the distribution (P_ABE)^N, assisted with public communication. Here we consider distillation with only one copy of the distribution, and instead of rates, the 'quality' of the distilled secret bits is optimized, where the 'quality' is quantified by the secret-bit fraction of the result. The secret-bit fraction of a binary distribution is the proportion which constitutes a secret bit between Alice and Bob. With local operations and public communication the maximal extractable secret-bit fraction from a distribution P_ABE is found, and is denoted by Lambda[P_ABE]. This quantity is shown to be nonincreasing under local operations and public communication, and nondecreasing under eavesdropper's local operations: it is a secrecy monotone. It is shown that if Lambda[P_ABE]>1/2 then P_ABE is distillable, thus providing a sufficient condition for distillability. A simple expression for Lambda[P_ABE] is found when the eavesdropper is decoupled, and when the honest parties' information is binary and the local operations are reversible. Intriguingly, for general distributions the (optimal) operation requires local degradation of the data.<|reference_end|>
arxiv
@article{jones2006key, title={Key Distillation and the Secret-Bit Fraction}, author={Nick S. Jones, Lluis Masanes}, journal={IEEE Trans. Inf. Theory, Vol. 54, No. 2, pp 680-691 (2008)}, year={2006}, doi={10.1109/TIT.2007.913264}, archivePrefix={arXiv}, eprint={cs/0605115}, primaryClass={cs.CR cs.IT math.IT quant-ph} }
jones2006key
arxiv-674274
cs/0605116
Optimal Distortion-Power Tradeoffs in Gaussian Sensor Networks
<|reference_start|>Optimal Distortion-Power Tradeoffs in Gaussian Sensor Networks: We investigate the optimal performance of dense sensor networks by studying the joint source-channel coding problem. The overall goal of the sensor network is to take measurements from an underlying random process, code and transmit those measurement samples to a collector node in a cooperative multiple access channel with imperfect feedback, and reconstruct the entire random process at the collector node. We provide lower and upper bounds for the minimum achievable expected distortion when the underlying random process is Gaussian. In the case where the random process satisfies some general conditions, we evaluate the lower and upper bounds explicitly and show that they are of the same order for a wide range of sum power constraints. Thus, for these random processes, under these sum power constraints, we determine the achievability scheme that is order-optimal, and express the minimum achievable expected distortion as a function of the sum power constraint.<|reference_end|>
arxiv
@article{liu2006optimal, title={Optimal Distortion-Power Tradeoffs in Gaussian Sensor Networks}, author={Nan Liu and Sennur Ulukus}, journal={arXiv preprint arXiv:cs/0605116}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605116}, primaryClass={cs.IT math.IT} }
liu2006optimal
arxiv-674275
cs/0605117
A Lattice-Based MIMO Broadcast Precoder for Multi-Stream Transmission
<|reference_start|>A Lattice-Based MIMO Broadcast Precoder for Multi-Stream Transmission: Precoding with block diagonalization is an attractive scheme for approaching sum capacity in multiuser multiple input multiple output (MIMO) broadcast channels. This method requires either global channel state information at every receiver or an additional training phase, which demands additional system planning. In this paper we propose a lattice based multi-user precoder that uses block diagonalization combined with pre-equalization and perturbation for the multiuser MIMO broadcast channel. An achievable sum rate of the proposed scheme is derived and used to show that the proposed technique approaches the achievable sum rate of block diagonalization with water-filling but does not require the additional information at the receiver. Monte Carlo simulations with equal power allocation show that the proposed method provides better bit error rate and diversity performance than block diagonalization with a zero-forcing receiver. Additionally, the proposed method shows similar performance to the maximum likelihood receiver but with much lower receiver complexity.<|reference_end|>
arxiv
@article{shim2006a, title={A Lattice-Based MIMO Broadcast Precoder for Multi-Stream Transmission}, author={Seijoon Shim, Chan-Byoung Chae, and Robert W. Heath Jr}, journal={arXiv preprint arXiv:cs/0605117}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605117}, primaryClass={cs.IT math.IT} }
shim2006a
arxiv-674276
cs/0605118
Pseudocodeword weights for non-binary LDPC codes
<|reference_start|>Pseudocodeword weights for non-binary LDPC codes: Pseudocodewords of q-ary LDPC codes are examined and the weight of a pseudocodeword on the q-ary symmetric channel is defined. The weight definition of a pseudocodeword on the AWGN channel is also extended to two-dimensional q-ary modulation such as q-PAM and q-PSK. The tree-based lower bounds on the minimum pseudocodeword weight are shown to also hold for q-ary LDPC codes on these channels.<|reference_end|>
arxiv
@article{kelley2006pseudocodeword, title={Pseudocodeword weights for non-binary LDPC codes}, author={Christine A. Kelley, Deepak Sridhara, and Joachim Rosenthal}, journal={arXiv preprint arXiv:cs/0605118}, year={2006}, doi={10.1109/ISIT.2006.262072}, archivePrefix={arXiv}, eprint={cs/0605118}, primaryClass={cs.IT math.IT} }
kelley2006pseudocodeword
arxiv-674277
cs/0605119
An Internet-enabled technology to support Evolutionary Design
<|reference_start|>An Internet-enabled technology to support Evolutionary Design: This paper discusses the systematic use of product feedback information to support life-cycle design approaches and provides guidelines for developing a design at both the product and the system levels. Design activities are surveyed in the light of the product life cycle, and the design information flow is interpreted from a semiotic perspective. The natural evolution of a design is considered, the notion of design expectations is introduced, and the importance of evaluation of these expectations in dynamic environments is argued. Possible strategies for reconciliation of the expectations and environmental factors are described. An Internet-enabled technology is proposed to monitor product functionality, usage, and operational environment and supply the designer with relevant information. A pilot study of assessing design expectations of a refrigerator is outlined, and conclusions are drawn.<|reference_end|>
arxiv
@article{kryssanov2006an, title={An Internet-enabled technology to support Evolutionary Design}, author={V.V. Kryssanov, H. Tamaki, and K. Ueda}, journal={Journal of Engineering Manufacture. 2001, Vol.215, No.B5, 647-655}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605119}, primaryClass={cs.CE cs.AI cs.AR cs.MA cs.NI} }
kryssanov2006an
arxiv-674278
cs/0605120
Understanding Design Fundamentals: How Synthesis and Analysis Drive Creativity, Resulting in Emergence
<|reference_start|>Understanding Design Fundamentals: How Synthesis and Analysis Drive Creativity, Resulting in Emergence: This paper presents results of an ongoing interdisciplinary study to develop a computational theory of creativity for engineering design. Human design activities are surveyed, and popular computer-aided design methodologies are examined. It is argued that semiotics has the potential to merge and unite various design approaches into one fundamental theory that is naturally interpretable and so comprehensible in terms of computer use. Reviewing related work in philosophy, psychology, and cognitive science provides a general and encompassing vision of the creativity phenomenon. Basic notions of algebraic semiotics are given and explained in terms of design. This is to define a model of the design creative process, which is seen as a process of semiosis, where concepts and their attributes represented as signs organized into systems are evolved, blended, and analyzed, resulting in the development of new concepts. The model allows us to formally describe and investigate essential properties of the design process, namely its dynamics and non-determinism inherent in creative thinking. A stable pattern of creative thought - analogical and metaphorical reasoning - is specified to demonstrate the expressive power of the modeling approach; illustrative examples are given. The developed theory is applied to clarify the nature of emergence in design: it is shown that while emergent properties of a product may influence its creative value, emergence can simply be seen as a by-product of the creative process. Concluding remarks summarize the research, point to some unresolved issues, and outline directions for future work.<|reference_end|>
arxiv
@article{kryssanov2006understanding, title={Understanding Design Fundamentals: How Synthesis and Analysis Drive Creativity, Resulting in Emergence}, author={V.V. Kryssanov, H. Tamaki, and S. Kitamura}, journal={AI in Engineering. 2001, Vol.15/4, 329-342}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605120}, primaryClass={cs.AI cs.CE cs.HC} }
kryssanov2006understanding
arxiv-674279
cs/0605121
Communication of Social Agents and the Digital City - A Semiotic Perspective
<|reference_start|>Communication of Social Agents and the Digital City - A Semiotic Perspective: This paper investigates the concept of digital city. First, a functional analysis of a digital city is made in the light of the modern study of urbanism; similarities between the virtual and urban constructions are pointed out. Next, a semiotic perspective on the subject matter is elaborated, and a terminological basis is introduced to treat a digital city as a self-organizing meaning-producing system intended to support social or spatial navigation. An explicit definition of a digital city is formulated. Finally, the proposed approach is discussed, conclusions are given, and future work is outlined.<|reference_end|>
arxiv
@article{kryssanov2006communication, title={Communication of Social Agents and the Digital City - A Semiotic Perspective}, author={Victor V. Kryssanov, Masayuki Okabe, Koh Kakusho, and Michihiko Minoh}, journal={Lecture Notes in Computer Science. 2002, Vol. 2362, 56-70}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605121}, primaryClass={cs.AI cs.CL cs.CY cs.HC} }
kryssanov2006communication
arxiv-674280
cs/0605122
Modeling Hypermedia-Based Communication
<|reference_start|>Modeling Hypermedia-Based Communication: In this article, we explore two approaches to modeling hypermedia-based communication. It is argued that the classical conveyor-tube framework is not applicable to the case of computer- and Internet- mediated communication. We then present a simple but very general system-theoretic model of the communication process, propose its mathematical interpretation, and derive several formulas, which qualitatively and quantitatively accord with data obtained on-line. The devised theoretical results generalize and correct the Zipf-Mandelbrot law and can be used in information system design. At the paper's end, we give some conclusions and draw implications for future work.<|reference_end|>
arxiv
@article{kryssanov2006modeling, title={Modeling Hypermedia-Based Communication}, author={V.V. Kryssanov, K. Kakusho, E.L. Kuleshov, and M. Minoh}, journal={Information Sciences. 2005, Vol. 174/1-2, 37-53}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605122}, primaryClass={cs.HC cs.CY cs.IR cs.IT math.IT} }
kryssanov2006modeling
arxiv-674281
cs/0605123
Classification of Ordinal Data
<|reference_start|>Classification of Ordinal Data: Classification of ordinal data is one of the most important tasks of relation learning. In this thesis a novel framework for ordered classes is proposed. The technique reduces the problem of classifying ordered classes to the standard two-class problem. The introduced method is then mapped into support vector machines and neural networks. Compared with a well-known approach using pairwise objects as training samples, the new algorithm has a reduced complexity and training time. A second novel model, the unimodal model, is also introduced and a parametric version is mapped into neural networks. Several case studies are presented to assert the validity of the proposed models.<|reference_end|>
arxiv
@article{cardoso2006classification, title={Classification of Ordinal Data}, author={Jaime S. Cardoso}, journal={arXiv preprint arXiv:cs/0605123}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605123}, primaryClass={cs.AI} }
cardoso2006classification
arxiv-674282
cs/0605124
Semantics and Complexity of SPARQL
<|reference_start|>Semantics and Complexity of SPARQL: SPARQL is the W3C candidate recommendation query language for RDF. In this paper we address systematically the formal study of SPARQL, concentrating on its graph pattern facility. We consider for this study a fragment without literals and a simple version of filters which encompasses all the main issues yet is simple to formalize. We provide a compositional semantics, prove there are normal forms, prove complexity bounds (among other results, that the evaluation of SPARQL patterns is PSPACE-complete), compare our semantics to an alternative operational semantics, give simple and natural conditions under which both semantics coincide, and discuss optimization procedures.<|reference_end|>
arxiv
@article{perez2006semantics, title={Semantics and Complexity of SPARQL}, author={Jorge Perez, Marcelo Arenas and Claudio Gutierrez}, journal={arXiv preprint arXiv:cs/0605124}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605124}, primaryClass={cs.DB} }
perez2006semantics
arxiv-674283
cs/0605125
Combinational Logic Circuit Design with the Buchberger Algorithm
<|reference_start|>Combinational Logic Circuit Design with the Buchberger Algorithm: We detail a procedure for the computation of the polynomial form of an electronic combinational circuit from the design equations in a truth table. The method uses the Buchberger algorithm rather than current traditional methods based on search algorithms. We restrict the analysis to a single output, but the procedure can be generalized to multiple outputs. The procedure is illustrated with the design of a simple arithmetic and logic unit with two 3-bit operands and two control bits.<|reference_end|>
arxiv
@article{drolet2006combinational, title={Combinational Logic Circuit Design with the Buchberger Algorithm}, author={Germain Drolet (Department of Electrical & Computer Engineering, Royal Military College of Canada)}, journal={arXiv preprint arXiv:cs/0605125}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605125}, primaryClass={cs.AR} }
drolet2006combinational
arxiv-674284
cs/0605126
Power-aware scheduling for makespan and flow
<|reference_start|>Power-aware scheduling for makespan and flow: We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs et al. to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.<|reference_end|>
arxiv
@article{bunde2006power-aware, title={Power-aware scheduling for makespan and flow}, author={David P. Bunde}, journal={arXiv preprint arXiv:cs/0605126}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605126}, primaryClass={cs.DS} }
bunde2006power-aware
arxiv-674285
cs/0605127
Analyzing Large Collections of Electronic Text Using OLAP
<|reference_start|>Analyzing Large Collections of Electronic Text Using OLAP: Computer-assisted reading and analysis of text has various applications in the humanities and social sciences. The increasing size of many electronic text archives has the advantage of a more complete analysis but the disadvantage of taking longer to obtain results. On-Line Analytical Processing is a method used to store and quickly analyze multidimensional data. By storing text analysis information in an OLAP system, a user can obtain solutions to inquiries in a matter of seconds as opposed to minutes, hours, or even days. This analysis is user-driven allowing various users the freedom to pursue their own direction of research.<|reference_end|>
arxiv
@article{keith2006analyzing, title={Analyzing Large Collections of Electronic Text Using OLAP}, author={Steven Keith, Owen Kaser, Daniel Lemire}, journal={arXiv preprint arXiv:cs/0605127}, year={2006}, number={TR-05-001}, archivePrefix={arXiv}, eprint={cs/0605127}, primaryClass={cs.DB cs.DL} }
keith2006analyzing
arxiv-674286
cs/0605128
Logic Column 15: Coalgebras and Their Logics
<|reference_start|>Logic Column 15: Coalgebras and Their Logics: This article describes recent work on the topic of specifying properties of transition systems. By giving a suitably abstract description of transition systems as coalgebras, it is possible to derive logics for capturing properties of these transition systems in an elegant way.<|reference_end|>
arxiv
@article{kurz2006logic, title={Logic Column 15: Coalgebras and Their Logics}, author={Alexander Kurz}, journal={SIGACT News 37 (2), pp. 57-77, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605128}, primaryClass={cs.LO} }
kurz2006logic
arxiv-674287
cs/0605129
An Outer Bound for the Multi-Terminal Rate-Distortion Region
<|reference_start|>An Outer Bound for the Multi-Terminal Rate-Distortion Region: The multi-terminal rate-distortion problem has been studied extensively. Notably, among these, Tung and Housewright have provided the best known inner and outer bounds for the rate region under certain distortion constraints. In this paper, we first propose an outer bound for the rate region, and show that it is tighter than the outer bound of Tung and Housewright. Our outer bound involves some $n$-letter Markov chain constraints, which cause computational difficulties. We utilize a necessary condition for the Markov chain constraints to obtain another outer bound, which is represented in terms of some single-letter mutual information expressions evaluated over probability distributions that satisfy some single-letter conditions.<|reference_end|>
arxiv
@article{kang2006an, title={An Outer Bound for the Multi-Terminal Rate-Distortion Region}, author={W. Kang and S. Ulukus}, journal={arXiv preprint arXiv:cs/0605129}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605129}, primaryClass={cs.IT math.IT} }
kang2006an
arxiv-674288
cs/0605130
Error Exponents of Low-Density Parity-Check Codes on the Binary Erasure Channel
<|reference_start|>Error Exponents of Low-Density Parity-Check Codes on the Binary Erasure Channel: We introduce a thermodynamic (large deviation) formalism for computing error exponents in error-correcting codes. Within this framework, we apply the heuristic cavity method from statistical mechanics to derive the average and typical error exponents of low-density parity-check (LDPC) codes on the binary erasure channel (BEC) under maximum-likelihood decoding.<|reference_end|>
arxiv
@article{mora2006error, title={Error Exponents of Low-Density Parity-Check Codes on the Binary Erasure Channel}, author={Thierry Mora and Olivier Rivoire}, journal={Proceeding of the IEEE Information Theory Workshop, 2006 (ITW '06), Chengdu, pp. 81-85}, year={2006}, doi={10.1109/ITW2.2006.323761}, archivePrefix={arXiv}, eprint={cs/0605130}, primaryClass={cs.IT cond-mat.dis-nn math.IT} }
mora2006error
arxiv-674289
cs/0605131
Notes on Geometric Measure Theory Applications to Image Processing; De-noising, Segmentation, Pattern, Texture, Lines, Gestalt and Occlusion
<|reference_start|>Notes on Geometric Measure Theory Applications to Image Processing; De-noising, Segmentation, Pattern, Texture, Lines, Gestalt and Occlusion: Regularization functionals that lower level set boundary length, when used with L^1 fidelity functionals for signal de-noising on images, create artifacts. These are (i) rounding of corners, (ii) shrinking of radii, (iii) shrinking of cusps, and (iv) non-smoothing of staircasing. Regularity functionals based upon total curvature of level set boundaries do not create artifacts (i) and (ii). An adjusted fidelity term based on the flat norm on the current (a distributional graph) representing the density of curvature of level set boundaries can minimize (iii) by weighting the position of a cusp. A regularity term to eliminate staircasing can be based upon the mass of the current representing the graph of an image function or its second derivatives. Densities on the Grassmann bundle of the Grassmann bundle of the ambient space of the graph can be used to identify patterns, textures, occlusion and lines.<|reference_end|>
arxiv
@article{morgan2006notes, title={Notes on Geometric Measure Theory Applications to Image Processing; De-noising, Segmentation, Pattern, Texture, Lines, Gestalt and Occlusion}, author={Simon P Morgan}, journal={arXiv preprint arXiv:cs/0605131}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605131}, primaryClass={cs.CV} }
morgan2006notes
arxiv-674290
cs/0605132
Stable partitions in coalitional games
<|reference_start|>Stable partitions in coalitional games: We propose a notion of a stable partition in a coalitional game that is parametrized by the concept of a defection function. This function assigns to each partition of the grand coalition a set of different coalition arrangements for a group of defecting players. The alternatives are compared using their social welfare. We characterize the stability of a partition for a number of most natural defection functions and investigate whether and how so defined stable partitions can be reached from any initial partition by means of simple transformations. The approach is illustrated by analyzing an example in which a set of stores seeks an optimal transportation arrangement.<|reference_end|>
arxiv
@article{apt2006stable, title={Stable partitions in coalitional games}, author={Krzysztof R. Apt and Tadeusz Radzik}, journal={arXiv preprint arXiv:cs/0605132}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605132}, primaryClass={cs.GT cs.MA} }
apt2006stable
arxiv-674291
cs/0605133
Efficient Route Tracing from a Single Source
<|reference_start|>Efficient Route Tracing from a Single Source: Traceroute is a networking tool that allows one to discover the path that packets take from a source machine, through the network, to a destination machine. It is widely used as an engineering tool, and also as a scientific tool, such as for discovery of the network topology at the IP level. In prior work, authors on this technical report have shown how to improve the efficiency of route tracing from multiple cooperating monitors. However, it is not unusual for a route tracing monitor to operate in isolation. Somewhat different strategies are required for this case, and this report is the first systematic study of those requirements. Standard traceroute is inefficient when used repeatedly towards multiple destinations, as it repeatedly probes the same interfaces close to the source. Others have recognized this inefficiency and have proposed tracing backwards from the destinations and stopping probing upon encounter with a previously seen interface. One of this technical report's contributions is to quantify for the first time the efficiency of this approach. Another contribution is to describe the effect of non-responding destinations on this efficiency. Since a large portion of destination machines do not reply to probe packets, backwards probing from the destination is often infeasible. We propose an algorithm to tackle non-responding destinations, and we find that our algorithm can strongly decrease probing redundancy at the cost of a small reduction in node and link discovery.<|reference_end|>
arxiv
@article{friedman2006efficient, title={Efficient Route Tracing from a Single Source}, author={Benoit Donnet, Philippe Raoult, and Timur Friedman}, journal={arXiv preprint arXiv:cs/0605133}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605133}, primaryClass={cs.NI} }
friedman2006efficient
arxiv-674292
cs/0605134
DSR with Non-Optimal Route Suppression for MANETs
<|reference_start|>DSR with Non-Optimal Route Suppression for MANETs: This paper revisits the issue of route discovery in dynamic source routing (DSR) for mobile ad hoc networks (MANETs), and proposes a lightweight non-optimal route suppression technique based on a rarely noted but commonly occurring phenomenon in route discovery. The technique exploits the observed phenomenon to extract query state information that permits intermediate nodes to identify and suppress the initiation of route replies with non-optimal routes, even if the route query is received for the first time. A detailed evaluation shows that DSR with non-optimal route suppression yields significant improvements in both protocol efficiency and performance.<|reference_end|>
arxiv
@article{seet2006dsr, title={DSR with Non-Optimal Route Suppression for MANETs}, author={Boon-Chong Seet, Bu-Sung Lee, and Chiew-Tong Lau}, journal={arXiv preprint arXiv:cs/0605134}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605134}, primaryClass={cs.NI} }
seet2006dsr
arxiv-674293
cs/0605135
On the Role of Estimate-and-Forward with Time-Sharing in Cooperative Communications
<|reference_start|>On the Role of Estimate-and-Forward with Time-Sharing in Cooperative Communications: In this work we focus on the general relay channel. We investigate the application of estimate-and-forward (EAF) to different scenarios. Specifically, we consider assignments of the auxiliary random variables that always satisfy the feasibility constraints. We first consider the multiple relay channel and obtain an achievable rate without decoding at the relays. We demonstrate the benefits of this result via an explicit discrete memoryless multiple relay scenario where multi-relay EAF is superior to multi-relay decode-and-forward (DAF). We then consider the Gaussian relay channel with coded modulation, where we show that a three-level quantization outperforms the Gaussian quantization commonly used to evaluate the achievable rates in this scenario. Finally we consider the cooperative general broadcast scenario with a multi-step conference. We apply estimate-and-forward to obtain a general multi-step achievable rate region. We then give an explicit assignment of the auxiliary random variables, and use this result to obtain an explicit expression for the single common message broadcast scenario with a two-step conference.<|reference_end|>
arxiv
@article{dabora2006on, title={On the Role of Estimate-and-Forward with Time-Sharing in Cooperative Communications}, author={R. Dabora and S. D. Servetto (Cornell University)}, journal={arXiv preprint arXiv:cs/0605135}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605135}, primaryClass={cs.IT math.IT} }
dabora2006on
arxiv-674294
cs/0605136
Attaque algebrique de NTRU a l'aide des vecteurs de Witt
<|reference_start|>Attaque algebrique de NTRU a l'aide des vecteurs de Witt: We improve an algebraic attack on NTRU due to Silverman, Smart and Vercauteren; they considered the first 2 bits of a Witt vector attached to the search for the secret key; here the first 4 bits are considered, which provides additional equations of degrees 4 and 8.<|reference_end|>
arxiv
@article{bourgeois2006attaque, title={Attaque algebrique de NTRU a l'aide des vecteurs de Witt}, author={Gerald Bourgeois}, journal={arXiv preprint arXiv:cs/0605136}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605136}, primaryClass={cs.CR} }
bourgeois2006attaque
arxiv-674295
cs/0605137
Capacity Results for Block-Stationary Gaussian Fading Channels with a Peak Power Constraint
<|reference_start|>Capacity Results for Block-Stationary Gaussian Fading Channels with a Peak Power Constraint: We consider a peak-power-limited single-antenna block-stationary Gaussian fading channel where neither the transmitter nor the receiver knows the channel state information, but both know the channel statistics. This model subsumes most previously studied Gaussian fading models. We first compute the asymptotic channel capacity in the high SNR regime and show that the behavior of channel capacity depends critically on the channel model. For the special case where the fading process is symbol-by-symbol stationary, we also reveal a fundamental interplay between the codeword length, communication rate, and decoding error probability. Specifically, we show that the codeword length must scale with SNR in order to guarantee that the communication rate can grow logarithmically with SNR with bounded decoding error probability, and we find a necessary condition for the growth rate of the codeword length. We also derive an expression for the capacity per unit energy. Furthermore, we show that the capacity per unit energy is achievable using temporal ON-OFF signaling with optimally allocated ON symbols, where the optimal ON-symbol allocation scheme may depend on the peak power constraint.<|reference_end|>
arxiv
@article{chen2006capacity, title={Capacity Results for Block-Stationary Gaussian Fading Channels with a Peak Power Constraint}, author={Jun Chen and Venugopal V. Veeravalli}, journal={arXiv preprint arXiv:cs/0605137}, year={2006}, doi={10.1109/TIT.2007.909083}, archivePrefix={arXiv}, eprint={cs/0605137}, primaryClass={cs.IT math.IT} }
chen2006capacity
arxiv-674296
cs/0605138
The meaning of manufacturing know-how
<|reference_start|>The meaning of manufacturing know-how: This paper investigates the phenomenon of manufacturing know-how. First, the abstract notion of knowledge is discussed, and a terminological basis is introduced to treat know-how as a kind of knowledge. Next, a brief survey of recently reported works dealing with manufacturing know-how is presented, and an explicit definition of know-how is formulated. Finally, the problem of utilizing know-how with knowledge-based systems is analyzed, and some ideas useful for solving it are given.<|reference_end|>
arxiv
@article{kryssanov2006the, title={The meaning of manufacturing know-how}, author={V.V. Kryssanov and V.A. Abramov and Y. Fukuda and K. Konishi}, journal={In: G. Jacucci, G.J. Olling, K. Preiss, and M. Wozny (eds), The Globalization of Manufacturing in the Digital Communications Era of the 21st Century: Innovation, Agility and the Virtual Enterprise, pp. 375-387. 1998, Kluwer Academic Publishers}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605138}, primaryClass={cs.AI cs.CE} }
kryssanov2006the
arxiv-674297
cs/0605139
Construction and Count of Boolean Functions of an Odd Number of Variables with Maximum Algebraic Immunity
<|reference_start|>Construction and Count of Boolean Functions of an Odd Number of Variables with Maximum Algebraic Immunity: Algebraic immunity has been proposed as an important property of Boolean functions. To resist algebraic attack, a Boolean function should possess high algebraic immunity. It is well known now that the algebraic immunity of an $n$-variable Boolean function is upper bounded by $\left\lceil {\frac{n}{2}} \right\rceil $. In this paper, for an odd integer $n$, we present a construction method which can efficiently generate a Boolean function of $n$ variables with maximum algebraic immunity, and we also show that any such function can be generated by this method. Moreover, the number of such Boolean functions is greater than $2^{2^{n-1}}$.<|reference_end|>
arxiv
@article{li2006construction, title={Construction and Count of Boolean Functions of an Odd Number of Variables with Maximum Algebraic Immunity}, author={Na Li and Wen-Feng Qi}, journal={arXiv preprint arXiv:cs/0605139}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605139}, primaryClass={cs.CR} }
li2006construction
arxiv-674298
cs/0605140
Inapproximability of the Tutte polynomial
<|reference_start|>Inapproximability of the Tutte polynomial: The Tutte polynomial of a graph G is a two-variable polynomial T(G;x,y) that encodes many interesting properties of the graph. We study the complexity of the following problem, for rationals x and y: take as input a graph G, and output a value which is a good approximation to T(G;x,y). Jaeger, Vertigan and Welsh have completely mapped the complexity of exactly computing the Tutte polynomial. They have shown that this is #P-hard, except along the hyperbola (x-1)(y-1)=1 and at four special points. We are interested in determining for which points (x,y) there is a "fully polynomial randomised approximation scheme" (FPRAS) for T(G;x,y). Under the assumption RP is not equal to NP, we prove that there is no FPRAS at (x,y) if (x,y) is in one of the half-planes x<-1 or y<-1 (excluding the easy-to-compute cases mentioned above). Two exceptions to this result are the half-line x<-1, y=1 (which is still open) and the portion of the hyperbola (x-1)(y-1)=2 corresponding to y<-1 which we show to be equivalent in difficulty to approximately counting perfect matchings. We give further intractability results for (x,y) in the vicinity of the origin. A corollary of our results is that, under the assumption RP is not equal to NP, there is no FPRAS at the point (x,y)=(0,1-\lambda) when \lambda>2 is a positive integer. Thus there is no FPRAS for counting nowhere-zero \lambda flows for \lambda>2. This is an interesting consequence of our work since the corresponding decision problem is in P for example for \lambda=6.<|reference_end|>
arxiv
@article{goldberg2006inapproximability, title={Inapproximability of the Tutte polynomial}, author={Leslie Ann Goldberg and Mark Jerrum}, journal={Information and Computation 206(7), 908-929 (July 2008)}, year={2006}, doi={10.1016/j.ic.2008.04.003}, archivePrefix={arXiv}, eprint={cs/0605140}, primaryClass={cs.CC math.CO} }
goldberg2006inapproximability
arxiv-674299
cs/0605141
General Compact Labeling Schemes for Dynamic Trees
<|reference_start|>General Compact Labeling Schemes for Dynamic Trees: Let $F$ be a function on pairs of vertices. An {\em $F$- labeling scheme} is composed of a {\em marker} algorithm for labeling the vertices of a graph with short labels, coupled with a {\em decoder} algorithm allowing one to compute $F(u,v)$ of any two vertices $u$ and $v$ directly from their labels. As applications of labeling schemes concern mainly large and dynamically changing networks, it is of interest to study {\em distributed dynamic} labeling schemes. This paper investigates labeling schemes for dynamic trees, presenting a general method for constructing them. Our method is based on extending an existing {\em static} tree labeling scheme to the dynamic setting. This approach fits many natural functions on trees, such as the ancestry relation, routing (in both the adversary and the designer port models), and nearest common ancestor. Our resulting dynamic schemes incur overheads (over the static scheme) on the label size and on the communication complexity. Informally, for any function $k(n)$ and any static $F$-labeling scheme on trees, we present an $F$-labeling scheme on dynamic trees incurring multiplicative overhead factors (over the static scheme) of $O(\log_{k(n)} n)$ on the label size and $O(k(n)\log_{k(n)} n)$ on the amortized message complexity. In particular, by setting $k(n)=n^{\epsilon}$ for any $0<\epsilon<1$, we obtain dynamic labeling schemes with asymptotically optimal label sizes and sublinear amortized message complexity for all the above mentioned functions.<|reference_end|>
arxiv
@article{korman2006general, title={General Compact Labeling Schemes for Dynamic Trees}, author={Amos Korman}, journal={arXiv preprint arXiv:cs/0605141}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605141}, primaryClass={cs.DC} }
korman2006general
arxiv-674300
cs/0605142
Int\'egration de la synth\`ese m\'emoire dans l'outil de synth\`ese d'architecture GAUT Low Power
<|reference_start|>Int\'egration de la synth\`ese m\'emoire dans l'outil de synth\`ese d'architecture GAUT Low Power: The systems supporting signal and image applications process large amounts of data. This involves intensive use of the memory, which becomes the bottleneck of such systems. Memory limits performance and represents a significant proportion of total power consumption. In the development of the high-level synthesis tool called GAUT Low Power, we are interested in the synthesis of the memory unit. In this work, we integrate data storage and data transfer to constrain the high-level synthesis of the datapath's execution unit.<|reference_end|>
arxiv
@article{corre2006integration, title={Int\'{e}gration de la synth\`{e}se m\'{e}moire dans l'outil de synth\`{e}se d'architecture GAUT Low Power}, author={Gwenol\'{e} Corre (LESTER) and Nathalie Julien (LESTER) and Eric Senn (LESTER) and Eric Martin (LESTER)}, journal={JFAAA'02 (Journ\'{e}es Francophones Ad\'{e}quation Algorithme Architecture), Tunisie (2002)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605142}, primaryClass={cs.AR} }
corre2006integration