corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-674501 | cs/0607068 | Computation of the Weight Distribution of CRC Codes | <|reference_start|>Computation of the Weight Distribution of CRC Codes: In this article, we illustrate an algorithm for the computation of the weight distribution of CRC codes. The recursive structure of CRC codes will give us an iterative way to compute the weight distribution of their dual codes starting from just some ``representative'' words. Thanks to MacWilliams Theorem, the computation of the weight distribution of dual codes can be easily brought back to that of CRC codes. This algorithm is a good alternative to the standard algorithm that involves listing every word of the code.<|reference_end|> | arxiv | @article{manganiello2006computation,
title={Computation of the Weight Distribution of CRC Codes},
author={Felice Manganiello},
journal={arXiv preprint arXiv:cs/0607068},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607068},
primaryClass={cs.IT math.AC math.IT}
} | manganiello2006computation |
arxiv-674502 | cs/0607069 | The B-Exponential Map: A Generalization of the Logistic Map, and Its Applications In Generating Pseudo-random Numbers | <|reference_start|>The B-Exponential Map: A Generalization of the Logistic Map, and Its Applications In Generating Pseudo-random Numbers: A 1-dimensional generalization of the well known Logistic Map is proposed. The proposed family of maps is referred to as the B-Exponential Map. The dynamics of this map are analyzed and found to have interesting properties. In particular, the B-Exponential Map exhibits robust chaos for all real values of the parameter B >= e^(-4). We then propose a pseudo-random number generator based on the B-Exponential Map by chaotically hopping between different trajectories for different values of B. We call this BEACH (B-Exponential All-Chaotic Map Hopping) pseudo-random number generator. BEACH successfully passes stringent statistical randomness tests such as ENT, NIST and Diehard. An implementation of BEACH is also outlined.<|reference_end|> | arxiv | @article{shastry2006the,
title={The B-Exponential Map: A Generalization of the Logistic Map, and Its
Applications In Generating Pseudo-random Numbers},
author={Mahesh C Shastry, Nithin Nagaraj, Prabhakar G Vaidya},
journal={arXiv preprint arXiv:cs/0607069},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607069},
primaryClass={cs.CR nlin.CD}
} | shastry2006the |
arxiv-674503 | cs/0607070 | Citation as a Representation Process | <|reference_start|>Citation as a Representation Process: The presented work proposes a novel approach to model the citation rate. The paper begins with a brief introduction into informetrics studies and highlights drawbacks of the contemporary approaches to modeling the citation process as a product of social interactions. An alternative modeling framework based on results obtained in cognitive psychology is then introduced and applied in an experiment to investigate properties of the citation process, as they are revealed by a large collection of citation statistics. Major research findings are discussed, and a summary is given.<|reference_end|> | arxiv | @article{kryssanov2006citation,
title={Citation as a Representation Process},
author={V. V. Kryssanov, F. J. Rinaldo, H. Ogawa, E. Kuleshov},
journal={arXiv preprint arXiv:cs/0607070},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607070},
primaryClass={cs.DL cs.CY physics.data-an}
} | kryssanov2006citation |
arxiv-674504 | cs/0607071 | Islands for SAT | <|reference_start|>Islands for SAT: In this note we introduce the notion of islands for restricting local search. We show how we can construct islands for CNF SAT problems, and how much search space can be eliminated by restricting search to the island.<|reference_end|> | arxiv | @article{fang2006islands,
title={Islands for SAT},
author={H. Fang, Y. Kilani, J.H.M. Lee, and P.J. Stuckey},
journal={arXiv preprint arXiv:cs/0607071},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607071},
primaryClass={cs.AI}
} | fang2006islands |
arxiv-674505 | cs/0607072 | Effect of Interface Style in Peer Review Comments for UML Designs | <|reference_start|>Effect of Interface Style in Peer Review Comments for UML Designs: This paper presents our evaluation of using a Tablet-PC to provide peer-review comments in the first year Computer Science course. Our exploration consisted of an evaluation of how students write comments on other students' assignments using three different methods: pen and paper, a Tablet-PC, and a desktop computer. Our ultimate goal is to explore the effect that interface style (Tablet vs. Desktop) has on the quality and quantity of the comments provided.<|reference_end|> | arxiv | @article{turner2006effect,
title={Effect of Interface Style in Peer Review Comments for UML Designs},
author={Scott A. Turner, Manuel A. Perez-Quinones, Stephen H. Edwards},
journal={arXiv preprint arXiv:cs/0607072},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607072},
primaryClass={cs.HC}
} | turner2006effect |
arxiv-674506 | cs/0607073 | Counting good truth assignments of random k-SAT formulae | <|reference_start|>Counting good truth assignments of random k-SAT formulae: We present a deterministic approximation algorithm to compute logarithm of the number of `good' truth assignments for a random k-satisfiability (k-SAT) formula in polynomial time (by `good' we mean that violate a small fraction of clauses). The relative error is bounded above by an arbitrarily small constant epsilon with high probability as long as the clause density (ratio of clauses to variables) alpha<alpha_{u}(k) = 2k^{-1}\log k(1+o(1)). The algorithm is based on computation of marginal distribution via belief propagation and use of an interpolation procedure. This scheme substitutes the traditional one based on approximation of marginal probabilities via MCMC, in conjunction with self-reduction, which is not easy to extend to the present problem. We derive 2k^{-1}\log k (1+o(1)) as threshold for uniqueness of the Gibbs distribution on satisfying assignment of random infinite tree k-SAT formulae to establish our results, which is of interest in its own right.<|reference_end|> | arxiv | @article{montanari2006counting,
title={Counting good truth assignments of random k-SAT formulae},
author={Andrea Montanari, Devavrat Shah},
journal={arXiv preprint arXiv:cs/0607073},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607073},
primaryClass={cs.DM cond-mat.dis-nn}
} | montanari2006counting |
arxiv-674507 | cs/0607074 | On Construction of the (24,12,8) Golay Codes | <|reference_start|>On Construction of the (24,12,8) Golay Codes: Two product array codes are used to construct the (24, 12, 8) binary Golay code through the direct sum operation. This construction provides a systematic way to find proper (8, 4, 4) linear block component codes for generating the Golay code, and it generates and extends previously existing methods that use a similar construction framework. The code constructed is simple to decode.<|reference_end|> | arxiv | @article{peng2006on,
title={On Construction of the (24,12,8) Golay Codes},
author={Xiao-Hong Peng and Paddy Farrell},
journal={arXiv preprint arXiv:cs/0607074},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607074},
primaryClass={cs.IT math.IT}
} | peng2006on |
arxiv-674508 | cs/0607075 | On entropy for mixtures of discrete and continuous variables | <|reference_start|>On entropy for mixtures of discrete and continuous variables: Let $X$ be a discrete random variable with support $S$ and $f : S \to S^\prime$ be a bijection. Then it is well-known that the entropy of $X$ is the same as the entropy of $f(X)$. This entropy preservation property has been well-utilized to establish non-trivial properties of discrete stochastic processes, e.g. queuing process \cite{prg03}. Entropy as well as entropy preservation is well-defined only in the context of purely discrete or continuous random variables. However for a mixture of discrete and continuous random variables, which arise in many interesting situations, the notions of entropy and entropy preservation have not been well understood. In this paper, we extend the notion of entropy in a natural manner for a mixed-pair random variable, a pair of random variables with one discrete and the other continuous. Our extensions are consistent with the existing definitions of entropy in the sense that there exist natural injections from discrete or continuous random variables into mixed-pair random variables such that their entropy remains the same. This extension of entropy allows us to obtain sufficient conditions for entropy preservation in mixtures of discrete and continuous random variables under bijections. The extended definition of entropy leads to an entropy rate for continuous time Markov chains. As an application, we recover a known probabilistic result related to Poisson process. We strongly believe that the frame-work developed in this paper can be useful in establishing probabilistic properties of complex processes, such as load balancing systems, queuing network, caching algorithms.<|reference_end|> | arxiv | @article{nair2006on,
title={On entropy for mixtures of discrete and continuous variables},
author={Chandra Nair, Balaji Prabhakar, Devavrat Shah},
journal={arXiv preprint arXiv:cs/0607075},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607075},
primaryClass={cs.IT math.IT}
} | nair2006on |
arxiv-674509 | cs/0607076 | Capacity of Cooperative Fusion in the Presence of Byzantine Sensors | <|reference_start|>Capacity of Cooperative Fusion in the Presence of Byzantine Sensors: The problem of cooperative fusion in the presence of Byzantine sensors is considered. An information theoretic formulation is used to characterize the Shannon capacity of sensor fusion. It is shown that when less than half of the sensors are Byzantine, the effect of Byzantine attack can be entirely mitigated, and the fusion capacity is identical to that when all sensors are honest. But when at least half of the sensors are Byzantine, they can completely defeat the sensor fusion so that no information can be transmitted reliably. A capacity achieving transmit-then-verify strategy is proposed for the case that less than half of the sensors are Byzantine, and its error probability and coding rate is analyzed by using a Markov decision process modeling of the transmission protocol.<|reference_end|> | arxiv | @article{kosut2006capacity,
title={Capacity of Cooperative Fusion in the Presence of Byzantine Sensors},
author={Oliver Kosut and Lang Tong},
journal={arXiv preprint arXiv:cs/0607076},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607076},
primaryClass={cs.IT math.IT}
} | kosut2006capacity |
arxiv-674510 | cs/0607077 | Fault-Tolerant Real-Time Streaming with FEC thanks to Capillary Multi-Path Routing | <|reference_start|>Fault-Tolerant Real-Time Streaming with FEC thanks to Capillary Multi-Path Routing: Erasure resilient FEC codes in off-line packetized streaming rely on time diversity. This requires unrestricted buffering time at the receiver. In real-time streaming the playback buffering time must be very short. Path diversity is an orthogonal strategy. However, the large number of long paths increases the number of underlying links and consecutively the overall link failure rate. This may increase the overall requirement in redundant FEC packets for combating the link failures. We introduce the Redundancy Overall Requirement (ROR) metric, a routing coefficient specifying the total number of FEC packets required for compensation of all underlying link failures. We present a capillary routing algorithm for constructing layer by layer steadily diversifying multi-path routing patterns. By measuring the ROR coefficients of a dozen of routing layers on hundreds of network samples, we show that the number of required FEC packets decreases substantially when the path diversity is increased by the capillary routing construction algorithm.<|reference_end|> | arxiv | @article{gabrielyan2006fault-tolerant,
title={Fault-Tolerant Real-Time Streaming with FEC thanks to Capillary
Multi-Path Routing},
author={Emin Gabrielyan},
journal={Emin Gabrielyan, Fault-Tolerant Real-Time Streaming with FEC
thanks to Capillary Multi-Path Routing, International Conference on
Communications, Circuits and Systems - ICCCAS'06 - Guilin, China, 25-28 June
2006, Vol. 3, pp. 1497-1501},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607077},
primaryClass={cs.NI cs.MM}
} | gabrielyan2006fault-tolerant |
arxiv-674511 | cs/0607078 | Complex Lattice Reduction Algorithm for Low-Complexity MIMO Detection | <|reference_start|>Complex Lattice Reduction Algorithm for Low-Complexity MIMO Detection: Recently, lattice-reduction-aided detectors have been proposed for multiple-input multiple-output (MIMO) systems to give performance with full diversity like maximum likelihood receiver, and yet with complexity similar to linear receivers. However, these lattice-reduction-aided detectors are based on the traditional LLL reduction algorithm that was originally introduced for reducing real lattice bases, in spite of the fact that the channel matrices are inherently complex-valued. In this paper, we introduce the complex LLL algorithm for direct application to reduce the basis of a complex lattice which is naturally defined by a complex-valued channel matrix. We prove that complex LLL reduction-aided detection can also achieve full diversity. Our analysis reveals that the new complex LLL algorithm can achieve a reduction in complexity of nearly 50% over the traditional LLL algorithm, and this is confirmed by simulation. It is noteworthy that the complex LLL algorithm aforementioned has nearly the same bit-error-rate performance as the traditional LLL algorithm.<|reference_end|> | arxiv | @article{gan2006complex,
title={Complex Lattice Reduction Algorithm for Low-Complexity MIMO Detection},
author={Ying Hung Gan, Cong Ling and Wai Ho Mow},
journal={arXiv preprint arXiv:cs/0607078},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607078},
primaryClass={cs.DS cs.IT math.IT}
} | gan2006complex |
arxiv-674512 | cs/0607079 | Length-based cryptanalysis: The case of Thompson's Group | <|reference_start|>Length-based cryptanalysis: The case of Thompson's Group: The length-based approach is a heuristic for solving randomly generated equations in groups which possess a reasonably behaved length function. We describe several improvements of the previously suggested length-based algorithms, that make them applicable to Thompson's group with significant success rates. In particular, this shows that the Shpilrain-Ushakov public key cryptosystem based on Thompson's group is insecure, and suggests that no practical public key cryptosystem based on this group can be secure.<|reference_end|> | arxiv | @article{ruinskiy2006length-based,
title={Length-based cryptanalysis: The case of Thompson's Group},
author={Dima Ruinskiy, Adi Shamir, and Boaz Tsaban},
journal={Journal of Mathematical Cryptology 1 (2007), 359--372},
year={2006},
doi={10.1515/jmc.2007.018},
archivePrefix={arXiv},
eprint={cs/0607079},
primaryClass={cs.CR cs.CC math.GR}
} | ruinskiy2006length-based |
arxiv-674513 | cs/0607080 | New Model of Internet Topology Using k-shell Decomposition | <|reference_start|>New Model of Internet Topology Using k-shell Decomposition: We introduce and use k-shell decomposition to investigate the topology of the Internet at the AS level. Our analysis separates the Internet into three sub-components: (a) a nucleus which is a small (~100 nodes) very well connected globally distributed subgraph; (b) a fractal sub-component that is able to connect the bulk of the Internet without congesting the nucleus, with self similar properties and critical exponents; and (c) dendrite-like structures, usually isolated nodes that are connected to the rest of the network through the nucleus only. This unique decomposition is robust, and provides insight into the underlying structure of the Internet and its functional consequences. Our approach is general and useful also when studying other complex networks.<|reference_end|> | arxiv | @article{carmi2006new,
title={New Model of Internet Topology Using k-shell Decomposition},
author={Shai Carmi, Shlomo Havlin, Scott Kirkpatrick, Yuval Shavitt and Eran
Shir},
journal={PNAS 104, 11150-11154 (2007).},
year={2006},
doi={10.1073/pnas.0701175104},
archivePrefix={arXiv},
eprint={cs/0607080},
primaryClass={cs.NI cond-mat.dis-nn}
} | carmi2006new |
arxiv-674514 | cs/0607081 | Syst\`eme de repr\'esentation d'aide au besoin dans le domaine architectural | <|reference_start|>Syst\`eme de repr\'esentation d'aide au besoin dans le domaine architectural: The image is a very important mean of communication in the field of architectural who intervenes in the various phases of the design of a project. It can be regarded as a tool of decision-making aid. The study of our research aims at to see the contribution of the Economic Intelligence in the resolution of a decisional problem of the various partners (Architect, Contractor, Customer) in the architectural field, in order to make strategic decisions within the framework of the realization or design of an architectural work. The economic Intelligence allows the taking into account of the real needs for the user-decision makers, so that their waiting are considered at the first stage of a search for information and not in the final stage of the development of the tool in the evaluation of this last.<|reference_end|> | arxiv | @article{ango-obiang2006syst\`{e}me,
title={Syst\`{e}me de repr\'{e}sentation d'aide au besoin dans le domaine
architectural},
author={Marie-France Ango-Obiang (LORIA)},
journal={Dans CONFERE 2006, Conception et Innovation},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607081},
primaryClass={cs.OH cs.IR}
} | ango-obiang2006syst\`{e}me |
arxiv-674515 | cs/0607082 | Well quasi-orders and the shuffle closure of finite sets | <|reference_start|>Well quasi-orders and the shuffle closure of finite sets: Given a set I of word, the set of all words obtained by the shuffle of (copies of) words of I is naturally provided with a partial order. In [FS05], the authors have opened the problem of the characterization of the finite sets I such that the order is a well quasi-order . In this paper we give an answer in the case when I consists of a single word w.<|reference_end|> | arxiv | @article{d'alessandro2006well,
title={Well quasi-orders and the shuffle closure of finite sets},
author={Flavio D'Alessandro, Gw\'ena\"el Richomme (LaRIA), Stefano Varrichio},
journal={arXiv preprint arXiv:cs/0607082},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607082},
primaryClass={cs.DM}
} | d'alessandro2006well |
arxiv-674516 | cs/0607083 | Mathematical Modelling of the Thermal Accumulation in Hot Water Solar Systems | <|reference_start|>Mathematical Modelling of the Thermal Accumulation in Hot Water Solar Systems: Mathematical modelling and defining useful recommendations for construction and regimes of exploitation for hot water solar installation with thermal stratification is the main purpose of this work. A special experimental solar module for hot water was build and equipped with sufficient measure apparatus. The main concept of investigation is to optimise the stratified regime of thermal accumulation and constructive parameters of heat exchange equipment (heat serpentine in tank). Accumulation and heat exchange processes were investigated by theoretical end experimental means. Special mathematical model was composed to simulate the energy transfer in stratified tank. Computer program was developed to solve mathematical equations for thermal accumulation and energy exchange. Extensive numerical and experimental tests were carried out. A good correspondence between theoretical and experimental data was arrived. Keywords: Mathematical modelling, accumulation<|reference_end|> | arxiv | @article{shtrakov2006mathematical,
title={Mathematical Modelling of the Thermal Accumulation in Hot Water Solar
Systems},
author={Stanko Vl. Shtrakov, Anton Stoilov},
journal={arXiv preprint arXiv:cs/0607083},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607083},
primaryClass={cs.CE}
} | shtrakov2006mathematical |
arxiv-674517 | cs/0607084 | About Norms and Causes | <|reference_start|>About Norms and Causes: Knowing the norms of a domain is crucial, but there exist no repository of norms. We propose a method to extract them from texts: texts generally do not describe a norm, but rather how a state-of-affairs differs from it. Answers concerning the cause of the state-of-affairs described often reveal the implicit norm. We apply this idea to the domain of driving, and validate it by designing algorithms that identify, in a text, the "basic" norms to which it refers implicitly.<|reference_end|> | arxiv | @article{kayser2006about,
title={About Norms and Causes},
author={Daniel Kayser (LIPN), Farid Nouioua (LIPN)},
journal={The 17th FLAIRS'04 Conference (2004) 502-507},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607084},
primaryClass={cs.AI}
} | kayser2006about |
arxiv-674518 | cs/0607085 | Using Pseudo-Stochastic Rational Languages in Probabilistic Grammatical Inference | <|reference_start|>Using Pseudo-Stochastic Rational Languages in Probabilistic Grammatical Inference: In probabilistic grammatical inference, a usual goal is to infer a good approximation of an unknown distribution P called a stochastic language. The estimate of P stands in some class of probabilistic models such as probabilistic automata (PA). In this paper, we focus on probabilistic models based on multiplicity automata (MA). The stochastic languages generated by MA are called rational stochastic languages; they strictly include stochastic languages generated by PA; they also admit a very concise canonical representation. Despite the fact that this class is not recursively enumerable, it is efficiently identifiable in the limit by using the algorithm DEES, introduced by the authors in a previous paper. However, the identification is not proper and before the convergence of the algorithm, DEES can produce MA that do not define stochastic languages. Nevertheless, it is possible to use these MA to define stochastic languages. We show that they belong to a broader class of rational series, that we call pseudo-stochastic rational languages. The aim of this paper is twofold. First we provide a theoretical study of pseudo-stochastic rational languages, the languages output by DEES, showing for example that this class is decidable within polynomial time. Second, we have carried out a lot of experiments in order to compare DEES to classical inference algorithms such as ALERGIA and MDI. They show that DEES outperforms them in most cases.<|reference_end|> | arxiv | @article{habrard2006using,
title={Using Pseudo-Stochastic Rational Languages in Probabilistic Grammatical
Inference},
author={Amaury Habrard (LIF), Francois Denis (LIF), Yann Esposito (LIF)},
journal={8th International Colloquium on Grammatical Inference (ICGI'06),
Japan (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607085},
primaryClass={cs.LG}
} | habrard2006using |
arxiv-674519 | cs/0607086 | Representing Knowledge about Norms | <|reference_start|>Representing Knowledge about Norms: Norms are essential to extend inference: inferences based on norms are far richer than those based on logical implications. In the recent decades, much effort has been devoted to reason on a domain, once its norms are represented. How to extract and express those norms has received far less attention. Extraction is difficult: as the readers are supposed to know them, the norms of a domain are seldom made explicit. For one thing, extracting norms requires a language to represent them, and this is the topic of this paper. We apply this language to represent norms in the domain of driving, and show that it is adequate to reason on the causes of accidents, as described by car-crash reports.<|reference_end|> | arxiv | @article{kayser2006representing,
title={Representing Knowledge about Norms},
author={Daniel Kayser (LIPN), Farid Nouioua (LIPN)},
journal={The 16th European Conference on Artificial Intelligence (ECAI'04)
(2004) 363-367},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607086},
primaryClass={cs.AI}
} | kayser2006representing |
arxiv-674520 | cs/0607087 | Un filtre temporel cr\'edibiliste pour la reconnaissance d'actions humaines dans les vid\'eos | <|reference_start|>Un filtre temporel cr\'edibiliste pour la reconnaissance d'actions humaines dans les vid\'eos: In the context of human action recognition in video sequences, a temporal belief filter is presented. It allows to cope with human action disparity and low quality videos. The whole system of action recognition is based on the Transferable Belief Model (TBM) proposed by P. Smets. The TBM allows to explicitly model the doubt between actions. Furthermore, the TBM emphasizes the conflict which is exploited for action recognition. The filtering performance is assessed on real video sequences acquired by a moving camera and under several unknown view angles.<|reference_end|> | arxiv | @article{ramasso2006un,
title={Un filtre temporel cr\'edibiliste pour la reconnaissance d'actions
humaines dans les vid\'eos},
author={Emmanuel Ramasso (LIS), Mich\`ele Rombaut (LIS), Denis Pellerin (LIS)},
journal={arXiv preprint arXiv:cs/0607087},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607087},
primaryClass={cs.MM}
} | ramasso2006un |
arxiv-674521 | cs/0607088 | Using Answer Set Programming in an Inference-Based approach to Natural Language Semantics | <|reference_start|>Using Answer Set Programming in an Inference-Based approach to Natural Language Semantics: Using Answer Set Programming in an Inference-Based approach to Natural Language Semantics<|reference_end|> | arxiv | @article{nouioua2006using,
title={Using Answer Set Programming in an Inference-Based approach to Natural
Language Semantics},
author={Farid Nouioua (LIPN), Pascal Nicolas (LERIA)},
journal={Inference in Computational Semantics ICoS-5, France (2006) 77-86},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607088},
primaryClass={cs.CL cs.AI}
} | nouioua2006using |
arxiv-674522 | cs/0607089 | Superregular Matrices and the Construction of Convolutional Codes having a Maximum Distance Profile | <|reference_start|>Superregular Matrices and the Construction of Convolutional Codes having a Maximum Distance Profile: Superregular matrices are a class of lower triangular Toeplitz matrices that arise in the context of constructing convolutional codes having a maximum distance profile. These matrices are characterized by the property that no submatrix has a zero determinant unless it is trivially zero due to the lower triangular structure. In this paper, we discuss how superregular matrices may be used to construct codes having a maximum distance profile. We also introduce group actions that preserve the superregularity property and present an upper bound on the minimum size a finite field must have in order that a superregular matrix of a given size can exist over that field.<|reference_end|> | arxiv | @article{hutchinson2006superregular,
title={Superregular Matrices and the Construction of Convolutional Codes having
a Maximum Distance Profile},
author={R. Hutchinson, R. Smarandache, J. Trumpf},
journal={arXiv preprint arXiv:cs/0607089},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607089},
primaryClass={cs.IT math.CO math.IT}
} | hutchinson2006superregular |
arxiv-674523 | cs/0607090 | Neural Networks with Complex and Quaternion Inputs | <|reference_start|>Neural Networks with Complex and Quaternion Inputs: This article investigates Kak neural networks, which can be instantaneously trained, for complex and quaternion inputs. The performance of the basic algorithm has been analyzed and shown how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations that find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. The problem of efficient mapping of data in quaternion neural networks is examined. Some problems that need to be addressed before quaternion neural networks find applications are identified.<|reference_end|> | arxiv | @article{rishiyur2006neural,
title={Neural Networks with Complex and Quaternion Inputs},
author={Adityan Rishiyur},
journal={arXiv preprint arXiv:cs/0607090},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607090},
primaryClass={cs.NE}
} | rishiyur2006neural |
arxiv-674524 | cs/0607091 | Finite element method for thermal analysis of concentrating solar receivers | <|reference_start|>Finite element method for thermal analysis of concentrating solar receivers: Application of finite element method and heat conductivity transfer model for calculation of temperature distribution in receiver for dish-Stirling concentrating solar system is described. The method yields discretized equations that are entirely local to the elements and provides complete geometric flexibility. A computer program solving the finite element method problem is created and great number of numerical experiments is carried out. Illustrative numerical results are given for an array of triangular elements in receiver for dish-Stirling system.<|reference_end|> | arxiv | @article{shtrakov2006finite,
title={Finite element method for thermal analysis of concentrating solar
receivers},
author={Stanko Shtrakov, Anton Stoilov},
journal={arXiv preprint arXiv:cs/0607091},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607091},
primaryClass={cs.CE}
} | shtrakov2006finite |
arxiv-674525 | cs/0607092 | Representing graphs as the intersection of axis-parallel cubes | <|reference_start|>Representing graphs as the intersection of axis-parallel cubes: A unit cube in $k$ dimensional space (or \emph{$k$-cube} in short) is defined as the Cartesian product $R_1\times R_2\times...\times R_k$ where $R_i$(for $1\leq i\leq k$) is a closed interval of the form $[a_i,a_i+1]$ on the real line. A $k$-cube representation of a graph $G$ is a mapping of the vertices of $G$ to $k$-cubes such that two vertices in $G$ are adjacent if and only if their corresponding $k$-cubes have a non-empty intersection. The \emph{cubicity} of $G$, denoted as $\cubi(G)$, is the minimum $k$ such that $G$ has a $k$-cube representation. Roberts \cite{Roberts} showed that for any graph $G$ on $n$ vertices, $\cubi(G)\leq 2n/3$. Many NP-complete graph problems have polynomial time deterministic algorithms or have good approximation ratios in graphs of low cubicity. In most of these algorithms, computing a low dimensional cube representation of the given graph is usually the first step. We present an efficient algorithm to compute the $k$-cube representation of $G$ with maximum degree $\Delta$ in $O(\Delta \ln b)$ dimensions where $b$ is the bandwidth of $G$. Bandwidth of $G$ is at most $n$ and can be much lower. The algorithm takes as input a bandwidth ordering of the vertices in $G$. Though computing the bandwidth ordering of vertices for a graph is NP-hard, there are heuristics that perform very well in practice. Even theoretically, there is an $O(\log^4 n)$ approximation algorithm for computing the bandwidth ordering of a graph using which our algorithm can produce a $k$-cube representation of any given graph in $k=O(\Delta(\ln b + \ln\ln n))$ dimensions. Both the bounds on cubicity are shown to be tight upto a factor of $O(\log\log n)$.<|reference_end|> | arxiv | @article{chandran2006representing,
title={Representing graphs as the intersection of axis-parallel cubes},
author={L. Sunil Chandran, Mathew C. Francis, Naveen Sivadasan},
journal={arXiv preprint arXiv:cs/0607092},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607092},
primaryClass={cs.DM}
} | chandran2006representing |
arxiv-674526 | cs/0607093 | An Elegant Argument that P is not NP | <|reference_start|>An Elegant Argument that P is not NP: In this note, we present an elegant argument that P is not NP by demonstrating that the Meet-in-the-Middle algorithm must have the fastest running-time of all deterministic and exact algorithms which solve the SUBSET-SUM problem on a classical computer.<|reference_end|> | arxiv | @article{feinstein2006an,
title={An Elegant Argument that P is not NP},
author={Craig Alan Feinstein},
journal={Progress in Physics Volume 2 (2011), 30-31},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607093},
primaryClass={cs.CC}
} | feinstein2006an |
arxiv-674527 | cs/0607094 | Upright-Quad Drawing of st-Planar Learning Spaces | <|reference_start|>Upright-Quad Drawing of st-Planar Learning Spaces: We consider graph drawing algorithms for learning spaces, a type of st-oriented partial cube derived from antimatroids and used to model states of knowledge of students. We show how to draw any st-planar learning space so all internal faces are convex quadrilaterals with the bottom side horizontal and the left side vertical, with one minimal and one maximal vertex. Conversely, every such drawing represents an st-planar learning space. We also describe connections between these graphs and arrangements of translates of a quadrant.<|reference_end|> | arxiv | @article{eppstein2006upright-quad,
title={Upright-Quad Drawing of st-Planar Learning Spaces},
author={David Eppstein},
journal={J. Graph Algorithms \& Applications 12(1):51-72, 2008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607094},
primaryClass={cs.CG}
} | eppstein2006upright-quad |
arxiv-674528 | cs/0607095 | Gallager's Exponent for MIMO Channels: A Reliability-Rate Tradeoff | <|reference_start|>In this paper, we derive Gallager's random coding error exponent for multiple-input multiple-output (MIMO) channels, assuming no channel-state information (CSI) at the transmitter and perfect CSI at the receiver. This measure gives insight into a fundamental tradeoff between the communication reliability and information rate of MIMO channels, enabling one to determine the required codeword length to achieve a prescribed error probability at a given rate below the channel capacity. We quantify the effects of the number of antennas, channel coherence time, and spatial fading correlation on the MIMO exponent. In addition, general formulae for the ergodic capacity and the cutoff rate in the presence of spatial correlation are deduced from the exponent expressions. These formulae are applicable to arbitrary structures of transmit and receive correlation, encompassing all the previously known results as special cases of our expressions.<|reference_end|> | arxiv | @article{shin2006gallager's,
title={Gallager's Exponent for MIMO Channels: A Reliability-Rate Tradeoff},
author={Hyundong Shin, Moe Z. Win},
journal={arXiv preprint arXiv:cs/0607095},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607095},
primaryClass={cs.IT math.IT}
} | shin2006gallager's |
arxiv-674529 | cs/0607096 | Logical settings for concept learning from incomplete examples in First Order Logic | <|reference_start|>We investigate here concept learning from incomplete examples. Our first purpose is to discuss to what extent logical learning settings have to be modified in order to cope with data incompleteness. More precisely we are interested in extending the learning from interpretations setting introduced by L. De Raedt that extends to relational representations the classical propositional (or attribute-value) concept learning from examples framework. We are inspired here by ideas presented by H. Hirsh in a work extending the Version space inductive paradigm to incomplete data. H. Hirsh proposes to slightly modify the notion of solution when dealing with incomplete examples: a solution has to be a hypothesis compatible with all pieces of information concerning the examples. We identify two main classes of incompleteness. First, uncertainty deals with our state of knowledge concerning an example. Second, generalization (or abstraction) deals with what part of the description of the example is sufficient for the learning purpose. These two main sources of incompleteness can be mixed up when only part of the useful information is known. We discuss a general learning setting, referred to as "learning from possibilities" that formalizes these ideas, and then we present a more specific learning setting, referred to as "assumption-based learning" that copes with examples whose uncertainty can be reduced when considering contextual information outside of the proper description of the examples. Assumption-based learning is illustrated on a recent work concerning the prediction of a consensus secondary structure common to a set of RNA sequences.<|reference_end|> | arxiv | @article{bouthinon2006logical,
title={Logical settings for concept learning from incomplete examples in First
Order Logic},
author={Dominique Bouthinon (LIPN), Henry Soldano (LIPN), V\'eronique
(LRI)},
journal={arXiv preprint arXiv:cs/0607096},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607096},
primaryClass={cs.LG}
} | bouthinon2006logical |
arxiv-674530 | cs/0607097 | Dynamic Packet Aggregation to Solve Performance Anomaly in 802.11 Wireless Networks | <|reference_start|>In the widely used 802.11 standard, the so-called performance anomaly is a well-known issue. Several works have tried to solve this problem by introducing mechanisms such as packet fragmentation, backoff adaptation, or packet aggregation during a fixed time interval. In this paper, we propose a novel approach solving the performance anomaly problem by packet aggregation using a dynamic time interval, which depends on the busy time of the wireless medium. Our solution differs from other propositions in the literature because of this dynamic time interval, which allows increasing fairness, reactivity, and in some cases efficiency. In this article, we emphasize the performance evaluation of our proposal.<|reference_end|> | arxiv | @article{razafindralambo2006dynamic,
title={Dynamic Packet Aggregation to Solve Performance Anomaly in 802.11
Wireless Networks},
author={Tahiry Razafindralambo (INRIA Rh\^one-Alpes), Isabelle
Gu\'erin-Lassous (INRIA Rh\^one-Alpes), Luigi Iannone (LIP6), Serge Fdida
(LIP6)},
journal={arXiv preprint arXiv:cs/0607097},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607097},
primaryClass={cs.NI}
} | razafindralambo2006dynamic |
arxiv-674531 | cs/0607098 | List decoding of noisy Reed-Muller-like codes | <|reference_start|>List decoding of noisy Reed-Muller-like codes: First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are two fundamental error-correcting codes which arise in communication as well as in probabilistically-checkable proofs and learning. In this paper, we take the first steps toward extending the quick randomized decoding tools of RM(1) into the realm of quadratic binary and, equivalently, Z_4 codes. Our main algorithmic result is an extension of the RM(1) techniques from Goldreich-Levin and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and RM(2). That is, given signal s of length N, we find a list that is a superset of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times the norm of s, in time polynomial in k and log(N). We also give a new and simple formulation of a known Kerdock code as a subcode of the Hankel code. As a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm for finding a sparse Kerdock approximation. That is, for k small compared with 1/sqrt{N} and for epsilon > 0, we find, in time polynomial in (k log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at most the factor (1+epsilon+O(k^2/sqrt{N})) times that of the best such approximation.<|reference_end|> | arxiv | @article{calderbank2006list,
title={List decoding of noisy Reed-Muller-like codes},
author={A. R. Calderbank, Anna C. Gilbert, and Martin J. Strauss},
journal={arXiv preprint arXiv:cs/0607098},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607098},
primaryClass={cs.DS cs.IT math.IT}
} | calderbank2006list |
arxiv-674532 | cs/0607099 | Degrees of Freedom Region for the MIMO X Channel | <|reference_start|>We provide achievability as well as converse results for the degrees of freedom region of a MIMO $X$ channel, i.e., a system with two transmitters, two receivers, each equipped with multiple antennas, where independent messages need to be conveyed over fixed channels from each transmitter to each receiver. With M=1 antennas at each node, we find that the total (sum rate) degrees of freedom are bounded above and below as $1 \leq\eta_X^\star \leq {4/3}$. If $M>1$ and channel matrices are non-degenerate then the precise degrees of freedom $\eta_X^\star = {4/3}M$. Simple zero forcing without dirty paper encoding or successive decoding suffices to achieve the ${4/3}M$ degrees of freedom. With equal number of antennas at all nodes, we explore the increase in degrees of freedom when some of the messages are made available to a transmitter or receiver in the manner of cognitive radio. With a cognitive transmitter we show that the number of degrees of freedom $\eta = {3/2}M$ (for $M>1$) on the MIMO $X$ channel. The same degrees of freedom are obtained on the MIMO $X$ channel with a cognitive receiver as well. In contrast to the $X$ channel result, we show that for the MIMO \emph{interference} channel, the degrees of freedom are not increased even if both the transmitter and the receiver of one user know the other user's message. However, the interference channel can achieve the full $2M$ degrees of freedom if \emph{each} user has either a cognitive transmitter or a cognitive receiver. Lastly, if the channels vary with time/frequency then the $X$ channel with single antennas $(M=1)$ at all nodes has exactly 4/3 degrees of freedom with no shared messages and exactly 3/2 degrees of freedom with a cognitive transmitter or a cognitive receiver.<|reference_end|> | arxiv | @article{jafar2006degrees,
title={Degrees of Freedom Region for the MIMO X Channel},
author={Syed A. Jafar, Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:cs/0607099},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607099},
primaryClass={cs.IT math.IT}
} | jafar2006degrees |
arxiv-674533 | cs/0607100 | New Upper Bounds on The Approximability of 3D Strip Packing | <|reference_start|>New Upper Bounds on The Approximability of 3D Strip Packing: In this paper, we study the 3D strip packing problem in which we are given a list of 3-dimensional boxes and required to pack all of them into a 3-dimensional strip with length 1 and width 1 and unlimited height to minimize the height used. Our results are below: i) we give an approximation algorithm with asymptotic worst-case ratio 1.69103, which improves the previous best bound of $2+\epsilon$ by Jansen and Solis-Oba of SODA 2006; ii) we also present an asymptotic PTAS for the case in which all items have {\em square} bases.<|reference_end|> | arxiv | @article{han2006new,
title={New Upper Bounds on The Approximability of 3D Strip Packing},
author={Xin Han, Kazuo Iwama, Guochuan Zhang},
journal={arXiv preprint arXiv:cs/0607100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607100},
primaryClass={cs.DS}
} | han2006new |
arxiv-674534 | cs/0607101 | Deriving Escape Analysis by Abstract Interpretation: Proofs of results | <|reference_start|>Escape analysis of object-oriented languages approximates the set of objects which do not escape from a given context. If we take a method as context, the non-escaping objects can be allocated on its activation stack; if we take a thread, Java synchronisation locks on such objects are not needed. In this paper, we formalise a basic escape domain e as an abstract interpretation of concrete states, which we then refine into an abstract domain er which is more concrete than e and, hence, leads to a more precise escape analysis than e. We provide optimality results for both e and er, in the form of Galois insertions from the concrete to the abstract domains and of optimal abstract operations. The Galois insertion property is obtained by restricting the abstract domains to those elements which do not contain garbage, by using an abstract garbage collector. Our implementation of er is hence an implementation of a formally correct escape analyser, able to detect the stack allocatable creation points of Java (bytecode) applications. This report contains the proofs of the results of a paper with the same title and authors, to be published in the journal "Higher-Order Symbolic Computation".<|reference_end|> | arxiv | @article{hill2006deriving,
title={Deriving Escape Analysis by Abstract Interpretation: Proofs of results},
author={Patricia M. Hill, Fausto Spoto},
journal={arXiv preprint arXiv:cs/0607101},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607101},
primaryClass={cs.PL}
} | hill2006deriving |
arxiv-674535 | cs/0607102 | Multiaccess Channels with State Known to Some Encoders and Independent Messages | <|reference_start|>We consider a state-dependent multiaccess channel (MAC) with state non-causally known to some encoders. We derive an inner bound for the capacity region in the general discrete memoryless case and specialize to a binary noiseless case. In the case of maximum entropy channel state, we obtain the capacity region for binary noiseless MAC with one informed encoder by deriving a non-trivial outer bound for this case. For a Gaussian state-dependent MAC with one encoder being informed of the channel state, we present an inner bound by applying a slightly generalized dirty paper coding (GDPC) at the informed encoder that allows for partial state cancellation, and a trivial outer bound by providing channel state to the decoder also. The uninformed encoders benefit from the state cancellation in terms of achievable rates; however, it appears that GDPC cannot completely eliminate the effect of the channel state on the achievable rate region, in contrast to the case of all encoders being informed. In the case of infinite state variance, we analyze how the uninformed encoder benefits from the informed encoder's actions using the inner bound and also provide a non-trivial outer bound for this case which is better than the trivial outer bound.<|reference_end|> | arxiv | @article{kotagiri2006multiaccess,
title={Multiaccess Channels with State Known to Some Encoders and Independent
Messages},
author={Shiva Prasad Kotagiri and J. Nicholas Laneman},
journal={arXiv preprint arXiv:cs/0607102},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607102},
primaryClass={cs.IT math.IT}
} | kotagiri2006multiaccess |
arxiv-674536 | cs/0607103 | Ideas by Statistical Mechanics (ISM) | <|reference_start|>Ideas by Statistical Mechanics (ISM): Ideas by Statistical Mechanics (ISM) is a generic program to model evolution and propagation of ideas/patterns throughout populations subjected to endogenous and exogenous interactions. The program is based on the author's work in Statistical Mechanics of Neocortical Interactions (SMNI), and uses the author's Adaptive Simulated Annealing (ASA) code for optimizations of training sets, as well as for importance-sampling to apply the author's copula financial risk-management codes, Trading in Risk Dimensions (TRD), for assessments of risk and uncertainty. This product can be used for decision support for projects ranging from diplomatic, information, military, and economic (DIME) factors of propagation/evolution of ideas, to commercial sales, trading indicators across sectors of financial markets, advertising and political campaigns, etc. A statistical mechanical model of neocortical interactions, developed by the author and tested successfully in describing short-term memory and EEG indicators, is the proposed model. Parameters with a given subset of macrocolumns will be fit using ASA to patterns representing ideas. Parameters of external and inter-regional interactions will be determined that promote or inhibit the spread of these ideas. Tools of financial risk management, developed by the author to process correlated multivariate systems with differing non-Gaussian distributions using modern copula analysis, importance-sampled using ASA, will enable bona fide correlations and uncertainties of success and failure to be calculated. Marginal distributions will be evolved to determine their expected duration and stability using algorithms developed by the author, i.e., PATHTREE and PATHINT codes.<|reference_end|> | arxiv | @article{ingber2006ideas,
title={Ideas by Statistical Mechanics (ISM)},
author={Lester Ingber},
journal={arXiv preprint arXiv:cs/0607103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607103},
primaryClass={cs.CE cs.MS cs.NE}
} | ingber2006ideas |
arxiv-674537 | cs/0607104 | Reducing the Computation of Linear Complexities of Periodic Sequences over $GF(p^m)$ | <|reference_start|>Reducing the Computation of Linear Complexities of Periodic Sequences over $GF(p^m)$: The linear complexity of a periodic sequence over $GF(p^m)$ plays an important role in cryptography and communication [12]. In this correspondence, we prove a result which reduces the computation of the linear complexity and minimal connection polynomial of a period $un$ sequence over $GF(p^m)$ to the computation of the linear complexities and minimal connection polynomials of $u$ period $n$ sequences. The conditions $u|p^m-1$ and $\gcd(n,p^m-1)=1$ are required for the result to hold. Some applications of this reduction in fast algorithms to determine the linear complexities and minimal connection polynomials of sequences over $GF(p^m)$ are presented.<|reference_end|> | arxiv | @article{chen2006reducing,
title={Reducing the Computation of Linear Complexities of Periodic Sequences
over $GF(p^m)$},
author={Hao Chen},
journal={arXiv preprint arXiv:cs/0607104},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607104},
primaryClass={cs.CR cs.IT math.IT}
} | chen2006reducing |
arxiv-674538 | cs/0607105 | Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems | <|reference_start|>Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems: We present a randomized algorithm that, on input a symmetric, weakly diagonally dominant n-by-n matrix A with m nonzero entries and an n-vector b, produces a y such that $\norm{y - \pinv{A} b}_{A} \leq \epsilon \norm{\pinv{A} b}_{A}$ in expected time $O (m \log^{c}n \log (1/\epsilon)),$ for some constant c. By applying this algorithm inside the inverse power method, we compute approximate Fiedler vectors in a similar amount of time. The algorithm applies subgraph preconditioners in a recursive fashion. These preconditioners improve upon the subgraph preconditioners first introduced by Vaidya (1990). For any symmetric, weakly diagonally-dominant matrix A with non-positive off-diagonal entries and $k \geq 1$, we construct in time $O (m \log^{c} n)$ a preconditioner B of A with at most $2 (n - 1) + O ((m/k) \log^{39} n)$ nonzero off-diagonal entries such that the finite generalized condition number $\kappa_{f} (A,B)$ is at most k, for some other constant c. In the special case when the nonzero structure of the matrix is planar the corresponding linear system solver runs in expected time $ O (n \log^{2} n + n \log n \ \log \log n \ \log (1/\epsilon))$. We hope that our introduction of algorithms of low asymptotic complexity will lead to the development of algorithms that are also fast in practice.<|reference_end|> | arxiv | @article{spielman2006nearly-linear,
title={Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric,
Diagonally Dominant Linear Systems},
author={Daniel A. Spielman and Shang-Hua Teng},
journal={arXiv preprint arXiv:cs/0607105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607105},
primaryClass={cs.NA cs.DS}
} | spielman2006nearly-linear |
arxiv-674539 | cs/0607106 | The Complexity of Quantified Constraint Satisfaction: Collapsibility, Sink Algebras, and the Three-Element Case | <|reference_start|>The constraint satisfaction problem (CSP) is a well-acknowledged framework in which many combinatorial search problems can be naturally formulated. The CSP may be viewed as the problem of deciding the truth of a logical sentence consisting of a conjunction of constraints, in front of which all variables are existentially quantified. The quantified constraint satisfaction problem (QCSP) is the generalization of the CSP where universal quantification is permitted in addition to existential quantification. The general intractability of these problems has motivated research studying the complexity of these problems under a restricted constraint language, which is a set of relations that can be used to express constraints. This paper introduces collapsibility, a technique for deriving positive complexity results on the QCSP. In particular, this technique allows one to show that, for a particular constraint language, the QCSP reduces to the CSP. We show that collapsibility applies to three known tractable cases of the QCSP that were originally studied using disparate proof techniques in different decades: Quantified 2-SAT (Aspvall, Plass, and Tarjan 1979), Quantified Horn-SAT (Karpinski, Kleine B\"{u}ning, and Schmitt 1987), and Quantified Affine-SAT (Creignou, Khanna, and Sudan 2001). This reconciles and reveals common structure among these cases, which are describable by constraint languages over a two-element domain. In addition to unifying these known tractable cases, we study constraint languages over domains of larger size.<|reference_end|> | arxiv | @article{chen2006the,
title={The Complexity of Quantified Constraint Satisfaction: Collapsibility,
Sink Algebras, and the Three-Element Case},
author={Hubie Chen},
journal={arXiv preprint arXiv:cs/0607106},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607106},
primaryClass={cs.LO cs.CC}
} | chen2006the |
arxiv-674540 | cs/0607107 | Linear Predictive Coding as an Estimator of Volatility | <|reference_start|>Linear Predictive Coding as an Estimator of Volatility: In this paper, we present a method of estimating the volatility of a signal that displays stochastic noise (such as a risky asset traded on an open market) utilizing Linear Predictive Coding. The main purpose is to associate volatility with a series of statistical properties that can lead us, through further investigation, toward a better understanding of structural volatility as well as to improve the quality of our current estimates.<|reference_end|> | arxiv | @article{mello2006linear,
title={Linear Predictive Coding as an Estimator of Volatility},
author={Louis Mello},
journal={arXiv preprint arXiv:cs/0607107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607107},
primaryClass={cs.IT math.IT}
} | mello2006linear |
arxiv-674541 | cs/0607108 | Properties of subspace subcodes of optimum codes in rank metric | <|reference_start|>Maximum rank distance codes denoted MRD-codes are the equivalent in rank metric of MDS-codes. Given any integer $q$ power of a prime and any integer $n$ there is a family of MRD-codes of length $n$ over $\FF{q^n}$ having polynomial-time decoding algorithms. These codes can be seen as the analogs of Reed-Solomon codes (hereafter denoted RS-codes) for rank metric. In this paper their subspace subcodes are characterized. It is shown that they are equivalent to MRD-codes constructed in the same way but with smaller parameters. A specific polynomial-time decoding algorithm is designed. Moreover, it is shown that the direct sum of subspace subcodes is equivalent to the direct product of MRD-codes with smaller parameters. This implies that the decoding procedure can correct errors of higher rank than the error-correcting capability. Finally it is shown that, for given parameters, subfield subcodes are completely characterized by elements of the general linear group ${GL}_n(\FF{q})$ of non-singular $q$-ary matrices of size $n$.<|reference_end|> | arxiv | @article{gabidulin2006properties,
title={Properties of subspace subcodes of optimum codes in rank metric},
author={E. M. Gabidulin and P. Loidreau},
journal={arXiv preprint arXiv:cs/0607108},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607108},
primaryClass={cs.IT cs.DM math.IT}
} | gabidulin2006properties |
arxiv-674542 | cs/0607109 | Complexity and Applications of Edge-Induced Vertex-Cuts | <|reference_start|>Complexity and Applications of Edge-Induced Vertex-Cuts: Motivated by hypergraph decomposition algorithms, we introduce the notion of edge-induced vertex-cuts and compare it with the well-known notions of edge-cuts and vertex-cuts. We investigate the complexity of computing minimum edge-induced vertex-cuts and demonstrate the usefulness of our notion by applications in network reliability and constraint satisfaction.<|reference_end|> | arxiv | @article{samer2006complexity,
title={Complexity and Applications of Edge-Induced Vertex-Cuts},
author={Marko Samer and Stefan Szeider},
journal={arXiv preprint arXiv:cs/0607109},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607109},
primaryClass={cs.DM cs.CC}
} | samer2006complexity |
arxiv-674543 | cs/0607110 | A Theory of Probabilistic Boosting, Decision Trees and Matryoshki | <|reference_start|>A Theory of Probabilistic Boosting, Decision Trees and Matryoshki: We present a theory of boosting probabilistic classifiers. We place ourselves in the situation of a user who only provides a stopping parameter and a probabilistic weak learner/classifier and compare three types of boosting algorithms: probabilistic Adaboost, decision tree, and tree of trees of ... of trees, which we call matryoshka. "Nested tree," "embedded tree" and "recursive tree" are also appropriate names for this algorithm, which is one of our contributions. Our other contribution is the theoretical analysis of the algorithms, in which we give training error bounds. This analysis suggests that the matryoshka leverages probabilistic weak classifiers more efficiently than simple decision trees.<|reference_end|> | arxiv | @article{grossmann2006a,
title={A Theory of Probabilistic Boosting, Decision Trees and Matryoshki},
author={Etienne Grossmann},
journal={arXiv preprint arXiv:cs/0607110},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607110},
primaryClass={cs.LG}
} | grossmann2006a |
arxiv-674544 | cs/0607111 | UCLog+ : A Security Data Management System for Correlating Alerts, Incidents, and Raw Data From Remote Logs | <|reference_start|>Source data for computer network security analysis takes different forms (alerts, incidents, logs) and each source may be voluminous. Due to the challenge this presents for data management, this has often led to security stovepipe operations which focus primarily on a small number of data sources for analysis with little or no automated correlation between data sources (although correlation may be done manually). We seek to address this systemic problem. In previous work we developed a unified correlated logging system (UCLog) that automatically processes alerts from different devices. We take this work one step further by presenting the architecture and applications of UCLog+ which adds the new capability to correlate between alerts and incidents and raw data located on remote logs. UCLog+ can be used for forensic analysis including queries and report generation but more importantly it can be used for near-real-time situational awareness of attack patterns in progress. The system, implemented with open source tools, can also be a repository for secure information sharing by different organizations.<|reference_end|> | arxiv | @article{yurcik2006uclog+,
title={UCLog+ : A Security Data Management System for Correlating Alerts,
Incidents, and Raw Data From Remote Logs},
author={William Yurcik, Cristina Abad, Ragib Hasan, Moazzam Saleem, Shyama
Sridharan},
journal={arXiv preprint arXiv:cs/0607111},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607111},
primaryClass={cs.CR}
} | yurcik2006uclog+ |
arxiv-674545 | cs/0607112 | Improving convergence of Belief Propagation decoding | <|reference_start|>The decoding of Low-Density Parity-Check codes by the Belief Propagation (BP) algorithm is revisited. We check the iterative algorithm for its convergence to a codeword (termination), and we run Monte Carlo simulations to find the probability distribution function of the termination time, n_it. Tested on an example [155, 64, 20] code, this termination curve shows a maximum and an extended algebraic tail at the highest values of n_it. Aiming to reduce the tail of the termination curve we consider a family of iterative algorithms modifying the standard BP by means of a simple relaxation. The relaxation parameter controls the convergence of the modified BP algorithm to a minimum of the Bethe free energy. The improvement is experimentally demonstrated for the Additive-White-Gaussian-Noise channel in some range of the signal-to-noise ratios. We also discuss the trade-off between the relaxation parameter of the improved iterative scheme and the number of iterations.<|reference_end|> | arxiv | @article{stepanov2006improving,
title={Improving convergence of Belief Propagation decoding},
author={M.G. Stepanov, M. Chertkov},
journal={arXiv preprint arXiv:cs/0607112},
year={2006},
number={LA-UR-06-5058},
archivePrefix={arXiv},
eprint={cs/0607112},
primaryClass={cs.IT math.IT}
} | stepanov2006improving |
arxiv-674546 | cs/0607113 | Trees with Convex Faces and Optimal Angles | <|reference_start|>Trees with Convex Faces and Optimal Angles: We consider drawings of trees in which all edges incident to leaves can be extended to infinite rays without crossing, partitioning the plane into infinite convex polygons. Among all such drawings we seek the one maximizing the angular resolution of the drawing. We find linear time algorithms for solving this problem, both for plane trees and for trees without a fixed embedding. In any such drawing, the edge lengths may be set independently of the angles, without crossing; we describe multiple strategies for setting these lengths.<|reference_end|> | arxiv | @article{carlson2006trees,
title={Trees with Convex Faces and Optimal Angles},
author={Josiah Carlson and David Eppstein},
journal={arXiv preprint arXiv:cs/0607113},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607113},
primaryClass={cs.CG}
} | carlson2006trees |
arxiv-674547 | cs/0607114 | Revising Type-2 Computation and Degrees of Discontinuity | <|reference_start|>By the sometimes so-called MAIN THEOREM of Recursive Analysis, every computable real function is necessarily continuous. Weihrauch and Zheng (TCS'2000), Brattka (MLQ'2005), and Ziegler (ToCS'2006) have considered different relaxed notions of computability to cover also discontinuous functions. The present work compares and unifies these approaches. This is based on the concept of the JUMP of a representation: both a TTE-counterpart to the well-known recursion-theoretic jump on Kleene's Arithmetical Hierarchy of hypercomputation; and a formalization of revising computation in the sense of Shoenfield. We also consider Markov and Banach/Mazur oracle-computation of discontinuous functions and characterize the computational power of Type-2 nondeterminism to coincide with the first level of the Analytical Hierarchy.<|reference_end|> | arxiv | @article{ziegler2006revising,
title={Revising Type-2 Computation and Degrees of Discontinuity},
author={Martin Ziegler},
journal={Electronic Notes in Theoretical Computer Science vol.167
(Jan.2007)},
year={2006},
doi={10.1016/j.entcs.2006.08.015},
archivePrefix={arXiv},
eprint={cs/0607114},
primaryClass={cs.LO math.LO}
} | ziegler2006revising |
arxiv-674548 | cs/0607115 | Polynomial-time algorithm for vertex k-colorability of P_5-free graphs | <|reference_start|>Polynomial-time algorithm for vertex k-colorability of P_5-free graphs: We give the first polynomial-time algorithm for coloring vertices of P_5-free graphs with k colors. This settles an open problem and generalizes several previously known results.<|reference_end|> | arxiv | @article{kaminski2006polynomial-time,
title={Polynomial-time algorithm for vertex k-colorability of P_5-free graphs},
author={Marcin Kaminski and Vadim Lozin},
journal={arXiv preprint arXiv:cs/0607115},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607115},
primaryClass={cs.DM cs.DS}
} | kaminski2006polynomial-time |
arxiv-674549 | cs/0607116 | Program Spectra Analysis in Embedded Software: A Case Study | <|reference_start|>Program Spectra Analysis in Embedded Software: A Case Study: Because of constraints imposed by the market, embedded software in consumer electronics is almost inevitably shipped with faults and the goal is just to reduce the inherent unreliability to an acceptable level before a product has to be released. Automatic fault diagnosis is a valuable tool to capture software faults without extra effort spent on testing. Apart from a debugging aid at design and integration time, fault diagnosis can help analyzing problems during operation, which allows for more accurate system recovery. In this paper we discuss perspectives and limitations for applying a particular fault diagnosis technique, namely the analysis of program spectra, in the area of embedded software in consumer electronics devices. We illustrate these by our first experience with a test case from industry.<|reference_end|> | arxiv | @article{abreu2006program,
title={Program Spectra Analysis in Embedded Software: A Case Study},
author={Rui Abreu, Peter Zoeteweij and Arjan JC van Gemund},
journal={arXiv preprint arXiv:cs/0607116},
year={2006},
number={TUD-SERG-2006-007},
archivePrefix={arXiv},
eprint={cs/0607116},
primaryClass={cs.SE}
} | abreu2006program |
arxiv-674550 | cs/0607117 | Bidding to the Top: VCG and Equilibria of Position-Based Auctions | <|reference_start|>Bidding to the Top: VCG and Equilibria of Position-Based Auctions: Many popular search engines run an auction to determine the placement of advertisements next to search results. Current auctions at Google and Yahoo! let advertisers specify a single amount as their bid in the auction. This bid is interpreted as the maximum amount the advertiser is willing to pay per click on its ad. When search queries arrive, the bids are used to rank the ads linearly on the search result page. The advertisers pay for each user who clicks on their ad, and the amount charged depends on the bids of all the advertisers participating in the auction. In order to be effective, advertisers seek to be as high on the list as their budget permits, subject to the market. We study the problem of ranking ads and associated pricing mechanisms when the advertisers not only specify a bid, but additionally express their preference for positions in the list of ads. In particular, we study "prefix position auctions" where advertiser $i$ can specify that she is interested only in the top $b_i$ positions. We present a simple allocation and pricing mechanism that generalizes the desirable properties of current auctions that do not have position constraints. In addition, we show that our auction has an "envy-free" or "symmetric" Nash equilibrium with the same outcome in allocation and pricing as the well-known truthful Vickrey-Clarke-Groves (VCG) auction. Furthermore, we show that this equilibrium is the best such equilibrium for the advertisers in terms of the profit made by each advertiser. We also discuss other position-based auctions.<|reference_end|> | arxiv | @article{aggarwal2006bidding,
title={Bidding to the Top: VCG and Equilibria of Position-Based Auctions},
author={Gagan Aggarwal, S. Muthukrishnan, Jon Feldman},
journal={arXiv preprint arXiv:cs/0607117},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607117},
primaryClass={cs.GT}
} | aggarwal2006bidding |
arxiv-674551 | cs/0607118 | A new function algebra of EXPTIME functions by safe nested recursion | <|reference_start|>A new function algebra of EXPTIME functions by safe nested recursion: Bellantoni and Cook have given a function-algebra characterization of the polynomial-time computable functions via an unbounded recursion scheme which is called safe recursion. Inspired by their work, we characterize the exponential-time computable functions with the use of a safe variant of nested recursion.<|reference_end|> | arxiv | @article{arai2006a,
title={A new function algebra of EXPTIME functions by safe nested recursion},
author={Toshiyasu Arai and Naohi Eguchi},
journal={arXiv preprint arXiv:cs/0607118},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607118},
primaryClass={cs.CC}
} | arai2006a |
arxiv-674552 | cs/0607119 | Web-Based Enterprise Information Systems Development: The Integrated Methodology | <|reference_start|>Web-Based Enterprise Information Systems Development: The Integrated Methodology: The paper considers software development issues for large-scale enterprise information systems (IS) with databases (DB) in a global heterogeneous distributed computational environment. Due to high IT development rates, present-day society has accumulated, and rapidly increases, an extremely huge data burden. Manipulating such huge data arrays becomes an essential problem, particularly due to their global distribution, heterogeneous and weak-structured character. The conceptual approach to integrated Internet-based IS design, development and implementation is presented, including formal models, software development methodology and original software development tools for visual problem-oriented development and content management. IS implementation results proved shortened terms and reduced costs of implementation compared to commercially available software.<|reference_end|> | arxiv | @article{zykov2006web-based,
title={Web-Based Enterprise Information Systems Development: The Integrated
Methodology},
author={Sergey V. Zykov},
journal={Proceedings of the 5th International Conference on Computer
Science and Information Technologies (CSIT 2005), Yerevan, Armenia, 19-23
September 2005. National Academy of Sciences of Armenia Publishers, 2005,
pp.373-381},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607119},
primaryClass={cs.SE cs.DC}
} | zykov2006web-based |
arxiv-674553 | cs/0607120 | Expressing Implicit Semantic Relations without Supervision | <|reference_start|>Expressing Implicit Semantic Relations without Supervision: We present an unsupervised learning algorithm that mines large text corpora for patterns that express implicit semantic relations. For a given input word pair X:Y with some unspecified semantic relations, the corresponding output list of patterns <P1,...,Pm> is ranked according to how well each pattern Pi expresses the relations between X and Y. For example, given X=ostrich and Y=bird, the two highest ranking output patterns are "X is the largest Y" and "Y such as the X". The output patterns are intended to be useful for finding further pairs with the same relations, to support the construction of lexicons, ontologies, and semantic networks. The patterns are sorted by pertinence, where the pertinence of a pattern Pi for a word pair X:Y is the expected relational similarity between the given pair and typical pairs for Pi. The algorithm is empirically evaluated on two tasks, solving multiple-choice SAT word analogy questions and classifying semantic relations in noun-modifier pairs. On both tasks, the algorithm achieves state-of-the-art results, performing significantly better than several alternative pattern ranking algorithms, based on tf-idf.<|reference_end|> | arxiv | @article{turney2006expressing,
title={Expressing Implicit Semantic Relations without Supervision},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the 21st International Conference on Computational
Linguistics and 44th Annual Meeting of the Association for Computational
Linguistics (ACL-06), (2006), Sydney, Australia, 313-320},
year={2006},
number={NRC-48761},
archivePrefix={arXiv},
eprint={cs/0607120},
primaryClass={cs.CL cs.AI cs.IR cs.LG}
} | turney2006expressing |
arxiv-674554 | cs/0607121 | Object-Based Groupware: Theory, Design and Implementation Issues | <|reference_start|>Object-Based Groupware: Theory, Design and Implementation Issues: Document management software systems have a wide audience at present. However, groupware as a term has a wide variety of possible definitions. An attempt at groupware classification is made in this paper. Possible approaches to groupware are considered including document management, document control and mailing systems. Lattice theory and concept modelling are presented as a theoretical background for the systems in question. Current technologies in state-of-the-art document management software are discussed. Design and implementation aspects for user-friendly integrated enterprise systems are described. Results for a real system to be implemented are given. Perspectives of the field in question are discussed.<|reference_end|> | arxiv | @article{zykov2006object-based,
title={Object-Based Groupware: Theory, Design and Implementation Issues},
author={Sergey V.Zykov, Gleb G. Pogodayev},
journal={In: J.Eder and L.A. Kalinichenko, (Ed.) Advances in Databases and
Information Systems, Vol.2. St.-Petersburg: Nevsky Dialect, 1997, p.p.10-17},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607121},
primaryClass={cs.SE}
} | zykov2006object-based |
arxiv-674555 | cs/0607122 | Enterprise Content Management: Theory and Engineering for Entire Lifecycle Support | <|reference_start|>Enterprise Content Management: Theory and Engineering for Entire Lifecycle Support: The paper considers enterprise content management (ECM) issues in a global heterogeneous distributed computational environment. Present-day enterprises have accumulated a huge data burden. Manipulating such a bulk becomes an essential problem, particularly due to its global distribution, heterogeneous and weak-structured character. The conceptual approach to integrated ECM lifecycle support is presented, including an overview of formal models, software development methodology and innovative software development tools. Implementation results proved shortened terms and reduced costs of implementation compared to commercially available software.<|reference_end|> | arxiv | @article{zykov2006enterprise,
title={Enterprise Content Management: Theory and Engineering for Entire
Lifecycle Support},
author={Sergey V. Zykov},
journal={Proceedings of the 8th International Workshop on Computer Science
and Information Technologies (CSIT'2006), Vol.1, Karlsruhe, Germany,
Sept.28-29, 2006.- Karlsruhe University Publishers, Karlsruhe, 2006.- Vol.1,
p.p.92-97},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607122},
primaryClass={cs.SE cs.DC}
} | zykov2006enterprise |
arxiv-674556 | cs/0607123 | Enterprise Portal Development Tools: Problem-Oriented Approach | <|reference_start|>Enterprise Portal Development Tools: Problem-Oriented Approach: The paper deals with problem-oriented visual information system (IS) engineering for enterprise Internet-based applications, which is a vital part of the whole development process. The suggested approach is based on semantic network theory and a novel ConceptModeller CASE tool.<|reference_end|> | arxiv | @article{zykov2006enterprise,
title={Enterprise Portal Development Tools: Problem-Oriented Approach},
author={Sergey V. Zykov},
journal={Proceedings of the 7th International Workshop on Computer Science
and Information Technologies (CSIT'2005), Vol.1, Ufa State Aviation Technical
University, USATU Editorial-Publishing Office, Ufa, 2005, pp. 110-113},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607123},
primaryClass={cs.SE cs.DC}
} | zykov2006enterprise |
arxiv-674557 | cs/0607124 | ConceptModeller: a Problem-Oriented Visual SDK for Globally Distributed Enterprise Systems | <|reference_start|>ConceptModeller: a Problem-Oriented Visual SDK for Globally Distributed Enterprise Systems: The paper describes problem-oriented approach to software development. The approach is a part of the original integrated methodology of enterprise Internet-based software design and implementation. All aspects of software development, from theory to implementation, are covered.<|reference_end|> | arxiv | @article{zykov2006conceptmodeller:,
title={ConceptModeller: a Problem-Oriented Visual SDK for Globally Distributed
Enterprise Systems},
author={Sergey V. Zykov},
journal={Proceedings of the 7th International Workshop on Computer Science
and Information Technologies (CSIT'2005), Vol.1, Ufa State Aviation Technical
University, USATU Editorial-Publishing Office, Ufa, 2005, pp. 114-117},
year={2006},
number={cs.szykov.27374},
archivePrefix={arXiv},
eprint={cs/0607124},
primaryClass={cs.SE cs.DC}
} | zykov2006conceptmodeller: |
arxiv-674558 | cs/0607125 | Enterprise Portal: from Model to Implementation | <|reference_start|>Enterprise Portal: from Model to Implementation: Portal technology can significantly improve the entire corporate information infrastructure. The approach proposed is based on a rigorous and consistent (meta)data model and provides for efficient and accurate front-end integration of heterogeneous corporate applications including enterprise resource planning (ERP) systems, multimedia data warehouses and proprietary content databases. The methodology proposed embraces the entire software lifecycle; it is illustrated by an enterprise-level Intranet portal implementation.<|reference_end|> | arxiv | @article{zykov2006enterprise,
title={Enterprise Portal: from Model to Implementation},
author={Sergey V. Zykov},
journal={Workshop on Computer Science and Information Technologies
(CSIT'2004), Budapest, Hungary, 2004, Vol.2, p.p.188-193},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607125},
primaryClass={cs.SE cs.DC}
} | zykov2006enterprise |
arxiv-674559 | cs/0607126 | Abstract Machine as a Model of Content Management Information System | <|reference_start|>Abstract Machine as a Model of Content Management Information System: Enterprise content management is an urgent issue of current scientific and practical activities in software design and implementation. However, papers published so far give insufficient coverage of the theoretical background of the software in question. The paper attempts to build a state-based model of content management. In accordance with the theoretical principles outlined, a content management information system (CMIS) has been implemented in a large international oil-and-gas group of companies.<|reference_end|> | arxiv | @article{zykov2006abstract,
title={Abstract Machine as a Model of Content Management Information System},
author={Sergey V. Zykov},
journal={Workshop on Computer Science and Information Technologies
(CSIT'2004), Budapest, Hungary, 2004, Vol.2, p.p.251-252},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607126},
primaryClass={cs.SE cs.DC}
} | zykov2006abstract |
arxiv-674560 | cs/0607127 | Integrating Enterprise Software Applications with Web Portal Technology | <|reference_start|>Integrating Enterprise Software Applications with Web Portal Technology: A Web-portal-based approach can significantly improve the entire corporate information infrastructure. The approach proposed provides for rapid and accurate front-end integration of heterogeneous corporate applications including enterprise resource planning (ERP) systems. Human resources ERP component and multimedia data warehouse implementations are discussed as essential instances.<|reference_end|> | arxiv | @article{zykov2006integrating,
title={Integrating Enterprise Software Applications with Web Portal Technology},
author={Sergey V. Zykov},
journal={Proceedings of 5th International Workshop on Computer Science and
Information Technologies (CSIT'2003), Vol.1, Ufa State Aviation Technical
University, Ufa:USATU Editorial-Publishing Office, 2003, p.p.60-65},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607127},
primaryClass={cs.SE cs.DC}
} | zykov2006integrating |
arxiv-674561 | cs/0607128 | The Integrated Approach to ERP: Embracing the Web | <|reference_start|>The Integrated Approach to ERP: Embracing the Web: An integrated approach to enterprise resource planning (ERP) software design and implementation can significantly improve the entire corporate information infrastructure and helps to benefit from the power of Internet services. The approach proposed provides for corporate Web portal integrity, consistency, urgency and front-end data processing. Human resources (HR) ERP component implementation is discussed as an essential instance.<|reference_end|> | arxiv | @article{zykov2006the,
title={The Integrated Approach to ERP: Embracing the Web},
author={Sergey V. Zykov},
journal={Proceedings of 4th International Workshop on Computer Science and
Information Technologies, (CSIT'2002) Sept., 2002, Patras, Greece, Vol.1,
p.p.73-78},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607128},
primaryClass={cs.SE cs.DC}
} | zykov2006the |
arxiv-674562 | cs/0607129 | Enterprise Resource Planning Systems: the Integrated Approach | <|reference_start|>Enterprise Resource Planning Systems: the Integrated Approach: Enterprise resource planning (ERP) systems enjoy increasingly wide coverage. However, no truly integrated solution has been proposed as yet. ERP classification is given. Recent trends in commercial systems are analyzed on the basis of human resources (HR) management software. An innovative "straight through" design and implementation process of an open, secure, and scalable integrated event-driven enterprise solution is suggested. Implementation results are presented.<|reference_end|> | arxiv | @article{zykov2006enterprise,
title={Enterprise Resource Planning Systems: the Integrated Approach},
author={Sergey V. Zykov},
journal={Proceedings of the 3d International Workshop on Computer Science
and Information Technologies, (CSIT'2001), Ufa:USATU, 2001, Vol.1,
p.p.284-295},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607129},
primaryClass={cs.SE cs.DC}
} | zykov2006enterprise |
arxiv-674563 | cs/0607130 | Towards Implementing an Enterprise Groupware-Integrated Human Resources Information System | <|reference_start|>Towards Implementing an Enterprise Groupware-Integrated Human Resources Information System: Human resources management software has a wide audience at present. However, no truly integrated solution has been proposed yet to improve the systems concerned. Approaches to extra data collection for appraisal decision-making are considered on the theoretical basis of concept modeling. Current technologies in state-of-the-art HR management software are compared. Design and implementation aspects for a Web-wired truly integrated secure and scalable event-driven enterprise system are described. Benchmark results are presented. Field perspectives are discussed.<|reference_end|> | arxiv | @article{zykov2006towards,
title={Towards Implementing an Enterprise Groupware-Integrated Human Resources
Information System},
author={Sergey V. Zykov},
journal={Proceedings of the 2nd International Workshop on Computer Science
and Information Technologies (CSIT'2000), Vol.1, Ufa State Aviation Technical
University, USATU Editorial-Publishing Office, Ufa, 2000, p.p.188-196},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607130},
primaryClass={cs.SE cs.CY}
} | zykov2006towards |
arxiv-674564 | cs/0607131 | Tardos fingerprinting is better than we thought | <|reference_start|>Tardos fingerprinting is better than we thought: We review the fingerprinting scheme by Tardos and show that it has a much better performance than suggested by the proofs in Tardos' original paper. In particular, the length of the codewords can be significantly reduced. First, we generalize the proofs of the false positive and false negative error probabilities with the following modifications: (1) we replace Tardos' hard-coded numbers by variables and (2) we allow for independently chosen false positive and false negative error rates. It turns out that all the collusion-resistance properties can still be proven when the code length is reduced by a factor of more than 2. Second, we study the statistical properties of the fingerprinting scheme, in particular the average and variance of the accusations. We identify which colluder strategy forces the content owner to employ the longest code. Using a Gaussian approximation for the probability density functions of the accusations, we show that the required false negative and false positive error rate can be achieved with codes that are a factor 2 shorter than required for rigid proofs. Combining the results of these two approaches, we show that the Tardos scheme can be used with a code length approximately 5 times shorter than in the original construction.<|reference_end|> | arxiv | @article{skoric2006tardos,
title={Tardos fingerprinting is better than we thought},
author={B. Skoric, T.U. Vladimirova, M. Celik, J.C. Talstra},
journal={arXiv preprint arXiv:cs/0607131},
year={2006},
number={PR-MS 26.957},
archivePrefix={arXiv},
eprint={cs/0607131},
primaryClass={cs.CR}
} | skoric2006tardos |
arxiv-674565 | cs/0607132 | On q-ary codes correcting all unidirectional errors of a limited magnitude | <|reference_start|>On q-ary codes correcting all unidirectional errors of a limited magnitude: We consider codes over the alphabet Q={0,1,..,q-1} intended for the control of unidirectional errors of level l. That is, the transmission channel is such that the received word cannot contain both a component larger than the transmitted one and a component smaller than the transmitted one. Moreover, the absolute value of the difference between a transmitted component and its received version is at most l. We introduce and study q-ary codes capable of correcting all unidirectional errors of level l. Lower and upper bounds for the maximal size of those codes are presented. We also study codes for this aim that are defined by a single equation on the codeword coordinates (similar to the Varshamov-Tenengolts codes for correcting binary asymmetric errors). We finally consider the problem of detecting all unidirectional errors of level l.<|reference_end|> | arxiv | @article{ahlswede2006on,
title={On q-ary codes correcting all unidirectional errors of a limited
magnitude},
author={R. Ahlswede, H. Aydinian, L.H. Khachatrian and L.M.G.M. Tolhuizen},
journal={arXiv preprint arXiv:cs/0607132},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607132},
primaryClass={cs.IT math.IT}
} | ahlswede2006on |
arxiv-674566 | cs/0607133 | Self-Replication and Self-Assembly for Manufacturing | <|reference_start|>Self-Replication and Self-Assembly for Manufacturing: It has been argued that a central objective of nanotechnology is to make products inexpensively, and that self-replication is an effective approach to very low-cost manufacturing. The research presented here is intended to be a step towards this vision. We describe a computational simulation of nanoscale machines floating in a virtual liquid. The machines can bond together to form strands (chains) that self-replicate and self-assemble into user-specified meshes. There are four types of machines and the sequence of machine types in a strand determines the shape of the mesh they will build. A strand may be in an unfolded state, in which the bonds are straight, or in a folded state, in which the bond angles depend on the types of machines. By choosing the sequence of machine types in a strand, the user can specify a variety of polygonal shapes. A simulation typically begins with an initial unfolded seed strand in a soup of unbonded machines. The seed strand replicates by bonding with free machines in the soup. The child strands fold into the encoded polygonal shape, and then the polygons drift together and bond to form a mesh. We demonstrate that a variety of polygonal meshes can be manufactured in the simulation, by simply changing the sequence of machine types in the seed.<|reference_end|> | arxiv | @article{ewaschuk2006self-replication,
title={Self-Replication and Self-Assembly for Manufacturing},
author={Robert Ewaschuk, Peter D. Turney},
journal={Artificial Life, (2006), 12, 411-433},
year={2006},
doi={10.1162/artl.2006.12.3.411},
number={NRC-48760},
archivePrefix={arXiv},
eprint={cs/0607133},
primaryClass={cs.MA cs.CE}
} | ewaschuk2006self-replication |
arxiv-674567 | cs/0607134 | Leading strategies in competitive on-line prediction | <|reference_start|>Leading strategies in competitive on-line prediction: We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a "leading prediction strategy", which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules.<|reference_end|> | arxiv | @article{vovk2006leading,
title={Leading strategies in competitive on-line prediction},
author={Vladimir Vovk},
journal={arXiv preprint arXiv:cs/0607134},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607134},
primaryClass={cs.LG}
} | vovk2006leading |
arxiv-674568 | cs/0607135 | A polynomial-time approximation algorithm for the number of k-matchings in bipartite graphs | <|reference_start|>A polynomial-time approximation algorithm for the number of k-matchings in bipartite graphs: We show that the number of $k$-matchings in a given undirected graph $G$ is equal to the number of perfect matchings of the corresponding graph $G_k$ on an even number of vertices divided by a suitable factor. If $G$ is bipartite then one can construct a bipartite $G_k$. For bipartite graphs this result implies that the number of $k$-matchings has a polynomial-time approximation algorithm. The above results are extended to permanents and hafnians of corresponding matrices.<|reference_end|> | arxiv | @article{friedland2006a,
title={A polynomial-time approximation algorithm for the number of k-matchings
in bipartite graphs},
author={Shmuel Friedland and Daniel Levy},
journal={arXiv preprint arXiv:cs/0607135},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607135},
primaryClass={cs.CC cs.DM}
} | friedland2006a |
arxiv-674569 | cs/0607136 | Competing with Markov prediction strategies | <|reference_start|>Competing with Markov prediction strategies: Assuming that the loss function is convex in the prediction, we construct a prediction strategy universal for the class of Markov prediction strategies, not necessarily continuous. Allowing randomization, we remove the requirement of convexity.<|reference_end|> | arxiv | @article{vovk2006competing,
title={Competing with Markov prediction strategies},
author={Vladimir Vovk},
journal={arXiv preprint arXiv:cs/0607136},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607136},
primaryClass={cs.LG}
} | vovk2006competing |
arxiv-674570 | cs/0607137 | Multicasting with selective delivery: A SafetyNet for vertical handoffs | <|reference_start|>Multicasting with selective delivery: A SafetyNet for vertical handoffs: In the future, mobility support will require handling roaming in heterogeneous access networks. In order to enable seamless roaming it is necessary to minimize the impact of the vertical handoffs. Localized mobility management schemes such as FMIPv6 and HMIPv6 do not provide sufficient handoff performance, since they have been designed for horizontal handoffs. In this paper, we propose the SafetyNet protocol, which allows a Mobile Node to perform seamless vertical handoffs. Further, we propose a handoff timing algorithm which allows a Mobile Node to delay or even completely avoid upward vertical handoffs. We implement the SafetyNet protocol and compare its performance with the Fast Handovers for Mobile IPv6 protocol in our wireless test bed and analyze the results. The experimental results indicate that the proposed SafetyNet protocol can provide an improvement of up to 95% for TCP performance in vertical handoffs, when compared with FMIPv6 and an improvement of 64% over FMIPv6 with bicasting. We use numerical analysis of the protocol to show that its signaling and data transmission overhead is comparable to Fast Mobile IPv6 and significantly smaller than that of FMIPv6 with bicasting.<|reference_end|> | arxiv | @article{petander2006multicasting,
title={Multicasting with selective delivery: A SafetyNet for vertical handoffs},
author={Henrik Petander, Eranga Perera, Aruna Seneviratne},
journal={arXiv preprint arXiv:cs/0607137},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607137},
primaryClass={cs.NI}
} | petander2006multicasting |
arxiv-674571 | cs/0607138 | A Foundation to Perception Computing, Logic and Automata | <|reference_start|>A Foundation to Perception Computing, Logic and Automata: In this report, a novel approach to intelligence and learning is introduced; this approach is based on what we call 'perception logic'. Based on this logic, a computing mechanism and automata are introduced. Multi-resolution analysis of perceptual information is given, in which learning is accomplished in at most O(log(N)) epochs, where N is the number of samples, and the convergence is guaranteed. This approach combines the advantages of computational models in the sense that they are structured and mathematically well-defined, and the adaptivity of soft computing approaches, in addition to the continuity and real-time response of dynamical systems.<|reference_end|> | arxiv | @article{belal2006a,
title={A Foundation to Perception Computing, Logic and Automata},
author={Mohamed A. Belal},
journal={arXiv preprint arXiv:cs/0607138},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607138},
primaryClass={cs.AI cs.LG}
} | belal2006a |
arxiv-674572 | cs/0607139 | Parallel repetition: simplifications and the no-signaling case | <|reference_start|>Parallel repetition: simplifications and the no-signaling case: Consider a game where a referee chooses (x,y) according to a publicly known distribution P_XY, sends x to Alice, and y to Bob. Without communicating with each other, Alice responds with a value "a" and Bob responds with a value "b". Alice and Bob jointly win if a publicly known predicate Q(x,y,a,b) holds. Let such a game be given and assume that the maximum probability that Alice and Bob can win is v<1. Raz (SIAM J. Comput. 27, 1998) shows that if the game is repeated n times in parallel, then the probability that Alice and Bob win all games simultaneously is at most v'^(n/log(s)), where s is the maximal number of possible responses from Alice and Bob in the initial game, and v' is a constant depending only on v. In this work, we simplify Raz's proof in various ways and thus shorten it significantly. Further, we study the case where Alice and Bob are not restricted to local computations and can use any strategy which does not imply communication among them.<|reference_end|> | arxiv | @article{holenstein2006parallel,
title={Parallel repetition: simplifications and the no-signaling case},
author={Thomas Holenstein},
journal={Theory of Computing 5 (2009) 1, pp. 141-172},
year={2006},
doi={10.4086/toc.2009.v005a008},
archivePrefix={arXiv},
eprint={cs/0607139},
primaryClass={cs.CC quant-ph}
} | holenstein2006parallel |
arxiv-674573 | cs/0607140 | Stylized Facts in Internal Rates of Return on Stock Index and its Derivative Transactions | <|reference_start|>Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in log normalized returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return defined as the continuous compound interest. Resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market reveals an interesting difference in the behavior of the two probability distributions, in case a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies, as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions, may also be useful in deducing the type of investment strategy from trading revenues of small portfolio investors.<|reference_end|> | arxiv | @article{pichl2006stylized,
title={Stylized Facts in Internal Rates of Return on Stock Index and its
Derivative Transactions},
author={Lukas Pichl, Taisei Kaizoji, Takuya Yamano},
journal={arXiv preprint arXiv:cs/0607140},
year={2006},
doi={10.1016/j.physa.2007.03.042},
archivePrefix={arXiv},
eprint={cs/0607140},
primaryClass={cs.IT cs.CE math.IT}
} | pichl2006stylized |
arxiv-674574 | cs/0607141 | Logic Column 16: Higher-Order Abstract Syntax: Setting the Record Straight | <|reference_start|>Logic Column 16: Higher-Order Abstract Syntax: Setting the Record Straight: This article responds to a critique of higher-order abstract syntax appearing in Logic Column 14, ``Nominal Logic and Abstract Syntax'', cs.LO/0511025.<|reference_end|> | arxiv | @article{crary2006logic,
title={Logic Column 16: Higher-Order Abstract Syntax: Setting the Record
Straight},
author={Karl Crary and Robert Harper},
journal={arXiv preprint arXiv:cs/0607141},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607141},
primaryClass={cs.LO}
} | crary2006logic |
arxiv-674575 | cs/0607142 | Employing Trusted Computing for the forward pricing of pseudonyms in reputation systems | <|reference_start|>Employing Trusted Computing for the forward pricing of pseudonyms in reputation systems: Reputation and recommendation systems are fundamental for the formation of community market places. Yet, they are easy targets for attacks which disturb a market's equilibrium and are often based on cheap pseudonyms used to submit ratings. We present a method to price ratings using trusted computing, based on pseudonymous tickets.<|reference_end|> | arxiv | @article{kuntze2006employing,
title={Employing Trusted Computing for the forward pricing of pseudonyms in
reputation systems},
author={Nicolai Kuntze, Dominique Maehler, and Andreas U. Schmidt},
journal={arXiv preprint arXiv:cs/0607142},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607142},
primaryClass={cs.CR}
} | kuntze2006employing |
arxiv-674576 | cs/0607143 | Target Type Tracking with PCR5 and Dempster's rules: A Comparative Analysis | <|reference_start|>In this paper we consider and analyze the behavior of two combinational rules for temporal (sequential) attribute data fusion for target type estimation. Our comparative analysis is based on Dempster's fusion rule proposed in Dempster-Shafer Theory (DST) and on the Proportional Conflict Redistribution rule no. 5 (PCR5) recently proposed in Dezert-Smarandache Theory (DSmT). We show through a very simple scenario and Monte-Carlo simulation how PCR5 allows very efficient Target Type Tracking and reduces drastically the latency delay for correct Target Type decision with respect to Dempster's rule. For cases presenting some short Target Type switches, Dempster's rule is proved to be unable to detect the switches and thus to track correctly the Target Type changes. The approach proposed here is totally new, efficient and promising to be incorporated in real-time Generalized Data Association - Multi Target Tracking systems (GDA-MTT) and provides an important result on the behavior of PCR5 with respect to Dempster's rule. The MatLab source code is provided in<|reference_end|> | arxiv | @article{dezert2006target,
title={Target Type Tracking with PCR5 and Dempster's rules: A Comparative
Analysis},
author={Jean Dezert, Albena Tchamova, Florentin Smarandache, Pavlina
Konstantinova},
journal={Proceedings of Fusion 2006 International Conference, Florence,
Italy, July 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607143},
primaryClass={cs.AI}
} | dezert2006target |
arxiv-674577 | cs/0607144 | Levels of Product Differentiation in the Global Mobile Phones Market | <|reference_start|>The sixth product level, called the compliant product, is a connecting element between the physical product characteristics and the strategy of the producer company. The article discusses the differentiation among the product offers of companies working in the global markets, as well as the strategies which they use and could use in that respect.<|reference_end|> | arxiv | @article{andonov2006levels,
title={Levels of Product Differentiation in the Global Mobile Phones Market},
author={Stanimir Andonov},
journal={arXiv preprint arXiv:cs/0607144},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607144},
primaryClass={cs.OH}
} | andonov2006levels |
arxiv-674578 | cs/0607145 | Geometric definition of a new skeletonization concept | <|reference_start|>Geometric definition of a new skeletonization concept: The Divider set, as an innovative alternative concept to maximal disks, Voronoi sets and cut loci, is presented with a formal definition based on topology and differential geometry. The relevant mathematical theory by previous authors and a comparison with other medial axis definitions is presented. Appropriate applications are proposed and examined.<|reference_end|> | arxiv | @article{bakopoulos2006geometric,
title={Geometric definition of a new skeletonization concept},
author={Yannis Bakopoulos, Theophanis Raptis, Doxaras Ioannis},
journal={arXiv preprint arXiv:cs/0607145},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607145},
primaryClass={cs.CG}
} | bakopoulos2006geometric |
arxiv-674579 | cs/0607146 | Modeling Adversaries in a Logic for Security Protocol Analysis | <|reference_start|>Modeling Adversaries in a Logic for Security Protocol Analysis: Logics for security protocol analysis require the formalization of an adversary model that specifies the capabilities of adversaries. A common model is the Dolev-Yao model, which considers only adversaries that can compose and replay messages, and decipher them with known keys. The Dolev-Yao model is a useful abstraction, but it suffers from some drawbacks: it cannot handle the adversary knowing protocol-specific information, and it cannot handle probabilistic notions, such as the adversary attempting to guess the keys. We show how we can analyze security protocols under different adversary models by using a logic with a notion of algorithmic knowledge. Roughly speaking, adversaries are assumed to use algorithms to compute their knowledge; adversary capabilities are captured by suitable restrictions on the algorithms used. We show how we can model the standard Dolev-Yao adversary in this setting, and how we can capture more general capabilities including protocol-specific knowledge and guesses.<|reference_end|> | arxiv | @article{halpern2006modeling,
title={Modeling Adversaries in a Logic for Security Protocol Analysis},
author={Joseph Y. Halpern (Cornell University), Riccardo Pucella (Northeastern
University)},
journal={Logical Methods in Computer Science, Volume 8, Issue 1 (March 9,
2012) lmcs:688},
year={2006},
doi={10.2168/LMCS-8(1:21)2012},
archivePrefix={arXiv},
eprint={cs/0607146},
primaryClass={cs.CR cs.LO}
} | halpern2006modeling |
arxiv-674580 | cs/0607147 | Fusion of qualitative beliefs using DSmT | <|reference_start|>Fusion of qualitative beliefs using DSmT: This paper introduces the notion of qualitative belief assignment to model beliefs of human experts expressed in natural language (with linguistic labels). We show how qualitative beliefs can be efficiently combined using an extension of Dezert-Smarandache Theory (DSmT) of plausible and paradoxical quantitative reasoning to qualitative reasoning. We propose a new arithmetic on linguistic labels which allows a direct extension of classical DSm fusion rule or DSm Hybrid rules. An approximate qualitative PCR5 rule is also proposed jointly with a Qualitative Average Operator. We also show how crisp or interval mappings can be used to deal indirectly with linguistic labels. A very simple example is provided to illustrate our qualitative fusion rules.<|reference_end|> | arxiv | @article{smarandache2006fusion,
title={Fusion of qualitative beliefs using DSmT},
author={Florentin Smarandache, Jean Dezert},
journal={Presented as an extended version (Tutorial MO2) to the Fusion 2006
International Conference, Florence, Italy, July 10-13, 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607147},
primaryClass={cs.AI}
} | smarandache2006fusion |
arxiv-674581 | cs/0608001 | A Finite Equational Base for CCS with Left Merge and Communication Merge | <|reference_start|>A Finite Equational Base for CCS with Left Merge and Communication Merge: Using the left merge and communication merge from ACP, we present an equational base (i.e., a ground-complete and $\omega$-complete set of valid equations) for the fragment of CCS without recursion, restriction and relabelling. Our equational base is finite if the set of actions is finite.<|reference_end|> | arxiv | @article{aceto2006a,
title={A Finite Equational Base for CCS with Left Merge and Communication Merge},
author={Luca Aceto, Wan Fokkink, Anna Ingolfsdottir, and Bas Luttik},
journal={arXiv preprint arXiv:cs/0608001},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608001},
primaryClass={cs.LO}
} | aceto2006a |
arxiv-674582 | cs/0608002 | An Introduction to the DSm Theory for the Combination of Paradoxical, Uncertain, and Imprecise Sources of Information | <|reference_start|>The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primary importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the literature, developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation here on the foundations of DSmT and on the two important new rules of combination, rather than on browsing specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and the generality of this new approach.<|reference_end|> | arxiv | @article{smarandache2006an,
title={An Introduction to the DSm Theory for the Combination of Paradoxical,
Uncertain, and Imprecise Sources of Information},
author={Florentin Smarandache, Jean Dezert},
journal={Presented at 13th International Congress of Cybernetics and
Systems, Maribor, Slovenia, July 6-10, 2005.},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608002},
primaryClass={cs.AI}
} | smarandache2006an |
arxiv-674583 | cs/0608003 | On a solution to display non-filled-in quaternionic Julia sets | <|reference_start|>During the early 1980s, the so-called `escape time' method, developed to display the Julia sets for complex dynamical systems, was exported to quaternions in order to draw analogous pictures in this wider numerical field. Despite the fine results in the complex plane, where all topological configurations of Julia sets have been successfully displayed, the `escape time' method fails to render properly the non-filled-in variety of quaternionic Julia sets. So their digital visualisation remained an open problem for several years. Both the solution for extending this old method to non-filled-in quaternionic Julia sets and its implementation into a program are explained here.<|reference_end|> | arxiv | @article{rosa2006on,
title={On a solution to display non-filled-in quaternionic Julia sets},
author={Alessandro Rosa},
journal={arXiv preprint arXiv:cs/0608003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608003},
primaryClass={cs.GR cs.MS math.DS}
} | rosa2006on |
arxiv-674584 | cs/0608004 | Separating the articles of authors with the same name | <|reference_start|>Separating the articles of authors with the same name: I describe a method to separate the articles of different authors with the same name. It is based on a distance between any two publications, defined in terms of the probability that they would have as many coincidences if they were drawn at random from all published documents. Articles with a given author name are then clustered according to their distance, so that all articles in a cluster belong very likely to the same author. The method has proven very useful in generating groups of papers that are then selected manually. This simplifies considerably citation analysis when the author publication lists are not available.<|reference_end|> | arxiv | @article{soler2006separating,
title={Separating the articles of authors with the same name},
author={Jose M. Soler},
journal={arXiv preprint arXiv:cs/0608004},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608004},
primaryClass={cs.DL cs.IR}
} | soler2006separating |
arxiv-674585 | cs/0608005 | A field-theory motivated approach to symbolic computer algebra | <|reference_start|>A field-theory motivated approach to symbolic computer algebra: Field theory is an area in physics with a deceptively compact notation. Although general purpose computer algebra systems, built around generic list-based data structures, can be used to represent and manipulate field-theory expressions, this often leads to cumbersome input formats, unexpected side-effects, or the need for a lot of special-purpose code. This makes a direct translation of problems from paper to computer and back needlessly time-consuming and error-prone. A prototype computer algebra system is presented which features TeX-like input, graph data structures, lists with Young-tableaux symmetries and a multiple-inheritance property system. The usefulness of this approach is illustrated with a number of explicit field-theory problems.<|reference_end|> | arxiv | @article{peeters2006a,
title={A field-theory motivated approach to symbolic computer algebra},
author={Kasper Peeters},
journal={Comput.Phys.Commun.176:550-558,2007},
year={2006},
doi={10.1016/j.cpc.2007.01.003},
number={AEI-2006-037},
archivePrefix={arXiv},
eprint={cs/0608005},
primaryClass={cs.SC gr-qc hep-th}
} | peeters2006a |
arxiv-674586 | cs/0608006 | A Graph-based Framework for Transmission of Correlated Sources over Broadcast Channels | <|reference_start|>A Graph-based Framework for Transmission of Correlated Sources over Broadcast Channels: In this paper we consider the communication problem that involves transmission of correlated sources over broadcast channels. We consider a graph-based framework for this information transmission problem. The system involves a source coding module and a channel coding module. In the source coding module, the sources are efficiently mapped into a nearly semi-regular bipartite graph, and in the channel coding module, the edges of this graph are reliably transmitted over a broadcast channel. We consider nearly semi-regular bipartite graphs as discrete interface between source coding and channel coding in this multiterminal setting. We provide an information-theoretic characterization of (1) the rate of exponential growth (as a function of the number of channel uses) of the size of the bipartite graphs whose edges can be reliably transmitted over a broadcast channel and (2) the rate of exponential growth (as a function of the number of source samples) of the size of the bipartite graphs which can reliably represent a pair of correlated sources to be transmitted over a broadcast channel.<|reference_end|> | arxiv | @article{choi2006a,
title={A Graph-based Framework for Transmission of Correlated Sources over
Broadcast Channels},
author={Suhan Choi and S. Sandeep Pradhan},
journal={arXiv preprint arXiv:cs/0608006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608006},
primaryClass={cs.IT math.IT}
} | choi2006a |
arxiv-674587 | cs/0608007 | On the randomness of independent experiments | <|reference_start|>On the randomness of independent experiments: Given a probability distribution P, what is the minimum amount of bits needed to store a value x sampled according to P, such that x can later be recovered (except with some small probability)? Or, what is the maximum amount of uniform randomness that can be extracted from x? Answering these and similar information-theoretic questions typically boils down to computing so-called smooth entropies. In this paper, we derive explicit and almost tight bounds on the smooth entropies of n-fold product distributions.<|reference_end|> | arxiv | @article{holenstein2006on,
title={On the randomness of independent experiments},
author={Thomas Holenstein and Renato Renner},
journal={arXiv preprint arXiv:cs/0608007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608007},
primaryClass={cs.IT math.IT}
} | holenstein2006on |
arxiv-674588 | cs/0608008 | The minimum linear arrangement problem on proper interval graphs | <|reference_start|>We present a linear-time algorithm for the minimum linear arrangement problem on proper interval graphs. The obtained ordering is a 4-approximation for general interval graphs.<|reference_end|> | arxiv | @article{safro2006the,
title={The minimum linear arrangement problem on proper interval graphs},
author={Ilya Safro},
journal={arXiv preprint arXiv:cs/0608008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608008},
primaryClass={cs.DM cs.DS}
} | safro2006the |
arxiv-674589 | cs/0608009 | Stability in multidimensional Size Theory | <|reference_start|>Stability in multidimensional Size Theory: This paper proves that in Size Theory the comparison of multidimensional size functions can be reduced to the 1-dimensional case by a suitable change of variables. Indeed, we show that a foliation in half-planes can be given, such that the restriction of a multidimensional size function to each of these half-planes turns out to be a classical size function in two scalar variables. This leads to the definition of a new distance between multidimensional size functions, and to the proof of their stability with respect to that distance.<|reference_end|> | arxiv | @article{cerri2006stability,
title={Stability in multidimensional Size Theory},
author={Andrea Cerri, Patrizio Frosini and Claudia Landi},
journal={arXiv preprint arXiv:cs/0608009},
year={2006},
number={Universita' di Modena e Reggio Emilia, DISMI-85 june 2006},
archivePrefix={arXiv},
eprint={cs/0608009},
primaryClass={cs.CG cs.CV}
} | cerri2006stability |
arxiv-674590 | cs/0608010 | MIMO scheme performance and detection in epsilon noise | <|reference_start|>A new approach to the analysis and decoding of MIMO signaling is developed for a common model of non-Gaussian noise, named epsilon-noise, which consists of background and impulsive noise. It is shown that performance in non-Gaussian noise is significantly worse than in Gaussian noise. Simulation results support our theory. A detection rule that is robust in the statistical sense is suggested for this kind of noise; it features much better detector performance than a detector designed for Gaussian noise in an impulsive environment, with only a modest margin lost in background noise. The performance of the proposed algorithms is comparable with the developed potential bound. The proposed tool is a crucial issue for MIMO communication system design: since the real noise environment has an impulsive character, which contradicts the widely used Gaussian approach, real MIMO performance differs greatly between the Gaussian and non-Gaussian noise models.<|reference_end|> | arxiv | @article{stepanov2006mimo,
title={MIMO scheme performance and detection in epsilon noise},
author={Sander Stepanov},
journal={arXiv preprint arXiv:cs/0608010},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608010},
primaryClass={cs.IT math.IT}
} | stepanov2006mimo |
arxiv-674591 | cs/0608011 | The Many Faces of Rationalizability | <|reference_start|>The Many Faces of Rationalizability: The rationalizability concept was introduced in \cite{Ber84} and \cite{Pea84} to assess what can be inferred by rational players in a non-cooperative game in the presence of common knowledge. However, this notion can be defined in a number of ways that differ in seemingly unimportant minor details. We shed light on these differences, explain their impact, and clarify for which games these definitions coincide. Then we apply the same analysis to explain the differences and similarities between various ways the iterated elimination of strictly dominated strategies was defined in the literature. This allows us to clarify the results of \cite{DS02} and \cite{CLL05} and improve upon them. We also consider the extension of these results to strict dominance by a mixed strategy. Our approach is based on a general study of the operators on complete lattices. We allow transfinite iterations of the considered operators and clarify the need for them. The advantage of such a general approach is that a number of results, including order independence for some of the notions of rationalizability and strict dominance, come for free.<|reference_end|> | arxiv | @article{apt2006the,
title={The Many Faces of Rationalizability},
author={Krzysztof R. Apt},
journal={arXiv preprint arXiv:cs/0608011},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608011},
primaryClass={cs.GT}
} | apt2006the |
arxiv-674592 | cs/0608012 | Optic,mal: Optical/Optimal Routing in Massively Dense Wireless Networks | <|reference_start|>Optic,mal: Optical/Optimal Routing in Massively Dense Wireless Networks: We study routing for massively dense wireless networks, i.e., wireless networks that contain so many nodes that, in addition to their usual microscopic description, a novel macroscopic description becomes possible. The macroscopic description is not detailed, but nevertheless contains enough information to permit a meaningful study and performance optimization of the network. Within this context, we continue and significantly expand previous work on the analogy between optimal routing and the propagation of light according to the laws of Geometrical Optics. Firstly, we pose the analogy in a more general framework than previously, notably showing how the eikonal equation, which is the central equation of Geometrical Optics, also appears in the networking context. Secondly, we develop a methodology for calculating the cost function, which is the function describing the network at the macroscopic level. We apply this methodology for two important types of networks: bandwidth limited and energy limited.<|reference_end|> | arxiv | @article{catanuto2006opti{c,m}al:,
title={Opti{c,m}al: Optical/Optimal Routing in Massively Dense Wireless
Networks},
author={R. Catanuto, S. Toumpis, G. Morabito},
journal={Proc. INFOCOM 2007, Anchorage, AL, May 2007},
year={2006},
doi={10.1109/INFCOM.2007.122},
archivePrefix={arXiv},
eprint={cs/0608012},
primaryClass={cs.NI}
} | catanuto2006opti{c,m}al: |
arxiv-674593 | cs/0608013 | Pull-Based Data Broadcast with Dependencies: Be Fair to Users, not to Items | <|reference_start|>Broadcasting is known to be an efficient means of disseminating data in wireless communication environments (such as Satellite, mobile phone networks,...). It has been recently observed that the average service time of broadcast systems can be considerably improved by taking into consideration existing correlations between requests. We study a pull-based data broadcast system where users request possibly overlapping sets of items; a request is served when all its requested items are downloaded. We aim at minimizing the average user perceived latency, i.e. the average flow time of the requests. We first show that any algorithm that ignores the dependencies can yield arbitrary bad performances with respect to the optimum even if it is given arbitrary extra resources. We then design a $(4+\epsilon)$-speed $O(1+1/\epsilon^2)$-competitive algorithm for this setting that consists in 1) splitting evenly the bandwidth among each requested set and in 2) broadcasting arbitrarily the items still missing in each set into the bandwidth the set has received. Our algorithm presents several interesting features: it is simple to implement, non-clairvoyant, fair to users so that no user may starve for a long period of time, and guarantees good performances in presence of correlations between user requests (without any change in the broadcast protocol). We also present a $(4+\epsilon)$-speed $O(1+1/\epsilon^3)$-competitive algorithm which broadcasts at most one item at any given time and preempts each item broadcast at most once on average. As a side result of our analysis, we design a competitive algorithm for a particular setting of non-clairvoyant job scheduling with dependencies, which might be of independent interest.<|reference_end|> | arxiv | @article{robert2006pull-based,
title={Pull-Based Data Broadcast with Dependencies: Be Fair to Users, not to
Items},
author={Julien Robert, Nicolas Schabanel},
journal={arXiv preprint arXiv:cs/0608013},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608013},
primaryClass={cs.DS cs.CC}
} | robert2006pull-based |
arxiv-674594 | cs/0608014 | Localization for Anchoritic Sensor Networks | <|reference_start|>Localization for Anchoritic Sensor Networks: We introduce a class of anchoritic sensor networks, where communications between sensor nodes is undesirable or infeasible, e.g., due to harsh environment, energy constraints, or security considerations.<|reference_end|> | arxiv | @article{baryshnikov2006localization,
title={Localization for Anchoritic Sensor Networks},
author={Yuliy Baryshnikov, Jian Tan},
journal={arXiv preprint arXiv:cs/0608014},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608014},
primaryClass={cs.NI cs.CG}
} | baryshnikov2006localization |
arxiv-674595 | cs/0608015 | Towards "Propagation = Logic + Control" | <|reference_start|>Towards "Propagation = Logic + Control": Constraint propagation algorithms implement logical inference. For efficiency, it is essential to control whether and in what order basic inference steps are taken. We provide a high-level framework that clearly differentiates between information needed for controlling propagation versus that needed for the logical semantics of complex constraints composed from primitive ones. We argue for the appropriateness of our controlled propagation framework by showing that it captures the underlying principles of manually designed propagation algorithms, such as literal watching for unit clause propagation and the lexicographic ordering constraint. We provide an implementation and benchmark results that demonstrate the practicality and efficiency of our framework.<|reference_end|> | arxiv | @article{brand2006towards,
title={Towards "Propagation = Logic + Control"},
author={Sebastian Brand, Roland H. C. Yap},
journal={arXiv preprint arXiv:cs/0608015},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608015},
primaryClass={cs.PL cs.AI}
} | brand2006towards |
arxiv-674596 | cs/0608016 | ACD Term Rewriting | <|reference_start|>ACD Term Rewriting: We introduce Associative Commutative Distributive Term Rewriting (ACDTR), a rewriting language for rewriting logical formulae. ACDTR extends AC term rewriting by adding distribution of conjunction over other operators. Conjunction is vital for expressive term rewriting systems since it allows us to require that multiple conditions hold for a term rewriting rule to be used. ACDTR uses the notion of a "conjunctive context", which is the conjunction of constraints that must hold in the context of a term, to enable the programmer to write very expressive and targeted rewriting rules. ACDTR can be seen as a general logic programming language that extends Constraint Handling Rules and AC term rewriting. In this paper we define the semantics of ACDTR and describe our prototype implementation.<|reference_end|> | arxiv | @article{duck2006acd,
title={ACD Term Rewriting},
author={Gregory J. Duck, Peter J. Stuckey, Sebastian Brand},
journal={arXiv preprint arXiv:cs/0608016},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608016},
primaryClass={cs.PL cs.SC}
} | duck2006acd |
arxiv-674597 | cs/0608017 | Infinite Qualitative Simulations by Means of Constraint Programming | <|reference_start|>Infinite Qualitative Simulations by Means of Constraint Programming: We introduce a constraint-based framework for studying infinite qualitative simulations concerned with contingencies such as time, space, shape, size, abstracted into a finite set of qualitative relations. To define the simulations, we combine constraints that formalize the background knowledge concerned with qualitative reasoning with appropriate inter-state constraints that are formulated using linear temporal logic. We implemented this approach in a constraint programming system by drawing on ideas from bounded model checking. The resulting system allows us to test and modify the problem specifications in a straightforward way and to combine various knowledge aspects.<|reference_end|> | arxiv | @article{apt2006infinite,
title={Infinite Qualitative Simulations by Means of Constraint Programming},
author={Krzysztof R. Apt, Sebastian Brand},
journal={arXiv preprint arXiv:cs/0608017},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608017},
primaryClass={cs.AI cs.LO}
} | apt2006infinite |
arxiv-674598 | cs/0608018 | The single-serving channel capacity | <|reference_start|>The single-serving channel capacity: In this paper we provide the answer to the following question: Given a noisy channel and epsilon>0, how many bits can be transmitted with an error of at most epsilon by a single use of the channel?<|reference_end|> | arxiv | @article{renner2006the,
title={The single-serving channel capacity},
author={Renato Renner and Stefan Wolf and Juerg Wullschleger},
journal={Proceedings of the 2006 IEEE International Symposium on
Information Theory (ISIT)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608018},
primaryClass={cs.IT math.IT}
} | renner2006the |
arxiv-674599 | cs/0608019 | Relation Variables in Qualitative Spatial Reasoning | <|reference_start|>Relation Variables in Qualitative Spatial Reasoning: We study an alternative to the prevailing approach to modelling qualitative spatial reasoning (QSR) problems as constraint satisfaction problems. In the standard approach, a relation between objects is a constraint whereas in the alternative approach it is a variable. The relation-variable approach greatly simplifies integration and implementation of QSR. To substantiate this point, we discuss several QSR algorithms from the literature which in the relation-variable approach reduce to the customary constraint propagation algorithm enforcing generalised arc-consistency.<|reference_end|> | arxiv | @article{brand2006relation,
title={Relation Variables in Qualitative Spatial Reasoning},
author={Sebastian Brand},
journal={arXiv preprint arXiv:cs/0608019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608019},
primaryClass={cs.AI}
} | brand2006relation |
arxiv-674600 | cs/0608020 | Quasi-friendly sup-interpretations | <|reference_start|>Quasi-friendly sup-interpretations: In a previous paper, the sup-interpretation method was proposed as a new tool to control memory resources of first order functional programs with pattern matching by static analysis. Basically, a sup-interpretation provides an upper bound on the size of function outputs. In this former work, a criterion, which can be applied to terminating as well as non-terminating programs, was developed in order to polynomially bound the stack frame size. In this paper, we suggest a new criterion which captures more algorithms computing values polynomially bounded in the size of the inputs. Since this work is related to quasi-interpretations, we compare the two notions, obtaining two main features. The first one is that, given a program, we have heuristics for finding a sup-interpretation when we consider polynomials of bounded degree. The other one consists in the characterization of the sets of functions computable in polynomial time and in polynomial space.<|reference_end|> | arxiv | @article{marion2006quasi-friendly,
title={Quasi-friendly sup-interpretations},
author={Jean-Yves Marion and Romain Pechoux},
journal={arXiv preprint arXiv:cs/0608020},
year={2006},
archivePrefix={arXiv},
eprint={cs/0608020},
primaryClass={cs.CC}
} | marion2006quasi-friendly |