corpus_id: stringlengths 7-12
paper_id: stringlengths 9-16
title: stringlengths 1-261
abstract: stringlengths 70-4.02k
source: stringclasses, 1 value
bibtex: stringlengths 208-20.9k
citation_key: stringlengths 6-100
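As a minimal sketch of how records with this schema might be read, the snippet below loads the rows and strips the <|reference_start|>/<|reference_end|> markers around each abstract. The dataset path passed to load_dataset is a placeholder, and the cleaning logic is an assumption based only on the field layout shown in the rows that follow; it is not part of the original listing.

```python
# Minimal sketch, assuming the rows below are published as a Hugging Face dataset.
# The path "user/arxiv-citation-corpus" is a placeholder, not the real dataset name.
from datasets import load_dataset

REF_START = "<|reference_start|>"
REF_END = "<|reference_end|>"

def clean_abstract(text: str) -> str:
    """Strip the <|reference_start|>/<|reference_end|> markers.

    The remaining body still begins with the paper title followed by a colon,
    as in the rows below; that prefix is left untouched here.
    """
    if text.startswith(REF_START):
        text = text[len(REF_START):]
    if text.endswith(REF_END):
        text = text[:-len(REF_END)]
    return text.strip()

ds = load_dataset("user/arxiv-citation-corpus", split="train")  # placeholder path
for row in ds.select(range(3)):  # peek at the first few records
    print(row["corpus_id"], row["paper_id"], row["citation_key"])
    print(clean_abstract(row["abstract"])[:100])
```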
arxiv-670301
cs/0111052
A comparison of Zeroes and Ones of a Boolean Polynomial
<|reference_start|>A comparison of Zeroes and Ones of a Boolean Polynomial: In this paper we consider the computational complexity of the following problem. Let $f$ be a Boolean polynomial. What value of $f$, 0 or 1, is taken more frequently? The problem is solved in polynomial time for polynomials of degrees 1,2. The next case of degree 3 appears to be PP-complete under polynomial reductions in the class of promise problems. The proof is based on techniques of quantum computation.<|reference_end|>
arxiv
@article{vyalyi2001a, title={A comparison of Zeroes and Ones of a Boolean Polynomial}, author={M. N. Vyalyi}, journal={arXiv preprint arXiv:cs/0111052}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111052}, primaryClass={cs.CC} }
vyalyi2001a
arxiv-670302
cs/0111053
Meaningful Information
<|reference_start|>Meaningful Information: The information in an individual finite object (like a binary string) is commonly measured by its Kolmogorov complexity. One can divide that information into two parts: the information accounting for the useful regularity present in the object and the information accounting for the remaining accidental information. There can be several ways (model classes) in which the regularity is expressed. Kolmogorov has proposed the model class of finite sets, generalized later to computable probability mass functions. The resulting theory, known as Algorithmic Statistics, analyzes the algorithmic sufficient statistic when the statistic is restricted to the given model class. However, the most general way to proceed is perhaps to express the useful information as a recursive function. The resulting measure has been called the ``sophistication'' of the object. We develop the theory of recursive functions statistic, the maximum and minimum value, the existence of absolutely nonstochastic objects (that have maximal sophistication--all the information in them is meaningful and there is no residual randomness), determine its relation with the more restricted model classes of finite sets, and computable probability distributions, in particular with respect to the algorithmic (Kolmogorov) minimal sufficient statistic, the relation to the halting problem and further algorithmic properties.<|reference_end|>
arxiv
@article{vitanyi2001meaningful, title={Meaningful Information}, author={Paul Vitanyi (CWI and University of Amsterdam)}, journal={IEEE Trans. Inform. Th., 52:10(2006), 4617--4626}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111053}, primaryClass={cs.CC math-ph math.MP math.PR physics.data-an} }
vitanyi2001meaningful
arxiv-670303
cs/0111054
The similarity metric
<|reference_start|>The similarity metric: A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new ``normalized information distance'', based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and it minorizes every computable distance in the class (that is, it is universal in that it discovers all computable similarities). We demonstrate that it is a metric and call it the {\em similarity metric}. This theory forms the foundation for a new practical tool. To evidence generality and robustness we give two distinctive applications in widely divergent areas using standard compression programs like gzip and GenCompress. First, we compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we fully automatically compute the language tree of 52 different languages.<|reference_end|>
arxiv
@article{li2001the, title={The similarity metric}, author={Ming Li (Univ. of Waterloo and BioInformatics Solutions Inc.), Xin Chen (Univ. California, Santa Barbara), Xin Li (Univ. Western Ontario), Bin Ma (Univ. Western Ontario), Paul Vitanyi (CWI and Univ. of Amsterdam)}, journal={arXiv preprint arXiv:cs/0111054}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111054}, primaryClass={cs.CC cond-mat.stat-mech cs.CE cs.CV math.CO math.MG math.ST physics.data-an q-bio.GN stat.TH} }
li2001the
arxiv-670304
cs/0111055
Overview of the NSTX Control System
<|reference_start|>Overview of the NSTX Control System: The National Spherical Torus Experiment (NSTX) is an innovative magnetic fusion device that was constructed by the Princeton Plasma Physics Laboratory (PPPL) in collaboration with the Oak Ridge National Laboratory, Columbia University, and the University of Washington at Seattle. Since achieving first plasma in 1999, the device has been used for fusion research through an international collaboration of over twenty institutions. The NSTX is operated through a collection of control systems that encompass a wide range of technology, from hardwired relay controls to real-time control systems with giga-FLOPS of capability. This paper presents a broad introduction to the control systems used on NSTX, with an emphasis on the computing controls, data acquisition, and synchronization systems.<|reference_end|>
arxiv
@article{sichta2001overview, title={Overview of the NSTX Control System}, author={P. Sichta, J. Dong, G. Oliaro, P. Roney}, journal={eConf C011127 (2001) TUBT004}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111055}, primaryClass={cs.OH} }
sichta2001overview
arxiv-670305
cs/0111056
Some Facets of Complexity Theory and Cryptography: A Five-Lectures Tutorial
<|reference_start|>Some Facets of Complexity Theory and Cryptography: A Five-Lectures Tutorial: In this tutorial, selected topics of cryptology and of computational complexity theory are presented. We give a brief overview of the history and the foundations of classical cryptography, and then move on to modern public-key cryptography. Particular attention is paid to cryptographic protocols and the problem of constructing the key components of such protocols such as one-way functions. A function is one-way if it is easy to compute, but hard to invert. We discuss the notion of one-way functions both in a cryptographic and in a complexity-theoretic setting. We also consider interactive proof systems and present some interesting zero-knowledge protocols. In a zero-knowledge protocol one party can convince the other party of knowing some secret information without disclosing any bit of this information. Motivated by these protocols, we survey some complexity-theoretic results on interactive proof systems and related complexity classes.<|reference_end|>
arxiv
@article{rothe2001some, title={Some Facets of Complexity Theory and Cryptography: A Five-Lectures Tutorial}, author={J\"org Rothe}, journal={ACM Computing Surveys, volume 34, issue 4, pp. 504--549, December 2002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111056}, primaryClass={cs.CC cs.CR} }
rothe2001some
arxiv-670306
cs/0111057
Towards a characterization of the star-free sets of integers
<|reference_start|>Towards a characterization of the star-free sets of integers: Let U be a numeration system, a set X of integers is U-star-free if the set made up of the U-representations of the elements in X is a star-free regular language. Answering a question of A. de Luca and A. Restivo, we obtain a complete logical characterization of the U-star-free sets of integers for suitable numeration systems related to a Pisot number and in particular for integer base systems. For these latter systems, we study as well the problem of the base dependence. Finally, the case of k-adic systems is also investigated.<|reference_end|>
arxiv
@article{rigo2001towards, title={Towards a characterization of the star-free sets of integers}, author={Michel Rigo}, journal={arXiv preprint arXiv:cs/0111057}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111057}, primaryClass={cs.CC cs.LO} }
rigo2001towards
arxiv-670307
cs/0111058
Bayesian Logic Programs
<|reference_start|>Bayesian Logic Programs: Bayesian networks provide an elegant formalism for representing and reasoning about uncertainty using probability theory. They are a probabilistic extension of propositional logic and, hence, inherit some of the limitations of propositional logic, such as the difficulty of representing objects and relations. We introduce a generalization of Bayesian networks, called Bayesian logic programs, to overcome these limitations. In order to represent objects and relations it combines Bayesian networks with definite clause logic by establishing a one-to-one mapping between ground atoms and random variables. We show that Bayesian logic programs combine the advantages of both definite clause logic and Bayesian networks. This includes the separation of quantitative and qualitative aspects of the model. Furthermore, Bayesian logic programs generalize both Bayesian networks as well as logic programs. So, many ideas developed<|reference_end|>
arxiv
@article{kersting2001bayesian, title={Bayesian Logic Programs}, author={Kristian Kersting (1), Luc De Raedt (1) ((1) Institute of Computer Science, Albert-Ludwigs-University Freiburg, Germany)}, journal={arXiv preprint arXiv:cs/0111058}, year={2001}, number={151}, archivePrefix={arXiv}, eprint={cs/0111058}, primaryClass={cs.AI cs.LO} }
kersting2001bayesian
arxiv-670308
cs/0111059
Hypotheses Founded Semantics of Logic Programs for Information Integration in Multi-Valued Logics
<|reference_start|>Hypotheses Founded Semantics of Logic Programs for Information Integration in Multi-Valued Logics: We address the problem of integrating information coming from different sources. The information consists of facts that a central server collects and tries to combine using (a) a set of logical rules, i.e. a logic program, and (b) a hypothesis representing the server's own estimates. In such a setting incomplete information from a source or contradictory information from different sources necessitate the use of many-valued logics in which programs can be evaluated and hypotheses can be tested. To carry out such activities we propose a formal framework based on bilattices such as Belnap's four-valued logics. In this framework we work with the class of programs defined by Fitting and we develop a theory for information integration. We also establish an intuitively appealing connection between our hypothesis testing mechanism on the one hand, and the well-founded semantics and Kripke-Kleene semantics of Datalog programs with negation, on the other hand.<|reference_end|>
arxiv
@article{loyer2001hypotheses, title={Hypotheses Founded Semantics of Logic Programs for Information Integration in Multi-Valued Logics}, author={Yann Loyer, Nicolas Spyratos, Daniel Stamate}, journal={arXiv preprint arXiv:cs/0111059}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111059}, primaryClass={cs.LO} }
loyer2001hypotheses
arxiv-670309
cs/0111060
Gradient-based Reinforcement Planning in Policy-Search Methods
<|reference_start|>Gradient-based Reinforcement Planning in Policy-Search Methods: We introduce a learning method called ``gradient-based reinforcement planning'' (GREP). Unlike traditional DP methods that improve their policy backwards in time, GREP is a gradient-based method that plans ahead and improves its policy before it actually acts in the environment. We derive formulas for the exact policy gradient that maximizes the expected future reward and confirm our ideas with numerical experiments.<|reference_end|>
arxiv
@article{kwee2001gradient-based, title={Gradient-based Reinforcement Planning in Policy-Search Methods}, author={Ivo Kwee, Marcus Hutter, Juergen Schmidhuber}, journal={arXiv preprint arXiv:cs/0111060}, year={2001}, number={14-01}, archivePrefix={arXiv}, eprint={cs/0111060}, primaryClass={cs.AI} }
kwee2001gradient-based
arxiv-670310
cs/0111061
On a Special Case of the Generalized Neighbourhood Problem
<|reference_start|>On a Special Case of the Generalized Neighbourhood Problem: For a given finite class of finite graphs H, a graph G is called a realization of H if the neighbourhood of any of its vertices induces a subgraph isomorphic to a graph of H. We consider the following problem known as the Generalized Neighbourhood Problem (GNP): given a finite class of finite graphs H, does there exist a non-empty graph G that is a realization of H? In fact, there are two modifications of that problem, namely the finite one (the existence of a finite realization is required) and the infinite one (the realization is required to be infinite). In this paper we show that GNP and its modifications for all finite classes H of finite graphs are reduced to the same problems with an additional restriction on H. Namely, the orders of any two graphs of H are equal and every graph of H has exactly s dominating vertices.<|reference_end|>
arxiv
@article{naidenko2001on, title={On a Special Case of the Generalized Neighbourhood Problem}, author={V. Naidenko, Yu. Orlovich}, journal={arXiv preprint arXiv:cs/0111061}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111061}, primaryClass={cs.DM} }
naidenko2001on
arxiv-670311
cs/0111062
One-way communication complexity and the Neciporuk lower bound on formula size
<|reference_start|>One-way communication complexity and the Neciporuk lower bound on formula size: In this paper the Neciporuk method for proving lower bounds on the size of Boolean formulae is reformulated in terms of one-way communication complexity. We investigate the scenarios of probabilistic formulae, nondeterministic formulae, and quantum formulae. In all cases we can use results about one-way communication complexity to prove lower bounds on formula size. In the latter two cases we newly develop the employed communication complexity bounds. The main results regarding formula size are as follows: A polynomial size gap between probabilistic/quantum and deterministic formulae. A near-quadratic size gap for nondeterministic formulae with limited access to nondeterministic bits. A near quadratic lower bound on quantum formula size, as well as a polynomial separation between the sizes of quantum formulae with and without multiple read random inputs. The methods for quantum and probabilistic formulae employ a variant of the Neciporuk bound in terms of the VC-dimension. Regarding communication complexity we give optimal separations between one-way and two-way protocols in the cases of limited nondeterministic and quantum communication, and we show that zero-error quantum one-way communication complexity asymptotically equals deterministic one-way communication complexity for total functions.<|reference_end|>
arxiv
@article{klauck2001one-way, title={One-way communication complexity and the Neciporuk lower bound on formula size}, author={Hartmut Klauck}, journal={arXiv preprint arXiv:cs/0111062}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111062}, primaryClass={cs.CC quant-ph} }
klauck2001one-way
arxiv-670312
cs/0111063
New RBF collocation methods and kernel RBF with applications
<|reference_start|>New RBF collocation methods and kernel RBF with applications: A few novel radial basis function (RBF) discretization schemes for partial differential equations are developed in this study. For boundary-type methods, we derive the indirect and direct symmetric boundary knot methods. Based on the multiple reciprocity principle, the boundary particle method is introduced for general inhomogeneous problems without using inner nodes. For domain-type schemes, by using the Green integral we develop a novel Hermite RBF scheme called the modified Kansa method, which significantly reduces calculation errors at close-to-boundary nodes. To avoid Gibbs phenomenon, we present the least square RBF collocation scheme. Finally, five types of the kernel RBF are also briefly presented.<|reference_end|>
arxiv
@article{chen2001new, title={New RBF collocation methods and kernel RBF with applications}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0111063}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111063}, primaryClass={cs.NA cs.CE} }
chen2001new
arxiv-670313
cs/0111064
A procedure for unsupervised lexicon learning
<|reference_start|>A procedure for unsupervised lexicon learning: We describe an incremental unsupervised procedure to learn words from transcribed continuous speech. The algorithm is based on a conservative and traditional statistical model, and results of empirical tests show that it is competitive with other algorithms that have been proposed recently for this task.<|reference_end|>
arxiv
@article{venkataraman2001a, title={A procedure for unsupervised lexicon learning}, author={Anand Venkataraman}, journal={Proceedings of the eighteenth international conference on machine learning, ICML-01, pp.569--576, 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111064}, primaryClass={cs.CL} }
venkataraman2001a
arxiv-670314
cs/0111065
A Statistical Model for Word Discovery in Transcribed Speech
<|reference_start|>A Statistical Model for Word Discovery in Transcribed Speech: A statistical model for segmentation and word discovery in continuous speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described. Results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.<|reference_end|>
arxiv
@article{venkataraman2001a, title={A Statistical Model for Word Discovery in Transcribed Speech}, author={Anand Venkataraman}, journal={Computational Linguistics, 27(3), pp.352--372, 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111065}, primaryClass={cs.CL} }
venkataraman2001a
arxiv-670315
cs/0112001
An Average Case NP-Complete Graph Coloring Problem
<|reference_start|>An Average Case NP-Complete Graph Coloring Problem: NP-complete problems should be hard on some instances but those may be extremely rare. On generic instances many such problems, especially those related to random graphs, have been proven easy. We show the intractability of random instances of a graph coloring problem: this graph problem is hard on average unless all NP problems under all samplable (i.e., generatable in polynomial time) distributions are easy. Worst case reductions use special gadgets and typically map instances into a negligible fraction of possible outputs. Ours must output nearly random graphs and avoid any super-polynomial distortion of probabilities.<|reference_end|>
arxiv
@article{levin2001an, title={An Average Case NP-Complete Graph Coloring Problem}, author={Leonid A. Levin, Ramarathnam Venkatesan}, journal={Combinatorics, Probability and Computing, 27(5):808-828, April 2, 2018}, year={2001}, doi={10.1017/S0963548318000123}, archivePrefix={arXiv}, eprint={cs/0112001}, primaryClass={cs.CC} }
levin2001an
arxiv-670316
cs/0112002
Program schemes with binary write-once arrays and the complexity classes they capture
<|reference_start|>Program schemes with binary write-once arrays and the complexity classes they capture: We study a class of program schemes, NPSB, in which, aside from basic assignments, non-deterministic guessing and while loops, we have access to arrays; but where these arrays are binary write-once in that they are initialized to `zero' and can only ever be set to `one'. We show, amongst other results, that: NPSB can be realized as a vectorized Lindstrom logic; there are problems accepted by program schemes of NPSB that are not definable in the bounded-variable infinitary logic ${\cal L}^\omega_{\infty\omega}$; all problems accepted by the program schemes of NPSB have a zero-one law; and on ordered structures, NPSB captures the complexity class $\mathrm{L}^{\mathrm{NP}}$. The class of program schemes NPSB is actually the union of an infinite hierarchy of classes of program schemes. When we amend the semantics of our program schemes slightly, we find that the classes of the resulting hierarchy capture the complexity classes $\Sigma^p_i$ (where $i\geq 1$) of the Polynomial Hierarchy PH. Finally, we give logical equivalences of the complexity-theoretic question `Does NP equal PSPACE?' where the logics (and classes of program schemes) involved define only problems with zero-one laws (and so do not define some computationally trivial problems).<|reference_end|>
arxiv
@article{stewart2001program, title={Program schemes with binary write-once arrays and the complexity classes they capture}, author={Iain A. Stewart}, journal={arXiv preprint arXiv:cs/0112002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112002}, primaryClass={cs.LO cs.CC} }
stewart2001program
arxiv-670317
cs/0112003
Using a Support-Vector Machine for Japanese-to-English Translation of Tense, Aspect, and Modality
<|reference_start|>Using a Support-Vector Machine for Japanese-to-English Translation of Tense, Aspect, and Modality: This paper describes experiments carried out using a variety of machine-learning methods, including the k-nearest neighborhood method that was used in a previous study, for the translation of tense, aspect, and modality. It was found that the support-vector machine method was the most precise of all the methods tested.<|reference_end|>
arxiv
@article{murata2001using, title={Using a Support-Vector Machine for Japanese-to-English Translation of Tense, Aspect, and Modality}, author={Masaki Murata, Kiyotaka Uchimoto, Qing Ma, and Hitoshi Isahara}, journal={ACL Workshop, the Data-Driven Machine Translation, 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112003}, primaryClass={cs.CL} }
murata2001using
arxiv-670318
cs/0112004
Part of Speech Tagging in Thai Language Using Support Vector Machine
<|reference_start|>Part of Speech Tagging in Thai Language Using Support Vector Machine: The elastic-input neuro tagger and hybrid tagger, combined with a neural network and Brill's error-driven learning, have already been proposed for the purpose of constructing a practical tagger using as little training data as possible. When a small Thai corpus is used for training, these taggers have tagging accuracies of 94.4% and 95.5% (accounting only for the ambiguous words in terms of the part of speech), respectively. In this study, in order to construct more accurate taggers we developed new tagging methods using three machine learning methods: the decision-list, maximum entropy, and support vector machine methods. We then performed tagging experiments by using these methods. Our results showed that the support vector machine method has the best precision (96.1%), and that it is capable of improving the accuracy of tagging in the Thai language. Finally, we theoretically examined all these methods and discussed how the improvements were achieved.<|reference_end|>
arxiv
@article{murata2001part, title={Part of Speech Tagging in Thai Language Using Support Vector Machine}, author={Masaki Murata, Qing Ma, and Hitoshi Isahara}, journal={NLPRS'2001 Workshop, the Second Workshop on Natural Language Processing and Neural Networks (NLPNN2001)}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112004}, primaryClass={cs.CL} }
murata2001part
arxiv-670319
cs/0112005
Universal Model for Paraphrasing -- Using Transformation Based on a Defined Criteria --
<|reference_start|>Universal Model for Paraphrasing -- Using Transformation Based on a Defined Criteria --: This paper describes a universal model for paraphrasing that transforms according to defined criteria. We showed that by using different criteria we could construct different kinds of paraphrasing systems including one for answering questions, one for compressing sentences, one for polishing up, and one for transforming written language to spoken language.<|reference_end|>
arxiv
@article{murata2001universal, title={Universal Model for Paraphrasing -- Using Transformation Based on a Defined Criteria --}, author={Masaki Murata and Hitoshi Isahara}, journal={NLPRS'2001, Workshop on Automatic Paraphrasing: Theories and Applications}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112005}, primaryClass={cs.CL} }
murata2001universal
arxiv-670320
cs/0112006
A Logic Programming Approach to Knowledge-State Planning: Semantics and Complexity
<|reference_start|>A Logic Programming Approach to Knowledge-State Planning: Semantics and Complexity: We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well-suited for planning under incomplete knowledge. Furthermore, it enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLV^K system, which implements the language K on top of the DLV logic programming system.<|reference_end|>
arxiv
@article{eiter2001a, title={A Logic Programming Approach to Knowledge-State Planning: Semantics and Complexity}, author={Thomas Eiter, Wolfgang Faber, Nicola Leone, Gerald Pfeifer, Axel Polleres}, journal={Artificial Intelligence 144:157-211, 2003}, year={2001}, doi={10.1016/S0004-3702(02)00367-3}, archivePrefix={arXiv}, eprint={cs/0112006}, primaryClass={cs.AI cs.LO} }
eiter2001a
arxiv-670321
cs/0112007
A Tight Upper Bound on the Number of Candidate Patterns
<|reference_start|>A Tight Upper Bound on the Number of Candidate Patterns: In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.<|reference_end|>
arxiv
@article{geerts2001a, title={A Tight Upper Bound on the Number of Candidate Patterns}, author={Floris Geerts, Bart Goethals and Jan Van den Bussche}, journal={arXiv preprint arXiv:cs/0112007}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112007}, primaryClass={cs.DB cs.AI} }
geerts2001a
arxiv-670322
cs/0112008
Representation of Uncertainty for Limit Processes
<|reference_start|>Representation of Uncertainty for Limit Processes: Many mathematical models utilize limit processes. Continuous functions and the calculus, differential equations and topology, all are based on limits and continuity. However, when we perform measurements and computations, we can achieve only approximate results. In some cases, this discrepancy between theoretical schemes and practical actions drastically changes the outcomes of research and decision-making, resulting in uncertainty of knowledge. In the paper, a mathematical approach to this kind of uncertainty, which emerges in computation and measurement, is suggested on the basis of the concept of a fuzzy limit. A mathematical technique is developed for differential models with uncertainty. To take into account the intrinsic uncertainty of a model, it is suggested to use fuzzy derivatives instead of conventional derivatives of functions in this model.<|reference_end|>
arxiv
@article{burgin2001representation, title={Representation of Uncertainty for Limit Processes}, author={Mark Burgin}, journal={arXiv preprint arXiv:cs/0112008}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112008}, primaryClass={cs.AI cs.NA} }
burgin2001representation
arxiv-670323
cs/0112009
DNA Self-Assembly For Constructing 3D Boxes
<|reference_start|>DNA Self-Assembly For Constructing 3D Boxes: We propose a mathematical model of DNA self-assembly using 2D tiles to form 3D nanostructures. This is the first work to combine studies in self-assembly and nanotechnology in 3D, just as Rothemund and Winfree did in the 2D case. Our model is a more precise superset of their Tile Assembly Model that facilitates building scalable 3D molecules. Under our model, we present algorithms to build a hollow cube, which is intuitively one of the simplest 3D structures to construct. We also introduce five basic measures of complexity to analyze these algorithms. Our model and algorithmic techniques are applicable to more complex 2D and 3D nanostructures.<|reference_end|>
arxiv
@article{kao2001dna, title={DNA Self-Assembly For Constructing 3D Boxes}, author={Ming-Yang Kao and Vijay Ramachandran}, journal={Algorithms and Computation, 12th International Symposium, ISAAC 2001 Proceedings. Springer LNCS 2223 (2001): 429-440}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112009}, primaryClass={cs.CC cs.CE} }
kao2001dna
arxiv-670324
cs/0112010
A Straightforward Approach to Morphological Analysis and Synthesis
<|reference_start|>A Straightforward Approach to Morphological Analysis and Synthesis: In this paper we present a lexicon-based approach to the problem of morphological processing. Full-form words, lemmas and grammatical tags are interconnected in a DAWG. Thus, the process of analysis/synthesis is reduced to a search in the graph, which is very fast and can be performed even if several pieces of information are missing from the input. The contents of the DAWG are updated using an on-line incremental process. The proposed approach is language independent and it does not utilize any morphophonetic rules or any other special linguistic information.<|reference_end|>
arxiv
@article{sgarbas2001a, title={A Straightforward Approach to Morphological Analysis and Synthesis}, author={Kyriakos N. Sgarbas, Nikos D. Fakotakis, George K. Kokkinakis}, journal={Proc. COMLEX 2000, Workshop on Computational Lexicography and Multimedia Dictionaries, pp.31-34, Kato Achaia, Greece, 22-23 September 2000.}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112010}, primaryClass={cs.CL cs.DS} }
sgarbas2001a
arxiv-670325
cs/0112011
Interactive Constrained Association Rule Mining
<|reference_start|>Interactive Constrained Association Rule Mining: We investigate ways to support interactive mining sessions, in the setting of association rule mining. In such sessions, users specify conditions (queries) on the associations to be generated. Our approach is a combination of the integration of querying conditions inside the mining phase, and the incremental querying of already generated associations. We present several concrete algorithms and compare their performance.<|reference_end|>
arxiv
@article{goethals2001interactive, title={Interactive Constrained Association Rule Mining}, author={Bart Goethals, Jan Van den Bussche}, journal={arXiv preprint arXiv:cs/0112011}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112011}, primaryClass={cs.DB cs.AI} }
goethals2001interactive
arxiv-670326
cs/0112012
Computing the average parallelism in trace monoids
<|reference_start|>Computing the average parallelism in trace monoids: The {\em height} of a trace is the height of the corresponding heap of pieces in Viennot's representation, or equivalently the number of factors in its Cartier-Foata decomposition. Let $h(t)$ and $|t|$ stand respectively for the height and the length of a trace $t$. Roughly speaking, $|t|$ is the `sequential' execution time and $h(t)$ is the `parallel' execution time. We prove that the bivariate commutative series $\sum_t x^{h(t)}y^{|t|}$ is rational, and we give a finite representation of it. We use the rationality to obtain precise information on the asymptotics of the number of traces of a given height or length. Then, we study the average height of a trace for various probability distributions on traces. For the uniform probability distribution on traces of the same length (resp. of the same height), the asymptotic average height (resp. length) exists and is an algebraic number. To illustrate our results and methods, we consider a couple of examples: the free commutative monoid and the trace monoid whose independence graph is the ladder graph.<|reference_end|>
arxiv
@article{krob2001computing, title={Computing the average parallelism in trace monoids}, author={Daniel Krob, Jean Mairesse, Ioannis Michos}, journal={arXiv preprint arXiv:cs/0112012}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112012}, primaryClass={cs.DM cs.DC} }
krob2001computing
arxiv-670327
cs/0112013
A Data Mining Framework for Optimal Product Selection in Retail Supermarket Data: The Generalized PROFSET Model
<|reference_start|>A Data Mining Framework for Optimal Product Selection in Retail Supermarket Data: The Generalized PROFSET Model: In recent years, data mining researchers have developed efficient association rule algorithms for retail market basket analysis. Still, retailers often complain about how to adopt association rules to optimize concrete retail marketing-mix decisions. It is in this context that, in a previous paper, the authors have introduced a product selection model called PROFSET. This model selects the most interesting products from a product assortment based on their cross-selling potential given some retailer defined constraints. However this model suffered from an important deficiency: it could not deal effectively with supermarket data, and no provisions were taken to include retail category management principles. Therefore, in this paper, the authors present an important generalization of the existing model in order to make it suitable for supermarket data as well, and to enable retailers to add category restrictions to the model. Experiments on real world data obtained from a Belgian supermarket chain produce very promising results and demonstrate the effectiveness of the generalized PROFSET model.<|reference_end|>
arxiv
@article{brijs2001a, title={A Data Mining Framework for Optimal Product Selection in Retail Supermarket Data: The Generalized PROFSET Model}, author={Tom Brijs, Bart Goethals, Gilbert Swinnen, Koen Vanhoof, Geert Wets}, journal={arXiv preprint arXiv:cs/0112013}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112013}, primaryClass={cs.DB cs.AI} }
brijs2001a
arxiv-670328
cs/0112014
Guaranteeing the diversity of number generators
<|reference_start|>Guaranteeing the diversity of number generators: A major problem in using iterative number generators of the form x_i=f(x_{i-1}) is that they can enter unexpectedly short cycles. This is hard to analyze when the generator is designed, hard to detect in real time when the generator is used, and can have devastating cryptanalytic implications. In this paper we define a measure of security, called _sequence diversity_, which generalizes the notion of cycle-length for non-iterative generators. We then introduce the class of counter assisted generators, and show how to turn any iterative generator (even a bad one designed or seeded by an adversary) into a counter assisted generator with a provably high diversity, without reducing the quality of generators which are already cryptographically strong.<|reference_end|>
arxiv
@article{shamir2001guaranteeing, title={Guaranteeing the diversity of number generators}, author={Adi Shamir and Boaz Tsaban}, journal={Information and Computation 171 (2001), 350--363}, year={2001}, doi={10.1006/inco.2001.3045}, archivePrefix={arXiv}, eprint={cs/0112014}, primaryClass={cs.CR cs.CC math.CO} }
shamir2001guaranteeing
arxiv-670329
cs/0112015
Rational Competitive Analysis
<|reference_start|>Rational Competitive Analysis: Much work in computer science has adopted competitive analysis as a tool for decision making under uncertainty. In this work we extend competitive analysis to the context of multi-agent systems. Unlike classical competitive analysis where the behavior of an agent's environment is taken to be arbitrary, we consider the case where an agent's environment consists of other agents. These agents will usually obey some (minimal) rationality constraints. This leads to the definition of rational competitive analysis. We introduce the concept of rational competitive analysis, and initiate the study of competitive analysis for multi-agent systems. We also discuss the application of rational competitive analysis to the context of bidding games, as well as to the classical one-way trading problem.<|reference_end|>
arxiv
@article{tennenholtz2001rational, title={Rational Competitive Analysis}, author={Moshe Tennenholtz}, journal={Proceedings of IJCAI-2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112015}, primaryClass={cs.AI} }
tennenholtz2001rational
arxiv-670330
cs/0112016
Pseudorandom permutations with the fast forward property
<|reference_start|>Pseudorandom permutations with the fast forward property: This paper has been withdrawn by the author(s), due to the existence of a much better paper in http://arxiv.org/abs/cs.CR/0207027<|reference_end|>
arxiv
@article{tsaban2001pseudorandom, title={Pseudorandom permutations with the fast forward property}, author={Boaz Tsaban}, journal={arXiv preprint arXiv:cs/0112016}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112016}, primaryClass={cs.CR cs.CC} }
tsaban2001pseudorandom
arxiv-670331
cs/0112017
Using Structural Metadata to Localize Experience of Digital Content
<|reference_start|>Using Structural Metadata to Localize Experience of Digital Content: With the increasing technical sophistication of both information consumers and providers, there is increasing demand for more meaningful experiences of digital information. We present a framework that separates digital object experience, or rendering, from digital object storage and manipulation, so the rendering can be tailored to particular communities of users. Our framework also accommodates extensible digital object behaviors and interoperability. The two key components of our approach are 1) exposing structural metadata associated with digital objects -- metadata about the labeled access points within a digital object and 2) information intermediaries called context brokers that match structural characteristics of digital objects with mechanisms that produce behaviors. These context brokers allow for localized rendering of digital information stored externally.<|reference_end|>
arxiv
@article{dushay2001using, title={Using Structural Metadata to Localize Experience of Digital Content}, author={Naomi Dushay}, journal={arXiv preprint arXiv:cs/0112017}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112017}, primaryClass={cs.DL} }
dushay2001using
arxiv-670332
cs/0112018
Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication
<|reference_start|>Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication: In 1975, Valiant showed that Boolean matrix multiplication can be used for parsing context-free grammars (CFGs), yielding the asymptotically fastest (although not practical) CFG parsing algorithm known. We prove a dual result: any CFG parser with time complexity $O(g n^{3 - \epsilon})$, where $g$ is the size of the grammar and $n$ is the length of the input string, can be efficiently converted into an algorithm to multiply $m \times m$ Boolean matrices in time $O(m^{3 - \epsilon/3})$. Given that practical, substantially sub-cubic Boolean matrix multiplication algorithms have been quite difficult to find, we thus explain why there has been little progress in developing practical, substantially sub-cubic general CFG parsers. In proving this result, we also develop a formalization of the notion of parsing.<|reference_end|>
arxiv
@article{lee2001fast, title={Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication}, author={Lillian Lee}, journal={Journal of the ACM 49(1), pp. 1--15, January 2002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112018}, primaryClass={cs.CL cs.DS} }
lee2001fast
arxiv-670333
cs/0112019
Distribution of Mutual Information
<|reference_start|>Distribution of Mutual Information: The mutual information of two random variables i and j with joint probabilities t_ij is commonly used in learning Bayesian nets as well as in many other fields. The chances t_ij are usually estimated by the empirical sampling frequency n_ij/n leading to a point estimate I(n_ij/n) for the mutual information. To answer questions like "is I(n_ij/n) consistent with zero?" or "what is the probability that the true mutual information is much larger than the point estimate?" one has to go beyond the point estimate. In the Bayesian framework one can answer these questions by utilizing a (second order) prior distribution p(t) comprising prior information about t. From the prior p(t) one can compute the posterior p(t|n), from which the distribution p(I|n) of the mutual information can be calculated. We derive reliable and quickly computable approximations for p(I|n). We concentrate on the mean, variance, skewness, and kurtosis, and non-informative priors. For the mean we also give an exact expression. Numerical issues and the range of validity are discussed.<|reference_end|>
arxiv
@article{hutter2001distribution, title={Distribution of Mutual Information}, author={Marcus Hutter}, journal={Advances in Neural Information Processing Systems 14 (NIPS-2001) pages 399-406}, year={2001}, number={IDSIA-13-01}, archivePrefix={arXiv}, eprint={cs/0112019}, primaryClass={cs.AI cs.IT math.IT math.ST stat.TH} }
hutter2001distribution
arxiv-670334
cs/0112020
Concurrent computing machines and physical space-time
<|reference_start|>Concurrent computing machines and physical space-time: Concrete computing machines, either sequential or concurrent, rely on an intimate relation between computation and time. We recall the general characteristic properties of physical time and of present realizations of computing systems. We emphasize the role of computing interferences, i.e. the necessity to avoid them in order to give a causal implementation to logical operations. We compare synchronous and asynchronous systems, and make a brief survey of some methods used to deal with computing interferences. Using a graphic representation, we show that synchronous and asynchronous circuits reflect the same opposition as the Newtonian and relativistic causal structures for physical space-time.<|reference_end|>
arxiv
@article{matherat2001concurrent, title={Concurrent computing machines and physical space-time}, author={Philippe Matherat and Marc-Thierry Jaekel}, journal={M.S.C.S., Cambridge University Press, 13 (2003) 771}, year={2001}, number={LPTENS 01/05}, archivePrefix={arXiv}, eprint={cs/0112020}, primaryClass={cs.DC cs.LO gr-qc} }
matherat2001concurrent
arxiv-670335
cs/0112021
Exact Complexity of the Winner Problem for Young Elections
<|reference_start|>Exact Complexity of the Winner Problem for Young Elections: In 1977, Young proposed a voting scheme that extends the Condorcet Principle based on the fewest possible number of voters whose removal yields a Condorcet winner. We prove that both the winner and the ranking problem for Young elections are complete for the class of problems solvable in polynomial time by parallel access to NP. Analogous results for Lewis Carroll's 1876 voting scheme were recently established by Hemaspaandra et al. In contrast, we prove that the winner and ranking problems in Fishburn's homogeneous variant of Carroll's voting scheme can be solved efficiently by linear programming.<|reference_end|>
arxiv
@article{rothe2001exact, title={Exact Complexity of the Winner Problem for Young Elections}, author={J\"org Rothe, Holger Spakowski and J\"org Vogel}, journal={arXiv preprint arXiv:cs/0112021}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112021}, primaryClass={cs.CC} }
rothe2001exact
arxiv-670336
cs/0112022
Faster Algorithm of String Comparison
<|reference_start|>Faster Algorithm of String Comparison: In many applications, it is necessary to determine string similarity. The edit distance [WF74] approach is a classic method to determine Field Similarity. A well-known dynamic programming algorithm [GUS97] is used to calculate the edit distance with time complexity O(nm) (for the worst, average and even best case). Instead of continuing to improve the edit distance approach, [LL+99] adopted a brand new token-based approach. Its new token-based concept, which retains the original semantic information, its good time complexity of O(nm) (for the worst, average and best case) and its good experimental performance make it a milestone paper in this area. Further study indicates that there is still room for improvement of its Field Similarity algorithm. Our paper introduces a package of new substring-based algorithms to determine Field Similarity. Combined together, our new algorithms not only achieve higher accuracy but also attain time complexity O(knm) (k<0.75) for the worst case, O(*n) where <6 for the average case, and O(1) for the best case. Throughout the paper, we use comparative examples to show the higher accuracy of our algorithms compared to the one proposed in [LL+99]. Theoretical analysis, concrete examples and experimental results show that our algorithms can significantly improve the accuracy and time complexity of the calculation of Field Similarity. [GUS97] D. Gusfield. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. [LL+99] Mong Li Lee, Cleansing data for mining and warehousing, In Proceedings of the 10th International Conference on Database and Expert Systems Applications (DEXA99), pages 751-760, August 1999. [WF74] R. Wagner and M. Fisher, The String to String Correction Problem, JACM 21, pages 168-173, 1974.<|reference_end|>
arxiv
@article{yang2001faster, title={Faster Algorithm of String Comparison}, author={Qi Xiao Yang, Sung Sam Yuan, Lu Chun, Li Zhao and Sun Peng}, journal={arXiv preprint arXiv:cs/0112022}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112022}, primaryClass={cs.DS} }
yang2001faster
arxiv-670337
cs/0112023
Lower Bound on the Chromatic Number by Spectra of Weighted Adjacency Matrices
<|reference_start|>Lower Bound on the Chromatic Number by Spectra of Weighted Adjacency Matrices: A lower bound on the chromatic number of a graph is derived by majorization of spectra of weighted adjacency matrices. These matrices are given by Hadamard products of the adjacency matrix and arbitrary Hermitian matrices.<|reference_end|>
arxiv
@article{wocjan2001lower, title={Lower Bound on the Chromatic Number by Spectra of Weighted Adjacency Matrices}, author={Pawel Wocjan, Dominik Janzing, and Thomas Beth (Universitaet Karlsruhe)}, journal={arXiv preprint arXiv:cs/0112023}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112023}, primaryClass={cs.DM quant-ph} }
wocjan2001lower
arxiv-670338
cs/0112024
Media Objects in Time - A Multimedia Streaming System
<|reference_start|>Media Objects in Time - A Multimedia Streaming System: The widespread availability of networked multimedia capabilities embedded in a qualitatively superior infrastructure gives rise to new approaches in the areas of teleteaching and internet presentation: The distribution of professionally styled multimedia streams has come within the realm of possibility. This paper presents a prototype - both model and runtime environment - of a time-directed media system treating any kind of presentational contribution as reusable media object components. The plug-in free runtime system is based on a database and allows for a flexible support of static media types as well as for easy extensions by streaming media servers. The prototypic implementation includes a preliminary Web Authoring platform.<|reference_end|>
arxiv
@article{feustel2001media, title={Media Objects in Time - A Multimedia Streaming System}, author={B. Feustel, T.C. Schmidt}, journal={Computer Networks 37,6 (2001), pp. 729 - 737}, year={2001}, archivePrefix={arXiv}, eprint={cs/0112024}, primaryClass={cs.NI cs.MM} }
feustel2001media
arxiv-670339
cs/0201001
Lower Bounds for Matrix Product
<|reference_start|>Lower Bounds for Matrix Product: We prove lower bounds on the number of product gates in bilinear and quadratic circuits that compute the product of two $n \times n$ matrices over finite fields. In particular we obtain the following results: 1. We show that the number of product gates in any bilinear (or quadratic) circuit that computes the product of two $n \times n$ matrices over $F_2$ is at least $3 n^2 - o(n^2)$. 2. We show that the number of product gates in any bilinear circuit that computes the product of two $n \times n$ matrices over $F_p$ is at least $(2.5 + \frac{1.5}{p^3 -1})n^2 -o(n^2)$. These results improve the former results of Bshouty '89 and Blaser '99 who proved lower bounds of $2.5 n^2 - o(n^2)$.<|reference_end|>
arxiv
@article{shpilka2002lower, title={Lower Bounds for Matrix Product}, author={Amir Shpilka}, journal={Published in the proceedings of the 42nd Annual Symposium on Foundations of Computer Science (FOCS) 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201001}, primaryClass={cs.CC} }
shpilka2002lower
arxiv-670340
cs/0201002
Incremental Construction of Compact Acyclic NFAs
<|reference_start|>Incremental Construction of Compact Acyclic NFAs: This paper presents and analyzes an incremental algorithm for the construction of Acyclic Non-deterministic Finite-state Automata (NFA). Automata of this type are quite useful in computational linguistics, especially for storing lexicons. The proposed algorithm produces compact NFAs, i.e. NFAs that do not contain equivalent states. Unlike Deterministic Finite-state Automata (DFA), this property is not sufficient to ensure minimality, but still the resulting NFAs are considerably smaller than the minimal DFAs for the same languages.<|reference_end|>
arxiv
@article{sgarbas2002incremental, title={Incremental Construction of Compact Acyclic NFAs}, author={Kyriakos N. Sgarbas, Nikos D. Fakotakis, George K. Kokkinakis}, journal={Proc. ACL-2001, 39th Annual Meeting of the Association for Computational Linguistics, pp.474-481, Toulouse, France, 6-11 July 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201002}, primaryClass={cs.DS cs.CL} }
sgarbas2002incremental
arxiv-670341
cs/0201003
Trust enhancement by multiple random beacons
<|reference_start|>Trust enhancement by multiple random beacons: Random beacons -- information sources that broadcast a stream of random digits unknown to anyone beforehand -- are useful for various cryptographic purposes. But such beacons can be easily and undetectably sabotaged, so that their output is known beforehand by a dishonest party, who can use this information to defeat the cryptographic protocols supposedly protected by the beacon. We explore a strategy to reduce this hazard by combining the outputs from several noninteracting (e.g. spacelike-separated) beacons by XORing them together to produce a single digit stream which is more trustworthy than any individual beacon, being random and unpredictable if at least one of the contributing beacons is honest. If the contributing beacons are not spacelike separated, so that a dishonest beacon can overhear and adapt to earlier outputs of other beacons, the beacons' trustworthiness can still be enhanced to a lesser extent by a time sharing strategy. We point out some disadvantages of alternative trust amplification methods based on one-way hash functions.<|reference_end|>
arxiv
@article{bennett2002trust, title={Trust enhancement by multiple random beacons}, author={Charles H. Bennett, John A. Smolin}, journal={arXiv preprint arXiv:cs/0201003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201003}, primaryClass={cs.CR} }
bennett2002trust
arxiv-670342
cs/0201004
Analysis of Non-Gaussian Nature of Network Traffic
<|reference_start|>Analysis of Non-Gaussian Nature of Network Traffic: To study mechanisms that cause the non-Gaussian nature of network traffic, we analyzed IP flow statistics. For greedy flows in particular, we investigated the hop counts between source and destination nodes, and classified applications by the port number. We found that the main flows contributing to the non-Gaussian nature of network traffic were HTTP flows with relatively small hop counts compared with the average hop counts of all flows.<|reference_end|>
arxiv
@article{mori2002analysis, title={Analysis of Non-Gaussian Nature of Network Traffic}, author={Tatsuya Mori, Ryoichi Kawahara, Shozo Naito}, journal={arXiv preprint arXiv:cs/0201004}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201004}, primaryClass={cs.NI} }
mori2002analysis
arxiv-670343
cs/0201005
Sharpening Occam's Razor
<|reference_start|>Sharpening Occam's Razor: We provide a new representation-independent formulation of Occam's razor theorem, based on Kolmogorov complexity. This new formulation allows us to: (i) Obtain better sample complexity than both length-based and VC-based versions of Occam's razor theorem, in many applications. (ii) Achieve a sharper reverse of Occam's razor theorem than previous work. Specifically, we weaken the assumptions made in an earlier publication, and extend the reverse to superpolynomial running times.<|reference_end|>
arxiv
@article{li2002sharpening, title={Sharpening Occam's Razor}, author={Ming Li (Univ. Waterloo), John Tromp (CWI), and Paul Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0201005}, year={2002}, number={CWI Manuscript 1994}, archivePrefix={arXiv}, eprint={cs/0201005}, primaryClass={cs.LG cond-mat.dis-nn cs.AI cs.CC math.PR physics.data-an} }
li2002sharpening
arxiv-670344
cs/0201006
On the Importance of Having an Identity or, is Consensus really Universal?
<|reference_start|>On the Importance of Having an Identity or, is Consensus really Universal?: We show that Naming -- the existence of distinct IDs known to all -- is a hidden but necessary assumption of Herlihy's universality result for Consensus. We then show in a very precise sense that Naming is harder than Consensus and bring to the surface some important differences existing between popular shared memory models.<|reference_end|>
arxiv
@article{buhrman2002on, title={On the Importance of Having an Identity or, is Consensus really Universal?}, author={Harry Buhrman (CWI), Alessandro Panconesi (Univ. La Sapienza, Rome), Riccardo Silvestri (Univ. La Sapienza, Rome), and Paul Vitanyi (CWI and Univ. Amsterdam)}, journal={arXiv preprint arXiv:cs/0201006}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201006}, primaryClass={cs.DC cs.CC} }
buhrman2002on
arxiv-670345
cs/0201007
Algorithm for generating orthogonal matrices with rational elements
<|reference_start|>Algorithm for generating orthogonal matrices with rational elements: Special orthogonal matrices with rational elements form the group SO(n,Q), where Q is the field of rational numbers. A theorem describing the structure of an arbitrary matrix from this group is proved. This theorem yields an algorithm for generating such matrices by means of random number routines.<|reference_end|>
arxiv
@article{sharipov2002algorithm, title={Algorithm for generating orthogonal matrices with rational elements}, author={Ruslan Sharipov}, journal={arXiv preprint arXiv:cs/0201007}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201007}, primaryClass={cs.MS cs.DS} }
sharipov2002algorithm
arxiv-670346
cs/0201008
Using Tree Automata and Regular Expressions to Manipulate Hierarchically Structured Data
<|reference_start|>Using Tree Automata and Regular Expressions to Manipulate Hierarchically Structured Data: Information, stored or transmitted in digital form, is often structured. Individual data records are usually represented as hierarchies of their elements. Together, records form larger structures. Information processing applications have to take account of this structuring, which assigns different semantics to different data elements or records. Big variety of structural schemata in use today often requires much flexibility from applications--for example, to process information coming from different sources. To ensure application interoperability, translators are needed that can convert one structure into another. This paper puts forward a formal data model aimed at supporting hierarchical data processing in a simple and flexible way. The model is based on and extends results of two classical theories, studying finite string and tree automata. The concept of finite automata and regular languages is applied to the case of arbitrarily structured tree-like hierarchical data records, represented as "structured strings." These automata are compared with classical string and tree automata; the model is shown to be a superset of the classical models. Regular grammars and expressions over structured strings are introduced. Regular expression matching and substitution has been widely used for efficient unstructured text processing; the model described here brings the power of this proven technique to applications that deal with information trees. A simple generic alternative is offered to replace today's specialised ad-hoc approaches. The model unifies structural and content transformations, providing applications with a single data type. An example scenario of how to build applications based on this theory is discussed. Further research directions are outlined.<|reference_end|>
arxiv
@article{schmidt2002using, title={Using Tree Automata and Regular Expressions to Manipulate Hierarchically Structured Data}, author={Nikita Schmidt, Ahmed Patel}, journal={arXiv preprint arXiv:cs/0201008}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201008}, primaryClass={cs.CL cs.DS} }
schmidt2002using
arxiv-670347
cs/0201009
The performance of the batch learner algorithm
<|reference_start|>The performance of the batch learner algorithm: We analyze completely the convergence speed of the \emph{batch learning algorithm}, and compare its speed to that of the memoryless learning algorithm and of learning with memory. We show that the batch learning algorithm is never worse than the memoryless learning algorithm (at least asymptotically). Its performance \emph{vis-a-vis} learning with full memory is less clearcut, and depends on certain probabilistic assumptions.<|reference_end|>
arxiv
@article{rivin2002the, title={The performance of the batch learner algorithm}, author={Igor Rivin}, journal={arXiv preprint arXiv:cs/0201009}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201009}, primaryClass={cs.LG cs.DM} }
rivin2002the
arxiv-670348
cs/0201010
Bundling Equilibrium in Combinatorial auctions
<|reference_start|>Bundling Equilibrium in Combinatorial auctions: This paper analyzes individually-rational ex post equilibrium in the VC (Vickrey-Clarke) combinatorial auctions. If $\Sigma$ is a family of bundles of goods, the organizer may restrict the participants by requiring them to submit their bids only for bundles in $\Sigma$. The $\Sigma$-VC combinatorial auctions (multi-good auctions) obtained in this way are known to be individually-rational truth-telling mechanisms. In contrast, this paper deals with non-restricted VC auctions, in which the buyers restrict themselves to bids on bundles in $\Sigma$, because it is rational for them to do so. That is, it may be that when the buyers report their valuation of the bundles in $\Sigma$, they are in an equilibrium. We fully characterize those $\Sigma$ that induce individually rational equilibrium in every VC auction, and we refer to the associated equilibrium as a bundling equilibrium. The number of bundles in $\Sigma$ represents the communication complexity of the equilibrium. A special case of bundling equilibrium is partition-based equilibrium, in which $\Sigma$ is a field, that is, it is generated by a partition. We analyze the tradeoff between communication complexity and economic efficiency of bundling equilibrium, focusing in particular on partition-based equilibrium.<|reference_end|>
arxiv
@article{holzman2002bundling, title={Bundling Equilibrium in Combinatorial auctions}, author={Ron Holzman, Noa Kfir-Dahav, Dov Monderer, Moshe Tennenholtz}, journal={arXiv preprint arXiv:cs/0201010}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201010}, primaryClass={cs.GT} }
holzman2002bundling
arxiv-670349
cs/0201011
A Backward Analysis for Constraint Logic Programs
<|reference_start|>A Backward Analysis for Constraint Logic Programs: One recurring problem in program development is that of understanding how to re-use code developed by a third party. In the context of (constraint) logic programming, part of this problem reduces to figuring out how to query a program. If the logic program does not come with any documentation, then the programmer is forced to either experiment with queries in an ad hoc fashion or trace the control-flow of the program (backward) to infer the modes in which a predicate must be called so as to avoid an instantiation error. This paper presents an abstract interpretation scheme that automates the latter technique. The analysis presented in this paper can infer moding properties which if satisfied by the initial query, come with the guarantee that the program and query can never generate any moding or instantiation errors. Other applications of the analysis are discussed. The paper explains how abstract domains with certain computational properties (they condense) can be used to trace control-flow backward (right-to-left) to infer useful properties of initial queries. A correctness argument is presented and an implementation is reported.<|reference_end|>
arxiv
@article{king2002a, title={A Backward Analysis for Constraint Logic Programs}, author={Andy King and Lunjin Lu}, journal={arXiv preprint arXiv:cs/0201011}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201011}, primaryClass={cs.PL cs.SE} }
king2002a
arxiv-670350
cs/0201012
Efficient Groundness Analysis in Prolog
<|reference_start|>Efficient Groundness Analysis in Prolog: Boolean functions can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, a variety of issues pertaining to the efficient Prolog implementation of groundness analysis are investigated, focusing on the domain of definite Boolean functions, Def. The systematic design of the representation of an abstract domain is discussed in relation to its impact on the algorithmic complexity of the domain operations; the most frequently called operations should be the most lightweight. This methodology is applied to Def, resulting in a new representation, together with new algorithms for its domain operations utilising previously unexploited properties of Def -- for instance, quadratic-time entailment checking. The iteration strategy driving the analysis is also discussed and a simple, but very effective, optimisation of induced magic is described. The analysis can be implemented straightforwardly in Prolog and the use of a non-ground representation results in an efficient, scalable tool which does not require widening to be invoked, even on the largest benchmarks. An extensive experimental evaluation is given<|reference_end|>
arxiv
@article{howe2002efficient, title={Efficient Groundness Analysis in Prolog}, author={Jacob M. Howe and Andy King}, journal={arXiv preprint arXiv:cs/0201012}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201012}, primaryClass={cs.PL} }
howe2002efficient
arxiv-670351
cs/0201013
Computing Preferred Answer Sets by Meta-Interpretation in Answer Set Programming
<|reference_start|>Computing Preferred Answer Sets by Meta-Interpretation in Answer Set Programming: Most recently, Answer Set Programming (ASP) is attracting interest as a new paradigm for problem solving. An important aspect which needs to be supported is the handling of preferences between rules, for which several approaches have been presented. In this paper, we consider the problem of implementing preference handling approaches by means of meta-interpreters in Answer Set Programming. In particular, we consider the preferred answer set approaches by Brewka and Eiter, by Delgrande, Schaub and Tompits, and by Wang, Zhou and Lin. We present suitable meta-interpreters for these semantics using DLV, which is an efficient engine for ASP. Moreover, we also present a meta-interpreter for the weakly preferred answer set approach by Brewka and Eiter, which uses the weak constraint feature of DLV as a tool for expressing and solving an underlying optimization problem. We also consider advanced meta-interpreters, which make use of graph-based characterizations and often allow for more efficient computations. Our approach shows the suitability of ASP in general and of DLV in particular for fast prototyping. This can be fruitfully exploited for experimenting with new languages and knowledge-representation formalisms.<|reference_end|>
arxiv
@article{eiter2002computing, title={Computing Preferred Answer Sets by Meta-Interpretation in Answer Set Programming}, author={Thomas Eiter, Wolfgang Faber, Nicola Leone, Gerald Pfeifer}, journal={Theory and Practice of Logic Programming 3(4/5):463-498, 2003}, year={2002}, doi={10.1017/S1471068403001753}, archivePrefix={arXiv}, eprint={cs/0201013}, primaryClass={cs.LO cs.AI} }
eiter2002computing
arxiv-670352
cs/0201014
The Dynamics of AdaBoost Weights Tells You What's Hard to Classify
<|reference_start|>The Dynamics of AdaBoost Weights Tells You What's Hard to Classify: The dynamical evolution of weights in the Adaboost algorithm contains useful information about the role that the associated data points play in the building of the Adaboost model. In particular, the dynamics induces a bipartition of the data set into two (easy/hard) classes. Easy points have little influence on the making of the model, while the varying relevance of hard points can be gauged in terms of an entropy value associated with their evolution. Smooth approximations of entropy highlight regions where classification is most uncertain. Promising results are obtained when the proposed methods are applied in the Optimal Sampling framework.<|reference_end|>
arxiv
@article{caprile2002the, title={The Dynamics of AdaBoost Weights Tells You What's Hard to Classify}, author={Bruno Caprile, Cesare Furlanello & Stefano Merler}, journal={arXiv preprint arXiv:cs/0201014}, year={2002}, number={IRST TechRep #010612}, archivePrefix={arXiv}, eprint={cs/0201014}, primaryClass={cs.LG cs.DS} }
caprile2002the
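A hedged sketch of the idea summarised in cs/0201014 above: run a small AdaBoost with decision stumps, record each point's weight trajectory, and score the point by the entropy of its normalised trajectory, so that low entropy flags "easy" points and high entropy flags "hard" ones. The stump learner and the exact entropy definition here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # Decision stump: sign of polarity * (x[feat] - thresh), as labels in {-1, +1}.
    return np.where(polarity * (X[:, feat] - thresh) > 0, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustive search for the weighted-error-minimising stump.
    best = None
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for pol in (1.0, -1.0):
                err = np.sum(w[stump_predict(X, feat, thresh, pol) != y])
                if best is None or err < best[0]:
                    best = (err, feat, thresh, pol)
    return best

def adaboost_weight_entropy(X, y, T=50):
    # y must be in {-1, +1}.  Returns one entropy value per data point.
    n = len(y)
    w = np.full(n, 1.0 / n)
    history = [w.copy()]
    for _ in range(T):
        err, feat, thresh, pol = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thresh, pol)
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()
        history.append(w.copy())
    W = np.array(history)                      # shape (T+1, n)
    P = W / W.sum(axis=0, keepdims=True)       # normalise each point's trajectory
    return -np.sum(P * np.log(P + 1e-12), axis=0)

X = np.array([[0.0], [1.0], [2.0], [3.0], [2.1]])
y = np.array([-1, -1, 1, 1, -1])               # the last point is deliberately hard
print(adaboost_weight_entropy(X, y, T=20))
```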
arxiv-670353
cs/0201015
On the Significance of Digits in Interval Notation
<|reference_start|>On the Significance of Digits in Interval Notation: To analyse the significance of the digits used for interval bounds, we clarify the philosophical presuppositions of various interval notations. We use information theory to determine the information content of the last digit of the numeral used to denote the interval's bounds. This leads to the notion of efficiency of a decimal digit: the actual value as percentage of the maximal value of its information content. By taking this efficiency into account, many presentations of intervals can be made more readable at the expense of negligible loss of information.<|reference_end|>
arxiv
@article{van emden2002on, title={On the Significance of Digits in Interval Notation}, author={M.H. van Emden}, journal={arXiv preprint arXiv:cs/0201015}, year={2002}, number={DCS-270-IR}, archivePrefix={arXiv}, eprint={cs/0201015}, primaryClass={cs.NA} }
van emden2002on
arxiv-670354
cs/0201016
A computer scientist looks at game theory
<|reference_start|>A computer scientist looks at game theory: I consider issues in distributed computation that should be of relevance to game theory. In particular, I focus on (a) representing knowledge and uncertainty, (b) dealing with failures, and (c) specification of mechanisms.<|reference_end|>
arxiv
@article{halpern2002a, title={A computer scientist looks at game theory}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0201016}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201016}, primaryClass={cs.GT cs.DC cs.MA} }
halpern2002a
arxiv-670355
cs/0201017
Collusion in Unrepeated, First-Price Auctions with an Uncertain Number of Participants
<|reference_start|>Collusion in Unrepeated, First-Price Auctions with an Uncertain Number of Participants: We consider the question of whether collusion among bidders (a "bidding ring") can be supported in equilibrium of unrepeated first-price auctions. Unlike previous work on the topic such as that by McAfee and McMillan [1992] and Marshall and Marx [2007], we do not assume that non-colluding agents have perfect knowledge about the number of colluding agents whose bids are suppressed by the bidding ring, and indeed even allow for the existence of multiple cartels. Furthermore, while we treat the association of bidders with bidding rings as exogenous, we allow bidders to make strategic decisions about whether to join bidding rings when invited. We identify a bidding ring protocol that results in an efficient allocation in Bayes-Nash equilibrium, under which non-colluding agents bid straightforwardly, and colluding agents join bidding rings when invited and truthfully declare their valuations to the ring center. We show that bidding rings benefit ring centers and all agents, both members and non-members of bidding rings, at the auctioneer's expense. The techniques we introduce in this paper may also be useful for reasoning about other problems in which agents have asymmetric information about a setting.<|reference_end|>
arxiv
@article{leyton-brown2002collusion, title={Collusion in Unrepeated, First-Price Auctions with an Uncertain Number of Participants}, author={Kevin Leyton-Brown, Moshe Tennenholtz, Navin Bhat, and Yoav Shoham}, journal={arXiv preprint arXiv:cs/0201017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201017}, primaryClass={cs.GT cs.AI} }
leyton-brown2002collusion
arxiv-670356
cs/0201018
Long Proteins with Unique Optimal Foldings in the H-P Model
<|reference_start|>Long Proteins with Unique Optimal Foldings in the H-P Model: It is widely accepted that (1) the natural or folded state of proteins is a global energy minimum, and (2) in most cases proteins fold to a unique state determined by their amino acid sequence. The H-P (hydrophobic-hydrophilic) model is a simple combinatorial model designed to answer qualitative questions about the protein folding process. In this paper we consider a problem suggested by Brian Hayes in 1998: what proteins in the two-dimensional H-P model have unique optimal (minimum energy) foldings? In particular, we prove that there are closed chains of monomers (amino acids) with this property for all (even) lengths; and that there are open monomer chains with this property for all lengths divisible by four.<|reference_end|>
arxiv
@article{aichholzer2002long, title={Long Proteins with Unique Optimal Foldings in the H-P Model}, author={Oswin Aichholzer, David Bremner, Erik D. Demaine, Henk Meijer, Vera Sacristán, Michael Soss}, journal={arXiv preprint arXiv:cs/0201018}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201018}, primaryClass={cs.CG q-bio.BM} }
aichholzer2002long
arxiv-670357
cs/0201019
Structure from Motion: Theoretical Foundations of a Novel Approach Using Custom Built Invariants
<|reference_start|>Structure from Motion: Theoretical Foundations of a Novel Approach Using Custom Built Invariants: We rephrase the problem of 3D reconstruction from images in terms of intersections of projections of orbits of custom built Lie groups actions. We then use an algorithmic method based on moving frames "a la Fels-Olver" to obtain a fundamental set of invariants of these groups actions. The invariants are used to define a set of equations to be solved by the points of the 3D object, providing a new technique for recovering 3D structure from motion.<|reference_end|>
arxiv
@article{bazin2002structure, title={Structure from Motion: Theoretical Foundations of a Novel Approach Using Custom Built Invariants}, author={Pierre-Louis Bazin (Brown University) and Mireille Boutin (Brown University)}, journal={arXiv preprint arXiv:cs/0201019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201019}, primaryClass={cs.CV math.DG} }
bazin2002structure
arxiv-670358
cs/0201020
A Modal Logic Framework for Multi-agent Belief Fusion
<|reference_start|>A Modal Logic Framework for Multi-agent Belief Fusion: This paper is aimed at providing a uniform framework for reasoning about beliefs of multiple agents and their fusion. In the first part of the paper, we develop logics for reasoning about cautiously merged beliefs of agents with different degrees of reliability. The logics are obtained by combining the multi-agent epistemic logic and multi-sources reasoning systems. Every ordering for the reliability of the agents is represented by a modal operator, so we can reason with the merged results under different situations. The fusion is cautious in the sense that if an agent's belief is in conflict with those of higher priorities, then his belief is completely discarded from the merged result. We consider two strategies for the cautious merging of beliefs. In the first one, if inconsistency occurs at some level, then all beliefs at the lower levels are discarded simultaneously, so it is called level cutting strategy. For the second one, only the level at which the inconsistency occurs is skipped, so it is called level skipping strategy. The formal semantics and axiomatic systems for these two strategies are presented. In the second part, we extend the logics both syntactically and semantically to cover some more sophisticated belief fusion and revision operators. While most existing approaches treat belief fusion operators as meta-level constructs, these operators are directly incorporated into our object logic language. Thus it is possible to reason not only with the merged results but also about the fusion process in our logics. The relationship of our extended logics with the conditional logics of belief revision is also discussed.<|reference_end|>
arxiv
@article{liau2002a, title={A Modal Logic Framework for Multi-agent Belief Fusion}, author={Churn-Jung Liau}, journal={arXiv preprint arXiv:cs/0201020}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201020}, primaryClass={cs.AI cs.LO} }
liau2002a
arxiv-670359
cs/0201021
Learning to Play Games in Extensive Form by Valuation
<|reference_start|>Learning to Play Games in Extensive Form by Valuation: A valuation for a player in a game in extensive form is an assignment of numeric values to the player's moves. The valuation reflects the desirability of the moves. We assume a myopic player, who chooses a move with the highest valuation. Valuations can also be revised, and hopefully improved, after each play of the game. Here, a very simple valuation revision is considered, in which the moves made in a play are assigned the payoff obtained in the play. We show that by adopting such a learning process a player who has a winning strategy in a win-lose game can almost surely guarantee a win in a repeated game. When a player has more than two payoffs, a more elaborate learning procedure is required. We consider one that associates with each move the average payoff in the rounds in which this move was made. When all players adopt this learning procedure, with some perturbations, then, with probability 1, strategies that are close to subgame perfect equilibrium are played after some time. A single player who adopts this procedure can guarantee only her individually rational payoff.<|reference_end|>
arxiv
@article{jehiel2002learning, title={Learning to Play Games in Extensive Form by Valuation}, author={Philippe Jehiel, Dov Samet}, journal={arXiv preprint arXiv:cs/0201021}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201021}, primaryClass={cs.LG cs.GT} }
jehiel2002learning
arxiv-670360
cs/0201022
A theory of experiment
<|reference_start|>A theory of experiment: This article aims at clarifying the language and practice of scientific experiment, mainly by hooking observability on calculability.<|reference_end|>
arxiv
@article{albarede2002a, title={A theory of experiment}, author={Pierre Albarede}, journal={arXiv preprint arXiv:cs/0201022}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201022}, primaryClass={cs.AI} }
albarede2002a
arxiv-670361
cs/0201023
Model-Based Software Engineering and Ada: Synergy for the Development of Safety-Critical Systems
<|reference_start|>Model-Based Software Engineering and Ada: Synergy for the Development of Safety-Critical Systems: In this paper we outline a software development process for safety-critical systems that aims at combining some of the specific strengths of model-based development with those of programming language based development using safety-critical subsets of Ada. Model-based software development and model-based test case generation techniques are combined with code generation techniques and tools providing a transition from model to code both for a system itself and for its test cases. This allows developers to combine domain-oriented, model-based techniques with source code based validation techniques, as required for conformity with standards for the development of safety-critical software, such as the avionics standard RTCA/DO-178B. We introduce the AutoFocus and Validator modeling and validation toolset and sketch its usage for modeling, test case generation, and code generation in a combined approach, which is further illustrated by a simplified leading edge aerospace model with built-in fault tolerance.<|reference_end|>
arxiv
@article{blotz2002model-based, title={Model-Based Software Engineering and Ada: Synergy for the Development of Safety-Critical Systems}, author={Andree Blotz (1), Franz Huber (2), Heiko Loetzbeyer (3), Alexander Pretschner (3), Oscar Slotosch (2), Hans-Peter Zaengerl (2) ((1) EADS Deutschland GmbH, (2) Validas AG, (3) TU Munich)}, journal={arXiv preprint arXiv:cs/0201023}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201023}, primaryClass={cs.SE} }
blotz2002model-based
arxiv-670362
cs/0201024
Design of statistical quality control procedures using genetic algorithms
<|reference_start|>Design of statistical quality control procedures using genetic algorithms: In general, we can not use algebraic or enumerative methods to optimize a quality control (QC) procedure so as to detect the critical random and systematic analytical errors with stated probabilities, while the probability for false rejection is minimum. Genetic algorithms (GAs) offer an alternative, as they do not require knowledge of the objective function to be optimized and search through large parameter spaces quickly. To explore the application of GAs in statistical QC, we have developed an interactive GAs based computer program that designs a novel near optimal QC procedure, given an analytical process. The program uses the deterministic crowding algorithm. An illustrative application of the program suggests that it has the potential to design QC procedures that are significantly better than 45 alternative ones that are used in the clinical laboratories.<|reference_end|>
arxiv
@article{hatjimihail2002design, title={Design of statistical quality control procedures using genetic algorithms}, author={Aristides T. Hatjimihail (1), Theophanes T. Hatjimihail (1)((1) Hellenic Complex Systems Laboratory, Drama, Greece)}, journal={LJ Eshelman (ed): Proceedings of the Sixth International Conference on Genetic Algorithms. San Francisco: Morgan Kauffman, 1995:551-7}, year={2002}, number={HCSLTR02}, archivePrefix={arXiv}, eprint={cs/0201024}, primaryClass={cs.NE} }
hatjimihail2002design
arxiv-670363
cs/0201025
Core Services in the Architecture of the National Digital Library for Science Education (NSDL)
<|reference_start|>Core Services in the Architecture of the National Digital Library for Science Education (NSDL): We describe the core components of the architecture for the National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL). Over time the NSDL will include heterogeneous users, content, and services. To accommodate this, a design for a technical and organizational infrastructure has been formulated based on the notion of a spectrum of interoperability. This paper describes the first phase of the interoperability infrastructure including the metadata repository, search and discovery services, rights management services, and user interface portal facilities.<|reference_end|>
arxiv
@article{lagoze2002core, title={Core Services in the Architecture of the National Digital Library for Science Education (NSDL)}, author={Carl Lagoze, William Arms, Stoney Gan, Diane Hillmann, Christopher Ingram, Dean Krafft, Richard Marisa, Jon Phipps, John Saylor, Carol Terrizzi, Walter Hoehn, David Millman, James Allan, Sergio Guzman-Lara, Tom Kalt}, journal={arXiv preprint arXiv:cs/0201025}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201025}, primaryClass={cs.DL} }
lagoze2002core
arxiv-670364
cs/0201026
An Empirical Model for Volatility of Returns and Option Pricing
<|reference_start|>An Empirical Model for Volatility of Returns and Option Pricing: In a seminal paper in 1973, Black and Scholes argued how expected distributions of stock prices can be used to price options. Their model assumed a directed random motion for the returns and consequently a lognormal distribution of asset prices after a finite time. We point out two problems with their formulation. First, we show that the option valuation is not uniquely determined; in particular, strategies based on the delta-hedge and CAPM (Capital Asset Pricing Model) are shown to provide different valuations of an option. Second, asset returns are known not to be Gaussian distributed. Empirically, distributions of returns are seen to be much better approximated by an exponential distribution. This exponential distribution of asset prices can be used to develop a new pricing model for options that is shown to provide valuations that agree very well with those used by traders. We show how the Fokker-Planck formulation of fluctuations (i.e., the dynamics of the distribution) can be modified to provide an exponential distribution for returns. We also show how a singular volatility can be used to go smoothly from exponential to Gaussian returns and thereby illustrate why exponential returns cannot be reached perturbatively starting from Gaussian ones, and explain how the theory of 'stochastic volatility' can be obtained from our model by making a bad approximation. Finally, we show how to calculate put and call prices for a stretched exponential density.<|reference_end|>
arxiv
@article{mccauley2002an, title={An Empirical Model for Volatility of Returns and Option Pricing}, author={Joseph L. McCauley and Gemunu H. Gunaratne}, journal={arXiv preprint arXiv:cs/0201026}, year={2002}, doi={10.1016/S0378-4371(03)00589-2}, archivePrefix={arXiv}, eprint={cs/0201026}, primaryClass={cs.CE} }
mccauley2002an
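The abstract of cs/0201026 above contrasts Gaussian and exponential return distributions for option valuation. The Monte Carlo sketch below is only a generic illustration of that contrast, pricing a European call under Gaussian and under two-sided exponential (Laplace) log-returns of matched variance; it does not reproduce the paper's empirical density, drift treatment, or closed-form results, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def call_price_mc(S0, K, r, T, sample_log_returns, n=200_000):
    # Discounted average payoff of a European call under sampled log-returns.
    R = sample_log_returns(n)
    payoff = np.maximum(S0 * np.exp(R) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

def gaussian_returns(n, mu=0.05, sigma=0.2, T=1.0):
    # Lognormal price model: Gaussian log-returns (Black-Scholes-style dynamics).
    return rng.normal((mu - 0.5 * sigma ** 2) * T, sigma * np.sqrt(T), n)

def laplace_returns(n, mu=0.05, sigma=0.2, T=1.0):
    # Two-sided exponential log-returns with the same variance as the Gaussian case.
    # (No risk-neutral drift correction is attempted; this is purely illustrative.)
    return rng.laplace(mu * T, sigma * np.sqrt(T / 2.0), n)

print(call_price_mc(100.0, 105.0, 0.05, 1.0, gaussian_returns))
print(call_price_mc(100.0, 105.0, 0.05, 1.0, laplace_returns))
```

The heavier tails of the Laplace density typically move the simulated out-of-the-money call price away from the Gaussian value, which is the qualitative point the abstract makes about exponential versus Gaussian returns.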
arxiv-670365
cs/0201027
Components of an NSDL Architecture: Technical Scope and Functional Model
<|reference_start|>Components of an NSDL Architecture: Technical Scope and Functional Model: We describe work leading toward specification of a technical architecture for the National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL). This includes a technical scope and a functional model, with some elaboration on the particularly rich set of library services that NSDL is expected eventually to encompass.<|reference_end|>
arxiv
@article{fulker2002components, title={Components of an NSDL Architecture: Technical Scope and Functional Model}, author={David Fulker and Greg Janee}, journal={arXiv preprint arXiv:cs/0201027}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201027}, primaryClass={cs.DL} }
fulker2002components
arxiv-670366
cs/0201028
Software Validation using Power Profiles
<|reference_start|>Software Validation using Power Profiles: The validation of modern software systems incorporates both functional and quality requirements. This paper proposes a validation approach for software quality requirement - its power consumption. This approach validates whether the software produces the desired results with a minimum expenditure of energy. We present energy requirements and an approach for their validation using a power consumption model, test-case specification, software traces, and power measurements. Three different approaches for power data gathering are described. The power consumption of mobile phone applications is obtained and matched against the power consumption model.<|reference_end|>
arxiv
@article{lencevicius2002software, title={Software Validation using Power Profiles}, author={Raimondas Lencevicius, Edu Metz, Alexander Ran}, journal={arXiv preprint arXiv:cs/0201028}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201028}, primaryClass={cs.SE} }
lencevicius2002software
arxiv-670367
cs/0201029
The Witness Properties and the Semantics of the Prolog Cut
<|reference_start|>The Witness Properties and the Semantics of the Prolog Cut: The semantics of the Prolog ``cut'' construct is explored in the context of some desirable properties of logic programming systems, referred to as the witness properties. The witness properties concern the operational consistency of responses to queries. A generalization of Prolog with negation as failure and cut is described, and shown not to have the witness properties. A restriction of the system is then described, which preserves the choice and first-solution behaviour of cut but allows the system to have the witness properties. The notion of cut in the restricted system is more restricted than the Prolog hard cut, but retains the useful first-solution behaviour of hard cut, not retained by other proposed cuts such as the ``soft cut''. It is argued that the restricted system achieves a good compromise between the power and utility of the Prolog cut and the need for internal consistency in logic programming systems. The restricted system is given an abstract semantics, which depends on the witness properties; this semantics suggests that the restricted system has a deeper connection to logic than simply permitting some computations which are logical. Parts of this paper appeared previously in a different form in the Proceedings of the 1995 International Logic Programming Symposium.<|reference_end|>
arxiv
@article{andrews2002the, title={The Witness Properties and the Semantics of the Prolog Cut}, author={James H. Andrews}, journal={arXiv preprint arXiv:cs/0201029}, year={2002}, archivePrefix={arXiv}, eprint={cs/0201029}, primaryClass={cs.PL} }
andrews2002the
arxiv-670368
cs/0202001
The Deductive Database System LDL++
<|reference_start|>The Deductive Database System LDL++: This paper describes the LDL++ system and the research advances that have enabled its design and development. We begin by discussing the new nonmonotonic and nondeterministic constructs that extend the functionality of the LDL++ language, while preserving its model-theoretic and fixpoint semantics. Then, we describe the execution model and the open architecture designed to support these new constructs and to facilitate the integration with existing DBMSs and applications. Finally, we describe the lessons learned by using LDL++ on various tested applications, such as middleware and datamining.<|reference_end|>
arxiv
@article{arni2002the, title={The Deductive Database System LDL++}, author={Faiz Arni, KayLiang Ong, Shalom Tsur and Haixun Wang, Carlo Zaniolo}, journal={arXiv preprint arXiv:cs/0202001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202001}, primaryClass={cs.DB cs.AI} }
arni2002the
arxiv-670369
cs/0202002
A Refinement Calculus for Logic Programs
<|reference_start|>A Refinement Calculus for Logic Programs: Existing refinement calculi provide frameworks for the stepwise development of imperative programs from specifications. This paper presents a refinement calculus for deriving logic programs. The calculus contains a wide-spectrum logic programming language, including executable constructs such as sequential conjunction, disjunction, and existential quantification, as well as specification constructs such as general predicates, assumptions and universal quantification. A declarative semantics is defined for this wide-spectrum language based on executions. Executions are partial functions from states to states, where a state is represented as a set of bindings. The semantics is used to define the meaning of programs and specifications, including parameters and recursion. To complete the calculus, a notion of correctness-preserving refinement over programs in the wide-spectrum language is defined and refinement laws for developing programs are introduced. The refinement calculus is illustrated using example derivations and prototype tool support is discussed.<|reference_end|>
arxiv
@article{hayes2002a, title={A Refinement Calculus for Logic Programs}, author={Ian Hayes, Robert Colvin, David Hemer, Paul Strooper, Ray Nickson}, journal={arXiv preprint arXiv:cs/0202002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202002}, primaryClass={cs.SE cs.LO} }
hayes2002a
arxiv-670370
cs/0202003
Simple Optimal Wait-free Multireader Registers
<|reference_start|>Simple Optimal Wait-free Multireader Registers: Multireader shared registers are basic objects used as communication medium in asynchronous concurrent computation. We propose a surprisingly simple and natural scheme to obtain several wait-free constructions of bounded 1-writer multireader registers from atomic 1-writer 1-reader registers, that is easier to prove correct than any previous construction. Our main construction is the first symmetric pure timestamp one that is optimal with respect to the worst-case local use of control bits; the other one is optimal with respect to global use of control bits; both are optimal in time.<|reference_end|>
arxiv
@article{vitanyi2002simple, title={Simple Optimal Wait-free Multireader Registers}, author={Paul Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0202003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202003}, primaryClass={cs.DC} }
vitanyi2002simple
arxiv-670371
cs/0202004
A Qualitative Dynamical Modelling Approach to Capital Accumulation in Unregulated Fisheries
<|reference_start|>A Qualitative Dynamical Modelling Approach to Capital Accumulation in Unregulated Fisheries: Capital accumulation has been a major issue in fisheries economics over the last two decades, whereby the interaction of the fish and capital stocks were of particular interest. Because bio-economic systems are intrinsically complex, previous efforts in this field have relied on a variety of simplifying assumptions. The model presented here relaxes some of these simplifications. Problems of tractability are surmounted by using the methodology of qualitative differential equations (QDE). The theory of QDEs takes into account that scientific knowledge about particular fisheries is usually limited, and facilitates an analysis of the global dynamics of systems with more than two ordinary differential equations. The model is able to trace the evolution of capital and fish stock in good agreement with observed patterns, and shows that over-capitalization is unavoidable in unregulated fisheries.<|reference_end|>
arxiv
@article{eisenack2002a, title={A Qualitative Dynamical Modelling Approach to Capital Accumulation in Unregulated Fisheries}, author={K. Eisenack, H. Welsch, J.P. Kropp}, journal={arXiv preprint arXiv:cs/0202004}, year={2002}, doi={10.1016/j.jedc.2005.08.004}, archivePrefix={arXiv}, eprint={cs/0202004}, primaryClass={cs.AI cs.CE} }
eisenack2002a
arxiv-670372
cs/0202005
Secure History Preservation Through Timeline Entanglement
<|reference_start|>Secure History Preservation Through Timeline Entanglement: A secure timeline is a tamper-evident historic record of the states through which a system goes throughout its operational history. Secure timelines can help us reason about the temporal ordering of system states in a provable manner. We extend secure timelines to encompass multiple, mutually distrustful services, using timeline entanglement. Timeline entanglement associates disparate timelines maintained at independent systems, by linking undeniably the past of one timeline to the future of another. Timeline entanglement is a sound method to map a time step in the history of one service onto the timeline of another, and helps clients of entangled services to get persistent temporal proofs for services rendered that survive the demise or non-cooperation of the originating service. In this paper we present the design and implementation of Timeweave, our service development framework for timeline entanglement based on two novel disk-based authenticated data structures. We evaluate Timeweave's performance characteristics and show that it can be efficiently deployed in a loosely-coupled distributed system of a few hundred services with overhead of roughly 2-8% of the processing resources of a PC-grade system.<|reference_end|>
arxiv
@article{maniatis2002secure, title={Secure History Preservation Through Timeline Entanglement}, author={Petros Maniatis and Mary Baker}, journal={arXiv preprint arXiv:cs/0202005}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202005}, primaryClass={cs.DC cs.CR cs.DB cs.DS} }
maniatis2002secure
arxiv-670373
cs/0202006
Approximate Computation of Reach Sets in Hybrid Systems
<|reference_start|>Approximate Computation of Reach Sets in Hybrid Systems: One of the most important problems in hybrid systems is the {\em reachability problem}. The reachability problem has been shown to be undecidable even for a subclass of {\em linear} hybrid systems. In view of this, the main focus in the area of hybrid systems has been to find {\em effective} semi-decision procedures for this problem. Such an algorithmic approach involves finding methods of computation and representation of reach sets of the continuous variables within a discrete state of a hybrid system. In this paper, after presenting a brief introduction to hybrid systems and reachability problem, we propose a computational method for obtaining the reach sets of continuous variables in a hybrid system. In addition to this, we also describe a new algorithm to over-approximate with polyhedra the reach sets of the continuous variables with linear dynamics and polyhedral initial set. We illustrate these algorithms with typical interesting examples.<|reference_end|>
arxiv
@article{ravi2002approximate, title={Approximate Computation of Reach Sets in Hybrid Systems}, author={D. Ravi and R.K. Shyamasundar}, journal={arXiv preprint arXiv:cs/0202006}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202006}, primaryClass={cs.LO cs.SE} }
ravi2002approximate
arxiv-670374
cs/0202007
Steady State Resource Allocation Analysis of the Stochastic Diffusion Search
<|reference_start|>Steady State Resource Allocation Analysis of the Stochastic Diffusion Search: This article presents the long-term behaviour analysis of Stochastic Diffusion Search (SDS), a distributed agent-based system for best-fit pattern matching. SDS operates by allocating simple agents into different regions of the search space. Agents independently pose hypotheses about the presence of the pattern in the search space and its potential distortion. Assuming a compositional structure of hypotheses about pattern matching, agents perform an inference on the basis of partial evidence from the hypothesised solution. Agents posing mutually consistent hypotheses about the pattern support each other and inhibit agents with inconsistent hypotheses. This results in the emergence of a stable agent population identifying the desired solution. Positive feedback via diffusion of information between the agents significantly contributes to the speed with which the solution population is formed. The formulation of the SDS model in terms of interacting Markov Chains enables its characterisation in terms of the allocation of agents, or computational resources. The analysis characterises the stationary probability distribution of the activity of agents, which leads to the characterisation of the solution population in terms of its similarity to the target pattern.<|reference_end|>
arxiv
@article{nasuto2002steady, title={Steady State Resource Allocation Analysis of the Stochastic Diffusion Search}, author={Slawomir J. Nasuto and Mark J. Bishop}, journal={arXiv preprint arXiv:cs/0202007}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202007}, primaryClass={cs.AI cs.NE} }
nasuto2002steady
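For readers unfamiliar with SDS, the following minimal string-search sketch illustrates the test-and-diffusion cycle whose steady state cs/0202007 analyses; the agent count, iteration budget, and termination rule are arbitrary choices, and the code makes no attempt to reproduce the paper's Markov-chain treatment.

```python
import random

def sds_best_fit(text, pattern, n_agents=100, n_iters=200, seed=0):
    rng = random.Random(seed)
    positions = list(range(len(text) - len(pattern) + 1))
    hyp = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iters):
        # Test phase: each agent checks one randomly chosen component of the pattern
        # at its hypothesised position and becomes active on a match.
        for i in range(n_agents):
            j = rng.randrange(len(pattern))
            active[i] = (text[hyp[i] + j] == pattern[j])
        # Diffusion phase: an inactive agent polls a random agent and either copies
        # its hypothesis (if that agent is active) or resamples a fresh one.
        for i in range(n_agents):
            if not active[i]:
                k = rng.randrange(n_agents)
                hyp[i] = hyp[k] if active[k] else rng.choice(positions)
    # The largest cluster of agents marks the best-fitting position.
    return max(set(hyp), key=hyp.count)

print(sds_best_fit("the quick brown fox jumps over the lazy dog", "brwn fox"))
```

Even with the distorted pattern, the positive feedback in the diffusion phase concentrates most agents on the best partial match, which is the emergent "solution population" the abstract refers to.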
arxiv-670375
cs/0202008
CUP: Controlled Update Propagation in Peer-to-Peer Networks
<|reference_start|>CUP: Controlled Update Propagation in Peer-to-Peer Networks: Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.<|reference_end|>
arxiv
@article{roussopoulos2002cup:, title={CUP: Controlled Update Propagation in Peer-to-Peer Networks}, author={Mema Roussopoulos and Mary Baker}, journal={arXiv preprint arXiv:cs/0202008}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202008}, primaryClass={cs.NI} }
roussopoulos2002cup:
arxiv-670376
cs/0202009
Non-negative sparse coding
<|reference_start|>Non-negative sparse coding: Non-negative sparse coding is a method for decomposing multivariate data into non-negative sparse components. In this paper we briefly describe the motivation behind this type of data representation and its relation to standard sparse coding and non-negative matrix factorization. We then give a simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components. In addition, we show how the basis vectors can be learned from the observed data. Simulations demonstrate the effectiveness of the proposed method.<|reference_end|>
arxiv
@article{hoyer2002non-negative, title={Non-negative sparse coding}, author={Patrik O. Hoyer}, journal={arXiv preprint arXiv:cs/0202009}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202009}, primaryClass={cs.NE cs.CV} }
hoyer2002non-negative
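A minimal numpy sketch of the decomposition described in cs/0202009, assuming the usual objective 0.5*||V - W H||^2 + lambda*sum(H) with non-negativity constraints: H is refined with a multiplicative rule and W with a projected gradient step followed by column renormalisation. The step size, iteration count, and initialisation are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def nn_sparse_coding(V, r, lam=0.1, mu=1e-3, iters=500, seed=0):
    # V: (n, m) non-negative data; W: (n, r) basis; H: (r, m) sparse hidden components.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = np.abs(rng.standard_normal((n, r)))
    H = np.abs(rng.standard_normal((r, m)))
    for _ in range(iters):
        # Multiplicative update for the hidden components (keeps H non-negative).
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
        # Projected gradient step for the basis, then renormalise columns.
        W -= mu * (W @ H - V) @ H.T
        W = np.maximum(W, 0.0)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((20, 100)))
W, H = nn_sparse_coding(V, r=5, lam=0.1)
print(np.linalg.norm(V - W @ H))
```

Raising lam trades reconstruction error for sparsity of H, which is the non-negative-sparse-coding compromise between plain NMF and standard sparse coding that the abstract describes.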
arxiv-670377
cs/0202010
Using parametric set constraints for locating errors in CLP programs
<|reference_start|>Using parametric set constraints for locating errors in CLP programs: This paper introduces a framework of parametric descriptive directional types for constraint logic programming (CLP). It proposes a method for locating type errors in CLP programs and presents a prototype debugging tool. The main technique used is checking correctness of programs w.r.t. type specifications. The approach is based on a generalization of known methods for proving correctness of logic programs to the case of parametric specifications. Set-constraint techniques are used for formulating and checking verification conditions for (parametric) polymorphic type specifications. The specifications are expressed in a parametric extension of the formalism of term grammars. The soundness of the method is proved and the prototype debugging tool supporting the proposed approach is illustrated on examples. The paper is a substantial extension of the previous work by the same authors concerning monomorphic directional types.<|reference_end|>
arxiv
@article{drabent2002using, title={Using parametric set constraints for locating errors in CLP programs}, author={W. Drabent, J. Maluszynski and P. Pietrzak}, journal={Theory and Practice of Logic Programming, Vol 2(4&5), 2002, pp 549-611.}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202010}, primaryClass={cs.PL} }
drabent2002using
arxiv-670378
cs/0202011
Small Strictly Convex Quadrilateral Meshes of Point Sets
<|reference_start|>Small Strictly Convex Quadrilateral Meshes of Point Sets: In this paper, we give upper and lower bounds on the number of Steiner points required to construct a strictly convex quadrilateral mesh for a planar point set. In particular, we show that $3{\lfloor\frac{n}{2}\rfloor}$ internal Steiner points are always sufficient for a convex quadrilateral mesh of $n$ points in the plane. Furthermore, for any given $n\geq 4$, there are point sets for which $\lceil\frac{n-3}{2}\rceil-1$ Steiner points are necessary for a convex quadrilateral mesh.<|reference_end|>
arxiv
@article{bremner2002small, title={Small Strictly Convex Quadrilateral Meshes of Point Sets}, author={David Bremner, Ferran Hurtado, Suneeta Ramaswami, and Vera Sacristan}, journal={arXiv preprint arXiv:cs/0202011}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202011}, primaryClass={cs.CG} }
bremner2002small
arxiv-670379
cs/0202012
Logic program specialisation through partial deduction: Control issues
<|reference_start|>Logic program specialisation through partial deduction: Control issues: Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It is achieved through a well-automated application of parts of the Burstall-Darlington unfold/fold transformation framework. The main challenge in developing systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches some remaining research challenges.<|reference_end|>
arxiv
@article{leuschel2002logic, title={Logic program specialisation through partial deduction: Control issues}, author={Michael Leuschel and Maurice Bruynooghe}, journal={arXiv preprint arXiv:cs/0202012}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202012}, primaryClass={cs.PL cs.AI} }
leuschel2002logic
arxiv-670380
cs/0202013
The SDSS SkyServer: Public Access to the Sloan Digital Sky Server Data
<|reference_start|>The SDSS SkyServer: Public Access to the Sloan Digital Sky Server Data: The SkyServer provides Internet access to the public Sloan Digital Sky Survey (SDSS) data for both astronomers and for science education. This paper describes the SkyServer goals and architecture. It also describes our experience operating the SkyServer on the Internet. The SDSS data is public and well-documented so it makes a good test platform for research on database algorithms and performance.<|reference_end|>
arxiv
@article{szalay2002the, title={The SDSS SkyServer: Public Access to the Sloan Digital Sky Server Data}, author={Alexander S. Szalay, Jim Gray, Ani R. Thakar, Peter Z. Kunszt, Tanu Malik, Jordan Raddick, Christopher Stoughton, Jan vandenBerg}, journal={ACM SIGMOD 2002 proceedings}, year={2002}, number={MSR TR 01 104}, archivePrefix={arXiv}, eprint={cs/0202013}, primaryClass={cs.DL cs.DB} }
szalay2002the
arxiv-670381
cs/0202014
Data Mining the SDSS SkyServer Database
<|reference_start|>Data Mining the SDSS SkyServer Database: An earlier paper (Szalay et al. "Designing and Mining MultiTerabyte Astronomy Archives: The Sloan Digital Sky Survey," ACM SIGMOD 2000) described the Sloan Digital Sky Survey's (SDSS) data management needs by defining twenty database queries and twelve data visualization tasks that a good data management system should support. We built a database and interfaces to support both the query load and also a website for ad-hoc access. This paper reports on the database design, describes the data loading pipeline, and reports on the query implementation and performance. The queries typically translated to a single SQL statement. Most queries run in less than 20 seconds, allowing scientists to interactively explore the database. This paper is an in-depth tour of those queries. Readers should first have studied the companion overview paper Szalay et al. "The SDSS SkyServer, Public Access to the Sloan Digital Sky Server Data" ACM SIGMOD 2002.<|reference_end|>
arxiv
@article{gray2002data, title={Data Mining the SDSS SkyServer Database}, author={Jim Gray, Alex S. Szalay, Ani R. Thakar, Peter Z. Kunszt, Christopher Stoughton, Don Slutz, Jan vandenBerg}, journal={arXiv preprint arXiv:cs/0202014}, year={2002}, number={Microsoft Tech Report MSR TR 02 01}, archivePrefix={arXiv}, eprint={cs/0202014}, primaryClass={cs.DB cs.DL} }
gray2002data
arxiv-670382
cs/0202015
Combinatorial Auctions with Decreasing Marginal Utilities
<|reference_start|>Combinatorial Auctions with Decreasing Marginal Utilities: In most of microeconomic theory, consumers are assumed to exhibit decreasing marginal utilities. This paper considers combinatorial auctions among such submodular buyers. The valuations of such buyers are placed within a hierarchy of valuations that exhibit no complementarities, a hierarchy that includes also OR and XOR combinations of singleton valuations, and valuations satisfying the gross substitutes property. Those last valuations are shown to form a zero-measure subset of the submodular valuations that have positive measure. While we show that the allocation problem among submodular valuations is NP-hard, we present an efficient greedy 2-approximation algorithm for this case and generalize it to the case of limited complementarities. No such approximation algorithm exists in a setting allowing for arbitrary complementarities. Some results about strategic aspects of combinatorial auctions among players with decreasing marginal utilities are also presented.<|reference_end|>
arxiv
@article{lehmann2002combinatorial, title={Combinatorial Auctions with Decreasing Marginal Utilities}, author={Benny Lehmann, Daniel Lehmann and Noam Nisan}, journal={Games and Economic Behavior, Vol 55/2 May 2006 pp 270-296}, year={2002}, doi={10.1016/j.geb.2005.02.006}, number={Leibniz Center for Research in Computer Science TR-2002-15, April 2002}, archivePrefix={arXiv}, eprint={cs/0202015}, primaryClass={cs.GT} }
lehmann2002combinatorial
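A hedged sketch of the kind of greedy allocation the abstract of cs/0202015 mentions: goods are considered one at a time and each is handed to the bidder with the largest marginal gain, a rule that yields a 1/2-approximation of the optimal welfare for monotone submodular valuations. The value-oracle interface, the fixed ordering of goods, and the toy valuations below are illustrative choices, not the paper's exact presentation.

```python
def greedy_submodular_allocation(goods, bidders, value):
    """value(bidder, bundle) -> float; assumed monotone and submodular.
    Each good is assigned to the bidder with the largest marginal gain."""
    bundles = {b: frozenset() for b in bidders}
    for g in goods:
        best = max(bidders,
                   key=lambda b: value(b, bundles[b] | {g}) - value(b, bundles[b]))
        bundles[best] = bundles[best] | {g}
    return bundles

# Toy example with unit-demand (hence submodular) valuations.
prices = {"alice": {"x": 3, "y": 1}, "bob": {"x": 2, "y": 2}}
val = lambda b, S: max((prices[b][g] for g in S), default=0)
print(greedy_submodular_allocation(["x", "y"], ["alice", "bob"], val))
```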
arxiv-670383
cs/0202016
Linear Programming helps solving large multi-unit combinatorial auctions
<|reference_start|>Linear Programming helps solving large multi-unit combinatorial auctions: Previous works suggested the use of Branch and Bound techniques for finding the optimal allocation in (multi-unit) combinatorial auctions. They remarked that Linear Programming could provide a good upper-bound to the optimal allocation, but they went on using lighter and less tight upper-bound heuristics, on the ground that LP was too time-consuming to be used repetitively to solve large combinatorial auctions. We present the results of extensive experiments solving large (multi-unit) combinatorial auctions generated according to distributions proposed by different researchers. Our surprising conclusion is that Linear Programming is worth using. Investing almost all of one's computing time in using LP to bound from above the value of the optimal solution in order to prune aggressively pays off. We present a way to save on the number of calls to the LP routine and experimental results comparing different heuristics for choosing the bid to be considered next. Those results show that the ordering based on the square root of the size of the bids that was shown to be theoretically optimal in a previous paper by the authors performs surprisingly better than others in practice. Choosing to deal first with the bid with largest coefficient (typically 1) in the optimal solution of the relaxed LP problem, is also a good choice. The gap between the lower bound provided by greedy heuristics and the upper bound provided by LP is typically small and pruning is therefore extensive. For most distributions, auctions of a few hundred goods among a few thousand bids can be solved in practice. All experiments were run on a PC under Matlab.<|reference_end|>
arxiv
@article{gonen2002linear, title={Linear Programming helps solving large multi-unit combinatorial auctions}, author={Rica Gonen and Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0202016}, year={2002}, number={Leibniz Center for Research in Computer Science TR-2001-8}, archivePrefix={arXiv}, eprint={cs/0202016}, primaryClass={cs.GT cs.AI} }
gonen2002linear
arxiv-670384
cs/0202017
Truth Revelation in Approximately Efficient Combinatorial Auctions
<|reference_start|>Truth Revelation in Approximately Efficient Combinatorial Auctions: Some important classical mechanisms considered in Microeconomics and Game Theory require the solution of a difficult optimization problem. This is true of mechanisms for combinatorial auctions, which have in recent years assumed practical importance, and in particular of the gold standard for combinatorial auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these mechanisms - in particular, their truth revelation properties - assumes that the optimization problems are solved precisely. In reality, these optimization problems can usually be solved only in an approximate fashion. We investigate the impact on such mechanisms of replacing exact solutions by approximate ones. Specifically, we look at a particular greedy optimization method. We show that the GVA payment scheme does not provide for a truth revealing mechanism. We introduce another scheme that does guarantee truthfulness for a restricted class of players. We demonstrate the latter property by identifying natural properties for combinatorial auctions and showing that, for our restricted class of players, they imply that truthful strategies are dominant. Those properties have applicability beyond the specific auction studied.<|reference_end|>
arxiv
@article{lehmann2002truth, title={Truth Revelation in Approximately Efficient Combinatorial Auctions}, author={Daniel Lehmann, Liadan Ita O'Callaghan and Yoav Shoham}, journal={Journal of the ACM Vol. 49, No. 5, September 2002, pp. 577-602}, year={2002}, number={Stanford University CS-TN-99-88}, archivePrefix={arXiv}, eprint={cs/0202017}, primaryClass={cs.GT} }
lehmann2002truth
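A hedged sketch of a greedy allocation rule of the sort discussed in cs/0202017 and cs/0202016: bids are ranked by value divided by the square root of bundle size and accepted greedily while the goods they ask for remain free. The payment rule needed for the truthfulness result (a critical-value scheme) is not reproduced here, and the bid data layout is an illustrative assumption.

```python
import math

def greedy_by_sqrt_size(bids):
    """bids: list of (bidder, bundle, value) with bundle a set of goods.
    Rank by value / sqrt(|bundle|), then accept greedily while goods are free."""
    ranked = sorted(bids, key=lambda b: b[2] / math.sqrt(len(b[1])), reverse=True)
    taken, winners = set(), []
    for bidder, bundle, value in ranked:
        if taken.isdisjoint(bundle):
            winners.append((bidder, bundle, value))
            taken |= bundle
    return winners

bids = [("a", {1, 2}, 10.0), ("b", {2, 3}, 8.0), ("c", {3}, 6.0)]
print(greedy_by_sqrt_size(bids))
```

On the toy input, bid "a" ranks highest, "c" fits alongside it, and "b" is rejected for conflicting on good 2, illustrating how the square-root ranking trades bundle size against value.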
arxiv-670385
cs/0202018
Nonmonotonic Logics and Semantics
<|reference_start|>Nonmonotonic Logics and Semantics: Tarski gave a general semantics for deductive reasoning: a formula a may be deduced from a set A of formulas iff a holds in all models in which each of the elements of A holds. A more liberal semantics has been considered: a formula a may be deduced from a set A of formulas iff a holds in all of the "preferred" models in which all the elements of A hold. Shoham proposed that the notion of "preferred" models be defined by a partial ordering on the models of the underlying language. A more general semantics is described in this paper, based on a set of natural properties of choice functions. This semantics is here shown to be equivalent to a semantics based on comparing the relative "importance" of sets of models, by what amounts to a qualitative probability measure. The consequence operations defined by the equivalent semantics are then characterized by a weakening of Tarski's properties in which the monotonicity requirement is replaced by three weaker conditions. Classical propositional connectives are characterized by natural introduction-elimination rules in a nonmonotonic setting. Even in the nonmonotonic setting, one obtains classical propositional logic, thus showing that monotonicity is not required to justify classical propositional connectives.<|reference_end|>
arxiv
@article{lehmann2002nonmonotonic, title={Nonmonotonic Logics and Semantics}, author={Daniel Lehmann}, journal={Journal of Logic and Computation, Vol. 11 No.2, pp.229-256 2001}, year={2002}, number={Leibniz Center for Research in Computer Science TR-98-6}, archivePrefix={arXiv}, eprint={cs/0202018}, primaryClass={cs.AI cs.LO math.LO} }
lehmann2002nonmonotonic
arxiv-670386
cs/0202019
Hypernets -- Good (G)news for Gnutella
<|reference_start|>Hypernets -- Good (G)news for Gnutella: Criticism of Gnutella network scalability has rested on the bandwidth attributes of the original interconnection topology: a Cayley tree. Trees, in general, are known to have lower aggregate bandwidth than higher dimensional topologies e.g., hypercubes, meshes and tori. Gnutella was intended to support thousands to millions of peers. Studies of interconnection topologies in the literature, however, have focused on hardware implementations which are limited by cost to a few thousand nodes. Since the Gnutella network is virtual, hyper-topologies are relatively unfettered by such constraints. We present performance models for several plausible hyper-topologies and compare their query throughput up to millions of peers. The virtual hypercube and the virtual hypertorus are shown to offer near linear scalability subject to the number of peer TCP/IP connections that can be simultaneously kept open.<|reference_end|>
arxiv
@article{gunther2002hypernets, title={Hypernets -- Good (G)news for Gnutella}, author={N. J. Gunther}, journal={arXiv preprint arXiv:cs/0202019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202019}, primaryClass={cs.PF cs.DC cs.IR cs.NI} }
gunther2002hypernets
arxiv-670387
cs/0202020
The Mysterious Optimality of Naive Bayes: Estimation of the Probability in the System of "Classifiers"
<|reference_start|>The Mysterious Optimality of Naive Bayes: Estimation of the Probability in the System of "Classifiers": Bayes classifiers are widely used for recognition, identification and knowledge discovery, with applications in, for example, image processing, medicine and chemistry (QSAR). Yet, somewhat mysteriously, the Naive Bayes classifier usually gives very good recognition results and cannot be improved considerably by more complex Bayes classifier models. We present a simple proof of the optimality of the Naive Bayes classifier that explains this interesting fact. The derivation in the current paper is based on arXiv:cs/0202020v1.<|reference_end|>
arxiv
@article{kupervasser2002the, title={The Mysterious Optimality of Naive Bayes: Estimation of the Probability in the System of "Classifiers"}, author={Oleg Kupervasser, Alexsander Vardy}, journal={Pattern Recognition and Image Analysis, 2014, Vol. 24, No. 1}, year={2002}, doi={10.1134/S1054661814010088}, archivePrefix={arXiv}, eprint={cs/0202020}, primaryClass={cs.CV cs.AI} }
kupervasser2002the
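To make the conditional-independence assumption behind the Naive Bayes classifier discussed above concrete, here is a minimal sketch over binary features with Laplace smoothing. The toy data, labels and function names are invented for illustration and are not from the paper.

```python
from collections import defaultdict
from math import log

def train_naive_bayes(samples):
    """samples: list of (feature_tuple, label) with binary features."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))  # (label, index) -> {value: count}
    for feats, label in samples:
        label_counts[label] += 1
        for i, v in enumerate(feats):
            feat_counts[(label, i)][v] += 1
    return label_counts, feat_counts

def predict(model, feats):
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = log(n / total)  # log prior
        for i, v in enumerate(feats):
            counts = feat_counts[(label, i)]
            # Per-feature likelihood, multiplied as if features were independent;
            # Laplace smoothing over the two binary values.
            lp += log((counts[v] + 1) / (n + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

data = [((1, 0, 1), "spam"), ((1, 1, 1), "spam"), ((0, 0, 0), "ham"), ((0, 1, 0), "ham")]
model = train_naive_bayes(data)
print(predict(model, (1, 1, 1)))  # -> "spam"
```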
arxiv-670388
cs/0202021
Nonmonotonic Reasoning, Preferential Models and Cumulative Logics
<|reference_start|>Nonmonotonic Reasoning, Preferential Models and Cumulative Logics: Many systems that exhibit nonmonotonic behavior have already been described and studied in the literature. The general notion of nonmonotonic reasoning, though, has almost always been described only negatively, by the property it does not enjoy, i.e. monotonicity. We study here general patterns of nonmonotonic reasoning and try to isolate properties that could help us map the field of nonmonotonic reasoning by reference to positive properties. We concentrate on a number of families of nonmonotonic consequence relations, defined in the style of Gentzen. Both proof-theoretic and semantic points of view are developed in parallel. The former point of view was pioneered by D. Gabbay, while the latter has been advocated by Y. Shoham. Five such families are defined and characterized by representation theorems, relating the two points of view. One of the families of interest, that of preferential relations, turns out to have been studied by E. Adams. The "preferential" models proposed here are a much stronger tool than Adams' probabilistic semantics. The basic language used in this paper is that of propositional logic. The extension of our results to first order predicate calculi and the study of the computational complexity of the decision problems described in this paper will be treated in another paper.<|reference_end|>
arxiv
@article{kraus2002nonmonotonic, title={Nonmonotonic Reasoning, Preferential Models and Cumulative Logics}, author={Sarit Kraus, Daniel Lehmann and Menachem Magidor}, journal={Journal of Artificial Intelligence, Vol. 44 Nos. 1-2 (July 1990) pp. 167-207}, year={2002}, number={Leibniz Center for Research in Computer Science TR-88-15}, archivePrefix={arXiv}, eprint={cs/0202021}, primaryClass={cs.AI} }
kraus2002nonmonotonic
arxiv-670389
cs/0202022
What does a conditional knowledge base entail?
<|reference_start|>What does a conditional knowledge base entail?: This paper presents a logical approach to nonmonotonic reasoning based on the notion of a nonmonotonic consequence relation. A conditional knowledge base, consisting of a set of conditional assertions of the type "if ... then ...", represents the explicit defeasible knowledge an agent has about the way the world generally behaves. We look for a plausible definition of the set of all conditional assertions entailed by a conditional knowledge base. In a previous paper, S. Kraus and the authors defined and studied "preferential" consequence relations. They noticed that not all preferential relations could be considered as reasonable inference procedures. This paper studies a more restricted class of consequence relations, "rational" relations. It is argued that any reasonable nonmonotonic inference procedure should define a rational relation. It is shown that the rational relations are exactly those that may be represented by a "ranked" preferential model, or by a (non-standard) probabilistic model. The rational closure of a conditional knowledge base is defined and shown to provide an attractive answer to the question of the title. Global properties of this closure operation are proved: it is a cumulative operation. It is also computationally tractable. This paper assumes the underlying language is propositional.<|reference_end|>
arxiv
@article{lehmann2002what, title={What does a conditional knowledge base entail?}, author={Daniel Lehmann and Menachem Magidor}, journal={Journal of Artificial Intelligence, Vol. 55 no.1 (May 1992) pp. 1-60. Erratum in Vol. 68 (1994) p. 411}, year={2002}, number={Leibniz Center for Research in Computer Science TR-88-16 and TR-90-10}, archivePrefix={arXiv}, eprint={cs/0202022}, primaryClass={cs.AI} }
lehmann2002what
arxiv-670390
cs/0202023
Expected Qualitative Utility Maximization
<|reference_start|>Expected Qualitative Utility Maximization: A model for decision making that generalizes Expected Utility Maximization is presented. This model, Expected Qualitative Utility Maximization, encompasses the Maximin criterion. It relaxes both the Independence and the Continuity postulates. Its main ingredient is the definition of a qualitative order on nonstandard models of the real numbers and the consideration of nonstandard utilities. Expected Qualitative Utility Maximization is characterized by an original weakening of von Neumann-Morgenstern's postulates. Subjective probabilities may be defined from those weakened postulates, as Anscombe and Aumann did from the original postulates. Subjective probabilities are numbers, not matrices as in the Subjective Expected Lexicographic Utility approach. JEL no.: D81 Keywords: Utility Theory, Non-Standard Utilities, Qualitative Decision Theory<|reference_end|>
arxiv
@article{lehmann2002expected, title={Expected Qualitative Utility Maximization}, author={Daniel Lehmann}, journal={Games and Economic Behavior, Vol. 35, No. 1-2 (April 2001) pp. 54-79}, year={2002}, number={Leibniz Center for Research in Computer Science TR-97-15}, archivePrefix={arXiv}, eprint={cs/0202023}, primaryClass={cs.GT} }
lehmann2002expected
arxiv-670391
cs/0202024
A note on Darwiche and Pearl
<|reference_start|>A note on Darwiche and Pearl: It is shown that Darwiche and Pearl's postulates imply an interesting property, not noticed by the authors.<|reference_end|>
arxiv
@article{lehmann2002a, title={A note on Darwiche and Pearl}, author={Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0202024}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202024}, primaryClass={cs.AI} }
lehmann2002a
arxiv-670392
cs/0202025
Distance Semantics for Belief Revision
<|reference_start|>Distance Semantics for Belief Revision: A vast and interesting family of natural semantics for belief revision is defined. Suppose one is given a distance d between any two models. One may then define the revision of a theory K by a formula a as the theory defined by the set of all those models of a that are closest, by d, to the set of models of K. This family is characterized by a set of rationality postulates that extends the AGM postulates. The new postulates describe properties of iterated revisions.<|reference_end|>
arxiv
@article{lehmann2002distance, title={Distance Semantics for Belief Revision}, author={Daniel Lehmann, Menachem Magidor and Karl Schlechta}, journal={Journal of Symbolic Logic, Vol. 66 No.1 (March 2001) pp. 295-317}, year={2002}, number={Leibniz Center for Research in Computer Science TR-98-10}, archivePrefix={arXiv}, eprint={cs/0202025}, primaryClass={cs.AI} }
lehmann2002distance
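The revision operation described in the abstract above (keep the models of a that are closest, by a distance d, to the models of K) can be illustrated on a small propositional example. The choice of Hamming distance between valuations below is an illustrative assumption; the paper's framework allows an arbitrary distance between models.

```python
from itertools import product

def models(formula, atoms):
    """All valuations (dicts atom -> bool) satisfying `formula` (a Python predicate)."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(m1, m2):
    # Number of atoms on which the two valuations disagree.
    return sum(m1[a] != m2[a] for a in m1)

def revise(k_models, a_models):
    """Models of the revision: models of `a` at minimal distance from the set of models of K."""
    if not k_models:
        return a_models
    def dist_to_k(m):
        return min(hamming(m, km) for km in k_models)
    best = min(dist_to_k(m) for m in a_models)
    return [m for m in a_models if dist_to_k(m) == best]

atoms = ["p", "q"]
K = models(lambda v: v["p"] and v["q"], atoms)   # K: p & q
A = models(lambda v: not v["p"], atoms)          # a: ~p
print(revise(K, A))  # the closest ~p-world to the p&q-world keeps q true
```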
arxiv-670393
cs/0202026
Preferred History Semantics for Iterated Updates
<|reference_start|>Preferred History Semantics for Iterated Updates: We give a semantics to iterated update by a preference relation on possible developments. An iterated update is a sequence of formulas, giving (incomplete) information about successive states of the world. A development is a sequence of models, describing a possible trajectory through time. We assume a principle of inertia and prefer those developments, which are compatible with the information, and avoid unnecessary changes. The logical properties of the updates defined in this way are considered, and a representation result is proved.<|reference_end|>
arxiv
@article{berger2002preferred, title={Preferred History Semantics for Iterated Updates}, author={Shai Berger, Daniel Lehmann and Karl Schlechta}, journal={Journal of Logic and Computation, Vol. 9 no. 6 (1999) pp. 817-833}, year={2002}, number={Leibniz Center for Research in Computer SCience TR-98-11 (July 1998)}, archivePrefix={arXiv}, eprint={cs/0202026}, primaryClass={cs.AI} }
berger2002preferred
arxiv-670394
cs/0202027
BSML: A Binding Schema Markup Language for Data Interchange in Problem Solving Environments (PSEs)
<|reference_start|>BSML: A Binding Schema Markup Language for Data Interchange in Problem Solving Environments (PSEs): We describe a binding schema markup language (BSML) for describing data interchange between scientific codes. Such a facility is an important constituent of scientific problem solving environments (PSEs). BSML is designed to integrate with a PSE or application composition system that views model specification and execution as a problem of managing semistructured data. The data interchange problem is addressed by three techniques for processing semistructured data: validation, binding, and conversion. We present BSML and describe its application to a PSE for wireless communications system design.<|reference_end|>
arxiv
@article{verstak2002bsml:, title={BSML: A Binding Schema Markup Language for Data Interchange in Problem Solving Environments (PSEs)}, author={Alex Verstak, Naren Ramakrishnan, Layne T. Watson, Jian He, Clifford A. Shaffer, Kyung Kyoon Bae, Jing Jiang, William H. Tranter, Theodore S. Rappaport}, journal={arXiv preprint arXiv:cs/0202027}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202027}, primaryClass={cs.CE cs.SE} }
verstak2002bsml:
arxiv-670395
cs/0202028
Classes of service under perfect competition and technological change: a model for the dynamics of the Internet?
<|reference_start|>Classes of service under perfect competition and technological change: a model for the dynamics of the Internet?: Certain services may be provided in a continuous, one-dimensional, ordered range of different qualities and a customer requiring a service of quality q can only be offered a quality superior or equal to q. Only a discrete set of different qualities will be offered, and a service provider will provide the same service (of fixed quality b) to all customers requesting qualities of service inferior or equal to b. Assuming all services (of quality b) are priced identically, a monopolist will choose the qualities of service and the prices that maximize profit but, under perfect competition, a service provider will choose the (inferior) quality of service that can be priced at the lowest price. Assuming significant economies of scale, two fundamentally different regimes are possible: either a number of different classes of service are offered (DC regime), or a unique class of service offers an unbounded quality of service (UC regime). The DC regime appears in one of two sub-regimes: one, BDC, in which a finite number of classes is offered, the qualities of service offered are bounded and requests for high-quality services are not met, or UDC in which an infinite number of classes of service are offered and every request is met. The types of the demand curve and of the economies of scale, not the pace of technological change, determine the regime and the class boundaries. The price structure in the DC regime obeys very general laws.<|reference_end|>
arxiv
@article{lehmann2002classes, title={Classes of service under perfect competition and technological change: a model for the dynamics of the Internet?}, author={Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0202028}, year={2002}, number={Leibniz Center for Research in Computer Science: TR-2000-42}, archivePrefix={arXiv}, eprint={cs/0202028}, primaryClass={cs.GT} }
lehmann2002classes
arxiv-670396
cs/0202029
Nonstandard numbers for qualitative decision making
<|reference_start|>Nonstandard numbers for qualitative decision making: The consideration of nonstandard models of the real numbers and the definition of a qualitative ordering on those models provides a generalization of the principle of maximization of expected utility. It enables the decider to assign probabilities of different orders of magnitude to different events or to assign utilities of different orders of magnitude to different outcomes. The properties of this generalized notion of rationality are studied in the frameworks proposed by von Neumann and Morgenstern and later by Anscombe and Aumann. It is characterized by an original weakening of their postulates in two different situations: nonstandard probabilities and standard utilities on one hand and standard probabilities and nonstandard utilities on the other hand. This weakening concerns both Independence and Continuity. It is orthogonal with the weakening proposed by lexicographic orderings.<|reference_end|>
arxiv
@article{lehmann2002nonstandard, title={Nonstandard numbers for qualitative decision making}, author={Daniel Lehmann}, journal={Proceedings of the 7th Conference on Theoretical Aspects of Reasoning and Knowledge, I. Gilboa ed., Evanston Ill., July 1998, pp. 161-174}, year={2002}, number={Leibniz Center for Research in Computer Science: TR-97-15}, archivePrefix={arXiv}, eprint={cs/0202029}, primaryClass={cs.GT} }
lehmann2002nonstandard
arxiv-670397
cs/0202030
Generalized Qualitative Probability: Savage revisited
<|reference_start|>Generalized Qualitative Probability: Savage revisited: Preferences among acts are analyzed in the style of L. Savage, but as partially ordered. The rationality postulates considered are weaker than Savage's on three counts. The Sure Thing Principle is derived in this setting. The postulates are shown to lead to a characterization of generalized qualitative probability that includes and blends both traditional qualitative probability and the ranked structures used in logical approaches.<|reference_end|>
arxiv
@article{lehmann2002generalized, title={Generalized Qualitative Probability: Savage revisited}, author={Daniel Lehmann}, journal={Twelfth Conference on Uncertainty in Artificial Intelligence, E. Horvitz and F. Jensen eds., Morgan Kaufmann, pp. 381-388, Portland, Oregon, August 1996}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202030}, primaryClass={cs.GT cs.AI} }
lehmann2002generalized
arxiv-670398
cs/0202031
Nonmonotonic inference operations
<|reference_start|>Nonmonotonic inference operations: A. Tarski proposed the study of infinitary consequence operations as the central topic of mathematical logic. He considered monotonicity to be a property of all such operations. In this paper, we weaken the monotonicity requirement and consider more general operations, inference operations. These operations describe the nonmonotonic logics both humans and machines seem to be using when inferring defeasible information from incomplete knowledge. We single out a number of interesting families of inference operations. This study of infinitary inference operations is inspired by the results of Kraus, Lehmann and Magidor on finitary nonmonotonic operations, but this paper is self-contained.<|reference_end|>
arxiv
@article{freund2002nonmonotonic, title={Nonmonotonic inference operations}, author={Michael Freund and Daniel Lehmann}, journal={Bulletin of the IGPL, Vol. 1 no. 1 (July 1993), pp. 23-68}, year={2002}, number={Leibniz Center for Research in Computer Science: TR-92-2}, archivePrefix={arXiv}, eprint={cs/0202031}, primaryClass={cs.AI} }
freund2002nonmonotonic
arxiv-670399
cs/0202032
Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and Bound Heuristics
<|reference_start|>Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and Bound Heuristics: Finding optimal solutions for multi-unit combinatorial auctions is a hard problem and finding approximations to the optimal solution is also hard. We investigate the use of Branch-and-Bound techniques: they require both a way to bound from above the value of the best allocation and a good criterion to decide which bids are to be tried first. Different methods for efficiently bounding from above the value of the best allocation are considered. Theoretical original results characterize the best approximation ratio and the ordering criterion that provides it. We suggest to use this criterion.<|reference_end|>
arxiv
@article{gonen2002optimal, title={Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and Bound Heuristics}, author={Rica Gonen and Daniel Lehmann}, journal={Second ACM Conference on Electronic Commerce (EC'00) Minneapolis, Minnesota, October 2000, pp. 13-20}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202032}, primaryClass={cs.GT cs.AI} }
gonen2002optimal
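As a rough illustration of the branch-and-bound approach mentioned above, here is a minimal skeleton for the single-unit combinatorial case (the paper treats multi-unit auctions). The upper bound used here (sum of values of the remaining non-conflicting bids) and the ordering criterion (value per item) are simple stand-ins assumed for the example, not the bounds or the ordering criterion analyzed in the paper.

```python
def branch_and_bound(bids):
    """bids: list of (value, frozenset_of_items). Returns (best_value, indices_of_winning_bids)."""
    # Ordering criterion (illustrative): try high value-per-item bids first.
    order = sorted(range(len(bids)), key=lambda i: bids[i][0] / len(bids[i][1]), reverse=True)
    best = [0.0, []]  # incumbent value and allocation

    def upper_bound(idx, value, taken):
        # Optimistic bound: assume every remaining non-conflicting bid can still be won.
        return value + sum(bids[j][0] for j in order[idx:] if taken.isdisjoint(bids[j][1]))

    def recurse(idx, value, taken, chosen):
        if value > best[0]:
            best[0], best[1] = value, list(chosen)
        if idx == len(order) or upper_bound(idx, value, taken) <= best[0]:
            return  # prune: even an optimistic completion cannot beat the incumbent
        j = order[idx]
        v, items = bids[j]
        if taken.isdisjoint(items):                        # branch 1: accept bid j
            recurse(idx + 1, value + v, taken | items, chosen + [j])
        recurse(idx + 1, value, taken, chosen)             # branch 2: reject bid j

    recurse(0, 0.0, frozenset(), [])
    return best[0], best[1]

bids = [(10.0, frozenset({1, 2})), (8.0, frozenset({2, 3})), (5.0, frozenset({3}))]
print(branch_and_bound(bids))   # -> (15.0, [0, 2]): bundles {1, 2} and {3}
```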
arxiv-670400
cs/0202033
The logical meaning of Expansion
<|reference_start|>The logical meaning of Expansion: The Expansion property considered by researchers in Social Choice is shown to correspond to a logical property of nonmonotonic consequence relations that is the {\em pure}, i.e., not involving connectives, version of a previously known weak rationality condition. The assumption that the union of two definable sets of models is definable is needed for the soundness part of the result.<|reference_end|>
arxiv
@article{lehmann2002the, title={The logical meaning of Expansion}, author={Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0202033}, year={2002}, archivePrefix={arXiv}, eprint={cs/0202033}, primaryClass={cs.AI} }
lehmann2002the