corpus_id: stringlengths 7 to 12
paper_id: stringlengths 9 to 16
title: stringlengths 1 to 261
abstract: stringlengths 70 to 4.02k
source: stringclasses, 1 value
bibtex: stringlengths 208 to 20.9k
citation_key: stringlengths 6 to 100
arxiv-670201
cs/0110017
The Increased Need For FCC Merger Review In A Networked World
<|reference_start|>The Increased Need For FCC Merger Review In A Networked World: Recently, the FCC announced a new standard for review in mergers. Under the new standard, mass media mergers that comply with existing rules will automatically receive approval, while those that do not will receive a more searching review. Common carrier mergers, however, will continue to receive the 4-part test established in Bell Atlantic/Nynex. The new standard fails to take into account the complexities of the emerging, converged networked world, and is essentially obsolete on arrival. Looking to those areas where Congress has required an additional public interest review of mergers, a pattern emerges. The emergence of vast, vertically integrated networks of content and conduit fits the historic pattern of areas requiring public interest review and reinforces the need for increased, rather than decreased, merger review.<|reference_end|>
arxiv
@article{feld2001the, title={The Increased Need For FCC Merger Review In A Networked World}, author={Harold Feld}, journal={arXiv preprint arXiv:cs/0110017}, year={2001}, number={TPRC-2001-052}, archivePrefix={arXiv}, eprint={cs/0110017}, primaryClass={cs.CY} }
feld2001the
arxiv-670202
cs/0110018
ENUM: The Collision of Telephony and DNS Policy
<|reference_start|>ENUM: The Collision of Telephony and DNS Policy: ENUM marks either the convergence or collision of the public telephone network with the Internet. ENUM is an innovation in the domain name system (DNS). It starts with numerical domain names that are used to query DNS name servers. The servers respond with address information found in DNS records. This can be telephone numbers, email addresses, fax numbers, SIP addresses, or other information. The concept is to use a single number in order to obtain a plethora of contact information. By convention, the Internet Engineering Task Force (IETF) ENUM Working Group determined that an ENUM number would be the same numerical string as a telephone number. In addition, the assignee of an ENUM number would be the assignee of that telephone number. But ENUM could work with any numerical string or, in fact, any domain name. The IETF is already working on using E.212 numbers with ENUM. [Abridged]<|reference_end|>
arxiv
@article{cannon2001enum:, title={ENUM: The Collision of Telephony and DNS Policy}, author={Robert Cannon}, journal={arXiv preprint arXiv:cs/0110018}, year={2001}, number={TPRC-2001-XXX}, archivePrefix={arXiv}, eprint={cs/0110018}, primaryClass={cs.GL} }
cannon2001enum:
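As an editorial aside to the ENUM record above: the convention the abstract describes (reusing an E.164 telephone number as a DNS domain name) follows RFC 2916/6116, where the digits are reversed, dot-separated, and placed under e164.arpa, and the resulting name is queried for NAPTR records carrying contact URIs. A minimal sketch of that mapping follows; the function name `e164_to_enum_domain` is invented for this illustration.

```python
def e164_to_enum_domain(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 telephone number to its ENUM domain name (RFC 2916/6116)."""
    digits = [c for c in number if c.isdigit()]  # drop '+', spaces, and dashes
    digits.reverse()                             # least-significant digit first
    return ".".join(digits) + "." + suffix

# Example: +46-8-9761234 -> 4.3.2.1.6.7.9.8.6.4.e164.arpa
print(e164_to_enum_domain("+46-8-9761234"))
```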
arxiv-670203
cs/0110019
New approach for network monitoring and intrusion detection
<|reference_start|>New approach for network monitoring and intrusion detection: An approach for describing network behavior in terms of numerical, time-dependent functions of the protocol parameters is suggested. This provides a basis for applying methods of mathematical and theoretical physics to information flow analysis on the network and to the extraction of patterns of typical network behavior. The information traffic can be described as a trajectory in a multi-dimensional parameter-time space with dimension of about 10-12. Based on this study, some algorithms for the proposed intrusion detection system are discussed.<|reference_end|>
arxiv
@article{gudkov2001new, title={New approach for network monitoring and intrusion detection}, author={Vladimir Gudkov and Joseph E. Johnson}, journal={arXiv preprint arXiv:cs/0110019}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110019}, primaryClass={cs.CR} }
gudkov2001new
arxiv-670204
cs/0110020
Structuring Business Metadata in Data Warehouse Systems for Effective Business Support
<|reference_start|>Structuring Business Metadata in Data Warehouse Systems for Effective Business Support: Large organizations today are being served by different types of data processing and information systems, ranging from operational (OLTP) systems and data warehouse systems to data mining and business intelligence applications. It is important to create an integrated repository of what these systems contain and do in order to use them collectively and effectively. The repository contains metadata of the source systems and the data warehouse, and also the business metadata. Decision support and business analysis require extensive and in-depth understanding of business entities, tasks, rules and the environment. The purpose of business metadata is to provide this understanding. Realizing the importance of metadata, many standardization efforts have been initiated to define metadata models. In trying to define an integrated metadata and information system for a banking application, we discovered some important limitations or inadequacies of the business metadata proposals. They relate to providing integrated and flexible interoperability and navigation between metadata and data, and to the important issue of systematically handling temporal characteristics and evolution of the metadata itself. In this paper, we study the issue of structuring business metadata so that it can provide a context for business management and decision support when integrated with data warehousing. We define a temporal object-oriented business metadata model, and relate it both to the technical metadata and to the data warehouse. We also define ways of accessing and navigating metadata in conjunction with data.<|reference_end|>
arxiv
@article{sarda2001structuring, title={Structuring Business Metadata in Data Warehouse Systems for Effective Business Support}, author={N. L. Sarda}, journal={arXiv preprint arXiv:cs/0110020}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110020}, primaryClass={cs.DB} }
sarda2001structuring
arxiv-670205
cs/0110021
Alife Model of Evolutionary Emergence of Purposeful Adaptive Behavior
<|reference_start|>Alife Model of Evolutionary Emergence of Purposeful Adaptive Behavior: The process of evolutionary emergence of purposeful adaptive behavior is investigated by means of computer simulations. The model proposed implies that there is an evolving population of simple agents, which have two natural needs: energy and reproduction. Any need is characterized quantitatively by a corresponding motivation. Motivations determine goal-directed behavior of agents. The model demonstrates that purposeful behavior does emerge in the simulated evolutionary processes. Emergence of purposefulness is accompanied by origin of a simple hierarchy in the control system of agents.<|reference_end|>
arxiv
@article{burtsev2001alife, title={Alife Model of Evolutionary Emergence of Purposeful Adaptive Behavior}, author={Mikhail S. Burtsev, Vladimir G. Redko, Roman V. Gusarev}, journal={arXiv preprint arXiv:cs/0110021}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110021}, primaryClass={cs.NE} }
burtsev2001alife
arxiv-670206
cs/0110022
Mixed-Initiative Interaction = Mixed Computation
<|reference_start|>Mixed-Initiative Interaction = Mixed Computation: We show that partial evaluation can be usefully viewed as a programming model for realizing mixed-initiative functionality in interactive applications. Mixed-initiative interaction between two participants is one where the parties can take turns at any time to change and steer the flow of interaction. We concentrate on the facet of mixed-initiative referred to as `unsolicited reporting' and demonstrate how out-of-turn interactions by users can be modeled by `jumping ahead' to nested dialogs (via partial evaluation). Our approach permits the view of dialog management systems in terms of their native support for staging and simplifying interactions; we characterize three different voice-based interaction technologies using this viewpoint. In particular, we show that the built-in form interpretation algorithm (FIA) in the VoiceXML dialog management architecture is actually a (well disguised) combination of an interpreter and a partial evaluator.<|reference_end|>
arxiv
@article{ramakrishnan2001mixed-initiative, title={Mixed-Initiative Interaction = Mixed Computation}, author={Naren Ramakrishnan, Robert Capra, and Manuel A. Perez-Quinones}, journal={arXiv preprint arXiv:cs/0110022}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110022}, primaryClass={cs.PL cs.HC} }
ramakrishnan2001mixed-initiative
arxiv-670207
cs/0110023
Set Unification
<|reference_start|>Set Unification: The unification problem in algebras capable of describing sets has been tackled, directly or indirectly, by many researchers and it finds important applications in various research areas--e.g., deductive databases, theorem proving, static analysis, rapid software prototyping. The various solutions proposed are spread across a large literature. In this paper we provide a uniform presentation of unification of sets, formalizing it at the level of set theory. We address the problem of deciding existence of solutions at an abstract level. This also provides the ability to classify different types of set unification problems. Unification algorithms are uniformly proposed to solve the unification problem in each of these classes. The algorithms presented are partly drawn from the literature--and properly revisited and analyzed--and partly novel proposals. In particular, we present a new goal-driven algorithm for general ACI1 unification and a new simpler algorithm for general (Ab)(Cl) unification.<|reference_end|>
arxiv
@article{dovier2001set, title={Set Unification}, author={Agostino Dovier, Enrico Pontelli and Gianfranco Rossi}, journal={arXiv preprint arXiv:cs/0110023}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110023}, primaryClass={cs.LO cs.AI cs.SC} }
dovier2001set
arxiv-670208
cs/0110024
Pretty-Simple Password-Authenticated Key-Exchange Protocol
<|reference_start|>Pretty-Simple Password-Authenticated Key-Exchange Protocol: We propose a pretty simple password-authenticated key-exchange protocol which is based on the difficulty of solving the DDH problem. It has the following advantages: (1) Both $y_1$ and $y_2$ in our protocol are independent and thus they can be pre-computed and sent independently. This speeds up the protocol. (2) Clients and servers can use almost the same algorithm. This reduces the implementation costs without opening the protocol to replay attacks or to the abuse of entities as oracles.<|reference_end|>
arxiv
@article{kobara2001pretty-simple, title={Pretty-Simple Password-Authenticated Key-Exchange Protocol}, author={Kazukuni Kobara and Hideki Imai}, journal={arXiv preprint arXiv:cs/0110024}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110024}, primaryClass={cs.CR} }
kobara2001pretty-simple
arxiv-670209
cs/0110025
Recognizing When Heuristics Can Approximate Minimum Vertex Covers Is Complete for Parallel Access to NP
<|reference_start|>Recognizing When Heuristics Can Approximate Minimum Vertex Covers Is Complete for Parallel Access to NP: For both the edge deletion heuristic and the maximum-degree greedy heuristic, we study the problem of recognizing those graphs for which that heuristic can approximate the size of a minimum vertex cover within a constant factor of r, where r is a fixed rational number. Our main results are that these problems are complete for the class of problems solvable via parallel access to NP. To achieve these main results, we also show that the restriction of the vertex cover problem to those graphs for which either of these heuristics can find an optimal solution remains NP-hard.<|reference_end|>
arxiv
@article{hemaspaandra2001recognizing, title={Recognizing When Heuristics Can Approximate Minimum Vertex Covers Is Complete for Parallel Access to NP}, author={Edith Hemaspaandra, J\"org Rothe, and Holger Spakowski}, journal={arXiv preprint arXiv:cs/0110025}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110025}, primaryClass={cs.CC} }
hemaspaandra2001recognizing
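For readers of the record above who are unfamiliar with the maximum-degree greedy heuristic it mentions, here is a generic sketch, not drawn from the paper itself: repeatedly place a vertex of currently maximum degree into the cover and delete its incident edges until no edges remain. The function name `greedy_vertex_cover` is chosen for this illustration only.

```python
def greedy_vertex_cover(edges):
    """Maximum-degree greedy heuristic for vertex cover.

    edges: iterable of 2-element tuples (u, v).
    Returns a set of vertices covering every edge (not necessarily minimum).
    """
    remaining = {frozenset(e) for e in edges}
    cover = set()
    while remaining:
        # Count degrees over the still-uncovered edges and pick a maximum-degree vertex.
        degree = {}
        for e in remaining:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover

# Example: a star with an extra edge; the heuristic picks vertex 0 first.
print(greedy_vertex_cover([(0, 1), (0, 2), (0, 3), (2, 3)]))
```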
arxiv-670210
cs/0110026
Information retrieval in Current Research Information Systems
<|reference_start|>Information retrieval in Current Research Information Systems: In this paper we describe the requirements for research information systems and the problems which arise in the development of such systems. We show which problems could be solved by using knowledge markup technologies. An ontology for a Research Information System is offered. An architecture for collecting research data and providing access to it is described.<|reference_end|>
arxiv
@article{lopatenko2001information, title={Information retrieval in Current Research Information Systems}, author={Andrei Lopatenko}, journal={arXiv preprint arXiv:cs/0110026}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110026}, primaryClass={cs.IR cs.DL} }
lopatenko2001information
arxiv-670211
cs/0110027
Part-of-Speech Tagging with Two Sequential Transducers
<|reference_start|>Part-of-Speech Tagging with Two Sequential Transducers: We present a method of constructing and using a cascade consisting of a left- and a right-sequential finite-state transducer (FST), T1 and T2, for part-of-speech (POS) disambiguation. Compared to an HMM, this FST cascade has the advantage of significantly higher processing speed, but at the cost of slightly lower accuracy. Applications such as Information Retrieval, where the speed can be more important than accuracy, could benefit from this approach. In the process of tagging, we first assign every word a unique ambiguity class c_i that can be looked up in a lexicon encoded by a sequential FST. Every c_i is denoted by a single symbol, e.g. [ADJ_NOUN], although it represents a set of alternative tags that a given word can occur with. The sequence of the c_i of all words of one sentence is the input to our FST cascade. It is mapped by T1, from left to right, to a sequence of reduced ambiguity classes r_i. Every r_i is denoted by a single symbol, although it represents a set of alternative tags. Intuitively, T1 eliminates the less likely tags from c_i, thus creating r_i. Finally, T2 maps the sequence of r_i, from right to left, to a sequence of single POS tags t_i. Intuitively, T2 selects the most likely t_i from every r_i. The probabilities of all t_i, r_i, and c_i are used only at compile time, not at run time. They do not (directly) occur in the FSTs, but are "implicitly contained" in their structure.<|reference_end|>
arxiv
@article{kempe2001part-of-speech, title={Part-of-Speech Tagging with Two Sequential Transducers}, author={Andre Kempe}, journal={Proc. CLIN 2000, pp. 88-96, Tilburg, The Netherlands. November 3}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110027}, primaryClass={cs.CL} }
kempe2001part-of-speech
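To make the T1/T2 cascade described in the record above concrete, the following toy sketch mimics the two passes with hand-made lookup tables. The lexicon, class symbols, and transition tables are invented for this illustration; the actual system compiles such mappings, derived from corpus probabilities at compile time, into sequential finite-state transducers.

```python
# Invented toy data: words map to ambiguity classes, as in the abstract above.
LEXICON = {"the": "[DET]", "fair": "[ADJ_NOUN]", "opens": "[VERB_NOUN]"}

# T1: left to right, reduce each ambiguity class given the previous reduced class.
T1 = {
    (None, "[DET]"): "[DET]",
    ("[DET]", "[ADJ_NOUN]"): "[ADJ_NOUN]",    # still ambiguous after a determiner
    ("[ADJ_NOUN]", "[VERB_NOUN]"): "[VERB]",  # drop the unlikely NOUN reading
}

# T2: right to left, pick one tag given the tag already chosen to the right.
T2 = {
    (None, "[VERB]"): "VERB",
    ("VERB", "[ADJ_NOUN]"): "NOUN",           # before a verb, prefer the noun reading
    ("NOUN", "[DET]"): "DET",
}

def tag(words):
    classes = [LEXICON[w] for w in words]
    reduced, prev = [], None
    for c in classes:                         # first pass (T1), left to right
        prev = T1[(prev, c)]
        reduced.append(prev)
    tags, nxt = [], None
    for r in reversed(reduced):               # second pass (T2), right to left
        nxt = T2[(nxt, r)]
        tags.append(nxt)
    return list(reversed(tags))

print(tag(["the", "fair", "opens"]))  # ['DET', 'NOUN', 'VERB']
```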
arxiv-670212
cs/0110028
On Equivalence and Canonical Forms in the LF Type Theory
<|reference_start|>On Equivalence and Canonical Forms in the LF Type Theory: Decidability of definitional equality and conversion of terms into canonical form play a central role in the meta-theory of a type-theoretic logical framework. Most studies of definitional equality are based on a confluent, strongly-normalizing notion of reduction. Coquand has considered a different approach, directly proving the correctness of a practical equivalence algorithm based on the shape of terms. Neither approach appears to scale well to richer languages with unit types or subtyping, and neither directly addresses the problem of conversion to canonical form. In this paper we present a new, type-directed equivalence algorithm for the LF type theory that overcomes the weaknesses of previous approaches. The algorithm is practical, scales to richer languages, and yields a new notion of canonical form sufficient for adequate encodings of logical systems. The algorithm is proved complete by a Kripke-style logical relations argument similar to that suggested by Coquand. Crucially, both the algorithm itself and the logical relations rely only on the shapes of types, ignoring dependencies on terms.<|reference_end|>
arxiv
@article{harper2001on, title={On Equivalence and Canonical Forms in the LF Type Theory}, author={Robert Harper and Frank Pfenning}, journal={arXiv preprint arXiv:cs/0110028}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110028}, primaryClass={cs.LO} }
harper2001on
arxiv-670213
cs/0110029
How to Commission, Operate and Maintain a Large Future Accelerator Complex from Far Remote
<|reference_start|>How to Commission, Operate and Maintain a Large Future Accelerator Complex from Far Remote: A study on future large accelerators [1] has considered a facility, which is designed, built and operated by a worldwide collaboration of equal partner institutions, and which is remote from most of these institutions. The full range of operation was considered including commissioning, machine development, maintenance, trouble shooting and repair. Experience from existing accelerators confirms that most of these activities are already performed 'remotely'. The large high-energy physics experiments and astronomy projects already involve international collaborations of distant institutions. Based on this experience, the prospects for a machine operated remotely from far sites are encouraging. Experts from each laboratory would remain at their home institution but continue to participate in the operation of the machine after construction. Experts are required to be on site only during initial commissioning and for particularly difficult problems. Repairs require an on-site non-expert maintenance crew. Most of the interventions can be made without an expert and many of the rest resolved with remote assistance. There appears to be no technical obstacle to controlling an accelerator from a distance. The major challenge is to solve the complex management and communication problems.<|reference_end|>
arxiv
@article{czarapata2001how, title={How to Commission, Operate and Maintain a Large Future Accelerator Complex from Far Remote}, author={P. Czarapata (FNAL), D. Hartill (Cornell), S. Myers (CERN), S. Peggs (BNL), N. Phinney (SLAC), M. Serio (INFN), N. Toge (KEK), F. Willeke (DESY), C. Zhang (IHEP Beijing)}, journal={eConf C011127 (2001) FRBI001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110029}, primaryClass={cs.OH} }
czarapata2001how
arxiv-670214
cs/0110030
Dense point sets have sparse Delaunay triangulations
<|reference_start|>Dense point sets have sparse Delaunay triangulations: The spread of a finite set of points is the ratio between the longest and shortest pairwise distances. We prove that the Delaunay triangulation of any set of n points in R^3 with spread D has complexity O(D^3). This bound is tight in the worst case for all D = O(sqrt{n}). In particular, the Delaunay triangulation of any dense point set has linear complexity. We also generalize this upper bound to regular triangulations of k-ply systems of balls, unions of several dense point sets, and uniform samples of smooth surfaces. On the other hand, for any n and D=O(n), we construct a regular triangulation of complexity Omega(nD) whose n vertices have spread D.<|reference_end|>
arxiv
@article{erickson2001dense, title={Dense point sets have sparse Delaunay triangulations}, author={Jeff Erickson}, journal={arXiv preprint arXiv:cs/0110030}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110030}, primaryClass={cs.CG cs.DM} }
erickson2001dense
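As a small aside on the record above (not part of the paper): the spread it defines, the ratio between the longest and shortest pairwise distances of a finite point set, can be computed directly from that definition. A brute-force sketch, with the function name `spread` invented here:

```python
from itertools import combinations
from math import dist

def spread(points):
    """Ratio of the longest to the shortest pairwise distance of a finite point set."""
    distances = [dist(p, q) for p, q in combinations(points, 2)]
    return max(distances) / min(distances)

# A 2x2x2 grid in R^3: shortest distance 1, longest sqrt(3), so spread ~= 1.732.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(spread(cube))
```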
arxiv-670215
cs/0110031
Depth-3 Arithmetic Circuits for S^2_n(X) and Extensions of the Graham-Pollack Theorem
<|reference_start|>Depth-3 Arithmetic Circuits for S^2_n(X) and Extensions of the Graham-Pollack Theorem: We consider the problem of computing the second elementary symmetric polynomial S^2_n(X) using depth-three arithmetic circuits of the form "sum of products of linear forms". We consider this problem over several fields and determine EXACTLY the number of multiplication gates required. The lower bounds are proved for inhomogeneous circuits where the linear forms are allowed to have constants; the upper bounds are proved in the homogeneous model. For reals and rationals, the number of multiplication gates required is exactly n-1; in most other cases, it is \ceil{n/2}. This problem is related to the Graham-Pollack theorem in algebraic graph theory. In particular, our results answer the following question of Babai and Frankl: what is the minimum number of complete bipartite graphs required to cover each edge of a complete graph an odd number of times? We show that for infinitely many n, the answer is \ceil{n/2}.<|reference_end|>
arxiv
@article{radhakrishnan2001depth-3, title={Depth-3 Arithmetic Circuits for S^2_n(X) and Extensions of the Graham-Pollack Theorem}, author={Jaikumar Radhakrishnan, Pranab Sen and Sundar Vishwanathan}, journal={arXiv preprint arXiv:cs/0110031}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110031}, primaryClass={cs.DM math.CO} }
radhakrishnan2001depth-3
arxiv-670216
cs/0110032
A logic-based approach to data integration
<|reference_start|>A logic-based approach to data integration: An important aspect of data integration involves answering queries using various resources rather than by accessing database relations. The process of transforming a query from the database relations to the resources is often referred to as query folding or answering queries using views, where the views are the resources. We present a uniform approach that includes as special cases much of the previous work on this subject. Our approach is logic-based using resolution. We deal with integrity constraints, negation, and recursion also within this framework.<|reference_end|>
arxiv
@article{grant2001a, title={A logic-based approach to data integration}, author={J. Grant and J. Minker}, journal={arXiv preprint arXiv:cs/0110032}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110032}, primaryClass={cs.DB cs.AI} }
grant2001a
arxiv-670217
cs/0110034
Inference of termination conditions for numerical loops in Prolog
<|reference_start|>Inference of termination conditions for numerical loops in Prolog: We present a new approach to termination analysis of numerical computations in logic programs. Traditional approaches fail to analyse them due to non well-foundedness of the integers. We present a technique that allows overcoming these difficulties. Our approach is based on transforming a program in a way that allows integrating and extending techniques originally developed for analysis of numerical computations in the framework of query-mapping pairs with the well-known framework of acceptability. Such an integration not only contributes to the understanding of termination behaviour of numerical computations, but also allows us to perform a correct analysis of such computations automatically, by extending previous work on a constraint-based approach to termination. Finally, we discuss possible extensions of the technique, including incorporating general term orderings.<|reference_end|>
arxiv
@article{serebrenik2001inference, title={Inference of termination conditions for numerical loops in Prolog}, author={Alexander Serebrenik, Danny De Schreye}, journal={arXiv preprint arXiv:cs/0110034}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110034}, primaryClass={cs.PL cs.LO} }
serebrenik2001inference
arxiv-670218
cs/0110035
On termination of meta-programs
<|reference_start|>On termination of meta-programs: The term {\em meta-programming} refers to the ability of writing programs that have other programs as data and exploit their semantics. The aim of this paper is to present a methodology that allows us to perform a correct termination analysis for a broad class of practical meta-interpreters, including those that use negation and perform different tasks during the execution. It is based on combining the power of general orderings, used in proving termination of term-rewrite systems and programs, with the well-known acceptability condition, used in proving termination of logic programs. The methodology establishes a relationship between the ordering needed to prove termination of the interpreted program and the ordering needed to prove termination of the meta-interpreter together with this interpreted program. If such a relationship is established, termination of one of those implies termination of the other one, i.e., the meta-interpreter preserves termination. Among the meta-interpreters that are analysed correctly are a proof-tree constructing meta-interpreter, different kinds of tracers, and reasoners. To appear without appendix in Theory and Practice of Logic Programming.<|reference_end|>
arxiv
@article{serebrenik2001on, title={On termination of meta-programs}, author={Alexander Serebrenik, Danny De Schreye}, journal={arXiv preprint arXiv:cs/0110035}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110035}, primaryClass={cs.PL cs.LO} }
serebrenik2001on
arxiv-670219
cs/0110036
Efficient algorithms for decision tree cross-validation
<|reference_start|>Efficient algorithms for decision tree cross-validation: Cross-validation is a useful and generally applicable technique often employed in machine learning, including decision tree induction. An important disadvantage of straightforward implementation of the technique is its computational overhead. In this paper we show that, for decision trees, the computational overhead of cross-validation can be reduced significantly by integrating the cross-validation with the normal decision tree induction process. We discuss how existing decision tree algorithms can be adapted to this aim, and provide an analysis of the speedups these adaptations may yield. The analysis is supported by experimental results.<|reference_end|>
arxiv
@article{blockeel2001efficient, title={Efficient algorithms for decision tree cross-validation}, author={Hendrik Blockeel and Jan Struyf}, journal={H. Blockeel and J. Struyf. Efficient algorithms for decision tree cross-validation. Proceedings of the Eighteenth International Conference on Machine Learning (C. Brodley and A. Danyluk, eds.), Morgan Kaufmann, 2001, pp. 11-18}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110036}, primaryClass={cs.LG} }
blockeel2001efficient
arxiv-670220
cs/0110037
Practical Aspects for a Working Compile Time Garbage Collection System for Mercury
<|reference_start|>Practical Aspects for a Working Compile Time Garbage Collection System for Mercury: Compile-time garbage collection (CTGC) is still a very uncommon feature within compilers. In previous work we have developed a compile-time structure reuse system for Mercury, a logic programming language. This system indicates which data structures can safely be reused at run-time. As preliminary experiments were promising, we have continued this work and now have a working and well-performing, near-to-ship CTGC system built into the Melbourne Mercury Compiler (MMC). In this paper we present the multiple design decisions leading to this system, we report the results of using CTGC for a set of benchmarks, including a real-world program, and finally we discuss further possible improvements. Benchmarks show substantial memory savings and a noticeable reduction in execution time.<|reference_end|>
arxiv
@article{mazur2001practical, title={Practical Aspects for a Working Compile Time Garbage Collection System for Mercury}, author={Nancy Mazur (1), Peter Ross (2), Gerda Janssens (1) and Maurice Bruynooghe (1) ((1) Dept. of Computer Science K.U.Leuven, (2) Mission Critical)}, journal={arXiv preprint arXiv:cs/0110037}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110037}, primaryClass={cs.PL} }
mazur2001practical
arxiv-670221
cs/0110038
Counting is Easy
<|reference_start|>Counting is Easy: For any fixed $k$, a remarkably simple single-tape Turing machine can simulate $k$ independent counters in real time. Informally, a counter is a storage unit that maintains a single integer (initially 0), incrementing it, decrementing it, or reporting its sign (positive, negative, or zero) on command. Any automaton that responds to each successive command as a counter would is said to simulate a counter. (Only for a sign inquiry is the response of interest, of course. And zeroness is the only real issue, since a simulator can readily use zero detection to keep track of positivity and negativity in finite-state control.) In this paper we describe a remarkably simple real-time simulation, based on just five simple rewriting rules, of any fixed number $k$ of independent counters. On a Turing machine with a single, binary work tape, the simulation runs in real time, handling an arbitrary counter command at each step. The space used by the simulation can be held to $(k+\epsilon) \log_2 n$ bits for the first $n$ commands, for any specified $\epsilon > 0$.<|reference_end|>
arxiv
@article{seiferas2001counting, title={Counting is Easy}, author={Joel Seiferas (University of Rochester) and Paul Vitanyi (CWI and University of Amsterdam)}, journal={J. Seiferas and P.M.B. Vitanyi, Counting is easy, J. Assoc. Comp. Mach. 35 (1988), pp. 985-1000}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110038}, primaryClass={cs.CC cs.DS} }
seiferas2001counting
arxiv-670222
cs/0110039
Two heads are better than two tapes
<|reference_start|>Two heads are better than two tapes: We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set {x2x'| x \in {0,1}^* and x' is a prefix of x} in real time, although it can do so with three tapes, two two-dimensional tapes, or one two-head tape, or in linear time with just one tape. In particular, this settles the longstanding conjecture that a two-head Turing machine can recognize more languages in real time if its heads are on the same one-dimensional tape than if they are on separate one-dimensional tapes.<|reference_end|>
arxiv
@article{jiang2001two, title={Two heads are better than two tapes}, author={Tao Jiang (McMaster University), Joel Seiferas (Rochester University), and Paul Vitanyi (CWI and University of Amsterdam)}, journal={T. Jiang, J. Seiferas and P.M.B. Vitanyi, Two heads are better than two tapes, J. Assoc. Comp. Mach., 44:2(1997), 237--256}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110039}, primaryClass={cs.CC} }
jiang2001two
arxiv-670223
cs/0110040
A New Approach to Formal Language Theory by Kolmogorov Complexity
<|reference_start|>A New Approach to Formal Language Theory by Kolmogorov Complexity: We present a new approach to formal language theory using Kolmogorov complexity. The main results presented here are an alternative for pumping lemma(s), a new characterization for regular languages, and a new method to separate deterministic context-free languages and nondeterministic context-free languages. The use of the new `incompressibility arguments' is illustrated by many examples. The approach is also successful at the high end of the Chomsky hierarchy since one can quantify nonrecursiveness in terms of Kolmogorov complexity. (This is a preliminary uncorrected version. The final version is the one published in SIAM J. Comput., 24:2(1995), 398-410.)<|reference_end|>
arxiv
@article{li2001a, title={A New Approach to Formal Language Theory by Kolmogorov Complexity}, author={Ming Li (University of Waterloo) and Paul Vitanyi (CWI and University of Amsterdam)}, journal={M. Li and P.M.B. Vitanyi, A new approach to formal language theory by Kolmogorov complexity, SIAM J. Comput., 24:2(1995), 398-410}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110040}, primaryClass={cs.CC} }
li2001a
arxiv-670224
cs/0110041
Towards Solving the Interdisciplinary Language Barrier Problem
<|reference_start|>Towards Solving the Interdisciplinary Language Barrier Problem: This work aims to make it easier for a specialist in one field to find and explore ideas from another field which may be useful in solving a new problem arising in his practice. It presents a methodology which serves to represent the relationships that exist between concepts, problems, and solution patterns from different fields of human activity in the form of a graph. Our approach is based upon generalization and specialization relationships and problem solving. It is simple enough to be understood quite easily, and general enough to enable coherent integration of concepts and problems from virtually any field. We have built an implementation which uses the World Wide Web as a support to allow navigation between graph nodes and collaborative development of the graph.<|reference_end|>
arxiv
@article{paquet2001towards, title={Towards Solving the Interdisciplinary Language Barrier Problem}, author={Sebastien Paquet}, journal={arXiv preprint arXiv:cs/0110041}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110041}, primaryClass={cs.CY cs.CL cs.IR} }
paquet2001towards
arxiv-670225
cs/0110042
An Architecture for Security and Privacy in Mobile Communications
<|reference_start|>An Architecture for Security and Privacy in Mobile Communications: There is much discussion and debate about how to improve the security and privacy of mobile communication systems, both voice and data. Most proposals attempt to provide incremental improvements to systems that are deployed today. Indeed, only incremental improvements are possible, given the regulatory, technological, economic, and historical structure of the telecommunications system. In this paper, we conduct a ``thought experiment'' to redesign the mobile communications system to provide a high level of security and privacy for the users of the system. We discuss the important requirements and how a different architecture might successfully satisfy them. In doing so, we hope to illuminate the possibilities for secure and private systems, as well as explore their real limits.<|reference_end|>
arxiv
@article{treese2001an, title={An Architecture for Security and Privacy in Mobile Communications}, author={G. Winfield Treese, Lawrence C. Stewart}, journal={arXiv preprint arXiv:cs/0110042}, year={2001}, number={TPRC-2001-101}, archivePrefix={arXiv}, eprint={cs/0110042}, primaryClass={cs.CY} }
treese2001an
arxiv-670226
cs/0110043
An Overview of Computer security
<|reference_start|>An Overview of Computer security: As more business activities are being automated and an increasing number of computers are being used to store vital and sensitive information, the need for secure computer systems becomes more apparent. These systems can be achieved only through systematic design; they cannot be achieved through haphazard seat-of-the-pants methods. This paper introduces some known threats to computer security, categorizes the threats, and analyses protection mechanisms and techniques for countering the threats. The threats are first presented as definitions and then classified. The protection mechanisms are also discussed.<|reference_end|>
arxiv
@article{annam2001an, title={An Overview of Computer security}, author={Shireesh Reddy Annam}, journal={arXiv preprint arXiv:cs/0110043}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110043}, primaryClass={cs.CY cs.NI} }
annam2001an
arxiv-670227
cs/0110044
EquiX--A Search and Query Language for XML
<|reference_start|>EquiX--A Search and Query Language for XML: EquiX is a search language for XML that combines the power of querying with the simplicity of searching. Requirements for such languages are discussed and it is shown that EquiX meets the necessary criteria. Both a graph-based abstract syntax and a formal concrete syntax are presented for EquiX queries. In addition, the semantics is defined and an evaluation algorithm is presented. The evaluation algorithm is polynomial under combined complexity. EquiX combines pattern matching, quantification and logical expressions to query both the data and meta-data of XML documents. The result of a query in EquiX is a set of XML documents. A DTD describing the result documents is derived automatically from the query.<|reference_end|>
arxiv
@article{cohen2001equix--a, title={EquiX--A Search and Query Language for XML}, author={Sara Cohen, Yaron Kanza, Yakov Kogan, Werner Nutt, Yehoshua Sagiv, Alexander Serebrenik}, journal={arXiv preprint arXiv:cs/0110044}, year={2001}, number={CW-322}, archivePrefix={arXiv}, eprint={cs/0110044}, primaryClass={cs.DB} }
cohen2001equix--a
arxiv-670228
cs/0110046
From 2G TO 3G - The Evolution of International Cellular Standards
<|reference_start|>From 2G TO 3G - The Evolution of International Cellular Standards: The purpose of this paper is to examine the major factors surrounding and contributing to the creation (and success) of Europe's 2nd generation 'GSM' cellular system, and to compare and contrast it to key events and recent developments in 3rd generation 'IMT-2000' systems. The objective is to ascertain whether lessons from the development of one system can be applied to the other, and what implications 2G has for the development and assessment of 3G technologies. Among the major themes incorporated into this assessment is the concept of cooperation, and its role in bringing about the collaboration and integration necessary to support the success of an international cellular standard.<|reference_end|>
arxiv
@article{selian2001from, title={From 2G TO 3G - The Evolution of International Cellular Standards}, author={Audrey N. Selian}, journal={TPRC-2001-102}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110046}, primaryClass={cs.CY} }
selian2001from
arxiv-670229
cs/0110047
The Expresso Microarray Experiment Management System: The Functional Genomics of Stress Responses in Loblolly Pine
<|reference_start|>The Expresso Microarray Experiment Management System: The Functional Genomics of Stress Responses in Loblolly Pine: Conception, design, and implementation of cDNA microarray experiments present a variety of bioinformatics challenges for biologists and computational scientists. The multiple stages of data acquisition and analysis have motivated the design of Expresso, a system for microarray experiment management. Salient aspects of Expresso include support for clone replication and randomized placement; automatic gridding, extraction of expression data from each spot, and quality monitoring; flexible methods of combining data from individual spots into information about clones and functional categories; and the use of inductive logic programming for higher-level data analysis and mining. The development of Expresso is occurring in parallel with several generations of microarray experiments aimed at elucidating genomic responses to drought stress in loblolly pine seedlings. The current experimental design incorporates 384 pine cDNAs replicated and randomly placed in two specific microarray layouts. We describe the design of Expresso as well as results of analysis with Expresso that suggest the importance of molecular chaperones and membrane transport proteins in mechanisms conferring successful adaptation to long-term drought stress.<|reference_end|>
arxiv
@article{heath2001the, title={The Expresso Microarray Experiment Management System: The Functional Genomics of Stress Responses in Loblolly Pine}, author={Lenwood S. Heath, Naren Ramakrishnan, Ronald R. Sederoff, Ross W. Whetten, Boris I. Chevone, Craig A. Struble, Vincent Y. Jouenne, Dawei Chen, Leonel van Zyl, Ruth G. Alscher}, journal={arXiv preprint arXiv:cs/0110047}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110047}, primaryClass={cs.OH cs.CE q-bio.GN} }
heath2001the
arxiv-670230
cs/0110048
Multivariant Branching Prediction, Reflection, and Retrospection
<|reference_start|>Multivariant Branching Prediction, Reflection, and Retrospection: In branching simulation, a novel approach to simulation presented in this paper, a multiplicity of plausible scenarios are concurrently developed and implemented. In conventional simulations of complex systems, there arise from time to time uncertainties as to which of two or more alternatives are more likely to be pursued by the system being simulated. Under these conditions the simulationist makes a judicious choice of one of these alternatives and embeds this choice in the simulation model. By contrast, in the branching approach, two or more of such alternatives (or branches) are included in the model and implemented for concurrent computer solution. The theoretical foundations for branching simulation as a computational process are in the domains of alternating Turing machines, molecular computing, and E-machines. Branching simulations constitute the development of diagrams of scenarios representing significant, alternative flows of events. Logical means for interpretation and investigation of the branching simulation and prediction are provided by the logical theories of possible worlds, which have been formalized by the construction of logical varieties. Under certain conditions, the branching approach can considerably enhance the efficiency of computer simulations and provide more complete insights into the interpretation of predictions based on simulations. As an example, the concepts developed in this paper have been applied to a simulation task that plays an important role in radiology - the noninvasive treatment of brain aneurysms.<|reference_end|>
arxiv
@article{burgin2001multivariant, title={Multivariant Branching Prediction, Reflection, and Retrospection}, author={Mark Burgin, Walter Karplus, and Damon Liu}, journal={arXiv preprint arXiv:cs/0110048}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110048}, primaryClass={cs.CE cs.DC} }
burgin2001multivariant
arxiv-670231
cs/0110049
A Symmetric Strategy in Graph Avoidance Games
<|reference_start|>A Symmetric Strategy in Graph Avoidance Games: In the graph avoidance game two players alternatingly color edges of a graph G in red and in blue respectively. The player who first creates a monochromatic subgraph isomorphic to a forbidden graph F loses. A symmetric strategy of the second player ensures that, independently of the first player's strategy, the blue and the red subgraph are isomorphic after every round of the game. We address the class of those graphs G that admit a symmetric strategy for all F and discuss relevant graph-theoretic and complexity issues. We also show examples when, though a symmetric strategy on G generally does not exist, it is still available for a particular F.<|reference_end|>
arxiv
@article{harary2001a, title={A Symmetric Strategy in Graph Avoidance Games}, author={Frank Harary, Wolfgang Slany and Oleg Verbitsky}, journal={arXiv preprint arXiv:cs/0110049}, year={2001}, number={DBAI-TR-2001-42}, archivePrefix={arXiv}, eprint={cs/0110049}, primaryClass={cs.DM cs.CC} }
harary2001a
arxiv-670232
cs/0110050
What is the minimal set of fragments that achieves maximal parse accuracy?
<|reference_start|>What is the minimal set of fragments that achieves maximal parse accuracy?: We aim at finding the minimal set of fragments which achieves maximal parse accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street Journal treebank show that counts of almost arbitrary fragments within parse trees are important, leading to improved parse accuracy over previous models tested on this treebank (a precision of 90.8% and a recall of 90.6%). We isolate some dependency relations which previous models neglect but which contribute to higher parse accuracy.<|reference_end|>
arxiv
@article{bod2001what, title={What is the minimal set of fragments that achieves maximal parse accuracy?}, author={Rens Bod}, journal={Proceedings ACL'2001, Toulouse, France}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110050}, primaryClass={cs.CL} }
bod2001what
arxiv-670233
cs/0110051
Combining semantic and syntactic structure for language modeling
<|reference_start|>Combining semantic and syntactic structure for language modeling: Structured language models for speech recognition have been shown to remedy the weaknesses of n-gram models. All current structured language models are, however, limited in that they do not take into account dependencies between non-headwords. We show that non-headword dependencies contribute to significantly improved word error rate, and that a data-oriented parsing model trained on semantically and syntactically annotated data can exploit these dependencies. This paper also contains the first DOP model trained by means of a maximum likelihood reestimation procedure, which solves some of the theoretical shortcomings of previous DOP models.<|reference_end|>
arxiv
@article{bod2001combining, title={Combining semantic and syntactic structure for language modeling}, author={Rens Bod}, journal={Proceedings ICSLP'2000, Beijing, China}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110051}, primaryClass={cs.CL} }
bod2001combining
arxiv-670234
cs/0110052
Mragyati : A System for Keyword-based Searching in Databases
<|reference_start|>Mragyati : A System for Keyword-based Searching in Databases: The web, through many search engine sites, has popularized the keyword-based search paradigm, where a user can specify a string of keywords and expect to retrieve relevant documents, possibly ranked by their relevance to the query. Since a lot of information is stored in databases (and not as HTML documents), it is important to provide a similar search paradigm for databases, where users can query a database without knowing the database schema and database query languages such as SQL. In this paper, we propose such a database search system, which accepts a free-form query as a collection of keywords, translates it into queries on the database using the database metadata, and presents query results in a well-structured and browsable form. The system maps keywords onto the database schema and uses inter-relationships (i.e., data semantics) among the referred tables to generate meaningful query results. We also describe our prototype for database search, called Mragyati. The approach proposed here is scalable, as it does not build an in-memory graph of the entire database for searching for relationships among the objects selected by the user's query.<|reference_end|>
arxiv
@article{sarda2001mragyati, title={Mragyati : A System for Keyword-based Searching in Databases}, author={N. L. Sarda and Ankur Jain}, journal={arXiv preprint arXiv:cs/0110052}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110052}, primaryClass={cs.DB} }
sarda2001mragyati
arxiv-670235
cs/0110053
Machine Learning in Automated Text Categorization
<|reference_start|>Machine Learning in Automated Text Categorization: The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert manpower, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation.<|reference_end|>
arxiv
@article{sebastiani2001machine, title={Machine Learning in Automated Text Categorization}, author={Fabrizio Sebastiani}, journal={Final version published in ACM Computing Surveys, 34(1):1-47, 2002}, year={2001}, doi={10.1145/505282.505283}, archivePrefix={arXiv}, eprint={cs/0110053}, primaryClass={cs.IR cs.LG} }
sebastiani2001machine
arxiv-670236
cs/0110054
Vertex-Unfoldings of Simplicial Manifolds
<|reference_start|>Vertex-Unfoldings of Simplicial Manifolds: We present an algorithm to unfold any triangulated 2-manifold (in particular, any simplicial polyhedron) into a non-overlapping, connected planar layout in linear time. The manifold is cut only along its edges. The resulting layout is connected, but it may have a disconnected interior; the triangles are connected at vertices, but not necessarily joined along edges. We extend our algorithm to establish a similar result for simplicial manifolds of arbitrary dimension.<|reference_end|>
arxiv
@article{demaine2001vertex-unfoldings, title={Vertex-Unfoldings of Simplicial Manifolds}, author={Erik D. Demaine, David Eppstein, Jeff Erickson, George W. Hart, Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0110054}, year={2001}, number={Smith Technical Report 072}, archivePrefix={arXiv}, eprint={cs/0110054}, primaryClass={cs.CG cs.DM} }
demaine2001vertex-unfoldings
arxiv-670237
cs/0110055
Analytical solution of transient scalar wave and diffusion problems of arbitrary dimensionality and geometry by RBF wavelet series
<|reference_start|>Analytical solution of transient scalar wave and diffusion problems of arbitrary dimensionality and geometry by RBF wavelet series: This study applies the RBF wavelet series to the evaluation of analytical solutions of linear time-dependent wave and diffusion problems of any dimensionality and geometry. To the best of the author's knowledge, such analytical solutions have never been achieved before. The RBF wavelets can be understood as an alternative for multidimensional problems to the standard Fourier series, via fundamental and general solutions of partial differential equations. The present RBF wavelets are infinitely differentiable, compactly supported, orthogonal over different scales and very simple. The rigorous mathematical proof of completeness and convergence is still missing in this study. The present work may open a new window to numerical solution and theoretical analysis of many other high-dimensional time-dependent PDE problems under arbitrary geometry.<|reference_end|>
arxiv
@article{chen2001analytical, title={Analytical solution of transient scalar wave and diffusion problems of arbitrary dimensionality and geometry by RBF wavelet series}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0110055}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110055}, primaryClass={cs.NA cs.CE} }
chen2001analytical
arxiv-670238
cs/0110056
Probabilistic analysis of a differential equation for linear programming
<|reference_start|>Probabilistic analysis of a differential equation for linear programming: In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. Using the framework of Random Matrix Theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. In this limit, we find that the distribution of the convergence rate is a scaling function, namely it is a function of one variable that is a combination of three parameters: the number of variables, the number of constraints and the convergence rate, rather than a function of these parameters separately. We also estimate numerically the distribution of computation times, namely the time required to reach a vicinity of the attracting fixed point, and find that it is also a scaling function. Using the problem size dependence of the distribution functions, we derive high probability bounds on the convergence rates and on the computation times.<|reference_end|>
arxiv
@article{ben-hur2001probabilistic, title={Probabilistic analysis of a differential equation for linear programming}, author={Asa Ben-Hur, Joshua Feinberg, Shmuel Fishman and Hava T. Siegelmann}, journal={arXiv preprint arXiv:cs/0110056}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110056}, primaryClass={cs.CC cond-mat.stat-mech math-ph math.MP math.OC} }
ben-hur2001probabilistic
arxiv-670239
cs/0110057
Generating Multilingual Personalized Descriptions of Museum Exhibits - The M-PIRO Project
<|reference_start|>Generating Multilingual Personalized Descriptions of Museum Exhibits - The M-PIRO Project: This paper provides an overall presentation of the M-PIRO project. M-PIRO is developing technology that will allow museums to generate automatically textual or spoken descriptions of exhibits for collections available over the Web or in virtual reality environments. The descriptions are generated in several languages from information in a language-independent database and small fragments of text, and they can be tailored according to the backgrounds of the users, their ages, and their previous interaction with the system. An authoring tool allows museum curators to update the system's database and to control the language and content of the resulting descriptions. Although the project is still in progress, a Web-based demonstrator that supports English, Greek and Italian is already available, and it is used throughout the paper to highlight the capabilities of the emerging technology.<|reference_end|>
arxiv
@article{androutsopoulos2001generating, title={Generating Multilingual Personalized Descriptions of Museum Exhibits - The M-PIRO Project}, author={Ion Androutsopoulos, Vassiliki Kokkinaki, Aggeliki Dimitromanolaki, Jo Calder, Jon Oberlander and Elena Not}, journal={arXiv preprint arXiv:cs/0110057}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110057}, primaryClass={cs.CL cs.AI} }
androutsopoulos2001generating
arxiv-670240
cs/0110058
Teaching Parallel Programming Using Both High-Level and Low-Level Languages
<|reference_start|>Teaching Parallel Programming Using Both High-Level and Low-Level Languages: We discuss the use of both MPI and OpenMP in the teaching of senior undergraduate and junior graduate classes in parallel programming. We briefly introduce the OpenMP standard and discuss why we have chosen to use it in parallel programming classes. Advantages of using OpenMP over message passing methods are discussed. We also include a brief enumeration of some of the drawbacks of using OpenMP and how these drawbacks are being addressed by supplementing OpenMP with additional MPI codes and projects. Several projects given in my class are also described in this paper.<|reference_end|>
arxiv
@article{pan2001teaching, title={Teaching Parallel Programming Using Both High-Level and Low-Level Languages}, author={Yi Pan}, journal={arXiv preprint arXiv:cs/0110058}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110058}, primaryClass={cs.DC} }
pan2001teaching
arxiv-670241
cs/0110059
Nonorthogonal Polyhedra Built from Rectangles
<|reference_start|>Nonorthogonal Polyhedra Built from Rectangles: We prove that any polyhedron of genus zero or genus one built out of rectangular faces must be an orthogonal polyhedron, but that there are nonorthogonal polyhedra of genus seven all of whose faces are rectangles. This leads to a resolution of a question posed by Biedl, Lubiw, and Sun [BLS99].<|reference_end|>
arxiv
@article{donoso2001nonorthogonal, title={Nonorthogonal Polyhedra Built from Rectangles}, author={Melody Donoso and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0110059}, year={2001}, number={Smith Technical Report 073, Oct. 2001; revised May 2002}, archivePrefix={arXiv}, eprint={cs/0110059}, primaryClass={cs.CG cs.DM} }
donoso2001nonorthogonal
arxiv-670242
cs/0110060
Selected Topics in Asynchronous Automata
<|reference_start|>Selected Topics in Asynchronous Automata: The paper is concerned with defining the electrical signals and their models. The delays are discussed, the asynchronous automata - which are the models of the asynchronous circuits - and the examples of the clock generator and of the R-S latch are given. We write the equations of the asynchronous automata, which combine the pure delay model and the inertial delay model; the simple gate model and the complex gate model; the fixed, bounded and unbounded delay model. We give the solutions of these equations, which are written on R->{0,1} functions, where R is the time set. The connection between the real time and the discrete time is discussed. The stability, the fundamental mode of operation, the combinational automata, the semi-modularity are defined and characterized. Some connections are suggested with the linear time and the branching time temporal logic of the propositions.<|reference_end|>
arxiv
@article{vlad2001selected, title={Selected Topics in Asynchronous Automata}, author={Serban E. Vlad}, journal={Serban E. Vlad, Selected Topics in Asynchronous Automata, Analele universitatii din Oradea, Fascicola matematica, Tom VII, 1999}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110060}, primaryClass={cs.LO} }
vlad2001selected
arxiv-670243
cs/0110061
An Asynchronous Automata Approach to the Semantics of Temporal Logic
<|reference_start|>An Asynchronous Automata Approach to the Semantics of Temporal Logic: The paper presents the differential equations that characterize an asynchronous automaton and gives their solution x:R->{0,1}x...x{0,1}. Remarks are made on the connection between the continuous time and the discrete time of the approach. The continuous and the discrete time, the linear and the branching temporal logics have the semantics depending on x and their formulas give the properties of the automaton.<|reference_end|>
arxiv
@article{vlad2001an, title={An Asynchronous Automata Approach to the Semantics of Temporal Logic}, author={Serban E. Vlad}, journal={Serban E. Vlad, An Asynchronous Automata Approach to the Semantics of Temporal Logic, the 8-th Symposium of Mathematics and its Applications of the 'Politehnica' University, Timisoara, 1999}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110061}, primaryClass={cs.LO} }
vlad2001an
arxiv-670244
cs/0110062
The Delay-Insensitivity, the Hazard-Freedom, the Semi-Modularity and the Technical Condition of Good Running of the Discrete Time Asynchronous Automata
<|reference_start|>The Delay-Insensitivity, the Hazard-Freedom, the Semi-Modularity and the Technical Condition of Good Running of the Discrete Time Asynchronous Automata: The paper studies some important properties of the asynchronous (=timed) automata: the delay-insensitivity, the hazard-freedom, the semi-modularity and the technical condition of good running. Time is discrete.<|reference_end|>
arxiv
@article{vlad2001the, title={The Delay-Insensitivity, the Hazard-Freedom, the Semi-Modularity and the Technical Condition of Good Running of the Discrete Time Asynchronous Automata}, author={Serban E. Vlad}, journal={Serban E. Vlad, The Delay-Insensitivity, the Hazard-Freedom, the Semi-Modularity and the Technical Condition of Good Running of the Discrete Time Asynchronous Automata, Analele universitatii din Oradea, Fascicola matematica, Tom VIII, 2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110062}, primaryClass={cs.LO} }
vlad2001the
arxiv-670245
cs/0110063
The Existence of $\omega$-Chains for Transitive Mixed Linear Relations and Its Applications
<|reference_start|>The Existence of $\omega$-Chains for Transitive Mixed Linear Relations and Its Applications: We show that it is decidable whether a transitive mixed linear relation has an $\omega$-chain. Using this result, we study a number of liveness verification problems for generalized timed automata within a unified framework. More precisely, we prove that (1) the mixed linear liveness problem for a timed automaton with dense clocks, reversal-bounded counters, and a free counter is decidable, and (2) the Presburger liveness problem for a timed automaton with discrete clocks, reversal-bounded counters, and a pushdown stack is decidable.<|reference_end|>
arxiv
@article{dang2001the, title={The Existence of $\omega$-Chains for Transitive Mixed Linear Relations and Its Applications}, author={Zhe Dang and Oscar Ibarra}, journal={arXiv preprint arXiv:cs/0110063}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110063}, primaryClass={cs.LO} }
dang2001the
arxiv-670246
cs/0110064
Applications of the Differential Calculus in the Study of the Timed Automata: the Inertial Delay Buffer
<|reference_start|>Applications of the Differential Calculus in the Study of the Timed Automata: the Inertial Delay Buffer: We write the relations that characterize the simplest timed automaton, the inertial delay buffer, in two versions: the non-deterministic and the deterministic one, by making use of the derivatives of the R->{0,1} functions.<|reference_end|>
arxiv
@article{vlad2001applications, title={Applications of the Differential Calculus in the Study of the Timed Automata: the Inertial Delay Buffer}, author={Serban E. Vlad}, journal={arXiv preprint arXiv:cs/0110064}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110064}, primaryClass={cs.LO} }
vlad2001applications
arxiv-670247
cs/0110065
Interfacing the ControlLogix PLC over Ethernet/IP
<|reference_start|>Interfacing the ControlLogix PLC over Ethernet/IP: The Allen-Bradley ControlLogix line of programmable logic controllers (PLCs) offers several interfaces: Ethernet, ControlNet, DeviceNet, RS-232 and others. The ControlLogix Ethernet interface module 1756-ENET uses EtherNet/IP, the ControlNet protocol, encapsulated in Ethernet packages, with specific service codes. A driver for the Experimental Physics and Industrial Control System (EPICS) has been developed that utilizes this EtherNet/IP protocol for controllers running the vxWorks RTOS as well as a Win32 and Unix/Linux test program. Features, performance and limitations of this interface are presented.<|reference_end|>
arxiv
@article{kasemir2001interfacing, title={Interfacing the ControlLogix PLC over Ethernet/IP}, author={K.U. Kasemir, L.R. Dalesio}, journal={eConf C011127:THDT002,2001}, year={2001}, number={LA-UR -01-5891}, archivePrefix={arXiv}, eprint={cs/0110065}, primaryClass={cs.NI cs.AR} }
kasemir2001interfacing
arxiv-670248
cs/0110066
Overview of the Experimental Physics and Industrial Control System (EPICS) Channel Archiver
<|reference_start|>Overview of the Experimental Physics and Industrial Control System (EPICS) Channel Archiver: The Channel Archiver has been operational for more than two years at Los Alamos National Laboratory and other sites. This paper introduces the available components (data sampling engine, viewers, scripting interface, HTTP/CGI integration and data management), presents updated performance measurements and reviews operational experience with the Channel Archiver.<|reference_end|>
arxiv
@article{kasemir2001overview, title={Overview of the Experimental Physics and Industrial Control System (EPICS) Channel Archiver}, author={K. U. Kasemir, L. R. Dalesio}, journal={eConf C011127:THAP019,2001}, year={2001}, number={LA-UR-01-5892}, archivePrefix={arXiv}, eprint={cs/0110066}, primaryClass={cs.OH} }
kasemir2001overview
arxiv-670249
cs/0110067
Analysis of Investment Policy in Belarus
<|reference_start|>Analysis of Investment Policy in Belarus: The optimal planning trajectory is analyzed on the basis of the growth model with effectiveness. The saving-per-capital value has to be rather high initially, with a smooth decrease in future years.<|reference_end|>
arxiv
@article{kilin2001analysis, title={Analysis of Investment Policy in Belarus}, author={Fedor S. Kilin}, journal={arXiv preprint arXiv:cs/0110067}, year={2001}, archivePrefix={arXiv}, eprint={cs/0110067}, primaryClass={cs.CE} }
kilin2001analysis
arxiv-670250
cs/0111001
Integrating LabVIEW into a Distributed Computing Environment
<|reference_start|>Integrating LabVIEW into a Distributed Computing Environment: Being easy to learn and well suited for a self-contained desktop laboratory setup, many casual programmers prefer to use the National Instruments LabVIEW environment to develop their logic. An ActiveX interface is presented that allows integration into a plant-wide distributed environment based on the Experimental Physics and Industrial Control System (EPICS). This paper discusses the design decisions and provides performance information, especially considering requirements for the Spallation Neutron Source (SNS) diagnostics system.<|reference_end|>
arxiv
@article{kasemir2001integrating, title={Integrating LabVIEW into a Distributed Computing Environment}, author={K. U. Kasemir, M. Pieck, L. R. Dalesio}, journal={eConf C011127:THAP032,2001}, year={2001}, number={LA-UR-01-5905}, archivePrefix={arXiv}, eprint={cs/0111001}, primaryClass={cs.OH} }
kasemir2001integrating
arxiv-670251
cs/0111002
L-Fuzzy Valued Inclusion Measure, L-Fuzzy Similarity and L-Fuzzy Distance
<|reference_start|>L-Fuzzy Valued Inclusion Measure, L-Fuzzy Similarity and L-Fuzzy Distance: The starting point of this paper is the introduction of a new measure of inclusion of fuzzy set A in fuzzy set B. Previously used inclusion measures take values in the interval [0,1]; the inclusion measure proposed here takes values in a Boolean lattice. In other words, inclusion is viewed as an L-fuzzy valued relation between fuzzy sets. This relation is reflexive, antisymmetric and transitive, i.e. it is a fuzzy order relation; in addition it possesses a number of properties which various authors have postulated as axiomatically appropriate for an inclusion measure. We also define an L-fuzzy valued measure of similarity between fuzzy sets and an L-fuzzy valued distance function between fuzzy sets; these possess properties analogous to the ones of real-valued similarity and distance functions. Keywords: Fuzzy Relations, inclusion measure, subsethood, L-fuzzy sets, similarity, distance, transitivity.<|reference_end|>
arxiv
@article{kehagias2001l-fuzzy, title={L-Fuzzy Valued Inclusion Measure, L-Fuzzy Similarity and L-Fuzzy Distance}, author={Ath. Kehagias and M. Konstantinidou}, journal={arXiv preprint arXiv:cs/0111002}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111002}, primaryClass={cs.OH} }
kehagias2001l-fuzzy
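The abstract above contrasts a lattice-valued inclusion measure with the usual real-valued ones. For orientation only, here is a minimal sketch of the classical [0,1]-valued subsethood measure (a Kosko-style sum-min formula) that such L-fuzzy constructions generalize; it is not the paper's Boolean-lattice-valued measure, and the function name and example sets are illustrative assumptions.

```python
# Classical real-valued fuzzy inclusion (subsethood), shown only as the
# [0,1]-valued baseline that the L-fuzzy construction above generalizes.

def subsethood(a: dict, b: dict) -> float:
    """Degree to which fuzzy set A is included in fuzzy set B.

    a, b map elements of a common universe to membership grades in [0, 1].
    Returns 1.0 for an empty A by convention.
    """
    universe = set(a) | set(b)
    total = sum(a.get(x, 0.0) for x in universe)
    if total == 0.0:
        return 1.0
    overlap = sum(min(a.get(x, 0.0), b.get(x, 0.0)) for x in universe)
    return overlap / total

# Example: A is largely, but not fully, contained in B.
A = {"x": 0.2, "y": 0.8}
B = {"x": 0.5, "y": 0.6}
print(subsethood(A, B))  # (0.2 + 0.6) / 1.0 = 0.8
```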
arxiv-670252
cs/0111003
The Use of Classifiers in Sequential Inference
<|reference_start|>The Use of Classifiers in Sequential Inference: We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem-identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing.<|reference_end|>
arxiv
@article{punyakanok2001the, title={The Use of Classifiers in Sequential Inference}, author={Vasin Punyakanok, Dan Roth}, journal={Advances in Neural Information Processing Systems 13}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111003}, primaryClass={cs.LG cs.CL} }
punyakanok2001the
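The abstract above describes a Markovian approach that extends standard HMMs so that state-observation scores come from general classifiers. As background only, the sketch below shows plain Viterbi decoding for an ordinary HMM with a fixed emission table; it is not the paper's classifier-based model, and the states and probabilities are made-up toy values.

```python
# Standard Viterbi decoding for a plain HMM. In the paper's extension the
# emission table would be replaced by classifier outputs; that is not done here.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observation list."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        cur = {}
        for s in states:
            prob, path = max(
                (prev[r][0] * trans_p[r][s] * emit_p[s][o], prev[r][1] + [s])
                for r in states
            )
            cur[s] = (prob, path)
        V.append(cur)
    return max(V[-1].values())[1]

states = ("B", "I")          # e.g. begin/inside of a phrase
start_p = {"B": 0.7, "I": 0.3}
trans_p = {"B": {"B": 0.4, "I": 0.6}, "I": {"B": 0.5, "I": 0.5}}
emit_p = {"B": {"the": 0.6, "cat": 0.4}, "I": {"the": 0.2, "cat": 0.8}}
print(viterbi(["the", "cat"], states, start_p, trans_p, emit_p))  # ['B', 'I']
```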
arxiv-670253
cs/0111004
The Relational Database Aspects of Argonne's ATLAS Control System
<|reference_start|>The Relational Database Aspects of Argonne's ATLAS Control System: Argonne's ATLAS (Argonne Tandem Linac Accelerator System) control system comprises two separate database concepts. The first is the distributed real-time database structure provided by the commercial product Vsystem [1]. The second is a more static relational database archiving system designed by ATLAS personnel using Oracle Rdb [2] and Paradox [3] software. The configuration of the ATLAS facility has presented a unique opportunity to construct a control system relational database that is capable of storing and retrieving complete archived tune-up configurations for the entire accelerator. This capability has been a major factor in allowing the facility to adhere to a rigorous operating schedule. Most recently, a Web-based operator interface to the control system's Oracle Rdb database has been installed. This paper explains the history of the ATLAS database systems, how they interact with each other, the design of the new Web-based operator interface, and future plans.<|reference_end|>
arxiv
@article{quock2001the, title={The Relational Database Aspects of Argonne's ATLAS Control System}, author={D. E. R. Quock, F. H. Munson, K. J. Eder, S. L. Dean (ANL)}, journal={eConf C011127 (2001) WEAP066}, year={2001}, number={WEAP066}, archivePrefix={arXiv}, eprint={cs/0111004}, primaryClass={cs.DB} }
quock2001the
arxiv-670254
cs/0111005
Automated Real-Time Testing (ARTT) for Embedded Control Systems (ECS)
<|reference_start|>Automated Real-Time Testing (ARTT) for Embedded Control Systems (ECS): Developing real-time automated test systems for embedded control systems has been a real problem. Some engineers and scientists have used customized software and hardware as a solution, which can be very expensive and time consuming to develop. We have discovered how to integrate a suite of commercially available off-the-shelf software tools and hardware to develop a scalable test platform that is capable of performing complete black-box testing for a dual-channel real-time Embedded-PLC-based control system (www.aps.anl.gov). We will discuss how the Vali/Test Pro testing methodology was implemented to structure testing for a personnel safety system with large quantities of requirements and test cases. This work was supported by the U.S. Department of Energy, Basic Energy Sciences, under Contract No. W-31-109-Eng-38.<|reference_end|>
arxiv
@article{hawkins2001automated, title={Automated Real-Time Testing (ARTT) for Embedded Control Systems (ECS)}, author={Jon Hawkins, Haung V. Nguyen, Reginald B. Howard}, journal={eConf C011127 (2001) TUAP037}, year={2001}, number={PSN #78}, archivePrefix={arXiv}, eprint={cs/0111005}, primaryClass={cs.OH cs.SE} }
hawkins2001automated
arxiv-670255
cs/0111006
Proliferation of SDDS Support for Various Platforms and Languages
<|reference_start|>Proliferation of SDDS Support for Various Platforms and Languages: Since Self-Describing Data Sets (SDDS) were first introduced, the source code has been ported to many different operating systems and various languages. SDDS is now available in C, Tcl, Java, Fortran, and Python. All of these versions are supported on Solaris, Linux, and Windows. The C version of SDDS is also supported on VxWorks. With the recent addition of the Java port, SDDS can now be deployed on virtually any operating system. Due to this proliferation, SDDS files serve to link not only a collection of C programs, but programs and scripts in many languages on different operating systems. The platform independent binary feature of SDDS also facilitates portability among operating systems. This paper presents an overview of various benefits of SDDS platform interoperability.<|reference_end|>
arxiv
@article{soliday2001proliferation, title={Proliferation of SDDS Support for Various Platforms and Languages}, author={Robert Soliday (APS/ANL)}, journal={eConfC011127:THAP031,2001}, year={2001}, number={THAP031}, archivePrefix={arXiv}, eprint={cs/0111006}, primaryClass={cs.DB} }
soliday2001proliferation
arxiv-670256
cs/0111007
Explaining Scenarios for Information Personalization
<|reference_start|>Explaining Scenarios for Information Personalization: Personalization customizes information access. The PIPE ("Personalization is Partial Evaluation") modeling methodology represents interaction with an information space as a program. The program is then specialized to a user's known interests or information seeking activity by the technique of partial evaluation. In this paper, we elaborate PIPE by considering requirements analysis in the personalization lifecycle. We investigate the use of scenarios as a means of identifying and analyzing personalization requirements. As our first result, we show how designing a PIPE representation can be cast as a search within a space of PIPE models, organized along a partial order. This allows us to view the design of a personalization system, itself, as specialized interpretation of an information space. We then exploit the underlying equivalence of explanation-based generalization (EBG) and partial evaluation to realize high-level goals and needs identified in scenarios; in particular, we specialize (personalize) an information space based on the explanation of a user scenario in that information space, just as EBG specializes a theory based on the explanation of an example in that theory. In this approach, personalization becomes the transformation of information spaces to support the explanation of usage scenarios. An example application is described.<|reference_end|>
arxiv
@article{ramakrishnan2001explaining, title={Explaining Scenarios for Information Personalization}, author={Naren Ramakrishnan, Mary Beth Rosson, and John M. Carroll}, journal={arXiv preprint arXiv:cs/0111007}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111007}, primaryClass={cs.HC cs.IR} }
ramakrishnan2001explaining
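The abstract above rests on the PIPE idea that personalization is partial evaluation of a program representing an information space. The toy sketch below illustrates that idea with Python's functools.partial, specializing a browsing function once a user interest is known; it is only an analogy, not the PIPE system, and the names browse and catalog are hypothetical.

```python
# A toy analogy for "personalization is partial evaluation": interaction with
# an information space is modeled as a function over user choices, and fixing
# a choice specializes the program. All names here are illustrative.
from functools import partial

catalog = {
    ("python", "beginner"): ["Intro to Python", "Python Tutorial"],
    ("python", "advanced"): ["CPython Internals"],
    ("rust", "beginner"): ["The Rust Book"],
}

def browse(topic: str, level: str) -> list:
    """Full (unspecialized) interaction: both dimensions are still free."""
    return catalog.get((topic, level), [])

# Partial evaluation step: the user's known interest is bound, yielding a
# smaller, personalized program in which only 'level' remains to be chosen.
browse_python = partial(browse, "python")

print(browse_python("beginner"))  # ['Intro to Python', 'Python Tutorial']
```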
arxiv-670257
cs/0111008
The Control of a Beamline Over Intranet
<|reference_start|>The Control of a Beamline Over Intranet: Machines and beamlines controlled by VME industrial networks are very popular in accelerator facilities. Recently, new software technologies, among which are Internet/Intranet applications, the Java language, and distributed computing environments, have been rapidly changing the way control is done. A program based on DCOM has been written to control a variable-included-angle spherical grating monochromator beamline at the National Synchrotron Radiation Laboratory (NSRL) in China. The control computer, with a resident DCOM program, is connected to the Intranet by LAN, over which a user-side operating program located on another computer sends commands for driving the beamline units to the control computer. In addition, a web page coded in Java and published by the WWW service running on the control computer illustrates how a web browser can be used to query the states of, or to control, the beamline units.<|reference_end|>
arxiv
@article{yu2001the, title={The Control of a Beamline Over Intranet}, author={X.J. Yu, Q. P. Wang, P.S. Xu}, journal={eConfC011127:THAP021,2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111008}, primaryClass={cs.NI} }
yu2001the
arxiv-670258
cs/0111009
On the complexity of inducing categorical and quantitative association rules
<|reference_start|>On the complexity of inducing categorical and quantitative association rules: Inducing association rules is one of the central tasks in data mining applications. Quantitative association rules induced from databases describe rich and hidden relationships holding within data that can prove useful for various application purposes (e.g., market basket analysis, customer profiling, and others). Even though such association rules are quite widely used in practice, a thorough analysis of the computational complexity of inducing them is missing. This paper intends to provide a contribution in this setting. To this end, we first formally define quantitative association rule mining problems, which entail boolean association rules as a special case, and then analyze their computational complexities, considering both the standard cases and some special cases of interest, namely association rule induction over databases with null values, fixed-size attribute set databases, sparse databases, and fixed threshold problems.<|reference_end|>
arxiv
@article{angiulli2001on, title={On the complexity of inducing categorical and quantitative association rules}, author={Fabrizio Angiulli, Giovambattista Ianni, Luigi Palopoli}, journal={arXiv preprint arXiv:cs/0111009}, year={2001}, number={ISI-CNR TR 10-2001}, archivePrefix={arXiv}, eprint={cs/0111009}, primaryClass={cs.CC} }
angiulli2001on
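The abstract above analyzes the complexity of inducing association rules. Purely to fix terminology, the sketch below computes the standard support and confidence measures for boolean association rules; it does not reproduce any of the paper's complexity results, and the example baskets are made up.

```python
# Standard support/confidence computation for boolean association rules.

def support(transactions: list, itemset: set) -> float:
    """Fraction of transactions containing every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions: list, lhs: set, rhs: set) -> float:
    """Confidence of the rule lhs -> rhs: support(lhs | rhs) / support(lhs)."""
    s_lhs = support(transactions, lhs)
    return support(transactions, lhs | rhs) / s_lhs if s_lhs else 0.0

baskets = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
print(support(baskets, {"bread", "milk"}))       # 2/3
print(confidence(baskets, {"bread"}, {"milk"}))  # 2/3
```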
arxiv-670259
cs/0111010
Abduction with Penalization in Logic Programming
<|reference_start|>Abduction with Penalization in Logic Programming: Abduction, first proposed in the setting of classical logics, has been studied with growing interest in the logic programming area in recent years. In this paper we study {\em abduction with penalization} in logic programming. This form of abductive reasoning, which has not been previously analyzed in logic programming, turns out to represent several relevant problems, including optimization problems, very naturally. We define a formal model for abduction with penalization from logic programs, which extends the abductive framework proposed by Kakas and Mancarella. We show the high expressiveness of this formalism, by encoding a couple of relevant problems, including the well-known Traveling Salesman Problem from optimization theory, in this abductive framework. The resulting encodings are very simple and elegant. We analyze the complexity of the main decisional problems arising in this framework. An interesting result along the way is that ``negation comes for free.'' Indeed, the addition of (even unstratified) negation does not cause any further increase to the complexity of the abductive reasoning tasks (which remains the same as for not-free programs).<|reference_end|>
arxiv
@article{ianni2001abduction, title={Abduction with Penalization in Logic Programming}, author={Giovambattista Ianni, Nicola Leone, Simona Perri, Francesco Scarcello}, journal={arXiv preprint arXiv:cs/0111010}, year={2001}, number={Unical Math-Dept TR10-2001}, archivePrefix={arXiv}, eprint={cs/0111010}, primaryClass={cs.LO} }
ianni2001abduction
arxiv-670260
cs/0111011
Sintesi di algoritmi con SKY
<|reference_start|>Sintesi di algoritmi con SKY: This paper describes the semantics of, and the ideas behind, SKY, a logic programming language intended for specifying algorithmic strategies for the evaluation of problems.<|reference_end|>
arxiv
@article{ianni2001sintesi, title={Sintesi di algoritmi con SKY}, author={Giovambattista Ianni}, journal={arXiv preprint arXiv:cs/0111011}, year={2001}, number={Unical Math. Dept. TR 11-2001}, archivePrefix={arXiv}, eprint={cs/0111011}, primaryClass={cs.LO} }
ianni2001sintesi
arxiv-670261
cs/0111012
Intelligent Anticipated Exploration of Web Sites
<|reference_start|>Intelligent Anticipated Exploration of Web Sites: In this paper we describe a web search agent, called Global Search Agent (hereafter GSA for short). GSA integrates and enhances several search techniques in order to achieve significant improvements in the user-perceived quality of delivered information as compared to usual web search engines. GSA features intelligent merging of relevant documents from different search engines, anticipated selective exploration and evaluation of links from the current result set, automated derivation of refined queries based on user relevance feedback. System architecture as well as experimental accounts are also illustrated.<|reference_end|>
arxiv
@article{ianni2001intelligent, title={Intelligent Anticipated Exploration of Web Sites}, author={Giovambattista Ianni}, journal={arXiv preprint arXiv:cs/0111012}, year={2001}, number={Unical Math. Dept. TR 08-2001}, archivePrefix={arXiv}, eprint={cs/0111012}, primaryClass={cs.AI cs.IR} }
ianni2001intelligent
arxiv-670262
cs/0111013
Quality Control, Testing and Deployment Results in NIF ICCS
<|reference_start|>Quality Control, Testing and Deployment Results in NIF ICCS: The strategy used to develop the NIF Integrated Computer Control System (ICCS) calls for incremental cycles of construction and formal test to deliver a total of 1 million lines of code. Each incremental release takes four to six months to implement specific functionality and culminates when offline tests conducted in the ICCS Integration and Test Facility verify functional, performance, and interface requirements. Tests are then repeated on line to confirm integrated operation in dedicated laser laboratories or ultimately in the NIF. Test incidents along with other change requests are recorded and tracked to closure by the software change control board (SCCB). Annual independent audits advise management on software process improvements. Extensive experience has been gained by integrating controls in the prototype laser preamplifier laboratory. The control system installed in the preamplifier lab contains five of the ten planned supervisory subsystems and seven of sixteen planned front-end processors (FEPs). Beam alignment, timing, diagnosis and laser pulse amplification up to 20 joules was tested through an automated series of shots. Other laboratories have provided integrated testing of six additional FEPs. Process measurements including earned-value, product size, and defect densities provide software project controls and generate confidence that the control system will be successfully deployed.<|reference_end|>
arxiv
@article{woodruff2001quality, title={Quality Control, Testing and Deployment Results in NIF ICCS}, author={John P. Woodruff, Drew D. Casavant, Barry D. Cline, and Michael R. Gorvad}, journal={eConfC011127:TUDT001,2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111013}, primaryClass={cs.SE} }
woodruff2001quality
arxiv-670263
cs/0111014
Visual DCT - Visual EPICS Database Configuration Tool
<|reference_start|>Visual DCT - Visual EPICS Database Configuration Tool: Visual DCT is an EPICS configuration tool completely written in Java and therefore supported on various systems. It was developed to provide features missing in existing configuration tools such as Capfast and GDCT. Visually, Visual DCT resembles GDCT: records can be created, moved and linked, and fields and links can be easily modified. But Visual DCT offers more: using groups, records can be grouped together in a logical block, which allows a hierarchical design. Additionally, arrows indicating the direction of data flow make the design easier to understand. Visual DCT has a powerful DB parser, which allows importing existing DB and DBD files. The output file is also a DB file; all comments and record order are preserved, and the visual data is saved as comments, which allows DBs to be edited in other tools or manually. Great effort has been taken and many tricks used to optimize the performance in order to compensate for the fact that Java is an interpreted language.<|reference_end|>
arxiv
@article{sekoranja2001visual, title={Visual DCT - Visual EPICS Database Configuration Tool}, author={M. Sekoranja, S. Hunt, A. Luedeke}, journal={eConfC011127:THAP029,2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111014}, primaryClass={cs.SE} }
sekoranja2001visual
arxiv-670264
cs/0111015
The SDSS SkyServer, Public Access to the Sloan Digital Sky Server Data
<|reference_start|>The SDSS SkyServer, Public Access to the Sloan Digital Sky Server Data: The SkyServer provides Internet access to the public Sloan Digital Sky Survey (SDSS) data for both astronomers and for science education. This paper describes the SkyServer goals and architecture. It also describes our experience operating the SkyServer on the Internet. The SDSS data is public and well-documented so it makes a good test platform for research on database algorithms and performance.<|reference_end|>
arxiv
@article{szalay2001the, title={The SDSS SkyServer, Public Access to the Sloan Digital Sky Server Data}, author={Alexander Szalay, Jim Gray, Ani Thakar, Peter Z. Kunszt, Tanu Malik, Jordan Raddick, Christopher Stoughton, Jan vandenBerg}, journal={arXiv preprint arXiv:cs/0111015}, year={2001}, number={Microsoft Research TR 2001 104}, archivePrefix={arXiv}, eprint={cs/0111015}, primaryClass={cs.DL cs.DB} }
szalay2001the
arxiv-670265
cs/0111016
Application Software Structure Enables Nif Operations Kirby W Fong
<|reference_start|>Application Software Structure Enables Nif Operations Kirby W Fong: The NIF Integrated Computer Control System (ICCS) application software uses a set of service frameworks that assures uniform behavior spanning the front-end processors (FEPs) and supervisor programs. This uniformity is visible both in the way each program employs shared services and in the flexibility it affords for attaching graphical user interfaces (GUIs). Uniformity of structure across applications is desired for the benefit of programmers who will be maintaining the many programs that constitute the ICCS. In this paper, the framework components that have the greatest impact on the application structure are discussed.<|reference_end|>
arxiv
@article{fong2001application, title={Application Software Structure Enables Nif Operations Kirby W. Fong}, author={Kirby W. Fong, Christopher M. Estes, John M. Fisher, Randy T. Shelton}, journal={eConf C011127 (2001) THcT003}, year={2001}, number={UCRL-JC-143317}, archivePrefix={arXiv}, eprint={cs/0111016}, primaryClass={cs.SE} }
fong2001application
arxiv-670266
cs/0111017
First Experiences Integrating PC Distributed I/O Into Argonne's ATLAS Control System
<|reference_start|>First Experiences Integrating PC Distributed I/O Into Argonne's ATLAS Control System: The roots of ATLAS (Argonne Tandem-Linac Accelerator System) date back to the early 1960s. Located at the Argonne National Laboratory, the accelerator has been designated a National User Facility, which focuses primarily on heavy-ion nuclear physics. Like the accelerator it services, the control system has been in a constant state of evolution. The present real-time portion of the control system is based on the commercial product Vsystem [1]. While Vsystem has always been capable of distributed I/O processing, the latest offering of this product provides for the use of relatively inexpensive PC hardware and software. This paper reviews the status of the ATLAS control system, and describes first experiences with PC distributed I/O.<|reference_end|>
arxiv
@article{munson2001first, title={First Experiences Integrating PC Distributed I/O Into Argonne's ATLAS Control System}, author={F. H. Munson, D. E. R. Quock, S. L. Dean, K. J. Eder (ANL)}, journal={eConf C011127 (2001) TUcT002}, year={2001}, number={WEAP027}, archivePrefix={arXiv}, eprint={cs/0111017}, primaryClass={cs.OH} }
munson2001first
arxiv-670267
cs/0111018
Data Acquisition and Database Management System for Samsung Superconductor Test Facility
<|reference_start|>Data Acquisition and Database Management System for Samsung Superconductor Test Facility: In order to fulfill the test requirement of KSTAR (Korea Superconducting Tokamak Advanced Research) superconducting magnet system, a large scale superconducting magnet and conductor test facility, SSTF (Samsung Superconductor Test Facility), has been constructed at Samsung Advanced Institute of Technology. The computer system for SSTF DAC (Data Acquisition and Control) is based on UNIX system and VxWorks is used for the real-time OS of the VME system. EPICS (Experimental Physics and Industrial Control System) is used for the communication between IOC server and client. A database program has been developed for the efficient management of measured data and a Linux workstation with PENTIUM-4 CPU is used for the database server. In this paper, the current status of SSTF DAC system, the database management system and recent test results are presented.<|reference_end|>
arxiv
@article{chu2001data, title={Data Acquisition and Database Management System for Samsung Superconductor Test Facility}, author={Y. Chu, S. Baek, H. Yonekawa, A. Chertovskikh, M. Kim, J. S. Kim, K. Park, S. Baang, Y. Chang, J. H. Kim, S. Lee, B. Lim, W. Chung, H. Park, K. Kim}, journal={eConf C011127 (2001) TUAP018}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111018}, primaryClass={cs.DB cs.AI} }
chu2001data
arxiv-670268
cs/0111019
Application of digital regulated Power Supplies for Magnet Control at the Swiss Light Source
<|reference_start|>Application of digital regulated Power Supplies for Magnet Control at the Swiss Light Source: The Swiss Light Source (SLS) has in the order of 500 magnet power supplies (PS) installed, ranging from 3 A/20 V four-quadrant PS to a 950 A/1000 V two-quadrant 3 Hz PS. All magnet PS have a local digital controller for a digital regulation loop and a 5 MHz optical point-to-point link to the VME level. The PS controller is running a pulse width/pulse repetition regulation scheme, optionally with multiple slave regulation loops. Many internal regulation parameters and controller diagnostics are readable by the control system. Industry Pack modules with standard VME carrier cards are used as VME hardware interface with the high control density of eight links per VME card. The low level EPICS interface is identical for all 500 magnet PS, including insertion devices. The digital PS have proven to be very stable and reliable during commissioning of the light source. All specifications were met for all PS. The advanced diagnostics for the magnet PS turned out to be very useful not only for diagnosing the PS but also for identifying problems on the magnets.<|reference_end|>
arxiv
@article{luedeke2001application, title={Application of digital regulated Power Supplies for Magnet Control at the Swiss Light Source}, author={A. Luedeke (PSI)}, journal={eConf C011127 (2001) TUAP049}, year={2001}, number={PSN: TUAP049}, archivePrefix={arXiv}, eprint={cs/0111019}, primaryClass={cs.SE} }
luedeke2001application
arxiv-670269
cs/0111020
Gemini MCAO Control System
<|reference_start|>Gemini MCAO Control System: The Gemini Observatory is planning to implement a Multi Conjugate Adaptive Optics (MCAO) System as a facility instrument for the Gemini-South telescope. The system will include 5 Laser Guide Stars, 3 Natural Guide Stars, and 3 Deformable mirrors optically conjugated at different altitudes to achieve near-uniform atmospheric compensation over a 1 arc minute square field of view. The control of such a system will be split into 3 main functions: the control of the opto-mechanical assemblies of the whole system (including the Laser, the Beam Transfer Optics and the Adaptive Optics bench), the control of the Adaptive Optics System itself at a rate of 800FPS and the control of the safety system. The control of the Adaptive Optics System is the most critical in terms of real time performances. The control system will be an EPICS based system. In this paper, we will describe the requirements for the whole MCAO control system, preliminary designs for the control of the opto-mechanical devices and architecture options for the control of the Adaptive Optics system and the safety system.<|reference_end|>
arxiv
@article{boyer2001gemini, title={Gemini MCAO Control System}, author={C. Boyer, J. Sebag, B. Ellerbroek}, journal={eConf C011127 (2001) TUAT004}, year={2001}, doi={10.1117/12.454790}, archivePrefix={arXiv}, eprint={cs/0111020}, primaryClass={cs.OH} }
boyer2001gemini
arxiv-670270
cs/0111021
System Integration of High Level Applications during the Commissioning of the Swiss Light Source
<|reference_start|>System Integration of High Level Applications during the Commissioning of the Swiss Light Source: The commissioning of the Swiss Light Source (SLS) started in Feb. 2000 with the Linac, continued in May 2000 with the booster synchrotron, and by Dec. 2000 first light in the storage ring was produced. The first four beam lines had to be operational by August 2001. The thorough integration of all subsystems into the control system and a high level of automation were prerequisites for meeting the tight time schedule. A carefully balanced distribution of functionality into high level and low level applications allowed an optimization of short development cycles and high reliability of the applications. High level applications were implemented as CORBA based client/server applications (tcl/tk and Java based clients, C++ based servers), IDL applications using EZCA, medm/dm2k screens and tcl/tk applications using CDEV. Low level applications were mainly built as EPICS process databases, SNL state machines and customized drivers. Functionality of the high level applications was encapsulated and pushed to lower levels whenever it proved adequate. This made it possible to reduce machine setups to a handful of physical parameters and to use standard EPICS tools for display, archiving and processing of complex physical values. High reliability and reproducibility were achieved with this approach.<|reference_end|>
arxiv
@article{luedeke2001system, title={System Integration of High Level Applications during the Commissioning of the Swiss Light Source}, author={A. Luedeke (PSI)}, journal={eConf C011127 (2001) FRBT002}, year={2001}, number={PSN: FRBT002}, archivePrefix={arXiv}, eprint={cs/0111021}, primaryClass={cs.SE} }
luedeke2001system
arxiv-670271
cs/0111022
Distributed Computing for Localized and Multilayer Visualizations
<|reference_start|>Distributed Computing for Localized and Multilayer Visualizations: The aim of this paper is to develop an approach to visualizations that benefits from distributed computing. Three schemes of process distribution are considered: parallel, pipeline, and expanding pipeline computations. The expanding pipeline structure synthesizes the advantages and traits of both parallel and pipeline computations. In expanding pipeline computing, a novel approach presented in this paper, a multiplicity of processes are concurrently developed in parallel and knotted processor pipelines. The theoretical foundations for expanding pipeline computing as a computational process are in the domains of alternating Turing machines, molecular computing, and E-machines. Expanding pipeline computing constitutes the development of the conventional pipeline architecture aimed at utilization of implicit parallel structures existing in algorithms. Such structures appear in various kinds of visualization. Image deriving and processing is a field that provides diverse opportunities for utilization of the advantages of distributed computing. The most relevant to the distributed architecture is stratified visualization with its two cases based on data localization and layer separation. Visualization is treated here as a special case of simulation. The conceptual approach to distributed computing developed in this paper has been applied to visualization in a computer support system used in radiology, namely for the noninvasive treatment of brain aneurysms.<|reference_end|>
arxiv
@article{burgin2001distributed, title={Distributed Computing for Localized and Multilayer Visualizations}, author={Mark Burgin, Walter Karplus, and Damon Liu}, journal={arXiv preprint arXiv:cs/0111022}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111022}, primaryClass={cs.DC cs.DS} }
burgin2001distributed
arxiv-670272
cs/0111023
Distributed Control System for the Test Interferometer of the ALMA Project
<|reference_start|>Distributed Control System for the Test Interferometer of the ALMA Project: The control system (TICS) for the test interferometer being built to support the development of the Atacama Large Millimeter Array (ALMA) will itself be a prototype for the final ALMA array, providing a test for the distributed control system under development. TICS will be based on the ALMA Common Software (ACS) (developed at the European Southern Observatory), which provides CORBA-based services and a device management framework for the control software. Simple device controllers will run on single board computers, one of which (known as an LCU) is located at each antenna; whereas complex, compound device controllers may run on centrally located computers. In either circumstance, client programs may obtain direct CORBA references to the devices and their properties. Monitor and control requests are sent to devices or properties, which then process and forward the commands to the appropriate hardware devices as required. Timing requirements are met by tagging commands with (future) timestamps synchronized to a timing pulse, which is regulated by a central reference generator, and is distributed to all hardware devices in the array. Monitoring is provided through a publish/subscribe CORBA-based service.<|reference_end|>
arxiv
@article{pokorny2001distributed, title={Distributed Control System for the Test Interferometer of the ALMA Project}, author={M. Pokorny (1), M. Brooks (1), B. Glendenning (1), G. Harris (1), R. Heald (1), F. Stauffer (1), J. Pisano (1) ((1) NRAO)}, journal={eConf C011127 (2001) THAT004}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111023}, primaryClass={cs.DC physics.ins-det} }
pokorny2001distributed
arxiv-670273
cs/0111024
Building Multi-Platform User Interfaces with UIML
<|reference_start|>Building Multi-Platform User Interfaces with UIML: There has been a widespread emergence of computing devices in the past few years that go beyond the capabilities of traditional desktop computers. However, users want to use the same kinds of applications and access the same data and information on these appliances that they can access on their desktop computers. The user interfaces for these platforms go beyond the traditional interaction metaphors. It is a challenge to build User Interfaces (UIs) for these devices of differing capabilities that allow the end users to perform the same kinds of tasks. The User Interface Markup Language (UIML) is an XML-based language that allows the canonical description of UIs for different platforms. We describe the language features of UIML that facilitate the development of multi-platform UIs. We also describe the key aspects of our approach that makes UIML succeed where previous approaches failed, namely the division in the representation of a UI, the use of a generic vocabulary, and an integrated development environment specifically designed for transformation-based UI development. Finally we describe the initial details of a multi-step usability engineering process for building multi-platform UI using UIML.<|reference_end|>
arxiv
@article{ali2001building, title={Building Multi-Platform User Interfaces with UIML}, author={Mir Farooq Ali, Manuel A. Perez-Quinones, Eric Shell and Marc Abrams}, journal={arXiv preprint arXiv:cs/0111024}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111024}, primaryClass={cs.HC} }
ali2001building
arxiv-670274
cs/0111025
A Multi-Step Process for Generating Multi-Platform User Interfaces using UIML
<|reference_start|>A Multi-Step Process for Generating Multi-Platform User Interfaces using UIML: There has been a widespread emergence of computing devices in the past few years that go beyond the capabilities of traditional desktop computers. These devices have varying input/output characteristics, modalities and interaction mechanisms. However, users want to use the same kinds of applications and access the same data and information on these appliances that they can access on their desktop computers. The user interfaces for these devices and platforms go beyond the traditional interaction metaphors. It is a challenge to build User Interfaces (UIs) for these devices of differing capabilities that allow the end users to perform the same kinds of tasks. The User Interface Markup Language (UIML) is an XML-based language that allows the canonical description of UIs for different platforms. We present a multi-step transformation-based framework for building Multi-Platform User Interfaces using UIML. We describe the language features of UIML that facilitate the development of multi-platform UIs, the multi-step process involved in our framework and the transformations needed to build the UIs.<|reference_end|>
arxiv
@article{ali2001a, title={A Multi-Step Process for Generating Multi-Platform User Interfaces using UIML}, author={Mir Farooq Ali, Manuel A. Perez-Quinones and Marc Abrams}, journal={arXiv preprint arXiv:cs/0111025}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111025}, primaryClass={cs.HC} }
ali2001a
arxiv-670275
cs/0111026
Next Generation EPICS Interface to Abstract Data
<|reference_start|>Next Generation EPICS Interface to Abstract Data: The set of externally visible properties associated with process variables in the Experimental Physics and Industrial Control System (EPICS) is predefined in the EPICS base distribution and is therefore not extensible by plug-compatible applications. We believe that this approach, while practical for early versions of the system with a smaller user base, is now severely limiting expansion of the high-level application tool set for EPICS. To eliminate existing barriers, we propose a new C++ based interface to abstract containerized data. This paper describes the new interface, its application to message passing in distributed systems, its application to direct communication between tightly coupled programs co-resident in an address space, and its paramount position in an emerging role for EPICS - the integration of dissimilar systems.<|reference_end|>
arxiv
@article{hill2001next, title={Next Generation EPICS Interface to Abstract Data}, author={J. Hill, R. Lange}, journal={eConf C011127 (2001) THAP014}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111026}, primaryClass={cs.NI cs.DS} }
hill2001next
arxiv-670276
cs/0111027
Upgrade of Spring-8 Beamline Network with Vlan Technology Over Gigabit Ethernet
<|reference_start|>Upgrade of Spring-8 Beamline Network with Vlan Technology Over Gigabit Ethernet: The beamline network system at SPring-8 consists of three LANs; a BL-LAN for beamline component control, a BL-USER-LAN for beamline experimental users and an OA-LAN for the information services. These LANs are interconnected by a firewall system. Since the network traffic and the number of beamlines have increased, we upgraded the backbone of BL-USER-LAN from Fast Ethernet to Gigabit Ethernet. And then, to establish the independency of a beamline and to raise flexibility of every beamline, we also introduced the IEEE802.1Q Virtual LAN (VLAN) technology into the BL-USER-LAN. We discuss here a future plan to build the firewall system with hardware load balancers.<|reference_end|>
arxiv
@article{ishii2001upgrade, title={Upgrade of Spring-8 Beamline Network with Vlan Technology Over Gigabit Ethernet}, author={M. Ishii, T. Fukui, Y. Furukawa, T. Nakatani, T. Ohata, R. Tanaka}, journal={eConf C011127 (2001) TUAP056}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111027}, primaryClass={cs.NI} }
ishii2001upgrade
arxiv-670277
cs/0111028
The ESRF TANGO control system status
<|reference_start|>The ESRF TANGO control system status: TANGO is an object oriented control system toolkit based on CORBA presently under development at the ESRF. In this paper, the TANGO philosophy is briefly presented. All the existing tools developed around TANGO will also be presented. These include a code generator, a WEB interface to TANGO objects, an administration tool and an interface to LabView. Finally, an example of a TANGO device server for an OPC device is given.<|reference_end|>
arxiv
@article{chaize2001the, title={The ESRF TANGO control system status}, author={JM. Chaize, A. Goetz, WD. Klotz, J. Meyer, M. Perez, E. Taurel, P. Verdier}, journal={eConf C011127 (2001) TUAP004}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111028}, primaryClass={cs.DC} }
chaize2001the
arxiv-670278
cs/0111029
Versatile Data Acquisition and Controls for Epics Using Vme-Based Fpgas
<|reference_start|>Versatile Data Acquisition and Controls for Epics Using Vme-Based Fpgas: Field-Programmable Gate Arrays (FPGAs) have provided Thomas Jefferson National Accelerator Facility (Jefferson Lab) with versatile VME-based data acquisition and control interfaces with minimal development times. FPGA designs have been used to interface to VME and provide control logic for numerous systems. The building blocks of these logic designs can be tailored to the individual needs of each system and provide system operators with read-backs and controls via a VME interface to an EPICS based computer. This versatility allows the system developer to choose components and define operating parameters and options that are not readily available commercially. Jefferson Lab has begun developing standard FPGA libraries that result in quick turn around times and inexpensive designs.<|reference_end|>
arxiv
@article{allison2001versatile, title={Versatile Data Acquisition and Controls for Epics Using Vme-Based Fpgas}, author={T. Allison and R. Flood}, journal={eConf C011127:TUAP053,2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111029}, primaryClass={cs.AR} }
allison2001versatile
arxiv-670279
cs/0111030
A Dual Digital Signal Processor VME Board For Instrumentation And Control Applications
<|reference_start|>A Dual Digital Signal Processor VME Board For Instrumentation And Control Applications: A Dual Digital Signal Processing VME Board was developed for the Continuous Electron Beam Accelerator Facility (CEBAF) Beam Current Monitor (BCM) system at Jefferson Lab. It is a versatile general-purpose digital signal processing board using an open architecture, which allows for adaptation to various applications. The base design uses two independent Texas Instrument (TI) TMS320C6711, which are 900 MFLOPS floating-point digital signal processors (DSP). Applications that require a fixed point DSP can be implemented by replacing the baseline DSP with the pin-for-pin compatible TMS320C6211. The design can be manufactured with a reduced chip set without redesigning the printed circuit board. For example it can be implemented as a single-channel DSP with no analog I/O.<|reference_end|>
arxiv
@article{dong2001a, title={A Dual Digital Signal Processor VME Board For Instrumentation And Control Applications}, author={H. Dong, R. Flood, C. Hovater, J. Musson}, journal={eConf C011127:THAP049,2001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111030}, primaryClass={cs.AR} }
dong2001a
arxiv-670280
cs/0111031
Large-Scale Corba-Distributed Software Framework for Nif Controls
<|reference_start|>Large-Scale Corba-Distributed Software Framework for Nif Controls: The Integrated Computer Control System (ICCS) is based on a scalable software framework that is distributed over some 325 computers throughout the NIF facility. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Various forms of object-oriented software design patterns are implemented as templates to be extended by application software. Developers extend the framework base classes to model the numerous physical control points, thereby sharing the functionality defined by the base classes. About 56,000 software objects each individually addressed through CORBA are to be created in the complete ICCS. Most objects have a persistent state that is initialized at system start-up and stored in a database. Additional framework services are provided by centralized server programs that implement events, alerts, reservations, message logging, database/file persistence, name services, and process management. The ICCS software framework approach allows for efficient construction of a software system that supports a large number of distributed control points representing a complex control application.<|reference_end|>
arxiv
@article{carey2001large-scale, title={Large-Scale Corba-Distributed Software Framework for Nif Controls}, author={Robert W. Carey, Kirby W. Fong, Randy J. Sanchez, Joseph D. Tappero, John P. Woodruff}, journal={eConf C011127 (2001) THAI001}, year={2001}, number={THAI001}, archivePrefix={arXiv}, eprint={cs/0111031}, primaryClass={cs.DC} }
carey2001large-scale
arxiv-670281
cs/0111032
SNS Timing System
<|reference_start|>SNS Timing System: This poster describes the timing system being designed for the Spallation Neutron Source being built at Oak Ridge National Laboratory.<|reference_end|>
arxiv
@article{oerter2001sns, title={SNS Timing System}, author={B. oerter, R. Nelson, T. Shea, C. Sibley}, journal={eConf C011127 (2001) FRAT001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111032}, primaryClass={cs.AR} }
oerter2001sns
arxiv-670282
cs/0111033
Modernising the ESRF control system with GNU/Linux
<|reference_start|>Modernising the ESRF control system with GNU/Linux: The ESRF control system is in the process of being modernised. The present control system is based on VME, 10 MHz Ethernet, OS9, Solaris, HP-UX, NFS/RPC, Motif and C. The new control system will be based on compact PCI, 100 MHz Ethernet, Linux, Windows, Solaris, CORBA/IIOP, C++, Java and Python. The main frontend operating system will be GNU/Linux running on Intel/x86 and Motorola/68k. Linux will also be used on handheld devices for mobile control. This poster describes how GNU/Linux is being used to modernise the control system and what problems have been encountered so far.<|reference_end|>
arxiv
@article{gotz2001modernising, title={Modernising the ESRF control system with GNU/Linux}, author={A.Gotz, A.Homs, B.Regad, M.Perez, P.Makijarvi, W.D.Klotz}, journal={eConf C011127 (2001) WEAP023}, year={2001}, number={WEAP023}, archivePrefix={arXiv}, eprint={cs/0111033}, primaryClass={cs.DC} }
gotz2001modernising
arxiv-670283
cs/0111034
Experiences with advanced CORBA services
<|reference_start|>Experiences with advanced CORBA services: The Common Object Request Broker Architecture (CORBA) is successfully used in many control systems (CS) for data transfer and device modeling. Communication rates below 1 millisecond, high reliability, scalability, language independence and other features make it very attractive. For common types of applications like error logging, alarm messaging or slow monitoring, one can benefit from standard CORBA services that are implemented by third parties and save a tremendous amount of development time. We started using a few CORBA services on our previous CORBA-based control system for the light source ANKA [1] and now use several CORBA services for the ALMA Common Software (ACS) [2], the core of the control system of the Atacama Large Millimeter Array. Our experiences with the interface repository (IFR), the implementation repository, the naming service, the property service, the telecom log service and the notify service from different vendors are presented. Performance and scalability benchmarks have been performed.<|reference_end|>
arxiv
@article{milcinski2001experiences, title={Experiences with advanced CORBA services}, author={G. Milcinski, M. Plesko, M. Sekoranja}, journal={eConf C011127 (2001) THAP005}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111034}, primaryClass={cs.PF} }
milcinski2001experiences
arxiv-670284
cs/0111035
Open Source Real Time Operating Systems Overview
<|reference_start|>Open Source Real Time Operating Systems Overview: Modern control systems applications are often built on top of a real time operating system (RTOS) which provides the necessary hardware abstraction as well as scheduling, networking and other services. Several open source RTOS solutions are publicly available, which is very attractive, both from an economic (no licensing fees) as well as from a technical (control over the source code) point of view. This contribution gives an overview of the RTLinux and RTEMS systems (architecture, development environment, API etc.). Both systems feature most popular CPUs, several APIs (including Posix), networking, portability and optional commercial support. Some performance figures are presented, focusing on interrupt latency and context switching delay.<|reference_end|>
arxiv
@article{straumann2001open, title={Open Source Real Time Operating Systems Overview}, author={Till Straumann}, journal={eConf C011127 (2001) WEBT001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111035}, primaryClass={cs.OS} }
straumann2001open
arxiv-670285
cs/0111036
Data Access - Experiences Implementing an Object Oriented Library on Various Platforms
<|reference_start|>Data Access - Experiences Implementing an Object Oriented Library on Various Platforms: Data Access will be the next generation data abstraction layer for EPICS. Its implementation in C++ brought up a number of issues that are related to object oriented technology's impact on CPU and memory usage. What is gained by the new abstract interface? What is the price that has to be paid for these gains? What compromises seem applicable and affordable? This paper discusses tests that have been made about performance and memory usage as well as the different measures that have been taken to optimize the situation.<|reference_end|>
arxiv
@article{lange2001data, title={Data Access - Experiences Implementing an Object Oriented Library on Various Platforms}, author={R. Lange (BESSY), J. Hill (LANL)}, journal={eConf C011127 (2001) THAP015}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111036}, primaryClass={cs.SE cs.DS} }
lange2001data
arxiv-670286
cs/0111037
User-friendly explanations for constraint programming
<|reference_start|>User-friendly explanations for constraint programming: In this paper, we introduce a set of tools for providing user-friendly explanations in an explanation-based constraint programming system. The idea is to represent the constraints of a problem as an hierarchy (a tree). Users are then represented as a set of understandable nodes in that tree (a cut). Classical explanations (sets of system constraints) just need to get projected on that representation in order to be understandable by any user. We present here the main interests of this idea.<|reference_end|>
arxiv
@article{jussien2001user-friendly, title={User-friendly explanations for constraint programming}, author={Narendra Jussien and Samir Ouis}, journal={arXiv preprint arXiv:cs/0111037}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111037}, primaryClass={cs.PL cs.SE} }
jussien2001user-friendly
arxiv-670287
cs/0111038
Arc consistency for soft constraints
<|reference_start|>Arc consistency for soft constraints: The notion of arc consistency plays a central role in constraint satisfaction. It is known that the notion of local consistency can be extended to constraint optimisation problems defined by soft constraint frameworks based on an idempotent cost combination operator. This excludes non-idempotent operators such as +, which define problems that are very important in practical applications such as Max-CSP, where the aim is to minimize the number of violated constraints. In this paper, we show that, using a weak additional axiom satisfied by most existing soft constraint proposals, it is possible to define a notion of soft arc consistency that extends the classical notion of arc consistency, even in the case of non-idempotent cost combination operators. A polynomial time algorithm for enforcing this soft arc consistency exists, and its space and time complexities are identical to those of enforcing arc consistency in CSPs when the cost combination operator is strictly monotonic (for example Max-CSP). A directional version of arc consistency is potentially even stronger than the non-directional version, since it allows non-local propagation of penalties. We demonstrate the utility of directional arc consistency by showing that it not only solves soft constraint problems on trees, but also implies a form of local optimality, which we call arc irreducibility.<|reference_end|>
arxiv
@article{cooper2001arc, title={Arc consistency for soft constraints}, author={Martin Cooper and Thomas Schiex}, journal={arXiv preprint arXiv:cs/0111038}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111038}, primaryClass={cs.AI cs.CC cs.DS} }
cooper2001arc
arxiv-670288
cs/0111039
An Integrated Development Environment for Declarative Multi-Paradigm Programming
<|reference_start|>An Integrated Development Environment for Declarative Multi-Paradigm Programming: In this paper we present CIDER (Curry Integrated Development EnviRonment), an analysis and programming environment for the declarative multi-paradigm language Curry. CIDER is a graphical environment to support the development of Curry programs by providing integrated tools for the analysis and visualization of programs. CIDER is completely implemented in Curry using libraries for GUI programming (based on Tcl/Tk) and meta-programming. An important aspect of our environment is the possible adaptation of the development environment to other declarative source languages (e.g., Prolog or Haskell) and the extensibility w.r.t. new analysis methods. To support the latter feature, the lazy evaluation strategy of the underlying implementation language Curry becomes quite useful.<|reference_end|>
arxiv
@article{hanus2001an, title={An Integrated Development Environment for Declarative Multi-Paradigm Programming}, author={Michael Hanus and Johannes Koj}, journal={arXiv preprint arXiv:cs/0111039}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111039}, primaryClass={cs.PL cs.SE} }
hanus2001an
arxiv-670289
cs/0111040
Combining Propagation Information and Search Tree Visualization using ILOG OPL Studio
<|reference_start|>Combining Propagation Information and Search Tree Visualization using ILOG OPL Studio: In this paper we give an overview of the current state of the graphical features provided by ILOG OPL Studio for debugging and performance tuning of OPL programs or external ILOG Solver based applications. This paper focuses on combining propagation and search information using the Search Tree view and the Propagation Spy. A new synthetic view is presented: the Christmas Tree, which combines the Search Tree view with statistics on the efficiency of the domain reduction and on the number of the propagation events triggered.<|reference_end|>
arxiv
@article{bracchi2001combining, title={Combining Propagation Information and Search Tree Visualization using ILOG OPL Studio}, author={Christiane Bracchi, Christophe Gefflot and Frederic Paulin}, journal={arXiv preprint arXiv:cs/0111040}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111040}, primaryClass={cs.PL cs.SE} }
bracchi2001combining
arxiv-670290
cs/0111041
On the Design of a Tool for Supporting the Construction of Logic Programs
<|reference_start|>On the Design of a Tool for Supporting the Construction of Logic Programs: Environments for the systematic construction of logic programs are needed in academia as well as in industry. Such environments should support well-defined construction methods and should be extensible and able to interact with other programming tools like debuggers and compilers. We present a variant of the Deville methodology for logic program development, and the design of a tool for supporting the methodology. Our aim is to facilitate the learning of logic programming and to lay the basis for more sophisticated tools for program development.<|reference_end|>
arxiv
@article{ospina2001on, title={On the Design of a Tool for Supporting the Construction of Logic Programs}, author={Gustavo A. Ospina and Baudouin Le Charlier}, journal={arXiv preprint arXiv:cs/0111041}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111041}, primaryClass={cs.PL cs.SE} }
ospina2001on
arxiv-670291
cs/0111042
Proceedings of the Eleventh Workshop on Logic Programming Environments (WLPE'01)
<|reference_start|>Proceedings of the Eleventh Workshop on Logic Programming Environments (WLPE'01): The Eleventh Workshop on Logic Programming Environments (WLPE'01) was one in a series of international workshops in the topic area. It was held on December 1, 2001 in Paphos, Cyprus as a post-conference workshop at ICLP 2001. Eight refereed papers were presented at the workshop. A majority of the papers involved, in some way, constraint logic programming and tools for software development. Other topic areas addressed include execution visualization, instructional aids (for learning users), software maintenance (including debugging), and provisions for new paradigms.<|reference_end|>
arxiv
@article{kusalik2001proceedings, title={Proceedings of the Eleventh Workshop on Logic Programming Environments (WLPE'01)}, author={Anthony Kusalik}, journal={arXiv preprint arXiv:cs/0111042}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111042}, primaryClass={cs.PL cs.SE} }
kusalik2001proceedings
arxiv-670292
cs/0111043
Prototyping CLP(FD) Tracers: a Trace Model and an Experimental Validation Environment
<|reference_start|>Prototyping CLP(FD) Tracers: a Trace Model and an Experimental Validation Environment: Developing and maintaining CLP programs requires visualization and explanation tools. However, existing tools are built in an ad hoc way. Therefore porting tools from one platform to another is very difficult. We have shown in previous work that, from a fine-grained execution trace, a number of interesting views about logic program executions could be generated by trace analysis. In this article, we propose a trace model for constraint solving by narrowing. This trace model is the first one proposed for CLP(FD) and does not pretend to be the ultimate one. We also propose an instrumented meta-interpreter in order to experiment with the model. Furthermore, we show that the proposed trace model contains the necessary information to build known and useful execution views. This work sets the basis for generic execution analysis of CLP(FD) programs.<|reference_end|>
arxiv
@article{langevine2001prototyping, title={Prototyping CLP(FD) Tracers: a Trace Model and an Experimental Validation Environment}, author={Ludovic Langevine, Pierre Deransart, Mireille Ducasse, and Erwan Jahier}, journal={arXiv preprint arXiv:cs/0111043}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111043}, primaryClass={cs.PL cs.SE} }
langevine2001prototyping
arxiv-670293
cs/0111044
SNS Standard Power Supply Interface
<|reference_start|>SNS Standard Power Supply Interface: The SNS has developed a standard power supply interface for the approximately 350 magnet power supplies in the SNS accumulator ring, Linac and transport lines. Power supply manufacturers are providing supplies compatible with the standard interface. The SNS standard consists of a VME-based power supply controller module (PSC) and a power supply interface unit (PSI) that mounts on the power supply. Communication between the two is via a pair of multimode fibers. This PSI/PSC system supports one 16-bit analog reference, four 16-bit analog readbacks, fifteen digital commands and sixteen digital status bits in a single fiber-isolated module. The system can send commands to the supplies and read data from them synchronized to an external signal at rates of up to 10 kHz. The PSC time stamps and stores this data in a circular buffer so historical data leading up to a fault event can be analyzed. The PSC contains a serial port so that local testing of hardware can be accomplished with a laptop. This paper concentrates on the software being provided to control the power supplies. It includes the EPICS driver as well as software to test hardware and power supplies via the serial port and VME interface.<|reference_end|>
arxiv
@article{peng2001sns, title={SNS Standard Power Supply Interface}, author={S. Peng, R. Lambiase, B. Oerter, J. Smith}, journal={eConf C011127 (2001) THAP052}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111044}, primaryClass={cs.OH} }
peng2001sns
arxiv-670294
cs/0111045
The Overview of the National Ignition Facility Distributed Computer Control System
<|reference_start|>The Overview of the National Ignition Facility Distributed Computer Control System: The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer is divided into another segment comprising an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented with asynchronous transfer mode (ATM), which delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding using a mixed language environment of Ada95 and Java is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.<|reference_end|>
arxiv
@article{lagin2001the, title={The Overview of the National Ignition Facility Distributed Computer Control System}, author={L. J. Lagin, R. C. Bettenhausen, R. A. Carey, C. M. Estes, J. M. Fisher, J. E. Krammen, R. K. Reed, P. J. VanArsdall, J. P. Woodruff}, journal={eConf C011127 (2001) TUAP001}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111045}, primaryClass={cs.SE} }
lagin2001the
arxiv-670295
cs/0111046
HyperPro An integrated documentation environment for CLP
<|reference_start|>HyperPro An integrated documentation environment for CLP: The purpose of this paper is to present some functionalities of the HyperPro System. HyperPro is a hypertext tool which allows Constraint Logic Programming (CLP) programs to be developed together with their documentation. The text editing part is not new and is based on the free software Thot. A HyperPro program is a Thot document written in a report style. The tool is designed for CLP but it can be adapted to other programming paradigms as well. Thot offers navigation and editing facilities and synchronized static document views. HyperPro has new functionalities such as document exportation, dynamic views (projections), indexes and version management. Projection is a mechanism for extracting and exporting relevant pieces of program code or documentation according to specific criteria. Indexes are useful for finding the references and occurrences of a relation in a document, i.e., where its predicate definition is found and where the relation is used in other programs or document versions, and for translating hypertext links into paper references. The tool still lacks importation facilities.<|reference_end|>
arxiv
@article{ed-dbali2001hyperpro, title={HyperPro An integrated documentation environment for CLP}, author={AbdelAli Ed-Dbali (1), Pierre Deransart (2), Mariza A. S. Bigonha (3), Jose de Siqueira (3), Roberto da S. Bigonha (3) ((1) LIFO - University of Orleans - France, (2) INRIA Rocquencourt - France, (3) DCC - UFMG - Brazil)}, journal={arXiv preprint arXiv:cs/0111046}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111046}, primaryClass={cs.PL cs.SE} }
ed-dbali2001hyperpro
arxiv-670296
cs/0111047
Virtual Laboratory: Enabling On-Demand Drug Design with the World Wide Grid
<|reference_start|>Virtual Laboratory: Enabling On-Demand Drug Design with the World Wide Grid: Computational Grids are emerging as a popular paradigm for solving large-scale compute and data intensive problems in science, engineering, and commerce. However, application composition, resource management and scheduling in these environments is a complex undertaking. In this paper, we illustrate the creation of a virtual laboratory environment by leveraging existing Grid technologies to enable molecular modeling for drug design on distributed resources. It involves screening millions of chemical compound molecules from a chemical database (CDB) against a protein target to identify those with potential use in drug design. We have grid-enabled the molecular docking process by composing it as a parameter sweep application using the Nimrod-G tools. We then developed new tools for remote access to molecules in the CDB small-molecule database. The Nimrod-G resource broker, along with the CDB molecule data broker, is used for scheduling and on-demand processing of jobs on distributed grid resources. The results demonstrate the ease of use and suitability of the Nimrod-G and virtual laboratory tools.<|reference_end|>
arxiv
@article{buyya2001virtual, title={Virtual Laboratory: Enabling On-Demand Drug Design with the World Wide Grid}, author={Rajkumar Buyya, Kim Branson, Jon Giddy, and David Abramson}, journal={arXiv preprint arXiv:cs/0111047}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111047}, primaryClass={cs.DC} }
buyya2001virtual
arxiv-670297
cs/0111048
A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker
<|reference_start|>A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker: Computational Grids, coupling geographically distributed resources such as PCs, workstations, clusters, and scientific instruments, have emerged as a next-generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments continue to be a complex undertaking. In this article, we discuss our efforts in developing a resource management system for scheduling computations on resources distributed across the world with varying quality of service. Our service-oriented grid computing system called Nimrod-G manages all operations associated with remote execution, including resource discovery, trading, and scheduling based on economic principles and a user-defined quality of service requirement. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength Grids. We discuss results of preliminary experiments on scheduling some parametric computations using the Nimrod-G resource broker on a world-wide grid testbed that spans five continents.<|reference_end|>
arxiv
@article{abramson2001a, title={A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker}, author={David Abramson, Rajkumar Buyya, and Jonathan Giddy}, journal={arXiv preprint arXiv:cs/0111048}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111048}, primaryClass={cs.DC} }
abramson2001a
arxiv-670298
cs/0111049
An Environment for the Exploration of Non Monotonic Logic Programs
<|reference_start|>An Environment for the Exploration of Non Monotonic Logic Programs: Stable Model Semantics and Well Founded Semantics have been shown to be very useful in several applications of non-monotonic reasoning. However, Stable Model Semantics has high computational complexity, whereas Well Founded Semantics is easy to compute and provides an approximation of Stable Models. Efficient engines exist for both semantics of logic programs. This work presents a computational integration of two such systems, namely XSB and SMODELS. The resulting system is called XNMR, and provides an interactive system for the exploration of both semantics. Aspects such as modularity can be exploited in order to ease debugging of large knowledge bases with the usual Prolog debugging techniques and an interactive environment. In addition, the use of a full Prolog system as a front-end to a Stable Models engine augments the language usually accepted by such systems.<|reference_end|>
arxiv
@article{castro2001an, title={An Environment for the Exploration of Non Monotonic Logic Programs}, author={Luis F. Castro and David S. Warren}, journal={arXiv preprint arXiv:cs/0111049}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111049}, primaryClass={cs.PL cs.LO} }
castro2001an
arxiv-670299
cs/0111050
Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time
<|reference_start|>Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time: We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity.<|reference_end|>
arxiv
@article{spielman2001smoothed, title={Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time}, author={Daniel A. Spielman and Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0111050}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111050}, primaryClass={cs.DS} }
spielman2001smoothed
arxiv-670300
cs/0111051
Predicting RNA Secondary Structures with Arbitrary Pseudoknots by Maximizing the Number of Stacking Pairs
<|reference_start|>Predicting RNA Secondary Structures with Arbitrary Pseudoknots by Maximizing the Number of Stacking Pairs: The paper investigates the computational problem of predicting RNA secondary structures. The general belief is that allowing pseudoknots makes the problem hard. Existing polynomial-time algorithms are heuristic algorithms with no performance guarantee and can only handle limited types of pseudoknots. In this paper we initiate the study of predicting RNA secondary structures with a maximum number of stacking pairs while allowing arbitrary pseudoknots. We obtain two approximation algorithms with worst-case approximation ratios of 1/2 and 1/3 for planar and general secondary structures, respectively. For an RNA sequence of $n$ bases, the approximation algorithm for planar secondary structures runs in $O(n^3)$ time while that for the general case runs in linear time. Furthermore, we prove that allowing pseudoknots makes it NP-hard to maximize the number of stacking pairs in a planar secondary structure. This result is in contrast with the recent NP-hardness results on pseudoknots, which are based on optimizing some general and complicated energy functions.<|reference_end|>
arxiv
@article{ieong2001predicting, title={Predicting RNA Secondary Structures with Arbitrary Pseudoknots by Maximizing the Number of Stacking Pairs}, author={Samuel Ieong, Ming-Yang Kao, Tak-Wah Lam, Wing-Kin Sung, Siu-Ming Yiu}, journal={arXiv preprint arXiv:cs/0111051}, year={2001}, archivePrefix={arXiv}, eprint={cs/0111051}, primaryClass={cs.CE cs.DS q-bio} }
ieong2001predicting