corpus_id
paper_id
title
abstract
source
bibtex
citation_key
arxiv-671801
cs/0404043
Benchmarking Blunders and Things That Go Bump in the Night
<|reference_start|>Benchmarking Blunders and Things That Go Bump in the Night: Benchmarking, by which I mean any computer system that is driven by a controlled workload, is the ultimate in performance testing and simulation. Aside from being a form of institutionalized cheating, it also offers countless opportunities for systematic mistakes in the way the workloads are applied and the resulting measurements interpreted. "Right test, wrong conclusion" is a ubiquitous mistake that happens because test engineers tend to treat data as divine. Such reverence is not only misplaced, it's also a sure ticket to production hell when the application finally goes live. I demonstrate how such mistakes can be avoided by means of two war stories that are real WOPRs: (a) how to resolve benchmark flaws over the psychic hotline, and (b) how benchmarks can go flat with too much Java juice. In each case I present simple performance models and show how they can be applied to correctly assess benchmark data.<|reference_end|>
arxiv
@article{gunther2004benchmarking, title={Benchmarking Blunders and Things That Go Bump in the Night}, author={Neil J. Gunther}, journal={arXiv preprint arXiv:cs/0404043}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404043}, primaryClass={cs.PF cs.SE} }
gunther2004benchmarking
arxiv-671802
cs/0404044
A note on dimensions of polynomial size circuits
<|reference_start|>A note on dimensions of polynomial size circuits: In this paper, we use resource-bounded dimension theory to investigate polynomial size circuits. We show that for every $i\geq 0$, $\mathrm{P/poly}$ has $i$th-order scaled $p_3$-strong dimension 0. We also show that $\mathrm{P/poly}^{\mathrm{i.o.}}$ has $p_3$-dimension 1/2 and $p_3$-strong dimension 1. Our results improve previous measure results of Lutz (1992) and dimension results of Hitchcock and Vinodchandran (2004).<|reference_end|>
arxiv
@article{gu2004a, title={A note on dimensions of polynomial size circuits}, author={Xiaoyang Gu}, journal={arXiv preprint arXiv:cs/0404044}, year={2004}, doi={10.1016/j.tcs.2006.02.022}, archivePrefix={arXiv}, eprint={cs/0404044}, primaryClass={cs.CC} }
gu2004a
arxiv-671803
cs/0404045
Speculation on graph computation architectures and computing via synchronization
<|reference_start|>Speculation on graph computation architectures and computing via synchronization: A speculative overview of a future topic of research. The paper is a collection of ideas concerning two related areas: 1) Graph computation machines ("computing with graphs"). This is the class of models of computation in which the state of the computation is represented as a graph or network. 2) Arc-based neural networks, which store information not as activation in the nodes, but rather by adding and deleting arcs. Sometimes the arcs may be interpreted as synchronization. Warnings to readers: this is not the sort of thing that one might submit to a journal or conference. No proofs are presented. The presentation is informal, and written at an introductory level. You'll probably want to wait for a more concise presentation.<|reference_end|>
arxiv
@article{shanks2004speculation, title={Speculation on graph computation architectures and computing via synchronization}, author={Bayle Shanks}, journal={arXiv preprint arXiv:cs/0404045}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404045}, primaryClass={cs.NE cs.AI} }
shanks2004speculation
arxiv-671804
cs/0404046
Visualising the structure of architectural open spaces based on shape analysis
<|reference_start|>Visualising the structure of architectural open spaces based on shape analysis: This paper proposes the application of some well-known two-dimensional geometrical shape descriptors for the visualisation of the structure of architectural open spaces. The paper demonstrates the use of visibility measures such as distance to obstacles and amount of visible space to calculate shape descriptors such as the convexity and skeleton of the open space. The aim of the paper is to indicate a simple, objective and quantifiable approach to understanding the structure of open spaces, which is otherwise impossible due to the complex construction of built structures.<|reference_end|>
arxiv
@article{rana2004visualising, title={Visualising the structure of architectural open spaces based on shape analysis}, author={Sanjay Rana and Mike Batty}, journal={International Journal of Architectural Computing, 2(1), 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404046}, primaryClass={cs.CV cs.CG cs.DS} }
rana2004visualising
arxiv-671805
cs/0404047
Using matrices in post-processing phase of CFD simulations
<|reference_start|>Using matrices in post-processing phase of CFD simulations: In this work I present a technique for the construction and fast evaluation of a family of cubic polynomials for analytic smoothing and graphical rendering of particle trajectories for flows in a generic geometry. The principal result of the work was the implementation and testing of a method for interpolating 3D points by regular parametric curves, and their fast and efficient evaluation at a good rendering resolution. For this purpose I used a parallel environment based on a multiprocessor cluster architecture. The efficiency of the method is good, mainly owing to a reduction in the number of floating-point computations obtained by caching the numerical values of some powers of the line parameter, and to a reduced need for communication among processes. This work was developed for the Research and Development Department of my company for planning advanced customized models of industrial burners.<|reference_end|>
arxiv
@article{argentini2004using, title={Using matrices in post-processing phase of CFD simulations}, author={Gianluca Argentini}, journal={Progress in Industrial Mathematics at ECMI 2004 - Eindhoven (Netherlands), Springer, 2005}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404047}, primaryClass={cs.NA cs.DC physics.comp-ph} }
argentini2004using
arxiv-671806
cs/0404048
Incompleteness of States w.r.t. Traces in Model Checking
<|reference_start|>Incompleteness of States wrt Traces in Model Checking: Cousot and Cousot introduced and studied a general past/future-time specification language, called mu*-calculus, featuring a natural time-symmetric trace-based semantics. The standard state-based semantics of the mu*-calculus is an abstract interpretation of its trace-based semantics, which turns out to be incomplete (i.e., trace-incomplete), even for finite systems. As a consequence, standard state-based model checking of the mu*-calculus is incomplete w.r.t. trace-based model checking. This paper shows that any refinement or abstraction of the domain of sets of states induces a corresponding semantics which is still trace-incomplete for any propositional fragment of the mu*-calculus. This derives from a number of results, one for each incomplete logical/temporal connective of the mu*-calculus, that characterize the structure of models, i.e. transition systems, whose corresponding state-based semantics of the mu*-calculus is trace-complete.<|reference_end|>
arxiv
@article{giacobazzi2004incompleteness, title={Incompleteness of States w.r.t. Traces in Model Checking}, author={Roberto Giacobazzi and Francesco Ranzato}, journal={arXiv preprint arXiv:cs/0404048}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404048}, primaryClass={cs.LO} }
giacobazzi2004incompleteness
arxiv-671807
cs/0404049
Exploiting Cross-Document Relations for Multi-document Evolving Summarization
<|reference_start|>Exploiting Cross-Document Relations for Multi-document Evolving Summarization: This paper presents a methodology for summarization from multiple documents which are about a specific topic. It is based on the specification and identification of the cross-document relations that occur among textual elements within those documents. Our methodology involves the specification of the topic-specific entities, of the messages conveyed about those entities by certain textual elements, and of the relations that can hold among these messages. The above resources are necessary for setting up a specific topic for our query-based summarization approach, which uses these resources to identify the query-specific messages within the documents and the query-specific relations that connect these messages across documents.<|reference_end|>
arxiv
@article{afantenos2004exploiting, title={Exploiting Cross-Document Relations for Multi-document Evolving Summarization}, author={Stergos D. Afantenos, Irene Doura, Eleni Kapellou, and Vangelis Karkaletsis}, journal={Methods and Applications of Artificial Intelligence, Volume 3025 of Lecture Notes in Computer Science. Springer-Verlag Heidelberg 2004. pp 410-419.}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404049}, primaryClass={cs.CL cs.AI} }
afantenos2004exploiting
arxiv-671808
cs/0404050
A General Framework For Lazy Functional Logic Programming With Algebraic Polymorphic Types
<|reference_start|>A General Framework For Lazy Functional Logic Programming With Algebraic Polymorphic Types: We propose a general framework for first-order functional logic programming, supporting lazy functions, non-determinism and polymorphic datatypes whose data constructors obey a set C of equational axioms. On top of a given C, we specify a program as a set R of C-based conditional rewriting rules for defined functions. We argue that equational logic does not supply the proper semantics for such programs. Therefore, we present an alternative logic which includes C-based rewriting calculi and a notion of model. We get soundness and completeness for C-based rewriting w.r.t. models, existence of free models for all programs, and type preservation results. As operational semantics, we develop a sound and complete procedure for goal solving, which is based on the combination of lazy narrowing with unification modulo C. Our framework is quite expressive for many purposes, such as solving action and change problems, or realizing the GAMMA computation model.<|reference_end|>
arxiv
@article{arenas-sanchez2004a, title={A General Framework For Lazy Functional Logic Programming With Algebraic Polymorphic Types}, author={Puri Arenas-Sanchez, Mario Rodriguez-Artalejo}, journal={Theory and Practice of Logic Programming, vol. 1, no. 2, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404050}, primaryClass={cs.PL} }
arenas-sanchez2004a
arxiv-671809
cs/0404051
Knowledge And The Action Description Language A
<|reference_start|>Knowledge And The Action Description Language A: We introduce Ak, an extension of the action description language A (Gelfond and Lifschitz, 1993) to handle actions which affect knowledge. We use sensing actions to increase an agent's knowledge of the world and non-deterministic actions to remove knowledge. We include complex plans involving conditionals and loops in our query language for hypothetical reasoning. We also present a translation of Ak domain descriptions into epistemic logic programs.<|reference_end|>
arxiv
@article{lobo2004knowledge, title={Knowledge And The Action Description Language A}, author={Jorge Lobo, Gisela Mendez, Stuart R. Taylor}, journal={Theory and Practice of Logic Programming, vol. 1, no. 2, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404051}, primaryClass={cs.AI} }
lobo2004knowledge
arxiv-671810
cs/0404052
Multi-Threading And Message Communication In Qu-Prolog
<|reference_start|>Multi-Threading And Message Communication In Qu-Prolog: This paper presents the multi-threading and internet message communication capabilities of Qu-Prolog. Message addresses are symbolic and the communications package provides high-level support that completely hides details of IP addresses and port numbers as well as the underlying TCP/IP transport layer. The combination of multi-threading and high-level inter-thread message communication provides simple, powerful support for implementing internet-distributed intelligent applications.<|reference_end|>
arxiv
@article{clark2004multi-threading, title={Multi-Threading And Message Communication In Qu-Prolog}, author={Keith L. Clark, Peter J. Robinson, Richard Hagen}, journal={Theory and Practice of Logic Programming, vol. 1, no. 3, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404052}, primaryClass={cs.PL} }
clark2004multi-threading
arxiv-671811
cs/0404053
Constraint Logic Programming with Hereditary Harrop Formula
<|reference_start|>Constraint Logic Programming with Hereditary Harrop Formula: Constraint Logic Programming (CLP) and Hereditary Harrop formulas (HH) are two well known ways to enhance the expressivity of Horn clauses. In this paper, we present a novel combination of these two approaches. We show how to enrich the syntax and proof theory of HH with the help of a given constraint system, in such a way that the key property of HH as a logic programming language (namely, the existence of uniform proofs) is preserved. We also present a procedure for goal solving, showing its soundness and completeness for computing answer constraints. As a consequence of this result, we obtain a new strong completeness theorem for CLP that avoids the need to build disjunctions of computed answers, as well as a more abstract formulation of a known completeness theorem for HH.<|reference_end|>
arxiv
@article{leach2004constraint, title={Constraint Logic Programming with Hereditary Harrop Formula}, author={Javier Leach, Susana Nieva, Mario Rodriguez-Artalejo}, journal={arXiv preprint arXiv:cs/0404053}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404053}, primaryClass={cs.PL} }
leach2004constraint
arxiv-671812
cs/0404054
New Covert Channels in HTTP
<|reference_start|>New Covert Channels in HTTP: This paper presents new methods enabling anonymous communication on the Internet. We describe a new protocol that allows us to create an anonymous overlay network by exploiting the web browsing activities of regular users. We show that the overlay network provides an anonymity set greater than the set of senders and receivers in a realistic threat model. In particular, the protocol provides unobservability in our threat model.<|reference_end|>
arxiv
@article{bauer2004new, title={New Covert Channels in HTTP}, author={Matthias Bauer}, journal={Proceedings of the 2003 ACM Workshop on Privacy in the Electronic Society}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404054}, primaryClass={cs.CR cs.NI} }
bauer2004new
arxiv-671813
cs/0404055
Finite-Tree Analysis for Constraint Logic-Based Languages: The Complete Unabridged Version
<|reference_start|>Finite-Tree Analysis for Constraint Logic-Based Languages: The Complete Unabridged Version: Logic languages based on the theory of rational, possibly infinite, trees have much appeal in that rational trees allow for faster unification (due to the safe omission of the occurs-check) and increased expressivity (cyclic terms can provide very efficient representations of grammars and other useful objects). Unfortunately, the use of infinite rational trees has problems. For instance, many of the built-in and library predicates are ill-defined for such trees and need to be supplemented by run-time checks whose cost may be significant. Moreover, some widely-used program analysis and manipulation techniques are correct only for those parts of programs working over finite trees. It is thus important to obtain, automatically, a knowledge of the program variables (the finite variables) that, at the program points of interest, will always be bound to finite terms. For these reasons, we propose here a new data-flow analysis, based on abstract interpretation, that captures such information.<|reference_end|>
arxiv
@article{bagnara2004finite-tree, title={Finite-Tree Analysis for Constraint Logic-Based Languages: The Complete Unabridged Version}, author={Roberto Bagnara, Roberta Gori, Patricia M. Hill, and Enea Zaffanella}, journal={arXiv preprint arXiv:cs/0404055}, year={2004}, archivePrefix={arXiv}, eprint={cs/0404055}, primaryClass={cs.PL} }
bagnara2004finite-tree
arxiv-671814
cs/0404056
A lambda calculus for quantum computation with classical control
<|reference_start|>A lambda calculus for quantum computation with classical control: The objective of this paper is to develop a functional programming language for quantum computers. We develop a lambda calculus for the classical control model, following the first author's work on quantum flow-charts. We define a call-by-value operational semantics, and we give a type system using affine intuitionistic linear logic. The main results of this paper are the safety properties of the language and the development of a type inference algorithm.<|reference_end|>
arxiv
@article{selinger2004a, title={A lambda calculus for quantum computation with classical control}, author={Peter Selinger, Benoit Valiron}, journal={Proc. of TLCA 2005}, year={2004}, doi={10.1007/11417170_26}, archivePrefix={arXiv}, eprint={cs/0404056}, primaryClass={cs.LO} }
selinger2004a
arxiv-671815
cs/0404057
Convergence of Discrete MDL for Sequential Prediction
<|reference_start|>Convergence of Discrete MDL for Sequential Prediction: We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to all algorithms for some fixed universal Turing machine (this correspondence is by enumerable semimeasures, hence the resulting models are stochastic). We prove convergence theorems similar to Solomonoff's theorem of universal induction, which also holds for general Bayes mixtures. The bound characterizing the convergence speed of MDL predictions is exponentially larger than the corresponding bound for Bayes mixtures. We observe that there are at least three different ways of using MDL for prediction. One of these has worse prediction properties, for which predictions only converge if the MDL estimator stabilizes. We establish sufficient conditions for this to occur. Finally, some immediate consequences for complexity relations and randomness criteria are proven.<|reference_end|>
arxiv
@article{poland2004convergence, title={Convergence of Discrete MDL for Sequential Prediction}, author={Jan Poland and Marcus Hutter}, journal={Proc. 17th Annual Conf. on Learning Theory (COLT-2004), pages 300--314}, year={2004}, number={IDSIA-03-04}, archivePrefix={arXiv}, eprint={cs/0404057}, primaryClass={cs.LG cs.AI math.ST stat.TH} }
poland2004convergence
arxiv-671816
cs/0404058
Efficient coroutine generation of constrained Gray sequences
<|reference_start|>Efficient coroutine generation of constrained Gray sequences: We study an interesting family of cooperating coroutines, which is able to generate all patterns of bits that satisfy certain fairly general ordering constraints, changing only one bit at a time. (More precisely, the directed graph of constraints is required to be cycle-free when it is regarded as an undirected graph.) If the coroutines are implemented carefully, they yield an algorithm that needs only a bounded amount of computation per bit change, thereby solving an open problem in the field of combinatorial pattern generation.<|reference_end|>
arxiv
@article{knuth2004efficient, title={Efficient coroutine generation of constrained Gray sequences}, author={Donald E. Knuth, Frank Ruskey}, journal={Lecture Notes in Computer Science 2635 (2004), 183--204}, year={2004}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/0404058}, primaryClass={cs.DS} }
knuth2004efficient
arxiv-671817
cs/0405001
Toward a New Policy for Scientific and Technical Communication: the Case of Kyrgyz Republic
<|reference_start|>Toward a New Policy for Scientific and Technical Communication: the Case of Kyrgyz Republic: The objective of this policy paper is to formulate a new policy in the field of scientific and technical information (STI) in the Kyrgyz Republic in the light of the emergence and rapid development of electronic scientific communication. The major problem with communication in science in the Republic is the lack of adequate access to information by scientists. An equally serious problem is the poor visibility of research conducted in Kyrgyzstan and, as a consequence, its negligible research impact on the academic community globally. The paper proposes an integrated approach to the formulation of a new STI policy based on a number of policy components: telecommunication networks, computerization, STI systems, legislation & standards, and education & training. Two alternatives were considered: electronic vs. paper-based scientific communication, and development of the national STI system vs. cross-national virtual collaboration. The study results in a number of policy recommendations for the identified stakeholders.<|reference_end|>
arxiv
@article{djenchuraev2004toward, title={Toward a New Policy for Scientific and Technical Communication: the Case of Kyrgyz Republic}, author={Nurlan Djenchuraev}, journal={arXiv preprint arXiv:cs/0405001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405001}, primaryClass={cs.CY} }
djenchuraev2004toward
arxiv-671818
cs/0405002
Splitting an operator: Algebraic modularity results for logics with fixpoint semantics
<|reference_start|>Splitting an operator: Algebraic modularity results for logics with fixpoint semantics: It is well known that, under certain conditions, it is possible to split logic programs under stable model semantics, i.e. to divide such a program into a number of different "levels", such that the models of the entire program can be constructed by incrementally constructing models for each level. Similar results exist for other non-monotonic formalisms, such as auto-epistemic logic and default logic. In this work, we present a general, algebraic splitting theory for logics with a fixpoint semantics. Together with the framework of approximation theory, a general fixpoint theory for arbitrary operators, this gives us a uniform and powerful way of deriving splitting results for each logic with a fixpoint semantics. We demonstrate the usefulness of these results by generalizing existing results for logic programming, auto-epistemic logic and default logic.<|reference_end|>
arxiv
@article{vennekens2004splitting, title={Splitting an operator: Algebraic modularity results for logics with fixpoint semantics}, author={Joost Vennekens, David Gilis, Marc Denecker}, journal={ACM Transactions on Computational Logic, Volume 7, Number 4, 2006}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405002}, primaryClass={cs.AI cs.LO} }
vennekens2004splitting
arxiv-671819
cs/0405003
Model checking for Process Rewrite Systems and a class of action-based regular properties
<|reference_start|>Model checking for Process Rewrite Systems and a class of action--based regular properties: We consider the model checking problem for Process Rewrite Systems (PRSs), an infinite-state formalism (non Turing-powerful) which subsumes many common models such as Pushdown Processes and Petri Nets. PRSs can be adopted as formal models for programs with dynamic creation and synchronization of concurrent processes, and with recursive procedures. The model-checking problem for PRSs and action-based linear temporal logic (ALTL) is undecidable. However, decidability for some interesting fragment of ALTL remains an open question. In this paper we state decidability results concerning generalized acceptance properties about infinite derivations (infinite term rewriting) in PRSs. As a consequence, we obtain decidability of the model-checking (restricted to infinite runs) for PRSs and a meaningful fragment of ALTL.<|reference_end|>
arxiv
@article{bozzelli2004model, title={Model checking for Process Rewrite Systems and a class of action-based regular properties}, author={Laura Bozzelli}, journal={arXiv preprint arXiv:cs/0405003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405003}, primaryClass={cs.OH} }
bozzelli2004model
arxiv-671820
cs/0405004
Quantum Computers
<|reference_start|>Quantum Computers: This research paper gives an overview of quantum computers, covering a description of their operation, differences between quantum and silicon computers, major construction problems of a quantum computer, and many other basic aspects. No special scientific knowledge is necessary for the reader.<|reference_end|>
arxiv
@article{avaliani2004quantum, title={Quantum Computers}, author={Archil Avaliani}, journal={arXiv preprint arXiv:cs/0405004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405004}, primaryClass={cs.AI cs.AR} }
avaliani2004quantum
arxiv-671821
cs/0405005
Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard
<|reference_start|>Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard: Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.<|reference_end|>
arxiv
@article{guruswami2004maximum-likelihood, title={Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard}, author={Venkatesan Guruswami and Alexander Vardy}, journal={arXiv preprint arXiv:cs/0405005}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405005}, primaryClass={cs.CC cs.DM cs.IT math.IT} }
guruswami2004maximum-likelihood
arxiv-671822
cs/0405006
Bi-criteria Algorithm for Scheduling Jobs on Cluster Platforms
<|reference_start|>Bi-criteria Algorithm for Scheduling Jobs on Cluster Platforms: We describe in this paper a new method for building an efficient algorithm for scheduling jobs in a cluster. Jobs are considered as parallel tasks (PT) which can be scheduled on any number of processors. The main feature is to consider two criteria that are optimized together. These criteria are the makespan and the weighted minimal average completion time (minsum). They are chosen for their complementarity, to be able to represent both user-oriented objectives and system administrator objectives. We propose an algorithm based on a batch policy with increasing batch sizes, with a smart selection of jobs in each batch. This algorithm is assessed by intensive simulation results, compared to a new lower bound (obtained by a relaxation of ILP) of the optimal schedules for both criteria separately. It is currently implemented in an actual real-size cluster platform.<|reference_end|>
arxiv
@article{dutot2004bi-criteria, title={Bi-criteria Algorithm for Scheduling Jobs on Cluster Platforms}, author={Pierre-Francois Dutot (ID - IMAG), Lionel Eyraud (ID - IMAG), Grégory Mounié (ID - IMAG), Denis Trystram (ID - IMAG)}, journal={ACM Symposium on Parallel Algorithms and Architectures (2004) 125-132}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405006}, primaryClass={cs.DC cs.DS} }
dutot2004bi-criteria
arxiv-671823
cs/0405007
"In vivo" spam filtering: A challenge problem for data mining
<|reference_start|>"In vivo" spam filtering: A challenge problem for data mining: Spam, also known as Unsolicited Commercial Email (UCE), is the bane of email communication. Many data mining researchers have addressed the problem of detecting spam, generally by treating it as a static text classification problem. True in vivo spam filtering has characteristics that make it a rich and challenging domain for data mining. Indeed, real-world datasets with these characteristics are typically difficult to acquire and to share. This paper demonstrates some of these characteristics and argues that researchers should pursue in vivo spam filtering as an accessible domain for investigating them.<|reference_end|>
arxiv
@article{fawcett2004"in, title={"In vivo" spam filtering: A challenge problem for data mining}, author={Tom Fawcett}, journal={KDD Explorations vol.5 no.2, Dec 2003. pp.140-148}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405007}, primaryClass={cs.AI cs.DB cs.IR} }
fawcett2004"in
arxiv-671824
cs/0405008
A Comparative Study of Fuzzy Classification Methods on Breast Cancer Data
<|reference_start|>A Comparative Study of Fuzzy Classification Methods on Breast Cancer Data: In this paper, we examine the performance of four fuzzy rule generation methods on Wisconsin breast cancer data. The first method generates fuzzy if-then rules using the mean and the standard deviation of attribute values. The second approach generates fuzzy if-then rules using the histogram of attribute values. The third procedure generates fuzzy if-then rules with certainty grades, partitioning each attribute into homogeneous fuzzy sets. In the fourth approach, only overlapping areas are partitioned. The first two approaches generate a single fuzzy if-then rule for each class by specifying the membership function of each antecedent fuzzy set using the information about attribute values of training patterns. The other two approaches are based on fuzzy grids with homogeneous fuzzy partitions of each attribute. The performance of each approach is evaluated on breast cancer data sets. Simulation results show that the Modified grid approach achieves a high classification rate of 99.73%.<|reference_end|>
arxiv
@article{jain2004a, title={A Comparative Study of Fuzzy Classification Methods on Breast Cancer Data}, author={Ravi Jain and Ajith Abraham}, journal={Australiasian Physical And Engineering Sciences in Medicine, Australia, 2004 (forth coming)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405008}, primaryClass={cs.AI} }
jain2004a
arxiv-671825
cs/0405009
Intelligent Systems: Architectures and Perspectives
<|reference_start|>Intelligent Systems: Architectures and Perspectives: The integration of different learning and adaptation techniques to overcome individual limitations and to achieve synergetic effects through the hybridization or fusion of these techniques has, in recent years, contributed to a large number of new intelligent system designs. Computational intelligence is an innovative framework for constructing intelligent hybrid architectures involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic Reasoning (PR) and derivative-free optimization techniques such as Evolutionary Computation (EC). Most of these hybridization approaches, however, follow an ad hoc design methodology, justified by success in certain application domains. Due to the lack of a common framework, it often remains difficult to compare the various hybrid systems conceptually and to evaluate their performance comparatively. This chapter introduces the different generic architectures for integrating intelligent systems. The design aspects and perspectives of different hybrid architectures like NN-FIS, EC-FIS, EC-NN, FIS-PR and NN-FIS-EC systems are presented. Some conclusions are also provided towards the end.<|reference_end|>
arxiv
@article{abraham2004intelligent, title={Intelligent Systems: Architectures and Perspectives}, author={Ajith Abraham}, journal={Recent Advances in Intelligent Paradigms and Applications, Abraham A., Jain L. and Kacprzyk J. (Eds.), Studies in Fuzziness and Soft Computing, Springer Verlag Germany, ISBN 3790815381, Chapter 1, pp. 1-35, 2002}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405009}, primaryClass={cs.AI} }
abraham2004intelligent
arxiv-671826
cs/0405010
A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria
<|reference_start|>A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria: Neuro-fuzzy systems have attracted the growing interest of researchers in various scientific and engineering areas due to the increasing need for intelligent systems. This paper evaluates the use of two popular soft computing techniques and a conventional statistical approach based on the Box-Jenkins autoregressive integrated moving average (ARIMA) model to predict electricity demand in the State of Victoria, Australia. The soft computing methods considered are an evolving fuzzy neural network (EFuNN) and an artificial neural network (ANN) trained using the scaled conjugate gradient algorithm (CGA) and the backpropagation (BP) algorithm. The forecast accuracy is compared with the forecasts used by the Victorian Power Exchange (VPX) and the actual energy demand. For evaluation, we considered load demand patterns for 10 consecutive months taken every 30 min for training the different prediction models. Test results show that the neuro-fuzzy system performed better than the neural networks, the ARIMA model and the VPX forecasts.<|reference_end|>
arxiv
@article{abraham2004a, title={A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria}, author={Ajith Abraham and Baikunth Nath}, journal={Applied Soft Computing Journal, Elsevier Science, Volume 1&2, pp. 127-138, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405010}, primaryClass={cs.AI} }
abraham2004a
arxiv-671827
cs/0405011
Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques
<|reference_start|>Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques: The fusion of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS) has attracted the growing interest of researchers in various scientific and engineering areas due to the growing need for adaptive intelligent systems to solve real world problems. An ANN learns from scratch by adjusting the interconnections between layers. FIS is a popular computing framework based on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The advantages of a combination of ANN and FIS are obvious. There are several approaches to integrating ANN and FIS, and very often the choice depends on the application. We broadly classify the integration of ANN and FIS into three categories, namely the concurrent model, the cooperative model and the fully fused model. This paper starts with a discussion of the features of each model and generalizes the advantages and deficiencies of each. We further focus the review on the different types of fused neuro-fuzzy systems, citing the advantages and disadvantages of each model.<|reference_end|>
arxiv
@article{abraham2004neuro, title={Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques}, author={Ajith Abraham}, journal={Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Lecture Notes in Computer Science. Volume. 2084, Springer Verlag Germany, Jose Mira and Alberto Prieto (Eds.), ISBN 3540422358, Spain, pp. 269-276, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405011}, primaryClass={cs.AI} }
abraham2004neuro
arxiv-671828
cs/0405012
Is Neural Network a Reliable Forecaster on Earth? A MARS Query!
<|reference_start|>Is Neural Network a Reliable Forecaster on Earth? A MARS Query!: Long-term rainfall prediction is a challenging task, especially in the modern world where we are facing the major environmental problem of global warming. In general, climate and rainfall are highly non-linear phenomena in nature, exhibiting what is known as the butterfly effect. While some regions of the world are noticing a systematic decrease in annual rainfall, others notice increases in flooding and severe storms. The global nature of this phenomenon is very complicated and requires sophisticated computer modeling and simulation to predict accurately. In this paper, we report a performance analysis for Multivariate Adaptive Regression Splines (MARS) and artificial neural networks for one-month-ahead prediction of rainfall. To evaluate the prediction efficiency, we made use of 87 years of rainfall data for Kerala state, the southern part of the Indian peninsula, situated at the latitude-longitude pair (8°29'N, 76°57'E). We used an artificial neural network trained using the scaled conjugate gradient algorithm. The neural network and MARS were trained with 40 years of rainfall data. For performance evaluation, network-predicted outputs were compared with the actual rainfall data. Simulation results reveal that MARS is a good forecasting tool and performed better than the considered neural network.<|reference_end|>
arxiv
@article{abraham2004is, title={Is Neural Network a Reliable Forecaster on Earth? A MARS Query!}, author={Ajith Abraham & Dan Steinberg}, journal={Bio-Inspired Applications of Connectionism, Lecture Notes in Computer Science. Volume. 2085, Springer Verlag Germany, Jose Mira and Alberto Prieto (Eds.), ISBN 3540422374, Spain, pp.679-686, 2001}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405012}, primaryClass={cs.AI} }
abraham2004is
arxiv-671829
cs/0405013
DCT Based Texture Classification Using Soft Computing Approach
<|reference_start|>DCT Based Texture Classification Using Soft Computing Approach: Classification of texture patterns is one of the most important problems in pattern recognition. In this paper, we present a classification method based on the Discrete Cosine Transform (DCT) coefficients of texture images. As the DCT works on gray-level images, the color scheme of each image is transformed into gray levels. For classifying the images using the DCT coefficients, we used two popular soft computing techniques, namely neurocomputing and neuro-fuzzy computing. We used a feedforward neural network trained using backpropagation learning and an evolving fuzzy neural network to classify the textures. The soft computing models were trained using 80% of the texture data and the remainder was used for testing and validation purposes. A performance comparison was made among the soft computing models for the texture classification problem. We also analyzed the effects of prolonged training of the neural networks. It is observed that the proposed neuro-fuzzy model performed better than the neural network.<|reference_end|>
arxiv
@article{sorwar2004dct, title={DCT Based Texture Classification Using Soft Computing Approach}, author={Golam Sorwar and Ajith Abraham}, journal={Malaysian Journal of Computer Science, 2004 (forth coming)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405013}, primaryClass={cs.AI} }
sorwar2004dct
arxiv-671830
cs/0405014
Estimating Genome Reversal Distance by Genetic Algorithm
<|reference_start|>Estimating Genome Reversal Distance by Genetic Algorithm: Sorting by reversals is an important problem in inferring the evolutionary relationship between two genomes. The problem of sorting unsigned permutations has been proven to be NP-hard. The best guaranteed error bound is achieved by the 3/2-approximation algorithm. However, the problem of sorting signed permutations can be solved easily. Fast algorithms have been developed both for finding the sorting sequence and for finding the reversal distance of a signed permutation. In this paper, we present a way to view the problem of sorting an unsigned permutation as that of sorting a signed permutation. The problem can then be seen as searching for an optimal signed permutation among all 2^n corresponding signed permutations. We use a genetic algorithm to conduct the search. Our experimental results show that the proposed method outperforms the 3/2-approximation algorithm.<|reference_end|>
arxiv
@article{auyeung2004estimating, title={Estimating Genome Reversal Distance by Genetic Algorithm}, author={Andy AuYeung and Ajith Abraham}, journal={2003 IEEE Congress on Evolutionary Computation (CEC2003), Australia, IEEE Press, ISBN 0780378040, pp. 1157-1161, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405014}, primaryClass={cs.AI} }
auyeung2004estimating
arxiv-671831
cs/0405015
A High-Level Reconfigurable Computing Platform Software Frameworks
<|reference_start|>A High-Level Reconfigurable Computing Platform Software Frameworks: Reconfigurable computing refers to the use of processors, such as Field Programmable Gate Arrays (FPGAs), that can be modified at the hardware level to take on different processing tasks. A reconfigurable computing platform describes the hardware and software base on top of which modular extensions can be created, depending on the desired application. Such reconfigurable computing platforms can take on varied designs and implementations, according to the constraints imposed and features desired by the scope of applications. This paper introduces a PC-based reconfigurable computing platform software framework that is flexible and extensible enough to abstract the different hardware types and functionality that different PCs may have. The requirements of the software platform, the architectural issues addressed, the rationale behind the decisions made, and the framework design implemented are discussed.<|reference_end|>
arxiv
@article{nathan2004a, title={A High-Level Reconfigurable Computing Platform Software Frameworks}, author={Darran Nathan, Kelvin Lim Mun Kit, Kelly Choo Hon Min, Philip Wong Jit Chin, Andreas Weisensee}, journal={arXiv preprint arXiv:cs/0405015}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405015}, primaryClass={cs.AR} }
nathan2004a
arxiv-671832
cs/0405016
Intrusion Detection Systems Using Adaptive Regression Splines
<|reference_start|>Intrusion Detection Systems Using Adaptive Regression Splines: The past few years have witnessed a growing recognition of intelligent techniques for the construction of efficient and reliable intrusion detection systems. Due to increasing incidents of cyber attacks, building effective intrusion detection systems (IDS) is essential for protecting information systems security, and yet it remains an elusive goal and a great challenge. In this paper, we report a performance analysis of Multivariate Adaptive Regression Splines (MARS), neural networks and support vector machines. The MARS procedure builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. A brief comparison of different neural network learning algorithms is also given.<|reference_end|>
arxiv
@article{mukkamala2004intrusion, title={Intrusion Detection Systems Using Adaptive Regression Splines}, author={Srinivas Mukkamala, Andrew H. Sung, Ajith Abraham and Vitorino Ramos}, journal={6th International Conference on Enterprise Information Systems, ICEIS'04, Portugal, I. Seruca, J. Filipe, S. Hammoudi and J. Cordeiro (Eds.), ISBN 972-8865-00-7, Vol. 3, pp. 26-33, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405016}, primaryClass={cs.AI} }
mukkamala2004intrusion
arxiv-671833
cs/0405017
Data Mining Approach for Analyzing Call Center Performance
<|reference_start|>Data Mining Approach for Analyzing Call Center Performance: The aim of our research was to apply well-known data mining techniques (such as linear neural networks, multi-layered perceptrons, probabilistic neural networks, classification and regression trees, support vector machines and finally a hybrid decision tree neural network approach) to the problem of predicting the quality of service in call centers, based on the performance data actually collected in a call center of a large insurance company. Our aim was two-fold. First, to compare the performance of models built using the above-mentioned techniques and, second, to analyze the characteristics of the input sensitivity in order to better understand the relationship between the performance evaluation process and the actual performance and in this way help improve the performance of call centers. In this paper we summarize our findings.<|reference_end|>
arxiv
@article{paprzycki2004data, title={Data Mining Approach for Analyzing Call Center Performance}, author={Marcin Paprzycki, Ajith Abraham and Ruiyuan Guo}, journal={The 17th International Conference on Industrial & Engineering Applications of Artificial Intelligence and Expert Systems, Canada, Springer Verlag, Germany, 2004 (forth coming)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405017}, primaryClass={cs.AI} }
paprzycki2004data
arxiv-671834
cs/0405018
Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms
<|reference_start|>Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms: The use of intelligent systems for stock market predictions has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well represented using several connectionist paradigms and soft computing techniques. To demonstrate the different techniques, we considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. We analyzed 7 years' Nasdaq-100 main index values and 4 years' NIFTY index values. This paper investigates the development of a reliable and efficient technique to model the seemingly chaotic behavior of stock markets. We considered an artificial neural network trained using the Levenberg-Marquardt algorithm, a Support Vector Machine (SVM), a Takagi-Sugeno neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper briefly explains how the different connectionist paradigms could be formulated using different learning methods and then investigates whether they can provide the required level of performance, sufficiently good and robust so as to provide a reliable forecast model for stock market indices. Experimental results reveal that all the connectionist paradigms considered could represent the stock indices' behavior very accurately.<|reference_end|>
arxiv
@article{abraham2004modeling, title={Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms}, author={Ajith Abraham, Ninan Sajith Philip and P. Saratchandran}, journal={International Journal of Neural, Parallel & Scientific Computations, USA, Volume 11, Issue (1&2), pp. 143-160, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405018}, primaryClass={cs.AI} }
abraham2004modeling
arxiv-671835
cs/0405019
Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision Making Problems
<|reference_start|>Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision Making Problems: The purpose of this paper is to point to the usefulness of applying a linear mathematical formulation of fuzzy multiple criteria objective decision methods in organising business activities. In this respect, fuzzy parameters of linear programming are modelled by preference-based membership functions. This paper begins with an introduction and some related research, followed by fundamentals of fuzzy set theory and technical concepts of fuzzy multiple objective decision models. Further, a real case study of a manufacturing plant and the implementation of the proposed technique are presented. Empirical results clearly show the superiority of the fuzzy technique in optimising individual objective functions when compared to the non-fuzzy approach. Furthermore, for the problem considered, the optimal solution helps to infer that incorporating fuzziness in a linear programming model, either in the constraints or in both the objective functions and constraints, provides a similar (or even better) level of satisfaction for the obtained results compared to non-fuzzy linear programming.<|reference_end|>
arxiv
@article{petrovic-lazarevic2004hybrid, title={Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision Making Problems}, author={Sonja Petrovic-Lazarevic and Ajith Abraham}, journal={International Journal of Neural, Parallel & Scientific Computations, USA, Volume 11, Issues (1&2), pp. 53-68, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405019}, primaryClass={cs.AI} }
petrovic-lazarevic2004hybrid
arxiv-671836
cs/0405020
A proof of Alon's second eigenvalue conjecture and related problems
<|reference_start|>A proof of Alon's second eigenvalue conjecture and related problems: In this paper we show the following conjecture of Noga Alon. Fix a positive integer d>2 and real epsilon > 0; consider the probability that a random d-regular graph on n vertices has the second eigenvalue of its adjacency matrix greater than 2 sqrt(d-1) + epsilon; then this probability goes to zero as n tends to infinity. We prove the conjecture for a number of notions of random d-regular graph, including models for d odd. We also estimate the aforementioned probability more precisely, showing in many cases and models (but not all) that it decays like a polynomial in 1/n.<|reference_end|>
arxiv
@article{friedman2004a, title={A proof of Alon's second eigenvalue conjecture and related problems}, author={Joel Friedman}, journal={arXiv preprint arXiv:cs/0405020}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405020}, primaryClass={cs.DM math.CO} }
friedman2004a
arxiv-671837
cs/0405021
Computing Multi-Homogeneous Bezout Numbers is Hard
<|reference_start|>Computing Multi-Homogeneous Bezout Numbers is Hard: The multi-homogeneous Bezout number is a bound for the number of solutions of a system of multi-homogeneous polynomial equations, in a suitable product of projective spaces. Given an arbitrary, not necessarily multi-homogeneous system, one can ask for the optimal multi-homogenization that would minimize the Bezout number. In this paper, it is proved that the problem of computing, or even estimating the optimal multi-homogeneous Bezout number is actually NP-hard. In terms of approximation theory for combinatorial optimization, the problem of computing the best multi-homogeneous structure does not belong to APX, unless P = NP. Moreover, polynomial time algorithms for estimating the minimal multi-homogeneous Bezout number up to a fixed factor cannot exist even in a randomized setting, unless BPP contains NP.<|reference_end|>
arxiv
@article{malajovich2004computing, title={Computing Multi-Homogeneous Bezout Numbers is Hard}, author={Gregorio Malajovich and Klaus Meer}, journal={Theory of Computing Systems, Volume 40, Number 4 / June, 2007}, year={2004}, doi={10.1007/s00224-006-1322-y}, archivePrefix={arXiv}, eprint={cs/0405021}, primaryClass={cs.CC cs.SC} }
malajovich2004computing
arxiv-671838
cs/0405022
Encryption Schemes using Finite Frames and Hadamard Arrays
<|reference_start|>Encryption Schemes using Finite Frames and Hadamard Arrays: We propose a cipher similar to the One Time Pad and the McEliece cipher, based on a subband coding scheme. The encoding process is an approximation to the One Time Pad encryption scheme. We present results of numerical experiments which suggest that a brute-force attack on the proposed scheme does not result in all possible plaintexts, as the One Time Pad does, but that the brute-force attack nevertheless does not compromise the system. However, we demonstrate that the cipher is vulnerable to a chosen-plaintext attack.<|reference_end|>
arxiv
@article{harkins2004encryption, title={Encryption Schemes using Finite Frames and Hadamard Arrays}, author={Ryan Harkins, Eric Weber and Andrew Westmeyer}, journal={arXiv preprint arXiv:cs/0405022}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405022}, primaryClass={cs.CR} }
harkins2004encryption
arxiv-671839
cs/0405023
A Grid Service Broker for Scheduling Distributed Data-Oriented Applications on Global Grids
<|reference_start|>A Grid Service Broker for Scheduling Distributed Data-Oriented Applications on Global Grids: The next generation of scientific experiments and studies, popularly called e-Science, is carried out by large collaborations of researchers distributed around the world engaged in the analysis of huge collections of data generated by scientific instruments. Grid computing has emerged as an enabler for e-Science as it permits the creation of virtual organizations that bring together communities with common objectives. Within a community, data collections are stored or replicated on distributed resources to enhance storage capability or efficiency of access. In such an environment, scientists need to have the ability to carry out their studies by transparently accessing distributed data and computational resources. In this paper, we propose and develop a Grid broker that mediates access to distributed resources by (a) discovering suitable data sources for a given analysis scenario, (b) discovering suitable computational resources, (c) optimally mapping analysis jobs to resources, (d) deploying and monitoring job execution on selected resources, (e) accessing data from local or remote data sources during job execution and (f) collating and presenting results. The broker supports a declarative and dynamic parametric programming model for creating grid applications. We have used this model in grid-enabling a high energy physics analysis application (Belle Analysis Software Framework). The broker has been used in deploying Belle experiment data analysis jobs on a grid testbed, called the Belle Analysis Data Grid, having resources distributed across Australia interconnected through GrangeNet.<|reference_end|>
arxiv
@article{venugopal2004a, title={A Grid Service Broker for Scheduling Distributed Data-Oriented Applications on Global Grids}, author={Srikumar Venugopal, Rajkumar Buyya and Lyle Winton}, journal={arXiv preprint arXiv:cs/0405023}, year={2004}, number={Technical Report, GRIDS-TR-2004-1, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, February 2004}, archivePrefix={arXiv}, eprint={cs/0405023}, primaryClass={cs.DC} }
venugopal2004a
arxiv-671840
cs/0405024
Meta-Learning Evolutionary Artificial Neural Networks
<|reference_start|>Meta-Learning Evolutionary Artificial Neural Networks: In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks wherein the neural network architecture, activation function, connection weights, learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks for function approximation problems. To evaluate the comparative performance, we used three different well-known chaotic time series. We also present the state-of-the-art popular neural network learning algorithms and some experimental results related to convergence speed and generalization performance. We explored the performance of the backpropagation algorithm, the conjugate gradient algorithm, the quasi-Newton algorithm and the Levenberg-Marquardt algorithm on the three chaotic time series. The performance of the different learning algorithms was evaluated as the activation functions and architecture were changed. We further present the theoretical background, algorithm and design strategy, and demonstrate how effective and inevitable the proposed MLEANN framework is for designing a neural network that is smaller, faster and has better generalization performance.<|reference_end|>
arxiv
@article{abraham2004meta-learning, title={Meta-Learning Evolutionary Artificial Neural Networks}, author={Ajith Abraham}, journal={Neurocomputing Journal, Elsevier Science, Netherlands, Vol. 56c, pp. 1-38, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405024}, primaryClass={cs.AI} }
abraham2004meta-learning
arxiv-671841
cs/0405025
The Largest Compatible Subset Problem for Phylogenetic Data
<|reference_start|>The Largest Compatible Subset Problem for Phylogenetic Data: Phylogenetic tree construction is the task of inferring the evolutionary relationship between species from experimental data. However, the experimental data are often imperfect and conflict with each other. Therefore, it is important to extract the motif from the imperfect data. The largest compatible subset problem is that, given a set of experimental data, we want to discard the minimum amount such that the remainder is compatible. The largest compatible subset problem can be viewed as the vertex cover problem in graph theory, which has been proven to be NP-hard. In this paper, we propose a hybrid Evolutionary Computing (EC) method for this problem. The proposed method combines the EC approach and the algorithmic approach for specially structured graphs. As a result, the complexity of the problem is dramatically reduced. Experiments were performed on randomly generated graphs with different edge densities. The vertex covers produced by the proposed method were then compared to the vertex covers produced by a 2-approximation algorithm. The experimental results showed that the proposed method consistently outperformed a classical 2-approximation algorithm. Furthermore, a significant improvement was found when the graph density was small.<|reference_end|>
arxiv
@article{auyeung2004the, title={The Largest Compatible Subset Problem for Phylogenetic Data}, author={Andy Auyeung and Ajith Abraham}, journal={Genetic and Evolutionary Computation 2004 Conference (GECCO-2004), Bird-of-a-feather Workshop On Application of Hybrid Evolutionary Algorithms to Complex Optimization Problems, Springer Verlag Germany, 2004 (forthcoming)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405025}, primaryClass={cs.AI} }
auyeung2004the
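The 2-approximation baseline referenced in the abstract above is classical and compact: repeatedly pick an uncovered edge and put both of its endpoints into the cover. A minimal Python sketch under the assumption of an edge-list input; the function name and representation are illustrative, not taken from the paper.

def vertex_cover_2approx(edges):
    """Greedy maximal-matching heuristic: the cover is at most twice optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 1-2-3-4 has an optimal cover of size 2; the heuristic
# may return all four vertices, within the factor-2 guarantee.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))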
arxiv-671842
cs/0405026
A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems
<|reference_start|>A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems: Decision-making is a process of choosing among alternative courses of action for solving complicated problems where multi-criteria objectives are involved. The past few years have witnessed a growing recognition of Soft Computing technologies that underlie the conception, design and utilization of intelligent systems. In several works, engineers and scientists have applied intelligent techniques and heuristics to obtain optimal decisions from imprecise information. In this paper, we present a concurrent fuzzy-neural network approach combining unsupervised and supervised learning techniques to develop the Tactical Air Combat Decision Support System (TACDSS). Experimental results clearly demonstrate the efficiency of the proposed technique.<|reference_end|>
arxiv
@article{tran2004a, title={A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems}, author={Cong Tran, Ajith Abraham and Lakhmi Jain}, journal={The IEEE International Conference on Fuzzy Systems, FUZZ-IEEE'03, IEEE Press, ISBN 0780378113, pp. 1092-1097, 2003}, year={2004}, doi={10.1109/FUZZ.2003.1206584}, archivePrefix={arXiv}, eprint={cs/0405026}, primaryClass={cs.AI} }
tran2004a
arxiv-671843
cs/0405027
Evolution of a Subsumption Architecture Neurocontroller
<|reference_start|>Evolution of a Subsumption Architecture Neurocontroller: An approach to robotics called layered evolution and merging features from the subsumption architecture into evolutionary robotics is presented, and its advantages are discussed. This approach is used to construct a layered controller for a simulated robot that learns which light source to approach in an environment with obstacles. The evolvability and performance of layered evolution on this task is compared to (standard) monolithic evolution, incremental and modularised evolution. To corroborate the hypothesis that a layered controller performs at least as well as an integrated one, the evolved layers are merged back into a single network. On the grounds of the test results, it is argued that layered evolution provides a superior approach for many tasks, and it is suggested that this approach may be the key to scaling up evolutionary robotics.<|reference_end|>
arxiv
@article{togelius2004evolution, title={Evolution of a Subsumption Architecture Neurocontroller}, author={Julian Togelius}, journal={Journal of Intelligent and Fuzzy Systems, Vol. 15, No. 1 (2004)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405027}, primaryClass={cs.AI cs.NE} }
togelius2004evolution
arxiv-671844
cs/0405028
Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems
<|reference_start|>Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems: In a universe with a single currency, there would be no foreign exchange market, no foreign exchange rates, and no foreign exchange. Over the past twenty-five years, the way the market has performed those tasks has changed enormously. The need for intelligent monitoring systems has become a necessity to keep track of the complex forex market. The vast currency market is a foreign concept to the average individual. However, once it is broken down into simple terms, the average individual can begin to understand the foreign exchange market and use it as a financial instrument for future investing. In this paper, we attempt to compare the performance of hybrid soft computing and hard computing techniques to predict the average monthly forex rates one month ahead. The soft computing models considered are a neural network trained by the scaled conjugate gradient algorithm and a neuro-fuzzy model implementing a Takagi-Sugeno fuzzy inference system. We also considered Multivariate Adaptive Regression Splines (MARS), Classification and Regression Trees (CART) and a hybrid CART-MARS technique. We considered the exchange rates of the Australian dollar with respect to the US dollar, Singapore dollar, New Zealand dollar, Japanese yen and United Kingdom pound. The models were trained using 70% of the data and the remainder was used for testing and validation purposes. It is observed that the proposed hybrid models could predict the forex rates more accurately than all the techniques when applied individually. Empirical results also reveal that the hybrid hard computing approach improved on some of our previous work using a neuro-fuzzy approach.<|reference_end|>
arxiv
@article{abraham2004analysis, title={Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems}, author={Ajith Abraham}, journal={IEEE International Conference on Fuzzy Systems (IEEE FUZZ'02), 2002 IEEE World Congress on Computational Intelligence, Hawaii, ISBN 0780372808, IEEE Press pp. 1616 -1622, 2002}, year={2004}, doi={10.1109/FUZZ.2002.1006749}, archivePrefix={arXiv}, eprint={cs/0405028}, primaryClass={cs.AI} }
abraham2004analysis
arxiv-671845
cs/0405029
A New Computational Framework For 2D Shape-Enclosing Contours
<|reference_start|>A New Computational Framework For 2D Shape-Enclosing Contours: In this paper, a new framework for one-dimensional contour extraction from discrete two-dimensional data sets is presented. Contour extraction is important in many scientific fields such as digital image processing, computer vision, pattern recognition, etc. This novel framework includes (but is not limited to) algorithms for dilated contour extraction, contour displacement, shape skeleton extraction, contour continuation, shape feature based contour refinement and contour simplification. Many of the new techniques depend strongly on the application of a Delaunay tessellation. In order to demonstrate the versatility of this novel toolbox approach, the contour extraction techniques presented here are applied to scientific problems in material science, biology and heavy ion physics.<|reference_end|>
arxiv
@article{schlei2004a, title={A New Computational Framework For 2D Shape-Enclosing Contours}, author={B. R. Schlei}, journal={Image and Vision Computing Volume 27, Issue 6, 4 May 2009, Pages 637-647}, year={2004}, doi={10.1016/j.imavis.2008.06.014}, number={LA-UR-04-3115}, archivePrefix={arXiv}, eprint={cs/0405029}, primaryClass={cs.CV cs.CG} }
schlei2004a
arxiv-671846
cs/0405030
Business Intelligence from Web Usage Mining
<|reference_start|>Business Intelligence from Web Usage Mining: The rapid growth of e-commerce has made both the business community and customers face a new situation. Due to intense competition on the one hand and the customer's option to choose from several alternatives on the other, the business community has realized the necessity of intelligent marketing strategies and relationship management. Web usage mining attempts to discover useful knowledge from the secondary data obtained from the interactions of the users with the Web. Web usage mining has become very critical for effective Web site management, creating adaptive Web sites, business and support services, personalization, network traffic flow analysis and so on. In this paper, we present the important concepts of Web usage mining and its various practical applications. We further present a novel approach 'intelligent-miner' (i-Miner) to optimize the concurrent architecture of a fuzzy clustering algorithm (to discover web data clusters) and a fuzzy inference system to analyze the Web site visitor trends. A hybrid evolutionary fuzzy clustering algorithm is proposed in this paper to optimally segregate similar user interests. The clustered data is then used to analyze the trends using a Takagi-Sugeno fuzzy inference system learned using a combination of evolutionary algorithm and neural network learning. The proposed approach is compared with self-organizing maps (to discover patterns) and several function approximation techniques like neural networks, linear genetic programming and Takagi-Sugeno fuzzy inference system (to analyze the clusters). The results are graphically illustrated and the practical significance is discussed in detail. Empirical results clearly show that the proposed Web usage-mining framework is efficient.<|reference_end|>
arxiv
@article{abraham2004business, title={Business Intelligence from Web Usage Mining}, author={Ajith Abraham}, journal={Journal of Information & Knowledge Management (JIKM), World Scientific Publishing Co., Singapore, Vol. 2, No. 4, pp. 375-390, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405030}, primaryClass={cs.AI} }
abraham2004business
arxiv-671847
cs/0405031
Adaptation of Mamdani Fuzzy Inference System Using Neuro - Genetic Approach for Tactical Air Combat Decision Support System
<|reference_start|>Adaptation of Mamdani Fuzzy Inference System Using Neuro - Genetic Approach for Tactical Air Combat Decision Support System: Normally a decision support system is built to solve problems where multi-criteria decisions are involved. The knowledge base is the vital part of the decision support system, containing the information or data used in the decision-making process. This is a field where engineers and scientists have applied several intelligent techniques and heuristics to obtain optimal decisions from imprecise information. In this paper, we present a hybrid neuro-genetic learning approach for the adaptation of a Mamdani fuzzy inference system for the Tactical Air Combat Decision Support System (TACDSS). Some simulation results demonstrating the differences between the learning techniques are also provided.<|reference_end|>
arxiv
@article{tran2004adaptation, title={Adaptation of Mamdani Fuzzy Inference System Using Neuro - Genetic Approach for Tactical Air Combat Decision Support System}, author={Cong Tran, Lakhmi Jain, Ajith Abraham}, journal={15th Australian Joint Conference on Artificial Intelligence (AI'02) Australia, LNAI 2557, Springer Verlag, Germany, pp. 672-679, 2002}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405031}, primaryClass={cs.AI} }
tran2004adaptation
arxiv-671848
cs/0405032
EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation
<|reference_start|>EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation: Several adaptation techniques have been investigated to optimize fuzzy inference systems. Neural network learning algorithms have been used to determine the parameters of fuzzy inference systems. Such models are often called integrated neuro-fuzzy models. In an integrated neuro-fuzzy model there is no guarantee that the neural network learning algorithm converges and that the tuning of the fuzzy inference system will be successful. The success of evolutionary search procedures for the optimization of fuzzy inference systems is well proven and established in many application areas. In this paper, we will explore how the optimization of fuzzy inference systems could be further improved using a meta-heuristic approach combining neural network learning and evolutionary computation. The proposed technique could be considered as a methodology to integrate neural networks, fuzzy inference systems and evolutionary search procedures. We present the theoretical frameworks and some experimental results to demonstrate the efficiency of the proposed technique.<|reference_end|>
arxiv
@article{abraham2004evonf:, title={EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation}, author={Ajith Abraham}, journal={The 17th IEEE International Symposium on Intelligent Control, ISIC'02, IEEE Press, ISBN 0780376218, pp 327-332, 2002}, year={2004}, doi={10.1109/ISIC.2002.1157784}, archivePrefix={arXiv}, eprint={cs/0405032}, primaryClass={cs.AI} }
abraham2004evonf:
arxiv-671849
cs/0405033
Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms
<|reference_start|>Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms: Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, very often they miss the best local solutions in the complex solution space. In this paper, we propose a hybrid meta-heuristic learning approach combining evolutionary learning and local search methods (using first- and second-order error information) to improve learning and achieve faster convergence than a direct evolutionary approach. The proposed technique is tested on three different chaotic time series and the test results are compared with some popular neuro-fuzzy systems and a recently developed cutting angle method of global optimization. Empirical results reveal that the proposed technique is efficient in spite of the computational complexity.<|reference_end|>
arxiv
@article{abraham2004optimization, title={Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms}, author={Ajith Abraham}, journal={IEEE International Joint Conference on Neural Networks (IJCNN'02), 2002 IEEE World Congress on Computational Intelligence, Hawaii, ISBN 0780372786, IEEE Press, Volume 3, pp. 2797-2802, 2002}, year={2004}, doi={10.1109/IJCNN.2002.1007591}, archivePrefix={arXiv}, eprint={cs/0405033}, primaryClass={cs.AI} }
abraham2004optimization
arxiv-671850
cs/0405034
Computational Geometry Column 45
<|reference_start|>Computational Geometry Column 45: The algorithm of Edelsbrunner for surface reconstruction by ``wrapping'' a set of points in R^3 is described.<|reference_end|>
arxiv
@article{o'rourke2004computational, title={Computational Geometry Column 45}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0405034}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405034}, primaryClass={cs.CG cs.DM} }
o'rourke2004computational
arxiv-671851
cs/0405035
Security-Performance Tradeoffs of Inheritance based Key Predistribution for Wireless Sensor Networks
<|reference_start|>Security-Performance Tradeoffs of Inheritance based Key Predistribution for Wireless Sensor Networks: Key predistribution is a well-known technique for ensuring secure communication via encryption among sensors deployed in an ad-hoc manner to form a sensor network. In this paper, we propose a novel 2-Phase technique for key predistribution based on a combination of inherited and random key assignments from the given key pool to individual sensor nodes. We also develop an analytical framework for measuring security-performance tradeoffs of different key distribution schemes by providing metrics for measuring sensornet connectivity and resiliency to enemy attacks. In particular, we show analytically that the 2-Phase scheme provides better average connectivity and superior $q$-composite connectivity than the random scheme. We then prove that the invulnerability of a communication link under arbitrary number of node captures by an adversary is higher under the 2-Phase scheme. The probability of a communicating node pair having an exclusive key also scales better with network size under the 2-Phase scheme. We also show analytically that the vulnerability of an arbitrary communication link in the sensornet to single node capture is lower under 2-Phase assuming both network-wide as well as localized capture. Simulation results also show that the number of exclusive keys shared between any two nodes is higher while the number of $q$-composite links compromised when a given number of nodes are captured by the enemy is smaller under the 2-Phase scheme as compared to the random one.<|reference_end|>
arxiv
@article{kannan2004security-performance, title={Security-Performance Tradeoffs of Inheritance based Key Predistribution for Wireless Sensor Networks}, author={Rajgopal Kannan, Lydia Ray, Arjan Durresi, S. Iyengar}, journal={arXiv preprint arXiv:cs/0405035}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405035}, primaryClass={cs.NI cs.CR} }
kannan2004security-performance
arxiv-671852
cs/0405036
Single-Strip Triangulation of Manifolds with Arbitrary Topology
<|reference_start|>Single-Strip Triangulation of Manifolds with Arbitrary Topology: Triangle strips have been widely used for efficient rendering. It is NP-complete to test whether a given triangulated model can be represented as a single triangle strip, so many heuristics have been proposed to partition models into few long strips. In this paper, we present a new algorithm for creating a single triangle loop or strip from a triangulated model. Our method applies a dual graph matching algorithm to partition the mesh into cycles, and then merges pairs of cycles by splitting adjacent triangles when necessary. New vertices are introduced at midpoints of edges and the new triangles thus formed are coplanar with their parent triangles, hence the visual fidelity of the geometry is not changed. We prove that the increase in the number of triangles due to this splitting is 50% in the worst case, however for all models we tested the increase was less than 2%. We also prove tight bounds on the number of triangles needed for a single-strip representation of a model with holes on its boundary. Our strips can be used not only for efficient rendering, but also for other applications including the generation of space filling curves on a manifold of any arbitrary topology.<|reference_end|>
arxiv
@article{gopi2004single-strip, title={Single-Strip Triangulation of Manifolds with Arbitrary Topology}, author={M. Gopi and David Eppstein}, journal={arXiv preprint arXiv:cs/0405036}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405036}, primaryClass={cs.CG cs.GR} }
gopi2004single-strip
arxiv-671853
cs/0405037
A Probabilistic Model of Machine Translation
<|reference_start|>A Probabilistic Model of Machine Translation: A probabilistic model for computer-based generation of a machine translation system on the basis of English-Russian parallel text corpora is suggested. The model is trained using parallel text corpora with pre-aligned source and target sentences. The training of the model results in a bilingual dictionary of words and "word blocks" with relevant translation probability.<|reference_end|>
arxiv
@article{miram2004a, title={A Probabilistic Model of Machine Translation}, author={G.E. Miram and V.K. Petrov}, journal={arXiv preprint arXiv:cs/0405037}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405037}, primaryClass={cs.CL} }
miram2004a
arxiv-671854
cs/0405038
Deductive Algorithmic Knowledge
<|reference_start|>Deductive Algorithmic Knowledge: The framework of algorithmic knowledge assumes that agents use algorithms to compute the facts they explicitly know. In many cases of interest, a deductive system, rather than a particular algorithm, captures the formal reasoning used by the agents to compute what they explicitly know. We introduce a logic for reasoning about both implicit and explicit knowledge with the latter defined with respect to a deductive system formalizing a logical theory for agents. The highly structured nature of deductive systems leads to very natural axiomatizations of the resulting logic when interpreted over any fixed deductive system. The decision problem for the logic, in the presence of a single agent, is NP-complete in general, no harder than propositional logic. It remains NP-complete when we fix a deductive system that is decidable in nondeterministic polynomial time. These results extend in a straightforward way to multiple agents.<|reference_end|>
arxiv
@article{pucella2004deductive, title={Deductive Algorithmic Knowledge}, author={Riccardo Pucella}, journal={Journal of Logic and Computation 16 (2), pp. 287-309, 2006}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405038}, primaryClass={cs.AI cs.LO} }
pucella2004deductive
arxiv-671855
cs/0405039
Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization
<|reference_start|>Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization: We consider the problem of modeling the content structure of texts within a specific domain, in terms of the topics the texts address and the order in which these topics appear. We first present an effective knowledge-lean method for learning content models from un-annotated documents, utilizing a novel adaptation of algorithms for Hidden Markov Models. We then apply our method to two complementary tasks: information ordering and extractive summarization. Our experiments show that incorporating content models in these applications yields substantial improvement over previously-proposed methods.<|reference_end|>
arxiv
@article{barzilay2004catching, title={Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization}, author={Regina Barzilay and Lillian Lee}, journal={HLT-NAACL 2004: Proceedings of the Main Conference, pp. 113--120}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405039}, primaryClass={cs.CL} }
barzilay2004catching
arxiv-671856
cs/0405040
Supervisory Control of Fuzzy Discrete Event Systems
<|reference_start|>Supervisory Control of Fuzzy Discrete Event Systems: In order to cope with situations in which a plant's dynamics are not precisely known, we consider the problem of supervisory control for a class of discrete event systems modelled by fuzzy automata. The behavior of such discrete event systems is described by fuzzy languages; the supervisors are event feedback and can disable only controllable events with any degree. The concept of discrete event system controllability is thus extended by incorporating fuzziness. In this new sense, we present a necessary and sufficient condition for a fuzzy language to be controllable. We also study the supremal controllable fuzzy sublanguage and the infimal controllable fuzzy superlanguage when a given pre-specified desired fuzzy language is uncontrollable. Our framework generalizes that of Ramadge-Wonham and reduces to Ramadge-Wonham framework when membership grades in all fuzzy languages must be either 0 or 1. The theoretical development is accompanied by illustrative numerical examples.<|reference_end|>
arxiv
@article{cao2004supervisory, title={Supervisory Control of Fuzzy Discrete Event Systems}, author={Yongzhi Cao and Mingsheng Ying}, journal={A short version has been published in the IEEE Transactions on Systems, Man, and Cybernetics--Part B: Cybernetics, 35(2), pp. 366-371, April 2005.}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405040}, primaryClass={cs.DM cs.DC} }
cao2004supervisory
arxiv-671857
cs/0405041
The modulus in the CAD system drawings as a base of developing of the problem-oriented extensions
<|reference_start|>The modulus in the CAD system drawings as a base of developing of the problem-oriented extensions: The concept of the "modulus" in CAD system drawings is characterized as a basis for developing problem-oriented extensions. A modulus consists of the visible geometric elements of the drawing together with an invisible parametric representation of the modelled object. The technological advantages of moduli in developing a complex CAD system are described.<|reference_end|>
arxiv
@article{migunov2004the, title={The modulus in the CAD system drawings as a base of developing of the problem-oriented extensions}, author={Vladimir V. Migunov}, journal={arXiv preprint arXiv:cs/0405041}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405041}, primaryClass={cs.CE cs.DS} }
migunov2004the
arxiv-671858
cs/0405042
A Distributed TDMA Slot Assignment Algorithm for Wireless Sensor Networks
<|reference_start|>A Distributed TDMA Slot Assignment Algorithm for Wireless Sensor Networks: Wireless sensor networks benefit from communication protocols that reduce power requirements by avoiding frame collision. Time Division Media Access methods schedule transmission in slots to avoid collision; however, these methods often lack scalability when implemented in ad hoc networks subject to node failures and dynamic topology. This paper reports a distributed algorithm for TDMA slot assignment that is self-stabilizing to transient faults and dynamic topology change. The expected local convergence time is O(1) for any size network satisfying a constant bound on the size of a node neighborhood.<|reference_end|>
arxiv
@article{herman2004a, title={A Distributed TDMA Slot Assignment Algorithm for Wireless Sensor Networks}, author={T. Herman, S. Tixeuil}, journal={arXiv preprint arXiv:cs/0405042}, year={2004}, number={TR04-02 University of Iowa Department of Computer Science}, archivePrefix={arXiv}, eprint={cs/0405042}, primaryClass={cs.DC cs.NI} }
herman2004a
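The constraint being solved can be illustrated with a simple greedy rule: give each node the smallest slot not used within its two-hop neighborhood, so that no two interfering nodes transmit together. The sketch below is a centralized, non-fault-tolerant illustration under that assumption; it is not the paper's distributed, self-stabilizing algorithm, and the adjacency-dict input is ours.

def greedy_tdma(adj):
    """adj maps each node to the set of its neighbors; returns node -> slot."""
    slot = {}
    for node in adj:
        # Collect the two-hop neighborhood (neighbors and their neighbors).
        two_hop = set(adj[node])
        for nb in adj[node]:
            two_hop |= adj[nb]
        two_hop.discard(node)
        taken = {slot[n] for n in two_hop if n in slot}
        s = 0
        while s in taken:  # smallest free slot
            s += 1
        slot[node] = s
    return slot

# A 4-node path: any two nodes within two hops receive distinct slots.
print(greedy_tdma({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))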
arxiv-671859
cs/0405043
Prediction with Expert Advice by Following the Perturbed Leader for General Weights
<|reference_start|>Prediction with Expert Advice by Following the Perturbed Leader for General Weights: When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, for arbitrary weights there have been no results proven so far. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are (to our knowledge) new.<|reference_end|>
arxiv
@article{hutter2004prediction, title={Prediction with Expert Advice by Following the Perturbed Leader for General Weights}, author={Marcus Hutter and Jan Poland}, journal={Proc. 15th International Conf. on Algorithmic Learning Theory (ALT-2004), pages 279-293}, year={2004}, number={IDSIA-08-04}, archivePrefix={arXiv}, eprint={cs/0405043}, primaryClass={cs.LG cs.AI} }
hutter2004prediction
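FPL itself is short enough to sketch: before each round, perturb the experts' cumulative losses with independent random noise and follow the perturbed leader. The Python sketch below assumes finitely many experts with uniform weights, exponential perturbations, and a fixed learning rate eta; the adaptive tuning that the paper analyzes is deliberately omitted.

import random

def fpl_total_loss(loss_rounds, n_experts, eta=1.0):
    """loss_rounds: iterable of per-round loss vectors (one entry per expert).
    Returns the total loss incurred by Follow the Perturbed Leader."""
    cum = [0.0] * n_experts
    total = 0.0
    for losses in loss_rounds:
        # Perturb cumulative losses, then follow the perturbed leader.
        perturbed = [cum[i] - random.expovariate(eta) for i in range(n_experts)]
        leader = min(range(n_experts), key=lambda i: perturbed[i])
        total += losses[leader]
        for i in range(n_experts):
            cum[i] += losses[i]
    return total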
arxiv-671860
cs/0405044
Corpus structure, language models, and ad hoc information retrieval
<|reference_start|>Corpus structure, language models, and ad hoc information retrieval: Most previous work on the recently developed language-modeling approach to information retrieval focuses on document-specific characteristics, and therefore does not take into account the structure of the surrounding corpus. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in precision and recall, and our new interpolation algorithm posts statistically significant improvements for both metrics over all three corpora tested.<|reference_end|>
arxiv
@article{kurland2004corpus, title={Corpus structure, language models, and ad hoc information retrieval}, author={Oren Kurland and Lillian Lee}, journal={arXiv preprint arXiv:cs/0405044}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405044}, primaryClass={cs.IR cs.CL} }
kurland2004corpus
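The interpolation idea above can be made concrete with unigram models: score a query under a mixture of the document's language model and the language model of the document's cluster. The sketch below uses maximum-likelihood estimates and an assumed mixture weight lam; the paper's actual algorithms differ in their smoothing and estimation details.

import math

def unigram_lm(tokens):
    """Maximum-likelihood unigram model over a token list."""
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    n = len(tokens)
    return lambda w: counts.get(w, 0) / n

def interpolated_score(query, doc_tokens, cluster_tokens, lam=0.7):
    """Query log-likelihood under lam*p_doc + (1-lam)*p_cluster."""
    p_doc, p_cluster = unigram_lm(doc_tokens), unigram_lm(cluster_tokens)
    score = 0.0
    for w in query:
        p = lam * p_doc(w) + (1 - lam) * p_cluster(w)
        score += math.log(p) if p > 0 else float("-inf")
    return score

The cluster model acts as a smoothing prior: a query word absent from the document but common in its cluster still contributes probability mass.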
arxiv-671861
cs/0405045
Culture and International Usability Testing: The Effects of Culture in Structured Interviews
<|reference_start|>Culture and International Usability Testing: The Effects of Culture in Structured Interviews: The global audience for software products includes members of different countries, religions, and cultures: people who speak different languages, have different life styles, and have different perceptions and expectations of any given product. A major impediment in interface development is that there is inadequate empirical evidence for the effects of culture in the usability engineering methods used for developing user interfaces. This paper presents a controlled study investigating the effects of culture on the effectiveness of structured interviews in usability testing. The experiment consisted of usability testing of a website with two independent groups of Indian participants by two interviewers; one belonging to the Indian culture and the other to the Anglo-American culture. Participants found more usability problems and made more suggestions to an interviewer who was a member of the same (Indian) culture than to the foreign (Anglo-American) interviewer. The results of the study empirically establish that culture significantly affects the efficacy of structured interviews during international user testing. The implications of this work for usability engineering are discussed.<|reference_end|>
arxiv
@article{vatrapu2004culture, title={Culture and International Usability Testing: The Effects of Culture in Structured Interviews}, author={Ravikiran Vatrapu, Manuel A. Perez-Quinones}, journal={arXiv preprint arXiv:cs/0405045}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405045}, primaryClass={cs.HC cs.SE} }
vatrapu2004culture
arxiv-671862
cs/0405046
Soft Computing Models for Network Intrusion Detection Systems
<|reference_start|>Soft Computing Models for Network Intrusion Detection Systems: Security of computers and the networks that connect them is increasingly becoming of great significance. Computer security is defined as the protection of computing systems against threats to confidentiality, integrity, and availability. There are two types of intruders: external intruders, who are unauthorized users of the machines they attack, and internal intruders, who have permission to access the system with some restrictions. This chapter presents a soft computing approach to detect intrusions in a network. Among the several soft computing paradigms, we investigated fuzzy rule-based classifiers, decision trees, support vector machines, linear genetic programming and an ensemble method to model fast and efficient intrusion detection systems. Empirical results clearly show that soft computing approach could play a major role for intrusion detection.<|reference_end|>
arxiv
@article{abraham2004soft, title={Soft Computing Models for Network Intrusion Detection Systems}, author={Ajith Abraham and Ravi Jain}, journal={Soft Computing in Knowledge Discovery: Methods and Applications, Saman Halgamuge and Lipo Wang (Eds.), Studies in Fuzziness and Soft Computing, Springer Verlag Germany, Chapter 16, 20 pages, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405046}, primaryClass={cs.CR} }
abraham2004soft
arxiv-671863
cs/0405047
Modular technology of developing of the problem-oriented extensions of a CAD system of reconstruction of the plant
<|reference_start|>Modular technology of developing of the problem-oriented extensions of a CAD system of reconstruction of the plant: The modular technology for creating problem-oriented extensions of a CAD system is described, as realised in the TechnoCAD GlassX system for designing plant reconstruction. The modularity of the technology lies in storing all design parameters in a single drawing element, the modulus, with the geometric part of the modulus generated automatically from these parameters. The common principles of organizing extension development are described: separating the part of the design to be automated by the extension, structuring parameters as lists of objects with their properties and links to other objects, separating common and special operations, the stages of development, and the boundaries of the technology's applicability.<|reference_end|>
arxiv
@article{migunov2004modular, title={Modular technology of developing of the problem-oriented extensions of a CAD system of reconstruction of the plant}, author={Vladimir V. Migunov}, journal={arXiv preprint arXiv:cs/0405047}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405047}, primaryClass={cs.CE cs.DS} }
migunov2004modular
arxiv-671864
cs/0405048
Interactive visualization of higher dimensional data in a multiview environment
<|reference_start|>Interactive visualization of higher dimensional data in a multiview environment: We develop multiple view visualization of higher dimensional data. Our work was chiefly motivated by the need to extract insight from four dimensional Quantum Chromodynamic (QCD) data. We develop visualization where multiple views, generally views of 3D projections or slices of a higher dimensional data, are tightly coupled not only by their specific order but also by a view synchronizing interaction style, and an internally defined interaction language. The tight coupling of the different views allows a fast and well-coordinated exploration of the data. In particular, the visualization allowed us to easily make consistency checks of the 4D QCD data and to infer the correctness of particle properties calculations. The software developed was also successfully applied in material studies, in particular studies of meteorite properties. Our implementation uses the VTK API. To handle a large number of views (slices/projections) and to still maintain good resolution, we use IBM T221 display (3840 X 2400 pixels).<|reference_end|>
arxiv
@article{tomov2004interactive, title={Interactive visualization of higher dimensional data in a multiview environment}, author={Stanimire Tomov and Michael McGuigan}, journal={arXiv preprint arXiv:cs/0405048}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405048}, primaryClass={cs.GR} }
tomov2004interactive
arxiv-671865
cs/0405049
Export Behaviour Modeling Using EvoNF Approach
<|reference_start|>Export Behaviour Modeling Using EvoNF Approach: The academic literature suggests that the extent of exporting by multinational corporation subsidiaries (MCS) depends on the product they manufacture, their resources, tax protection, customers and markets, involvement strategy, financial independence and suppliers' relationship with a multinational corporation (MNC). The aim of this paper is to model the complex export pattern behaviour using a Takagi-Sugeno fuzzy inference system in order to determine the actual volume of MCS export output (sales exported). The proposed fuzzy inference system is optimised by using neural network learning and evolutionary computation. Empirical results clearly show that the proposed approach could model the export behaviour reasonably well compared to a direct neural network approach.<|reference_end|>
arxiv
@article{edwards2004export, title={Export Behaviour Modeling Using EvoNF Approach}, author={Ron Edwards, Ajith Abraham and Sonja Petrovic-Lazarevic}, journal={The International Conference on Computational Science 2003 (ICCS 2003), Springer Verlag, Lecture Notes in Computer Science Volume 2660, Sloot P.M.A. et al (Eds.), pp. 169-178, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405049}, primaryClass={cs.AI} }
edwards2004export
arxiv-671866
cs/0405050
Traffic Accident Analysis Using Decision Trees and Neural Networks
<|reference_start|>Traffic Accident Analysis Using Decision Trees and Neural Networks: The costs of fatalities and injuries due to traffic accident have a great impact on society. This paper presents our research to model the severity of injury resulting from traffic accidents using artificial neural networks and decision trees. We have applied them to an actual data set obtained from the National Automotive Sampling System (NASS) General Estimates System (GES). Experiment results reveal that in all the cases the decision tree outperforms the neural network. Our research analysis also shows that the three most important factors in fatal injury are: driver's seat belt usage, light condition of the roadway, and driver's alcohol usage.<|reference_end|>
arxiv
@article{chong2004traffic, title={Traffic Accident Analysis Using Decision Trees and Neural Networks}, author={Miao M. Chong, Ajith Abraham, Marcin Paprzycki}, journal={IADIS International Conference on Applied Computing, Portugal, IADIS Press, Pedro Isaias et al. (Eds.), ISBN: 9729894736, Volume 2, pp. 39-42, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405050}, primaryClass={cs.AI} }
chong2004traffic
arxiv-671867
cs/0405051
Short Term Load Forecasting Models in Czech Republic Using Soft Computing Paradigms
<|reference_start|>Short Term Load Forecasting Models in Czech Republic Using Soft Computing Paradigms: This paper presents a comparative study of six soft computing models, namely multilayer perceptron networks, Elman recurrent neural network, radial basis function network, Hopfield model, fuzzy inference system and hybrid fuzzy neural network, for the hourly electricity demand forecast of the Czech Republic. The soft computing models were trained and tested using the actual hourly load data for seven years. A comparison of the proposed techniques is presented for predicting two-day-ahead demands for electricity. Simulation results indicate that hybrid fuzzy neural network and radial basis function networks are the best candidates for the analysis and forecasting of electricity demand.<|reference_end|>
arxiv
@article{khan2004short, title={Short Term Load Forecasting Models in Czech Republic Using Soft Computing Paradigms}, author={Muhammad Riaz Khan and Ajith Abraham}, journal={International Journal of Knowledge-Based Intelligent Engineering Systems, IOS Press Netherlands, Volume 7, Number 4, pp. 172-179, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405051}, primaryClass={cs.AI} }
khan2004short
arxiv-671868
cs/0405052
Decision Support Systems Using Intelligent Paradigms
<|reference_start|>Decision Support Systems Using Intelligent Paradigms: Decision-making is a process of choosing among alternative courses of action for solving complicated problems where multi-criteria objectives are involved. The past few years have witnessed a growing recognition of Soft Computing (SC) technologies that underlie the conception, design and utilization of intelligent systems. In this paper, we present different SC paradigms involving an artificial neural network trained using the scaled conjugate gradient algorithm, two different fuzzy inference methods optimised using neural network learning/evolutionary algorithms and regression trees for developing intelligent decision support systems. We demonstrate the efficiency of the different algorithms by developing a decision support system for a Tactical Air Combat Environment (TACE). Some empirical comparisons between the different algorithms are also provided.<|reference_end|>
arxiv
@article{tran2004decision, title={Decision Support Systems Using Intelligent Paradigms}, author={Cong Tran, Ajith Abraham and Lakhmi Jain}, journal={International Journal of American Romanian Academy of Arts and Sciences, 2004 (forthcoming)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405052}, primaryClass={cs.AI} }
tran2004decision
arxiv-671869
cs/0405053
Synchronous Relaxation for Parallel Ising Spin Simulations
<|reference_start|>Synchronous Relaxation for Parallel Ising Spin Simulations: A new parallel algorithm for simulating Ising spin systems is presented. The sequential prototype is the n-fold way algorithm [BKL75], which is efficient but hard to parallelize using conservative methods. Our parallel algorithm is optimistic. Unlike other optimistic algorithms, e.g., Time Warp, our algorithm is synchronous. It also belongs to the class of simulations known as "relaxation" [CS8...]; hence it is named "synchronous relaxation." We derive performance guarantees for this algorithm. If N is the number of PEs, then under weak assumptions we show that the number of correct events processed per unit of time is, on average, at least of order N/log(N). All communication delays, processing time, and busy waits are taken into account.<|reference_end|>
arxiv
@article{lubachevsky2004synchronous, title={Synchronous Relaxation for Parallel Ising Spin Simulations}, author={Boris Lubachevsky and Alan Weiss}, journal={15th Workshop on Parallel and Distributed Simulation, Lake Arrowhead, California, May 2001, pp.185-192}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405053}, primaryClass={cs.DC cond-mat.mtrl-sci cs.DS physics.comp-ph} }
lubachevsky2004synchronous
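The sequential n-fold way prototype is rejection-free: instead of proposing and often rejecting spin flips, it draws the next event directly with probability proportional to its rate and advances time by an exponential increment. The sketch below shows only this core selection step, with illustrative names; it is the sequential kernel whose event-driven nature makes conservative parallelization hard, not the parallel algorithm itself.

import random

def next_event(rates):
    """rates: list of positive event rates. Returns (event_index, time_step)."""
    total = sum(rates)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:  # event i chosen with probability rate/total
            return i, random.expovariate(total)
    return len(rates) - 1, random.expovariate(total)  # guard against rounding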
arxiv-671870
cs/0405054
The model of the tables in design documentation for operating with the electronic catalogs and for specifications making in a CAD system
<|reference_start|>The model of the tables in design documentation for operating with the electronic catalogs and for specifications making in a CAD system: A hierarchical block model of the tables in design documentation, forming part of a CAD system, is described; it is intended for automatically compiling specifications of drawing elements using electronic catalogs. The model was created for a CAD system for the reconstruction of industrial plants, where the result of designing is a set of drawings that include specifications of different types. Adequate modelling of the specification tables is ensured by storing in the drawing both the visible geometric elements and an invisible parametric representation sufficient to generate those elements.<|reference_end|>
arxiv
@article{migunov2004the, title={The model of the tables in design documentation for operating with the electronic catalogs and for specifications making in a CAD system}, author={Vladimir V. Migunov}, journal={arXiv preprint arXiv:cs/0405054}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405054}, primaryClass={cs.CE cs.DS} }
migunov2004the
arxiv-671871
cs/0405055
Modular technology of developing of the extensions of a CAD system Axonometric piping diagrams Parametric representation
<|reference_start|>Modular technology of developing of the extensions of a CAD system Axonometric piping diagrams Parametric representation: The application of the modular technology for developing problem-oriented extensions of a CAD system to the automation of creating axonometric piping diagrams is described, using the program system TechnoCAD GlassX as an example. A similarity in the composition of the diagrams is identified for special process pipelines and for water supply and drainage, heating, heat supply, ventilation and air-conditioning systems. The structured parametric representation of the diagrams, including object properties, object links, common settings, default settings and the special compatibility links, is reviewed.<|reference_end|>
arxiv
@article{migunov2004modular, title={Modular technology of developing of the extensions of a CAD system. Axonometric piping diagrams. Parametric representation}, author={Vladimir V. Migunov, Rustem R. Kafiatullov, Ilsur T. Safin}, journal={arXiv preprint arXiv:cs/0405055}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405055}, primaryClass={cs.CE cs.DS} }
migunov2004modular
arxiv-671872
cs/0405056
Modular technology of developing of the extensions of a CAD system The axonometric piping diagrams Common and special operations
<|reference_start|>Modular technology of developing of the extensions of a CAD system The axonometric piping diagrams Common and special operations: The application of the modular technology for developing problem-oriented extensions of a CAD system to the automation of creating axonometric piping diagrams is described, using the program system TechnoCAD GlassX as an example. The implementation of the common operations, and the composition and implementation of the special operations for designing diagrams of special process pipelines and of water supply and drainage, heating, heat supply, ventilation and air-conditioning systems, are reviewed.<|reference_end|>
arxiv
@article{safin2004modular, title={Modular technology of developing of the extensions of a CAD system. The axonometric piping diagrams. Common and special operations}, author={Ilsur T. Safin, Vladimir V. Migunov, Rustem R. Kafiatullov}, journal={arXiv preprint arXiv:cs/0405056}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405056}, primaryClass={cs.CE cs.DS} }
safin2004modular
arxiv-671873
cs/0405057
Mathematical and programming toolkit of the computer aided design of the axonometric piping diagrams
<|reference_start|>Mathematical and programming toolkit of the computer aided design of the axonometric piping diagrams: The problem of automating the design of axonometric piping diagrams involves, at a minimum, manipulating flat schemas of three-dimensional wireframe objects (of dimension 2.5). Specialized models and methodical and mathematical approaches are required because of the large volume of calculations. The coordinate systems, data types, common principles of implementing operations on the data, and the set of basic operations are described as realised in TechnoCAD GlassX, a complex CAD system for plant reconstruction.<|reference_end|>
arxiv
@article{migunov2004mathematical, title={Mathematical and programming toolkit of the computer aided design of the axonometric piping diagrams}, author={Vladimir V. Migunov}, journal={arXiv preprint arXiv:cs/0405057}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405057}, primaryClass={cs.CE cs.DS} }
migunov2004mathematical
arxiv-671874
cs/0405058
Neighborhood-Based Topology Recognition in Sensor Networks
<|reference_start|>Neighborhood-Based Topology Recognition in Sensor Networks: We consider a crucial aspect of self-organization of a sensor network consisting of a large set of simple sensor nodes with no location hardware and only very limited communication range. After having been distributed randomly in a given two-dimensional region, the nodes are required to develop a sense for the environment, based on a limited amount of local communication. We describe algorithmic approaches for determining the structure of boundary nodes of the region, and the topology of the region. We also develop methods for determining the outside boundary, the distance to the closest boundary for each point, the Voronoi diagram of the different boundaries, and the geometric thickness of the network. Our methods rely on a number of natural assumptions that are present in densely distributed sets of nodes, and make use of a combination of stochastics, topology, and geometry. Evaluation requires only a limited number of simple local computations.<|reference_end|>
arxiv
@article{fekete2004neighborhood-based, title={Neighborhood-Based Topology Recognition in Sensor Networks}, author={Sandor P. Fekete, Alexander Kroeller, Dennis Pfisterer, Stefan Fischer, and Carsten Buschmann}, journal={arXiv preprint arXiv:cs/0405058}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405058}, primaryClass={cs.DS cs.DC} }
fekete2004neighborhood-based
arxiv-671875
cs/0405059
Erratum : MCColor is not optimal on Meyniel graphs
<|reference_start|>Erratum : MCColor is not optimal on Meyniel graphs: A Meyniel graph is a graph in which every odd cycle of length at least five has two chords. In the manuscript "Coloring Meyniel graphs in linear time" we claimed that our algorithm MCColor produces an optimal coloring for every Meyniel graph. But later we found a mistake in the proof and a counterexample to the optimality, which we present here. MCColor can still be used to find a stable set that intersects all maximal cliques of a Meyniel graph in linear time. Consequently it can be used to find an optimal coloring in time O(nm), and the same holds for Algorithm MCS+Color. This is explained in the manuscript "A linear algorithm to find a strong stable set in a Meyniel graph" but this is equivalent to Hertz's algorithm. The current best algorithm for coloring Meyniel graphs is the O(n^2) algorithm LexColor due to Roussel and Rusu. The question of finding a linear-time algorithm to color Meyniel graphs is still open.<|reference_end|>
arxiv
@article{lévêque2004erratum, title={Erratum : MCColor is not optimal on Meyniel graphs}, author={Benjamin Lévêque (Leibniz - IMAG), Frédéric Maffray (Leibniz - IMAG)}, journal={arXiv preprint arXiv:cs/0405059}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405059}, primaryClass={cs.DM math.CO} }
lévêque2004erratum
arxiv-671876
cs/0405060
An unexpected application of minimization theory to module decompositions
<|reference_start|>An unexpected application of minimization theory to module decompositions: The aim of this work is to show how we can decompose a module (if decomposable) into an indecomposable module with the help of the minimization process.<|reference_end|>
arxiv
@article{duchamp2004an, title={An unexpected application of minimization theory to module decompositions}, author={Gerard Duchamp, Hatem Hadj Kacem, Eric Laugerotte}, journal={arXiv preprint arXiv:cs/0405060}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405060}, primaryClass={cs.SC} }
duchamp2004an
arxiv-671877
cs/0405061
Jigsaw-based Security in Data Transfer in Computer Networks
<|reference_start|>Jigsaw-based Security in Data Transfer in Computer Networks: In this paper, we present a novel encryption-less algorithm to enhance security in transmission of data in networks. The algorithm uses an intuitively simple idea of a 'jigsaw puzzle' to break the transformed data into multiple parts where these parts form the pieces of the puzzle. Then these parts are packaged into packets and sent to the receiver. A secure and efficient mechanism is provided to convey the information that is necessary for obtaining the original data at the receiver-end from its parts in the packets, that is, for solving the 'jigsaw puzzle'. The algorithm is designed to provide information-theoretic (that is, unconditional) security by the use of a one-time pad like scheme so that no intermediate or unintended node can obtain the entire data. An authentication code is also used to ensure authenticity of every packet.<|reference_end|>
arxiv
@article{vasudevan2004jigsaw-based, title={Jigsaw-based Security in Data Transfer in Computer Networks}, author={Rangarajan Vasudevan, Ajith Abraham, Sugata Sanyal and Dharma P. Agrawal}, journal={IEEE International Conference on Information Technology: Coding and Computing (ITCC'04), USA, IEEE Computer Society, Volume 1, pp. 2-6, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405061}, primaryClass={cs.CR} }
vasudevan2004jigsaw-based
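The one-time-pad-like principle can be illustrated with XOR-based n-of-n splitting: all parts but one are fresh random pads, and the final part is the data XORed with every pad, so any proper subset of parts reveals nothing about the data. This Python sketch shows only that information-theoretic idea; the paper's actual packetization, key conveyance, and authentication scheme are richer.

import os

def split(data: bytes, n_parts: int):
    """Split data into n_parts such that all parts are needed to recover it."""
    pads = [os.urandom(len(data)) for _ in range(n_parts - 1)]
    last = bytes(data)
    for pad in pads:
        last = bytes(a ^ b for a, b in zip(last, pad))
    return pads + [last]

def recombine(parts):
    """XOR all parts together to recover the original data."""
    out = parts[0]
    for p in parts[1:]:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

msg = b"jigsaw"
assert recombine(split(msg, 4)) == msg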
arxiv-671878
cs/0405062
Efficiency Enhancement of Probabilistic Model Building Genetic Algorithms
<|reference_start|>Efficiency Enhancement of Probabilistic Model Building Genetic Algorithms: This paper presents two different efficiency-enhancement techniques for probabilistic model building genetic algorithms. The first technique proposes the use of a mutation operator which performs local search in the sub-solution neighborhood identified through the probabilistic model. The second technique proposes building and using an internal probabilistic model of the fitness along with the probabilistic model of variable interactions. The fitness values of some offspring are estimated using the probabilistic model, thereby avoiding computationally expensive function evaluations. The scalability of the aforementioned techniques is analyzed using facetwise models for convergence time and population sizing. The speed-up obtained by each of the methods is predicted and verified with empirical results. The results show that for additively separable problems the competent mutation operator requires O(k^{0.5} log m)--where k is the building-block size, and m is the number of building blocks--fewer function evaluations than its selectorecombinative counterpart. The results also show that the use of an internal probabilistic fitness model reduces the required number of function evaluations to as low as 1-10% and yields a speed-up of 2--50.<|reference_end|>
arxiv
@article{sastry2004efficiency, title={Efficiency Enhancement of Probabilistic Model Building Genetic Algorithms}, author={Kumara Sastry, David E. Goldberg, Martin Pelikan}, journal={arXiv preprint arXiv:cs/0405062}, year={2004}, number={IlliGAL Report No. 2004020}, archivePrefix={arXiv}, eprint={cs/0405062}, primaryClass={cs.NE} }
sastry2004efficiency
arxiv-671879
cs/0405063
Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head
<|reference_start|>Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head: This paper analyzes the relative advantages between crossover and mutation on a class of deterministic and stochastic additively separable problems. This study assumes that the recombination and mutation operators have the knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, the BB-wise mutation is more efficient than crossover, while the crossover outperforms the mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speed-up of using BB-wise mutation on deterministic problems is O(k^{0.5} log m), where k is the BB size, and m is the number of BBs. Likewise, the speed-up of using crossover on stochastic problems with fixed noise variance is O(m k^{0.5} log m).<|reference_end|>
arxiv
@article{sastry2004let's, title={Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head}, author={Kumara Sastry, David E. Goldberg}, journal={arXiv preprint arXiv:cs/0405063}, year={2004}, number={IlliGAL Report No. 2004005}, archivePrefix={arXiv}, eprint={cs/0405063}, primaryClass={cs.NE} }
sastry2004let's
arxiv-671880
cs/0405064
Designing Competent Mutation Operators via Probabilistic Model Building of Neighborhoods
<|reference_start|>Designing Competent Mutation Operators via Probabilistic Model Building of Neighborhoods: This paper presents a competent selectomutative genetic algorithm (GA), that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as O(2^k m^{1.5}), and requires O(k^{0.5} log m) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere (Sastry & Goldberg, 2004).<|reference_end|>
arxiv
@article{sastry2004designing, title={Designing Competent Mutation Operators via Probabilistic Model Building of Neighborhoods}, author={Kumara Sastry, David E. Goldberg}, journal={arXiv preprint arXiv:cs/0405064}, year={2004}, number={IlliGAL Report No. 2004006}, archivePrefix={arXiv}, eprint={cs/0405064}, primaryClass={cs.NE} }
sastry2004designing
arxiv-671881
cs/0405065
Efficiency Enhancement of Genetic Algorithms via Building-Block-Wise Fitness Estimation
<|reference_start|>Efficiency Enhancement of Genetic Algorithms via Building-Block-Wise Fitness Estimation: This paper studies fitness inheritance as an efficiency enhancement technique for a class of competent genetic algorithms called estimation of distribution algorithms. Probabilistic models of important sub-solutions are developed to estimate the fitness of a proportion of individuals in the population, thereby avoiding computationally expensive function evaluations. The effects of fitness inheritance on the convergence time and population sizing are modeled and the speed-up obtained through inheritance is predicted. The results show that a fitness-inheritance mechanism which utilizes information on building-block fitnesses provides significant efficiency enhancement. For additively separable problems, fitness inheritance reduces the number of function evaluations to about half and yields a speed-up of about 1.75--2.25.<|reference_end|>
arxiv
@article{sastry2004efficiency, title={Efficiency Enhancement of Genetic Algorithms via Building-Block-Wise Fitness Estimation}, author={Kumara Sastry, Martin Pelikan, David E. Goldberg}, journal={arXiv preprint arXiv:cs/0405065}, year={2004}, doi={10.1109/CEC.2004.1330930}, number={IlliGAL Report No. 2004010}, archivePrefix={arXiv}, eprint={cs/0405065}, primaryClass={cs.NE} }
sastry2004efficiency
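The core idea in the entry above, estimating an individual's fitness from observed building-block contributions instead of calling the objective function, can be sketched roughly as follows. The equal splitting of observed fitness across blocks, the inheritance proportion, and all names are simplifying assumptions of this sketch; the paper develops the mechanism inside an estimation of distribution algorithm, which is omitted here.

```python
import random
from collections import defaultdict

GROUPS = [(0, 1, 2), (3, 4, 5)]        # assumed known linkage groups (m=2, k=3)
true_fitness = lambda x: sum(all(x[p] for p in g) for g in GROUPS)
P_INHERIT = 0.5                        # proportion of individuals that inherit

sums = defaultdict(float)              # observed contribution per (group, schema)
counts = defaultdict(int)

def record(x, f):
    for i, g in enumerate(GROUPS):     # crude equal split of observed fitness
        key = (i, tuple(x[p] for p in g))
        sums[key] += f / len(GROUPS)
        counts[key] += 1

def estimate(x):
    total = 0.0
    for i, g in enumerate(GROUPS):
        key = (i, tuple(x[p] for p in g))
        if counts[key]:
            total += sums[key] / counts[key]
    return total

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(40)]
evaluated = 0
for x in pop:
    if not sums or random.random() > P_INHERIT:
        record(x, true_fitness(x))     # true, expensive evaluation
        evaluated += 1
    else:
        f_hat = estimate(x)            # inherited: no objective call
print(f"true evaluations: {evaluated} of {len(pop)}")
```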
arxiv-671882
cs/0405066
A Logic for Reasoning about Digital Rights
<|reference_start|>A Logic for Reasoning about Digital Rights: We present a logic for reasoning about licenses, which are ``terms of use'' for digital resources. The logic provides a language for writing both properties of licenses and specifications that govern a client's actions. We discuss the complexity of checking properties and specifications written in our logic and propose a technique for verification. A key feature of our approach is that it is essentially parameterized by the language in which the licenses are written, provided that this language can be given a trace-based semantics. We consider two license languages to illustrate this flexibility.<|reference_end|>
arxiv
@article{pucella2004a, title={A Logic for Reasoning about Digital Rights}, author={Riccardo Pucella and Vicky Weissman}, journal={arXiv preprint arXiv:cs/0405066}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405066}, primaryClass={cs.CR cs.LO} }
pucella2004a
arxiv-671883
cs/0405067
Note on Counting Eulerian Circuits
<|reference_start|>Note on Counting Eulerian Circuits: We show that the problem of counting the number of Eulerian circuits in an undirected graph is complete for the class #P.<|reference_end|>
arxiv
@article{brightwell2004note, title={Note on Counting Eulerian Circuits}, author={Graham R. Brightwell and Peter Winkler}, journal={arXiv preprint arXiv:cs/0405067}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405067}, primaryClass={cs.CC cs.DM} }
brightwell2004note
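Since the problem above is #P-complete, no efficient counting algorithm is expected; for very small graphs, though, the count can be obtained by exhaustive backtracking. A minimal sketch, counting edge sequences that start and end at a fixed vertex (conventions differ on whether rotations or reversals are quotiented out):

```python
def count_eulerian_circuits(edges, start):
    """Count Eulerian circuits from `start` in an undirected multigraph,
    given as a list of (u, v) edges, by exhaustive backtracking."""
    used = [False] * len(edges)

    def walk(v, remaining):
        if remaining == 0:
            return 1 if v == start else 0
        total = 0
        for i, (a, b) in enumerate(edges):
            if not used[i] and v in (a, b):
                used[i] = True
                total += walk(b if v == a else a, remaining - 1)
                used[i] = False
        return total

    return walk(start, len(edges))

# K5 (complete graph on 5 vertices) is Eulerian: every vertex has degree 4.
K5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(count_eulerian_circuits(K5, 0))
```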
arxiv-671884
cs/0405068
Observability and Decentralized Control of Fuzzy Discrete Event Systems
<|reference_start|>Observability and Decentralized Control of Fuzzy Discrete Event Systems: Fuzzy discrete event systems, a generalization of (crisp) discrete event systems, have been introduced so that the uncertainty, imprecision, and vagueness arising from the dynamics of systems can be represented effectively. A fuzzy discrete event system is modelled by a fuzzy automaton; its behavior is described in terms of the fuzzy language generated by the automaton. In this paper, we are concerned with the supervisory control problem for fuzzy discrete event systems with partial observation. Observability, normality, and co-observability of crisp languages are extended to fuzzy languages. It is shown that the observability, together with controllability, of the desired fuzzy language is a necessary and sufficient condition for the existence of a partially observable fuzzy supervisor. When a decentralized solution is desired, it is proved that local fuzzy supervisors exist if and only if the fuzzy language to be synthesized is controllable and co-observable. Moreover, the infimal controllable and observable fuzzy superlanguage and the supremal controllable and normal fuzzy sublanguage are also discussed. Simple examples are provided to illustrate the theoretical development.<|reference_end|>
arxiv
@article{cao2004observability, title={Observability and Decentralized Control of Fuzzy Discrete Event Systems}, author={Yongzhi Cao and Mingsheng Ying}, journal={IEEE Transactions on Fuzzy Systems, 14(2), pp. 202-216, April 2006}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405068}, primaryClass={cs.DM cs.DC} }
cao2004observability
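The fuzzy automaton model underlying the entry above can be illustrated with the standard max-min composition: a state is a possibility vector over crisp states and each event is a fuzzy transition matrix. This sketch shows that dynamics only; the paper's supervisory-control constructions (observability, normality, co-observability) are not reproduced.

```python
def max_min_step(state, matrix):
    """One event occurrence: new_state[j] = max_i min(state[i], matrix[i][j])."""
    n = len(matrix[0])
    return [max(min(state[i], matrix[i][j]) for i in range(len(state)))
            for j in range(n)]

# A crisp initial state with a vague transition under event `a`.
state = [1.0, 0.0]                    # fully in state 0
event_a = [[0.2, 0.9],                # from state 0: mostly to state 1
           [0.7, 0.1]]                # from state 1: mostly to state 0
print(max_min_step(state, event_a))   # -> [0.2, 0.9]
```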
arxiv-671885
cs/0405069
Mining Frequent Itemsets from Secondary Memory
<|reference_start|>Mining Frequent Itemsets from Secondary Memory: Mining frequent itemsets is at the core of mining association rules, and is by now quite well understood algorithmically. However, most algorithms for mining frequent itemsets assume that the main memory is large enough for the data structures used in the mining, and very few efficient algorithms deal with the case when the database is very large or the minimum support is very low. Mining frequent itemsets from a very large database poses new challenges, as astronomical amounts of raw data are ubiquitously being recorded in commerce, science, and government. In this paper, we discuss approaches to mining frequent itemsets when the data structures are too large to fit in main memory. Several divide-and-conquer algorithms are given for mining from disk. Many novel techniques are introduced. Experimental results show that the techniques reduce the required disk accesses by orders of magnitude and enable truly scalable data mining.<|reference_end|>
arxiv
@article{grahne2004mining, title={Mining Frequent Itemsets from Secondary Memory}, author={G{\"o}sta Grahne and Jianfei Zhu}, journal={arXiv preprint arXiv:cs/0405069}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405069}, primaryClass={cs.DB cs.IR} }
grahne2004mining
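One classic divide-and-conquer approach when the database exceeds memory is partition mining, in the spirit of the Partition algorithm of Savasere et al. (1995): mine each in-memory chunk, take the union of locally frequent itemsets as candidates, then make one more full pass to count them exactly. The paper's own disk-based algorithms are more refined; this sketch only makes the setting concrete.

```python
import math
from itertools import combinations
from collections import Counter

def mine_partition(transactions, minsup):
    """All itemsets frequent in this partition, by brute-force enumeration
    (acceptable only because a partition fits in memory and is small)."""
    counts = Counter()
    for t in transactions:
        for r in range(1, len(t) + 1):
            for itemset in combinations(sorted(t), r):
                counts[frozenset(itemset)] += 1
    return {s for s, c in counts.items() if c >= minsup}

def partition_mine(db, num_parts, minsup_ratio):
    parts = [db[i::num_parts] for i in range(num_parts)]
    # Pass 1: any globally frequent itemset is locally frequent somewhere,
    # so the union of local results is a complete candidate set.
    candidates = set()
    for p in parts:
        candidates |= mine_partition(p, max(1, math.ceil(minsup_ratio * len(p))))
    # Pass 2: one full scan to count every candidate exactly.
    counts = Counter()
    for t in db:
        tset = set(t)
        for c in candidates:
            if c <= tset:
                counts[c] += 1
    return {c: n for c, n in counts.items() if n >= minsup_ratio * len(db)}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(partition_mine(db, num_parts=2, minsup_ratio=0.6))
```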
arxiv-671886
cs/0405070
Traffic-driven model of the World Wide Web graph
<|reference_start|>Traffic-driven model of the World Wide Web graph: We propose a model for the World Wide Web graph that couples the topological growth with the traffic's dynamical evolution. The model is based on a simple traffic-driven dynamics and generates weighted directed graphs exhibiting the statistical properties observed in the Web. In particular, the model yields a non-trivial time evolution of vertices and heavy-tail distributions for the topological and traffic properties. The generated graphs exhibit a complex architecture with a hierarchy of cohesiveness levels similar to those observed in the analysis of real data.<|reference_end|>
arxiv
@article{barrat2004traffic-driven, title={Traffic-driven model of the World Wide Web graph}, author={Alain Barrat, Marc Barthelemy, Alessandro Vespignani}, journal={LNCS 3243, 56 (2004)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405070}, primaryClass={cs.NI cond-mat.stat-mech} }
barrat2004traffic-driven
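A minimal sketch of a traffic-driven growth process in the spirit of the model above: attachment is proportional to in-strength (plus a baseline attractiveness so that young nodes can be chosen), and each attachment triggers a reinforcement of the target's outgoing traffic. The exact rules and parameters of the paper are not reproduced; everything here is illustrative.

```python
import random

def grow(n, w0=1.0, delta=1.0, a=1.0, seed=0):
    """Grow a weighted directed graph: each new node links to an existing node
    chosen with probability proportional to in-strength plus attractiveness `a`;
    the chosen node then reinforces one of its outgoing links by `delta`."""
    rng = random.Random(seed)
    w = {(0, 1): w0, (1, 0): w0}              # seed graph: two mutually linked nodes
    in_strength = {0: w0, 1: w0}
    for new in range(2, n):
        nodes = list(in_strength)
        target = rng.choices(nodes, weights=[in_strength[v] + a for v in nodes])[0]
        w[(new, target)] = w0                 # new node points at the chosen target
        in_strength[target] += w0
        in_strength[new] = 0.0
        out_edges = [e for e in w if e[0] == target]
        if out_edges:                         # traffic reinforcement step
            u, v = rng.choice(out_edges)
            w[(u, v)] += delta
            in_strength[v] += delta
    return w, in_strength

w, s = grow(2000)
print("edges:", len(w), " max in-strength:", round(max(s.values()), 1))
```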
arxiv-671887
cs/0405071
Regression with respect to sensing actions and partial states
<|reference_start|>Regression with respect to sensing actions and partial states: In this paper, we present a state-based regression function for planning domains where an agent does not have complete information and may have sensing actions. We consider binary domains and employ the 0-approximation [Son & Baral 2001] to define the regression function. In binary domains, the use of the 0-approximation means using 3-valued states. Although planning with this approach is incomplete with respect to the full semantics, we adopt it for its lower complexity. We prove the soundness and completeness of our regression formulation with respect to the definition of progression. More specifically, we show that (i) a plan obtained through regression for a planning problem is indeed a progression solution of that planning problem, and that (ii) for each plan found through progression, regression finds that plan or an equivalent one. We then develop a conditional planner that utilizes our regression function. We prove the soundness and completeness of our planning algorithm and present experimental results on several well-known planning problems from the literature.<|reference_end|>
arxiv
@article{tuan2004regression, title={Regression with respect to sensing actions and partial states}, author={Le-Chi Tuan, Chitta Baral, and Tran Cao Son}, journal={arXiv preprint arXiv:cs/0405071}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405071}, primaryClass={cs.AI} }
tuan2004regression
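The 0-approximation's 3-valued states can be made tangible with a tiny forward (progression) sketch: a state maps each fluent to True, False, or None for unknown; an ordinary action applies its effects, and a sensing action splits the state into one branch per possible reading. The paper's regression function runs this construction backwards; only progression is sketched here, and all fluent and action names are invented.

```python
def progress(state, effects):
    """Apply a non-sensing action: `effects` maps fluents to True/False."""
    new = dict(state)
    new.update(effects)
    return new

def sense(state, fluent):
    """Apply a sensing action: branch on each value the fluent may take."""
    if state[fluent] is not None:           # value already known: no branching
        return [dict(state)]
    return [{**state, fluent: v} for v in (True, False)]

# Fluents: `door_open` is unknown, `have_key` is known true.
s0 = {"door_open": None, "have_key": True}
outcomes = []
for b in sense(s0, "door_open"):            # sensing splits into two branches
    if b["door_open"]:
        outcomes.append(progress(b, {"inside": True}))
    else:                                   # conditional branch: open, then enter
        outcomes.append(progress(progress(b, {"door_open": True}),
                                 {"inside": True}))
print(outcomes)
```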
arxiv-671888
cs/0405072
Grid Databases for Shared Image Analysis in the MammoGrid Project
<|reference_start|>Grid Databases for Shared Image Analysis in the MammoGrid Project: The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer-aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK.<|reference_end|>
arxiv
@article{amendolia2004grid, title={Grid Databases for Shared Image Analysis in the MammoGrid Project}, author={S. R. Amendolia, F. Estrella, T. Hauer, D. Manset, R. McClatchey, M. Odeh, T. Reading, D. Rogulin, D. Schottlander, T. Solomonides}, journal={Proceedings of the 2004 International Database Engineering and Applications Symposium (IDEAS'04). Coimbra Portugal. IEEE Press}, year={2004}, doi={10.1109/IDEAS.2004.1319804}, archivePrefix={arXiv}, eprint={cs/0405072}, primaryClass={cs.DB cs.DC} }
amendolia2004grid
arxiv-671889
cs/0405073
Advanced exploitation of buffer overflow
<|reference_start|>Advanced exploitation of buffer overflow: This article describes in depth several ways of exploiting buffer overflows in the UNIX operating system.<|reference_end|>
arxiv
@article{gay2004advanced, title={Advanced exploitation of buffer overflow}, author={Olivier Gay}, journal={arXiv preprint arXiv:cs/0405073}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405073}, primaryClass={cs.CR} }
gay2004advanced
arxiv-671890
cs/0405074
MammoGrid: A Service Oriented Architecture based Medical Grid Application
<|reference_start|>MammoGrid: A Service Oriented Architecture based Medical Grid Application: The MammoGrid project has recently delivered its first proof-of-concept prototype using a Service-Oriented Architecture (SOA)-based Grid application to enable distributed computing spanning national borders. The underlying AliEn Grid infrastructure has been selected because of its practicality and because of its emergence as a potential open-source, standards-based solution for managing and coordinating distributed resources. The resultant prototype is expected to harness huge amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and, ultimately, tele-diagnosis over communities of medical virtual organisations. The MammoGrid prototype comprises a high-quality clinician visualization workstation used for data acquisition and inspection, a DICOM-compliant interface to a set of medical services (annotation, security, image analysis, data storage and querying services) residing on a so-called Grid-box, and secure access to a network of other Grid-boxes connected through Grid middleware. This paper outlines the MammoGrid approach to managing a federation of Grid-connected mammography databases in the context of the recently delivered prototype and describes the next phase of prototyping.<|reference_end|>
arxiv
@article{amendolia2004mammogrid:, title={MammoGrid: A Service Oriented Architecture based Medical Grid Application}, author={S R Amendolia, F Estrella, W Hassan, T Hauer, D Manset, R McClatchey, D Rogulin, T Solomonides}, journal={Proceedings of the 3rd International Conference on Grid and Cooperative Computing. Wuhan. China 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405074}, primaryClass={cs.DC cs.DB} }
amendolia2004mammogrid:
arxiv-671891
cs/0405075
Reduction Strategies in Lambda Term Normalization and their Effects on Heap Usage
<|reference_start|>Reduction Strategies in Lambda Term Normalization and their Effects on Heap Usage: Higher-order representations of objects such as programs, proofs, formulas and types have become important to many symbolic computation tasks. Systems that support such representations usually depend on the implementation of an intensional view of the terms of some variant of the typed lambda-calculus. Various notations have been proposed for lambda-terms that treat substitutions explicitly, as a basis for realizing such implementations. There are, however, several choices in the actual reduction strategies. The most common strategy utilizes such notations only implicitly, via an incremental use of environments. This approach does not allow the smaller substitution steps to be intermingled with other operations of interest on lambda-terms. However, a naive strategy that uses such notations explicitly can also be costly: each use of the substitution propagation rules creates a new structure on the heap that is often discarded in the immediately following step. There is thus a tradeoff between the two approaches. This thesis describes the actual realization of both approaches, discusses their tradeoffs on this basis and, finally, offers an amalgamated approach that utilizes recursion in rewrite rule application but also suspends substitution operations where necessary.<|reference_end|>
arxiv
@article{qi2004reduction, title={Reduction Strategies in Lambda Term Normalization and their Effects on Heap Usage}, author={Xiaochu Qi}, journal={arXiv preprint arXiv:cs/0405075}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405075}, primaryClass={cs.PL} }
qi2004reduction
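For contrast with the suspension-based notations studied in the thesis above, here is the naive baseline: a normal-order normalizer over de Bruijn terms in which every beta step performs a full eager substitution, allocating fresh structure at each propagation step. The suspension approach would delay and merge these traversals; none of that machinery appears in this sketch.

```python
# Terms: ("var", k) de Bruijn index | ("lam", body) | ("app", f, a)

def shift(t, d, cutoff=0):
    """Add d to all free indices >= cutoff."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t (capture-avoiding via shifting)."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(body, arg):
    """One eager beta step: allocates a fresh copy of the whole body."""
    return shift(subst(body, 0, shift(arg, 1)), -1)

def normalize_head(t):
    """Reduce only enough to expose a head lambda."""
    while t[0] == "app":
        f = normalize_head(t[1])
        if f[0] == "lam":
            t = beta(f[1], t[2])
        else:
            return ("app", f, t[2])
    return t

def normalize(t):
    """Normal-order (leftmost-outermost) normalization."""
    tag = t[0]
    if tag == "app":
        f = normalize_head(t[1])
        if f[0] == "lam":
            return normalize(beta(f[1], t[2]))
        return ("app", normalize(f), normalize(t[2]))
    if tag == "lam":
        return ("lam", normalize(t[1]))
    return t

TWO = ("lam", ("lam", ("app", ("var", 1), ("app", ("var", 1), ("var", 0)))))
PLUS = ("lam", ("lam", ("lam", ("lam",
        ("app", ("app", ("var", 3), ("var", 1)),
                ("app", ("app", ("var", 2), ("var", 1)), ("var", 0)))))))
print(normalize(("app", ("app", PLUS, TWO), TWO)))   # Church numeral 4
```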
arxiv-671892
cs/0405076
An Abductive Framework For Computing Knowledge Base Updates
<|reference_start|>An Abductive Framework For Computing Knowledge Base Updates: This paper introduces an abductive framework for updating knowledge bases represented by extended disjunctive programs. We first provide a simple transformation from abductive programs to update programs, which are logic programs specifying changes on abductive hypotheses. Then, extended abduction, which was introduced by the same authors as a generalization of traditional abduction, is computed by the answer sets of update programs. Next, two different types of updates, view updates and theory updates, are characterized by abductive programs and computed by update programs. The task of consistency restoration is also realized as a special case of these updates. Each update problem is comparatively assessed from the computational complexity viewpoint. The result of this paper provides a uniform framework for different types of knowledge base updates, and each update is computed using existing procedures of logic programming.<|reference_end|>
arxiv
@article{sakama2004an, title={An Abductive Framework For Computing Knowledge Base Updates}, author={Chiaki Sakama, Katsumi Inoue}, journal={Theory and Practice of Logic Programming, vol. 3, no. 6, 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405076}, primaryClass={cs.DB} }
sakama2004an
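Traditional abduction, the starting point generalized by the paper above, can be brute-forced over a small propositional definite program: given rules, a set of abducible atoms, and an observation, find the subset-minimal hypothesis sets whose addition derives the observation. Extended abduction (which may also remove hypotheses) and the update-program translation are beyond this sketch.

```python
from itertools import combinations

def consequences(facts, rules):
    """Forward chaining over definite rules given as (head, [body atoms])."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def abduce(rules, abducibles, observation):
    """All subset-minimal hypothesis sets explaining the observation."""
    explanations = []
    for r in range(len(abducibles) + 1):          # smallest sets first
        for hyp in combinations(sorted(abducibles), r):
            if observation in consequences(set(hyp), rules):
                if not any(set(e) <= set(hyp) for e in explanations):
                    explanations.append(hyp)
    return explanations

rules = [("wet_grass", ["rain"]),
         ("wet_grass", ["sprinkler_on"])]
print(abduce(rules, {"rain", "sprinkler_on"}, "wet_grass"))
# -> [('rain',), ('sprinkler_on',)]
```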
arxiv-671893
cs/0405077
Fast Simulation of Multicomponent Dynamic Systems
<|reference_start|>Fast Simulation of Multicomponent Dynamic Systems: A computer simulation has to be fast to be helpful if it is employed to study the behavior of a multicomponent dynamic system. This paper discusses modeling concepts and algorithmic techniques useful for creating such fast simulations. Concrete examples of simulations, ranging from econometric modeling to communications to material science, are used to illustrate these techniques and concepts. The algorithmic and modeling methods discussed include event-driven processing, ``anticipating'' data structures, ``lazy'' evaluation, the Poisson dispenser, and parallel processing by cautious advancements and by synchronous relaxations. The paper gives examples of how these techniques and models are employed in assessing the efficiency of capacity management methods in wireless and wired networks, in studies of magnetization, crystalline structure, and sediment formation in material science, and in studies of competition in economics.<|reference_end|>
arxiv
@article{lubachevsky2004fast, title={Fast Simulation of Multicomponent Dynamic Systems}, author={Boris D. Lubachevsky}, journal={Bell Labs Technical Journal, Vol.5, No.2, April-June 2000, pp.134-156}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405077}, primaryClass={cs.DS cond-mat.mtrl-sci cs.DC} }
lubachevsky2004fast
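Event-driven processing, the first technique named in the abstract above, rests on one core data structure: a priority queue of pending events ordered by timestamp. A minimal self-contained sketch, here an M/M/1-style single-server queue with invented rates (the paper's applications are far richer):

```python
import heapq, random

def simulate(arrival_rate, service_rate, horizon, seed=0):
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    queue_len, busy, served = 0, False, 0
    while events:
        t, kind = heapq.heappop(events)          # next event in time order
        if t > horizon:
            break
        if kind == "arrival":
            queue_len += 1
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if not busy:                          # server idle: start service now
                busy = True
                queue_len -= 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:                                     # departure
            served += 1
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy = False
    return served

print("customers served:", simulate(arrival_rate=0.9, service_rate=1.0, horizon=10_000))
```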
arxiv-671894
cs/0405078
Generative Programming of Graphical User Interfaces
<|reference_start|>Generative Programming of Graphical User Interfaces: Generative Programming (GP) is a computing paradigm allowing automatic creation of entire software families from the configuration of elementary and reusable components. GP can be projected onto different technologies, e.g. C++ templates, JavaBeans, Aspect-Oriented Programming (AOP), or Frame technology. This paper focuses on Frame technology, which supports the implementation and completion of software components. The purpose of this paper is to introduce the GP paradigm in the area of GUI application generation. It demonstrates how customized executable applications with GUI parts can be generated automatically from an abstract specification.<|reference_end|>
arxiv
@article{schlee2004generative, title={Generative Programming of Graphical User Interfaces}, author={Max Schlee, Jean Vanderdonckt}, journal={arXiv preprint arXiv:cs/0405078}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405078}, primaryClass={cs.HC} }
schlee2004generative
arxiv-671895
cs/0405079
Higher-Order Concurrent Win32 Programming
<|reference_start|>Higher-Order Concurrent Win32 Programming: We present a concurrent framework for Win32 programming based on Concurrent ML, a concurrent language with higher-order functions, static typing, lightweight threads and synchronous communication channels. The key points of the framework are the move from an event-loop model to a threaded model for the processing of window messages, and the decoupling of control notifications from system messages. This last point allows us to derive a general way of writing controls that leads to easy composition and can accommodate ActiveX Controls in a transparent way.<|reference_end|>
arxiv
@article{pucella2004higher-order, title={Higher-Order Concurrent Win32 Programming}, author={Riccardo Pucella}, journal={arXiv preprint arXiv:cs/0405079}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405079}, primaryClass={cs.PL} }
pucella2004higher-order
arxiv-671896
cs/0405080
Reactive Programming in Standard ML
<|reference_start|>Reactive Programming in Standard ML: Reactive systems are systems that maintain an ongoing interaction with their environment, activated by receiving input events from the environment and producing output events in response. Modern programming languages designed to program such systems use a paradigm based on the notions of instants and activations. We describe a library for Standard ML that provides basic primitives for programming reactive systems. The library is a low-level system upon which more sophisticated reactive behaviors can be built, which provides a convenient framework for prototyping extensions to existing reactive languages.<|reference_end|>
arxiv
@article{pucella2004reactive, title={Reactive Programming in Standard ML}, author={Riccardo Pucella}, journal={arXiv preprint arXiv:cs/0405080}, year={2004}, doi={10.1109/ICCL.1998.674156}, archivePrefix={arXiv}, eprint={cs/0405080}, primaryClass={cs.PL} }
pucella2004reactive
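The instants-and-activations paradigm described above can be approximated in a few lines with generators: each reactive behavior is a coroutine that yields to end its participation in the current instant, and the engine activates every live behavior once per instant. This mirrors only the shape of the execution model; the library's actual Standard ML primitives (suspension, preemption, and so on) are not modeled.

```python
def engine(behaviors, instants):
    """Run each behavior once per instant; drop finished ones."""
    live = list(behaviors)
    for i in range(instants):
        print(f"-- instant {i} --")
        still_live = []
        for b in live:
            try:
                next(b)                 # activate until it yields (ends its instant)
                still_live.append(b)
            except StopIteration:
                pass                    # behavior terminated
        live = still_live

def blinker(name, period):
    tick = 0
    while True:
        if tick % period == 0:
            print(f"{name} fires")
        tick += 1
        yield                           # wait for the next instant

def one_shot(name, delay):
    for _ in range(delay):
        yield                           # stay silent for `delay` instants
    print(f"{name} fires once")

engine([blinker("A", 2), one_shot("B", 3)], instants=6)
```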
arxiv-671897
cs/0405081
An Analysis of Lambek's Production Machines
<|reference_start|>An Analysis of Lambek's Production Machines: Lambek's production machines may be used to generate and recognize sentences in a subset of the language described by a production grammar. We determine in this paper the subset of the language of a grammar generated and recognized by such machines.<|reference_end|>
arxiv
@article{pucella2004an, title={An Analysis of Lambek's Production Machines}, author={Riccardo Pucella}, journal={RAIRO Informatique Theorique et Applications, 31 (5), pp. 483-497, 1997}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405081}, primaryClass={cs.LO} }
pucella2004an
arxiv-671898
cs/0405082
Aspects de la Programmation d'Applications Win32 avec un Langage Fonctionnel
<|reference_start|>Aspects de la Programmation d'Applications Win32 avec un Langage Fonctionnel: A useful programming language needs to support writing programs that take advantage of services and communication mechanisms supplied by the operating system. We examine the problem of programming native Win32 applications under Windows with Standard ML. We introduce a framework based on the IDL interface language and a minimal foreign-function interface to explore the Win32 API and COM in the context of Standard ML.<|reference_end|>
arxiv
@article{pucella2004aspects, title={Aspects de la Programmation d'Applications Win32 avec un Langage Fonctionnel}, author={Riccardo Pucella, Erik Meijer, Dino Oliva}, journal={arXiv preprint arXiv:cs/0405082}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405082}, primaryClass={cs.PL} }
pucella2004aspects
arxiv-671899
cs/0405083
The Design of a COM-Oriented Module System
<|reference_start|>The Design of a COM-Oriented Module System: We present in this paper the preliminary design of a module system based on a notion of components as found in COM. This module system is inspired by that of Standard ML, and features first-class instances of components, first-class interfaces, and interface-polymorphic functions, as well as allowing components to be both imported from the environment and exported to the environment using simple mechanisms. The module system automates the memory management of interfaces and hides the IUnknown interface and QueryInterface mechanism from the programmer, favoring instead a higher-level approach to handling interfaces.<|reference_end|>
arxiv
@article{pucella2004the, title={The Design of a COM-Oriented Module System}, author={Riccardo Pucella}, journal={arXiv preprint arXiv:cs/0405083}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405083}, primaryClass={cs.PL} }
pucella2004the
arxiv-671900
cs/0405084
A Framework for Interoperability
<|reference_start|>A Framework for Interoperability: Practical implementations of high-level languages must provide access to libraries and system services that have APIs specified in a low-level language (usually C). An important characteristic of such mechanisms is the foreign-interface policy that defines how to bridge the semantic gap between the high-level language and C. For example, IDL-based tools generate code to marshal data into and out of the high-level representation according to user annotations. The design space of foreign-interface policies is large and there are pros and cons to each approach. Rather than commit to a particular policy, we choose to focus on the problem of supporting a gamut of interoperability policies. In this paper, we describe a framework for language interoperability that is expressive enough to support very efficient implementations of a wide range of different foreign-interface policies. We describe two tools that implement substantially different policies on top of our framework and present benchmarks that demonstrate their efficiency.<|reference_end|>
arxiv
@article{fisher2004a, title={A Framework for Interoperability}, author={Kathleen Fisher, Riccardo Pucella, John Reppy}, journal={arXiv preprint arXiv:cs/0405084}, year={2004}, archivePrefix={arXiv}, eprint={cs/0405084}, primaryClass={cs.PL} }
fisher2004a
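The marshaling work that a foreign-interface policy must specify can be seen in miniature with Python's ctypes: the high-level string is converted to a C representation on the way in, and the numeric result is converted back on the way out. This is only an analogy to the paper's setting of high-level languages interoperating with C, not its mechanism.

```python
import ctypes, ctypes.util

# Locate the C library (platform-dependent; on Linux this finds libc.so.6).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign signature: size_t strlen(const char *s).
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# Marshaling happens at the boundary: the Python string is encoded into a
# NUL-terminated byte buffer on the way in; the C size_t comes back as an int.
print(libc.strlen("interoperability".encode("utf-8")))   # -> 16
```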