Dataset columns: name (string, 7-10 characters); title (string, 13-125 characters); abstract (string, 67-3.02k characters); fulltext (string, 1 distinct value); keywords (string, 17-734 characters).
train_1551
The numerical solution of an evolution problem of second order in time on a closed smooth boundary
We consider an initial value problem for the second-order differential equation with a Dirichlet-to-Neumann operator coefficient. For the numerical solution we carry out semi-discretization by the Laguerre transformation with respect to the time variable. Then an infinite system of the stationary operator equations is obtained. By potential theory, the operator equations are reduced to boundary integral equations of the second kind with logarithmic or hypersingular kernels. The full discretization is realized by Nystrom's method which is based on the trigonometric quadrature rules. Numerical tests confirm the ability of the method to solve these types of nonstationary problems
stationary operator equations;evolution problem;hypersingular kernels;initial value problem;closed smooth boundary;laguerre transformation;second-order differential equation;boundary integral equations
train_1552
Stability of Runge-Kutta methods for delay integro-differential equations
We study stability of Runge-Kutta (RK) methods for delay integro-differential equations with a constant delay on the basis of the linear equation du/dt = Lu(t) + Mu(t- tau ) + K integral /sub t- tau //sup t/ u( theta )d theta , where L, M, K are constant complex matrices. In particular, we show that the same result as in the case K = 0 (Koto, 1994) holds for this test equation, i.e., every A-stable RK method preserves the delay-independent stability of the exact solution whenever a step-size of the form h = tau /m is used, where m is a positive integer
constant delay;stability;runge-kutta methods;delay integro-differential equations
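For a concrete feel of the test equation above, the following sketch integrates a scalar instance of du/dt = Lu(t) + Mu(t - tau) + K integral /sub t- tau //sup t/ u(s)ds with the A-stable trapezoidal rule and the constrained step h = tau/m, so that the delayed argument always falls on a grid point. The coefficients, the history function and the choice of the trapezoidal rule are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Hedged sketch: integrate a scalar instance of the test equation
#   u'(t) = L*u(t) + M*u(t - tau) + K * integral_{t - tau}^{t} u(s) ds
# with the A-stable trapezoidal rule and the constrained step h = tau/m, so the
# delayed argument always falls on a grid point.  Coefficients, history function
# and the choice of RK method are illustrative assumptions, not the paper's tests.
L, M, K = -2.0, 0.5, 0.1
tau, m = 1.0, 20
h = tau / m
N = 40 * m                              # integrate over 40 delay intervals

u = np.zeros(N + 1, dtype=complex)
u[:m + 1] = 1.0                         # constant history on [-tau, 0]

def memory_integral(lo, hi):
    """Composite trapezoidal approximation of the integral over [t_lo, t_hi]."""
    return h * (0.5 * u[lo] + u[lo + 1:hi].sum() + 0.5 * u[hi])

for n in range(m, N):
    I_n = memory_integral(n - m, n)
    rhs = u[n] + 0.5 * h * (L * u[n] + M * u[n - m] + K * I_n
                            + M * u[n + 1 - m]
                            + K * h * (0.5 * u[n + 1 - m] + u[n + 2 - m:n + 1].sum()))
    u[n + 1] = rhs / (1.0 - 0.5 * h * L - 0.25 * K * h * h)   # implicit step solved exactly

print("max |u| over the last delay interval:", np.abs(u[-m:]).max())
```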
train_1553
Numerical solution of forward and backward problem for 2-D heat conduction equation
For a two-dimensional heat conduction problem, we consider its initial boundary value problem and the related inverse problem of determining the initial temperature distribution from transient temperature measurements. The conditional stability for this inverse problem and the error analysis for the Tikhonov regularization are presented. An implicit inversion method, which is based on the regularization technique and the successive over-relaxation (SOR) iteration process, is established. Due to the explicit difference scheme for a direct heat problem developed in this paper, the inversion process is very efficient, while the application of the SOR technique makes our inversion converge rapidly. Numerical results illustrating our method are also given
initial boundary value problem;transient temperature measurements;error analysis;successive over-relaxation iteration process;heat conduction;tikhonov regularization;conditional stability;initial temperature distribution;inverse problem
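The abstract relies on an explicit difference scheme for the direct heat problem; the sketch below shows a standard FTCS (forward-time, centred-space) solver of that kind on the unit square. The diffusivity, grid and initial temperature field are made-up values, and the Tikhonov/SOR inversion itself is not reproduced.

```python
import numpy as np

# Illustrative FTCS (forward-time, centred-space) solver for the *direct* 2-D
# heat problem the abstract builds on.  Diffusivity, grid and initial field are
# assumptions; the Tikhonov/SOR inversion of the paper is not reproduced here.
alpha = 1.0
nx = ny = 41
dx = dy = 1.0 / (nx - 1)
dt = 0.2 * dx * dx / alpha                     # satisfies the explicit stability bound
nsteps = 200

x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)      # assumed initial temperature distribution

for _ in range(nsteps):
    lap = ((u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
           + (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2)
    u[1:-1, 1:-1] += dt * alpha * lap          # homogeneous Dirichlet walls stay at zero

# a "transient temperature measurement" at the centre of the plate
print("u(0.5, 0.5, t=%.4f) = %.6f" % (nsteps * dt, u[nx // 2, ny // 2]))
```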
train_1554
Differential calculus for p-norms of complex-valued vector functions with applications
For complex-valued n-dimensional vector functions t to s(t), supposed to be sufficiently smooth, the differentiability properties of the mapping t to ||s(t)||/sub p/ at every point t = t/sub 0/ epsilon R/sub 0//sup +/:= {t epsilon R | t >or= 0} are investigated, where || . ||/sub p/ is the usual vector norm in C/sup n/ resp. R/sup n/, for p epsilon [1, infinity ]. Moreover, formulae for the first three right derivatives D/sub +//sup k/||s(t)||/sub p/, k = 1, 2, 3 are determined. These formulae are applied to vibration problems by computing the best upper bounds on ||s(t)||/sub p/ in certain classes of bounds. These results cannot be obtained by the methods used so far. The systematic use of the differential calculus for vector norms, as done here for the first time, could lead to major advances also in other branches of mathematics and other sciences
mapping;differential calculus;vibration problems;vector functions;vector norms
train_1555
A note on multi-index polynomials of Dickson type and their applications in quantum optics
We discuss the properties of a new family of multi-index Lucas type polynomials, which are often encountered in problems of intracavity photon statistics. We develop an approach based on the integral representation method and show that this class of polynomials can be derived from recently introduced multi-index Hermite like polynomials
generating functions;quantum optics;multi-index polynomials;intracavity photon statistics;integral representation;lucas type polynomials
train_1556
Regularity of some 'incomplete' Pal-type interpolation problems
In this paper the regularity of nine Pal-type interpolation problems is proved. In the literature interpolation on the zeros of the pair W/sub n//sup ( alpha )/(z) = (z + alpha )/sup n/ + (1 + alpha z)/sup n/, v/sub n//sup ( alpha )/(z) = (z + alpha )/sup n/ - (1 + alpha z)/sup n/ with 0 < alpha < 1 has been studied. Here the nodes form a subset of these sets of zeros
pal-type interpolation problems;zeros
train_1557
L/sub p/ boundedness of (C, 1) means of orthonormal expansions for general exponential weights
Let I be a finite or infinite interval, and let W:I to (0, infinity ). Assume that W/sup 2/ is a weight, so that we may define orthonormal polynomials corresponding to W/sup 2/. For f :R to R, let s/sub m/ [f] denote the mth partial sum of the orthonormal expansion of f with respect to these polynomials. We investigate boundedness in weighted L/sub p/ spaces of the (C, 1) means 1/n Sigma /sub m=1//sup n/ s/sub m/[f]. The class of weights W/sup 2/ considered includes even and noneven exponential weights
general exponential weights;boundedness;infinite interval;finite interval;orthonormal expansions;orthonormal polynomials;mth partial sum
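As a small numerical illustration of (C, 1) means, the sketch below forms partial sums of an orthonormal expansion and their Cesaro average for the classical Legendre weight on [-1, 1] (W/sup 2/ = 1), which is much simpler than the exponential weights treated in the paper; the test function is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Numerical illustration of (C, 1) means for the classical Legendre weight on
# [-1, 1] (W^2 = 1) -- far simpler than the exponential weights of the paper.
# We form partial sums s_m[f] of the orthonormal expansion and their Cesaro
# average (1/n) * sum_{m=1}^{n} s_m[f].  The test function is an arbitrary choice.
f = lambda x: np.abs(x)                      # mildly nonsmooth test function
nmax = 40
xq, wq = L.leggauss(200)                     # Gauss-Legendre quadrature on [-1, 1]

# orthonormal Legendre coefficients c_k = int f(x) p_k(x) dx, with p_k = sqrt(k + 1/2) P_k
coeffs = np.array([np.sum(wq * f(xq) * np.sqrt(k + 0.5) * L.legval(xq, np.eye(nmax + 1)[k]))
                   for k in range(nmax + 1)])

def partial_sum(m, x):
    """s_m[f](x): expansion truncated after the first m + 1 orthonormal terms."""
    return sum(coeffs[k] * np.sqrt(k + 0.5) * L.legval(x, np.eye(nmax + 1)[k])
               for k in range(m + 1))

x = np.linspace(-1, 1, 401)
partials = [partial_sum(m, x) for m in range(1, nmax + 1)]
cesaro = sum(partials) / len(partials)       # the (C, 1) mean of s_1, ..., s_n

print("max |s_n - f|    =", np.abs(partials[-1] - f(x)).max())
print("max |(C,1) - f|  =", np.abs(cesaro - f(x)).max())
```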
train_1558
Orthogonality of the Jacobi polynomials with negative integer parameters
It is well known that the Jacobi polynomials P/sub n//sup ( alpha , beta )/(x) are orthogonal with respect to a quasi-definite linear functional whenever alpha , beta , and alpha + beta + 1 are not negative integer numbers. Recently, Sobolev orthogonality for these polynomials has been obtained for alpha a negative integer and beta not a negative integer and also for the case alpha = beta negative integer numbers. In this paper, we give a Sobolev orthogonality for the Jacobi polynomials in the remainder cases
sobolev orthogonality;jacobi polynomials;orthogonality;negative integer parameters;quasi-definite linear functional
train_1559
A comparison theorem for the iterative method with the preconditioner (I + S/sub max/)
A.D. Gunawardena et al. (1991) have reported the modified Gauss-Seidel method with a preconditioner (I + S). In this article, we propose to use a preconditioner (I + S/sub max/) instead of (I + S). Here, S/sub max/ is constructed by only the largest element at each row of the upper triangular part of A. By using the lemma established by M. Neumann and R.J. Plemmons (1987), we get the comparison theorem for the proposed method. Simple numerical examples are also given
preconditioner;modified gauss-seidel method;comparison theorem;iterative method
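A rough numerical illustration of the preconditioning idea (not the paper's proof): build S from the full strictly upper-triangular part as in Gunawardena et al., build S/sub max/ from only the largest-magnitude entry in each row above the diagonal, and compare the spectral radii of the resulting Gauss-Seidel iteration matrices on a small, randomly generated test matrix.

```python
import numpy as np

# Rough numerical illustration (not the paper's proof).  With the usual splitting
# A = D - L - U, the Gauss-Seidel iteration matrix is (D - L)^{-1} U; we compare
# its spectral radius for A, (I + S)A and (I + S_max)A, where S is the full
# strictly upper part (as in Gunawardena et al.) and S_max keeps only the largest
# entry per row, as described in the abstract.  The test matrix is invented.
def gs_spectral_radius(A):
    D_minus_L = np.tril(A)                 # D - L (lower triangle incl. diagonal)
    U = -np.triu(A, 1)                     # strictly upper part with the sign convention above
    T = np.linalg.solve(D_minus_L, U)      # Gauss-Seidel iteration matrix
    return np.abs(np.linalg.eigvals(T)).max()

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) - 0.08 * rng.random((n, n))  # diagonally dominant with nonpositive off-diagonals
np.fill_diagonal(A, 1.0)

S = -np.triu(A, 1)                         # (I + S): full upper triangle
S_max = np.zeros_like(A)
for i in range(n - 1):
    j = i + 1 + np.argmax(np.abs(A[i, i + 1:]))   # largest element of row i above the diagonal
    S_max[i, j] = -A[i, j]

I = np.eye(n)
print("rho(GS, A)          :", gs_spectral_radius(A))
print("rho(GS, (I+S)A)     :", gs_spectral_radius((I + S) @ A))
print("rho(GS, (I+S_max)A) :", gs_spectral_radius((I + S_max) @ A))
```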
train_156
Using extended logic programming for alarm-correlation in cellular phone networks
Alarm correlation is a necessity in large mobile phone networks, where the alarm bursts resulting from severe failures would otherwise overload the network operators. We describe how to realize alarm-correlation in cellular phone networks using extended logic programming. To this end, we describe an algorithm and system solving the problem, a model of a mobile phone network application, and a detailed solution for a specific scenario
fault diagnosis;extended logic programming;alarm-correlation;cellular phone networks;large mobile phone networks;network operators
train_1560
Determinantal solutions of solvable chaotic systems
It is shown that two solvable chaotic systems, the arithmetic-harmonic mean (ARM) algorithm and the Ulam-von Neumann (UvN) map, have determinantal solutions. An additional formula for certain determinants and Riccati difference equations play a key role in both cases. Two infinite hierarchies of solvable chaotic systems are presented which have determinantal solutions
riccati difference equations;ulam-von neumann map;chebyshev polynomial;determinants;solvable chaotic systems;arithmetic-harmonic mean algorithm;determinantal solutions
train_1561
Self-validating integration and approximation of piecewise analytic functions
Let an analytic or a piecewise analytic function on a compact interval be given. We present algorithms that produce enclosures for the integral or the function itself. Under certain conditions on the representation of the function, this is done with the minimal order of numbers of operations. The integration algorithm is implemented and numerical comparisons to non-validating integration software are presented
self-validating approximation;self-validating integration;enclosures;piecewise analytic functions;minimal order;integration algorithm;compact interval;complex interval arithmetic
train_1562
Solution of a class of two-dimensional integral equations
The two-dimensional integral equation 1/ pi integral integral /sub D/( phi (r, theta )/R/sup alpha /)dS=f(r/sub 0/, theta /sub 0/) defined on a circular disk D: r/sub 0/<or=a, 0<or= theta /sub 0/<or=2 pi , is considered in the present paper. Here R in the kernel denotes the distance between two points P(r, theta ) and P/sub 0/(r/sub 0/, theta /sub 0/) in D, and 0< alpha <2 or 2< alpha <4. Based on some known results of Bessel functions, integral representations of the kernel are established for 0< alpha <2 and 2< alpha <4, respectively, and employed to solve the corresponding two-dimensional integral equation. The solutions of the weakly singular integral equation for 0< alpha <2 and of the hypersingular integral equation for 2< alpha <4 are obtained, respectively
hypersingular integral equation;2d integral equations;integral representations;bessel functions;circular disk;weakly singular integral equation;kernel
train_1563
A distance between elliptical distributions based in an embedding into the Siegel group
This paper describes two different embeddings of the manifolds corresponding to many elliptical probability distributions with the informative geometry into the manifold of positive-definite matrices with the Siegel metric, generalizing a result published previously elsewhere. These new general embeddings are applicable to a wide class of elliptical probability distributions, in which the normal, t-Student and Cauchy are specific examples. A lower bound for the Rao distance is obtained, which is itself a distance, and, through these embeddings, a number of statistical tests of hypothesis are derived
siegel group;positive-definite matrices;manifolds embeddings;elliptical distributions;lower bound;elliptical probability distributions;informative geometry
train_1564
Asymptotic normality for the K/sub phi /-divergence goodness-of-fit tests
In this paper for a wide class of goodness-of-fit statistics based on K/sub phi /-divergences, the asymptotic normality is established under the assumption n/m/sub n/ to a in (0, infinity ), where n denotes sample size and m/sub n/ the number of cells. This result is extended to contiguous alternatives to study asymptotic efficiency
asymptotic efficiency;asymptotic normality;k/sub phi /-divergence goodness-of-fit tests
train_1565
On lag windows connected with Jacobi polynomials
Lag windows whose corresponding spectral windows are Jacobi polynomials or sums of Jacobi polynomials are introduced. The bias and variance of their spectral density estimators are investigated and their window bandwidth and characteristic exponent are determined
characteristic exponent;lag windows;jacobi polynomials;spectral windows;spectral density estimators;window bandwidth
train_1566
A numerical C/sup 1/-shadowing result for retarded functional differential equations
This paper gives a numerical C/sup 1/-shadowing between the exact solutions of a functional differential equation and its numerical approximations. The shadowing result is obtained by comparing exact solutions with numerical approximations which do not share the same initial value. Behavior of stable manifolds of functional differential equations under numerics will follow from the shadowing result
numerical approximations;exact solutions;retarded functional differential equations;stable manifolds;numerical c/sup 1/-shadowing
train_1567
Asymptotic expansions for the zeros of certain special functions
We derive asymptotic expansions for the zeros of the cosine-integral Ci(x) and the Struve function H/sub 0/(x), and extend the available formulae for the zeros of Kelvin functions. Numerical evidence is provided to illustrate the accuracy of the expansions
zeros;asymptotic expansions;struve function;cosine-integral;kelvin functions;accuracy
train_1568
Natural language from artificial life
This article aims to show that linguistics, in particular the study of the lexico-syntactic aspects of language, provides fertile ground for artificial life modeling. A survey of the models that have been developed over the last decade and a half is presented to demonstrate that ALife techniques have a lot to offer an explanatory theory of language. It is argued that this is because much of the structure of language is determined by the interaction of three complex adaptive systems: learning, culture, and biological evolution. Computational simulation, informed by theoretical linguistics, is an appropriate response to the challenge of explaining real linguistic data in terms of the processes that underpin human language
lexico-syntactic aspects;culture;biological evolution;learning;alife;artificial life;adaptive systems;natural language;linguistics;computational simulation
train_1569
An interactive self-replicator implemented in hardware
Self-replicating loops presented to date are essentially worlds unto themselves, inaccessible to the observer once the replication process is launched. We present the design of an interactive self-replicating loop of arbitrary size, wherein the user can physically control the loop's replication and induce its destruction. After introducing the BioWall, a reconfigurable electronic wall for bio-inspired applications, we describe the design of our novel loop and delineate its hardware implementation in the wall
biowall;field programmable gate array;reconfigurable electronic wall;interactive self-replicating loop;self-replication;artificial life;reconfigurable computing;cellular automata;hardware implementation;bio-inspired applications;interactive self-replicator
train_157
Automatic extraction of eye and mouth fields from a face image using eigenfeatures and ensemble networks
This paper presents a novel algorithm for the extraction of the eye and mouth (facial features) fields from 2D gray level images. Eigenfeatures are derived from the eigenvalues and eigenvectors of the binary edge data set constructed from eye and mouth fields. Such eigenfeatures are ideal features for finely locating fields efficiently. The eigenfeatures are extracted from a set of the positive and negative training samples for facial features and are used to train a multilayer perceptron (MLP) whose output indicates the degree to which a particular image window contains the eyes or the mouth within itself. An ensemble network consisting of a multitude of independent MLPs was used to enhance the generalization performance of a single MLP. It was experimentally verified that the proposed algorithm is robust against facial size and even slight variations of the pose
eigenfeatures;training samples;binary edge data set;multilayer perceptron;eye field extraction;mouth field extraction;ensemble neural networks;2d gray level images;eigenvalues;eigenvectors;experiment;face feature extraction;generalization
train_1570
Self-reproduction in three-dimensional reversible cellular space
Due to inevitable power dissipation, it is said that nano-scaled computing devices should perform their computing processes in a reversible manner. This will be a large problem in constructing three-dimensional nano-scaled functional objects. Reversible cellular automata (RCA) are used for modeling physical phenomena such as power dissipation, by studying the dissipation of garbage signals. We construct a three-dimensional self-inspective self-reproducing reversible cellular automaton by extending the two-dimensional version SR/sub 8/. It can self-reproduce various patterns in three-dimensional reversible cellular space without dissipating garbage signals
3d self-inspective self-reproducing cellular automata;reversible cellular automata;self-reproduction;nano-scaled computing devices;artificial life;power dissipation;three-dimensional reversible cellular space
train_1571
The simulated emergence of distributed environmental control in evolving microcosms
This work continues investigation into Gaia theory (Lovelock, The ages of Gaia, Oxford University Press, 1995) from an artificial life perspective (Downing, Proceedings of the 7th International Conference on Artificial Life, p. 90-99, MIT Press, 2000), with the aim of assessing the general compatibility of emergent distributed environmental control with conventional natural selection. Our earlier system, GUILD (Downing and Zvirinsky, Artificial Life, 5, p.291-318, 1999), displayed emergent regulation of the chemical environment by a population of metabolizing agents, but the chemical model underlying those results was trivial, essentially admitting all possible reactions at a single energy cost. The new model, METAMIC, utilizes abstract chemistries that are both (a) constrained to a small set of legal reactions, and (b) grounded in basic fundamental relationships between energy, entropy, and biomass synthesis/breakdown. To explore the general phenomena of emergent homeostasis, we generate 100 different chemistries and use each as the basis for several METAMIC runs, as part of a Gaia hunt. This search discovers 20 chemistries that support microbial populations capable of regulating a physical environmental factor within their growth-optimal range, despite the extra metabolic cost. Case studies from the Gaia hunt illustrate a few simple mechanisms by which real biota might exploit the underlying chemistry to achieve some control over their physical environment. Although these results shed little light on the question of Gaia on Earth, they support the possibility of emergent environmental control at the microcosmic level
simulated emergence;artificial metabolisms;natural selection;chemical model;guild system;artificial life;gaia theory;artificial chemistry;genetic algorithms;evolving microcosms;emergent distributed environmental control;metabolizing agents;emergent homeostasis;metamic model;gaia hunt
train_1572
Ant colony optimization and stochastic gradient descent
We study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques
reinforcement learning;combinatorial optimization;swarm intelligence;empirical aco algorithms;artificial life;ant colony optimization;local search algorithms;social insects;pheromones;heuristic;stochastic gradient descent
train_1574
No-go areas? [content management]
Alex Fry looks at how content management systems can be used to ensure website access for one important customer group, the disabled
disabled;website access;content management systems
train_1578
Records role in e-business
Records management standards are now playing a key role in e-business strategy
e-business strategy;records management
train_158
Neural and neuro-fuzzy integration in a knowledge-based system for air quality prediction
We propose a unified approach for integrating implicit and explicit knowledge in neurosymbolic systems as a combination of neural and neuro-fuzzy modules. In the developed hybrid system, a training data set is used for building neuro-fuzzy modules, and represents implicit domain knowledge. The explicit domain knowledge on the other hand is represented by fuzzy rules, which are directly mapped into equivalent neural structures. The aim of this approach is to improve the abilities of modular neural structures, which are based on incomplete learning data sets, since the knowledge acquired from human experts is taken into account for adapting the general neural architecture. Three methods to combine the explicit and implicit knowledge modules are proposed. The techniques used to extract fuzzy rules from neural implicit knowledge modules are described. These techniques improve the structure and the behavior of the entire system. The proposed methodology has been applied in the field of air quality prediction with very encouraging results. These experiments show that the method is worth further investigation
hybrid system;neuro-fuzzy integration;training data set;air quality prediction;knowledge-based system;implicit domain knowledge representation;incomplete learning;air pollution;neurosymbolic systems;neural architecture;experiments;fuzzy rules
train_1583
Cutting through the confusion [workflow & content management]
Information management vendors are rushing to re-position themselves and put a portal spin on their products, says ITNET's Graham Urquhart. The result is confusion, with a range of different definitions and claims clouding the true picture
portals;itnet;workflow;collaboratively
train_1584
Content all clear [workflow & content management]
Graeme Muir of SchlumbergerSema cuts through the confusion between content, document and records management
content management;schlumbergersema;document management;records management
train_1588
Contentment management
Andersen's William Yarker and Richard Young outline the route to a successful content management strategy
andersen consulting;content management strategy
train_1589
View from the top [workflow & content management]
International law firm Linklaters has installed a global document and content management system that is accessible to clients and which has helped it move online
content management;document management;online;linklaters;international law firm
train_159
An intelligent system combining different resource-bounded reasoning techniques
In this paper, PRIMES (Progressive Reasoning and Intelligent multiple MEthods System), a new architecture for resource-bounded reasoning that combines a form of progressive reasoning and the so-called multiple methods approach is presented. Each time-critical reasoning unit is designed in such a way that it delivers an approximate result in time whenever an overload or a failure prevents the system from producing the most accurate result. Indeed, reasoning units use approximate processing based on two salient features. First, an incremental processing unit constructs an approximate solution quickly and then refines it incrementally. Second, a multiple methods approach proposes different alternatives to solve the problem, each of them being selected according to the available resources. In allowing several resource-bounded reasoning paradigms to be combined, we hope to extend their actual scope to cover more real-world application domains
real-time performance;time-critical reasoning unit;progressive reasoning;approximate processing;primes;complex systems;resource-bounded reasoning techniques;intelligent multiple methods system
train_1590
Holding on [workflow & content management]
Marc Fresko of Cornwell Management Consultants says 'think ahead' when developing your electronic records management policy
cornwell management consultants;electronic records management policy
train_1591
Quadratic interpolation on spheres
Riemannian quadratics are C/sup 1/ curves on Riemannian manifolds, obtained by performing the quadratic recursive de Casteljau algorithm in a Riemannian setting. They are of interest for interpolation problems in Riemannian manifolds, such as trajectory-planning for rigid body motion. Some interpolation properties of Riemannian quadratics are analysed when the ambient manifold is a sphere or projective space, with the usual Riemannian metrics
corner-cutting;trajectory-planning;ambient manifold;riemannian manifolds;quadratic interpolation;parallel translation;approximation theory;rigid body motion
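The quadratic recursive de Casteljau construction mentioned above is easy to sketch on the unit sphere: two nested geodesic (slerp) interpolations of three control points. The control points below are arbitrary; this only illustrates the construction, not the paper's analysis.

```python
import numpy as np

# Sketch of the quadratic recursive de Casteljau construction on the unit sphere:
# two nested geodesic (slerp) interpolations of three control points.  The control
# points are arbitrary; this illustrates the construction, not the paper's analysis.
def slerp(p, q, t):
    """Point at parameter t on the minimal geodesic from p to q on the sphere."""
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if omega < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

def riemannian_quadratic(p0, p1, p2, t):
    """Quadratic de Casteljau step: geodesic interpolation of geodesic interpolations."""
    return slerp(slerp(p0, p1, t), slerp(p1, p2, t), t)

p0 = np.array([1.0, 0.0, 0.0])
p1 = np.array([0.0, 1.0, 0.0])
p2 = np.array([0.0, 0.0, 1.0])

for t in np.linspace(0.0, 1.0, 5):
    q = riemannian_quadratic(p0, p1, p2, t)
    print(f"t = {t:.2f}   q = {np.round(q, 4)}   |q| = {np.linalg.norm(q):.6f}")
```

The printed norms stay at 1, confirming that the curve remains on the sphere while interpolating p0 at t = 0 and p2 at t = 1.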
train_1592
Hermite interpolation by rotation-invariant spatial Pythagorean-hodograph curves
The interpolation of first-order Hermite data by spatial Pythagorean-hodograph curves that exhibit closure under arbitrary 3-dimensional rotations is addressed. The hodographs of such curves correspond to certain combinations of four polynomials, given by Dietz et al. (1993), that admit compact descriptions in terms of quaternions - an instance of the "PH representation map" proposed by Choi et al. (2002). The lowest-order PH curves that interpolate arbitrary first-order spatial Hermite data are quintics. It is shown that, with PH quintics, the quaternion representation yields a reduction of the Hermite interpolation problem to three "simple" quadratic equations in three quaternion unknowns. This system admits a closed-form solution, expressing all PH quintic interpolants to given spatial Hermite data as a two-parameter family. An integral shape measure is invoked to fix these two free parameters
first-order hermite data;polynomials;spatial pythagorean hodograph curves;integral shape measure;quaternions;closed-form solution;interpolation
train_1593
Single and multi-interval Legendre tau -methods in time for parabolic equations
In this paper, we take the parabolic equation with periodic boundary conditions as a model to present a spectral method with the Fourier approximation in space and single/multi-interval Legendre Petrov-Galerkin methods in time. For the single interval spectral method in time, we obtain the optimal error estimate in L/sup 2/-norm. For the multi-interval spectral method in time, the L/sup 2/-optimal error estimate is valid in space. Numerical results show the efficiency of the methods
legendre petrov-galerkin method;interval decomposition;optimal error estimation;fourier approximation;parabolic equation;error analysis;interval spectral method;partial differential equations;periodic boundary conditions
train_1594
Training multilayer perceptrons via minimization of sum of ridge functions
Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x)= Sigma /sub i=1//sup n/ f/sub i/( xi /sub i/.x), where xi /sub i/ in R/sup S/, 1<or=i<or=n, and each f/sub i/( xi /sub i/.x) is a ridge function. We show that when n is small the problem of minimizing E can be treated as one of minimizing univariate functions, and we use the gradient algorithms for minimizing E when n is moderately large. For a large n, we present the online gradient algorithms and especially show the monotonicity and weak convergence of the algorithms
multilayer perceptrons;monotonicity;neural networks;minimization;gradient algorithms;weak convergence;ridge functions;online gradient algorithms;univariate functions
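A minimal sketch of the objective E(x) = Sigma /sub i/ f/sub i/( xi /sub i/.x) minimised by an online gradient method (one ridge term per update). The choice f/sub i/(t) = (t - y/sub i/)/sup 2/ and the synthetic data are assumptions made purely for illustration, not the paper's setting.

```python
import numpy as np

# Minimal sketch of minimising E(x) = sum_i f_i(xi_i . x) with an *online*
# gradient method (one ridge term per update), as discussed above.  The choice
# f_i(t) = (t - y_i)^2 and the synthetic xi_i, y_i are illustrative assumptions.
rng = np.random.default_rng(1)
n, s = 200, 5                        # number of ridge terms, dimension of x
Xi = rng.normal(size=(n, s))         # ridge directions xi_i
x_true = rng.normal(size=s)
y = Xi @ x_true                      # targets chosen so E has a zero minimiser

def f_prime(t, yi):                  # derivative of f_i(t) = (t - yi)^2
    return 2.0 * (t - yi)

x = np.zeros(s)
eta = 0.01                           # learning rate
for epoch in range(50):
    for i in rng.permutation(n):     # online: visit one ridge term at a time
        x -= eta * f_prime(Xi[i] @ x, y[i]) * Xi[i]

print("E(x) =", np.sum((Xi @ x - y) ** 2), "  ||x - x_true|| =", np.linalg.norm(x - x_true))
```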
train_1595
Convergence of finite element approximations and multilevel linearization for Ginzburg-Landau model of d-wave superconductors
In this paper, we consider the finite element approximations of a recently proposed Ginzburg-Landau-type model for d-wave superconductors. In contrast to the conventional Ginzburg-Landau model the scalar complex valued order-parameter is replaced by a multicomponent complex order-parameter and the free energy is modified according to the d-wave pairing symmetry. Convergence and optimal error estimates and some super-convergent estimates for the derivatives are derived. Furthermore, we propose a multilevel linearization procedure to solve the nonlinear systems. It is proved that the optimal error estimates and super-convergence for the derivatives are preserved by the multi-level linearization algorithm
multilevel linearization;finite element method;d-wave;error estimation;two-grid method;superconductivity;nonlinear systems;free energy;ginzburg-landau model
train_1596
Wavelet collocation methods for a first kind boundary integral equation in acoustic scattering
In this paper we consider a wavelet algorithm for the piecewise constant collocation method applied to the boundary element solution of a first kind integral equation arising in acoustic scattering. The conventional stiffness matrix is transformed into the corresponding matrix with respect to wavelet bases, and it is approximated by a compressed matrix. Finally, the stiffness matrix is multiplied by diagonal preconditioners such that the resulting matrix of the system of linear equations is well conditioned and sparse. Using this matrix, the boundary integral equation can be solved effectively
first kind integral operators;boundary element solution;boundary integral equation;piecewise constant collocation;stiffness matrix;wavelet transform;linear equations;computational complexity;acoustic scattering;wavelet algorithm
train_1597
Application of heuristic methods for conformance test selection
In this paper we focus on the test selection problem. It is modeled after a real-life problem that arises in telecommunication when one has to check the reliability of an application. We apply different metaheuristics, namely Reactive Tabu Search (RTS), Genetic Algorithms (GA) and Simulated Annealing (SA) to solve the problem. We propose some modifications to the conventional schemes including an adaptive neighbourhood sampling in RTS, an adaptive variable mutation rate in GA and an adaptive variable neighbourhood structure in SA. The performance of the algorithms is evaluated in different models for existing protocols. Computational results show that GA and SA can provide high-quality solutions in acceptable time compared to the results of a commercial software, which makes them applicable in practical test selection
simulated annealing;test selection problem;adaptive variable mutation rate;gsm protocol;adaptive variable neighbourhood structure;genetic algorithms;adaptive neighbourhood sampling;metaheuristics;isdn protocol;heuristic methods;telecommunication conformance test selection;reliability;reactive tabu search
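As a toy version of the test-selection problem, the sketch below picks a subset of candidate test cases that covers every requirement at minimum cost using plain simulated annealing; the random instance and the penalty weight are invented, and the adaptive-neighbourhood refinements proposed in the paper are omitted.

```python
import math, random

# Toy test-selection instance: choose a subset of candidate test cases covering
# every requirement at minimum total cost, solved with plain simulated annealing.
# The instance is random, the uncovered-requirement penalty is invented, and the
# adaptive-neighbourhood refinements proposed in the paper are omitted.
random.seed(7)
n_tests, n_reqs = 40, 25
cost = [random.randint(1, 10) for _ in range(n_tests)]
covers = [set(random.sample(range(n_reqs), random.randint(1, 5))) for _ in range(n_tests)]

def objective(sel):
    covered = set().union(*[covers[i] for i in range(n_tests) if sel[i]])
    return sum(c for c, s in zip(cost, sel) if s) + 100 * (n_reqs - len(covered))

sel = [1] * n_tests                           # start from "run every test"
best_val = objective(sel)
T = 20.0
while T > 0.01:
    for _ in range(50):
        cand = sel[:]
        cand[random.randrange(n_tests)] ^= 1  # neighbourhood move: flip one test in/out
        delta = objective(cand) - objective(sel)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            sel = cand
            best_val = min(best_val, objective(sel))
    T *= 0.9                                  # geometric cooling schedule

print("best penalised cost:", best_val, "   tests selected in final solution:", sum(sel))
```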
train_1598
A decision support model for selecting product/service benefit positionings
The art (and science) of successful product/service positioning generally hinges on the firm's ability to select a set of attractively priced consumer benefits that are: valued by the buyer, distinctive in one or more respects, believable, deliverable, and sustainable (under actual or potential competitive abilities to imitate, neutralize, or overcome) in the target markets that the firm selects. For many years, the ubiquitous quadrant chart has been used to provide a simple graph of product/service benefits (usually called product/service attributes) described in terms of consumers' perceptions of the importance of attributes (to brand/supplier choice) and the performance of competing firms on these attributes. This paper describes a model that extends the quadrant chart concept to a decision support system that optimizes a firm's market share for a specified product/service. In particular, we describe a decision support model that utilizes relatively simple marketing research data on consumers' judged benefit importances, and supplier performances on these benefits to develop message components for specified target buyers. A case study is used to illustrate the model. The study deals with developing advertising message components for a relatively new entrant in the US air shipping market. We also discuss, more briefly, management reactions to application of the model to date, and areas for further research and model extension
management reactions;message components;decision support model;product/service attributes;market share optimization;product/service benefit positionings;consumer judged benefit importances;marketing research data;attractively priced consumer benefits;quadrant chart;optimal message design;brand/supplier choice;advertising;advertising message components;simple graph;greedy heuristic;us air shipping market
train_1599
Evaluating the best main battle tank using fuzzy decision theory with linguistic criteria evaluation
In this paper, experts' opinions are described in linguistic terms which can be expressed in trapezoidal (or triangular) fuzzy numbers. To make the consensus of the experts consistent, we utilize the fuzzy Delphi method to adjust the fuzzy rating of every expert to achieve the consensus condition. For the aggregate of many experts' opinions, we take the operation of fuzzy numbers to get the mean of fuzzy rating, x/sub ij/ and the mean of weight, w/sub .j/. In multi-alternatives and multi-attributes cases, the fuzzy decision matrix X=[x/sub ij/]/sub m*n/ is constructed by means of the fuzzy rating, x/sub ij/. Then, we can derive the aggregate fuzzy numbers by multiplying the fuzzy decision matrix with the corresponding fuzzy attribute weights. The final results become a problem of ranking fuzzy numbers. We also propose an easy procedure of using fuzzy numbers to rank aggregate fuzzy numbers A/sub i/. In this way, we can obtain the best selection for evaluating the system. For practical application, we propose an algorithm for evaluating the best main battle tank by fuzzy decision theory and comparing it with other methods
multiple criteria problems;fuzzy rating;fuzzy number ranking;trapezoidal fuzzy numbers;group decision making;fuzzy decision theory;linguistic criteria evaluation;fuzzy decision matrix;triangular fuzzy numbers;aggregate fuzzy numbers;fuzzy group decision making;consensus condition;fuzzy attribute weights;battle tank evaluation;subjective-objective backgrounds;fuzzy delphi method
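A minimal sketch of the aggregation step described above, assuming triangular fuzzy numbers (a, b, c), the usual approximate arithmetic for positive triangular numbers, and centroid defuzzification for the final ranking; all ratings and weights are made up, and the fuzzy Delphi consensus step is omitted.

```python
# Minimal fuzzy-aggregation sketch.  Ratings x_ij and weights w_j are triangular
# fuzzy numbers (a, b, c); the aggregate score of alternative i is
# sum_j x_ij (*) w_j under the usual approximate arithmetic for positive
# triangular numbers, ranked here by the centroid.  All numbers are invented.
def tfn_add(p, q):
    return tuple(pi + qi for pi, qi in zip(p, q))

def tfn_mul(p, q):                  # approximate product of positive TFNs
    return (p[0] * q[0], p[1] * q[1], p[2] * q[2])

def centroid(p):                    # simple ranking index for a triangular number
    return sum(p) / 3.0

# ratings for 3 alternatives (rows) on 2 attributes (columns), plus attribute weights
X = [[(5, 7, 9), (3, 5, 7)],
     [(7, 9, 10), (5, 7, 9)],
     [(3, 5, 7), (7, 9, 10)]]
W = [(0.3, 0.4, 0.5), (0.5, 0.6, 0.7)]

ranking = []
for i, row in enumerate(X):
    agg = (0.0, 0.0, 0.0)
    for x_ij, w_j in zip(row, W):
        agg = tfn_add(agg, tfn_mul(x_ij, w_j))
    ranking.append((centroid(agg), i, agg))

for c, i, agg in sorted(ranking, reverse=True):
    print(f"alternative {i}: aggregate = {tuple(round(v, 2) for v in agg)}, centroid = {c:.2f}")
```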
train_16
Dual nature of mass multi-agent systems
Dual nature of mass multi-agent systems (mMAS) - emerging as an internal discord of two spheres: micro (virtual), consisting of agents and their internal phenomena, and macro, arising at the interface to the real world - gives rise to the need for a new approach to analysis, design and utilisation of such systems. Based on the concept of VR decomposition, the problem of management of such systems is discussed. As a sub-type that makes mMAS closer to the application sphere, an evolutionary multi-agent system (EMAS) is proposed. EMAS combines features of mMAS with advantages of an evolutionary model of computation. As an illustration of this consideration two particular EMAS are presented, which allow us to obtain promising results in the fields of multiobjective optimisation and time-series prediction, and thus justify the approach
virtual reality;vr decomposition;mass multiple agent systems;evolutionary multiple agent system;formal model;multiobjective optimisation;time-series prediction;micro-macro link
train_160
Taming the paper tiger [paperwork organization]
Generally acknowledged as a critical problem for many information professionals, the massive flow of documents, paper trails, and information needs efficient and dependable approaches for processing, storing, and finding items and information
information professionals;paperwork organization;information processing;information retrieval;information storage
train_1600
The development and evaluation of a fuzzy logic expert system for renal transplantation assignment: Is this a useful tool?
Allocating donor kidneys to patients is a complex, multicriteria decision-making problem which involves not only medical, but also ethical and political issues. In this paper, a fuzzy logic expert system approach was proposed as an innovative way to deal with the vagueness and complexity faced by medical doctors in kidney allocation decision making. A pilot fuzzy logic expert system for kidney allocation was developed and evaluated in comparison with two existing allocation algorithms: a priority sorting system used by multiple organ retrieval and exchange (MORE) in Canada and a point scoring systems used by united network for organ sharing (UNOS) in US. Our simulated experiment based on real data indicated that the fuzzy logic system can represent the expert's thinking well in handling complex tradeoffs, and overall, the fuzzy logic derived recommendations were more acceptable to the expert than those from the MORE and UNOS algorithms
point scoring systems;complex tradeoff handling;united network for organ sharing;kidney allocation decision making;simulated experiment;fuzzy logic expert system;priority sorting system;multicriteria decision-making problem;renal transplantation assignment;donor kidneys;multiple organ retrieval exchange
train_1601
Solving the multiple competitive facilities location problem
In this paper we propose five heuristic procedures for the solution of the multiple competitive facilities location problem. A franchise of several facilities is to be located in a trade area where competing facilities already exist. The objective is to maximize the market share captured by the franchise as a whole. We perform extensive computational tests and conclude that a two-step heuristic procedure combining simulated annealing and an ascent algorithm provides the best solutions
simulated annealing;computational tests;multiple competitive facilities location problem;ascent algorithm;heuristic procedures;market share maximization;two-step heuristic procedure;facilities franchise
train_1602
An optimization approach to plan for reusable software components
It is well acknowledged in software engineering that there is a great potential for accomplishing significant productivity improvements through the implementation of a successful software reuse program. On the other hand, such gains are attainable only by instituting detailed action plans at both the organizational and program level. Given this need, the paucity of research papers related to planning, and in particular, optimized planning is surprising. This research, which is aimed at this gap, brings out an application of optimization for the planning of reusable software components (SCs). We present a model that selects a set of SCs that must be built, in order to lower development and adaptation costs. We also provide implications to project management based on simulation, an approach that has been adopted by other cost models in the software engineering literature. Such a prescriptive model does not exist in the literature
reusable software components;adaptation costs;software reuse program;optimized planning;action plans;optimization;simulation;productivity improvements;development costs;project management;software engineering
train_1603
Exploiting structure in adaptive dynamic programming algorithms for a stochastic batch service problem
The purpose of this paper is to illustrate the importance of using structural results in dynamic programming algorithms. We consider the problem of approximating optimal strategies for the batch service of customers at a service station. Customers stochastically arrive at the station and wait to be served, incurring a waiting cost and a service cost. Service of customers is performed in groups of a fixed service capacity. We investigate the structure of cost functions and establish some theoretical results including monotonicity of the value functions. Then, we use our adaptive dynamic programming monotone algorithm that uses structure to preserve monotonicity of the estimates at each iteration to approximate the value functions. Since the problem with homogeneous customers can be solved optimally, we have a means of comparison to evaluate our heuristic. Finally, we compare our algorithm to classical forward dynamic programming methods
inventory theory;optimal strategy approximation;waiting cost;stochastic batch service problem;service station;fixed service capacity;structural results;service cost;cost function structure;value function monotonicity;adaptive dynamic programming algorithms
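The sketch below is a toy monotone value-iteration scheme in the spirit of the abstract: customers queue for batch service with capacity Q, dispatch cost K and holding cost h, and after every Bellman update the value-function estimate is projected onto the set of nondecreasing functions. All parameters and the arrival distribution are invented; the paper's adaptive approximate algorithm is not reproduced.

```python
import numpy as np

# Toy monotone value iteration for a batch-service queue (illustrative only):
# state x = customers waiting, batch capacity Q, dispatch cost K, holding cost h,
# discount gamma, invented arrival distribution.  After each Bellman update the
# estimate is projected onto nondecreasing functions, a crude stand-in for the
# structure-preserving step described in the abstract.
Q, K, h, gamma = 5, 10.0, 1.0, 0.95
Xmax = 30
arr_vals = np.array([0, 1, 2, 3])              # arrivals per period
arr_prob = np.array([0.2, 0.4, 0.3, 0.1])

states = np.arange(Xmax + 1)
V = np.zeros(Xmax + 1)

def expected_V(V, x):
    nxt = np.minimum(x + arr_vals, Xmax)       # truncate the state space at Xmax
    return np.dot(arr_prob, V[nxt])

for _ in range(500):
    V_new = np.empty_like(V)
    for x in states:
        wait = h * x + gamma * expected_V(V, x)
        after = max(x - Q, 0)
        serve = K + h * after + gamma * expected_V(V, after)
        V_new[x] = min(wait, serve)
    V_new = np.maximum.accumulate(V_new)       # monotone (nondecreasing) projection
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

serve_states = [x for x in states
                if K + h * max(x - Q, 0) + gamma * expected_V(V, max(x - Q, 0))
                < h * x + gamma * expected_V(V, x)]
print("dispatching a batch first becomes optimal at x =", serve_states[0] if serve_states else None)
```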
train_1604
Improving supply-chain performance by sharing advance demand information
In this paper, we analyze how sharing advance demand information (ADI) can improve supply-chain performance. We consider two types of ADI, aggregated ADI (A-ADI) and detailed ADI (D-ADI). With A-ADI, customers share with manufacturers information about whether they will place an order for some product in the next time period, but do not share information about which product they will order and which of several potential manufacturers will receive the order. With D-ADI, customers additionally share information about which product they will order, but which manufacturer will receive the order remains uncertain. We develop and solve mathematical models of supply chains where ADI is shared. We derive exact expressions and closed-form approximations for expected costs, expected base-stock levels, and variations of the production quantities. We show that both the manufacturer and the customers benefit from sharing ADI, but that sharing ADI increases the bullwhip effect. We also show that under certain conditions it is optimal to collect ADI from either none or all of the customers. We study two supply chains in detail: a supply chain with an arbitrary number of products that have identical demand rates, and a supply chain with two products that have arbitrary demand rates. For these two supply chains, we analyze how the values of A-ADI and D-ADI depend on the characteristics of the supply chain and on the quality of the shared information, and we identify conditions under which sharing A-ADI and D-ADI can significantly reduce cost. Our results can be used by decision makers to analyze the cost savings that can be achieved by sharing ADI and help them to determine if sharing ADI is beneficial for their supply chains
shared information quality;expected base-stock levels;bullwhip effect;closed-form approximations;cost savings;forecasting;advance demand information;production quantity variations;expected costs;detailed adi;information sharing;arbitrary demand rates;decision makers;manufacturing;arbitrary product number;supply-chain performance improvement;identical demand rates;aggregated adi;mathematical models
train_1605
A GRASP heuristic for the mixed Chinese postman problem
Arc routing problems (ARPs) consist of finding a traversal on a graph satisfying some conditions related to the links of the graph. In the Chinese postman problem (CPP) the aim is to find a minimum cost tour (closed walk) traversing all the links of the graph at least once. Both the Undirected CPP, where all the links are edges that can be traversed in both ways, and the Directed CPP, where all the links are arcs that must be traversed in a specified way, are known to be polynomially solvable. However, if we deal with a mixed graph (having edges and arcs), the problem turns out to be NP-hard. In this paper, we present a heuristic algorithm for this problem, the so-called Mixed CPP (MCPP), based on greedy randomized adaptive search procedure (GRASP) techniques. The algorithm has been tested and compared with other known and recent methods from the literature on a wide collection of randomly generated instances, with up to 200 nodes and 600 links, producing encouraging computational results. As far as we know, this is the best heuristic algorithm for the MCPP, with respect to solution quality, published up to now
greedy randomized adaptive search procedure;closed walk;heuristic algorithm;mixed chinese postman problem;optimization problems;np-hard problem;arc routing problems;graph traversal;metaheuristics;minimum cost tour;grasp heuristic
train_1606
Single machine earliness-tardiness scheduling with resource-dependent release dates
This paper deals with the single machine earliness and tardiness scheduling problem with a common due date and resource-dependent release dates. It is assumed that the cost of resource consumption of a job is a non-increasing linear function of the job release date, and this function is common for all jobs. The objective is to find a schedule and job release dates that minimize the total resource consumption, and earliness and tardiness penalties. It is shown that the problem is NP-hard in the ordinary sense even if the due date is unrestricted (the number of jobs that can be scheduled before the due date is unrestricted). An exact dynamic programming (DP) algorithm for small and medium size problems is developed. A heuristic algorithm for large-scale problems is also proposed and the results of a computational comparison between heuristic and optimal solutions are discussed
heuristic algorithm;exact dynamic programming algorithm;polynomial time algorithm;small size problems;resource-dependent release dates;total resource consumption minimization;np-hard problem;job resource consumption cost;common due date;single machine earliness-tardiness scheduling;medium size problems;nonincreasing linear function;job release date;large-scale problems
train_1607
A solvable queueing network model for railway networks and its validation and applications for the Netherlands
The performance of new railway networks cannot be measured or simulated, as no detailed train schedules are available. Railway infrastructure and capacities are to be determined long before the actual traffic is known. This paper therefore proposes a solvable queueing network model to compute performance measures of interest without requiring train schedules (timetables). Closed form expressions for mean delays are obtained. New network designs, traffic scenarios, and capacity expansions can so be evaluated. A comparison with real delay data for the Netherlands supports the practical value of the model. A special Dutch cargo-line application is included
solvable queueing network model;closed form expressions;performance measures;network designs;dutch cargo-line application;railway networks;railway capacities;traffic scenarios;netherlands;railway infrastructure;capacity expansions;mean delays
train_1608
A geometric process equivalent model for a multistate degenerative system
In this paper, a monotone process model for a one-component degenerative system with k+1 states (k failure states and one working state) is studied. We show that this model is equivalent to a geometric process (GP) model for a two-state one component system such that both systems have the same long-run average cost per unit time and the same optimal policy. Furthermore, an explicit expression for the determination of an optimal policy is derived
monotone process model;renewal reward process;long-run average cost;optimal policy;geometric process equivalent model;one-component degenerative system;two-state one component system;working state;failure states;multistate degenerative system;replacement policy
train_1609
Modeling undesirable factors in efficiency evaluation
Data envelopment analysis (DEA) measures the relative efficiency of decision making units (DMUs) with multiple performance factors which are grouped into outputs and inputs. Once the efficient frontier is determined, inefficient DMUs can improve their performance to reach the efficient frontier by either increasing their current output levels or decreasing their current input levels. However, both desirable (good) and undesirable (bad) factors may be present. For example, if inefficiency exists in production processes where final products are manufactured with a production of wastes and pollutants, the outputs of wastes and pollutants are undesirable and should be reduced to improve the performance. Using the classification invariance property, we show that the standard DEA model can be used to improve the performance via increasing the desirable outputs and decreasing the undesirable outputs. The method can also be applied to situations when some inputs need to be increased to improve the performance. The linearity and convexity of DEA are preserved through our proposal
wastes;linear programming;undesirable outputs;classification invariance property;final product manufacture;current input levels;current output levels;data envelopment analysis;efficiency evaluation;efficient frontier;undesirable factor modeling;pollutants;multiple performance factors;decision making units;desirable outputs;production processes
train_161
Electronic books: reports of their death have been exaggerated
E-books will survive, but not in the consumer market - at least not until reading devices become much cheaper and much better in quality (which is not likely to happen soon). Library Journal's review of major events of the year 2001 noted that two requirements for the success of E-books were development of a sustainable business model and development of better reading devices. The E-book revolution has therefore become more of an evolution. We can look forward to further developments and advances in the future
e-books;library journal;electronic books
train_1610
Dynamic multi-objective heating optimization
We develop a multicriteria approach to the problem of space heating under a time varying price of electricity. In our dynamic goal programming model the goals are ideal temperature intervals and the other criteria are the costs and energy consumption. We discuss the modelling requirements in multicriteria problems with a dynamic structure and present a new relaxation method combining the traditional epsilon -constraint and goal programming (GP) methods. The multi-objective heating optimization (MOHO) application in a spreadsheet environment with numerical examples is described
relaxation method;space heating;modelling requirements;dynamic structure;energy consumption;multicriteria approach;multi-objective heating optimization;spreadsheet environment;numerical examples;dynamic goal programming model;dynamic multi-objective heating optimization;time varying electricity price;ideal temperature intervals;epsilon -constraint
train_1611
Data mining business intelligence for competitive advantage
Organizations have lately realized that just processing transactions and/or information faster and more efficiently no longer provides them with a competitive advantage vis-a-vis their competitors for achieving business excellence. Information technology (IT) tools that are oriented towards knowledge processing can provide the edge that organizations need to survive and thrive in the current era of fierce competition. Enterprises are no longer satisfied with business information system(s); they require business intelligence system(s). The increasing competitive pressures and the desire to leverage information technology techniques have led many organizations to explore the benefits of new emerging technology, data warehousing and data mining. The paper discusses data warehouses and data mining tools and applications
organizations;knowledge processing;data mining;information technology;competitive advantage;business information system;data warehouses;business intelligence
train_1613
Current waveform control of a high-power-factor rectifier circuit for harmonic suppression of voltage and current in a distribution system
This paper presents the input current waveform control of the rectifier circuit which realizes simultaneously the high input power factor and the harmonics suppression of the receiving-end voltage and the source current under the distorted receiving-end voltage. The proposed input current waveform includes the harmonic components which are in phase with the receiving-end voltage harmonics. The control parameter in the proposed waveform is designed by examining the characteristics of both the harmonic suppression effect in the distribution system and the input power factor of the rectifier circuit. The effectiveness of the proposed current waveform has been confirmed experimentally
source current;distorted receiving-end voltage;receiving-end voltage harmonics;input current waveform;60 hz;receiving-end voltage;harmonic current suppression;200 v;8 kva;2 kw;high input power factor;distribution system;high-power-factor rectifier circuit;harmonic voltage suppression;input current waveform control
train_1614
A transmission line fault-location system using the wavelet transform
This paper describes the locating system of line-to-ground faults on a power transmission line by using a wavelet transform. The possibility of the location with the surge generated by a fault has been theoretically proposed. In order to make the method practicable, the authors realize very fast processors. They design the wavelet transform and location chips, and construct a very fast fault location system by processing the measured data in parallel. This system is realized by a computer with three FPGA processor boards on a PCI bus. The processors are controlled by UNIX and the system has a graphical user interface with an X window system
fpga processor boards;computer simulation;pci bus;line-to-ground faults;wavelet transform;unix;fault surge generation;graphic user interface;x window system;power transmission line fault-location system
train_1615
Laguerre approximation of fractional systems
Systems characterised by fractional power poles can be called fractional systems. Here, Laguerre orthogonal polynomials are employed to approximate fractional systems by minimum phase, reduced order, rational transfer functions. Both the time and the frequency-domain analysis exhibit the accuracy of the approximation
fractional power poles;fractional systems;robust controllers;closed-loop system;reduced order;rational transfer functions;frequency-domain analysis;orthogonal polynomials;time-domain analysis;laguerre approximation;minimum phase
train_1616
Pitch post-processing technique based on robust statistics
A novel pitch post-processing technique based on robust statistics is proposed. Performances in terms of pitch error rates and pitch contours show the superiority of the proposed method compared with the median filtering technique. Further improvement is achieved through incorporating an uncertainty term in the robust statistics model
speech communications;pitch contours;median filtering;pitch post-processing technique;speech quality;robust statistics;uncertainty term;pitch error rates
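To illustrate the contrast drawn above, the sketch compares plain median filtering with a simple robust-statistics alternative (a Hampel-type filter based on the local median and MAD) on a synthetic pitch contour corrupted by octave-error-like outliers; the paper's actual robust model is richer than this.

```python
import numpy as np

# Sketch comparing plain median filtering with a simple robust-statistics
# alternative (a Hampel-type filter using the local median and MAD) on a
# synthetic pitch contour corrupted by octave-error-like outliers.  The paper's
# robust model is richer; this only illustrates the basic idea.
rng = np.random.default_rng(3)
n = 200
t = np.arange(n)
pitch = 120 + 20 * np.sin(2 * np.pi * t / 80) + rng.normal(0, 1.5, n)   # "true" F0 track (Hz)
observed = pitch.copy()
observed[rng.choice(n, 12, replace=False)] *= 2.0                       # octave "doubling" errors

def median_filter(x, w=5):
    pad = np.pad(x, w // 2, mode="edge")
    return np.array([np.median(pad[i:i + w]) for i in range(len(x))])

def hampel_filter(x, w=5, k=3.0):
    pad = np.pad(x, w // 2, mode="edge")
    out = x.copy()
    for i in range(len(x)):
        win = pad[i:i + w]
        med = np.median(win)
        mad = 1.4826 * np.median(np.abs(win - med))     # robust scale estimate
        if mad > 0 and abs(x[i] - med) > k * mad:
            out[i] = med                                # replace only detected outliers
    return out

for name, est in (("median", median_filter(observed)), ("hampel", hampel_filter(observed))):
    print(f"{name:6s} filter: mean abs error vs true contour = {np.mean(np.abs(est - pitch)):.3f} Hz")
```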
train_1617
Adaptive array antenna based on radial basis function network as multiuser detection for WCDMA
An adaptive array antenna is proposed based on the radial basis function (RBF) network as a multiuser detector for a WCDMA system. The proposed system calculates the optimal combining weight coefficients using sample matrix inversion with a common correlation matrix algorithm and obtains the channel response vector using the RBF output signal
channel response vector;correlation matrix algorithm;radial basis function network;w-cdma;rbf network;optimal combining weight coefficients;multiuser detection;sample matrix inversion;adaptive array antenna;wideband code division multiple access
train_1618
Optimal learning for patterns classification in RBF networks
A modification of the structure of the radial basis function (RBF) network is reported, in which a weight matrix is introduced at the input layer (in contrast to the direct connection of the input to the hidden layer of a conventional RBF), so that the training space in the RBF network is adaptively separated by the resultant decision boundaries and class regions. The training of this weight matrix is carried out as for a single-layer perceptron together with the clustering process. In this way the network is capable of dealing with complicated problems, which have a high degree of interference in the training data, and achieves a higher classification rate over the current classifiers using RBF
single-layer perceptron;class regions;classification rate improvement;decision boundaries;radial basis function network;optimal learning;weight matrix training;rbf networks;clustering process;input layer;training space;pattern classification
train_1619
Rate allocation for video transmission over lossy correlated networks
A novel rate allocation algorithm for video transmission over lossy networks subject to bursty packet losses is presented. A Gilbert-Elliot model is used at the encoder to drive the selection of coding parameters. Experimental results using the H.26L test model show a significant performance improvement with respect to the assumption of independent packet losses
rate allocation algorithm;video transmission;gilbert-elliot model;lossy correlated networks;bursty packet losses;coding parameters;video coding;h.26l test model
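The Gilbert-Elliot model mentioned above is a two-state Markov channel; the sketch below simulates it to produce bursty loss traces. The transition and per-state loss probabilities are illustrative, not taken from the paper's experiments.

```python
import numpy as np

# A small simulation of the Gilbert-Elliot channel mentioned above: a two-state
# ("good"/"bad") Markov chain with per-state packet-loss probabilities, producing
# the bursty loss traces the rate-allocation algorithm is designed for.  The
# probabilities below are illustrative, not the paper's settings.
p_gb, p_bg = 0.02, 0.25            # P(good -> bad), P(bad -> good)
loss_good, loss_bad = 0.001, 0.3   # loss probability inside each state

rng = np.random.default_rng(42)
n_packets = 100_000
state = 0                          # 0 = good, 1 = bad
losses = np.zeros(n_packets, dtype=bool)
for i in range(n_packets):
    losses[i] = rng.random() < (loss_bad if state else loss_good)
    if state == 0 and rng.random() < p_gb:
        state = 1
    elif state == 1 and rng.random() < p_bg:
        state = 0

# Loss rate and mean burst length expose the correlated (non-independent) losses
x = np.r_[0, losses.astype(int), 0]
starts = np.flatnonzero(np.diff(x) == 1)
ends = np.flatnonzero(np.diff(x) == -1)
print(f"loss rate = {losses.mean():.4f}, mean loss-burst length = {(ends - starts).mean():.2f} packets")
```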
train_162
International news sites in English
Web access to news sites all over the world allows us the opportunity to have an electronic news stand readily available and stocked with a variety of foreign (to us) news sites. A large number of currently available foreign sites are English-language publications or English language versions of non-North American sites. These sites are quite varied in terms of quality, coverage, and style. Finding them can present a challenge. Using them effectively requires critical-thinking skills that are a part of media awareness or digital literacy
english-language publications;web access;digital literacy;non north american sites;critical-thinking skills;media awareness;international news sites
train_1620
Rapid Cauer filter design employing new filter model
The exact three-dimensional (3D) design of a coaxial Cauer filter employing a new filter model, a 3D field simulator and a circuit simulator, is demonstrated. Only a few iterations between the field simulator and the circuit simulator are necessary to meet a given specification
filter model;3d design;field simulator;circuit simulator;bandpass filters;cauer filter;filter design;coaxial filter;iterations
train_1621
Current-mode fully-programmable piece-wise-linear block for neuro-fuzzy
applications A new method to implement an arbitrary piece-wise-linear characteristic in current mode is presented. Each of the breaking points and each slope is separately controllable. As an example a block that implements an N-shaped piece-wise-linearity has been designed. The N-shaped block operates in the subthreshold region and uses only ten transistors. These characteristics make it especially suitable for large arrays of neuro-fuzzy systems where the number of transistors and power consumption per cell is an important concern. A prototype of this block has been fabricated in a 0.35 mu m CMOS technology. The functionality and programmability of this circuit has been verified through experimental results
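A behavioural sketch of an N-shaped piece-wise-linear characteristic with individually controllable breakpoints and segment slopes; this models only the transfer curve, not the ten-transistor subthreshold circuit itself:

```python
import numpy as np

def pwl(x, breakpoints, slopes, y_start=0.0):
    """Evaluate a piece-wise-linear characteristic y(x).

    breakpoints: increasing x-values where the slope changes;
    slopes: one slope per segment (len(breakpoints) + 1 of them).
    Signs such as (+, -, +) reproduce an N-shaped curve.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    slopes = np.asarray(slopes, dtype=float)
    knots = np.concatenate(([x.min()], np.asarray(breakpoints, dtype=float)))
    increments = np.diff(slopes, prepend=0.0)   # slope change introduced at each knot
    return y_start + sum(dm * np.maximum(x - xk, 0.0) for dm, xk in zip(increments, knots))

# Example: pwl(x, breakpoints=[1.0, 2.0], slopes=[1.0, -1.5, 1.0]) rises, falls, then rises again
```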
0.35 micron;neuro-fuzzy systems;power consumption;subthreshold region;vlsi;n-shaped piece-wise-linearity;cmos;current mode;breaking points;arbitrary piece-wise-linear characteristic;separately controllable
train_1622
Error resilient intra refresh scheme for H.26L stream
Recently much attention has been focused on video streaming through IP-based networks. An error resilient RD intra macro-block refresh scheme for H.26L Internet video streaming is introduced. Various channel simulations have proved that this scheme is more effective than those currently adopted in H.26L
channel simulations;error resilient scheme;rderr scheme;internet;h.26l video streaming;intra macro-block refresh scheme;ip-based networks;rd intra refresh scheme;rdall scheme;video communication
train_1623
Transmission of real-time video over IP differentiated services
Multimedia applications require high bandwidth and guaranteed quality of service (QoS). The current Internet, which provides 'best effort' services, cannot meet the stringent QoS requirements for delivering MPEG videos. It is proposed that MPEG frames are transported through various service models of DiffServ. Performance analysis and simulation results show that the proposed approach can not only guarantee QoS but can also achieve high bandwidth utilisation
diffserv;internet;ip differentiated services;real-time video transmission;quality of service;high bandwidth utilisation;multimedia applications;mpeg video;qos guarantees
train_1624
Genetic algorithm for input/output selection in MIMO systems based on
controllability and observability indices A time domain optimisation algorithm using a genetic algorithm in conjunction with a linear search scheme has been developed to find the smallest or near-smallest subset of inputs and outputs to control a multi-input-multi-output system. Experimental results have shown that this proposed algorithm has a very fast convergence rate and high computation efficiency
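One plausible way to score a candidate input/output subset, sketched below with plain controllability/observability rank checks standing in for the indices used in the work above; the genetic algorithm and linear search wrapper are not shown:

```python
import numpy as np

def subset_fitness(A, B, C, in_mask, out_mask):
    """Fitness of a candidate I/O subset: prefer fewer channels, but only if the
    reduced system remains controllable and observable (simplified stand-in check).

    in_mask / out_mask: boolean vectors selecting columns of B and rows of C.
    """
    n = A.shape[0]
    Bs, Cs = B[:, in_mask], C[out_mask, :]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ Bs for k in range(n)])
    obsv = np.vstack([Cs @ np.linalg.matrix_power(A, k) for k in range(n)])
    feasible = (np.linalg.matrix_rank(ctrb) == n) and (np.linalg.matrix_rank(obsv) == n)
    return (in_mask.sum() + out_mask.sum()) if feasible else np.inf   # smaller is better
```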
very fast convergence;observability indices;mimo systems;input/output selection;multi-input-multi-output system;near-smallest subset;linear search scheme;genetic algorithm;smallest subset;time domain optimisation algorithm;controllability indices;multivariable control systems;high computation efficiency
train_1625
Use of fuzzy weighted autocorrelation function for pitch extraction from noisy
speech An investigation is presented into the feasibility of incorporating a fuzzy weighting scheme into the calculation of an autocorrelation function for pitch extraction. Simulation results reveal that the proposed method provides better robustness against background noise than the conventional approaches for extracting pitch period in a noisy environment
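A baseline autocorrelation pitch estimator with an optional per-lag weighting hook; the fuzzy weighting scheme itself (the contribution of the work above) is not reproduced, so `weight` is only a placeholder:

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=60.0, fmax=400.0, weight=None):
    """Estimate pitch (Hz) from an (optionally weighted) autocorrelation function.

    weight: optional array of per-lag weights (e.g. fuzzy reliability of each lag);
    with weight=None this reduces to the plain autocorrelation method.
    """
    frame = np.asarray(frame, dtype=float) - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # non-negative lags
    if weight is not None:
        ac = ac * np.asarray(weight, dtype=float)                   # emphasise reliable lags
    lo, hi = int(fs / fmax), int(fs / fmin)                         # plausible pitch-period range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```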
cepstrum method;fuzzy weighting scheme;average magnitude difference function;speech analysis-synthesis system;autocorrelation function;background noise;pitch extraction;noisy speech;simulation results
train_1626
Modifier formula on mean square convergence of LMS algorithm
In describing the mean square convergence of the LMS algorithm, the update formula based on the independence assumption introduces noticeable errors, especially when the step size is large. A modifier formula that describes the convergence well is proposed. Simulations support the proposed formula under different conditions
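For reference, the standard LMS recursion and the classical conditions obtained under the independence assumption are recalled below; the modifier formula proposed above, which corrects the behaviour at large step sizes, is not reproduced:

```latex
% Standard LMS update; R = E[x(n) x^T(n)] is the input covariance matrix.
e(n) = d(n) - \mathbf{w}^{T}(n)\,\mathbf{x}(n), \qquad
\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n)
% Under the independence assumption, convergence of the mean weight vector
% requires 0 < \mu < 2/\lambda_{\max}(R); mean-square convergence is commonly
% quoted to need a tighter bound of roughly 0 < \mu < 2/(3\,\operatorname{tr} R).
```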
lms filter;modifier formula;adaptive filtering;independence assumption;mean square convergence;lms algorithm;update formula
train_1627
Blind identification of non-stationary MA systems
A new adaptive algorithm for blind identification of time-varying MA channels is derived. The algorithm uses a novel system of equations obtained by combining the third- and fourth-order statistics of the output signals of MA models. This overdetermined system of equations has the important property that, because of its symmetries, it can be solved adaptively via an overdetermined recursive instrumental variable-type algorithm. The algorithm shows good behaviour in arbitrary noisy environments and good performance in tracking time-varying systems
fourth-order statistics;higher-order statistics;additive gaussian noise;overdetermined recursive algorithm;arbitrary noisy environments;time-varying channels;ma models;tracking;iterative algorithms;third-order statistics;blind identification;nonstationary systems;adaptive algorithm;recursive instrumental variable algorithm
train_1628
Quasi-Newton algorithm for adaptive minor component extraction
An adaptive quasi-Newton algorithm is first developed to extract a single minor component corresponding to the smallest eigenvalue of a stationary sample covariance matrix. A deflation technique instead of the commonly used inflation method is then applied to extract the higher-order minor components. The algorithm enjoys the advantage of having a simpler computational complexity and a highly modular and parallel structure for efficient implementation. Simulation results are given to demonstrate the effectiveness of the proposed algorithm for extracting multiple minor components adaptively
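A compact sketch of the deflation step: each extracted minor component is shifted up the spectrum so that the next-smallest one becomes the new minimum. An exact eigen-solver stands in here for the adaptive quasi-Newton update developed above:

```python
import numpy as np

def minor_components(R, k, shift=None):
    """Extract k minor components of covariance R one at a time using deflation.

    np.linalg.eigh is used purely as a stand-in for the adaptive extraction of a
    single minor component; deflation then lifts that direction out of the way.
    """
    R = np.array(R, dtype=float)
    shift = float(np.trace(R)) if shift is None else shift   # large enough upward shift
    comps = []
    for _ in range(k):
        _, vecs = np.linalg.eigh(R)
        v = vecs[:, 0]                        # eigenvector of the current smallest eigenvalue
        comps.append(v)
        R = R + shift * np.outer(v, v)        # deflation: push this direction up the spectrum
    return np.column_stack(comps)
```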
stationary sample covariance matrix;adaptive estimation;quasi-newton algorithm;simulation results;modular structure;doa estimation;higher-order minor components;parallel structure;root-music estimator;deflation technique;computational complexity;adaptive minor component extraction;eigenvalue
train_1629
Robot trajectory control using neural networks
The use of a new type of neural network (NN) for controlling the trajectory of a robot is discussed. A control system is described which comprises an NN-based controller and a fixed-gain feedback controller. The NN-based controller employs a modified recurrent NN, the weights of which are obtained by training another NN to identify online the inverse dynamics of the robot. The work has confirmed the superiority of the proposed NN-based control system in rejecting large disturbances
robot manipulators;fourth-order runge-kutta algorithm;robot trajectory control;modified recurrent neural network;fixed-gain feedback controller;time-varying nonlinear multivariable plant;neural network training;large disturbance rejection;control system;neural networks;neural network-based controller;robot inverse dynamics
train_163
Boolean operators and the naive end-user: moving to AND
Since so few end-users make use of Boolean searching, it is obvious that any effective solution needs to take this reality into account. The most important aspect of a technical solution should be that it does not require any effort on the part of users. What is clearly needed is for search engine designers and programmers to take account of the information-seeking behavior of Internet users. Users must be able to enter a series of words at random and have those words automatically treated as a carefully constructed Boolean AND search statement
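On the engine side, the suggested behaviour amounts to a trivial query rewrite, sketched here purely for illustration:

```python
def to_and_query(user_input: str) -> str:
    """Treat a naive list of words as an explicit Boolean AND statement."""
    terms = [t for t in user_input.split() if t]
    return " AND ".join(terms)

# to_and_query("solar panel efficiency")  ->  "solar AND panel AND efficiency"
```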
internet;information-seeking behavior;boolean searching;search engine design;boolean operators;and operator
train_1630
Digital-domain self-calibration technique for video-rate pipeline A/D
converters using Gaussian white noise A digital-domain self-calibration technique for video-rate pipeline A/D converters based on a Gaussian white noise input signal is presented. The proposed algorithm is simple and efficient. A design example is shown to illustrate that the overall linearity of a pipeline ADC can be highly improved using this technique
gaussian white noise input signal;digital-domain self-calibration technique;video-rate pipeline a/d converters;pipeline adc linearity
train_1631
Recovering lost efficiency of exponentiation algorithms on smart cards
At the RSA cryptosystem implementation stage, a major security concern is resistance against so-called side-channel attacks. Solutions are known but they increase the overall complexity by a non-negligible factor (typically, a protected RSA exponentiation is 133% slower). For the first time, protected solutions are proposed that do not penalise the running time of an exponentiation
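For context, the sketch below shows a textbook 'square-and-multiply-always' exponentiation: it regularises the operation sequence against simple side-channel analysis but pays with dummy multiplications, which is exactly the kind of overhead the proposed solutions avoid (their actual construction is not reproduced here):

```python
def square_and_multiply_always(base, exp, mod):
    """Left-to-right binary exponentiation with a multiplication on every bit.

    The sequence of operations no longer depends on the exponent bits, at the
    cost of extra (dummy) multiplications; shown only as the classic baseline.
    """
    r0, r1 = 1, 1
    for bit in bin(exp)[2:]:
        r0 = (r0 * r0) % mod          # always square
        tmp = (r0 * base) % mod       # always multiply
        if bit == "1":
            r0 = tmp                  # multiplication result kept
        else:
            r1 = tmp                  # dummy multiplication, result discarded
    return r0

# square_and_multiply_always(3, 5, 7) == pow(3, 5, 7) == 5
```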
rsa cryptosystem implementation stage;exponentiation algorithms;smart cards;public-key encryption;security;side-channel attack resistance
train_1632
One structure for fractional delay filter with small number of multipliers
A wide-bandwidth, high-resolution fractional delay filter (FDF) structure with a small number of multipliers per output sample and a short coefficient computing time is presented. The proposal is based on the use of a frequency FDF design method up to only half of the Nyquist frequency, in a multirate structure
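For reference, a classical Lagrange-interpolation FIR fractional delay design is sketched below; it only illustrates what an FDF computes and is not the multirate, half-Nyquist frequency-domain design proposed above:

```python
import numpy as np

def lagrange_fd_fir(delay, order):
    """FIR fractional-delay filter taps via Lagrange interpolation.

    delay: total delay in samples (integer part plus fractional part);
    order: filter order N, giving N + 1 taps.
    h[k] = prod_{m != k} (delay - m) / (k - m)
    """
    n = np.arange(order + 1)
    h = np.empty(order + 1)
    for k in range(order + 1):
        m = n[n != k]
        h[k] = np.prod((delay - m) / (k - m))
    return h

# lagrange_fd_fir(delay=3.3, order=7) -> 8 taps approximating a 3.3-sample delay
```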
high-resolution filter;fractional delay filter;frequency domain design method;wide-bandwidth filter;finite impulse response filter;fir filter;short coefficient computing time;multirate structure;multipliers
train_1633
48 Gbit/s InP DHBT MS-DFF with very low time jitter
A master-slave D-type flip-flop (MS DFF) fabricated in a self-aligned InP DHBT technology is presented. The packaged circuit shows full-rate clock operation at 48 Gbit/s. Very low time jitter and good retiming capabilities are observed. Layout aspects, packaging and measurement issues are discussed in particular
layout aspects;master-slave d-type flip-flop;inp;48 gbit/s;self-aligned dhbt technology;inp dhbt technology;dhbt ms-dff;low time jitter;retiming capabilities;packaged circuit
train_1634
Maple 8 keeps everyone happy
The author is impressed with the upgrade to the mathematics package Maple 8, finding it genuinely useful to scientists and educators. The developments that Waterloo Maple classes as revolutionary include a student calculus package and Maplets. The first provides a high-level command set for calculus exploration and plotting (removing the need to work with, say, plot primitives). The second is a package for hand-coding custom graphical user interfaces (GUIs) using elements such as check boxes, radio buttons, slider bars and pull-down menus. When called, a Maplet launches a runtime Java environment that pops up a window, analogous to a Java applet, to perform a programmed routine, if required passing the result back to the Maple worksheet
maplet;runtime java environment;calculus plotting;student calculus package;guis;high-level command set;maple 8 mathematics package;calculus exploration
train_1635
Simple...But complex
FlexPro 5.0, from Weisang and Co., is one of those products which aim to serve an often ignored range of data users: those who, in FlexPro's words, are interested in documenting, analysing and archiving data in the simplest way possible. The online help system is clearly designed to promote the product in this market segment, with a very clear introduction from first principles and a hands-on tutorial. The live project to which it was applied was selected with this in mind
data archiving;flexpro 5.0;hands-on tutorial;data analysis;data documentation;online help system
train_1636
SPARC ignites scholarly publishing
During the past several years, initiatives which bring together librarians, researchers, university administrators and independent publishers have re-invigorated the scholarly publishing marketplace. These initiatives take advantage of electronic technology and show great potential for restoring science to scientists. The author outlines SPARC (the Scholarly Publishing and Academic Resources Coalition), an initiative to make scientific journals more accessible
scholarly publishing and academic resources coalition;electronic publishing;initiative;sparc;scientific journal access
train_1637
What's best practice for open access?
The business of publishing journals is in transition. Nobody knows exactly how it will work in the future, but everybody knows that the electronic publishing revolution will ensure it won't work as it does now. This knowledge has provoked a growing sense of nervous anticipation among those concerned, some edgy and threatened by potential changes to their business, others excited by the prospect of change and opportunity. The paper discusses the open publishing model for dissemination of research
electronic publishing;business;journal publishing;open publishing model;research dissemination;open access
train_1638
The chemical brotherhood
It has always been more difficult for chemistry to keep up in the Internet age but a new language could herald a new era for the discipline. The paper discusses CML, or chemical mark-up language. The eXtensible Mark-up Language provides a universal format for structured documents and data on the Web and so offers a way for scientists and others to carry a wide range of information types across the net in a transparent way. All that is needed is an XML browser
internet;structured document format;chemistry;cml;world wide web;xml browser;chemical mark-up language;extensible mark-up language
train_1639
New hub gears up for algorithmic exchange
Warwick University in the UK is on the up and up. Although sometimes considered a typical 1960s, middle-of-the-road redbrick institution (a category not known for its distinction), the 2001 UK Research Assessment Exercise (RAE) shows its research to be the fifth most highly rated in the country, with outstanding standards in the sciences. This impressive performance has rightly given Warwick a certain amount of muscle, which it is flexing rather effectively, aided by a snappy approach to making things happen that leaves some older institutions standing. The result is a brand new Centre for Scientific Computing (CSC), launched within a couple of years of its initial conception
warwick university centre for scientific computing
train_164
Plug-ins for critical media literacy: a collaborative program
Information literacy is important in academic and other libraries. The paper looks at whether it would be more useful to librarians and to instructors, as well as the students, to deal with information-literacy skill levels of students beginning their academic careers, rather than checking them at the end. Approaching the situation with an eye toward the broader scope of critical media literacy opens the discussion beyond a skills inventory to the broader range of intellectual activity
critical media literacy;collaborative program;information literacy;instructors;academic libraries
train_1640
Integration is LIMS inspiration
For software manufacturers, blessings come in the form of fast-moving application areas. In the case of LIMS, biotechnology is still in the driving seat, inspiring developers to maintain consistently rapid and creative levels of innovation. Current advancements are no exception. Integration and linking initiatives are still popular and much of the activity appears to be coming from a very productive minority
software manufacturers;biotechnology;lims
train_1641
Development through gaming
Mainstream observers commonly underestimate the role of fringe activities in propelling science and technology. Well-known examples are how wars have fostered innovation in areas such as communications, cryptography, medicine and aerospace; and how erotica has been a major factor in pioneering visual media, from the first printed books to photography, cinematography, videotape, or the latest online video streaming. The article aims to be a sampler of a less controversial, but still often underrated, symbiosis between scientific computing and computing for leisure and entertainment
computer games;entertainment;graphics;leisure;scientific computing
train_1642
Development and validation of user-adaptive navigation and information
retrieval tools for an intranet portal organizational memory information system Based on previous research and properties of organizational memory, a conceptual model for navigation and retrieval functions in an intranet portal organizational memory information system was proposed, and two human-centred features (memory structure map and history-based tool) were developed to support users' navigation and retrieval in a well-known organizational memory. To test two hypotheses concerning the validity of the conceptual model and two human-centred features, an experiment was conducted with 30 subjects. Testing of the two hypotheses indicated the following: (1) the memory structure map's users showed 29% better performance in navigation, and (2) the history-based tool's users performed 34% better in identifying information. The results of the study suggest that the conceptual model and the two human-centred features could be used in a user-adaptive interface design to improve users' performance in an intranet portal organizational memory information system
experiment;user-adaptive interface design;organizational memory information system;conceptual model;information retrieval tools;history-based tool;user-adaptive navigation;user performance;memory structure map;intranet portal;human factors
train_1643
Effectiveness of user testing and heuristic evaluation as a function of
performance classification For different levels of user performance, different types of information are processed and users will make different types of errors. Based on the error's immediate cause and the information being processed, usability problems can be classified into three categories. They are usability problems associated with skill-based, rule-based and knowledge-based levels of performance. In this paper, a user interface for a Web-based software program was evaluated with two usability evaluation methods, user testing and heuristic evaluation. The experiment discovered that the heuristic evaluation with human factor experts is more effective than user testing in identifying usability problems associated with skill-based and rule-based levels of performance. User testing is more effective than heuristic evaluation in finding usability problems associated with the knowledge-based level of performance. The practical application of this research is also discussed in the paper
user performance;performance classification;rule-based performance levels;usability;heuristic evaluation;knowledge-based performance levels;web-based software;user testing;user interface;skill-based performance levels;experiment;human factors
train_1644
An experimental evaluation of comprehensibility aspects of knowledge structures
derived through induction techniques: a case study of industrial fault diagnosis Machine induction has been extensively used in order to develop knowledge bases for decision support systems and predictive systems. The extent to which developers and domain experts can comprehend these knowledge structures and gain useful insights into the basis of decision making has become a challenging research issue. This article examines the knowledge structures generated by the C4.5 induction technique in a fault diagnostic task and proposes to use a model of human learning in order to guide the process of making comprehensive the results of machine induction. The model of learning is used to generate hierarchical representations of diagnostic knowledge by adjusting the level of abstraction and varying the goal structures between 'shallow' and 'deep' ones. Comprehensibility is assessed in a global way in an experimental comparison where subjects are required to acquire the knowledge structures and transfer to new tasks. This method of addressing the issue of comprehensibility appears promising especially for machine induction techniques that are rather inflexible with regard to the number and sorts of interventions allowed to system developers
industrial plants;industrial fault diagnosis;case study;predictive systems;induction techniques;diagnostic knowledge representations;human learning model;decision support systems;knowledge structure comprehensibility aspects;experimental evaluation;knowledge bases;c4.5 induction technique
train_1645
Effects of the transition to a client-centred team organization in
administrative surveying work A new work organization was introduced in administrative surveying work in Sweden during 1998. The new work organization implied a transition to a client-centred team-based organization and required a change in competence from specialist to generalist knowledge as well as a transition to a new information technology, implying a greater integration within the company. The aim of this study was to follow the surveyors for two years from the start of the transition and investigate how perceived consequences of the transition, job, organizational factors, well-being and effectiveness measures changed between 1998 and 2000. The Teamwork Profile and QPS Nordic questionnaire were used. The 205 surveyors who participated in all three study phases constituted the study group. The result showed that surveyors who perceived that they were working as generalists rated the improvements in job and organizational factors significantly higher than those who perceived that they were not yet generalists. Improvements were noted in 2000 in quality of service to clients, time available to handle a case and effectiveness of teamwork. In the transfer to a team-based work organization, group cohesion and continuous improvement practices (for example, learning by doing, mentoring and guided delegation) were important to improve the social effectiveness of group work
information technology;social effectiveness;organizational factors;qps nordic questionnaire;public administrative sector;company;client-centred team organization;job;administrative surveying work;teamwork profile;effectiveness measures
train_1646
The limits of shape constancy: point-to-point mapping of perspective
projections of flat figures The present experiments investigate point-to-point mapping of perspective transformations of 2D outline figures under diverse viewing conditions: binocular free viewing, monocular perspective with 2D cues masked by an optic tunnel, and stereoptic viewing through an optic tunnel. The first experiment involved upright figures, and served to determine baseline point-to-point mapping accuracy, which was found to be very good. Three shapes were used: square, circle and irregularly round. The main experiment, with slanted figures, involved only two shapes (square and irregularly shaped), shown at several degrees of slant. Despite the accumulated evidence for shape constancy when the outline of perspective projections is considered, metric perception of the inner structure of such projections was quite limited. Systematic distortions were found, especially with more extreme slants, and attributed to the joint effect of several factors: anchors, 3D information, and slant underestimation. Contradictory flatness cues did not detract from performance, while stereoptic information improved it
binocular free viewing;flat figure perspective projections;point-to-point mapping;optic tunnel;2d cues;diverse viewing conditions;stereoptic viewing;anchors;3d shape perception;3d information;shape constancy;slant underestimation;monocular perspective;2d outline figures;3d information displays;experiments;human factors
train_1647
Examining children's reading performance and preference for different
computer-displayed text This study investigated how common online text affects reading performance of elementary school-age children by examining the actual and perceived readability of four computer-displayed typefaces at 12- and 14-point sizes. Twenty-seven children, ages 9 to 11, were asked to read eight children's passages and identify erroneous/substituted words while reading. Comic Sans MS, Arial and Times New Roman typefaces, regardless of size, were found to be more readable (as measured by a reading efficiency score) than Courier New. No differences in reading speed were found for any of the typeface combinations. In general, the 14-point size and the examined sans serif typefaces were perceived as being the easiest to read, fastest, most attractive, and most desirable for school-related material. In addition, participants significantly preferred Comic Sans MS and 14-point Arial to 12-point Courier. Recommendations for appropriate typeface combinations for children reading on computers are discussed
educational computing;fonts;elementary school-age children;computer-displayed text;user interface;child reading performance;online text;computer-displayed typefaces;human factors
train_1648
Spatial solutions [office furniture]
Take the stress out of the office by considering the design of furniture and staff needs, before major buying decisions
buying decisions;staff needs;office furniture
train_1649
Office essentials [stationery suppliers]
Make purchasing stationery a relatively simple task through effective planning and management of stock, and identifying the right supplier
purchasing;management of stock;stationery suppliers;planning
train_165
Monitoring the news online
The author looks at how we can focus on what we want, finding small stories in vast oceans of news. There is no one tool that will scan every news resource available and give alerts on newly available material. Each one has a slightly different focus. Some are paid sources, while many are free. If used wisely, an excellent news monitoring system for a large number of topics can be set up for surprisingly little cost
internet;online news;news monitoring
train_1650
Low to mid-speed copiers [buyer's guide]
The low to mid-speed copier market is being transformed by the almost universal adoption of digital solutions. The days of the analogue copier are numbered as the remaining vendors plan to withdraw from this sector by 2005. Reflecting the growing market for digital, vendors are reducing prices, making a digital solution much more affordable. The battle for the copier market is intense, and the popularity of the multifunctional device is going to transform the office equipment market. As total cost of ownership becomes increasingly important and as budgets are squeezed, the most cost-effective solutions are those that will survive this shake-down
low to mid-speed copier market;total cost of ownership
train_1651
H-matrix approximation for the operator exponential with applications
We previously developed a data-sparse and accurate approximation to parabolic solution operators in the case of a rather general elliptic part given by a strongly P-positive operator. A class of matrices (H-matrices) has also been analysed which are data-sparse and allow an approximate matrix arithmetic with almost linear complexity. In particular, the matrix-vector/matrix-matrix product with such matrices, as well as the computation of the inverse, has linear-logarithmic cost. In this paper, we apply the H-matrix techniques to approximate the exponent of an elliptic operator. Starting with the Dunford-Cauchy representation for the operator exponent, we then discretise the integral by an exponentially convergent quadrature rule involving a short sum of resolvents. The latter are approximated by the H-matrices. Our algorithm inherits a two-level parallelism with respect to both the computation of resolvents and the treatment of different time values. In the case of smooth data (coefficients, boundaries), we prove the linear-logarithmic complexity of the method
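Written schematically (contour orientation and sign conventions vary), the representation and its quadrature take the form below; each resolvent is then replaced by its H-matrix approximation, and the resolvents and the different time values can be treated in parallel:

```latex
% Dunford-Cauchy representation of the operator exponent and its discretisation
% by an exponentially convergent quadrature rule with nodes z_k and weights c_k;
% Gamma is a contour enclosing the spectrum of the elliptic operator L.
\exp(-t\mathcal{L})
  = \frac{1}{2\pi i}\oint_{\Gamma} e^{-tz}\,\bigl(zI - \mathcal{L}\bigr)^{-1}\,dz
  \;\approx\; \sum_{k=-N}^{N} c_k\, e^{-t z_k}\,\bigl(z_k I - \mathcal{L}\bigr)^{-1}
```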
operator exponential;dunford-cauchy representation;exponentially convergent quadrature rule;parabolic solution operators;strongly p-positive operator;data-sparse approximation;h-matrix approximation;almost linear complexity