name | title | abstract | keywords |
---|---|---|---|
train_1461 | Adaptive multiresolution approach for solution of hyperbolic PDEs | This paper establishes an innovative and efficient multiresolution adaptive approach combined with high-resolution methods, for the numerical solution of a single or a system of partial differential equations. The proposed methodology is unconditionally bounded (even for hyperbolic equations) and dynamically adapts the grid so that higher spatial resolution is automatically allocated to domain regions where strong gradients are observed, thus possessing the two desired properties of a numerical approach: stability and accuracy. Numerical results for five test problems are presented which clearly show the robustness and cost effectiveness of the proposed method | unconditionally bounded methodology;cost effectiveness;robustness;numerical solution;dynamic grid adaptation;strong gradients;accuracy;high-resolution methods;spatial resolution;stability;hyperbolic partial differential equations;multiresolution adaptive approach |
train_1462 | Non-linear analysis of nearly saturated porous media: theoretical and numerical formulation | A formulation for a porous medium saturated with a compressible fluid undergoing large elastic and plastic deformations is presented. A consistent thermodynamic formulation is proposed for the two-phase mixture problem; thus preserving a straightforward and robust numerical scheme. A novel feature is the specification of the fluid compressibility in terms of a volumetric logarithmic strain, which is energy conjugated to the fluid pressure in the entropy inequality. As a result, the entropy inequality is used to separate three different mechanisms representing the response: effective stress response according to Terzaghi in the solid skeleton, fluid pressure response to compressibility of the fluid, and dissipative Darcy flow representing the interaction between the two phases. The paper is concluded with a couple of numerical examples that display the predictive capabilities of the proposed formulation. In particular, we consider results for the kinematically linear theory as compared to the kinematically non-linear theory | dissipative darcy flow;fluid pressure response;robust numerical scheme;kinematically linear theory;nonlinear analysis;large plastic deformations;consistent thermodynamic formulation;compressible fluid;large elastic deformations;fluid compressibility;fluid pressure;solid skeleton;volumetric logarithmic strain;predictive capabilities;two-phase mixture problem;nearly saturated porous media;entropy inequality;kinematically nonlinear theory;effective stress response |
train_1463 | Computational complexity of probabilistic disambiguation | Recent models of natural language processing employ statistical reasoning for dealing with the ambiguity of formal grammars. In this approach, statistics concerning the various linguistic phenomena of interest are gathered from actual linguistic data and used to estimate the probabilities of the various entities that are generated by a given grammar, e.g., derivations, parse-trees and sentences. The extension of grammars with probabilities makes it possible to state ambiguity resolution as a constrained optimization formula, which aims at maximizing the probability of some entity that the grammar generates given the input (e.g., the maximum-probability parse-tree given some input sentence). The implementation of these optimization formulae in efficient algorithms, however, does not always proceed smoothly. In this paper, we address the computational complexity of ambiguity resolution under various kinds of probabilistic models. We provide proofs that some frequently occurring problems of ambiguity resolution are NP-complete. These problems are encountered in various applications, e.g., language understanding for text- and speech-based applications. Assuming the common model of computation, this result implies that, for many existing probabilistic models, it is not possible to devise tractable algorithms for solving these optimization problems | formal grammars;np-completeness results;probabilistic disambiguation;speech processing;statistics;state ambiguity resolution;constrained optimization formula;natural language processing;computational complexity;parsing problems;statistical reasoning;language understanding;probabilistic models |
train_1464 | LR parsing for conjunctive grammars | The generalized LR parsing algorithm for context-free grammars, introduced by Tomita in 1986, is a polynomial-time implementation of nondeterministic LR parsing that uses a graph-structured stack to represent the contents of the nondeterministic parser's pushdown for all possible branches of computation at a single computation step. It has been specifically developed as a solution for practical parsing tasks arising in computational linguistics, and indeed has proved itself to be very suitable for natural language processing. Conjunctive grammars extend context-free grammars by allowing the use of an explicit intersection operation within grammar rules. This paper develops a new LR-style parsing algorithm for these grammars, which is based on the very same idea of a graph-structured pushdown, where the simultaneous existence of several paths in the graph is used to perform the mentioned intersection operation. The underlying finite automata are treated in the most general way: instead of showing the algorithm's correctness for some particular way of constructing automata, the paper defines a wide class of automata usable with a given grammar, which includes not only the traditional LR(k) automata, but also, for instance, a trivial automaton with a single reachable state. A modification of the SLR(k) table construction method that makes use of specific properties of conjunctive grammars is provided as one possible way of constructing finite automata for use with the algorithm | graph-structured stack;computation;trivial automaton;generalized lr parsing algorithm;single reachable state;computational linguistics;boolean closure;deterministic context-free languages;context-free grammars;explicit intersection operation;natural language processing;grammar rules;nondeterministic parser pushdown;conjunctive grammars;finite automata |
train_1465 | P systems with symport/antiport rules: the traces of objects | We continue the study of those P systems where the computation is performed by the communication of objects, that is, systems with symport and antiport rules. Instead of the (number of) objects collected in a specified membrane, as the result of a computation we consider the itineraries of a certain object through membranes, during a halting computation, written as a coding of the string of labels of the visited membranes. The family of languages generated in this way is investigated with respect to its place in the Chomsky hierarchy. When the (symport and antiport) rules are applied in a conditional manner, promoted or inhibited by certain objects which should be present in the membrane where a rule is applied, then a characterization of recursively enumerable languages is obtained; the power of systems with the rules applied freely is only partially described | symport rules;object traces;halting computation;recursively enumerable languages;antiport rules;chomsky hierarchy;label string coding;object communication;languages;p systems;itineraries |
train_1466 | Feldkamp-type image reconstruction from equiangular data | The cone-beam approach for image reconstruction attracts increasing attention in various applications, especially medical imaging. Previously, the traditional practical cone-beam reconstruction method, the Feldkamp algorithm, was generalized into the case of spiral/helical scanning loci with equispatial cone-beam projection data. In this paper, we formulated the generalized Feldkamp algorithm in the case of equiangular cone-beam projection data, and performed numerical simulation to evaluate the image quality. Because medical multi-slice/cone-beam CT scanners typically use equiangular projection data, our new formula may be useful in this area as a framework for further refinement and a benchmark for comparison | equiangular data;equispatial cone-beam projection data;feldkamp-type image reconstruction;equiangular cone-beam projection data;medical multi-slice/cone-beam ct scanners;generalized feldkamp algorithm;practical cone-beam reconstruction method;medical imaging;image quality;cone-beam approach;spiral/helical scanning loci;numerical simulation |
train_1467 | Utilizing Web-based case studies for cutting-edge information services issues | This article reports on a pilot study conducted by the Academic Libraries of the 21st Century project team to determine whether the benefits of the case study method as a training framework for change initiatives could successfully transfer from the traditional face-to-face format to a virtual format. Methods of developing the training framework, as well as the benefits, challenges, and recommendations for future strategies gained from participant feedback are outlined. The results of a survey administered to chat session registrants are presented in three sections: (1) evaluation of the training framework; (2) evaluation of participants' experiences in the virtual environment; and (3) a comparison of participants' preference of format. The overall participant feedback regarding the utilization of the case study method in a virtual environment for professional development and collaborative problem solving is very positive | web-based case studies;professional development;internet;academic libraries;training;collaborative problem solving;survey;cutting-edge information services;virtual environment;change initiatives |
train_1468 | Developing Web-enhanced learning for information fluency-a liberal arts college's perspective | Learning is likely to take a new form in the twenty-first century, and a transformation is already in process. Under the framework of information fluency, efforts are being made at Rollins College to develop a Web-enhanced course that encompasses information literacy, basic computer literacy, and critical thinking skills. Computer-based education can be successful when librarians use technology effectively to enhance their integrated library teaching. In an online learning environment, students choose a time for learning that best suits their needs and motivational levels. They can learn at their own pace, take a nonlinear approach to the subject, and maintain constant communication with instructors and other students. The quality of a technology-facilitated course can be upheld if the educational objectives and methods for achieving those objectives are carefully planned and explored | integrated library teaching;computer-based education;online learning;critical thinking skills;librarians;liberal arts college;information fluency;information literacy;web-enhanced learning;computer literacy |
train_1469 | The necessity of real-time: fact and fiction in digital reference systems | Current discussions and trends in digital reference have emphasized the use of real-time digital reference services. Recent articles have questioned both the utility and use of asynchronous services such as e-mail. This article uses data from the AskERIC digital reference service to demonstrate that asynchronous services are not only useful and used, but may have greater utility than real-time systems | e-mail;asynchronous services;digital library;real-time digital reference services;askeric;personalized internet-based service |
train_147 | Embedded Linux and the law | The rising popularity of Linux, combined with perceived cost savings, has spurred many embedded developers to consider a real-time Linux variant as an alternative to a traditional RTOS. The paper presents the legal implications for the proprietary parts of firmware | embedded linux;real-time linux;proprietary firmware;legal implications |
train_1470 | When reference works are not books-the new edition of the Guide to Reference Books | The author considers the history of the Guide to Reference Books (GRB) and its importance in librarianship. He discusses the ways in which the new edition is taking advantage of changing times. GRB has become a cornerstone of the literature of U.S. librarianship. The biggest change GRB will undergo to become GRS (Guide to Reference Sources) will be designing it primarily as a Web product | guide to reference books;internet;guide to reference sources;reference works;web product;grs;librarianship;history;grb |
train_1471 | E-commerce-resources for doing business on the Internet | There are many different types of e-commerce, depending upon who or what is selling and who or what is buying. In addition, e-commerce is more than an exchange of funds and goods or services; it encompasses an entire infrastructure of services, computer hardware and software products, technologies, and communications formats. The paper discusses e-commerce terminology, types and information resources, including books and Web sites | internet;information resources;terminology;business;books;web sites;e-commerce |
train_1472 | Surface textures improve the robustness of stereoscopic depth cues | This research develops design recommendations for surface textures (patterns of color on object surfaces) rendered with stereoscopic displays. In 3 method-of-adjustment procedure experiments, 8 participants matched the disparity of a circular probe and a planar stimulus rendered using a single visible edge. The experiments varied stimulus orientation and surface texture. Participants more accurately matched the depth of vertical stimuli than that of horizontal stimuli, consistent with previous studies and existing theory. Participants matched the depth of surfaces with large pixel-to-pixel luminance variations more accurately than they did surfaces with a small pixel-to-pixel luminance variation. Finally, they matched the depth of surfaces with vertical line patterns more accurately than they did surfaces with horizontal-striped texture patterns. These results suggest that designers can enhance depth perception in stereoscopic displays, and also reduce undesirable sensitivity to orientation, by rendering objects with surface textures using large pixel-to-pixel luminance variations | object rendering;stereoscopic displays;vertical line patterns;stimulus orientation;robustness;depth perception;planar stimulus disparity;orientation sensitivity reduction;single visible edge;stereoscopic depth cues;horizontal-striped texture patterns;surface texture;surface textures;pixel-to-pixel luminance variations;circular probe disparity |
train_1473 | Control performance with three translational degrees of freedom | For multiple degree-of-freedom (DOF) systems, it is important to determine how accurately operators can control each DOF and what influence perceptual, information processing, and psychomotor components have on performance. Sixteen right-handed male students participated in 2 experiments: 1 involving positioning and 1 involving tracking with 3 translational DOFs. To separate perceptual and psychomotor effects, we used 2 control-display mappings that differed in the coupling of vertical and depth dimensions to the up-down and fore-aft control axes. We observed information processing effects in the positioning task: Initial error correction on the vertical dimension lagged in time behind the horizontal dimension. The depth dimension error correction lagged behind both, which was ascribed to the poorer perceptual information. We observed this perceptual effect also in the tracking experiment. Motor effects were also present, with tracking errors along the up-down axis of the hand controller being 1.1 times larger than along the fore-aft axis. These results indicate that all 3 components contribute to control performance. Actual applications of this research include interface design | virtual reality;multi-dof systems;depth dimension error correction;perceptual effects;initial error correction;remote control;tracking;positioning;interface design;control performance;psychomotor effects |
train_1474 | Contrast sensitivity in a dynamic environment: effects of target conditions and visual impairment | Contrast sensitivity was determined as a function of target velocity (0-120 degrees/s) over a variety of viewing conditions. In Experiment 1, measurements of dynamic contrast sensitivity were determined for observers as a function of target velocity for letter stimuli. Significant main effects were found for target velocity, target size, and target duration, but significant interactions among the variables indicated especially pronounced adverse effects of increasing target velocity for small targets and brief durations. In Experiment 2, the effects of simulated cataracts were determined. Although the simulated impairment had no effect on traditional acuity scores, dynamic contrast sensitivity was markedly reduced. Results are discussed in terms of dynamic contrast sensitivity as a useful composite measure of visual functioning that may provide a better overall picture of an individual's visual functioning than does traditional static acuity, dynamic acuity, or contrast sensitivity alone. The measure of dynamic contrast sensitivity may increase understanding of the practical effects of various conditions, such as aging or disease, on the visual system, or it may allow improved prediction of individuals' performance in visually dynamic situations | disease;acuity scores;target size;dynamic environment;target velocity;dynamic contrast sensitivity;target conditions;target duration;aging;contrast sensitivity;visual impairment |
train_1475 | Relation between glare and driving performance | The present study investigated the effects of discomfort glare on driving behavior. Participants (old and young; US and Europeans) were exposed to a simulated low-beam light source mounted on the hood of an instrumented vehicle. Participants drove at night in actual traffic along a track consisting of urban, rural, and highway stretches. The results show that the relatively low glare source caused a significant drop in detecting simulated pedestrians along the roadside and made participants drive significantly slower on dark and winding roads. Older participants showed the largest drop in pedestrian detection performance and reduced their driving speed the most. The results indicate that the de Boer rating scale, the most commonly used rating scale for discomfort glare, is practically useless as a predictor of driving performance. Furthermore, the maximum US headlamp intensity (1380 cd per headlamp) appears to be an acceptable upper limit | glare;urban road;road traffic;simulated low-beam light source;highway;deboer rating scale;driving performance;discomfort glare;rural road |
train_1476 | The perceived utility of human and automated aids in a visual detection task | Although increases in the use of automation have occurred across society, research has found that human operators often underutilize (disuse) and overly rely on (misuse) automated aids (Parasuraman & Riley, 1997). Nearly 275 Cameron University students participated in 1 of 3 experiments performed to examine the effects of perceived utility (Dzindolet et al., 2001) on automation use in a visual detection task and to compare reliance on automated aids with reliance on humans. Results revealed a bias for human operators to rely on themselves. Although self-report data indicate a bias toward automated aids over human aids, performance data revealed that participants were more likely to disuse automated aids than to disuse human aids. This discrepancy was accounted for by assuming human operators have a "perfect automation" schema. Actual or potential applications of this research include the design of future automated decision aids and training procedures for operators relying on such aids | automated decision aids;visual detection task;automated aids;automation;human operators;social process |
train_1477 | Ecological interface design: progress and challenges | Ecological interface design (EID) is a theoretical framework for designing human-computer interfaces for complex socio-technical systems. Its primary aim is to support knowledge workers in adapting to change and novelty. This literature review shows that in situations requiring problem solving, EID improves performance when compared with current design approaches in industry. EID has been applied to industry-scale problems in a broad variety of application domains (e.g., process control, aviation, computer network management, software engineering, medicine, command and control, and information retrieval) and has consistently led to the identification of new information requirements. An experimental evaluation of EID using a full-fidelity simulator with professional workers has yet to be conducted, although some are planned. Several significant challenges remain as obstacles to the confident use of EID in industry. Promising paths for addressing these outstanding issues are identified. Actual or potential applications of this research include improving the safety and productivity of complex socio-technical systems | human-computer interfaces;productivity;industry;user interface;complex social technical systems;ecological interface design;human factors |
train_1478 | The effects of work pace on within-participant and between-participant keying force, electromyography, and fatigue | A laboratory study was conducted to determine the effects of work pace on typing force, electromyographic (EMG) activity, and subjective discomfort. We found that as participants typed faster, their typing force and finger flexor and extensor EMG activity increased linearly. There was also an increase in subjective discomfort, with a sharp threshold between participants' self-selected pace and their maximum typing speed. The results suggest that participants self-select a typing pace that maximizes typing speed and minimizes discomfort. The fastest typists did not produce significantly more finger flexor EMG activity but did produce proportionately less finger extensor EMG activity compared with the slower typists. We hypothesize that fast typists may use different muscle recruitment patterns that allow them to be more efficient than slower typists at striking the keys. In addition, faster typists do not experience more discomfort than slow typists. These findings show that the relative pace of typing is more important than actual typing speed with regard to discomfort and muscle activity. These results suggest that typists may benefit from skill training to increase maximum typing speed. Potential applications of this research include skill training for typists | discomfort;keying force;skill training;muscle recruitment patterns;work pace effect;emg activity;typing speed;subjective discomfort;finger flexor;typists |
train_1479 | Agreeing with automated diagnostic aids: a study of users' concurrence strategies | Automated diagnostic aids that are less than perfectly reliable often produce unwarranted levels of disuse by operators. In the present study, users' tendencies to either agree or disagree with automated diagnostic aids were examined under conditions in which: (1) the aids were less than perfectly reliable but aided diagnosis was still more accurate than unaided diagnosis; and (2) the system was completely opaque, affording users no additional information upon which to base a diagnosis. The results revealed that some users adopted a strategy of always agreeing with the aids, thereby maximizing the number of correct diagnoses made over several trials. Other users, however, adopted a probability-matching strategy in which agreement and disagreement rates matched the rate of correct and incorrect diagnoses of the aids. The probability-matching strategy, therefore, resulted in diagnostic accuracy scores that were lower than was maximally possible. Users who adopted the maximization strategy had higher self-ratings of problem-solving and decision-making skills, were more accurate in estimating aid reliabilities, and were more confident in their diagnosis on trials in which they agreed with the aids. The potential applications of these findings include the design of interface and training solutions that facilitate the adoption of the most effective concurrence strategies by users of automated diagnostic aids | fault diagnosis;probability-matching;disagreement rates;problem-solving;user concurrence strategy;maximization;complex systems;automated diagnostic aids;reliability |
train_148 | Axioms for branching time | Logics of general branching time, or historical necessity, have long been studied but important axiomatization questions remain open. Here the difficulties of finding axioms for such logics are considered and ideas for solving some of the main open problems are presented. A new, more expressive logical account is also given to support Peirce's prohibition on truth values being attached to the contingent future | axioms;branching time;truth values;temporal logic |
train_1480 | Formal verification of human-automation interaction | This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training materials (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces | formal verification;autopilot;human-automation interaction;automated control systems;commercial aircraft;user interface;man-machine interaction |
train_1481 | Impact of aviation highway-in-the-sky displays on pilot situation awareness | Thirty-six pilots (31 men, 5 women) were tested in a flight simulator on their ability to intercept a pathway depicted on a highway-in-the-sky (HITS) display. While intercepting and flying the pathway, pilots were required to watch for traffic outside the cockpit. Additionally, pilots were tested on their awareness of speed, altitude, and heading during the flight. Results indicated that the presence of a flight guidance cue significantly improved flight path awareness while intercepting the pathway, but significant practice effects suggest that a guidance cue might be unnecessary if pilots are given proper training. The amount of time spent looking outside the cockpit while using the HITS display was significantly less than when using conventional aircraft instruments. Additionally, awareness of flight information present on the HITS display was poor. Actual or potential applications of this research include guidance for the development of perspective flight display standards and as a basis for flight training requirements | aircraft display;flight guidance;situation awareness;pilots;cockpit;highway-in-the-sky display;flight simulator;flight path awareness;human factors |
train_1482 | A parareal in time procedure for the control of partial differential equations | We have proposed in a previous note a time discretization for partial differential evolution equations that allows for parallel implementations. This scheme is here reinterpreted as a preconditioning procedure in an algebraic setting of the time discretization. This allows the parallel methodology to be extended to the problem of optimal control for partial differential equations. We report a first numerical implementation, the results of which are very encouraging | optimal control;hilbert space;evolution equation;algebraic setting;time procedure;preconditioning procedure;partial differential equation control;time discretization |
train_1483 | Hypothesis-based concept assignment in software maintenance | Software maintenance accounts for a significant proportion of the lifetime cost of a software system. Software comprehension is required in many parts of the maintenance process and is one of the most expensive activities. Many tools have been developed to help the maintainer reduce the time and cost of this task, but of the numerous tools and methods available one group has received relatively little attention: those using plausible reasoning to address the concept assignment problem. We present a concept assignment method for COBOL II: hypothesis-based concept assignment (HB-CA). An implementation of a prototype tool is described, and results from a comprehensive evaluation using commercial COBOL II sources are summarised. In particular, we identify areas of a standard maintenance process where such methods would be appropriate, and discuss the potential cost savings that may result | cobol ii;lifetime cost;software maintenance;scalability;hypothesis-based concept assignment |
train_1484 | Portfolio optimization and the random magnet problem | Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk | minimum risk level;spin glasses;price movements;portfolio optimization;invested funds;investment;random magnet problem;magnetization;cross-correlation matrix;cross-correlations;fluctuating assets;stocks |
train_1485 | Telemedicine in the management of a cervical dislocation by a mobile neurosurgeon | Neurosurgical teams, who are normally located in specialist centres, frequently use teleradiology to make a decision about the transfer of a patient to the nearest neurosurgical department. This decision depends on the type of pathology, the clinical status of the patient and the prognosis. If the transfer of the patient is not possible, for example because of an unstable clinical status, a mobile neurosurgical team may be used. We report a case which was dealt with in a remote French military airborne surgical unit, in the Republic of Chad. The unit, which provides health-care to the French military personnel stationed there, also provides free medical care for the local population. It conducts about 100 operations each month. The unit comprises two surgeons (an orthopaedic and a general surgeon), one anaesthetist, two anaesthetic nurses, one operating room nurse, two nurses, three paramedics and a secretary. The civilian patient presented with unstable cervical trauma. A mobile neurosurgeon operated on her, and used telemedicine before, during and after surgery | remote french military airborne surgical unit;republic of chad;telemedicine;unstable cervical trauma;health care;teleradiology;surgery;french military personnel;mobile neurosurgeon;civilian patient;cervical dislocation management |
train_1486 | Hand-held digital video-camera for eye examination and follow-up | We developed a hand-held digital colour video-camera for eye examination in primary care. The device weighed 550 g. It featured a charge-coupled device (CCD) and corrective optics. Both colour video and digital still images could be taken. The video-camera was connected to a PC with software for database storage, image processing and telecommunication. We studied 88 normal subjects (38 male, 50 female), aged 7-62 years. It was not necessary to use mydriatic eye drops for pupillary dilation. Satisfactory digital images of the whole face and the anterior eye were obtained. The optic disc and the central part of the ocular fundus could also be recorded. Image quality of the face and the anterior eye were excellent; image quality of the optic disc and macula were good enough for tele-ophthalmology. Further studies are needed to evaluate the usefulness of the equipment in different clinical conditions | normal subjects;digital still images;tele-ophthalmology;image processing;hand-held digital colour video camera;charge-coupled device;image quality;pc;database storage;software;optic disc;corrective optics;whole face;ocular fundus;eye examination;anterior eye;primary care;colour video images;follow-up;telecommunication;clinical conditions |
train_1487 | Assessment of prehospital chest pain using telecardiology | Two hundred general practitioners were equipped with a portable electrocardiograph which could transmit a 12-lead electrocardiogram (ECG) via a telephone line. A cardiologist was available 24 h a day for an interactive teleconsultation. In a 13-month period there were 5073 calls to the telecardiology service and 952 subjects with chest pain were identified. The telecardiology service allowed the general practitioners to manage 700 cases (74%) themselves; further diagnostic tests were requested for 162 patients (17%) and 83 patients (9%) were sent to the hospital emergency department. In the last group a cardiological diagnosis was confirmed in 60 patients and refuted in 23. Seven patients in whom the telecardiology service failed to detect a cardiac problem were hospitalized in the subsequent 48 h. The telecardiology service showed a sensitivity of 97.4%, a specificity of 89.5% and a diagnostic accuracy of 86.9% for chest pain. Telemedicine could be a useful tool in the diagnosis of chest pain in primary care | primary care;hospital emergency department;13 month;diagnostic accuracy;portable electrocardiograph;telephone line;diagnostic tests;prehospital chest pain assessment;specificity;patients;general practitioners;electrocardiogram transmission;sensitivity;telecardiology;interactive teleconsultation |
train_1488 | Social presence in telemedicine | We studied consultations between a doctor, emergency nurse practitioners (ENPs) and their patients in a minor accident and treatment service (MATS). In the conventional consultations, all three people were located at the main hospital. In the teleconsultations, the doctor was located in a hospital 6 km away from the MATS and used a videoconferencing link connected at 384 kbit/s. There were 30 patients in the conventional group and 30 in the telemedical group. The presenting problems were similar in the two groups. The mean duration of teleconsultations was 951 s and the mean duration of face-to-face consultations was 247 s. In doctor-nurse communication there was a higher rate of turn taking in teleconsultations than in face-to-face consultations; there were also more interruptions, more words and more 'backchannels' (e.g. 'mhm', 'uh-huh') per teleconsultation. In doctor-patient communication there was a higher rate of turn taking, more words, more interruptions and more backchannels per teleconsultation. In patient-nurse communication there was relatively little difference between the two modes of consulting the doctor. Telemedicine appeared to empower the patient to ask more questions of the doctor. It also seemed that the doctor took greater care in a teleconsultation to achieve coordination of beliefs with the patient than in a face-to-face consultation | telemedicine;turn taking;384 kbit/s;doctor-nurse communication;951 s;doctor;social presence;teleconsultations;minor accident and treatment service;interruptions;belief coordination;emergency nurse practitioners;backchannels;face-to-face consultations;247 s;patients;videoconferencing link;words;patient-nurse communication |
train_1489 | An eight-year study of Internet-based remote medical counselling | We carried out a prospective study of an Internet-based remote counselling service. A total of 15,456 Internet users visited the Web site over eight years. From these, 1500 users were randomly selected for analysis. Medical counselling had been granted to 901 of the people requesting it (60%). One hundred and sixty-four physicians formed project groups to process the requests and responded using email. The distribution of patients using the service was similar to the availability of the Internet: 78% were from the European Union, North America and Australia. Sixty-seven per cent of the patients lived in urban areas and the remainder were residents of remote rural areas with limited local medical coverage. Sixty-five per cent of the requests were about problems of internal medicine and 30% of the requests concerned surgical issues. The remaining 5% of the patients sought information about recent developments, such as molecular medicine or aviation medicine. During the project, our portal became inaccessible five times, and counselling was not possible on 44 days. There was no hacking of the Web site. Internet-based medical counselling is a helpful addition to conventional practice | email;urban areas;web site;telemedicine;portal;remote rural areas;internet-based remote medical counselling;surgical issues;internet users;medical education |
train_149 | Extending Kamp's theorem to model time granularity | In this paper, a generalization of Kamp's theorem relative to the functional completeness of the until operator is proved. Such a generalization consists in showing the functional completeness of more expressive temporal operators with respect to the extension of the first-order theory of linear orders MFO[<] with an extra binary relational symbol. The result is motivated by the search for a modal language capable of expressing properties and operators suitable to model time granularity in omega-layered temporal structures | functional completeness;linear orders;temporal operators;omega-layered temporal structures;first-order theory;kamp's theorem;model time granularity;until operator;binary relational symbol |
train_1490 | Client satisfaction in a feasibility study comparing face-to-face interviews with telepsychiatry | We carried out a pilot study comparing satisfaction levels between psychiatric patients seen face to face (FTF) and those seen via videoconference. Patients who consented were randomly assigned to one of two groups. One group received services in person (FTF from the visiting psychiatrist) while the other was seen using videoconferencing at 128 kbit/s. One psychiatrist provided all the FTF and videoconferencing assessment and follow-up visits. A total of 24 subjects were recruited. Three of the subjects (13%) did not attend their appointments and two subjects in each group were lost to follow-up. Thus there were nine in the FTF group and eight in the videoconferencing group. The two groups were similar in most respects. Patient satisfaction with the services was assessed using the Client Satisfaction Questionnaire (CSQ-8), completed four months after the initial consultation. The mean scores were 25.3 in the FTF group and 21.6 in the videoconferencing group. Although there was a trend in favour of the FTF service, the difference was not significant. Patient satisfaction is only one component of evaluation. The efficacy of telepsychiatry must also be measured relative to that of conventional, FTF care before policy makers can decide how extensively telepsychiatry should be implemented | videoconference;telemedicine;psychiatric patient satisfaction;telepsychiatry;client satisfaction;128 kbit/s;client satisfaction questionnaire;human factors;face-to-face interviews |
train_1491 | Evaluation of videoconferenced grand rounds | We evaluated various aspects of grand rounds videoconferenced from a tertiary care hospital to a regional hospital in Nova Scotia. During a five-month study period, 29 rounds were broadcast (19 in medicine and 10 in cardiology). The total recorded attendance at the remote site was 103, comprising 70 specialists, nine family physicians and 24 other health-care professionals. We received 55 evaluations, a response rate of 53%. On a five-point Likert scale (on which higher scores indicated better quality), mean ratings by remote-site participants of the technical quality of the videoconference were 3.0-3.5, with the lowest ratings being for ability to hear the discussion (3.0) and to see visual aids (3.1). Mean ratings for content, presentation, discussion and educational value were 3.8 or higher. Of the 49 physicians who presented the rounds, we received evaluations from 41, a response rate of 84%. The presenters rated all aspects of the videoconference and interaction with remote sites at 3.8 or lower. The lowest ratings were for ability to see the remote sites (3.0) and the usefulness of the discussion (3.4). We received 278 evaluations from participants at the presenting site, an estimated response rate of about 55%. The results indicated no adverse opinions of the effect of videoconferencing (mean scores 3.1-3.3). The estimated costs of videoconferencing one grand round to one site and four sites were C$723 and C$1515, respectively. The study confirmed that videoconferenced rounds can provide satisfactory continuing medical education to community specialists, which is an especially important consideration as maintenance of certification becomes mandatory | tertiary care hospital;telemedicine;health-care professionals;remote sites;regional hospital;certification;five-point likert scale;cardiology;videoconferenced grand rounds;continuing medical education |
train_1492 | A systematic review of the efficacy of telemedicine for making diagnostic and management decisions | We conducted a systematic review of the literature to evaluate the efficacy of telemedicine for making diagnostic and management decisions in three classes of application: office/hospital-based, store-and-forward, and home-based telemedicine. We searched the MEDLINE, EMBASE, CINAHL and HealthSTAR databases and printed resources, and interviewed investigators in the field. We excluded studies where the service did not historically require face-to-face encounters (e.g. radiology or pathology diagnosis). A total of 58 articles met the inclusion criteria. The articles were summarized and graded for the quality and direction of the evidence. There were very few high-quality studies. The strongest evidence for the efficacy of telemedicine for diagnostic and management decisions came from the specialties of psychiatry and dermatology. There was also reasonable evidence that general medical history and physical examinations performed via telemedicine had relatively good sensitivity and specificity. Other specialties in which some evidence for efficacy existed were cardiology and certain areas of ophthalmology. Despite the widespread use of telemedicine in most major medical specialties, there is strong evidence in only a few of them that the diagnostic and management decisions provided by telemedicine are comparable to face-to-face care | telemedicine;ophthalmology;embase;psychiatry;healthstar;cinahl;dermatology;management decision making;literature review;cardiology;medical diagnosis;medline |
train_1493 | Research into telehealth applications in speech-language pathology | A literature review was conducted to investigate the extent to which telehealth has been researched within the domain of speech-language pathology and the outcomes of this research. A total of 13 studies were identified. Three early studies demonstrated that telehealth was feasible, although there was no discussion of the cost-effectiveness of this process in terms of patient outcomes. The majority of the subsequent studies indicated positive or encouraging outcomes resulting from telehealth. However, there were a number of shortcomings in the research, including a lack of cost-benefit information, failure to evaluate the technology itself, an absence of studies of the educational and informational aspects of telehealth in relation to speech-language pathology, and the use of telehealth in a limited range of communication disorders. Future research into the application of telehealth to speech-language pathology services must adopt a scientific approach, and have a well defined development and evaluation framework that addresses the effectiveness of the technique, patient outcomes and satisfaction, and the cost-benefit relationship | patient outcomes;telemedicine;cost-benefit analysis;cost-effectiveness;communication disorders;speech-language pathology;literature review;patient satisfaction;telehealth applications |
train_1494 | Where have all the PC makers gone? | PC makers are dwindling. If you are planning to make a PC purchase soon, here are a few things to look out for before you buy | pc makers;pc purchase |
train_1495 | Laptops zip to 2 GHz-plus | Intel's Pentium 4-M processor has reached the coveted 2-GHz mark, and speed-hungry mobile users will be tempted to buy a laptop with the chip. However, while our exclusive tests found 2-GHz P4-M notebooks among the fastest units we've tested, the new models failed to make dramatic gains compared with those based on Intel's 1.8-GHz mobile chip. Since 2-GHz notebooks carry a hefty price premium, buyers seeking both good performance and a good price might prefer a 1.8-GHz unit instead | intel pentium 4-m processor;notebooks;mobile;laptop;2 ghz |
train_1496 | Web ad explosion | Financed by advertising dollars from big names, online marketers are embracing more aggressive tactics | web advertising;online marketers |
train_1497 | Research challenges and perspectives of the Semantic Web | Accessing documents and services on today's Web requires human intelligence. The interface to these documents and services is the Web page, written in natural language, which humans must understand and act upon. The paper discusses the Semantic Web which will augment the current Web with formalized knowledge and data that computers can process. In the future, some services will mix human-readable and structured data so that both humans and computers can use them. Others will support formalized knowledge that only machines will use | internet;web page;formalized knowledge;artificial intelligence;document access;natural language;semantic web |
train_1498 | John McCarthy: father of AI | If John McCarthy, the father of AI, were to coin a new phrase for "artificial intelligence" today, he would probably use "computational intelligence." McCarthy is not just the father of AI, he is also the inventor of the Lisp (list processing) language. The author considers McCarthy's conception of Lisp and discusses McCarthy's recent research that involves elaboration tolerance, creativity by machines, free will of machines, and some improved ways of doing situation calculus | situation calculus;free will;father of ai;artificial intelligence;john mccarthy;elaboration tolerance;lisp;computational intelligence;list processing;creativity |
train_1499 | A digital-driving system for smart vehicles | In the wake of the computer and information technology revolutions, vehicles are undergoing dramatic changes in their capabilities and how they interact with drivers. Although some vehicles can decide to either generate warnings for the human driver or control the vehicle autonomously, they must usually make these decisions in real time with only incomplete information. So, human drivers must still maintain control over the vehicle. I sketch a digital driving behavior model. By simulating and analyzing driver behavior during different maneuvers such as lane changing, lane following, and traffic avoidance, researchers participating in the Beijing Institute of Technology's digital-driving project will be able to examine the possible correlations or causal relations between the smart vehicle, IVISs, the intelligent road-traffic-information network, and the driver. We aim to successfully demonstrate that a digital-driving system can provide a direction for developing human-centered smart vehicles | maneuvers;human-centered smart vehicles;intelligence;in-vehicle information systems;intelligent road traffic information network;intelligent transportation systems;digital driving system;ecological driver-vehicle interface;intelligent driver-vehicle interface;vehicle control;traffic avoidance;lane following;lane changing;interactive communication |
train_15 | Optimal and safe ship control as a multi-step matrix game | The paper describes the process of safe ship control in a collision situation using a differential game model with j participants. As an approximated model of the manoeuvring process, a model of a multi-step matrix game is adopted here. The RISKTRAJ computer program is designed in the Matlab language in order to determine the ship's trajectory as a certain sequence of manoeuvres executed by altering the course and speed, in the online navigator decision support system. These considerations are illustrated with examples of a computer simulation of the safe ship's trajectories in a real situation at sea when passing twelve of the encountered objects | optimal control;differential game;ship control;trajectory tracking;multistep matrix game;online navigation;risktraj computer program;decision support system;collision avoidance |
train_150 | Model checking games for branching time logics | This paper defines and examines model checking games for the branching time temporal logic CTL*. The games employ a technique called focus which enriches sets by picking out one distinguished element. This is necessary to avoid ambiguities in the regeneration of temporal operators. The correctness of these games is proved, and optimizations are considered to obtain model checking games for important fragments of CTL*. A game based model checking algorithm that matches the known lower and upper complexity bounds is sketched | temporal operators;complexity bounds;branching time logics;temporal logic;model checking games |
train_1500 | DAML+OIL: an ontology language for the Semantic Web | By all measures, the Web is enormous and growing at a staggering rate, which has made it increasingly difficult-and important-for both people and programs to have quick and accurate access to Web information and services. The Semantic Web offers a solution, capturing and exploiting the meaning of terms to transform the Web from a platform that focuses on presenting information, to a platform that focuses on understanding and reasoning with information. To support Semantic Web development, the US Defense Advanced Research Projects Agency launched the DARPA Agent Markup Language (DAML) initiative to fund research in languages, tools, infrastructure, and applications that make Web content more accessible and understandable. Although the US government funds DAML, several organizations-including US and European businesses and universities, and international consortia such as the World Wide Web Consortium-have contributed to work on issues related to DAML's development and deployment. We focus on DAML's current markup language, DAML+OIL, which is a proposed starting point for the W3C's Semantic Web Activity's Ontology Web Language (OWL). We introduce DAML+OIL syntax and usage through a set of examples, drawn from a wine knowledge base used to teach novices how to build ontologies | daml+oil;darpa agent markup language;ontology web language;wine knowledge base;semantic web;syntax |
train_1501 | Computational challenges in cell simulation: a software engineering approach | Molecular biology's advent in the 20th century has exponentially increased our knowledge about the inner workings of life. We have dozens of completed genomes and an array of high-throughput methods to characterize gene encodings and gene product operation. The question now is how we will assemble the various pieces. In other words, given sufficient information about a living cell's molecular components, can we predict its behavior? We introduce the major classes of cellular processes relevant to modeling, discuss software engineering's role in cell simulation, and identify cell simulation requirements. Our E-Cell project aims to develop the theories, techniques, and software platforms necessary for whole-cell-scale modeling, simulation, and analysis. Since the project's launch in 1996, we have built a variety of cell models, and we are currently developing new models that vary with respect to species, target subsystem, and overall scale | e-cell project;whole-cell-scale modeling;cell simulation;object-oriented design;molecular biology;software engineering |
train_1502 | Mining open answers in questionnaire data | Surveys are important tools for marketing and for managing customer relationships; the answers to open-ended questions, in particular, often contain valuable information and provide an important basis for business decisions. The summaries that human analysts make of these open answers, however, tend to rely too much on intuition and so aren't satisfactorily reliable. Moreover, because the Web makes it so easy to take surveys and solicit comments, companies are finding themselves inundated with data from questionnaires and other sources. Handling it all manually would be not only cumbersome but also costly. Thus, devising a computer system that can automatically mine useful information from open answers has become an important issue. We have developed a survey analysis system that works on these principles. The system mines open answers through two statistical learning techniques: rule learning (which we call rule analysis) and correspondence analysis | text mining system;open answer mining;rule analysis;correspondence analysis;survey analysis;statistical learning techniques;questionnaire data;natural language response analysis |
train_1503 | Neural networks for web content filtering | With the proliferation of harmful Internet content such as pornography, violence, and hate messages, effective content-filtering systems are essential. Many Web-filtering systems are commercially available, and potential users can download trial versions from the Internet. However, the techniques these systems use are insufficiently accurate and do not adapt well to the ever-changing Web. To solve this problem, we propose using artificial neural networks to classify Web pages during content filtering. We focus on blocking pornography because it is among the most prolific and harmful Web content. However, our general framework is adaptable for filtering other objectionable Web material | violence;harmful web content;learning capabilities;web page classification;artificial neural networks;pornographic/nonpornographic web page differentiation;intelligent classification engine;web content filtering |
train_1504 | Designing human-centered distributed information systems | Many computer systems are designed according to engineering and technology principles and are typically difficult to learn and use. The fields of human-computer interaction, interface design, and human factors have made significant contributions to ease of use and are primarily concerned with the interfaces between systems and users, not with the structures that are often more fundamental for designing truly human-centered systems. The emerging paradigm of human-centered computing (HCC)-which has taken many forms-offers a new look at system design. HCC requires more than merely designing an artificial agent to supplement a human agent. The dynamic interactions in a distributed system composed of human and artificial agents-and the context in which the system is situated-are indispensable factors. While we have successfully applied our methodology in designing a prototype of a human-centered intelligent flight-surgeon console at NASA Johnson Space Center, this article presents a methodology for designing human-centered computing systems using electronic medical records (EMR) systems | human-centered distributed information systems design;human-computer interaction;nasa johnson space center;multiple analysis levels;electronic medical records systems;human-centered intelligent flight surgeon console;distributed cognition;interface design;artificial agents;human agents;human factors;human-centered computing systems |
|
train_1505 | Modeling and simulating practices, a work method for work systems design | Work systems involve people engaging in activities over time, not just with each other but also with machines, tools, documents, and other artifacts. These activities often produce goods, services, or, as is the case in the work system described in this article, scientific data. Work systems and work practice evolve slowly over time. The integration and use of technology, the distribution and collocation of people, organizational roles and procedures, and the facilities where the work occurs largely determine this evolution | communication;teamwork;problem solving;learning behavior;work practice simulation;complex system interactions;tool usage;work practice modeling;work system design method;workspace usage;collaboration;human activity |
|
train_1506 | Intelligent control of life support for space missions | Future manned space operations will include a greater use of automation than we currently see. For example, semiautonomous robots and software agents will perform difficult tasks while operating unattended most of the time. As these automated agents become more prevalent, human contact with them will occur more often and become more routine, so designing these automated agents according to the principles of human-centered computing is important. We describe two cases of semiautonomous control software developed and fielded in test environments at the NASA Johnson Space Center. This software operated continuously at the JSC and interacted closely with humans for months at a time | semiautonomous control software;life support;nasa johnson space center;human intervention;space missions;automation;automated agents;semiautonomous robots;crew water recovery;manned space operations;intelligent control;software agents;crew air regeneration |
|
train_1507 | Ethnography, customers, and negotiated interactions at the airport | In the late 1990s, tightly coordinated airline schedules unraveled owing to massive delays resulting from inclement weather, overbooked flights, and airline operational difficulties. As schedules slipped, the delayed departures and late arrivals led to systemwide breakdowns, customers missed their connections, and airline work activities fell further out of sync. In offering possible answers, we emphasize the need to consider the customer as participant, following the human-centered computing model. Our study applied ethnographic methods to understand the airline system domain and the nature of airline delays, and it revealed the deficiencies of the airline production system model of operations. The research insights that led us to shift from a production and marketing system perspective to a customer-as-participant view might appear obvious to some readers. However, we do not know of any airline that designs its operations and technologies around any other model than the production and marketing system view. Our human-centered analysis used ethnographic methods to gather information, offering new insight into airline delays and suggesting effective ways to improve operations reliability | airline delays;employees;operations reliability;negotiated interactions;airports;human-centered computing model;customer trajectories;customer-as-participant view;ethnography;airline production system operations model |
|
train_1508 | Rats, robots, and rescue | In early May, media inquiries started arriving at my office at the Center for Robot-Assisted Search and Rescue (www.crasar.org). Because I'm CRASAR's director, I thought the press was calling to follow up on the recent humanitarian award given to the center's founder, John Blitch, for successfully using small, backpackable robots at the World Trade Center disaster. Instead, I found they were asking me to comment on the "roborats" study in the 2 May 2002 Nature. In this study, rats with medial forebrain implants underwent operant conditioning to force them into a form of guided behavior, one aspect of which was thought useful for search and rescue. The article's closing comment suggested that a guided rat could serve as both a mobile robot and a biological sensor. Although a roboticist by training, I'm committed to any technology that will help save lives while reducing the risk to rescuers. But rats? | biological sensor;guided rat;mobile robot;robot-assisted search and rescue |
|
train_1509 | Mathematical modelling of the work of the system of wells in a layer with the exponential law of permeability variation and the mobile liquid interface | We construct and study a two-dimensional model of the work of the system of wells in a layer with the mobile boundary between liquids of various viscosity. We use a 'plunger' displacement model of liquids. The boundaries of the filtration region of these liquids are modelled by curves of the Lyapunov class. Unlike familiar work, we solve two-dimensional problems in an inhomogeneous layer when the mobile boundary and the boundaries of the filtration region are modelled by curves of the Lyapunov class. We show the practical convergence of the numerical solution of the problems studied | exponential law;lyapunov class curves;mathematical modelling;mobile boundary;permeability variation;2d model;plunger displacement model;numerical solution;viscosity;inhomogeneous layer;convergence;filtration region boundaries;well system;work;mobile liquid interface |
|
train_151 | Extending CTL with actions and real time | In this paper, we present the logic ATCTL, which is intended to be used for model checking models that have been specified in a lightweight version of the Unified Modelling Language (LUML). Elsewhere, we have defined a formal semantics for LUML to describe the models. This paper's goal is to give a specification language for properties that fits LUML; LUML includes states, actions and real time. ATCTL extends CTL with concurrent actions and real time. It is based on earlier extensions of CTL by R. De Nicola and F. Vaandrager (ACTL) (1990) and R. Alur et al. (TCTL) (1993). This makes it easier to adapt existing model checkers to ATCTL. To show that we can check properties specified in ATCTL in models specified in LUML, we give a small example using the Kronos model checker | model checking models;real time logic;specification language;formal semantics;logic atctl;unified modelling language;actions;computation tree logic;kronos model checker |
|
train_1510 | Estimation of the gradient of the solution of an adjoint diffusion equation by the Monte Carlo method | For the case of isotropic diffusion we consider the representation of the weighted concentration of trajectories and its space derivatives in the form of integrals (with some weights) of the solution to the corresponding boundary value problem and its directional derivative of a convective velocity. If the convective velocity at the domain boundary is degenerate and some other additional conditions are imposed, this representation allows us to construct an efficient 'random walk by spheres and balls' algorithm. When these conditions are violated, transition to modelling the diffusion trajectories by the Euler scheme is realized, and the directional derivative of velocity is estimated by the dependent testing method, using the parallel modelling of two closely-spaced diffusion trajectories. We succeeded in justifying this method by a statistically equivalent transition to modelling a single trajectory after the first step in the Euler scheme, using a suitable weight. This weight also admits direct differentiation with respect to the initial coordinate along a given direction. The resulting weight algorithm for calculating concentration derivatives is especially efficient if the initial point is in the subdomain in which the coefficients of the diffusion equation are constant | random walk by spheres and balls algorithm;weighted trajectory concentration;closely-spaced diffusion trajectories;directional derivative;weight;diffusion trajectories;monte carlo method;adjoint diffusion equation;gradient estimation;space derivatives;euler scheme;domain boundary;statistically equivalent transition;direct differentiation;initial coordinate;dependent testing method;isotropic diffusion;boundary value problem;convective velocity;concentration derivatives;integrals;parallel modelling |
|
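The "dependent testing" device in train_1510 (two closely-spaced trajectories driven by the same noise) can be sketched generically. Assumptions made here: an Euler scheme, a placeholder drift, a placeholder endpoint functional, and constant diffusion; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(x0, direction, h, T, dt, drift, sigma):
    """Simulate two diffusion trajectories started at x0 and x0 + h*direction
    with the SAME Brownian increments (dependent testing)."""
    x = np.array(x0, dtype=float)
    xh = x + h * np.asarray(direction, dtype=float)
    t = 0.0
    while t < T:
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # shared increments
        x = x + drift(x) * dt + sigma * dW
        xh = xh + drift(xh) * dt + sigma * dW
        t += dt
    return x, xh

drift = lambda x: -x            # placeholder convective velocity
f = lambda x: np.sum(x**2)      # placeholder functional of the endpoint

h = 1e-3
est = []
for _ in range(2000):
    x, xh = euler_pair([1.0, 0.0], [1.0, 0.0], h, T=1.0, dt=1e-2,
                       drift=drift, sigma=0.3)
    est.append((f(xh) - f(x)) / h)   # finite-difference directional derivative
print(np.mean(est))
```

Sharing the increments keeps the two trajectories correlated, so the variance of the finite-difference estimator stays small even for small h.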
train_1511 | Efficient algorithms for stiff elliptic problems with large parameters | We consider a finite element approximation and iteration algorithms for solving stiff elliptic boundary value problems with large parameters in front of a higher derivative. The convergence rate of the algorithms is independent of the spread in coefficients and a discretization parameter | higher derivative;large parameters;stiff elliptic boundary value problems;convergence rate;finite element approximation;iteration algorithms;efficient algorithms |
|
train_1512 | Mathematical aspects of computer-aided share trading | We consider problems of statistical analysis of share prices and propose probabilistic characteristics to describe the price series. We discuss three methods of mathematical modelling of price series with given probabilistic characteristics | mathematical modelling;probabilistic characteristics;computer-aided share trading;share price;statistical analysis;price series |
|
train_1513 | Solution of the reconstruction problem of a source function in the coagulation-fragmentation equation | We study the problem of reconstructing a source function in the kinetic coagulation-fragmentation equation. The study is based on optimal control methods, the solvability theory of operator equations, and the use of iteration algorithms | solvability;kinetic coagulation-fragmentation equation;optimal control methods;operator equations;iteration algorithms;source function reconstruction |
|
train_1514 | Universal parametrization in constructing smoothly-connected B-spline surfaces | In this paper, we explore the feasibility of universal parametrization in generating B-spline surfaces, which was proposed recently in the literature (Lim, 1999). We present an interesting property of the new parametrization: it guarantees G/sup 0/ continuity on B-spline surfaces when several independently constructed patches are put together without imposing any constraints. Also, a simple blending method of patchwork is proposed to construct C/sup n-1/ surfaces, where overlapping control nets are utilized. It takes into account the semi-localness property of universal parametrization. It effectively helps us construct very natural looking B-spline surfaces while keeping the deviation from given data points very low. Experimental results are shown with several sets of surface data points | universal parametrization;overlapping control nets;smoothly-connected b-spline surface generation;c/sup n-1/ surfaces;patches;semi-localness property;patchwork blending method;surface data points;g/sup 0/ continuity |
|
train_1515 | p-Bezier curves, spirals, and sectrix curves | We elucidate the connection between Bezier curves in polar coordinates, also called p-Bezier or focal Bezier curves, and certain families of spirals and sectrix curves. p-Bezier curves are the analogue in polar coordinates of nonparametric Bezier curves in Cartesian coordinates. Such curves form a subset of rational Bezier curves characterized by control points on radial directions regularly spaced with respect to the polar angle, and weights equal to the inverse of the polar radius. We show that this subset encompasses several classical sectrix curves, which solve geometrically the problem of dividing an angle into equal spans, and also spirals defining the trajectories of particles in central fields. First, we identify as p-Bezier curves a family of sinusoidal spirals that includes Tschirnhausen's cubic. Second, the trisectrix of Maclaurin and its generalizations, called arachnidas. Finally, a special class of epi spirals that encompasses the trisectrix of Delanges | sectrix curves;rational bezier curves;particle trajectories;epi spirals;cubic;polar angle;central fields;angle division;equal spans;focal bezier curves;spirals;polar coordinates;arachnidas;geometry;radial directions;sinusoidal spirals;p-bezier curves;control points;trisectrix |
|
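Working only from the characterization stated in train_1515 (control points on radial directions at regularly spaced angles, weights equal to the inverse of the polar radius), a rational-Bezier evaluation sketch might look as follows; the sample radii and angular range are arbitrary.

```python
import numpy as np
from math import comb

def p_bezier(radii, theta0, theta1, num=100):
    """Rational Bezier in polar form: control points on radial directions at
    equally spaced angles, weights equal to 1/radius (per the abstract)."""
    radii = np.asarray(radii, dtype=float)
    n = len(radii) - 1
    angles = np.linspace(theta0, theta1, n + 1)
    ctrl = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
    w = 1.0 / radii
    u = np.linspace(0.0, 1.0, num)
    B = np.array([comb(n, i) * u**i * (1 - u)**(n - i)
                  for i in range(n + 1)]).T           # Bernstein basis
    numer = B @ (w[:, None] * ctrl)                   # weighted numerator
    denom = B @ w
    return numer / denom[:, None]

pts = p_bezier([1.0, 1.5, 2.0, 1.2], 0.0, np.pi / 2)
print(pts[[0, -1]])   # endpoints lie at radii 1.0 and 1.2
```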
train_1516 | Using duality to implicitize and find cusps and inflection points of Bezier curves | A planar algebraic curve C has an implicit equation and a tangential equation. The tangential equation defines a dual curve to C. Starting with a parametrization of C, we find a parametrization of the dual curve, and the tangential equation and implicit equation of C in a novel way. We also find equations whose roots are the parameter values of the cusps and inflection points of C. Methods include polar reciprocation and the theory of envelopes | bezier curves;planar algebraic curve;inflection points;envelope theory;cusps;tangential equation;parametrization;polar reciprocation;implicit equation;dual curve |
|
train_1517 | Minimizing blossoms under symmetric linear constraints | In this paper, we show that there exists a close dependence between the control polygon of a polynomial and the minimum of its blossom under symmetric linear constraints. We consider a given minimization problem P, for which a unique solution will be a point delta on the Bezier curve. For the minimization function f, two sufficient conditions exist that ensure the uniqueness of the solution, namely, the concavity of the control polygon of the polynomial and the characteristics of the Polya frequency-control polygon where the minimum coincides with a critical point of the polynomial. The use of the blossoming theory provides us with a useful geometrical interpretation of the minimization problem. In addition, this minimization approach leads us to a new method of discovering inequalities about the elementary symmetric polynomials | concavity;critical point;elementary symmetric polynomials;symmetric linear constraints;bezier curve;polynomial;geometrical interpretation;inequalities;polya frequency-control polygon;control polygon;blossom minimization |
|
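The blossom (polar form) central to train_1517 can be evaluated by the standard de Casteljau recurrence with a different parameter at each level; the classical evaluation below is a well-known construction, not the paper's minimization method.

```python
import numpy as np

def blossom(ctrl, args):
    """Blossom (polar form) of a Bezier curve: run de Casteljau, using
    args[k] as the parameter at level k. blossom(ctrl, [t]*n) = curve(t)."""
    pts = np.array(ctrl, dtype=float)
    for t in args:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], float)   # cubic
print(blossom(ctrl, [0.3, 0.3, 0.3]))   # point on the curve at t = 0.3
print(blossom(ctrl, [0.2, 0.5, 0.8]))   # multi-affine blossom value
```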
train_1518 | Explicit matrix representation for NURBS curves and surfaces | The matrix forms for curves and surfaces were largely promoted in CAD/CAM. In this paper we present two matrix representation formulations for arbitrary degree NURBS curves and surfaces explicitly rather than recursively. The two approaches are derived from the computation of divided differences and the Marsden identity, respectively. The explicit coefficient matrix of B-splines with equally spaced knots, and of Bezier curves and surfaces, can be obtained by these formulae. The coefficient formulae and the coefficient matrix formulae developed in this paper express non-uniform B-spline functions of arbitrary degree in explicit polynomial and matrix forms. They are useful for the evaluation and the conversion of NURBS curves and surfaces in CAD/CAM systems | b-spline;nurbs curves;nonuniform b-spline functions;bezier curves;equally spaced knot;explicit polynomial forms;explicit coefficient matrix;explicit matrix forms;nurbs surfaces;coefficient matrix formulae;cad/cam;bezier surfaces;explicit matrix representation;coefficient formulae;marsden identity;matrix representation formulations;divided difference |
|
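For the matrix forms promoted in train_1518, the familiar cubic Bezier case makes the idea concrete: the curve is a row of powers of t times a fixed coefficient matrix times the control points. The matrix below is the standard cubic Bezier matrix, shown as background; it is not the paper's NURBS formulae.

```python
import numpy as np

# Explicit coefficient matrix of the cubic Bezier basis:
# B(t) = [t^3, t^2, t, 1] . M . P, with one control point per row of P.
M = np.array([[-1.,  3., -3., 1.],
              [ 3., -6.,  3., 0.],
              [-3.,  3.,  0., 0.],
              [ 1.,  0.,  0., 0.]])

P = np.array([[0., 0.], [1., 2.], [3., 2.], [4., 0.]])

def bezier_point(t):
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ P

print(bezier_point(0.0), bezier_point(1.0))   # reproduces the endpoints
```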
train_1519 | Structural invariance of spatial Pythagorean hodographs | The structural invariance of the four-polynomial characterization for three-dimensional Pythagorean hodographs introduced by Dietz et al. (1993), under arbitrary spatial rotations, is demonstrated. The proof relies on a factored-quaternion representation for Pythagorean hodographs in three-dimensional Euclidean space-a particular instance of the "PH representation map" proposed by Choi et al. (2002)-and the unit quaternion description of spatial rotations. This approach furnishes a remarkably simple derivation for the polynomials u(t), upsilon (t), p(t), q(t) that specify the canonical form of a rotated Pythagorean hodograph, in terms of the original polynomials u(t), upsilon (t), p(t), q(t) and the angle theta and axis n of the spatial rotation. The preservation of the canonical form of PH space curves under arbitrary spatial rotations is essential to their incorporation into computer-aided design and manufacturing applications, such as the contour machining of free-form surfaces using a ball-end mill and realtime PH curve CNC interpolators | ph representation map;ball-end mill;structural invariance;arbitrary spatial rotations;3d pythagorean hodographs;cad/cam;real-time ph curve cnc interpolators;spatial pythagorean hodographs;free-form surfaces;unit quaternion description;spatial rotations;factored quaternion representation;four-polynomial characterization;3d euclidean space;contour machining |
|
train_152 | Linear tense logics of increasing sets | We provide an extension of the language of linear tense logic with future and past connectives F and P, respectively, by a modality that quantifies over the points of some set which is assumed to increase in the course of time. In this way we obtain a general framework for modelling growth qualitatively. We develop an appropriate logical system, prove a corresponding completeness and decidability result and discuss the various kinds of flow of time in the new context. We also consider decreasing sets briefly | temporal reasoning;completeness;future and past connectives;decreasing sets;decidability;logical system;linear tense logic |
|
train_1520 | Uniform hyperbolic polynomial B-spline curves | This paper presents a new kind of uniform splines, called hyperbolic polynomial B-splines, generated over the space Omega = span{sinh t, cosh t, t/sup k-3/, t/sup k-4/, ..., t, 1}, in which k is an arbitrary integer larger than or equal to 3. Hyperbolic polynomial B-splines share most of the properties of B-splines in polynomial space. We give subdivision formulae for this new kind of curve and then prove that they have variation diminishing properties and that the control polygons of the subdivisions converge. Hyperbolic polynomial B-splines can handle freeform curves as well as remarkable curves such as the hyperbola and the catenary. The generation of tensor product surfaces using these new splines is straightforward. Examples of such tensor product surfaces are given: the saddle surface, the catenary cylinder, and a certain kind of ruled surface | uniform hyperbolic polynomial b-spline curves;catenary cylinder;arbitrary integer;subdivisions;tensor product surface generation;subdivision formulae;saddle surface;freeform curves;ruled surface;catenary;control polygons;hyperbola |
|
train_1521 | Optimal multi-degree reduction of Bezier curves with constraints of endpoints continuity | Given a Bezier curve of degree n, the problem of optimal multi-degree reduction (degree reduction of more than one degree) by a Bezier curve of degree m (m<n-1) with constraints of endpoint continuity is investigated. With respect to the L/sub 2/ norm, this paper presents an approximate method (MDR by L/sub 2/) that gives an explicit solution to deal with it. The method has good properties of endpoint interpolation: continuity of orders r and s (r, s>or=0) can be preserved at the two endpoints, respectively. The method performs multi-degree reduction at one time and does not need stepwise computation. When applied to multi-degree reduction with endpoint continuity of any order, the MDR by L/sub 2/ obtains the best least squares approximation. Comparison with another method of multi-degree reduction (MDR by L/sub infinity /), which achieves the nearly best uniform approximation with respect to the L/sub infinity / norm, is also given. The approximation effect of the MDR by L/sub 2/ is better than that of the MDR by L/sub infinity /. An explicit approximate error analysis of the multi-degree reduction methods is presented | bezier curves;uniform approximation;endpoint continuity constraints;optimal multi-degree reduction;endpoint interpolation;explicit approximate error analysis;least squares approximation;approximate method;explicit solution |
|
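A discrete surrogate for the L/sub 2/ degree reduction of train_1521: sample the high-degree curve, fix both endpoints (C/sup 0/ continuity only), and solve a least-squares problem for the interior control points. The paper derives an explicit exact solution with higher-order endpoint continuity; this sketch merely approximates the idea by sampling.

```python
import numpy as np
from math import comb

def bernstein(n, u):
    """Bernstein basis values, shape (len(u), n+1)."""
    return np.array([comb(n, i) * u**i * (1 - u)**(n - i)
                     for i in range(n + 1)]).T

def degree_reduce(ctrl, m, samples=200):
    """Discrete least-squares reduction of a degree-n Bezier curve to
    degree m, keeping both endpoints fixed."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl) - 1
    u = np.linspace(0.0, 1.0, samples)
    target = bernstein(n, u) @ ctrl                 # points on the curve
    Bm = bernstein(m, u)
    # Pin first/last control points, solve for the interior ones
    rhs = (target - np.outer(Bm[:, 0], ctrl[0])
                  - np.outer(Bm[:, -1], ctrl[-1]))
    inner, *_ = np.linalg.lstsq(Bm[:, 1:-1], rhs, rcond=None)
    return np.vstack([ctrl[0], inner, ctrl[-1]])

quintic = np.array([[0, 0], [1, 3], [2, 4], [3, 4], [4, 3], [5, 0]], float)
print(degree_reduce(quintic, 3))   # degree 5 -> 3 in one step
```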
train_1522 | Waltzing through Port 80 [Web security] | Web services follow the trusting model of the Internet, but allow ever more powerful payloads to travel between businesses and consumers. Before you leap online, the author advises scanning the security concerns and the available fixes. He looks at how we define and store Web services and incorporate them into business processes | internet;trust;data security;business processes;web services |
|
train_1523 | Process specialization: defining specialization for state diagrams | A precise definition of specialization and inheritance promises to be as useful in organizational process modeling as it is in object modeling. It would help us better understand, maintain, reuse, and generate process models. However, even though object-oriented analysis and design methodologies take full advantage of the object specialization hierarchy, the process specialization hierarchy is not supported in major process representations, such as the state diagram, data flow diagram, and UML representations. Partly underlying this lack of support is an implicit assumption that we can always specialize a process by treating it as "just another object." We argue in this paper that this is not so straightforward as it might seem; we argue that a process-specific approach must be developed. We propose such an approach in the form of a set of transformations which, when applied to a process description, always result in specialization. We illustrate this approach by applying it to the state diagram representation and demonstrate that this approach to process specialization is not only theoretically possible, but shows promise as a method for categorizing and analyzing processes. We point out apparent inconsistencies between our notion of process specialization and existing work on object specialization but show that these inconsistencies are superficial and that the definition we provide is compatible with the traditional notion of specialization | inheritance;object-oriented analysis;state diagrams;organizational process modeling;object specialization hierarchy;process representation;object-oriented design;process specialization |
|
train_1524 | Organizational design, information transfer, and the acquisition of rent-producing resources | Within the resource-based view of the firm, a dynamic story has emerged in which the knowledge accumulated over the history of a firm and embedded in organizational routines and structures influences the firm's ability to recognize the value of new resources and capabilities. This paper explores how firms can select organizational designs that increase the likelihood that they will recognize and value rent-producing resources and capabilities. A computational model is developed to study the tension between an organization's desire to explore its environment for new capabilities and the organization's need to exploit existing capabilities. Support is provided for the proposition that integration, both externally and internally, is an important source of dynamic capability. The model provides greater insight into the tradeoffs between these two forms of integration and suggests when one form may be preferred over another. In particular, evidence is provided that in uncertain environments, the ability to explore possible alternatives is critical, while in more certain environments, the ability to transfer information internally is paramount | investments;business strategy;organizational design;computational model;information transfer;probability;social networks;uncertain environments;rent-producing resources;certain environments |
|
train_1525 | Dependence graphs: dependence within and between groups | This paper applies the two-party dependence theory (Castelfranchi, Cesta and Miceli, 1992, in Y. Demazeau and E. Werner (Eds.) Decentralized AI-3, Elsevier, North Holland) to modelling multiagent and group dependence. These have theoretical potentialities for the study of emerging groups and collective structures, and more generally for understanding social and organisational complexity, and practical utility for both social-organisational and agent systems purposes. In the paper, the dependence theory is extended to describe multiagent links, with a special reference to group and collective phenomena, and is proposed as a framework for the study of emerging social structures, such as groups and collectives. In order to do so, we propose to extend the notion of dependence networks (applied to a single agent) to dependence graphs (applied to an agency). In its present version, the dependence theory is argued to provide (a) a theoretical instrument for the study of social complexity, and (b) a computational system for managing the negotiation process in competitive contexts and for monitoring complexity in organisational and other cooperative contexts | collective structures;two-party dependence theory;group dependence;multiagent dependence;agent systems;emerging groups;dependence graphs;multiagent systems;organisational complexity;dependence networks;social complexity |
|
train_1526 | GK-DEVS: Geometric and kinematic DEVS formalism for simulation modeling of 3-dimensional multi-component systems | A combined discrete/continuous simulation methodology based on the DEVS (discrete event system specification) formalism is presented in this paper that satisfies the simulation requirements of 3-dimensional, dynamic systems with multiple components. We propose a geometric and kinematic DEVS (GK-DEVS) formalism that is able to describe the geometric and kinematic structure of a system and its continuous state dynamics as well as the interaction among the multiple components. To establish one model having dynamic behavior and a particular hierarchical structure, the atomic and the coupled model of the conventional DEVS are merged into one model in the proposed formalism. For simulation of the continuous motion of 3-D components, the sequential state set is partitioned into the discrete and the continuous state set and the rate of change function over the continuous state set is employed. Although modified from the conventional DEVS formalism, the GK-DEVS formalism preserves a hierarchical, modular modeling fashion and a coupling scheme. Furthermore, for the GK-DEVS model simulation, we propose an abstract simulation algorithm, called a GK-Simulator, in which data and control are separated and events are scheduled not globally but hierarchically so that an object-oriented principle is satisfied. The proposed GK-DEVS formalism and the GK-Simulator algorithm have been applied to the simulation of a flexible manufacturing system consisting of a 2-axis lathe, a 3-axis milling machine, and a vehicle-mounted robot | object-oriented principle;continuous motion;geometric devs;gk-devs;simulation requirements;3 dimensional multi-component systems;vehicle-mounted robot;3-axis milling machine;kinematic devs;continuous state dynamics;abstract simulation algorithm;sequential state set;dynamic behavior;gk-simulator;simulation modeling;combined discrete/continuous simulation methodology;flexible manufacturing system;2-axis lathe |
|
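For readers unfamiliar with the DEVS base formalism that train_1526 extends, here is a minimal classic atomic-DEVS skeleton with a toy buffer model. GK-DEVS itself merges atomic and coupled models and adds geometric/kinematic structure; none of that is shown here, and the buffer example is invented.

```python
import math

class AtomicDEVS:
    """Skeleton of a classic atomic DEVS model: override the four
    characteristic functions."""
    def __init__(self, state):
        self.state = state
    def time_advance(self):            # ta(s)
        return math.inf
    def output(self):                  # lambda(s), emitted before delta_int
        return None
    def delta_int(self):               # internal transition
        pass
    def delta_ext(self, elapsed, x):   # external transition on input x
        pass

class Buffer(AtomicDEVS):
    """Stores one job, then emits it after a fixed processing delay."""
    DELAY = 2.0
    def time_advance(self):
        return self.DELAY if self.state else math.inf
    def output(self):
        return self.state
    def delta_int(self):
        self.state = None
    def delta_ext(self, elapsed, x):
        self.state = x

b = Buffer(None)
b.delta_ext(0.0, "job-1")
print(b.time_advance(), b.output())   # 2.0 job-1
```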
train_1527 | Using DEVS formalism to operationalize ELP models for diagnosis in SACHEM | This paper describes an original approach to discrete event control of continuous processes by means of expert knowledge. We present an application of this approach to the SACHEM diagnosis subsystem. The SACHEM system is a large-scale knowledge-based system that aims at helping a set of operators to control the dynamics of complex continuous systems (e.g., blast furnaces). The proposed method is based on: (i) the definition of a language facilitating the acquisition and representation of expert knowledge, called ELP (Expert Language Process); (ii) the use of the DEVS formalism to make ELP models operational; (iii) algorithms for exploiting operational models | complex continuous systems;expert knowledge;discrete event control;knowledge acquisition;sachem system;devs;knowledge representation;continuous processes;sachem;elp models |
|
train_1528 | DEVS simulation of distributed intrusion detection systems | An intrusion detection system (IDS) attempts to identify unauthorized use, misuse, and abuse of computer and network systems. As intrusions become more sophisticated, dealing with them moves beyond the scope of one IDS. The need arises for systems to cooperate with one another, to manage diverse attacks across networks. A feature of recent attacks is that packet delivery is moderately slow and that attack sources and targets are distributed. These attacks are called "stealthy attacks." To detect them, the deployment of distributed IDSs is needed. In such an environment, the ability of an IDS to share advanced information about these attacks is especially important. In this research, the IDS model exploits blacklist facts to detect attacks that are based on either slow or highly distributed packets. To maintain valid blacklist facts in the knowledge base of each IDS, the model must communicate with the other IDSs. When the attack level goes beyond the interaction threshold, ID agents send interaction messages to ID agents in other hosts. Each agent model is developed as an interruptible atomic-expert model in which the expert system is embedded as a model component | distributed intrusion detection system;intrusion detection system;ids;cooperative intrusion detection;warning threshold;intrusions;expert system |
|
train_1529 | Quantized-State Systems: A DEVS-approach for continuous system simulation | A new class of dynamical systems, Quantized State Systems or QSS, is introduced in this paper. QSS are continuous time systems where the input trajectories are piecewise constant functions and the state variable trajectories, being themselves piecewise linear functions, are converted into piecewise constant functions via a quantization function equipped with hysteresis. It is shown that QSS can be exactly represented and simulated by a discrete event model, within the framework of the DEVS-approach. Further, it is shown that QSS can be used to approximate continuous systems, thus allowing their discrete-event simulation as opposed to classical discrete-time simulation. It is also shown that in an approximating QSS, some stability properties of the original system are conserved and the solutions of the QSS converge to the solutions of the original system as the quantization goes to zero | continuous time systems;dynamical systems;discrete-event simulation;discrete event model;piecewise constant functions;quantized state systems |
|
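The quantization idea of train_1529 in its simplest scalar form: a QSS1-style sketch for x' = f(q), where q is the piecewise-constant quantized state. Re-quantizing at each event stands in here for the paper's hysteretic quantization function; this is a deliberate simplification, and f and the quantum are placeholders.

```python
def qss1(f, x0, quantum, t_end):
    """Minimal QSS1-style integration of x' = f(q): the state follows a
    piecewise linear trajectory and the quantized variable q is piecewise
    constant, updated when x drifts one quantum away from it."""
    t, x = 0.0, x0
    q = x0                          # quantized state
    events = [(t, x)]
    while t < t_end:
        d = f(q)                    # slope is constant until the next event
        if d == 0.0:
            break                   # no further state events
        sigma = quantum / abs(d)    # time for x to drift one quantum from q
        t += sigma
        x += d * sigma
        q = x                       # re-quantize at the event
        events.append((t, x))
    return events

for t, x in qss1(lambda q: -q, 1.0, 0.1, 1.0):
    print(f"t={t:.3f}  x={x:.3f}")
```

Note how the time between events adapts automatically: the step grows as the derivative shrinks, which is the hallmark of quantization-based integration.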
train_153 | On the relationship between omega -automata and temporal logic normal forms | We consider the relationship between omega -automata and a specific logical formulation based on a normal form for temporal logic formulae. While this normal form was developed for use with execution and clausal resolution in temporal logics, we show how it can represent, syntactically, omega -automata in a high-level way. Technical proofs of the correctness of this representation are given | omega -automata;temporal logic normal forms;logical formulation;program correctness;clausal resolution |
|
train_1530 | Uniform supersaturated design and its construction | Supersaturated designs are factorial designs in which the number of main effects is greater than the number of experimental runs. In this paper, a discrete discrepancy is proposed as a measure of uniformity for supersaturated designs, and a lower bound of this discrepancy is obtained as a benchmark of design uniformity. A construction method for uniform supersaturated designs via resolvable balanced incomplete block designs is also presented along with the investigation of properties of the resulting designs. The construction method shows a strong link between these two different kinds of designs | uniform supersaturated design;factorial designs;experimental runs;discrete discrepancy;resolvable balanced incomplete block designs |
|
train_1531 | Average optimization of the approximate solution of operator equations and its application | In this paper, a definition of the optimization of operator equations in the average case setting is given, and a general result on the relevant optimization problem is obtained. This result is applied to the optimization of the approximate solution of some classes of integral equations | integral n-width;average case setting;optimization;operator equations;gaussian measure;integral equations |
|
train_1532 | Dedekind zeta-functions and Dedekind sums | In this paper we use Dedekind zeta functions of two real quadratic number fields at -1 to denote Dedekind sums of high rank. Our formula is different from Siegel's (1969). As an application, we get a polynomial representation zeta /sub K/(-1) = 1/45(26n/sup 3/ - 41n +or- 9), n identical to +or-2 (mod 5), where K = Q( square root (5q)), prime q = 4n/sup 2/ + 1, and the class number of the quadratic number field K/sub 2/ = Q( square root q) is 1 | dedekind sums;real quadratic number fields;dedekind zeta-functions;polynomial representation |
|
train_1533 | Generic simulation approach for multi-axis machining. Part 2: model calibration and feed rate scheduling | For Part 1 see ibid. vol.124 (2002). This is the second part of a two-part paper presenting a new methodology for analytically simulating multi-axis machining of complex sculptured surfaces. The first section of this paper offers a detailed explanation of the model calibration procedure. A new methodology is presented for accurately determining the cutting force coefficients for multi-axis machining. The force model presented in Part 1 is reformulated so that the cutting force coefficients account for the effects of feed rate, cutting speed, and a complex cutting edge design. Experimental results are presented for the calibration procedure. Model verification tests were conducted with these cutting force coefficients. These tests demonstrate that the predicted forces are within 5% of experimentally measured forces. Simulated results are also shown for predicting dynamic cutting forces and static/dynamic tool deflection. The second section of the paper discusses how the modeling methodology can be applied for feed rate scheduling in an industrial application. A case study for process optimization of machining an airfoil-like surface is used for demonstration. Based on the predicted instantaneous chip load and/or a specified force constraint, the feed rate scheduling is utilized to increase the metal removal rate. The feed rate scheduling implementation results in a 30% reduction in machining time for the airfoil-like surface | cutting force coefficients;model calibration;cutting edge design;optimization;metal removal rate;multiple axis machining;complex sculptured surfaces;generic simulation;feed rate scheduling;force model |
|
train_1534 | Generic simulation approach for multi-axis machining. Part 1: modeling methodology | This paper presents a new methodology for analytically simulating multi-axis machining of complex sculptured surfaces. A generalized approach is developed for representing an arbitrary cutting edge design, and the local surface topology of a complex sculptured surface. A NURBS curve is used to represent the cutting edge profile. This approach offers the advantages of representing any arbitrary cutting edge design in a generic way, as well as providing standardized techniques for manipulating the location and orientation of the cutting edge. The local surface topology of the part is defined as those surfaces generated by previous tool paths in the vicinity of the current tool position. The local surface topology of the part is represented without using a computationally expensive CAD system. A systematic prediction technique is then developed to determine the instantaneous tool/part interaction during machining. The methodology employed here determines the cutting edge in-cut segments by determining the intersection between the NURBS curve representation of the cutting edge and the defined local surface topology. These in-cut segments are then utilized for predicting instantaneous chip load, static and dynamic cutting forces, and tool deflection. Part 1 of this paper details the modeling methodology and demonstrates the capabilities of the simulation for machining a complex surface | complex surface machining;systematic prediction;tool path specification;surface topology;nurbs curve;multiple axis machining;complex sculptured surfaces;generic modeling;cutting edge profile |
|
train_1535 | Hot controllers | Over the last few years, the semiconductor industry has put much emphasis on ways to improve the accuracy of thermal mass flow controllers (TMFCs). Although issues involving TMFC mounting orientation and pressure effects have received much attention, little has been done to address the effect of changes in ambient temperature or process gas temperature. Scientists and engineers at Qualiflow have succeeded in solving the problem using a temperature correction algorithm for digital TMFCs. Using an in situ environmental temperature compensation technique, we calculated correction factors for the temperature effect and obtained satisfactory results with both the traditional sensor and the new, improved thin-film sensors | semiconductor manufacturing;temperature correction algorithm;process gas flow;thermal mass flow controller;in situ environmental temperature compensation |
|
train_1536 | Connection management for QoS service on the Web | The current Web service model treats all requests equivalently, both while being processed by servers and while being transmitted over the network. For some uses, such as multiple priority schemes, different levels of service are desirable. We propose application-level TCP connection management mechanisms for Web servers to provide two different levels of Web service, high and low, by setting different time-outs for inactive TCP connections. We evaluated the performance of the mechanism under heavy and light loading conditions on the Web server. Our experiments show that, though heavy traffic saturates the network, high-level class performance is improved by as much as 25-28%. Therefore, this mechanism can effectively provide QoS-guaranteed services even in the absence of operating system and network support | internet;time-outs;quality of service;tcp connections;web service model;telecommunication traffic;web transaction;client server system;connection management |
|
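The mechanism of train_1536 reduces to simple bookkeeping: per-class inactivity time-outs on tracked connections. The sketch below shows only that bookkeeping; the time-out values are illustrative and real socket handling is elided.

```python
import time

# Idle time-outs per service class (seconds); values are illustrative
TIMEOUTS = {"high": 60.0, "low": 10.0}

connections = {}   # conn_id -> (service_class, last_activity_time)

def touch(conn_id, service_class):
    """Record activity on a connection (e.g. a request arrived)."""
    connections[conn_id] = (service_class, time.monotonic())

def reap_idle():
    """Close connections whose inactivity exceeds their class time-out:
    low-class connections are dropped much sooner, freeing server slots
    for high-class clients under load."""
    now = time.monotonic()
    for cid, (cls, last) in list(connections.items()):
        if now - last > TIMEOUTS[cls]:
            del connections[cid]     # in a real server: close the socket

touch("c1", "high"); touch("c2", "low")
reap_idle()
print(sorted(connections))
```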
train_1537 | Technology on social issues of videoconferencing on the Internet: a survey | Constant advances in audio/video compression, the development of the multicast protocol, as well as fast improvements in computing devices (e.g. higher speed, larger memory) have opened the opportunity for resource-demanding videoconferencing (VC) sessions on the Internet. Multicast is supported by the multicast backbone (Mbone), which is a special portion of the Internet where this protocol is being deployed. Mbone VC tools are steadily emerging and the user population is growing fast. VC is a fascinating application that has the potential to greatly impact the way we remotely communicate and work. Yet, the adoption of VC is not as fast as one could have predicted. Hence, it is important to examine the factors that affect a widespread adoption of VC. This paper examines the enabling technology and the social issues. It discusses the achievements and identifies the future challenges. It suggests integrating many emerging multimedia tools into VC to enhance its versatility and effectiveness | internet;mbone;multimedia;data compression;multicast protocol;social issues;multicast backbone;videoconferencing |
|
train_1538 | A heuristic approach to resource locations in broadband networks | In broadband networks, such as ATM, the importance of dynamic migration of data resources is increasing because of its potential to improve performance, especially for transaction processing. In environments with migratory data resources, it is necessary to have mechanisms to manage the locations of each data resource. In this paper, we present an algorithm that makes use of system state information and heuristics to manage locations of data resources in a distributed network. In the proposed algorithm, each site maintains information about the state of other sites with respect to each data resource of the system and uses it to find: (1) a subset of sites likely to have the requested data resource; and (2) the site where the data resource is to be migrated from the current site. The proposed algorithm enhances its effectiveness by continuously updating system state information stored at each site. It focuses on reducing the overall average time delay needed by the transaction requests to locate and access the migratory data resources. We evaluated the performance of the proposed algorithm and also compared it with one of the existing location management algorithms, by simulation studies under several system parameters such as the frequency of request generation, the frequency of data resource migrations, the network topology and the scale of the network. The experimental results show the effectiveness of the proposed algorithm in all cases | broadband networks;resource locations;distributed network;network topology;atm;data resource migrations;heuristics |
|
train_1539 | Comments on some recent methods for the simultaneous determination of polynomial zeros | In this note we give some comments on recent results concerning a fourth-order simultaneous method for finding complex zeros in circular interval arithmetic. The main discussion is directed to a rediscovered iterative formula and its modification, presented recently in Sun and Kosmol (2001). The presented comments include some critical parts of the papers by Petkovic, Trickovic and Herceg (1998) and by Sun and Kosmol (2001), which treat the same subject | zeros;polynomial;complex zeros;iterative formula;circular interval arithmetic |
|
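For context on simultaneous zero-finding methods like those discussed in train_1539, here is the classical second-order Weierstrass (Durand-Kerner) point iteration. The methods under discussion in the paper are fourth-order and operate on disks in circular interval arithmetic, which this sketch does not attempt.

```python
import numpy as np

def durand_kerner(coeffs, iters=50):
    """Weierstrass (Durand-Kerner) iteration: refines approximations to all
    zeros of a monic polynomial (coefficients highest degree first)."""
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    roots = (0.4 + 0.9j) ** np.arange(n)     # standard distinct start values
    for _ in range(iters):
        new = roots.copy()
        for i in range(n):
            others = np.prod(roots[i] - np.delete(roots, i))
            new[i] = roots[i] - np.polyval(coeffs, roots[i]) / others
        roots = new
    return roots

# z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```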
train_154 | Verifying concurrent systems with symbolic execution | Current techniques for interactively proving temporal properties of concurrent systems translate transition systems into temporal formulas by introducing program counter variables. Proofs are not intuitive, because control flow is not explicitly considered. For sequential programs symbolic execution is a very intuitive, interactive proof strategy. In this paper we adopt this technique for parallel programs. Properties are formulated in interval temporal logic. An implementation in the interactive theorem prover KIV has shown that this technique offers a high degree of automation and allows simple, local invariants | symbolic execution;concurrent systems verification;concurrent systems;parallel programs;temporal properties;interactive theorem prover kiv;sequential programs;program counter variables;temporal formulas;transition systems;local invariants |
|
train_1540 | Adaptive thinning for bivariate scattered data | This paper studies adaptive thinning strategies for approximating a large set of scattered data by piecewise linear functions over triangulated subsets. Our strategies depend on both the locations of the data points in the plane, and the values of the sampled function at these points - adaptive thinning. All our thinning strategies remove data points one by one, so as to minimize an estimate of the error that results by the removal of a point from the current set of points (this estimate is termed "anticipated error"). The thinning process generates subsets of "most significant" points, such that the piecewise linear interpolants over the Delaunay triangulations of these subsets approximate progressively the function values sampled at the original scattered points, and such that the approximation errors are small relative to the number of points in the subsets. We design various methods for computing the anticipated error at reasonable cost, and compare and test the performance of the methods. It is proved that for data sampled from a convex function, with the strategy of convex triangulation, the actual error is minimized by minimizing the best performing measure of anticipated error. It is also shown that for data sampled from certain quadratic polynomials, adaptive thinning is equivalent to thinning which depends only on the locations of the data points - nonadaptive thinning. Based on our numerical tests and comparisons, two practical adaptive thinning algorithms are proposed for thinning large data sets, one which is more accurate and another which is faster | scattered data;triangulated subsets;convex function;delaunay triangulations;error;piecewise linear functions;adaptive thinning |
|
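A brute-force version of the greedy loop in train_1540: repeatedly remove the point whose "anticipated error" (the interpolation error at that point produced by the remaining data) is smallest. Rebuilding the interpolant for every candidate, as done below, is far costlier than the cheap error estimates the paper designs; the test function and data are invented.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def thin(points, values, keep):
    """Greedy adaptive thinning; convex-hull points (NaN from the
    interpolant) get infinite anticipated error and are never removed."""
    pts = [tuple(p) for p in points]
    val = list(values)
    while len(pts) > keep:
        errs = []
        for i in range(len(pts)):
            rest_p = pts[:i] + pts[i + 1:]
            rest_v = val[:i] + val[i + 1:]
            z = LinearNDInterpolator(rest_p, rest_v)(*pts[i])
            errs.append(np.inf if np.isnan(z) else float(abs(z - val[i])))
        i = int(np.argmin(errs))
        del pts[i], val[i]
    return np.array(pts), np.array(val)

rng = np.random.default_rng(1)
P = rng.random((40, 2))
z = P[:, 0] ** 2 + P[:, 1]                 # smooth test function
pts, val = thin(P, z, keep=15)
print(len(pts), len(val))
```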
train_1541 | The AT89C51/52 flash memory programmers | When faced with a plethora of applications to design, it's essential to have a versatile microcontroller in hand. The author describes the AT89C51/52 microcontrollers. To get you started, he'll describe his inexpensive microcontroller programmer | microcontroller programmer;microcontrollers;flash memory programmers;at89c51/52;device programmer |
|
train_1542 | The open-source HCS project | Despite the rumors, the HCS II project is not dead. In fact, HCS has been licensed and is now an open-source project. In this article, the author brings us up to speed on the HCS II project's past, present, and future. The HCS II is an expandable, standalone, network-based (RS-485), intelligent-node, industrial-oriented supervisory control (SC) system intended for demanding home control applications. The HCS incorporates direct and remote digital inputs and outputs, direct and remote analog inputs and outputs, real time or Boolean decision event triggering, X10 transmission and reception, infrared remote control transmission and reception, remote LCDs, and a master console. Its program is compiled on a PC with the XPRESS compiler and then downloaded to the SC where it runs independently of the PC | supervisory control system;network-based;hcs ii;home control |
|
train_1543 | RISCy business. Part 1: RISC projects by Cornell students | The author looks at several projects that Cornell University students entered in the Atmel Design 2001 contest. Those covered include a vertical plotter; BiLines, an electronic game; a wireless Internet pager; Cooking Coach; Barbie's zip drive; and a model train controller | atmel's design logic 2001 contest;barbie's zip drive;model train controller;bilines;vertical plotter;electronic game;risc projects;cooking coach;cornell students;wireless internet pager |
|
train_1544 | Driving the NKK Smartswitch. Part 2: Graphics and text | Whether your message is one of workplace safety or world peace, the long nights of brooding over ways to tell the world are over. Part 1 described the basic interface to drive the Smartswitch. Part 2 adds the bells and whistles to allow both text and graphics to be placed anywhere on the screen. It considers character generation, graphic generation and the user interface | graphic generation;character generation;computer graphics;messages;user interface;nkk smartswitch;text |
|
train_1545 | Pontryagin maximum principle of optimal control governed by fluid dynamic systems with two point boundary state constraint | We study the optimal control problem subject to the semilinear equation with a state constraint. We prove certain theorems and give examples of state constraints so that the maximum principle holds. The main difficulty of the problem is the sensitivity analysis of the state with respect to the control, which is complicated by the unboundedness and nonlinearity of an operator | optimal control;state constraints;fluid dynamics;pontryagin maximum principle;semilinear equation |
|
train_1546 | Necessary conditions of optimality for impulsive systems on Banach spaces | We present necessary conditions of optimality for optimal control problems arising in systems governed by impulsive evolution equations on Banach spaces. Basic notation and terminology are introduced first, and then the necessary conditions of optimality are presented. Special cases are discussed and we present an application to the classical linear quadratic regulator problem | optimal control;impulsive systems;banach spaces;linear quadratic regulator;optimality;necessary conditions;impulsive evolution equations |
|
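The classical linear quadratic regulator mentioned as an application in train_1546, in its standard finite-dimensional form (not the Banach-space impulsive setting): solve the algebraic Riccati equation and form the optimal gain. The double-integrator plant and the weights are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x'' = u, state (position, velocity)
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q = np.eye(2)          # state weighting
R = np.array([[1.]])   # control weighting

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # optimal gain, u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))        # closed loop is stable
```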
train_1547 | New projection-type methods for monotone LCP with finite termination | In this paper we establish two new projection-type methods for the solution of the monotone linear complementarity problem (LCP). The methods are a combination of the extragradient method and the Newton method, in which the active set strategy is used and only one linear system of equations with lower dimension is solved at each iteration. It is shown that under the assumption of monotonicity, these two methods are globally and linearly convergent. Furthermore, under a nondegeneracy condition they have a finite termination property. Finally, the methods are extended to solving the monotone affine variational inequality problem | projection-type methods;linear system of equations;monotonicity;active set strategy;nondegeneracy condition;iteration;matrix;convergence;vector;monotone linear complementarity problem;monotone lcp;monotone affine variational inequality problem;newton method;extragradient method;finite termination |
|
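The extragradient building block used in train_1547 can be sketched in its plain form for a monotone LCP; the paper's actual method combines it with a Newton step and an active-set strategy, both omitted here. The step size and the test problem are illustrative.

```python
import numpy as np

def extragradient_lcp(M, q, tau=0.1, iters=500):
    """Extragradient iteration for the monotone LCP: find x >= 0 with
    Mx + q >= 0 and x.(Mx + q) = 0, where F(x) = Mx + q and projection
    is onto the nonnegative orthant."""
    proj = lambda v: np.maximum(v, 0.0)
    x = np.zeros(len(q))
    for _ in range(iters):
        y = proj(x - tau * (M @ x + q))    # predictor step
        x = proj(x - tau * (M @ y + q))    # corrector step
    return x

M = np.array([[2., 1.], [1., 2.]])         # positive definite -> monotone
q = np.array([-4., -3.])
x = extragradient_lcp(M, q)
print(x, M @ x + q)                        # complementary pair
```

The step size must satisfy tau < 1/L, where L is the Lipschitz constant of F (here the spectral norm of M), for the classical convergence guarantee to hold.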
train_1548 | A second order characteristic finite element scheme for convection-diffusion problems | A new characteristic finite element scheme is presented for convection-diffusion problems. It is of second order accuracy in the time increment, symmetric, and unconditionally stable. Optimal error estimates are proved in the framework of L/sup 2/-theory. Numerical results are presented for two examples, which show the advantage of the scheme | optimal error estimates;l/sup 2/ -theory;second order accuracy;convection-diffusion problems;second order characteristic finite element scheme |
|
train_1549 | Riccati-based preconditioner for computing invariant subspaces of large matrices | This paper introduces and analyzes the convergence properties of a method that computes an approximation to the invariant subspace associated with a group of eigenvalues of a large, not necessarily diagonalizable, matrix. The method belongs to the family of projection type methods. At each step, it refines the approximate invariant subspace using a linearized Riccati equation which turns out to be the block analogue of the correction used in the Jacobi-Davidson method. The analysis conducted in this paper shows that the method converges at a quasi-quadratic rate provided that the approximate invariant subspace is close to the exact one. The implementation of the method based on multigrid techniques is also discussed and numerical experiments are reported | riccati-based preconditioner;invariant subspaces;jacobi-davidson method;projection type methods;eigenvalues;multigrid techniques;large matrices;diagonalizable matrix |
|
train_155 | Fuzzy non-homogeneous Markov systems | In this paper the theory of fuzzy logic and fuzzy reasoning is combined with the theory of Markov systems and the concept of a fuzzy non-homogeneous Markov system is introduced for the first time. This is an effort to deal with the uncertainty introduced in the estimation of the transition probabilities and the input probabilities in Markov systems. The asymptotic behaviour of the fuzzy Markov system and its asymptotic variability is considered and given in closed analytic form. Moreover, the asymptotically attainable structures of the system are estimated also in a closed analytic form under some realistic assumptions. The importance of this result lies in the fact that in most cases the traditional methods for estimating the probabilities can not be used due to lack of data and measurement errors. The introduction of fuzzy logic into Markov systems represents a powerful tool for taking advantage of the symbolic knowledge that the experts of the systems possess | uncertainty;measurement errors;fuzzy reasoning;symbolic knowledge;transition probabilities;fuzzy nonhomogeneous markov systems;probability theory;asymptotic variability;input probabilities;fuzzy logic |
|
train_1550 | On the convergence of the Bermudez-Moreno algorithm with constant parameters | A. Bermudez and C. Moreno (1981) presented a duality numerical algorithm for solving variational inequalities of the second kind. The performance of this algorithm strongly depends on the choice of two constant parameters. Assuming a further hypothesis of the inf-sup type, we present here a convergence theorem that improves on the one presented by A. Bermudez and C. Moreno. We prove that the convergence is linear, and we give the expression of the asymptotic error constant and the explicit form of the optimal parameters, as a function of some constants related to the variational inequality. Finally, we present some numerical examples that confirm the theoretical results | bermudez-moreno algorithm;convergence theorem;optimal parameters;duality numerical algorithm;asymptotic error constant;variational inequalities;constant parameters |