name (stringlengths 7-10) | title (stringlengths 13-125) | abstract (stringlengths 67-3.02k) | keywords (stringlengths 17-734)
---|---|---|---
train_538 | Polarization of the RF field in a human head at high field: a study with a quadrature surface coil at 7.0 T | The RF field intensity distribution in the human brain becomes inhomogeneous due to wave behavior at high field. This is further complicated by the spatial distribution of RF field polarization that must be considered to predict image intensity distribution. An additional layer of complexity is involved when a quadrature coil is used for transmission and reception. To study such complicated RF field behavior, a computer modeling method was employed to investigate the RF field of a quadrature surface coil at 300 MHz. Theoretical and experimental results for a phantom and the human head at 7.0 T are presented. The results are theoretically important and practically useful for high-field quadrature coil design and application | human brain;whole-body mri;reception fields;quadrature surface coil;phantom samples;7.0 t;high field mri;finite difference time domain method;segmented images;spatial distribution;transmission fields;rf field intensity distribution;high-field coil design;computer modeling;gradient echo images;300 mhz;image intensity distribution;3d multitissue head model;rf field polarization;maxwell wave equations |
train_539 | Perfusion quantification using Gaussian process deconvolution | The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values, and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio (SNR) increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes | perfusion quantification;gaussian process deconvolution;residual impulse response function;likelihood function;singular value decomposition;mean transit time;capillary blood flow;correlation length;dynamic susceptibility contrast mri;noise level;optimized joint gaussian distribution;optimized model parameters |
train_54 | Controls help harmonic spray do OK removing residues | Looks at how innovative wafer-cleaning equipment hit the market in a timely fashion thanks in part to controls maker Rockwell Automation | rockwell automation;residues removal;allen-bradley 1336 plus ii variable frequency ac drives;wafer-cleaning equipment;harmonic spray;allen-bradley controllogix automation control platform;motion control;psi machine |
train_540 | Ventilation-perfusion ratio of signal intensity in human lung using oxygen-enhanced and arterial spin labeling techniques | This study investigates the distribution of ventilation-perfusion (V/Q) signal intensity (SI) ratios using oxygen-enhanced and arterial spin labeling (ASL) techniques in the lungs of 10 healthy volunteers. Ventilation and perfusion images were simultaneously acquired using the flow-sensitive alternating inversion recovery (FAIR) method as volunteers alternately inhaled room air and 100% oxygen. Images of the T/sub 1/ distribution were calculated for five volunteers for both selective (T/sub 1f/) and nonselective (T/sub 1/) inversion. The average T/sub 1/ was 1360 ms+or-116 ms, and the average T/sub 1f/ was 1012 ms+or-112 ms, yielding a difference that is statistically significant (P<0.002). Excluding large pulmonary vessels, the average V/Q SI ratios were 0.355+or-0.073 for the left lung and 0.371+or-0.093 for the right lung, which are in agreement with the theoretical V/Q SI ratio. Plots of the V/Q SI ratio are similar to the logarithmic normal distribution obtained by multiple inert gas elimination techniques, with a range of ratios matching ventilation and perfusion. This MRI V/Q technique is completely noninvasive and does not involve ionizing radiation. A limitation of this method is the nonsimultaneous acquisition of perfusion and ventilation data, with oxygen administered only for the ventilation data | multiple inert gas elimination;oxygen-enhanced techniques;ventilation-perfusion ratio;time delay;gas exchange efficiency;logarithmic normal distribution;mri;ventilation images;chronic obstructive pulmonary disease;pixel-by-pixel maps;pulmonary embolism;arterial spin labeling techniques;perfusion images;flow-sensitive alternating inversion recovery;nonsimultaneous acquisition;pathomechanisms;signal intensity;human lung |
train_541 | Virtual-reality-based multidimensional therapy for the treatment of body image disturbances in binge eating disorders: a preliminary controlled study | The main goal of this paper is to preliminarily evaluate the efficacy of a virtual-reality (VR)-based multidimensional approach in the treatment of body image attitudes and related constructs. The female binge eating disorder (BED) patients (n=20), involved in a residential weight control treatment including low-calorie diet (1200 cal/day) and physical training, were randomly assigned either to the multidimensional VR treatment or to psychonutritional groups based on the cognitive-behavior approach. Patients were administered a battery of outcome measures assessing eating disorder symptomatology, attitudes toward food, body dissatisfaction, level of anxiety, motivation for change, level of assertiveness, and general psychiatric symptoms. In the short term, the VR treatment was more effective than the traditional cognitive-behavioral psychonutritional groups in improving the overall psychological state of the patients. In particular, the therapy was more effective in improving body satisfaction, self-efficacy, and motivation for change. No significant differences were found in the reduction of the binge eating behavior. The possibility of inducing a significant change in body image and its associated behaviors using a VR-based short-term therapy can be useful to improve the body satisfaction in traditional weight reduction programs. However, given that this research does not include a follow-up study, the results obtained are preliminary only | obesity;virtual reality;body image disturbances;multidimensional therapy;psychiatric symptoms;psychonutritional groups;anxiety;cognitive-behavior approach;residential weight control treatment;binge eating disorders;patient therapy |
train_542 | The treatment of fear of flying: a controlled study of imaginal and virtual reality graded exposure therapy | The goal of this study was to determine whether virtual reality graded exposure therapy (VRGET) was more, equally, or less efficacious than imaginal exposure therapy in the treatment of fear of flying. Thirty participants (Age=39.8+or-9.7) with confirmed DSM-IV diagnosis of specific phobia fear of flying were randomly assigned to one of three groups: VRGET with no physiological feedback (VRGETno), VRGET with physiological feedback (VRGETpm), or systematic desensitization with imaginal exposure therapy (IET). Eight sessions were conducted once a week. During each session, physiology was measured to give an objective measurement of improvement over the course of exposure therapy. In addition, self-report questionnaires, subjective ratings of anxiety (SUDs), and behavioral observations (included here as flying behavior before beginning treatment and at a three-month posttreatment followup) were included. In the analysis of results, the Chi-square test of behavioral observations based on a three-month posttreatment followup revealed a statistically significant difference in flying behavior between the groups [chi /sup 2/(4)=19.41, p<0.001]. Only one participant (10%) who received IET, eight of the ten participants (80%) who received VRGETno, and ten out of the ten participants (100%) who received VRGETpm reported an ability to fly without medication or alcohol at three-month followup. Although this study included small sample sizes for the three groups, the results showed VRGET was more effective than IET in the treatment of fear of flying. It also suggests that physiological feedback may add to the efficacy of VR treatment | chi-square test;flying fear;phobia;virtual reality graded exposure therapy;physiological feedback;patient treatment;behavioral observations;imaginal exposure therapy;physiology;questionnaires;subjective ratings of anxiety |
train_543 | The development of virtual reality therapy (VRT) system for the treatment of acrophobia and therapeutic case | Virtual reality therapy (VRT), based on sophisticated virtual reality technology, has been used in the treatment of subjects diagnosed with acrophobia, a disorder that is characterized by marked anxiety upon exposure to heights and avoidance of heights. Conventional VR systems for the treatment of acrophobia have limitations, such as over-costly devices or somewhat unrealistic graphic scenes. The goal of this study was to develop an inexpensive and more realistic virtual environment (VE) in which to perform exposure therapy for acrophobia. It is based on a personal computer, and a virtual scene of a bungee-jump tower in the middle of a large city. The virtual scenario includes an open lift surrounded by props beside a tower, which allows the patient to feel a sense of heights. The effectiveness of the VE was evaluated through the clinical treatment of a subject who was suffering from the fear of heights. As a result, it was proved that this VR environment was effective and realistic at overcoming acrophobia, according not only to the comparison results of a variety of questionnaires before and after treatment but also to the subject's comments that the VE seemed to evoke more fearful feelings than the real situation | virtual scene;patient treatment;heights phobia;clinical treatment;exposure therapy;acrophobia treatment;psychotherapy;patient anxiety;therapeutic case;virtual reality therapy system;personal computer;realistic virtual environment |
train_544 | Virtual reality treatment of flying phobia | Flying phobia (FP) might become a very incapacitating and disturbing problem in a person's social, working, and private areas. Psychological interventions based on exposure therapy have proved to be effective, but given the particular nature of this disorder they bear important limitations. Exposure therapy for FP might be excessively costly in terms of time, money, and efforts. Virtual reality (VR) overcomes these difficulties as different significant environments might be created, where the patient can interact with what he or she fears while in a totally safe and protected environment, the therapist's consulting room. This paper intends, on one hand, to show the different scenarios designed by our team for the VR treatment of FP, and on the other, to present the first results supporting the effectiveness of this new tool for the treatment of FP in a multiple baseline study | medical virtual reality;anxiety disorders;patient treatment;psychology;exposure therapy;flying phobia;psychological interventions;virtual exposure |
train_545 | Interaction and presence in the clinical relationship: virtual reality (VR) as communicative medium between patient and therapist | The great potential offered by virtual reality (VR) to clinical psychologists derives prevalently from the central role, in psychotherapy, occupied by the imagination and by memory. These two elements, which are fundamental in our life, present absolute and relative limits to the individual potential. Using VR as an advanced imaginal system, an experience that is able to reduce the gap existing between imagination and reality, it is possible to transcend these limits. In this sense, VR can improve the efficacy of a psychological therapy for its capability of reducing the distinction between the computer's reality and the conventional reality. This synthetic imaginal experience has two core characteristics: the perceptual illusion of nonmediation and the possibility of building and sharing a common ground. In this sense, experiencing presence in a clinical virtual environment (VE), such as a shared virtual hospital, requires more than reproduction of the physical features of external reality. It requires the creation and sharing of the cultural web that makes meaningful, and therefore visible, both people and objects populating the environment. The paper outlines a framework for supporting the development and tuning of clinically oriented VR systems | psychological therapy;shared virtual hospital;virtual reality;patient-therapist communication;presence;imagination;psychotherapy;clinical psychology;memory;clinical virtual environment |
train_546 | Real-time quasi-2-D inversion of array resistivity logging data using neural network | We present a quasi-2-D real-time inversion algorithm for a modern galvanic array tool via dimensional reduction and neural network simulation. Using reciprocity and superposition, we apply a numerical focusing technique to the unfocused data. The numerically focused data are much less subject to 2-D and layering effects and can be approximated as from a cylindrical 1-D Earth. We then perform 1-D inversion on the focused data to provide approximate information about the 2-D resistivity structure. A neural network is used to perform forward modeling in the 1-D inversion, which is several hundred times faster than conventional numerical forward solutions. Testing our inversion algorithm on both synthetic and field data shows that this fast inversion algorithm is useful for providing formation resistivity information at a well site | forward modeling;array resistivity logging data;formation resistivity;unfocused data;superposition;real-time quasi-2-d inversion;neural network;well site;numerical focusing technique;focused data;galvanic array tool;1-d inversion;reciprocity;real-time inversion algorithm;dimensional reduction |
train_547 | Excess energy [cooling system] | The designers retrofitting a comfort cooling system to offices in Hertfordshire have been able to make use of the waste heat rejected. What's more, they're now making it a standard solution for much larger projects | waste heat;comfort cooling system;air conditioning;nationwide trust |
train_548 | Cool and green [air conditioning] | In these days of global warming, air conditioning engineers need to specify not just for the needs of the occupants, but also to maximise energy efficiency. Julian Brunnock outlines the key areas to consider for energy efficient air conditioning systems | energy efficiency;air conditioning |
train_549 | Taking it to the max [ventilation systems] | Raising the volumetric air supply rate is one way of increasing the cooling capacity of displacement ventilation systems. David Butler and Michael Swainson explore how different types of diffusers can help make this work | displacement ventilation systems;diffusers;cooling capacity;volumetric air supply rate |
train_55 | Self-testing chips take a load off ATE | Looks at how chipmakers get more life out of automatic test equipment by embedding innovative circuits in silicon | innovative circuits;embedded deterministic testing technique;automatic test equipment;design-for-test techniques;ate;self-testing chips |
train_550 | Market watch - air conditioning | After a boom period in the late nineties, the air conditioning market finds itself in something of a lull at present, but manufacturers aren't panicking | air conditioning;market |
train_551 | Access privilege management in protection systems | We consider the problem of managing access privileges on protected objects. We associate one or more locks with each object, one lock for each access right defined by the object type. Possession of an access right on a given object is certified by possession of a key for this object, if this key matches one of the object locks. We introduce a number of variants to this basic key-lock technique. Polymorphic access rights make it possible to decrease the number of keys required to certify possession of complex access privileges that are defined in terms of several access rights. Multiple locks on the same access right allow us to exercise forms of selective revocation of access privileges. A lock conversion function can be used to reduce the number of locks associated with any given object to a single lock. The extent of the results obtained is evaluated in relation to alternative methodologies for access privilege management | protection systems;protected objects;key-lock technique;locks;selective revocation;lock conversion function;polymorphic access rights;complex access privilege possession certification;access privilege management |
train_552 | Anatomy of the coupling query in a Web warehouse | To populate a data warehouse specifically designed for Web data, i.e. Web warehouse, it is imperative to harness relevant documents from the Web. In this paper, we describe a query mechanism called coupling query to glean relevant Web data in the context of our Web warehousing system called Warehouse Of Web Data (WHOWEDA). A coupling query may be used for querying both HTML and XML documents. Important features of our query mechanism are the ability to query metadata, content, internal and external (hyperlink) structure of Web documents based on partial knowledge, ability to express constraints on tag attributes and tagless segment of data, ability to express conjunctive as well as disjunctive query conditions compactly, ability to control execution of a Web query and preservation of the topological structure of hyperlinked documents in the query results. We also discuss how to formulate a query graphically and in textual form using a coupling graph and coupling text, respectively | content;internal structure;partial knowledge;html documents;coupling text;hyperlinked documents;graphical query formulation;web warehouse;execution control;metadata;textual query formulation;tag attributes;web documents;data warehouse;warehouse of web data;xml documents;topological structure;conjunctive query conditions;coupling query;tagless segment;disjunctive query conditions;external structure |
train_553 | Application of traditional system design techniques to Web site design | After several decades of computer program construction there emerged a set of principles that provided guidance to produce more manageable programs. With the emergence of the plethora of Internet web sites one wonders if similar guidelines are followed in their construction. Since this is a new technology no apparent universally accepted methods have emerged to guide the designer in Web site construction. This paper reviews the traditional principles of structured programming and the preferred characteristics of Web sites. Finally a mapping of how the traditional guidelines may be applied to Web site construction is presented. The application of the traditional principles of structured programming to the design of a Web site can provide a more usable site for the visitors to the site. The additional benefit of using these time-honored techniques is the creation of a Web site that will be easier to maintain by the development staff | structured programming;system design techniques;internet web site design |
train_554 | A scalable and lightweight QoS monitoring technique combining passive and active approaches: on the mathematical formulation of CoMPACT monitor | To make a scalable and lightweight QoS monitoring system, we (2002) have proposed a new QoS monitoring technique, called the change-of-measure based passive/active monitoring (CoMPACT Monitor), which is based on the change-of-measure framework and is an active measurement transformed by using passively monitored data. This technique enables us to measure detailed QoS information for individual users, applications and organizations, in a scalable and lightweight manner. In this paper, we present the mathematical foundation of CoMPACT Monitor. In addition, we show its characteristics through simulations in terms of typical implementation issues for inferring the delay distributions. The results show that CoMPACT Monitor gives accurate QoS estimations with only a small amount of extra traffic for active measurement | active monitoring;internet;network performance;quality of service;qos monitoring;delay distributions;passive monitoring;compact monitor;change-of-measure |
train_555 | Computing transient gating charge movement of voltage-dependent ion channels | The opening of voltage-gated sodium, potassium, and calcium ion channels has a steep relationship with voltage. In response to changes in the transmembrane voltage, structural movements of an ion channel that precede channel opening generate a capacitative gating current. The net gating charge displacement due to membrane depolarization is an index of the voltage sensitivity of the ion channel activation process. Understanding the molecular basis of voltage-dependent gating of ion channels requires the measurement and computation of the gating charge, Q. We derive a simple and accurate semianalytic approach to computing the voltage dependence of transient gating charge movement (Q-V relationship) of discrete Markov state models of ion channels using matrix methods. This approach allows rapid computation of Q-V curves for finite and infinite length step depolarizations and is consistent with experimentally measured transient gating charge. This computational approach was applied to Shaker potassium channel gating, including the impact of inactivating particles on potassium channel gating currents | ion channels;markov state model;inactivation;charge movement;gating current;immobilization;action potentials;transmembrane voltage;transient gating charge movement |
train_556 | Coarse-grained reduction and analysis of a network model of cortical response: I. Drifting grating stimuli | We present a reduction of a large-scale network model of visual cortex developed by McLaughlin, Shapley, Shelley, and Wielaard. The reduction is from many integrate-and-fire neurons to a spatially coarse-grained system for firing rates of neuronal subpopulations. It accounts explicitly for spatially varying architecture, ordered cortical maps (such as orientation preference) that vary regularly across the cortical layer, and disordered cortical maps (such as spatial phase preference or stochastic input conductances) that may vary widely from cortical neuron to cortical neuron. The result of the reduction is a set of nonlinear spatiotemporal integral equations for "phase-averaged" firing rates of neuronal subpopulations across the model cortex, derived asymptotically from the full model without the addition of any extra phenomenological constants. This reduced system is used to study the response of the model to drifting grating stimuli - where it is shown to be useful for numerical investigations that reproduce, at far less computational cost, the salient features of the point-neuron network and for analytical investigations that unveil cortical mechanisms behind the responses observed in the simulations of the large-scale computational model. For example, the reduced equations clearly show (1) phase averaging as the source of the time-invariance of cortico-cortical conductances, (2) the mechanisms in the model for higher firing rates and better orientation selectivity of simple cells which are near pinwheel centers, (3) the effects of the length-scales of cortico-cortical coupling, and (4) the role of noise in improving the contrast invariance of orientation selectivity | phase-averaged firing rates;visual cortex;neuronal networks;point-neuron network;coarse-graining;large-scale network model;dynamics;orientation selectivity;nonlinear spatiotemporal integral equations |
train_557 | Noise and the PSTH response to current transients: II. Integrate-and-fire model with slow recovery and application to motoneuron data | For pt.I see ibid., vol.11, no.2, p.135-151 (2001). A generalized version of the integrate-and-fire model is presented that qualitatively reproduces firing rates and membrane trajectories of motoneurons. The description is based on the spike-response model and includes three different time constants: the passive membrane time constant, a recovery time of the input conductance after each spike, and a time constant of the spike afterpotential. The effect of stochastic background input on the peristimulus time histogram (PSTH) response to spike input is calculated analytically. Model results are compared with the experimental data of Poliakov et al. (1996). The linearized theory shows that the PSTH response to an input spike is proportional to a filtered version of the postsynaptic potential generated by the input spike. The shape of the filter depends on the background activity. The full nonlinear theory is in close agreement with simulated PSTH data | passive membrane time constant;spike-response model;spike afterpotential;recovery time;psth;membrane trajectories;integrate-and-fire model;motoneuron;firing rates |
train_558 | OS porting and application development for SoC | To deliver improved usability in high-end portable consumer products, the use of an appropriate consumer operating system (OS) is becoming far more widespread. Using a commercially supported OS also vastly increases the availability of supported applications. For the device developer, this trend adds major complexity to the problem of system implementation. Porting a complete operating system to a new hardware design adds significantly to the development burden, increasing both time-to-market and expense. Even for those familiar with the integration of a real-time OS, the porting, validation and support of a complex platform OS is a formidable task | consumer operating system;application development;os porting;hardware design |
train_559 | Is open source more or less secure? | Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection programme success depends more on prudent management decisions than on the selection of technical safeguards. The paper takes a management view of protection and seeks to reconcile the need for security with the limitations of technology | open source software security;attack technology;management;computer networks;data security;commercial technical protection |
train_56 | New thinking on rendering | Looks at how graphics hardware solves a range of rendering problems | color values;graphics hardware;rendering;gourand-shaded image;programmability |
train_560 | Citizen centric identity management: chip tricks? | Accelerating and harmonizing the diffusion and acceptance of electronic services in Europe in a secure and practical way has become a priority of several initiatives in the past few years and a critical factor for citizen and business information society services. As identification and authentication is a critical element in accessing public services, the combination of public key infrastructure (PKI) and smart cards emerges as the solution of choice for eGovernment in Europe. National governments and private initiatives alike vouch their support for this powerful combination to deliver an essential layer of reliable electronic services and address identity requirements in a broad range of application areas. A recent study suggests that several eGovernment implementations point to the direction of electronic citizen identity management as an up and coming challenge. The paper discusses the eGovernment needs for user identification applicability and the need for standardization | public key infrastructure;electronic services;legal framework;user identification;citizen centric identity management;business information services;standardization;government;smart cards;authentication;public information services |
train_561 | SubSeven's Honey Pot program | A serious security threat today is malicious executables, especially new, unseen malicious executables often arriving as email attachments. These new malicious executables are created at the rate of thousands every year and pose a serious threat. Current anti-virus systems attempt to detect these new malicious programs with heuristics generated by hand. This approach is costly and often ineffective. We introduce the Trojan Horse SubSeven, its capabilities and influence over intrusion detection systems. A Honey Pot program is implemented, simulating the SubSeven Server. The Honey Pot program provides feedback and stores data to and from the SubSeven client | email attachments;malicious executables;trojan horse;security threat;subseven;intrusion detection systems;honey pot program;anti-virus systems |
train_562 | The Advanced Encryption Standard - implementation and transition to a new cryptographic benchmark | Cryptography is the science of coding information to create unintelligible ciphers that conceal or hide messages. The process that achieves this goal is commonly referred to as encryption. Although encryption processes of various forms have been employed for centuries to protect the exchange of messages, the advent of the information age has underscored the importance of strong cryptography as a process to secure data exchanged through electronic means, and has accentuated the demand for products offering these services. This article describes the process that has led to the development of the latest cryptographic benchmark: the Advanced Encryption Standard (AES). The article briefly examines the requirements set forth for its development, defines how the new standard is implemented, and describes how government, business, and industry can transition to AES with minimum impact to operations | cryptographic benchmark;coding;business;industry;government;aes;advanced encryption standard;data exchange;unintelligible ciphers |
train_563 | Getting the most out of intrusion detection systems | Intrusion detection systems (IDS) can play a very valuable role in the defence of a network. However, it is important to understand not just what it will do (and how it does it) - but what it won't do (and why). This article does not go into the technical working of IDS in too much detail, rather it limits itself to a discussion of some of the capabilities and failings of the technology | network attacks;intrusion detection systems;firewall;computer network security |
train_564 | Development of a computer-aided manufacturing system for profiled edge lamination tooling | Profiled edge lamination (PEL) tooling is a promising rapid tooling (RT) method involving the assembly of an array of laminations whose top edges are simultaneously profiled and beveled based on a CAD model of the intended tool surface. To facilitate adoption of this RT method by industry, a comprehensive PEL tooling development system is proposed. The two main parts of this system are: (1) iterative tool design based on thermal and structural models; and (2) fabrication of the tool using computer-aided manufacturing (CAM) software and abrasive water jet cutting. CAM software has been developed to take lamination slice data (profiles) from any proprietary RP software in the form of polylines and create smooth, kinematically desirable cutting trajectories for each tool lamination. Two cutting trajectory algorithms, called identical equidistant profile segmentation and adaptively vector profiles projection (AVPP), were created for this purpose. By comparing the performance of both algorithms with a benchmark part shape, the AVPP algorithm provided better cutting trajectories for complicated tool geometries. A 15-layer aluminum PEL tool was successfully fabricated using a 5-axis CNC AWJ cutter and NC code generated by the CAM software | cam software;profiled edge lamination tooling;computer aided manufacturing;abrasive water jet cutting;cutting trajectory algorithms;identical equidistant profile segmentation;rapid tooling;adaptively vector profiles projection |
train_565 | Control of thin film growth in chemical vapor deposition manufacturing systems: a feasibility study | A study is carried out to design and optimize chemical vapor deposition (CVD) systems for material fabrication. Design and optimization of the CVD process is necessary to satisfy strong global demand and ever increasing quality requirements for thin film production. Advantages of computer aided optimization include fast design turnaround, flexibility to explore a larger design space and the development and adaptation of automation techniques for design and optimization. A CVD reactor consisting of a vertical impinging jet at atmospheric pressure, for growing titanium nitride films, is studied for thin film deposition. Numerical modeling and simulation are used to determine the rate of deposition and film uniformity over a wide range of design variables and operating conditions. These results are used for system design and optimization. The optimization procedure employs an objective function characterizing film quality, productivity and operational costs based on reactor gas flow rate, susceptor temperature and precursor concentration. Parameter space mappings are used to determine the design space, while a minimization algorithm, such as the steepest descent method, is used to determine optimal operating conditions for the system. The main features of computer aided design and optimization using these techniques are discussed in detail | material fabrication;operational costs;titanium nitride films;optimization;tin;precursor concentration;susceptor temperature;thin film growth;film quality;reactor gas flow rate;parameter space mappings;chemical vapor deposition |
train_566 | Sensing and control of double-sided arc welding process | The welding industry is driven to improve productivity without sacrificing quality. For thick material welding, the current practice is to use backing or multiple passes. The laser welding process, capable of achieving deep narrow penetration, can significantly improve welding productivity for such applications by reducing the number of passes. However, its competitiveness in comparison with traditional arc welding is weakened by its high cost, strict fit-up requirement, and difficulty in welding large structures. In this work, a different method, referred to as double-sided arc welding (DSAW), is developed to improve the arc concentration for arc welding. A sensing and control system is developed to achieve deep narrow penetration under variations in welding conditions. Experiments verified that the pulsed keyhole DSAW system developed is capable of achieving deep narrow penetration on a 1/2 inch thick square butt joint in a single pass | process control;laser welding process;control system;controlled pulse keyhole;energy density;thick material welding;double-sided arc welding |
train_567 | Hidden Markov model-based tool wear monitoring in turning | This paper presents a new modeling framework for tool wear monitoring in machining processes using hidden Markov models (HMMs). Feature vectors are extracted from vibration signals measured during turning. A codebook is designed and used for vector quantization to convert the feature vectors into a symbol sequence for the hidden Markov model. A series of experiments are conducted to evaluate the effectiveness of the approach for different lengths of training data and observation sequence. Experimental results show that successful tool state detection rates as high as 97% can be achieved by using this approach | tool wear monitoring;tool state detection;hidden markov models;machining processes;hmm training;feature extraction;turning process;codebook;discrete wavelet transform;vector quantization;vibration signals |
train_568 | Modeling cutting temperatures for turning inserts with various tool geometries and materials | Temperatures are of interest in machining because cutting tools often fail by thermal softening or temperature-activated wear. Many models for cutting temperatures have been developed, but these models consider only simple tool geometries such as a rectangular slab with a sharp corner. This report describes a finite element study of tool temperatures in cutting that accounts for tool nose radius and included angle effects. A temperature correction factor model that can be used in the design and selection of inserts is developed to account for these effects. A parametric mesh generator is used to generate the finite element models of tool and inserts of varying geometries. The steady-state temperature response is calculated using the NASTRAN solver. Several finite element analysis (FEA) runs are performed to quantify the effects of insert included angle, nose radius, and materials for the insert and the tool holder on the cutting temperature at the insert rake face. The FEA results are then utilized to develop a temperature correction factor model that accounts for these effects. The temperature correction factor model is integrated with an analytical temperature model for rectangular inserts to predict cutting temperatures for contour turning with inserts of various shapes and nose radii. Finally, experimental measurements of cutting temperature using the tool-work thermocouple technique are performed and compared with the predictions of the new temperature model. The comparisons show good agreement | cutting temperature model;turning inserts;insert shape effects;parametric mesh generator;tool nose radius;temperature correction factor;tool geometries;machining;finite element models |
train_569 | Application of an internally consistent material model to determine the effect of tool edge geometry in orthogonal machining | It is well known that the edge geometry of a cutting tool affects the forces measured in metal cutting. Two experimental methods have been suggested in the past to extract the ploughing (non-cutting) component from the total measured force: (1) the extrapolation approach, and (2) the dwell force technique. This study reports the behavior of zinc during orthogonal machining using tools of controlled edge radius. Applications of both the extrapolation and dwell approaches show that neither produces an analysis that yields a material response consistent with the known behavior of zinc. Further analysis shows that the edge geometry modifies the shear zone of the material and thereby modifies the forces. When analyzed this way, the measured force data yield the expected material response without requiring recourse to an additional ploughing component | edge geometry;metal cutting;zinc;dwell force;cutting tool;tool edge geometry;ploughing component;orthogonal machining;extrapolation |
train_57 | Speaker adaptive modeling by vocal tract normalization | This paper presents methods for speaker adaptive modeling using vocal tract normalization (VTN) along with experimental tests on three databases. We propose a new training method for VTN: By using single-density acoustic models per HMM state for selecting the scale factor of the frequency axis, we avoid the problem that a mixture-density tends to learn the scale factors of the training speakers and thus cannot be used for selecting the scale factor. We show that using single Gaussian densities for selecting the scale factor in training results in lower error rates than using mixture densities. For the recognition phase, we propose an improvement of the well-known two-pass strategy: by using a non-normalized acoustic model for the first recognition pass instead of a normalized model, lower error rates are obtained. In recognition tests, this method is compared with a fast variant of VTN. The two-pass strategy is an efficient method, but it is suboptimal because the scale factor and the word sequence are determined sequentially. We found that for telephone digit string recognition this suboptimality reduces the VTN gain in recognition performance by 30% relative. In summary, on the German spontaneous speech task Verbmobil, the WSJ task and the German telephone digit string corpus SieTill, the proposed methods for VTN reduce the error rates significantly | two-pass strategy;training method;verlimobil;german spontaneous speech task;error rate reduction;sietill;training speakers;wsj task;single-density acoustic models;telephone digit string recognition;speaker adaptive modeling;hmm state;databases;vocal tract normalization;training results;single gaussian densities;frequency scale factor;nonnormalized acoustic model;word sequence;german telephone digit string corpus |
train_570 | Prediction and compensation of dynamic errors for coordinate measuring machines | Coordinate measuring machines (CMMs) are already widely utilized as measuring tools in the modern manufacturing industry. Rapidly approaching now is the trend for next-generation CMMs. However, the increases in measuring velocity of CMM applications are limited by dynamic errors that occur in CMMs. In this paper a systematic approach for modeling the dynamic errors of a touch-trigger probe CMM is developed through theoretical analysis and experimental study. An overall analysis of the dynamic errors of CMMs is conducted, with weak components of the CMM identified by a laser interferometer. The probing process, as conducted with a touch-trigger probe, is analyzed. The dynamic errors are measured, modeled, and predicted using neural networks. The results indicate that, using this model, it is possible to compensate for the dynamic errors of CMMs | compensation;dynamic errors;inertial forces;neural networks;touch-trigger probe;laser interferometer;manufacturing industry;coordinate measuring machines |
train_571 | Control of transient thermal response during sequential open-die forging: a trajectory optimization approach | A trajectory optimization approach is applied to the design of a sequence of open-die forging operations in order to control the transient thermal response of a large titanium alloy billet. The amount of time the billet is soaked in the furnace prior to each successive forging operation is optimized to minimize the total process time while simultaneously satisfying constraints on the maximum and minimum values of the billet temperature distribution to avoid microstructural defects during forging. The results indicate that a "differential" heating profile is the most effective at meeting these design goals | open-die forging;microstructural defects;temperature distribution;transient thermal response control;titanium alloy billet;heating profile;trajectory optimization |
train_572 | Characterization of sheet buckling subjected to controlled boundary constraints | A wedge strip test is designed to study the onset and post-buckling behavior of a sheet under various boundary constraints. The device can be easily incorporated into a conventional tensile test machine, and material resistance to buckling is measured as the buckling height versus the in-plane strain state. The design yields different but consistent buckling modes with easy changes of boundary conditions (either clamped or freed) and sample geometry. Experimental results are then used to verify a hybrid approach to buckling prediction, i.e., the combination of the FEM analysis and an energy-based analytical wrinkling criterion. The FEM analysis is used to obtain the stress field and deformed geometry in a complex forming condition, while the analytical solution is to provide the predictions less sensitive to artificial numerical parameters. A good agreement between experimental data and numerical predictions is obtained | boundary constraints;forming processes;wedge strip test;energy-based analytical wrinkling criterion;strain state;finite element analysis;stress field;sheet buckling;deformed geometry;tensile test machine |
train_573 | ECG-gated /sup 18/F-FDG positron emission tomography. Single test evaluation of segmental metabolism, function and contractile reserve in patients with coronary artery disease and regional dysfunction | /sup 18/F-fluorodeoxyglucose (/sup 18/F-FDG) positron emission tomography (PET) provides information about myocardial glucose metabolism to diagnose myocardial viability. Additional information about the functional status is necessary. Comparison of tomographic metabolic PET with data from other imaging techniques is always hampered by some transfer uncertainty and scatter. We wanted to evaluate a new Fourier-based ECG-gated PET technique using a high resolution scanner providing both metabolic and functional data with respect to feasibility in patients with diseased left ventricles. Forty-five patients with coronary artery disease and at least one left ventricular segment with severe hypokinesis or akinesis at biplane cineventriculography were included. A new Fourier-based ECG-gated metabolic /sup 18/F-FDG PET was performed in these patients. Function at rest and /sup 18/F-FDG uptake were examined in the PET study using a 36-segment model. Segmental comparison with ventriculography revealed a high reliability in identifying dysfunctional segments (>96%). /sup 18/F-FDG uptake of normokinetic/hypokinetic/akinetic segments was 75.4+or-7.5, 65.3+or-10.5, and 35.9+or-15.2% (p<0.001). In segments with >or=70% /sup 18/F-FDG uptake, no akinesia was observed. No residual function was found below 40% /sup 18/F-FDG uptake. An additional dobutamine test was performed and revealed inotropic reserve (viability) in 42 akinetic segments and 45 hypokinetic segments. ECG-gated metabolic PET with pixel-based Fourier smoothing provides reliable data on regional function. Assessment of metabolism and function makes complete judgement of segmental status feasible within a single study without any transfer artefacts or test-to-test variability. The results indicate the presence of considerable amounts of viable myocardium in regions with an uptake of 40-50% /sup 18/F-FDG | normokinetic/hypokinetic/akinetic segments;left ventricular segment;myocardial glucose metabolism;fourier-based ecg-gated metabolic /sup 18/f-fluorodeoxyglucose-positron emission tomography;severe hypokinesis;/sup 18/f-fluorodeoxyglucose uptake;diseased left ventricles;myocardial viability;regional dysfunction;functional;regional function;high resolution scanner;inotropic reserve;transfer uncertainty;residual function;ventriculography;hypokinetic segments;akinesis;transfer artefacts;dysfunctional segments;coronary artery disease;segmental status;biplane cineventriculography;dobutamine test;pixel-based fourier smoothing;patients;akinetic segments;fourier-based ecg-gated pet technique;thirty six-segment model;viable myocardium |
train_574 | A novel approach for the detection of pathlines in X-ray angiograms: the wavefront propagation algorithm | Presents a new pathline approach, based on the wavefront propagation principle, and developed in order to reduce the variability in the outcomes of the quantitative coronary artery analysis. This novel approach, called wavepath, reduces the influence of the user-defined start- and endpoints of the vessel segment and is therefore more robust and improves the reproducibility of the lesion quantification substantially. The validation study shows that the wavepath method is totally constant in the middle part of the pathline, even when using the method for constructing a bifurcation or sidebranch pathline. Furthermore, the number of corrections needed to guide the wavepath through the correct vessel is decreased from an average of 0.44 corrections per pathline to an average of 0.12 per pathline. Therefore, it can be concluded that the wavepath algorithm improves the overall analysis substantially | sidebranch pathline;bifurcation;lesion quantification;x-ray angiograms;correct vessel;wavefront propagation algorithm;wavefront propagation principle;quantitative coronary artery analysis;user-defined endpoints;corrections;user-defined startpoints;vessel segment;wavepath method |
train_575 | A new voltage-vector selection algorithm in direct torque control of induction motor drives | AC drives based on direct torque control of induction machines allow high dynamic performance to be obtained with very simple control schemes. The drive behavior, in terms of current, flux and torque ripple, is dependent on the utilised voltage vector selection strategy and the operating conditions. In this paper a new voltage vector selection algorithm, which allows a sensible reduction of the RMS value of the stator current ripple without increasing the average value of the inverter switching frequency and without the need of a PWM pulse generator block, is presented. Numerical simulations have been carried out to validate the proposed method | high dynamic performance;torque variations;flux variations;rms value;stator current ripple;induction motor drives;operating conditions;direct torque control;50 hz;voltage-vector selection algorithm;ac drives;4-poles induction motor;inverter switching frequency;torque ripple;steady-state operation;torque step response;voltage vector selection strategy;dynamic behavior;4 kw;220 v |
train_576 | Application of Sugeno fuzzy-logic controller to the stator field-oriented doubly-fed asynchronous motor drive | This study deals with the application of fuzzy-control theory to a wound-rotor asynchronous motor with both its stator and rotor fed by two PWM voltage-source inverters, in which the system operates in stator field-oriented control. Thus, after determining the model of the machine, we present two types of fuzzy controller: Mamdani and Sugeno controllers. The Sugeno controller is trained starting from the Mamdani controller. A simulation study is conducted to show the effectiveness of the proposed method | stator field-oriented control;training;mamdani controller;pwm voltage-source inverters;stator field-oriented doubly-fed asynchronous motor drive;sugeno fuzzy-logic controller;speed regulation;wound-rotor asynchronous motor;fuzzy-control;machine modelling |
train_577 | A robust H/sub infinity / control approach for induction motors | This paper deals with the robustness and stability of an induction motor control structure against internal and external disturbances. In the proposed control scheme, we have used an H/sub infinity / controller with field orientation and input-output linearization to achieve the above-specified features. Simulation results are included to illustrate the control approach performances | input-output linearization;external disturbances;robustness;field orientation;internal disturbances;robust h/sub infinity / control;stability;induction motors control |
train_578 | New approach to standing phase angle reduction for power system restoration | During power system restoration, it is necessary to check the phase angle between two buses before closing circuit breakers to connect a line between them. These angles may occur across a tie line between two systems or between two connected subsystems within a system. In the case of a large standing phase angle (SPA) difference, the synchro-check relay does not allow closing of the breaker for this line. Therefore, this excessive SPA has to be reduced before attempting to connect the line. In this paper, a new and fast method for reducing SPA is presented. For this purpose, the standing phase angle difference between two specific buses is represented in terms of sensitivity factors associated with the change in active power generation and consumption at the buses. Then, the proposed method reschedules generation of selected units or sheds load at selected buses to reduce the excessive SPA difference between the two buses based on sensitivity factors | power line connection;synchrocheck relay;sensitivity factors;standing phase angle reduction approach;power system restoration;circuit breaker closing |
train_579 | Steinmetz system design under unbalanced conditions | This paper studies and develops general analytical expressions to obtain three-phase current symmetrization under unbalanced voltage conditions. It proposes two procedures for this symmetrization: the application of the traditional expressions assuming symmetry conditions, and the use of optimization methods based on the general analytical equations. Specifically, the paper applies and evaluates these methods to analyze the Steinmetz system design. Several graphs evaluating the error introduced by the assumption of balanced voltage in the design are plotted, and an example is studied to compare both procedures. In the example the necessity of applying the optimization techniques in highly unbalanced conditions is demonstrated | optimization methods;balanced voltage assumption;power system control design;steinmetz system design;general analytical equations;three-phase current symmetrization;unbalanced voltage conditions |
train_58 | Robust speech recognition using probabilistic union models | This paper introduces a new statistical approach, namely the probabilistic union model, for speech recognition involving partial, unknown frequency-band corruption. Partial frequency-band corruption accounts for the effect of a family of real-world noises. Previous methods based on the missing feature theory usually require the identity of the noisy bands. This identification can be difficult for unexpected noise with unknown, time-varying band characteristics. The new model combines the local frequency-band information based on the union of random events, to reduce the dependence of the model on information about the noise. This model partially accomplishes the target: offering robustness to partial frequency-band corruption, while requiring no information about the noise. This paper introduces the theory and implementation of the union model, and is focused on several important advances. These new developments include a new algorithm for automatic order selection, a generalization of the modeling principle to accommodate partial feature stream corruption, and a combination of the union model with conventional noise reduction techniques to deal with a mixture of stationary noise and unknown, nonstationary noise. For the evaluation, we used the TIDIGITS database for speaker-independent connected digit recognition. The utterances were corrupted by various types of additive noise, stationary or time-varying, assuming no knowledge about the noise characteristics. The results indicate that the new model offers significantly improved robustness in comparison to other models | tidigits database;speaker-independent connected digit recognition;noisy bands;partial frequency-band corruption;partial feature stream corruption;noise characteristics;noise reduction techniques;missing feature theory;probabilistic union models;local frequency-band information;partial real-world noise;stationary noise;automatic order selection;additive noise;modeling;robust speech recognition;time-varying band characteristics;nonstationary noise |
train_580 | A genetic approach to the optimization of automatic generation control | parameters for power systems This paper presents a method based on genetic algorithm for the automatic generation control of power systems. The technique is applied to control a system, which includes two areas tied together through a power line. As a consequence of continuous load variation, the frequency of the power system changes with time. In conventional studies, frequency transients are minimized by using integral controllers and thus zero steady-state error is obtained. In this paper, integral controller gains and frequency bias factors are determined by using the genetic algorithm. The results of simulation reveal the application of the genetic algorithm having easy implementation to find the global optimum values of the control parameters | control simulation;frequency transients;continuous load variation;power systems automatic generation control parameters optimization;control design;frequency bias factors;genetic algorithm;interconnected power networks;integral controller gains;power line |
|
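A toy sketch of the approach, assuming a heavily simplified single-area plant (the inertia, damping and load-step constants are invented) and a bare-bones genetic loop; the paper's two-area model and its exact GA operators are not reproduced.

    import random

    def cost(ki, b):
        # Toy single-area AGC loop (assumed constants: M=10, D=2,
        # 0.01 p.u. load step); returns an ITAE-like cost.
        df, ace_int, dt, J = 0.0, 0.0, 0.05, 0.0
        for k in range(400):
            ace = b * df                      # area control error
            ace_int += ki * ace * dt          # integral of the ACE
            dpm = -ace_int                    # commanded generation change
            df += dt * (dpm - 0.01 - 2.0 * df) / 10.0
            J += (k * dt) * abs(df)
        return J

    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(30)]
    for _ in range(40):                       # generations
        pop.sort(key=lambda g: cost(*g))
        elite = pop[:10]
        pop = elite + [(max(0.0, a + random.gauss(0, 0.2)),
                        max(0.0, c + random.gauss(0, 0.1)))
                       for a, c in random.choices(elite, k=20)]
    ki, b = min(pop, key=lambda g: cost(*g))
    print("best integral gain %.3f, bias factor %.3f" % (ki, b))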
train_581 | Successive expansion method of network planning applying symbolic analysis method | The conventional power system network successive expansion planning method is discussed in the context of the new paradigm of a competitive electric power, energy and service market. Subsequently, the paper presents an application of the conceptually new computer program, based on the symbolic analysis of load flows in power system networks. The network parameters and variables are defined as symbols. The symbolic analyzer, which models the power system DC load flows analytically, enables the sensitivity analysis of the power system to parameter and variable variations (costs, transfers, injections), a valuable tool for expansion planning analysis. That virtue cannot be found within the conventional approach, which relies on compensation methods, precalculated distribution factors, and so on. This novel application sheds some light on the traditional power system network expansion planning method, as well as on its possible application to network expansion planning in the new environment of a competitive electric power market | competitive electric power market;computer program;precalculated distribution factors;power system dc load flows;competitive electric energy market;symbolic analyzer;sensitivity analysis;symbolic analysis;competitive electric service market;compensation methods;load flows;power system network expansion planning method;power system network successive expansion planning |
|
train_582 | Optimal estimation of a finite sample of a discrete chaotic process | The synthesis of optimal algorithms for estimating discrete chaotic processes specified by a finite sample is considered; various possible approaches are discussed. Expressions determining the potential accuracy in estimating a single value of the chaotic process are derived. An example of the application of the general equations obtained is given | optimal algorithm synthesis;finite sample;space-time filtering;discrete chaotic process;optimal estimation |
|
train_583 | Neural networks in optimal filtration | The combined use and mutual influence of neural networks and optimal filtering are considered; the neural-network and filtering approaches are compared by solving two simple optimal-filtering problems: linear filtering and the filtering of a binary telegraph signal corresponding to observations in discrete white noise | binary telegraph signal;linear filtering;observations;optimal filtering;neural networks;discrete white noise |
|
train_584 | Hybrid fuzzy modeling of chemical processes | Fuzzy models have been proved to have the ability to model all plants without any a priori information. However, the performance of conventional fuzzy models can be very poor in the case of insufficient training data, due to their poor extrapolation capacity. In order to overcome this problem, a hybrid grey-box fuzzy modeling approach is proposed in this paper to combine expert experience, local linear models and historical data into a uniform framework. It consists of two layers. The expert fuzzy model constructed from linguistic information, the local linear model and the T-S type fuzzy model constructed from data are all put in the first layer. Layer 2 is a fuzzy decision module that decides which model in the first layer should be employed to make the final prediction. The output of the second layer is the output of the hybrid fuzzy model. With the help of the linguistic information, the poor extrapolation capacity caused by sparse training data in conventional fuzzy models can be overcome. Simulation results for a pH neutralization process demonstrate its modeling ability over the linear models, the expert fuzzy model and the conventional fuzzy model | expert fuzzy model;fuzzy decision module;chemical processes;process modeling;fuzzy modeling |
|
train_585 | Fuzzy system modeling in pharmacology: an improved algorithm | In this paper, we propose an improved fuzzy system modeling algorithm to address some of the limitations of the existing approaches identified during our modeling with pharmacological data. This algorithm differs from the existing ones in its approach to the cluster validity problem (i.e., the number of clusters), the projection schema (i.e., input membership assignment and rule determination), and significant input determination. The new algorithm is compared with the Bazoon-Turksen model, which is based on the well-known Sugeno-Yasukawa approach. The comparison was made in terms of predictive performance using two different data sets. The first comparison was with a two-variable nonlinear function prediction problem, and the second comparison was with a clinical pharmacokinetic modeling problem. It is shown that the proposed algorithm provides more precise predictions. Determining the degree of significance for each input variable allows the user to distinguish their relative importance | pharmacokinetic modeling;fuzzy sets;significant input determination;cluster validity problem;pharmacology;projection schema;predictive performance;fuzzy system modeling;fuzzy logic |
|
train_586 | A strategy for a payoff-switching differential game based on fuzzy reasoning | In this paper, a new concept of a payoff-switching differential game is introduced. In this new game, any one player at any time may have several choices of payoffs for the future. Moreover, the payoff-switching process, including the time of payoff switching and the outcome payoff, of any one player is unknown to the other. Indeed, the overall payoff, which is a sequence of several payoffs, is unknown until the game ends. An algorithm for determining a reasoning strategy based on fuzzy reasoning is proposed. In this algorithm, fuzzy theory is used to estimate the behavior of one player during a past time interval. By deriving two fuzzy matrices, the game similarity matrix (GSM) and the variation of the GSM (VGSM), the behavior of the player can be quantified. Two weighting vectors are selected to weight the relative importance of the player's behavior at each past time instant. Finally, a simple fuzzy inference rule is adopted to generate a linear reasoning strategy. The advantage of this algorithm is that it provides a flexible way for differential game specialists to convert their knowledge into a "reasonable" strategy. A practical example of guarding three territories is given to illustrate our main ideas | differential game;payoff switching;fuzzy reasoning;outcome payoff;reasoning strategy;game similarity matrix;fuzzy inference;weighting vectors;payoff-switching differential game;fuzzy matrices |
|
train_587 | An improved self-organizing CPN-based fuzzy system with adaptive back-propagation algorithm | This paper describes an improved self-organizing CPN-based (Counter-Propagation Network) fuzzy system. Two self-organizing algorithms, IUSOCPN (unsupervised) and ISSOCPN (supervised), are introduced. The idea is to construct the neural-fuzzy system with a two-phase hybrid learning algorithm, which utilizes a CPN-based nearest-neighbor clustering scheme for both structure learning and initial parameter setting, and a gradient descent method with an adaptive learning rate for fine-tuning the parameters. The obtained network can be used in the same way as a CPN to model and control dynamic systems, while it has a faster learning speed than the original back-propagation algorithm. The comparative results on the examples suggest that the method is fairly efficient in terms of simple structure, fast learning speed, and relatively high modeling accuracy | structure learning;counter-propagation network;initial parameters setting;neural-fuzzy system;self-organizing fuzzy system;gradient descent;back-propagation learning scheme;hybrid learning |
|
train_588 | An accurate COG defuzzifier design using Lamarckian co-adaptation of learning and evolution | This paper proposes a design technique for an optimal center of gravity (COG) defuzzifier using the Lamarckian co-adaptation of learning and evolution. The proposed COG defuzzifier is specified by various design parameters such as the centers, widths, and modifiers of the membership functions (MFs). The design parameters are adjusted with the Lamarckian co-adaptation of learning and evolution, where the learning performs a local search of the design parameters in an individual COG defuzzifier, while the evolution performs a global search of the design parameters among a population of various COG defuzzifiers. This co-adaptation scheme allows the population to evolve much faster than in the non-learning case and gives a higher possibility of finding an optimal solution due to its wider search capability. An application of the proposed co-adaptive design method to the truck backer-upper control problem is presented. The approximation ability and control performance are compared with those of the conventionally simplified COG defuzzifier in terms of the fuzzy logic controller's approximation error and the average tracing distance, respectively | learning;local search;fuzzy logic controller;evolution;optimal center of gravity defuzzifier |
|
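For reference, the standard center-of-gravity defuzzification step that the paper's learning and evolution tune (through the MF centers, widths and modifiers); the sampled membership values below are illustrative.

    import numpy as np

    def cog_defuzzify(y, mu):
        # Crisp output = centroid of the aggregated output membership function.
        y, mu = np.asarray(y, dtype=float), np.asarray(mu, dtype=float)
        return float(np.sum(y * mu) / np.sum(mu))

    y = np.linspace(-1.0, 1.0, 201)            # output universe of discourse
    mu = np.maximum(np.exp(-((y - 0.3) / 0.2) ** 2),       # dominant output MF
                    0.4 * np.exp(-((y + 0.5) / 0.3) ** 2)) # weaker second MF
    print("crisp output:", cog_defuzzify(y, mu))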
train_589 | Hierarchical neuro-fuzzy quadtree models | Hybrid neuro-fuzzy systems have been in evidence during the past few years, due to their attractive combination of the learning capacity of artificial neural networks with the interpretability of fuzzy systems. This article proposes a new hybrid neuro-fuzzy model, named hierarchical neuro-fuzzy quadtree (HNFQ), which is based on a recursive partitioning method of the input space named quadtree. The article describes the architecture of this new model, presenting its basic cell and its learning algorithm. The HNFQ system is evaluated on three well-known benchmark applications: the sinc(x, y) function approximation, the Mackey Glass chaotic series forecast and the two spirals problem. When compared to other neuro-fuzzy systems, the HNFQ exhibits competitive results, with two major advantages: it automatically creates its own structure, and it is not limited to a few input variables | neuro-fuzzy systems;mackey glass chaotic series;recursive partitioning;learning algorithm;quadtree;hierarchical neuro-fuzzy quadtree;fuzzy systems |
|
train_59 | Efficient tracking of the cross-correlation coefficient | In many (audio) processing algorithms involving manipulation of discrete-time signals, the performance can vary strongly over the repertoire that is used. This may be the case when the signals from the various channels are allowed to be strongly positively or negatively correlated. We propose and analyze a general formula for tracking the (time-dependent) correlation between two signals. Some special cases of this formula lead to classical results known from the literature; others are new. This formula is recursive in nature, and uses only the instantaneous values of the two signals, in a low-cost and low-complexity manner; in particular, there is no need to take square roots or to carry out divisions. Furthermore, this formula can be modified with respect to the occurrence of the two signals so as to further decrease the complexity and increase ease of implementation. The latter modification comes at the expense of tracking not the actual correlation but, rather, a somewhat deformed version of it. To overcome this problem, we propose, for a number of instances of the tracking formula, a simple warping operation on the deformed correlation. We then obtain, at least for sinusoidal signals, the correct value of the correlation coefficient. Special attention is paid to the convergence behavior of the algorithm for stationary signals and to the dynamic behavior if there is a transition to another stationary state; the latter is considered important for studying the tracking abilities for nonstationary signals. We illustrate the tracking algorithm by using it on stereo music fragments, obtained from a number of digital audio recordings | time-dependent correlation;cross-correlation coefficient;audio processing algorithms;recursive formula;discrete-time signals;nonstationary signals;efficient tracking;stationary state;warping operation;tracking algorithm;stereo music fragments;convergence behavior;stationary signals;dynamic behavior;deformed correlation;sinusoidal signals;digital audio recording |
|
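A conventional exponentially weighted recursive correlation estimator, shown here as a baseline for the tracking task; note that it uses a division and a square root, which the paper's formula specifically avoids.

    import math

    def make_tracker(lam=0.01):
        # Exponentially weighted recursive moments; update(x, y) returns the
        # current correlation estimate.
        s = {"xx": 1e-12, "yy": 1e-12, "xy": 0.0}
        def update(x, y):
            for key, v in (("xx", x * x), ("yy", y * y), ("xy", x * y)):
                s[key] += lam * (v - s[key])   # first-order recursive averaging
            return s["xy"] / math.sqrt(s["xx"] * s["yy"])
        return update

    track = make_tracker()
    rho = 0.0
    for n in range(20000):
        x = math.sin(0.01 * n)
        y = -x if n < 10000 else x             # correlation flips at n = 10000
        rho = track(x, y)
    print("tracked correlation after the transition: %.3f" % rho)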
train_590 | Universal approximation by hierarchical fuzzy system with constraints on the fuzzy rule | This paper presents a special hierarchical fuzzy system where the outputs of the previous layer are not used in the IF-parts, but only in the THEN-parts of the fuzzy rules of the current layer. The proposed scheme can be shown to be a universal approximator to any continuous function on a compact set if complete fuzzy sets are used in the IF-parts of the fuzzy rules with a singleton fuzzifier and center average defuzzifier. Simulation of a ball and beam control system demonstrates that the proposed scheme approximates the model nonlinear controller with good accuracy using fewer fuzzy rules than the centralized fuzzy system, and that its control performance is comparable to that of the nonlinear controller | hierarchical fuzzy logic;universal approximator;hierarchical fuzzy system;stone-weierstrass theorem;ball and beam control system;fuzzy rules;continuous function |
|
train_591 | Approximation theory of fuzzy systems based upon genuine many-valued implications - MIMO cases | It is constructively proved that multi-input-multi-output fuzzy systems based upon genuine many-valued implications are universal approximators (they are called Boolean type fuzzy systems in this paper). A general approach to constructing such fuzzy systems is given, namely through partitioning the output region according to the given accuracy. Two examples are provided to demonstrate the way in which fuzzy systems are designed to approximate given functions with a required approximation accuracy | boolean type fuzzy systems;universal approximator;multi-input-multi-output fuzzy systems;many-valued implication;fuzzy systems |
|
train_592 | Approximation theory of fuzzy systems based upon genuine many-valued implications - SISO cases | It is proved that single input and single output (SISO) fuzzy systems based upon genuine many-valued implications are universal approximators. It is shown theoretically that fuzzy control systems based upon genuine many-valued implications are equivalent to those based upon t-norm implications, and a general approach to constructing such fuzzy systems is given. It is also shown that the defuzzifier based upon the center of areas is not appropriate for fuzzy systems based upon genuine many-valued implications | universal approximator;many-valued implications;boolean implication;siso;single input and single output fuzzy systems;fuzzy systems |
|
train_593 | Fuzzy systems with overlapping Gaussian concepts: Approximation properties in Sobolev norms | In this paper the approximating capabilities of fuzzy systems with overlapping Gaussian concepts are considered. The target function is assumed to be sampled either on a regular grid or according to a uniform probability density. By exploiting a connection with Radial Basis Function approximators, a new method for the computation of the system coefficients is provided, showing that it guarantees uniform approximation of the derivatives of the target function | overlapping gaussian concepts;fuzzy system models;learning;radial basis functions;reproducing kernel hilbert spaces;fuzzy systems |
|
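A sketch of the Radial Basis Function connection on a regular grid, with the system coefficients computed here by regularized least squares (an illustrative choice; the paper's actual coefficient computation and its derivative guarantees are not reproduced).

    import numpy as np

    # Target function sampled on a regular grid, as in the paper's setting.
    x = np.linspace(0.0, 1.0, 41)
    f = np.sin(2 * np.pi * x)

    centers = np.linspace(0.0, 1.0, 15)       # overlapping Gaussian concepts
    width = 0.08
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

    # System coefficients via regularized least squares (illustrative method).
    c = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(len(centers)), Phi.T @ f)

    xt = np.linspace(0.0, 1.0, 400)           # dense evaluation grid
    Pt = np.exp(-((xt[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    print("max abs error:", float(np.max(np.abs(Pt @ c - np.sin(2 * np.pi * xt)))))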
train_594 | Improved analysis for the nonlinear performance of CMOS current mirrors with device mismatch | The nonlinear performance of simple and complementary MOSFET current mirrors is analyzed. Closed-form expressions are obtained for the harmonic and intermodulation components resulting from a multisinusoidal input current. These expressions can be used for predicting the limiting values of the input current under prespecified conditions of threshold-voltage mismatch and/or transconductance mismatch. The case of a single input sinusoid is discussed in detail and the results are compared with SPICE simulations | threshold-voltage mismatch;input current;nonlinear performance;device mismatch;multisinusoidal input current;harmonic components;cmos current mirrors;spice simulations;intermodulation components;complementary mosfet current mirrors;simulation results;transconductance mismatch;closed-form expressions |
|
train_595 | Six common enterprise programming mistakes | Instead of giving you tips to use in your programming (at least directly), I want to look at some common mistakes made in enterprise programming. Instead of focusing on what to do, I want to look at what you should not do. Most programmers take books like mine and add in the good things, but they leave their mistakes in the very same programs! So I touch on several common errors I see in enterprise programming, and then briefly mention how to avoid those mistakes | common errors;enterprise javabeans;database;xml;enterprise programming mistakes;data store;vendor-specific programming |
|
train_596 | Copyright management in the digital age | Listening to and buying music online is becoming increasingly popular with consumers. So much so that Merrill Lynch forecasts the value of the online music market will explode from $8 million in 2001 to $1,409 million in 2005. But online delivery is not without problems; the issue of copyright management in particular has become a serious thorn in the side for digital content creators. Martin Brass, ex-music producer and senior industry consultant at Syntegra, explains | internet;online music delivery;napster;digital age;music industry;digital content creators |
|
train_597 | Quick media response averts PR disaster | Sometimes it's not what you do, but how you do it. After hackers broke the blocking code on the home version of its popular Cyber Patrol Internet filtering software and posted it on the Internet, marketers at Microsystems Software pulled out a playbook of standard crisis management and PR techniques. But the Cyber Patrol PR team, including outside PR counsel and the company's outside law firm, used those tools aggressively in order to turn the tide of public and media opinion away from the hackers, who initially were hailed as folk heroes, and in favor of the company's interests, to save the product's and the company's reputations and inherent value. And the entire team managed to move at Internet speed: the crisis was essentially over in about three weeks | public relations;cyber patrol internet filtering software;crisis management;microsystems software;media response |
|
train_598 | From FREE to FEE [online advertising market] | As the online advertising market continues to struggle, many online content marketers are wrestling with the issue of how to add at least some level of paid subscription income to their revenue mix in order to reach or improve profitability. Since the business of selling content online is still in its infancy, and many consumers clearly still think of Web content as simply and rightfully free, few roadmaps are available to show the way to effective marketing strategies, but some guiding principles have emerged | online advertising market;paid subscription income;marketing strategies;selling content online |
|
train_599 | Keen but confused [workflow & content management] | IT users find workflow, content and business process management software appealing but by no means straightforward to implement. Pat Sweet reports on our latest research | content management;survey;market overview;business process management software;research;workflow |
|
train_6 | SBC gets more serious on regulatory compliance | With one eye on the past and the other on its future, SBC Communications last week created a unit it hopes will bring a cohesiveness and efficiency to its regulatory compliance efforts that previously had been lacking. The carrier also hopes the new regulatory compliance unit will help it accomplish its short-term goal of landing FCC approval to provide long-distance service throughout its region, and its longer-term goal of reducing the regulatory burdens under which it currently operates | regulatory compliance;telecom carrier;sbc communications |
|
train_60 | Perceptual audio coding using adaptive pre- and post-filters and lossless compression | This paper proposes a versatile perceptual audio coding method that achieves high compression ratios and is capable of low encoding/decoding delay. It accommodates a variety of source signals (including both music and speech) with different sampling rates. It is based on separating irrelevance and redundancy reductions into independent functional units. This contrasts traditional audio coding where both are integrated within the same subband decomposition. The separation allows for the independent optimization of the irrelevance and redundancy reduction units. For both reductions, we rely on adaptive filtering and predictive coding as much as possible to minimize the delay. A psycho-acoustically controlled adaptive linear filter is used for the irrelevance reduction, and the redundancy reduction is carried out by a predictive lossless coding scheme, which is termed weighted cascaded least mean squared (WCLMS) method. Experiments are carried out on a database of moderate size which contains mono-signals of different sampling rates and varying nature (music, speech, or mixed). They show that the proposed WCLMS lossless coder outperforms other competing lossless coders in terms of compression ratios and delay, as applied to the pre-filtered signal. Moreover, a subjective listening test of the combined pre-filter/lossless coder and a state-of-the-art perceptual audio coder (PAC) shows that the new method achieves a comparable compression ratio and audio quality with a lower delay | low encoding/decoding delay;adaptive pre-filters;perceptual audio coding;psycho-acoustically controlled adaptive linear filter;weighted cascaded least mean squared;pre-filter/lossless coder;high compression ratio;source signals;adaptive post-filters;adaptive filtering;predictive lossless coding;wclms lossless coder;audio quality;predictive coding;subjective listening test;sampling rates;lossless compression;irrelevance reduction;redundancy reduction;music |
|
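A single-stage normalized-LMS predictor as a stand-in for one element of the predictive lossless coding step (the WCLMS scheme weights and cascades several such predictors, which is not reproduced here); the test signal and step size are illustrative.

    import numpy as np

    def nlms_residual(x, order=8, mu=0.5):
        # Predict each sample from the previous `order` samples with a
        # normalized-LMS adaptive filter; return the prediction residual,
        # which is what a lossless entropy coder would then encode.
        w = np.zeros(order)
        e = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]          # most recent sample first
            e[n] = x[n] - w @ u
            w += mu * e[n] * u / (u @ u + 1e-9)
        return e

    rng = np.random.default_rng(0)
    t = np.arange(20000)
    x = np.sin(0.05 * t) + 0.01 * rng.standard_normal(len(t))
    e = nlms_residual(x)
    print("signal power %.4f -> residual power %.6f" % (np.var(x), np.var(e)))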
train_600 | Development of railway VR safety simulation system | Abnormal conditions occur in railway transportation due to trouble or accidents, and they affect a large number of passengers. It is very important, therefore, to recover quickly and return to normal train operation. For this purpose we developed the "Computer VR Simulation System for the Safety of Railway Transportation." It is a new type of simulation system for evaluating measures to be taken under abnormal conditions. Users of this simulation system cooperate with one another to correct the abnormal conditions that have occurred in virtual reality. This paper reports the newly developed simulation system | virtual reality simulation system;accidents;computer vr simulation system;railway transportation;abnormal conditions correction;normal train operation |
|
train_601 | Recent researches of human science on railway systems | This paper presents research in human science on railway systems at RTRI. The work is roughly divided into two categories: research to improve safety and research to improve comfort. On the former subject, as a safeguard against accidents caused by human error, we have promoted studies of psychological aptitude tests, various research projects to evaluate train drivers' working conditions and environments, and new investigations to minimize the risk of passenger casualties in train accidents. On the latter subject, we have developed new methods to evaluate riding comfort, including that of the tilt train, and started research on the improvement of railway facilities for the aged and the disabled from the viewpoint of universal design | human science;wakefulness level;safety improvement;accidents;railway facilities;tilt train;train drivers' working conditions;rtri;human errors;riding comfort;ergonomics;aged persons;train accidents;comfort improvement;psychological aptitude test;sight impaired;railway systems;disabled persons;passenger casualties risk minimisation;train drivers' working environments |
|
train_602 | Image fusion between /sup 18/FDG-PET and MRI/CT for radiotherapy planning of oropharyngeal and nasopharyngeal carcinomas | Accurate diagnosis of tumor extent is important in three-dimensional conformal radiotherapy. This study reports the use of image fusion between (18)F-fluoro-2-deoxy-D-glucose positron emission tomography (/sup 18/FDG-PET) and magnetic resonance imaging/computed tomography (MRI/CT) for better target delineation in radiotherapy planning of head-and-neck cancers. The subjects consisted of 12 patients with oropharyngeal carcinoma and 9 patients with nasopharyngeal carcinoma (NPC) who were treated with radical radiotherapy between July 1999 and February 2001. Image fusion between /sup 18/FDG-PET and MRI/CT was performed using an automatic multimodality image registration algorithm, which used the brain as an internal reference for registration. Gross tumor volume (GTV) was determined based on clinical examination and /sup 18/FDG uptake on the fusion images. Clinical target volume (CTV) was determined following the usual pattern of lymph node spread for each disease entity along with the clinical presentation of each patient. Except for 3 cases with superficial tumors, all the other primary tumors were detected by /sup 18/FDG-PET. The GTV volumes for primary tumors were not changed by image fusion in 19 cases (89%), increased by 49% in one NPC, and decreased by 45% in another NPC. Normal tissue sparing was more easily performed based on clearer GTV and CTV determination on the fusion images. In particular, parotid sparing became possible in 15 patients (71%) whose upper neck areas near the parotid glands were tumor-free by /sup 18/FDG-PET. Within a mean follow-up period of 18 months, no recurrence occurred in the areas defined as CTV, which was treated prophylactically, except for 1 patient who experienced nodal recurrence in the CTV and simultaneous primary site recurrence. In conclusion, this preliminary study showed that image fusion between /sup 18/FDG-PET and MRI/CT was useful in GTV and CTV determination in conformal RT, thus sparing normal tissues | nasopharyngeal carcinomas;superficial tumors;f;image fusion;simultaneous primary site recurrence;parotid glands;mri/ct;radiotherapy planning;/sup 18/fdg-pet;normal tissues sparing;primary tumors;oropharyngeal carcinomas |
|
train_603 | PGE helps customers reduce energy costs | A new service from Portland General Electric (PGE, Portland, Oregon, US) is saving customers tens of thousands of dollars in energy costs. PGE created E-Manager to allow facility managers to analyze their energy consumption online at 15-minute intervals. Customers can go to the Web for complete data, powerful analysis tools and charts, helping them detect abnormal energy use and focus on costly problem areas | online energy consumption analysis;oregon;energy costs reduction;e-manager;abnormal energy use detection;portland general electric |
|
train_604 | SRP rolls out reliability and asset management initiative | Reliability planning analysis at the Salt River Project (SRP, Tempe, Arizona, US) prioritizes geographic areas for preventive inspections based on a cost benefit model. However, SRP wanted a new application system to prioritize inspections and to predict when direct buried cable would fail using the same cost benefit model. In the business cases, the represented type of kilowatt load (residential, commercial or critical circuit) determines the cost benefit per circuit. The preferred solution was to develop a geographical information system (GIS) application allowing a circuit query for the specific geographic areas it crosses and the density of load points of a given type within those areas. The query returns results based on the type of equipment analysis executed: wood pole, preventive maintenance for a line, or cable replacement. This differentiation ensures that all the facilities relevant to a specific analysis type influence prioritization of the geographic areas | condition monitoring;salt river project;usa;direct buried cable;cable replacement;tempe;arizona;gis;reliability planning analysis;cost benefit model;preventive inspections;wood pole;geographical information system;geographic areas;equipment analysis execution |
|
train_605 | HEW selects network management software | For more than 100 years, Hamburgische Electricitats-Werke AG (HEW) has provided a reliable electricity service to the city of Hamburg, Germany. Today, the company supplies electricity to some 1.7 million inhabitants via 285000 connections. During 1999, the year the energy market was started in Germany, HEW needed to operate and maintain a safe and reliable network cheaply. The development and implementation of a distribution management system (DMS) is key to the success of HEW. HEW's strategy was to obtain efficient new software for network management that also offered a good platform for future applications. Following a pilot and prequalification phase, HEW invited several companies to process the requirements catalog and to submit a detailed tender. The network information management system, Xpower, developed by Tekla Oyj, successfully passed HEW's test program and satisfied all the performance and system capacity requirements. The system met all HEW's conditions by presenting the reality of a network with the attributes of the operating resources. Xpower platform provides the ability to integrate future applications | distribution management system;hamburg;xpower;hamburgische electricitats-werke;germany;tekla oyj;network management software |
|
train_606 | Taiwan power company phases into AM/FM | To face the challenges and impact of the inevitable trend toward privatization and deregulation, the Taiwan Power Co. (TPC) devised short- and long-term strategic computerization development plans. These development efforts created a master plan that included building an Automated Mapping and Facilities Management (AM/FM) system for the Taipei City District Office (TCDO). This project included a pilot project followed by evaluation before the rollout to the complete service territory of TCDO. The pilot project took three years to install, commission and, via the evaluation process, reach the conclusion that AM/FM was technologically feasible | pilot project;am/fm;complete service territory;automated mapping and facilities management;taipei city district office;privatization;taiwan power company;deregulation |
|
train_607 | A building block approach to automated engineering | Shenandoah Valley Electric Cooperative (SVEC, Mt. Crawford, Virginia, US) recognized the need to automate engineering functions and create an interactive model of its distribution system in the early 1990s. It had used Milsoft's DA software for more than 10 years to make engineering studies, and had a Landis and Gyr SCADA system and a hybrid load management system for controlling water heater switches. With the development of GIS and facilities management (FM) applications, SVEC decided this should be the basis for an information system that would model its physical plant and interface with its accounting and billing systems. It could add applications such as outage management, staking, line design and metering to use this information and interface with these databases. However, based on SVEC's size it was not feasible to implement a sophisticated and expensive GIS/FM system. Over the past nine years, SVEC has had success with a building block approach, and its customers and employees are realizing the benefits of the automated applications. This building block approach is discussed in this article including the GIS, outage management system, MapViewer and a staking package. The lessons learned and future expansion are discussed | staking;mapviewer;metering;gis;building block approach;interactive model;databases;distribution system;engineering functions automation;shenandoah valley electric cooperative;outage management;billing systems;line design |
|
train_608 | How closely can a personal computer clock track the UTC timescale via the Internet? | Nowadays many software packages allow you to keep the clock of your personal computer synchronized to time servers spread over the internet. We show how a didactic laboratory can evaluate, in a statistical sense, the minimum synch error of this process (the other extreme, the maximum, is guaranteed by the code itself). The measurement set-up utilizes the global positioning system satellite constellation in 'common view' between two similar timing stations: one acts as a time server for the other, so the final timing difference at the second station represents the total synch error through the internet. Data recorded over batches of 10000 samples show a typical RMS value of 35 ms. This measurement configuration allows students to obtain a much better understanding of the synch task and pushes them, at all times, to look for experimental verification of data results, even when they come from the most sophisticated 'black boxes' now readily available off the shelf | black boxes;didactic laboratory;internet;synch error;global positioning system satellite constellation;software packages;personal computer clock;time servers;utc timescale;statistical sense;final timing difference |
|
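A trivial sketch of the batch statistic quoted above, on synthetic offsets drawn to match the reported 35 ms RMS figure (the real data would come from the GPS common-view comparison).

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic batch of 10000 offsets (in seconds) between the GPS-disciplined
    # reference station and the internet-synchronized PC clock.
    offsets = rng.normal(loc=0.0, scale=0.035, size=10000)
    rms = float(np.sqrt(np.mean(offsets ** 2)))
    print("RMS synch error: %.1f ms" % (1e3 * rms))   # ~35 ms, as reported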
train_609 | Chemical production in the superlative [formaldehyde plant process control system and remote I/O system] | BASF commissioned the largest formaldehyde production plant in the world in December 2000, with an annual capacity of 180000 t. The new plant, built to meet the growing demand for formaldehyde, sets new standards. Its size, technology and above all its cost-effectiveness give it a leading position internationally. To maintain such high standards in the automation technology, BASF selected, in addition to the trail-blazing Simatic PCS 7 process control system from Siemens, the innovative remote I/O system I.S.1 from R. STAHL Schaltgerate GmbH to record and output field signals in hazardous areas Zone 1 and 2. This combination completely satisfied all technical requirements and also had the best price-performance ratio of all the solutions. 25 remote I/O field stations were designed and matched to the needs of the formaldehyde plant | automation technology;zone 2 hazardous area;r. stahl schaltgerate gmbh;trail-blazing simatic pcs 7;price-performance ratio;remote i/o field station design;superlative;formaldehyde production plant construction;zone 1 hazardous area;remote i/o system i.s.1;signal recording;basf;chemical production;siemens;cost-effective plant;process control system |
|
train_61 | Application of time-frequency principal component analysis to text-independent speaker identification | We propose a formalism, called vector filtering of spectral trajectories, that allows the integration of a number of speech parameterization approaches (cepstral analysis, Delta and Delta Delta parameterizations, auto-regressive vector modeling, ...) under a common formalism. We then propose a new filtering, called contextual principal components (CPC) or time-frequency principal components (TFPC). This filtering consists in extracting the principal components of the contextual covariance matrix, which is the covariance matrix of a sequence of vectors expanded by their context. We apply this new filtering in the framework of closed-set speaker identification, using a subset of the POLYCOST database. When using speaker-dependent TFPC filters, our results show a relative improvement of approximately 20% compared to the use of the classical cepstral coefficients augmented by their Delta-coefficients, which is significantly better with a 90% confidence level | contextual principal components;text-independent speaker identification;polycost database;vector filtering;delta -coefficients;contextual covariance matrix;spectral trajectories;delta delta parameterization;closed-set speaker identification;delta parameterization;cepstral analysis;time-frequency principal component analysis;cepstral coefficients;speech parameterization;auto-regressive vector modeling;confidence level |
|
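A sketch of the TFPC filtering step: frames are expanded by their temporal context, and the leading eigenvectors of the contextual covariance matrix form the projection basis. The context width, component count and the random stand-in "cepstra" are placeholders, not the paper's settings.

    import numpy as np

    def tfpc_filter(frames, context=2, n_components=12):
        # Expand each vector by +/-`context` neighbours, then project onto the
        # leading principal components of the contextual covariance matrix.
        T, d = frames.shape
        ctx = np.hstack([np.roll(frames, s, axis=0)
                         for s in range(-context, context + 1)])
        ctx = ctx[context:T - context]        # drop wrap-around edge frames
        vals, vecs = np.linalg.eigh(np.cov(ctx, rowvar=False))
        basis = vecs[:, np.argsort(vals)[::-1][:n_components]]
        return (ctx - ctx.mean(axis=0)) @ basis

    frames = np.random.default_rng(2).standard_normal((300, 12))  # stand-in cepstra
    print(tfpc_filter(frames).shape)          # (296, 12)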
train_610 | AGC for autonomous power system using combined intelligent techniques | In the present work, two intelligent load frequency controllers have been developed to regulate the power output and system frequency by controlling the speed of the generator with the help of fuel rack position control. The first controller is obtained using fuzzy logic (FL) only, whereas the second uses a combination of FL, genetic algorithms and neural networks. The aim of the proposed controllers is to restore the frequency to its nominal value smoothly and in the shortest possible time whenever there is any change in the load demand. The action of these controllers provides a satisfactory balance between frequency overshoot and transient oscillations with zero steady-state error. The design and performance evaluation of the proposed controller structures are illustrated with the help of case studies applied (without loss of generality) to a typical single-area power system. It is found that the proposed controllers exhibit satisfactory overall dynamic performance and overcome the possible drawbacks associated with other competing techniques | power output regulation;combined intelligent techniques;single-area power system;transient oscillations;frequency overshoot;competing techniques;autonomous power system;fuel rack position control;frequency control;performance evaluation;genetic algorithms;zero steady-state error;neural networks;generator speed control;controller design;overall dynamic performance;load demand;fuzzy logic |
|
train_611 | Intelligent optimal sieving method for FACTS device control in multi-machine systems | A multi-target oriented optimal control strategy for FACTS devices installed in multi-machine power systems is presented in this paper, named the intelligent optimal sieving control (IOSC) method. This new method divides the FACTS device output region into several parts and selects one typical value from each part, which is called an output candidate. Then, an intelligent optimal sieve is constructed, which predicts the impacts of each output candidate on the power system and sieves out an optimal output from all of the candidates. Artificial neural network technologies and fuzzy methods are applied to build the intelligent sieve. Finally, the real control signal of the FACTS devices is calculated from the selected optimal output through the inverse system method. Simulation has been done on a three-machine power system, and the results show that the proposed IOSC controller can effectively attenuate system oscillations and enhance the power system transient stability | intelligent optimal sieve;inverse system method;system oscillations attenuation;fuzzy methods;three-machine power system;facts;intelligent optimal sieving method;power system transient stability enhancement;multi-target oriented optimal control strategy;artificial neural network technologies;selected optimal output;facts device control;multi-machine systems;control signal;intelligent control |
|
train_612 | Analysis and operation of hybrid active filter for harmonic elimination | This paper presents a hybrid active filter topology and its control to suppress harmonic currents from entering the power source. The adopted hybrid active filter consists of one active filter and one passive filter connected in series. By controlling the equivalent output voltage of the active filter, the harmonic currents generated by the nonlinear load are blocked and diverted into the passive filter. The power rating of the converter is reduced compared with a pure active filter performing the same harmonic filtering. A harmonic current detecting approach and DC-link voltage regulation are proposed to obtain the equivalent voltage of the active filter. The effectiveness of the adopted topology and control scheme has been verified by computer simulation and experimental results on a scaled-down laboratory prototype | equivalent output voltage;harmonic currents suppression;harmonic elimination;computer simulation;active filter;dc-link voltage regulation;harmonic currents;converter power rating reduction;voltage source inverter;nonlinear load;active filter equivalent voltage;passive filter;scaled-down laboratory prototype;hybrid active filter |
|
train_613 | Comparison between discrete STFT and wavelets for the analysis of power quality events | This paper deals with the comparison of signal processing tools for power quality analysis. Two signal processing techniques are considered: wavelet filters and the discrete short-time Fourier transform (STFT). Examples of the two most frequent disturbances encountered in the power system are then chosen. An adjustable speed drive with a six-pulse converter is designed using EMTP/ATP, and normal energizing of utility capacitors is presented. The analysis is tested on a system consisting of 13 buses that is representative of a medium-sized industrial plant. Finally, each kind of electrical disturbance is analyzed with examples representing each tool. A qualitative comparison of results shows the advantages and drawbacks of each signal processing technique applied to power quality analysis | six-pulse converter;signal processing techniques;adjustable speed drive;medium-sized industrial plant;wavelet filters;power quality events;signal processing tools;wavelets;short-time fourier transforms;discrete short-time fourier transforms;discrete stft;electrical disturbance;utility capacitors;emtp/atp |
|
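A sketch of running both tools on a synthetic disturbance, assuming SciPy and the PyWavelets package are available; the sampling rate and burst parameters are invented, not the paper's EMTP/ATP cases.

    import numpy as np
    from scipy.signal import stft
    import pywt                                # PyWavelets, assumed installed

    fs = 3200.0
    t = np.arange(0.0, 0.5, 1.0 / fs)
    v = np.sin(2 * np.pi * 50 * t)             # 50 Hz fundamental
    mask = (t > 0.2) & (t < 0.25)
    v[mask] += 0.8 * np.sin(2 * np.pi * 600 * t[mask])   # invented transient burst

    f, tt, Z = stft(v, fs=fs, nperseg=128)     # discrete STFT: fixed resolution
    coeffs = pywt.wavedec(v, "db4", level=4)   # wavelet filter bank
    d1 = coeffs[-1]                            # finest detail band flags the event
    print("STFT grid:", Z.shape, "| finest-band energy:", float(np.sum(d1 ** 2)))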
train_614 | An on-line distributed intelligent fault section estimation system for large-scale power networks | In this paper, a novel distributed intelligent system is suggested for on-line fault section estimation (FSE) of large-scale power networks. As the first step, a multi-way graph partitioning method based on weighted minimum degree reordering is proposed for effectively partitioning the original large-scale power network into the desired number of connected sub-networks with quasi-balanced FSE burdens and minimum frontier elements. After partitioning, a distributed intelligent system based on a radial basis function neural network (RBF NN) and a companion fuzzy system is suggested for FSE. The relevant theoretical analysis and procedure are presented in the paper. The proposed distributed intelligent FSE method has been implemented with a sparse storage technique and tested on the IEEE 14, 30 and 118-bus systems, respectively. Computer simulation results show that the proposed FSE method works successfully for large-scale power networks | ieee 30-bus systems;large-scale power networks;on-line distributed intelligent fault section estimation system;computer simulation;fuzzy system;multi-way graph partitioning method based;minimum frontier elements;radial basis function neural network;weighted minimum degree reordering;ieee 14-bus systems;ieee 118-bus systems;connected sub-networks;sparse storage technique;quasi-balanced fse burdens;distributed intelligent system;on-line fault section estimation |
|
train_615 | An intelligent tutoring system for a power plant simulator | In this paper, an intelligent tutoring system (ITS) is proposed for a power plant simulator. With a well-designed ITS, the need for an instructor is minimized, and the operator may readily and efficiently take real-time control of the simulator, guided by the appropriate messages he or she gets from the tutoring system. Using SIMULINK, and based on object oriented programming (OOP) and the C programming language, a fossil-fuelled power plant simulator with an ITS is proposed. Promising results are demonstrated for a typical power plant | control simulation;cai;object oriented programming;simulink;intelligent tutoring system;c programming language;fossil-fuelled power plant simulator |
|
train_616 | An overview of modems | This paper gives a cursory overview of the different types of modems, classified by application, range, line type, operating mode, synchronizing mode, modulation, etc.; it is highly useful for engineering students in communication, electrical, computer science and information technology disciplines. The paper also describes the standards and protocols used, and the future trend | computer science students;operating mode;communication students;synchronizing mode;line type;modulation;information technology students;modems;standards;protocols;engineering students;electrical students |
|
train_617 | Estimation of trifocal tensor using GMM | A novel estimation of a trifocal tensor based on the Gaussian mixture model (GMM) is presented. The mixture model is built assuming that the residuals of inliers and outliers belong to different Gaussian distributions. The Bayesian rule is then employed to detect the inliers for re-estimation. Experiments show that the presented method is more precise and relatively unaffected by outliers | gaussian distributions;motion analysis;gmm;inliers;trifocal tensor estimation;gaussian mixture model;image data;outliers;bayesian rule;image analysis |
|
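A sketch of the inlier test described above: Bayes' rule applied to a two-component Gaussian mixture over residuals. Here the mixture weight and variances are fixed by hand for illustration; in practice they would be fitted to the data (e.g., by EM).

    import numpy as np

    def inlier_posteriors(residuals, prior_in=0.7, s_in=1.0, s_out=10.0):
        # Bayes' rule on a two-component Gaussian mixture over residuals:
        # narrow component = inliers, wide component = outliers.
        r = np.asarray(residuals, dtype=float)
        def gauss(s):
            return np.exp(-r ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
        p_in = prior_in * gauss(s_in)
        p_out = (1.0 - prior_in) * gauss(s_out)
        return p_in / (p_in + p_out)

    rng = np.random.default_rng(3)
    res = np.concatenate([rng.normal(0, 1, 90), rng.normal(0, 10, 10)])
    keep = inlier_posteriors(res) > 0.5        # correspondences kept for re-estimation
    print("kept", int(keep.sum()), "of", len(res))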
train_618 | Blind source separation applied to image cryptosystems with dual encryption | Blind source separation (BSS) is explored to add another encryption level besides the existing encryption methods for image cryptosystems. The transmitted images are covered with a noise image by specific mixing before encryption and then recovered through BSS after decryption. Simulation results illustrate the validity of the proposed method | noise image;image cryptosystems;transmitted images;dual encryption;blind source separation |
|
train_619 | Wavelet-based image segment representation | An efficient representation method for arbitrarily shaped image segments is proposed. This method includes a smart way to select a wavelet basis to approximate the given image segment, with improved image quality and reduced computational load | image segment representation;wavelet basis;dwt;reduced computational load;improved image quality;discrete wavelet transform;arbitrarily shaped image segments |
|
train_62 | Text-independent speaker verification using utterance level scoring and covariance modeling | This paper describes a computationally simple method to perform text-independent speaker verification using second order statistics. The suggested method, called utterance level scoring (ULS), allows one to obtain a normalized score using a single pass through the frames of the tested utterance. The utterance sample covariance is first calculated and then compared to the speaker covariance using a distortion measure. Subsequently, a distortion measure between the utterance covariance and the sample covariance of data taken from different speakers is used to normalize the score. Experimental results from the 2000 NIST speaker recognition evaluation are presented for ULS, used with different distortion measures, and for a Gaussian mixture model (GMM) system. The results indicate that ULS is a viable alternative to GMM whenever computational complexity and verification accuracy need to be traded off | computationally simple method;utterance level scoring;normalized score;speaker covariance;distortion measure;gmm;nist speaker recognition evaluation;gaussian mixture model;text-independent speaker verification;covariance modeling;computational complexity;sample covariance;second order statistics;distortion measures;verification accuracy |
|
train_620 | Adaptive image enhancement for retinal blood vessel segmentation | Retinal blood vessel images are enhanced by removing the nonstationary background, which is adaptively estimated based on local neighbourhood information. The result is a much better segmentation of the blood vessels with a simple algorithm and without the need to obtain a priori illumination knowledge of the imaging system | retinal blood vessel images;image segmentation;adaptive image enhancement;local neighbourhood information;personal identification;nonstationary background removal;security applications |
|
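A sketch of the idea using a median filter as the local-neighbourhood background estimator (one possible choice; the letter's exact estimator may differ), applied to a synthetic image with sloped illumination.

    import numpy as np
    from scipy.ndimage import median_filter

    def enhance_vessels(img, window=25):
        # Estimate the nonstationary background from each pixel's local
        # neighbourhood and subtract it, flattening the illumination.
        background = median_filter(img.astype(float), size=window)
        return img - background

    rng = np.random.default_rng(4)
    yy, xx = np.mgrid[0:128, 0:128]
    illum = 0.5 + 0.5 * (xx / 127.0)           # slowly varying illumination
    vessel = (np.abs(yy - 64) < 2) * 0.2       # toy dark "vessel" stripe
    img = illum - vessel + 0.01 * rng.standard_normal((128, 128))
    out = enhance_vessels(img)
    print("corrected image range: %.3f .. %.3f" % (out.min(), out.max()))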
train_621 | MPEG-4 video object-based rate allocation with variable temporal rates | In object-based coding, bit allocation is performed at the object level and temporal rates of different objects may vary. The proposed algorithm deals with these two issues when coding multiple video objects (MVOs). The proposed algorithm is able to successfully achieve the target bit rate, effectively code arbitrarily shaped MVOs with different temporal rates, and maintain a stable buffer level | multiple video objects;bit allocation;mpeg-4 video coding;variable temporal rates;object-based rate allocation;rate-distortion encoding |
|
train_622 | Source/channel coding of still images using lapped transforms and block classification | A novel scheme for joint source/channel coding of still images is proposed. By using efficient lapped transforms, channel-optimised robust quantisers and classification methods it is shown that significant improvements over traditional source/channel coding of images can be obtained while keeping the complexity low | still images;block classification;channel-optimised robust quantisers;low complexity;image coding;joint source-channel coding;lapped transforms |
|
train_623 | Stochastic recurrences of Jackpot Keno | We describe a mathematical model and simulation study for Jackpot Keno, as implemented by Jupiters Network Gaming (JNG) in the Australian state of Queensland, and as controlled by the Queensland Office of Gaming Regulation (QOGR) (http://www.qogr.qld.gov.au/keno.shtml). The recurrences for the house net hold are derived, and it is seen that these are piecewise linear with a ternary domain split; further, the split points are stochastic in nature. Since this structure is intractable (Brockett and Levine, Statistics & Probability & their Applications, CBS College Publishing, 1984), estimation of the house net hold through an appropriately designed simulator, using a random number generator with desirable properties, is described. Since the model and simulation naturally derive hold given payscale, while JNG and QOGR require payscale given hold, an inverse problem had to be solved. This required the development of a special algorithm, which may be described as a stochastic binary search. Experimental results are presented, in which the simulator is used to determine jackpot pay-scales so as to satisfy the legal requirement that approximately 75% of net revenue be returned to the players, i.e., a 25% net hold for the house (JNG). Details of the algorithm used to solve this problem are presented, and notwithstanding the stochastic nature of the simulation, convergence to a specified hold for the inverse problem has been achieved to within 0.1% in all cases of interest to date | stochastic binary search;experimental results;stochastic recurrences;mathematical model;jackpot keno;jupiters network gaming;piecewise linear;simulation;probability;chinese lottery game;house net hold;legal requirement;ternary domain split;random number generator;inverse problem |
|
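A sketch of the stochastic-binary-search idea on a deliberately crude stand-in for the Keno simulator (a linear hold-versus-payscale relationship with binomial noise); only the bracketing logic mirrors the described algorithm, not the game model.

    import random

    def simulated_hold(scale, n_games=200_000):
        # Toy stand-in for the Keno simulator: expected hold falls linearly as
        # the payout scale rises (hypothetical payout structure, noisy output).
        wins = sum(random.random() < 0.25 * scale for _ in range(n_games))
        return 1.0 - 3.0 * wins / n_games

    def stochastic_binary_search(target=0.25, tol=0.004, max_iter=40):
        lo, hi = 0.5, 1.5                     # bracket of candidate payout scales
        mid = 0.5 * (lo + hi)
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            h = simulated_hold(mid)           # each evaluation is noisy
            if abs(h - target) < tol:
                break
            lo, hi = (mid, hi) if h > target else (lo, mid)
        return mid

    print("payout scale giving ~25%% hold: %.3f" % stochastic_binary_search())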
train_624 | A hybrid ML-EM algorithm for calculation of maximum likelihood estimates in semiparametric shared frailty models | This paper describes a generalised hybrid ML-EM algorithm for the calculation of maximum likelihood estimates in semiparametric shared frailty models, the Cox proportional hazard models with hazard functions multiplied by a (parametric) frailty random variable. This hybrid method is much faster than the standard EM method and faster than the standard direct maximum likelihood method (ML, Newton-Raphson) for large samples. We have previously applied this method to semiparametric shared gamma frailty models, and verified by simulations the asymptotic and small sample statistical properties of the frailty variance estimates. Let theta /sub 0/ be the true value of the frailty variance parameter. Then the asymptotic distribution is normal for theta /sub 0/>0 while it is a 50-50 mixture between a point mass at zero and a normal random variable on the positive axis for theta /sub 0/=0. For small samples, simulations suggest that the frailty variance estimates are approximately distributed as an x-(100-x)% mixture, 0<or=x<or=50, between a point mass at zero and a normal random variable on the positive axis even for theta /sub 0/>0. We apply this method and verify by simulations these statistical results for semiparametric shared log-normal frailty models. We also apply the semiparametric shared gamma and log-normal frailty models to Busselton Health Study coronary heart disease data | normal random variable;cox proportional hazard models;coronary heart disease data;frailty variance estimates;asymptotic distribution;normal distribution;data analysis;maximum likelihood estimates;simulations;semiparametric shared log-normal frailty models;hazard functions;busselton health study;hybrid ml-em algorithm |
|
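The quoted limit law restated in symbols (notation ours: sigma denotes the asymptotic standard deviation, N^+ a normal variable restricted to the positive axis):

    \hat{\theta}_n \xrightarrow{d}
    \begin{cases}
      \mathcal{N}(\theta_0,\sigma^2), & \theta_0 > 0,\\
      \tfrac{1}{2}\,\delta_{\{0\}} + \tfrac{1}{2}\,\mathcal{N}^{+}(0,\sigma^2), & \theta_0 = 0,
    \end{cases}

with the small-sample simulations suggesting an x : (100-x) mixture, 0 <= x <= 50, of the point mass and the positive-axis normal even when theta_0 > 0.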
train_625 | Identifying multivariate discordant observations: a computer-intensive approach | The problem of identifying multiple outliers in a multivariate normal sample is approached via successive testing using P-values rather than tabled critical values. Caroni and Prescott (Appl. Statist. 41, p.355, 1992) proposed a generalization of the EDR-ESD procedure of Rosner (Technometrics, 25, 1983). Venter and Viljoen (Comput. Statist. Data Anal. 29, p.261, 1999) introduced a computer-intensive method to identify outliers in a univariate outlier situation. We now generalize this method to the multivariate outlier situation and compare this new procedure with that of Caroni and Prescott (Appl. Statist. 41, p.355, 1992) | multiple outliers;p-values;computer-intensive approach;multivariate normal sample;tabled critical values;data analysis;multivariate outlier;stepwise testing approach;multivariate discordant observations;univariate outlier;edr-ehd procedure |
|
train_626 | Approximate confidence intervals for one proportion and difference of two proportions | Constructing a confidence interval for a binomial proportion or the difference of two proportions is a routine exercise in daily data analysis. The best-known method is the Wald interval, based on the asymptotic normal approximation to the distribution of the observed sample proportion, though it is known to perform poorly for small to medium sample sizes. Agresti et al. (1998, 2000) proposed an Adding-4 method: 4 pseudo-observations are added, 2 successes and 2 failures, and then the resulting (pseudo-)sample proportion is used. The method is simple and performs extremely well. Here we propose an approximate method based on a t-approximation that takes account of the uncertainty in estimating the variance of the observed (pseudo-)sample proportion. It follows the same line as using a t-test, rather than a z-test, in testing the mean of a normal distribution with an unknown variance. In some circumstances our proposed method has a higher coverage probability than the Adding-4 method | uncertainty;pseudo-sample proportion;normal distribution;approximate confidence intervals;data analysis;variance estimation;coverage probability;binomial proportion;difference of two proportions;t-approximation;t-test |
|
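The two baseline intervals discussed above, as commonly defined; the paper's t-approximation variant (which replaces the normal quantile by a t quantile reflecting the estimated variance) is not shown.

    import math

    def wald_interval(x, n, z=1.96):
        # Classical Wald interval from the observed proportion.
        p = x / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p - half, p + half

    def adding4_interval(x, n, z=1.96):
        # Agresti et al.: add 2 successes and 2 failures, then proceed as Wald.
        p = (x + 2) / (n + 4)
        half = z * math.sqrt(p * (1 - p) / (n + 4))
        return p - half, p + half              # may be clipped to [0, 1] in practice

    print(wald_interval(0, 20))                # degenerate (0.0, 0.0) interval
    print(adding4_interval(0, 20))             # informative interval near zero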
train_627 | Comparison of non-stationary time series in the frequency domain | In this paper we compare two nonstationary time series using nonparametric procedures. Evolutionary spectra are estimated for the two series. Randomization tests are performed on groups of spectral estimates for both related and independent time series. Simulation studies show that in certain cases the tests perform reasonably well. The tests are applied to observed geological and financial time series | lag window;independent time series;spectral estimates;time window;simulation;financial time series;randomization tests;related time series;nonstationary time series;evolutionary spectra estimation;geological time series;nonparametric procedures |