Dataset columns: name (string, length 7-10), title (string, length 13-125), abstract (string, length 67-3.02k), fulltext (string, 1 distinct value), keywords (string, length 17-734)
train_628
Rank tests of association for exchangeable paired data
We describe two rank tests of association for paired exchangeable data, motivated by the study of lifespans in twins. The pooled sample is ranked. The nonparametric test of association is based on R+, the sum of the smaller within-pair ranks. A second measure, L+, is the sum of within-pair rank products. Under the null hypothesis of within-pair independence, the two test statistics are approximately normally distributed. Expressions for the exact means and variances of R+ and L+ are given. We describe the power of these two statistics under alternative hypotheses close to that of independence. Both the R+ and L+ tests indicate nonparametric statistical evidence of positive association of longevity in identical twins and a negligible relationship between the lifespans of fraternal twins listed in the Danish twin registry. The statistics are also applied to the analysis of a clinical trial studying the time to failure of ventilation tubes in children with bilateral otitis media.
association;rank tests;twin lifespans;ventilation tube failure time;identical twins;fraternal twins;test statistics;bilateral otitis media;longevity;within-pair rank products;paired exchangeable data;danish twin registry;exact variances;exact means;nonparametric test;within-pair ranks;null hypothesis;within-pair independence;nonparametric statistical evidence;clinical trial;pooled sample
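The two statistics above can be computed directly from the pooled ranking. A minimal sketch in Python, assuming no ties in the pooled sample (function and variable names are ours, not the paper's):

```python
def pooled_ranks(pairs):
    # Rank all 2n values of the pooled sample (rank 1 = smallest).
    flat = sorted((v, (i, j))
                  for i, pair in enumerate(pairs)
                  for j, v in enumerate(pair))
    return {idx: r for r, (v, idx) in enumerate(flat, start=1)}

def rank_statistics(pairs):
    rank = pooled_ranks(pairs)
    r_plus = 0  # R+: sum of the smaller within-pair rank
    l_plus = 0  # L+: sum of within-pair rank products
    for i in range(len(pairs)):
        r1, r2 = rank[(i, 0)], rank[(i, 1)]
        r_plus += min(r1, r2)
        l_plus += r1 * r2
    return r_plus, l_plus
```

Under the null of within-pair independence the paper gives exact means and variances for normalizing these sums; the sketch only covers the statistics themselves.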
train_629
Calibrated initials for an EM applied to recursive models of categorical variables
The estimates from an EM algorithm, when it is applied to a large causal model of 10 or more categorical variables, are often sensitive to the initial values of the estimates. This phenomenon becomes more serious as the model structure grows more complicated and involves more variables. To compensate for this, the literature recommends running the EM several times with different sets of initial values to obtain more appropriate estimates. We propose an improved approach to choosing initial values. The main idea is to use initials that are calibrated to the data. A simulation result strongly indicates that the calibrated initials give rise to estimates that are far closer to the true values than initials that are not calibrated.
initial values;em;calibrated initials;simulation;recursive models;categorical variables;large causal model
train_63
Geometric source separation: merging convolutive source separation with geometric beamforming
Convolutive blind source separation and adaptive beamforming have a similar goal: extracting a source of interest (or multiple sources) while reducing undesired interferences. A benefit of source separation is that it overcomes the conventional cross-talk or leakage problem of adaptive beamforming. Beamforming, on the other hand, exploits geometric information which is often readily available but not utilized in blind algorithms. We propose to join these benefits by combining cross-power minimization of second-order source separation with the geometric linear constraints used in adaptive beamforming. We find that the geometric constraints resolve some of the ambiguities inherent in the independence criterion, such as frequency permutations and the degrees of freedom provided by additional sensors. We demonstrate the new method in performance comparisons for actual room recordings of two and three simultaneous acoustic sources.
geometric source separation;leakage problem;adaptive beamforming;geometric linear constraints;frequency permutations;blind algorithms;room recordings;sensors;degrees of freedom;cross-talk;second-order source separation;cross-power minimization;convolutive blind source separation;acoustic sources;geometric beamforming
train_630
Score tests for zero-inflated Poisson models
In many situations count data have a large proportion of zeros and the zero-inflated Poisson regression (ZIP) model may be appropriate. A simple score test for zero-inflation, comparing the ZIP model with a constant proportion of excess zeros to a standard Poisson regression model, was given by van den Broek (1995). We extend this test to the more general situation where the zero probability is allowed to depend on covariates. The performance of this test is evaluated using a simulation study. To identify potentially important covariates in the zero-inflation model a composite test is proposed. The use of the general score test and the composite procedure is illustrated on two examples from the literature. The composite score test is found to suggest appropriate models
composite test;score tests;zero-inflated poisson regression model;count data;excess zeros;simulation;covariates;zero probability
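For reference, the zero-inflated Poisson model being tested mixes a point mass at zero with an ordinary Poisson component. A minimal sketch of its probability mass function, assuming a constant zero-inflation probability p (the covariate-dependent extension discussed above replaces p and lam with regression functions):

```python
import math

def zip_pmf(y, lam, p):
    # Zero-inflated Poisson: with probability p an observation is a
    # structural zero; otherwise it is drawn from Poisson(lam).
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return p * (y == 0) + (1 - p) * poisson
```

The score test compares this model against the plain Poisson case p = 0.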
train_631
A modified Fieller interval for the interval estimation of effective doses for a logistic dose-response curve
Interval estimation of the gamma% effective dose (mu_gamma, say) is often based on the asymptotic variance of the maximum likelihood estimator (delta interval) or on Fieller's theorem (Fieller interval). Sitter and Wu (1993) compared the delta and Fieller intervals for the median effective dose (mu_50) assuming a logistic dose-response curve. Their results indicated that although Fieller intervals are generally superior to delta intervals, they appear to be conservative. Here an adjusted form of the Fieller interval for mu_gamma, termed an adjusted Fieller (AF) interval, is introduced. A comparison of the AF interval with the delta and Fieller intervals is provided and the properties of these three interval estimation methods are investigated.
effective doses;logistic dose-response curve;median effective dose;asymptotic variance;modified fieller interval;maximum likelihood estimator;interval estimation;delta interval;fieller's theorem
train_632
Modelling dependencies in paired comparison data: a log-linear approach
In many Bradley-Terry models a more or less explicit assumption is that all decisions of the judges are independent, an assumption which may be questionable at least for the decisions of a given judge. In paired comparison studies a judge chooses among objects several times, and in such cases judgements made by the same judge are likely to be dependent. A log-linear representation for the Bradley-Terry model is developed which takes into account dependencies between judgements. The modelling of the dependencies is embedded in the analysis of multiple binomial responses, which has the advantage of interpretability in terms of conditional odds ratios. Furthermore, the modelling is done in the framework of generalized linear models, so parameter estimation and the assessment of goodness of fit can be obtained in the standard way using, e.g., GLIM or other standard software.
bradley-terry model;multiple binomial responses;paired comparison data dependency modelling;generalized linear models;glim;goodness of fit;conditional odds ratios;log-linear approach;parameter estimation;judge decisions
train_633
Using k-nearest-neighbor classification in the leaves of a tree
We construct a hybrid (composite) classifier by combining two classifiers in common use: classification trees and k-nearest-neighbor (k-NN). In our scheme we partition the feature space with a classification tree, and then classify test set items using the k-NN rule among only those training items in the same leaf as the test item. This somewhat reduces the computational load associated with k-NN, and it produces a classification rule that performs better than either trees or the usual k-NN on a number of well-known data sets.
data sets;feature space division;classification trees;k-nn rule;tree leaves;computational load;hybrid composite classifier;k-nearest-neighbor classification
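The hybrid rule can be illustrated with a stand-in for the fitted tree: here a depth-1 "tree" that splits on a single feature, after which test items are classified by k-NN among the training items in the same leaf. A minimal sketch (the single-split tree and all names are ours, for illustration only):

```python
from collections import Counter

def leaf_of(x, feature, threshold):
    # Stand-in for a fitted classification tree: one split, two leaves.
    return 0 if x[feature] <= threshold else 1

def hybrid_predict(x, train_X, train_y, feature, threshold, k=3):
    leaf = leaf_of(x, feature, threshold)
    # Restrict the k-NN search to training items in the same leaf.
    same_leaf = [(xi, yi) for xi, yi in zip(train_X, train_y)
                 if leaf_of(xi, feature, threshold) == leaf]
    # Vote among the k nearest neighbours (squared Euclidean distance).
    same_leaf.sort(key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    votes = Counter(yi for _, yi in same_leaf[:k])
    return votes.most_common(1)[0][0]
```

With a real tree, the same idea applies leaf by leaf, which is what reduces the k-NN search cost.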
train_634
An approximation to the F distribution using the chi-square distribution
For the cumulative distribution function (c.d.f.) of the F distribution, F(x; k, n), with associated degrees of freedom k and n, a shrinking factor approximation (SFA), G(lambda kx; k), is proposed for large n and any fixed k, where G(x; k) is the chi-square c.d.f. with k degrees of freedom and lambda = lambda(kx; n) is the shrinking factor. Numerical analysis indicates that for n/k >= 3, the approximation accuracy of the SFA is to the fourth decimal place for most small values of k. This is a substantial improvement on the accuracy achievable using the normal, ordinary chi-square, and Scheffe-Tukey approximations. In addition, it is shown that the theoretical approximation error of the SFA, |F(x; k, n) - G(lambda kx; k)|, is O(1/n^2) uniformly over x.
cumulative distribution function;f distribution;chi-square distribution;degrees of freedom;numerical analysis;shrinking factor approximation
train_635
Detection and estimation of abrupt changes in the variability of a process
Detection of change-points in normal means is a well-studied problem. The parallel problem of detecting changes in variance has had less attention. The form of the generalized likelihood ratio test statistic has long been known, but its null distribution resisted exact analysis. In this paper, we formulate the change-point problem for a sequence of chi-square random variables. We describe a procedure that is exact for the distribution of the likelihood ratio statistic for all even degrees of freedom, and gives upper and lower bounds for odd (and also for non-integer) degrees of freedom. Both the liberal and conservative bounds for one degree of freedom (chi^2_1) are shown through simulation to be reasonably tight. The important problem of testing for change in the normal variance of individual observations corresponds to the chi^2_1 case. The non-null case is also covered, and confidence intervals for the true change point are derived. The methodology is illustrated with an application to quality control in a deep level gold mine. Other applications include ambulatory monitoring of medical data and econometrics.
process variability;distribution;generalized likelihood ratio test statistic;lower bounds;noninteger degrees of freedom;sequence;deep level gold mine;non null case;individual observations;abrupt change detection;quality control;upper bounds;medical data;conservative bounds;confidence intervals;ambulatory monitoring;odd degrees of freedom;even degrees of freedom;econometrics;simulation;chi-square random variables;abrupt change estimation;liberal bounds
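For a zero-mean sequence, the likelihood ratio scan for a single variance change has a simple closed form: each candidate split compares the pooled mean square against the mean squares on either side. A minimal sketch, assuming a known zero mean so that the squared terms are scaled chi^2_1 variables (names are ours; the paper's contribution is the exact null distribution, not this scan):

```python
import math

def variance_changepoint(x):
    # Generalized likelihood ratio scan for one change in variance.
    n = len(x)
    sq = [v * v for v in x]          # squared observations
    total = sum(sq)
    best_k, best_lr = None, float("-inf")
    left = 0.0
    for k in range(1, n):            # candidate change after index k-1
        left += sq[k - 1]
        right = total - left
        if left <= 0 or right <= 0:
            continue
        # 2*log-likelihood-ratio up to a constant factor:
        lr = (n * math.log(total / n)
              - k * math.log(left / k)
              - (n - k) * math.log(right / (n - k)))
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k, best_lr
```

The maximizing split estimates the change point; assessing its significance requires the null distribution studied in the paper.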
train_636
FLID-DL: congestion control for layered multicast
We describe fair layered increase/decrease with dynamic layering (FLID-DL): a new multirate congestion control algorithm for layered multicast sessions. FLID-DL generalizes the receiver-driven layered congestion control protocol (RLC) introduced by Vicisano et al. (Proc. IEEE INFOCOM, San Francisco, CA, p. 996-1003, Mar. 1998), ameliorating the problems associated with large Internet group management protocol (IGMP) leave latencies and abrupt rate increases. Like RLC, FLID-DL is a scalable, receiver-driven congestion control mechanism in which receivers add layers at sender-initiated synchronization points and leave layers when they experience congestion. FLID-DL congestion control coexists with transmission control protocol (TCP) flows as well as other FLID-DL sessions and supports general rates on the different multicast layers. We demonstrate via simulations that our congestion control scheme exhibits better fairness properties and provides better throughput than previous methods. A key contribution that enables FLID-DL and may be useful elsewhere is dynamic layering (DL), which mitigates the negative impact of long IGMP leave latencies and eliminates the need for the probe intervals present in RLC. We use DL to respond to congestion much faster than IGMP leave operations, which have proven to be a bottleneck in practice for prior work.
multicast layers;flid-dl;fair layered increase/decrease with dynamic layering;internet protocol multicast;layered multicast sessions;congestion control;dynamic layering;simulations;throughput;multirate congestion control algorithm;transmission control protocol;sender-initiated synchronization;tcp fairness;internet group management protocol;scalable congestion control;receiver-driven layered congestion control protocol;igmp
train_637
A digital fountain approach to asynchronous reliable multicast
The proliferation of applications that must reliably distribute large, rich content to a vast number of autonomous receivers motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous receivers to acquire content with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates. We develop a protocol that closely approximates a digital fountain using two new classes of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation, and performance of an experimental system
bulk data distribution;high loss rates;erasure codes;rs codes;multicast protocol;experimental system performance;tornado codes;scalable protocol;forward error correction;optimal efficiency;asynchronous reliable multicast;content distribution methods;internet;interoperability;ip multicast;autonomous receivers;large block size;simulation results;performance measurements;digital fountain;heterogeneous receivers;luby transform codes;fec codes;decoder;broadcast protocols;reed-solomon codes
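The digital fountain principle can be illustrated with a toy sparse-XOR code: the encoder emits XORs of random subsets of the source blocks, and a receiver recovers all blocks with a peeling decoder once enough symbols, from any part of the stream, have arrived. This is a sketch of the principle only, not the fast erasure codes of the paper; all names are ours:

```python
import random

def encode(blocks, n_symbols, rng):
    # Each output symbol XORs a random nonempty subset of source blocks.
    symbols = []
    for _ in range(n_symbols):
        idx = {i for i in range(len(blocks)) if rng.random() < 0.5}
        if not idx:
            idx = {rng.randrange(len(blocks))}
        val = 0
        for i in idx:
            val ^= blocks[i]
        symbols.append((idx, val))
    return symbols

def decode(symbols, n_blocks):
    # Peeling decoder: resolve degree-1 symbols, substitute, repeat.
    known = {}
    pending = [(set(idx), val) for idx, val in symbols]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        next_pending = []
        for idx, val in pending:
            for i in list(idx):           # substitute recovered blocks
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            if len(idx) == 1:             # one unknown left: solved
                known[idx.pop()] = val
                progress = True
            elif idx:
                next_pending.append((idx, val))
        pending = next_pending
    return [known.get(i) for i in range(n_blocks)]
```

Production fountain codes choose the subset-size distribution carefully so that peeling succeeds with slightly more symbols than source blocks; the uniform subsets here are only for illustration.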
train_638
Scalable secure group communication over IP multicast
We introduce and analyze a scalable rekeying scheme for implementing secure group communications over Internet protocol multicast. We show that our scheme incurs constant processing, message, and storage overhead for a rekey operation when a single member joins or leaves the group, and logarithmic overhead for bulk simultaneous changes to the group membership. These bounds hold even when group dynamics are not known a priori. Our rekeying algorithm requires a particular clustering of the members of the secure multicast group. We describe a protocol to achieve such clustering and show that it is feasible to efficiently cluster members over realistic Internet-like topologies. We evaluate the overhead of our own rekeying scheme and also of previously published schemes via simulation over an Internet topology map containing over 280,000 routers. Through analysis and detailed simulations, we show that this rekeying scheme performs better than previous schemes for a single change to group membership. Further, for bulk group changes, our algorithm outperforms all previously known schemes by several orders of magnitude in terms of actual bandwidth usage, processing costs, and storage requirements.
storage requirements;scalable secure group communication;internet topology map;ip multicast;secure multicast group;network routers;authentication;overhead;group dynamics;simulation;internet protocol multicast;cryptography;storage overhead;bandwidth usage;logarithmic overhead;processing costs;rekeying algorithm;group membership;internet-like topologies;access control server
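The logarithmic bound can be made concrete with the classical key-tree baseline: keys form a complete binary tree, each member holds the keys on its leaf-to-root path, and a departure forces replacement of exactly that path. A minimal sketch of this baseline (heap-style indexing is our choice; the paper's clustering scheme improves on this to constant overhead for single changes):

```python
def path_to_root(leaf, n_leaves):
    # Keys are nodes of a complete binary tree stored heap-style:
    # node 1 holds the group key, node i has children 2i and 2i+1,
    # and leaves occupy indices n_leaves .. 2*n_leaves - 1.
    node = n_leaves + leaf
    path = []
    while node > 1:
        node //= 2
        path.append(node)
    return path

def rekey_on_leave(leaf, n_leaves):
    # Every key known to the leaving member must be replaced,
    # i.e. all keys on its leaf-to-root path: O(log n) of them.
    return path_to_root(leaf, n_leaves)
```

With n members, the path length, and hence the number of replaced keys, is about log2(n).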
train_639
Distributed servers approach for large-scale secure multicast
In order to offer backward and forward secrecy for multicast applications (i.e., a new member cannot decrypt the multicast data sent before its joining and a former member cannot decrypt the data sent after its leaving), the data encryption key has to be changed whenever a user joins or leaves the system. Such a change has to be made known to all the current users. The bandwidth used for such re-key messaging can be high when the user pool is large. We propose a distributed servers approach to minimize the overall system bandwidth (and complexity) by splitting the user pool into multiple groups each served by a (logical) server. After presenting an analytic model for the system based on a hierarchical key tree, we show that there is an optimal number of servers to achieve minimum system bandwidth. As the underlying user traffic fluctuates, we propose a simple dynamic scheme with low overhead where a physical server adaptively splits and merges its traffic into multiple groups each served by a logical server so as to minimize its total bandwidth. Our results show that a distributed servers approach is able to substantially reduce the total bandwidth required as compared with the traditional single-server approach, especially for those applications with a large user pool, short holding time, and relatively low bandwidth of a data stream, as in the Internet stock quote applications
key management;data encryption key;forward secrecy;short holding time;backward secrecy;hierarchical key tree;distributed servers;large-scale secure multicast;user traffic;system bandwidth;system complexity;multicast applications;internet stock quote applications;re-key messaging;dynamic split-and-merge scheme;traffic merging
train_64
Speech enhancement using a mixture-maximum model
We present a spectral domain, speech enhancement algorithm. The new algorithm is based on a mixture model for the short time spectrum of the clean speech signal, and on a maximum assumption in the production of the noisy speech spectrum. In the past this model was used in the context of noise robust speech recognition. In this paper we show that this model is also effective for improving the quality of speech signals corrupted by additive noise. The computational requirements of the algorithm can be significantly reduced, essentially without paying performance penalties, by incorporating a dual codebook scheme with tied variances. Experiments, using recorded speech signals and actual noise sources, show that in spite of its low computational requirements, the algorithm shows improved performance compared to alternative speech enhancement algorithms
dual codebook;tied variances;recorded speech signals;mixmax model;speech intelligibility;clean speech signal;speech enhancement algorithm;noise robust speech recognition;speech signal quality;gaussian mixture model;mixture-maximum model;spectral domain;noisy speech spectrum;additive noise;mixture model;short time spectrum;noise sources;low computational requirements;performance penalties
train_640
Scribe: a large-scale and decentralized application-level multicast infrastructure
This paper presents Scribe, a scalable application-level multicast infrastructure. Scribe supports large numbers of groups, with a potentially large number of members per group. Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet, and leverages Pastry's reliability, self-organization, and locality properties. Pastry is used to create and manage groups and to build efficient multicast trees for the dissemination of messages to each group. Scribe provides best-effort reliability guarantees, and we outline how an application can extend Scribe to provide stronger reliability. Simulation results, based on a realistic network topology model, show that Scribe scales across a wide range of groups and group sizes. Also, it balances the load on the nodes while achieving acceptable delay and link stress when compared with Internet protocol multicast.
locality properties;network nodes;internet;scalable application-level multicast infrastructure;decentralized application-level multicast infrastructure;self-organization;internet protocol multicast;simulation results;generic peer-to-peer object location;pastry;scribe;discrete event simulator;group size;best-effort reliability guarantees;network topology model;link stress;delay;generic routing substrate
train_641
Multiecho segmented EPI with z-shimmed background gradient compensation (MESBAC) pulse sequence for fMRI
A MultiEcho Segmented EPI with z-shimmed BAckground gradient Compensation (MESBAC) pulse sequence is proposed and validated for functional MRI (fMRI) study in regions suffering from severe susceptibility artifacts. This sequence provides an effective tradeoff between spatial and temporal resolution and reduces image distortion and signal dropout. The blood oxygenation level-dependent (BOLD)-weighted fMRI signal can be reliably obtained in the region of the orbitofrontal cortex (OFC). To overcome physiological motion artifacts during prolonged multisegment EPI acquisition, two sets of navigator echoes were acquired in both the readout and phase-encoding directions. Ghost artifacts generally produced by single-shot EPI acquisition were eliminated by separately placing the even and odd echoes in different k-space trajectories. Unlike most z-shim methods that focus on increasing temporal resolution for event-related functional brain mapping, the MESBAC sequence simultaneously addresses problems of image distortion and signal dropout while maintaining sufficient temporal resolution. The MESBAC sequence will be particularly useful for pharmacological and affective fMRI studies in brain regions such as the OFC, nucleus accumbens, amygdala, para-hippocampus, etc.
severe susceptibility artifacts;z-shimmed background gradient compensation;image distortion;bold-weighted signal;multiecho segmented epi;event-related functional brain mapping;fmri;gradient compensation pulse sequence;ghost artifacts;signal dropout;navigator echoes;spatial resolution;temporal resolution;orbitofrontal cortex
train_642
Reconstruction of MR images from data acquired on an arbitrary k-space trajectory using the same-image weight
A sampling density compensation function denoted "same-image (SI) weight" is proposed to reconstruct MR images from the data acquired on an arbitrary k-space trajectory. An equation for the SI weight is established on the SI criterion and an iterative scheme is developed to find the weight. The SI weight is then used to reconstruct images from the data calculated on a random trajectory in a numerical phantom case and from the data acquired on interleaved spirals in an in vivo experiment, respectively. In addition, Pipe and Menon's weight (MRM 1999;41:179-186) is also used in the reconstructions to make a comparison. The images obtained with the SI weight were found to be slightly more accurate than those obtained with Pipe's weight.
nyquist sampling conditions;same-image weight;arbitrary k-space trajectory;iterative algorithm;random trajectory;weighting function;sampling density compensation;mri image reconstruction;numerical phantom;spiral trajectory;convolution function
train_643
Time-resolved contrast-enhanced imaging with isotropic resolution and broad coverage using an undersampled 3D projection trajectory
Time-resolved contrast-enhanced 3D MR angiography (MRA) methods have gained in popularity but are still limited by the tradeoff between spatial and temporal resolution. A method is presented that greatly reduces this tradeoff by employing undersampled 3D projection reconstruction trajectories. The variable density k-space sampling intrinsic to this sequence is combined with temporal k-space interpolation to provide time frames as short as 4 s. This time resolution reduces the need for exact contrast timing while also providing dynamic information. Spatial resolution is determined primarily by the projection readout resolution and is thus isotropic across the FOV, which is also isotropic. Although undersampling the outer regions of k-space introduces aliased energy into the image, which may compromise resolution, this is not a limiting factor in high-contrast applications such as MRA. Results from phantom and volunteer studies are presented demonstrating isotropic resolution, broad coverage with an isotropic field of view (FOV), minimal projection reconstruction artifacts, and temporal information. In one application, a single breath-hold exam covering the entire pulmonary vasculature generates high-resolution, isotropic imaging volumes depicting the bolus passage.
image artifacts;isotropic field of view;pulmonary vasculature;3d mri angiography;isotropic resolution;breath-hold imaging;broad coverage;variable density k-space sampling;thorax;undersampled 3d projection trajectory;abdomen;time-resolved contrast-enhanced imaging;temporal k-space interpolation;bolus passage
train_644
Three-dimensional spiral MR imaging: application to renal multiphase contrast-enhanced angiography
A fast MR pulse sequence with spiral in-plane readout and conventional 3D partition encoding was developed for multiphase contrast-enhanced magnetic resonance angiography (CE-MRA) of the renal vasculature. Compared to a standard multiphase 3D CE-MRA with FLASH readout, an isotropic in-plane spatial resolution of 1.4 x 1.4 mm^2, instead of 2.0 x 1.4 mm^2, could be achieved with a temporal resolution of 6 s. The theoretical gain of spatial resolution by using the spiral pulse sequence and the performance in the presence of turbulent flow were evaluated in phantom measurements. Multiphase 3D CE-MRA of the renal arteries was performed in five healthy volunteers using both techniques. A deblurring technique was used to correct the spiral raw data, whereby the off-resonance frequencies were determined by minimizing the imaginary part of the data in image space. The chosen correction algorithm was able to reduce image blurring substantially in all MRA phases. The image quality of the spiral CE-MRA pulse sequence was comparable to that of the FLASH CE-MRA, with increased spatial resolution and a 25% reduced contrast-to-noise ratio. Additionally, artifacts specific to spiral MRI could be observed, which had no impact on the assessment of the renal arteries.
renal multiphase contrast-enhanced angiography;flash sequence;spiral in-plane readout;3d partition encoding;fast pulse sequence;flow artifacts;image reconstruction;image quality;deblurring;3d spiral mri;spatial resolution;off-resonance frequencies;reduced contrast-to-noise ratio;renal vasculature
train_645
Oxygen-enhanced MRI of the brain
Blood oxygenation level-dependent (BOLD) contrast MRI is a potential method for a physiological characterization of tissue beyond mere morphological representation. The purpose of this study was to develop evaluation techniques for such examinations using a hyperoxia challenge. Administration of pure oxygen was applied to test these techniques, as pure oxygen can be expected to induce relatively small signal intensity (SI) changes compared to CO2-containing gases and thus requires very sensitive evaluation methods. Fourteen volunteers were investigated by alternating between breathing 100% O2 and normal air, using two different paradigms of administration. Changes ranged from >30% in large veins to 1.71% +/- 0.14% in basal ganglia and 0.82% +/- 0.08% in white matter. To account for a slow physiological response function, a reference for correlation analysis was derived from the venous reaction. An objective method is presented that allows the adaptation of the significance threshold to the complexity of the paradigm used. Reference signal characteristics in representative brain tissue regions were established. As the presented evaluation scheme proved its applicability to small SI changes induced by pure oxygen, it can readily be used for similar experiments with other gases.
venous reaction;correlation analysis;normal air breathing;significance threshold;functional imaging;physiological response function;fourier transform analysis;mri contrast agent;oxygen-enhanced mri;oxygen breathing;paradigm complexity;bold contrast mri;brain;hyperoxia
train_646
Vibration control of structure by using tuned mass damper (development of a system which suppresses displacement of the auxiliary mass)
In vibration control of a structure by using an active tuned mass damper (ATMD), the stroke of the auxiliary mass is so limited that it is difficult to control the vibration in the case of a large disturbance input. In this paper, two methods are proposed for this problem. The first is a switching control system with two types of controllers: one is a normal controller used under small relative displacement of the auxiliary mass, and the other is not effective only for the first mode of vibration under large relative displacement of the auxiliary mass. A new variable gain control system is constructed by switching between these two controllers. The second method is a brake system. In active vibration control, an actuator is needed for active control. Using this actuator, the proposed system applies a brake to suppress the displacement increase of the auxiliary mass under a large disturbance input. Finally, the systems are designed and their effectiveness is confirmed by simulation.
auxiliary mass displacement suppression;vibration control;actuator;active control;brake system;controllers;variable gain control system;tuned mass damper
train_647
Experimental design methodology and data analysis technique applied to optimise an organic synthesis
The study was aimed at maximising the yield of a Michaelis-Becker dibromoalkane monophosphorylation reaction. In order to save time and money, we first applied a full factorial experimental design to search for the optimum conditions while performing a small number of experiments. We then used the principal component analysis (PCA) technique to reveal two uncontrolled factors. Lastly, a special experimental design that took into account all the influential factors allowed us to determine the maximum-yield experimental conditions. This study also demonstrated the complementary nature of experimental design methodology and data analysis techniques.
maximum-yield experimental conditions;principal component analysis;full factorial experimental design;uncontrolled factors;organic synthesis;optimum conditions;michaelis-becker dibromoalkane monophosphorylation reaction;data analysis technique
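A full factorial design simply enumerates every combination of factor levels, which is why it explores the whole condition space with a predictable number of runs. A minimal sketch (the factor names are illustrative, not those of the study):

```python
from itertools import product

def full_factorial(factors):
    # factors: dict mapping factor name -> list of levels.
    # Returns one run (a dict of settings) per combination of levels.
    names = sorted(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]
```

For k factors at two levels each, this yields 2^k runs, the "small number of experiments" that a full factorial screen requires.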
train_648
Study of ambiguities inherent to the spectral analysis of Voigt profiles: a modified Simplex approach
In pulsed spectrometries, temporal transients are often analyzed directly in the temporal domain, assuming they consist only of purely exponentially decaying sinusoids. When experimental spectra actually consist of Gaussian or Voigt profiles (Gauss-Lorentz profiles), we show that the direct methods may erroneously interpret such lines as the sum of two or more Lorentzian profiles. Using a Nelder and Mead Simplex method, modified by introducing new means to avoid degeneracies and quenchings in secondary minima, we demonstrate that a large number of different solutions can be obtained with equivalent accuracy over the limited acquisition time interval, with final peak parameters devoid of physical or chemical meaning.
nelder and mead simplex method;gauss-lorentz profiles;voigt profiles;gaussian profiles;accuracy;pulsed spectrometries;temporal transients;final peak parameters;limited acquisition time interval;spectral analysis
train_649
Methods for outlier detection in prediction
If a prediction sample is different from the calibration samples, it can be considered as an outlier in prediction. In this work, two techniques, the use of uncertainty estimation and the convex hull method are studied to detect such prediction outliers. Classical techniques (Mahalanobis distance and X-residuals), potential functions and robust techniques are used for comparison. It is concluded that the combination of the convex hull method and uncertainty estimation offers a practical way for detecting outliers in prediction. By adding the potential function method, inliers can also be detected
outlier detection;uncertainty estimation;inliers;mahalanobis distance;calibration samples;robust techniques;convex hull method;x-residuals;prediction sample;potential functions
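The convex hull check flags a prediction sample as a potential outlier when it lies outside the hull of the calibration samples, i.e. when predicting for it would mean extrapolating. A minimal 2-D sketch using Andrew's monotone chain (names are ours; in practice the check is usually applied in a reduced score space, e.g. after PCA):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices counter-clockwise.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(p, hull):
    # A prediction sample outside the calibration hull is flagged
    # as a potential prediction outlier (extrapolation).
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))
```

Combining this geometric check with an uncertainty estimate on the prediction is the practical recipe the abstract arrives at.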
train_65
The use of subtypes and stereotypes in the UML model
Based on users' experiences of Version 1.3 of the Unified Modeling Language (UML) of the Object Management Group (OMG), a Request For Information in 1999 elicited several responses; respondents were asked to identify "problems" but not to offer any solutions. One of these responses is examined for "problems" relating to the UML metamodel, and some solutions to the problems identified there are proposed here. Specifically, we evaluate the metamodel relating to stereotypes versus subtypes; the various kinds of Classifier (particularly Types, Interfaces and Classes); the introduction of a new subtype for the whole part relationship; as well as identifying areas in the metamodel where the UML seems to have been used inappropriately in the very definition of the UML's metamodel
stereotypes;whole part relationship;classifier;uml model;object management group;unified modeling language;subtypes;request for information
train_650
Molecular descriptor selection combining genetic algorithms and fuzzy logic:
application to database mining procedures
A new algorithm, devoted to molecular descriptor selection in the context of data mining problems, has been developed. This algorithm is based on the concepts of genetic algorithms (GA) for descriptor hyperspace exploration and is combined with a stepwise approach to achieve local convergence. Its selection power was evaluated by a fitness function derived from a fuzzy clustering method. Different training and test sets were randomly generated at each GA generation. The fitness score was derived by combining the scores of the training and test sets. The ability of the proposed algorithm to select relevant subsets of descriptors was tested on two data sets. The first one, an academic example, corresponded to the artificial problem of Bullseye; the second was a real data set including 114 olfactory compounds divided into three odor categories. In both cases, the proposed method allowed us to improve the separation between the different data set classes
data mining;molecular descriptor selection;local convergence;fuzzy clustering method;bullseye;database mining;olfactory compounds;fitness function;genetic algorithms;fitness score;training sets;descriptor hyperspace exploration;test sets;odor categories;fuzzy logic;stepwise approach
train_651
Application-layer multicasting with Delaunay triangulation overlays
Application-layer multicast supports group applications without the need for a network-layer multicast protocol. Here, applications arrange themselves in a logical overlay network and transfer data within the overlay. We present an application-layer multicast solution that uses a Delaunay triangulation as an overlay network topology. An advantage of using a Delaunay triangulation is that it allows each application to locally derive next-hop routing information without requiring a routing protocol in the overlay. A disadvantage of using a Delaunay triangulation is that the mapping of the overlay to the network topology at the network and data link layer may be suboptimal. We present a protocol, called Delaunay triangulation (DT protocol), which constructs Delaunay triangulation overlay networks. We present measurement experiments of the DT protocol for overlay networks with up to 10 000 members, running on a local PC cluster with 100 Linux PCs. The results show that the protocol stabilizes quickly, e.g., an overlay network with 10 000 nodes can be built in just over 30 s. The traffic measurements indicate that the average overhead of a node is only a few kilobits per second if the overlay network is in a steady state. Results of throughput experiments of multicast transmissions (using TCP unicast connections between neighbors in the overlay network) show an achievable throughput of approximately 15 Mb/s in an overlay with 100 nodes and 2 Mb/s in an overlay with 1000 nodes
network nodes;application-layer multicasting;overlay networks;average overhead;2 mbit/s;delaunay triangulation overlays;group applications;traffic measurements;data transfer;local pc cluster;15 mbit/s;network-layer multicast protocol;data link layer;logical overlay network;next-hop routing information;multicast transmissions;delaunay triangulation protocol;overlay network topology;throughput experiments;tcp unicast connections;measurement experiments;dt protocol;linux pc
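The "locally derived next-hop" idea above can be illustrated with a simplified greedy rule on hypothetical coordinates: each node knows only its own overlay neighbors and forwards toward the neighbor geometrically closest to the destination. The actual DT protocol uses compass routing on the triangulation, but the key property, a purely local forwarding decision with no overlay routing protocol, is the same.

```python
# Sketch (hypothetical 4-node overlay): greedy next-hop forwarding using
# only each node's local neighbor list and node coordinates.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# node -> logical coordinates
coords = {"A": (0, 0), "B": (2, 0), "C": (1, 2), "D": (3, 2)}
# overlay adjacency (edges of a small triangulation)
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"],
             "C": ["A", "B", "D"], "D": ["B", "C"]}

def next_hop(node, dest):
    # local decision: forward to the neighbor closest to the destination
    if node == dest:
        return node
    return min(neighbors[node], key=lambda n: dist(coords[n], coords[dest]))

def route(src, dest, max_hops=10):
    path = [src]
    while path[-1] != dest and len(path) <= max_hops:
        path.append(next_hop(path[-1], dest))
    return path

print(route("A", "D"))
```

On a true Delaunay triangulation, compass routing of this local flavor is known to always reach the destination, which is precisely why the overlay needs no routing protocol.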
train_652
A case for end system multicast
The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred
protocol layer;internet protocol;end system multicast;distributed protocol;higher layer functionality;overlay structure;network dynamics;end-to-end delays;internet experiments;congestion control;packet replication;performance penalties;ip multicast;narada protocol;network management;application level performance;network scalability;membership management;network routers;simulation;self-organizing protocol
train_653
Indexing-neglected and poorly understood
The growth of the Internet has highlighted the use of machine indexing. The difficulties in using the Internet as a searching device can be frustrating. The use of the term "python" is given as an example. Machine indexing is noted as "rotten" and human indexing as "capricious." The problem seems to be a lack of a theoretical foundation for the art of indexing. What librarians have learned over the last hundred years has yet to yield a consistent approach to what really works best in preparing index terms and in the ability of our customers to search the various indexes. An attempt is made to consider the elements of indexing, their pros and cons. The argument is made that machine indexing is far too prolific in its production of index terms. Neither librarians nor computer programmers have made much progress to improve Internet indexing. Human indexing has had the same problems for over fifty years
index terms;machine indexing;internet;searching;human indexing
train_654
A question of perspective: assigning Library of Congress subject headings to
classical literature and ancient history
This article explains the concept of world view and shows how the world view of cataloguers influences the development and assignment of subject headings to works about other cultures and civilizations, using works from classical literature and ancient history as examples. Cataloguers are encouraged to evaluate the headings they assign to works in classical literature and ancient history in terms of the world views of Ancient Greece and Rome so that headings reflect the contents of the works they describe and give fuller expression to the diversity of thoughts and themes that characterize these ancient civilizations
library of congress subject heading assignment;world view;cultures;ancient greece;civilizations;ancient history;classical literature;ancient rome
train_655
Mapping CCF to MARC21: an experimental approach
The purpose of this article is to raise and address a number of issues pertaining to the conversion of Common Communication Format (CCF) into MARC21. In this era of global resource sharing, exchange of bibliographic records from one system to another is imperative in today's library communities. Instead of using a single standard to create machine-readable catalogue records, more than 20 standards have emerged and are being used by different institutions. Because of these variations in standards, sharing of resources and transfer of data from one system to another among the institutions locally and globally have become a significant problem. Addressing this problem requires keeping in mind that countries such as India and others in southeast Asia are using the CCF as a standard for creating bibliographic cataloguing records. This paper describes a way to map the bibliographic catalogue records from CCF to MARC21, although 100% mapping is not possible. In addition, the paper describes an experimental approach that enumerates problems that may occur during the mapping/exchanging of records and how these problems can be overcome
india;southeast asia;common communication format conversion;global resource sharing;library communities;marc21;bibliographic records exchange;data transfer;ccf to marc21 mapping;machine-readable catalogue records;standards
train_656
The cataloger's workstation revisited: utilizing Cataloger's Desktop
A few years into the development of Cataloger's Desktop, an electronic cataloging tool aggregator available through the Library of Congress, it is an opportune time to assess its impact on cataloging operations. A search for online cataloging tools on the Internet indicates a proliferation of cataloging tool aggregators, which provide access to online documentation related to cataloging practices and procedures. Cataloger's Desktop stands out as a leader among these aggregators. Results of a survey to assess 159 academic ARL and large public libraries' reasons for use or non-use of Cataloger's Desktop highlight the necessity of developing strategies for its successful implementation, including training staff, providing documentation, and managing technical issues
internet;cataloging tool aggregators;online documentation;online cataloging tools;staff training;large public libraries;documentation;cataloger's workstation;managing technical issues;cataloger's desktop;academic arl;electronic cataloging tool
train_657
The web services agenda
Even the most battle-scarred of CIOs have become excited at the prospect of what web services can do for their businesses. But there are still some shortcomings to be addressed
transaction support;security;web services
train_658
Process pioneers [agile business]
By managing IT infrastructures along so-called 'top down' lines, organisations can streamline their business processes, eliminate redundant tasks and increase automation
managing it infrastructures;increase automation;business processes;agile business
train_659
Integration - no longer a barrier? [agile business]
Web services will be a critical technology for enabling the 'agile business'
agile business;amr research;iona;integration middleware;web services
train_66
Regression testing of database applications
Features of database applications such as Structured Query Language or SQL, exception programming, integrity constraints, and table triggers pose difficulties for maintenance activities, especially for regression testing that follows modifications to database applications. In this work, we address these difficulties and propose a two phase regression testing methodology. In phase 1, we explore control flow and data flow analysis issues of database applications. Then, we propose an impact analysis technique that is based on dependencies that exist among the components of database applications. This analysis leads to selecting test cases from the initial test suite for regression testing the modified application. In phase 2, further reduction in the regression test cases is performed by using reduction algorithms. We present two such algorithms. The Graph Walk algorithm walks through the control flow graph of database modules and selects a safe set of test cases to retest. The Call Graph Firewall algorithm uses a firewall for the inter procedural level. Finally, a maintenance environment for database applications is described. Our experience with this regression testing methodology shows that the impact analysis technique is adequate for selecting regression tests and that phase 2 techniques can be used for further reduction in the number of these tests
call graph firewall algorithm;data flow analysis;impact analysis;integrity constraints;database applications;table triggers;sql;structured query language;reduction algorithms;control flow graph;exception programming;control flow analysis;two phase regression testing methodology;graph walk algorithm
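The dependency-based selection idea behind the entry above can be sketched with a toy example. This is a hypothetical illustration (component names and coverage map invented), not the paper's Graph Walk or Call Graph Firewall algorithm: a test is re-run if any component it exercises depends, directly or transitively, on a modified component.

```python
# Sketch: dependency-based regression test selection on a hypothetical
# component call graph.

# component -> components it calls
deps = {
    "ui": ["orders"],
    "orders": ["sql_layer"],
    "reports": ["sql_layer"],
    "sql_layer": [],
    "audit": [],
}

def affected(modified):
    """All components that reach a modified component, transitively."""
    hit = set(modified)
    changed = True
    while changed:
        changed = False
        for comp, callees in deps.items():
            if comp not in hit and any(c in hit for c in callees):
                hit.add(comp)
                changed = True
    return hit

# test case -> components it exercises (hypothetical coverage data)
coverage = {"t1": ["ui"], "t2": ["reports"], "t3": ["audit"]}

def select_tests(modified):
    hot = affected(modified)
    return sorted(t for t, comps in coverage.items()
                  if any(c in hot for c in comps))

print(select_tests(["sql_layer"]))
```

Modifying the shared SQL layer selects the tests for both callers while leaving the unrelated audit test out, which is the safety/reduction trade-off the two-phase methodology formalizes.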
train_660
At your service [agile businesses]
Senior software executives from three of the world's leading software companies, and one smaller, entrepreneurial software developer, explain the impact that web services, business process management and integrated application architectures are having on their product development plans, and share their vision of the roles these products will play in creating agile businesses
integrated application architectures;agile businesses;business process management;software companies;web services
train_661
All change [agile business]
What does it take for an organisation to become an agile business? Its employees probably need to adhere to new procurement policies, work more closely with colleagues in other departments, meet more exacting sales targets, and offer higher standards of customer service and support. In short, they need to change the way they work. Implementing technologies to support agile business models and underpin new practices is a complex task in itself. But getting employees to adopt new practices is far harder, and a task that requires careful handling, says Barry O'Connell, general manager of business-to-employee (B2E) solutions at systems vendor Hewlett-Packard (HP)
corporate transformation;agile business;organisational change
train_663
The road ahead [supply chains]
Executive supply chain managers, says David Metcalfe of Forrester Research, need the skills and precision of Mongolian archers on horseback. They must be able to hit their target, in this case customer demand, while moving at great speed. But what is wrong with the supply chains companies have in place already? According to Metcalfe, current manufacturing models are too inflexible. A recent survey conducted by Forrester Research supports this claim. It found that 42% of respondents could not transfer production from one plant to another in the event of a glitch in the supply chain. A further 32% said it would be possible, but extremely costly
business networks;forrester research;supply chains;survey;manufacturing
train_664
The agile revolution [business agility]
There is a new business revolution in the air. The theory is there, the technology is evolving, fast. It is all about agility
software deployment;supply chains;software design;organisational structures;business agility
train_67
Metaschemas for ER, ORM and UML data models: a comparison
This paper provides metaschemas for some of the main database modeling notations used in industry. Two Entity Relationship (ER) notations (Information Engineering and Barker) are examined in detail, as well as Object Role Modeling (ORM) conceptual schema diagrams. The discussion of optionality, cardinality and multiplicity is widened to include Unified Modeling Language (UML) class diagrams. Issues addressed in the metamodel analysis include the normalization impact of non-derived constraints on derived associations, the influence of orthogonality on language transparency, and trade-offs between simplicity and expressibility. To facilitate comparison, the same modeling notation is used to display each metaschema. For this purpose, ORM is used because of its greater expressibility and clarity
barker notation;optionality;data models;information engineering;language transparency;class diagrams;metaschemas;entity relationship modeling;uml;orthogonality;cardinality;database modeling notations;normalization;object role modeling;unified modeling language;conceptual schema diagrams;multiplicity;orm
train_671
Expert advice - how can my organisation take advantage of reverse auctions
without jeopardising existing supplier relationships?
In a recent survey, AMR Research found that companies that use reverse auctions to negotiate prices with suppliers typically achieve savings of between 10% and 15% on direct goods and between 20% and 25% on indirect goods, and can slash sourcing cycle times from months to weeks. Suppliers, however, are less enthusiastic. They believe that these savings are achieved only by stripping the human element out of negotiations and evaluating bids on price alone, which drives down their profit margins. As a result, reverse auctions carry the risk of jeopardising long-term and trusted relationships. Suppliers that have not been involved in a reverse auction before typically fear the bidding event itself - arguably the most theatrical and, therefore, most hyped-up part of the process. Although it may only last one hour, weeks of preparation go into setting up a successful bidding event
reverse auctions;preparation;request for quotation;supplier relationships
train_673
The Information Age interview - Capital One
Credit card company Capital One attributes its rapid customer growth to the innovative use of cutting-edge technology. European CIO Catherine Doran talks about the systems that have fuelled that runaway success
cutting-edge technology;customer growth;capital one;credit card company
train_674
Portal payback
The benefits of deploying a corporate portal are well-documented: access to applications and content is centralised, so users do not spend hours searching for information; the management of disparate applications is also centralised, and by allowing users to access 'self-service' applications in areas such as human resources and procurement, organisations spend less time on manual processing tasks. But how far can prospective customers rely on the ROI figures presented to them by portal technology vendors? In particular, how reliable are the 'ROI calculators' these vendors supply on their web sites?
web sites;metrics;return on investment;roi calculator;corporate portal
train_675
Application foundations [application servers]
The changing role of application servers means choosing the right platform has become a complex challenge
microsoft .net;java 2 enterprise edition;security;load balancing;availability;application servers;transaction processing
train_676
Impossible choice [web hosting service provider]
Selecting a telecoms and web hosting service provider has become a high-stakes game of chance
customer service;web hosting service provider;it managers;selection
train_677
Acts to facts catalogue
The paper shows a way to satisfy users' changing and specific information needs by providing the modified format-author-collaborators-title-series-subject (FACTS) catalogue instead of the traditional author-collaborator-title-series-subjects (ACTS) catalogue
author-collaborator-title-series-subjects catalogue;information needs;format-author-collaborators-title-series-subject catalogue
train_678
Marketing in CSIR libraries and information centres: a study on promotional
efforts
This paper examines the attitudes of librarians towards the promotional aspects in several CSIR libraries and information centres of India. The issues related to promotional activities of these libraries have been evaluated to determine the extent to which they are being practised. Librarians hold positive attitudes about promotional aspects of libraries and often practise them without knowing they are practising marketing concepts. Suggestions and strategies for improving the promotional activities in libraries and information services are put forth so as to meet the information needs and demands of clientele
india;csir libraries;marketing;promotional activities;information needs;information centres
train_679
Himalayan information system: a proposed model
The information explosion and the development in information technology force us to develop information systems in various fields. The research on Himalaya has achieved phenomenal growth in recent years in India. The information requirements of Himalayan researchers are divergent in nature. In order to meet these divergent needs, all information generated in various Himalayan research institutions has to be collected and organized to facilitate free flow of information. This paper describes the need for a system for Himalayan information. It also presents the objectives of Himalayan information system (HIMIS). It discusses in brief the idea of setting up a HIMIS and explains its utility to the users. It appeals to the government for supporting the development of such system
himalayan information system model;india;information technology;himis;government;information explosion;information network;information requirements
train_68
Human factors research on data modeling: a review of prior research, an
extended framework and future research directions
This study reviews and synthesizes human factors research on conceptual data modeling. In addition to analyzing the variables used in earlier studies and summarizing the results of this stream of research, we propose a new framework to help with future efforts in this area. The study finds that prior research has focused on issues that are relevant when conceptual models are used for communication between systems analysts and developers (Analyst Developer models), whereas the issues important for models that are used to facilitate communication between analysts and users (User-Analyst models) have received little attention and, hence, require a significantly stronger role in future research. In addition, we emphasize the importance of building a strong theoretical foundation and using it to guide future empirical work in this area
conceptual data modeling;analyst developer models;database;user-analyst models;future efforts;human factors
train_680
Information needs of the working journalists in Orissa: a study
Provides an insight into the various information needs of working journalists in Orissa. Analyses data received from 226 working journalists representing 40 newspaper organisations. Also depicts the specialisation of working journalists, their frequency of information requirement, mode of dissemination preferred, information sources explored, mode of services opted, and their information privations. The study asserts that subjects primarily concerned with the professional work and image of the working journalists are rated utmost significant
information requirement;working journalists;information sources;data analysis;professional work;information dissemination;newspaper organisations;information needs
train_681
Construction of information retrieval thesaurus for family planning terms using
CDS/ISIS
The thesaurus as a tool for information retrieval and as an alternative to the existing scheme of classifications in information retrieval is discussed. The paper considers the emergence of the information retrieval thesaurus and its definition. Family planning is a multidisciplinary subject covering socio-economic, cultural, psychological and medical fields. This necessitated the construction of a thesaurus for the Family Planning discipline. The construction is based on UNISIST, ISO 2788 and BS 5723 guidelines by using CDS/ISIS software
culture;classification;medicine;thesaurus;family planning;socio economic field;bs 5723;bibliographic databases;psychology;iso 2788;family planning terms;cds/isis software;information retrieval;unisist
train_682
Information and information technology
This paper reveals the concepts of information and information technology. It also describes the close relationship between information and information technology. It explains the basic mechanisms of different devices of information technology and shows how they are used to store, process and retrieve information. In addition to this, the paper shows the present status of information technology in Indian universities
information technology;information processing;indian universities;information retrieval;information;information storage
train_683
Knowledge management
The article defines knowledge management, discusses its role, and describes its functions. It also explains the principles of knowledge management, enumerates the strategies involved in knowledge management, and traces its history in brief. The focus is on its interdisciplinary nature. The steps involved in knowledge management i.e. identifying, collecting and capturing, selecting, organizing and storing, sharing, applying, and creating, are explained. The pattern of knowledge management initiatives is also considered
knowledge management
train_684
Individual decision making using fuzzy set theory
The paper shows the importance of decision making by an individual and highlights the prime domain of decision making where fuzzy set theory can be used as a tool. Fuzzy set theory has been used on rational model of decision making to arrive at the desired conclusion
individual decision making;fuzzy set theory;rational decision making model
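The fuzzy-set-based rational decision model described above can be illustrated with a small sketch. The alternatives, criteria, and membership grades here are hypothetical: each alternative gets a grade in [0, 1] per fuzzy criterion, the overall suitability is the minimum across criteria (the standard fuzzy intersection), and the rational choice maximises that suitability.

```python
# Sketch (hypothetical grades): choosing among alternatives by fuzzy
# criterion memberships, aggregated with min (fuzzy AND).

alternatives = {
    # membership grades per criterion: (affordable, reliable, fast)
    "option_a": (0.9, 0.6, 0.7),
    "option_b": (0.5, 0.9, 0.8),
    "option_c": (0.8, 0.4, 0.9),
}

def suitability(grades):
    # an alternative is only as good as its weakest criterion
    return min(grades)

def best_choice(options):
    return max(options, key=lambda name: suitability(options[name]))

print(best_choice(alternatives))
```

Other aggregation operators (product, weighted average) encode different attitudes toward trade-offs between criteria; min is the most conservative choice.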
train_685
Robotically enhanced placement of left ventricular epicardial electrodes during
implantation of a biventricular implantable cardioverter defibrillator system
Biventricular pacing has gained increasing acceptance in advanced heart failure patients. One major limitation of this therapy is positioning the left ventricular stimulation lead via the coronary sinus. This report demonstrates the feasibility of totally endoscopic direct placement of an epicardial stimulation lead on the left ventricle using the daVinci surgical system
davinci surgical system;left ventricular epicardial electrodes;advanced heart failure patients;biventricular implantable cardioverter defibrillator system implantation;left ventricular stimulation lead positioning;coronary sinus;epicardial leads;totally endoscopic direct placement;left ventricular pacing
train_686
Technology CAD of SiGe-heterojunction field effect transistors
A 2D virtual wafer fabrication simulation suite has been employed for the technology CAD of SiGe channel heterojunction field effect transistors (HFETs). Complete fabrication process of SiGe p-HFETs has been simulated. The SiGe material parameters and mobility model were incorporated to simulate Si/SiGe p-HFETs with a uniform germanium channel having an L/sub eff/ of 0.5 mu m. A significant improvement in linear transconductance is observed when compared to control-silicon p-MOSFETs
uniform channel;mobility model;material parameters;technology cad;fabrication process;0.5 micron;sige;linear transconductance;heterojunction field effect transistors
train_687
Image reconstruction of simulated specimens using convolution back projection
This paper reports the reconstruction of cross-sections of composite structures. The convolution back projection (CBP) algorithm has been used to capture the attenuation field over the specimen. Five different test cases have been taken up for evaluation. These cases represent varying degrees of complexity. In addition, the role of filters on the nature of the reconstruction errors has also been discussed. Numerical results obtained in the study reveal that CBP algorithm is a useful tool for qualitative as well as quantitative assessment of composite regions encountered in engineering applications
reconstruction errors;simulated specimens;composite structures;engineering applications;convolution back projection;image reconstruction;filters;attenuation field;computerised tomography;composite regions;cbp algorithm
train_688
Active vibration control of piezolaminated smart beams
This paper deals with the active vibration control of beam like structures with distributed piezoelectric sensor and actuator layers bonded on top and bottom surfaces of the beam. A finite element model based on Euler-Bernoulli beam theory has been developed. The contribution of the piezoelectric sensor and actuator layers on the mass and stiffness of the beam is considered. Three types of classical control strategies, namely direct proportional feedback, constant-gain negative velocity feedback and Lyapunov feedback and an optimal control strategy, linear quadratic regulator (LQR) scheme are applied to study their control effectiveness. Also, the control performance with different types of loading, such as impulse loading, step loading, harmonic and random loading is studied
stiffness;top surfaces;lyapunov feedback;impulse loading;constant-gain negative velocity feedback;optimal control strategy;random loading;distributed piezoelectric actuator layers;control effectiveness;bottom surfaces;step loading;piezolaminated smart beams;euler-bernoulli beam theory;mass;beam like structures;active vibration control;direct proportional feedback;finite element model;linear quadratic regulator;distributed piezoelectric sensor layers;harmonic loading
train_689
Continuous-time linear systems: folklore and fact
We consider a family of continuous input-output maps representing linear time-invariant systems that take a set of signals into itself. It is shown that this family contains maps whose impulse response is the zero function, but which take certain inputs into nonzero outputs. It is shown also that this family contains members whose input-output properties are not described by their frequency domain response functions, and that the maps considered need not even commute
continuous input-output maps;commutation;frequency domain response;time-invariant systems;impulse response;linear systems;continuous-time systems;signal processing;zero function
train_69
Sensitivity calibration of ultrasonic detectors based on ADD diagrams
The paper considers basic problems related to utilization of ADD diagrams in calibrating the sensitivity of ultrasonic detectors. We suggest that a convenient tool for solving such problems is the software package ADD Universal, Version 2.1, designed for plotting individual ADD diagrams for normal and slanted transducers. The software is compatible with the contemporary operating systems Windows 95/98. Reference signals for calibration are generated in a sample with cylindrical holes
sensitivity calibration;calibration;software package;add diagrams;contemporary operational system windows-95(98;ultrasonic testing;normal transducers;ultrasonic detectors;slanted transducers;reference signals;cylindrical holes
train_690
Robust Kalman filter design for discrete time-delay systems
The problem of finite- and infinite-horizon robust Kalman filtering for uncertain discrete-time systems with state delay is addressed. The system under consideration is subject to time-varying norm-bounded parameter uncertainty in both the state and output matrices. We develop a new methodology for designing a linear filter such that the error variance of the filter is guaranteed to be within a certain upper bound for any allowed uncertainty and time delay. The solution is given in terms of two Riccati equations. Multiple time-delay systems are also investigated
robust kalman filter;norm-bounded parameter uncertainty;riccati equations;time-varying parameter uncertainty;state matrices;linear filter;discrete time-delay systems;output matrices;state delay;uncertain systems;robust state estimation
train_691
Robust output-feedback control for linear continuous uncertain state delayed systems with unknown time delay
The state delay time is often unknown and independent of other variables in most real physical systems. A new stability criterion for uncertain systems with a state time-varying delay is proposed. A robust observer-based control law based on this criterion is then constructed via the sequential quadratic programming method. We also develop a separation property so that the state feedback control law and the observer can be designed independently while maintaining closed-loop system stability. An example illustrates the applicability of the proposed design method
time delay;closed-loop system stability;state feedback control law;output-feedback control;linear continuous systems;observer-based control law;sequential quadratic programming;robust control;uncertain systems;state time-varying delay;state delayed systems
train_692
A partial converse to Hadamard's theorem on homeomorphisms
A theorem by Hadamard gives a two-part condition under which a map from one Banach space to another is a homeomorphism. The theorem, while often very useful, is incomplete in the sense that it does not explicitly specify the family of maps for which the condition is met. Here, under a typically weak additional assumption on the map, we show that Hadamard's condition is met if, and only if, the map is a homeomorphism with a Lipschitz continuous inverse. An application is given concerning the relation between the stability of a nonlinear system and the stability of related linear systems
banach space;lipschitz continuous inverse;nonlinear networks;partial converse;homeomorphisms;hadamard theorem;linear system stability;linearization;nonlinear feedback systems;nonlinear system stability
train_693
Lifting factorization of discrete W transform
A general method is proposed to factor the type-IV discrete W transform (DWT-IV) into lifting steps and additions. Then, based on the relationships among various types of DWTs, four types of DWTs are factored into lifting steps and additions. After approximating the lifting matrices, we get four types of new integer DWTs (IntDWT-I, IntDWT-II, IntDWT-III, and IntDWT-IV) which are free of floating-point multiplications. Integer-to-integer transforms (II-DWT), which approximate the DWT, are also proposed. Fast algorithms are given for the new transforms, and their computational complexities are analyzed
lifting factorization;lossless coding schemes;dwt;lifting matrices;mobile computing;filter bank;feature extraction;integer arithmetic;multiframe detection;mobile devices;data compression;computational complexity;discrete wavelet transform;integer transforms
train_694
A novel genetic algorithm for the design of a signed power-of-two coefficient quadrature mirror filter lattice filter bank
A novel genetic algorithm (GA) for the design of a canonical signed power-of-two (SPT) coefficient lattice structure quadrature mirror filter bank is presented. Genetic operations may render the SPT representation of a value noncanonical. A new encoding scheme is introduced to encode the SPT values. In this new scheme, the canonical property of the SPT values is preserved under genetic operations. Additionally, two new features that drastically improve the performance of our GA are introduced. (1) An additional level of natural selection is introduced to simulate the effect of natural selection when sperm cells compete to fertilize an ovule; this dramatically improves the offspring survival rate. A conventional GA is analogous to intracytoplasmic sperm injection and has an extremely low offspring survival rate, resulting in very slow convergence. (2) The probability of mutation for each codon of a chromosome is weighted by the reciprocal of its effect. Because of these new features, our new GA outperforms conventional GAs
chromosome codon;lattice filter bank;natural selection;quadrature mirror filter;offspring survival rate;genetic algorithm;qmf;perfect reconstruction;signed power-of-two coefficient lattice structure;signal processing;encoding scheme
train_695
Design of high-performance wavelets for image coding using a perceptual time domain criterion
This paper presents a new biorthogonal linear-phase wavelet design for image compression. Instead of calculating the prototype filters as spectral factors of a half-band filter, the design is based on the direct optimization of the low pass analysis filter using an objective function directly related to a perceptual criterion for image compression. This function is defined as the product of the theoretical coding gain and an index called the peak-to-peak ratio, which was shown to have high correlation with perceptual quality. A distinctive feature of the proposed technique is a procedure by which, given a "good" starting filter, "good" filters of longer lengths are generated. The results are excellent, showing a clear improvement in perceptual image quality. Also, we devised a criterion for constraining the coefficients of the filters in order to design wavelets with minimum ringing
high-performance wavelets;prototype filters;image compression;objective function;perceptual time domain criterion;peak-to-peak ratio;filter banks;biorthogonal linear-phase wavelet design;low pass filter;half-band filter;image coding;perceptual image quality;coding gain;analysis filter
train_696
Design and implementation of a new sliding-mode observer for speed-sensorless control of an induction machine
In this letter, a new sliding-mode sensorless control algorithm is proposed for the field-oriented induction machine drive. In the proposed algorithm, the terms containing flux, speed, and rotor time constant, which are common to both the current and flux equations in the current model of the induction machine, are estimated by a sliding function. Flux and speed estimation accuracy is guaranteed when the error between the actual current and the observed current converges to zero. Hence, the fourth-order system is reduced to two second-order systems, and the speed estimation becomes very simple and robust to parameter uncertainties. The new approach is verified by simulation and experimental results
induction motor drive;flux;speed-sensorless control;induction machine;sliding-mode observer;sensorless control;rotor time constant;current equations;flux equations;fourth-order system reduction;parameter uncertainties;speed estimation accuracy;sliding function;current model;speed
train_697
Schedulability analysis of real-time traffic in WorldFIP networks: an integrated approach
The WorldFIP protocol is one of the profiles that constitute the European fieldbus standard EN-50170. It is particularly well suited to distributed computer-controlled systems in which a set of process variables must be shared among network devices. To cope with the real-time requirements of such systems, the protocol provides communication services based on the exchange of periodic and aperiodic identified variables. The periodic exchanges have the highest priority and are executed at run time according to a cyclic schedule; their schedulability can therefore be determined at pre-run-time, when the schedule table is built. The situation is different for the aperiodic exchanges, since their priority is lower and they are handled according to a first-come-first-served policy. In this paper, a response-time-based schedulability analysis for the real-time traffic is presented. The analysis considers both types of traffic in an integrated way, according to their priorities. Furthermore, a fixed-priorities-based policy is also used to schedule the periodic traffic. The proposed analysis represents an improvement relative to previous work, and it can be evaluated online as part of a traffic online admission control. This feature is of particular importance when a planning scheduler is used, instead of the typical offline static scheduler, to allow online changes to the set of periodic process variables
communication services;distributed computer-controlled systems;aperiodic exchanges;real-time traffic schedulability analysis;periodic process variables;first-come-first-served policy;response time;en-50170 european fieldbus standard;traffic online admission control;scheduling algorithms;worldfip networks;real-time communication
train_698
Robust stability analysis for current-programmed regulators
Uncertainty models for the three basic switch-mode converters (buck, boost, and buck-boost) are given in this paper. The resulting models are represented by linear fractional transformations with structured dynamic uncertainties. Uncertainties are assumed for the load resistance R=R/sub O/(1+ delta /sub R/), inductance L=L/sub O/(1+ delta /sub L/), and capacitance C=C/sub O/(1+ delta /sub C/). The interest in these models is clearly motivated by the need for models of switch-mode DC-DC converters that are compatible with robust control analysis, which requires a model structure consisting of a nominal model and a norm-bounded modeling uncertainty. Robust stability analysis can therefore be carried out using standard mu-tools. At the end of the paper, an illustrative example is given which shows the simplicity of the procedure
buck-boost converters;capacitance;load resistance;switch-mode dc-dc converters;boost converters;buck converters;current-programmed regulators;uncertainty models;linear fractional transformations;robust stability analysis;nominal model;inductance;control analysis;structured dynamic uncertainties;norm-bounded modeling uncertainty
train_699
Novel line conditioner with voltage up/down capability
In this paper, a novel pulsewidth-modulated line conditioner with fast output voltage control is proposed. The line conditioner is made up of an AC chopper with reversible voltage control and a transformer for series voltage compensation. In the AC chopper, a proper switching operation is achieved without the commutation problem. To absorb the energy stored in the line stray inductance, a regenerative DC snubber can be utilized which has only one capacitor and requires neither discharging resistors nor a complicated regenerative circuit for the snubber energy. Therefore, the proposed AC chopper gives high efficiency and reliability. The output voltage of the line conditioner is controlled using a fast sensing technique of the output voltage. It is also shown via experimental results that the presented line conditioner gives good dynamic and steady-state performance and a high-quality output voltage
output voltage control;ac chopper;switching operation;commutation;regenerative dc snubber;pulsewidth-modulated line conditioner;dynamic performance;steady-state performance;reversible voltage control;series voltage compensation transformer;line stray inductance
train_7
Anti-spam suit attempts to hold carriers accountable
A lawsuit alleges that Sprint has violated Utah's new anti-spam act. The action could open the door to new regulations on telecom service providers
lawsuit;anti-spam act;regulations;telecom service providers;sprint
train_70
IT security issues: the need for end user oriented research
Considerable attention has been given to the technical and policy issues involved with IT security issues in recent years. The growth of e-commerce and the Internet, as well as widely publicized hacker attacks, have brought IT security into prominent focus and routine corporate attention. Yet, much more research is needed from the end user (EU) perspective. This position paper is a call for such research and outlines some possible directions of interest
internet;information technology research;hacker attacks;it security;end user oriented research;end user computing;e-commerce
train_700
Digital stochastic realization of complex analog controllers
Stochastic logic is based on digital processing of a random pulse stream, where the information is codified as the probability of a high level in a finite sequence. This binary pulse sequence can be digitally processed exploiting the similarity between Boolean algebra and statistical algebra. Given a random pulse sequence, any Boolean operation among individual pulses will correspond to an algebraic expression among the variables represented by their respective average pulse rates. Subsequently, this pulse stream can be digitally processed to perform analog operations. In this paper, we propose a stochastic approach to the digital implementation of complex controllers using programmable devices as an alternative to traditional digital signal processors. As an example, a practical realization of nonlinear dissipative controllers for a series resonant converter is presented
statistical algebra;random pulse stream;series resonant dc-to-dc converters;digital stochastic realization;boolean operation;random pulse sequence;pulse stream;stochastic logic;stochastic approach;finite sequence;programmable devices;complex analog controllers;series resonant converter;binary pulse sequence;nonlinear dissipative controllers;average pulse rates;boolean algebra;parallel resonant dc-to-dc converters
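The central idea of the abstract above, that a Boolean operation on random pulse streams corresponds to an algebraic operation on their average pulse rates, can be sketched in a few lines: ANDing two independent Bernoulli bitstreams multiplies the probabilities they encode. This is a minimal illustration of stochastic multiplication, not the controller realization from the paper; all function names are hypothetical.

```python
import random

def bernoulli_stream(p, n, rng):
    """Encode the value p in [0, 1] as a random pulse stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    """Multiply two probabilities by ANDing their pulse streams bitwise."""
    rng = random.Random(seed)
    a = bernoulli_stream(p, n, rng)
    b = bernoulli_stream(q, n, rng)
    anded = [x & y for x, y in zip(a, b)]
    # The average pulse rate of the ANDed stream approximates p * q
    return sum(anded) / n
```

For example, stochastic_multiply(0.7, 0.4) returns a value close to 0.28, with accuracy improving as the stream length grows.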
train_701
High dynamic control of a three-level voltage-source-converter drive for a main strip mill
A high dynamic control system for the Alspa VDM 7000 medium-voltage drive was implemented, which provides fast torque response times of a few milliseconds despite the typically low switching frequency of gate-turn-off thyristors which is necessary to achieve high efficiency. The drive system consists of a three-level voltage-source converter with active front end and a synchronous motor. The drive has most recently been applied to a main strip mill. It provides a maximum of 8.3-MW mechanical power with a rated motor voltage of 3 kV. Besides motor torque as the main control objective, the control system has to comply with a number of additional objectives and constraints like DC-link voltage regulation and balancing, current and torque harmonics, motor flux, and excitation
excitation;3 kv;motor voltage;torque harmonics;medium-voltage drive;dc-link voltage regulation;efficiency;8.3 mw;motor flux;synchronous motor;dc-link voltage balancing;control objective;gate-turn-off thyristors;mechanical power;current harmonics;switching frequency;strip mill;high dynamic control system;three-level voltage-source converter
train_702
A comparison of high-power converter topologies for the implementation of FACTS controllers
This paper compares four power converter topologies for the implementation of flexible AC transmission system (FACTS) controllers: three multilevel topologies (multipoint clamped (MPC), chain, and nested cell) and the well-established multipulse topology. In keeping with the need to implement very-high-power inverters, switching frequency is restricted to line frequency. The study addresses device count, DC filter ratings, restrictions on voltage control, active power transfer through the DC link, and balancing of DC-link voltages. Emphasis is placed on capacitor sizing because of its impact on the cost and size of the FACTS controller. A method for dimensioning the DC capacitor filter is presented. It is found that the chain converter is attractive for the implementation of a static compensator or a static synchronous series compensator. The MPC converter is attractive for the implementation of a unified power flow controller or an interline power flow controller, but a special arrangement is required to overcome the limitations on voltage control
static compensator;statcom;inverters;upfc;multipulse topology;high-power converter topologies comparison;multipoint clamped topology;unified power flow controller;multilevel topologies;facts controllers;dc filter ratings;switching frequency;device count;static synchronous series compensator
train_703
Direct self control with minimum torque ripple and high dynamics for a double three-level GTO inverter drive
A highly dynamic control scheme with very low torque ripple, direct self control (DSC) with torque hysteresis control, for very high-power medium-voltage induction motor drives fed by a double three-level inverter (D3LI) is presented. In this arrangement, two three-level inverters connected in parallel on their DC sides feed the open motor windings. The DSC, well known from two- and three-level inverters, is adapted to the D3LI and optimized for minimum torque ripple. An 18-corner trajectory is chosen for the stator flux of the induction machine, since it approaches the ideal circle much better than the hexagon known from DSC for two-level inverters, without any detriment to the torque ripple. The machine and inverter control are explained, and the proposed torque quality and dynamics are verified by measurements on a 180-kW laboratory drive
parallel connected inverters;highly dynamic control scheme;stator flux;180 kw;open motor windings;multilevel converters;variable-speed drives;double three-level inverter;medium-voltage induction motor drives;torque hysteresis control;torque quality;very low torque ripple;machine observer;direct self control
train_704
Multicell converters: active control and observation of flying-capacitor voltages
The multicell converters introduced more than ten years ago make it possible to distribute the voltage constraints among series-connected switches and to improve the output waveforms (increased number of levels and apparent frequency). The balance of the constraints requires an appropriate distribution of the flying voltages. This paper presents some solutions for the active control of the voltages across the flying capacitors in the presence of rapid variation of the input voltage. The latter part of this paper is dedicated to the observation of these voltages using an original modeling of the converter
output waveforms improvement;input voltage;active control;multicell converters;series-connected switches;kalman filtering;flying-capacitor voltages;power systems harmonics;nonlinear systems;multilevel systems;power electronics
train_705
Use of extra degrees of freedom in multilevel drives
Multilevel converters with series connection of semiconductors allow power electronics to reach medium voltages (1-10 kV) with relatively standard components. The increased number of semiconductors provides extra degrees of freedom, which can be used to improve different characteristics. This paper focuses on variable-speed drives, and it is shown that with the proposed multilevel direct torque control strategy (DiCoIF) the tradeoff between drive performance (harmonic distortion, torque dynamics, voltage step gradients, etc.) and the switching frequency of the semiconductors is improved. Then, a slightly modified strategy reducing common-mode voltage and bearing currents is presented
power electronics;state estimation;multilevel drives;medium voltages;fixed-frequency dynamic control;variable-speed drives;delay estimation;series connection;degrees of freedom;torque dynamics;industrial power systems;insulated gate bipolar transistors;multilevel direct torque control strategy;1 to 10 kv;semiconductors;bearing currents;voltage step gradients;common-mode voltage reduction;switching frequency;harmonic distortions
train_706
Enhancing the reliability of modular medium-voltage drives
A method to increase the reliability of modular medium-voltage induction motor drives is discussed, by providing means to bypass a failed module. The impact on reliability is shown. A control, which maximizes the output voltage available after bypass, is described, and experimental results are given
reliability enhancement;modular medium-voltage induction motor drives;available output voltage control;failed module bypass
train_707
Vector algebra proofs for geometry theorems
Vector mathematics can generate simple and powerful proofs of theorems in plane geometry. These proofs can also be used to generalize plane geometry theorems to higher dimensions. We present three vector proofs that show the power of this technique. 1. For any quadrilateral, the sum of the squares of the diagonals is less than or equal to the sum of the squares of the sides. 2. The area of a quadrilateral is half the product of the diagonals multiplied by the sine of an included angle. 3. One quarter of all triangles are acute (based upon a classification by the relative lengths of the sides, detailed in the paper). This paper presents a set of examples of vector mathematics applied to geometry problems. Some of the most beautiful and sophisticated proofs in mathematics involve using multiple representations of the same data. By leveraging the advantages of each representation one finds new and useful mathematical facts
multiple representations;plane geometry;vector mathematics;quadrilateral;proofs;vector algebra proofs
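The first quadrilateral claim above is a consequence of Euler's quadrilateral identity: the sum of the squared sides equals the sum of the squared diagonals plus four times the squared distance between the midpoints of the diagonals, so the diagonal term can never exceed the side term. The following numerical check illustrates the identity; it is not the paper's vector proof, and the function names are hypothetical.

```python
def sq(v):
    """Squared Euclidean length of a vector given as a tuple."""
    return sum(c * c for c in v)

def sub(a, b):
    """Componentwise difference a - b."""
    return tuple(x - y for x, y in zip(a, b))

def check_quadrilateral(A, B, C, D):
    """Return (sum of squared diagonals, sum of squared sides,
    and the Euler identity value sides - 4*|m|^2), where m joins
    the midpoints of the two diagonals."""
    sides = sq(sub(B, A)) + sq(sub(C, B)) + sq(sub(D, C)) + sq(sub(A, D))
    diags = sq(sub(C, A)) + sq(sub(D, B))
    m = tuple((a + c) / 2 - (b + d) / 2 for a, b, c, d in zip(A, B, C, D))
    return diags, sides, sides - 4 * sq(m)
```

For the quadrilateral (0,0), (2,0), (3,2), (0,3) the diagonals give 26, the sides give 28, and the identity value matches the diagonal sum, confirming the inequality; for a parallelogram m vanishes and equality holds (the parallelogram law).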
train_708
Sufficient conditions on nonemptiness and boundedness of the solution set of the P/sub 0/ function nonlinear complementarity problem
The P/sub 0/ function nonlinear complementarity problem (NCP) has attracted a lot of attention among researchers. Various conditions that ensure the NCP has a solution have been proposed. In this paper, by using the notion of an exceptional family of elements, we develop a sufficient condition which ensures that the solution set of the P/sub 0/ function NCP is nonempty and bounded. In particular, we prove that many existing conditions imply this sufficient condition, and hence that the solution set of the P/sub 0/ function NCP is nonempty and bounded. In addition, we also prove directly that a few existence conditions imply that the solution set of the P/sub 0/ function NCP is bounded
boundedness;nonemptiness;p/sub 0/ function nonlinear complementarity problem;solution set;sufficient conditions
train_709
Cooperative mutation based evolutionary programming for continuous function optimization
An evolutionary programming (EP) algorithm adopting a new mutation operator is presented. Unlike most previous EPs, in which each individual is mutated on its own, each individual in the proposed algorithm is mutated in cooperation with the other individuals. This not only enhances convergence speed but also gives a greater chance of escaping from local minima
convergence speed;cooperative mutation based evolutionary programming;continuous function optimization;local minima
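One way the cooperative idea above could look in code: each offspring is produced using information from two randomly chosen peers (here a scaled difference vector, in the spirit of differential evolution) plus a small Gaussian perturbation, instead of mutating each individual in isolation. This is a sketch under assumed design choices, not the authors' operator; all names and parameters are hypothetical.

```python
import random

def cooperative_ep(f, dim=2, pop_size=20, gens=100, seed=1):
    """Minimal EP sketch in which mutation uses information from
    other individuals rather than mutating each one in isolation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for x in pop:
            a, b = rng.sample(pop, 2)  # two random peers cooperate
            child = [xi + 0.5 * (ai - bi) + rng.gauss(0, 0.1)
                     for xi, ai, bi in zip(x, a, b)]
            offspring.append(child)
        # elitist (mu + mu) truncation selection over parents and offspring
        pop = sorted(pop + offspring, key=f)[:pop_size]
    return min(pop, key=f)

# Sphere function: a standard smooth test problem with minimum 0 at the origin
sphere = lambda x: sum(c * c for c in x)
```

Because peers pull offspring toward promising regions while the Gaussian term keeps exploring, the best sphere-function value drops well below its random-initialization level within a few dozen generations.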
train_71
A study of computer attitudes of non-computing students of technical colleges in Brunei Darussalam
The study surveyed 268 non-computing students among three technical colleges in Brunei Darussalam. The study validated an existing instrument to measure computer attitudes of non-computing students, and identified factors that contributed to the formation of their attitudes. The findings show that computer experience and educational qualification are associated with students' computer attitudes. In contrast, variables such as gender, age, ownership of a personal computer (PC), geographical location of institution, and prior computer training appeared to have no impact on computer attitudes
age;educational computing;noncomputing students;computer attitudes;educational qualification;survey;computer training;gender;computer experience;end user computing;personal computer ownership;technical colleges
train_710
Optimal allocation of runs in a simulation metamodel with several independent variables
Cheng and Kleijnen (1999) propose a very general regression metamodel for modelling the output of a queuing system. Its main limitations are that the regression function is based on a polynomial and that it can use only one independent variable. These limitations are removed here. We derive an explicit formula for the optimal way of assigning simulation runs to the different design points
general regression metamodel;independent variables;simulation metamodel;queuing system;optimal runs allocation;regression function
train_711
On bivariate dependence and the convex order
We investigate the interplay between variability (in the sense of the convex order) and dependence in a bivariate framework, extending some previous results in this area. We exploit the fact that discrete uniform distributions are dense in the space of probability measures in the topology of weak convergence to prove our central result. We also obtain a partial result in the general multivariate case. Our findings can be interpreted in terms of the impact of component variability on the mean life of correlated serial and parallel systems
component variability;probability measures;parallel systems;bivariate probability distributions;mean life;discrete uniform distributions;weak convergence;serial systems;convex order;bivariate dependence;topology
train_712
Waiting-time distribution of a discrete-time multiserver queue with correlated arrivals and deterministic service times: D-MAP/D/k system
We derive the waiting-time distribution of a discrete-time multiserver queue with correlated arrivals and deterministic (or constant) service times. We show that the procedure for obtaining the waiting-time distribution of a multiserver queue is reduced to that of a single-server queue. We present a complete solution to the waiting-time distribution of the D-MAP/D/k queue together with some computational results
discrete-time multiserver queue;d-map/d/k system;waiting-time distribution;deterministic service times;markovian arrival process;correlated arrivals
train_713
Efficient feasibility testing for dial-a-ride problems
Dial-a-ride systems involve dispatching a vehicle to satisfy demands from a set of customers who call a vehicle-operating agency requesting that an item be picked up from a specific location and delivered to a specific destination. Dial-a-ride problems differ from other routing and scheduling problems in that they typically involve service-related constraints. It is common to have maximum wait time constraints and maximum ride time constraints. In the presence of maximum wait time and maximum ride time restrictions, it is not clear how to efficiently determine, given a sequence of pickups and deliveries, whether a feasible schedule exists. We demonstrate that this can, in fact, be done in linear time
routing;service-related constraints;maximum ride time constraints;dispatching;vehicle-operating agency;dial-a-ride problems;feasibility testing;scheduling;maximum wait time constraints
train_714
Embeddings of planar graphs that minimize the number of long-face cycles
We consider the problem of finding embeddings of planar graphs that minimize the number of long-face cycles. We prove that for any k >or= 4, it is NP-complete to find an embedding that minimizes the number of face cycles of length at least k
np-complete problem;long-face cycles;planar graphs;embeddings;graph drawing
train_715
The quadratic 0-1 knapsack problem with series-parallel support
We consider various special cases of the quadratic 0-1 knapsack problem (QKP) for which the underlying graph structure is fairly simple. For the variant with edge series-parallel graphs, we give a dynamic programming algorithm with pseudo-polynomial time complexity, and a fully polynomial time approximation scheme. In strong contrast to this, the variant with vertex series-parallel graphs is shown to be strongly NP-complete
np-complete problem;pseudo-polynomial time complexity;quadratic 0-1 knapsack problem;fully polynomial time approximation scheme;series-parallel support;underlying graph structure;dynamic programming algorithm
train_716
Algorithmic results for ordered median problems
In a series of papers a new type of objective function in location theory, called ordered median function, has been introduced and analyzed. This objective function unifies and generalizes most common objective functions used in location theory. In this paper we identify finite dominating sets for these models and develop polynomial time algorithms together with a detailed complexity analysis
detailed complexity analysis;polynomial time algorithms;location theory;objective function;finite dominating sets;algorithmic results;ordered median problems;ordered median function
train_717
A network simplex algorithm with O(n) consecutive degenerate pivots
We suggest a pivot rule for the primal simplex algorithm for the minimum cost flow problem, known as the network simplex algorithm. Due to degeneracy, cycling may occur in the network simplex algorithm. The cycling can be prevented by maintaining strongly feasible bases proposed by Cunningham (1976); however, if we do not impose any restrictions on the entering variables, the algorithm can still perform an exponentially long sequence of degenerate pivots. This phenomenon is known as stalling. Researchers have suggested several pivot rules with the following bounds on the number of consecutive degenerate pivots: m, n/sup 2/, k(k + 1)/2, where n is the number of nodes in the network, m is the number of arcs in the network, and k is the number of degenerate arcs in the basis. (Observe that k <or= n.) In this paper, we describe an anti-stalling pivot rule that ensures that the network simplex algorithm performs at most k consecutive degenerate pivots. This rule uses a negative cost augmenting cycle to identify a sequence of entering variables
negative cost augmenting cycle;cycling;degenerate pivots;network simplex algorithm;degeneracy;minimum cost flow problem;anti-stalling pivot rule;stalling
train_718
New water management system begins operation at US projects
The US Army Corps of Engineers has developed a new automated information system to support its water control management mission. The new system provides a variety of decision support tools, enabling water control managers to acquire, transform, verify, store, display, analyse, and disseminate data and information efficiently and around the clock
data verification;data display;us projects;corps water management system;decision support tools;data acquisition;data analysis;us army corps of engineers;data storage;water control managers;water management system;water control management mission;automated information system;data visualization;decision support system;watershed modelling;data dissemination
train_719
War games: The truth [network security]
With al Qaeda on the tip of tongues around the world, find out how terror groups could target your network. What are the dangers and how do you fight them?
security;networks;employees;malicious attacks
train_72
A three-tier technology training strategy in a dynamic business environment
As end-user training becomes increasingly important in today's technology-intensive business environment, progressive companies remain alert to find ways to provide their end users with timely training and resources. This paper describes an innovative training strategy adopted by one midsize organization to provide its end users with adequate, flexible, and responsive training. The paper then compares the three-tier strategy with other models described in technology training literature. Managers who supervise technology end users in organizations comparable to the one in the study may find the three-tier strategy workable and may want to use it in their own training programs to facilitate training and improve end-user skills. Researchers and scholars may find that the idea of three-tier training generates new opportunities for research
organizations;dynamic business environment;companies;innovative training strategy;three-tier technology training strategy;technology-intensive business environment;end-user training;midsize organization
train_720
19in monitors [CRT survey]
Upgrade your monitor from as little as Pounds 135. With displays on test ranging up to Pounds 400, whether you're after the last word in quality or simply looking for a bargain, this Labs holds the answer. Looks at ADI MicroScan M900, CTX PR960F, Eizo FlexScan T766, Hansol 920D, Hansol 920P, Hitachi CM715ET, Hitachi CM721FET, iiyama Vision Master Pro 454, LG Flatron 915FT Plus, Mitsubishi Diamond Pro 920, NEC MultiSync FE950+, Philips 109S40, Samsung SyncMaster 959NF, Sony Multiscan CPD-G420, and ViewSonic G90f
crt survey;hitachi cm721fet;samsung syncmaster 959nf;ctx pr960f;hansol 920d;adi microscan m900;nec multisync fe950;eizo flexscan t766;mitsubishi diamond pro 920;philips 109s40;hitachi cm715et;19in monitors;hansol 920p;viewsonic g90f;iiyama vision master pro 454;lg flatron 915ft plus;sony multiscan cpd-g420;19 in
train_721
The results of experimental studies of the reflooding of fuel-rod assemblies from above and problems for future investigations
Problems in studying the reflooding of assemblies from above conducted at foreign and Russian experimental installations are considered. The efficiency of cooling and flow reversal under countercurrent flow of steam and water, as well as the scale effect, are analyzed. The tasks for future experiments that are necessary for the development of modern correlations for the loss-of-coolant accident (LOCA) computer codes are stated
flow reversal;countercurrent flow;fuel-rod assemblies reflooding;steam;water;loca computer codes;loss-of-coolant accident computer codes;cooling efficiency;russian experimental installations
train_722
Updating systems for monitoring and controlling power equipment on the basis of the firmware system SARGON
The economic difficulties experienced by the power industry of Russia have considerably retarded the speed of commissioning new capacities and reconstructing equipment in service. The increasing deterioration of the equipment at power stations makes the problem of its updating very acute. The main efforts of organizations working in the power industry are now focused on updating all kinds of equipment installed at power installations. A necessary condition for the efficient operation of power equipment is serious modernization of systems for monitoring and control (SMC) of technological processes. The specialists at ZAO NVT-Avtomatika have developed efficient technology for updating the SMC on the basis of the firmware system SARGON, which ensures the fast introduction of high-quality automation systems with a minimal payback time on the capital outlay. This paper discusses the updating of equipment using SARGON
control systems;russia;zao nvt-avtomatika;sargon firmware system;monitoring systems;power equipment control;power equipment monitoring;power industry
train_723
Simulation of physicochemical processes of erosion-corrosion of metals in two-phase flows
A computational model for the erosion-corrosion of the metals used in power equipment in two-phase flows (RAMEK-2) was developed. The results of calculations of the dependency of the intensity of the erosion-corrosion of structural steels as a function of the thermodynamic, hydrodynamic and water chemistry parameters of these flows in the working paths of thermal power stations and nuclear power stations are presented in a three-dimensional space. On the basis of mathematical models, application software was created for forecasting the erosion-corrosion resource and for optimizing the rules on diagnosis and protective maintenance of erosion-corrosion of the elements of the wet-steam path in power stations
fault diagnosis;nuclear power plants;thermal power plants;hydrodynamic parameters;computer simulation;protective maintenance;thermodynamic parameters;application software;three-dimensional space;wet-steam path;water-chemistry parameters;ramek-2;erosion-corrosion computational model;two-phase flows;structural steels
train_724
Banking on SMA funds [separately managed accounts]
From investment management to technology to back-office services, outsourcers are elbowing their way into the SMA business. Small banks are paying attention, and hoping to reap the rewards
outsourcers;technology;small banks;back-office services;investment management;separately managed accounts