In establishing structure-function relationships for membrane transport proteins, the interpretation of phenotypic changes can be problematic, owing to uncertainties in protein expression levels, sub-cellular localization, and protein-folding fidelity. A dual-label competitive transport assay called "Transport Specificity Ratio" (TSR) analysis has been developed that is simple to perform and circumvents the "expression problem," providing a reliable TSR phenotype (a constant) for comparison to other transporters. Using the Escherichia coli GABA (4-aminobutyrate) permease (GabP) as a model carrier, it is demonstrated that the TSR phenotype is largely independent of assay conditions, exhibiting: (i) indifference to the particular substrate concentrations used, (ii) indifference to extreme changes (40-fold) in transporter expression level, and, within broad limits, (iii) indifference to assay duration. The theoretical underpinnings of TSR analysis predict all of the above observations, supporting that TSR has (i) applicability in the analysis of membrane transport, and (ii) particular utility in the face of incomplete information on protein expression levels and initial reaction rate intervals. The TSR was used to identify gab permease (GabP) variants that exhibit relative changes in catalytic specificity (kcat/Km) for [14C]GABA (4-aminobutyrate) versus [3H]NA (nipecotic acid). The TSR phenotype is an easily measured constant that reflects innate molecular properties of the transition state, and provides a reliable index of the difference in catalytic specificity that a carrier exhibits toward a particular pair of substrates.
A change in the TSR phenotype, called a Δ(TSR), represents a specificity shift attributable to underlying changes in the intrinsic substrate binding energy (ΔGb) that translocation catalysts rely upon to decrease activation energy. TSR analysis is therefore a structure-function tool that enables parsimonious scanning for positions in the protein fold that couple to the transition state, creating stability and thereby serving as functional determinants of catalytic power. Without a productive conspiracy among catalysis-promoting residues in the protein fold, transport proteins would be non-catalytic, inasmuch as "... catalytic power will always appear as a result of increased transition state stabilization (lower free energy) ...". [14C]GABA and [3H]NA compete for uptake at the GabP active site. Therefore, as a practical matter, it is necessary to establish conditions under which an adequate signal may be obtained from both isotope channels. This can be accomplished empirically by mixing the labelled substrates in different proportions. Although the trading of [3H]NA for [14C]GABA is expected to substantially alter the fraction of active sites occupied by GABA versus NA, it is clear that the calculated TSR parameter is indifferent to the precise substrate concentration ratio. Moreover, at a fixed substrate ratio (7 parts NA to 3 parts GABA), the absolute substrate concentrations may also be varied over a wide range (here 17.5-fold) without affecting the calculated TSR parameter. Although these results indicate that there is great latitude in choosing substrate concentrations for TSR measurements, it is nevertheless pragmatic to select robust initial velocity conditions wherein the substrate concentration ratio is such that equal disintegration rates are seen in both isotope channels (broken line) when the control transporter is studied.
Variant transporters, exhibiting relative increases or decreases in specificity for the two substrates, will then be easily visualized as an inequality between the disintegration rates seen in the two isotope channels. GabP was produced via lac-controlled expression of the plasmid-borne GabP gene. Growth in the presence of increasing IPTG concentrations caused the uptake of [3H]NA and [14C]GABA to increase in proportion to the GabP expression level, without affecting the expression-independent phenotype, the TSR. Fundamental to TSR analysis is the notion that transport catalysts use substrate binding energy to lower the translocation energy barrier (activation energy). Under non-saturating conditions ([S] << Km), the familiar Michaelis-Menten relationship (Equation 3) reduces to the form of a second-order rate law (Equation 4), and the apparent rate constant may be evaluated as k = kcat/Km (units M-1 sec-1). Free carrier and substrate (C + S) are dominant under non-saturating, second-order conditions. The alternative Michaelis-Menten form turns out to be very useful for analysing the uptake of two labelled substrates that compete for transport at the same active site. Consider E. coli GabP exposed simultaneously to arbitrary concentrations of its transported substrates [14C]GABA and [3H]NA. These competing substrates, present simultaneously in the same reaction vessel, will necessarily be in equilibrium with precisely the same concentration of free carrier (but unknown concentrations of carrier-substrate complexes), allowing algebraic elimination of [C] (Equation 5) when a ratio is taken between two instances of Equation 4 (one for each substrate). Taking this ratio of (kcat/Km) values has an important consequence: since (kcat/Km) is formally a measure of catalytic specificity, the TSR is unaffected by mixing [14C]GABA and [3H]NA in different proportions, or in fixed proportion over a broad concentration range. Thus, arbitrary carrier saturation levels are not expected to compromise TSR measurements.
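The cancellation of the free-carrier term described above can be sketched in a few lines of code. This is an illustrative reconstruction (not the authors' software): under second-order conditions each velocity is v = (kcat/Km)[C][S], so dividing the two channel velocities, each normalized by its own substrate concentration, leaves only the ratio of specificity constants.

```python
# Illustrative sketch of the TSR calculation implied by Equations 4-6.
# All rate constants and concentrations below are hypothetical.

def tsr(v_gaba, v_na, s_gaba, s_na):
    """TSR = (kcat/Km)_GABA / (kcat/Km)_NA from dual-label velocities,
    each normalized by its own substrate concentration; the shared free
    carrier concentration [C] cancels in the ratio."""
    return (v_gaba / s_gaba) / (v_na / s_na)

k_gaba, k_na, carrier = 2.0, 0.5, 0.1   # hypothetical kcat/Km values and [C]
for s_gaba, s_na in [(3.0, 7.0), (30.0, 70.0), (5.0, 5.0)]:
    v_gaba = k_gaba * carrier * s_gaba  # Equation 4 for each substrate
    v_na = k_na * carrier * s_na
    print(tsr(v_gaba, v_na, s_gaba, s_na))  # ~4.0 (= k_gaba / k_na) each time
```

Whatever substrate mixture is chosen, the computed ratio recovers k_gaba / k_na, mirroring the concentration-indifference observed experimentally.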
Since uncharacterized mutant collections may be expected to contain transporter variants with highly divergent Km values, the saturation-independence of TSR analysis should be of value in high-throughput screening situations where little kinetic information may be available to guide the choice of assay conditions. However, to be of general value the results obtained with GabP must extrapolate to other transporters. Indeed, (kcat/Km) reflects the energy difference between the free reactants (C + S) and the transition state (CS‡). This fundamental reality can also be appreciated from the perspective that under non-saturating conditions ([S] << Km), there are no complexes to consider, and thus even complicated mechanisms reduce to the simple case (Equation 4) in which the reaction proceeds directly from the free reactants in solution to the transition state (C + S → Products). Thus, the simple second-order reaction scheme, C + S → Products, will probably never be "too simple" for the purpose of performing the TSR analysis, even though complicated transport kinetics will feature many complexes that TSR analysis seems to ignore. In truth, the missing complexes are merely irrelevant (not ignored) to the value of (kcat/Km), and this is why the deceptively simple TSR analysis should have broad applicability. It is worth mentioning that TSR analysis has "fool-proof" qualities that derive from its inherent insensitivity to several sources of error that can seriously compromise transport measurements that rely upon a single labelled substrate.
TSR calculations may be expected to "self-correct" any sources of error that have proportionally the same effect on the measurement of both isotopes, for such errors cannot affect the isotope ratio used to calculate the TSR parameter. That the TSR is an expression-independent constant should be of considerable practical significance for high-throughput screening operations wherein carrier expression levels could be both highly variable and impractical to document in real-time. In order to demonstrate the expression-independent nature of the TSR parameter, IPTG was used to simulate the wide range of expression levels (40-fold) that might be encountered in an uncharacterized collection of transporter variants. Whereas the single-isotope signals (Fig.) are seen to vary with expression level, the TSR remains constant. Immunoblots do not in any event determine Ctotal in the sense desired for meaningful kinetic characterization, which assumes (Equation 3) that Ctotal consists entirely of active molecules. The possibility of partial denaturation precludes assigning a molecular interpretation to shifts in either velocity or Vmax. In contrast, TSR analysis is unaffected by the presence of inactive molecules, and theoretically will always report reliably on the innate specificity properties of the active site per se, even if the measured signal emanates from a minor fraction of the carrier molecules visualized on an immunoblot. Since TSR phenotypes are expression-independent, structure-function information gleaned from a rapid first-pass screen will remain valid irrespective of results that might be obtained from a subsequent immunoblot analysis. Variants exhibit distinct [14C] and [3H] initial rate segments in their uptake time course. A Δ(TSR) reflects relative differences in transition state binding energies (Equation 9) that can be visually represented as a change in the relative position of (separation between) the hypothetical binding isotherms for either substrate.
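The self-correcting property is easy to demonstrate numerically: any factor that scales both isotope channels equally (expression level, partial denaturation, cell losses on the filter) cancels in the ratio. A minimal sketch, with made-up numbers:

```python
def tsr(v_gaba, v_na, s_gaba, s_na):
    # ratio of concentration-normalized velocities (as in Equation 6)
    return (v_gaba / s_gaba) / (v_na / s_na)

v_gaba, v_na = 6.0, 3.5          # hypothetical dual-label velocities
s_gaba, s_na = 3.0, 7.0          # the 3:7 GABA:NA mixture discussed above
reference = tsr(v_gaba, v_na, s_gaba, s_na)

# A 40-fold expression change, or losing half the cells on the filter,
# scales both channels identically and leaves the TSR untouched:
for factor in (0.5, 1.0, 40.0):
    assert tsr(v_gaba * factor, v_na * factor, s_gaba, s_na) == reference
print(reference)  # 4.0
```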
This point is important, and can be illustrated by examining the implications of the Δ(TSR) phenotypes illustrated in the figure. Calculated TSR values for the N302C and INS Ala 320 variants are, respectively, about 2.5 and 16. That these numbers are both greater than 1 indicates (Equation 9) that the hypothetical transition state binding isotherm for GABA would lie to the left of that for NA. A Δ(TSR) phenotype always means the same thing: there has been a change in the transition state stability for translocation of one or both substrates. It is to be noted that since absolute specificity changes can occur in the absence of a relative specificity shift, some catalytic residues may be detectable only by more complicated kinetic studies, or possibly through independent TSR experiments with structurally distinct substrate pairs. Apart from its delightful simplicity and self-correcting behaviour, the TSR (or rather the ability to observe Δ(TSR) phenotypes) is also attractive as a facile means of expanding interest in "coupled promoting motions" that are networked together in support of catalysis. Such networks have been described for dihydrofolate reductase [12-14]; residues in these networks (i) move in synchrony with the catalyzed reaction and (ii) collectively can make million-fold contributions to catalytic specificity (transition state stabilization) even though their locations are spatially distant from the active site in enzymes of known structure. Since the TSR parameter is a function of transition state stabilization (ΔGb), phenotypic changes in the TSR should report on structural perturbations that compromise as yet undiscovered networks that couple energetically to the transition state.
Inasmuch as such networks can affect (kcat/Km) by many orders of magnitude while "... occurring in synchrony ..." with the catalyzed reaction, the TSR should be a sensitive probe of their integrity. [3H]Nipecotic acid (40 Ci/mmol) was a custom synthesis from Moravek Biochemicals; [14C]GABA was from Dupont-New England Nuclear; Ultima Gold™ scintillation cocktail was from Packard BioScience; the anti-Penta-His monoclonal antibody was from QIAGEN; the goat anti-mouse alkaline phosphatase antibody was from Kirkegaard and Perry Laboratories; isopropyl-β-D-thiogalactopyranoside (IPTG) was from Anatrace; Immobilon-P™ transfer membranes (0.45 μm) were from Millipore; the chemiluminescence reagent for alkaline phosphatase detections, Western Lightning, was from Perkin-Elmer Life Sciences, Inc.; NA was from Research Biochemicals International; Miller's Luria Broth medium was from Gibco-BRL; agar and ampicillin were from Fisher Biotech; bicinchoninic acid protein determination reagents were from Pierce; cellulose acetate filters were from either Millipore or MicronSep (OSMONICS Inc.). One dual-label substrate stock contained [3H]NA (2.1 μCi/ml) and 15 μM [14C]GABA (0.3 μCi/ml); this solution was found to support equal rates of [14C] and [3H] label accumulation in the Cys-less GabP control strain. Another contained [3H]NA (1.2 μCi/ml) and 30 μM [14C]GABA (0.6 μCi/ml); this solution was found to support equal rates of [14C] and [3H] label accumulation by the wild type GabP. Transport reactions were initiated by mixing 20 μl of a 5-fold concentrated substrate stock solution with 80 μl of prewarmed cell suspension. Reactions were quenched with ice-cold Stop Solution (KPi Buffer containing 20 mM HgCl2), and then vacuum-filtered (0.45 micron pore). The reaction vessel was then rinsed with 1 ml of Wash Buffer (KPi Buffer containing 5 mM HgCl2) and this was applied to the same filter. Finally, 4 ml of the Wash Buffer was applied to the filter.
The filter was then dissolved in Ultima Gold™ scintillation cocktail and the [3H] and [14C] radioactivity analyzed with a Packard BioScience Tri-Carb 2900 TR liquid scintillation counter using stored Ultima Gold™ quench curves and automatic quench compensation. A 60 or 120 Hz metronome was used to time the reactions, which were rapidly quenched with 1 ml of ice-cold Stop Solution. The GabP-negative E. coli strain, SK45, was grown and prepared for transport experiments as indicated above, except that a series of different cell suspensions were prepared spanning a range from 20 to 125 percent of that described above. Dual-label transport experiments carried out with these different suspensions produced a linear standard curve for GabP-independent "background uptake" of [3H]NA and [14C]GABA as a function of protein content. The protein content of GabP-positive test strains could then be used to obtain the appropriate background subtraction by extrapolation from the standard curve. Test strain protein contents were always similar (within 10 percent) because when cell pellets were resuspended steps were taken to assure approximately equal turbidity levels. Replicate (n = 3), background-corrected, dual-substrate uptake velocities (moles/time) were inferred from measured disintegration rates for filter-bound [3H]NA and [14C]GABA. The background-corrected velocity replicates were used to calculate replicate TSR values (Equation 6) from which the mean TSR and standard errors (S.E.M.) shown in the figures were obtained. E. coli cells were probe-sonicated to produce plasma membrane vesicles, which were then separated from soluble components and unbroken cells by differential centrifugation as previously described. Plasma membrane proteins were analyzed by immunoblotting.

Laser Interstitial ThermoTherapy (LITT) is a well established surgical method. The use of LITT is so far limited to homogeneous tissues, e.g. the liver.
One of the reasons is the limited capability of existing treatment planning models to accurately calculate the damage zone. Treatment planning in inhomogeneous tissues, especially in regions near main vessels, still poses a challenge. In order to extend the application of LITT to a wider range of anatomical regions new simulation methods are needed. The model described in this article enables efficient simulation for predicting damaged tissue as a basis for a future laser-surgical planning system. Previously we described the dependency of the model on geometry. With the present paper, including two video files, we focus on the methodological, physical and mathematical background of the model. In contrast to previous simulation attempts, our model is based on the finite element method (FEM). We propose the use of LITT in sensitive areas such as the neck region to treat lymph node tumours with dimensions of 0.5 cm to 2 cm in diameter near the carotid artery. Our model is based on calculations describing the light distribution using the diffusion approximation of the transport theory; the temperature rise using the bioheat equation, including the effect of microperfusion in tissue, to determine the extent of thermal damage; and the dependency of thermal and optical properties on the temperature and the injury. Injury is estimated using a damage integral. To check our model we performed a first in vitro experiment on porcine muscle tissue. We performed the derivation of the geometry from 3D ultrasound data and show for this proposed geometry the energy distribution, the heat elevation, and the damage zone. Furthermore, we perform a comparison with the in-vitro experiment. The calculation shows an error of 5% in the x-axis parallel to the blood vessel. The FEM technique proposed can overcome limitations of other methods and enables an efficient simulation for predicting the damage zone induced using LITT.
Our calculations show clearly that major vessels would not be damaged. The area/volume of the damaged zone calculated from the simulation fits well with the in-vitro experiment, and the deviation is small. One of the main reasons for the deviation is the lack of accurate values for the tissue optical properties. This needs to be validated in further experiments.

Laser radiation is now used routinely in surgery to incise, coagulate, or vaporize tissues. The laser light power is converted into heat in the target volume with ensuing coagulative necrosis, secondary degeneration and atrophy, and tumour shrinkage with minimal damage to surrounding structures. Clinical studies have demonstrated that LITT is practical for the palliation of hepatic and nasopharyngeal tumours [9]. Modelling the laser-tissue interaction is beneficial for the analysis and optimisation of the parameters governing planned laser surgical procedures. Nevertheless, we still lack an adequate model that grants accuracy. Most of the models suggested depend greatly on simplifications of the real problem, either in the geometry they offer or in the system of equations they use. Some models, which use the bioheat equation, neglect the role of the changes in the tissue properties during the temperature elevation process, which diminishes their accuracy. This paper describes in detail the bases for a modelling method to simulate the effect of LITT for the treatment of various indications near large vessels, such as the carotid artery in the neck region. We thereby propose the use of LITT, frequently applied in the treatment of liver tumours, in more sensitive anatomical regions. The actual response of tissue to laser irradiation is a time-dependent phenomenon. Initially, there are thermal and possibly photochemical changes of the tissue at the molecular level. Next are changes in tissue perfusion caused by thermally induced vascular relaxation and/or vessel damage.
Heat deposited at the application site is transferred to adjacent structures. This may be desirable for coagulation purposes, or it may cause unexpected thermal damage to otherwise viable tissues adjacent to the irradiation site. The rate of heat transfer depends on the composition and organization of the tissues involved. Blood perfusion during and after irradiation has significant effects on the size of the damage zone. We discuss in this paper our mathematical approach, its considerations and restrictions. In the main part we present the mathematical and physical backgrounds used to achieve the model. Then we present and discuss the results of our simulation in comparison with the results of our in-vitro experiment. Our model of LITT considers both optical and thermal effects. It is based on calculations describing the light distribution using the diffusion approximation of the transport theory; the temperature rise using the bioheat equation, including the effect of microperfusion in tissue, to determine the extent of thermal damage; and the dependence of thermal and optical properties on the temperature and the injury. Injury is estimated using a damage integral, which depends on the temperature elevation history. The order and flow of the modelling steps are described in the following sections in detail. The head and neck area consists of complex anatomical structures in close proximity. In sonographic 3D volume datasets of the neck area the sternocleidomastoideus muscle and the neck vessels serve as leading structures. The carotid artery can easily be segmented from 3D sonographical, MRI or CT volume datasets; for our model the geometry was derived from a segmented 3D ultrasound dataset (Fig.). Commercially available laser applicator fibres for thermotherapy frequently have a water jacket to cool the surface.
The applicator is assumed to be a cylinder, and the cooling effect is implemented as a boundary condition at the diffuser surface. The tissue surrounding the vessel is treated as homogeneous muscle tissue. According to the described geometry, a mesh is generated to perform a finite element method calculation (Fig.). In most tissues, both absorption and scattering are present simultaneously. A mathematical description of the absorption and scattering characteristics of light can be performed analytically or by using transport theory. Transport theory has been extensively used when dealing with laser-tissue interactions, and experimental results have confirmed its validity in most cases. At the near-infrared wavelength used for LITT, scattering dominates absorption in tissue (by contrast, the penetration depth of the 10600 nm CO2 laser is less than 350 μm); this leads us to the possibility of applying the light diffusion approximation to the transport theory. In the steady state the diffusion equation may be written as ∇·(D∇φ) - μa·φ + Q = 0 (Equation 1), with φ the fluence rate [W cm-2], D = 1/(3(μa + μ's)) the diffusion coefficient [cm], and Q the source term [W cm-3]. μa is the absorption coefficient and μ's the reduced scattering coefficient in tissue. The roman numeral (I) indicates the position in Fig. μ's is described by μ's = μs(1-g), with g being the anisotropy factor incorporating the effects of directionally dependent scattering. For biological tissues, μa for visible and for near-infrared radiation ranges between 0.001 mm-1 < μa < 10 mm-1, while the scattering coefficient μs is of the order of 1 mm-1 < μs < 100 mm-1. The temperature rise is described by the bioheat equation, ρc ∂T/∂t = ∇·(k∇T) + wb·cb·(Tb - T) + Q, where wb is the tissue average volumetric blood perfusion rate [kg s-1 cm-3] and cb the specific heat of blood [J kg-1 C-1]. The coefficients ρ, k and wb are functions of temperature T. As a basis for the optical and thermal parameters for the simulations, we used values published by Mueller et al. In order to make the model adaptable to individual shapes of segmented vessels, we considered the geometry of a large vessel as a volume in which an incompressible fluid (blood) flows.
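The interplay of the conduction, perfusion and source terms in the bioheat equation can be illustrated with a toy one-dimensional explicit finite-difference scheme. This is only a sketch with illustrative parameter values, not the 3D FEM model of the paper; it discretizes ρc ∂T/∂t = k ∂²T/∂x² - wb·cb·(T - Tb) + Q. Note that the text's perfusion rate of 1.4e-6 kg s-1 cm-3 equals 1.4 kg s-1 m-3 in SI units.

```python
# Toy 1-D explicit finite-difference sketch of the Pennes bioheat equation.
rho, c = 1050.0, 3600.0          # tissue density [kg/m3], heat capacity [J/kg/K]
k = 0.5                          # thermal conductivity [W/m/K]
wb, cb, Tb = 1.4, 3600.0, 37.0   # perfusion [kg/s/m3], blood heat cap., arterial T
dx, dt = 1e-3, 0.05              # grid spacing [m], time step [s] (stable here)

T = [37.0] * 21                  # initial tissue temperature [degC]
Q = [0.0] * 21
Q[10] = 5e5                      # absorbed laser power density [W/m3] at centre

for _ in range(2000):            # 100 s of simulated heating
    Tn = T[:]
    for i in range(1, 20):       # boundary nodes held at 37 degC
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        Tn[i] = T[i] + dt / (rho * c) * (k * lap - wb * cb * (T[i] - Tb) + Q[i])
    T = Tn
print(round(T[10], 1))           # peak temperature at the heated node
```

The perfusion term acts as a distributed heat sink pulling the tissue back toward the arterial temperature, which is why switching it off above the coagulation threshold (as described later in the text) enlarges the predicted damage zone.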
The direction of the blood flow and the initial speed profile are implemented as boundary conditions. The incompressible Navier-Stokes equation for the blood (a Newtonian fluid) reads ρ ∂u/∂t + ρ(u·∇)u = -∇p + η∇²u + F (eq. 5), where η is the dynamic viscosity [kg m-1 s-1], ρ the density [kg m-3], u the velocity field, p the pressure [N/m2], and F a volume force field such as gravity. For a steady, fully developed laminar flow the velocity depends only on the radial position (u = u(r)) and eq. 5 is reduced to the simpler eq. 6. Implementing the Navier-Stokes equation in the model allows us to present a time-periodic change in the blood flow rate, i.e., to simulate the beat cycle effect in the vessel. The main effect here on the result of the simulation lies in the accuracy of the estimated heat elevation in the tissue: a continuous blood flow has a different profile than the cycled flow, which yields a different final cooling effect. For vessels away from the heart, the pumping cycle does not clearly appear; the flow tends to be normal laminar flow. The heat convection between tissue and a large vessel occurs as a direct energy transfer rather than perfusion. The vessel is a heat sink in the treated volume. Therefore, the perfusion term in the bioheat equation has to be modified to consider heat conduction and blood flow: in a large vessel a new, convective term accounts for the heat carried away by the flowing blood. The thermal damage in cells and tissue is described mathematically by a first-order thermal-chemical rate equation, in which the temperature history determines damage. Damage is considered to be a unimolecular process, whereby native molecules transform into a denatured/coagulated state through an activated state leading to cell death [18,19]. The damage integral reads Ω(τ) = ln(C(0)/C(τ)) = A ∫ exp(-Ea/(R·T)) dt, where A [s-1] is the frequency factor, Ea [J/mole] the activation energy, R [J mole-1 K-1] the universal gas constant, and T [K] the temperature. C(0) and C(τ) are the concentrations of the undamaged molecules at the beginning and at time τ, respectively. The threshold value of Ω taken to indicate damage is a function of the observer's definition of damage.
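The damage integral can be evaluated numerically from a sampled temperature history. The sketch below uses the Ea and A values quoted later in this text (Ea = 670000 J/mol, A = 9.4e104 s-1); the temperature histories themselves are invented for illustration.

```python
# Sketch of the Arrhenius damage integral:
# Omega = A * integral_0^tau exp(-Ea / (R * T(t))) dt
import math

A, Ea, R = 9.4e104, 670000.0, 8.314   # [1/s], [J/mol], [J/(mol K)]

def damage(temps_degC, dt):
    """Accumulate Omega over a temperature history sampled every dt seconds."""
    return A * sum(math.exp(-Ea / (R * (T + 273.15))) * dt for T in temps_degC)

# 300 s at body temperature vs. 300 s at 60 degC (hypothetical histories):
print(damage([37.0] * 300, 1.0),      # negligible, far below 1
      damage([60.0] * 300, 1.0))      # far above 1: coagulated
```

The extreme steepness of the exponential explains why tissue held at 37 °C accumulates essentially no damage over the 300 s application time, while the same exposure at 60 °C drives Ω well past unity.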
A limit of Ω = 1, corresponding to a reduction of the undamaged molecule concentration to 1/e, is commonly taken to indicate irreversible damage. Heat capacity is assumed to be constant over a wide temperature range. The temperature dependence of thermal conductivity and density is taken into consideration by linear approximations (eq. 9 and eq. 10). Coagulation increases scattering and thus leads to a reduction in penetration depth. The actual property set is calculated from the actual damage value as well as the optical properties in the native and coagulated tissue states. In the first loop step Ω is zero, and it starts to increase according to the rise in temperature; i.e., the different optical properties have as their starting point native tissue and as end point coagulated tissue. The actual value lies between both limits as determined according to Ω.
• In the second step, the temperature distribution in the tissue caused by laser energy deposition is estimated by solving the two bioheat equations for tissue and large vessel. The source term in both equations is defined by the absorbed energy at each mesh point (Fig.).
• A sub-step here is the estimation of the blood speed field from the Navier-Stokes equation (either eq. 5 or eq. 6). In our solved model, because the suggested treatment takes place in the neck near the carotid vessel, we used eq. 6 for obtaining the speed field, which is valid for laminar flow.
• After estimating the heat distribution and the damage value, we perform a backward step to calculate the new values of the properties according to eq. 9 through eq. 12, which are updated in the equation set for the next loop iteration.
For our calculations we used FEMLAB's standard mesh generator with its default settings for modelling. The time-dependent solver generates a sequence of time points tn and a corresponding sequence of values for the dependent variables, so that each φn, Tn, ... approximates the solution at tn. Modern solvers adapt the step size hn = tn+1 - tn so that the estimated error in the numerical solution is controlled by a specified tolerance; the free variable (t in this case) is advanced adaptively.
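The backward step that updates the tissue properties can be sketched as a damage-weighted interpolation. Assuming, as is standard for a first-order rate process, that the surviving native fraction is C(τ)/C(0) = exp(-Ω), the coagulated fraction f = 1 - exp(-Ω) blends the native and coagulated property values; the coefficient values below are illustrative, not the paper's.

```python
# Sketch of the property-update (backward) step of the simulation loop.
import math

def blend(omega, native, coagulated):
    """Interpolate a tissue property between its native and coagulated
    values using the coagulated fraction f = 1 - exp(-Omega)."""
    f = 1.0 - math.exp(-omega)          # coagulated fraction in [0, 1)
    return (1.0 - f) * native + f * coagulated

mu_s_native, mu_s_coag = 10.0, 100.0    # reduced scattering, 1/cm (illustrative)
for omega in (0.0, 0.5, 5.0):
    print(round(blend(omega, mu_s_native, mu_s_coag), 2))  # 10.0, 45.41, 99.39
```

At Ω = 0 the native value is returned unchanged, and as Ω grows the property converges on the coagulated value, exactly the behaviour described for the loop above.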
Light is considered to be emitted from an interstitial fibre with a fibre-diameter of 1 mm; it was modelled as an isotropically radiating cylindrical source (Fig.). In the real treatment a cooling process using a special cooling catheter is performed to keep the temperature at the surface of the applicator low, preventing damage at its surface. A special boundary condition at the applicator surface should be applied in order to simulate this cooling effect. In the model this is realized by setting the outer surface temperature of the applicator to a constant value. At the modelled volume surfaces the insulation boundary conditions, optical and thermal, are used: n·∇φ = 0 and n·∇T = 0, where n is the outward unit vector normal to the surface. This means the gradients of light fluence rate and of temperature vanish at the surface. This condition is more suitable for the light fluence rate, as only a small amount of radiation reaches the surface; in general neither the temperature nor the light fluence rate will be constant there. The condition is nevertheless set this way because the numerical solver demands defined, fixed boundary conditions, which sometimes do not agree with the real situation. According to the NCRP data, the perfusion rate wb over the entire tissue is set to 1.4·10-6 kg of blood s-1 cm-3 for T < 60°C and to 0 when T ≥ 60°C, which is to be considered a normal result of the cessation of blood perfusion upon temperature elevation in the tissue. The dynamic viscosity of blood, η, is set to 3.5·10-3 kg·m-1·s-1. To evaluate the damage, Ω, the activation energy Ea is set to 670000 J/mol and the frequency factor A to 9.4·10104 s-1 [35]. The simulation takes around 2 hours on a Sun Blade 2000 with Solaris 9 OS, 6 GB RAM and an UltraSPARC IIIi processor. The tube inner diameter is 5 mm and the outer diameter is 7 mm. The blood and the laser cooling liquid had the room temperature of 21.4°C, while the sample itself was at 17.6°C.
We performed a single in-vitro experiment to check our model; the setup is shown in Fig. A laser power of 30 W and an average blood flow rate of 40 ml·s-1 were used. We measured the exact distance between the laser applicator and the tube edge, 3 mm, at the end of the experiment after performing the cut in the sample. We fixed our application time to 300 s. In the simulation model we omit the perfusion term, as there is no perfusion to be considered in-vitro. We simulated the tube (blood vessel) with a diameter of 6 mm. All other experimental conditions are implemented in the model as they are in the experiment. Because of the lack of data describing the properties of porcine tissues in all literature available to us, we used the data presented in Table. The highest temperature and widest damage are reached in front of the centre of the applicator. The plane perpendicular to the applicator at its centre is the most critical in the volume, as all effects participate together: applicator, vessel, and blood flow; we therefore compare the results of the experiment and the simulation in this plane. Hence, we made a cut in the probe at the level equivalent to this plane. Coagulation of tissue is immediately apparent and always indicates a lethal thermal effect. However, a comparison in the z-direction would need an up-down cut (y-z plane) through the applicator position perpendicular to the tube/vessel, which was not possible after the cut in the x-y direction. The dimensions of the damage zone, which may be considered the target goal of the simulation, can be calculated directly by producing gridded axes in all the 3D and 2D results, as well as with routines written especially for this aim. Compared with the lesion measured in the experiment (Fig.), our model shows a damaged zone of 2.1 × 1.45 cm2. We obtain calculation errors of 5% in the x-axis parallel to the blood vessel, and of 20% in the y-direction perpendicular to the vessel.
This deviation happens mainly due to inaccurate optical property values. To date most simulation models of LITT have used the Monte-Carlo method (MC) to calculate the light distribution, then combined its results with the Finite Difference Method (FDM) to calculate the heat distribution. Because of its formulation, this combination fits very well for a radially symmetric problem. A weakness arises, however, when dealing with asymmetric volumes in real human anatomy. Arbitrarily curved surfaces separate the different tissues. Consequently, calculations with the FDM become so complex that errors start to appear in the results, stemming from the dependency of FDM on dividing the volume into voxels. One way to overcome this is to increase the number of voxels. Indeed, this leads to less error at the tissue-separating surfaces, although it increases the resources and calculation steps, making the procedure inconvenient. In principle, combining MC and FEM (instead of FDM) is theoretically possible, and seems to be promising as it overcomes the latter problems, but to our knowledge it has not been implemented yet. From another perspective, the MC solution converges to the exact solution of the transport equation only when the number of traced photons increases infinitely, which makes high accuracy computationally expensive. Because we are dealing with an asymmetric geometry, we chose the FEM. It allows us to define and refine the mesh in the volume of interest in order to obtain more precise results. Furthermore, using a FEM mesh we are able to adapt the mesh individually for each patient's dataset. Nevertheless, it was not necessary to combine methods; FEM is used for all equations. The model we propose depends on the following considerations:
• The coupling of a set of time-dependent equations, which simulate the whole process of the LITT treatment (Fig.).
• We consider the functional dependence of the various tissue properties at the various spatial and temporal points, according either to the tissue type, or temperature, or the damage value, or even a combination thereof.
• We take into account the irreversible changes in the tissue stemming from the treatment, as they directly affect the solution of the set of equations.
Our model remains a mathematical model, meaning errors could appear from the considerations and simplifications made to realize it. Generally, such errors appear for the following reasons:
• The inaccuracy of the optical, thermal, and damage properties, which are a main point in the model's set of equations. In fact, these properties play a key role in the accuracy of the model's results. Many methods have been presented to calculate these properties [19,21], but accurate values for individual tissues remain scarce.
• An error appears because of machine performance limitations: the available memory limits the number of mesh nodes and the degrees of freedom (DOF) used to build the model. This causes a deviation from an otherwise accurate result.
• Absolute tolerance: all numerical methods have an allowed error (absolute tolerance) that reflects the criterion of convergence. Normally, different solvers use different tolerances. In our model we used FEMLAB's default tolerance value of 0.01, which leads to a final error of 1%, considered a reasonable value for modelling.
One way to follow these errors and deviations from a real treatment result is to estimate them and to eliminate their effects from the final results of the model. This can be realized and implemented in the model by adding an error-correcting factor of the first degree (or even higher) in the set of equations, correcting the result of each equation at each time step.
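Such a first-degree correction could be sketched as a linear calibration fitted against reference measurements. The sketch below is purely illustrative (the function name and the numbers are hypothetical, not from the paper): a scale-and-offset pair is fitted so that corrected = a·model + b best matches the measured values.

```python
# Hypothetical first-degree error correction: fit a linear map from model
# outputs to measured values, then apply it at every time step.
def fit_correction(model_vals, measured_vals):
    n = len(model_vals)
    mx = sum(model_vals) / n
    my = sum(measured_vals) / n
    a = sum((x - mx) * (y - my) for x, y in zip(model_vals, measured_vals)) / \
        sum((x - mx) ** 2 for x in model_vals)
    return a, my - a * mx               # corrected = a * model + b

# Made-up calibration data in which the model is uniformly 5% too low:
a, b = fit_correction([2.0, 2.1, 2.2], [2.1, 2.205, 2.31])
print(round(a, 3), round(b, 3))         # slope ~1.05 recovers the scale error
```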
These corrective factors should be measured practically by comparing the results of the model with the results from real experiments on test tissues or probes.The absorption coefficient \u03bca is similar for the native and coagulated states of different biological tissues. Bearing that in mind, and knowing that the scattering coefficient \u03bcs of biological tissue normally becomes 10 times greater than its starting (native-state) value, we can judge that as soon as the damage zone appears and the tissue moves from the native to the coagulated state according to eq. 11, eq. 12, and eq. 13, the deviation in the calculations will increase as well.Our experiment shows a deviation of 5% in the x-direction and 20% in the y-direction. As the main reason for this deviation we propose inaccurate values for the optical tissue properties. Thus, accurate values of the different tissue properties, and especially the optical properties, are key points in obtaining realistic results from the simulation. One promising technique for the determination of optical properties was presented by Dam et al. Finally, besides the error from the optical properties, which affects all directions, both cutting the tissue with a scalpel and the opening induced a tissue movement. This movement is a reason for deviation, especially in the y-direction, as we performed the cutting in this direction.For several years now LITT has been a well-known and approved therapy system for tumour ablation in the liver and some other anatomical regions. Minimally invasive LITT procedures use a Nd:YAG 1064 nm laser. Therapy planning, however, remains unsolved and is still a challenging issue. Today's simulations are based on symmetric geometries. Without exact therapy planning systems, the usage of LITT is limited to homogeneous tissues or the respective surgeon's experience.The finite element technique proposed in this paper can overcome both limitations. 
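The native-to-coagulated change in the scattering coefficient described above (roughly a tenfold increase) can be caricatured with a damage-weighted interpolation. This is a hedged sketch: the linear mixing rule and the function name are our assumptions for illustration, not the transition defined by the paper's eq. 11, eq. 12, and eq. 13:

```python
def mu_s(mu_s_native, damage_fraction):
    """Scattering coefficient during the native-to-coagulated transition.

    Crude stand-in: mu_s rises toward ~10x its native value as the tissue
    coagulates, linearly mixed by the local damage fraction. The linear
    mixing rule is an assumption; the paper's eqs. 11-13 define the actual
    transition and are not reproduced here.
    """
    mu_s_coagulated = 10.0 * mu_s_native
    d = min(max(damage_fraction, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - d) * mu_s_native + d * mu_s_coagulated

# native tissue keeps its starting value; fully coagulated tissue is 10x
assert mu_s(1.0, 0.0) == 1.0
assert mu_s(1.0, 1.0) == 10.0
```

Any solver that updates optical properties this way will see the light distribution, and hence the deviation, change as soon as a damage zone appears.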
We propose a model with which the LITT method can, in the future, be validated in other anatomical regions.The model enables efficient simulation for predicting the damage zone induced by the diffuser of the LITT applicator. The simulation is performed for tissue ablation near vessels, though obviously the FEM is not limited to this. As an example, we implemented the model for tissue ablation near the carotid artery in the neck region using an approximation of the artery shape. We describe the bases necessary to calculate the effects of the temperature rise caused by the absorption of light energy in the tissue, using the bioheat equation and including the cooling effects of vessel blood flow and micro-perfusion in tissue, in order to determine the extent of thermal damage. The shape of the carotid artery is derived from a real segmented geometry based on, but not limited to, 3D ultrasound.Experimentally, we performed laser irradiation in a porcine muscle tissue sample. The results of our model diverge by 5% to 20% from the lesion obtained in the experimental work. From the authors' point of view, two major reasons can be identified. The lack of accurate data describing the thermal and optical properties definitely leads to deviations. Furthermore, cutting the probe with a scalpel induces a certain tissue shift, especially in the y-direction.Nevertheless, more experiments under different conditions are necessary to carry out a statistical study and find the exact origin of the deviation, and, if necessary, to define error-correction factors and add them to the equation set. But that does not set aside the need for accurate values of the tissue properties.On the other hand, our model is still practical: it presents a step toward using segmented data as the basis for much more detailed surgical therapy planning. Combining LITT with an adequate planning system could increase both the anatomical application range and the quality of therapy procedures.Animated gif file, the Geometry. 
The animated gif shows the 3D ultrasound volume together with the carotid artery segmented using the 3D Slicer software. Animated gif file, the heat distribution and the damage zone in the volume. The video stream demonstrates the temperature rise inside the tissue and shows where, how, and when the damage appears. The damage zone is shown in grey colour. The gif file can be played using an internet browser."}
{"text": "Investigation of bioheat transfer problems requires the evaluation of temporal and spatial distributions of temperature. This class of problems has traditionally been addressed using the Pennes bioheat equation. Transport of heat by conduction, and by temperature-dependent, spatially heterogeneous blood perfusion, is modeled here using a transport lattice approach. We represent heat transport processes by using a lattice that represents the Pennes bioheat equation in perfused tissues, and diffusion in nonperfused regions. The three-layer skin model has a nonperfused viable epidermis, and deeper regions of dermis and subcutaneous tissue with perfusion that is constant or temperature-dependent. Two cases are considered: (1) surface contact heating and (2) spatially distributed heating. The model is relevant to the prediction of the transient and steady state temperature rise for different methods of power deposition within the skin. Accumulated thermal damage is estimated by using an Arrhenius type rate equation at locations where viable tissue temperature exceeds 42\u00b0C. Prediction of spatial temperature distributions is also illustrated with a two-dimensional model of skin created from a histological image. The transport lattice approach was validated by comparison with an analytical solution for a slab with homogeneous thermal properties and a spatially distributed uniform sink, held at constant temperatures at the ends. 
For typical transcutaneous blood gas sensing conditions the estimated damage is small, even with prolonged skin contact with a 45\u00b0C surface. Spatial heterogeneity in skin thermal properties leads to a non-uniform temperature distribution during a 10 GHz electromagnetic field exposure. A realistic two-dimensional model of the skin shows that tissue heterogeneity does not lead to a significant local temperature increase when heated by a hot wire tip.The heat transport system model of the skin was solved by exploiting the mathematical analogy between local thermal models and local electrical (charge transport) models, thereby allowing robust circuit simulation software to obtain solutions to Kirchhoff's laws for the system model. Transport lattices allow systematic introduction of realistic geometry and spatially heterogeneous heat transport mechanisms. Local representations for both simple, passive functions and more complex local models can be easily and intuitively included in the system model of a tissue. Heat transfer in biological systems is relevant in many diagnostic and therapeutic applications that involve changes in temperature. For example, in hyperthermia the tissue temperature is elevated to 42\u201343\u00b0C using microwave [2], ultrasound, or other sources. Contact heating is used in transcutaneous blood gas monitoring, in which oxygen is transported out of the vasodilated capillary bed to a surface-mounted oxygen sensor. Heating is used to achieve vasodilation. In 1851 it was already known that \u201cskin breathing\u201d occurs, in which oxygen diffuses out of ambient air into the body, supplying on the order of 1% of the body's oxygen uptake. The initial clinical demonstration with neonates occurred in 1969, when a polarographic electrode placed on the head was used to measure oxygen partial pressure. Spatially distributed heating of skin and deeper tissue by electromagnetic fields and ultrasound is also of established interest. 
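The validation benchmark mentioned in the abstract, a homogeneous slab with a spatially distributed uniform (perfusion-like) sink held at constant temperatures at the ends, has a closed-form steady state against which a lattice solver can be checked. A minimal sketch follows; the parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

# Steady state of k*T'' + w_b*c_b*(T_a - T) = 0 on [0, L],
# with fixed end temperatures T(0) = T0 and T(L) = TL.
# Writing u = T - T_a and m^2 = w_b*c_b/k gives u'' = m^2 * u, so
# u(x) = (u0*sinh(m*(L - x)) + uL*sinh(m*x)) / sinh(m*L).
k = 0.5                  # thermal conductivity [W/m/K] (illustrative)
w_b, c_b = 0.5, 3600.0   # perfusion [kg/m^3/s], blood heat capacity [J/kg/K]
T_a = 37.0               # arterial temperature [C]
L, T0, TL = 0.02, 45.0, 37.0
m = np.sqrt(w_b * c_b / k)

def T_exact(x):
    """Analytical steady-state temperature at depth x in the slab."""
    u0, uL = T0 - T_a, TL - T_a
    return T_a + (u0 * np.sinh(m * (L - x)) + uL * np.sinh(m * x)) / np.sinh(m * L)

x = np.linspace(0.0, L, 11)
Tx = T_exact(x)
assert np.isclose(Tx[0], T0) and np.isclose(Tx[-1], TL)
assert np.all(np.diff(Tx) < 0)  # temperature decays monotonically toward the core
```

Comparing a numerical profile to `T_exact` at a handful of depths is enough to expose discretization errors in a lattice or finite-difference implementation.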
In hyperthermia, tissue is heated to enhance the effect of conventional radio- or chemotherapy. By delivering thermal energy, the tissue is stimulated to increase the blood flow by thermoregulation in order to remove the excess heat. The common method of producing local heating in the human body is the use of electromagnetic waves. The perfusion is characterized by \u03c9; alternatively, \u03c9 can be replaced by \u03c9m, the nondirectional mass flow associated with perfusion. Perfusion is valid on the spatial scale of ~100 \u03bcm. The contributions of heat conduction and perfusion are combined in the Pennes bioheat equation. The skin was exposed to a 10 GHz pulse for 3 s. The layer of air farthest from the skin was set at 25\u00b0C and the core (2 cm below the surface) was set to 37\u00b0C. This resulted in the skin/air interface having a steady-state temperature of 34\u00b0C before the microwave exposure. The skin/air interface has a power transmission coefficient (|Tsa|2Re{Za/Ze}) of 0.49 at 10 GHz. Applying the 10 GHz microwave results in an essentially linear rise in temperature, in agreement with predictions using other methods. When the input power level is less than 5 W cm-2, the peak surface temperature is less than 42\u00b0C. When the microwave exposure is turned off, relaxation of the skin temperature occurs over a time scale of several seconds. Onset of tissue damage occurs when the local tissue temperature reaches 42\u00b0C. The distribution of tissue damage with depth is shown for different power densities. A hot tip inserted into the epidermis is also modeled. The model assumes that the tip of the metal wire is enclosed in a thermally insulating material. The skin model contains stratum corneum, epidermis and dermis. As before, the core temperature (37\u00b0C) is fixed at 2 cm from the skin surface by extending the subcutaneous layer. Before heating, the wire conducts heat outwardly to the air, consistent with the isotherms. 
The temperature at the hot wire tip is increased to 45\u00b0C at t = 10 s. The temperature contours at different time points are shown in Fig. The use of the transport lattice approach for predicting heat transport in spatially heterogeneous structures is further illustrated with a two-dimensional model of the skin. The model is derived from an image of a histological section of skin (Fig.). Both in vivo and in vitro studies have shown that the tissue response to heat stress is strongly temperature-dependent [64,65]. A more comprehensive non-linear temperature-dependent perfusion model has been applied in modeling hyperthermia; Tompkins et al., for example, used such a temperature-dependent model. We present a modular approach to modeling in which the skin is represented by three homogeneous layers, each with many interconnected local, steady state models that account for the local heat storage (heat capacity), local heat dissipation and local transport by both conduction and perfusion (Fig.). Prolonged exposure to elevated temperatures can cause tissue damage by, for example, protein alteration or denaturation, often followed by recognizable changes in the optical properties of tissue. When skin is exposed to a 10 GHz pulse of 3 s duration, the tissue damage indicator near the skin surface may be as high as 0.08, which suggests some damage at high power levels (Fig.). Transport of heat by conduction, and by temperature-dependent, spatially heterogeneous blood perfusion, is predicted using a transport lattice model. This approach uses interconnected, local, steady state models for transport and storage that together represent the Pennes bioheat equation. 
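The Pennes equation that the lattice represents can also be sketched with a simple one-dimensional explicit finite-difference stand-in. This is our own simplified illustration, not the transport-lattice/circuit formulation used in the paper, and the parameter values are typical literature figures assumed for the example:

```python
import numpy as np

# Pennes bioheat equation in 1D, explicit finite differences:
# rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + Q
rho, c = 1000.0, 3600.0  # tissue density [kg/m^3], heat capacity [J/kg/K]
k = 0.5                  # thermal conductivity [W/m/K]
w_b, c_b = 0.5, 3600.0   # perfusion [kg/m^3/s], blood heat capacity [J/kg/K]
T_a, Q = 37.0, 0.0       # arterial temperature [C], volumetric heating [W/m^3]

nx, dx = 101, 2e-4       # 2 cm of tissue depth
dt = 0.9 * dx**2 * rho * c / (2.0 * k)  # stable explicit time step
T = np.full(nx, 37.0)
T[0] = 45.0              # heated surface (surface contact heating case)

for _ in range(20000):   # march to an approximate steady state
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap + w_b * c_b * (T_a - T[1:-1]) + Q)
    T[0], T[-1] = 45.0, 37.0  # fixed surface and deep (core) temperatures

assert 37.0 - 1e-9 <= T.min() and T.max() <= 45.0 + 1e-9
```

The lattice approach replaces this time-stepping with a network of local thermal "circuit" elements, but the two should agree on simple geometries like the one above.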
The thermal system model of the skin was solved by exploiting the mathematical analogy between local thermal models and local electrical (charge transport) models, thereby allowing robust, circuit simulation software to obtain solutions to Kirchhoff's laws for the system model. The skin model has a nonperfused viable epidermis, and deeper regions of dermis and subcutaneous tissue with perfusion that was constant or temperature-dependent. Spatially distributed heating and surface heating cases were considered. Accumulated thermal damage was estimated by using an Arrhenius type relation at locations where viable tissue temperature exceeds 42\u00b0C. Prediction of spatial temperature distributions was also illustrated with a two-dimensional model of skin created from an image. Validation of the transport lattice approach using experimental data is necessary for practical application of this method.TRG constructed and solved the several transport lattice models and wrote much of the manuscript. DAS computed the reflected and transmitted power in the skin layers, contributed to construction and solution of the models, and to writing of the manuscript. GTM provided guidance and advice with respect to thermal modeling, and helped write the manuscript. JCW conceived the local transport lattice model for solving the bioheat equation, provided overall guidance and helped with interpretation of results and writing the manuscript. All authors read the final manuscript."} +{"text": "Temperature is a frequently used parameter to describe the predicted size of lesions computed by computational models. In many cases, however, temperature correlates poorly with lesion size. 
Although many studies have been conducted to characterize the relationship between the time-temperature exposure of heated tissue and cell damage, to date these relationships have not been employed in a finite element model.We present an axisymmetric two-dimensional finite element model that calculates cell damage in tissues and compares lesion sizes using common tissue damage and iso-temperature contour definitions. The model accounts for both temperature-dependent changes in the electrical conductivity of tissue and tissue damage-dependent changes in local tissue perfusion. The model is validated using excised porcine liver tissues.The data demonstrate that the size of thermal lesions is grossly overestimated when calculated using traditional temperature isocontours of 42\u00b0C and 47\u00b0C. The computational model results predicted lesion dimensions that were within 5% of the experimental measurements.When modeling radiofrequency ablation problems, temperature isotherms may not be representative of actual tissue damage patterns. The mitigation of primary and metastatic tumors by radiofrequency ablation is a developing research area. The goal of ablation is to necrose treatment volumes by raising the temperature of targeted tissues. Ablation probes are inserted percutaneously, laparoscopically, or during surgery into cancerous tumors. Once positioned, high frequency alternating current (450\u2013550 kHz) is delivered through an uninsulated electrode into the surrounding tissues to a dispersive ground pad that is applied to the patient. The electromagnetic energy is converted to heat by resistive heating.While the usage of radiofrequency ablation devices is well established, efforts to optimize treatment strategies are ongoing. An important consideration in optimizing ablation is determining what treatment volumes are necessary and acceptable. In liver ablation, for example, treatment volumes generally extend a centimeter beyond the dimensions of a tumor [1-3]. 
Results in vitro and in vivo in animal models show wide variations, since many of the key parameters (i.e. tissue perfusion) cannot be controlled. The electric potential V in the tissue satisfies \u2207\u00b7(\u03c3\u2207V) = 0 (Eq. 1). The electrical conductivity of tissue is represented by that of an electrically equivalent sodium chloride solution, \u03c3(N) = N [10.394 - 2.3776 N + 0.68258 N2 - 9.13538 \u00d7 10-2 N3 + 1.0086 \u00d7 10-2 N4], where N is the normality of the solution. The thermal properties of liver used in the model were acquired from Tungjitkusolmun et al. and Duck. A source voltage (Vo) is applied to the conducting tip of the ablation probe. The outer surface of the model serves as an electrical ground return (V = 0). An electrically insulating boundary condition is applied to the non-conducting portions of the probe such that n\u00b7(\u03c3\u2207V) = 0, where n is the unit vector normal to the surface, \u03c3 is the electrical conductivity, and V is the voltage at the insulating surface. A thermal boundary condition of T = Tamb is applied to the outer surfaces of the model to simulate ambient temperature. Since the thermal mass of the probe is small compared to the surrounding tissue, we assumed that heat conduction into the probe itself was minimal. Thus, all other surfaces of the ablation probe are considered to have a thermally insulating boundary condition such that n\u00b7(k\u2207T) = 0. The model was implemented in Femlab and Matlab to calculate temperature and tissue damage. While conventional finite element models effectively solve field solutions using a nonuniform geometrical mesh, tissue exposure calculations are integrated at each point in the model over the course of ablation and are more easily calculated using uniform rectilinear grids. Given the axial symmetry of the problem, we used a 2D-axisymmetric mesh consisting of 13,641 nodes and 26,880 elements. The Femlab 'Fldaspk' ordinary differential equation solver was used to achieve convergence. 
This is a robust variant of the traditional ode15s stiff differential equation solver used in solving finite element problems in Matlab. Ablations were simulated at source voltages of 0, 2.5, 5, 7.5, 10, 12.5, 15, 17.5, 20, 22.5, 25, 27.5, and 30 volts. For each of the source voltages, we varied the initial level of tissue perfusion at 0%, 20%, 40%, 60%, 80%, and 100% of normal tissue perfusion (6.4 \u00d7 10-3 cubic meters of blood per cubic meter of tissue per second). To validate the computational model, experimental measurements were made in 6 freshly excised porcine liver sections. A single needle ablation probe with a 2 cm uninsulated tip was inserted 3 cm into each liver tissue. Since commercial RF ablation generators operate using either constant temperature or constant power feedback algorithms, an experimental constant voltage RF generator (500 kHz) was used [59]. Computational model calculations were made at 20, 25, and 30 volts following the same experimental protocol. Ambient temperature for these calculations was 22\u00b0C instead of the 37\u00b0C temperature used in the main simulations. The calculated lesion sizes were directly compared with the measurements in tissue. At full perfusion (6.4 \u00d7 10-3 mb3/mt3/s), the electrical conductivity changes as much as 260% using a 30 volt source. The electrical conductivity is indirectly a function of tissue perfusion, since tissue perfusion is zero in the necrosed treatment volume. Tissue perfusion lowers the tissue temperature outside the treatment volume, which helps to conduct heat away from the ablated area. The maximum specific absorption rate, SAR = \u03c3|E|2/\u03c1, is shown in Table 2, where \u03c3 is the electrical conductivity, \u03c1 is the tissue density, and |E| is the magnitude of the electric field. The data show that the SAR is highest with increasing source voltage with no tissue perfusion. Initially, this seems counterintuitive, as one would expect a higher maximum SAR for perfused flows, where a greater amount of power is needed to compensate for the convective heat loss. 
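Two of the quantities used above can be sketched directly: the conductivity of the electrically equivalent NaCl solution as a function of normality N, and the specific absorption rate. The polynomial coefficients are a Stogryn-type fit for NaCl at 25 C and the SAR definition is the standard sigma*|E|^2/rho; treat both as assumptions rather than verified transcriptions from this study:

```python
import numpy as np

def sigma_nacl(N):
    """Electrical conductivity [S/m] of a NaCl solution of normality N
    (Stogryn-type polynomial fit at 25 C; coefficients assumed)."""
    return N * (10.394 - 2.3776 * N + 0.68258 * N**2
                - 9.13538e-2 * N**3 + 1.0086e-2 * N**4)

def sar(sigma, E_mag, rho):
    """Specific absorption rate [W/kg]: SAR = sigma*|E|^2/rho,
    with sigma [S/m], |E| [V/m], and rho [kg/m^3]."""
    return sigma * np.asarray(E_mag)**2 / rho

# conductivity is nearly linear in N at low normality ...
assert 0.9 < sigma_nacl(0.1) / (0.1 * 10.394) < 1.0
# ... and doubling the field magnitude quadruples the SAR
assert np.isclose(sar(0.5, 200.0, 1060.0), 4.0 * sar(0.5, 100.0, 1060.0))
```

Because SAR scales with sigma at fixed |E|, any temperature-driven rise in conductivity feeds back directly into the local power deposition.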
This observation can be explained by the large changes in the electrical conductivity (Table 2), whereas cell damage exhibits an S-shaped curve. A comparison of lesion volumes computed using 63% and 100% iso-damage threshold contours and 42\u00b0C, 47\u00b0C, 60\u00b0C, and 90\u00b0C isothermal contours is presented for the cases of no tissue perfusion (Table ). To validate the computational model, ablation experiments were performed at room temperature (22\u00b0C) in excised porcine liver tissue using 20, 25, and 30 volt constant voltage radiofrequency sources (500 kHz). Ablations were made for a 15 minute exposure time. To date, several computational studies have been performed to describe the rate of lesion growth in radiofrequency ablation applications. In many cases, these studies use surrogate endpoints such as temperature isotherms and thermal dosing to calculate equivalent expressions for lesion size. While many models exist that account for far more elaborate parameters, such as tissue perfusion through large blood vessels, the interpretation of such models is difficult since most do not account for transient changes in tissue properties and often report tissue temperature only [56,57,60]. In this study, we created a computational simulation that tested some of the basic assumptions made in modeling lesion growth problems. We developed a model where tissue perfusion and the electrical conductivity are allowed to vary at each time step and spatial position as a function of tissue damage and temperature. These simulations are significantly more time-consuming since gross simplifications to heating mechanisms are not made. Although our model geometry is simpler than others that appear in the literature, we chose to ignore large vessels since their position and impact are highly variable. 
We chose a simpler geometry so that the impact of damage-dependent tissue perfusion and temperature-dependent electrical conductivity could be assessed more directly.The damage-dependent tissue perfusion accounts for physiological observations of tissue coagulation and local cessation of blood flow. Unlike thermal dosing, where thermal injury is calculated globally over the entire duration of an ablation, tissue damage is calculated at every time step. The intermediate tissue damage that results at every timestep influences the local tissue perfusion and creates a moving boundary condition which changes the local heat sink properties. Ignoring the intermediate timesteps causes tissue perfusion to remain constant throughout the entire ablation, which results in an underestimation of the true lesion size. The use of temperature-dependent electrical conductivity greatly affects modeling results, as the electrical conductivity has been shown to increase dramatically over the course of tissue heating. An important outcome of this study is the demonstration that temperature isotherms and tissue damage patterns are not synonymous. Traditional temperature isotherms used to define lesion size rely on protein coagulation temperatures (42\u201347\u00b0C) and grossly overestimate lesion dimensions. Our studies show that the temperature decrease is gradual, while tissue damage decreases rapidly as a function of distance. It is this sharp decrease in tissue damage that causes lesion boundaries to appear fuzzy, as predicted by our model. The results also demonstrate that ablation lesions continue to grow after the applied power is terminated. Lesions continue to grow while temperature envelopes collapse after ablation, since sufficiently high temperatures are present to accrue tissue damage. In nearly all cases, lesions continued to grow several minutes following the ablation. 
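The damage calculations discussed above follow the usual Arrhenius formalism: an integral Omega = integral of A*exp(-Ea/(R*T)) dt accumulates over the temperature history, and the damaged cell fraction is modeled as 1 - exp(-Omega), so Omega = 1 corresponds to the 63% iso-damage threshold. A hedged sketch follows; the rate parameters A and Ea are classic Henriques-type values for thermal injury, assumed for illustration rather than taken from this study:

```python
import math

A = 3.1e98    # frequency factor [1/s] (assumed Henriques-type value)
Ea = 6.28e5   # activation energy [J/mol] (assumed)
R = 8.314     # gas constant [J/mol/K]

def omega(temps_celsius, dt):
    """Accumulate the Arrhenius damage integral over a sampled
    temperature history (one sample every dt seconds)."""
    return sum(A * math.exp(-Ea / (R * (T + 273.15))) * dt
               for T in temps_celsius)

def damaged_fraction(om):
    """Damaged cell fraction for first-order kinetics: 1 - exp(-Omega)."""
    return 1.0 - math.exp(-om)

# Omega = 1 is the commonly quoted 63% damage threshold ...
assert abs(damaged_fraction(1.0) - 0.632) < 1e-3
# ... and a minute at 60 C accrues far more damage than a minute at 45 C
assert omega([60.0] * 60, 1.0) > 1.0 > omega([45.0] * 60, 1.0)
```

Evaluating `omega` at every time step, rather than once over the whole exposure, is what lets the damage field feed back into the perfusion term as the lesion grows.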
A comparison of the resulting lesion dimensions between fully perfused and non-perfused tissues shows that the lesion width decreases 38\u201346% and the lesion depth decreases 18\u201320% when tissue perfusion is accounted for in the model. Previous studies have shown that tissue perfusion can account for as much as a 50% change in the size of the lesions generated during ablation.An important observation in this study is the resemblance of the 60\u00b0C isocontour to lesion size. While the 42\u00b0C and the 47\u00b0C isotemperature contours are poor indicators of lesion size, 60\u00b0C is highly correlated with the lesion volumes. Seemingly, this would suggest that time-intensive tissue damage calculations need not be made, since a critical temperature of 60\u00b0C can be used to identify lesion size. However, this is only true if the calculated temperature is a function of both transient changes in tissue perfusion and the electrical conductivity. In the absence of either of these phenomena, calculations at 60\u00b0C would underestimate lesion size.The validation data demonstrate that the model accurately accounts for the behavior of lesion growth in tissue. There are, however, a few limitations to this model. First, it is well established that temperature elevation of tissues results in the denaturing of proteins, which may drastically change the electrical conductivity of tissue in a nonlinear fashion [57].A second limitation of our model is that it is only valid for temperatures below 100\u00b0C. At temperatures above 100\u00b0C, tissues begin to boil and generate gas. When this occurs, some of the energy that contributes to temperature increase is used to change the water content of tissues into gas. At substantially higher temperatures, the composition of gas may be highly complex as tissue begins to burn and break down. 
Although gas generation is commonly seen in clinical use of radiofrequency ablation, impedance rises due to tissue charring limit the progressive rise in temperature. The complexity of multi-phasic ablation was beyond the scope of this study.The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services."} +{"text": "In 2001 the Intergovernmental Panel on Climate Change concluded that climate is changing, humans are contributing, weather has become more extreme, and biological systems on all continents and in the oceans are responding to the warming. From the fourth IPCC assessment and the Arctic Climate Impact Assessment (ACIA 2004), we now know that the deep oceans have accumulated 22 times more heat than has the atmosphere, ice melt is accelerating, wind patterns are shifting (that\u2019s particularly ominous), and nonlinear surprises are very likely in store for the climate system and for the impacts on systems such as forests and coral reefs . The implications for public health and well-being are daunting, as illustrated in articles throughout this month\u2019s Environews section.With weather turbulence turning heads on Wall Street, an emerging evangelical voice calling for \u201ccreation care,\u201d the specter of \u201cpeak oil,\u201d and a barrage of energy bills mounting Capitol Hill (the U.S. Congress), we appear to be on the verge of really taking the first steps toward confronting our energy budget. The goal of stabilizing atmospheric concentrations of greenhouse gases requires a 60\u201380% reduction of emissions over the coming few decades.What follows are some considerations for crafting a comprehensive plan and some financial and policy instruments for implementation. 
Comparing life-cycle costs\u2014the health, ecologic, and economic dimension\u2014of proposed solutions can help differentiate safe solutions from those warranting further study, and from those with risks prohibiting wide-scale adoption. Solutions meeting multiple goals merit high ratings.Energy conservation (demand side management) is clearly the first place to start. \u201cSmart\u201d urban growth; a smart grid (with optimizing meters and switches); hybrid vehicles; heat capture from utilities or \u201ccogeneration\u201d (two-thirds of produced energy is lost as heat); \u201cgreen buildings\u201d; and walking, biking, and improving public transport can get us halfway there\u2014and save money.Distributed generation (DG)\u2014power produced near the point of use\u2014with solar, wind, wave, geothermal heat pumps, and fuel cells can be fed into grids where they exist and, via \u201cnet metering\u201d regulations, generate income for the individual producer. Where energy is scarce and grids are few, stand-alone systems\u2014augmented with human power and stored in improved batteries\u2014can pump water, irrigate fields, power clinics, light homes, cook food, and drive development. Clean DG also improves resilience in the face of more weather extremes , reduces carbon emissions , stimulates green industries, and creates jobs.Biofuels hold a great deal of promise. However, converting corn to ethanol means less corn for animals and us, and may yield no net energy gain. Sugar ferments without adding energy (yeast suffices). But large plantations can deplete soils and groundwater and, in the Amazon, sugar for alcohol is pushing land clearing for soybean production deeper into the rainforest. In Indonesia, monoculture plantations of trees that produce palm oil are transforming and degrading vast swaths of prime forest, setting the stage for spreading fires and releasing biologically stored carbon from trees and peat. 
Cellulosic conversion of range grasses by microbe-generated enzymes may work, but land considerations still hold; recycling farm waste and garbage may yield the best results overall.Building green buildings with healthy surroundings will create a critical syzygy, aligning clean energy production with sustainable forestry and green chemistry.While it is unrealistic to think we can meet all of our energy needs without some fossil fuel use, natural gas is the cleanest burning and may be the best back-up source during the transition. Also, hydrogen gas (H2) can be separated from natural gas or methane (CH4) for use in fuel cells.Financial institutions often have the longest time-line perspectives. Finance can be thought of as the central nervous system of the global economy: It is feeling the pain of huge losses from weather extremes, with insured losses rising from $400 million a year in the 1980s to $83 billion in 2005 (Epstein and Mills 2005), and insurers are cogitating on their response. Enlightened, self-interested actions of investors and insurers\u2014through requirements for loans, influence on building codes, and reduced premiums for proactive directors and officers of firms, for example\u2014could ripple through the entire global economy.Governments must provide the incentives and create the infrastructure for the new economy. Credits for \u201cclean tech\u201d industries, progressive procurement practices, and tax benefits for commercial models that defray upfront capital costs are among the incentives needed to launch infant industries and drive market shifts. Aligning rules, regulations, and rewards\u2014and dismantling the enormous financial and bureaucratic disincentives\u2014can help erect the necessary scaffolding for the low carbon economy.Finally, the United States must sign the Kyoto Protocol (United Nations 1998). 
Under its umbrella, we can help create a substantive global fund for adaptation and mitigation that can make the clean energy transition a \u201cwin\u2013win\u2013win\u201d for energy, the environment, and the global economy."} +{"text": "Just past its 50th birthday, commercial nuclear energy is experiencing a tentative rejuvenation that could result in a greater role as a global source of electricity. Skeptics still harbor many of the objections that have slowed or stopped the construction of new nuclear power plants, but rising concerns about the cost and security of energy supplies and global climate change have reframed the debate in terms more favorable for nuclear power advocates.As a result, the question of whether governments should encourage the construction of new nuclear power plants is no longer off the table in developed countries such as Australia, the United Kingdom, and the United States. For other developed countries such as France and Japan, and for countries with fast-growing economies such as China and India, nuclear energy has remained a central component of energy policy. For example, to achieve its goal of generating 4% of electricity from nuclear power, China plans to add more than 30 new nuclear plants by 2020 to the 11 currently in operation or under construction. India\u2019s goal is to supply 25% of its electricity from nuclear power by 2050.Worldwide there are now 440 nuclear power reactors operating in 31 countries and producing a combined capacity of 367 gigawatts electric, or about 16% of the world\u2019s supply of electricity. The Vienna-based International Atomic Energy Agency (IAEA)\u2014the agency of the United Nations chartered to promote cooperation on nuclear issues\u2014estimates that at least 60 new nuclear plants will be constructed in the next 15 years. 
Given the world\u2019s growing demand for electricity, however, this added capacity will still account for only 17% of global electricity use. One central issue facing policy makers and electric utilities is the question of how to meet the rapidly growing worldwide demand for electricity while not increasing global greenhouse gas emissions. The U.S. Department of Energy\u2019s Energy Information Administration tracks world energy trends and projects a 75% increase in global electricity use between 2000 and 2020. By 2050 a tripling of use is probable. Electricity production currently is responsible for an estimated one-third of all greenhouse gas emissions. In terms of human welfare, this growth in electricity usage is desirable as reflected in the strong correlation between electricity consumption per capita and the United Nations\u2019 human development index, which combines indicators of health, education, and economic prosperity. Overall energy consumption per capita in the developing world is less than one-fifth that in the developed world, and as developing countries industrialize, they will tend to seek the least expensive supply to meet their electricity needs. In most cases this means coal-fired plants, which produce significantly more greenhouse gases\u2014primarily carbon dioxide\u2014than other carbon-based sources such as natural gas\u2013fired generators. Nuclear and noncarbon-based renewable sources such as wind and solar power do not directly create greenhouse gases. Global climate change and the 2005 entry into force of the Kyoto Protocol to the United Nations Framework Convention on Climate Change have spurred new thinking about the potential value of nuclear energy by both environmental groups and the nuclear energy industry. 
Recently, several prominent environmentalists have publicly supported nuclear energy, including former Anglican bishop Hugh Montefiore, a long-time trustee of Friends of the Earth, and Patrick Moore, cofounder of Greenpeace. Their support has alienated them from many in their former organizations, but indicates a more nuanced challenge to nuclear energy by some environmental activists, who are perhaps more willing to consider the nuclear option but still do not think it\u2019s the wisest choice. Organizations such as the Natural Resources Defense Council and the Union of Concerned Scientists now talk in terms of the proper role of government in energy policy and ensuring the safe operation of nuclear plants, rather than whether nuclear power should even be considered. The potential for building new nuclear power plants is quite different in different countries. For example, the role of nuclear power is unlikely to change substantially in countries with a flat demand for electricity, such as Japan, which now relies on nuclear power for 30% of its electric capacity and expects to see a population decline, or France, with a stable population and a power industry that is 80% nuclear. On the other hand, the United States, which currently operates 103 nuclear power plants and relies on nuclear energy for 20% of its electricity, expects to see a rising population and consequent greater demand. Developing countries offer the potential for considerably more use of nuclear power, especially as much of their populations will be urban, providing a concentrated market for large electric-generating plants. So in answer to the question of whether nuclear power makes economic sense, it simply depends\u2014\u201cin some countries it does, in others it does not,\u201d says Alan McDonald, a staff expert in planning and economic studies at the IAEA. \u201cIn countries like China and India, you need [every source of power] you can get. Asia has major pollution problems and energy needs. 
Sometimes it seems to be a matter of national preferences. In countries like Austria and Denmark, nuclear power is anathema; in others like Germany, opinions may be changing. In the United States, Wall Street is very skeptical and will watch developments closely.\u201d Relative costs of nuclear energy vary depending on what options and factors are being considered, but in general, McDonald says, the up-front costs of nuclear energy are very high while the cost of operation is relatively low. Thus, countries with government-owned electric utilities have an advantage in new power plant construction because they can fund investments more easily than investor-owned utilities, which are subject to the capital markets and the demand for rapid returns on investments. \u201cUntil the Kyoto Protocol, the environmental value of nuclear energy could not be translated into financial terms,\u201d says McDonald. \u201cBut now, obtaining greenhouse gas emission permits for a new coal-fired plant in Europe can cost more than the coal itself. Although the United States is not bound by Kyoto, U.S. investors may see the writing on the wall. If the treaty is changed and nuclear power becomes part of the international market mechanism that allows credit for clean energy sources and the trading of carbon emission credits, that would be a big incentive.\u201d But more nuclear power doesn\u2019t come without potential security threats of another sort. \u201cIf the world sees a big increase in nuclear energy, there will be an increased risk of [nuclear arms] proliferation\u2014all things being equal,\u201d McDonald notes. Indeed, the director general of the IAEA, Mohamed ElBaradei, says that recent revelations about undeclared uranium enrichment activities and reprocessing of spent fuel, along with the discovery of an international illicit market in nuclear technologies, underline the need for improved controls. 
On 7 October 2005 ElBaradei and the IAEA were awarded the 2005 Nobel Peace Prize for their efforts to stop the spread of nuclear weapons and prevent North Korea and Iran from acquiring nuclear arms. In response to the threat of proliferation, the IAEA has developed a model Additional Protocol that signatories can add to their IAEA Safeguards Agreements, which address questions of traceability and verification of nuclear materials. The Additional Protocol strengthens safeguards, protects nuclear materials and facilities, and bolsters the systems of nuclear export controls. So far more than 100 countries have added the protocol to their agreements. The IAEA further proposes that future reactor technologies be designed to be more resistant to proliferation, and that the international enrichment and reprocessing of nuclear fuel be centralized in a few countries under a structure that guarantees supply to member nations. The question of whether nuclear energy should play a significant role in future electric power generation cannot be separated from its history, the role played by governments, or the nuclear fuel cycle itself. The cycle has always been a focus of concern, from the potential hazards of uranium mining operations, through the processing of uranium into fuel, to the controlled fission process in the reactor core, and finally to the disposal or reprocessing of the fuel and related waste products. The civilian nuclear power industry was created through U.S. government\u2013electric utility industry cooperation that officially began with the Atomic Energy Act of 1954. Until that point, all U.S. atomic energy resources had been devoted to military activities. President Dwight Eisenhower\u2019s \u201cAtoms for Peace\u201d speech to the United Nations in December 1953 led to the U.S. government\u2019s financial and technical support of commercial nuclear energy. 
The government also enacted the Price-Anderson Act of 1957, requiring nuclear power operators to carry the maximum insurance offered by private insurance companies but also limiting their liability\u2014a stipulation demanded by the utility companies before they would invest in building nuclear power plants. The U.S. Navy first developed the now widely used pressurized-water reactor for propulsion in submarines. This design became the basis for the first commercial nuclear plant at Shippingport, Pennsylvania, which began operation in 1957. In the Soviet Union, reactors designed for producing plutonium for weapons were modified and new ones developed to generate heat and electricity. The first such reactor began producing electricity for the city of Obninsk in 1954. The fostering of nuclear energy was woven into many U.S. foreign policy initiatives during the early days of the Cold War. The United States sponsored the creation of the IAEA as the global manager of nuclear technology and materials, it supported international research reactors and isotopes for nuclear medicine and agriculture, and it helped create a nuclear energy industry in Europe, where coal production was declining and other sources of electric power were limited. The U.S. commercial nuclear power industry flourished from the mid-1960s through the early 1970s, although the power plants operating then were not economical compared to other sources at the time. Nuclear energy advocates argued that, with moderate and selective government assistance, the technology could cross the economic threshold into widespread acceptance by the utility industry. The U.S. Atomic Energy Commission\u2014which then combined the functions of today\u2019s Nuclear Regulatory Commission (NRC) and Department of Energy\u2014estimated that the United States would exhaust its oil and coal supplies within 100 years and that nuclear energy was the best replacement for fossil fuels in electricity production. 
The commission optimistically estimated that by 2000 as much as two-thirds of the nation\u2019s electric power could come from nuclear energy. The peak year for achieving this scenario in the United States was 1973, when 50 orders were placed for new nuclear plants, although in the following years leading up to 1979, cancellations began to exceed new orders. Then, in March 1979, a series of operator errors and miscommunications led to the partial core meltdown in the pressurized-water reactor at Three Mile Island Unit 2. The accident did not result in major damage outside of the core and primary cooling system, and according to all official estimates, the radiation released during the accident was minimal, well below levels that have been associated with health effects from radiation exposure. However, a panicked evacuation of nearby residents took place, followed by extensive investigations and a government-subsidized 10-year cleanup effort. The notoriety of the accident, combined with the high cost of construction, slow regulatory processes, and political opposition, essentially halted the growth of the U.S. nuclear industry. Although numerous nuclear power plants that had been under construction at the time eventually came online, no new U.S. plants were ordered. The devastating accident at Chernobyl Unit 4 in April 1986 could have been the death knell of the industry worldwide. The steam explosion, fire, and nuclear fuel melting at the site were the result of a flawed reactor design operated by inadequately trained personnel who violated safety procedures. 
The reactor design widely used for nuclear power in the Soviet Union did not include the containment system used with most Western reactors, and so substantial quantities of radioactive material, dust, and gases escaped into the atmosphere. About 50 people were killed in the initial accident and emergency response. The accident was a deeply traumatic experience for the 350,000 people who relocated from the area. A 30-square-kilometer area around the site remains closed because of high levels of contamination. The Chernobyl site is now entombed in a concrete structure known as the Sarcophagus, but it is not stable for the long term and is not air- or watertight. A September 2005 IAEA report, Chernobyl\u2019s Legacy: Health, Environmental, and Socio-Economic Impacts and Recommendations to the Governments of Belarus, the Russian Federation, and Ukraine, estimates that around 4,000 people have died or will die as the result of exposure related to the accident. The report observes that \u201cmental health is the largest public health problem created by the accident,\u201d referring to affected residents\u2019 subsequent poverty, substance abuse problems, and \u201cparalyzing fatalism,\u201d manifested as negative self-assessments of health, belief in a shortened life expectancy, lack of initiative, and dependency on assistance from the state. Even with the resulting public outcry against nuclear power, the world did not halt new construction of nuclear power plants. However, some European countries such as Belgium, Germany, and Sweden began to reconsider their plans for nuclear energy, and eventually developed policies to phase out existing plants. Now some of these countries are under the gun to find replacement energy sources. Sweden, for example, aims to be nuclear-free by 2010, having taken a second reactor offline in June 2005 (the first was closed in 1999). 
But the remaining 10 plants still supply about half of Sweden\u2019s domestic energy production, according to the World Nuclear Association. An influential 2003 report out of the Massachusetts Institute of Technology (MIT), The Future of Nuclear Power: An Interdisciplinary MIT Study, spelled out the major areas of concern surrounding nuclear energy and proposed a plan that the authors hoped would allow the United States to resume development of nuclear power in order to reduce greenhouse gas emissions. The study identified the four critical problems that must be overcome for nuclear power to succeed\u2014cost, safety, waste, and proliferation. It also offered policy recommendations for making the nuclear energy option commercially viable, including steps to lower cost and a limited production tax credit to \u201cfirst movers,\u201d private sector investors who build and then operate new nuclear plants. \u201cOur recommendations are basically holding up,\u201d says study cochair Ernest Moniz, who is codirector of MIT\u2019s Laboratory for Energy and the Environment and former undersecretary for energy during the Clinton administration. \u201cOn the positive side, new regulatory approaches are being developed, the industry\u2019s intent is to build a new reactor, there are more open discussions with environmental groups, and the Energy Policy Act became law,\u201d he says. \u201cOn the negative side, the situation with spent fuel management is worse\u2014Yucca Mountain casts a shadow over any decision. And the non-proliferation situation in Iran is a real problem.\u201d The fate of Nevada\u2019s Yucca Mountain nuclear burial site is unclear. In the face of sustained resistance from the state and citizens groups, the federal government has slowed in its effort to build a long-term geological repository for commercial spent fuel and high-level radioactive waste. 
Opposition to the Yucca Mountain project is based on a long history of Nevada being a nuclear weapons testing grounds, resentment at becoming a repository for toxic waste generated elsewhere in the country, and concerns that the site is not geologically stable enough to guarantee that the radioactivity will remain confined over the required 10,000-year span. But several more such sites will be needed in future decades if a significant number of new nuclear power plants are built. Moniz says the MIT study endorses a robust research and development program and tax credits for the nuclear industry. This is because, in the past, there has been considerable regulatory uncertainty, causing prohibitively high financial risk for utility investors. In addition, the true cost of burning carbon-based fuels has not been internalized, meaning that if the health and environmental costs of pollution and greenhouse gases could be factored in, nuclear energy would be very competitive. As a result, public subsidy of noncarbon-based energy sources is justified. The comprehensive Energy Policy Act of 2005 that Moniz cites provides loan guarantees to develop energy technologies, including nuclear power, that avoid, reduce, or sequester greenhouse gases. It also provides a tax credit of 1.8\u00a2 per kilowatt hour for 6,000 megawatts of capacity at new nuclear power plants. Important to the industry, the act provides investment protection against delays in licensing and startup that are beyond the control of industry, including litigation. The act also provides several billion dollars for nuclear energy research and development, which translates into work on a more cost-efficient and inherently safer generation of reactors known as Generation IV. These reactors achieve greater safety through passive technologies that automatically shut down the reactor in an emergency, bypassing the risk of operator error. 
They are also more efficient and relatively more cost-effective than their Generation III predecessors. In another bow to the environment, the act funds construction of a cogeneration reactor that will produce both electricity and hydrogen, which advocates hope will be a new, carbon-free fuel for automobiles\u2014the single largest source of greenhouse gas emissions. Finally, the act funds a central nuclear energy program of the Bush administration: Nuclear Power 2010. The program was unveiled in 2002 as a government\u2013industry cost-sharing plan to identify three sites for new nuclear power plants, develop Generation III reactors, and develop a single-license process with the NRC for approval of both plant construction and operation, thereby removing much of the delay and uncertainty for investors. In response, three consortia of electric utility companies, reactor suppliers, and construction firms have made proposals. None are yet committed to building a new nuclear plant. The consortia are led by Dominion Resources, Exelon and Entergy (via the NuStart Energy Development consortium), and the Tennessee Valley Authority. These consortia represent operators of 67 of the nation\u2019s nuclear plants, and their proposals have all focused on building a new plant on sites where plants already operate\u2014in much the same way that a consortium of 10 electric utilities built the Yankee Rowe plant, one of the first commercial nuclear plants, in the 1950s. The consortia embrace a number of different reactor vendors and designs, some of which have already been certified by the NRC. 
The final decision on building a nuclear power plant will depend on factors as they stand later this decade, including the power market, the status of permanent spent fuel storage, and the ability of the participants to obtain financing without adversely affecting their credit ratings. \u201cThe industry\u2019s interest is very real,\u201d says Russ Bell, a senior project manager for new plant development at the Nuclear Energy Institute, a utility trade association. \u201cThe utilities are [participating in consortia and spending money on preliminary designs and siting plans] because the economics are turning in favor of nuclear, especially over the long term. [The Kyoto Protocol] is not driving us, but it makes sense and there is increasing concern about pollution in the United States and more stringent environmental regulations.\u201d Bell says the industry is getting what it needs from the Energy Policy Act and is looking to government to do no more than jumpstart new builds after so much time has passed. He acknowledges the long time horizon for building new plants in the United States. Assuming that any of the consortia meet the 2010 goal of being licensed to build and operate a plant, another four to five years will pass before construction is complete and electricity flows. Meanwhile, the electric utility industry will continue to improve operating performance of existing nuclear power plants and apply for license extensions. Plants were originally licensed for 40 years; the first operating license issued by the NRC will expire in 2006, approximately 10% will expire by the end of 2010, and more than 40% will expire by 2015. The decision to seek license renewal is strictly voluntary, and nuclear power plant owners must decide whether they are likely to satisfy NRC requirements and whether license renewal is more cost-effective than shutting down and pursuing other sources of energy. The NRC has now granted 35 plants the right to operate for another 20 years. 
Three-quarters of the nation\u2019s plants have received, have applied for, or are expected to apply for an extension. The question of plant life extension can bring the relationship between nuclear energy and greenhouse gases into sharp focus. For example, the governors of nine Northeast states have proposed an agreement to cap greenhouse gas emissions from all power plants in their states. Two nuclear power plants in the region, one in Vermont and one in New Jersey, are up for life extension, yet if these plants are shut down, the result would be increased reliance on carbon-based fuels. This could potentially triple greenhouse gas emissions in Vermont and double them in New Jersey, according to the 14 September 2005 edition of The New York Times. \u201cWe are not fundamentally opposed to nuclear power,\u201d says David Lochbaum, a nuclear safety engineer at the Union of Concerned Scientists, \u201cbut there are better choices. In addition, we now have spent nuclear fuel in storage places where it is not meant to be. It\u2019s not a health threat yet, but it could be.\u201d Lochbaum is also concerned about the oversight role played by the NRC. \u201cThe NRC budget has been cut for a decade,\u201d he notes. \u201cIt is understaffed to support a nuclear resurgence. And the industry still has operational troubles at some plants.\u201d These concerns are echoed by Thomas Cochran, director of the nuclear program at the Natural Resources Defense Council and an advisory committee member on the MIT study. \u201cThe Energy Policy Act was the result of successful lobbying by the nuclear industry,\u201d he says. \u201cThey will probably build a few plants and then the issue is, are you back to where you are today?\u201d Cochran does not believe that the subsidy or the economics will work for nuclear power. \u201cIt\u2019s not helpful to just say you are for or against nuclear,\u201d he says. 
\u201cUltimately you must make a decision on real policy to address global warming, and a carbon tax is the best way.\u201d The objective of a carbon tax would be to internalize the environmental costs and hope for an open competitive market for energy. \u201cTo balance the energy market, you either tax a pollutant or regulate it,\u201d says Cochran. \u201cIf public policy was made correctly, it would help the nuclear industry.\u201d Is there a real, economically justified \u201cnuclear resurgence,\u201d or simply a steady growth in some regions to meet rising demand for electricity? Nothing happens quickly in the world of power plant construction. Yet major investments by government and industry can change the bases of electricity supplies in the time frame of a decade or two. France closed its last coal mine in 2004, and its transition from 15% to 80% nuclear-based electricity was accomplished in 20 years. A sense of optimism and urgency now surrounds the question of whether to pursue nuclear power. How this translates into results should unfold at a brisk, measurable pace."} +{"text": "Active Magnetic Resonance Imaging implants are constructed as resonators tuned to the Larmor frequency of a magnetic resonance system with a specific field strength. The resonating circuit may be embedded into or added to the normal metallic implant structure. The resonators build inductively coupled wireless transmit and receive coils and can amplify the signal, normally decreased by eddy currents, inside metallic structures without affecting the rest of the spin ensemble. During magnetic resonance imaging the resonators generate heat, which is additional to the usual one described by the specific absorption rate. 
This induces temperature increases of the tissue around the circuit paths and inside the lumen of an active implant and may negatively influence patient safety. This investigation provides an overview of the supplementary power absorbed by active implants with a cylindrical geometry, corresponding to vessel implants such as stents, stent grafts or vena cava filters. The knowledge of the overall absorbed power is used in a finite volume analysis to estimate temperature maps around different implant structures inside homogeneous tissue under worst-case assumptions. The \"worst-case scenario\" assumes thermal heat conduction without blood perfusion inside the tissue around the implant and mostly without any cooling due to blood flow inside vessels. The additional power loss of a resonator is proportional to the volume and the quality factor, as well as the field strength of the MRI system and the specific absorption rate of the applied sequence. For properly working devices the finite volume analysis showed only tolerable heating during MRI investigations in most cases. Only resonators transforming a few hundred mW into heat may reach temperature increases over 5 K. This requires resonators with volumes of several tens of cubic centimeters, short inductor circuit paths of only a few tens of centimeters and a quality factor above ten. Using MR sequences, for which the MRI system manufacturer declares the highest specific absorption rate of 4 W/kg, vascular implants with a realistic construction, size and quality factor do not show temperature increases over a critical value of 5 K. The results show dangerous heating for the assumed \"worst-case scenario\" only for constructions not acceptable for vascular implants. Realistic devices are safe with respect to temperature increases. However, this investigation discusses only properly working devices. 
Ruptures or partial ruptures of the wires carrying the electric current of the resonance circuits or other defects can set up a power source inside an extremely small volume. The temperature maps around such possible \"hot spots\" should be analyzed in an additional investigation. Metallic implant structures cause artifacts in Magnetic Resonance (MR) images. These effects arise either from the different susceptibility of tissue and metal, generating a discontinuity of the local field strength at the interface, or from the Faraday cage effect, which is set up by induced eddy currents on the metallic implant structure. The specific absorption rate is given by SAR(r) = \u03c3E(r)2/(2\u03c1), where \u03c3 [S/m] is the electric conductivity, \u03c1 [kg/m3] is the mass density, E [V/m] is the amplitude of a sinusoidal time dependent electric field and r is the position vector. Assuming that the amplitude of the electric field only arises by induction from a uniform, linearly polarized magnetic field with amplitude Bm [T] and angular frequency \u03c9 [rad/s] in a homogeneous body of rotational symmetry, the following equation holds: SAR(r) = \u03c3\u03c92Bm2r2/(8\u03c1). The duty cycle factor cdc = N\u03c4/TR, where TR [s] is the MR-sequence repetition time, N is the number of identical excitations during TR and \u03c4 [s] is the duration of one excitation pulse. cdc equals the time ratio of \"rf-on\" to \"rf-on + rf-off\" during an MR scan. cpwm corrects for rf-pulses which are not rectangular in shape and is the ratio between the energy of the MR excitation pulse and the energy of a rectangular shaped pulse with identical amplitude and identical duration. A circularly polarized magnetic field with magnitude B1 or a linearly polarized magnetic field with amplitude Bm for excitation is amplified inside an aMRIi with respect to the used resonance circuit. Because of the shape of vascular implants and to simplify calculations, a solenoid is chosen as inductor of the resonator and is investigated as a magnetic antenna. 
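As a numerical illustration of the induced-SAR relation, the sketch below evaluates the standard eddy-current result SAR = \u03c3\u03c92Bm2r2/(8\u03c1). Python is used purely for illustration (the paper's own code was written in Delphi/Kylix), and the tissue and field values are assumed, order-of-magnitude numbers:

```python
from math import pi

def eddy_current_sar(sigma, rho, f, b_m, r):
    """Local SAR [W/kg] induced by a uniform, linearly polarized field of
    amplitude b_m [T] at frequency f [Hz], at radius r [m] from the symmetry
    axis of a homogeneous, rotationally symmetric body:
        SAR = sigma * omega^2 * Bm^2 * r^2 / (8 * rho)"""
    omega = 2 * pi * f
    return sigma * omega**2 * b_m**2 * r**2 / (8 * rho)

# Assumed values: sigma = 0.5 S/m, rho = 1000 kg/m^3 (tissue-like),
# 64 MHz (1.5 T Larmor frequency), 10 uT field amplitude, 10 cm radius
sar = eddy_current_sar(sigma=0.5, rho=1000.0, f=64e6, b_m=10e-6, r=0.1)
```

The quadratic dependence on both frequency and radius is why eddy-current heating grows quickly with field strength and body size.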
For the theoretical estimation, the axis of the solenoid resonator is assumed to be parallel to a sinusoidal linearly polarized magnetic field with amplitude Bm, or equivalently to be in the plane of a circularly polarized magnetic field with magnitude B1. For the resonance case (2\u03c0\u03bd0 = \u03c90 = (LC)-1/2), where the overall impedance of the resonator is just the resistance R, the following equation can be derived from the law of induction, Ohms law and the definition of the quality factor Q of a resonator: Bres = Q Bm and correspondingly \u03b1res = Q \u03b1, where Vind [V] and Vself [V] are the induced and self-induced voltage respectively, Z [\u03a9] is the impedance, R [\u03a9] is the resistance, L [H] is the inductance of the inductor and \u03b1res and \u03b1 are the flip angles inside the resonator inductance and outside the resonator respectively. The total magnetic field inside the resonator arises from both components Bm and Bres. For large quality factors (Q>>1) Bres dominates Bm. With these equations it is possible to calculate the magnetically induced SAR and with the SAR the corresponding power loss inside a resonator. But this takes into account only power losses due to eddy currents and not the total power loss of a resonator. The overall power loss P [W] also includes the electric losses on and around the resonator. It is given by the following basic equation: P = \u03c90Wtotal/Q, where Wtotal [J] is the energy stored inside the resonator. The total energy Wtotal can be expressed by the energy stored in the magnetic field of maximum amplitude using the following basic relations of a solenoid inductance with nr turns: Wtotal = Bres2Vimp/(2\u03bc0). For a pulsed MR sequence with excitation magnitude B1 the total power loss can be calculated by combining these equations to P = \u03c90 Q B12 Vimp cdc cpwm/(2\u03bc0), where V, A, \u2113 with index imp are the volume, cross section and length of the implant inductor. 
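The combined power-loss relation P = \u03c90 Q B12 Vimp cdc cpwm/(2\u03bc0) can be evaluated directly. The following is an illustrative Python sketch (not the paper's Delphi/Kylix code) with assumed implant and sequence parameters:

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability [H/m]

def resonator_power_loss(b1, q, volume, f0, duty_cycle=1.0, pulse_shape=1.0):
    """Supplementary power [W] absorbed by a resonant implant, following
    P = omega0 * Q * B1^2 * V_imp * c_dc * c_pwm / (2 * mu0)."""
    omega0 = 2 * pi * f0
    return omega0 * q * b1**2 * volume * duty_cycle * pulse_shape / (2 * MU0)

# Assumed values: 10 uT excitation amplitude, Q = 10, 1 cm^3 inductor
# volume, 64 MHz (1.5 T Larmor frequency), 10% rf duty cycle.
p = resonator_power_loss(b1=10e-6, q=10, volume=1e-6, f0=64e6, duty_cycle=0.1)
# p is on the order of tens of milliwatts for these values,
# well below the few hundred mW the paper flags as critical.
```

Note how the linear dependence on Q and Vimp, stressed in the discussion, appears explicitly in the formula.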
For a specific sequence on a MRI system with a definite resonance frequency, the extra power is proportional to the quality factor Q and the volume Vimp of the inductance (Eq. (7)). The proportionality to the quality factor Q may be surprising, because it states that for better quality factors the power loss is larger. This is due to the total energy of the resonator, which depends quadratically on the field strength inside the resonator. This field strength is proportional to the exciting field B1 (or Bm) as well as the quality factor Q. Combining Eq. (5) and Eq. (6) alters the inverse proportionality to Q (Eq. (5)) to a proportionality (Eq. (7)) with respect to the field established by the transmit coil. Examining the power loss with respect to the magnetic field inside the resonator confirms the inverse proportionality to Q. A comparison of the equations shows that for a certain resonator tuned to the resonance frequency of a specific MR field strength, the supplementary absorbed power is proportional to the SAR of the MR sequence due to the identical dependence on cdc, cpwm and B12. All temperature increases are calculated as temperature difference maps by using a simulation volume divided into many small simulation cells. The energy exchange \u0394E [J] between two simulation cells with a specific contact area A [m2], temperature difference \u0394T [K] and heat diffusion path length \u0394x [m] for a time interval \u0394t [s] is given by the equation for heat conduction as \u0394E = \u03bbA\u0394T\u0394t/\u0394x, where \u03bb [W/(m K)] is the thermal conductivity. The total energy change \u0394Etot of one cell during a time interval \u0394t is the sum of exchanges of this cell with all adjacent cells with non zero contact area and the energy change due to the power loss pcell inside the cell. 
From this total energy change the temperature difference \u0394T* can be calculated as \u0394T* = \u0394Etot/(c \u03c1 Vcell), where c [J/(kg K)] is the specific thermal capacity of the cell material and Vcell is the cell volume. This investigation did not use any commercially available finite volume package. The simulation is self-coded for problems with cylindrical geometry in Kylix and Delphi, a software development environment based on object oriented Pascal. The graphical outputs are mostly generated by an evaluation (\u03b2-test) version of Teechart 7 used within the Delphi and Kylix environment. The simulation describes the time developing temperature difference maps around an aMRIi with cylindrical geometry for a constant total power loss P. The entire simulation volume is assumed as a cylinder with length Lsim and radius Rsim (see Figures). Most of the cells have contact to 4 adjacent cells with contact areas different from zero. These contact areas as well as the volumes of the cells (Eq. (10c)) only vary with the index r. Al[r] [m2] is the contact area in both cylinder axis directions (from index x to x-1 and to x+1) whereas Ar[r] [m2] is the contact area in radial direction from index r to r+1. The contact area in radial direction from index r to r-1 is identical to the area Ar[r-1]. A last \"shell\" with cells as heat (energy) sink is placed as one boundary of the simulation volume at r = m + 1 and x = n + 1. These cells always keep a constant temperature (\u0394T = 0), even when receiving energy during one simulation step. Partly the simulations use a second heat sink representing a blood flow through the inner volume of the implant. For this second heat sink the temperature differences of all cell elements below a radius rflow*\u0394r are also kept zero independent of the energy transfer from the cell elements with higher index r. 
This approximation assumes that the energy applied to the inner cylinder volume with blood flow is completely transported away from the simulation volume during one time step Δt. At r = 1 and x = 1 as well as at r = m + 1 and x = n + 1 the simulated volume has its boundaries. The symmetry and the use of cylinder coordinates imply that cells with index r = 1 or x = 1 cannot exchange energy with cells at a lower index. For x = 1 the symmetry plane defines an identical temperature at index x = -1. For the heat generating cells the physical parameters λ, ρ and c can be set freely; for all other cell elements they are set as tissue. The presented simulations use different metal parameters for the heat generating cells. The change ΔEtot of one cell is the sum of five different energies. One is due to the power loss pcell = p defined for a cell element at index r and x. The four others are energy exchanges (Eq. (8)), depending on the contact areas and the temperature differences between adjacent cells with respect to the equivalent heat diffusion length through the material of the examined region. From ΔEtot the temperature change ΔT* is calculated according to Eq. (9b). This value is added to the prior value for each cell during the whole iteration process. The entire simulated time tsim consists of q iterations with time interval Δt (tsim = q·Δt). The simulation starts with a temperature field of ΔT = 0 in every cell.

As an analytical reference case, consider an endless linear wire with radius rwire, heated by a constant power density P* [W/m], defined as power per wire length. After reaching the thermal equilibrium, the power associated with P* penetrates through every cylinder surface surrounding the linear power source, independent of the radius r (with r > rwire).
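In code, the per-cell update of Eqs. (8) and (9b) amounts to only a few lines. The sketch below is an illustrative Python re-implementation (the original simulation was self-coded in Delphi/Kylix Pascal); all function and variable names are invented here for clarity.

```python
def exchange_energy(lam, area, d_temp, dx, dt):
    """Heat conducted between two cells (Eq. (8)): dE = lam*A*dT*dt/dx."""
    return lam * area * d_temp * dt / dx

def cell_temperature_step(T_self, T_neighbours, areas, lam, dx, dt,
                          p_cell, rho, c, V_cell):
    """One explicit update for a single cell: sum the exchanges with all
    adjacent cells plus the internal power loss, then convert the total
    energy change into a temperature change (Eq. (9b))."""
    dE_tot = p_cell * V_cell * dt          # energy from the local power loss
    for T_n, area in zip(T_neighbours, areas):
        dE_tot += exchange_energy(lam, area, T_n - T_self, dx, dt)
    return T_self + dE_tot / (rho * c * V_cell)
```

Iterating this update over all cells and q time steps, while clamping the heat-sink cells to ΔT = 0, reproduces the scheme described in the text.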
The temperature difference ΔT between the wire surface and a point at radial position R can then be calculated in a homogeneous medium from the equation for heat conduction. One test of the principal correctness of the self-coded simulation is thus possible by comparing the simulation results with this analytical solution. A simulation of a wire with power loss P* and with energy exchange only in radial direction should approach, after sufficient time, similar temperature differences between the wire surface and a point at distance (R - rwire), because of the assumed thermal equilibrium.

The implemented algorithm was controlled with different checks, besides the aforementioned comparison with an analytical model. First, the total energy uptake of the heat sink surface is added up during all iteration steps. This energy, summed with the energy stored inside the simulation volume, must equal the total applied energy. The energy inside the simulation volume is calculated independently from the finite volume algorithm by using the final temperature increase, the specific heat, the density and the volume of each cell. Second, the algorithm was tested as to whether it provides similar results for identical geometries with different spatial resolution, different time resolution, as well as a different size of the simulation volume surrounding the implant.

The cylindrical simulation volume is basically assumed as homogeneous tissue with the physical parameters (ρ, λ, c) from the Table, with length Lsim and radius Rsim. A cylindrical implant with sizes Limp and Rimp is placed at the center of this volume. The simulation is calculated for three different models.
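This check can itself be sketched compactly. The snippet below (an illustrative stand-alone Python version, with arbitrary demonstration parameters rather than the paper's values) runs a small radial finite-volume model of a heated endless wire and compares it with the standard line-source steady-state profile ΔT(r) = P*/(2πλ)·ln(R/r), i.e. the analytical form referred to in the text as Eq. (11).

```python
import math

lam = 1.0                       # thermal conductivity (arbitrary units)
rho_c = 1.0                     # density * specific heat
r_wire, dr, n = 0.5, 0.1, 25    # wire radius, radial resolution, cell count
faces = [r_wire + i * dr for i in range(n + 1)]                # shell walls
vol = [math.pi * (faces[i + 1] ** 2 - faces[i] ** 2) for i in range(n)]
p_star = 2.0 * math.pi * lam    # power per wire length; prefactor becomes 1

dt = 0.2 * dr * dr * rho_c / lam       # safely below the stability limit
T = [0.0] * n                          # temperature differences per shell
for _ in range(int(20.0 / dt)):
    dE = [0.0] * n
    dE[0] += p_star * dt               # wire power enters innermost shell
    for i in range(n - 1):             # radial conduction between shells
        q = lam * 2.0 * math.pi * faces[i + 1] * (T[i] - T[i + 1]) / dr * dt
        dE[i] -= q
        dE[i + 1] += q
    dE[n - 1] -= lam * 2.0 * math.pi * faces[n] * T[n - 1] / dr * dt  # sink
    for i in range(n):
        T[i] += dE[i] / (rho_c * vol[i])

r0 = r_wire + 0.5 * dr            # centre of the innermost shell
analytic = p_star / (2.0 * math.pi * lam) * math.log(faces[n] / r0)
print(T[0], analytic)             # the two values agree closely at equilibrium
```

With P* chosen as 2πλ, the analytical prefactor is 1 and the simulated profile relaxes onto the logarithmic curve, mirroring the validation strategy described above.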
For a given MRI sequence and a given resonator with known geometry it is possible to calculate the power loss density P* with Eq. (7). Using this power density P*, it is possible to calculate the temperature difference between the wire surface and a specific radial distance after reaching the thermal equilibrium, according to the model of Eq. (11). The results are summarized in the Figure. In radial direction, just a few cells are sufficient to describe the heat generating wire; the radial resolution Δr is chosen in such a way that n·Δr - Δr/2 is equal to rwire. This choice takes into consideration that the center of the last cell is the reference radius rwire for the theoretical description according to Eq. (11). For the comparison with theory, the temperature increase in a simulation cell is assigned to a radial position halfway between the walls of the cell. During the temporal development the simulated radial temperature differences increasingly approach the analytical solution for the thermal equilibrium.

For sufficiently short time steps Δt, the simulations show the expected behavior that the temperature increases monotonically in each simulation cell. For overly large time steps the energy flow out of a cell during a time step can become larger than the energy loss inside it. This leads to a physically incorrect situation with decreasing temperatures inside a cell. In this case the simulation produces huge deviations, and the temperature differences oscillate in an unpredictable fashion during the temporal development. Increasing the simulation volume without changing the spatial and temporal resolution or the size of the implant generates only slight changes. An improved spatial resolution without changes of the heat generating volume also modifies the results very little.
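The time-step limit just described is the usual stability bound of explicit schemes. It can be seen on a single cell coupled to two fixed-temperature neighbours, with all material constants set to 1 for illustration: the update becomes T ← T·(1 − 2Δt), which decays monotonically for Δt < 0.5, oscillates for larger Δt, and diverges for Δt > 1.

```python
def relax(T, dt, steps):
    """Explicit update of one cell exchanging heat with two neighbours
    held at temperature 0 (all material constants set to 1)."""
    for _ in range(steps):
        T += dt * 2.0 * (0.0 - T)
    return T

print(relax(1.0, 0.4, 50))   # stable: decays smoothly towards 0
print(relax(1.0, 1.5, 50))   # unstable: |T| grows without bound
```

This toy case reproduces exactly the "physically incorrect" oscillating behavior the text describes for overly large time steps.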
According to Eq. (7), the Q-normalized volumetric power loss density PV = P/(Q·VImp) can be calculated for resonators with volumes of up to 50 cm3 exposed to an MRI sequence with maximum SAR. The achievable quality factor is low in ionic surroundings, such as tissue, but increases with the insulation of the solenoid wire. For thin insulations, which are necessary to retain the mechanical properties of the vascular implants, the achievable quality factor is below 5. However, the simulation, as a worst-case scenario, partly uses quality factors far above this value in order to test the safety of the implants under these conditions as well. As examples, the heat loss of the three differently shaped prototype resonators described in the Table was verified experimentally (Figures).

Two rings are the worst case for resonator no 1 from the Table; for resonator no 2, six rings were used. The temperature peaks at the power generating rings become negligible for six or more rings after ten minutes of simulated time. Therefore it was reasonable to choose a cylinder shell as power generating source. The simulations with a homogeneous power-dissipating cylinder shell instead of rings are shown in the Figures.

The simulation of an endless wire uses the first of the three above-mentioned models. In particular, the radius Rimp is taken as equal to rwire. The condition that the wire is endless can be realized in two ways. The first is to choose a very long Limp and to evaluate the temperature increases only at positions on the center plane (index x = 1) and at radial distances r small compared to Limp. The second is to modify the simulation software in such a way that the energy transfer in axial direction is excluded. This can be done by considering only volume elements of the center plane (index x = 1) and setting the heat sink shell temperature (index x = 2) after each iteration not to zero, but to the same value as at index x = 1 for all r. The second way has been realized.
The common and expected result was that during the temporal development the simulated radial temperature differences increasingly approach the analytical solution for the thermal equilibrium (Eq. (11)). One comparison between a simulated and the analytical result is shown in the Figure. A correct simulation of an endless wire should deliver results similar to the analytical curve of Eq. (11) when using an implementation of the algorithm with no energy exchange in longitudinal (x) direction. The physical model of such a linear wire implies that the temperature difference between the wire and a cell at a certain distance increases monotonically with that distance. The theoretically calculated curve is valid after reaching the thermal equilibrium. With increasing simulated time the calculated curves show an increasingly close coincidence with the prediction of Eq. (11), indicating a correct implementation. On this basis it can be assessed whether a given power loss P* is safe for active magnetic implants; within reasonable limits, the discretization does not affect the results very much as long as the temperature maps appear smooth.

Achievable quality factors were derived from the construction of experimental solenoid resonator prototypes (Table); the quality factor links the resonator geometry with the total power loss (Eq. (7)). A quality factor Q of 5 was chosen for resonator no 1, and therefore the maximum power loss. For resonator no 2 (Q = 12.5) and no 3 (Q = 80) the chosen values are far beyond what is reachable inside an ionic surrounding. These high values for resonators 2 and 3 were chosen to strengthen the safety statement for smaller resonators, using quality factor values well above the achievable ones. Even these unrealistically high quality factors do not lead to physiologically critical temperature increases.
For resonator number 1 (with a large volume) a realistic value was chosen, because in this case the temperature increase reaches physiologically dangerous values. Realistic values for the quality factor of a resonator placed in ionic surroundings such as tissue are below 5 for all example resonators of the Table.

Part of the simulation results could be confirmed on stents implanted inside the aorta of rabbits [9,10]. Small resonators with quality factors of 3 to 4 were used. After excision and histopathologic examination, the tissue did not show any indications of heating after several MRI investigations. This is consistent with the simulation results, because even for small resonators with a much higher quality factor no dangerous heating was calculated.

Ten minutes of simulated time is 5 minutes less than the critical time according to the FDA regulations for imaging of the trunk, assuming a SAR of 4 W/kg. However, all simulations show only small changes after 10 minutes, so it is unnecessary to increase the simulation time further. Nevertheless, one example of 15 minutes of simulated time is given in the movie.

For all three example resonators of the Table, the first calculation uses Lsim and Rsim; the second calculation uses doubled values of Lsim and Rsim without changing the spatial resolution or the other simulation parameters. This approach shifts the heat sink to a larger distance from the heat generating cell elements. As expected, the heat sink layer drags down the temperature increase in its vicinity, but the effect is small if the volume is adequately chosen. The weak influence of the distance between power generating cells and heat sink (the outermost layer) is also shown in the movie; the differences for identical cells between both simulations are shown by alternating both views a few times at the end of the movie. Resonator 1 is assumed as a solenoid with 2 turns (Figure).
For 6 or 12 rings and after 600 s of calculated simulation time, the peak values near the location of the power generating rings inside the tissue become negligible. Therefore a model using a power generating cylinder shell is appropriate for resonators with a reasonable number of turns. For resonators no 2 and no 3, the cylinder shell simulation did not show any unsafe heating and is similar to the ring calculations; neglecting the peaks of the ring simulation, the maximum temperature increase is nearly identical. Only the simulation for resonator no 1 with two rings reaches a critical temperature increase above 5 K; all others stay below this value. For a large number of rings or, stated more precisely, a high wire density at the cylinder surface, it is possible to use the cylinder shell model. The reduced temperature increase for resonator no 1 with various sizes of the cooling blood flow is shown in the Figure.

With respect to the overall power absorption of more than 100 W inside the human body during MRI investigations at maximum SAR, one additional Watt inside the entire body is negligible.

This investigation assumes a "worst-case scenario" in several ways. Firstly, the resonator is assumed to be perfectly aligned within the plane of the excitation field B1. Secondly, to some extent no blood flow inside the inductor (vessel) of the implant is included. Thirdly, no blood perfusion inside the tissue around the resonator is taken into account. Lastly, too large quality factors are used. For most "normal" cases, intact active implants will therefore be less critical, but it is not possible to exclude the worst case conditions.
Pathologies may alter blood flow and perfusion in the area directly adjacent to the current paths of the resonator, and the resonator can certainly be perfectly aligned with the plane of the exciting rf-field B1, which may be dangerous if the tissue around the wires of the implant is not exposed to blood flow or sufficient blood perfusion. To reduce the risk for such active implants, the quality factor has to be low. However, this also reduces the amplification of the MR signal that the resonator is designed to provide, to the point where the resonator could be dispensed with altogether. Especially for resonators with a large volume (such as resonator no 1) and with a small number of rings it is possible to reach critical temperature increases above 5 K (Figures).

As the above simulations assume properly working resonance circuits without any failures in the electric paths of the system, one worst case scenario was not covered. Defects such as ruptures or partial ruptures may generate a relatively high resistance over very short distances. The current flow through this resistance can produce a large power loss inside an extremely small volume. This can generate a very high power density which, even for small implants, may induce physiologically critical temperature increases within a small volume. In fact the analysis of these "hot spots" is important, because ruptures of stent struts are likely, and a high power density can occur also for smaller implants. An additional investigation estimating the maximum possible power loss inside such "hot spots" and the resulting temperature maps around them is necessary to check the safety of active implants under these circumstances.
This is being prepared for future publication.

The study protocols for the cited animal experiments were approved by the responsible authority.

MB drafted the investigation, wrote the manuscript, coded the simulation, and contributed to the theory and part of the cited experimental work. WV drafted the theoretical part of the manuscript and reviewed the manuscript; part of the cited experimental tests for power loss, quality factor measurements and prototype construction are also from WV. JS was responsible for the cited animal experiments and provided the in vivo images of prototypes in an animal model; he also contributed to the medical background and reviewed the manuscript. DG enabled and reviewed the manuscript, contributed to the medical background, and was helpful during the investigation with discussions.

Movie with an example of a time-developing temperature map: this movie (animated GIF) shows the time development over a period of 900 s, which is the maximum permitted time for imaging of the trunk with an SAR of 4 W/kg (manufacturer declaration, sequence of the Table).

Computer models of the electrical and mechanical actions of the heart, solved on geometrically realistic domains, are becoming an increasingly useful scientific tool. Construction of these models requires detailed measurement of the microstructural features which impact on the function of the heart. Currently a few generic cardiac models are in use for a wide range of simulation problems, and contributions to publicly accessible databases of cardiac structures, on which models can be solved, remain rare.
This paper presents the largest database to date of porcine left ventricular segment microstructural architecture, for use in both electrical and mechanical simulation.

Cryosectioning techniques were used to reconstruct the myofibre and myosheet orientations in tissue blocks of size ~15 × 15 × 15 mm, taken from the mid-anterior left ventricular freewall of seven hearts. Tissue sections were gathered on orthogonal planes, and the angles of intersection of myofibres and myosheets with these planes determined automatically with a gradient intensity based algorithm. These angles were then combined to provide a description of myofibre and myosheet variation throughout the tissue, in a form able to be input to biophysically based computational models of the heart.

Several microstructural features were common across all hearts. Myofibres rotated through 141 ± 18° (mean ± SD) from epicardium to endocardium, in near linear fashion. In the outer two-thirds of the wall sheet angles were predominantly negative; however, in the inner one-third an abrupt change in sheet angle, with reversal in sign, was seen in six of the seven hearts. Two distinct populations of sheets with orthogonal orientations often co-existed, usually with one population dominating. The utility of the tissue structures was demonstrated by simulating the passive and active electrical responses of two of the tissue blocks to current injection. Distinct patterns of electrical response were obtained in the two tissue blocks, illustrating the importance of testing model based predictions on a variety of tissue architectures.

This study significantly expands the set of geometries on which models of cardiac function can be solved. Computer modelling of the heart's electrical and mechanical action is a rapidly maturing field of study.
Whole-heart and tissue-segment modelling of the time-dependent spread of electrical activation, and of mechanical contraction, relies on accurate descriptions of tissue microstructure. Ventricular myocardium has been shown to have a complex laminar structure, in which myocytes are grouped by perimysial collagen into branching sheets approximately four cells thick [10-15]. This paper presents the development of a database of tissue structures suitable for such models. The model construction methods are based on those used by Costa et al. and LeGrice et al. A cardiac coordinate system (X1, X2, X3) was defined as aligned with the local circumferential (X1), longitudinal (X2), and radial (X3) axes of the LV. Measurements of marker electrode locations could be used to assess the degree of tissue shrinkage sustained during the fixation and freezing processes, and geometrical measurements made in the frozen tissue were later scaled in an isotropic fashion according to these measurements.

Tissue sections are taken from both tissue blocks, transferred onto glass slides, and photographed. Serial sections in the plane of the epicardium (X1-X2 plane) are collected every 500 μm from block a, and used to characterise the variation in myofibre angle throughout the ventricular wall. From block b, a single slice is taken in the base-apex (X2-X3) plane, followed by serial sections (collected every 200 μm) in the circumferential (X1-X3) plane.

In the figure, the base-apex (X2-X3) plane slice is shown at upper-left. To the right of this section are three representative circumferential (X1-X3) plane slices, registered to the base-apex slice at three X2 locations. Beneath are five epicardial (X1-X2) plane sections showing fibre angle orientations at different transmural depths in the tissue. Microstructural angles can be measured from sections in each of the three planes. Under the assumption that myofibres run in-plane to the epicardium (imbrication angle is zero), angles measured in the epicardial plane slices represent myofibre orientation.
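The "gradient intensity based" angle measurement mentioned above can be illustrated with a toy structure-tensor estimate (the published algorithm may differ in detail; the synthetic image, grid size and wavenumber below are arbitrary choices of this sketch). A striped test image at a known angle is generated, and the dominant orientation is recovered from the accumulated image gradients.

```python
import math

def stripe_image(n, theta_deg, k=0.5):
    """Synthetic 'tissue section': sinusoidal stripes with normal at theta_deg."""
    t = math.radians(theta_deg)
    return [[math.cos(k * (x * math.cos(t) + y * math.sin(t)))
             for x in range(n)] for y in range(n)]

def dominant_gradient_angle(img):
    """Principal orientation (degrees) of the image structure tensor."""
    jxx = jxy = jyy = 0.0
    n = len(img)
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            jxx += gx * gx
            jxy += gx * gy
            jyy += gy * gy
    return math.degrees(0.5 * math.atan2(2.0 * jxy, jxx - jyy))

angle = dominant_gradient_angle(stripe_image(40, 30.0))
```

For the striped test image the recovered orientation is the stripes' normal; a fibre axis would then lie perpendicular to it.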
Angles measured on the base-apex and circumferential plane slices represent the local angle of intersection of myolaminae with the slice plane. Microstructural angles are determined in each plane automatically from tissue section images using a gradient intensity algorithm [21]. Tissue sections from one processed myocardial tissue block (Ex07) are shown in the Figure. Fibre angles α were measured relative to the X1 axis, whilst β' and β" angles were both measured relative to the X3 axis. Angles were signed positive or negative in accordance with previously developed convention. The model allowed variation in α along the X3 direction, whilst β' and β" both varied in the X2 and X3 directions.

Myolamina orientation was usually difficult to discern from base-apex and circumferential sections immediately adjacent to the epi- and endocardium, where coupling of adjacent laminae is known to be tightest [22]. Complete sets of β' and β" angles are shown for the same tissue block (Ex07) in the Figure. A continuous description of fibre angle variation through the wall was generated by fitting 10 linear finite elements to the fibre angle data. The fitted fibre field at each X3 location could then be combined with measured β' or β" angles at the same transmural depth to derive sheet angles (β) at that depth, using previously published formulae.

The passive extracellular potential field generated by the current sink (cathode) is simulated first in the absence of any cellular response to the current. Subsequently, the actively propagated wavefront generated by the cathodal current is examined by simulation of the activation time (AT) field over the model volumes. A simple cubic model of the cardiac action potential is utilised for the active models. The conductivities of the models were set to gil = 0.263, git = 0.0263, gin = 0.008, gel = 0.263, get = 0.245, and gen = 0.1087. Membrane capacitance of the models was set to 0.01 μF/mm2, and membrane conductivity to 0.004 mS/mm2.
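Cubic action-potential kinetics of the kind used for the active models can be illustrated with a minimal one-variable ODE whose stable fixed points sit at the model's resting (−85 mV) and plateau (15 mV) potentials, with an unstable point at threshold (−80 mV). The rate constant and forward-Euler settings below are assumptions of this sketch, not values from the study.

```python
V_REST, V_TH, V_PLATEAU = -85.0, -80.0, 15.0
K = 2e-4   # rate constant, 1/(mV^2*ms); an illustrative choice

def integrate(v0, t_end=80.0, dt=0.01):
    """Forward-Euler integration of dV/dt = K*(V-Vrest)*(V-Vth)*(Vplateau-V)."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * K * (v - V_REST) * (v - V_TH) * (V_PLATEAU - v)
    return v

print(integrate(-81.0))  # sub-threshold: relaxes back towards -85 mV
print(integrate(-70.0))  # supra-threshold: fires and settles near 15 mV
```

The all-or-nothing behaviour of this cubic term is what allows a simple model to reproduce an actively propagated wavefront when coupled to the bidomain equations.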
The cubic action potential model had a resting potential of -85 mV, a threshold potential of -80 mV, and a plateau potential of 15 mV. Current of 0.03 mA is withdrawn from the extracellular space of two example model tissues, at a point located centrally in the model volume (~8.5 mm below the epicardium). Current of equal magnitude (but opposite sign) is uniformly distributed amongst all the tissue boundaries except for the epicardium, to match the experimental case of recordings taken from an open chest pig. The steady-state extracellular potential (Φe) field is computed first for the passive models.

The model geometries reconstructed from the remaining six (Ex01-06) heart segments, each from a different pig, are presented in the Figure. Measurements of electrode locations preceding and following the fixation and freezing processes revealed that the tissue blocks shrunk on average 10 ± 8% (mean ± SD) along the longitudinal (X2) axis. All seven tissue segment models are available on the web for research use.

The results of the passive and active bidomain simulations are shown in the Figure: extracellular potential (Φe) and activation time (AT) fields are shown on the central base-apex plane for both tissues, with anisotropy in the fields being aligned with the microstructural axes of each tissue model. Predominantly negative sheet angles in the vicinity of the stimulus site in Ex01 determine the bottom-left to top-right slant of the fields for this tissue. Conversely, Ex06 exhibits predominantly positive sheet angles in the same region, and the fields for this tissue are accordingly slanted in the opposite direction.

The last two decades have seen a dramatic increase in the use of computational models of the electrical and mechanical action of the heart on geometrically realistic model domains. One of the first studies to incorporate realistic anatomical features into a model environment solved electrical propagation on a three-dimensional network of cardiac fibres reconstructed from histological sections.
Later studies used diffusion tensor MRI (DTMRI) to reconstruct fibre orientations throughout the entire ventricular volume. In this study traditional cryosectioning techniques were chosen to allow reconstruction of a plane of tissue structure from blocks of porcine LV. The methods used require destructive tissue sectioning in three planes in order to build the tissue description of a single base-apex plane. In our case, the attraction of MRI in enabling full three-dimensional reconstruction of fibre and sheet angle fields was outweighed by several factors. Although the case for the ability of DTMRI to accurately reconstruct cardiac fibre fields is compelling [7,29], the reconstruction of the laminar sheet architecture by such methods is less well established.

The trade-off for using cryosectioning techniques is that, to allow for modelling over a three-dimensional block of tissue, an assumption that the tissue structure is constant over small distances in the third dimension must be applied. Support for such an assumption comes from the observation that myofibre angles generally vary little in the circumferential direction, over distances of 1–2 cm. However, the myosheet angle field tends to be more discontinuous than that of myofibres. The circumferential plane sections taken in this study give some indication of the variation in the sheet angle field in the circumferential direction, in the region of the sections where the absolute fibre angle is greater than 45°. In a qualitative sense, five of the seven hearts examined for this study displayed a reasonably constant sheet angle field in the X1 direction over 10 mm from the central base-apex plane. The other two hearts did contain significant variations in the field, and in one of these there was complete reversal of sheet angles within the |α|>45° range, over the distance of 10 mm.
This assumption is supported by observations made in our lab of negligible fibre imbrication in the rat LV freewall [The validity of equations 1 and 2 used in the model construction relies on the assumption that myofibres run in plane with the epicardium ,15. The gle Fig. is due tComparison between our tissue structures, and those determined in dog hearts at very similar location , can be Transmural fibre variation was also measured by Streeter and Bassett, in the left ventricles of six pig hearts . Whilst 1-X2) plane section of Figure Occasional pockets of fibres with variant angle were observed in our tissue sections also, an example of which is seen in an epicardial X1-X plane seIn summary, this study contributes the largest database of porcine LV segment microstructure that is available for use by the cardiac research community. The key limitations of this study are (1) the inability to measure changes in the laminar architecture in the circumferential axis of the tissue blocks, and (2) the reliance on the assumption of zero fibre imbrication angle. To address these limitations whilst preserving high spatial resolution of the laminar architecture, development of a new technique involving repetitive milling, etching, and staining, in a semi-automated fashion, through blocks of wax-embedded tissue, is underway in our laboratory, and promises to yield fully three-dimensional reconstructions of moderate sized tissue blocks in the future."} +{"text": "Radiofrequency ablation is an interventional technique that in recent years has come to be employed in very different medical fields, such as the elimination of cardiac arrhythmias or the destruction of tumors in different locations. In order to investigate and develop new techniques, and also to improve those currently employed, theoretical models and computer simulations are a powerful tool since they provide vital information on the electrical and thermal behavior of ablation rapidly and at low cost. 
In the future they could even help to plan individual treatment for each patient. This review analyzes the state of the art in theoretical modeling as applied to the study of radiofrequency ablation techniques. Firstly, it describes the most important issues involved in this methodology, including experimental validation. Secondly, it points out the present limitations, especially those related to the lack of an accurate characterization of the biological tissues. After analyzing the current and future benefits of this technique, it finally suggests future lines and trends of research in this area.

Radiofrequency (RF) techniques have been used to heat biological tissues for many years, and in recent years their use for new medical applications has expanded enormously. Radiofrequency ablation (RFA) is a (more or less invasive) interventional technique that has come to be employed in very different medical fields, such as the elimination of cardiac arrhythmias (using a catheter or intraoperatively) or the destruction of tumors in different locations. From a procedural point of view, RFA generally uses a pair of electrodes: an active electrode with a small surface area that is placed on the target zone, and a larger dispersive electrode to close the electrical circuit. On occasion, bipolar ablation is conducted with two active electrodes. In addition, using the same biophysical foundation described for RF ablation, other surgical fields use it to treat other pathologies, e.g. the correction of refractive errors in ophthalmology. In order to investigate and develop new techniques, and also to improve those currently employed, research can call upon clinical and experimental (ex vivo and/or in vivo) studies, phantoms and theoretical models.
The latter are a powerful tool in this type of investigation, since they provide vital information on the electrical and thermal behavior of ablation rapidly and at low cost, quantifying the effect of various extrinsic and intrinsic factors on the electrical current and temperature distributions. Consequently, they facilitate the assessment of the feasibility of new electrode geometries and new protocols for delivering electrical power. Despite the fact that several research groups are currently using computer modeling to investigate RF ablation procedures, to date no review articles have been published on this topic; a previous review by Strohbehn and Roemer dealt with computer simulations of hyperthermia treatments. To date, theoretical modeling applied to the study of RF heating techniques has mainly focused on relatively new therapies, such as cardiac and cancer ablation. More recently, other researchers have developed models for RF cardiac ablation; the first was proposed by Haines and Watson. During the last 10 years, another two groups became interested in theoretical RF ablation modeling, among them the Duke University group, which developed a series of such models.

This section deals with the main steps in the building and use of a theoretical model in studies on RF heating. These steps are basically: 1) observation and simplification of the physical situation, 2) arrangement of the mathematical equations which rule the thermal and electrical phenomena, 3) determination of the boundary conditions, both electrical and thermal, 4) obtaining the physical characteristics of the biological tissues and other materials included in the model, 5) choosing a numerical method in order to computationally or analytically achieve a solution, and 6) conducting the post-processing of the computed results.
Since most models are based on the Finite Element Method (FEM), the following steps have been tailored to this methodology. The physical settings in which RF ablation is performed are often very complex, and it is therefore absolutely necessary to begin by studying the problem in detail and then to carry out appropriate simplifications. These could include, for instance, looking for planes or axes of symmetry (see Fig.), which reduce the size of the computational problem.

The second step consists of setting the equations governing the physical phenomenon of electrical-thermal heating. All the models of RF heating are based on a time-domain analysis of a coupled electric-thermal problem. The spatial distribution of temperature in the tissues is obtained by solving the so-called Bio-heat equation:

ρ·c·∂T/∂t = ∇·(k∇T) + q − Qp + Qm     (1)

where ρ is the mass density (kg/m3), c is the specific heat (J/kg·K), k is the thermal conductivity (W/m·K), T is the temperature (°C), q is the heat source (W/m3), Qp is the perfusion heat loss (W/m3), and Qm is the metabolic heat generation (W/m3). The last term is always ignored, since it has been shown to be insignificant for ablation. Likewise, some models omit Qp, as it is negligible in some cases of RF heating, such as non-vascular tissues [48,49]. On the other hand, Qp is always considered in cases of tissues with a high degree of perfusion, such as liver [40,54,55], and it is incorporated in some models [27,28,44,52,62,63]. This term is usually modeled as

Qp = ωb·cb·(T − Tb)     (2)

where ωb is the blood perfusion per unit volume (kg/m3·s), cb is the specific heat of blood (J/kg·K), and Tb is the blood temperature (°C). In general, ωb has been assumed to be uniform throughout the tissue.
However, in a few studies its value was increased with heating time because of vasodilation and capillary recruitment, or annulled once the tissue coagulated. At the frequencies employed in RF ablation (300 kHz \u2013 1 MHz) and within the area of interest, the tissues can be considered purely resistive, because the displacement currents are negligible. For this reason, a quasi-static approach is usually employed to resolve the electrical problem [66]. The heat source q (the Joule loss) is then given by: q = J\u00b7E \u00a0\u00a0\u00a0 (3) where J is the current density (A/m2) and E is the electric field intensity (V/m). The values of these two vectors are evaluated by solving Laplace's equation: \u2207\u00b7(\u03c3\u2207V) = 0 \u00a0\u00a0\u00a0 (4) where V is the voltage (V) and \u03c3 is the electrical conductivity (S/m). By using the quasi-static approach, the values of \"direct-current\" (DC) voltage calculated from the model correspond with the root mean squared (r.m.s.) value of the RF voltage actually employed. Equations (1)-(4) give the solution of an electrical-thermal coupled problem which generally represents the RF ablation of biological tissues adequately. However, some models have incorporated additional terms into the Bio-heat equation or have employed extra equations which describe other physical phenomena. For instance, in order to improve the prediction of the temperature in the circulating blood during RF cardiac ablation, the Mass Equation and Momentum Equation have been employed to solve a thermal-flow coupled problem [63]. Once the thermal and electrical equations have been stated, it is necessary to set the boundary conditions, both thermal and electrical. RF ablation is typically performed using a constant voltage. 
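A minimal numerical sketch of equations (3) and (4): a Jacobi iteration solves Laplace's equation for a small square "electrode" held at a fixed DC (r.m.s.-equivalent) voltage inside a grounded box, and the Joule loss is then evaluated from the field. The geometry, conductivity and voltage are placeholder values, not taken from any published model.

```python
import numpy as np

sigma = 0.5            # electrical conductivity, S/m (placeholder)
V0 = 25.0              # r.m.s.-equivalent electrode voltage, V (placeholder)
n, h = 61, 1e-3        # grid size and spacing (m)

V = np.zeros((n, n))
electrode = np.zeros((n, n), dtype=bool)
electrode[28:33, 28:33] = True          # small square active "electrode"

# Jacobi iteration for Laplace's equation div(sigma grad V) = 0; with
# uniform sigma, each node relaxes to the mean of its four neighbours.
for _ in range(4000):
    V = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
                np.roll(V, 1, 1) + np.roll(V, -1, 1))
    V[electrode] = V0                   # Dirichlet: active electrode voltage
    V[0, :] = V[-1, :] = V[:, 0] = V[:, -1] = 0.0   # grounded dispersive boundary

# Joule heat source of Eq. (3): q = J.E = sigma * |grad V|^2
Ey, Ex = np.gradient(-V, h)
q = sigma * (Ex**2 + Ey**2)
```

As expected, the computed heating concentrates at the edge of the small electrode, the numerical counterpart of the high current density around a catheter tip.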
In this case, the electrical boundary conditions can be of two types: null current (Neumann boundary condition) at the symmetry axes [49,52,68], and fixed voltage (Dirichlet boundary condition) at the electrodes. Thermal conditions can likewise be of two types: 1) null thermal flux (Neumann boundary condition), for instance at the symmetry axis and plane [52]; and 2) constant temperature (Dirichlet boundary condition) at the points of the model distant from the electrode. In addition, a value for the initial temperature has to be considered for transient thermal analyses. This value is frequently equal to that chosen in the experiments with which the computer simulations will later be compared. Almost all the studies modeling clinical RF ablations considered normothermic values of 37\u00b0C [71,72]. In order to build the complete theoretical model, the values of four physical characteristics have to be set for all the materials of the model: mass density (\u03c1), specific heat (c), thermal conductivity (k), and electrical conductivity (\u03c3). All these values are usually taken from the scientific literature, measured at the appropriate frequency (in the case of \u03c3) and at the appropriate temperature. If no previous data are available for a certain tissue, it is possible to consider the characteristics of a histologically comparable tissue. An important issue that has received little attention to date is the relationship between tissue characteristics and temperature. Although some RF ablation models did not consider any relationship [35,44,60], others have assumed a linear increase of \u03c3 with temperature of +2%/\u00b0C [55,59,62,80]. Some researchers have measured the changes in \u03c3 during heating and have found that this phenomenon follows an Arrhenius model, which allows the modeling of irreversible changes in \u03c3. Likewise, the parameter k has been traditionally considered constant; only a few models incorporated a linear relation between k and temperature [51,53,62]. In order to mimic the onset of vaporization, some studies have decreased \u03c3 by a factor of 10000 between 100 and 102\u00b0C. However, this approximation does not take into account the irreversible behavior of \u03c3, and hence the results do not match the real situation, which is without doubt much more complex. 
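The temperature dependences quoted above can be combined into a simple piecewise conductivity model: a linear +2%/degC rise up to 100 degC, followed by the factor-of-10000 drop between 100 and 102 degC. This is only a sketch of those two published approximations; the baseline conductivity is a placeholder, and the irreversibility of the changes is deliberately not modeled.

```python
def sigma_of_T(T, sigma37=0.5):
    """Piecewise electrical conductivity (S/m) versus temperature (degC):
    +2%/degC linear rise from 37 degC, then a 4-decade drop between
    100 and 102 degC to mimic desiccation/vaporization. The baseline
    sigma37 = 0.5 S/m is a placeholder value."""
    if T <= 100.0:
        return sigma37 * (1.0 + 0.02 * (T - 37.0))
    s100 = sigma37 * (1.0 + 0.02 * (100.0 - 37.0))   # value reached at 100 degC
    if T >= 102.0:
        return s100 / 10000.0
    # log-linear interpolation of the factor-10000 drop across 100-102 degC
    return s100 * 10.0 ** (-4.0 * (T - 100.0) / 2.0)
```

A fuller treatment would also make the drop irreversible, i.e. keep sigma at its collapsed value once 100 degC has been exceeded, which is exactly the behavior the text notes is missing from this approximation.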
On the other hand, many RF ablation procedures involve temperatures of nearly 100\u00b0C. At values of this order, it is known that non-linear phenomena occur, such as desiccation and vaporization (bubble formation). Since these phenomena are difficult to model, some studies decided to end the computer simulation when the maximal temperature in the tissue reached 100\u00b0C [41,43,52,60,69]. More recently, some interesting attempts have been made to quantify the relationship between temperature and specific heat. To obtain the solution of the equations governing the physical phenomena during RF ablation it is necessary to choose a calculation method. Sometimes the geometry of the model is simple enough that these equations can be solved by analytic methods [22]; in all other cases a numerical method is required. Concerning the use of the FEM, although some groups have developed their own software [24], most have employed commercial packages. Some groups have recently employed FEMLAB (COMSOL in the present version) in their modeling studies [55,59]. Most FEM programs have numerous advantages for building, solving and post-processing models; however, three key issues have to be taken into account in order to obtain accurate solutions. Two of them are related to the discretization processes carried out during FEM: 1) spatial discretization of the model region by creating a mesh (see Fig.), and 2) temporal discretization by choosing a time step; the third is the choice of the outer dimensions of the model region. Regarding the outer dimensions of the model, the correct choice is a compromise between a model that is large enough to yield a valid solution and small enough to require reasonable computing time and memory; a sensitivity analysis is usually conducted for this purpose. Likewise, the optimum mesh size and time step are determined by a similar procedure called a \"convergence test\", in which the discretization is progressively refined until further refinement produces no significant change in the solution. The determination of the optimum values of mesh size, time step and outer dimensions is actually a combined process, since any sensitivity or convergence test for determining one parameter is implicitly employing the values used for the others. 
For this reason, it seems appropriate to conduct a more or less iterative process: for instance, to consider initially a tentative spatial and temporal resolution, then to conduct a computer analysis to determine the appropriate values of the outer dimensions and, finally, once these values have been obtained, to perform convergence tests to determine adequate spatial and temporal discretization. The simplest RF ablation model is an electrical-thermal coupled problem. Therefore, the output variables are always electrical (voltage and current density) and thermal (temperature and heat flux). In some analyses only electrical variables, such as current density, were considered [39,55,71]. Regarding thermal variables, temperature distribution is the most plotted result, due to its apparent association with thermal injury [69,71,72]. Some modeling studies used an isothermal line to assess the tissue lesion boundary from the temperature distribution, although different values have been used for this boundary, such as 48\u00b0C [62]. Nevertheless, since it is known that the biological damage is a function of both temperature and time, several authors have partially quantified it. Despite the fact that tissue damage can be associated with many different reactions, each with its own rate coefficient, it may be approximated as a single process governed by an Arrhenius relation: \u03a9(t) = \u222b A\u00b7exp(-\u0394E/(R\u00b7T)) dt \u00a0\u00a0\u00a0 (5) where T is the temperature (K) calculated at each point of the model region, R is the gas constant (8.314 J/mole\u00b7K), A (s-1) is the frequency factor (a measure of molecular collisions), and \u0394E (J/mole) is an activation energy barrier which tissue constituents must surmount to denature. Both A and \u0394E are kinetic coefficients evaluated for each tissue type from experimental data, using microscopic measurements (e.g. protein denaturation by means of scattering increase or birefringence loss) [94,98]. 
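The damage integral of Eq. (5) can be evaluated over any computed temperature history. The sketch below uses kinetic coefficients of an order of magnitude commonly quoted for soft tissue; they are placeholders rather than values endorsed by the text, and Omega = 1 is taken as the conventional lesion threshold.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_damage(temps_degC, dt, A, dE):
    """Damage integral Omega = sum of A*exp(-dE/(R*T))*dt over a sampled
    temperature history; T enters the exponential in kelvin."""
    return sum(A * math.exp(-dE / (R * (T + 273.15))) * dt for T in temps_degC)

# Placeholder kinetic coefficients (order of magnitude only, hypothetical tissue)
A, dE = 7.39e39, 2.577e5
low  = arrhenius_damage([45.0] * 60, 1.0, A, dE)   # 60 s held at 45 degC
high = arrhenius_damage([60.0] * 60, 1.0, A, dE)   # 60 s held at 60 degC
print(low < 1.0 < high)  # -> True
```

The strong exponential dependence is the point: 60 s at 45 degC stays well below the Omega = 1 threshold, while the same exposure at 60 degC exceeds it many times over, which is why damage depends on the full time-temperature history rather than on a single isotherm.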
So far, various theoretical models for RF ablation have employed this formulation to assess tissue damage, sometimes using skin data due to the lack of specific kinetic coefficients for the tissue of interest. Once the theoretical models have been built, and although they are based on equations which correspond to well characterized phenomena, some type of experimental validation should be conducted to guarantee the results obtained from computer simulations. Many modeling studies have included experimental work that focused on the validation of theoretical models [59,70-72]. Firstly, concerning the material employed, the experiments can be conducted by following either of two methodologies: 1) using real biological tissue, with temperature sensors placed with precision around the ablation zone [47,105], or 2) using tissue-equivalent phantoms. Since the use of small temperature sensors (thermocouples and thermistors) or thermographic imaging can have limitations in some cases, other experimental techniques have been proposed to obtain information on temperature distributions; for instance, Verdaasdonk and Borst introduced one such technique. Alternative methods of temperature measurement based on magnetic resonance imaging (MRI) have recently been employed for RFA of tumors in order to 1) interactively guide the RF electrode to the target, and 2) monitor the effect of therapy. When temperature measurement was not possible or appropriate, some studies compared the computed temperature distributions to the macroscopic and/or histological samples of the heated tissue. For example, the macroscopic assessment of cardiac tissue was based on the degree of discoloration in the lesion zone [71]. Finally, some studies have considered the basal value and/or the time evolution of electrical variables during the heating in order to compare the computed and experimental values. 
Since the total impedance between the active and dispersive electrodes decreases during heating, the evolution of this parameter has occasionally been employed to experimentally validate theoretical models [24,116]. Some RF hepatic ablation procedures involve the use of a simultaneous saline infusion in the tissue [117,118]. RF ablation procedures are conducted using an electrical generator very similar to those normally employed in electrosurgical practice. This generator can operate in different modes, such as pulsed versus continuous delivery. On the other hand, no theoretical models have been proposed that include the effect of the impedance output of the RF generator; that is, the electrical boundary conditions used at the active electrode have implicitly been considered as ideal electrical sources. This issue could be significant, since current RF generators present output impedances which could be similar to the value of the load impedance (tissue impedance). This means that during an actual RF heating, the resulting decrease in the tissue impedance could cause a mismatching of the two impedances, and hence significant errors in the computed results. Another interesting question is the modeling of the control algorithm employed in RF generators that use constant temperature; recently, Haemmerich and Webster have implemented such an algorithm. Even though great efforts have been made to obtain an accurate value for each of the characteristics of different biological tissues, it is important to take two issues into account. On one hand, the dispersion of the values of the biological characteristics can become very important, due to the variability between individual values, and the changing environmental and physiological conditions. 
Some modeling studies have assessed the impact of these changes on temperature distributions considering increments and/or reductions of up to 100% [49,70,80]. On the other hand, to date, theoretical RF ablation models and their corresponding computer simulations have only been related to comparative thermal dosimetry. In conclusion, it does not seem either important or urgent to obtain the precise characteristics of each type of biological tissue. However, it is urgent and necessary to know the relationship between tissue characteristics and temperature, in order to accurately model certain RF heating techniques. In fact, as was stated at the end of the section \"Physical characteristics of biological tissues\", there is at present a considerable lack of understanding of the changes in the physical characteristics of biological tissues during intense heating, i.e. when temperature reaches \u2248100\u00b0C. In these conditions, it seems obvious that all the characteristics will experience sizeable, and probably irreversible, changes in value. It is therefore both urgent and important to conduct experimental studies to assess these behaviors. This is especially necessary in the modeling of RF ablation procedures in which very high temperatures are reached, such as hepatic RF ablation using saline irrigation [118]. The computer modeling of RF ablation offers several unquestionable advantages over the experimental approach. For this reason, it has become an essential tool to complement experimental studies on RF ablation techniques. Not only is it less expensive and faster than ex vivo and in vivo experiments, but it also allows the time evolution and spatial distribution of physical variables to be analyzed, values that are otherwise impossible to monitor due to the lack of suitable transducers. These advantages will provide inestimable help to the research and development processes of the manufacturers of RF ablation systems. 
This is an important advantage, but in addition, and as I have gathered from my experience of cooperation with surgeons, radiologists, and cardiologists, RF ablation models offer valuable assistance in explaining the biophysical phenomena involved in the RF heating of biological tissues. In other words, the models are excellent didactic tools that enable the users of RF ablation systems to become familiar with the equipment and procedures, and thus indirectly enhance the safety and efficacy of the therapies. A number of studies have recently proposed that theoretical modeling might be useful not only as a support in the design and understanding of the phenomenon, but also to provide guidance during the ablation procedure. For instance, various models have been developed for predicting lesion size during catheter cardiac ablation using previous information [33]. Finally, although all the foregoing is related to radiofrequency ablation, the methodology described is very similar to those employed to study other thermal techniques for destroying biological tissues. In fact, numerous computer modeling studies have also been published on techniques such as laser-induced interstitial thermotherapy (LITT) [122] and other heating modalities. The future of theoretical RF ablation modeling appears to lie in: 1) Accurate modeling of the electrical and thermal characteristics of biological tissues, not only those that are temperature-dependent but also those that are time-dependent, i.e. quantifying the relations between the values of the characteristics and the thermal damage function [83]. In addition, these relations present irreversible effects above a certain thermal level (\u224870\u201380\u00b0C), or over a specific value of a thermal damage function. For instance, the electrical conductivity (\u03c3) of a biological tissue can be modeled considering a thermal level of \u224870\u00b0C as the threshold of irreversible behavior, a behavior that has been experimentally assessed. 
However, it is known that a tissue temperature of \u224890\u00b0C is associated with a high degree of tissue desiccation, and thus with a significant increase in electrical impedance (i.e. a drop in \u03c3). Thus, it seems reasonable to consider a second thermal threshold (probably around 90\u2013100\u00b0C) which involves a more or less abrupt drop in \u03c3. All these relations should be studied in future experimental work. 2) Improving the prediction of lesion size, taking into account the evolution of the tissue impedance during heating [37,139]. 3) Determining the parameters (frequency factor and activation energy) of the thermal damage function for different types of tissues. For this purpose, it is possible to use the classical methods, or techniques based on microscopy. 4) Conducting research on new histological markers of thermal injury to allow consistent experimental validation using ex vivo and in vivo samples. These markers would allow the histological changes corresponding to different isothermal lines to be compared [141,142]. 5) Development of fast computer simulation of ablation models to predict tissue temperature and hence to provide simultaneous guidance during a procedure [143]. 6) Finally, it is especially important to obtain a more accurate model of the behavior of the tissue during the simultaneous application of RF energy and saline perfusion. This is a truly complex phenomenon, and to date only a one-dimensional model has been developed. On the other hand, it seems that certain other lines of research are of low priority, due to their currently high cost in human resources and computational power, as well as to an apparent lack of utility, as is the case, for instance, of large-scale modeling including an entire human torso to study RF cardiac and hepatic ablation. 
Another questionable issue would be the inclusion of an extremely realistic geometry in the models, since the key question of any model is its simplicity, and only the genuinely significant aspects should be included. Radiofrequency ablation (RFA) is a surgical technique that in recent years has come to be employed in very diverse medical fields. In order to study, investigate and develop new techniques and to improve those currently employed, research can make use of clinical and experimental studies, phantoms, and theoretical models. The latter are a powerful tool in this kind of investigation, since they rapidly and economically provide an understanding of the electrical and thermal behavior involved in ablation. In the last 10 years several groups have developed theoretical models for the study of RF ablation. In this review, the methodology of the modeling has been explained, including the experimental validation. At present, certain important limitations impede the complete and accurate development of the models, especially under conditions of high temperature (\u2248100\u00b0C) or simultaneous saline perfusion. In spite of this, modeling has grown to such an extent that it has become an essential tool in assisting experimental studies on RF ablation techniques."}
+{"text": "In radiofrequency (RF) ablation, the heating of cardiac tissue is mainly resistive. RF current heats the cardiac tissue, and the catheter electrode is in turn heated by the tissue. Consequently, the catheter tip temperature is always lower than, or ideally equal to, the superficial tissue temperature. The lesion size is influenced by many parameters, such as delivered RF power, electrode length, electrode orientation, blood flow and tissue contact. This review describes the influence of these different parameters on lesion formation and provides recommendations for different catheter types on selectable parameters such as target temperatures, power limits and RF durations. 
RF current is applied to the tissue via a metal electrode at the tip of the catheter, with a large skin electrode serving as the indifferent electrode. The current density patterns in the tissue are determined by electrode size and geometry, electrode contact and local tissue properties. Also, of course, the current density will be proportional to the current (I) delivered by the RF generator, which, for constant resistance (R) of the electrode-tissue volume conductor, is proportional to the square root of the RF power. The resulting local temperature rise increases with the local current density and decreases with the heat capacity of the local medium. In addition, when temperature differences between adjacent areas develop because of differences in local current density or local heat capacity, heat will conduct from \"hotter\" to \"colder\" areas, causing the temperature of the former to decrease and that of the latter to increase. Additionally, heat loss to the blood pool at the surface and to intramyocardial vessels determines the temperature profile within the tissue. The heating occurs especially in the proximity of the active electrode, due to its relatively small surface area causing a locally high current density as compared to the site of the indifferent electrode. Typically, living tissue will be permanently destroyed at temperatures of approximately 45\u00b0 to 50\u00b0 C sustained for several seconds. The tissue surface is cooled by the blood flow, and thus the highest temperature during radiofrequency delivery occurs slightly below the surface. The impact of the target temperature on lesion size was evaluated by the author in an in vitro study. A 4-mm tip catheter was positioned in a parallel orientation to porcine epicardium with either 0.5 N or 1.0 N contact force. A total of 48 lesions was produced with different target temperatures of 50, 60, 70 and 80\u00b0 C. Each setting was repeated 6 times and the average values were used for evaluation. 
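The inverse-square geometry behind this concentration of heating can be sketched with an idealized point-like spherical electrode, a textbook simplification rather than the real catheter geometry; all numbers below are placeholders.

```python
import math

P, R = 30.0, 100.0            # RF power (W) and circuit resistance (ohm); placeholder values
I = math.sqrt(P / R)          # r.m.s. current, since P = I^2 * R
sigma = 0.5                   # tissue electrical conductivity, S/m (placeholder)

def joule_heating(r):
    """Volumetric heating q = J^2 / sigma at distance r from an idealized
    spherical electrode, where the current density is J = I / (4*pi*r^2)."""
    J = I / (4.0 * math.pi * r**2)
    return J**2 / sigma

# Heating falls off as 1/r^4: doubling the distance cuts it 16-fold,
# which is why only a thin shell of tissue around the tip is heated directly.
ratio = joule_heating(1e-3) / joule_heating(2e-3)
print(round(ratio))  # -> 16
```

Deeper tissue layers are therefore heated mostly by conduction from this shell, consistent with the temperature maximum sitting just below the cooled surface.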
The results are given in the accompanying table. With increasing target temperature, the delivered power, tip temperature and lesion dimensions increased. Lesions created with a target temperature of 50\u00b0 C were very small and nearly unrecognizable. These results indicate that the lesion size could be well predicted by measuring tip temperature. However, the in vitro experiments were performed under stable flow conditions. In the in vivo situation, the flow depends on the ablation site and also varies over the heart cycle. These cooling effects have a strong impact on the catheter tip temperature and thus on the delivered power and also on lesion size. Petersen et al. evaluated the impact of convective cooling on lesion dimension in vitro. With increasing flow, i.e. increasing convective cooling, lesion depth, width and volume increased, due to the increasing power consumption needed to reach and maintain the target temperature. Note that the tip temperature was not different and is thus a poor indicator for lesion size if the flow conditions are not stable. Petersen et al. induced a total of 13 lesions in 6 pigs, either at the left ventricular apex or at the mid-septum of the left ventricle. Only part of the delivered energy contributes to lesion formation; the major part is dissipated as electrical heating of the intracavitary blood, convective heat losses from electrode to blood, electrical heating of tissue outside the lesion volume, and in the electrical resistance of the catheter and of the skin and fat layers at the indifferent electrode. Changing the 4-mm tip electrode orientation from parallel to perpendicular decreases the proportion of the electrode tip area that is in contact with the tissue and increases the proportion of the tip area that is exposed to the convective cooling of the surrounding fluid. The larger lesion volume in the perpendicular electrode orientation suggests that cooling by flow around the electrode has a greater impact than the contact area. However, one would expect a significantly higher power delivery in the group with the larger lesion volume. 
This can be explained by the fact that only a minor fraction of the energy delivered by the generator is used for the lesion production itself. Chan et al. published results based on in vivo data that are in conflict with those described by Petersen. In 26 dogs, 144 lesions were created either in a parallel or a perpendicular orientation in the right atrium, with a target temperature of 75\u00b0 C for 60 s, using different tip lengths. The orientation was confirmed by fluoroscopy and intravascular ultrasound. For reasons of comparison, only the results for the 4-mm tip catheter are given in the accompanying table. The lesion volume was larger for the parallel orientation as compared to the perpendicular orientation, which is in contradiction with the results from the in vitro study performed by Petersen et al. However, all lesions were markedly smaller than in the previously summarized studies, mainly due to the small lesion depth. The lesions were produced in the right atrium, and the atrial wall is rather thin. The lesions would have been deeper in thicker tissue; note that 10 out of 14 lesions for the perpendicular orientation and 9 out of 14 for the parallel orientation were transmural. Chugh et al., who performed similar experiments in the left ventricle, confirmed this. The lesion depth was markedly higher than that of the atrial applications and similar to the values that Petersen et al. reported. Although there was a trend for the lesions produced in the parallel orientation to be larger, the difference did not reach statistical significance. Based on these studies one may conclude that lesion depth is only little affected by catheter tip orientation using 4-mm long tip catheters, but that lesions are slightly longer in the parallel orientation as compared to the perpendicular orientation. Petersen et al. evaluated the impact of electrode tip length on lesion size in 34 pigs. The lesion volume increased with increasing tip length for tip lengths between 2 and 10 mm. 
The lesion volume produced with an 8-mm long tip was about twice as big as that with a 10-mm tip catheter, and even 3 times as big as that produced with a 4-mm tip catheter. Further increase in tip length did not result in a further increase in lesion volume: with a very long catheter tip, a large part is exposed to the blood flow and more energy is dissipated into the blood stream. Note that the average temperature decreased with increasing tip length and was thus a poor (or no) indicator of lesion volume. It is the amount of power that is effectively delivered to the tissue that determines lesion size. In addition, the depth did not differ between the 4 and 8-mm tip catheters; the lesions produced are only wider but not necessarily deeper. Also, the applied average power was \"only\" 49 W for lesions created with an 8-mm tip catheter. Limiting the wattage to 50 W may reduce the likelihood of coagulum formation without compromising lesion size. Langberg et al., who also produced lesions in the left ventricle using either 4-mm, 8-mm or 12-mm tip catheters, confirmed these results in part. The power required to achieve a steady state temperature of 80\u00b0 C was directly proportional to electrode size. The lesions produced by the 8-mm tip electrode were nearly twice as deep and four times as large as those made with a conventional 4-mm tip electrode. Lesions produced by the 12-mm tip electrode were intermediate in size and sometimes associated with charring and crater formation. Langberg et al. stated furthermore that ablations with larger tip electrodes caused a drop in arterial pressure and more ventricular ectopy than those with a 4-mm tip electrode. The main difference from the results published by Petersen et al. is that, in the Langberg study, lesions produced with the 8-mm tip catheter were also much deeper than those produced with the 4-mm tip catheter. 
In concordance with the Petersen study, the use of a very large electrode did not further increase lesion size, and the tip temperature was even negatively correlated with lesion size. The clinical relevance of the fact that 8-mm tip catheters produce larger lesions was demonstrated by Tsai et al. Simmers et al. evaluated the relation between RF duration and lesion size and published their results in 1994. Their results indicate that the lesion is predominantly generated within the first 10 seconds of energy delivery and reaches a maximum after 30 s. Further extension of RF delivery during power-controlled RF delivery does not seem to further increase lesion size. It is difficult to evaluate the impact of electrode-tissue contact in the in vivo situation, since the contact pressure between electrode and tissue cannot be assessed by fluoroscopy and a direct indicator for tissue contact is lacking at present. Some studies have been published in which the influence of the electrode-tissue contact has been investigated in a well controllable in vitro environment. However, the conclusions from these studies need to be drawn carefully to avoid misleading interpretations. The author produced in vitro ablations on porcine myocardium with a 7F, 4-mm tip electrode, with different electrode-tissue contact forces and a target temperature of 70\u00b0 C for 30 s (50 W max. output) [11]. At a certain moderate contact force, further increase in contact force results in progressively smaller lesions, because a smaller amount of RF power is required to reach the target temperature. Between 0.2 and 0.9 N the lesion depth is not much affected by the contact force, and the increasing contact force is balanced by the decreasing applied power. 
These experiments were performed under stable flow conditions, and in conclusion it seems likely that the flow around the electrode has a greater impact on lesion size than the electrode-tissue contact. It is the amount of RF power delivered effectively into the tissue that determines tissue heating and thus lesion generation. With increasing electrode-tissue contact, a higher amount of RF power can be effectively brought into the tissue, resulting in increasing lesion depth. Irrigation has been introduced to avoid overheating at the tissue-electrode interface, thus allowing the delivery of higher amounts of RF power for a longer duration to create relatively large lesions. Skrumeda et al. compared lesions created with a standard 4-mm tip catheter (RF Marinr) with those created with an irrigated tip catheter (RF Sprinklr) in animal experiments. With a standard electrode, lesions were larger with a target temperature of 90\u00b0 C as compared to those created with a target temperature of 70\u00b0 C; however, a coagulum was observed in 95% of applications with a target temperature of 90\u00b0 C. The largest irrigated lesions were formed using 50 W (986\u00b1357 mm3) but were associated with craters in 54% and coagulum in 27% of the applications, respectively. Large lesions without craters and coagulum were created with irrigation using 20 W for 10 minutes (602\u00b1175 mm3). Skrumeda et al. concluded that irrigated ablation created larger lesions than standard ablation and that large lesions may be created without craters using moderate power and long duration! Petersen et al. compared lesions produced by standard temperature controlled RF delivery (TC) with those produced by either power controlled RF delivery (PC) with a high irrigation flow rate (20 ml/min) or temperature controlled RF delivery with a low irrigation flow rate (1 ml/min). Petersen et al. 
demonstrated that lesion size and tissue temperatures were significantly higher during irrigated tip ablation compared to standard temperature controlled RF delivery (p<0.05). Lesion volume correlated positively with tissue temperature (r=0.87). The maximum recorded tissue temperature was always 1 mm from the ablation electrode. Crater formation only occurred at tissue temperatures greater than 100\u00b0 C. Based on the results of this in vitro study it may be concluded that irrigated temperature controlled RF delivery yields relatively large lesions without crater formation if a moderate target temperature between 60 and 70\u00b0 C and a low irrigation flow rate of 1 ml/min are chosen. A target temperature of greater than 70\u00b0 C may result in tissue overheating and crater formation. Weiss et al. investigated the influence of different flow rates on lesions produced on the thigh muscle in six sheep. The tissue temperatures at 7-mm depth, the lesion depth and the lesion width were not significantly different between the 3 different flow rates. The diameter measured at the surface was significantly smaller following RF applications with an irrigation flow rate of 20 ml/min, due to increased cooling at the surface, which also resulted in lower tissue temperatures at a depth of 3.5 mm. Neither audible pops nor thrombus formation were observed in any application. Based on these results, a flow rate of 10 ml/min may be recommended when operating an irrigated catheter in the power controlled mode with a target power of about 30 W. The application of more than 30 W may require a higher flow rate to avoid excessive heat development in the superficial tissue layers. Jais et al. published the results of a prospective randomized comparison of irrigated tip versus conventional tip catheters for ablation of atrial flutter. Cavotricuspid isthmus ablation was performed with a conventional (n=26) or an irrigated tip catheter (n=24). 
RF was applied for 60 seconds in a temperature-controlled mode: 65°C to 70°C up to 70 W with a conventional catheter, or 50°C up to 50 W with the irrigated tip catheter. Complete bidirectional isthmus block was achieved for all patients. Four patients crossed over from conventional to irrigated tip catheters. The number of applications, procedure duration, and x-ray exposure were significantly higher with the conventional than with the irrigated tip catheter: 13±10 versus 5±3 pulses, 53±41 versus 27±16 minutes, and 18±14 versus 9±6 minutes, respectively. No significant side effects occurred, and the coronary angiograms of the first 30 patients after ablation were unchanged. Jais et al. concluded that irrigated tip catheters were more effective than, and as safe as, conventional catheters for flutter ablation, facilitating the rapid achievement of bidirectional isthmus block. Yamane et al. used an irrigated tip catheter for the ablation of accessory pathways resistant to conventional catheter ablation. Among 314 accessory pathways in 301 consecutive patients, conventional ablation failed to eliminate accessory pathway conduction in 18 accessory pathways in 18 patients (5.7%); 6 of these were located in the left free wall, 5 in the middle/posterior-septal space, and 7 inside the coronary sinus (CS) or its tributaries. Irrigated tip catheter ablation was subsequently performed in temperature control mode, with a moderate saline flow rate (17 ml/min) and a power limit of 50 W (outside the CS) or 20 to 30 W (inside the CS), at previously resistant sites. Seventeen of the 18 resistant accessory pathways (94%) were successfully ablated with a median of 3 applications using irrigated tip catheters. A significant increase in power delivery was achieved with irrigated tip catheters, irrespective of the accessory pathway location, particularly inside the CS or its tributaries. No serious complications occurred. Yamane et al.
concluded that irrigated tip catheter ablation is safe and effective in eliminating accessory pathway conduction resistant to conventional catheters, irrespective of the location. Nabar et al. used irrigated tip catheters for ablation of ventricular tachycardias that were resistant to conventional catheter ablation. Eight patients in whom the clinical target VT (cycle length 430±97 msec) could not be ablated using a conventional 4-mm tip RF ablation catheter underwent additional attempts to ablate this VT using an irrigated tip catheter. Ablation of the clinical target VT using an irrigated tip catheter was attempted from the left ventricle in 6 and from the right ventricle in 2 patients, guided by entrainment, activation, or pace mapping. A mean of 6±5 (range 2 to 15) pulses was delivered. Target VT ablation was successful in 5 patients (63%). After successful ablation, at a mean follow-up of 6.5±4 months and while taking antiarrhythmic drugs, all 5 patients were free of VT recurrences. Nabar et al. concluded that the clinical target VT could be ablated using an irrigated tip catheter in 5 (63%) of the 8 patients in whom ablation using a conventional RF catheter was unsuccessful. The target temperature for 4-mm tip catheters should be less than 80° C. Since tissue temperature can be markedly higher than tip temperature, a higher target temperature may increase the incidence of tissue overheating associated with crater and coagulum formation. Lesion size correlates poorly with tip temperature in the in vivo situation. In high flow areas the tip is cooled and more RF power is delivered to the tissue to reach the target temperature, resulting in relatively large lesions, and vice versa.
Consequently, in high flow areas in the heart the difference between tip temperature and tissue temperature is large and a lower target temperature should be considered, whereas in low flow areas the tissue temperature is much better reflected by the tip temperature and a higher target temperature could be considered. The duration could be limited to 30 seconds for non-irrigated 4-mm tip electrodes: the lesion is formed predominantly within the first 30 seconds, and a longer duration does not create larger lesions. A larger portion of an 8-mm tip catheter is exposed to the blood and thus cooled by the blood flow, so a relatively large difference between tip temperature and tissue temperature can be expected. Consequently, a moderate target temperature should be chosen and the RF power may be limited to 50-60 W to avoid tissue overheating and coagulum formation. An irrigation flow rate of 10 ml/min may be selected in a power-controlled mode with a delivered power of up to 30 W. The irrigation flow rate should be increased to 15-20 ml/min when more than 30 W are delivered, to avoid excessive heat development at the superficial tissue layers. The RF duration in power-controlled mode with irrigated tip catheters may need to be longer than 30 s. Instead of increasing the power to achieve the desired effect (which increases the likelihood of crater formation), the duration could be increased: Skrumeda demonstrated lesions of similar size with 20 W for 300 s as with 50 W for 30 s. Consequently, a moderate power of 20-35 W with a relatively long RF duration of 60-300 seconds should be considered to achieve relatively large lesions with a limited risk of crater formation."} +{"text": "Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system.
The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may thus allow diagnostic imaging of the implant lumen (for example for in-stent stenosis or thrombosis). The electromagnetic RF pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor, and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution for a hot spot in thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst case assumptions, additional, more realistic conditions are considered in a numerical simulation, which may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into each of the two wire parts at the location of the defect.
The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is taken into account in some simulations, because the simultaneous appearance of all worst case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius and wire material, as well as the physiological parameters blood perfusion and blood flow inside larger vessels, reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The analytical solution as worst case scenario, as well as the finite volume analysis for near worst case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting below a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q Vind > 10 cm3).
Such resonators exclude patients from exactly the MRI investigation these devices are made for. The worst case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for near worst case and less critical conditions. For both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Metallic implants often cause distortions inside magnetic resonance images. These effects arise either from the different susceptibility of tissue and metal, disturbing the gradient for spatial encoding, or from induced eddy currents on the metallic implant structure forming a Faraday cage. This technology has the great advantage of amplifying the signal only where it is needed, i.e. inside the Faraday cage. The signal or contrast behavior of the rest of the image plane (volume) is unaffected by these devices. Up to now, active MRI implants have not been tested in clinical trials, but active MRI stents have been investigated in rabbits [10]. A metallic structure can distribute the heat of a hot spot more efficiently than a pure tissue surrounding does. A finite volume analysis respects these situations more precisely and can investigate whether failures of such resonators can cause unsafe conditions during MRI acquisitions even with some additional temperature reduction mechanisms. A robust and easy-to-implement algorithm is used for the risk analysis, because this simulation does not have to predict exact temperature increases, contrary to planning algorithms for therapies like hyperthermia or thermal ablation. Instead, the analytical solution gives the principal risk for the worst case.
The finite volume calculations should evaluate whether the risk also exists using a metallic wire inside homogeneous tissue, and even with cooling due to blood perfusion superior to the physiological values and due to blood flow. Danger in our understanding means that a substantial part of the tissue volume is heated to a temperature which can induce cell death. The simulations calculate temperature maps developing in time around a defect. From these temperature maps a critical volume with temperatures exceeding a critical value can be calculated. In order to increase the speed of the calculation, the finite volume simulation assumes a cylindrical symmetry associated with a linear wire. This implies that the hot spot power loss is split into two equal parts, which diffuse into the two assumed fracture surfaces of the metallic wire. The further heat distribution is assumed to arise from heat conduction only, or from heat conduction and blood perfusion, implementing an algorithm based on the idea of Pennes' equation. The power loss Ploss [W] of a resonator with its axis aligned perpendicular to the main magnetic field and exposed to a series of identical excitation pulses of an MRI investigation is given by

Ploss = ω·Q·B1²·Vind/(2·μ0) · cdc·cpwm     (1)

where ω [1/s] is the angular resonance frequency, μ0 [Vs/(Am)] is the permeability of vacuum, Q is the quality factor of the resonance circuit inside tissue, B1 [T] is the amplitude or magnitude of the magnetic field established by the linear or, respectively, circularly polarized transmit coil of the MR system, Bind = B1 Q is the magnetic field inside the inductance of the resonator, and Vind [m3] is the volume of the resonator's inductance. In this investigation the inductance volume and the implant volume are assumed to be equal. The factor cdc is the duty cycle of pulsed MR sequences and equals the ratio of the duration of "rf excitation on" during the total acquisition time to the total acquisition time itself.
cpwm describes the pulse waveform modulation and is the ratio between the energy of one excitation pulse with a maximum amplitude/magnitude A and the energy of a rectangular excitation pulse of the same length and identical amplitude/magnitude A. A detailed derivation of Eq. (1) is outlined in the literature, where L [H] is the inductance and R [Ω] the resistance of the resonance circuit (the quality factor is Q = ωL/R). The struts of stents, stent grafts or vena cava filters implanted in human vessels are exposed to perpetually changing forces and permanent movements, which can cause fatigue fractures [14]. A partial rupture of the circuit path adds a hot spot resistance Rhs to the overall resistance Rov of the intact resonator. The overall power loss (Eq. 1) is thus reduced by the factor Rov/(Rov+Rhs), and only the part Rhs/(Rov+Rhs) of this reduced total power loss occurs at the break. With respect to the total power loss Ploss of the intact resonator it is given by

Phs = Ploss · Rov·Rhs/(Rov+Rhs)²     (2)

which has its maximum value with respect to Rhs at Rhs = Rov, yielding Phs = Ploss/4. An analytical description of the thermal uptake around a hot spot with respect to the metallic structure, the electrical paths of the resonance circuit and the different power loss mechanisms is not possible. An easy analytical description is possible for a sphere with radius rsphere emitting a constant power P uniformly from the sphere surface inside a homogeneous medium, disregarding blood perfusion. This model is a good approximation for a point-like power source. After reaching thermal equilibrium, the constant power penetrates through every spherical surface surrounding the power source in the sphere center, independent of the radius r (with the side condition r > rsphere). The temperature difference ΔT [K] between a point at the hot spot surface and a point far away (∞) from the power source, which for a living system is a point with normal body temperature, can be calculated in a homogeneous medium for a power loss P from the equation for heat diffusion,

ΔT(r) = P/(4·π·λ·r)     (3b)

where λ [W m-1 K-1] is the thermal conductivity.
Equation 3b can be resolved for the critical radius rcrit = P/(4·π·λ·ΔTcrit), which describes the distance below which a critical temperature increase ΔTcrit is exceeded. From the critical radius rcrit, the critical volume Vcrit = (4/3)·π·rcrit³ of the sphere with temperature increases above ΔTcrit can be calculated. During the last fifty years since Pennes' publication [16] on the bioheat transfer problem, models have taken into account the blood perfusion wb (sometimes also cited as [(kg of blood) × (m-3 of tissue) × (s-1)]), the heat transport due to blood flow in larger vessels, the metabolic heat production Qmet inside living tissue and the power Pex applied to the body from external sources. This investigation is a test on persisting dangerous conditions using near worst case assumptions and therefore uses simplifications. For the comparison with the analytical model, only the heat transport by the thermal conductivity of tissue is considered. Other calculations also take into account the influence of the metallic wire as well as blood perfusion. Some simulations additionally estimate the volume with a critical temperature increase around a hot spot inside a vessel wall with a cooling blood flow inside the vessel lumen at a very short distance from the hot spot. The simulations are not performed with commercially available software. The algorithm is self-coded for problems with cylindrical geometry in Kylix and Delphi, a software development environment based on object-oriented Pascal. The graphical outputs are mostly generated by an evaluation version of Teechart 7 (registered β-test) used within the Delphi and Kylix environment. The implemented algorithm is essentially the same as the one in the first part of the investigation. The total simulation volume is a cylinder with length 2Lsim and diameter 2Rsim. This is adequate for a straight wire along the cylinder axis with radius rwire. The chosen geometry allows the use of cylinder coordinates and reduces the calculation time by taking advantage of two symmetries.
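The chain from the intact resonator loss to the worst-case critical volume (Eqs. 2 and 3b) can be illustrated with a short sketch. This is a minimal illustration in Python, not the authors' Pascal code; the tissue conductivity and the resistance values are assumed illustrative figures, not taken from the study's tables.

```python
import math

def hot_spot_power(p_loss, r_ov, r_hs):
    """Power dissipated at the break (Eq. 2): Ploss is reduced by
    Rov/(Rov+Rhs), of which the fraction Rhs/(Rov+Rhs) occurs at the break."""
    return p_loss * r_ov * r_hs / (r_ov + r_hs) ** 2

def critical_radius(p_hs, lam, dT_crit):
    """Eq. 3b solved for r: below this distance the steady-state
    temperature rise exceeds dT_crit."""
    return p_hs / (4.0 * math.pi * lam * dT_crit)

def critical_volume(p_hs, lam, dT_crit):
    """Volume of the sphere heated above dT_crit in thermal equilibrium."""
    return 4.0 / 3.0 * math.pi * critical_radius(p_hs, lam, dT_crit) ** 3

# Worst-case partition: the hot spot takes exactly 1/4 of the intact loss
# when the defect resistance equals the circuit resistance (Rhs = Rov).
p_loss, r_ov = 0.4, 10.0                   # 400 mW intact loss, 10 ohm (assumed)
p_hs = hot_spot_power(p_loss, r_ov, r_ov)  # equals p_loss / 4 = 100 mW
assert abs(p_hs - p_loss / 4) < 1e-12

lam = 0.5      # W/(m K), assumed textbook conductivity of soft tissue
dT_crit = 5.0  # K, critical temperature increase
r_crit = critical_radius(p_hs, lam, dT_crit)            # ~3.2 mm
v_crit_mm3 = critical_volume(p_hs, lam, dT_crit) * 1e9  # in mm^3
```

With these assumed numbers the equilibrium critical volume is on the order of 10² mm³, consistent with the statement that the indefinite-exposure analytical worst case exceeds the finite-volume results.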
Firstly, because of the cylindrical symmetry the model need not consider φ in cylindrical coordinates and can use finite volumes dependent only on r and x. Secondly, a plane of mirror symmetry exists, which is orthogonal to the x-axis of the cylindrical coordinate system and divides the total simulation volume into two parts (Figure). The calculation volume with length Lsim and radius Rsim is therefore divided into n sub-cylinders of length Δx along the cylinder axis. Each sub-cylinder of length Δx is divided into one inner cylinder with radius rwire (i = 1) and m-1 shells of thickness Δr. The energy exchange ΔE [J] between two simulation cells with a specific contact area A [m2], a temperature difference ΔT [K] and a heat diffusion path length d [m] during a time interval Δt [s] is given by the equation for heat conduction as

ΔE = λ·A·ΔT·Δt/d     (4)

The sign of ΔE for a cell C(i,j) has to be chosen such that it is positive for received energy and negative for outgoing energy. The total energy change ΔEtot of one cell during a time interval Δt is the sum of all exchanges with adjacent cells with non-zero contact area and the energy change due to a heating power pcell inside the cell:

ΔEtot = Σ ΔE + pcell·Δt     (5)

For the following, pcell is non-zero only at the hot spot, i.e. in the two cells at the assumed two fracture surfaces of the wire. The temperature increase ΔT* [K] of one cell during one iteration step can be calculated by

ΔT* = ΔEtot/(c·ρ·Vcell)     (6)

where c [J/(kg K)] is the specific thermal capacity of the cell material, Vcell [m3] is the cell volume and ρ [kg/m3] is the density of the material of Vcell. Each cell C(i,j), except those at i = 1, has contact to 4 adjacent cells with contact areas different from zero (Eqs. 7a-c). These contact areas as well as the volumes of the cells (Eq.
7d) only vary with the index i:

Ax(1) = π·rwire²     (7a)

Ax(i) = π·[(rwire + (i - 1)·Δr)² - (rwire + (i - 2)·Δr)²]     i = 2, 3, 4,...,m     (7b)

Ar(i) = 2·π·(rwire + (i - 1)·Δr)·Δx     i = 1, 2, 3,...,m     (7c)

Vcell(i) = Ax(i)·Δx     i = 1, 2, 3,...,m     (7d)

Ax(i) is the contact area in both directions of the cylinder axis (from any index j to j-1 and to j+1), whereas Ar(i) is the contact area in radial direction from index i to i+1. The contact area in radial direction from index i to i-1 is identical to the area Ar(i-1). At i = m and j = n the calculation volume has boundaries. These boundaries are implemented as boundary cells at index m+1 (r-direction) and n+1 (x-direction), which work as an ideal heat sink. The boundary condition for this heat sink is dT/dt = 0, which keeps the temperature of the boundary volume constant (ΔT = 0) even when it receives energy during one simulation step of duration Δt. The condition ΔT = 0 at the outermost cells can describe, on the one hand, the behavior of the human body, which keeps its temperature nearly constant by regulating the energy transport. In our model this temperature regulation allows shifting the condition ΔT = 0 closer to the hot spot. On the other hand, ΔT = 0 can also be a model for rapid heat transport through flow inside a larger vessel near the hot spot. A fast blood flow, which immediately transports all applied energy to the entire blood pool with an infinite thermal capacity, can be simulated by a situation where the hot spot is close to boundary cells that keep their temperature constant even when receiving energy. At i = 1 as well as j = 1 the calculation volume has boundaries without energy exchange. For index i = 1 no cells with lower index i exist and therefore no energy exchange is possible.
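Eqs. 7a-7d translate directly into code. The sketch below is illustrative Python (the authors used object-oriented Pascal), with a telescoping-sum sanity check: the axial faces of all m cells must tile the full disc of the outermost radius.

```python
import math

def cell_geometry(r_wire, dr, dx, m):
    """Axial contact areas Ax, outward radial contact areas Ar and volumes V
    of the m radial cells of one sub-cylinder (Eqs. 7a-7d), 1-based in i."""
    Ax = [0.0] * (m + 1)
    Ar = [0.0] * (m + 1)
    V = [0.0] * (m + 1)
    for i in range(1, m + 1):
        r_out = r_wire + (i - 1) * dr                # outer radius of cell i
        if i == 1:
            Ax[i] = math.pi * r_wire ** 2            # Eq. 7a: wire cross-section
        else:
            r_in = r_wire + (i - 2) * dr
            Ax[i] = math.pi * (r_out ** 2 - r_in ** 2)  # Eq. 7b: annular face
        Ar[i] = 2.0 * math.pi * r_out * dx           # Eq. 7c: lateral shell surface
        V[i] = Ax[i] * dx                            # Eq. 7d
    return Ax, Ar, V

# Grid loosely matching the 250-cell, 12.5 mm radial extent described later.
Ax, Ar, V = cell_geometry(r_wire=50e-6, dr=50e-6, dx=50e-6, m=250)
R_outer = 50e-6 + 249 * 50e-6
assert abs(sum(Ax[1:]) - math.pi * R_outer ** 2) < 1e-12  # faces tile the disc
```

The radial contact area Ar grows with i, which is why the outer shells exchange energy over ever larger surfaces while their annular faces Ax grow only linearly.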
For index j = 1, with cell center position Δx/2, the symmetry plane defines an identical temperature at -Δx/2 during one iteration step of duration Δt. The calculation volume consists of a two-dimensional field of cells C(i,j). The applied energy enters the volume at the power receiving cell(s) at the fracture surface of the wire. The distribution from this (these) power receiving cell(s) to adjacent cell elements is only due to heat conduction on the wire and through tissue during an iteration step. Additionally, part of the energy of each cell can be transported out of the total simulation volume by blood perfusion. Because the heat diffusion path runs from the center of one cell (with respect to x and r) to the center of the adjacent cell, the heat conduction for an interface between wire and tissue takes place over two different materials, with only part of the diffusion length in each of the materials. Because the thermal conductivity of tissue is much lower compared to that of metal (table), the tissue part dominates the heat resistance of such an interface; the radial diffusion length between the wire cell and the first tissue shell is (rwire+Δr)/2. All metal wire cell elements have the index i = 1. They all have identical, freely definable physical parameters λ, ρ and c. The parameters for all other cell elements are set to the values of tissue. The simulation starts at t = 0 with a temperature field ΔT = 0 for all indices i and j. For each iteration step of duration Δt, the total energy change ΔEtot for each cell C(i,j) is calculated according to Eqs. 4 and 5, respecting the related contact areas (Eqs. 7a-7c) and the corresponding diffusion lengths. From ΔEtot the temperature increase ΔT* is calculated according to Eq. 6, respecting Eq. 7d. This value is added to the prior value according to

ΔTnew = ΔTold + ΔT* - wb·ΔTold·Δt     (8)

The entire simulated time is the number of iteration steps multiplied by Δt. Eq. 8 respects a perfusion term wb, which describes which part of a tissue volume is exchanged per second by perfusion against new blood from the arterial blood pool. The new part has to be heated from ΔT = 0 to the increased temperature level, and therefore the temperature increase of the cell volume C(i,j) is reduced. The implementation of Eq.
8 is similar to the use of Pennes' equation. In contrast to Pennes' equation, Eq. 8 neither assumes different physical parameters for blood and tissue, which would lead to an additional factor modulating wb, nor uses a temperature-dependent blood perfusion parameter, which would increase with elevated temperatures. It is sufficient to perform a simulation with an overall increased constant blood perfusion to check the persistence of physiologically critical circumstances. Furthermore, no metabolic heat production is assumed. The calculated ΔT is assigned to the arithmetic mean of the inner and outer cell limits with respect to x and r, respectively, for each cell during the whole iteration process. The implemented algorithm was controlled with different checks. Firstly, the numerical simulation for a tissue-only environment is compared to the analytical model. Secondly, for each numerical simulation the totally applied energy at the end of the simulation (Wtotal = Phs·tsim) is compared to the energy stored inside the simulation volume, computed from the final temperature increases of each cell respecting the heat capacity, added to the energy that has left the simulation volume to any heat sink and the energy needed during the total simulation process for heating up blood from the arterial blood pool due to the perfusion term of Eq. 8. Thirdly, the algorithm was tested as to whether it provides similar results for identical geometries with different spatial or different temporal resolution, as well as for different sizes of the total simulation volume surrounding the hot spot. The hot spot power loss depends not only on the product of resonator volume and quality factor, but as well on the orientation of the resonator with respect to the main magnetic field and on the SAR of the applied MRI sequence. From prior experimental experience of the quality factor Q, the resonator volume and the knowledge of the applied MRI sequence, the maximum hot spot power Phs can be calculated according to Eqs. 1 and 2. Alternatively, Eqs.
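One explicit iteration of Eqs. 4-6 and 8 can be sketched for a minimal two-cell chain. All parameter values below are illustrative assumptions, not the study's table values, and the helper names are ours.

```python
def conduction(lam, area, dT, dt, d):
    """Energy conducted between two cells across contact area `area`
    over diffusion path length d during dt (Eq. 4)."""
    return lam * area * dT * dt / d

def step_cell(dT_old, dE_tot, c, rho, v_cell, w_b, dt):
    """Explicit update of one cell: Eq. 6 converts the net energy change into
    a temperature increase, Eq. 8 adds it to the old value and subtracts the
    share washed out by blood perfusion."""
    dT_star = dE_tot / (c * rho * v_cell)        # Eq. 6
    return dT_old + dT_star - w_b * dT_old * dt  # Eq. 8

# Minimal chain: cell 0 receives the hot spot power p_cell (Eq. 5),
# cell 1 is heated only by conduction from cell 0.
lam, c, rho = 0.5, 3600.0, 1000.0    # assumed tissue parameters
area, d, v_cell = 1e-8, 1e-4, 1e-12  # contact area [m2], path [m], volume [m3]
w_b, dt, p_cell = 0.00125, 2e-4, 1e-6
T = [0.0, 0.0]                       # temperature increases dT of both cells
for _ in range(1000):
    dE = conduction(lam, area, T[0] - T[1], dt, d)        # positive: 0 -> 1
    T[0] = step_cell(T[0], p_cell * dt - dE, c, rho, v_cell, w_b, dt)
    T[1] = step_cell(T[1], dE, c, rho, v_cell, w_b, dt)
# The powered cell ends up warmer than its neighbour, and the perfusion
# term keeps both temperature increases bounded.
```

The same update, looped over all (i, j) cells with the contact areas of Eqs. 7a-7c and the mixed-material diffusion lengths, reproduces the structure of the authors' iteration.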
1 and 2 can be used to calculate the smallest inductance volume of a resonator which yields, using worst case conditions for the quality factor (Q = 5) and the MRI sequence (const = 4 mW/cm3), a certain critical hot spot power loss. The variable parameter for the analytical model according to Eq. 3b is the hot spot power Phs. Prior experimental experience reveals a normal blood perfusion rate of wb = 0.00125 m3 m-3 s-1. Temperature maps were calculated depending on different physical parameters, partly respecting blood perfusion and blood flow inside a vessel near the hot spot. Unless otherwise noted, the following standard parameters were used for the simulation: titanium wire with 50 μm radius, hot spot power of 100 mW and normal perfusion rate wb = 0.00125 m3 m-3 s-1. 1. Spatial and temporal resolution: The metal-tissue interface is a critical part of the simulation. The simulations with varying spatial resolution in both directions as well as a better temporal resolution allow the assessment of the temperature differences at the metal-tissue interface. 2. Time development: a. Temperature maps for a tissue-only simulation: the pure tissue temperature maps were calculated with normal blood perfusion. 3. Hot spot power: The final temperature maps were calculated with varying hot spot power loss, corresponding to different products of the inductance volume and the quality factor Q of the resonance circuit. A larger hot spot power is equivalent to a better quality factor and/or a larger inductance volume. Also, a decreased hot spot power can simulate a resonator not perfectly aligned perpendicular to the main magnetic field, or an MRI sequence without maximum SAR. 4. Material: The final temperature maps were calculated for four different metals of the linear wire to test influences of the thermal conductivity. 5. Radius: The final temperature maps were calculated with varying radius of a titanium wire (30 μm - 500 μm) to check the influence of the increased heat transport capability of a metallic wire with a larger radius. 6.
Perfusion rate: The final temperature maps were calculated with normal and increased perfusion rates (up to 0.02 m3 m-3 s-1) for tissue, and with blood flow. The analytical model describes the worst case in all aspects, because it assumes an indefinite exposure time to the excitations of an MRI sequence without respecting any blood perfusion. From the critical radius rcrit the critical volume Vcrit can be easily calculated. A linear relation between the hot spot power Phs and the volume of the implant's inductance Vind exists (Figure), where the constant of proportionality is determined by the maximum quality factor Qmax of a resonator inside tissue and the maximum power loss density PV of the table (Qmax = 5). A certain hot spot power can thus be converted into the smallest inductance volume capable of setting up this hot spot power. Only resonators above this volume can develop a hot spot power above 2 mW; smaller resonators are safe even under worst case conditions. For the comparison with the analytical solution, the physical parameters of the wire (rwire = 50 μm) are set to those of tissue and the parameter of blood perfusion wb is set to zero (Eq. 8). Even though the cylindrical geometry is not the best choice for a spherical problem, the simulation results should converge to the analytical solution if the simulation time is sufficiently long and the simulation volume is sufficiently large. As long as the temporal resolution is high enough to prevent the simulation results from oscillatory behavior, an increased temporal resolution does not change the calculated maps. Therefore it is sufficient to use the lowest possible temporal resolution for a certain spatial resolution. An increased spatial resolution as well as an increased thermal conductivity of the wire always requires an increased temporal resolution to exclude erroneous results.
With better spatial resolution in radial direction (keeping the wire diameter constant), the wire temperature curve and the tissue temperature directly adjacent to the wire approach each other, with a predominant increase of the tissue temperature. A better spatial resolution just in axial direction changes mainly the maximum temperature increase on the wire. This is obvious, because with a shorter cell volume in axial direction the center of the cell moves closer to the hot spot, where the analytical model has an infinite temperature increase. Changing both spatial resolutions by the same factor combines the two effects; it leaves the temperature increase on the wire nearly unchanged and reduces the differences between wire and tissue temperatures (figure). Movie 1 shows the time-developing temperature maps in r- and x-direction, respectively, as well as the critical volume with temperature increases above 5 K. At the end of the movie, two views are shown alternately at a simulated time of 900 s. Both views are identical in all simulation parameters except the simulation volume: the size of the simulation volume is doubled in x-direction as well as in r-direction, which shifts the energy (heat) sink further away from the hot spot itself. One of the alternating results was calculated using a 250 × 250 matrix for a size up to 12.5 mm for r and x respectively. The second map was calculated for a 500 × 500 matrix for a size up to 25 mm for r and x respectively. Only the inner 250 × 250 points are plotted for the comparison of both calculations. It can be seen that the temperature distribution is almost identical, apart from the fact that for x ≈ 12.5 mm and r ≈ 12.5 mm the simulation with more cells shows a slight deviation from the zero line. The simulation with the smaller matrix shows a straight zero line at r250 and x250, which is obvious, because this is the boundary condition for this simulation.
The small difference between both simulations points out that the boundary condition with a heat sink works very well as long as the absolute value of the gradient at the boundary is low. The effect of the heat sinks can also be seen in the accumulated energy that has left the simulation volume to any heat sink during the simulated time. For the smaller simulation volume, 71 J of the applied total energy of 90 J leave the simulation volume; for the larger volume the value is reduced to 45 J. Without blood perfusion the 'lost' energy for both volumes decreases to 67 J and 20 J respectively. The critical volume with temperature increases over 5 K is only moderately reduced by normal blood perfusion. Without blood perfusion it is 72 mm3 and 86 mm3 for the small and large volume respectively, whereas the critical volume with blood perfusion reaches 59 mm3 and 64 mm3 (see Movie 1). As comparison to the tissue-only simulation, the second movie presents the simulation including the metallic wire, which moderately reduces the critical volume (to 63 mm3). Assuming a constant inductance volume, Phs is linearly dependent on the quality factor Q, whereas assuming a constant Q, Phs is linearly dependent on the volume. The variation of the hot spot power therefore corresponds, for a specific MRI sequence, to a variation of the product of inductance volume and quality factor. The near worst case situation, with an exposure time of only 900 s and with a model of a metallic wire in tissue, shifts the lowest hot spot power loss that produces a critical volume upwards (Figure). For 100 mW (Figure) and 200 mW (Figure) the simulations show substantial critical volumes. Because of the better thermal conductivity of a metallic wire compared to tissue, the introduction of the metallic wire, which is present in all real cases, lowers the risk compared to the analytical situation with the assumption of a hot spot inside homogeneous tissue. Various wire materials distribute the heat more or less efficiently in wire direction, depending on their thermal conductivity. For example, iron has a thermal conductivity about four times as large as that of titanium (table). However, even then a hot spot power of 100 mW causes a critical volume of 25 mm3 (Figure).
For all previously described simulations, the constant perfusion value corresponding to the normal perfusion of 0.00125 m3/(m3s) was used. In a further simulation the perfusion value was varied over a range that is much broader than the range determined by the temperature dependence. The results show the expected decrease in the critical volume, but even the strongest blood perfusion cannot suppress the formation of a critical volume, which for a high power loss is reached within a few seconds. Increasing the critical temperature to 10 K, 15 K or 20 K yields minimal hot spot power losses of about 3 mW, 5 mW and 6 mW respectively. It is important to note that the analytical solution overestimates the worst case in two parameters. Firstly, an MR sequence does not last long enough to reach thermal equilibrium. Secondly, no metallic wire with a better thermal conductivity is considered, although such a wire distributes the energy of the hot spot more efficiently and therefore increases the minimum necessary hot spot power. This is one reason for checking the analytical results with a finite volume analysis. During the simulated time the total applied energy is Etotal = Phs tsim, and the simulation results increasingly approach the analytical solution for a sphere inside a homogeneous medium at thermal equilibrium, in which heat is distributed only by the thermal conductivity of the tissue (Movie 1). The temporal resolution for such a simulation (titanium wire, 50 μm) has to be smaller than 750 μs to inhibit oscillatory results; for iron, with a fourfold better thermal conductivity, it has to be below 300 μs. For this reason all calculations are done with 200 μs. For 900 s of simulated time, this spatial and temporal resolution requires 4.5 million temperature calculations for each of 200,000 cells. 
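The oscillation limit on the time step quoted above is the usual stability bound of an explicit finite-difference scheme, Δt < Δx²/(4α) in two dimensions, where α is the thermal diffusivity. The sketch below evaluates that bound per material; the thermal data are textbook values and the cell size is an assumption, so the numbers are illustrative rather than the paper's.

```python
# Explicit (FTCS) stability check for a 2D heat-diffusion grid.
# Material data are textbook values; the cell size is an assumption.

MATERIALS = {   # thermal conductivity W/(m K), density kg/m^3, heat capacity J/(kg K)
    "tissue":   (0.5,  1040.0, 3600.0),
    "titanium": (21.9, 4500.0,  520.0),
    "iron":     (80.0, 7870.0,  450.0),
}

def max_stable_dt(material, dx):
    """Largest stable time step for cell size dx (2D FTCS bound dt < dx^2 / (4*alpha))."""
    k, rho, c = MATERIALS[material]
    alpha = k / (rho * c)            # thermal diffusivity, m^2/s
    return dx**2 / (4.0 * alpha)

dx = 100e-6                          # assumed 100 um cell size
for name in MATERIALS:
    print(f"{name:9s}: dt_max = {max_stable_dt(name, dx) * 1e6:8.1f} us")
```

The better-conducting metal dictates the smaller time step, consistent with the paper's observation that iron requires a finer temporal resolution than titanium.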
The calculation time using a PC with a 3 GHz processor was more than a day. A resolution twice as good in both directions as the one described above was only tested once, for titanium. The total power loss of a specific resonator can be calculated by multiplying this value with the inductance volume and the quality factor of the resonance circuit inside human tissue. As a worst case volume, a vena cava filter or a stent graft for an aortic aneurysm with 50 cm3 is assumed, with a resonator quality factor of 4. With these parameters, critical volumes are reached in similar times. The material of the wire is less important: with all materials, substantial critical volumes can be reached within a few seconds. The radius of the metal wire modifies the results more effectively than the material does, but even an unacceptably large wire radius of 0.5 mm (diameter of 1 mm) reduces the critical volume for a 100 mW hot spot power loss only to roughly one third. A critical volume is reached within the order of 10 seconds, so under worst case assumptions a defect can induce physiologically dangerous heating with incipient cell destruction. To incorporate the normal anatomical situation of a cooling blood flow, we implement models with a heat sink near the hot spot. This approach is an attempt to model a situation without re-calcification, thrombosis or intima hyperplasia inside the vessel into which the resonator is implanted. Such a situation has the cooling blood flow very close to the hot spot and is definitely far from a worst case assumption. The results show that even in this case a physiologically critical temperature increase in a non-negligible volume can be induced: the simulations yield a critical volume of more than 1 mm3 within a few seconds. Even a resonator that is too small to be of any advantage in overcoming the Faraday cage shielding can reach a hot spot power loss of 50 mW. 
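The worst-case estimate quoted above multiplies a volume-specific power loss by the inductance volume and the quality factor. A one-line check with the stated 50 cm³ device volume and Q = 4; the volume-specific loss density is an assumed placeholder, since the paper's value is not given in this excerpt.

```python
def resonator_power_loss(loss_density_w_per_m3, inductance_volume_m3, quality_factor):
    """Total power loss = volume-specific loss * inductance volume * Q."""
    return loss_density_w_per_m3 * inductance_volume_m3 * quality_factor

# Worst-case geometry from the text: 50 cm^3 device, quality factor 4.
volume = 50e-6                     # 50 cm^3 expressed in m^3
q = 4
loss_density = 250.0               # W/m^3, assumed placeholder value

p_total = resonator_power_loss(loss_density, volume, q)
print(f"total power loss: {p_total * 1000:.1f} mW")
```

With the placeholder density chosen here the product lands at 50 mW, i.e. in the regime the text identifies as clearly critical.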
Such high hot spot power losses reach critical volumes larger than 1 mm3 in a few seconds. The analytical solution as well as the simulations show that the hot spot power is the dominating parameter for the safety investigation. The material or the radius of the wire (stent strut) does not reduce the worst case scenario significantly. Additionally, neither blood perfusion nor blood flow in the direct vicinity of the hot spot reduces the critical volume to an uncritical value, as the numerical results show. Considering the fast rise time of the temperatures directly adjacent to the hot spot, bursting cells as well as the increased temperature are very likely to induce a thrombosis, shielding the blood flow more and more from the hot spot. A volume of a few cubic millimeters is reached by MR sequences with a high SAR within a few seconds. The analytical solution as a worst case shows that large active implants can reach such hot spot power losses. For safety reasons, patients with such large active implants should be excluded from MRI investigations at the resonance frequency the resonator is made for. Such large devices are therefore useless for dealing with the Faraday cage effect that disturbs the lumen information of metallic implants. Ultimately, large active implants might be constructed safely by using additional (passive) electronics, which short-circuits the resonator during the excitation phases of an MRI sequence and leaves the resonator operational during detection. Such electronics would exclude the flip angle amplification, but maintain the signal amplification during the detection phase. Another very important safety topic that should be monitored carefully when using active implants is the achievable amplification homogeneity inside the lumen of such devices. Very low susceptibility artifacts as well as a homogeneous amplification are extremely important for the use of active implants as inductively coupled imaging coils. 
Although the usual stent materials seem to be non-magnetic, they are actually paramagnetic, in contrast to the mostly diamagnetic tissue. For small active implants like stents for coronary vessels, which are safe with respect to temperature effects, the susceptibility artifacts can inhibit the imaging of a substantial part of the stent lumen. For example, a stent with a diameter of 3 mm constructed of a paramagnetic material may appear to have a wall thickness of more than 1 mm in MRI images (depending on the bandwidth of the sequence), although the actual thickness is about 0.1 mm. Such a stent in fact does not benefit from an active technology: a large part of the stent lumen, unfortunately the part near the vessel wall where critical situations most likely occur, is hidden by susceptibility artifacts for numerous sequences. For a serious interpretation of the implant lumen, a sufficiently homogeneous rf magnetic field inside the resonator coil is also necessary. The well-known coil types perform quite well if they have their ideal geometry. They probably perform worse if their usual geometry is changed to an expandable version, which is necessary for implantation by a catheter. An additional distortion of the field homogeneity is unavoidable if the expansion of the coil by inflation of a balloon is not perfect. As a consequence, the MR sensitivity inside the lumen may vary significantly within a very small region, which can lead to severe misinterpretations of the acquired images. For safety reasons, the spatial amplification of active devices has to be investigated for non-ideal geometries as well as for imperfect expansion. Small active magnetic resonance implants may have a high potential for a non-invasive follow-up, but before clinical trials numerous unanswered questions must be addressed. Movie 1 of the time developing temperature map for tissue. 
This movie (animated GIF) shows the time development for the case of a hot spot in tissue without wire and with blood perfusion over a period of 900 s, which is the maximum permitted time for imaging the trunk with an SAR of 4 W/kg (sequence of the table). Movie 2 of the time developing temperature map for titanium wire. This movie is very similar to Movie 1, but includes a titanium wire instead of tissue only."} +{"text": "Computational discovery of transcription factor binding sites (TFBS) is a challenging but important problem of bioinformatics. In this study, improvement of a Gibbs sampling based technique for TFBS discovery is attempted through an approach that is widely known, but which has never been investigated before: reduction of the effect of local optima. To alleviate the vulnerability of Gibbs sampling to local optima trapping, we propose to combine a thermodynamic method, called simulated tempering, with Gibbs sampling. The resultant algorithm, GibbsST, is then validated using synthetic data and actual promoter sequences extracted from Saccharomyces cerevisiae. It is noteworthy that the marked improvement of the efficiency presented in this paper is attributable solely to the improvement of the search method. Simulated tempering is a powerful solution for local optima problems found in pattern discovery. Extended application of simulated tempering to various bioinformatic problems is promising as a robust solution against local optima problems. One of the most important and challenging problems in the post-genomic stage of bioinformatics is automated TFBS discovery. Optimization problems with large numbers of parameters are generally prone to the problem of local optima, and discovery of TFBS is no exception. 
In particular, one of the most promising types of stochastic pattern discovery methods in terms of its flexibility and wide range of application, generically called Gibbs sampling, is known to be vulnerable to trapping in local optima. In pattern discovery, and in bioinformatics in general, improvement of search methods in the solution space has been neither systematic nor satisfactory. The method most frequently tried is simulated annealing (SA). In general, there has been a real disparity between the lack of interest in improving the search methods and the strong interest in creating new models for TFBS discovery. Moreover, the active introduction of new ideas into this field is making the disparity even stronger, because many of the new ideas are related to increasing the number of parameters; automated phylogenetic footprinting is one such example. In this paper, we demonstrate simulated tempering (ST), which adjusts a "temperature" T adaptively to the current score of alignments; the introduction of a temperature into a local-alignment problem has already been reported. By changing T, ST adopts continuously changing search methods ranging from a fast deterministic-like search to a random-like search, reducing the possibility of being trapped in local optima. This principle is shown schematically in the figure. We introduce the temperature, T, into the "classic" Gibbs sampling algorithm proposed by Lawrence et al.; the resultant algorithm is called GibbsST. The validation of our algorithm is also presented on synthetic test data and promoter sequences of Saccharomyces cerevisiae. We assume that all N input sequences have exactly one occurrence of the pattern (the OOPS model), that the pattern is always Wm bp long, and that negative strands are not considered. The algorithm holds a current local alignment, A, and a current PWM (Position Weight Matrix), qi,j, which are iteratively updated as a Markov chain until convergence to a pattern. The alignment A is represented by the starting points of the aligned segments, xk, which form a gapless sequence block. 
The first half of an iterative step is the re-calculation of the elements of the current PWM according to the current alignment, excluding the k-th row. Then, in the second half of a step, the k-th row of the current alignment is updated by sampling a new value of xk according to weights derived from qi,j. Let l(1), l(2), ... denote the entire sequence of the row to be updated. We set the probability of the new starting point being x proportional to (Qx/Px)^(1/T), where Qx is the probability that the x-th substring (the x-th to (x - 1 + Wm)-th letters) of the k-th input sequence comes from the probabilistic model represented by the current PWM, Px is the corresponding probability under the background nucleotide frequencies p0,1,2,3, and T is a positive value which is the "temperature" of the system. Note that the computational complexity of a single step of the optimization is not changed by introducing the temperature. Since k circulates over all N input sequences, this is, after all, a maximization of β ∑i ∑j qi,j log(qi,j/pj). Hence, the Gibbs sampling introduced here has the relative entropy of the pattern PWM against the background model as its goal-function (or score) to be maximized, and so does our algorithm. However, following the convention of statistical physics, we refer to TFBS discovery as a minimization of the potential U, which is currently (– relative entropy). Because we are not proposing a new definition of U, we do not evaluate the sensitivity and specificity of our new algorithm; in principle, the sensitivity and specificity must be independent of the search method in the limit of large step number. When T = β = 1, the algorithm is reduced to the classic Gibbs sampling without the idea of temperature. In this case, there always is a finite probability of selecting a non-optimal x, which gives rise to escape from local minima. 
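The temperature-modified sampling step can be sketched directly: the weight (Qx/Px)^(1/T) is computed for every candidate start x, and a new start is drawn from the normalized weights. The PWM, background, and sequence below are toy assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def sample_start(seq, pwm, background, temperature):
    """Sample a new motif start x with P(x) proportional to (Qx/Px)**(1/T).

    pwm: (Wm, 4) matrix q[i, j]; background: length-4 vector p_j.
    Returns the sampled start and the full probability vector.
    """
    w = pwm.shape[0]
    idx = np.array([BASES[b] for b in seq])
    weights = []
    for x in range(len(seq) - w + 1):
        window = idx[x:x + w]
        q_x = np.prod(pwm[np.arange(w), window])   # probability under the PWM model
        p_x = np.prod(background[window])          # probability under the background
        weights.append((q_x / p_x) ** (1.0 / temperature))
    weights = np.array(weights)
    probs = weights / weights.sum()
    return rng.choice(len(probs), p=probs), probs

# Toy PWM strongly preferring the pattern "ACGT" (with pseudocount-like floors).
pwm = np.full((4, 4), 0.04)
for i, base in enumerate("ACGT"):
    pwm[i, BASES[base]] = 0.88
background = np.full(4, 0.25)

seq = "TTTTACGTTTTT"
x_low, probs_low = sample_start(seq, pwm, background, temperature=0.5)
_, probs_high = sample_start(seq, pwm, background, temperature=100.0)

print("low T, most probable start:", int(np.argmax(probs_low)))  # the ACGT at position 4
print("high T spreads probability:", probs_high.round(2))
```

At low T the sampling is nearly deterministic (steepest-descent-like), while at high T the start is chosen almost uniformly, which is exactly the behavior range simulated tempering exploits.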
However, the magnitude of the escape probability may not be sufficient for deep local minima, because the probability is ultimately limited by the pseudocount. The temperature strongly affects the behavior of the optimization algorithm. When T is large enough, the x selection is almost random, and the algorithm is very inefficient despite its high immunity to the local minima problem. When T → 0, on the other hand, only a very quick convergence to local minima results, because the movement in the solution space is a "steepest-descent" movement. In simulated annealing, the temperature is initially set to an ideally large value, Th, where essentially no barrier exists in the potential landscape, and then slowly lowered. There is a theoretical guarantee that SA converges to the global minimum when the temperature decreases slowly enough. Simulated tempering is an accelerated version of simulated annealing and has two main features. First, the temperature of the system is continuously adjusted during the optimization process and may be increased as well as decreased. Second, the adjustment of temperature is performed without detailed analysis of the potential landscape. Temperature control is performed by introducing a second Markov chain that is coupled with NT temperature levels, T0